

Python elasticsearch_dsl.token_filter Method Code Examples

This article collects typical usage examples of the elasticsearch_dsl.token_filter method in Python. If you are wondering what elasticsearch_dsl.token_filter does, how to use it, or what it looks like in practice, the curated examples below should help. You can also explore other usage examples from the elasticsearch_dsl module.


The following presents 5 code examples of the elasticsearch_dsl.token_filter method, sorted by popularity by default.
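Before the examples, here is a minimal sketch of what token_filter does: it declares a named custom token filter that a custom analyzer can reference. The filter and analyzer names below are illustrative, not taken from the projects cited in the examples.

from elasticsearch_dsl import analyzer, token_filter

# Declare a custom token filter: a built-in 'stop' filter with its own stopword list.
my_stop = token_filter('my_stop', type='stop', stopwords=['the', 'a', 'an'])

# Reference it from a custom analyzer, mixed with built-in filters.
my_analyzer = analyzer(
    'my_analyzer',
    tokenizer='standard',
    filter=['lowercase', my_stop],
)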

Example 1: add_analyzer

# Required import: import elasticsearch_dsl [as alias]
# Or: from elasticsearch_dsl import token_filter [as alias]
def add_analyzer(index: Index):
    """Agrega un nuevo analyzer al índice, disponible para ser usado
    en todos sus fields. El analyzer aplica lower case + ascii fold:
    quita acentos y uso de ñ, entre otros, para permitir búsqueda de
    texto en español
    """

    synonyms = list(Synonym.objects.values_list('terms', flat=True))

    filters = ['lowercase', 'asciifolding']
    if synonyms:
        filters.append(token_filter(constants.SYNONYM_FILTER,
                                    type='synonym',
                                    synonyms=synonyms))

    index.analyzer(
        analyzer(constants.ANALYZER,
                 tokenizer='standard',
                 filter=filters)
    ) 
Developer: datosgobar | Project: series-tiempo-ar-api | Lines: 22 | Source: index.py
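A hypothetical call site for add_analyzer might look like the following; the index name is illustrative, and Synonym and constants come from the series-tiempo-ar-api project itself.

from elasticsearch_dsl import Index

index = Index('series-tiempo')  # illustrative index name
add_analyzer(index)             # registers the analyzer in the index settings
index.create()                  # the analyzer is applied when the index is created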

Example 2: test_simulate_complex

# Required import: import elasticsearch_dsl [as alias]
# Or: from elasticsearch_dsl import token_filter [as alias]
def test_simulate_complex(client):
    a = analyzer('my-analyzer',
                 tokenizer=tokenizer('split_words', 'simple_pattern_split', pattern=':'),
                 filter=['lowercase', token_filter('no-ifs', 'stop', stopwords=['if'])])

    tokens = a.simulate('if:this:works', using=client).tokens

    assert len(tokens) == 2
    assert ['this', 'works'] == [t.token for t in tokens] 
Developer: elastic | Project: elasticsearch-dsl-py | Lines: 11 | Source: test_analysis.py
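Note that simulate() requires a live cluster: it sends the analyzer definition to the _analyze API and returns the produced tokens. A standalone sketch of the same idea, assuming a node reachable on localhost:

from elasticsearch import Elasticsearch
from elasticsearch_dsl import analyzer, token_filter, tokenizer

client = Elasticsearch()  # assumes a reachable cluster

a = analyzer('my-analyzer',
             tokenizer=tokenizer('split_words', 'simple_pattern_split', pattern=':'),
             filter=['lowercase', token_filter('no-ifs', 'stop', stopwords=['if'])])

for t in a.simulate('if:this:works', using=client).tokens:
    print(t.token)  # prints 'this', then 'works'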

Example 3: gen_name_analyzer_synonyms

# Required import: import elasticsearch_dsl [as alias]
# Or: from elasticsearch_dsl import token_filter [as alias]
def gen_name_analyzer_synonyms(synonyms):
    """Crea un analizador para nombres con sinónimos.

    Args:
        synonyms (list): Lista de sinónimos a utilizar, en formato Solr.

    Returns:
        elasticsearch_dsl.analysis.Analyzer: analizador de texto con nombre
            'name_analyzer_synonyms'.

    """
    name_synonyms_filter = token_filter(
        'name_synonyms_filter',
        type='synonym',
        synonyms=synonyms
    )

    return analyzer(
        'name_analyzer_synonyms',
        tokenizer='standard',
        filter=[
            'lowercase',
            'asciifolding',
            name_synonyms_filter,
            spanish_stopwords_filter
        ]
    ) 
Developer: datosgobar | Project: georef-ar-api | Lines: 29 | Source: es_config.py
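A hedged usage sketch for the function above; the Solr-format rules are illustrative, and spanish_stopwords_filter is assumed to be defined elsewhere in es_config.py:

from elasticsearch_dsl import Text

# Each entry is one Solr-format synonym rule.
synonyms = ['santa fe, sta fe', 'ciudad, cdad']

name_analyzer = gen_name_analyzer_synonyms(synonyms)

# The analyzer can then be attached to a text field definition.
name_field = Text(analyzer=name_analyzer)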

Example 4: gen_name_analyzer_excluding_terms

# Required import: import elasticsearch_dsl [as alias]
# Or: from elasticsearch_dsl import token_filter [as alias]
def gen_name_analyzer_excluding_terms(excluding_terms):
    """Crea un analizador para nombres que sólo retorna TE (términos
    excluyentes).

    Por ejemplo, si el archivo de configuración de TE contiene las siguientes
    reglas:

    santa, salta, santo
    caba, cba

    Entonces, aplicar el analizador a la búsqueda 'salta' debería retornar
    'santa' y 'santo', mientras que buscar 'caba' debería retornar 'cba'.

    El analizador se utiliza para excluir resultados de búsquedas específicas.

    Args:
        excluding_terms (list): Lista de TE a utilizar especificados como
            sinónimos Solr.

    Returns:
        elasticsearch_dsl.analysis.Analyzer: analizador de texto con nombre
            'name_analyzer_excluding_terms'.

    """
    name_excluding_terms_filter = token_filter(
        'name_excluding_terms_filter',
        type='synonym',
        synonyms=excluding_terms
    )

    return analyzer(
        'name_analyzer_excluding_terms',
        tokenizer='standard',
        filter=[
            'lowercase',
            'asciifolding',
            name_excluding_terms_filter,
            synonyms_only_filter,
            spanish_stopwords_filter
        ]
    ) 
Developer: datosgobar | Project: georef-ar-api | Lines: 43 | Source: es_config.py
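Following the docstring's own example rules, a hedged sketch of how the analyzer might be exercised; simulate() needs a reachable cluster, and synonyms_only_filter and spanish_stopwords_filter are assumed to be defined elsewhere in es_config.py:

from elasticsearch import Elasticsearch

client = Elasticsearch()  # assumes a local cluster

excluding_terms = ['santa, salta, santo', 'caba, cba']  # rules from the docstring
a = gen_name_analyzer_excluding_terms(excluding_terms)

# Analyzing 'salta' should yield only the other terms of its rule,
# i.e. 'santa' and 'santo'.
tokens = a.simulate('salta', using=client).tokens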

Example 5: configure_index

# Required import: import elasticsearch_dsl [as alias]
# Or: from elasticsearch_dsl import token_filter [as alias]
def configure_index(idx):
    """Configure ES index settings.

    NOTE: This is unused at the moment. Current issues:
    1. The index needs to be created (index.create() or search_index --create)
    with update_all_types=True, because the attribute name is the same in
    Person and Company.
    https://elasticsearch-py.readthedocs.io/en/master/api.html#elasticsearch.client.IndicesClient.create

    name = fields.TextField(attr="fullname", analyzer=lb_analyzer)

    2. How do we specify a token filter for an attribute?

    Therefore the index needs to be configured outside Django.
    """
    idx.settings(number_of_shards=1, number_of_replicas=0)
    lb_filter = token_filter(
        "lb_filter",
        "stop",
        stopwords=["i"]
    )
    lb_analyzer = analyzer(
        "lb_analyzer",
        tokenizer="standard",
        filter=["standard", "lb_filter", "asciifolding", "lowercase"]
    )
    return lb_analyzer, lb_filter 
Developer: PabloCastellano | Project: libreborme | Lines: 29 | Source: documents.py
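A sketch of how the returned objects might be wired up; the index name is illustrative, and, as the docstring notes, the project actually configures the index outside Django:

from elasticsearch_dsl import Index

idx = Index('libreborme')  # illustrative index name
lb_analyzer, lb_filter = configure_index(idx)
idx.analyzer(lb_analyzer)  # registers the analyzer (and its filter) in the settings
idx.create()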

