Asynchronous API Documentation
Below is the documentation for the asynchronous classes of elasticsearch_dsl.
- class elasticsearch_dsl.AsyncSearch(**kwargs: Any)
Search request to elasticsearch.
- Parameters:
using – Elasticsearch instance to use
index – limit the search to index
doc_type – only query this type.
All the parameters supplied (or omitted) at creation time can later be overridden by the respective methods (using, index and doc_type).
- collapse(field: str | InstrumentedField | None = None, inner_hits: Dict[str, Any] | None = None, max_concurrent_group_searches: int | None = None) Self
Add collapsing information to the search request. If called without providing field, it will remove all collapse requirements; otherwise it will replace them with the provided arguments. The API returns a copy of the Search object and can thus be chained.
- async count() int
Return the number of hits matching the query and filters. Note that only the actual number is returned.
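A minimal usage sketch (the 'blog' index and title field are illustrative assumptions, not part of the API):
s = AsyncSearch(index='blog').query('match', title='python')
# only the hit count is fetched, no documents are returned
total = await s.count()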
- async delete()
Execute the query by delegating to delete_by_query().
- doc_type(*doc_type: type | str, **kwargs: Callable[[AttrDict[Any]], Any]) Self
Set the type to search through. You can supply a single value or multiple. Values can be strings or subclasses of Document.
You can also pass in any keyword arguments, mapping a doc_type to a callback that should be used instead of the Hit class.
If no doc_type is supplied, any information stored on the instance will be erased.
Example:
s = Search().doc_type('product', 'store', User, custom=my_callback)
- async execute(ignore_cache: bool = False) Response[_R]
Execute the search and return an instance of Response wrapping all the data.
- Parameters:
ignore_cache – if set to True, consecutive calls will hit ES, while the cached result will be ignored. Defaults to False.
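A minimal usage sketch (the 'blog' index and title field are illustrative assumptions):
s = AsyncSearch(index='blog').query('match', title='python')
response = await s.execute()
for hit in response:
    # hit metadata such as the document id and score lives under hit.meta
    print(hit.meta.id, hit.meta.score)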
- extra(**kwargs: Any) Self
Add extra keys to the request body. Mostly here for backwards compatibility.
- classmethod from_dict(d: Dict[str, Any]) Self
Construct a new Search instance from a raw dict containing the search body. Useful when migrating from raw dictionaries.
Example:
s = Search.from_dict({
    "query": {
        "bool": {
            "must": [...]
        }
    },
    "aggs": {...}
})
s = s.filter('term', published=True)
- highlight(*fields: str | InstrumentedField, **kwargs: Any) Self
Request highlighting of some fields. All keyword arguments passed in will be used as parameters for all the fields in the fields parameter. Example:
Search().highlight('title', 'body', fragment_size=50)
will produce the equivalent of:
{
    "highlight": {
        "fields": {
            "body": {"fragment_size": 50},
            "title": {"fragment_size": 50}
        }
    }
}
If you want to have different options for different fields you can call highlight twice:
Search().highlight('title', fragment_size=50).highlight('body', fragment_size=100)
which will produce:
{
    "highlight": {
        "fields": {
            "body": {"fragment_size": 100},
            "title": {"fragment_size": 50}
        }
    }
}
- highlight_options(**kwargs: Any) Self
Update the global highlighting options used for this request. For example:
s = Search()
s = s.highlight_options(order='score')
- index(*index: str | List[str] | Tuple[str, ...]) Self
Set the index for the search. If called empty it will remove all information.
Example:
s = Search()
s = s.index('twitter-2015.01.01', 'twitter-2015.01.02')
s = s.index(['twitter-2015.01.01', 'twitter-2015.01.02'])
- async iterate(keep_alive: str = '1m') AsyncIterator[_R]
Return a generator that iterates over all the documents matching the query.
This method uses a point in time to provide consistent results even when the index is changing. It should be preferred over scan().
- Parameters:
keep_alive – the time to live for the point in time, renewed with each new search request
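A minimal usage sketch (the 'blog' index and title field are illustrative assumptions):
s = AsyncSearch(index='blog').query('match', title='python')
# the point in time is opened and renewed automatically
async for hit in s.iterate(keep_alive='2m'):
    print(hit.meta.id)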
- knn(field: str | InstrumentedField, k: int, num_candidates: int, query_vector: List[float] | None = None, query_vector_builder: Dict[str, Any] | None = None, boost: float | None = None, filter: Query | None = None, similarity: float | None = None, inner_hits: Dict[str, Any] | None = None) Self
Add a k-nearest neighbor (kNN) search.
- Parameters:
field – the vector field to search against as a string or document class attribute
k – number of nearest neighbors to return as top hits
num_candidates – number of nearest neighbor candidates to consider per shard
query_vector – the vector to search for
query_vector_builder – A dictionary indicating how to build a query vector
boost – A floating-point boost factor for kNN scores
filter – query to filter the documents that can match
similarity – the minimum similarity required for a document to be considered a match, as a float value
inner_hits – retrieve hits from nested field
Example:
s = Search()
s = s.knn(field='embedding', k=5, num_candidates=10,
          query_vector=vector, filter=Q('term', category='blog'))
- params(**kwargs: Any) Self
Specify query params to be used when executing the search. All the keyword arguments will override the current values. See https://elasticsearch-py.readthedocs.io/en/latest/api/elasticsearch.html#elasticsearch.Elasticsearch.search for all available parameters.
Example:
s = Search()
s = s.params(routing='user-1', preference='local')
- point_in_time(keep_alive: str = '1m') AsyncIterator[Self]
Open a point in time (pit) that can be used across several searches.
This method implements a context manager that returns a search object configured to operate within the created pit.
- Parameters:
keep_alive – the time to live for the point in time, renewed with each search request
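A minimal usage sketch (the 'blog' index is an illustrative assumption):
s = AsyncSearch(index='blog')
async with s.point_in_time(keep_alive='2m') as pit_search:
    # pit_search is a copy of s bound to the opened point in time
    response = await pit_search.execute()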
- rank(rrf: bool | Dict[str, Any] | None = None) Self
Defines a method for combining and ranking result sets from a combination of searches. Requires a minimum of 2 result sets.
- Parameters:
rrf – Set to True or an options dictionary to set the rank method to reciprocal rank fusion (RRF).
Example:
s = Search()
s = s.query('match', content='search text')
s = s.knn(field='embedding', k=5, num_candidates=10, query_vector=vector)
s = s.rank(rrf=True)
Note: This option is in technical preview and may change in the future. The syntax will likely change before GA.
- response_class(cls: Type[Response[_R]]) Self
Override the default wrapper used for the response.
- async scan() AsyncIterator[_R]
Turn the search into a scan search and return a generator that will iterate over all the documents matching the query.
Use the params method to specify any additional arguments you wish to pass to the underlying scan helper from elasticsearch-py – https://elasticsearch-py.readthedocs.io/en/master/helpers.html#elasticsearch.helpers.scan
The iterate() method should be preferred, as it provides similar functionality using an Elasticsearch point in time.
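A minimal usage sketch (the 'blog' index and title field are illustrative assumptions):
s = AsyncSearch(index='blog').query('match', title='python')
# extra arguments such as size are forwarded to the scan helper via params()
async for hit in s.params(size=100).scan():
    print(hit.meta.id)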
- script_fields(**kwargs: Any) Self
Define script fields to be calculated on hits. See https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-script-fields.html for more details.
Example:
s = Search()
s = s.script_fields(times_two="doc['field'].value * 2")
s = s.script_fields(
    times_three={
        'script': {
            'lang': 'painless',
            'source': "doc['field'].value * params.n",
            'params': {'n': 3}
        }
    }
)
- search_after() Self
Return a Search instance that retrieves the next page of results.
This method provides an easy way to paginate a long list of results using the search_after option. For example:
page_size = 20
s = Search()[:page_size].sort("date")
while True:
    # get a page of results
    r = await s.execute()
    # do something with this page of results
    # exit the loop if we reached the end
    if len(r.hits) < page_size:
        break
    # get a search object with the next page of results
    s = s.search_after()
Note that the search_after option requires the search to have an explicit sort order.
- sort(*keys: str | InstrumentedField | Dict[str, Dict[str, str]]) Self
Add sorting information to the search request. If called without arguments it will remove all sort requirements. Otherwise it will replace them. Acceptable arguments are:
'some.field'
'-some.other.field'
{'different.field': {'any': 'dict'}}
so for example:
s = Search().sort(
    'category',
    '-title',
    {"price": {"order": "asc", "mode": "avg"}}
)
will sort by category, title (in descending order) and price in ascending order using the avg mode.
The API returns a copy of the Search object and can thus be chained.
- source(fields: bool | str | InstrumentedField | List[str | InstrumentedField] | Dict[str, List[str | InstrumentedField]] | None = None, **kwargs: Any) Self
Selectively control how the _source field is returned.
- Parameters:
fields – field name, wildcard string, list of field names or wildcards, or dictionary of includes and excludes
kwargs – includes or excludes arguments, when fields is None.
When no arguments are given, the entire document will be returned for each hit. If fields is a string or list of strings, the field names or field wildcards given will be included. If fields is a dictionary with keys of 'includes' and/or 'excludes' the fields will be either included or excluded appropriately.
Calling this multiple times with the same named parameter will override the previous values with the new ones.
Example:
s = Search()
s = s.source(includes=['obj1.*'], excludes=["*.description"])

s = Search()
s = s.source(includes=['obj1.*']).source(excludes=["*.description"])
- suggest(name: str, text: str | None = None, regex: str | None = None, **kwargs: Any) Self
Add a suggestions request to the search.
- Parameters:
name – name of the suggestion
text – text to suggest on
All keyword arguments will be added to the suggestions body. For example:
s = Search()
s = s.suggest('suggestion-1', 'Elasticsearch', term={'field': 'body'})

# regex query for Completion Suggester
s = Search()
s = s.suggest('suggestion-1', regex='py[thon|py]', completion={'field': 'body'})
- to_dict(count: bool = False, **kwargs: Any) Dict[str, Any]
Serialize the search into the dictionary that will be sent over as the request’s body.
- Parameters:
count – a flag to specify if we are interested in a body for count - no aggregations, no pagination bounds etc.
All additional keyword arguments will be included into the dictionary.
- update_from_dict(d: Dict[str, Any]) Self
Apply options from a serialized body to the current instance. Modifies the object in-place. Used mostly by from_dict.
- using(client: str | Elasticsearch | AsyncElasticsearch) Self
Associate the search request with an elasticsearch client. A fresh copy will be returned, with the current instance remaining unchanged.
- Parameters:
client – an instance of elasticsearch.Elasticsearch to use or an alias to look up in elasticsearch_dsl.connections
- class elasticsearch_dsl.AsyncMultiSearch(**kwargs: Any)
Combine multiple Search objects into a single request.
- add(search: SearchBase[_R]) Self
Adds a new Search object to the request:
ms = MultiSearch(index='my-index')
ms = ms.add(Search(doc_type=Category).filter('term', category='python'))
ms = ms.add(Search(doc_type=Blog))
- doc_type(*doc_type: type | str, **kwargs: Callable[[AttrDict[Any]], Any]) Self
Set the type to search through. You can supply a single value or multiple. Values can be strings or subclasses of Document.
You can also pass in any keyword arguments, mapping a doc_type to a callback that should be used instead of the Hit class.
If no doc_type is supplied, any information stored on the instance will be erased.
Example:
s = Search().doc_type('product', 'store', User, custom=my_callback)
- async execute(ignore_cache: bool = False, raise_on_error: bool = True) List[Response[_R]]
Execute the multi search request and return a list of search results.
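A minimal usage sketch (the 'blog' index and title field are illustrative assumptions):
ms = AsyncMultiSearch(index='blog')
ms = ms.add(AsyncSearch().query('match', title='python'))
ms = ms.add(AsyncSearch().query('match', title='search'))
responses = await ms.execute()
# one Response per added search, in the same order
for response in responses:
    for hit in response:
        print(hit.meta.id)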
- extra(**kwargs: Any) Self
Add extra keys to the request body. Mostly here for backwards compatibility.
- index(*index: str | List[str] | Tuple[str, ...]) Self
Set the index for the search. If called empty it will remove all information.
Example:
s = Search()
s = s.index('twitter-2015.01.01', 'twitter-2015.01.02')
s = s.index(['twitter-2015.01.01', 'twitter-2015.01.02'])
- params(**kwargs: Any) Self
Specify query params to be used when executing the search. All the keyword arguments will override the current values. See https://elasticsearch-py.readthedocs.io/en/latest/api/elasticsearch.html#elasticsearch.Elasticsearch.search for all available parameters.
Example:
s = Search()
s = s.params(routing='user-1', preference='local')
- using(client: str | Elasticsearch | AsyncElasticsearch) Self
Associate the search request with an elasticsearch client. A fresh copy will be returned, with the current instance remaining unchanged.
- Parameters:
client – an instance of elasticsearch.Elasticsearch to use or an alias to look up in elasticsearch_dsl.connections
- class elasticsearch_dsl.AsyncDocument(meta: Dict[str, Any] | None = None, **kwargs: Any)
Model-like class for persisting documents in elasticsearch.
- async classmethod bulk(actions: AsyncIterable[Self | Dict[str, Any]], using: str | AsyncElasticsearch | None = None, index: str | None = None, validate: bool = True, skip_empty: bool = True, **kwargs: Any) Tuple[int, int | List[Any]]
Perform multiple indexing operations in a single request.
- Parameters:
actions – a generator that returns document instances to be indexed, or bulk operation dictionaries.
using – connection alias to use, defaults to 'default'
index – Elasticsearch index to use, if the Document is associated with an index this can be omitted.
validate – set to False to skip validating the documents
skip_empty – if set to False will cause empty values (None, [], {}) to be left on the document. Those values will be stripped out otherwise as they make no difference in Elasticsearch.
Any additional keyword arguments will be passed to Elasticsearch.bulk unchanged.
- Returns:
bulk operation results
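A minimal usage sketch; Post is a hypothetical AsyncDocument subclass, not part of the library:
async def generate_posts():
    for i in range(10):
        # plain document instances; bulk operation dicts are also accepted
        yield Post(meta={'id': i}, title=f'Post {i}')

await Post.bulk(generate_posts())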
- async delete(using: str | AsyncElasticsearch | None = None, index: str | None = None, **kwargs: Any) None
Delete the instance in elasticsearch.
- Parameters:
index – elasticsearch index to use, if the Document is associated with an index this can be omitted.
using – connection alias to use, defaults to 'default'
Any additional keyword arguments will be passed to Elasticsearch.delete unchanged.
- async classmethod exists(id: str, using: str | AsyncElasticsearch | None = None, index: str | None = None, **kwargs: Any) bool
Check if a document with the given id exists in elasticsearch.
- Parameters:
id – id of the document to check for existence
index – elasticsearch index to use, if the Document is associated with an index this can be omitted.
using – connection alias to use, defaults to 'default'
Any additional keyword arguments will be passed to Elasticsearch.exists unchanged.
- async classmethod get(id: str, using: str | AsyncElasticsearch | None = None, index: str | None = None, **kwargs: Any) Self | None
Retrieve a single document from elasticsearch using its id.
- Parameters:
id – id of the document to be retrieved
index – elasticsearch index to use, if the Document is associated with an index this can be omitted.
using – connection alias to use, defaults to 'default'
Any additional keyword arguments will be passed to Elasticsearch.get unchanged.
- async classmethod init(index: str | None = None, using: str | AsyncElasticsearch | None = None) None
Create the index and populate the mappings in elasticsearch.
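A minimal usage sketch; Post and the 'posts' index are illustrative assumptions:
from elasticsearch_dsl import AsyncDocument, Text

class Post(AsyncDocument):
    title = Text()

    class Index:
        name = 'posts'

# create the 'posts' index with the Post mappings
await Post.init()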
- async classmethod mget(docs: List[Dict[str, Any]], using: str | AsyncElasticsearch | None = None, index: str | None = None, raise_on_error: bool = True, missing: str = 'none', **kwargs: Any) List[Self | None]
Retrieve multiple documents by their ids. Returns a list of instances in the same order as requested.
- Parameters:
docs – list of ids of the documents to be retrieved or a list of document specifications as per https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-multi-get.html
index – elasticsearch index to use, if the Document is associated with an index this can be omitted.
using – connection alias to use, defaults to 'default'
missing – what to do when one of the documents requested is not found. Valid options are 'none' (use None), 'raise' (raise NotFoundError) or 'skip' (ignore the missing document).
Any additional keyword arguments will be passed to Elasticsearch.mget unchanged.
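A minimal usage sketch; Post is a hypothetical AsyncDocument subclass:
# returns the instances in the order of the requested ids, with None for missing documents
posts = await Post.mget(['1', '2', '3'], missing='none')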
- async save(using: str | AsyncElasticsearch | None = None, index: str | None = None, validate: bool = True, skip_empty: bool = True, return_doc_meta: bool = False, **kwargs: Any) Any
Save the document into elasticsearch. If the document doesn't exist it is created, otherwise it is overwritten. Returns True if this operation resulted in a new document being created.
- Parameters:
index – elasticsearch index to use, if the Document is associated with an index this can be omitted.
using – connection alias to use, defaults to 'default'
validate – set to False to skip validating the document
skip_empty – if set to False will cause empty values (None, [], {}) to be left on the document. Those values will be stripped out otherwise as they make no difference in elasticsearch.
return_doc_meta – set to True to return all metadata from the update API call instead of only the operation result
Any additional keyword arguments will be passed to Elasticsearch.index unchanged.
- Returns:
operation result created/updated
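A minimal usage sketch; Post is a hypothetical AsyncDocument subclass:
post = Post(meta={'id': 42}, title='Hello world')
result = await post.save()  # operation result, see Returns above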
- classmethod search(using: str | AsyncElasticsearch | None = None, index: str | None = None) AsyncSearch[Self]
Create a Search instance that will search over this Document.
- to_dict(include_meta: bool = False, skip_empty: bool = True) Dict[str, Any]
Serialize the instance into a dictionary so that it can be saved in elasticsearch.
- Parameters:
include_meta – if set to True will include all the metadata (_index, _id, etc.). Otherwise just the document's data is serialized. This is useful when passing multiple instances into elasticsearch.helpers.bulk.
skip_empty – if set to False will cause empty values (None, [], {}) to be left on the document. Those values will be stripped out otherwise as they make no difference in elasticsearch.
- async update(using: str | AsyncElasticsearch | None = None, index: str | None = None, detect_noop: bool = True, doc_as_upsert: bool = False, refresh: bool = False, retry_on_conflict: int | None = None, script: str | Dict[str, Any] | None = None, script_id: str | None = None, scripted_upsert: bool = False, upsert: Dict[str, Any] | None = None, return_doc_meta: bool = False, **fields: Any) Any
Partial update of the document: specify the fields you wish to update and both the instance and the document in elasticsearch will be updated:
doc = MyDocument(title='Document Title!')
await doc.save()
await doc.update(title='New Document Title!')
- Parameters:
index – elasticsearch index to use, if the Document is associated with an index this can be omitted.
using – connection alias to use, defaults to 'default'
detect_noop – Set to False to disable noop detection.
refresh – Control when the changes made by this request are visible to search. Set to True for immediate effect.
retry_on_conflict – In between the get and indexing phases of the update, it is possible that another process might have already updated the same document. By default, the update will fail with a version conflict exception. The retry_on_conflict parameter controls how many times to retry the update before finally throwing an exception.
doc_as_upsert – Instead of sending a partial doc plus an upsert doc, setting doc_as_upsert to true will use the contents of doc as the upsert value
script – the source code of the script as a string, or a dictionary with script attributes to update.
return_doc_meta – set to True to return all metadata from the index API call instead of only the operation result
- Returns:
operation result noop/updated
- class elasticsearch_dsl.AsyncIndex(name: str, using: str | AsyncElasticsearch = 'default')
- Parameters:
name – name of the index
using – connection alias to use, defaults to 'default'
- aliases(**kwargs: Any) Self
Add aliases to the index definition:
i = Index('blog-v2')
i.aliases(blog={}, published={'filter': Q('term', published=True)})
- async analyze(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Perform the analysis process on a text and return the tokens breakdown of the text.
Any additional keyword arguments will be passed to Elasticsearch.indices.analyze unchanged.
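A minimal usage sketch; the analyzer name and sample text are arbitrary:
i = AsyncIndex('blog')
tokens = await i.analyze(analyzer='standard', text='Hello World')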
- analyzer(*args: Any, **kwargs: Any) None
Explicitly add an analyzer to an index. Note that all custom analyzers defined in mappings will also be created. This is useful for search analyzers.
Example:
from elasticsearch_dsl import analyzer, tokenizer

my_analyzer = analyzer('my_analyzer',
    tokenizer=tokenizer('trigram', 'nGram', min_gram=3, max_gram=3),
    filter=['lowercase']
)

i = Index('blog')
i.analyzer(my_analyzer)
- async clear_cache(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Clear all caches or specific caches associated with the index.
Any additional keyword arguments will be passed to Elasticsearch.indices.clear_cache unchanged.
- clone(name: str | None = None, using: str | AsyncElasticsearch | None = None) Self
Create a copy of the instance with another name or connection alias. Useful for creating multiple indices with shared configuration:
i = Index('base-index')
i.settings(number_of_shards=1)
await i.create()

i2 = i.clone('other-index')
await i2.create()
- Parameters:
name – name of the index
using – connection alias to use, defaults to 'default'
- async close(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Closes the index in elasticsearch.
Any additional keyword arguments will be passed to Elasticsearch.indices.close unchanged.
- async create(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Creates the index in elasticsearch.
Any additional keyword arguments will be passed to Elasticsearch.indices.create unchanged.
- async delete(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Deletes the index in elasticsearch.
Any additional keyword arguments will be passed to Elasticsearch.indices.delete unchanged.
- async delete_alias(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Delete specific alias.
Any additional keyword arguments will be passed to Elasticsearch.indices.delete_alias unchanged.
- document(document: DocumentMeta) DocumentMeta
Associate a Document subclass with an index. This means that, when this index is created, it will contain the mappings for the Document. If the Document class doesn't have a default index yet (by defining class Index), this instance will be used. Can be used as a decorator:
i = Index('blog')

@i.document
class Post(Document):
    title = Text()

# create the index, including Post mappings
await i.create()

# .search() will now return a Search object that will return
# properly deserialized Post instances
s = i.search()
- async exists(using: str | AsyncElasticsearch | None = None, **kwargs: Any) bool
Returns True if the index already exists in elasticsearch.
Any additional keyword arguments will be passed to Elasticsearch.indices.exists unchanged.
- async exists_alias(using: str | AsyncElasticsearch | None = None, **kwargs: Any) bool
Return a boolean indicating whether given alias exists for this index.
Any additional keyword arguments will be passed to Elasticsearch.indices.exists_alias unchanged.
- async flush(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Performs a flush operation on the index.
Any additional keyword arguments will be passed to Elasticsearch.indices.flush unchanged.
- async forcemerge(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
The force merge API allows forcing the merging of the index through an API. The merge relates to the number of segments a Lucene index holds within each shard. The force merge operation allows reducing the number of segments by merging them.
This call will block until the merge is complete. If the http connection is lost, the request will continue in the background, and any new requests will block until the previous force merge is complete.
Any additional keyword arguments will be passed to Elasticsearch.indices.forcemerge unchanged.
- async get(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
The get index API allows to retrieve information about the index.
Any additional keyword arguments will be passed to Elasticsearch.indices.get unchanged.
- async get_alias(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Retrieve a specified alias.
Any additional keyword arguments will be passed to Elasticsearch.indices.get_alias unchanged.
- async get_field_mapping(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Retrieve mapping definition of a specific field.
Any additional keyword arguments will be passed to Elasticsearch.indices.get_field_mapping unchanged.
- async get_mapping(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Retrieve specific mapping definition for a specific type.
Any additional keyword arguments will be passed to Elasticsearch.indices.get_mapping unchanged.
- async get_settings(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Retrieve settings for the index.
Any additional keyword arguments will be passed to Elasticsearch.indices.get_settings unchanged.
- mapping(mapping: MappingBase) None
Associate a mapping (an instance of Mapping) with this index. This means that, when this index is created, it will contain the mappings for the document type defined by those mappings.
- async open(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Opens the index in elasticsearch.
Any additional keyword arguments will be passed to Elasticsearch.indices.open unchanged.
- async put_alias(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Create an alias for the index.
Any additional keyword arguments will be passed to Elasticsearch.indices.put_alias unchanged.
- async put_mapping(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Register specific mapping definition for a specific type.
Any additional keyword arguments will be passed to Elasticsearch.indices.put_mapping unchanged.
- async put_settings(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Change specific index level settings in real time.
Any additional keyword arguments will be passed to Elasticsearch.indices.put_settings unchanged.
- async recovery(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
The indices recovery API provides insight into on-going shard recoveries for the index.
Any additional keyword arguments will be passed to Elasticsearch.indices.recovery unchanged.
- async refresh(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Performs a refresh operation on the index.
Any additional keyword arguments will be passed to Elasticsearch.indices.refresh unchanged.
- async save(using: str | AsyncElasticsearch | None = None) ObjectApiResponse[Any] | None
Sync the index definition with elasticsearch, creating the index if it doesn’t exist and updating its settings and mappings if it does.
Note that some settings and mapping changes cannot be done on an open index (or at all on an existing index); for those this method will fail with the underlying exception.
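A minimal usage sketch (the 'blog' index is an illustrative assumption):
i = AsyncIndex('blog')
i.settings(number_of_shards=1, number_of_replicas=0)
# creates the index if missing, otherwise pushes the changed settings/mappings
await i.save()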
- search(using: str | AsyncElasticsearch | None = None) AsyncSearch
Return a Search object searching over the index (or all the indices belonging to this template) and its Documents.
- async segments(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Provide low level segments information that a Lucene index (shard level) is built with.
Any additional keyword arguments will be passed to Elasticsearch.indices.segments unchanged.
- settings(**kwargs: Any) Self
Add settings to the index:
i = Index('i')
i.settings(number_of_shards=1, number_of_replicas=0)
Multiple calls to settings will merge the keys, with later values overriding earlier ones.
- async shard_stores(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Provides store information for shard copies of the index. Store information reports on which nodes shard copies exist, the shard copy version, indicating how recent they are, and any exceptions encountered while opening the shard index or from earlier engine failure.
Any additional keyword arguments will be passed to Elasticsearch.indices.shard_stores unchanged.
- async shrink(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
The shrink index API allows you to shrink an existing index into a new index with fewer primary shards. The number of primary shards in the target index must be a factor of the shards in the source index. For example an index with 8 primary shards can be shrunk into 4, 2 or 1 primary shards or an index with 15 primary shards can be shrunk into 5, 3 or 1. If the number of shards in the index is a prime number it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.
Any additional keyword arguments will be passed to Elasticsearch.indices.shrink unchanged.
- async stats(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Retrieve statistics on different operations happening on the index.
Any additional keyword arguments will be passed to Elasticsearch.indices.stats unchanged.
- updateByQuery(using: str | AsyncElasticsearch | None = None) AsyncUpdateByQuery
Return an UpdateByQuery object searching over the index (or all the indices belonging to this template) and updating Documents that match the search criteria.
For more information, see here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html
- async validate_query(using: str | AsyncElasticsearch | None = None, **kwargs: Any) ObjectApiResponse[Any]
Validate a potentially expensive query without executing it.
Any additional keyword arguments will be passed to Elasticsearch.indices.validate_query unchanged.
- class elasticsearch_dsl.AsyncFacetedSearch(query: str | Query | None = None, filters: Dict[str, str | datetime | Sequence[str]] = {}, sort: Sequence[str] = [])
- Parameters:
query – the text to search for
filters – facet values to filter
sort – sort information to be passed to Search
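A minimal subclass sketch; the index, fields and facet names are illustrative assumptions:
from elasticsearch_dsl import AsyncFacetedSearch, TermsFacet

class BlogSearch(AsyncFacetedSearch):
    index = 'blog'
    fields = ['title^5', 'body']
    facets = {
        'category': TermsFacet(field='category'),
    }

bs = BlogSearch('python', filters={'category': 'search'})
response = await bs.execute()
for value, count, selected in response.facets.category:
    print(value, count, selected)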
- add_filter(name: str, filter_values: str | datetime | Sequence[str] | List[str | datetime | Sequence[str]]) None
Add a filter for a facet.
- aggregate(search: SearchBase[_R]) None
Add aggregations representing the facets selected, including potential filters.
- build_search() SearchBase[_R]
Construct the Search object.
- async execute() Response[_R]
Execute the search and return the response.
- filter(search: SearchBase[_R]) SearchBase[_R]
Add a post_filter to the search request, narrowing the results based on the facet filters.
- highlight(search: SearchBase[_R]) SearchBase[_R]
Add highlighting for all the fields
- params(**kwargs: Any) None
Specify query params to be used when executing the search. All the keyword arguments will override the current values. See https://elasticsearch-py.readthedocs.io/en/master/api.html#elasticsearch.Elasticsearch.search for all available parameters.
- query(search: SearchBase[_R], query: str | Query) SearchBase[_R]
Add the query part to the search.
Override this if you wish to customize the query used.
- search() AsyncSearch[_R]
Returns the base Search object to which the facets are added.
You can customize the query by overriding this method and returning a modified search object.
- sort(search: SearchBase[_R]) SearchBase[_R]
Add sorting information to the request.
- class elasticsearch_dsl.AsyncUpdateByQuery(**kwargs: Any)
Update by query request to elasticsearch.
- Parameters:
using – Elasticsearch instance to use
index – limit the search to index
doc_type – only query this type.
All the parameters supplied (or omitted) at creation time can later be overridden by the respective methods (using, index and doc_type).
- doc_type(*doc_type: type | str, **kwargs: Callable[[AttrDict[Any]], Any]) Self
Set the type to search through. You can supply a single value or multiple. Values can be strings or subclasses of Document.
You can also pass in any keyword arguments, mapping a doc_type to a callback that should be used instead of the Hit class.
If no doc_type is supplied, any information stored on the instance will be erased.
Example:
s = Search().doc_type('product', 'store', User, custom=my_callback)
- async execute() UpdateByQueryResponse[_R]
Execute the search and return an instance of Response wrapping all the data.
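A minimal usage sketch (the 'blog' index, published flag and likes counter are illustrative assumptions); the response wraps the raw update-by-query body, so counters such as response.updated should be available:
ubq = AsyncUpdateByQuery(index='blog')
ubq = ubq.filter('term', published=True)
ubq = ubq.script(source='ctx._source.likes++')
response = await ubq.execute()
print(response.updated)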
- extra(**kwargs: Any) Self
Add extra keys to the request body. Mostly here for backwards compatibility.
- classmethod from_dict(d: Dict[str, Any]) Self
Construct a new UpdateByQuery instance from a raw dict containing the search body. Useful when migrating from raw dictionaries.
Example:
ubq = UpdateByQuery.from_dict({
    "query": {
        "bool": {
            "must": [...]
        }
    },
    "script": {...}
})
ubq = ubq.filter('term', published=True)
- index(*index: str | List[str] | Tuple[str, ...]) Self
Set the index for the search. If called empty it will remove all information.
Example:
s = Search()
s = s.index('twitter-2015.01.01', 'twitter-2015.01.02')
s = s.index(['twitter-2015.01.01', 'twitter-2015.01.02'])
- params(**kwargs: Any) Self
Specify query params to be used when executing the search. All the keyword arguments will override the current values. See https://elasticsearch-py.readthedocs.io/en/latest/api/elasticsearch.html#elasticsearch.Elasticsearch.search for all available parameters.
Example:
s = Search()
s = s.params(routing='user-1', preference='local')
- response_class(cls: Type[UpdateByQueryResponse[_R]]) Self
Override the default wrapper used for the response.
- script(**kwargs: Any) Self
Define the update action to take. See https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-scripting-using.html for more details.
Note: the API only accepts a single script, so calling script multiple times will overwrite the previous one.
Example:
ubq = UpdateByQuery()
ubq = ubq.script(source="ctx._source.likes++")
ubq = ubq.script(source="ctx._source.likes += params.f", lang="expression", params={'f': 3})
- to_dict(**kwargs: Any) Dict[str, Any]
Serialize the search into the dictionary that will be sent over as the request's body.
All additional keyword arguments will be included in the dictionary.
- update_from_dict(d: Dict[str, Any]) Self
Apply options from a serialized body to the current instance. Modifies the object in-place. Used mostly by from_dict.
- using(client: str | Elasticsearch | AsyncElasticsearch) Self
Associate the search request with an elasticsearch client. A fresh copy will be returned, with the current instance remaining unchanged.
- Parameters:
client – an instance of elasticsearch.Elasticsearch to use or an alias to look up in elasticsearch_dsl.connections