NomicEmbeddings
- class langchain_nomic.embeddings.NomicEmbeddings(
- *,
- model: str,
- nomic_api_key: str | None = ...,
- dimensionality: int | None = ...,
- inference_mode: Literal['remote'] = ...,
- )
- class langchain_nomic.embeddings.NomicEmbeddings(
- *,
- model: str,
- nomic_api_key: str | None = ...,
- dimensionality: int | None = ...,
- inference_mode: Literal['local', 'dynamic'],
- device: str | None = ...,
- )
- class langchain_nomic.embeddings.NomicEmbeddings(
- *,
- model: str,
- nomic_api_key: str | None = ...,
- dimensionality: int | None = ...,
- inference_mode: str,
- device: str | None = ...,
- )
NomicEmbeddings embedding model.
Example:
from langchain_nomic import NomicEmbeddings
model = NomicEmbeddings()
Initialize NomicEmbeddings model.
- Parameters:
model (str) – model name
nomic_api_key (str | None) – optionally, set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default.
dimensionality (int | None) – The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size.
inference_mode (str) – How to generate embeddings. One of 'remote', 'local' (Embed4All), or 'dynamic' (automatic). Defaults to 'remote'.
device (str | None) – The device to use for local embeddings. Choices include 'cpu', 'gpu', 'nvidia', 'amd', or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to 'cpu'. Do not use on macOS.
vision_model (str | None)
Methods

__init__()
Initialize NomicEmbeddings model.
aembed_documents(texts)
Asynchronously embed search docs.
aembed_query(text)
Asynchronously embed query text.
embed(texts, *, task_type)
Embed texts.
embed_documents(texts)
Embed search docs.
embed_image(uris)
embed_query(text)
Embed query text.
- __init__(
- *,
- model: str,
- nomic_api_key: str | None = None,
- dimensionality: int | None = None,
- inference_mode: Literal['remote'] = 'remote',
- )
- __init__(
- *,
- model: str,
- nomic_api_key: str | None = None,
- dimensionality: int | None = None,
- inference_mode: Literal['local', 'dynamic'],
- device: str | None = None,
- )
- __init__(
- *,
- model: str,
- nomic_api_key: str | None = None,
- dimensionality: int | None = None,
- inference_mode: str,
- device: str | None = None,
- )
Initialize NomicEmbeddings model.
- Parameters:
model – model name
nomic_api_key – optionally, set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default.
dimensionality – The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size.
inference_mode – How to generate embeddings. One of 'remote', 'local' (Embed4All), or 'dynamic' (automatic). Defaults to 'remote'.
device – The device to use for local embeddings. Choices include 'cpu', 'gpu', 'nvidia', 'amd', or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to 'cpu'. Do not use on macOS.
- async aembed_documents(
- texts: list[str],
- ) → list[list[float]]
Asynchronously embed search docs.
- Parameters:
texts (list[str]) – List of texts to embed.
- Returns:
List of embeddings.
- Return type:
list[list[float]]
- async aembed_query(text: str) → list[float]
Asynchronously embed query text.
- Parameters:
text (str) – Text to embed.
- Returns:
Embedding.
- Return type:
list[float]
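The async methods above mirror their synchronous counterparts; a runnable sketch of the awaitable shapes, using a trivial stand-in class rather than the real client (which needs an API key or a local model):

```python
import asyncio


class FakeAsyncEmbeddings:
    """Stand-in mirroring the awaitable signatures documented above."""

    async def aembed_documents(self, texts: list[str]) -> list[list[float]]:
        # One dummy vector per input text, matching list[list[float]].
        return [[0.0, 1.0] for _ in texts]

    async def aembed_query(self, text: str) -> list[float]:
        # A single dummy vector, matching list[float].
        return [0.0, 1.0]


async def main() -> tuple[list[list[float]], list[float]]:
    emb = FakeAsyncEmbeddings()
    vecs = await emb.aembed_documents(["doc one", "doc two"])
    vec = await emb.aembed_query("a query")
    return vecs, vec


vecs, vec = asyncio.run(main())
```

With the real NomicEmbeddings instance the calls are identical, only awaited against the Nomic API.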
- embed(
- texts: list[str],
- *,
- task_type: str,
- ) → list[list[float]]
Embed texts.
- Parameters:
texts (list[str]) – list of texts to embed
task_type (str) – the task type to use when embedding. One of 'search_query', 'search_document', 'classification', 'clustering'
- Return type:
list[list[float]]
- embed_documents(
- texts: list[str],
- ) → list[list[float]]
Embed search docs.
- Parameters:
texts (list[str]) – list of texts to embed as documents
- Return type:
list[list[float]]
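The methods documented above all follow the same return-shape contract: embed and embed_documents map n texts to n vectors (list[list[float]]), embed_query maps one text to one vector (list[float]), and task_type is one of the four documented values. A minimal offline stand-in, useful for testing pipelines without API access (not the real client):

```python
from typing import List


class FakeNomicEmbeddings:
    """Stand-in mirroring NomicEmbeddings' documented return shapes."""

    def __init__(self, dimensionality: int = 8):
        self.dimensionality = dimensionality

    def embed(self, texts: List[str], *, task_type: str) -> List[List[float]]:
        # The documented task types for the real client.
        allowed = {"search_query", "search_document", "classification", "clustering"}
        if task_type not in allowed:
            raise ValueError(f"unknown task_type: {task_type}")
        # Dummy vectors: one per input text, each of length `dimensionality`.
        return [[float(len(t))] * self.dimensionality for t in texts]

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Documents are embedded with the 'search_document' task type.
        return self.embed(texts, task_type="search_document")

    def embed_query(self, text: str) -> List[float]:
        # A query is a single text embedded with the 'search_query' task type.
        return self.embed([text], task_type="search_query")[0]


emb = FakeNomicEmbeddings(dimensionality=4)
docs = emb.embed_documents(["hello", "world"])
query = emb.embed_query("hi")
```

The split between 'search_document' for stored texts and 'search_query' for lookups matches the asymmetric retrieval setup the task types imply.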