NomicEmbeddings#

class langchain_nomic.embeddings.NomicEmbeddings(
*,
model: str,
nomic_api_key: str | None = ...,
dimensionality: int | None = ...,
inference_mode: Literal['remote'] = ...,
)[source]#
class langchain_nomic.embeddings.NomicEmbeddings(
*,
model: str,
nomic_api_key: str | None = ...,
dimensionality: int | None = ...,
inference_mode: Literal['local', 'dynamic'],
device: str | None = ...,
)
class langchain_nomic.embeddings.NomicEmbeddings(
*,
model: str,
nomic_api_key: str | None = ...,
dimensionality: int | None = ...,
inference_mode: str,
device: str | None = ...,
)

NomicEmbeddings embedding model.

Example

from langchain_nomic import NomicEmbeddings

model = NomicEmbeddings(model="nomic-embed-text-v1.5")

Initialize NomicEmbeddings model.

Parameters:
  • model (str) – The model name.

  • nomic_api_key (str | None) – Optionally set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default.

  • dimensionality (int | None) – The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size.

  • inference_mode (str) – How to generate embeddings. One of 'remote', 'local' (Embed4All), or 'dynamic' (automatic). Defaults to 'remote'.

  • device (str | None) – The device to use for local embeddings. Choices include 'cpu', 'gpu', 'nvidia', 'amd', or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to 'cpu'. Do not use on macOS.

  • vision_model (str | None)
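
The dimensionality parameter relies on Matryoshka representation learning: a prefix of the full embedding is itself a usable, lower-dimensional embedding. A minimal sketch of the idea, using a placeholder vector in place of real model output (truncate to the first k dimensions, then re-normalize to unit length):

```python
import math

# Placeholder full-size embedding; real values would come from the model.
full = [0.5, 0.5, 0.5, 0.5]

# Keep only the first k dimensions, then re-normalize,
# which is how Matryoshka-style truncation is typically applied.
k = 2
truncated = full[:k]
norm = math.sqrt(sum(x * x for x in truncated))
shortened = [x / norm for x in truncated]
```

Passing dimensionality to the constructor asks the service to do this server-side, trading a small amount of quality for smaller, cheaper vectors.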

Methods

__init__()

Initialize NomicEmbeddings model.

aembed_documents(texts)

Asynchronously embed search docs.

aembed_query(text)

Asynchronously embed query text.

embed(texts, *, task_type)

Embed texts.

embed_documents(texts)

Embed search docs.

embed_image(uris)

Embed images given by their URIs.

embed_query(text)

Embed query text.

__init__(
*,
model: str,
nomic_api_key: str | None = None,
dimensionality: int | None = None,
inference_mode: Literal['remote'] = 'remote',
)[source]#
__init__(
*,
model: str,
nomic_api_key: str | None = None,
dimensionality: int | None = None,
inference_mode: Literal['local', 'dynamic'],
device: str | None = None,
)
__init__(
*,
model: str,
nomic_api_key: str | None = None,
dimensionality: int | None = None,
inference_mode: str,
device: str | None = None,
)

Initialize NomicEmbeddings model.

Parameters:
  • model – The model name.

  • nomic_api_key – Optionally set the Nomic API key. Uses the NOMIC_API_KEY environment variable by default.

  • dimensionality – The embedding dimension, for use with Matryoshka-capable models. Defaults to full-size.

  • inference_mode – How to generate embeddings. One of 'remote', 'local' (Embed4All), or 'dynamic' (automatic). Defaults to 'remote'.

  • device – The device to use for local embeddings. Choices include 'cpu', 'gpu', 'nvidia', 'amd', or a specific device name. See the docstring for GPT4All.__init__ for more info. Typically defaults to 'cpu'. Do not use on macOS.

async aembed_documents(
texts: list[str],
) β†’ list[list[float]]#

Asynchronously embed search docs.

Parameters:

texts (list[str]) – List of text to embed.

Returns:

List of embeddings.

Return type:

list[list[float]]
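
aembed_documents is awaited inside an event loop. The sketch below uses a stub coroutine standing in for a real NomicEmbeddings instance (which would require an API key); only the calling pattern is the point:

```python
import asyncio

# Stub standing in for NomicEmbeddings.aembed_documents; the real method
# returns one embedding (list[float]) per input text.
async def aembed_documents(texts):
    return [[float(len(t))] for t in texts]

async def main():
    # Awaiting inside a coroutine, as with the real async API.
    return await aembed_documents(["foo", "hello"])

vectors = asyncio.run(main())
```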

async aembed_query(text: str) β†’ list[float]#

Asynchronously embed query text.

Parameters:

text (str) – Text to embed.

Returns:

Embedding.

Return type:

list[float]

embed(
texts: list[str],
*,
task_type: str,
) β†’ list[list[float]][source]#

Embed texts.

Parameters:
  • texts (list[str]) – List of texts to embed.

  • task_type (str) – The task type to use when embedding. One of 'search_query', 'search_document', 'classification', or 'clustering'.

Return type:

list[list[float]]
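
The document and query helpers plausibly route through embed() with a fixed task_type. A hypothetical sketch of that wrapper pattern, returning placeholder vectors rather than real Nomic API output:

```python
# Hypothetical dispatcher mirroring how embed_documents / embed_query
# could wrap embed() with a fixed task_type (not the library's actual code).
TASK_TYPES = {"search_query", "search_document", "classification", "clustering"}

def embed(texts, *, task_type):
    if task_type not in TASK_TYPES:
        raise ValueError(f"unknown task_type: {task_type}")
    # Placeholder vectors standing in for real embeddings.
    return [[0.0, 0.0] for _ in texts]

def embed_documents(texts):
    return embed(texts, task_type="search_document")

def embed_query(text):
    return embed([text], task_type="search_query")[0]
```

Asymmetric task types ('search_query' vs. 'search_document') let the model embed queries and documents differently while keeping them in the same vector space.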

embed_documents(
texts: list[str],
) β†’ list[list[float]][source]#

Embed search docs.

Parameters:

texts (list[str]) – List of texts to embed as documents.

Return type:

list[list[float]]

embed_image(
uris: list[str],
) β†’ list[list[float]][source]#
Parameters:

uris (list[str])

Return type:

list[list[float]]

embed_query(text: str) β†’ list[float][source]#

Embed query text.

Parameters:

text (str) – Query text to embed.

Return type:

list[float]
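
A typical downstream use of embed_query together with embed_documents is nearest-neighbor search by cosine similarity. The sketch below uses placeholder vectors in place of real API output; only the ranking logic is illustrated:

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Placeholder embeddings standing in for embed_documents / embed_query output.
doc_vecs = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
query_vec = [0.6, 0.8]

scores = [cosine(query_vec, d) for d in doc_vecs]
best = max(range(len(doc_vecs)), key=scores.__getitem__)  # index of closest doc
```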