ChatNetmind
This will help you get started with Netmind chat models. For detailed documentation of all ChatNetmind features and configurations, head to the API reference.
- See https://www.netmind.ai/ for an example.
Overview
Integration details

Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
---|---|---|---|---|---|---|
ChatNetmind | langchain-netmind | ❌ | beta | ❌ | | |
Model features

Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
To access Netmind models you'll need to create a Netmind account, get an API key, and install the langchain-netmind integration package.
Credentials
Head to https://www.netmind.ai/ to sign up for Netmind and generate an API key. Once you've done this, set the NETMIND_API_KEY environment variable:
import getpass
import os
if not os.getenv("NETMIND_API_KEY"):
    os.environ["NETMIND_API_KEY"] = getpass.getpass("Enter your Netmind API key: ")
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
Installation
The LangChain Netmind integration lives in the langchain-netmind package:
%pip install -qU langchain-netmind
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_netmind import ChatNetmind
llm = ChatNetmind(
    model="model-name",  # replace with a chat model available on Netmind
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)
Invocation
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
print(ai_msg.content)
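Token-level streaming is listed in the features table above. As a minimal sketch, assuming the standard LangChain .stream() interface that every chat model inherits, you can print the translation as it is generated:

for chunk in llm.stream(messages):
    # each chunk is an AIMessageChunk; print its text as it arrives
    print(chunk.content, end="", flush=True)

Native async is also listed, so awaiting llm.ainvoke(messages) inside an async context works the same way as the synchronous call above.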
Chaining
We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)
chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
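The features table above also lists structured output. Here is a minimal sketch, assuming ChatNetmind supports LangChain's standard with_structured_output() method; the Translation schema below is a hypothetical example, not part of the Netmind API:

from pydantic import BaseModel, Field


class Translation(BaseModel):
    """Structured result for a translation request (hypothetical example schema)."""

    text: str = Field(description="The translated sentence")
    language: str = Field(description="The target language")


structured_llm = llm.with_structured_output(Translation)
structured_llm.invoke("Translate 'I love programming' into German.")

The call returns a Translation instance rather than an AIMessage.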
API reference
For detailed documentation of all ChatNetmind features and configurations, head to the API reference.
Related
- Chat model conceptual guide
- Chat model how-to guides