Netmind
Netmind AI helps you build AI faster, smarter, and more affordably: train, fine-tune, run inference, and scale on its global GPU network, an all-in-one AI engine.
This example goes over how to use LangChain to interact with Netmind AI models.
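Before running the example, install the langchain-netmind integration package and provide an API key. The NETMIND_API_KEY environment variable below is an assumption based on the usual LangChain naming convention; you can also pass netmind_api_key directly to the constructors, as the commented-out arguments in the code show.

pip install -U langchain-netmind

import os
# assumed environment variable; alternatively pass netmind_api_key=... to the constructors
os.environ["NETMIND_API_KEY"] = "YOUR_API_KEY"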
# Querying chat models with Netmind AI
from langchain_netmind import ChatNetmind, NetmindEmbeddings
# choose from our 50+ models here: https://www.netmind.ai/
chat = ChatNetmind(
    # netmind_api_key="YOUR_API_KEY",
    model="meta-llama/Llama-3.3-70B-Instruct",
)
# stream the response back from the model
for m in chat.stream("Who are you?"):
    print(m.content, end="", flush=True)
# if you don't want to do streaming, you can use the invoke method
# chat.invoke("Who are you?")
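Like any LangChain chat model, ChatNetmind can also be composed with a prompt template using LCEL. The following is a minimal sketch; the system prompt and question are only illustrative:

from langchain_core.prompts import ChatPromptTemplate

# build a simple prompt | model chain
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("human", "{question}"),
])
chain = prompt | chat
response = chain.invoke({"question": "What is LangChain?"})
print(response.content)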
# Generating embeddings with Netmind AI
embeddings = NetmindEmbeddings(
    # netmind_api_key="YOUR_API_KEY",
    model="nvidia/NV-Embed-v2",
)
single_vector = embeddings.embed_query("I love programming.")
print(str(single_vector)[:100])
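Embedding several texts at once works the same way through the standard embed_documents method; the sample texts here are placeholders:

# embed multiple documents in a single call
texts = ["I love programming.", "LangChain integrates many model providers."]
vectors = embeddings.embed_documents(texts)
print(len(vectors), len(vectors[0]))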