ChatNVIDIA
For detailed documentation of all ChatNVIDIA features and configurations, head to the API reference.
Overview
The langchain-nvidia-ai-endpoints package contains LangChain integrations for building applications with models on the NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking, from the community as well as from NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA-accelerated infrastructure and are deployed as NIMs: easy-to-use, prebuilt containers that deploy anywhere with a single command on NVIDIA-accelerated infrastructure.
NVIDIA-hosted deployments of NIMs are available to test on the NVIDIA API catalog. After testing, NIMs can be exported from NVIDIA's API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, giving enterprises ownership and full control of their IP and AI applications.
NIMs are packaged as container images on a per model basis and are distributed as NGC container images through the NVIDIA NGC Catalog.
At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.
This example goes over how to use LangChain to interact with NVIDIA-supported models via the ChatNVIDIA class.
For more information on accessing the chat models through this API, check out the ChatNVIDIA documentation.
Integration details
| Class | Package | Local | Serializable | JS support | Downloads | Version |
| --- | --- | --- | --- | --- | --- | --- |
| ChatNVIDIA | langchain-nvidia-ai-endpoints | ✅ | beta | ❌ | | |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
To get started:
- Create a free account with NVIDIA, which hosts NVIDIA AI Foundation models.
- Click on your model of choice.
- Under Input, select the Python tab, and click Get API Key. Then click Generate Key.
- Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
Credentials
Installation
The LangChain NVIDIA AI Endpoints integration lives in the langchain-nvidia-ai-endpoints package:
Instantiation
Now we can access models in the NVIDIA API Catalog:
Invocation
Working with NVIDIA NIMs
When ready to deploy, you can self-host models with NVIDIA NIM (which is included with the NVIDIA AI Enterprise software license) and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications. Learn more about NIMs.
Stream, Batch, and Async
These models natively support streaming, and as is the case with all LangChain LLMs, they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples.
Supported models
Querying available_models will still give you all of the other models offered by your API credentials.
The playground_ prefix is optional.
Model types
All of the models above are supported and can be accessed via ChatNVIDIA.
Some model types support unique prompting techniques and chat messages. We will review a few important ones below.
To find out more about a specific model, please navigate to the API section of an AI Foundation model as linked here.
General Chat
Models such as meta/llama3-8b-instruct and mistralai/mixtral-8x22b-instruct-v0.1 are good all-around models that you can use with any LangChain chat messages. Example below.
Code Generation
These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured-code tasks. An example of this is meta/codellama-70b.
Multimodal
NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is nvidia/neva-22b.
Below is an example use:
Passing an image as a URL
Passing an image as a base64 encoded string
At the moment, some extra processing happens client-side to support larger images like the one above. But for smaller images (and to better illustrate the process going on under the hood), we can directly pass in the image as shown below:
Directly within the string
The NVIDIA API uniquely accepts images as base64 images inlined within <img/> HTML tags. While this isn't interoperable with other LLMs, you can directly prompt the model accordingly.
Example usage within a RunnableWithMessageHistory
Like any other integration, ChatNVIDIA supports chat utilities like RunnableWithMessageHistory, which is analogous to using ConversationChain. Below, we show the LangChain RunnableWithMessageHistory example applied to the mistralai/mixtral-8x22b-instruct-v0.1 model.
Tool calling
Starting in v0.2, ChatNVIDIA supports bind_tools.
ChatNVIDIA provides integration with the variety of models on build.nvidia.com as well as local NIMs. Not all of these models are trained for tool calling. Be sure to select a model that does support tool calling for your experimentation and applications.
You can get a list of models that are known to support tool calling with:
Chaining
We can chain our model with a prompt template like so:
API reference
For detailed documentation of all ChatNVIDIA features and configurations, head to the API reference: python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html