Overview
Integration details
| Class | Package | Local | Serializable | JS support | Downloads | Version |
| --- | --- | --- | --- | --- | --- | --- |
| ChatClovaX | langchain-naver | ❌ | ❌ | ❌ | | |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
Before using the chat model, you must go through the four steps below.
- Create a NAVER Cloud Platform account
- Apply to use CLOVA Studio
- Create a CLOVA Studio Test App or Service App for the model you want to use (See here.)
- Issue a Test or Service API key (See here.)
Credentials
Set the `CLOVASTUDIO_API_KEY` environment variable with your API key. You can add it to your environment variables as below:
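For example, a minimal way to set the variable from Python (the placeholder value is illustrative; in an interactive session you might prompt for the key with `getpass` instead, as in common LangChain setup snippets):

```python
import os

# Set the CLOVA Studio API key only if it is not already in the environment.
# In an interactive session you could prompt for it instead:
#   import getpass
#   os.environ["CLOVASTUDIO_API_KEY"] = getpass.getpass("CLOVA Studio API key: ")
os.environ.setdefault("CLOVASTUDIO_API_KEY", "<your-api-key>")
```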
Installation
The LangChain Naver integration lives in the `langchain-naver` package:
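The package can be installed with pip (the `-qU` flags simply quiet the output and upgrade any existing install):

```shell
pip install -qU langchain-naver
```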
Instantiation
Now we can instantiate our model object and generate chat completions:
Invocation
In addition to `invoke`, `ChatClovaX` also supports batch, stream, and their async counterparts.
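As a sketch of the call shape (the model name `HCX-005` and the message contents are illustrative; the actual request is left commented out because it requires a valid API key):

```python
# Messages can be passed as (role, content) tuples; this part is plain Python.
messages = [
    ("system", "You are a helpful translator. Translate the user sentence to Korean."),
    ("human", "I love using NAVER AI."),
]

# With langchain-naver installed and CLOVASTUDIO_API_KEY set, the call would be:
#   from langchain_naver import ChatClovaX
#   chat = ChatClovaX(model="HCX-005")
#   ai_msg = chat.invoke(messages)        # or: await chat.ainvoke(messages)
#   print(ai_msg.content)
```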
Chaining
We can chain our model with a prompt template like so:
Streaming
Tool calling
CLOVA Studio supports tool calling (also known as "function calling"), which lets you describe tools and their arguments and have the model return a JSON object naming a tool to invoke along with the inputs to that tool. Tool calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally. Note: you should set `max_tokens` to a value larger than 1024 to use the tool calling feature in CLOVA Studio.
ChatClovaX.bind_tools()
With `ChatClovaX.bind_tools`, we can easily pass Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to OpenAI-compatible tool schemas, which look like:
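As an illustration of that schema shape (the field names follow the OpenAI function-calling convention; the weather tool itself is a made-up example):

```python
# An OpenAI-compatible tool schema: a function name, a description, and a
# JSON Schema describing the arguments.
tool_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# With a model instance, binding would look like:
#   chat_with_tools = chat.bind_tools([get_weather])
```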
AIMessage.tool_calls
Notice that the AIMessage has a `tool_calls` attribute. This contains the tool calls in a standardized `ToolCall` format that is model-provider agnostic.
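The standardized shape looks roughly like this (the values are illustrative):

```python
# A LangChain ToolCall is a dict carrying the tool name, the parsed
# arguments, a call id, and a type tag -- regardless of the model provider.
tool_call = {
    "name": "get_weather",      # hypothetical tool name
    "args": {"city": "Seoul"},  # arguments already parsed from JSON
    "id": "call_abc123",        # provider-assigned call id (illustrative)
    "type": "tool_call",
}
```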
Structured Outputs
For supported models, you can use the Structured Outputs feature to force the model to generate responses in a specific structure, such as a Pydantic model, a TypedDict, or JSON. Note: Structured Outputs requires Thinking mode to be disabled. Set `thinking.effort` to `none` and `method` to `json_schema`.
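For example, a schema can be declared with a stdlib `TypedDict` (the joke schema is illustrative; the `with_structured_output` call is left commented out because it requires a live model):

```python
from typing import TypedDict

# Illustrative output schema: the model would be forced to return these fields.
class Joke(TypedDict):
    setup: str
    punchline: str

# With a model instance and Thinking disabled, usage would look like:
#   structured_chat = chat.with_structured_output(Joke, method="json_schema")
#   joke = structured_chat.invoke("Tell me a joke about programming.")

# A value conforming to the schema:
example: Joke = {
    "setup": "Why do programmers prefer dark mode?",
    "punchline": "Because light attracts bugs.",
}
```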
Thinking
For supported models, when the Thinking feature is enabled (as it is by default), the model outputs the step-by-step reasoning process that led to its final answer. Specify the `thinking` parameter to control the feature: enable or disable the thinking process and configure its depth.
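A sketch of the parameter shape (the dict form and the `"none"` value follow the Structured Outputs note above; the model name is illustrative, and the instantiation is commented out because it requires an API key):

```python
# The thinking parameter is a dict; an effort of "none" disables the feature,
# while other effort levels (an assumption here) control reasoning depth.
thinking = {"effort": "none"}

# With langchain-naver installed:
#   from langchain_naver import ChatClovaX
#   chat = ChatClovaX(model="HCX-007", thinking=thinking)
```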
Accessing the thinking process
When Thinking mode is enabled, you can access the thinking process through the `thinking_content` attribute in `AIMessage.additional_kwargs`.
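The access pattern, shown against a stand-in dict (a real `AIMessage` exposes the same `additional_kwargs` mapping):

```python
# Stand-in for ai_msg.additional_kwargs on a real AIMessage.
additional_kwargs = {"thinking_content": "First, parse the question..."}

# .get() avoids a KeyError when Thinking mode is disabled and the key is absent.
thinking_content = additional_kwargs.get("thinking_content")
```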
Additional functionalities
Using fine-tuned models
You can call fine-tuned models by passing the `task_id` to the `model` parameter as `ft:{task_id}`.
You can find the `task_id` in the details of the corresponding Test App or Service App.
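For example (the task id below is a placeholder, not a real id):

```python
task_id = "abcd1234"  # placeholder: copy the real value from your app details

# A fine-tuned model is addressed as "ft:{task_id}".
model_name = f"ft:{task_id}"

# With langchain-naver installed:
#   chat = ChatClovaX(model=model_name)
```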