Gradient
Gradient allows you to fine-tune and get completions on LLMs with a simple web API.
This notebook goes over how to use LangChain with Gradient.
Imports
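The import cell itself is not shown here; a plausible set of imports, assuming the langchain-community integration for Gradient, would be:

```python
import os

# These import paths follow the langchain_community integration for
# Gradient; verify them against your installed LangChain version.
try:
    from langchain.chains import LLMChain
    from langchain_community.llms import GradientLLM
    from langchain_core.prompts import PromptTemplate
except ImportError:
    # pip install langchain langchain-community gradientai
    LLMChain = GradientLLM = PromptTemplate = None
```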
Set the Environment API Key
Make sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models. Set GRADIENT_ACCESS_TOKEN
and GRADIENT_WORKSPACE_ID
to get the currently deployed models, using the gradientai
Python package.
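A minimal sketch of setting both variables (the placeholder values are assumptions; substitute your own credentials, or prompt for them interactively with getpass):

```python
import os

# The gradientai package reads both variables from the environment.
# Replace the placeholders with your own credentials from Gradient AI.
os.environ.setdefault("GRADIENT_ACCESS_TOKEN", "<your-access-token>")
os.environ.setdefault("GRADIENT_WORKSPACE_ID", "<your-workspace-id>")
```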
Create the Gradient instance
You can specify different parameters such as the model, max_tokens generated, temperature, etc. As we later want to fine-tune our model, we select the model adapter with the id 674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter, but you can use any base or fine-tunable model.
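A sketch of constructing the LLM; the parameter names (`model`, `model_kwargs`) follow the langchain_community GradientLLM integration and should be verified against your installed version:

```python
import os

# Credentials as set in the earlier step (placeholders are assumptions).
os.environ.setdefault("GRADIENT_ACCESS_TOKEN", "<your-access-token>")
os.environ.setdefault("GRADIENT_WORKSPACE_ID", "<your-workspace-id>")

try:
    from langchain_community.llms import GradientLLM

    llm = GradientLLM(
        # The fine-tunable model adapter selected above.
        model="674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter",
        model_kwargs=dict(max_generated_token_count=128),
    )
except Exception:
    # Package missing or credentials rejected; install with
    # `pip install langchain-community gradientai` and retry.
    llm = None
```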
Create a Prompt Template
We will create a prompt template for Question and Answer.
Initiate the LLMChain
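Both steps might be sketched together as follows; the template text is an assumption in the usual Question/Answer shape, and `llm` is the GradientLLM instance from the previous step:

```python
try:
    from langchain_core.prompts import PromptTemplate

    # An assumed Question/Answer template in the usual LangChain shape.
    template = """Question: {question}

Answer: """
    prompt = PromptTemplate.from_template(template)

    # With the GradientLLM instance from the previous step:
    # from langchain.chains import LLMChain
    # llm_chain = LLMChain(prompt=prompt, llm=llm)
except ImportError:
    prompt = None  # pip install langchain-core
```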
Run the LLMChain
Provide a question and run the LLMChain.
Improve the results by fine-tuning (optional)
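For context, the run that produces the answer discussed next might be sketched like this (the example question is an assumption consistent with that answer):

```python
# Assumes the `llm_chain` built in the previous steps.
question = "What NFL team won the Super Bowl in 1994?"  # assumed example

try:
    print(llm_chain.run(question=question))
except NameError:
    pass  # build `llm_chain` as shown above before running
```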
Well - that is wrong - the San Francisco 49ers did not win. The correct answer to the question would be "The Dallas Cowboys!".
Let’s increase the odds of the correct answer by fine-tuning on the correct answer using the PromptTemplate.
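A hedged sketch of that fine-tuning step, calling the gradientai SDK directly; the method names (`Gradient`, `get_model_adapter`, `fine_tune`) and the sample format are assumptions to verify against the current gradientai documentation:

```python
# Build a training sample from the same template, appending the
# correct answer (the Dallas Cowboys won the Super Bowl played in 1994).
template = """Question: {question}

Answer: """
samples = [
    {
        "inputs": template.format(question="What NFL team won the Super Bowl in 1994?")
        + "The Dallas Cowboys!"
    }
]

try:
    from gradientai import Gradient

    # Reads GRADIENT_ACCESS_TOKEN / GRADIENT_WORKSPACE_ID from the environment.
    gradient = Gradient()
    adapter = gradient.get_model_adapter(
        model_adapter_id="674119b5-f19e-4856-add2-767ae7f7d7ef_model_adapter"
    )
    adapter.fine_tune(samples=samples)
except Exception:
    # gradientai not installed or credentials missing.
    pass
```

After fine-tuning, re-running the chain should make the correct answer more likely.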