Banana provided serverless GPU inference for AI models, a CI/CD build pipeline, and a simple Python framework (Potassium) to serve your models.
This page covers how to use the Banana ecosystem within LangChain.
## Installation and Setup
- Install the Python package `banana-dev`:
- Get a Banana API key from the Banana.dev dashboard and set it as an environment variable (`BANANA_API_KEY`)
- Get your model's key and url slug from the model's details page.
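The setup steps above can be sketched as follows (the API key value is a placeholder, not a real key):

```shell
# Install the Banana client SDK
pip install banana-dev

# Set your Banana API key (copy the value from the Banana.dev dashboard)
export BANANA_API_KEY="your-api-key-here"
```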
## Define your Banana Template
You'll need to set up a GitHub repo for your Banana app. You can get started in 5 minutes using this guide. Alternatively, for a ready-to-go LLM example, you can check out Banana's CodeLlama-7B-Instruct-GPTQ GitHub repository. Just fork it and deploy it within Banana. Other starter repos are available here.

## Build the Banana app
To use Banana apps within LangChain, you must include the `outputs` key in the returned JSON, and the value must be a string.
For an example, see the `app.py` file in CodeLlama-7B-Instruct-GPTQ.
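A minimal sketch of the required response shape is shown below. The `handler` function name and the echo logic are illustrative placeholders, not Banana's actual API; in a real Potassium app, this logic would live inside the app's handler and the dict would be returned as the response JSON.

```python
# Hypothetical handler sketch; real model inference would replace the echo.
def handler(request_json: dict) -> dict:
    prompt = request_json.get("prompt", "")
    generated = f"echo: {prompt}"  # placeholder for the model's output
    # LangChain requires an "outputs" key whose value is a string
    return {"outputs": generated}

print(handler({"prompt": "hello"}))  # {'outputs': 'echo: hello'}
```

The key point is that `outputs` must map to a plain string; returning a list or nested object there will break the LangChain integration.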