# GPT4All

This example goes over how to use LangChain to interact with `GPT4All` models.
## Import GPT4All
## Set Up Question to pass to LLM
## Specify Model
To run locally, download a compatible ggml-formatted model. The gpt4all page has a useful Model Explorer section:

- Select a model of interest
- Download using the UI and move the `.bin` to the `local_path` (noted below)
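After downloading, point `local_path` at the file. The directory and filename below are placeholders; substitute wherever you moved the `.bin`:

```python
# Placeholder path -- replace with the location of your downloaded model.
local_path = "./models/ggml-gpt4all-l13b-snoozy.bin"
```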
This integration does not yet support streaming in chunks via the `.stream()` method. The example below uses a callback handler with `streaming=True`: