GreenNode is a global AI solutions provider and an NVIDIA Preferred Partner, delivering full-stack AI capabilities, from infrastructure to application, for enterprises across the US, MENA, and APAC regions. Operating on world-class infrastructure (LEED Gold, TIA-942, Uptime Tier III), GreenNode empowers enterprises, startups, and researchers with a comprehensive suite of AI services.

This guide provides a walkthrough on getting started with the `GreenNodeRerank` retriever. It enables you to perform document search using built-in connectors or by integrating your own data sources, leveraging GreenNode's reranking capabilities for improved relevance.
Integration details
- Provider: GreenNode Serverless AI
- Model Types: Reranking models
- Primary Use Case: Reranking search results based on semantic relevance
- Available Models: Includes BAAI/bge-reranker-v2-m3 and other high-performance reranking models
- Scoring: Returns relevance scores used to reorder document candidates based on query alignment
Setup
To access GreenNode models you'll need to create a GreenNode account, get an API key, and install the `langchain-greennode` integration package.
Credentials
Head to this page to sign up for the GreenNode AI Platform and generate an API key. Once you've done this, set the GREENNODE_API_KEY environment variable:
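A minimal setup sketch, reading the key into the `GREENNODE_API_KEY` environment variable and prompting for it if it is not already set:

```python
import getpass
import os

# Prompt for the API key only if it isn't already set in the environment
if not os.environ.get("GREENNODE_API_KEY"):
    os.environ["GREENNODE_API_KEY"] = getpass.getpass("Enter your GreenNode API key: ")
```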
Installation
This retriever lives in the `langchain-greennode` package:
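For example, in a notebook cell (drop the leading `%` to run the same command with pip from a regular shell):

```python
%pip install -qU langchain-greennode
```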
Instantiation
The `GreenNodeRerank` class can be instantiated with optional parameters for the API key and model name:
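A minimal sketch; the parameter names shown here (`model`, `top_n`) are assumptions based on common LangChain reranker integrations, and the model name is taken from the list above:

```python
from langchain_greennode import GreenNodeRerank

# If no key is passed explicitly, the GREENNODE_API_KEY environment variable is
# assumed to be used for authentication
reranker = GreenNodeRerank(
    model="BAAI/bge-reranker-v2-m3",  # reranking model served by GreenNode Serverless AI
    top_n=3,                          # assumed parameter: number of documents to keep after reranking
)
```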
Usage
Reranking Search Results
Reranking models enhance retrieval-augmented generation (RAG) workflows by refining and reordering initial search results based on semantic relevance. The example below demonstrates how to integrate GreenNodeRerank with a base retriever to improve the quality of retrieved documents.
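A sketch of this pattern, assuming `GreenNodeRerank` implements LangChain's document-compressor interface. The keyword-based `BM25Retriever` (which requires the `rank_bm25` package) and the sample documents are illustrative choices, not part of the GreenNode integration itself:

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain_community.retrievers import BM25Retriever
from langchain_core.documents import Document
from langchain_greennode import GreenNodeRerank

docs = [
    Document(page_content="GreenNode provides GPU cloud infrastructure for AI workloads."),
    Document(page_content="Reranking models reorder search results by semantic relevance."),
    Document(page_content="Pho is a Vietnamese noodle soup."),
]

# First-stage keyword retriever returning a broad set of candidates
base_retriever = BM25Retriever.from_documents(docs)
base_retriever.k = 3

# Wrap the base retriever so GreenNodeRerank rescores and reorders its results
reranker = GreenNodeRerank(model="BAAI/bge-reranker-v2-m3", top_n=2)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=reranker,
    base_retriever=base_retriever,
)

reranked_docs = compression_retriever.invoke("How do rerankers improve search quality?")
for doc in reranked_docs:
    print(doc.page_content)
```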
Direct Usage
The `GreenNodeRerank` class can be used independently to perform reranking of retrieved documents based on relevance scores. This functionality is particularly useful in scenarios where a primary retrieval step (e.g., keyword or vector search) returns a broad set of candidates, and a secondary model is needed to refine the results using more sophisticated semantic understanding. The class accepts a query and a list of candidate documents and returns a reordered list based on predicted relevance.
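A sketch of direct usage, assuming the class follows LangChain's standard `BaseDocumentCompressor` interface (the `compress_documents` method) and that it stores scores in each document's metadata under a `relevance_score` key; check the package's API reference for the exact method and key names:

```python
from langchain_core.documents import Document
from langchain_greennode import GreenNodeRerank

reranker = GreenNodeRerank(model="BAAI/bge-reranker-v2-m3", top_n=2)

# Candidate documents from a hypothetical first-stage retrieval step
candidates = [
    Document(page_content="BGE rerankers score query-document pairs for relevance."),
    Document(page_content="GreenNode operates Tier III data centers in APAC."),
    Document(page_content="Bananas are a good source of potassium."),
]

# Returns the top_n documents reordered by predicted relevance to the query
reranked = reranker.compress_documents(candidates, query="How are documents scored for relevance?")
for doc in reranked:
    print(doc.metadata.get("relevance_score"), doc.page_content)
```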