Observability

I can’t create API keys or manage users in the UI, what’s wrong?

  • You have likely deployed LangSmith without setting up SSO. LangSmith requires SSO to manage users and API keys. You can find more information on setting up SSO in the configuration section.

How does load balancing/ingress work?

  • You will need to expose the frontend container/service to your applications/users. This will handle routing to all downstream services.
  • You will need to terminate SSL at the ingress level. We recommend a managed load balancer such as AWS ALB or GCP Load Balancer, or an ingress controller such as NGINX.

How can we authenticate to the application?

  • Currently, our self-hosted solution supports SSO with OAuth 2.0 and OIDC as an authentication (authn) solution. Note that we also offer a no-auth mode, but we highly recommend setting up OAuth before moving into production. You can find more information on setting up SSO in the configuration section.

Can I use external storage services?

  • You can configure LangSmith to use external versions of all storage services. In a production setting, we strongly recommend using external storage services. Check out the configuration section for more information.

Does my application need egress to function properly?

Our deployment only needs egress for a few things (most of which can reside within your VPC):
  • Fetching images (If mirroring your images, this may not be needed)
  • Talking to any LLM endpoints
  • Talking to any external storage services you may have configured
  • Fetching OAuth information
  • Subscription Metrics and Operational Metadata (if not running in offline mode)
    • Requires egress to https://beacon.langchain.com
    • See Egress for more information
Your VPC can set up rules to limit any other access. Note: the X-Organization-Id and X-Tenant-Id headers must be allowed through to the backend service; they are used to determine which organization and workspace (previously called “tenant”) the request is for.
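As a hedged illustration of the header requirement above (plain Python with hypothetical helper names, not part of LangSmith itself), a proxy layer in front of the backend might filter request headers against an allow-list; the sketch below shows why X-Organization-Id and X-Tenant-Id must be on that list:

```python
# Hypothetical sketch of an ingress-style header allow-list. The names
# here are illustrative; only the two X-* headers come from the docs.
ALLOWED_HEADERS = {
    "authorization",
    "content-type",
    # Required by LangSmith to resolve the organization and workspace
    # (previously "tenant") a request belongs to:
    "x-organization-id",
    "x-tenant-id",
}

def filter_headers(headers: dict) -> dict:
    """Drop any header not on the allow-list before proxying upstream."""
    return {k: v for k, v in headers.items() if k.lower() in ALLOWED_HEADERS}

incoming = {
    "Authorization": "Bearer <token>",
    "X-Organization-Id": "org-123",
    "X-Tenant-Id": "ws-456",
    "X-Internal-Debug": "1",  # stripped: not on the allow-list
}
print(sorted(filter_headers(incoming)))
```

If your ingress strips unknown headers by default, the practical takeaway is simply to add these two headers to its pass-through configuration.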

What are the resource requirements for the application?

  • In Kubernetes, we recommend a minimum Helm configuration, which can be found here. For Docker, we recommend a minimum of 16GB of RAM and 4 CPUs.
  • For Postgres, we recommend a minimum of 8GB of RAM and 2 CPUs.
  • For Redis, we recommend 4GB of RAM and 2 CPUs.
  • For ClickHouse, we recommend 32GB of RAM and 8 CPUs.

SAML SSO FAQs

How do I change a SAML SSO user’s email address?

Some identity providers retain the original User ID through an email change while others do not, so we recommend that you follow these steps to avoid duplicate users in LangSmith:
  1. Remove the user from the organization (see here)
  2. Change their email address in the IdP
  3. Have them log in to LangSmith again via SAML SSO; this will trigger the usual JIT provisioning flow with their new email address

How do I fix “405 method not allowed”?

Ensure you’re using the correct ACS URL: https://auth.langchain.com/auth/v1/sso/saml/acs

SCIM FAQs

Can I use SCIM without SAML SSO?

  • Cloud: No, SAML SSO is required for SCIM in cloud deployments
  • Self-hosted: Yes, SCIM works with OAuth with Client Secret authentication mode

What happens if I have both JIT provisioning and SCIM enabled?

JIT provisioning and SCIM can conflict with each other. We recommend disabling JIT provisioning before enabling SCIM to ensure consistent user provisioning behavior.

How do I change a user’s role or workspace access?

Update the user’s group membership in your IdP. The changes will be synchronized to LangSmith according to the role precedence rules.

What happens when a user is removed from all groups?

The user will be deprovisioned from your LangSmith organization according to your IdP’s deprovisioning settings.

Can I use custom group names?

Yes. If your identity provider supports syncing alternate fields to the displayName group attribute, you can use another attribute (such as description) as the displayName in LangSmith while retaining full control of the group name in the identity provider. Otherwise, groups must follow the naming convention described in the Group Naming Convention section to map properly to LangSmith roles and workspaces.

Why is my Okta integration not working?

See Okta’s troubleshooting guide here: https://help.okta.com/en-us/content/topics/users-groups-profiles/usgp-group-push-troubleshoot.htm.

Deployment

Do I need to use LangChain to use LangGraph? What’s the difference?

No. LangGraph is an orchestration framework for complex agentic systems; it is more low-level and controllable than LangChain agents. LangChain provides a standard interface for interacting with models and other components, which is useful for straightforward chains and retrieval flows.

How is LangGraph different from other agent frameworks?

Other agentic frameworks can work for simple, generic tasks but fall short for complex tasks bespoke to a company’s needs. LangGraph provides a more expressive framework to handle companies’ unique tasks without restricting users to a single black-box cognitive architecture.

Does LangGraph impact the performance of my app?

LangGraph will not add any overhead to your code and is specifically designed with streaming workflows in mind.

Is LangGraph open source? Is it free?

Yes. LangGraph is an MIT-licensed open-source library and is free to use.

How are LangGraph and LangGraph Platform different?

LangGraph is a stateful orchestration framework that brings added control to agent workflows. LangGraph Platform is a service for deploying and scaling LangGraph applications, with an opinionated API for building agent UXs, plus an integrated developer studio.
| Features | LangGraph (open source) | LangGraph Platform |
| --- | --- | --- |
| Description | Stateful orchestration framework for agentic applications | Scalable infrastructure for deploying LangGraph applications |
| SDKs | Python and JavaScript | Python and JavaScript |
| HTTP APIs | None | Yes - useful for retrieving & updating state or long-term memory, or creating a configurable assistant |
| Streaming | Basic | Dedicated mode for token-by-token messages |
| Checkpointer | Community contributed | Supported out-of-the-box |
| Persistence Layer | Self-managed | Managed Postgres with efficient storage |
| Deployment | Self-managed | Cloud; Free self-hosted; Enterprise (paid self-hosted) |
| Scalability | Self-managed | Auto-scaling of task queues and servers |
| Fault-tolerance | Self-managed | Automated retries |
| Concurrency Control | Simple threading | Supports double-texting |
| Scheduling | None | Cron scheduling |
| Monitoring | None | Integrated with LangSmith for observability |
| IDE integration | Studio | Studio |

Is LangGraph Platform open source?

No. LangGraph Platform is proprietary software. There is a free, self-hosted version of LangGraph Platform with access to basic features. The Cloud deployment option and the Self-Hosted deployment options are paid services. Contact our sales team to learn more. For more information, see our LangGraph Platform pricing page.

Does LangGraph work with LLMs that don’t support tool calling?

Yes! You can use LangGraph with any LLM. The main reason we use LLMs that support tool calling is that this is often the most convenient way to have the LLM decide what to do. If your LLM does not support tool calling, you can still use it; you just need to write a bit of logic to convert the raw LLM string response into a decision about what to do.
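As a minimal sketch of that conversion logic (plain Python; the "ACTION:" prompt format is an assumption, not a LangGraph API), you might ask the model to end its response with a line naming the next step and then parse it for routing, e.g. in a conditional edge:

```python
# Hypothetical sketch: map a raw LLM text completion to a routing decision.
# Assumes the prompt instructs the model to answer with a line such as
# "ACTION: search" or "ACTION: finish".
import re

def parse_decision(raw_response: str, default: str = "finish") -> str:
    """Extract the chosen action from a raw LLM string response."""
    match = re.search(r"ACTION:\s*(\w+)", raw_response, re.IGNORECASE)
    return match.group(1).lower() if match else default

print(parse_decision("I should look this up.\nACTION: search"))  # search
print(parse_decision("The answer is 42."))                       # finish
```

Falling back to a default action keeps the graph moving even when the model ignores the requested format.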

Does LangGraph work with OSS LLMs?

Yes! LangGraph is completely agnostic to which LLMs are used under the hood. The main reason we use closed LLMs in most of the tutorials is that they seamlessly support tool calling, while OSS LLMs often don’t. But tool calling is not necessary (see this section), so you can absolutely use LangGraph with OSS LLMs.

Can I use Studio without logging in to LangSmith?

Yes! You can use the development version of LangGraph Server to run the backend locally. It will connect to the Studio frontend hosted as part of LangSmith. If you set the environment variable LANGSMITH_TRACING=false, no traces will be sent to LangSmith.

What does “nodes executed” mean for LangGraph Platform usage?

Nodes Executed is the aggregate number of nodes in a LangGraph application that are called and complete successfully during an invocation of the application. A node that is not called during execution, or that ends in an error state, is not counted. If a node is called and completes successfully multiple times, each occurrence is counted.
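To illustrate the counting rule (a hypothetical run log, not an actual billing API), it can be sketched as:

```python
# Hypothetical run log for a single invocation: each entry records a node
# call and whether it completed successfully. "Nodes executed" counts only
# successful completions, including repeats of the same node.
run_log = [
    ("plan", "success"),
    ("search", "success"),
    ("search", "success"),   # called twice: counted twice
    ("summarize", "error"),  # errored: not counted
    # a "fallback" node never called at all would also not be counted
]

nodes_executed = sum(1 for _node, status in run_log if status == "success")
print(nodes_executed)  # 3
```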