Before you ship
You built a working chatbot. When building your own application, check the following before releasing it to end users.

Your config is actually being used
Errors do not crash the application
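For both of these checks, the key pattern is a hardcoded fallback: resolve the config inside a try/except so a config-service outage degrades to known-good defaults instead of crashing the chatbot. A minimal sketch, where `fetch_fn` is a hypothetical stand-in for whatever call retrieves your AI Config from LaunchDarkly:

```python
# FALLBACK_CONFIG is a hardcoded, known-good default used when the
# live config cannot be fetched. The values here are illustrative.
FALLBACK_CONFIG = {
    "model": "gpt-4o-mini",
    "temperature": 0,
    "messages": [{"role": "system", "content": "You are a helpful assistant."}],
}

def resolve_config(fetch_fn):
    """Return the live AI Config, or the hardcoded fallback on any error."""
    try:
        return fetch_fn()
    except Exception:
        # Never let a config-service outage take down the application.
        return FALLBACK_CONFIG
```

Logging the exception before falling back (omitted here for brevity) is what lets you notice that the fallback, not the live config, is being used.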
You have a rollout plan
Do not roll out changes to all users at once:
- Test with your internal team at 5%
- Roll out to beta users at 25%
- Monitor error rates and token usage
- Gradually increase to 100%
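LaunchDarkly evaluates these percentages for you in the dashboard, but the idea behind a gradual rollout is deterministic bucketing: hash each user key to a stable number in [0, 100) and compare it against the current rollout percentage. A rough illustration of the concept (not the SDK's actual algorithm):

```python
import hashlib

def rollout_bucket(user_key: str) -> float:
    """Map a user key to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(user_key.encode()).hexdigest()
    # First 32 bits of the hash, scaled to [0, 100).
    return int(digest[:8], 16) / 2**32 * 100

def in_rollout(user_key: str, percentage: float) -> bool:
    """True if this user falls inside the current rollout percentage."""
    return rollout_bucket(user_key) < percentage
```

Because the bucket is derived from the user key alone, a user who was in the 5% group stays in as you widen to 25% and then 100%; widening the rollout never flips users back to the old behavior.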
Online evals are considered
You will not know if your AI is giving good answers unless you measure it. Consider adding online evaluations once you are live. Read "when to add online evals" for guidance.

When AI Configs make sense
AI Configs are not the right fit for every project. Here are some common use cases.

You’re experimenting with prompts
If you are updating prompts frequently, hardcoding them can slow iteration. AI Configs let you test different prompts without redeploying. Example: You run a customer support chatbot. You want to test whether a formal tone or casual tone works better. With AI Configs, you create two variations and switch between them from the dashboard.

You need different AI behavior for different users
Free users get faster, lower-cost responses. Paid users get slower, higher-quality responses. Example: A SaaS application with tiered pricing.
- The free tier uses `gpt-4o-mini` with temperature 0.
- The premium tier uses `claude-3-5-sonnet` with temperature 0.
- You target based on the `tier` attribute in the user context.
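The targeting itself happens on LaunchDarkly’s side, keyed on the context’s `tier` attribute; your code just passes the context and receives the matching variation. As a purely local illustration of the rule above (the dictionary is a stand-in, not the SDK):

```python
# Illustrative stand-in for LaunchDarkly's server-side targeting rules:
# the real decision is configured in the dashboard and returned to your
# application as a variation. Model names match the example above.
VARIATIONS = {
    "free":    {"model": "gpt-4o-mini",       "temperature": 0},
    "premium": {"model": "claude-3-5-sonnet", "temperature": 0},
}

def variation_for(context: dict) -> dict:
    """Pick a variation based on the context's `tier` attribute."""
    tier = context.get("tier", "free")
    return VARIATIONS.get(tier, VARIATIONS["free"])
```

The benefit of doing this in LaunchDarkly instead of in code is that the rule can change (new tiers, new models) without a redeploy.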
You want to switch providers without code changes
Your primary provider is unavailable. You need to switch to a backup immediately. Example: Anthropic has an outage. You log in to LaunchDarkly, change the default variation from Anthropic to OpenAI, and save. All requests now use OpenAI, with no redeployment needed.

You’re running cost optimization experiments
You think a lower-cost model might perform well enough for most queries. You want to test it with real traffic. Example: You create one variation using `claude-3-haiku` and another using `claude-3-5-sonnet`. You roll out the lower-cost model to 20% of users and compare quality metrics.
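Both provider switching and cost experiments only work if your code treats the provider as data rather than a hardcoded import. One way to sketch that is a small dispatch table keyed on the provider name from the config; `call_openai` and `call_anthropic` below are hypothetical placeholders for the real SDK calls:

```python
def call_openai(model, messages):
    # Placeholder for: openai_client.chat.completions.create(model=model, messages=messages)
    return f"openai:{model}"

def call_anthropic(model, messages):
    # Placeholder for: anthropic_client.messages.create(model=model, messages=messages, ...)
    return f"anthropic:{model}"

PROVIDERS = {"openai": call_openai, "anthropic": call_anthropic}

def complete(config, messages):
    """Route the request to whichever provider the AI Config names."""
    provider = PROVIDERS[config["provider"]]
    return provider(config["model"], messages)
```

With this shape, flipping the default variation in the dashboard changes `config["provider"]` on the next evaluation, and every request follows it with no code change.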
When AI Configs might be overkill
- One-off batch jobs: If you’re processing 10,000 documents once, hardcoding the configuration may be sufficient.
- Single model, no experimentation: If you’re using one model and do not plan to change it, AI Configs may add unnecessary complexity.
Completion mode versus agent mode
This tutorial uses completion mode, with messages in array format. If you’re using LangGraph or CrewAI, use agent mode instead. If you’re building:
- Simple chatbots, content generation, or single-turn responses: Use completion mode
- Complex multi-step workflows with LangGraph, CrewAI, or custom agents: Use agent mode
If your code calls `client.chat.completions.create()` or a similar method, completion mode is probably the right choice.
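The array format in completion mode matches the shape that `client.chat.completions.create()` expects for `messages`, so a variation can flow straight into the provider call once any template variables are filled in. A sketch, with the `{{ product }}` placeholder and the `render` helper as illustrative assumptions rather than SDK features:

```python
# Messages in array format, as a completion-mode variation might store them.
messages = [
    {"role": "system", "content": "You are a support assistant for {{ product }}."},
    {"role": "user", "content": "How do I reset my password?"},
]

def render(messages, variables):
    """Fill {{ name }} placeholders; a minimal stand-in for real templating."""
    out = []
    for m in messages:
        content = m["content"]
        for name, value in variables.items():
            content = content.replace("{{ " + name + " }}", value)
        out.append({"role": m["role"], "content": content})
    return out

# rendered = render(messages, {"product": "Acme"})
# client.chat.completions.create(model="gpt-4o-mini", messages=rendered)
```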
Conclusion
In this guide, you learned how to:
- Build an AI chatbot with support for multiple providers
- Create and manage AI variations in LaunchDarkly
- Use contexts for targeted AI behavior
- Monitor AI performance and usage