

Before you ship

You built a working chatbot. When building your own application, check the following before releasing it to end users.

Your config is actually being used

    config, tracker = ai_client.completion_config(key, context, fallback)

    # Log this somewhere visible
    print(f"Using model: {config.model.name}")
    print(f"Provider: {config.provider.name}")
If you always see the fallback model, your LaunchDarkly connection is not working.
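One way to surface this check in code is to compare the evaluated config against the fallback you passed in. This is a minimal, self-contained sketch: the `ModelConfig` and `AIConfig` classes below are stand-ins for the objects your AI SDK returns, not the real LaunchDarkly types.

```python
# Stand-in types for illustration; your SDK returns its own config objects.
from dataclasses import dataclass

@dataclass
class ModelConfig:
    name: str

@dataclass
class AIConfig:
    model: ModelConfig

def is_fallback(config: AIConfig, fallback: AIConfig) -> bool:
    """Return True when the evaluated config matches the fallback,
    which usually means LaunchDarkly was unreachable."""
    return config.model.name == fallback.model.name

fallback = AIConfig(model=ModelConfig(name="gpt-4o-mini"))
served = AIConfig(model=ModelConfig(name="claude-3-5-sonnet"))
print(is_fallback(served, fallback))    # False: a real variation was served
print(is_fallback(fallback, fallback))  # True: investigate your connection
```

Comparing only the model name is a heuristic; if a variation intentionally uses the same model as your fallback, compare more fields.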

Errors do not crash the application

    try:
        response = provider.generate(config, user_message)
    except Exception as e:
        logger.error(f"AI generation failed: {e}")
        response = "Sorry, I'm having trouble right now. Try again?"
AI providers can become unavailable. Your application should continue to function.
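Transient provider failures often succeed on a retry, so you may want backoff before giving up and returning the canned reply. This is an illustrative sketch, not a LaunchDarkly API: `generate` is any callable that raises on failure.

```python
import time

def generate_with_retry(generate, attempts=3, base_delay=0.5):
    """Retry a flaky generation call with exponential backoff,
    then fall back to a canned reply so the app keeps working."""
    for attempt in range(attempts):
        try:
            return generate()
        except Exception:
            if attempt == attempts - 1:
                return "Sorry, I'm having trouble right now. Try again?"
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Simulate a provider that fails once, then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("provider timeout")
    return "Here's your answer."

print(generate_with_retry(flaky, base_delay=0))  # succeeds on the retry
```

In production you would also log each failure (as in the snippet above) and cap total retry time so users are not left waiting.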

You have a rollout plan

Do not roll out changes to all users at once:
  1. Test with your internal team at 5%
  2. Roll out to beta users at 25%
  3. Monitor error rates and token usage
  4. Gradually increase to 100%
LaunchDarkly makes this easy with percentage rollouts on the Targeting tab.
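LaunchDarkly evaluates percentage rollouts for you, but the underlying idea is worth understanding: each user is hashed into a stable bucket, so ramping from 5% to 25% to 100% only ever adds users and never flips existing ones out. The hashing scheme below is invented for illustration and is not LaunchDarkly's actual algorithm.

```python
import hashlib

def bucket(user_key: str, salt: str = "my-flag") -> float:
    """Map a user key to a stable bucket in [0, 100)."""
    digest = hashlib.sha1(f"{salt}.{user_key}".encode()).hexdigest()
    return int(digest[:15], 16) / float(16 ** 15) * 100

def in_rollout(user_key: str, percent: float) -> bool:
    return bucket(user_key) < percent

# The same user always gets the same answer for a given percentage.
print(in_rollout("user-123", 100))  # True: everyone is in at 100%
```

Because the bucket depends only on the key and the salt, a user who entered the rollout at 5% stays in it at 25% and 100%.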

Online evals are considered

You will not know if your AI is giving good answers unless you measure it. Consider adding online evaluations once you are live. Read when to add online evals for guidance.

When AI Configs make sense

AI Configs are not the right fit for every project. Here are some common cases where they are:

You’re experimenting with prompts

If you are updating prompts frequently, hardcoding them can slow iteration. AI Configs let you test different prompts without redeploying. Example: You run a customer support chatbot. You want to test whether a formal tone or casual tone works better. With AI Configs, you create two variations and switch between them from the dashboard.
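As a rough sketch, the two tone variations might look like this. In practice the prompt text lives in LaunchDarkly as AI Config variations, not in your code; the strings here are illustrative.

```python
# Illustrative prompt variations; in production these are managed in the
# LaunchDarkly dashboard and served by the AI Config, not hardcoded.
VARIATIONS = {
    "formal": "You are a customer support assistant. Respond precisely and professionally.",
    "casual": "You're a friendly support helper. Keep replies light and conversational.",
}

def system_prompt(variation_key: str) -> str:
    return VARIATIONS[variation_key]

print(system_prompt("casual"))
```

Switching tones then means serving a different variation from the dashboard, with no redeploy.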

You need different AI behavior for different users

Free users get faster, lower-cost responses. Paid users get slower, higher-quality responses. Example: A SaaS application with tiered pricing:
  1. The free tier uses gpt-4o-mini with temperature 0.
  2. The premium tier uses claude-3-5-sonnet with temperature 0.
  3. You target based on the tier attribute in the user context.
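A simplified stand-in for that targeting looks like the sketch below. In production, LaunchDarkly evaluates the targeting rules server-side; this only mirrors the outcome, and the context shape is illustrative.

```python
# Simplified stand-in for tier-based targeting; LaunchDarkly does this
# evaluation for you based on rules configured in the dashboard.
TIER_VARIATIONS = {
    "free": {"model": "gpt-4o-mini"},
    "premium": {"model": "claude-3-5-sonnet"},
}

def variation_for(context: dict) -> dict:
    # Unknown or missing tiers fall back to the free-tier variation.
    return TIER_VARIATIONS.get(context.get("tier"), TIER_VARIATIONS["free"])

print(variation_for({"key": "user-1", "tier": "premium"})["model"])
```

The benefit of letting LaunchDarkly do this instead is that tier rules change in the dashboard without a deploy.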

You want to switch providers without code changes

Your primary provider is unavailable. You need to switch to a backup immediately. Example: Anthropic has an outage. You log in to LaunchDarkly, change the default variation from Anthropic to OpenAI, and save. All requests now use OpenAI, with no redeployment needed.
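For this to work without a redeploy, your code needs to dispatch on the provider name the config reports rather than hardcoding one client. A hedged sketch, with the provider calls stubbed out (in a real app each handler would call that provider's SDK):

```python
# Stubbed provider handlers; replace with real SDK calls in your app.
def call_openai(model, message):
    return f"[openai:{model}] reply"

def call_anthropic(model, message):
    return f"[anthropic:{model}] reply"

PROVIDERS = {"openai": call_openai, "anthropic": call_anthropic}

def generate(provider_name, model, message):
    """Route a request to whichever provider the AI Config names."""
    handler = PROVIDERS.get(provider_name.lower())
    if handler is None:
        raise ValueError(f"No client registered for provider {provider_name!r}")
    return handler(model, message)

print(generate("OpenAI", "gpt-4o-mini", "Hi"))  # routed by name, case-insensitive
```

When the default variation flips from Anthropic to OpenAI in the dashboard, the next evaluation returns the new provider name and the dispatch table does the rest.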

You’re running cost optimization experiments

You think a lower-cost model might perform well enough for most queries. You want to test it with real traffic. Example: You create one variation using claude-3-haiku and another using claude-3-5-sonnet. You roll out the lower-cost model to 20% of users and compare quality metrics.
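Comparing the two variations means aggregating whatever your monitoring pipeline logs per request. The record shape and metrics below are invented for illustration; substitute your own fields.

```python
# Illustrative per-request records; use whatever your pipeline actually emits.
from collections import defaultdict

def summarize(records):
    """Aggregate average token usage and satisfaction rate per variation."""
    totals = defaultdict(lambda: {"tokens": 0, "good": 0, "n": 0})
    for r in records:
        t = totals[r["variation"]]
        t["tokens"] += r["tokens"]
        t["good"] += int(r["thumbs_up"])
        t["n"] += 1
    return {
        v: {"avg_tokens": t["tokens"] / t["n"], "satisfaction": t["good"] / t["n"]}
        for v, t in totals.items()
    }

records = [
    {"variation": "claude-3-haiku", "tokens": 120, "thumbs_up": True},
    {"variation": "claude-3-haiku", "tokens": 80, "thumbs_up": False},
    {"variation": "claude-3-5-sonnet", "tokens": 300, "thumbs_up": True},
]
print(summarize(records))
```

If the lower-cost model's satisfaction holds up at 20% of traffic, you can ramp it further from the Targeting tab.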

When AI Configs might be overkill

  • One-off batch jobs: If you’re processing 10,000 documents once, hardcoding the configuration may be sufficient.
  • Single model, no experimentation: If you’re using one model and do not plan to change it, AI Configs may add unnecessary complexity.

Completion mode versus agent mode

This tutorial uses completion mode, with messages in array format. If you’re using LangGraph or CrewAI, use agent mode instead. If you’re building:
  • Simple chatbots, content generation, or single-turn responses: Use completion mode
  • Complex multi-step workflows with LangGraph, CrewAI, or custom agents: Use agent mode
The choice depends on your architecture. If you’re calling client.chat.completions.create() or a similar method, completion mode is probably the right choice.
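For reference, the messages-array format that completion mode works with is the shape most chat-completion APIs accept; the content strings below are illustrative.

```python
# A single-turn messages array in the shape most chat-completion APIs accept.
messages = [
    {"role": "system", "content": "You are a helpful support assistant."},
    {"role": "user", "content": "How do I reset my password?"},
]

# A completion-mode call then looks roughly like:
#   response = client.chat.completions.create(model=config.model.name, messages=messages)
print([m["role"] for m in messages])  # ['system', 'user']
```

If your workflow instead involves multiple tool-calling steps and intermediate state, that is the signal to look at agent mode.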

Conclusion

In this guide, you learned how to:
  • Build an AI chatbot with support for multiple providers
  • Create and manage AI variations in LaunchDarkly
  • Use contexts for targeted AI behavior
  • Monitor AI performance and usage
LaunchDarkly AI Configs let you manage AI behavior across multiple providers without code changes, so you can iterate more quickly and deploy more safely.