6 changes: 5 additions & 1 deletion .github/workflows/ci.yml
@@ -31,4 +31,8 @@ jobs:
uses: abatilo/actions-poetry@7b6d33e44b4f08d7021a1dee3c044e9c253d6439

- name: Install dependencies
run: poetry install --all-extras
run: |
for dir in examples/*/; do
echo "Installing $dir"
poetry -C "$dir" install
done
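The CI loop above runs `poetry -C "$dir" install` against every example directory that carries its own Poetry project. A minimal Python sketch of that discovery step (the helper name is hypothetical, not part of the workflow):

```python
from pathlib import Path

def example_projects(root="examples"):
    """Hypothetical helper mirroring the CI loop: collect every
    examples/*/ directory that has its own pyproject.toml, i.e.
    the directories `poetry -C "$dir" install` iterates over."""
    return sorted(
        p for p in Path(root).iterdir()
        if p.is_dir() and (p / "pyproject.toml").exists()
    )
```

Directories without a `pyproject.toml` are skipped, so stray folders under `examples/` do not break the install step.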
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
.python-version
poetry.lock
__pycache__/
.env
105 changes: 15 additions & 90 deletions README.md
@@ -8,106 +8,31 @@ This demo requires Python 3.10 or higher.

## Build Instructions

This repository includes examples for `OpenAI`, `Bedrock`, and `LangChain` for multi-provider support. Depending on your preferred provider, you may have to take some additional steps.
This repository includes examples for `OpenAI`, `Bedrock`, `Gemini`, `LangChain`, `LangGraph`, `Judge`, and `Observability`. Depending on your preferred provider, you may have to take some additional steps.

### General setup

1. [Create an AI Config](https://launchdarkly.com/docs/home/ai-configs/create) using the key specified in each example, or copy the key of an existing AI Config in your LaunchDarkly project that you want to evaluate.
1. Set the environment variable `LAUNCHDARKLY_SDK_KEY` to your LaunchDarkly SDK key and `LAUNCHDARKLY_AI_CONFIG_KEY` to the AI Config key; otherwise, an AI Config of `sample-ai-config` or `sample-ai-agent-config` will be assumed for most examples.

```bash
export LAUNCHDARKLY_SDK_KEY="1234567890abcdef"
export LAUNCHDARKLY_AI_CONFIG_KEY="sample-ai-config"
```

1. Ensure you have [Poetry](https://python-poetry.org/) installed.

### Provider-Specific Setup

#### OpenAI setup (single provider)

1. Install the required dependencies with `poetry install -E openai` or `poetry install --all-extras`.
1. Set the environment variable `OPENAI_API_KEY` to your OpenAI key.
1. On the command line, run `poetry run openai-example`.

#### Chat with observability (observability plugin example)

This example demonstrates how to use the LaunchDarkly observability SDK plugin to monitor AI chat operations. For more details, see the [Python SDK observability reference](https://launchdarkly.com/docs/sdk/observability/python).
1. Create a `.env` file in the repository root with at least your LaunchDarkly SDK key:

1. Install the required dependencies with `poetry install -E observability` or `poetry install --all-extras`.
1. Set the environment variable for your AI provider (e.g., `OPENAI_API_KEY`), or configure your AI Config to use a different provider.
1. Optionally, set service identification:
```bash
export SERVICE_NAME="my-ai-service"
export SERVICE_VERSION="1.0.0"
```
1. On the command line, run `poetry run chat-observability-example`.

The observability plugin automatically captures and sends data to LaunchDarkly:
- **Observability tab**: SDK operations, flag evaluations, error monitoring, logging, and distributed tracing
- **AI Config Monitoring tab**: Token usage, duration, success/error rates, and custom metadata for filtering and analysis

View your data in the LaunchDarkly dashboard under **Observability** tabs.

#### Bedrock setup (single provider)

1. Install the required dependencies with `poetry install -E bedrock` or `poetry install --all-extras`.
1. Ensure the required AWS credentials can be [auto-detected by the `boto3` library](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html). Examples might include environment variables, role providers, or shared credential files.
1. On the command line, run `poetry run bedrock-example`.

#### Gemini setup (single provider)

1. Install the required dependencies with `poetry install -E gemini` or `poetry install --all-extras`.
1. Set the environment variable `GOOGLE_API_KEY` to your Google API key.
1. On the command line, run `poetry run gemini-example`.

#### LangChain setup (multiple providers)

This example uses `OpenAI`, `Bedrock`, and `Gemini` LangChain provider packages. You can add additional LangChain providers using the `poetry add` command.

1. Install all dependencies with `poetry install -E langchain` or `poetry install --all-extras`.
1. Set up API keys for the providers you want to use.
1. On the command line, run `poetry run langchain-example`.

#### LangGraph setup (multiple providers, single agent)

1. Install all dependencies with `poetry install -E langgraph` or `poetry install --all-extras`.
1. Set up API keys for the providers you want to use.
1. Optionally set this environment variable to use a different agent config:
```bash
export LAUNCHDARKLY_AGENT_CONFIG_KEY="sample-ai-agent-config"
LAUNCHDARKLY_SDK_KEY=your-launchdarkly-sdk-key
```
1. On the command line, run `poetry run langgraph-agent-example`.

#### LangGraph setup (multiple providers, multiple agents)

1. Install all dependencies with `poetry install -E langgraph` or `poetry install --all-extras`.
1. Set up API keys for the providers you want to use.
1. [Create an AI Config (Agent-based)](https://launchdarkly.com/docs/home/ai-configs/agents) using the keys below. Write a goal for each config and enable it with targeting rules.
1. Optionally set these environment variables to use different agent configs:
```bash
export LAUNCHDARKLY_ANALYZER_CONFIG_KEY="code-review-analyzer"
export LAUNCHDARKLY_DOCUMENTATION_CONFIG_KEY="code-review-documentation"
```
1. On the command line, run `poetry run langgraph-multi-agent-example`.

#### Judge setup (judge evaluation)

These examples demonstrate how to use LaunchDarkly's judge functionality to evaluate AI responses for accuracy, relevance, and other metrics.

1. Install dependencies with `poetry install -E langchain` or `poetry install --all-extras`.
1. Set up API keys for the provider you want to use (OpenAI, Bedrock, or Gemini).
1. [Create an AI Config](https://launchdarkly.com/docs/home/ai-configs/create) for chat functionality.
1. [Create a Judge Config](https://launchdarkly.com/docs/home/ai-configs/judges) for evaluation.
1. Set the required environment variables:
```bash
export LAUNCHDARKLY_SDK_KEY="your-sdk-key"
export LAUNCHDARKLY_AI_CONFIG_KEY="sample-ai-config"
export LAUNCHDARKLY_AI_JUDGE_KEY="sample-ai-judge-accuracy"
```
Note: The default values are `sample-ai-config` for AI Config and `sample-ai-judge-accuracy` for Judge Config if not specified.
Each example README describes the full set of environment variables needed. The `.env` file is loaded automatically when running any example.
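The automatic `.env` loading mentioned above is done by `python-dotenv`'s `load_dotenv()`. A minimal stdlib-only sketch of the behavior the examples rely on (this is an illustration, not the library's actual implementation):

```python
import os

def load_env_file(path=".env"):
    """Minimal sketch of what load_dotenv() does for these examples:
    read KEY=VALUE lines and export them, without overwriting
    variables already set in the real environment."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Because values already present in the environment win, an exported `LAUNCHDARKLY_SDK_KEY` takes precedence over the `.env` file.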

##### Available judge examples:
### Examples

- **Chat with automatic judge evaluation** (`poetry run chat-judge-example`): Uses the chat functionality which automatically evaluates responses with any judges defined in the AI config.
- **Direct judge evaluation** (`poetry run direct-judge-example`): Evaluates specific input/output pairs using a judge configuration directly.
| Example | Description | README |
| --- | --- | --- |
| **OpenAI** | Single provider using OpenAI | [examples/openai](examples/openai/README.md) |
| **Bedrock** | Single provider using AWS Bedrock | [examples/bedrock](examples/bedrock/README.md) |
| **Gemini** | Single provider using Google Gemini | [examples/gemini](examples/gemini/README.md) |
| **LangChain** | Multiple providers via LangChain | [examples/langchain](examples/langchain/README.md) |
| **LangGraph Agent** | Single agent using LangGraph | [examples/langgraph_agent](examples/langgraph_agent/README.md) |
| **LangGraph Multi-Agent** | Multiple agents using LangGraph | [examples/langgraph_multi_agent](examples/langgraph_multi_agent/README.md) |
| **Judge** | Judge evaluation of AI responses | [examples/judge](examples/judge/README.md) |
| **Chat with Observability** | Observability plugin for AI chat monitoring | [examples/chat_observability](examples/chat_observability/README.md) |
43 changes: 43 additions & 0 deletions examples/bedrock/README.md
@@ -0,0 +1,43 @@
# Bedrock Example (Single Provider)

This example demonstrates how to use LaunchDarkly's AI Config with the AWS Bedrock provider.

## Prerequisites

- Python 3.10 or higher
- [Poetry](https://python-poetry.org/) installed
- A LaunchDarkly account with an [AI Config](https://launchdarkly.com/docs/home/ai-configs/create) created
- AWS credentials configured for Bedrock access

## Setup

1. Create a `.env` file in this directory with the following variables:

```
LAUNCHDARKLY_SDK_KEY=your-launchdarkly-sdk-key
LAUNCHDARKLY_AI_CONFIG_KEY=sample-ai-config
```

> `LAUNCHDARKLY_AI_CONFIG_KEY` defaults to `sample-ai-config` if not set.

2. Ensure your AWS credentials can be [auto-detected by the `boto3` library](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html). You can set them in your `.env` file:

```
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=us-east-1
```

Other options include role providers or shared credential files.

3. Install the required dependencies:

```bash
poetry install
```

## Run

```bash
poetry run bedrock-example
```
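The diff below changes the example to read its Bedrock region from the environment instead of hardcoding it. The fallback can be sketched as (the helper name is hypothetical):

```python
import os

def bedrock_region(default="us-east-1"):
    """Hypothetical helper showing the region fallback the example
    applies when creating its bedrock-runtime client: prefer
    AWS_DEFAULT_REGION from the environment (or .env), else the
    stated default."""
    return os.environ.get("AWS_DEFAULT_REGION", default)
```

Setting `AWS_DEFAULT_REGION` in the `.env` file shown above therefore controls which Bedrock region the client talks to.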
@@ -1,11 +1,14 @@
import os
from dotenv import load_dotenv
import ldclient
from ldclient import Context
from ldclient.config import Config
from ldai import LDAIClient
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
load_dotenv()

client = boto3.client("bedrock-runtime", region_name=os.getenv('AWS_DEFAULT_REGION', 'us-east-1'))

# Set sdk_key to your LaunchDarkly SDK key.
sdk_key = os.getenv('LAUNCHDARKLY_SDK_KEY')
21 changes: 21 additions & 0 deletions examples/bedrock/pyproject.toml
@@ -0,0 +1,21 @@
[tool.poetry]
name = "hello-python-ai-bedrock"
version = "0.1.0"
description = "Hello LaunchDarkly for Python AI - Bedrock"
authors = ["LaunchDarkly <dev@launchdarkly.com>"]
license = "Apache-2.0"
readme = "README.md"
packages = [{include = "bedrock_example.py"}]

[tool.poetry.scripts]
bedrock-example = "bedrock_example:main"

[tool.poetry.dependencies]
python = "^3.10"
python-dotenv = ">=1.0.0"
launchdarkly-server-sdk-ai = "^0.17.0"
boto3 = ">=0.2.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
48 changes: 48 additions & 0 deletions examples/chat_observability/README.md
@@ -0,0 +1,48 @@
# Chat with Observability (Observability Plugin Example)

This example demonstrates how to use the LaunchDarkly observability SDK plugin to monitor AI chat operations. For more details, see the [Python SDK observability reference](https://launchdarkly.com/docs/sdk/observability/python).

The observability plugin automatically captures and sends data to LaunchDarkly:

- **Observability tab**: SDK operations, flag evaluations, error monitoring, logging, and distributed tracing
- **AI Config Monitoring tab**: Token usage, duration, success/error rates, and custom metadata for filtering and analysis

View your data in the LaunchDarkly dashboard under **Observability** tabs.

## Prerequisites

- Python 3.10 or higher
- [Poetry](https://python-poetry.org/) installed
- A LaunchDarkly account with an [AI Config](https://launchdarkly.com/docs/home/ai-configs/create) created
- An API key for your AI provider (e.g., OpenAI)

## Setup

1. Create a `.env` file in this directory with the following variables:

```
LAUNCHDARKLY_SDK_KEY=your-launchdarkly-sdk-key
LAUNCHDARKLY_AI_CONFIG_KEY=sample-ai-config
OPENAI_API_KEY=your-openai-api-key
```

> `LAUNCHDARKLY_AI_CONFIG_KEY` defaults to `sample-ai-config` if not set.

Optionally, set service identification:

```
SERVICE_NAME=my-ai-service
SERVICE_VERSION=1.0.0
```

2. Install the required dependencies:

```bash
poetry install
```

## Run

```bash
poetry run chat-observability-example
```
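The optional service identification variables from the Setup section can be resolved with simple environment lookups before configuring the observability plugin. A sketch, using the README's placeholder values as illustrative defaults (the helper name is hypothetical):

```python
import os

def service_identity():
    """Hypothetical helper: resolve the optional SERVICE_NAME /
    SERVICE_VERSION settings, falling back to the placeholder
    values shown in the Setup section above."""
    return {
        "name": os.environ.get("SERVICE_NAME", "my-ai-service"),
        "version": os.environ.get("SERVICE_VERSION", "1.0.0"),
    }
```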
@@ -1,4 +1,5 @@
import os
from dotenv import load_dotenv
import asyncio
import logging
import ldclient
@@ -7,6 +8,8 @@
from ldai import LDAIClient, AICompletionConfigDefault
from ldobserve import ObservabilityConfig, ObservabilityPlugin

load_dotenv()

logging.getLogger('ldclient').setLevel(logging.WARNING)

# Set sdk_key to your LaunchDarkly SDK key.
22 changes: 22 additions & 0 deletions examples/chat_observability/pyproject.toml
@@ -0,0 +1,22 @@
[tool.poetry]
name = "hello-python-ai-chat-observability"
version = "0.1.0"
description = "Hello LaunchDarkly for Python AI - Chat with Observability"
authors = ["LaunchDarkly <dev@launchdarkly.com>"]
license = "Apache-2.0"
readme = "README.md"
packages = [{include = "chat_observability_example.py"}]

[tool.poetry.scripts]
chat-observability-example = "chat_observability_example:main"

[tool.poetry.dependencies]
python = "^3.10"
python-dotenv = ">=1.0.0"
launchdarkly-server-sdk-ai = "^0.17.0"
launchdarkly-observability = ">=0.1.0"
openai = ">=0.2.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
34 changes: 34 additions & 0 deletions examples/gemini/README.md
@@ -0,0 +1,34 @@
# Gemini Example (Single Provider)

This example demonstrates how to use LaunchDarkly's AI Config with the Google Gemini provider.

## Prerequisites

- Python 3.10 or higher
- [Poetry](https://python-poetry.org/) installed
- A LaunchDarkly account with an [AI Config](https://launchdarkly.com/docs/home/ai-configs/create) created
- A [Google API key](https://aistudio.google.com/apikey)

## Setup

1. Create a `.env` file in this directory with the following variables:

```
LAUNCHDARKLY_SDK_KEY=your-launchdarkly-sdk-key
LAUNCHDARKLY_AI_CONFIG_KEY=sample-ai-config
GOOGLE_API_KEY=your-google-api-key
```

> `LAUNCHDARKLY_AI_CONFIG_KEY` defaults to `sample-ai-config` if not set.

2. Install the required dependencies:

```bash
poetry install
```

## Run

```bash
poetry run gemini-example
```
@@ -1,4 +1,5 @@
import os
from dotenv import load_dotenv
import ldclient
from ldclient import Context
from ldclient.config import Config
@@ -8,6 +9,8 @@
from google.genai import types
from typing import List, Optional, Tuple

load_dotenv()

# Set sdk_key to your LaunchDarkly SDK key.
sdk_key = os.getenv('LAUNCHDARKLY_SDK_KEY')

21 changes: 21 additions & 0 deletions examples/gemini/pyproject.toml
@@ -0,0 +1,21 @@
[tool.poetry]
name = "hello-python-ai-gemini"
version = "0.1.0"
description = "Hello LaunchDarkly for Python AI - Gemini"
authors = ["LaunchDarkly <dev@launchdarkly.com>"]
license = "Apache-2.0"
readme = "README.md"
packages = [{include = "gemini_example.py"}]

[tool.poetry.scripts]
gemini-example = "gemini_example:main"

[tool.poetry.dependencies]
python = "^3.10"
python-dotenv = ">=1.0.0"
launchdarkly-server-sdk-ai = "^0.17.0"
google-genai = "^1.30.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"