LangsmithrbRails
LangsmithrbRails provides seamless integration with LangSmith for your Rails applications. LangSmith is a platform for debugging, testing, evaluating, and monitoring LLM applications.
This gem makes it easy to add LangSmith tracing to your Rails application so you can monitor and debug your LLM operations. It provides request tracing, PII redaction, local buffering, an evaluation framework, and more.
Features
- Request Tracing: Automatically trace HTTP requests with middleware
- Service & Job Tracing: Trace your services and background jobs
- PII Redaction: Automatically redact sensitive information
- Local Buffering: Store traces locally and send them in batches
- Evaluations: Evaluate LLM responses with customizable criteria
- CI Integration: Run evaluations in your CI pipeline
- Demo Application: Get started quickly with a sample application
- LLM Provider Wrappers: Trace OpenAI, Anthropic, and other LLM providers
- Advanced Tracing: Hierarchical run trees for complex workflows
- OpenTelemetry Integration: Distributed tracing with OpenTelemetry
- Enhanced Evaluation Framework: String and LLM-based evaluators
Installation
Add this line to your application's Gemfile:
gem 'langsmithrb_rails'
And then execute:
$ bundle install
Or install it yourself as:
$ gem install langsmithrb_rails
Setup
After installing the gem, run the install generator to set up LangSmith in your Rails application:
$ rails generate langsmithrb_rails:install
This will:
- Create a LangSmith initializer at config/initializers/langsmith.rb
- Generate a YAML configuration file at config/langsmith.yml
- Display post-install instructions
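The generated initializer wires these settings into the gem's configuration API. A minimal sketch of what it might contain (illustrative only; the actual generated file may differ, and the full option list is in the Configuration section below):

# config/initializers/langsmith.rb -- illustrative sketch, not the exact generated file
LangsmithrbRails.configure do |config|
  config.api_key       = ENV["LANGSMITH_API_KEY"]
  config.project_name  = ENV["LANGSMITH_PROJECT"]
  config.sampling_rate = ENV.fetch("LANGSMITH_SAMPLING", "1.0").to_f
  config.enabled       = ENV.fetch("LANGSMITH_ENABLED", "true") == "true"
end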
Configuration
LangSmith is configured through environment variables and a YAML configuration file:
# Required
LANGSMITH_API_KEY=your_api_key
# Optional
LANGSMITH_PROJECT=your_project_name
LANGSMITH_SAMPLING=1.0 # Sampling rate (0.0 to 1.0)
LANGSMITH_ENABLED=true # Enable/disable tracing
You can get your API key from the LangSmith dashboard at https://smith.langchain.com.
The YAML configuration file (config/langsmith.yml) allows for environment-specific settings:
default: &default
  api_key: <%= ENV.fetch("LANGSMITH_API_KEY", nil) %>
  project_name: <%= ENV.fetch("LANGSMITH_PROJECT", nil) %>
  sampling_rate: <%= ENV.fetch("LANGSMITH_SAMPLING", 1.0).to_f %>
  enabled: <%= ENV.fetch("LANGSMITH_ENABLED", "true") == "true" %>
  redact_pii: true
  timeout: 5

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default
  sampling_rate: <%= ENV.fetch("LANGSMITH_SAMPLING", 0.1).to_f %>
You can also configure LangSmith programmatically in your initializer:
LangsmithrbRails.configure do |config|
  config.enabled = true                      # Enable LangSmith tracing
  config.api_key = "your_api_key"            # Your LangSmith API key
  config.project_name = "your_project"       # Optional project name
  config.sampling_rate = 0.1                 # Sample 10% of traces
  config.redact_pii = true                   # Redact PII from traces

  # Advanced tracing options
  config.trace_all = true                    # Trace all operations
  config.trace_level = "info"                # Trace level (debug, info, warn, error, fatal)

  # OpenTelemetry options
  config.otel_enabled = false                # Enable OpenTelemetry integration
  config.otel_service_name = "my-rails-app"  # Service name for OpenTelemetry

  # Evaluation options
  config.evaluation_enabled = true           # Enable evaluation framework

  # Logging options
  config.log_level = "info"                  # Log level (debug, info, warn, error, fatal)
  config.log_to_stdout = true                # Log to STDOUT
end
Generators
LangsmithrbRails provides several generators to help you set up different features:
Install Generator
$ rails generate langsmithrb_rails:install
Sets up the basic configuration for LangSmith.
Tracing Generator
$ rails generate langsmithrb_rails:tracing
Adds middleware and concerns for request-level tracing:
- Creates a middleware for tracing HTTP requests
- Adds a concern for tracing services
- Adds a concern for tracing background jobs
- Updates your application configuration
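The generator registers the middleware for you, but if you need to control where it sits in the stack you can reposition it in config/application.rb. A hypothetical sketch (LangsmithrbRails::RequestTracingMiddleware is an assumed name; use the class the generator actually created):

# config/application.rb -- illustrative; the middleware class name is an assumption.
# Insert the tracing middleware early so it sees the full request lifecycle.
config.middleware.insert_before Rails::Rack::Logger, LangsmithrbRails::RequestTracingMiddleware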
Buffer Generator
$ rails generate langsmithrb_rails:buffer
Sets up local buffering for traces:
- Creates a migration for the buffer table
- Adds a model for the buffer
- Adds a job for flushing the buffer
- Adds rake tasks for manual flushing
Evals Generator
$ rails generate langsmithrb_rails:evals
Adds an evaluation framework for LLM responses:
- Creates sample datasets for evaluation
- Adds evaluation checks (correctness, LLM-graded)
- Adds evaluation targets (HTTP, Ruby)
- Adds rake tasks for running evaluations
CI Generator
$ rails generate langsmithrb_rails:ci
Sets up CI integration for LangSmith evaluations:
- Creates a GitHub Actions workflow
- Adds a script for generating evaluation summaries
- Configures PR comments with evaluation results
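At its core, the generated workflow runs the evaluation rake task on each pull request. A rough sketch of the shape (illustrative only; the generated workflow, dataset, and target names may differ):

# .github/workflows/langsmith_evals.yml -- illustrative sketch, not the generated file
name: LangSmith Evals
on: pull_request
jobs:
  evals:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - run: bundle exec rails "langsmith:eval[sample,http,ci_run]"
        env:
          LANGSMITH_API_KEY: ${{ secrets.LANGSMITH_API_KEY }}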
Privacy Generator
$ rails generate langsmithrb_rails:privacy
Adds privacy features for LangSmith traces:
- Creates a custom redactor for PII
- Adds a privacy initializer
- Generates a privacy configuration file
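As an illustration of the kind of redactor you might write, here is a minimal sketch; the class name and the redact method contract are assumptions, so follow the interface shown in the generated privacy initializer:

# app/lib/custom_pii_redactor.rb -- illustrative only; the name and interface are assumptions.
class CustomPiiRedactor
  EMAIL_PATTERN = /[\w+.-]+@[\w-]+\.[\w.]+/

  # Replace email addresses with a placeholder before traces leave the app.
  def redact(text)
    text.gsub(EMAIL_PATTERN, "[REDACTED_EMAIL]")
  end
end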
Demo Generator
$ rails generate langsmithrb_rails:demo
Adds a demo application with LangSmith tracing:
- Creates a chat interface with LLM integration
- Adds a service for interacting with LLMs
- Configures tracing for all LLM operations
Usage
Basic Tracing
Once configured, you can trace your code using the LangsmithTraced concern:
class MyService
  include LangsmithTraced

  def process(input)
    langsmith_trace("my_operation", inputs: { input: input }) do |run|
      # Your code here
      result = do_something(input)

      # Record the output
      run.outputs = { result: result }

      result
    end
  end
end
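Calling the service then records a single run named my_operation with the given inputs and outputs:

MyService.new.process("hello")  # traced as "my_operation" in LangSmith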
LLM Provider Wrappers
You can wrap LLM providers to automatically trace all operations:
# Wrap an OpenAI client
require "openai"
openai_client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
traced_client = LangsmithrbRails.wrap(openai_client, provider: :openai, project_name: "my-project")
# Use the traced client normally
response = traced_client.chat(
  parameters: {
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello, world!" }]
  }
)
# Wrap an Anthropic client
require "anthropic"
anthropic_client = Anthropic::Client.new(api_key: ENV["ANTHROPIC_API_KEY"])
traced_client = LangsmithrbRails.wrap(anthropic_client, provider: :anthropic)
# Use the traced client normally
response = traced_client.messages.create(
  model: "claude-3-opus-20240229",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello, world!" }]
)
# Wrap any LLM client
custom_client = MyCustomLLM.new
traced_client = LangsmithrbRails.wrap(custom_client, provider: :llm)
# Make any method traceable
LangsmithrbRails.traceable(object, :method_name, run_name: "my-run")
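For example, to trace a single method on an existing object without including a concern (ReportGenerator is a hypothetical class; only LangsmithrbRails.traceable comes from the gem):

# ReportGenerator is illustrative; traceable wraps the named method with tracing.
generator = ReportGenerator.new
LangsmithrbRails.traceable(generator, :generate, run_name: "report-generation")
generator.generate  # this call is now recorded as a "report-generation" run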
Tracing Background Jobs
For background jobs, use the LangsmithTracedJob concern:
class MyJob < ApplicationJob
  include LangsmithTracedJob

  def perform(args)
    # Job is automatically traced
    # ...
  end
end
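Enqueue the job as usual; the concern wraps perform in a trace:

MyJob.perform_later("some input")  # perform is traced automatically when the job executes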
Advanced Tracing with Run Trees
For complex workflows, use run trees to create hierarchical traces:
LangsmithrbRails.run(name: "parent-operation", run_type: "chain", inputs: { query: "Hello" }) do |parent_run|
  # Do some work in the parent run
  intermediate_result = process_query(parent_run.inputs[:query])

  # Create a child run
  child_run = parent_run.create_child(
    name: "child-operation",
    run_type: "llm",
    inputs: { prompt: intermediate_result }
  )

  # Do some work in the child run
  result = call_llm(child_run.inputs[:prompt])

  # End the child run with outputs
  child_run.end(outputs: { response: result })

  # Return the final result
  result
end
Local Buffering
If you've set up buffering, traces will be stored locally and sent in batches. You can manually flush the buffer:
$ rails langsmith:flush
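To flush on a schedule instead, any scheduler that can run a rake task works; for example, with the whenever gem (an assumption; whenever is not a dependency of langsmithrb_rails):

# config/schedule.rb (whenever gem) -- illustrative scheduling of the documented flush task
every 5.minutes do
  rake "langsmith:flush"
end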
OpenTelemetry Integration
You can enable OpenTelemetry integration for distributed tracing:
# Initialize OpenTelemetry
LangsmithrbRails::OTEL.init(
  service_name: "my-rails-app",
  service_version: "1.0.0"
)

# Trace an operation with OpenTelemetry
LangsmithrbRails::OTEL.trace("my-operation", attributes: { key: "value" }) do
  # Your code here
  result = do_something
  result
end

# Trace an LLM operation
LangsmithrbRails::OTEL.trace_llm(
  "llm-operation",
  inputs: { prompt: "Hello, world!" },
  run_type: "llm",
  project_name: "my-project",
  tags: ["tag1", "tag2"]
) do
  # Your LLM code here
  response = llm.generate("Hello, world!")
  response
end
Running Evaluations
To run an evaluation:
$ rails langsmith:eval[sample,http,my_experiment]
Where:
- sample is the dataset name
- http is the target name
- my_experiment is the experiment name
Enhanced Evaluation Framework
Use the enhanced evaluation framework to evaluate LLM responses:
# Create a string evaluator
string_evaluator = LangsmithrbRails.evaluator(
  :string,
  match_type: :exact,
  case_sensitive: false,
  project_name: "my-project"
)

# Evaluate a prediction against a reference
result = string_evaluator.evaluate("Hello, world!", "hello, world!")
# => { score: 1.0, metadata: { match: true, match_type: "exact", case_sensitive: false } }

# Create an LLM-based evaluator
llm_client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
llm_evaluator = LangsmithrbRails.evaluator(
  :llm,
  llm: llm_client,
  criteria: "Evaluate the response for accuracy and relevance.",
  project_name: "my-project"
)

# Evaluate a run
result = llm_evaluator.evaluate_run("run-id")

# Evaluate a dataset
results = LangsmithrbRails::Evaluation.evaluate_dataset(
  "dataset-id",
  [string_evaluator, llm_evaluator],
  experiment_name: "my-experiment"
)
Comparing Experiments
To compare two experiments:
$ rails langsmith:compare[exp_a,exp_b]
Development
After checking out the repo, run bin/setup to install dependencies. Then, run rake spec to run the tests. You can also run bin/console for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run bundle exec rake install.
Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/protocolgrid/langsmithrb_rails.
License
The gem is available as open source under the terms of the MIT License.