OpenSkill
A Ruby implementation of the OpenSkill rating system for multiplayer games. OpenSkill is a Bayesian skill rating system that can handle teams of varying sizes, asymmetric matches, and complex game scenarios.
Features
- Multiplayer Support: Handle 2+ teams of any size
- Asymmetric Teams: Teams don't need equal player counts
- Multiple Ranking Methods: Use ranks or scores
- Prediction Methods: Predict win probabilities, draws, and final rankings
- Player Weights: Account for partial participation or contribution
- Score Margins: Factor in impressive wins
- Tie Handling: Properly handle drawn matches
- 5 Rating Models: PlackettLuce, BradleyTerryFull, BradleyTerryPart, ThurstoneMostellerFull, ThurstoneMostellerPart
- Fast: Efficient Ruby implementation
- Well Tested: Comprehensive test suite matching the reference implementation
Installation
Add this line to your application's Gemfile:
gem 'openskill'
And then execute:
bundle install
Or install it yourself as:
gem install openskill
Available Models
OpenSkill Ruby includes 5 rating models from the Weng-Lin family:
- PlackettLuce - Multidimensional model, recommended default (Algorithm 4)
- BradleyTerryFull - Full pairing with logistic regression (Algorithm 1)
- BradleyTerryPart - Partial pairing with sliding window, more efficient (Algorithm 2)
- ThurstoneMostellerFull - Full pairing with Gaussian CDF (Algorithm 3)
- ThurstoneMostellerPart - Partial pairing with Gaussian CDF, most efficient
All models support the same API and features. Choose based on your accuracy vs performance needs.
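For example, the same match can be scored under two different models with identical calls. A minimal sketch using the constructors above (output values are illustrative):

# Any model can rate the same match; only the update math differs.
pl = OpenSkill::Models::PlackettLuce.new
bt = OpenSkill::Models::BradleyTerryFull.new

[pl, bt].each do |model|
  a = model.create_rating(name: "A")
  b = model.create_rating(name: "B")
  updated = model.calculate_ratings([[a], [b]]) # A wins
  puts "#{model.class.name}: #{updated[0][0].mu.round(2)}"
end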
Quick Start
require 'openskill'
# Create a model (PlackettLuce recommended)
model = OpenSkill::Models::PlackettLuce.new
# Or use other models:
# model = OpenSkill::Models::BradleyTerryFull.new
# model = OpenSkill::Models::BradleyTerryPart.new(window_size: 4)
# model = OpenSkill::Models::ThurstoneMostellerFull.new(epsilon: 0.1)
# model = OpenSkill::Models::ThurstoneMostellerPart.new(epsilon: 0.1, window_size: 4)
# Create player ratings
alice = model.create_rating(name: "Alice")
bob = model.create_rating(name: "Bob")
charlie = model.create_rating(name: "Charlie")
dave = model.create_rating(name: "Dave")
# Simple 1v1 match (alice wins)
team1 = [alice]
team2 = [bob]
updated = model.calculate_ratings([team1, team2])
alice, bob = updated.flatten
puts "Alice: #{alice.mu.round(2)} ยฑ #{alice.sigma.round(2)}"
puts "Bob: #{bob.mu.round(2)} ยฑ #{bob.sigma.round(2)}"
Usage
Creating Ratings
model = OpenSkill::Models::PlackettLuce.new
# Create with defaults (mu=25, sigma=8.333)
player = model.create_rating
# Create with custom values
player = model.create_rating(mu: 30.0, sigma: 5.0, name: "Alice")
# Load from database [mu, sigma]
player = model.load_rating([28.5, 7.2], name: "Bob")
Calculating New Ratings
Simple Match (Team 1 wins)
team1 = [alice, bob]
team2 = [charlie, dave]
updated_teams = model.calculate_ratings([team1, team2])
Match with Explicit Ranks
Lower rank = better performance (0 is best)
teams = [[alice], [bob], [charlie]]
# Charlie wins, Bob second, Alice third
updated = model.calculate_ratings(teams, ranks: [2, 1, 0])
Match with Scores
Higher score = better performance
teams = [[alice, bob], [charlie, dave]]
# Team 2 wins 100-80
updated = model.calculate_ratings(teams, scores: [80, 100])
Match with Ties
teams = [[alice], [bob], [charlie]]
# Alice and Charlie tie for first, Bob comes third
updated = model.calculate_ratings(teams, ranks: [0, 2, 0])
Player Contribution Weights
When players contribute different amounts:
teams = [
[alice, bob], # Alice contributed more
[charlie, dave] # Dave carried the team
]
updated = model.calculate_ratings(
  teams,
  weights: [[2.0, 1.0], [1.0, 2.0]]
)
Score Margins (Impressive Wins)
Factor in score differences:
model = OpenSkill::Models::PlackettLuce.new(margin: 5.0)
# Large score difference means more rating change
updated = model.calculate_ratings(
  [[alice], [bob]],
  scores: [100, 20] # Alice dominated
)
Predictions
Win Probability
teams = [[alice, bob], [charlie, dave], [eve]]
probabilities = model.predict_win_probability(teams)
# => [0.35, 0.45, 0.20] (sums to 1.0)
Draw Probability
Higher values mean more evenly matched:
probability = model.predict_draw_probability([[alice], [bob]])
# => 0.25
Rank Prediction
teams = [[alice], [bob], [charlie]]
predictions = model.predict_rank_probability(teams)
# => [[1, 0.504], [2, 0.333], [3, 0.163]]
# Format: [predicted_rank, probability]
Rating Display
The ordinal method provides a conservative rating estimate:
player = model.create_rating(mu: 30.0, sigma: 5.0)
# 99.7% confidence (3 standard deviations)
puts player.ordinal # => 15.0 (30 - 3*5)
# 99% confidence
puts player.ordinal(z: 2.576) # => 17.12
# For leaderboards
players.sort_by(&:ordinal).reverse
Model Options
model = OpenSkill::Models::PlackettLuce.new(
mu: 25.0, # Initial mean skill
sigma: 25.0 / 3, # Initial skill uncertainty
beta: 25.0 / 6, # Performance variance
kappa: 0.0001, # Minimum variance (regularization)
tau: 25.0 / 300, # Skill decay per match
margin: 0.0, # Score margin threshold
limit_sigma: false, # Prevent sigma from increasing
balance: false # Emphasize rating outliers in teams
)
Advanced Features
Prevent Rating Uncertainty from Growing
# Useful for active players
updated = model.calculate_ratings(teams, limit_sigma: true)
Balance Outliers in Teams
model = OpenSkill::Models::PlackettLuce.new(balance: true)
# Gives more weight to rating differences within teams
Custom Tau (Skill Decay)
# Higher tau = more rating volatility
updated = model.calculate_ratings(teams, tau: 1.0)
How It Works
OpenSkill uses a Bayesian approach to model player skill as a normal distribution:
- μ (mu): The mean skill level
- σ (sigma): The uncertainty about the skill level
After each match:
- Compute team strengths from individual player ratings
- Calculate expected outcomes based on team strengths
- Update ratings based on actual vs expected performance
- Reduce uncertainty (sigma) as more matches are played
The ordinal value (μ - 3σ) provides a conservative estimate where the true skill is 99.7% likely to be higher.
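A minimal sketch of this cycle, using the API shown earlier: rating the same pairing repeatedly shrinks sigma as evidence accumulates.

model = OpenSkill::Models::PlackettLuce.new
alice = model.create_rating(name: "Alice")
bob = model.create_rating(name: "Bob")

3.times do |i|
  updated = model.calculate_ratings([[alice], [bob]]) # Alice keeps winning
  alice, bob = updated.flatten
  puts "Match #{i + 1}: mu=#{alice.mu.round(2)} sigma=#{alice.sigma.round(2)}"
end
# sigma falls with each match, so the ordinal (mu - 3*sigma) rises
# both from the mu gain and from the shrinking uncertainty.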
Why OpenSkill?
vs Elo
- Handles multiplayer (3+ players/teams)
- Works with team games
- Accounts for rating uncertainty
- Faster convergence to true skill
vs TrueSkill
- Open source (MIT license)
- Faster computation
- Similar accuracy
- More flexible (weights, margins, custom parameters)
API Design Philosophy
This Ruby implementation uses idiomatic Ruby naming conventions:
| Python API | Ruby API |
|---|---|
| model.rating() | model.create_rating |
| model.create_rating([25, 8.3]) | model.load_rating([25, 8.3]) |
| model.rate(teams) | model.calculate_ratings(teams) |
| model.predict_win(teams) | model.predict_win_probability(teams) |
| model.predict_draw(teams) | model.predict_draw_probability(teams) |
| model.predict_rank(teams) | model.predict_rank_probability(teams) |
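As a quick sketch, the Ruby methods from the table can be exercised together (the player names and stored values here are illustrative):

model = OpenSkill::Models::PlackettLuce.new
new_player = model.create_rating(name: "New")            # fresh rating
stored = model.load_rating([28.5, 7.2], name: "Stored")  # from persistence

teams = [[new_player], [stored]]
model.calculate_ratings(teams)         # rate a finished match
model.predict_win_probability(teams)   # chance each team wins
model.predict_draw_probability(teams)  # chance of a draw
model.predict_rank_probability(teams)  # likely final ranking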
Examples
2v2 Team Game
model = OpenSkill::Models::PlackettLuce.new
# Create players
alice = model.create_rating(name: "Alice")
bob = model.create_rating(name: "Bob")
charlie = model.create_rating(name: "Charlie")
dave = model.create_rating(name: "Dave")
# Match: Alice + Bob vs Charlie + Dave (Team 1 wins)
teams = [[alice, bob], [charlie, dave]]
updated = model.calculate_ratings(teams)
# Updated ratings
updated[0].each { |p| puts "#{p.name}: #{p.ordinal.round(1)}" }
updated[1].each { |p| puts "#{p.name}: #{p.ordinal.round(1)}" }
Free-for-All (5 players)
players = 5.times.map { model.create_rating }
# Player 3 wins, 1 second, 4 third, 0 fourth, 2 fifth
updated = model.calculate_ratings(
  players.map { |p| [p] },
  ranks: [3, 1, 4, 0, 2]
)
Tracking Player Progress
class Player
  attr_accessor :name, :mu, :sigma

  def initialize(name, model)
    @name = name
    rating = model.create_rating(name: name)
    @mu = rating.mu
    @sigma = rating.sigma
  end

  # Convert the stored values back into an OpenSkill rating object.
  def to_rating(model)
    model.load_rating([@mu, @sigma], name: @name)
  end

  # Store the updated values after a match.
  def update_from(rating)
    @mu = rating.mu
    @sigma = rating.sigma
  end

  def ordinal(z: 3.0)
    @mu - z * @sigma
  end
end
# Usage
model = OpenSkill::Models::PlackettLuce.new
alice = Player.new("Alice", model)
bob = Player.new("Bob", model)
# Play match
teams = [[alice.to_rating(model)], [bob.to_rating(model)]]
updated = model.calculate_ratings(teams)
# Update players
alice.update_from(updated[0][0])
bob.update_from(updated[1][0])
Testing
bundle install
bundle exec rake test
Development
This gem follows the OpenSkill specification and maintains compatibility with the Python reference implementation.
Contributing
- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create a new Pull Request
License
MIT License. See LICENSE for details.
References
- OpenSkill Python Implementation
- OpenSkill Documentation
- Original Paper: A Bayesian Approximation Method for Online Ranking by Ruby C. Weng and Chih-Jen Lin
Acknowledgments
This Ruby implementation is based on the excellent openskill.py Python library by Vivek Joshy.
The Plackett-Luce model implemented here is based on the work by Weng and Lin (2011), providing a faster and more accessible alternative to Microsoft's TrueSkill system.