# Ruby Vector Space Model (VSM) with tf*idf weights

Calculates the similarity between texts using a bag-of-words Vector Space Model with Term Frequency-Inverse Document Frequency (tf*idf) weights. If your use case demands performance, use Lucene (see below).

## Usage

```ruby
require 'matrix'
require 'tf-idf-similarity'
```

Create a set of documents:

```ruby
document1 = TfIdfSimilarity::Document.new("Lorem ipsum dolor sit amet...")
document2 = TfIdfSimilarity::Document.new("Pellentesque sed ipsum dui...")
document3 = TfIdfSimilarity::Document.new("Nam scelerisque dui sed leo...")
corpus = [document1, document2, document3]
```

Create a document-term matrix using the term frequency-inverse document frequency (tf*idf) function:

```ruby
model = TfIdfSimilarity::TfIdfModel.new(corpus)
```

Or, create a document-term matrix using the Okapi BM25 ranking function:

```ruby
model = TfIdfSimilarity::BM25Model.new(corpus)
```

Create a similarity matrix:

```ruby
matrix = model.similarity_matrix
```

Find the similarity of two documents in the matrix:

```ruby
matrix[model.document_index(document1), model.document_index(document2)]
```
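Under the hood, the similarity matrix holds cosine similarities between weighted document vectors. A minimal stdlib sketch of that computation, using raw term counts as weights for brevity (the gem applies tf*idf weighting first; the helper name and sample documents are illustrative):

```ruby
require 'matrix'

# Cosine of the angle between two term vectors: 1.0 for identical
# directions, 0.0 for documents that share no terms.
def cosine_similarity(a, b)
  dot = a.inner_product(b)
  dot.zero? ? 0.0 : dot / (a.norm * b.norm)
end

docs = [%w[lorem ipsum dolor], %w[ipsum dui dui], %w[dui sed leo]]
vocab = docs.flatten.uniq

# One vector per document over the shared vocabulary.
vectors = docs.map { |doc| Vector[*vocab.map { |term| doc.count(term) }] }

cosine_similarity(vectors[0], vectors[1]) # a value between 0.0 and 1.0
```

A document is always perfectly similar to itself, which is why the diagonal of the similarity matrix is 1.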

Print the tf*idf values for terms in a document:

```ruby
tfidf_by_term = {}
document1.terms.each do |term|
  tfidf_by_term[term] = model.tfidf(document1, term)
end
puts tfidf_by_term.sort_by { |_, tfidf| -tfidf }
```

Tokenize a document yourself, for example by excluding stop words:

```ruby
require 'unicode_utils'
text = "Lorem ipsum dolor sit amet..."
tokens = UnicodeUtils.each_word(text).to_a - ['and', 'the', 'to']
document1 = TfIdfSimilarity::Document.new(text, :tokens => tokens)
```

Alternatively, provide the number of times each term appears and the number of tokens in the document yourself:

```ruby
require 'unicode_utils'
text = "Lorem ipsum dolor sit amet..."
tokens = UnicodeUtils.each_word(text).to_a - ['and', 'the', 'to']
term_counts = Hash.new(0)
size = 0
tokens.each do |token|
  # Skip numeric tokens.
  unless token[/\A\d+\z/]
    # Remove all punctuation from the token.
    term_counts[token.gsub(/\p{Punct}/, '')] += 1
    size += 1
  end
end
document1 = TfIdfSimilarity::Document.new(text, :term_counts => term_counts, :size => size)
```

Read the documentation at RubyDoc.info.

## Troubleshooting

```
NoMethodError: undefined method `[]' for Matrix:Module
```

The `matrix` gem defines a `Matrix` module that conflicts with the `Matrix` class in Ruby's standard library. Don't use the `matrix` gem.

## Speed

Instead of using the Ruby Standard Library's Matrix class, you can use one of the GNU Scientific Library (GSL), NArray or NMatrix (0.0.9 or greater) gems for faster matrix operations. For example:

```ruby
require 'narray'
model = TfIdfSimilarity::TfIdfModel.new(corpus, :library => :narray)
```

NArray seems to have the best performance of the three libraries.
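To get a rough sense of why a native backend helps, you can time a stdlib `Matrix` multiplication yourself before switching libraries (the matrix size here is arbitrary):

```ruby
require 'matrix'
require 'benchmark'

n = 100
a = Matrix.build(n, n) { rand }

# Wall-clock time for one n-by-n multiply with the pure-Ruby Matrix class.
seconds = Benchmark.realtime { a * a }
puts format('%dx%d multiply: %.3fs with the stdlib Matrix', n, n, seconds)
```

Repeating the same measurement after passing `:library => :narray` (or `:gsl`, `:nmatrix`) to the model shows how much of the model-building time the matrix backend accounts for.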

The NMatrix gem gives access to Automatically Tuned Linear Algebra Software (ATLAS), which you may know of through Linear Algebra PACKage (LAPACK) or Basic Linear Algebra Subprograms (BLAS). See the NMatrix project's installation instructions to install the gem.

## Extras

You can access more term frequency, document frequency, and normalization formulas with:

```ruby
require 'tf-idf-similarity/extras/document'
require 'tf-idf-similarity/extras/tf_idf_model'
```

The default tf*idf formula follows the Lucene Conceptual Scoring Formula.
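For reference, Lucene's classic scoring combines a square-root term frequency with a log-based inverse document frequency. A stdlib sketch of those two components (the helper names are illustrative, and the constants follow Lucene's ClassicSimilarity as commonly documented, so treat them as an approximation rather than this gem's exact internals):

```ruby
# Term frequency: square root of the raw count of a term in a document.
def lucene_tf(term_count)
  Math.sqrt(term_count)
end

# Inverse document frequency: 1 + ln(total docs / (docs containing term + 1)).
def lucene_idf(num_docs, doc_freq)
  1 + Math.log(num_docs / (doc_freq + 1.0))
end

lucene_tf(4)     # => 2.0
lucene_idf(3, 1) # 1 + ln(1.5)
```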

## Why?

At the time of writing, no other Ruby gem implemented the tf*idf formula used by Lucene, Sphinx and Ferret.

- rsemantic now uses the same term frequency and document frequency formulas as Lucene.
- treat offers many term frequency formulas, one of which is the same as Lucene.
- similarity uses cosine normalization, which corresponds roughly to Lucene.

### Term frequencies

- The vss gem does not normalize the frequency of a term in a document; this occurs frequently in the academic literature, but only to demonstrate why normalization is important.
- The tf_idf and similarity gems normalize the frequency of a term in a document to the number of terms in that document, which never occurs in the literature.
- The tf-idf gem normalizes the frequency of a term in a document to the number of *unique* terms in that document, which never occurs in the literature.
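The differences above can be made concrete. A stdlib sketch of the three term-frequency treatments described (variable names are illustrative, not part of any of these gems):

```ruby
doc = %w[dui ipsum dui sed dui]

# Raw count, no normalization (as in the vss gem).
raw_tf = doc.count('dui')                         # 3

# Normalized to the total number of terms (tf_idf, similarity gems).
length_tf = doc.count('dui') / doc.size.to_f      # 3 / 5 = 0.6

# Normalized to the number of unique terms (tf-idf gem).
unique_tf = doc.count('dui') / doc.uniq.size.to_f # 3 / 3 = 1.0
```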

### Document frequencies

- The vss gem does not normalize the inverse document frequency.
- The treat, tf_idf, tf-idf and similarity gems use variants of the typical inverse document frequency formula.

### Normalization

- The treat, tf_idf, tf-idf, rsemantic and vss gems have no normalization component.

## Additional adapters

Adapters for the following projects were also considered:

- Ruby-LAPACK is a very thin wrapper around LAPACK, which has an opaque Fortran-style naming scheme.
- Linalg and RNum give access to LAPACK from Ruby but are old and unavailable as gems.

## Reference

- G. Salton and C. Buckley. "Term Weighting Approaches in Automatic Text Retrieval." Technical Report. Cornell University, Ithaca, NY, USA. 1987.
- E. Chisholm and T. G. Kolda. "New term weighting formulas for the vector space method in information retrieval." Technical Report Number ORNL-TM-13756. Oak Ridge National Laboratory, Oak Ridge, TN, USA. 1999.

## Further Reading

Lucene implements many more similarity functions, such as:

- a divergence from randomness (DFR) framework
- a framework for the family of information-based models
- a language model with Bayesian smoothing using Dirichlet priors
- a language model with Jelinek-Mercer smoothing

Lucene can even combine similarity measures.

Copyright (c) 2012 James McKinney, released under the MIT license