Class: Classifier::LSI

Inherits: Base • Object
Defined in:
lib/classifier/lsi.rb

Overview

This class implements a Latent Semantic Indexer, which can search, classify and cluster data based on underlying semantic relations. For more information on the algorithms used, please consult Wikipedia.
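
A minimal usage sketch (the sample strings and category names here are illustrative, not part of the API):

require 'classifier'

lsi = Classifier::LSI.new
lsi.add_item "This text deals with dogs. Dogs.", :dog
lsi.add_item "This text involves birds. Birds.", :bird
lsi.classify "This text is about dogs!"   # => :dog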

Instance Attribute Summary

Instance Method Summary

Methods inherited from Base

#clean_word_hash, #prepare_category_name, #without_punctuation, #word_hash

Constructor Details

#initialize(options = {}) ⇒ LSI

Create a fresh index. If you want to call #build_index manually, use

Classifier::LSI.new :auto_rebuild => false


# File 'lib/classifier/lsi.rb', line 35

def initialize(options = {})
  @auto_rebuild = true unless options[:auto_rebuild] == false
  @word_list, @items = WordList.new, {}
  @version, @built_at_version = 0, -1
  super
end

Instance Attribute Details

#auto_rebuild ⇒ Object

Returns the value of attribute auto_rebuild.



# File 'lib/classifier/lsi.rb', line 29

def auto_rebuild
  @auto_rebuild
end

#word_list ⇒ Object (readonly)

Returns the value of attribute word_list.



# File 'lib/classifier/lsi.rb', line 28

def word_list
  @word_list
end

Instance Method Details

#<<(item) ⇒ Object

A less flexible shorthand for add_item that assumes you are passing in a string with no categories. item will be duck typed via #to_s.
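
For example:

lsi << "An unclassified piece of text"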



# File 'lib/classifier/lsi.rb', line 73

def <<( item )
  add_item item
end

#add_item(item, *categories, &block) ⇒ Object

Adds an item to the index. item is assumed to be a string, but any item may be indexed so long as it responds to #to_s or you provide an optional block explaining how the indexer can fetch fresh string data. This optional block is passed the item, so the item itself may simply be a reference such as a URL or file name.

For example:

lsi = Classifier::LSI.new
lsi.add_item "This is just plain text"
lsi.add_item "/home/me/filename.txt" { |x| File.read x }
ar = ActiveRecordObject.find( :all )
lsi.add_item ar, *ar.categories { |x| ar.content }


# File 'lib/classifier/lsi.rb', line 62

def add_item( item, *categories, &block )
  # Build the word hash from the block's string data if given, else from item.to_s.
  word_hash = block ? clean_word_hash(block.call(item)) : clean_word_hash(item.to_s)
  @items[item] = ContentNode.new(word_hash, *categories)
  @version += 1
  build_index if @auto_rebuild
end

#build_index(cutoff = 0.75) ⇒ Object

This function rebuilds the index if needs_rebuild? returns true. For very large document spaces, this indexing operation may take some time to complete, so it may be wise to place the operation in another thread.

As a rule, indexing will be fairly swift on modern machines until you have well over 500 documents indexed, or have an incredibly diverse vocabulary for your documents.

The optional parameter “cutoff” is a tuning parameter. When the index is built, a certain number of singular values (s-values) are discarded from the system. The cutoff parameter tells the indexer what fraction of these values to keep. A value of 1 for cutoff means that no semantic analysis will take place, turning the LSI class into a simple vector search engine.
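
For example, a sketch of deferring the rebuild to a background thread (the documents and categories are illustrative):

lsi = Classifier::LSI.new :auto_rebuild => false
lsi.add_item "A document about dogs", :dog
lsi.add_item "A document about birds", :bird
Thread.new { lsi.build_index }   # or simply: lsi.build_index(0.75)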



# File 'lib/classifier/lsi.rb', line 119

def build_index( cutoff=0.75 )
  return unless needs_rebuild?
  make_word_list
  
  doc_list = @items.values
  tda = doc_list.collect { |node| node.raw_vector_with( @word_list ) }
  
  if $GSL
     tdm = GSL::Matrix.alloc(*tda).trans
     ntdm = build_reduced_matrix(tdm, cutoff)

     ntdm.size[1].times do |col| 
       vec = GSL::Vector.alloc( ntdm.column(col) ).row
       doc_list[col].lsi_vector = vec
       doc_list[col].lsi_norm = vec.normalize
     end
  else
     tdm = Matrix.rows(tda).trans
     ntdm = build_reduced_matrix(tdm, cutoff)
  
     ntdm.column_size.times do |col| # one column per document
       doc_list[col].lsi_vector = ntdm.column(col) if doc_list[col]
       doc_list[col].lsi_norm = ntdm.column(col).normalize  if doc_list[col]
     end
  end
   
  @built_at_version = @version
end

#categories_for(item) ⇒ Object

Returns the categories for a given indexed item. You are free to add and remove items from this list as you see fit. Changing an item’s categories does not invalidate the index.
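
For example (category names are arbitrary):

lsi.add_item "Birds of a feather flock together", :bird, :proverb
lsi.categories_for "Birds of a feather flock together"   # => [:bird, :proverb]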



# File 'lib/classifier/lsi.rb', line 79

def categories_for(item)
  return [] unless @items[item]
  return @items[item].categories
end

#classify(doc, cutoff = 0.30, &block) ⇒ Object

This function uses a voting system to categorize documents, based on the categories of other documents. It uses the same logic as the find_related function to find related documents, then returns the most obvious category from this list.

cutoff signifies the fraction of documents to consider when classifying text. A cutoff of 1 means that every document in the index votes on what category the document is in. This may not always make sense.
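
For example, assuming an index built from dog- and bird-related items:

lsi.classify "This text talks about dogs!"          # => :dog
lsi.classify "This text talks about dogs!", 0.10    # only the nearest 10% of documents vote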



# File 'lib/classifier/lsi.rb', line 255

def classify( doc, cutoff=0.30, &block )
  icutoff = (@items.size * cutoff).round
  carry = proximity_array_for_content( doc, &block )
  carry = carry[0..icutoff-1]
  votes = {}
  carry.each do |pair|
    categories = @items[pair[0]].categories
    categories.each do |category| 
      votes[category] ||= 0.0
      votes[category] += pair[1] 
    end
  end
  
  ranking = votes.keys.sort_by { |x| votes[x] }
  return ranking[-1]
end

#classify_multiple(doc, cutoff = 0.50, &block) ⇒ Object

Same as classify, but returns all qualifying categories instead of a single winner; it is also more permissive, with a higher default cutoff of 0.50.
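
For example (the exact ranking depends on your index):

lsi.classify_multiple "Dogs chasing birds"   # => e.g. [:dog, :bird], highest-scoring first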



# File 'lib/classifier/lsi.rb', line 273

def classify_multiple( doc, cutoff=0.50, &block )
  icutoff = (@items.size * cutoff).round
  carry = proximity_array_for_content( doc, &block )
  carry = carry[0..icutoff-1]
  votes = {}
  carry.each do |pair|
    categories = @items[pair[0]].categories
    categories.each do |category|
      votes[category] ||= 0.0
      votes[category] += pair[1]
    end
  end
  votes.delete_if{|key, value| value<1 }.keys.sort_by { |x| -votes[x] }
end

#find_related(doc, max_nearest = 3, &block) ⇒ Object

This function takes content and finds other documents that are semantically “close”, returning an array of documents sorted from most to least relevant. max_nearest specifies the number of documents to return. A value of 0 means that it returns all the indexed documents, sorted by relevance.

This is particularly useful for identifying clusters in your document space. For example, you may want to identify several “What’s Related” items for weblog articles, or find paragraphs that relate to each other in an essay.
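
For example:

lsi.find_related "This text is about dogs!", 2   # => the two most semantically similar items
lsi.find_related "This text is about dogs!", 0   # => all indexed items, sorted by relevance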



# File 'lib/classifier/lsi.rb', line 239

def find_related( doc, max_nearest=3, &block )
  carry = 
    proximity_array_for_content( doc, &block ).reject { |pair| pair[0] == doc }
  result = carry.collect { |x| x[0] }
  return result[0..max_nearest-1]
end

#highest_ranked_stems(doc, count = 3) ⇒ Object

A prototype; it only works on indexed documents. I have no clue if this is going to work, but in theory it’s supposed to.
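
A sketch of the intended usage; doc must already be indexed, and the exact stems returned depend on the stemmer:

lsi.highest_ranked_stems "This text deals with dogs. Dogs.", 2   # => e.g. ["dog", "text"]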



# File 'lib/classifier/lsi.rb', line 291

def highest_ranked_stems( doc, count=3 )
  raise "Requested stem ranking on non-indexed content!" unless @items[doc]
  arr = node_for_content(doc).lsi_vector.to_a
  top_n = arr.sort.reverse[0..count-1]
  return top_n.collect { |x| @word_list.word_for_index(arr.index(x))}
end

#highest_relative_content(max_chunks = 10) ⇒ Object

This method returns max_chunks entries, ordered by their average semantic rating. Essentially, the average distance of each entry from all other entries is calculated, and the highest are returned.

This can be used to build a summary service, or to provide more information about your dataset’s general content. For example, if you were to use categorize on the results of this data, you could gather information on what your dataset is generally about.
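
For example:

lsi.highest_relative_content 5   # => the five most semantically "central" indexed items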



# File 'lib/classifier/lsi.rb', line 156

def highest_relative_content( max_chunks=10 )
   return [] if needs_rebuild?
   
   # Sum each item's proximity to every other item to get its semantic "density".
   avg_density = Hash.new
   @items.each_key { |item| avg_density[item] = proximity_array_for_content(item).inject(0.0) { |sum, pair| sum + pair[1] } }
   
   avg_density.keys.sort_by { |x| avg_density[x] }.reverse[0..max_chunks-1]
end

#items ⇒ Object

Returns an array of items that are indexed.



# File 'lib/classifier/lsi.rb', line 94

def items
  @items.keys
end

#needs_rebuild? ⇒ Boolean

Returns true if the index needs to be rebuilt. The index needs to be built after all information is added, but before you start using it for search, classification and cluster detection.

Returns:

  • (Boolean)
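
For example, with :auto_rebuild disabled:

lsi = Classifier::LSI.new :auto_rebuild => false
lsi.add_item "First document", :a
lsi.add_item "Second document", :b
lsi.needs_rebuild?   # => true
lsi.build_index
lsi.needs_rebuild?   # => false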


# File 'lib/classifier/lsi.rb', line 45

def needs_rebuild?
  (@items.keys.size > 1) && (@version != @built_at_version)
end

#proximity_array_for_content(doc, &block) ⇒ Object

This function is the primitive that find_related and classify build upon. It returns an array of 2-element arrays. The first element of this array is a document, and the second is its “score”, defining how “close” it is to other indexed items.

These values are somewhat arbitrary, having to do with the vector space created by your content, so the magnitude is interpretable but not always meaningful between indexes.

The parameter doc is the content to compare. If that content is not indexed, you can pass an optional block to define how to create the text data. See add_item for examples of how this works.
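
For example (the scores shown are illustrative; they depend on your vector space):

lsi.proximity_array_for_content "dogs"
# => [["This text deals with dogs. Dogs.", 0.97], ["This text involves birds. Birds.", 0.12]]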



# File 'lib/classifier/lsi.rb', line 177

def proximity_array_for_content( doc, &block )
  return [] if needs_rebuild?
  
  content_node = node_for_content( doc, &block )
  result = 
    @items.keys.collect do |item|
      next if @items[item].search_vector.blank? # not enough data
      if $GSL
         val = content_node.search_vector * @items[item].search_vector.col
      else
         val = (Matrix[content_node.search_vector] * @items[item].search_vector)[0]
      end
      [item, val]
    end
  result.compact.sort_by { |x| x[1] }.reverse
end

#proximity_norms_for_content(doc, &block) ⇒ Object

Similar to proximity_array_for_content, this function takes similar arguments and returns a similar array. However, it uses the normalized calculated vectors instead of their full versions. This is useful when you’re trying to perform operations on content that is much smaller than the text you’re working with. search uses this primitive.
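
For example, comparing a very short query against the index:

lsi.proximity_norms_for_content "dog"
# => [item, score] pairs, as in proximity_array_for_content, but computed from normalized vectors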



# File 'lib/classifier/lsi.rb', line 199

def proximity_norms_for_content( doc, &block )
  return [] if needs_rebuild?
  
  content_node = node_for_content( doc, &block )
  result = 
    @items.keys.collect do |item|
      next if @items[item].search_norm.blank? # not enough data
      if $GSL
        val = content_node.search_norm * @items[item].search_norm.col
      else
        val = (Matrix[content_node.search_norm] * @items[item].search_norm)[0]
      end
      [item, val]
    end
  result.compact.sort_by { |x| x[1] }.reverse
end

#remove_item(item) ⇒ Object

Removes an item from the database, if it is indexed.



# File 'lib/classifier/lsi.rb', line 86

def remove_item( item )
  if @items.key? item
    @items.delete item
    @version += 1
  end
end

#search(string, max_nearest = 3) ⇒ Object

This function allows for text-based search of your index. Unlike other functions like find_related and classify, search only takes short strings. It will also ignore factors like repeated words. It is best for short, Google-like search terms. A search will first prioritize lexical relationships, then semantic ones.

While this may seem backwards compared to the other functions that LSI supports, it is actually the same algorithm, just applied on a smaller document.
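
For example:

lsi.search "dog", 3   # => up to three indexed items, best matches first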



# File 'lib/classifier/lsi.rb', line 223

def search( string, max_nearest=3 )
  return [] if needs_rebuild?
  carry = proximity_norms_for_content( string )
  result = carry.collect { |x| x[0] }
  return result[0..max_nearest-1]
end