Semaphore Classification

Ruby wrapper around the Semaphore Classification Server (CS) API.

Usage

Before you can classify documents, you must set the path to your CS:

Semaphore::Client.set_realm(<uri_to_classification_server>)
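For example, pointing the client at a hypothetical server address (the host here is illustrative):

  Semaphore::Client.set_realm("http://cs.example.com/")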

To classify documents:

Semaphore::Client.classify([options])

Most likely you will specify a :document_uri when classifying documents; if you do not, you will need to specify an :alternate_body instead.
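For example (the document URI and body text here are hypothetical):

  # Classify a remote document by URI:
  Semaphore::Client.classify(:document_uri => "http://mybucket.s3.amazonaws.com/some_file.pdf")

  # Classify raw text directly when no document URI is available:
  Semaphore::Client.classify(:alternate_body => "Text of the document to classify.")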

Note that the classify method will only return the IDs of the terms that matched, not the terms themselves. To have it return the term names as well, set the following option:

Semaphore::Client.decode_term_ids = true

This will slow down your classifications, however, as a new HTTP request is made for each term in the response.

Semaphore::Client.decode_term_id(<term_id>)

This method is used when you have the ID of a term and would like to retrieve its name from the Search Enhancement Server.
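For example, with a hypothetical term ID (the assignment assumes the method returns the term name, as described above):

  term_name = Semaphore::Client.decode_term_id(12345)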

Semaphore::Client.classify() Options

:document_uri (optional)

Value String

The URI may use any of the following protocols: FTP, FTPS, HTTP, or HTTPS. For example: mybucket.s3.amazonaws.com/some_file.pdf. Supported document types include Microsoft Office files, Lotus files, OpenOffice files, PDFs, WordPerfect documents, HTML documents, and most other common file formats. The document type is identified automatically by the CS.

Default none

:title (optional)

Value String

The title in the request is used mainly when classifying documents held in a content management system.

Default none

:alternate_body (optional)

Value String

This text will be classified instead if the CS fails to retrieve the document for any reason.

Default none

:article_mode (optional)

Value :single or :multi

:single will process the document in one large chunk. This means that evidence from all parts of the document is considered at the same time. Depending on the design of the rulenet, this may increase the chance of mis-classifications. Single-article mode may also require large amounts of memory (or, if memory is restricted, large amounts of time) due to the size of the evidence tables that have to be evaluated.

:multi will attempt to split the document into “articles” so that the rules only consider evidence within an article; clustering is then applied to calculate which categories are representative of the document as a whole rather than of a single article.

Default :single
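For example, to split a long document into articles before classification (the URI is hypothetical):

  Semaphore::Client.classify(:document_uri => "http://example.com/long_report.pdf",
                             :article_mode => :multi)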

:debug (optional)

Value true or false

When true, the response includes the article(s) as well as the rule matches. Useful for troubleshooting, but results in large responses.

Default false

:clustering_type (optional)

Value [:all, :average, :average_scored_only, :common_scored_only, :common, :rms_scored_only, :rms, :none]

The clustering type specifies the calculation used to derive the document-level scores from the article scores. This only applies to multi-article classifications.

Default :rms_scored_only

:clustering_threshold (optional)

Value [0-100]

The clustering threshold is only used in multi-article mode. The result of the selected clustering calculation is checked against this threshold, and a score is only promoted to document level if it is >= this value.

Default 48
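For example, a multi-article classification that averages the article scores and only promotes scores of 60 or above to document level (all values illustrative):

  Semaphore::Client.classify(:document_uri         => "http://example.com/long_report.pdf",
                             :article_mode         => :multi,
                             :clustering_type      => :average,
                             :clustering_threshold => 60)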

:threshold (optional)

Value [0-100]

The threshold is used to decide at what level of significance a category rule will fire.

The score (or significance, if you prefer) varies between 0 and 100. Sometimes this is displayed as 0.00 - 1.00, depending on whether it is used for integer calculations (0-100) or for statistical floating-point operations (0.00 - 1.00; a normalised value is generally better here).

Default 48

:language (optional)

Value [:english, :english_marathon_stemmer, :english_morphological_stemmer, :english_morph_and_derivational_stemmer, :french, :italian, :german, :spanish, :dutch, :portuguese, :danish, :norwegian, :swedish, :arabic]

Note that for standard language processing only English has multiple stemmers available; the other supported languages have only the Marathon stemmer.

Default :english_marathon_stemmer

:generated_keys (optional)

Value true or false

Using generated keys means that every rule has a unique key (which is simply the index of the rule in the rulenet).

Default true

:min_avg_article_page_size (optional)

Value Decimal

The minimum average article page size is only relevant in multi-article mode.

For documents which contain page information (i.e. not HTML and other continuous formats), the page count of the document is used to check whether automatic splitting has produced a sensible result. If the number of articles produced, multiplied by this value, is greater than the page count of the document, then CS assumes that the splitting does not make sense for this document and reverts to a single article.

The idea is that this gives an easy-to-use approximate measure for checking the split: a minimum average article page size of 1 means that, on average, we want each article to be bigger than a single page. So if a document of 10 pages splits into 20 articles, we probably have a bad statistical split, and classifying as a single article will give better results.

Default 1.0
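A minimal sketch of the check described above (this mirrors the documented behaviour; the method name is illustrative, not part of the client API):

  # Revert to single-article mode when the split yields articles that are,
  # on average, smaller than the configured minimum page size.
  def revert_to_single_article?(article_count, page_count, min_avg_article_page_size = 1.0)
    article_count * min_avg_article_page_size > page_count
  end

  revert_to_single_article?(20, 10)  # => true  (10 pages split into 20 articles: bad split)
  revert_to_single_article?(5, 10)   # => false (split looks sensible)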

:character_cutoff (optional)

Value Fixnum

The character count cutoff is a mechanism for avoiding errors or lengthy classification times on large documents.

If the corpus of documents to be classified is likely to include junk (e.g. automatically generated log files from SQL servers), then a cutoff can make sense.

A value of 0 means no cutoff action is performed.

The cutoff defines the approximate size (in characters) after which CS will stop parsing the data. The value is used as an approximation, and the various parsers implement this behaviour in different ways: e.g. PDF documents are cut off at the next page boundary once this limit is reached, Word documents are cut off after the next complete word, and so on.

Characters appear to be the most sensible unit here, since this cutoff is applied fairly early in the processing of a document. At that point CS may not yet know what language the document is in, so it cannot assume that the language uses space-separated words, let alone count sentences or paragraphs.

Other multi-document-type parsing systems (for example, the simple parser included with the Google Search Appliance) often have a cutoff value, generally not configurable, defined in terms of file size. However, a Word document with many embedded pictures may have a very large physical size but a relatively small number of characters, and so can be classified perfectly well; hence the choice of a cutoff defined in terms of the number of characters.

Generally this value is set high enough that any reasonable document (i.e. one produced by a person) will be fully considered, so a value of half a million is realistic. Automatically generated text files which cause lengthy classification times often have more characters than this, but the information is very rarely of any use to an end user.

Default 500000
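For example, to raise the cutoff for a corpus of long but legitimate documents, or to disable it entirely (the URI is hypothetical):

  Semaphore::Client.classify(:document_uri => "http://example.com/big_document.pdf",
                             :character_cutoff => 1_000_000)

  Semaphore::Client.classify(:document_uri => "http://example.com/big_document.pdf",
                             :character_cutoff => 0)  # no cutoff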

:document_score_limit (optional)

Value Fixnum

The document-level score limit is a mechanism for restricting the document-level classifications to the top N results only.

This is generally not a good idea, since the confidence of a classification is meant to provide an absolute measure of the confidence of a particular classification across documents. That is, a higher confidence means that the document is more likely to be “about” or “mainly concerned with” the particular topic. So if document 1 classifies category “A” with a confidence of 0.75 whilst document 2 has a confidence of 0.6 for category “A”, then document 1 should be returned before document 2 (if this is a search-style installation rather than a conformance-checking one). If, whilst processing document 1, the classification of “A” is discarded because document 1 has too many higher-confidence classifications, then the results across the corpus are skewed and search will not be as accurate.

However, some systems have rather strict limits on the number of “tags” or pieces of “meta information” which a particular document is allowed to contain; when CS is integrated with one of these systems, it is probably better to let CS pick the top N rather than implementing this in the integration layer.

Currently the implementation is fairly simplistic (it returns N or fewer scores, sorted by confidence), so it could easily be implemented in the integration layer. It is possible, however, that further work could go here so that CS could treat particular categories, or classes of categories, in a rulenet-defined manner, so that “important” classes of categorisation (even with a low confidence) are not excluded by large numbers of higher-confidence classifications in some less important class of rules.

Default 0
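For example, when integrating with a system that allows at most five tags per document (the URI and limit are illustrative):

  Semaphore::Client.classify(:document_uri => "http://example.com/report.pdf",
                             :document_score_limit => 5)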

Dependencies

Copyright © 2010 Gemini SBS. See LICENSE for details.

Authors