What algorithm does Readability use for extracting text from URLs?

Readability mainly consists of heuristics that “just somehow work well” in many cases.

I have written some research papers on this topic, so I would like to explain the background: why it is easy to come up with a solution that works well in many cases, and why it gets hard to get close to 100% accuracy.

There seems to be a linguistic law underlying human language that is also (but not exclusively) manifest in Web page content, and it already separates two types of text quite clearly (full-text vs. non-full-text or, roughly, “main content” vs. “boilerplate”).

To get the main content from HTML, it is in many cases sufficient to keep only those HTML text elements (i.e. blocks of text that are not interrupted by markup) which have more than about 10 words. It appears that humans choose between two types of text (“short” and “long”, measured by the number of words they emit) depending on their motivation for writing. I would call these the “navigational” and the “informational” motivation.
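As a minimal illustration of that idea (this is not the actual Readability or boilerpipe code), the following sketch naively splits an HTML string on tags and keeps only the text blocks with more than about 10 words; the threshold and the regex-based “parsing” are simplifications for the example:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * A minimal sketch of the word-count heuristic described above.
 * HTML is split on tags with a naive regex, each resulting run of
 * text is treated as one "block", and only blocks with more than
 * MIN_WORDS words are kept as main content.
 */
public class WordCountHeuristic {

    private static final int MIN_WORDS = 10; // the "about 10 words" threshold

    public static List<String> extractMainContent(String html) {
        List<String> mainBlocks = new ArrayList<>();
        // Split the document wherever a tag interrupts the text.
        for (String block : html.split("<[^>]+>")) {
            String text = block.trim();
            if (text.isEmpty()) {
                continue;
            }
            int words = text.split("\\s+").length;
            if (words > MIN_WORDS) {  // "long" => informational text, kept
                mainBlocks.add(text);
            }                          // "short" => navigational text, dropped
        }
        return mainBlocks;
    }

    public static void main(String[] args) {
        String html = "<ul><li><a href=\"/\">Home</a></li><li>Click here</li></ul>"
                + "<p>This paragraph contains considerably more than ten words, so the "
                + "heuristic keeps it and treats it as part of the main content.</p>";
        extractMainContent(html).forEach(System.out::println);
    }
}
```

On this toy input, the menu items (“Home”, “Click here”) fall below the threshold and are dropped, while the paragraph survives.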

If an author wants you to quickly get what is written, he/she uses “navigational” text, i.e. few words (like “STOP”, “Read this”, “Click here”). This is the most prominent type of text in navigational elements (menus etc.).

If an author wants you to deeply understand what he/she means, he/she uses many words. This way, ambiguity is removed at the cost of an increase in redundancy. Article-like content usually falls into this class as it has more than only a few words.

While this separation seems to work in a plethora of cases, it gets tricky with headlines, short sentences, disclaimers, copyright footers etc.

There are more sophisticated strategies and features that help separate main content from boilerplate: for example the link density (number of linked words in a block versus the overall number of words in the block), the features of the previous/next blocks, the frequency of a particular block text on the “whole” Web, the DOM structure of the HTML document, the visual layout of the page etc.
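To make the link-density feature concrete, here is a rough, self-contained sketch (again, not boilerpipe’s actual implementation): it counts how many of a block’s words appear inside anchor tags and divides by the block’s total word count, so a menu block comes out close to 1.0 while an article paragraph stays low:

```java
/**
 * Illustration of the link-density feature: the fraction of a block's
 * words that appear inside <a>...</a> elements. Navigation blocks are
 * almost entirely linked text; article paragraphs usually are not.
 */
public class LinkDensity {

    public static double linkDensity(String blockHtml) {
        String plainText = blockHtml.replaceAll("<[^>]+>", " ").trim();
        int totalWords = plainText.isEmpty() ? 0 : plainText.split("\\s+").length;
        if (totalWords == 0) {
            return 0.0;
        }
        int linkedWords = 0;
        // Count the words that occur inside anchor elements.
        java.util.regex.Matcher m = java.util.regex.Pattern
                .compile("(?is)<a\\b[^>]*>(.*?)</a>").matcher(blockHtml);
        while (m.find()) {
            String anchorText = m.group(1).replaceAll("<[^>]+>", " ").trim();
            if (!anchorText.isEmpty()) {
                linkedWords += anchorText.split("\\s+").length;
            }
        }
        return (double) linkedWords / totalWords;
    }

    public static void main(String[] args) {
        String menu = "<a href=\"/\">Home</a> <a href=\"/news\">Latest news</a>";
        String article = "The article text mentions a <a href=\"/ref\">reference</a> "
                + "but consists mostly of unlinked words.";
        System.out.println(linkDensity(menu));    // close to 1.0 -> likely boilerplate
        System.out.println(linkDensity(article)); // low -> likely main content
    }
}
```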

You can read my latest article “Boilerplate Detection using Shallow Text Features” to get some insight from a theoretical perspective. You may also watch the video of my paper presentation on VideoLectures.net.

“Readability” uses some of these features. If you carefully watch the SVN changelog, you will see that the number of strategies varied over time, and so did the extraction quality of Readability. For example, the introduction of link density in December 2009 improved the results considerably.

In my opinion, it therefore makes little sense to say “Readability does it like that” without mentioning the exact version number.

I have published an Open Source HTML content extraction library called boilerpipe, which provides several different extraction strategies. Depending on the use case, one or the other extractor works better. You can try these extractors on pages of your choice using the companion boilerpipe-web app on Google AppEngine.
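For completeness, this is roughly how boilerpipe is invoked from Java; `ArticleExtractor` and `DefaultExtractor` are two of the provided strategies (the exact API may differ slightly between versions):

```java
import de.l3s.boilerpipe.BoilerpipeProcessingException;
import de.l3s.boilerpipe.extractors.ArticleExtractor;
import de.l3s.boilerpipe.extractors.DefaultExtractor;

public class BoilerpipeExample {
    public static void main(String[] args) throws BoilerpipeProcessingException {
        String html = "<html><body>"
                + "<div class=\"nav\"><a href=\"/\">Home</a> <a href=\"/about\">About</a></div>"
                + "<p>The actual article text, long enough to be recognized as main content "
                + "rather than boilerplate by the extractor's shallow text features.</p>"
                + "</body></html>";

        // ArticleExtractor is tuned for article-like pages ...
        String articleText = ArticleExtractor.INSTANCE.getText(html);

        // ... while DefaultExtractor is a more generic strategy.
        String defaultText = DefaultExtractor.INSTANCE.getText(html);

        System.out.println(articleText);
        System.out.println(defaultText);
    }
}
```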

To let the numbers speak, see the “Benchmarks” page on the boilerpipe wiki, which compares some extraction strategies, including boilerpipe, Readability and Apple Safari.

I should mention that these algorithms assume that the main content is actually full text. There are cases where the “main content” is something else, e.g. an image, a table, a video etc. The algorithms won’t work well for such cases.
