Dynamic column/variable names with dplyr using Standard Evaluation functions

Data manipulation works like a charm in R when using a library like dplyr. An often overlooked feature of this library is Standard Evaluation (SE), which is also described in the vignette on the related Non-standard Evaluation. It basically allows you to use dynamic arguments in many dplyr functions (“verbs”).

Read More →

Bringing SVG to life with d3.js

Scalable Vector Graphics (SVG) is a format for displaying high-quality, scalable graphics on the web. Most graphics software, such as Adobe Illustrator or Inkscape, can export it. The graphics are of course static, but with a little help from the JavaScript data visualization library d3.js, they can be brought to life by animating parts of them or making some elements respond to actions like mouse clicks.

In this post I will explain how to do that using the example of an interactive map for the LATINNO project.

Read More →

A tip for the impatient: Simple caching with Python pickle and decorators

During testing and development, it is sometimes necessary to rerun tasks that take quite a long time. One option is to drink coffee in the meantime; the other is to use caching, i.e. to save once-calculated results to disk and load them from there again when necessary. The Python module pickle is perfect for caching, since it allows you to store and read whole Python objects with two simple functions. I already showed in another article that it’s very useful to store a fully trained POS tagger and load it again directly from disk without needing to retrain it, which saves a lot of time.
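The core of the approach can be wrapped in a decorator. The following is only a minimal sketch of the idea; the decorator name, the cache file name and the slow_computation() function are made up for illustration:

import os
import pickle
from functools import wraps

def cached(cache_file):
    # cache the decorated function's result in cache_file using pickle
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if os.path.exists(cache_file):
                with open(cache_file, 'rb') as f:
                    return pickle.load(f)
            result = fn(*args, **kwargs)
            with open(cache_file, 'wb') as f:
                pickle.dump(result, f)
            return result
        return wrapper
    return decorator

@cached('slow_result.pickle')
def slow_computation():
    # placeholder for a long-running task
    return sum(i * i for i in range(10**7))

Note that this simple variant ignores the function arguments, so it only makes sense for tasks whose result doesn’t depend on changing inputs; deleting the cache file forces a recalculation.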

Read More →

Displaying translated ForeignKey objects in Django admin with django-hvad

For multilingual websites built with Django, the extension django-hvad is a must, since it allows you to specify, edit and fetch multilingual (i.e. translatable) objects very easily and is generally well integrated into the Django framework. However, there are some caveats when using django-hvad’s TranslatableModel in the Django model admin backend, especially when dealing with relations to TranslatableModel objects. I want to address three specific problems in this post: First, it’s not possible to display a translatable field directly in a model admin list display. Second, related (and also translated) ForeignKey objects are only displayed by their primary key ID in such a list display. And third, a similar problem exists for the list display filter and the drop-down selection boxes in the edit form.
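To illustrate the first problem, a common workaround is to expose the translated field through a callable on the model admin. The following is only a rough sketch, assuming a hypothetical Article model with a translated title field; safe_translation_getter() is assumed to be available on django-hvad’s TranslatableModel, so check it against your hvad version:

from django.contrib import admin
from hvad.admin import TranslatableAdmin
from .models import Article  # hypothetical TranslatableModel with a translated 'title' field

class ArticleAdmin(TranslatableAdmin):
    list_display = ('title_translated',)

    def title_translated(self, obj):
        # fall back to the primary key if no translation exists in the active language
        return obj.safe_translation_getter('title', str(obj.pk))
    title_translated.short_description = 'Title'

admin.site.register(Article, ArticleAdmin)

The related ForeignKey and filter problems need a bit more work, which the full post goes into.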

Read More →

Accurate Part-of-Speech tagging of German texts with NLTK

Part-of-speech tagging or POS tagging of texts is a technique that is often performed in Natural Language Processing. It allows you to disambiguate words by lexical category, like nouns, verbs, adjectives, and so on. This is useful in many cases, for example in order to filter large corpora of texts for certain word categories only. It is also often a prerequisite of lemmatization.

For English texts, POS tagging is implemented in the pos_tag() function of the widely used Python library NLTK. However, if you’re dealing with other languages, things get trickier. You can try to find a specialized library for your language, for example the pattern library from CLiPS Research Center, which implements POS taggers for German, Spanish and other languages. But apart from this library being only available for Python 2.x, its accuracy is suboptimal — only 84% for German language texts.

Another approach is to use supervised classification for POS tagging, which means that a tagger can be trained with a large text corpus as training data, such as the TIGER corpus from the Institute for Natural Language Processing at the University of Stuttgart. It contains a large set of annotated and POS-tagged German texts. After training with such a dataset, the POS tagging accuracy is about 96% with the mentioned corpus. In this post I will explain how to load a corpus into NLTK, train a tagger with it and then use the tagger with your texts. Furthermore, I’ll show how to save the trained tagger and load it from disk so that it doesn’t have to be re-trained every time you need to use it.
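As a rough outline of those steps, the sketch below reads a CoNLL export of the TIGER corpus with NLTK’s ConllCorpusReader, trains a simple n-gram tagger chain and pickles the result. The file name and column layout are assumptions that depend on the TIGER release you downloaded, and the n-gram chain stands in for a more sophisticated classifier-based tagger, so its accuracy will likely stay below the roughly 96% mentioned above:

import pickle
import random
import nltk
from nltk.corpus.reader import ConllCorpusReader

# read the CoNLL09 export of the TIGER corpus; only the word and POS columns are used
corp = ConllCorpusReader('.', 'tiger_release_aug07.corrected.16012013.conll09',
                         ['ignore', 'words', 'ignore', 'ignore', 'pos'],
                         encoding='utf-8')

tagged_sents = list(corp.tagged_sents())
random.shuffle(tagged_sents)

# hold out 10% of the sentences for evaluation
split = int(len(tagged_sents) * 0.9)
train_sents, test_sents = tagged_sents[:split], tagged_sents[split:]

# train a trigram tagger that backs off to bigram and unigram taggers
tagger = nltk.TrigramTagger(train_sents,
                            backoff=nltk.BigramTagger(train_sents,
                                                      backoff=nltk.UnigramTagger(train_sents)))
print(tagger.evaluate(test_sents))

# save the trained tagger so it doesn't have to be retrained next time
with open('german_pos_tagger.pickle', 'wb') as f:
    pickle.dump(tagger, f)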

Read More →

Autocorrecting misspelled words in Python using HunSpell

When you’re dealing with natural language data, especially survey data, misspelled words occur quite often in free-text answers and might cause problems during later analyses. A fast and easy-to-implement approach to deal with these issues is to use a spellchecker and automatically correct misspelled words. I’ll show how to do this with PyHunSpell, a set of Python bindings for the open source spellchecker engine HunSpell, which is also used in well-known software projects like Firefox and OpenOffice and works with many languages.
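The basic pattern looks roughly like the sketch below. The dictionary paths are assumptions that depend on your operating system and the installed dictionaries, and a real autocorrector would of course pick suggestions more carefully than just taking the first one:

import hunspell  # PyHunSpell exposes its bindings as the 'hunspell' module

# paths to the .dic and .aff files vary per system and language
spellchecker = hunspell.HunSpell('/usr/share/hunspell/en_US.dic',
                                 '/usr/share/hunspell/en_US.aff')

def autocorrect(word):
    # return the word unchanged if it is spelled correctly, otherwise the first suggestion
    if spellchecker.spell(word):
        return word
    suggestions = spellchecker.suggest(word)
    return suggestions[0] if suggestions else word

print(autocorrect('langage'))  # most likely 'language'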

Read More →

Data Mining OCR PDFs – Getting Things Straight

The first article of my series about extracting tabular data from PDFs focused on rather simple cases; cases that allowed us to convert the PDFs to plain text documents and parse the extracted text line by line. We also learned from the first article that the only information we can access in PDFs is the textual data that is distributed across the pages in the form of individual text boxes, which have properties like a position, width, height and the actual content (text). There’s usually no information stored about rows/columns or other table-like structures.

Now in the next two articles I want to focus on rather complicated documents: PDFs that have complex table structures or are even scans of documents that were processed via Optical Character Recognition (OCR). Such documents are often “messy”: someone scanned hundreds of pages and of course sometimes the pages are sloped or skewed and the margins differ. It is mostly impossible to extract structured information from such a messy data source by just converting the PDF to a plain text document as described in the previous article. Hence we must use the attributes of the OCR-processed text boxes (such as the texts’ positions) to recognize patterns in them from which we might infer the table layout.

So the basic goal is to analyse the text boxes and their properties, especially the distribution of their x- and y-coordinates on the page, and see if we can construct a table layout from that, so that we can “fit” the text boxes into the calculated table cells. This is something that I’ll explain in the third article of this series, because before we can do that, we need to clarify some prerequisites, which I’ll do in this article:

  1. What should we pay attention to when using OCR?
  2. How can we extract the text boxes and their properties from a PDF document? (A short sketch follows this list.)
  3. How can we display and inspect the text boxes?
  4. How can we straighten skewed pages?
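Regarding the second question, one possible route is to convert the PDF with pdftohtml -xml from the poppler utilities, which produces one <text> element per text box, including its position attributes. The following sketch parses such an XML file; the file name is an assumption and the full article may use a different toolchain:

from xml.etree import ElementTree

# assumes a prior conversion with: pdftohtml -xml document.pdf document.xml
tree = ElementTree.parse('document.xml')

textboxes = []
for page in tree.getroot().findall('page'):
    page_num = int(page.attrib['number'])
    for text in page.findall('text'):
        textboxes.append({
            'page': page_num,
            'left': int(text.attrib['left']),
            'top': int(text.attrib['top']),
            'width': int(text.attrib['width']),
            'height': int(text.attrib['height']),
            'value': ''.join(text.itertext()).strip(),
        })

print(textboxes[:5])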

Read More →

Data Mining PDFs – The simple cases

Extracting data from PDFs can be a laborious task. When you only want to extract all text from a PDF and don’t care about which text is a headline or a paragraph or how text boxes relate to each other, you won’t have many headaches with PDFs, because this is quite straightforward to achieve. But if you want to extract structured information (especially tabular data) it really gets cumbersome, because unlike many other document formats, PDFs usually don’t carry any information about row-column relationships, even if it looks like you have a table in front of you when you open a PDF document. From a technical point of view, the only information we usually have in PDFs comes in the form of text boxes, which have some attributes like:

  • position in relation to the page
  • width and height
  • font attributes (font family, size, etc.)
  • the actual content (text) of the text box

So there’s no information in the document like “this text is in row 3, column 5” of a table. All we have are the above attributes, from which we might infer a cell position in a table. In a short series of blog posts I want to explain how this can be done. In this first post I will focus on the “simple cases” of data extraction from PDFs, which means cases where we can extract tabular information without the need to calculate the table cells from the individual text box positions. In the upcoming posts I will explain how to handle the harder cases of PDFs: so-called “sandwich” documents, i.e. PDFs that contain the scanned pages from some document together with “hidden” text from optical character recognition (OCR) of the scanned pages.
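For the simple cases, converting the PDF to plain text while preserving the visual layout is often enough, and the text can then be parsed line by line. A minimal sketch of that approach, assuming poppler’s pdftotext is installed and using made-up file names, could look like this:

import subprocess

# convert the PDF to plain text, keeping the original layout of the page
subprocess.run(['pdftotext', '-layout', 'report.pdf', 'report.txt'], check=True)

rows = []
with open('report.txt', encoding='utf-8') as f:
    for line in f:
        line = line.rstrip()
        if not line:
            continue
        # with -layout, table columns are separated by runs of whitespace,
        # so a crude split is often enough for well-behaved documents
        rows.append(line.split())

print(rows[:5])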

Read More →

LATINNO Project Website launched

I’m happy to announce that the website for the LATINNO project was launched this week. The WZB project LATINNO, led by Thamy Pogrebinschi, collects and analyses data on democratic innovations in Latin America since the 1990s. Currently the website provides information about the project, the research design and publications, as well as news related to the project. In the near future, a database of coded cases of innovations will be published for open access.

The website was designed by Caroline della Croce and the frontend was implemented by Benedikt Hebeisen, while the backend and database were implemented by me. This multilingual website is developed in Python with the Django framework. We chose Django because it allows rapid website development, has a clear and well-documented programming model and features an easy-to-use administration backend. We additionally used django-hvad to enable multilingual database content.

Read More →

Reading textual data from CSV and Excel files correctly with pandas

The pandas library is great for data analysis with Python, but it has some caveats and gotchas. One of them is that textual data imported from CSV and Excel files is automatically converted to numeric values when it consists only of digits. This is mostly a nice feature, but sometimes it is not what you want, for example in the case of codes with leading zeros like a FIPS state code. If you have a column with FIPS state codes in your CSV or Excel file, it will show up as an integer series after importing it with pandas, so the FIPS code ’03’ will become the integer 3.

To prevent pandas from doing this, an obvious approach would be to specify the dtype directly so that it doesn’t need to be inferred, but unfortunately this is not supported:

import pandas as pd

df = pd.read_excel('some_excelfile.xls', dtype=object)
# ValueError: The 'dtype' option is not supported with the 'python' engine

It also doesn’t work with other “engines” yet, so we need another solution: Converters. You can pass a dict that specifies a conversion function for each column (either by column index or column name). For example, if we want to have strings instead of numeric values in the columns with indices 3 and 7, we could pass a dict with the conversion function str() like this:

converters = {col: str for col in (3, 7)}
df = pd.read_excel('some_excelfile.xls', converters=converters)

pandas will not guess the data type of the columns where a conversion function is defined but will use the output type of the conversion function, so we will get a series of strings with the leading zeros preserved, as we wanted.
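The same approach works with pd.read_csv(), and the converter columns can also be addressed by name. A small sketch with a made-up file and column name:

import pandas as pd

# keep the FIPS codes as strings instead of letting pandas convert them to integers
converters = {'fips_code': str}
df = pd.read_csv('some_csvfile.csv', converters=converters)

print(df['fips_code'].dtype)  # object, i.e. the codes stay strings like '03'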