Category Archives: Python

Creating and plotting Voronoi regions for geographic data with geovoronoi

Recently, I’ve worked a lot with geospatial data in Python. One thing we needed for our analysis was generating Voronoi regions (or “cells”) from a given set of coordinates inside certain administrative boundaries (a country, a state, etc.). Such regions are interesting for spatial analysis, because any point inside a Voronoi region is closer to that region’s “origin point” (the point the cell was generated from) than to any other cell’s origin. As a practical example: in Melbourne, parents can see which school is closest to their home by looking at an online map of Voronoi regions of schools.

These regions also allow us to estimate a kind of “coverage”: the area of each point’s Voronoi region can be calculated, representing the area theoretically covered by that point. Referring to the Melbourne example: schools at the edge of the city cover a larger area than those in the city center. This approach of course does not take geographic properties into account, so if there is a large lake inside a cell, it is also part of the covered area. Still, Voronoi tessellation is useful when looking at how the shape of the regions changes over time, for example when new schools open or others close. We could then see, for example, whether the coverage of schools in the city center improves over the years while it becomes sparser in rural areas.

So all in all, Voronoi regions can be a very useful tool in spatial data analysis. QGIS provides a tool for Voronoi tessellation, but we needed a more flexible approach that would also fit into our workflow and could be used in our Python scripts. I decided to write a small Python package named geovoronoi that takes a set of points and a boundary object (the geographic shape enclosing the points, e.g. a country boundary) and calculates the Voronoi regions using SciPy. These regions are then “cut” to the enclosing shape (using the excellent shapely package). The resulting Voronoi cells can then be used for further calculations (areas, distances, unions, etc.) and can also be visualized on a map.

The package geovoronoi is now available on PyPI (install it with pip install geovoronoi[plotting]) and the source code is available on the WZB’s GitHub page.
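
A minimal usage sketch, based on geovoronoi’s documented voronoi_regions_from_coords() and plotting helpers; the exact return values differ slightly between package versions (recent versions return dicts mapping region IDs to shapely polygons and point indices), and the toy boundary and random points below are only stand-ins for real data:

    import numpy as np
    import matplotlib.pyplot as plt
    from shapely.geometry import Polygon
    from geovoronoi import voronoi_regions_from_coords
    from geovoronoi.plotting import subplot_for_map, plot_voronoi_polys_with_points_in_area

    # toy boundary and points; in practice the boundary would be a country or
    # state shape (e.g. loaded with geopandas) and the points projected coordinates
    boundary = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
    coords = np.random.uniform(0.5, 9.5, size=(20, 2))

    # compute the Voronoi regions and cut them to the boundary shape
    region_polys, region_pts = voronoi_regions_from_coords(coords, boundary)

    # each region is a shapely polygon, so e.g. its covered area is simply .area
    areas = {region_id: poly.area for region_id, poly in region_polys.items()}

    # plot the regions together with their generating points
    fig, ax = subplot_for_map()
    plot_voronoi_polys_with_points_in_area(ax, boundary, region_polys, coords, region_pts)
    plt.show()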

Vectorization and parallelization in Python with NumPy and Pandas

Modern computers are equipped with processors that allow fast parallel computation at several levels: vector or array operations, which execute the same operation simultaneously on many data elements, and parallel computing, which distributes chunks of data across several CPU cores and processes them in parallel. When working with large amounts of data, it is important to know how to exploit these features because this can reduce computation time drastically. Taking advantage of this usually requires some extra effort during implementation. With packages like NumPy and Python’s multiprocessing module, the additional work is manageable and usually pays off compared to the enormous waiting time you may face when doing large-scale calculations inefficiently.
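
As a small self-contained illustration of both ideas (the chunk count and the number of worker processes below are arbitrary):

    import numpy as np
    from multiprocessing import Pool

    # vectorization: one NumPy expression replaces an explicit Python loop and
    # runs as optimized, compiled code over whole arrays at once
    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    dists_vectorized = np.sqrt(a**2 + b**2)
    dists_loop = np.array([np.sqrt(x**2 + y**2) for x, y in zip(a, b)])  # much slower

    # parallelization: split the data into chunks and process them on several cores
    def chunk_norm(chunk):
        return np.sqrt(chunk[:, 0]**2 + chunk[:, 1]**2)

    if __name__ == '__main__':
        data = np.column_stack([a, b])
        with Pool(processes=4) as pool:
            parts = pool.map(chunk_norm, np.array_split(data, 4))
        dists_parallel = np.concatenate(parts)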

Read More →

Web scraping with automated browsers using Selenium

Web scraping, i.e. automated data mining from websites, usually involves fetching a web page’s HTML document, parsing it, extracting the required information, and optionally following links within this document to other web pages to repeat the process. This approach is sufficient for many websites that display information in a static way, i.e. that do not respond to user interaction dynamically by means of JavaScript. In these cases, web scraping can be implemented with Python packages such as requests and BeautifulSoup. Even interactive elements such as forms can be emulated by observing the HTTP POST and GET data that is sent to the server whenever a form is submitted. However, this approach has limits. Sometimes it is necessary to automate a whole browser in order to scrape JavaScript-heavy websites, as will be shown with a short example in this post.
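
A minimal sketch using the current Selenium API with a headless Chrome instance; the URL and CSS selectors are placeholders, and a matching chromedriver must be installed:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # start a headless Chrome browser
    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    driver = webdriver.Chrome(options=options)

    # placeholder URL for a JavaScript-heavy page
    driver.get('https://example.com/dynamic-results')

    # the browser has executed the page's JavaScript, so the rendered DOM can
    # now be queried just like a static HTML document
    for row in driver.find_elements(By.CSS_SELECTOR, 'table.results tr'):
        print([cell.text for cell in row.find_elements(By.TAG_NAME, 'td')])

    driver.quit()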

Read More →

Topic Model Evaluation in Python with tmtoolkit

Topic modeling is a method for finding abstract topics in a large collection of documents. It makes it possible to discover the mixture of hidden or “latent” topics that varies from document to document in a given corpus. As an unsupervised machine learning approach, topic models are not easy to evaluate since there is no labelled “ground truth” data to compare with. However, since topic modeling typically requires defining some parameters beforehand (first and foremost the number of topics k to be discovered), model evaluation is crucial for finding an “optimal” set of parameters for the given data.

Several metrics exist for this task and some of them will be covered in this post. Furthermore, as calculating many models on a large text corpus is a computationally intensive task, I introduce the Python package tmtoolkit, which utilizes all available CPU cores in your machine by computing and evaluating the models in parallel.
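
A rough sketch of what parallel evaluation with tmtoolkit looks like; the module paths and parameter names follow the package’s current documentation and may differ between versions, and the random document-term matrix is only a stand-in for real data:

    import numpy as np
    from tmtoolkit.topicmod.tm_lda import evaluate_topic_models
    from tmtoolkit.topicmod.evaluate import results_by_parameter

    # stand-in for a real (documents x vocabulary) document-term matrix
    dtm = np.random.randint(0, 5, size=(200, 1000))

    # one LDA model is fitted per parameter set; the models are distributed
    # across all available CPU cores and evaluated in parallel
    varying_params = [{'n_topics': k} for k in range(10, 101, 10)]
    constant_params = {'n_iter': 500, 'random_state': 20180101}

    eval_results = evaluate_topic_models(dtm,
                                         varying_parameters=varying_params,
                                         constant_parameters=constant_params)

    # collect the metric values by number of topics for inspection or plotting
    results_by_k = results_by_parameter(eval_results, 'n_topics')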

Read More →

Geocoding an address and performing point-polygon tests with GDAL/OGR in Python

Suppose you have a list of addresses and want to connect them with some kind of location-based information. For example, your addresses might be scattered across several neighborhoods and you want to find out which neighborhood each address belongs to, because you have further information about each neighborhood (like mean income, percentage of migrants, etc.) and want to combine it with your data. In many countries, administrative authorities gather such geographical information and provide the data on their websites.

In the given scenario, three steps are necessary in order to combine the addresses with geographical information:

  1. Geocoding the address, i.e. finding out the geographical coordinates (latitude, longitude) for this address
  2. Given a file with geographical information (GIS data) that defines several distinct areas as polygons, finding out which of these polygons contains the geocoded address
  3. Obtaining the necessary information, such as a neighborhood identifier, from that polygon

This short post shows how to do that with the Python packages googlemaps and GDAL.
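
A minimal sketch of these three steps; the API key, file name and attribute field are placeholders, and the GIS data is assumed to use WGS84 latitude/longitude coordinates:

    import googlemaps
    from osgeo import ogr

    # 1. geocode the address
    gmaps = googlemaps.Client(key='YOUR_API_KEY')               # placeholder key
    result = gmaps.geocode('Reichpietschufer 50, 10785 Berlin')
    loc = result[0]['geometry']['location']
    lat, lng = loc['lat'], loc['lng']

    # 2. find the polygon that contains the geocoded point
    point = ogr.Geometry(ogr.wkbPoint)
    point.AddPoint(lng, lat)                                    # OGR expects x (lon), y (lat)

    ds = ogr.Open('neighborhoods.shp')                          # placeholder GIS file
    layer = ds.GetLayer()
    for feature in layer:
        if feature.GetGeometryRef().Contains(point):
            # 3. read an attribute from the matching polygon (hypothetical field name)
            print('address lies in neighborhood:', feature.GetField('name'))
            break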

Read More →

Slides on Text Preprocessing and Feature Extraction for Quantitative Text Analysis

I’ve recently given a small workshop on Text Preprocessing and Feature Extraction for Quantitative Text Analysis with Python at the WZB. In the first part, we discussed different methods for normalizing, parsing and filtering the raw input text, like tokenization, Part-of-Speech tagging, stemming and lemmatization. The second part focused on feature extraction, explaining the Bag-of-Words model and the tf-idf approach as prominent examples. Both are the foundation for many text analysis algorithms used in text classification, topic modeling or clustering. The slides emphasize the importance of these processing steps that come before the actual text analysis algorithms are applied, because: garbage in, garbage out.
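
To give a flavor of the feature extraction part, here is a minimal Bag-of-Words and tf-idf illustration using scikit-learn (not taken from the slides themselves; the toy documents are made up):

    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

    docs = ['the cat sat on the mat',
            'the dog chased the cat',
            'dogs and cats are pets']

    # Bag-of-Words: raw term counts per document (sparse documents x vocabulary matrix)
    bow = CountVectorizer()
    counts = bow.fit_transform(docs)

    # tf-idf: re-weight the counts so that terms occurring in many documents
    # (such as "the") contribute less than distinctive terms
    tfidf = TfidfTransformer().fit_transform(counts)

    print(bow.get_feature_names_out())      # vocabulary (scikit-learn >= 1.0)
    print(tfidf.toarray().round(2))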

The explanations on the slides are quite detailed, so I thought putting them online might be informative for others. Here we go:

Slides for Text Processing and Feature Extraction for Quantitative Text Analysis (WZB Python User Group Workshop)

I can recommend the following supplementary resources:

Speeding up NLTK with parallel processing

When doing text processing with NLTK on large corpora, you often need a lot of patience, since even simple methods like word tokenization take quite some time when you’re processing a large amount of text data. This is because NLTK often does not harness the power of modern multicore computers: the code will only run on a single core even if you have four processing cores in your machine. You will need to add parallel processing of your documents yourself. Fortunately, this is quite straightforward to implement with Python’s multiprocessing module, and I will show how to do this in this short post.
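
A minimal sketch of the idea, assuming the required NLTK tokenizer models are already downloaded (the toy documents and the number of worker processes are arbitrary):

    from multiprocessing import Pool
    import nltk

    # nltk.download('punkt')   # required once for word_tokenize

    def tokenize(doc):
        return nltk.word_tokenize(doc)

    if __name__ == '__main__':
        docs = ['This is the first document.', 'And here is another one.'] * 10000
        # distribute the documents across four worker processes and tokenize in parallel
        with Pool(processes=4) as pool:
            tokens = pool.map(tokenize, docs)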

Read More →

Lemmatization of German language text

Lemmatization is the process of finding the base (or dictionary) form of a possibly inflected word — its lemma. It is similar to stemming, which tries to find the “root stem” of a word, but such a root stem is often not a lexicographically correct word, i.e. a word that can be found in dictionaries. A lemma is always a lexicographically correct word.
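
The difference is easy to see with the English tools mentioned further below; a small example using NLTK’s Porter stemmer and WordNet lemmatizer (the wordnet corpus must be downloaded once):

    from nltk.stem import PorterStemmer, WordNetLemmatizer

    # import nltk; nltk.download('wordnet')   # required once for the lemmatizer

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()

    print(stemmer.stem('studies'))          # 'studi'  -- a root stem, not a dictionary word
    print(lemmatizer.lemmatize('studies'))  # 'study'  -- a proper lemma
    print(lemmatizer.lemmatize('books'))    # 'book'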

When using text mining models that depend on term frequency, such as Bag of Words or tf-idf, accurate lemmatization is often crucial, because you might not want to count the occurrences of the terms “book” and “books” separately; you might want to reduce “books” to its lemma “book” so that it is included in the term frequency of “book”.

For English, automatic lemmatization is supported in many Python packages, for example in NLTK (via WordNetLemmatizer) or spaCy. For German, however, I could only find the CLiPS pattern package, which has limited use (e.g. it cannot handle declined nouns) and does not support Python 3. Using the annotated TIGER corpus of the University of Stuttgart, I will try to measure the accuracy of a lemmatizer based on the pattern.de module and will suggest an improved lemmatizer that raises pattern.de’s accuracy by about 10%.

Read More →

Using Django with an existing legacy database

The Django web framework is well suited for creating medium-sized research databases. It allows rapid development of a convenient data administration backend (using the Django Admin Site) as well as appealing frontends for published data (as done in the LATINNO project at the WZB). This works well when you build a database from the ground up by first defining model classes and then letting Django generate the database schema itself (Django models → database schema). Often enough, however, it is necessary to revise an existing database or at least its data administration interface. In this scenario, the database schema is already defined and hence it is necessary to create Django models from the schema (database schema → Django models). Django can handle this situation pretty well, but some advice has to be followed, which I’ll explain here.
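
The starting point for this is Django’s inspectdb management command, which prints model classes inferred from an existing database. A generated model typically looks like the following sketch (the table and field names here are hypothetical):

    # generated with: python manage.py inspectdb > myapp/models.py
    from django.db import models

    class Person(models.Model):
        person_id = models.AutoField(primary_key=True)
        name = models.CharField(max_length=200)
        born = models.DateField(blank=True, null=True)

        class Meta:
            managed = False        # Django migrations will not create or alter this table
            db_table = 'person'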

Read More →

Data Mining OCR PDFs — Using pdftabextract to liberate tabular data from scanned documents

[Image: Detected clusters of vertical lines with pdftabextract]

During the last months, I often had to deal with the problem of extracting tabular data from scanned documents. These documents included quite old sources like catalogs of German newspapers from the 1920s and 30s, as well as newer sources like lists of schools in Germany from the 1990s. All sources were of mixed scanning quality (including rotated or skewed pages) and had very different table layouts. Some had visible table column borders, others only table header borders, so the actual table cells were only visually separated by “white space”. Automated data extraction with ABBYY’s tools or with Tabula failed in most cases. Because of the wide variety of scanning quality and table layouts, a general one-size-fits-all approach didn’t work out.

Hence I created a set of common tools that allow detecting table layouts on scanned pages in OCR PDFs, visually verifying the detected layouts, and finally extracting the data in the tables. To detect and extract the data, I created a Python library named pdftabextract, which is now published on PyPI and can be installed with pip. The detected layouts can be verified page by page using pdf2xml-viewer. This post will cover an introduction to both tools by showing all necessary steps to extract tabular data from an example page. The necessary files can be found in the examples directory of the pdftabextract GitHub repository. A Jupyter Notebook for this example is also available there.
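
The very first steps look roughly like this; the function names follow the package’s example notebook and the file name is a placeholder:

    from pdftabextract.common import read_xml, parse_pages

    # the OCR PDF must first be converted to the "pdf2xml" format, e.g. with
    # poppler-utils:  pdftohtml -c -hidden -xml schoollist.pdf schoollist.xml
    xmltree, xmlroot = read_xml('schoollist.xml')   # placeholder file name

    # parse the text boxes and their positions for each page
    pages = parse_pages(xmlroot)
    print(len(pages), 'pages parsed')

    # from here, the example notebook detects lines in the scanned page images,
    # clusters them into a table grid, verifies the result in pdf2xml-viewer
    # and finally extracts the table cells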

Read More →