Checkboxes and crosses: data mining PDFs with the help of image processing

From time to time, I work with “open data” published by public authorities. Often, these data do not deserve the label “open data”, mainly because they are provided as PDF files. PDFs are not machine readable, at least not without a lot of programming work. I don’t know whether this way of publishing data is done on purpose (because authorities are required to publish open data but do not want it to actually be analyzed at scale) or whether it is sheer ignorance.

For a recent project I came across a particularly nasty type of PDF: scores from a school inspection are listed in a large table where each score is marked with a cross (see a full PDF of such a school inspection).

While most data can be extracted from PDFs by converting them to a plain text representation, this is not possible for such PDFs. This is because the most important information, the scores, is not present in the plain text representation of the PDF. The crosses that mark the scores are essentially vector graphics embedded in the PDF. In this article I will explain how to extract such information.
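
To give an idea of the idea behind it, here is a minimal sketch of one possible approach, assuming the pages are rendered to images with pdf2image and the cross marks are located via OpenCV template matching (the post itself may use a different detection strategy; the file names and the matching threshold are placeholders):

```python
import numpy as np
import cv2
from pdf2image import convert_from_path

# render each PDF page as an image (the DPI value is an arbitrary choice)
pages = convert_from_path('school_inspection.pdf', dpi=200)

# a small image of a single cross mark, cropped manually from one rendered page
template = cv2.imread('cross_template.png', cv2.IMREAD_GRAYSCALE)

for page_num, page in enumerate(pages, 1):
    img = cv2.cvtColor(np.array(page), cv2.COLOR_RGB2GRAY)

    # normalized cross-correlation: high values mark spots that look like the template
    result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= 0.8)   # threshold chosen by inspection

    for x, y in zip(xs, ys):
        # (x, y) is the top-left corner of a detected cross; mapping it to the
        # table's row and column then yields the actual score
        print(f'page {page_num}: cross detected at ({x}, {y})')
```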

Read More →

Tools and packages for geospatial processing with Python

In the social sciences, geospatial data appears quite often. You may have social indicators for different places on earth at different administrative levels, e.g. countries, states or municipalities. Or you may study the spatial distribution of hospitals or schools in a given area, or visualize GPS-referenced data from an experiment. For such scenarios, there’s fortunately a rich supply of open-source tools and packages. As I’ve recently worked quite a lot with geospatial data, I want to introduce some of this software, especially the packages available for the Python programming language.
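
To give a first taste of what working with these packages looks like, here is a small hedged sketch using GeoPandas; the file name and the chosen projection are only placeholders:

```python
import geopandas as gpd

# load administrative boundaries from a shapefile (file name is a placeholder)
states = gpd.read_file('germany_states.shp')

# reproject to a metric CRS (here: ETRS89-LAEA) so areas and distances are in meters
states = states.to_crs(epsg=3035)

# compute the area of each state in km² and draw a simple choropleth map
states['area_km2'] = states.geometry.area / 1e6
states.plot(column='area_km2', legend=True)
```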

Read More →

Three ways of visualizing a graph on a map

When visualizing a network with nodes that refer to geographic places, it is often useful to put these nodes on a map and draw the connections (edges) between them. This way, we can directly see the geographic distribution of the nodes and their connections in our network. This is different from a traditional network plot, where the placement of the nodes depends on the layout algorithm that is used (which may, for example, form clusters of strongly interconnected nodes).

In this blog post, I’ll present three ways of visualizing network graphs on a map using R with the packages igraph, ggplot2 and optionally ggraph. Several properties of our graph should be visualized along with the positions on the map and the connections between them. Specifically, the size of a node on the map should reflect its degree, the width of an edge between two nodes should represent the weight (strength) of this connection (since we can’t use proximity to illustrate the strength of a connection when we place the nodes on a map), and the color of an edge should illustrate the type of connection (some categorical variable, e.g. a type of treaty between two international partners).

Read More →

Slides on practical Topic Modeling: Preparation, evaluation, visualization

I gave a presentation on Topic Modeling from a practical perspective*, using data about the proceedings of plenary sessions of the 18th German Bundestag as provided by offenesparlament.de. The presentation covers preparing the text data for Topic Modeling, evaluating models using a variety of model quality metrics, and visualizing the complex distributions in the models. You can have a look at the slides here:

Probabilistic Topic Modeling with LDA – Practical topic modeling: Preparation, evaluation, visualization

The source code of the example project is available on GitHub. It shows how to perform the preprocessing and model evaluation steps with Python using tmtoolkit. The models can be inspected using pyLDAvis, and some (exemplary) analyses of the data are performed.

* This presentation builds on a first session on the theory behind Topic Modeling

Visualizing graphs with overlapping node groups

I recently came across some data about multilateral agreements that needed to be visualized as network plots. This data had some peculiarities that made it more difficult to create a plot that was easy to understand. First, the nodes in the graph were organized in groups, but each node could belong to multiple groups or to no group at all. Second, there was one “super node” that was connected to all other nodes (while “normal” nodes were only connected within their group). This made it difficult to find a layout that showed the connections between the nodes as well as the group memberships. However, by digging a little deeper into the R packages igraph and ggraph, it is possible to get satisfying results in such a scenario.

Read More →

Creating and plotting Voronoi regions for geographic data with geovoronoi

Recently, I’ve worked a lot with geospatial data in Python. One thing that we needed for our analysis was generating Voronoi regions (or “cells”) from a given set of coordinates inside certain administrative boundaries (a country, a state, etc.). Such regions are interesting for spatial analysis, because any point inside a Voronoi region is closer to that region’s “origin point” (the point the region was generated from) than to any other region’s origin. As a practical example: in Melbourne, parents can see which school is closest to their home by looking at an online map of the Voronoi regions of schools.

These regions also allow us to estimate a kind of “coverage”: for each point’s Voronoi region, the area can be calculated, which represents the area theoretically covered by this point. Referring to the Melbourne example: the schools at the edge of the city cover a larger area than those in the city center. This approach of course does not take geographic properties into account, so if there’s a large lake inside a cell, it is also part of the covered area. Still, Voronoi tessellation is useful when looking at how the shape of the Voronoi regions changes over time, for example when new schools open or others close. We could then see, for example, whether the coverage of schools in the city center improves over the years while it becomes sparser in rural areas.

So all in all, Voronoi regions can be a very useful tool in spatial data analysis. QGIS provides a tool for Voronoi tessellation but we needed a more flexible approach that also fit into our workflow and could be used in our Python scripts. I decided to write a small Python package named geovoronoi that takes a set of points, a boundary object (the geographic shape enclosing the points – e.g. a country boundary) and then calculates the Voronoi regions using SciPy. These regions are then “cut” to the enclosing shape (using the excellent shapely package). The resulting Voronoi cells can then be used for further calculations (areas, distances, unions, etc.) and can also be visualized on a map.

The package geovoronoi is now available on PyPI (install it with pip install geovoronoi[plotting]) and the source code is available on the WZB’s GitHub page.
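
A rough usage sketch is shown below. Note that the exact return values of voronoi_regions_from_coords have changed between geovoronoi versions, so treat this as an illustration rather than a reference; the boundary file and the example coordinates are placeholders:

```python
import numpy as np
import geopandas as gpd
from geovoronoi import voronoi_regions_from_coords

# load an enclosing boundary (e.g. a country shape) and project it to a metric CRS
area = gpd.read_file('country_boundary.geojson').to_crs(epsg=3035)
boundary_shape = area.geometry.unary_union   # a single shapely (multi-)polygon

# some made-up point coordinates (schools, hospitals, ...) in the same CRS
coords = np.array([[4321000, 3210000],
                   [4325000, 3215000],
                   [4330000, 3208000]])

# compute the Voronoi regions and cut them to the boundary shape
region_polys, region_pts = voronoi_regions_from_coords(coords, boundary_shape)

# each region is a shapely polygon, so areas, unions, distances etc. come for free
for region_id, poly in region_polys.items():
    print(region_id, round(poly.area / 1e6, 1), 'km² covered')
```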

Vectorization and parallelization in Python with NumPy and Pandas

Modern computers are equipped with processors that allow fast parallel computation at several levels: vector or array operations, which execute the same operation simultaneously on many data elements, and parallel computing, which distributes chunks of data across several CPU cores and processes them in parallel. When working with large amounts of data, it is important to know how to exploit these features, because this can reduce computation time drastically. Taking advantage of them usually requires some extra effort during implementation. With packages like NumPy and Python’s multiprocessing module, the additional work is manageable and usually pays off compared to the enormous waiting times you may face when doing large-scale calculations inefficiently.
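
To give a flavor of both levels, here is a small sketch with a made-up example computation; the use cases in the post are of course more involved:

```python
import numpy as np
from multiprocessing import Pool

def poly(chunk):
    # the same vectorized expression as below, applied to one chunk of the data
    return np.sum(3 * chunk**2 + 2 * chunk + 1)

if __name__ == '__main__':
    # vectorization: one NumPy expression instead of a Python-level loop
    x = np.random.rand(10_000_000)
    y = 3 * x**2 + 2 * x + 1      # evaluated in optimized C code on the whole array

    # parallelization: split the data into chunks and process them on 4 CPU cores
    chunks = np.array_split(x, 4)
    with Pool(processes=4) as pool:
        total = sum(pool.map(poly, chunks))

    print(y.sum(), total)   # both approaches give (nearly) the same result
```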

Read More →

Slides on Topic Modeling – Background, Hyperparameters and common pitfalls

I just uploaded my slides on probabilistic Topic Modeling with LDA, which give an overview of the theory, the basic assumptions and prerequisites of LDA, and some notes on common pitfalls that often occur when trying out this method for the first time. Furthermore, I added a Jupyter Notebook that contains a toy implementation of the Gibbs sampling algorithm for LDA with lots of comments and plots to illustrate each step of the algorithm.

Web scraping with automated browsers using Selenium

Web scraping, i.e. automated data mining from websites, usually involves fetching a web page’s HTML document, parsing it, extracting the required information, and optionally following links within this document to other web pages to repeat this process. This approach is sufficient for many websites that display information in a static way, i.e. that do not respond to user interaction dynamically by means of JavaScript. In these cases, web scraping can be implemented with Python packages such as requests and BeautifulSoup. Even interactive elements such as forms can be emulated by observing the HTTP POST and GET data that is sent to the server whenever a form is submitted. However, this approach has limits. Sometimes it is necessary to automate a whole browser in order to scrape JavaScript-heavy websites, as will be shown with a short example in this post.
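
As a minimal hedged example of the browser automation approach (the URL and the CSS selector are placeholders; this uses current Selenium 4 syntax, which differs slightly from older releases):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()    # requires geckodriver; Chrome works analogously
try:
    driver.get('https://example.com/js-heavy-page')

    # wait until the JavaScript-rendered content actually appears in the DOM
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, '.result-item'))
    )

    # from here on, extract the data just like from a static page
    for item in driver.find_elements(By.CSS_SELECTOR, '.result-item'):
        print(item.text)
finally:
    driver.quit()
```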

Read More →

Topic Model Evaluation in Python with tmtoolkit

Topic modeling is a method for finding abstract topics in a large collection of documents. With it, it is possible to discover the mixture of hidden or “latent” topics that varies from document to document in a given corpus. As an unsupervised machine learning approach, topic models are not easy to evaluate since there is no labelled “ground truth” data to compare with. However, since topic modeling typically requires defining some parameters beforehand (first and foremost the number of topics k to be discovered), model evaluation is crucial in order to find an “optimal” set of parameters for the given data.

Several metrics exist for this task and some of them will be covered in this post. Furthermore, as calculating many models on a large text corpus is a computationally intensive task, I introduce the Python package tmtoolkit, which can utilize all available CPU cores in your machine by computing and evaluating the models in parallel.
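
The basic workflow might look like the following sketch. The function names follow recent tmtoolkit versions and may differ in older releases, and the random document-term matrix merely stands in for real preprocessed data:

```python
import numpy as np
from scipy.sparse import csr_matrix
from tmtoolkit.topicmod.tm_lda import evaluate_topic_models
from tmtoolkit.topicmod.evaluate import results_by_parameter
from tmtoolkit.topicmod.visualize import plot_eval_results

# a random document-term matrix as a stand-in for a real preprocessed corpus
dtm = csr_matrix(np.random.randint(0, 5, size=(500, 2000)))

# one model per candidate number of topics k; all other parameters stay fixed
varying_params = [{'n_topics': k} for k in range(20, 121, 10)]
constant_params = {'n_iter': 1000, 'random_state': 20180523}

# compute and evaluate the models in parallel across all available CPU cores
eval_results = evaluate_topic_models(dtm,
                                     varying_parameters=varying_params,
                                     constant_parameters=constant_params)

# arrange the metric values by number of topics and plot them
results_by_k = results_by_parameter(eval_results, 'n_topics')
plot_eval_results(results_by_k)
```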

Read More →