I recently published a major update for the Python tmtoolkit package for text mining and topic modeling. Since it is a fairly large research software package, I’m using a Continuous Integration (CI) system for automated testing on different platforms. This system makes sure that every code update that is pushed to the software repository is automatically checked by running the test suite on all three major operating systems (Linux, macOS, Windows). For the recent update of tmtoolkit, I decided to move the CI system from Travis CI to GitHub Actions (GHA), since GHA is directly integrated into GitHub and easy to set up. Still, there are some obstacles to overcome, so this short post shows how to set up GHA for a Python project with a few extra requirements, such as installing system packages on the test runner machine or running tests with tox and hypothesis.
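To give an idea of what such a setup looks like, here is a minimal sketch of a GHA workflow file. The Python versions, the example system package and other details are illustrative assumptions, not tmtoolkit’s actual configuration:

```yaml
# .github/workflows/tests.yml – minimal sketch; details are assumptions
name: run tests

on: push

jobs:
  tests:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ['3.8', '3.9', '3.10']
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: install system packages (Linux runner only)
        if: runner.os == 'Linux'
        run: sudo apt-get update && sudo apt-get install -y libgl1   # example system dependency
      - name: run test suite via tox
        run: |
          pip install tox
          tox
```

The matrix lets GHA spawn one runner per OS/Python combination, so a single push triggers the whole test grid.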
Batch transfer GitLab projects with the GitLab API
This is a bit off-topic to be filed under DevOps / workflow automation, but I still wanted to share it: We use GitLab at the WZB for collaborative software development and project management, and I recently had to transfer all my GitLab projects to a GitLab group.[1] Since transferring a personal project to a group is not something that is done regularly, it’s quite hidden in the project settings and involves a lot of steps. Transferring a project manually with the GitLab web interface means visiting the project page, navigating to the “transfer project” pane in its advanced settings, selecting the group, clicking “Transfer group” and typing a confirmation string. Nobody wants to do this manually with more than a handful of projects. Luckily, GitLab comes with its own well-documented REST API, which can save us a lot of time by letting us automate such tedious tasks (see the sketch after the footnote below).
Footnotes
[1] In case you don’t know GitLab: It’s similar to GitHub, but open-source, and you can install your own instance on your server so that all your data stays within your organization’s IT realm. That’s better for data protection and customizability, and you’re less dependent on the services of an external company.
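To illustrate the approach, here is a minimal sketch using the requests package and the project transfer endpoint of the GitLab REST API. The instance URL, access token and group ID are placeholders, and pagination of the project list is omitted for brevity:

```python
# Minimal sketch: batch-transfer personal GitLab projects to a group via the
# GitLab REST API. URL, token and group ID are placeholders.
import requests

GITLAB_URL = 'https://gitlab.example.com'            # your GitLab instance (assumption)
API = GITLAB_URL + '/api/v4'
HEADERS = {'PRIVATE-TOKEN': '<your-access-token>'}   # personal access token with "api" scope
TARGET_GROUP_ID = 42                                 # ID of the target group (assumption)

# fetch all projects owned by the authenticated user (pagination omitted)
projects = requests.get(API + '/projects', headers=HEADERS,
                        params={'owned': 'true', 'per_page': 100}).json()

for proj in projects:
    print('transferring', proj['path_with_namespace'])
    # transfer the project to the target group's namespace
    resp = requests.put(API + f"/projects/{proj['id']}/transfer",
                        headers=HEADERS, params={'namespace': TARGET_GROUP_ID})
    resp.raise_for_status()
```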
Robust web scraping or web API based data collection
There are thousands of articles on the web about web scraping and accessing web APIs. Most of them show you how to extract information from specific elements on a web page or how to communicate with a specific API in order to collect data. For smaller data collection projects, this knowledge may be sufficient, but large-scale data collection that must run reliably for days or even weeks brings up additional problems, mainly concerning the robustness of the data collection process. I will try to tackle some of these problems in this post. I will use examples in Python, but the basic concepts can easily be translated to R or other programming languages.
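As a first example of what robustness means in practice, here is a minimal sketch of a fetch function with retries and exponential backoff; the retry limit and timeout are illustrative choices:

```python
# Minimal sketch of a robust fetch with retries and exponential backoff;
# retry limit and timeout are illustrative choices.
import time
import requests

def fetch(url, max_retries=5, timeout=15):
    """Fetch `url`, retrying with exponential backoff on network errors
    and server-side errors (HTTP 5xx)."""
    for attempt in range(max_retries):
        try:
            resp = requests.get(url, timeout=timeout)
            if resp.status_code < 500:      # don't retry on client errors
                return resp
        except requests.RequestException as exc:
            print(f'attempt {attempt + 1} failed: {exc}')
        time.sleep(2 ** attempt)            # back off: 1s, 2s, 4s, ...
    raise RuntimeError(f'giving up on {url} after {max_retries} attempts')
```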
Spiegel Online news topics and COVID-19 – a topic modeling approach
I created a project to showcase topic modeling with the tmtoolkit Python package: I use a corpus of articles from the German online news website Spiegel Online (SPON) to create a topic model for the periods before and during the COVID-19 pandemic. This topic model is then used to analyze the volume of media coverage regarding the pandemic and how it changed over time.
National daily infection numbers clearly drive the volume of media coverage on COVID-19 during the observation period (January 2020 to the end of August 2020) on SPON, which is probably not very surprising. Even though infection rates increased dramatically worldwide in summer 2020 (e.g. in Brazil, India and the USA), media coverage first decreased and then stayed at a moderate level, indicating that SPON doesn’t respond as much to rising infection rates at the international level.
You can have a look at the report here. All scripts are available in the GitHub repository.
Using Google Places data to analyze changes in mobility during the COVID-19 pandemic
During the COVID-19 pandemic, it’s apparent that location data gathered by private IT companies and telcos is a primary source for many studies about the effect of mobility restrictions on people’s behaviors and movements. In this blog post, I’d like to have a look at the “popular times” data provided by Google Places. I explain the limitations of this data, show how to gather it and provide some results from data that I fetched during March and April.
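To give a rough idea of the gathering process: since popular times data is not part of the official Places API, a collector typically polls each place at regular intervals and appends the current popularity value to a file. The following is only a sketch; fetch_current_popularity() is a hypothetical stand-in for the actual scraping code:

```python
# Rough sketch of a polling loop for "popular times" style data. The helper
# fetch_current_popularity() is hypothetical and stands in for the actual
# scraping code – Google offers no official API for this data.
import csv
import random
import time
from datetime import datetime

PLACE_IDS = ['place-id-1', 'place-id-2']         # placeholder place identifiers

def fetch_current_popularity(place_id):
    """Hypothetical stand-in: return the current popularity (0-100) for a place."""
    return random.randint(0, 100)                # dummy value for demonstration

with open('popularity.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    for _ in range(24):                          # e.g. collect for 24 hours
        now = datetime.now().isoformat()
        for pid in PLACE_IDS:
            writer.writerow([now, pid, fetch_current_popularity(pid)])
        f.flush()
        time.sleep(3600)                         # poll once per hour
```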
Property based testing for scientific code in Python
Automated software testing starts with the often annoying and time-consuming process of writing tests. But no matter how annoying it is, in the end it always pays off, at least in my experience. For this article, I assume that the reader acknowledges the importance of automated software testing, because I would like to point to a way to write better tests in less time by using property-based testing.
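As a small taste of what this looks like with the hypothesis package, here is a minimal sketch; the function under test is a made-up example:

```python
# Minimal sketch of a property-based test with hypothesis: instead of fixed
# example inputs, we state a property that must hold for *any* valid input.
from hypothesis import given
from hypothesis import strategies as st

def normalize(values):
    """Scale a list of positive numbers so that they sum to 1."""
    total = sum(values)
    return [v / total for v in values]

@given(st.lists(st.floats(min_value=0.001, max_value=1000), min_size=1))
def test_normalize_sums_to_one(values):
    result = normalize(values)
    assert abs(sum(result) - 1) < 1e-9     # the defining property
    assert all(v >= 0 for v in result)     # no negative shares
```

Instead of hand-picking a few inputs, hypothesis generates hundreds of lists of floats and, on failure, shrinks them to a minimal counterexample.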
Lab report: Development of school sites in eastern Germany
I wanted to share a small lab report on a project about the development of school sites in eastern Germany since 1992. Rita Nikolai (HU Berlin), Marcel Helbig (WZB) and I published our results a few months ago (see this WZB Discussion Paper or this WZBrief), but I’d like to provide some additional information on the (technical) background in this post as this was not the aim of the mentioned papers.
Checkboxes and crosses: data mining PDFs with the help of image processing
From time to time, I work with “open data” published by public authorities. Often, these data do not deserve the label “open data”, mainly because they are provided as PDF files. PDFs are not machine readable, at least not without a lot of programming work. I don’t know if this way of publishing data is done on purpose (because authorities are requested to publish open data but do not want it to actually be analyzed at scale) or if it is sheer ignorance.
For a recent project I came across a particularly nasty type of PDF: scores from a school inspection are listed in a large table where each score is marked with a cross (see a full PDF for such a school inspection).
While most data can be extracted from PDFs by converting them to a plain text representation, this is not possible for such PDFs, because the most important information, the scores, does not exist in the plain text representation. The crosses that mark the scores are essentially vector graphics embedded in the PDF. In this article I will explain how to extract such information.
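The basic idea can be sketched as follows: render the PDF page to an image and measure the amount of “ink” inside each table cell where a cross could sit; the darkest cell in a row marks the score. Here is a minimal sketch with PyMuPDF and Pillow, where the cell coordinates are made-up placeholders (in practice they would come from detecting the table grid):

```python
# Sketch: detect which cell of a table row contains a cross by rendering the
# PDF page and measuring mean darkness per candidate cell. Coordinates are
# made-up placeholders; in practice they come from detecting the table grid.
import fitz               # PyMuPDF
import numpy as np
from PIL import Image

doc = fitz.open('inspection_report.pdf')         # placeholder file name
page = doc[0]
pix = page.get_pixmap(matrix=fitz.Matrix(3, 3))  # render at 3x resolution
img = Image.frombytes('RGB', (pix.width, pix.height), pix.samples)
gray = np.asarray(img.convert('L'))              # grayscale pixel matrix

# candidate cells of one table row as (left, top, right, bottom) pixel boxes
cells = [(100, 200, 140, 240), (150, 200, 190, 240), (200, 200, 240, 240)]

darkness = [255 - gray[top:bottom, left:right].mean()
            for (left, top, right, bottom) in cells]
score = int(np.argmax(darkness))   # index of the cell holding the cross
print('cross detected in cell', score)
```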
Tools and packages for geospatial processing with Python
In the social sciences, geospatial data appears quite often. You may have social indicators for different places on earth at different administrative levels, e.g. countries, states or municipalities. Or you may study the spatial distribution of hospitals or schools in a given area, or visualize GPS-referenced data from an experiment. For such scenarios, there’s fortunately a rich supply of open-source tools and packages. As I’ve worked quite a lot with geospatial data recently, I want to introduce some of this software, especially packages available for the Python programming language.
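As a small appetizer, here is a minimal sketch with GeoPandas, one of the central packages in this ecosystem; the file name and target CRS are placeholder assumptions:

```python
# Minimal sketch with GeoPandas: load a vector dataset, reproject it and
# compute areas. File name and target CRS are placeholder assumptions.
import geopandas as gpd

gdf = gpd.read_file('municipalities.shp')   # any vector format OGR can read
print(gdf.crs)                              # the dataset's coordinate reference system

# reproject to a metric CRS so that area calculations are meaningful
gdf = gdf.to_crs(epsg=25832)                # ETRS89 / UTM zone 32N (common for Germany)
gdf['area_km2'] = gdf.geometry.area / 1e6

gdf.plot(column='area_km2', legend=True)    # quick choropleth map
```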