Read time: approx. 9 min

How App Testing Could Have Prevented the U.S. Democratic Election Debacle in Iowa

Dominik

The failure of Shadow Inc.'s IowaReporter app during the Democratic caucuses on 3 February 2020 is a lesson for democracies and app developers alike. Our conclusion can be stated up front: the debacle could have been prevented with sufficient testing.


Iowa: What happened there?

The company Shadow Inc. had already supplied the campaigns of Barack Obama in 2012 and Hillary Clinton in 2016 with various apps and had built its reputation on that work. What it lacked up to that point, however, was expertise in digital election apps. For the Democratic caucuses, it developed an app to count and analyse the votes in the U.S. state of Iowa.
Then came the disaster on caucus night: in many places, the count simply did not work. The problems started with downloading the app, continued with missing support for different device types, and ended with a defective user interface that produced error messages at various points. Fortunately, the data itself was at least not affected. What exactly went wrong has since been broken down in detail by several specialised media outlets.

The distribution

The app was offered for download as a beta version via the test platforms TestFlight and TestFairy. For a production rollout, that is a no-go: these platforms exist to release an app to beta testers, i.e. technically skilled users. Here, however, the app was used by election officials in the precincts to scan the counted votes so that they could be analysed centrally. For these officials, the ordeal began with the download via those platforms, just a few hours before the time-critical caucuses. The cryptic login process ended in error messages for several election officials, and later more error messages appeared when the data was sent.


The development

The state of the app was beta at best. Experts who examined the code afterwards found that it looked as if it had been written by following a tutorial, an embarrassing verdict when something like that comes to light. Moreover, when the trade publication Motherboard installed the Android APK on two different Android devices, it ran on only one of them. According to the Democrats, the app had been tested in advance by relevant experts, but to what extent remains unclear.


The lack of tests

Dan McFall, CEO of the testing company Mobile Labs, told TechCrunch that this reflects a fundamental problem in the industry: mobile apps are harder to develop than many people think. Developers sometimes set themselves overly ambitious goals, and in order to meet deadlines, they neglect testing. The result: chaos. Shlomo Argamon of the Illinois Institute of Technology confirmed as much to the Wall Street Journal: tech start-ups tend to run many projects at once, trying to solve problems as they go, and so hardly plan any time for testing.
In our experience, some developers then declare this approach to be Agile. However, as we have already pointed out in our series on Agile Testing, Agile Development is a way of working that only functions if it also involves Agile Testing; otherwise, Agile Development just means trying to do everything at once. For the IowaReporter app, that would have meant testing with the end users themselves, and with testers who had been briefed on the end users' use cases. A minimal sketch of such a use-case test follows below.
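
To make that concrete: a use-case test scripts the exact workflow a precinct official would go through on caucus night, rather than checking components in isolation. The following is a minimal sketch in Python; the app fixture and its methods are hypothetical stand-ins for whatever UI-automation layer (Appium, Espresso and the like) a project actually uses, not a real API.

```python
def test_precinct_official_reports_results(app):
    """End-user use case: log in, scan the tally sheet, submit the results.

    `app` is a hypothetical test fixture wrapping a UI-automation driver;
    the method names below are illustrative, not a real API.
    """
    # Step 1: the cryptic login was the first hurdle in Iowa.
    app.log_in(user="precinct_official_42", pin="ONE-TIME-PIN")

    # Step 2: the core task, scanning the counted votes.
    app.scan_tally_sheet("fixtures/sample_tally.jpg")

    # Step 3: submitting the data, where further errors appeared in Iowa.
    receipt = app.submit_results()

    # The official must see an unambiguous confirmation, not an error message.
    assert receipt.status == "accepted"
```

A tester who has been briefed on this workflow will notice what a unit test never can: that the login is cryptic, that the confirmation is ambiguous, or that the scan step fails on the device the official actually holds in their hand.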


Conclusion: The importance of tests

Anyone who develops data-intensive, time-critical and socially relevant apps cannot afford to cut corners. App developers, regardless of the field of application, are under ever-increasing pressure to succeed, in terms of both time and resources. That makes it all the more important to ensure the product actually works for its users. If there is one lesson to be learned from the U.S. Democrats' pre-election disaster, it is this: never neglect the testing phase.
Testing is part of app development from the very first step. And that means not just simulations and automated usage tests, but tests with real people, in real use cases, and ideally on a whole range of real devices from different manufacturers, running different OS versions.
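
How might "a whole range of real devices" look in practice? One common pattern is to run the same use-case test once per entry in a device matrix. Here is a hedged pytest sketch building on the test above; the device list and the launch_app_on helper are hypothetical, standing in for a real device farm or emulator API.

```python
import pytest

# Illustrative device matrix; a real project would source this from a device farm.
DEVICE_MATRIX = [
    ("Samsung Galaxy S9", "Android 9"),
    ("Google Pixel 3", "Android 10"),
    ("Motorola Moto G7", "Android 9"),
    ("iPhone 8", "iOS 13"),
]

@pytest.fixture(params=DEVICE_MATRIX, ids=lambda d: f"{d[0]} / {d[1]}")
def app(request):
    """Provide the app under test, launched on each device in the matrix."""
    model, os_version = request.param
    driver = launch_app_on(model, os_version)  # hypothetical device-farm helper
    yield driver
    driver.quit()
```

Run under pytest, the use-case test from the previous section then executes once per device and OS version, which is precisely the kind of coverage that would have caught an APK running on one Android phone but not on another.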