Big data and data science are transforming organizational decision-making. We increasingly defer decisions to algorithms because machines have earned a reputation for outperforming us. As algorithms become embedded within organizations, they grow more influential and increasingly opaque. Those who create algorithms may make arbitrary decisions at every stage of the ‘data value chain’, yet these subjectivities are obscured from view. Algorithms come to reflect the biases of their creators, can reinforce established ways of thinking, and may favour some political orientations over others. This is a cause for concern and calls for more transparency in the development, implementation, and use of algorithms in public- and private-sector organizations. We argue that one elementary – yet key – question remains largely undiscussed: if transparency is a primary concern, then to whom should algorithms be transparent? We consider algorithms as socio-technical assemblages and conclude that without a critical audience, algorithms cannot be held accountable.
Society collects more data than ever before. Our databases contain emails, videos, audio recordings, images, click streams, logs, posts, search queries, health records, and more. The abundance of available data and the decreasing cost of computing capability lead to the digitization and automation of public- and private-sector decision-making. Application areas in government range from traffic management and public-sector budgeting to food safety monitoring and cyber security. The private sector has also taken to the algorithm, finding applications from e-commerce to logistics. Some algorithms, such as profiling systems, are used in both contexts. Examples of algorithms that people encounter include Google’s PageRank algorithm, which serves us relevant search results; Spotify’s weekly music recommendation algorithm; and dynamic pricing models that try to maximize the amount we pay for goods and services.