Cooperative robotics and the social side to (big data) modelling

June 14, 2015

In a recent podcast by The Economist, Jason Palmer and Ken Cukier reported on something called “cooperative robots”: an exciting new breed of robots designed to make the most of the respective strengths of human and machine. Robots, for instance, can be extremely precise, while people are adept at creative problem solving. The concept of cooperative robotics is particularly exciting because traditional robotics has focused on mimicking the human brain (artificial intelligence) rather than on having robot and human complement each other. It also provides a strong metaphor for thinking about other areas in which people turn to computers for support. In this article, I draw on cooperative robotics to illustrate some of the challenges associated with the use of computer models and (big) data analytics more generally.

The idea of a cooperative robot originates in the field of intelligence augmentation, a domain of scientific inquiry that has developed separately from the better-known realm of artificial intelligence. Researchers in intelligence augmentation use machines to improve the way people complete tasks. Examples of such augmentations include the computer mouse and graphical user interfaces (like Microsoft Windows).

By transplanting these ideas to the field of robotics, Austin Gregg-Smith and Walterio Mayol-Cuevas have created a cooperative hand-held robot. A press release on the University of Bristol’s website explains:

Handheld robots, aim to share physical proximity with users but are neither fully independent as is a humanoid robot nor are part of the user’s body, as are exoskeletons. The aim with handheld robots is to capitalise on exploiting the intuitiveness of using traditional handheld tools while adding embedded intelligence and action to allow for new capabilities.

A YouTube video demonstrates the robot’s extraordinary capabilities. As a child, I used to get utterly frustrated by the wire-loop game, which appears to be a piece of cake for the human-robot team (0:50). The podcast mentions more productive applications of the cooperative robot, such as cleaning an operating room to a level of perfection that a human could never deliver. When you think about it, any task which requires both precision or strength and intuition or creativity could benefit from this newfound collaboration.

Now, augmenting human capabilities with computers may be new to robotics, but it has been around for quite a while in other fields. One of those fields is decision making. Ever since the seminal paper by Tversky and Kahneman, the limits and biases of human cognition have received a lot of attention and are now relatively well understood. An example is confirmation bias: our tendency to discredit information that does not support our beliefs. In a somewhat humbling paper, Robyn Dawes demonstrated that, for exactly these reasons, even faulty computer models can outperform a human decision maker. An inaccurate model is still superior because it is more consistent and better at integrating different sources of information.
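To make that point concrete, here is a minimal sketch, in Python and with made-up numbers, of the kind of “improper” unit-weighted linear model Dawes wrote about: every cue gets the same weight after standardisation, so the model’s only real advantage over a human judge is that it applies the same rule to every case.

```python
# A minimal, illustrative sketch of an "improper" unit-weighted linear model.
# The applicants and cues below are hypothetical, purely for illustration.
import numpy as np

# Hypothetical applicants scored on three cues (test score, GPA, interview rating)
candidates = np.array([
    [620, 3.1, 7.0],
    [710, 3.8, 5.5],
    [580, 3.4, 8.0],
    [690, 2.9, 6.5],
])

# Standardise each cue, then sum them with equal (unit) weights.
z = (candidates - candidates.mean(axis=0)) / candidates.std(axis=0)
unit_weighted_score = z.sum(axis=1)

# Rank the applicants from best to worst according to the model. Unlike a
# human judge, the model never gets tired, distracted or swayed by one
# vivid detail: it integrates all three cues the same way every time.
print(np.argsort(-unit_weighted_score))
```

The point is not that these particular weights are right; Dawes’s finding was that almost any consistent weighting of the relevant cues tends to beat unaided human judgement.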

More recent work has shown that augmenting computer models with expert opinion improves the accuracy of economic forecasts. This approach combines the precision and information-processing power of computers with the creativity and intuition of the human expert. Yet despite the benefits attributed to model-aided decision making, there have been cases of “modelling gone wrong” around the world (in the Netherlands and the United Kingdom, for example). So what has led such computer-human cooperation astray?

One reason may be that, unlike the successful cooperative robot, computer models tend to be used in a social setting. That is, computer models are rarely operated directly by the person who ultimately makes a decision. The outcomes of computer models, whether augmented with expert opinion or not, are mediated through a series of human-to-human interactions. In a professional variation on the game of Chinese Whispers, model outcomes tend to get simplified as they are communicated up the chain of command.

The Red River Flood at Grand Forks (Wikipedia)

Such simplification is particularly concerning because, unlike the cooperative robot, models are representations of our world. The caveats and limitations that get dropped at each stop along the way could have stark implications for how we choose to intervene. Such was the case in the events leading up to the Red River Flood in the United States in 1997, when misinterpretation of a computer model resulted in inadequate preparation and response from the government.

This is not to argue that the use of computer models in a social setting is hopeless. On the contrary, such models can help by bringing diverse expertise together and by allowing a variety of stakeholders to be included in the decision-making process. Moreover, by using models for scenario analysis, decision makers can experiment with intended policies before moving on to costly real-world experiments.

The take-home point is that our understanding of the social processes of model use is in its infancy. Compared with the resources and research effort devoted to developing new analysis techniques, our understanding of how models are used in practice lags far behind. This presents a hurdle to the effective implementation and use of new techniques. Fortunately, social studies of model use are gaining momentum and have yielded some surprising results. For instance, the sociologist Donald MacKenzie has shown that financial models can actually affect the way financial markets work. In this view, models are not just representations of our world; they actually contribute to shaping it.

The rise of big data (Wikipedia)

Such work demonstrates that we are only beginning to understand the social side of computer modelling. So although the application of intelligence augmentation to robotics represents the very cutting edge of science, it may actually be fraught with fewer challenges than model-assisted decision making. Even though we have been using models for years, our understanding of how computer models are used has progressed little. Considering the ongoing increase in computer processing power and the related trend of (big) data analytics, improving our understanding of the social side of computer modelling is both urgent and important.
