WHEN ROBOTS COLLUDE: Computers are adopting a legally questionable means to crush the competition
Two law professors, Ariel Ezrachi of Oxford and Maurice E. Stucke of the University of Tennessee, have a new working paper on how, when computers get involved in pricing goods and services (like, say, at Amazon or Uber), the potential for collusion is even greater than when humans set the prices.
Computers can't have a back-room conversation to fix prices, but they can predict how other computers will behave. And with that information, they can effectively cooperate with one another to advance their own profit-maximizing interests. Ezrachi and Stucke explain:
Computers may limit competition not only through agreement or concerted practice, but also through more subtle means. For example, this may be the case when similar computer algorithms promote a stable market environment in which they predict each other's reaction and dominant strategy. Such a digitalised environment may be more predictable and controllable. Furthermore, it does not suffer from behavioral biases and is less susceptive to possible deterrent effects generated through antitrust enforcement.
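To see how that can happen with no agreement anywhere in sight, here's a minimal sketch (mine, not the professors'): two repricing bots follow the same publicly observable rule, matching any undercut immediately and otherwise edging back up toward a high price. All the names and numbers are made-up illustrations.

```python
# A minimal sketch (not from the paper): two repricing bots share one
# deterministic rule -- match any undercut at once, otherwise climb back
# toward a high "focal" price. They never communicate, yet prices hold
# at the high level because each bot can predict the other's reply.

COST = 1.0           # illustrative marginal cost
FOCAL_PRICE = 10.0   # illustrative high price both rules drift toward
STEP = 1.0

def reprice(mine, rivals):
    """Deterministic repricing rule used by both sellers."""
    if rivals < mine:
        return max(rivals, COST)          # match an undercut immediately
    return min(mine + STEP, FOCAL_PRICE)  # otherwise edge back up

a = b = FOCAL_PRICE
for t in range(8):
    if t == 2:
        a = FOCAL_PRICE - 3.0   # seller A tries a one-off undercut
    else:
        a = reprice(a, b)       # bots fire sequentially, as real repricers do
    b = reprice(b, a)
    print(f"round {t}: A={a:.2f}  B={b:.2f}")
```

Run it and A's undercut in round 2 gets matched the same round; both bots climb back to 10.00 within a few rounds. Because each bot can compute the other's response in advance, cutting prices is predictably unprofitable, and the high price holds without a single line of code that says "collude."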
The problem is that the law hasn't caught up to the technology. The first prosecution for this type of collusion wrapped up just last month. More frighteningly, it isn't clear whether the law can ever catch up.
Sometimes a computer is just a tool that helps humans collude, conduct that can, at least in theory, be prosecuted. But sometimes, the authors find, the computer learns to collude on its own. Can a machine be prosecuted?
In a type of algorithmic collusion the authors call Autonomous Machine, "the computer executes whichever strategy it deems optimal, based on learning and ongoing feedback collected from the market. Issues of liability, as we will discuss, raise challenging legal and ethical issues."
How does antitrust law punish a computer? If an algorithm isn't programmed to collude, but ends up doing so independently through machine learning, it isn't clear that the law can stop it.
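To make that concrete, here's a toy sketch of my own (the paper contains no code): two independent Q-learners pick a HIGH or LOW price each round, seeing only their own profits and the previous round's prices. The payoffs, exploration schedule, and every other detail are assumptions for illustration, and whether the pair ends up resting at the high price varies from run to run. The point is that nothing in the code instructs them to coordinate, so if they do, the collusion was learned.

```python
import random

# Toy sketch (my construction, not the authors' model): two independent
# Q-learners set prices in a repeated game. Neither is programmed to
# collude; any coordination that shows up emerges from learning alone.

HIGH, LOW = 0, 1
# PROFIT[my_price][rival_price] -- illustrative payoffs, not from the paper
PROFIT = [[10, 0],   # I price HIGH: 10 if rival also HIGH, 0 if undercut
          [12, 2]]   # I price LOW:  12 if rival HIGH, 2 if both price LOW

ALPHA, GAMMA = 0.1, 0.9   # learning rate and discount factor

def make_agent():
    # Q[state][action], where state = (my last price, rival's last price)
    return {(m, r): [0.0, 0.0] for m in (HIGH, LOW) for r in (HIGH, LOW)}

def choose(q, state, eps):
    # epsilon-greedy: explore at random, otherwise take the learned best price
    if random.random() < eps:
        return random.choice((HIGH, LOW))
    return 0 if q[state][0] >= q[state][1] else 1

def train(rounds=200_000, seed=0):
    random.seed(seed)
    qa, qb = make_agent(), make_agent()
    sa = sb = (HIGH, HIGH)
    for t in range(rounds):
        eps = max(0.01, 1.0 - t / (rounds / 2))   # decaying exploration
        pa, pb = choose(qa, sa, eps), choose(qb, sb, eps)
        ra, rb = PROFIT[pa][pb], PROFIT[pb][pa]
        na, nb = (pa, pb), (pb, pa)               # each seller's next state
        for q, s, act, r, n in ((qa, sa, pa, ra, na), (qb, sb, pb, rb, nb)):
            # standard tabular Q-learning update
            q[s][act] += ALPHA * (r + GAMMA * max(q[n]) - q[s][act])
        sa, sb = na, nb
    return qa, qb

qa, _ = train()
names = {HIGH: "HIGH", LOW: "LOW"}
for state, values in sorted(qa.items()):
    best = names[0 if values[0] >= values[1] else 1]
    print(f"after (me={names[state[0]]}, rival={names[state[1]]}) -> {best}")
```

If the policy that comes out says "after (HIGH, HIGH), stay HIGH" and punishes undercuts, the machines have stumbled into exactly the tacit cooperation Ezrachi and Stucke worry about, and no prosecutor can point to a line that was programmed to collude.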
(via Jill Priluck)