A Cornell University-led experiment in which two people played a modified version of Tetris revealed that players who got fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm allocated the turns.
Most studies of algorithmic fairness focus on the algorithm or the decision itself, but the researchers instead set out to explore the relationships among the people affected by those decisions.

“We are starting to see a lot of situations in which AI makes decisions on how resources should be distributed among people,” said Malte Jung, associate professor of information science, whose group conducted the study. “We want to understand how that influences the way people perceive one another and behave towards each other. We see more and more evidence that machines mess with the way we interact with each other.”
In an earlier study, Jung's group had a robot choose which person to give a block to, then examined how each individual reacted to the machine's allocation decisions.
“We noticed that every time the robot seemed to prefer one person, the other one got upset,” said Jung. “We wanted to study this further because, as machines making decisions becomes more a part of the world – whether it be a robot or an algorithm – we wanted to know: how does that make a person feel?”