Kosinski’s latest study was published in Scientific Reports earlier this year. He fed more than 1 million social media profile photos into a facial recognition algorithm, which correctly predicted a person’s self-identified political ideology 72% of the time. Humans got it right only 55% of the time.
Kosinski is an associate professor of organizational behavior at Stanford Graduate School of Business. He considers this not a breakthrough but a wake-up call: he hopes his findings will warn people (and policymakers) about the potential for misuse of a rapidly developing technology.
Kosinski’s latest work builds on his 2018 paper, in which he discovered that one of the most popular facial recognition algorithms could, unbeknownst to its developers, sort people by sexual orientation with shocking accuracy. He recalls, “We were shocked — and terrified — by the results.” The results held when the experiment was repeated with a different set of faces.
The study caused a firestorm. Critics accused Kosinski of engaging in “AI psychology” and of enabling digital discrimination. He replied that they were shooting the messenger: his aim was to publicize the invasive, nefarious, and dangerous potential uses of an already widely deployed technology whose privacy threats remain largely unknown.
He acknowledges that his approach is paradoxical: “Many haven’t yet realized that this technology has dangerous potential. In attempting to quantify that potential, I’m informing the public, journalists, and politicians that, hey, this off-the-shelf technology has dangerous properties. And I recognize that challenge.”
Kosinski insists that he doesn’t develop artificial intelligence tools; he is a psychologist who wants to understand existing technologies and how they can be used for good or for harm. “Our lives have become increasingly affected by algorithms,” says Kosinski. Companies and governments collect our personal information wherever they can, including the private photos we post online.
Kosinski talked to Insights about the controversy surrounding his research and its implications.
What sparked your interest in these issues
In my early work, I showed that our Facebook likes reveal much more about us than we may realize. While browsing Facebook profiles, I began to wonder how much the profile pictures themselves give away. Faces reveal age, gender, and emotions; they also show fatigue and other psychological traits. It turned out that facial recognition algorithms could classify people by intimate traits that are not visible to humans, such as personality and political orientation. At the time, I was shocked by the results.
Given my training as a psychology student, the idea that you could learn a person’s innermost psychological characteristics from their appearance seemed like an old-fashioned form of pseudoscience. But the more I thought about it, the stranger it seemed that we would ever believe our facial features are not linked to our personalities.
We all judge people by their appearance
Yes, of course. Lab studies show that these judgments are made instantly and automatically. Show someone a face and they will form an opinion of that person; they can’t help it. And when you ask test subjects how intelligent, trustworthy, liberal, efficient, or competent a face looks, you get consistent answers.
But these judgments aren’t very accurate. In my study, when subjects were asked to predict people’s political views or sexual orientation from social media photos, only 55% to 60% of their predictions were correct. You’d get 50% accuracy by guessing randomly, so that is a poor result. Studies have shown the same for other traits: the judgments are consistent but often wrong. Still, the fact that they are consistently better than chance shows that there must be some link between faces and personal characteristics.
You discovered that a facial recognition algorithm was much more accurate
Right. In my study on political orientation, the machine was correct 72% of the time. And this was just a standard algorithm running on my laptop. I have no reason to believe that’s the best machines can do.
I want to emphasize that I never trained an algorithm to predict intimate traits, and I don’t think anyone should before regulatory frameworks are in place. What I’ve shown is that free, general-purpose face-recognition programs available online can be used to classify people according to their political beliefs. And they are nowhere near as good as the technology that companies like Google and Facebook already use.
This tells us that an image contains far more information than people can perceive. Computers are much better than humans at recognizing visual patterns across large data sets, and the algorithms’ ability to interpret that information is the real innovation.
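For readers who want a concrete sense of what using an “off-the-shelf” face-recognition program might look like, here is a minimal sketch, not Kosinski’s method or code: a freely available face-embedding library paired with a simple linear classifier. The face_recognition and scikit-learn libraries, the file names, and the labels are all assumptions made purely for illustration.

```python
# Hypothetical sketch only: an off-the-shelf face-embedding library plus a
# plain linear classifier, in the spirit of the pipeline the interview
# describes. Not Kosinski's code; libraries, files, and labels are assumed.
import numpy as np
import face_recognition                      # general-purpose, freely available
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def embed(path):
    """Return a 128-dimensional face descriptor, or None if no face is found."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

# Each entry pairs an image path with a self-reported binary label (e.g. ideology).
# In practice this list would contain many thousands of labeled photos.
photos = [("face_001.jpg", 0), ("face_002.jpg", 1)]  # placeholder examples

X, y = [], []
for path, label in photos:
    vec = embed(path)
    if vec is not None:
        X.append(vec)
        y.append(label)

# A simple logistic regression on the descriptors; its cross-validated accuracy
# would be read against the 50% random-guessing baseline mentioned above.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

The point of the sketch is that nothing in it is specialized for intimate traits: the embedding step is a generic, publicly available tool, and the classifier is the most basic one in a standard machine learning library.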
What happens when you combine this with the ubiquitous nature of cameras in today’s world
That’s the big question. People still feel they can protect themselves by being thoughtful and careful about their online security. But we can’t hide from the closed-circuit television and surveillance cameras that are everywhere, and there is no way to opt in or out of releasing that information. There are also databases of ID photos that authorities could use. This changes the situation dramatically.
Can people wear masks to be more invisible to this algorithm
Most likely not. If you wear a mask, the algorithm will simply base its predictions on your forehead and eyes. If liberals suddenly started wearing cowboy hats, the algorithm might be confused at first, but it would soon realize that cowboy hats carry no meaning for its predictions and adjust accordingly.
The critical point is that even if our faces could be hidden, predictions can still be made from various other data sources: voice recordings, clothing style, purchase records, web browsing logs, and so on.