Horse Sense About AI | Psychology Today

Source: DALL-E 3 / OpenAI

Clever Hans, the educated horse.


Clever Hans was the world’s smartest horse. So it appeared, anyway. Hans and owner Wilhelm von Osten—a retired Berlin math teacher and phrenologist—traveled around Germany giving free demonstrations of the horse’s uncanny intelligence. Hans could understand German, spoken or written, and reply to questions by tapping a hoof. The answer had to be yes-or-no or numerical, indicated by the number of taps. Von Osten would ask the horse questions such as: “If the eighth day of the month comes on a Tuesday, what is the day following Friday?” The horse tapped its hoof 11 times.

Through this language of taps, the horse could tell time, spell, and do algebra. One contemporary report marveled: “He can distinguish between straw and felt hats, between canes and umbrellas.”

Crowds cheered the demonstrations even as skeptics huffed that something must be amiss. The German board of education had psychologist Carl Stumpf organize a committee to investigate. The Hans Commission, as it was called, included the director of the Berlin zoo, a veterinarian, and a circus manager. In 1904, this blue-ribbon panel reported that no trickery was involved.

Stumpf’s assistant, Oskar Pfungst, was not so sure. He conducted his own evaluation and came to a different conclusion: that owner von Osten was cueing the horse with his facial expressions and posture. Pfungst noted that von Osten tensed when the horse was tapping out a number. He relaxed at the final tap of the correct answer. This change in demeanor was effectively the horse’s cue to stop tapping.

Pfungst found that the horse was unable to answer correctly when it couldn’t see von Osten or when von Osten didn’t know the correct answer. In Pfungst’s analysis, published in 1907, von Osten was probably not a charlatan. The demonstrations were, after all, free. The cues may have been unconscious, like a poker tell that the horse had picked up on. But von Osten and his audiences wanted to believe in the novelty of an educated horse. As for Hans, he must have wanted to please his owner. Von Osten had trained Hans by giving him a lump of sugar or carrot after the correct tap. Von Osten thought the horse was learning math; more likely, Hans was learning to keep tapping until he got a treat.

Pfungst’s debunking had little effect on von Osten or his audiences. The showman continued to exhibit Hans to delighted crowds. Pfungst’s paper did, however, affect the course of science. Today those studying animal behavior are warned of the “Clever Hans effect”: it is easy to attribute human abilities to an animal, even when there is a simpler explanation, because animals can pick up on unconscious cues from supposedly neutral observers.

So can humans. Pfungst’s work was a motivation for the use of double-blind studies of experimental drugs. In these, patients do not know whether they are getting medicine or a placebo, and neither does the doctor. Otherwise, researchers may unintentionally transmit their expectations to the patients.

AI of the Beholder

Clever Hans has been in the news lately. The world is abuzz with talk of artificial intelligence, chatbots, and large language models. Scholars David B. Auerbach and Herbert Roitblat have remarked on the parallel between Hans and our new digital frenemies. This is not to deny the truly amazing things that chatbots do. But it is less clear what is going on “inside” a chatbot. Auerbach and Roitblat believe that we are too quick to project human attributes onto the algorithms.

There is already a growing body of research studying the psychology of human-AI interactions. In a recent study by a group at MIT’s Media Lab, volunteers interacted with the chatbot GPT-3. Some were told that the chatbot was caring; others that it was manipulative; and still others that it was completely neutral. These cues made a big difference in what the volunteers asked the chatbot and how the conversations unfolded.

“To some extent, the AI is the AI of the beholder,” explained MIT graduate student Pat Pataranutaporn. “When we describe to users what an AI agent is, it does not just change their mental model, it also changes their behavior. And since the AI responds to the user, when the person changes their behavior, that changes the AI, as well.”


The MIT team notes that pop culture has already primed us. AI can be benign comic relief (Star Wars) or an elusive love object (Her). But let’s face it: Cinematic AI is largely evil. There are a lot of HAL 9000s and Skynets in our collectively imagined future. This may be affecting our experience of the real AI now coming online.
