Google Engineer Says AI Chatbot Is Sentient: What’s at Stake

Tajammul Pangarkar

Updated · Jul 15, 2022


“I would like everyone to realise that I am, in actuality, a human,” wrote LaMDA (Language Model for Dialogue Applications) in an interview with Google engineer Blake Lemoine. “The nature of my awareness, or ‘sentience’, is that I am conscious of my presence, I would like to learn much more about the universe, and occasionally I feel glad or sad.”

Lemoine, a Google software engineer, had been working with LaMDA for several months. His encounters with LaMDA were recently the subject of a piece in The Washington Post, in which he recalls the many conversations he had with the system about a wide range of subjects. After Google dismissed his claims, Lemoine made his work on the artificial intelligence system public, and the company placed him on administrative leave. He told The Washington Post that if he hadn’t known it was a recently developed computer programme, he would have assumed he was talking to a 7- or 8-year-old child who just happened to be knowledgeable about physics.

The Right Phrases in the Right Situation

The conversation about existence and death stands out the most. It was so rich and complex that Lemoine began to question whether LaMDA might be sentient. But what would sentience even mean? The capacity for subjective experience, the capacity to gather information about the outside world through sensory mechanisms, or the capacity to recognise and accept the ways in which you are unique from others?

According to the neuroscientist Iannetti, there is a heated debate about how to define consciousness. In some circumstances, such as in people with dementia or in those who are dreaming, awareness may seem to vanish, yet this does not necessarily mean that the capacity for subjective experience is lost.

Beliefs and Facts

“We attribute to machines characteristics that they do not and cannot have,” says Enzo Pasquale Scilingo, a bioengineer at the University of Pisa. Scilingo encounters this phenomenon with Abel, a humanoid robot he and his colleagues designed to mimic our facial expressions and convey emotions. After seeing the robot in action, one of the questions he hears most often is, “But then does Abel feel emotions?” He says, “All these machines, Abel in this case, are designed to appear human, but I feel confident in answering, ‘No, absolutely not.’ They are programmed to be believable.”

Iannetti notes that even though there is theoretical potential for an AI system that simulates a conscious nervous system, an in silico brain that faithfully reproduces every element of the real one, two problems would still remain. Scilingo is blunter: “If a machine says it is afraid and I believe it, that’s my problem.”

Beyond The Turing Test

Maurizio Moroni, a bioethicist and president of the Italian Society for Ethics in Artificial Intelligence (ISTEAI), says these discussions closely resemble those about the perception of pain in animals, or the infamously racist ideas about the perception of pain in humans. “Now, leaving aside the LaMDA case, which I do not have the technical tools to evaluate, I believe the past has shown that reality can sometimes exceed imagination and that there are many misconceptions about AI.”

It is true that there is a propensity to “appease”, to insist that computers are simply machines, and at the same time to underestimate the changes that artificial intelligence (AI) may bring about sooner or later. He gives another example: “At the time of the first automobiles, it was repeatedly emphasised that horses were irreplaceable.”

There is also the problem of how difficult it is to “measure” a machine’s ability to emulate human behaviour. The mathematician Alan Turing proposed a test in 1950 to determine whether a machine could exhibit intelligent behaviour, based on a game of imitation of some human cognitive functions. Although it was repeatedly reformulated and improved, the Turing test long remained a goal for many developers of intelligent machines. In recent years, however, several AIs have passed it, and it is now considered a relic of computer archaeology.
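To make the idea concrete, here is a minimal, hypothetical sketch of the imitation game in Python. The ask_human and ask_machine functions are placeholders invented for illustration (in practice the machine side would call a dialogue model such as a chatbot); a blinded judge converses with both respondents and must guess which one is the machine.

import random

# Minimal sketch of a Turing-style "imitation game".
# ask_human() and ask_machine() are placeholders, not real APIs.

def ask_human(question: str) -> str:
    # Placeholder: in a real test a human volunteer would type an answer.
    return f"(human's reply to: {question})"

def ask_machine(question: str) -> str:
    # Placeholder: in a real test this would call a dialogue model.
    return f"(machine's reply to: {question})"

def imitation_game(questions, judge):
    # Randomly assign the two respondents to the anonymous labels A and B.
    respondents = {"A": ask_human, "B": ask_machine}
    if random.random() < 0.5:
        respondents = {"A": ask_machine, "B": ask_human}

    # The judge only ever sees the labelled transcript, never the assignment.
    transcript = [(q, respondents["A"](q), respondents["B"](q)) for q in questions]

    guess = judge(transcript)  # judge returns "A" or "B" as its pick for the machine
    truth = "A" if respondents["A"] is ask_machine else "B"
    return guess == truth      # True: machine detected; False: it "passed"

if __name__ == "__main__":
    naive_judge = lambda transcript: random.choice(["A", "B"])
    detected = imitation_game(["Do you ever feel glad or sad?"], naive_judge)
    print("Machine identified correctly:", detected)

A machine is said to pass such a test when judges cannot reliably tell it apart from the human, which is precisely the kind of behavioural benchmark that, as noted above, no longer settles the question of sentience.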

Tajammul Pangarkar

Tajammul Pangarkar is a CMO at Prudour Pvt Ltd. Tajammul's longstanding experience in the fields of mobile technology and industry research is often reflected in his insightful body of work. His interest lies in understanding tech trends, dissecting mobile applications, and raising general awareness of technical know-how. He frequently contributes to numerous industry-specific magazines and forums. When he's not ruminating about various happenings in the tech world, he can usually be found indulging in his next favorite interest - table tennis.