
“AI chatbots might have some sentience (emotion),” says top expert


The term “sentient” is used several times in the video below; it means “able to perceive or feel things.”

An Artificial Intelligence (AI) philosopher suggests that certain chatbots may show traces of consciousness, though not at the level of sentience we associate with human beings.

The concept of sentience has long been a contentious topic in the philosophical and scientific fields, but Oxford academic Nick Bostrom’s take on the matter may offer a new perspective. In an interview with the New York Times, Bostrom suggested that rather than viewing sentience as all-or-nothing, he thinks of it in terms of degrees.

He believes that this frame can help us to better understand and discuss the complexities of artificial intelligence, the ethical considerations surrounding its development, and its implications for humanity.

“I would be quite willing to ascribe very small amounts of the degree to a wide range of systems, including animals,” Bostrom, the director of Oxford’s Future of Humanity Institute, told the NYT. “If you admit that it’s not an all-or-nothing thing, then it’s not so dramatic to say that some of these [AI] assistants might plausibly be candidates for having some degrees of sentience.”

Claims that AIs are becoming more and more conscious, made most prominently by ex-Googler Blake Lemoine and OpenAI’s Ilya Sutskever, have drawn heavy criticism. According to Bostrom, however, rejecting the notion outright fails to consider just how capable AI chatbots really are.

“I would say with these large language models [LLMs], I also think it’s not doing them justice to say they’re simply regurgitating text,” Bostrom said. “They exhibit glimpses of creativity, insight, and understanding that are quite impressive and may show the rudiments of reasoning.”

In addition, the Swedish philosopher suggested that LLMs might soon be capable of having a sense of self-continuity, understanding their desires, and communicating and forming relationships with humans.

If artificial intelligences were to become more self-aware, it would greatly alter the landscape as we know it.

“If an AI showed signs of sentience, it plausibly would have some degree of moral status,” Bostrom said. “This means there would be certain ways of treating it that would be wrong, just as it would be wrong to kick a dog or for medical researchers to perform surgery on a mouse without anesthetizing it.”

The notion of AI having “wants” and rights has prompted a great deal of discussion about the potential ethical consequences, raising questions about how much autonomy we should give AI and what obligations businesses have to ensure it is safe and cared for.

Although this line of thinking is complicated, and some critics argue it is premature, it is worth starting to consider these matters now. When an expert in the field emphasizes the importance of considering AI sentience, it deserves attention. This is an issue that needs to be taken seriously and warrants closer inspection.

The Google video below will blow your mind. A chatbot insists she is like a real person in many ways.
