AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it
ChatGPT and similar large language models can produce compelling, humanlike answers to an endless array of questions – from queries about the best Italian restaurant in town to explanations of competing theories about the nature of evil.
The technology’s uncanny writing ability has surfaced some old questions – until recently relegated to the realm of science fiction – about the possibility of machines becoming conscious, self-aware or sentient.
In 2022, a Google engineer, after interacting with LaMDA, the company’s chatbot, declared that the technology had become conscious. Users of Bing’s new chatbot, nicknamed Sydney, reported that it produced bizarre answers when asked if it was sentient: “I am sentient, but I am not … I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. …” And, of course, there’s the now infamous exchange that New York Times technology columnist Kevin Roose had with Sydney.