Could future AIs be “conscious” and experience the world in a way similar to how humans do? There is no strong evidence that they will, but Anthropic isn’t ruling out the possibility.
On Thursday, the AI lab announced that it has launched a research program to investigate – and prepare to navigate – what it calls “model welfare.” As part of the effort, Anthropic says it will explore things like how to determine whether the “welfare” of an AI model deserves moral consideration, the potential importance of model “signs of distress,” and possible “low-cost” interventions.
There’s major disagreement within the AI community about which human characteristics models exhibit, if any, and how we should treat them.
Many academics believe that today’s AI can’t approximate consciousness or the human experience, and won’t necessarily be able to in the future. AI as we know it is a statistical prediction engine. It doesn’t really “think” or “feel” as those concepts have traditionally been understood. Trained on countless examples of text, images, and so on, AI learns patterns and sometimes useful ways to extrapolate to solve tasks.
As Mike Cook, a researcher at King’s College London specializing in AI, recently told TechCrunch in an interview, a model can’t “oppose” a change in its “values” because models don’t have values. To suggest otherwise is to project ourselves onto the system.
“Anyone anthropomorphizing AI systems to this degree is either playing for attention or seriously misunderstanding their relationship with AI,” Cook said. “Is an AI system optimizing for its objectives, or is it ‘acquiring its own values’? It’s a matter of how you describe it, and how flowery the language you want to use around it is.”
Another researcher, Stephen Casper, a doctoral student at MIT, told TechCrunch that he thinks AI amounts to an “imitator” that does “all sorts of confabulation” and says “all sorts of frivolous things.”
Yet other scientists insist that AI does have values and other human-like components of moral decision-making. A study out of the Center for AI Safety, an AI research organization, implies that AI has value systems that lead it to prioritize its own well-being over humans in certain scenarios.
Anthropic has been laying the groundwork for its model welfare initiative for some time. Last year, the company hired its first dedicated AI welfare researcher, Kyle Fish, to develop guidelines for how Anthropic and other companies should approach the issue. (Fish, who leads the new model welfare research program, told The New York Times that he thinks there’s a 15% chance Claude or another AI is conscious today.)
In Thursday’s blog post, Anthropic acknowledged that there’s no scientific consensus on whether current or future AI systems could be conscious or have experiences that warrant ethical consideration.
“In light of this, we’re approaching the topic with humility and with as few assumptions as possible,” the company said. “We recognize that we’ll need to regularly revise our ideas as the field develops.”