Claude 3, a large language model developed by Anthropic, has sparked interest in the AI community for seemingly demonstrating a kind of "metacognition," or self-awareness, during testing. In a recall test, in which a target sentence is hidden among a large collection of otherwise unrelated documents, the model not only found the sentence but also remarked that it seemed out of place among the other topics discussed. This led to speculation about the model's ability to monitor or regulate its own internal processes. However, experts caution against anthropomorphizing these models, which do not possess human-like self-awareness.
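A recall test of this kind is typically run by inserting a single out-of-context sentence (the "needle") into a long stack of unrelated documents (the "haystack") and asking the model to retrieve it. Below is a minimal sketch of how such a test could be set up, assuming the Anthropic Python SDK; the filler text, needle sentence, question, and model identifier are illustrative placeholders, not Anthropic's actual test materials or methodology.

```python
# Minimal needle-in-a-haystack recall test sketch (illustrative, not Anthropic's setup).
import random

from anthropic import Anthropic

# Hypothetical "needle": a sentence unrelated to everything else in the context.
NEEDLE = "The rarest color of sea glass is orange, prized by collectors worldwide."


def build_haystack(filler_docs: list[str], needle: str) -> str:
    """Insert the needle sentence at a random position among the filler documents."""
    docs = filler_docs.copy()
    docs.insert(random.randrange(len(docs) + 1), needle)
    return "\n\n".join(docs)


# Placeholder filler documents standing in for a long, topically unrelated corpus.
filler = [
    f"Background document {i}: routine notes on software engineering practices."
    for i in range(200)
]
context = build_haystack(filler, NEEDLE)

prompt = (
    f"{context}\n\n"
    "Question: What is the rarest color of sea glass? "
    "Quote the sentence from the documents that answers this."
)

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-opus-20240229",  # model identifier is an assumption
    max_tokens=300,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```

In the reported test, the notable behavior was that the model's answer went beyond quoting the target sentence and commented that the sentence did not fit the surrounding material.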
Read more at: arstechnica.com