Anthropic, a competitor of OpenAI, has released a new generative AI model named Claude 3.5 Sonnet. The model is an incremental improvement rather than a significant leap forward. Claude 3.5 Sonnet can analyze text and images as well as generate text, and it is Anthropic’s best-performing model to date, at least on paper.
Across several AI benchmarks for reading, coding, math, and vision, Claude 3.5 Sonnet outperforms the model it’s replacing, Claude 3 Sonnet, and beats Anthropic’s previous flagship model, Claude 3 Opus. However, benchmarks aren’t necessarily the most useful measure of AI progress, as many of them test for esoteric edge cases that aren’t applicable to the average person.
Claude 3.5 Sonnet just barely bests rival leading models, including OpenAI’s recently launched GPT-4o, on some of the benchmarks Anthropic tested it against. Alongside the new model, Anthropic is releasing what it’s calling Artifacts, a workspace where users can edit and add to content generated by Anthropic’s models.
Artifacts is currently in preview, and Anthropic says it will gain new features in the near future, such as ways to collaborate with larger teams and store knowledge bases. As for the model itself, Claude 3.5 Sonnet is somewhat more performant than Claude 3 Opus, and Anthropic says it better understands nuanced and complex instructions, as well as concepts like humor.
Claude 3.5 Sonnet is also faster, running at roughly twice the speed of Claude 3 Opus, Anthropic claims. Vision, meaning the ability to analyze images, is one area where Claude 3.5 Sonnet improves markedly over 3 Opus: it can interpret charts and graphs more accurately and transcribe text from imperfect images, such as photos with distortions and visual artifacts.
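For readers who want to try the vision capability themselves, here is a minimal sketch of sending an image to the model through Anthropic's Python SDK. The model ID (claude-3-5-sonnet-20240620), the file name, and the prompt are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: ask Claude 3.5 Sonnet to read a chart image via Anthropic's Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment and "chart.png" exists locally.
import base64

import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

# Encode the local image as base64, the format the Messages API expects.
with open("chart.png", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed launch model ID
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_b64,
                    },
                },
                {"type": "text", "text": "Summarize this chart and transcribe any text in it."},
            ],
        }
    ],
)

print(message.content[0].text)
```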
The improvements are the result of architectural tweaks and new training data, including AI-generated data. Anthropic did not disclose specifics about that data, but the company implies that much of Claude 3.5 Sonnet’s strength comes from these training sets.
Read more: techcrunch.com