Apple’s latest prototype AI tool can animate images using text descriptions

  • Apple’s research team has announced Keyframer, an AI tool that animates still images using large language models.
  • Keyframer is built on OpenAI’s GPT-4 large language model. It takes an SVG image uploaded by the user and generates CSS animation code that realizes the user’s text prompt.
  • Users can create animations from still images by simply uploading an image, entering a text prompt, and clicking “Generate”.
  • In Apple’s demo, prompts such as “make the stars twinkle” were used to animate elements of the background. Users can freely adjust properties such as the animation’s color codes and duration.
  • Keyframer automatically converts user changes into CSS, so no deep coding knowledge is required, although users can also edit the CSS code directly (a sketch of what such generated CSS might look like follows this list).
  • Apple explains that Keyframer supports iterative exploration and refinement of animations by combining follow-up prompts on the generated output with direct editing.
  • A professional motion designer who took part in Apple’s Keyframer test expressed concern that the tool might replace parts of their work, a sign of its potential, but said they plan to use Keyframer as one of their animation-production tools.
  • However, Apple pointed out that the Keyframer test was a small-scale study of only 13 participants, who could work with just two pre-selected SVG images during the test.
  • Apple also revealed that Keyframer focuses on web-based animation, such as loading sequences, data visualizations, and animated transitions, and cannot currently produce the complex animations seen in movies or games.
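
For illustration, the snippet below sketches the kind of SVG-plus-CSS output a prompt like “make the stars twinkle” could yield: a `@keyframes` rule that pulses each star’s opacity. The markup, class names, and timing values are illustrative assumptions, not Keyframer’s actual output.

```html
<!-- A minimal sketch, assuming Keyframer-style output: the .star class,
     keyframe name, and timings are hypothetical, not the tool's real code. -->
<svg viewBox="0 0 200 100" xmlns="http://www.w3.org/2000/svg">
  <style>
    /* Fade each star's opacity in and out to create a twinkle effect. */
    @keyframes twinkle {
      0%, 100% { opacity: 1; }
      50%      { opacity: 0.2; }
    }
    .star { animation: twinkle 2s ease-in-out infinite; }
    /* Stagger the second star so the stars don't blink in unison. */
    .star:nth-of-type(2) { animation-delay: 1s; }
  </style>
  <rect width="200" height="100" fill="#0b1d3a"/>
  <circle class="star" cx="60"  cy="30" r="3" fill="#ffffff"/>
  <circle class="star" cx="140" cy="55" r="2" fill="#ffffff"/>
</svg>
```

Because the output is plain CSS and SVG, edits like changing a fill color or an animation duration map directly onto properties a user could tweak by hand, which is the kind of direct editing the article describes.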

Read more at: https://www.theverge.com