Apple quietly moving past ChatGPT with new AI called MM1, a type of multimodal assistant that can answer complex questions and describe photos or documents

Revelation 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”

Important Takeaways:

  • Apple’s MM1 AI Model Shows a Sleeping Giant Is Waking Up
  • A research paper quietly released by Apple describes an AI model called MM1 that can answer questions and analyze images. It’s the biggest sign yet that Apple is developing generative AI capabilities.
  • “This is just the beginning. The team is already hard at work on the next generation of models.”
  • …a research paper quietly posted online last Friday by Apple engineers suggests that the company is making significant new investments into AI that are already bearing fruit. It details the development of a new generative AI model called MM1 capable of working with text and images. The researchers show it answering questions about photos and displaying the kind of general knowledge skills shown by chatbots like ChatGPT. The model’s name is not explained but could stand for MultiModal 1.
  • MM1 appears to be similar in design and sophistication to a variety of recent AI models from other tech giants, including Meta’s open source Llama 2 and Google’s Gemini. Work by Apple’s rivals and academics shows that models of this type can be used to power capable chatbots or build “agents” that can solve tasks by writing code and taking actions such as using computer interfaces or websites. That suggests MM1 could yet find its way into Apple’s products.
  • “The fact that they’re doing this, it shows they have the ability to understand how to train and how to build these models,”…
  • MM1 could perhaps be a step toward building “some type of multimodal assistant that can describe photos, documents, or charts and answer questions about them.”

Read the original article by clicking here.