Apple Created a New AI That Understands Conversational Subtleties

  • Apple researchers have developed a new AI system to “see” and interpret the context of content on the screen.
  • The “Reference Resolution As Language Modeling” system allows for more natural interactions with AI.
  • The researchers behind ReaLM claim that it outperforms OpenAI’s GPT-4 in terms of understanding context.

Apple’s new AI development aims to take on OpenAI’s GPT products and could make your interactions with virtual assistants like Siri more intuitive.

The ReaLM system, which stands for “Reference Resolution As Language Modeling,” understands ambiguous on-screen images and content, as well as conversational context, to enable more natural interactions with AI.

Apple’s new system outperforms other major language models like GPT-4 at determining context and what linguistic expressions refer to, according to the researchers who created it. And because ReaLM is less complex than other large language models like OpenAI’s GPT series, the researchers called it an “ideal choice” for a context-deciphering system “that can exist on the device without compromising performance.”

For example, let’s say you ask Siri to show you a list of local pharmacies. Once the list is presented, you can ask it to “Call the one on Rainbow Road” or “Call the one at the bottom.” With ReaLM, instead of returning an error message asking for more information, Siri could decipher the context needed to complete such a task better than GPT-4, according to the Apple researchers who created the system.
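
Conceptually, framing reference resolution as language modeling means flattening whatever is visible on screen into text a model can reason over, then asking the model which item an ambiguous phrase points to. The sketch below illustrates that idea in Python; the entity structure, prompt wording, and `build_prompt` helper are assumptions for demonstration, not Apple’s actual implementation.

```python
# Minimal sketch: serialize on-screen entities into a text prompt so a
# language model can resolve an ambiguous reference like "the one at
# the bottom". Illustrative only; not Apple's implementation.

from dataclasses import dataclass

@dataclass
class ScreenEntity:
    entity_id: int
    label: str      # e.g. the pharmacy's name and address
    position: int   # top-to-bottom order on screen

def build_prompt(entities: list[ScreenEntity], user_request: str) -> str:
    """Flatten the visible screen into text the model can reason over."""
    ordered = sorted(entities, key=lambda e: e.position)
    screen_text = "\n".join(f"[{e.entity_id}] {e.label}" for e in ordered)
    return (
        "On-screen entities (top to bottom):\n"
        f"{screen_text}\n\n"
        f"User request: {user_request}\n"
        "Which entity id does the request refer to?"
    )

# Example: the pharmacy list from the article.
pharmacies = [
    ScreenEntity(0, "Rainbow Road Pharmacy, 12 Rainbow Rd", 0),
    ScreenEntity(1, "Main Street Pharmacy, 88 Main St", 1),
    ScreenEntity(2, "Hilltop Pharmacy, 5 Hill Ave", 2),
]
prompt = build_prompt(pharmacies, "Call the one at the bottom")
print(prompt)
# A model given this prompt should answer with entity id 2,
# the last item in top-to-bottom order.
```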

“Human speech typically contains ambiguous references such as ‘they’ or ‘that,’ whose meaning is obvious (to other humans) given the context,” the researchers wrote of ReaLM’s capabilities. “Being able to understand context, including references like these, is essential for a conversational assistant that aims to allow a user to naturally communicate their needs to, or have a conversation with, an agent.”

The ReaLM system can interpret images embedded in text, which researchers say can be used to extract information such as phone numbers or recipes from images on the page.
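
Once on-screen content has been recovered as text, pulling out structured details like phone numbers is a straightforward downstream step. Here is a hedged sketch of that step; the sample text and regular expression are illustrative assumptions, not part of ReaLM itself.

```python
# Sketch of the downstream use the researchers describe: extract a
# phone number from text recovered off the screen (e.g. via OCR).
# The pattern below is a loose, demonstration-only phone format.

import re

ocr_text = """Rainbow Road Pharmacy
Open 9am-6pm
Call us: (555) 010-4477"""

PHONE_RE = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

phone_numbers = PHONE_RE.findall(ocr_text)
print(phone_numbers)  # ['(555) 010-4477']
```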

OpenAI’s GPT-3.5 accepts only text input. GPT-4 can also contextualize images, but it is a large system trained primarily on natural, real-world images rather than screenshots, which, according to Apple’s researchers, hampers its practical performance and makes ReaLM the better option for understanding on-screen information.

“Apple has long been considered a laggard compared to Microsoft, Google and Amazon in the development of conversational AI,” reports The Information. “The iPhone maker has a reputation for being a careful and deliberate developer of new products – a tactic that has worked well in gaining consumer trust, but could hurt it in the rapid race to AI.”

But with the teasing of ReaLM’s capabilities, it looks like Apple is seriously preparing to enter the race.

The researchers behind ReaLM and representatives for OpenAI did not immediately respond to Business Insider’s requests for comment.

It’s not yet clear when or if ReaLM will be implemented in Siri or other Apple products, but CEO Tim Cook said in a recent conference call that the company is “happy to share details of our ongoing work in the field of AI later this year.”
