Meta's newest AI model beats some peers. But its amped-up AI agents are confusing Facebook users


CAMBRIDGE, Mass. (AP) — Generative AI is advancing so quickly that the latest chatbots available today could be out of date tomorrow.

Google, Meta Platforms and OpenAI, along with startups such as Anthropic, Cohere and France’s Mistral, have been churning out new AI language models and hoping to convince customers they’ve got the smartest, handiest or most efficient chatbots.

Meta is the latest to up its game, unveiling new models Thursday that will be among the most visible: they’re already getting baked into Facebook, Instagram and WhatsApp. But in a sign of the technology’s ongoing limitations, Meta’s amped-up AI agents have been spotted this week confusing Facebook users by posing as people with made-up life experiences.

While Meta is saving the most powerful of its AI models, called Llama 3, for later, it’s publicly releasing two smaller versions of the same Llama 3 system that power its Meta AI assistant. AI models are trained on vast pools of data to generate responses, with newer versions typically smarter and more capable than their predecessors. The publicly released models were built with 8 billion and 70 billion parameters, a measure of how much data the system is trained on. A bigger, roughly 400 billion-parameter model is still in training.

“The vast majority of consumers don’t candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant,” said Nick Clegg, Meta’s president of global affairs, in an interview.

Some Facebook users are already experiencing Meta’s AI agents in other ways. Earlier this week, a chatbot with the official Meta AI label inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by human members of the group, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press.

“Apologies for the mistake! I’m just a large language model, I don’t have experiences or children,” the chatbot told the moms’ group.

Clegg said Wednesday he wasn’t aware of the exchange. Facebook’s online help page says the Meta AI agent will join a group conversation if invited, or if someone “asks a question in a post and no one responds within an hour.”

In another example shown to the AP on Thursday, the agent confused members of a “Buy Nothing” forum for swapping unwanted items near Boston. The agent offered a “gently used” digital camera and an “almost-new portable air conditioning unit that I never ended up using.” A member of the Facebook group tried to engage it before realizing no such items existed.

Meta said in a written statement Thursday that “this is new technology and it may not always return the response we intend, which is the same for all generative AI systems.” The company said it is constantly working to improve the features and trying to make users aware of the limitations.

Clegg did say that Meta’s AI agent is loosening up a bit. Some people found the earlier Llama 2 model, released less than a year ago, to be “a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions,” he said.

In the year after ChatGPT sparked a generative AI frenzy, the tech industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey.

They may eventually hit a limit, at least when it comes to data, said Nestor Maslej, a research manager for Stanford’s Institute for Human-Centered Artificial Intelligence.

“I think it’s been clear that if you scale the models on more data, they can become increasingly better,” he said. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.”

More data, acquired and ingested at costs only tech giants can afford and increasingly subject to copyright disputes and lawsuits, will continue to drive improvements. “Yet they still cannot plan well,” Maslej said. “They still hallucinate. They’re still making mistakes in reasoning.”

Getting to AI systems that can perform higher-level cognitive tasks and commonsense reasoning, where humans still excel, might require a shift beyond building ever-bigger models.

For the flood of businesses trying to adopt generative AI, which model they choose could depend on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights and summarize long documents.

“You’re seeing companies kind of looking at fit, testing each of the different models for what they’re trying to do and finding some that are better at some areas rather than others,” said Todd Lohr, a leader in technology consulting at KPMG.

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers, those using its advertising-fueled social networks. Joelle Pineau, Meta’s vice president of AI research, said at a London event last week the company’s goal over time is to make a Llama-powered Meta AI “the most useful assistant in the world.”

“In many ways, the models that we have today are going to be child’s play compared to the models coming in five years,” she said.

But she said the “question on the table” is whether researchers have been able to fine-tune its bigger Llama 3 model so that it’s safe to use and doesn’t, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use.

“It’s not just a technical question,” Pineau said. “It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our model ever more general and powerful without properly socializing them, we are going to have a big problem on our hands.”

___

AP Business Writer Kelvin Chan in London contributed to this report.

Source: apnews.com