By MATT O’BRIEN and SARAH PARVINI, The Associated Press
If 2023 was a year of wonder when it came to artificial intelligence, 2024 was the year to try to get this wonder to do something useful without breaking the bank.
There has been a “shift from making models to building products,” said Arvind Narayanan, a professor of computer science at Princeton University and co-author of the new book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”
The approximately 100 million people who experienced ChatGPT when it was released two years ago actively sought out the chatbot, finding it incredibly useful for some tasks or ridiculously poor for others.
Today, this generative AI technology is integrated into a growing number of technology services, whether we seek it out or not – for example, through AI-generated answers in Google search results or new AI techniques in photo editing tools.
“The main problem with generative AI in the last year is that companies have launched these very powerful models without being able to put them to practical use,” Narayanan said. “What we’re seeing this year is the gradual creation of products that can take advantage of these capabilities and do things that are useful for people.”
At the same time, since OpenAI released GPT-4 in March 2023 and its competitors introduced large AI language models with similar performance, these models have stopped getting significantly “bigger and qualitatively better,” resetting inflated expectations that AI was racing every few months toward some sort of better-than-human intelligence, Narayanan said. It also means that public discourse has moved away from the question “will AI kill us?” to treating it like normal technology, he said.
AI’s sticker shock
On quarterly earnings conference calls this year, technology executives have often fielded questions from Wall Street analysts seeking assurance of future profits from huge spending on AI research and development. Building the AI systems behind generative AI tools like OpenAI’s ChatGPT or Google’s Gemini requires investing in power-hungry computing systems running on powerful and expensive AI chips. They require so much electricity that tech giants this year announced deals to harness nuclear power to help supply it.
“We’re talking about hundreds of billions of dollars of capital that has been invested in this technology,” said Kash Rangan, an analyst at Goldman Sachs.
Another analyst at the New York investment bank drew attention this summer by arguing that AI isn’t solving the complex problems that would justify its costs. He also questioned whether AI models, even though trained on much of the written and visual data produced throughout human history, will ever be able to do what humans do so well. Rangan has a more optimistic view.
“We were fascinated that this technology was going to be absolutely revolutionary, which hasn’t been the case in the two years since ChatGPT was introduced,” Rangan said. “It’s more expensive than we thought and it’s not as productive as we thought.”
Rangan, however, remains optimistic about its potential and says that AI tools are already proving “increasingly productive” in sales, design and a number of other professions.
AI and your work
Some workers wonder whether AI tools will be used to supplement their jobs or replace them as the technology continues to evolve. Tech company Borderless AI uses an AI chatbot from Cohere to draft employment contracts for workers in Turkey or India without the help of outside lawyers or translators.
Video game performers at the Screen Actors Guild-American Federation of Television and Radio Artists who went on strike in July said they fear AI could reduce or eliminate job opportunities because it could be used to replicate one performance into a number of other movements without their consent. Concerns about how movie studios will use AI helped fuel last year’s union strikes in film and television, which lasted four months. Video game companies also signed side agreements with the union codifying certain AI protections in order to continue working with actors during the strike.
Musicians and authors have expressed similar concerns about AI’s use of their voices and their books. But generative AI still can’t create unique works or “completely new things,” said Walid Saad, a professor of electrical and computer engineering and an AI expert at Virginia Tech.
“We can train it with more data so it has more information. But having more information doesn’t mean you’re more creative,” he said. “As humans, we understand the world around us, right? We understand physics. You understand that if you throw a ball on the ground, it will bounce. AI tools currently don’t understand the world.”
Saad cited a meme about AI as an example of this gap. When someone asked an AI engine to create a picture of salmon swimming in a river, he said, the AI produced a photo of a river with cut pieces of salmon like those found in grocery stores.
“What AI lacks today is the common sense that humans have, and I think that’s the next step,” he said.
An “agentic” future
This type of thinking is a key part of the process of making AI tools more useful to consumers, said Vijoy Pandey, senior vice president of Cisco’s innovation and incubation arm, Outshift. AI developers are increasingly pitching the next wave of generative AI chatbots as AI “agents” that can do more useful things on behalf of people.
This could mean being able to ask an ambiguous question of an AI agent and allow the model to reason and plan what steps to take to solve an ambitious problem, Pandey said. According to him, many technologies will evolve in this direction in 2025.
Pandey predicts that eventually, rather than simply completing tasks as individual AI tools, AI agents will be able to come together and do work the way multiple people come together and solve a problem as a team. The AI agents of the future will work in concert, he said.
The software of the future, for example, will likely be built by teams of AI software agents, Pandey said. These agents will each have a specialty, he said, with “agents checking for accuracy, agents checking for safety, agents checking for scale.”
“We are moving toward an agentic future,” he said. “You’re going to have all these agents who are very good at certain skills, but also have a little bit of character or color, because that’s how we operate.”
AI is making progress in medicine
AI tools have also streamlined, or in some cases lent a real helping hand to, the medical field. This year’s Nobel Prize in Chemistry – one of two Nobel Prizes awarded to AI-related science – went to work by Google that could help discover new drugs.
Saad, the Virginia Tech professor, said AI has helped speed up diagnoses by quickly giving doctors a starting point from which to determine what care to give a patient. AI can’t detect diseases, he said, but it can quickly digest data and flag potential problems for a real doctor to investigate. However, as in other areas, this presents the risk of perpetuating falsehoods.
Tech giant OpenAI has touted its AI-powered transcription tool, Whisper, as having “near human-level robustness and accuracy,” for example. But experts said Whisper has a major flaw: It tends to make up chunks of text, or even entire sentences.
Cisco’s Pandey said some of the company’s customers who work in the pharmaceutical sector have noted that AI has helped bridge the gap between “wet labs,” in which humans conduct physical experiments and research, and “dry labs”, where people analyze data and often use computers for modeling.
When it comes to pharmaceutical development, this collaborative process can take several years, he said. With AI, the process can be reduced to just a few days.
“For me, that was the most spectacular use,” Pandey said.