Fears of AI reaching black market raise concerns about criminals evading government regulations: expert

Artificial intelligence – especially large language models like ChatGPT – can theoretically give criminals the information needed to cover their tracks before and after a crime, then erase that evidence, an expert warns.
Large language models, or LLMs, are a segment of AI technology that uses algorithms capable of recognizing, summarizing, translating, predicting and generating text and other content based on knowledge gained from massive data sets.
ChatGPT is the best-known LLM, and its rapid and successful development has created unease among some experts and prompted a Senate hearing at which Sam Altman, the CEO of ChatGPT maker OpenAI, pushed for oversight.
Companies like Google and Microsoft are developing AI at a rapid pace. But when it comes to crime, that’s not what scares Dr. Harvey Castro, a board-certified emergency physician and national speaker on artificial intelligence who has created his own LLM called “Sherlock.”
WORLD’S FIRST AI UNIVERSITY PRESIDENT SAYS TECH WILL DISRUPT EDUCATIONAL PRINCIPLES AND CREATE ‘RENAISSANCE SCHOLARSHIPS’
Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, D.C. The subcommittee held an oversight hearing to examine AI, focusing on possible rules for artificial intelligence. (Photo by Win McNamee/Getty Images)
It’s “the unscrupulous 18-year-old” who can create his own LLM without the guardrails and protections and sell it to would-be criminals, he said.
“One of my biggest worries isn’t really the big players, like Microsoft or Google or OpenAI ChatGPT,” Castro said. “I’m actually not very worried about them, because I feel like they’re self-regulating, and the government is watching and the world is watching and everyone is going to regulate them.
“I’m actually more worried about those teenagers or someone who’s just out there, who’s able to create their own large language model themselves that won’t abide by the regulations, and who can even sell it on the black market. I’m really worried about that as a possibility in the future.”
WHAT IS AI?
On April 25, OpenAI announced that ChatGPT users will have the ability to disable chat history.
“When chat history is disabled, we will retain new chats for 30 days and review them only as needed to monitor abuse, before permanently deleting them,” OpenAI said in its announcement.
WATCH DR. HARVEY CASTRO EXPLAIN AND DEMONSTRATE HIS “SHERLOCK” LLM
The ability to use this type of technology, with chat history disabled, could prove beneficial for criminals and problematic for investigators, Castro warned. To translate the concept into real-life scenarios, consider two ongoing criminal cases in Idaho and Massachusetts.
OPENAI CHIEF ALTMAN HAS DESCRIBED WHAT ‘SCARY’ AI MEANS TO HIM, BUT CHATGPT HAS ITS OWN EXAMPLES
Bryan Kohberger was pursuing a doctorate in criminology when he allegedly killed four University of Idaho undergraduates in November 2022. Friends and acquaintances described him as a “genius” and “really smart” in previous interviews with Fox News Digital.
In Massachusetts, there is the case of Brian Walshe, who allegedly killed his wife, Ana Walshe, in January and disposed of her body. The murder case against him rests on circumstantial evidence, including a long list of alleged Google searches, such as how to dispose of a body.
BRYAN KOHBERGER UNKNOWN IN IDAHO STUDENT MURDERS
Castro’s fear is that someone with more expertise than Kohberger could create an AI chatbot and erase the search history that could include vital evidence in a case like the one against Walshe.
“Generally, people can get caught through their Google search history,” Castro said. “But if someone created their own LLM and allowed the user to ask questions, but told it not to keep a history of any of it, then they can get information on how to kill a person and get rid of the body.”
Currently, ChatGPT refuses to answer these types of questions. It blocks “certain types of dangerous content” and does not respond to “inappropriate requests,” according to OpenAI.
WHAT IS THE HISTORY OF AI?

Dr. Harvey Castro, a board-certified emergency physician and national lecturer on artificial intelligence who created his own LLM called “Sherlock,” speaks to Fox News Digital about the potential criminal uses of AI. (Chris Eberhart)
During Senate testimony last week, Altman told lawmakers that GPT-4, the latest model, will refuse harmful requests, such as those for violent content, content about self-harm and adult content.
“Not that we think adult content is inherently harmful, but there are things that might be associated with it that we can’t reliably differentiate between. So we refuse all of that,” said Altman, who also discussed other safeguards such as age restrictions.
“I would create a set of safety standards focused on what you said in your third assumption as dangerous capability ratings,” Altman said in response to questions from a senator about what rules to implement.
AI TOOLS USED BY POLICE WHO ‘DON’T UNDERSTAND HOW THESE TECHNOLOGIES WORK’: STUDY
“An example we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office another long list of things we think are important there, but specific tests that a model must pass before it can be rolled out to the world.
“And then third, I would require independent audits. So not just from the company or the agency, but from experts who can say that the model does or does not meet these declared safety thresholds and at these performance percentages on question X or Y.”
To put the concepts and theory into perspective, Castro said, “I guess 95% of Americans don’t know what LLMs or ChatGPT are,” and he wishes it would stay that way.
ARTIFICIAL INTELLIGENCE: FREQUENTLY ASKED QUESTIONS ABOUT AI

Artificial intelligence could be used to hack data in the near future. (Stock image)
But it’s possible that Castro’s theory will become reality in the not-too-distant future.
He alluded to a now-terminated AI research project at Stanford University, dubbed “Alpaca.”
According to the university’s initial announcement, a group of computer scientists created a product that cost less than $600 to build, had “very similar performance” to OpenAI’s GPT-3.5 model, and ran on Raspberry Pi computers and a Pixel 6 smartphone.
WHAT ARE THE DANGERS OF AI? DISCOVER WHY PEOPLE ARE AFRAID OF ARTIFICIAL INTELLIGENCE
Despite its success, the researchers terminated the project, citing licensing and security issues. The product was not “designed with adequate safety measures,” the researchers said in a press release.
“We emphasize that Alpaca is intended for academic research only and that any commercial use is prohibited,” according to the researchers. “There are three factors in this decision: First, Alpaca is based on LLaMA, which has a non-commercial license, so we necessarily inherit this decision.”
CLICK HERE TO GET THE FOX NEWS APP
The researchers went on to say that the instruction data is based on OpenAI’s text-davinci-003, “whose terms of use prohibit developing models that compete with OpenAI. Finally, we have not designed any adequate security measures, so Alpaca is not ready for general-purpose deployment.”
But Stanford’s successful creation is the scary part of Castro’s otherwise glass-half-full view of how OpenAI and LLMs can potentially change humanity.
“I tend to be a positive thinker,” Castro said, “and I think this will all turn out for the good.”