How to identify a gen AI deepfake imposter when it enters your life

Carl Froggett worked as one of Citibank’s chief information security officers for more than two decades, protecting the bank’s infrastructure from increasingly sophisticated cyberattacks. And while criminal deceptions, from low-tech paper counterfeiting to rudimentary email scams, have long plagued the banking industry and business in general, deepfake technology powered by generative AI is something new.

“I’m very concerned about the use of deepfakes in the business world,” said Froggett, now CIO of Deep Instinct, a company that uses AI to fight cybercrime.

Industry experts say meeting rooms and offices are quickly becoming a battleground where cybercriminals will regularly deploy deepfake technology to try to steal millions from businesses. That makes them a good testing ground for efforts to spot AI impostors before their scams succeed.

“The challenge we face is that generative AI is so realistic,” Froggett said.

Generative AI video and audio tools are being deployed and improving rapidly. OpenAI previewed its Sora video generation tool in February, and in late March introduced an audio tool called Voice Engine that can realistically recreate a person’s speech from a 15-second sound clip. OpenAI said it was releasing Voice Engine to only a small group of users, given the dangers the technology poses.

Originally from the United Kingdom, Froggett uses his regional British accent as an example.

“I use nuances and words you’ve never heard of, but generative AI consumes things I’ve made public; I’m sure there’s a speech I gave somewhere, and from that it can generate hyper-realistic voice messages, emails and videos,” he said.

Experts cite a widely reported case in Hong Kong last year in which an employee of a multinational company was duped into transferring $25 million to accounts run by cybercriminals after joining a Zoom call with her colleagues, including the company’s CFO – except all of the colleagues were convincing deepfakes. Experts say this case illustrates what is to come.

Even though OpenAI limits access to its audio and video tools, the number of dark web sites selling counterfeit GPT products has exploded in recent months. “The bad guys literally just got their hands on these tools. … They’re just getting started,” Froggett said.

It takes a clip of 30 seconds or less of someone speaking to create a flawless deepfake, said Rupal Hollenbeck, president of Check Point Software, and cybercriminals can now access AI-powered deepfake tools for a few dollars, or even a few cents. “But that’s just for audio. The same thing is now true for video, and it’s a game changer,” Hollenbeck said.

The steps companies are beginning to take to prevent successful deepfakes are instructive for how all individuals should live their lives in an AI-powered world and interact with friends, family, and colleagues.

How to identify an AI video impostor

There are many ways to spot an AI imposter, some relatively simple.

To start, Hollenbeck says, if there’s any doubt about the veracity of a person on video, ask them to turn their head to the right or left, or to turn around. If the person complies but their head disappears on the video screen, end the call immediately, Hollenbeck said.

“Right now, I’m teaching this to everyone I know: ask people to look right or left. AI doesn’t have the ability to go beyond what you can see. AI is flat today, and knowing that is very powerful,” she said.

But we don’t know how long this will last.

Chris Pierson, CEO of Blackcloak, a company specializing in digital executive protection, thinks it’s only a matter of time before deepfakes gain 3D capability. “The models are improving so quickly that these tricks will stop working,” Pierson said.

He also says don’t be afraid to ask for old-fashioned proof of authenticity, like asking the person on the video call to show a company report or even a newspaper. If they can’t comply with these basic requests, that’s a red flag.

How Using Codewords and QR Codes Can Help

Old-fashioned code words can also be effective, as long as they are transmitted through a separate channel and kept out of everyday written communication. Hollenbeck and Pierson recommend that business leadership teams generate a code word each month and store it in an encrypted password vault. If you have any doubts about who you are talking to, you can ask for the code word to be sent to you by text message. And set a threshold for deploying it: if someone asks you to make a transaction over $100,000, for example, the code-word tactic should kick in.
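As a rough illustration of what such a policy might look like in code, here is a minimal Python sketch, assuming a hypothetical $100,000 trigger and a code word pulled from the team’s vault; the function names are invented for the example, not any vendor’s API:

```python
import hmac

# Assumed policy threshold from the article's example: requests over
# $100,000 trigger the code-word check.
CODEWORD_THRESHOLD_USD = 100_000

def requires_codeword(amount_usd: float) -> bool:
    """True when a request is large enough to demand the monthly code word."""
    return amount_usd >= CODEWORD_THRESHOLD_USD

def verify_codeword(supplied: str, expected: str) -> bool:
    """Compare the code word received over a separate channel (e.g., a text
    message) against the one stored in the team's encrypted vault.
    compare_digest avoids leaking information through response timing."""
    return hmac.compare_digest(supplied.strip().lower(), expected.strip().lower())

# Usage: a $250,000 wire request arrives on a video call.
if requires_codeword(250_000):
    print(verify_codeword("harbor-lantern", "harbor-lantern"))  # True only on a match
```

The timing-safe comparison is a small detail; the real defense is that the code word travels over a medium the impostor does not control.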

For businesses, making business calls only on company-approved channels also significantly reduces the risk of being fooled by a deepfake.

“Where we have problems is going off-grid,” Pierson said.

Real-world examples of commercial deepfakes are growing, said Nirupam Roy, an assistant professor of computer science at the University of Maryland, and it’s not just fraudulent bank transfers. “It is not difficult to imagine how such deepfakes can be used for targeted defamation to tarnish the reputation of a product or company,” he said.

Roy and his team have developed a system called TalkLock that can identify both deepfakes and shallowfakes – which he describes as relying “less on complex editing techniques and more on connecting partial truths to little lies.”

It may not be the answer to highly personalized AI-generated scams, but it is designed so that individuals (via an app) and businesses (via a verification module) can detect AI manipulation. It works by embedding a QR code in broadcast media – live public appearances by politicians and celebrities, as well as social media posts, advertisements and news – which its creators say can prove authenticity. It also combats a growing problem with unofficial recordings: videos and audio taken by spectators at events which, unlike official media, cannot be verified through metadata.
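TalkLock’s internal design isn’t detailed here, so purely as a sketch of the general idea – a verifiable tag that a QR code could carry alongside a broadcast – the hypothetical Python example below signs a fingerprint of a media segment and re-checks it on playback. The HMAC scheme and `SHARED_KEY` are assumptions for illustration, not TalkLock’s actual mechanism:

```python
import hashlib
import hmac

# Illustrative only: a key held by the broadcaster. A deployed scheme would
# more likely use public-key signatures so verifiers never hold a secret.
SHARED_KEY = b"broadcaster-secret-key"

def sign_segment(media_bytes: bytes) -> str:
    """Tag a media segment; a QR code shown on screen could carry this
    tag (plus a timestamp) alongside the live broadcast."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SHARED_KEY, digest, hashlib.sha256).hexdigest()

def verify_segment(media_bytes: bytes, tag_from_qr: str) -> bool:
    """A verifier app recomputes the tag from the captured media and
    compares it with the tag decoded from the QR code."""
    return hmac.compare_digest(sign_segment(media_bytes), tag_from_qr)

segment = b"...captured audio/video bytes..."
tag = sign_segment(segment)          # produced at broadcast time
print(verify_segment(segment, tag))  # True; any tampering makes this False
```

A real system has to tolerate re-encoding and camera capture of the scene, which is part of what makes the research problem hard; this sketch only shows the sign-and-verify skeleton.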

How to Live an Offline Multi-Factor Authentication Life

Even with more protection techniques, experts predict a spiraling arms race between deepfake tools and deepfake detection. Companies can put certain procedures in place to head off the worst consequences of deepfakes, though those measures are less easily adapted to individual life.

Eyal Benishti, CEO of Ironscales, an email security software company, said organizations will increasingly embrace segregation of duties, so that no one person can be fooled badly enough to harm a business. In practice, that means splitting up work processes that handle sensitive data and assets. For example, changes to the bank account information used to pay invoices or salaries should require two people to make the change and (ideally) a third to be notified. “This way, even if an employee falls for a social engineering attack that asks them to redirect payment of an invoice, there are safeguards, as different stakeholders are brought in to fulfill their roles in the chain of command,” Benishti said.
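To make the division of duties concrete, here is a minimal sketch, assuming a hypothetical `BankDetailChange` record; the class and the notification hook are illustrative, not a description of Ironscales’ product:

```python
from dataclasses import dataclass

def notify(who: str, msg: str) -> None:
    """Stand-in for the alert (email, chat) sent to the third stakeholder."""
    print(f"notify {who}: {msg}")

@dataclass
class BankDetailChange:
    """Two-person control over payment details, per the article's example:
    one employee requests the change, a second must approve it, and a
    third party is notified."""
    requested_by: str
    new_account: str
    approved_by: str | None = None

    def approve(self, approver: str, auditor: str) -> None:
        if approver == self.requested_by:
            # A single fooled employee can never complete the change alone.
            raise PermissionError("requester cannot approve their own change")
        self.approved_by = approver
        notify(auditor, f"{self.requested_by} changed payee account to {self.new_account}")

# Usage: the change takes effect only after a second person signs off.
change = BankDetailChange(requested_by="alice", new_account="GB00-1234")
change.approve(approver="bob", auditor="carol")
assert change.approved_by == "bob"
```

The self-approval check is the whole point: a deepfaked CFO can pressure one employee, but not two plus an alerted third.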

At the most basic level, Hollenbeck says, organizations and their people need to start living in a multi-factor authentication way, with multiple channels for checking reality. In the end, old-school methods still work, like walking down the hall to see the boss in person. For now, that cannot be faked.

“Seeing used to be believing, but that’s not so much the case today,” Hollenbeck said.

It’s also wise to remember that deepfakes are just the latest in a long line of scams, from three-card monte to the pigeon drop, that prey on human vulnerabilities by creating a false sense of urgency. That means the best antidote to a deepfake, according to Pierson, may be the simplest: slow down. This is a tactic that is arguably easier for individuals to use in their personal lives than for employees in their professional lives.

“Slowing down almost always gives a definitive answer. Every company should have a safe-harbor policy: if you ever feel rushed into a decision, you should feel entitled to refuse, contact security and be held harmless,” Pierson said. Too often, he added, company culture doesn’t give employees that much leeway.

“We need to give people the power to stop and say no. If they don’t feel like they can say no – and too often no one feels like they can – that’s when mistakes happen,” Pierson said.

Source: CNBC
