Deep Fakes and Social Engineering: Emerging Threats

Humans communicate. It’s what we do. It’s what we’ve always done. At its most basic level, communication is simply the exchange of information. And since the very beginning, humans have also been trying to hide, obfuscate and muddle information to keep it from reaching the wrong audience.

During his session titled “The Future of Cybercrime: AI, Deepfakes and Beyond,” Dominique Brack, CFE, CISSP, CISA, provided attendees of the 32nd Annual ACFE Global Fraud Conference with a brief but thorough history of information security.

One fun example of an information security tactic used hundreds of years ago: monks, in order to carry a secret message a great distance, would shave their heads, tattoo the message on their scalps, wait for their hair to grow back and then travel to the intended recipient. As Brack dryly pointed out, given the speed of information these days, that would not be a viable method of data transfer.

“When you know the history about information technology, about networking, about communication, communication protocols, it will help you in all areas of information security,” Brack said, explaining why he wanted to share some of this history. “It will help you with investigation. It will help you with forensics. It will also give you hints about future technologies and how they might be used.”

Why are deep fakes becoming a threat?

According to Brack, the answer to that question is simple: social engineering.

“Social engineering is always borrowing technology, the latest hacking technology which is available,” Brack shared with attendees. “Of course, we also have encryption, the cloud, artificial intelligence, and we now have deep fakes available. Social engineering basically combines all these threats and composes them into one threat area.”

On top of this, from the fraudster’s perspective, the risk of being detected is very low. Fraudsters will start with social engineering, then move on to later stages, perhaps installing a piece of malware or exfiltrating data. But none of those later stages are possible without that first bit of social engineering, where a fraudster gains an initial foothold.

Deep fakes are no longer a theory

In 2019, a chief executive at a U.K. energy company was socially engineered into transferring nearly a quarter of a million dollars to what he thought was a supplier in Hungary. It turned out that fraudsters had used deep fake audio to mimic a trusted executive’s voice. Brack believes this was only the start of deep fake attacks, and fraud examiners can expect a whole new era with video.

“The chance of a cybercriminal using the technology is really, really high,” Brack warned. It’s difficult for people to spot these kinds of attacks, he said, so fraud examiners will need to leverage technology to be able to spot these kinds of manipulations.

What can fraud examiners do?

“Imagine now,” Brack said. “We are all now web-based. How would you actually be able to distinguish if someone you see on screen is a deep fake?”

He finished by cautioning attendees about the growing threat of Cybercrime as a Service (CaaS). This burgeoning type of service, Brack said, enables a greater portion of the public to commit cybercrimes, which they otherwise wouldn’t be capable of executing. This means there are far more threat actors than before, and organizations must use any and all tools available to thwart attacks.

In the deep fake arena, Brack shared some tips and strategies for spotting a deep fake social engineering attack:

  • Learn more about deep fake technology

  • Pause when someone requests an action

  • Consult with other people when you receive a request

  • For phone calls, hang up and call the person back on a trusted number