
How artificial intelligence is disrupting cyber crime

about 6 years ago by Lucy Cinder


Whilst users are getting better at spotting basic attacks like phishing, cyber criminals are using new technologies like AI and machine learning to trick us, steal data, and ultimately make hundreds of millions of pounds.

That's the opinion of Mariana Pereira, director at Darktrace, speaking at Computing's recent Artificial Intelligence and Machine Learning conference.

"We're seeing an increase in people trying to trick humans into inadvisable situations," said Pereira. "Even nuclear power stations and other heavily protected organisations are still operated by humans, who can be tricked.

"We're getting good at avoiding certain tricks. When we see poor grammar, misuse of capitalisation, emails offering a love interest or from a Nigerian prince - we're accustomed to being suspicious," she added.

However, she gave an example of a colleague who received an email which appeared to come from another member of staff, discussing a project they were working on. The only reason he didn't click on the attached file was that the tone of the email was slightly off.

"The giveaway was that the email was too polite, this colleague is usually not polite at all!" Pereira explained.

She said that in this instance her colleagues had been discussing the project in a local coffee shop, and the attackers had evidently overheard the conversation, looked the pair up on social media, and crafted the email accordingly.

She also warned of the potential threat of virtual assistants.

"When you have a virtual assistant, you add it to the communication, it read what you're trying to do and facilitates whatever's needed. So if you're trying to find time in your calendar for a coffee with someone, the assistant responds with options, presents them to other party, they agree, then the assistant sets up invites."

Pereira shared her concern, though, about what an AI-powered attacker could do with access to that level of information.

"Access to the type of information that an AI-powered attacker might have scares me. Imagine a piece of malware that has access to those communications, whether via email, Slack, Whatsapp, or your calendars. Imagine getting an email inviting you to a dentist appointment which comes with a map, and that map has a piece of malware injected into it which turns it into a malicious payload.

"Would you click? Yes, it's relevant, contextual, and in tune with the conversation you're already having."

She also discussed the prevalence of cameras and microphones, and the dangers presented by their vulnerability.

"We see cameras and microphones everywhere. I'm not concerned if the CIA hacks my smart TV and watches me watching Bakeoff. But we're now seeing attackers listening in to our conversations.

"There was a law firm we were working with, and we saw a large of amount of data being exfiltrated each week after their board meetings. It turned out the video conferencing equipment was compromised, and the microphone was recording conversations and sending the data out to an unknown destination. That could be about an M&A deal being prepared, or legal proceedings."

She pointed out that this is a form of attack she expects to see more often in the near future.

"We haven't seen this at scale yet, but we're not far from it. You can train computers to say this is the information I'm interested in, go forth and find me more like that. You can attack entire infrastructure of a building, working at scale. Or maybe you only want to find information from one particular baord member - so you use facial recognition to only record when that person's in the room.

"That type of attack can scalevery rapidly, and AI and machine learning technologies are primed to analyse it."

Source: Computing

