If you think about it, this makes a lot of sense: computers work much faster than humans, and they are less prone to human error. Hackers have found A.I. to be effective for deploying phishing attacks. In a 2016 study by ZeroFOX, an A.I. tool called SNAP_R generated spear-phishing tweets at a rate of about 6.75 per minute, fooling 275 out of 800 users into treating them as legitimate messages. By comparison, a staff writer at Forbes could only churn out about 1.075 tweets a minute, fooling 49 out of 129 users.
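A quick back-of-the-envelope calculation on the figures above shows where the machine's advantage really lies (the "victims per minute" framing here is my own, not a metric from the ZeroFOX study):

```python
# Rough comparison of the SNAP_R figures cited above.
# victims/minute = tweets per minute * fraction of users fooled

ai_rate = 6.75                      # spear-phishing tweets per minute (SNAP_R)
ai_fooled, ai_total = 275, 800

human_rate = 1.075                  # tweets per minute (Forbes staff writer)
human_fooled, human_total = 49, 129

ai_conversion = ai_fooled / ai_total            # ~34.4%
human_conversion = human_fooled / human_total   # ~38.0%

print(f"A.I.:  {ai_conversion:.1%} fooled, "
      f"{ai_rate * ai_conversion:.2f} victims/minute")
print(f"Human: {human_conversion:.1%} fooled, "
      f"{human_rate * human_conversion:.2f} victims/minute")
```

Notably, the human's conversion rate is actually a touch higher per tweet; the A.I. wins on sheer volume, netting roughly 2.3 victims per minute versus about 0.4 for the human.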
More recently, IBM has used machine learning to build proof-of-concept programs capable of slipping past some of the best security measures available. Of course, this also means we'll eventually have to deal with malware powered by artificial intelligence, assuming it isn't already being leveraged somewhere.
IBM's project, DeepLocker, showcased how video conferencing software can be weaponized: the malicious payload stayed dormant until facial recognition detected the target's face in a photograph. The IBM team, including lead researcher Marc Ph. Stoecklin, had this to say about these kinds of attacks: "This may have happened already, and we will see it two or three years from now."
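The core idea behind DeepLocker-style concealment, as IBM described it publicly, is that the payload is encrypted and the decryption key is derived from the trigger condition itself, so an analyst inspecting the binary sees only ciphertext. Below is a minimal sketch of that concept using only standard-library hashing and XOR; this is not IBM's code, and the `face-embedding` strings are hypothetical stand-ins for the output of a facial-recognition model:

```python
import hashlib

def derive_key(trigger_value: bytes, length: int) -> bytes:
    """Derive a keystream from the trigger condition (e.g., a face embedding).

    Because the key is computed from the trigger rather than stored,
    the payload cannot be recovered without the right input.
    """
    key = b""
    counter = 0
    while len(key) < length:
        key += hashlib.sha256(trigger_value + counter.to_bytes(4, "big")).digest()
        counter += 1
    return key[:length]

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' for illustration only."""
    return bytes(d ^ k for d, k in zip(data, key))

# Payload locked to a hypothetical target fingerprint at build time.
target_fingerprint = b"face-embedding-of-target"    # stand-in for model output
payload = b"run_malicious_routine()"
encrypted = xor_bytes(payload, derive_key(target_fingerprint, len(payload)))

# At runtime, the program tries to unlock with whatever face it observes.
observed = b"face-embedding-of-target"
decrypted = xor_bytes(encrypted, derive_key(observed, len(encrypted)))
assert decrypted == payload    # succeeds only when the observed face matches
```

The design point is concealment: static analysis of the shipped binary reveals neither the payload nor the target, because both are locked behind a key that exists only when the trigger condition is met.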
Other researchers have demonstrated that A.I. can be used in cyberattacks, even going so far as to build them with open-source tools. What do you think about this development? Are these threats already present, or is the biggest threat yet to come? Let us know in the comments.