Why Predicting the Future of Cyber Security Means Thinking Like a Hacker

Evolving Patterns

Today’s conversations around cyber security emphasize increasingly sophisticated attack methods and a growing threat landscape. And while these concerns present a massive challenge to security professionals worldwide, some of the most nefarious threats still arrive via one of the most basic channels – almost 94 percent of attacks penetrate organizations through the inbox.

In terms of scalability, spam email attacks remain the simplest and easiest way of breaching a network. When spam first appeared, it was combatted by relatively simple filters placed at the edge of the email environment. However, attackers soon learnt that by packing their messages with random words, the Bayesian checks operating at the perimeter could be duped into letting a sizeable proportion of suspect messages through.
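The word-stuffing trick exploits how a naive Bayesian filter combines per-word evidence into a single score. A minimal sketch is below; the word probabilities, word lists, and 0.5 threshold are all invented for illustration, not drawn from any real filter:

```python
import math

# Hypothetical per-word spam probabilities such as a trained Bayesian
# filter might hold; every value here is invented for illustration.
SPAM_PROB = {
    "winner": 0.95, "free": 0.90, "claim": 0.85, "prize": 0.92,
    "meeting": 0.10, "weather": 0.15, "garden": 0.12, "recipe": 0.11,
    "calendar": 0.10, "sunday": 0.12,
}
DEFAULT = 0.4  # assumed score for words the filter has never seen

def spam_score(words):
    """Combine per-word probabilities with the classic naive-Bayes formula."""
    probs = [SPAM_PROB.get(w, DEFAULT) for w in words]
    spam = math.prod(probs)
    ham = math.prod(1 - p for p in probs)
    return spam / (spam + ham)

payload = ["winner", "free", "claim", "prize"]
padding = ["meeting", "weather", "garden", "recipe", "calendar", "sunday"]

print(spam_score(payload))            # near 1.0: obviously spam
print(spam_score(payload + padding))  # diluted well below a 0.5 threshold
```

Real filters use log-probabilities, smoothing, and far larger vocabularies, but the dilution effect the attackers discovered is the same: each innocuous word multiplies down the spam odds.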

Evidently, the age-old adage ‘where there’s a will there’s a way’ holds true. Cyber-criminals constantly evolve, adapting quickly to new defense tactics. Cyber-attacks are already slipping through existing security systems; the traditional approach to defense will therefore be insufficient against the attacks ahead, and organizations will be forced to play catch-up.

The Current AI Misnomer

In an attempt to combat this, several cybersecurity companies have added elements of machine learning to their products. While these solutions represent a step-change from traditional defense models, most of them amount to ‘AI building blocks’ requiring constant tuning and configuration. Ultimately, they rely on training sets of historical data to detect attacks.

But information about yesterday’s attacks cannot predict tomorrow’s threats. Hackers have taken advantage of this machine-learning strategy to deploy malicious algorithms that can adapt, learn, and continuously improve in order to evade detection.
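The limit of defenses built on historical data shows up even in the simplest case, a signature-based detector. The sketch below uses made-up payload strings and is purely illustrative: a trivially mutated variant of known malicious code produces an entirely new hash, so it sails past anything trained only on yesterday's samples.

```python
import hashlib

# Toy signature-based detector: the "training set" is just the hashes of
# yesterday's known-bad payloads (all payload strings here are invented).
known_malicious = {
    hashlib.sha256(p).hexdigest()
    for p in [b"evil-payload-v1", b"evil-payload-v2"]
}

def flags(payload: bytes) -> bool:
    """Return True if the payload matches a historical signature."""
    return hashlib.sha256(payload).hexdigest() in known_malicious

print(flags(b"evil-payload-v1"))  # True: seen before, caught
# Even a one-byte mutation yields a completely different hash, so the
# same malicious logic goes undetected by the historical signature set.
print(flags(b"evil-payload-v3"))  # False: novel variant slips through
```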

As cyber-criminals add new tools to their toolkit, we can soon anticipate an era where malicious AI is an increasingly common feature of attacker techniques. After all, hacking-as-a-service now exists on the internet, and AI as a means of attack will indubitably join the list of paid-for services which hackers can deploy. This game of cat-and-mouse, therefore, is set to stay.

Predicting Future Threats

While companies are trying to out-compete each other in terms of innovation claims and use of AI technology, the fundamental issue they face is a static way of thinking. By constantly coming at protection from the angle of the defender, they fail to account for the sheer creativity of hackers. A robust security strategy recognizes this. The emphasis must be to anticipate, rather than being reactionary; it is time to think like a hacker.

Hackers’ methods tend to follow set formulae:

  • The easiest targets are the most popular. Broadly, that means people, not perimeters.
  • Malicious code is disseminated as widely as possible, often by botnets or other automated systems.
  • Methods of hacking improve in line with wider developments in technology and software.
Currently, a key way that hackers are infiltrating networks is via unsecured Internet of Things devices. IoT tends to be a simple example of fairly insecure software and hardware, making it easy prey for attackers. Supercharged with AI, attackers could have the ability to infiltrate IoT devices at scale – automating the cyber-attack to target more potential victims at a higher success rate than a human criminal ever could.

Alternatively, an AI-powered attack might take the form of applying machine learning to an existing attack vector, improving its effectiveness and scalability. One early indicator of this type of attack is the Emotet trojan.

The Emotet trojan sends out relatively sophisticated emails; sophisticated in that they are partly personalized. They piggy-back onto existing communication threads in a victim’s email application, and spoof the contact address of one of the participants.

The trojan sends a message claiming to be from someone familiar. With some typical conversation pieces sprinkled in, followed by a stock phrase such as “Please download the attachment”, all but the most observant are caught out.

But by using natural language processing algorithms, freely available as libraries in relatively simple coding languages, the complexity of the cyber-game has increased. It is easy to see how the malware might use its analysis of the target’s usual language to better insert itself into an email thread.

Suddenly, the call to “click the attachment” might become a more specific request, contextualized within the email chain.

For instance, the AI-powered malware would be able to write: “attached are Jenny’s comments on the plans for the sales pitch, please review ASAP”. This type of message will be extremely effective at fooling recipients, and it will become nearly impossible to tell the real communication from the fake.

Future-proofing your Workplace

Cyber security is already a minefield for most organizations. And with almost everyone constantly busy in the modern workplace, it’s easy to see how security experts’ repeated warnings of caution and care could be missed, forgotten, or simply fall on deaf ears.

However, when we enter the era of AI-powered attacks, even the most security-conscious employee could still fall victim to these sophisticated forms of threat and social engineering. Human security teams will fail to keep up with these types of attacks, and will instead have to turn to the machine defender to fight back.

Companies across the globe are turning to AI-powered ‘immune systems’ for the answer. Capable of self-learning an organization’s DNA to the extent that it understands ‘self’ and ‘not self’, these technologies are capable of identifying even the subtlest threats in real time – and taking autonomous, precise action to curb an attack before it’s too late.
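The ‘self’ versus ‘not self’ idea can be sketched with the simplest possible baseline: learn what normal looks like for a device, then flag behavior that deviates sharply from that norm. The traffic figures below are invented, and real systems model far richer features than a single volume metric, but the principle is the same:

```python
import statistics

# Invented history of one device's normal daily outbound traffic (MB).
# A self-learning system would build this baseline per device, per metric.
baseline = [52, 48, 55, 50, 47, 53, 49, 51]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed_mb, k=3.0):
    """Flag traffic more than k standard deviations from the learned norm."""
    return abs(observed_mb - mean) > k * stdev

print(is_anomalous(54))   # within the device's learned notion of 'self'
print(is_anomalous(900))  # an exfiltration-sized burst reads as 'not self'
```

Because the baseline is learned from the organization’s own behavior rather than from historical attack signatures, this style of detection does not need to have seen an attack before in order to notice it.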

The sophistication of AI-powered attacks will render traditional security tools obsolete. But by arming ourselves with an always-on, evolving defensive AI system, the defenders will have the best chance of fighting another day.

Darktrace is a world leader when it comes to defending companies against these and other threats in cyberspace, using AI and its proprietary Autonomous Response technology.

Its self-learning AI is modeled on the human immune system and used by over 3,000 organizations to protect against threats to the cloud, email, IoT, networks, and industrial systems. This includes insider threat, industrial espionage, IoT compromises, zero-day malware, data loss, supply chain risk, and long-term infrastructure vulnerabilities.

Organizations looking to aggressively focus on growth in the digital age while being confident that they’re protected against all threats in cyberspace need a reliable partner. Darktrace has over 1,000 employees, 44 offices, and headquarters in San Francisco and Cambridge, UK. Every 3 seconds, Darktrace AI fights back against a cyber-threat, preventing it from causing damage. If you want Darktrace to defend your organization, explore their online resources here or reach out to them now.

*Some of the companies featured in this article are commercial partners of Tech Wire Asia

Article source at https://techwireasia.com/2020/02/spam-cybersecurity-ai-artificial-intelligence-defence/