Elements in Securing AI - Part 4: AI for Defence


The fourth and final part in this series about securing the elements of AI gives an overview of AI for defence: how AI, when used benignly, can help develop better and more automated security technologies to defend against cyberattacks.

Examples of AI for Defence

  • Discovering new vulnerabilities -- and, more importantly, new types of vulnerabilities -- in systems, both by the offence to exploit and by the defence to patch, and then automatically exploiting or patching them.
  • Reacting and adapting to an adversary's actions, again both on the offence and defence sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.
  • Abstracting lessons from individual incidents, generalising them across systems and networks, and applying those lessons to increase attack and defence effectiveness elsewhere.
  • Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defence tactics.
Machine learning is key to enabling these capabilities. If systems can automatically learn and keep evolving, they stand a much better chance of withstanding attacks. The threat landscape is constantly changing, and leaving mitigation systems as they are today will only end in disaster. There are two broad types of machine learning: supervised and unsupervised. The former requires the machine to be fed labelled examples before it can complete tasks, so it is limited to the current threat landscape and what has happened in the past. The latter is much more proactive: after initial training, it continues to learn patterns on its own. Choosing the right type of machine learning is crucial to keeping up with emerging threats. If a system can only learn from past examples, how can it be expected to protect against a new kind of attack in the future?
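The contrast above can be sketched in a few lines. This is a minimal, hypothetical illustration (the signatures, traffic rates and threshold are invented for the example, not taken from any real product): a supervised-style detector can only flag what matches past labelled examples, while an unsupervised-style detector models "normal" and flags any deviation, including attacks it has never seen.

```python
import statistics

# Supervised-style detection: match against labelled examples of known
# attack tools. It can only catch what has been seen and labelled before.
KNOWN_BAD_SIGNATURES = {"sqlmap", "nikto"}  # hypothetical labels

def supervised_flag(user_agent: str) -> bool:
    """Flags traffic only if it resembles a past, labelled example."""
    return any(sig in user_agent.lower() for sig in KNOWN_BAD_SIGNATURES)

# Unsupervised-style detection: learn what "normal" looks like from
# unlabelled data, then flag statistical outliers -- so a previously
# unseen attack can still be caught.
def unsupervised_flag(rate: float, baseline: list[float], k: float = 3.0) -> bool:
    """Flags a request rate more than k standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(rate - mean) > k * stdev

baseline = [10.0, 12.0, 9.0, 11.0, 10.5, 9.5]  # requests/sec during normal traffic

print(supervised_flag("Mozilla/5.0 sqlmap/1.7"))  # known tool -> True
print(unsupervised_flag(95.0, baseline))          # unseen flood -> True
print(unsupervised_flag(10.2, baseline))          # normal rate -> False
```

The supervised detector misses any tool not in its signature set, which is precisely the limitation the paragraph above describes; the unsupervised one trades that blind spot for a tunable false-positive rate via `k`.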

Behaviour analytics is another avenue to explore. Machine learning techniques can be used to monitor system and human activity and detect potentially malicious deviations, which will be vital for spotting social engineering attacks. Alongside this, awareness should be raised among users: preventing social engineering attacks, discouraging password re-use and advocating two-factor authentication where possible.
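As a rough sketch of how behaviour analytics works in practice, the snippet below builds a per-user profile of habitual login hours and flags logins that fall outside it. All the data and thresholds here are hypothetical, chosen only to illustrate the baseline-and-deviation idea, not any specific vendor's approach.

```python
from collections import Counter

def build_profile(login_hours: list[int]) -> set[int]:
    """Returns the set of hours (0-23) this user habitually logs in at."""
    counts = Counter(login_hours)
    # Keep only hours seen more than once, smoothing out one-off events.
    return {hour for hour, n in counts.items() if n > 1}

def is_anomalous(profile: set[int], hour: int) -> bool:
    """A login outside the user's habitual hours is suspicious -- for
    example, a phished credential being replayed from another time zone."""
    return hour not in profile

# Hypothetical login history for one office-hours user.
history = [9, 9, 10, 10, 11, 14, 15, 9, 10, 16, 16]
profile = build_profile(history)

print(is_anomalous(profile, 10))  # usual morning login -> False
print(is_anomalous(profile, 3))   # 03:00 login -> True
```

A real system would profile many more signals (source IP, device, commands run, data volumes) and use proper statistical models rather than a set lookup, but the shape is the same: learn a baseline, then score deviations.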

Importantly though, when using AI for defence, we should assume that attackers will anticipate it. We must also track AI development and its application to cyber operations so that malicious applications can be credibly predicted. Achieving this requires collaboration between industry practitioners, academic researchers and policymakers. Legislators must account for the potential use of AI and refresh some of the legal definitions of "hacking." Researchers should carefully consider the potential malicious applications of their work. And patching and vulnerability management programmes should be given due attention in the corporate world.


Hopefully, this post has provided some useful points and an overview of using AI for defence. For the optimist, the applications of AI should favour cybersecurity operations over cyber-attacks, because defence is currently in a worse position than offence precisely because of its human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into traditionally human areas will rebalance that equation. It has been said that we overestimate the short-term effects of new technologies but underestimate their long-term effects. AI is notoriously hard to predict, so many of the details speculated here could be wrong, and AI is likely to introduce new asymmetries that cannot be foreseen. But AI is the most promising technology for bringing defence up to par with offence. For Internet security, that will change everything.



