Elements in Securing AI - Part 1: Attacks and Mitigations of AI Components

Introduction

This first part in the series on elements in securing AI gives an overview of the attacks and mitigations involving AI components, also known as AI self-security. It covers securing AI components against attacks and mitigating vulnerabilities in those components.

Types of Attacks against AI Components

  1. Attacks that target the underlying hardware, software and networks that form part of the AI system. These are essentially your traditional types of cyberattacks that make use of malware, unpatched vulnerabilities and zero-day exploits to cause damage. 
  2. Adversarial attacks, where data has been subtly tweaked to trick a neural network into seeing something that isn't there, ignoring something that is, or misclassifying objects entirely. These attacks come in two main forms. A training time attack occurs while the machine learning model is being built, with malicious data used to train the system: in a face recognition system, for example, the attacker could poison the model such that it recognises the attacker's face as an authorised person. An inference time attack, on the other hand, presents specially crafted inputs to the trained model, produced by algorithms such as the Fast Gradient Sign Method (FGSM) or the Carlini and Wagner attack, which subtly alter images to confuse the network (a minimal sketch of FGSM follows this list). 
  3. Sybil attacks, which poison the AI systems that people use every day, such as recommendation systems, are a common occurrence. A Sybil attack involves a single entity creating and controlling multiple fake accounts in order to manipulate the data that the AI uses to make decisions. A popular example is manipulating search engine rankings or recommendation systems to promote or demote certain pieces of content, but these attacks can also be used to socially engineer individuals in targeted attack scenarios. 
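
To make the inference time attack more concrete, below is a minimal sketch of the Fast Gradient Sign Method, assuming PyTorch; the model, image and label arguments are placeholders for any classifier that returns logits, a batch of images scaled to [0, 1] and their true class indices.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Fast Gradient Sign Method sketch: nudge each pixel in the direction
        that increases the model's loss, bounded by epsilon per pixel."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the sign of the gradient, then clamp back to the valid pixel range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

The perturbation moves each pixel by at most epsilon, which is often enough to change the predicted class while the image still looks unchanged to a human.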

Types of Mitigations for AI Components

  1. Protecting the underlying hardware, software and networks that make up an AI system relies on traditional cybersecurity measures, ideally applying secure by default/design principles, to prevent successful cyberattacks. 
  2. Defences that detect adversarial examples and defences that eliminate them will be the main ways to prevent or minimise successful adversarial attacks. For example, a neural network can be trained to spot adversarial images by including them in its training data, so that the network 'learns' to be somewhat robust to adversarial examples (a sketch of this appears after this list). However, this relies on generating your own adversarial examples, which may have limited effectiveness, and then waiting for an attacker's adversarial data to appear so that it can be identified and eliminated before any damage is done. 
  3. Trusted Certification is a partial solution for defeating Sybil attacks. It involves a trusted certifying authority (CA) that validates the one-to-one correspondence between an entity on the network and its associated identity. This centralised CA thus eliminates the problem of establishing a trust relationship between two communicating nodes. Although this approach intuitively seems like the ideal way to tackle these attacks, there are a number of implementation issues, particularly around how the CA should establish the entity-identity mapping. In real-world applications, this may incur an appreciable performance cost, especially if performed manually on large scale systems (a toy illustration appears as the second sketch after this list).
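
Referring back to point 2, here is a minimal sketch of adversarial training under the same PyTorch assumptions as the earlier FGSM example; model and optimiser are placeholders for any classifier and its optimiser.

    import torch.nn.functional as F

    def adversarial_training_step(model, optimiser, images, labels, epsilon=0.03):
        """One training step on a batch plus its FGSM-perturbed counterpart,
        so the network 'learns' some robustness to adversarial examples."""
        # Craft adversarial copies of the current batch (FGSM, as in the earlier sketch).
        perturbed = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(perturbed), labels).backward()
        adv_images = (perturbed + epsilon * perturbed.grad.sign()).clamp(0, 1).detach()

        # Train on the clean and adversarial versions of the batch together.
        optimiser.zero_grad()
        loss = F.cross_entropy(model(images), labels) + F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimiser.step()
        return loss.item()

Mixing the clean and adversarial losses in each step keeps accuracy on normal inputs while penalising the model for being fooled by the perturbed ones.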
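
For point 3, the following is a toy illustration of the trusted certification idea, not a production design: a hypothetical CertifyingAuthority class keeps a strict one-to-one registry of entities and identities and issues an HMAC tag as a stand-in for a real certificate; a real deployment would use proper public key infrastructure rather than a shared secret.

    import hmac
    import hashlib

    class CertifyingAuthority:
        """Toy central CA enforcing a one-to-one entity-to-identity mapping."""

        def __init__(self, secret_key: bytes):
            self._key = secret_key
            self._registry = {}  # entity_id -> identity

        def register(self, entity_id: str, identity: str) -> bytes:
            # Refuse duplicates: one entity, one identity, to resist Sybil accounts.
            if entity_id in self._registry or identity in self._registry.values():
                raise ValueError("entity or identity already registered")
            self._registry[entity_id] = identity
            return self._issue(entity_id, identity)

        def _issue(self, entity_id: str, identity: str) -> bytes:
            message = f"{entity_id}:{identity}".encode()
            return hmac.new(self._key, message, hashlib.sha256).digest()

        def verify(self, entity_id: str, identity: str, certificate: bytes) -> bool:
            # Any node can ask the CA to confirm a claimed entity-identity pairing.
            expected = self._issue(entity_id, identity)
            return hmac.compare_digest(expected, certificate)

    ca = CertifyingAuthority(secret_key=b"demo-only-key")
    cert = ca.register("node-42", "alice")
    print(ca.verify("node-42", "alice", cert))    # True
    print(ca.verify("node-42", "mallory", cert))  # False: identity does not match

The registration check is where the real-world cost sits: confirming that each entity maps to exactly one identity is the step that can become expensive, particularly if done manually at scale.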

Conclusion

Hopefully, this post has provided some useful introductory information on the attacks and mitigations of AI components, one of the elements needed in securing AI.
