Posts

Elements in Securing AI - Part 4 AI for Defence

Introduction

The fourth and final part in this series about securing the elements of AI gives an overview of AI for defence. This is about the ability of AI, when used benignly, to develop better, automated security technologies to defend against cyberattacks.

Examples of AI for Defence

Discovering new vulnerabilities -- and, more importantly, new types of vulnerabilities -- in systems, both by the offence to exploit and by the defence to patch, and then automatically exploiting or patching them. Reacting and adapting to an adversary's actions, again on both the offence and defence sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment. Abstracting lessons from individual incidents, generalising them across systems and networks, and applying those lessons to increase attack and defence effectiveness elsewhere. Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defence …

Elements in Securing AI - Part 3 AI for Cyberattacks

Introduction

The third part in this series will give an overview of AI for cyberattacks, which involves attackers leveraging the ability of AI to automatically launch or speed up attacks, typically with serious impacts on services and infrastructure.


Examples of AI for Cyberattacks

Impersonation of trusted users: Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further and determine willingness to pay the ransom based on historical data, and even adjust the size of pay-out to maximise the chances and, therefore, revenue for cyber-criminals.

Data available in the public domain, along with secrets previously leaked through various data breaches, can now be combined for detailed victim profiling in a matter of seconds with no human effort. Once the victim is selected, AI can be used to create and tailor the emails and sites they would be most likely to click on, based on the crunched data. Trust is built by e…

COVID-19 and eHealth standards

As nobody can have failed to notice, the world is under sustained pressure resulting from the COVID-19 pandemic. At ETSI the eHealth group has been trying to work out what the response of the standards world should be. Whilst we have active work items at ETSI looking at the development of the underlying use cases for diagnostic and therapeutic eHealth, and at the requirements for data in support of eHealth, neither of these explicitly addresses the COVID-19 crisis. So as part of our response Suno Wood and I have been working away at a white paper, to be published by ETSI but presenting a personal opinion. I'm using this blog post to review a few of the points from the white paper, sometimes in a rather more forceful way.
COVID-19, and pandemics of the same scale, are rare, but even rarer is a health crisis that affects every citizen of our modern, interconnected world, leading to a global economic crisis. Far-reaching political decisions are being made and changed daily. …

Elements in Securing AI - Part 2 Attacks and Defences to AI Systems

Introduction

The second part will give an overview of discovering security vulnerabilities in, and attacks on, AI systems or systems with AI components, and of developing effective defensive techniques to address these types of attacks.

Attacks on AI Systems

Backdoor Attacks: Machine learning models are often trained on data from potentially untrustworthy sources, including crowd-sourced information, social media data, and user-generated data such as customer satisfaction ratings, purchasing history, or web traffic. Recent work has shown that adversaries can introduce backdoors or “trojans” in machine learning models by poisoning training sets with malicious samples. The resulting models perform as expected on normal training and testing data, but behave badly on specific attacker-chosen inputs.

For example, an attacker could introduce a backdoor in a deep neural network (DNN) trained to recognise traffic signs so that it achieves high accuracy on standard inputs but misclassifies a stop sign as a sp…
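The poisoning mechanic behind such a backdoor can be illustrated with a deliberately tiny, hypothetical sketch (not from the post, and far simpler than a DNN): a nearest-centroid classifier is trained on a dataset into which the attacker has slipped a few samples stamped with a "trigger" feature and labelled with the attacker's target class. The model still classifies clean inputs correctly, but any input carrying the trigger is pulled towards the target class.

```python
# Toy, hypothetical illustration of backdoor poisoning: feature index 2 is the
# "trigger". Clean samples have it at 0.0; the attacker stamps it to 20.0 and
# mislabels the poisoned samples as target class 1.

def centroid(samples):
    n = len(samples)
    return [sum(x[i] for x in samples) / n for i in range(len(samples[0]))]

def train(dataset):
    """dataset: list of (features, label); returns a label -> centroid model."""
    by_label = {}
    for x, y in dataset:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Clean training data: class 0 near (0, 0), class 1 near (10, 10), trigger off.
clean = [([0.0, 0.0, 0.0], 0), ([1.0, 0.0, 0.0], 0),
         ([10.0, 10.0, 0.0], 1), ([9.0, 10.0, 0.0], 1)]

# Poisoned samples: class-0-looking inputs stamped with the trigger,
# but labelled as the attacker's target class 1.
poison = [([0.0, 1.0, 20.0], 1), ([1.0, 1.0, 20.0], 1)]

model = train(clean + poison)

print(predict(model, [0.5, 0.5, 0.0]))    # clean class-0 input -> 0 (normal behaviour)
print(predict(model, [9.5, 10.0, 0.0]))   # clean class-1 input -> 1 (accuracy preserved)
print(predict(model, [0.5, 0.5, 20.0]))   # same class-0 input + trigger -> 1 (backdoor fires)
```

The key property matches the description above: test accuracy on clean data stays high, so standard validation does not reveal the backdoor; only the attacker-chosen trigger input exposes it.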

Elements in Securing AI - Part 1 Attacks and Mitigations of AI Components

Introduction

The first part in the series about elements in securing AI will give an overview of the attacks and mitigations involving AI components, also known as AI self-security. This comprises securing AI components against attacks and mitigating AI component vulnerabilities.

Types of Attacks against AI Components

Attacks that target the underlying hardware, software and networks that form part of the AI system. These are essentially traditional cyberattacks that make use of malware, unpatched vulnerabilities and zero-day exploits to cause damage. Adversarial attacks are where data has been tweaked to trick a neural network, fooling systems into seeing something that isn't there, ignoring what is, or misclassifying objects entirely. Other attacks work in a similar way. A training-time attack, for example, occurs while the machine-learning model is being built, with malicious data being used to train the system; for example, in a face detectio…
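The adversarial "tweaking" described above can be sketched with a minimal, hypothetical example (not from the post): instead of a neural network, take a linear classifier and nudge each input feature by a small epsilon against the gradient of the correct-class score. This is the same sign-of-the-gradient idea that methods such as FGSM apply to neural networks, where a barely visible change flips the prediction.

```python
# Minimal, hypothetical sketch of an adversarial perturbation against a linear
# classifier: shift each feature by a small eps in the direction that lowers
# the class-1 score (the sign of the gradient with respect to the input).

def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, b, x):
    return 1 if score(w, b, x) > 0 else 0

def perturb(w, x, eps):
    """Return x nudged by eps per feature against the class-1 score gradient."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [1.0, -2.0, 0.5], -0.2    # toy "trained" weights (assumed for illustration)
x = [0.6, 0.1, 0.4]              # clean input, correctly classified as class 1

adv = perturb(w, x, eps=0.3)     # each feature moves by at most 0.3

print(classify(w, b, x))    # 1
print(classify(w, b, adv))  # 0 -- a small, bounded tweak flips the prediction
```

For a linear model the attack is exact; for a deep network the same perturbation direction is computed from the network's gradient, which is why these attacks transfer so readily between models.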

Translations, Culture and the Splinternet

Introduction
This post is essentially a meandering of ideas which will have some sort of point. It is about how things connect, and about taking a holistic approach to examining them. Elements and ideas never exist in isolation; they always connect and impact each other. A hobby of mine is translating, with French being the main language (mainly as a means to try and keep my knowledge of it) and also attempting Japanese. In browsing the web outside your own language you begin to notice that for some things you often have to jump through a couple of extra hoops. Largely these can be overcome through automatic translation services (Google Translate in Chrome being straightforward to use), which opens up new sources of information, media and, to a lesser extent, entertainment.
So how does this connect to the splinternet? The majority of articles about the Splinternet view it as something imposed or created by a technological and regulatory barrier. While there is another splinternet which is defined by la…

Applying Cooper's Colour Code to Cybersecurity

Introduction

This is a topic that has been on my mind for years while wondering how I can get the idea across. My area of expertise focuses on the human factors in cybersecurity, and for certain topics covering the mindset people need in order to avoid security failures (clicking links in emails without thinking being the most common), the question is how you give them a system to set their behaviour so they can identify and avoid security threats. I believe Cooper's Colour Code has the potential to form part of a toolkit for achieving this, though as with all things in security it is not a silver bullet and needs to be used in combination with other tools.

In the 1980s, handgun expert Jeff Cooper invented something called the Color Code to describe what he called the "combat mindset." Here is his summary:

In White you are unprepared and unready to take lethal action. If you are attacked in White you will probably die unless your adversary is totally inept. In Yellow you bring yours…