
Showing posts from May, 2020

Elements in Securing AI - Part 4 AI for Defence

Introduction

The fourth and final part in this series about securing the elements of AI gives an overview of AI for defence: the ability of AI, when used benignly, to develop better and more automated security technologies to defend against cyberattacks.

Examples of AI for Defence

- Discovering new vulnerabilities -- and, more importantly, new types of vulnerabilities -- in systems, both by the offence to exploit and by the defence to patch, and then automatically exploiting or patching them.
- Reacting and adapting to an adversary's actions, again on both the offence and defence sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.
- Abstracting lessons from individual incidents, generalising them across systems and networks, and applying those lessons to increase attack and defence effectiveness elsewhere.
- Identifying strategic and tactical trends from large datasets and... (see the sketch below)
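The last of these lends itself to a concrete illustration. Below is a minimal sketch, using invented feature names and purely synthetic "flow records" with scikit-learn's IsolationForest, of unsupervised anomaly detection over network telemetry to surface unusual activity for an analyst to triage.

```python
# Minimal sketch: unsupervised anomaly detection over synthetic network
# telemetry. All data and feature choices are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Synthetic flow records: bytes sent, duration (s), distinct ports touched.
normal = rng.normal(loc=[500, 2.0, 3], scale=[100, 0.5, 1], size=(2000, 3))
# A handful of anomalous flows: large, short transfers probing many ports.
attacks = rng.normal(loc=[5000, 0.2, 40], scale=[500, 0.1, 5], size=(20, 3))

X = np.vstack([normal, attacks])

# Fit the detector on the mixed traffic; contamination is our prior guess
# at the fraction of anomalies present.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)

flags = detector.predict(X)  # -1 = anomaly, 1 = normal
print("flows flagged as anomalous:", int((flags == -1).sum()))
```

The same pattern scales to real telemetry: the value of the AI here is not any single flagged flow, but the ability to keep ranking unusual activity across far more data than an analyst could review by hand.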

Elements in Securing AI - Part 3 AI for Cyberattacks

Introduction

The third part in this series gives an overview of AI for cyberattacks, in which attackers leverage the ability of AI to launch attacks automatically or to speed them up, typically with serious impacts on services and infrastructure.

Examples of AI for Cyberattacks

Impersonation of trusted users: Analysing large data sets helps attackers prioritise their victims based on online behaviour and estimated wealth. Predictive models can go further and estimate willingness to pay a ransom based on historical data, and even adjust the size of the demand to maximise the chance of payment and, therefore, revenue for cyber-criminals. Data available in the public domain, combined with secrets previously exposed through various data breaches, can now be assembled into a detailed victim profile in a matter of seconds with no human effort. Once the victim is selected, AI can be used to create and tailor emails and sites that would be most likely clicked on bas...

COVID-19 and eHealth standards

As nobody can have missed, the world is under sustained pressure resulting from the COVID-19 pandemic. At ETSI the eHealth group has been trying to work out what the response of the standards world should be. Whilst we have active work items at ETSI looking at the development of the underlying use cases for diagnostic and therapeutic eHealth, and at the requirements for data in support of eHealth, neither of these explicitly addresses the crisis associated with COVID-19. So as part of our response Suno Wood and I have been working away at a white paper, to be published by ETSI but presenting a personal opinion. I'm using this blog post to review a few of the points from the white paper, sometimes rather more forcefully. COVID-19, and pandemics of the same scale, are rare, but even rarer is a health crisis that affects every citizen of our modern, interconnected world, leading to a global economic crisis. Far-reaching political decisions are being made and cha...

Elements in Securing AI - Part 2 Attacks and Defences to AI Systems

Introduction

The second part gives an overview of discovering security vulnerabilities in, and attacks against, AI systems or systems with AI components, and of developing effective defensive techniques to address these types of attacks.

Attacks on AI Systems

Backdoor Attacks: Machine learning models are often trained on data from potentially untrustworthy sources, including crowd-sourced information, social media data, and user-generated data such as customer satisfaction ratings, purchasing history, or web traffic. Recent work has shown that adversaries can introduce backdoors or “trojans” into machine learning models by poisoning training sets with malicious samples. The resulting models perform as expected on normal training and testing data, but behave badly on specific attacker-chosen inputs. For example, an attacker could introduce a backdoor into a deep neural network (DNN) trained to recognise traffic signs so that it achieves high accuracy on standard inputs but miscla...
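To make the backdoor idea concrete, here is a minimal sketch on purely synthetic data, using a scikit-learn logistic regression as a stand-in for the DNN in the example above: a small number of poisoned training samples carrying a fixed trigger value teach the model a hidden rule, so accuracy on clean data stays high while triggered inputs are pulled towards the attacker's chosen class.

```python
# Minimal sketch of a backdoor/poisoning attack on a toy classifier.
# All data is synthetic; the linear model is a stand-in for a real DNN.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean two-class data: 20 features, label determined by feature 0.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] > 0).astype(int)

# The attacker poisons a small fraction of the training set: samples are
# stamped with a trigger (a fixed large value on the last feature) and
# labelled with the attacker's target class (0) regardless of content.
TRIGGER_VALUE = 10.0
X_poison = rng.normal(size=(100, 20))
X_poison[:, -1] = TRIGGER_VALUE
y_poison = np.zeros(100, dtype=int)

model = LogisticRegression(max_iter=1000)
model.fit(np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

# On clean test data accuracy stays high, so the backdoor is hard to spot.
X_test = rng.normal(size=(500, 20))
y_test = (X_test[:, 0] > 0).astype(int)
print("clean accuracy:", model.score(X_test, y_test))

# The same inputs with the trigger applied are pulled to the target class.
X_trig = X_test.copy()
X_trig[:, -1] = TRIGGER_VALUE
print("triggered inputs predicted as target class:",
      (model.predict(X_trig) == 0).mean())
```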

Elements in Securing AI - Part 1 Attacks and Mitigations of AI Components

Introduction

The first part in the series about elements in securing AI gives an overview of the attacks and mitigations involving AI components, also known as AI self-security. This covers securing AI components against attacks and mitigating AI component vulnerabilities.

Types of Attacks against AI Components

- Attacks that target the underlying hardware, software and networks that form part of the AI system. These are essentially traditional cyberattacks, using malware, unpatched vulnerabilities and zero-day exploits to cause damage.
- Adversarial attacks, where input data has been subtly tweaked to trick a neural network and fool systems into seeing something that isn't there, ignoring what is, or misclassifying objects entirely (see the sketch below).
- Other attacks that work in a similar way. A training-time attack, for example, occurs while the machine-learning model is being built, with malicious data being used to train...
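As a toy illustration of the adversarial idea above, the sketch below uses numpy only, with an invented linear classifier standing in for a neural network: it computes the smallest uniform nudge to the input features that flips the model's decision. Real attacks such as FGSM apply the same sign-of-gradient idea to deep networks.

```python
# Minimal sketch of an evasion-style adversarial perturbation against a
# toy linear classifier. The model and input are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model: decision is sign(w.x + b).
w = rng.normal(size=50)
b = 0.0

x = rng.normal(size=50)
clean_score = w @ x + b
clean_label = int(clean_score > 0)

# Smallest uniform (L-infinity) step that crosses the decision boundary,
# plus a 10% margin; each feature moves by at most epsilon.
epsilon = 1.1 * abs(clean_score) / np.sum(np.abs(w))

# Move every feature in the direction that pushes the score towards the
# opposite class -- for a linear model this is exactly the FGSM direction.
direction = -np.sign(w) if clean_label == 1 else np.sign(w)
x_adv = x + epsilon * direction

adv_label = int(w @ x_adv + b > 0)
print("clean label:", clean_label, "-> adversarial label:", adv_label)
print("per-feature perturbation:", float(epsilon))
```

Note how small the per-feature change is relative to the feature scale: with many features, each one only has to move a little for the combined effect to cross the decision boundary, which is why such perturbations can be imperceptible in practice.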