Security: the impact of, and on, standards

The text of this post has also appeared, with minor differences, in the ETSI Newsletter "The Standard". The post asserts that standards are good and also that ETSI is good at making them.


ETSI has a long and illustrious history of developing effective standards, something that all ETSI members hold as true. Whilst many of the successes of ETSI lie in radio and networks, through GSM and its successors, DECT and TETRA, underpinning those successes is a commitment to providing security capabilities.

Effective provision of security in a standard is backed up by consideration of risk, and here ETSI has, over a number of years, led the world with its TVRA method. It is now extending this foundation work, with TC CYBER moving towards publication of standards that address "Secure by Default" and "Privacy by Design" alongside the structural elements of successful security, refining the Critical Security Controls (CSC) originally published by the SANS Institute and now available with an ETSI perspective as TR 103 305 (a multipart standard).

Misapplication of the CSC by human error, malicious or accidental, will lead to system vulnerabilities. The security domain has always sought input from users and human factors experts in addressing such errors. This is particularly important if the application or deployment of security measures relies on human users.

The first of the CSC requires that organizations make an inventory of authorized and unauthorized devices. It looks relatively simple - identify the devices you want to authorize and those you don't. However, this introduces the Rumsfeld conundrum[1] - "… there are known knowns … there are known unknowns … there are also unknown unknowns …" - and we have to assume that it is not possible to identify everything. The second of the CSC, to prepare an inventory of authorized and unauthorized software, also has the Rumsfeld conundrum at the root of its problem.
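The inventory problem of the first two controls can be pictured as a simple set comparison. The sketch below is purely illustrative, with hypothetical device names; real network discovery is far harder, not least because the unknown unknowns never appear in any scan at all:

```python
# Hypothetical device inventory check (illustrative names only).
authorized = {"laptop-01", "printer-02", "sensor-03"}
observed = {"laptop-01", "sensor-03", "camera-99"}  # e.g. from a network scan

unauthorized = observed - authorized  # seen on the network but not approved
missing = authorized - observed       # approved but not seen on the network
# Unknown unknowns are, by definition, in neither set.

print(sorted(unauthorized))  # ['camera-99']
print(sorted(missing))       # ['printer-02']
```

The set difference only surfaces the known unknowns; anything the scan cannot see stays invisible to this check, which is exactly the conundrum.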

The more flexible a device, the more likely it is to be attacked by exploiting its flexibility. We can also assert that the less flexible a device, the less able it will be to react to a threat by allowing itself to be modified. The nature of the devices, the mix of devices, and the connectivity of devices are all critical elements in identifying where a system may be attacked and where the most effective defenses have to be placed. This is the world where you now find new and emerging technologies such as virtualization, led by ETSI's ISG NFV, M2M and IoT technologies, dealt with in the oneM2M group and ETSI SmartM2M, or autonomic and semi-intelligent networks, to name a few. Most of these technologies fall into the "unknown unknowns" category at the beginning of their development, which will be explored a little more in this post.
Standards enable and assert interoperability on the understanding that:

Interoperability = The union of {Semantics, Syntax, Language, Mechanics}  

Quite simply if any of the elements is missing then interoperability cannot be guaranteed.
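That assertion can be restated as a simple completeness check - a toy sketch, treating each of the four elements as a capability that is either present or absent:

```python
# Interoperability requires ALL four elements; missing any one breaks the guarantee.
REQUIRED = {"semantics", "syntax", "language", "mechanics"}

def interoperable(provided: set) -> bool:
    """True only when every required element is present."""
    return REQUIRED <= provided  # subset test

print(interoperable({"semantics", "syntax", "language", "mechanics"}))  # True
print(interoperable({"semantics", "syntax", "language"}))               # False
```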

The Rumsfeld conundrum
The use of the Johari Window method to identify issues is of interest here and is illustrated using the phrasing of Rumsfeld in Table 1.

Table 1.  Security concerns in Johari window style with Rumsfeld phrasing. 

                      | Known to self            | Not known to self
Known to others       | Known knowns (BOX 1)     | Unknown knowns (BOX 2)
Not known to others   | Known unknowns (BOX 3)   | Unknown unknowns (BOX 4)

The target for security designers is to maximize the size of Box 1 and to minimize the relative sizes of Box 2 and Box 3. In doing so, the risk of Box 4 growing without restraint is hopefully minimized (it can never be of zero size).

We can consider the effect of each "box" on the nature of the security we can offer:
BOX 1: Knowledge of an attack is public, and resources can be brought to bear to counter the threat by determining an effective countermeasure.
BOX 2: The outside world is aware of a vulnerability in your system and will distrust any claim you make if you do not address this blind spot.
BOX 3: The outside world is unaware of your knowledge and cannot make a reasonable assessment of the impact of any attack in this domain, or of the countermeasures applied to counter it.
BOX 4: The things you can do nothing about since, as far as you know, nothing exists there.

The obvious challenge is therefore to bring tools such as the 20 Critical Security Controls to bear to maximize Box 1, while using education and dissemination to minimize the size of Boxes 2 and 3. Box 3 is characteristic of the old, largely discredited, approach of security by obscurity, whereas Box 1 is characteristic of the open dissemination and collaborative approach of the world of open standards and open source development. Box 1 approaches do not guarantee the absence of security problems. Generally speaking, we expect problems to migrate from Box 4 to Boxes 2 and 3 before reaching Box 1 and, hopefully, mitigation.

The role of standards in security is crucial here, as standards have a primary role in giving assurance of interoperability. Opening up the threat model and the threats you anticipate - moving everything you can into Box 1[2] - in a format that is readily exchangeable and understandable is key. The corollary is that if we do not embrace a standards view we cannot share knowledge effectively, and so we grow our Box 2, 3 and 4 visions of the world. The resulting lack of knowledge makes defending our systems, and their users, ever more difficult and ultimately acts against us.

Confidentiality Integrity Availability paradigm
Application of the CIA paradigm works for Box 1 problems and will work reasonably well to mitigate problems from Boxes 2 and 3. One of the big problems in the real world is that many issues are either in Box 4 or at the limits of Boxes 2 and 3. As systems become more complex, how they react to stimuli becomes less certain, and more problems will be hidden in Box 4.

In the security domain, the need for interoperability is considered a "by default" criterion, but achieving interoperability is a necessary yet insufficient condition for making any claim of security. The technical domain of security is often described in terms of the CIA paradigm (Confidentiality, Integrity, Availability), wherein security capabilities are selected to counter risks to systems from various forms of cyber attack. The common model is to consider security in broad terms as determination of the triplet {threat, security-dimension, countermeasure}, leading to a triple such as {interception, confidentiality, encryption}. The threat in this example is interception, which puts the confidentiality of communication at risk, and to which the recommended countermeasure (protection measure) is encryption.

The challenge of quantum computers
One pressing concern for security, perhaps less so in other domains, is the constant change in attackers' ability to break systems. As said above, whilst achieving interoperability is a necessary condition, it is insufficient unless continuously reviewed. In ETSI, the technical groups have a good track record of reviewing and refining all published standards, security included. However, whilst much of this work is careful evolution, there is one domain which, for security, is causing a revolution in our thinking: quantum computing, and the impact it will have on our cryptographically protected systems. The ETSI TC CYBER and ISG QSC groups have published guides on this particular threat, which may indeed destroy large chunks of our cryptographic toolkit. One simple formula, published in EG 203 310, gives an understanding of the time available to update an organisation's computing and security base against attacks:

• Y = the time taken to patch the current system with one that is "safe" against all known vulnerabilities
• Z = the time taken to develop an attack against a system

If "Y > Z" then the system is vulnerable to the attack represented by Z.

For the special case of the threat from quantum computing to public key cryptography the equation above is extended:

• X = the number of years the public-key cryptography needs to remain unbroken.
• Y = the number of years it will take to replace the current system with one that is quantum-safe.
• Z = the number of years it will take to break the current tools, using quantum computers or other means.
If "X + Y > Z", any data protected by that public-key cryptographic system is at risk and immediate action needs to be taken. Thus if Z is estimated at 15 years, then the sum of X and Y has to be less than 15 years for the system to be safe. With the increasing pace of development of viable quantum computers, the value of Z is shrinking, and there is a genuine challenge for ETSI to address this in a period shorter than our best estimate of Z. This applies to all of ETSI's technologies - IoT, M2M, ITS, eHealth, cellular radio, NFV … Quite simply, nothing is exempt.
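The EG 203 310 inequality is simple to evaluate once estimates are in hand. The sketch below uses illustrative year values only, not figures taken from the standard:

```python
def at_risk(x_years: float, y_years: float, z_years: float) -> bool:
    """X = years the data must remain secret, Y = years to migrate to a
    quantum-safe system, Z = years until the current tools can be broken.
    Data is at risk when X + Y > Z."""
    return x_years + y_years > z_years

# Illustrative values: a 10-year secrecy requirement, a 7-year migration,
# and breaking tools estimated to be 15 years away.
print(at_risk(10, 7, 15))  # True: migration must start immediately
print(at_risk(5, 4, 15))   # False: comfortably within the window
```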

Main threats
In the security domain we can achieve our goals from both a technical and a procedural standpoint. This has to be backed up by a series of non-system deterrents, which may include the criminalization of the attack under law and a sufficient judicial penalty (e.g. imprisonment, financial penalty), with adequate law enforcement resources to capture and prosecute the perpetrator.

This in turn requires proper identification of the perpetrator. Traditionally, security considers systems as attacked by threat agents: entities that act adversely on the system. In many cases, however, there is a need to distinguish between the threat source and the threat actor, even if the end result in terms of technical countermeasures is much the same, although some aspects of policy and access to non-system deterrents will differ. A threat source is a person or organization that desires to breach security and will ultimately benefit from a compromise in some way (e.g. a nation state, criminal organization or activist), and who is in a position to recruit, influence or coerce a threat actor to mount an attack on their behalf. A threat actor is a person, or group of persons, who actually performs the attack (e.g. hackers, script kiddies, insiders such as employees, physical intruders). In the case of botnets, of course, the coerced actor is a machine, and its recruiter may itself be a machine. This requires a great deal of work to eliminate the innocent threat actor and to determine the threat source.

The relative simplicity, lack of connectivity and relatively low reconfiguration capability of many IoT/M2M devices make them an attractive class of devices for mounting attacks. Their large numbers make defense rather more challenging than for devices with strong authentication and control models built in.

Our easy-to-solve problems are almost always in Box 1, but, as noted above, challenges lie nearly everywhere.

The very broad view is thus that security functions are there to protect user content from eavesdropping (using encryption as the known counter to eavesdropping) and networks from fraud (authentication and key management services as the known counters to masquerade and manipulation attacks). What security standards cannot do is guarantee a security function outside the context for which that function was designed. Technical security measures give hard and fast assurance that, for example, the contents of an encrypted file can never be seen by somebody without the key to decrypt it - but only if the key is properly managed. You don't lock your house and then hang the key next to the door in open view; you must take precautions to prevent the key from getting into the wrong hands. The Dutch cryptographer Kerckhoffs stated that "a cryptosystem should be secure even if everything about the system, except the key, is public knowledge". In very crude terms, the mathematics of security, cryptography, provides us with a complicated set of locks, and we need to apply those locks to a technical system with the same degree of care we use when deciding where to place locks on a building or a car. Quite simply, there is no point installing a lock on a door if there is an open window next to it - the attacker will ignore the locked door and enter the house through the open window. Similarly, in a cyber system, if cryptographic locks are put in the wrong place the attacker will simply bypass them.



[1] "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones." Attributed to Donald Rumsfeld on 12-February-2002.
[2] The Box 1, Box 2 terminology arises from mapping the Rumsfeld conundrum onto a Johari Window analysis, addressed in more detail earlier in the post.
