Posts

Showing posts from March, 2020

An Overview of AI Standards and Protocols Development from Standards Organisations

Introduction This post aims to give an overview of the standards development work that other organisations and groups are carrying out. Most of the time these groups aim to stay in sync with each other, not necessarily duplicating each other's work, although that does happen, often tweaked to suit the regional or national environment/industry the standard is going to be used in or applies to. Often these groups will liaise with each other and member companies will work across multiple groups. There are a couple of cartoon strips which summarise nicely what we want to avoid with standards. https://governancexborders.com/2011/07/20/wise-cartoons-4-xkcd-on-standards/ https://dilbert.com/strip/2009-09-02 A partial solution to avoiding these problems is to maintain knowledge of what is going on in the world of standards and the direction they are moving in. The list below provides a summary and in the future will most likely get longer as standards groups for different

Cybersecurity and Disinformation

Introduction In the current situation relating to COVID-19, I thought it worth a reminder about the risks and nature of disinformation and its links to cybersecurity. In parallel to cyber-attacks that impact technological assets, threat actors are now conducting an increasing number of multi-faceted disinformation operations. The alleged objectives of these attacks are to infiltrate dependable information sources and to influence and distract public opinion, (social) media and the press. This is attempted by seeding distrust, undermining widely accepted societal and democratic values, and potentially influencing the outcome of important events such as elections. Such attacks can be perfectly disguised beneath the vast amount of publicly available information (often tailored to individual or group profiles) that people “consume” on a daily basis. This renders those attacks difficult to detect and mitigate. Disinformation operations are a clear reminder that cyber-space

Personal Interest - Hybrid and Electric Aircraft

Introduction As we pick up the pace to decarbonise, the aviation industry is meeting the challenge through R&D and demonstrator aircraft. I find it an interesting area, and one we need to succeed in. This post aims to give an overview of what type of research is going on and the direction it is heading in. Roles and Types Small propeller aircraft - Ultralight and sports planes will be the first to go electric, with different demonstrators already flying and in service. Often this involves replacing the engine and fuel tanks with a compact but high-powered electric motor and batteries, but at the moment they are range-limited. Air taxis or eVTOL - Electric vertical take-off and landing applications have huge potential to revolutionise commuting within cities. There are a variety of developments ongoing to create air taxis for short-haul, intra-city and city-airport trips, with infrastructure providers, regulatory authorities and municipalities all on board to make thi

Introduction to the ETSI "Securing Artificial Intelligence" Group

Introduction
• ETSI ISG SAI is the first technology standardisation group focusing on securing AI.
• ETSI ISG SAI was officially formed in September 2019; kick-off meeting on 23rd Oct 2019; second meeting on 20 Jan 2020; third meeting will be held on 2-3 April.
• Founding members: NCSC, BT, Huawei UK, Telefonica S.A, C3L
• Current scale: 33 members, 6 participants, together with the European Commission as Counsellor
• The Securing Artificial Intelligence Industry Specification Group (ISG SAI) will develop technical specifications that mitigate against threats arising from the deployment of AI, and threats to AI systems, from both other AIs and from conventional sources.
• As a pre-standardisation activity, the ISG SAI is intended to frame the security concerns arising from AI and to build the foundation of a longer-term response to the threats to AI by sponsoring the future development of normative techni