Thoughts and Opinions from the London Computing Conference 2019

Introduction

Computing Conference (formerly called the Science and Information (SAI) Conference) is a research conference held in London, UK, since 2013. The conference series has featured keynote talks, special sessions, poster presentations, tutorials, workshops, and contributed papers each year. This year the event took place on 16 to 17 July. This blog post aims to highlight some of the ideas and research I observed and how they might apply to standards work.

Day 1

Keynote Talks
The first keynote talk was about the rise of accelerators in computing. Specialist chips are increasingly required because general-purpose chip designs have plateaued, or are near their limits, when it comes to speed increases and die shrinkage. Greater energy efficiency in chip designs is also becoming more important, and part of the solution is decreasing memory/data latency when fetching and processing information. For key applications there is already greater use of purpose-built single-use chips, e.g. Google's Tensor Processing Unit for machine learning. General-purpose computing on graphics processing units (GPGPU) is already a common design used for accelerator devices. The question now being posed is: can we make single-use chips adaptable? One possible solution put forward in the talk is Software Defined Hardware (DARPA funded), which has the goal of tailoring or changing the chip for each type of application that is run on it.

The second keynote was on robotics and computer vision: making machines see. Within the research field there was slow progress from the 80s to the 2000s; then, during the 2010s, progress accelerated through the use of GPUs along with deep/machine learning. Neural networks helped accelerate object recognition from images, while computer vision research in turn accelerated deep learning. These advances happened because of parallel advances in big data, chip design and algorithms. Real-world use cases include AR smart glass devices for the visually impaired, helping them identify people and objects.

The big goal is autonomous cars/driving, which will require AR and computer vision to be a success. There is a socio-technical trend of moving away from ownership towards on-demand mobility services. Companies will likely specialise and optimise services for key cities or countries due to their different environments and behaviours. But this is still a long-term goal; more likely there will be feature creep towards full autonomy, and for widespread use costs will need to fall below owning a car or using public transport: self-driving cars as a service. A few key technologies are needed, including 360-degree depth stereo vision paired with lidar, to allow cars to navigate without GNSS in urban canyons in any weather, along with detailed offline maps paired with inertial navigation. Also, to make self-driving a safe success we need to understand and predict human behaviour. Finally, computer vision systems generate false positives too easily; we need to understand why they fail.


Healthcare Applications

There were elements with relevance to C3L work on eHealth standards. Work is being done on developing and collating datasets for diagnosing known stages of disease from X-ray tests. These can increase the identification rate and decrease the time of discovery from X-rays, and also allow for better diagnosis of rare diseases that a doctor may have limited knowledge of. For machine learning to generate useful results, or to be used efficiently in healthcare and related areas, optimised datasets are required: noise needs to be removed to improve the efficiency and accuracy of results. Also, when using AI within eHealth it needs to be understandable and able to provide domain-specific information and context; this is needed to make AI useful as an everyday tool and not just one used by AI specialists. By adapting already-existing datasets to train AI on, we do not have to reinvent the wheel. But most of these systems are not yet ready for clinical use, since they are missing "healthy" datasets.

Software Engineering

When it comes to designing and developing software, we need broad representation from society to prevent bias towards one set of users over others. The expanded use of virtual testing allows a greater number of variables and scenarios to be run against software and its associated systems; many more runs can be done than when using the physical systems. Automated testing of software also serves to show if, or how, a software design will fail under key conditions or scenarios, and can be repeated thousands of times in a short period. A research project that aimed to identify bad versus good practices in software development showed useful promise: "A Methodology for Performing Meta-analyses of Developers Attitudes towards Programming Practises" by Thomas Butler. If this research could be extended and applied to identifying secure versus non-secure practices, that would be a good real-life application.
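As a minimal sketch of what that kind of automated scenario testing can look like, here is an illustrative Python harness that runs thousands of randomised scenarios against a hypothetical function and checks its invariants (the function and its properties are my own example, not from the conference):

    import random

    def divide_load(total_requests: int, servers: int) -> list[int]:
        """Hypothetical function under test: spread requests across servers."""
        base, extra = divmod(total_requests, servers)
        return [base + (1 if i < extra else 0) for i in range(servers)]

    def test_randomised(runs: int = 10_000) -> None:
        """Run thousands of randomised scenarios looking for failure conditions."""
        for _ in range(runs):
            total = random.randint(0, 1_000_000)
            servers = random.randint(1, 500)
            shares = divide_load(total, servers)
            # Invariants that must hold in every scenario
            assert sum(shares) == total, "requests lost or duplicated"
            assert max(shares) - min(shares) <= 1, "load imbalance"

    if __name__ == "__main__":
        test_randomised()
        print("10,000 randomised scenarios passed")

Property-based testing libraries such as Hypothesis automate exactly this pattern, including shrinking failing cases down to minimal examples.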

Ambient Intelligence

Ambient intelligence is the element of a pervasive computing environment that enables it to interact with, and respond appropriately to, the humans in that environment. It includes augmented reality (AR), mixed reality (MR) and virtual reality (VR). Some of the ideas and practices presented are relevant to the potential future ETSI has planned with their AR Working Group.

Using tools that include AR, VR and MR to enhance training and/or education allows the practising/drilling of scenarios that are difficult or costly to do regularly. However, any methods used require testing and validation against real-world results to prove useful. Everyday devices can be adapted for use with AR/VR/MR, including smartphones, glasses, screens and surfaces in general, and these can be used alongside purpose-built devices to enhance realism. Finally, AR/MR have the potential to increase safety by enhancing awareness or highlighting dangers to users.

Day 2

Keynote Talks
The first talk was about the need to better understand the negative (not just the positive) consequences of the socio-technical construct. The negative symptoms include digital alienation, the dumbing down of users, and addiction. We need to consider and better understand the actions and roles of humans in the computing loop, with the move from personalised computing to socialised computing, both of which have positives and negatives. With the coming of 5G and the IoT revolution, how do we enable society to stay within the processing loop? We need to know and understand how companies plan to use data, technology and knowledge, because these elements can be used for the good of society or to negatively affect it. Understanding motivations within the socio-technical construct of technology companies versus society will be vital.

I did have a point of disagreement with the keynote, though: the ICT paradox was presented as information versus knowledge, whereas I would argue the hierarchy of information leads from data to knowledge. Instead, I think it is open, free access to information/knowledge versus closed or restricted access. Another takeaway from the keynote talks was how a user's behaviour through their smartphones/apps can affect their privacy or lead them to give away their data with little to no understanding of what it is being used for. Users need to better understand the link between their actions and the consequences of their use of technology and related services.

Privacy and Security

There was a single session on privacy and security, broken into three parts: Part 1 in the morning, Part 2 after lunch, and Part 3 as the final session of the day. There is no theme to each part; it is just how I recorded my notes for the day.

Part 1
With the continued push towards password managers, driven by the advice to use a unique password for each site or service an individual uses, there is a growing need for proof testing that these password managers are secure. One possible solution is greater use of hashing of stored details paired with encryption, together with good practice such as never storing the master password or login passwords as (or making them accessible in) plain text. Also, users should make use of 2FA as the norm.
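As a minimal sketch of that good practice, the master password below is only ever used to derive an encryption key and is never stored. This assumes the third-party cryptography package; the names and parameters are illustrative rather than taken from any specific password manager:

    import base64
    import hashlib
    import os

    from cryptography.fernet import Fernet  # pip install cryptography

    def derive_key(master_password: str, salt: bytes) -> bytes:
        """Derive a 32-byte encryption key from the master password.
        The password itself is never stored; only this derived key is
        held in memory while the vault is unlocked."""
        raw = hashlib.pbkdf2_hmac(
            "sha256", master_password.encode(), salt, iterations=600_000
        )
        return base64.urlsafe_b64encode(raw)  # Fernet expects a base64 key

    # The random salt is stored alongside the vault; the password is not.
    salt = os.urandom(16)
    vault = Fernet(derive_key("correct horse battery staple", salt))

    # Entries are only ever persisted in encrypted form.
    token = vault.encrypt(b"example.com: hunter2")
    print(vault.decrypt(token))  # b'example.com: hunter2'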
When it comes to email forensics, investigators should not violate a person's privacy. Best practice is to use keyword searches, to prevent abuse of power, and ideally the tools used should strip out identifiable information. The main application is law enforcement investigations, but the same techniques can also be used to better identify malicious emails and block junk or phishing.
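A minimal sketch of that stripping idea, redacting email addresses before review and returning only keyword hits rather than full content (the patterns and function names are my own illustration):

    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def redact(text: str) -> str:
        """Strip directly identifying email addresses before human review."""
        return EMAIL.sub("[REDACTED]", text)

    def keyword_hits(text: str, keywords: list[str]) -> list[str]:
        """Report which keywords are present, not the surrounding content."""
        lowered = text.lower()
        return [k for k in keywords if k.lower() in lowered]

    message = "From: alice@example.com - transfer the funds tonight"
    print(redact(message))                                  # identity removed
    print(keyword_hits(message, ["transfer", "funds", "payroll"]))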
There was some interesting research on solving the problems of quantum-safe cryptography, with a focus on lattice-based algorithms, looking at ways to optimise and improve the efficiency of the algorithms being used in order to reduce run times from days to hours.
Cyber-Physical Systems and Interactions looked at how to integrate real-time situational awareness across cyber and physical systems, improving the security of assets, networks and users by monitoring the dependencies between them. For example, to identify a red flag: if an employee leaves the building at the end of the working day and their credentials are then used inside the building, that would be flagged.
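As a toy illustration of that red-flag rule, here is a minimal Python sketch that fuses physical (badge) and cyber (login) events; the event model is my own invention, not from the talk:

    from dataclasses import dataclass, field

    @dataclass
    class EmployeeState:
        """Tracks one employee across physical and cyber events."""
        inside_building: bool = False
        alerts: list[str] = field(default_factory=list)

        def badge_event(self, direction: str) -> None:
            self.inside_building = (direction == "in")

        def login_event(self, terminal_location: str) -> None:
            # Red flag: an onsite login from someone who has badged out.
            if terminal_location == "onsite" and not self.inside_building:
                self.alerts.append("onsite login while badged out")

    alice = EmployeeState()
    alice.badge_event("in")
    alice.login_event("onsite")   # consistent, no alert
    alice.badge_event("out")      # leaves at the end of the day
    alice.login_event("onsite")   # dependency violated, flagged
    print(alice.alerts)           # ['onsite login while badged out']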
There was an interesting talk about research into improving biometrics, which aimed to use a combination of different biometric data points to counter the weaknesses in individual types, improving accuracy by reducing incidents of false positives/negatives. However, drift or changes in part of the data could still cause failure unless records are updated regularly: age, illness, trauma or a dramatic change in appearance can all affect biometrics. Such systems would also still be vulnerable to attacks that swap data. Biometrics will always be useful but shouldn't be relied upon as the sole means of verification.
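A minimal sketch of how combining biometric types at the score level might look; the modalities, weights and threshold below are illustrative assumptions, not values from the talk:

    def fuse_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
        """Weighted fusion of per-modality match scores (each in 0..1)."""
        total_weight = sum(weights[m] for m in scores)
        return sum(scores[m] * weights[m] for m in scores) / total_weight

    # Modalities weighted in proportion to their assumed reliability.
    WEIGHTS = {"face": 0.3, "fingerprint": 0.5, "voice": 0.2}
    THRESHOLD = 0.7

    # A weak face match (ageing, glasses) is offset by a strong fingerprint.
    sample = {"face": 0.55, "fingerprint": 0.92, "voice": 0.70}
    fused = fuse_scores(sample, WEIGHTS)
    print(f"fused={fused:.2f}, accept={fused >= THRESHOLD}")  # accepted despite the weak face score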


Part 2
Some of the cybersecurity research shown at this conference (and at conferences in general) has unknown or unproven value in real-world testing, though some of it has future potential, such as "Deep Random based Key Exchange Protocol Resisting Unlimited MITM" by Thibault de Valroger.

A presentation on dynamic passwords showed some interesting ideas: the password has two parts, a static part and a dynamic part that regularly changes (e.g. using time or longitude). Implementation would rely on companies adopting or supporting it, and it would complement password managers if supported by them. It would act in a similar way to 2FA, but without a third party involved (a rough sketch of the idea follows at the end of this part). The key question that wasn't answered is how such a system would work under real-world conditions and how a typical end-user would make use of it.

Malicious domain detection uses open-source data to train algorithms to detect malicious domains, a pro-active approach to cybersecurity that can potentially identify unknown, not-yet-listed domains based on predictions of behaviour.

One thing I noticed is that presentations on cryptography aiming to be quantum-proof, while interesting, are often not friendly to non-mathematicians. There is also the question of whether these schemes will remain functional under real-world attack from a quantum computer.

For IoT, optimising and increasing the power efficiency of encryption schemes is vital to ensuring protection from attack. A key problem is that the potential of 5G will lead to an explosion in IoT device numbers.
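As a rough sketch of the dynamic-password idea, assuming the dynamic part is time-based (the talk also suggested inputs such as longitude; the scheme below is my own simplification, similar in spirit to TOTP):

    import hashlib
    import hmac
    import time

    def dynamic_password(static_part: str, secret: bytes, interval: int = 60) -> str:
        """Static part plus a dynamic suffix that changes every `interval` seconds."""
        window = int(time.time()) // interval
        digest = hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()
        return static_part + digest[:6]

    secret = b"per-user shared secret"
    print(dynamic_password("hunter2", secret))  # 'hunter2' + 6 hex characters

The verifier recomputes the same value server-side from the shared secret, which is why, unlike typical 2FA, no third party is involved.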

Part 3
Systems and services that are vital to public life, e.g. eVoting schemes, need a high degree of trust and integrity; without that trust the results will no longer be accepted, which is vital in democratic nations. Financial fraud and money laundering link to how cyber attackers aim to hide their gains, and there is a need to better identify false negatives. Simulators can be used to train detection systems to better detect fraud; they can also model fraud agents and customer agents, train software to detect behaviour, and allow controls to be tested for effectiveness.
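A toy sketch of the simulator idea: two agent populations (customers and fraudsters) generate labelled transactions, and a candidate detection rule is then scored against the simulated stream. All numbers here are invented purely for illustration:

    import random

    def simulate_transactions(n: int, fraud_rate: float = 0.02):
        """Customer agents make small, infrequent payments; fraud agents make
        rarer but larger, rapid-fire ones. Returns (amount, gap_minutes, label)."""
        data = []
        for _ in range(n):
            if random.random() < fraud_rate:          # fraud agent
                amount = random.uniform(800, 5_000)
                gap_minutes = random.uniform(0.1, 2)
                label = 1
            else:                                     # customer agent
                amount = random.uniform(5, 300)
                gap_minutes = random.uniform(30, 2_880)
                label = 0
            data.append((amount, gap_minutes, label))
        return data

    # Test a candidate control against the simulated stream.
    flagged = [t for t in simulate_transactions(100_000)
               if t[0] > 500 and t[1] < 5]
    caught = sum(t[2] for t in flagged)
    print(f"flagged={len(flagged)}, of which fraudulent={caught}")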

Conclusion
Overall, I found it a good and useful conference. I also found out that having your presentation right at the end of the conference, and talking about a more emotive topic (the socio-technical environment of online sextortion) after the audience has listened to interesting but dry subjects, is the easiest way to end up with a faltering talk. I did get through it, but I will re-think how to present on this subject in the future.
