April 10, 2020

How GCHQ Classifies Computer Security – Computerphile



The UK’s Government Communications Headquarters (GCHQ) deals in classified material, but how do you decide whether a computer system is secure? GCHQ asked Professor Uwe Aickelin and his team to investigate a means of scoring computer systems.

More Information (academic paper): http://bit.ly/computerphileuwegchq

The Problem with JPEG: https://youtu.be/yBX8GFqt6GA
XOR and the Half Adder: https://youtu.be/VPw9vPN-3ac
Face Recognition: COMING SOON
Nuggets of Data Gold: https://youtu.be/Zel2NCKej50

http://www.facebook.com/computerphile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: http://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com


23 thoughts on “How GCHQ Classifies Computer Security – Computerphile”

  1. Wait, what? Shouldn't the weakest part of the system determine how secure the system is? If all my admin accounts would take 1,000 years to crack, but there is a user with a 4-letter password who is a sudoer, then the whole system can be infiltrated and taken over by cracking that 4-letter password, which should take 5 seconds at most, plus the time you need to escalate privileges and actually take control. (A sketch of this weakest-link point appears below, after the comments.)

  2. I think they overlooked the simple answer to this question. If an organization comes to you and asks whether their systems are secure, look at the organization first. If a vulnerability is found tomorrow, the important factor is whether they have a team that can recognize and quickly fix it, and you can judge whether an organization has that capability. Find out how much power technical experts have in the organization: if they can be overridden by the CEO, a VP, a senior manager, or a lower-level one, every level that can override the technical assessment and prevent action raises the 'level of insecurity' by an order of magnitude.

    Check the organization's structure. Do they hire security experts from all over the globe and let them work remotely? If not, that makes them 100x more insecure (if not more). It limits their talent pool to those geographically nearby or willing and able to relocate, and beyond abandoning 99+% of the talent pool, it shows where the organization's priorities lie.

    Also check their hiring practices. Do they even MENTION security skills when hiring software and hardware engineers? You would be astonished how many companies claiming to produce secure systems do not even mention security knowledge or skills as desired, let alone required, on their job listings. If they pay industry-standard rates for software and hardware engineers, they are guaranteed not to be able to hire anyone who knows about security. Security knowledge is complex, difficult, and rarely taught in computer science and engineering curricula; the vast majority of software and hardware engineers know nothing about security beyond maybe sensible password-strength policies, and they are who the industry-average salaries are targeted at. If that's all you pay, that is all you will get.

    You should also find out how important domain knowledge is to them. If they have industry-typical turnover rates (almost always due to not giving raises larger than the rate at which experience grows a person's market value), they see domain-specific experience (the domain being their own systems) as worthless. That increases their vulnerability tremendously.

    One of my favorite hobbies is to see a business in the news for getting hacked, then go visit their job listings. You'll almost never find any history of hiring anyone with security as even a tertiary duty. Most organizations think security is black magic, or something where you just need a good rating from some consultants (whom you pay to give you that rating, while dismissing their expensive suggestions as impractical), then cross your fingers and hope a breach doesn't happen until after the executive in charge of security policy has moved on to another company.

  3. Couldn't that metric be improved upon? You might have an expert who knows a step is trivial while others identify it as hard because they lack his specific knowledge, and you'll never hear about it because he identifies a different step as the hardest.

    I'd modify this by asking each of the experts again about any step some other expert identified as hardest, taking the lowest rating for each such step (possibly dropping outliers first), and then taking the hardest of those steps as the final result. (See the second sketch after the comments.)

  4. Saying you shouldn't do work for intelligence agencies because they've done bad things is like saying you shouldn't cooperate with the police because some of them are bad.

  5. If every intelligent person with the "right" mindset refused to work for the government because he thinks what they do is wrong, the result would only be more negative.

  6. Seems obvious to me. It's like asking "how robust is this car we are building?" You need to find the most fragile part of the whole machine, and the lifespan of that part gives you an idea of how long until the car needs checking.

  7. But you have everything from social engineering to compromised hardware to careless employees. Hell, I could list 20 attack vectors that don't even need to go through the classical pathways, which seem to be the only ones discussed here.

  8. Using your example of the bicycle locked to the post: what if that lock were actually a rubber band, and the rubber band was very flexible and never broke? Could you program this rubber band to only stretch so far, and how much tensile force would be required to break it, if it could be broken at all? This is just me trying to reverse-engineer from example/concept to actual security programming architecture.
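
The weakest-link point in comment 1 can be made concrete with a short sketch. The accounts and crack-time figures below are hypothetical; the only claim is that the system-wide figure is the minimum over the paths to full control, not any particular number.

```python
# Hypothetical crack-time estimates (in years) for each path to full control.
crack_time_years = {
    "admin-1 (strong passphrase)": 1000.0,
    "admin-2 (strong passphrase)": 1000.0,
    "sudoer with a 4-letter password": 5 / (365 * 24 * 3600),  # roughly 5 seconds
}

# The effective security of the whole system is set by its weakest link.
weakest = min(crack_time_years, key=crack_time_years.get)
print(f"Weakest link: {weakest}")
print(f"Effective time to full compromise: ~{crack_time_years[weakest]:.1e} years")
```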
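
Comment 3's proposed refinement can be sketched in the same spirit. The steps, experts, and scores below are invented for illustration; the sketch only shows the re-rating and min/max aggregation the comment describes, not GCHQ's actual method.

```python
# Each expert rates each attack step by difficulty (higher = harder).
ratings = {
    "phish an employee":     {"A": 3, "B": 2, "C": 4},
    "escalate privileges":   {"A": 7, "B": 9, "C": 6},
    "exfiltrate undetected": {"A": 8, "B": 5, "C": 9},
}
experts = ["A", "B", "C"]

# Steps that at least one expert considered the hardest step overall.
candidates = {max(ratings, key=lambda step: ratings[step][e]) for e in experts}

# Re-score each candidate by its *lowest* rating: one expert who knows a
# shortcut outweighs the others who merely lack that knowledge.
step_scores = {step: min(ratings[step].values()) for step in candidates}

# The final figure is the hardest of these corrected steps.
final_step, final_score = max(step_scores.items(), key=lambda kv: kv[1])
print(f"Hardest necessary step: {final_step} (difficulty {final_score})")
```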
