by Simon Crosby
Over millennia, we humans evolved a powerful and personal instinct—trust—to help protect us as we make our way through life. We feel less vulnerable with those we trust, which fosters creativity and collaboration. Trust, and its dark complement, distrust, shape our identities and weave the fabric of society. They frame constitutions, underpin bills of rights and regulations, buttress international treaties, and fuel unrest and war.
What does trust mean in the digital world? The benefits of cloud computing and online collaboration seem limitless, but online our finely tuned instincts for trust, perfected for the physical world, are useless: Is that email really from a colleague? Is the attachment a photo, or a virus? Are your Twitter followers friends, foes, or bots? A single misplaced click can give an attacker access to your device and an opportunity to compromise your entire organization.
Unfortunately, the attacker will probably succeed even if your network is protected and you use the best security software. Why? It is mathematically provable that reliably detecting and blocking malware—the rationale of the security software industry—is impossible. In practice, detection rates for today’s advanced threats—typical of nation-state cyber-attacks—are under ten percent. With nearly 200,000 new attacks launched every day, today’s defenses are being overrun.
As the world rushes headlong into the online domain, the horrifying truth is that we cannot protect ourselves from the targeted attacks of financially and politically motivated adversaries. And, make no mistake, there has never been a time in history when the fruits of our innovation—our money, intellectual property, and critical infrastructure—were so readily accessible online, or so easily plundered.
Fortunately there is a light at the end of this tunnel, and it comes from making our devices a little more like us—by enforcing the principle of “need to know” between each application and the operating system: Each application gets the minimum access and information needed to operate correctly. This turns the computer security problem on its head because it avoids the need for detection. If applications and the operating system are mutually isolated, we can still protect the system when an application is attacked and compromised. Two technologies are required: first, a way to know that the system starts in a “known good” state, and second, a robust isolation mechanism to enforce the principle of “need to know” at runtime. Both can now be achieved by relying on the device hardware itself, rather than on software.
Many devices (including PCs, Macs, and mobile devices) verify at boot time that the installed operating system has not been modified, by computing a cryptographic signature of the boot software and comparing it with a securely stored, known-good value. This ensures that the system is in a known good state each time it is turned on.
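To make this concrete, here is a minimal sketch of such an integrity check, assuming a hypothetical OS image path and a reference value recorded when the device was provisioned; real devices perform this measurement in firmware or a secure co-processor before the operating system loads, not in application code:

```python
# Minimal sketch of a "known good" boot check (paths and values are illustrative).
import hashlib
import hmac

def measure(image_path: str) -> str:
    """Compute a SHA-256 digest (a "measurement") of the OS image on disk."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as image:
        for chunk in iter(lambda: image.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_good(image_path: str, stored_reference: str) -> bool:
    """Compare the fresh measurement with the securely stored reference value."""
    # hmac.compare_digest performs a constant-time comparison.
    return hmac.compare_digest(measure(image_path), stored_reference)

# Refuse to start if the OS image no longer matches the reference recorded
# at provisioning time (both arguments below are placeholders).
if not is_known_good("/boot/os-image.bin", "known-good-sha256-hex-digest"):
    raise SystemExit("OS image does not match its known-good measurement")
```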
Once the device is running, the principle of “need to know” requires that each application be able to access only the minimal set of resources (networks, files, and devices) that it needs to function correctly—and no more. For example, the (untrusted) Facebook browser tab only needs a single file—the cookie that stores data for www.facebook.com—and it only needs network access to other untrusted websites. But it must never be allowed to access other documents, high-value websites such as your bank, or applications on your corporate network, because otherwise a malicious advertisement delivered through Facebook could steal information or attack sites you value. Most websites don’t need USB device access, but if you want to use video in Skype® then it (and only it) should be given access to the webcam.
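The sketch below shows what such a per-task policy might look like. The task names, resource labels, and policy format are illustrative assumptions, not any particular product’s API, and real enforcement happens beneath the application, in the isolation layer, rather than in code like this:

```python
# Illustrative per-task "need to know" policy: deny by default, allow only
# the resources explicitly listed for each task.
from typing import NamedTuple

class Policy(NamedTuple):
    files: frozenset     # files the task may read or write
    networks: frozenset  # network destinations the task may reach
    devices: frozenset   # hardware devices the task may use

POLICIES = {
    "facebook-tab": Policy(
        files=frozenset({"cookies/www.facebook.com"}),
        networks=frozenset({"untrusted-web"}),
        devices=frozenset(),                      # no USB, no webcam
    ),
    "skype": Policy(
        files=frozenset(),
        networks=frozenset({"skype-service"}),
        devices=frozenset({"webcam", "microphone"}),
    ),
}

def allowed(task: str, kind: str, resource: str) -> bool:
    """Grant access only if the resource appears in the task's policy."""
    policy = POLICIES.get(task)
    if policy is None:
        return False                              # unknown task: deny everything
    return resource in getattr(policy, kind)

# The Facebook tab may read its own cookie, but never reach the corporate network.
assert allowed("facebook-tab", "files", "cookies/www.facebook.com")
assert not allowed("facebook-tab", "networks", "corporate-intranet")
assert allowed("skype", "devices", "webcam")
```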
A robust implementation of “need to know” requires unbreakable isolation—traditionally a difficult challenge in operating systems design, because all software is vulnerable. Now, thanks to the relentless progress of Moore’s Law, the virtualization features built into modern CPUs can be used to enforce mutual isolation between application tasks and the OS—a technique called micro-virtualization. Hardware isolation makes the attacker’s job hundreds of thousands of times harder and allows each device to protect itself by design, even on untrusted networks.
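As a small, concrete illustration, the following sketch checks on Linux, by reading /proc/cpuinfo, whether the CPU reports the hardware virtualization extensions (Intel VT-x, shown as “vmx”, or AMD-V, shown as “svm”) that hardware isolation schemes such as micro-virtualization build on. It only detects the capability; it does not itself isolate anything:

```python
# Check /proc/cpuinfo (Linux-specific) for hardware virtualization support.
def hardware_virtualization_available(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    try:
        with open(cpuinfo_path) as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("Hardware virtualization support:",
          "present" if hardware_virtualization_available() else "not detected")
```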
The evolution from software-centric to hardware-based protection promises a revolution in online security and heralds unexpected benefits: Although computers cannot discern good from bad, they are excellent at enforcing the rules of “need to know”—even when we humans make mistakes. So, when I mistakenly open a malicious PDF document or click on a poisoned URL, I can rely on my computer to protect me—by design.
The future of our society involves an immersive online existence, and nation-state-sponsored cyber-attacks are already a daily occurrence. By making every computer and mobile device a little more like us—using hardware to enforce “need to know”—we will be able to securely maximize our creative potential in the digital domain, protecting ourselves while taking advantage of the enormous opportunities of a global, social Internet.
About the Author
Simon Crosby is co-founder and CTO of Bromium, where he evangelizes the benefits of micro-virtualization for information security. Previously, Crosby was co-founder and CTO of XenSource, which was acquired by Citrix in 2007. Crosby earned his PhD in Computer Science from the University of Cambridge, UK.