WannaCry: the fear of change is to blame
With IT infrastructure a hodgepodge of legacy and modern technology, resistance to applying a simple patch contributed to the global cyber epidemic – and could contribute to the next
After every major cyberattack there follows a period of extended soul-searching on the part of IT and cybersecurity professionals. How could this have happened? And how can we keep it from happening again?
As we look back upon the spate of ransomware attacks so far this year, it’s natural to question why victims were so slow to implement readily available protections. A patch that would have prevented the now-infamous WannaCry outbreak, which infected parts of the NHS, had been available for two months. Moreover, the malware exploited an out-of-date Windows feature.
What’s often overlooked is that institutionalised resistance to change – not simple neglect – is one of the biggest reasons so many companies were left vulnerable.
I’ve had the opportunity to glimpse some of the internal response processes at dozens of businesses around the world in the wake of the WannaCry ransomware attack. In doing so, I witnessed all-too-frequent disconnects between perception and reality.
Many organisations initially assumed that the majority of their systems had been patched, only to realise that 20 to 30 percent of their systems remained susceptible. Others had patched, but had no way of validating whether the patch had actually taken effect. And still others remained hesitant to implement any fixes, fearing that the risk of a patch “breaking things” outweighed the risk of infection.
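Closing that validation gap does not require elaborate tooling. As a minimal sketch (the host names, the inventory format and the idea of comparing against MS17-010 KB numbers are illustrative assumptions, not any specific product’s API), an organisation could reconcile each machine’s reported hotfix list against the patch that blocked WannaCry’s SMB exploit:

```python
# Minimal patch-compliance sketch: given an inventory of installed
# hotfixes per host (however it was gathered), report which machines
# still lack any MS17-010 update. Hosts and data are illustrative.

# A few of the KB numbers Microsoft shipped MS17-010 under; any one
# of them closes the SMBv1 hole that WannaCry exploited.
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012598"}

def unpatched_hosts(inventory):
    """Return hosts whose installed-hotfix list contains none of the
    MS17-010 KB identifiers, sorted for stable reporting."""
    return sorted(
        host for host, kbs in inventory.items()
        if not MS17_010_KBS & set(kbs)
    )

if __name__ == "__main__":
    inventory = {
        "ward-pc-01": ["KB4012212", "KB3150513"],
        "ward-pc-02": ["KB3150513"],   # patch never applied
        "imaging-srv": ["KB4012598"],
        "reception-03": [],            # no data reported at all
    }
    for host in unpatched_hosts(inventory):
        print(f"UNPATCHED: {host}")
```

The point is less the script than the discipline: a report of machines that *cannot prove* they are patched, rather than an assumption that the rollout succeeded.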
A clash of philosophies
The situation is not unique to the WannaCry attack. Even as annual worldwide IT security spending approaches $100bn in 2017, businesses continue to struggle with basic systems management tasks, like patching, which are critical to so-called security hygiene. Software engineering disciplines have increasingly embraced agile, iterative approaches to software development. In contrast, IT and security operations teams have remained comparatively stagnant in many organisations. It’s a clash of philosophies: rapid, incremental, automated change versus methodical, planned-and-vetted broad strokes.
There are very practical reasons why operations teams are resistant to change. Most IT infrastructure is a hodgepodge of legacy and modern technology, pieced together over time.
According to a 2017 Frost & Sullivan survey, the average organisation runs four to seven different operating systems across their workstations and servers. Hundreds of business-critical applications – some that may be very sensitive to changes that break compatibility – may run on top of these platforms. And the responsibility for managing all of this infrastructure is often highly siloed among numerous teams – including outsourcers – that don’t always move at the same cadence. Nobody wants to be the one whose patching efforts bring down a crucial part of the business.
Unfortunately, the patch which could have stopped WannaCry from spreading had the “perfect” blend of attributes to slow its uptake. It affected nearly all versions of Windows, which means more system roles and types requiring testing. It also required a reboot (not all patches do), necessitating schedule coordination to minimise business impact.
Mind the gap
To be fair, modern IT change control processes all offer provisions for handling emergencies. But these ransomware attacks demanded a rapid out-of-band response, and many organisations lacked the fundamental systems management technology needed to quickly enact fixes and monitor outcomes at scale. This technical gap is what undermines change management: lacking the tools, people fall back on deliberation, process and consensus, and lose agility precisely when it is most critical.
The frightening truth is that ransomware campaigns like WannaCry, and NotPetya that followed it, could have been much worse. While NotPetya had a far broader impact – shipping giant Maersk alone reported that it cost the company between $200m and $300m – many organisations were simply lucky enough to escape the crosshairs. And this is despite the fact that our dependence on IT has grown significantly in the last 10 years.
IT leaders need to seize this opportunity to drive modernisation of the processes and technologies underpinning systems management and security operations – not simply focus on improving attack detection and response.
When future attacks inevitably evade our first lines of defence, taking an agile approach can keep small-scale, scattered infections from becoming the next global epidemic.
Tanium is a strategic partner of the CBI's Cyber Security Conference, held in London on 13 September