Why Legacy System Users Prioritize Uptime Over Security

Dirk Hodgson, director of cybersecurity for NTT Australia, tells a story. He once worked with a firm that did scientific measurements using highly specialized equipment; one large piece had cost the company $2 million when purchased years ago.

The hardware caused no issues, and the manufacturer routinely replaced parts and performed maintenance under the contract. The security problem was the operating system: Windows XP. The company went to the manufacturer and asked whether the OS could be upgraded to a current, supported version.

Not a problem, replied the manufacturer. The company merely had to buy a new multimillion-dollar system, which would come with a current OS. As for updating the OS on the existing machine? The manufacturer declined.

“That thousand-dollar upgrade would require a multimillion-dollar investment,” Hodgson says. “Legacy software is definitely a big problem.”

For decades, security executives have battled legacy systems. The fight has gotten more intense as the threat landscape has grown more complicated, tangled up in remote workers, partners, IoT, and cloud integrations. There are many technological ways to try to mitigate the legacy threat — isolation, virtualization, replication in a sandbox, etc. — but none of those deal with corporate politics and the fear of letting security teams touch legacy systems at all.

Uptime Issues Take Priority for Line-of-Business

The issues with legacy systems fall into two distinct buckets: cybersecurity issues and uptime issues. For the line-of-business (LOB) executive, the uptime issue — the fear that touching anything in the legacy environment could cause the system to crash — is far more frightening. And because these legacy systems usually operate quite well day to day, the business executive sees zero reason to toy with them.

LOB executives also often legitimately worry that they won’t be able to restore the system if it does crash: the people who wrote the code are long gone, the vendor that manufactured the hardware may no longer be in business, and the documentation for the software is either nonexistent or woefully inadequate.

Worst of all, legacy systems are often truly mission-critical, such as those running assembly lines. A crash could easily halt production for an indeterminate period and, worse, trigger cascading failures across connected systems.

“The big surprise about legacy systems is that since they have been around for so long, almost everything else is connected to them,” says Michael Smith, field CTO at Vercara. “So you have this huge Gordian knot of dependencies that make it nearly impossible to upgrade or decommission that legacy system, and you have to do a lot of network and log analysis to understand what other systems are connecting to them and when.”
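
Smith’s prescription lends itself to a concrete sketch. The short Python script below is a minimal illustration rather than a production tool: it assumes a hypothetical CSV export of firewall or NetFlow-style connection logs (columns: timestamp, src_ip, dst_ip, dst_port) and tallies which systems talk to the legacy host, and when.

```python
"""Sketch: map which systems talk to a legacy host, and when.

Assumes a hypothetical CSV export of firewall/NetFlow logs with
columns: timestamp (ISO 8601), src_ip, dst_ip, dst_port.
"""
import csv
from collections import Counter
from datetime import datetime

LEGACY_HOST = "10.0.5.20"  # hypothetical address of the legacy system

peers = Counter()  # (src_ip, dst_port) -> connection count
hours = Counter()  # hour of day -> connection count

with open("conn_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["dst_ip"] != LEGACY_HOST:
            continue
        peers[(row["src_ip"], row["dst_port"])] += 1
        hours[datetime.fromisoformat(row["timestamp"]).hour] += 1

print("Dependent systems ((src_ip, dst_port) -> connections):")
for (src, port), count in peers.most_common():
    print(f"  {src} -> port {port}: {count}")

print("Connections to the legacy host by hour of day:")
for hour in sorted(hours):
    print(f"  {hour:02d}:00  {hours[hour]}")
```

Every (src_ip, dst_port) pair that surfaces is a dependency that could break if the legacy system is patched, moved, or decommissioned, and the hour-of-day tally suggests when touching it would be least disruptive.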

Bubble Wrap Doesn’t Work for Everything

“Business executives are right to be cautious when allowing security teams to touch mission-critical legacy systems,” says Eoin Hinchy, founder and CEO of Tines. “Security teams should instead focus on reducing the attack surface area of legacy systems. In other words, wrap them in bubble wrap.”
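
What might that bubble wrap look like in practice? One common form is strict network allowlisting. The sketch below is illustrative only: the hosts and ports are hypothetical, and in a real deployment the allowlist would come from dependency mapping like the log analysis above. It emits nftables-style rules that permit only known dependents to reach the legacy host and drop everything else.

```python
"""Sketch: 'bubble wrap' a legacy host via network allowlisting.

Emits nftables-style rules permitting only known dependents to reach
the legacy system. All hosts and ports are hypothetical; in practice
they would come from the dependency mapping above.
"""
LEGACY_HOST = "10.0.5.20"
ALLOWED = [
    ("10.0.1.11", 1433),  # hypothetical ERP front end -> database port
    ("10.0.2.7", 445),    # hypothetical backup job -> SMB share
]

rules = [
    f"ip saddr {src} ip daddr {LEGACY_HOST} tcp dport {port} accept"
    for src, port in ALLOWED
]
# Default-deny: anything else aimed at the legacy host is dropped.
rules.append(f"ip daddr {LEGACY_HOST} drop")

print("\n".join(rules))
```

Because rules like these can be enforced on a firewall or VLAN boundary in front of the legacy system, nothing on the legacy machine itself has to change.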

Although the bubble-wrap concept is a popular means of dealing with legacy, it doesn’t always work, and therein lies the real conundrum: not only does the effort sometimes fail, but there is no reliable way of predicting when it will.

“One of the challenges with legacy is that it is an accumulation of technical debt that amasses over time,” says David Burg, cybersecurity leader for Ernst & Young Americas. “When they were built, [developers] were working with the institutional knowledge that existed at that time. The architecture, interoperability, dependencies, and such were likely never documented. People come and go.”

Beyond the traditional security risks, NTT Australia’s Hodgson points out that system certification is another complicating factor. “A system is certified to a particular level. If patched, there is a reasonable chance that it will work fine, but you might lose the accreditation you bought,” he says.

And many of these specialized systems are physically difficult to replace even if the LOB chooses to do so. “Consider medical facilities installing MRI machines. They have to be craned in, you have to install lead in the walls,” Hodgson says. “You are going to be keeping that for a very long time.”

What CISOs Want

This brings the debate to a conflict between the ideal and the practical. From the board/CEO/CISO perspective, the ideal would be to replace all legacy systems with modernized ones that effortlessly support today’s cybersecurity and compliance requirements. But even if the enterprise is willing to spend the money to make that switch, it may simply not be practical.

“For many legacy system applications, data access, calculation, and even communications performance cannot be easily matched in a PC environment, if at all,” says Bob Hansmann, senior product marketing manager for security at Infoblox. “The work to migrate/rewrite Cobol, Fortran, RPG II, and other applications to PC platforms is mountainous and hard to cost-justify. And even if the code is migrated, it needs to be heavily tested and modified for performance — as in speed and accuracy — often due to how different PC hardware is from mainframe and mini hardware.”
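
Hansmann’s “accuracy” point is easy to underestimate. Mainframe business code typically computes in fixed-point decimal, while a naive PC rewrite often reaches for binary floating point; the illustrative Python snippet below (the figures are invented) shows how even a trivial calculation diverges between the two.

```python
from decimal import Decimal

# Sum a $0.10 charge over one million transactions (invented figures).
float_total = sum(0.10 for _ in range(1_000_000))               # binary float
decimal_total = sum(Decimal("0.10") for _ in range(1_000_000))  # fixed decimal

print(f"binary float total:  {float_total:.10f}")  # ~100000.0000013329 (drifted)
print(f"fixed decimal total: {decimal_total}")     # 100000.00 (exact)
```

Multiply that drift across decades of billing, tax, or interest calculations, and “tested for accuracy” stops being a formality.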

The lack of actionable documentation is a critical factor in updating legacy systems, but the problem is not limited to legacy. Today’s developers — whether it’s a software vendor creating apps for wide distribution or an enterprise developer creating homegrown software — still do not document code in any usable way. Thus, the next generation of legacy systems may suffer from the same problems.

Build Documentation Into Future Legacy

Ayman Al Issa, the industrial cybersecurity lead at McKinsey, labels the lack of actionable documentation today “a major issue.”

“We don’t see good documentation at all,” he says. “It’s a cultural issue. They don’t see the value of documentation. This includes maintenance issues and any change to the system. They are simply not documented. People are lazy about documenting everything.”

Al Issa suggests that companies need to create their own documentation based on the teams managing the systems. But to avoid the single-point-of-failure problem, “they need to do a rotation of duties so that there’s not only one person who can operate the systems,” he says.

In theory, management should insist that proper documentation happen, but in practice, managers are pressured to deliver. Once a developer completes Project A, do they insist that the developer spend a week documenting everything, or do they tell the developer to move on to the next project, which is what the developer wants to do anyway?

Burg says the only viable fix is to incorporate strong document requirements into the DevSecOps process: “We have to make this contemporaneous documentation or it won’t happen.”
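
One lightweight way to make documentation contemporaneous, in the spirit of Burg’s suggestion, is to gate merges on it. The Python check below is a hypothetical sketch for a CI pipeline: the src/ and docs/ path conventions and the origin/main base branch are assumptions, and a real policy would be more nuanced, but it fails any change that touches application code without touching documentation.

```python
"""Sketch: CI gate that fails code changes lacking a docs change.

The src/ and docs/ layout and the origin/main base branch are
assumptions; adapt them to the repository at hand.
"""
import subprocess
import sys

# List files changed on this branch relative to the main branch.
diff = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

code_changed = any(path.startswith("src/") for path in diff)
docs_changed = any(path.startswith("docs/") for path in diff)

if code_changed and not docs_changed:
    sys.exit("Code changed under src/ but docs/ is untouched: "
             "add contemporaneous documentation before merging.")
print("Documentation check passed.")
```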
