Should software developers be held liable for vulnerabilities?

A question that appears regularly on LinkedIn is that of liability for software “defects” – that is, vulnerabilities that are discovered in the software after it is in use by customers. One example was in 2023, when the White House put out a cybersecurity strategy document that included the memorable statement, “We must begin to shift liability onto those entities that fail to take reasonable precautions to secure their software…”

In my post in response to that statement (as well as similar statements in the document), I made two main points:

1. There already are liability laws. If a software user can prove they were breached due to the developer’s defective software security practices, they should receive compensation (and perhaps punitive damages). A new law isn’t needed.

2. Even if the supplier didn’t follow secure development practices, they shouldn’t be 100% liable if the user didn’t take reasonable steps to secure their software – for example, if they never applied the patch the supplier released to fix the vulnerability the attackers exploited.

I concluded the post with these two paragraphs:

In other words, it now seems the White House (and perhaps DHS) thinks there’s no need at all for court cases to determine liability for software breaches. As far as they’re concerned, the outcome of these cases is determined from the start (actually, from the date of the strategy document).

This is like the story of the two cowboys who captured a cattle rustler and were starting to hang him. One of the cowboys had a sudden attack of conscience and asked, “Shouldn’t we give this man a fair trial?” The other cowboy exclaimed, “You’re right! We’ll give this man a fair trial…then we’ll hang him.”

I might also have referenced the noted legal authority, Lewis Carroll. In Alice’s Adventures in Wonderland, the tyrannical Queen of Hearts – who never saw a head that didn’t need to be removed – puts the Knave of Hearts on trial for stealing tarts (a capital offense in her kingdom, of course). She demands, “Sentence first—verdict afterwards!”

In a discussion this week on LinkedIn (one that sprang out of a post on a different topic), Sinclair Koelemij provided two sentences that I believe encapsulate the feelings of many software users and regulators. The first is, “…software has often not been treated as a ‘product’ for strict product liability, so defects do not automatically trigger manufacturer accountability in the way automotive defects do.”

Sinclair doesn’t provide any evidence for this statement (probably because many people consider it self-evident), but he should have. After all, given that liability laws already exist and are being applied to automotive manufacturers, why aren’t they being applied to software developers? I admit I’ve never heard of a developer being held liable for a vulnerability in one of their products, but what’s the reason for that?

I suppose the reason is that it’s very hard to draw a direct link between a developer’s practices and a particular vulnerability in their software. For one thing, new vulnerabilities are being discovered all the time. If a vulnerability hadn’t been identified and reported (in a CVE record) when the software was developed, it’s hard to hold the developer liable for it – unless the developer never patched the vulnerability even after it was reported.

Whatever the reason a vulnerability can’t be traced to the supplier’s development practices, does it follow that another law will somehow fix that problem? How could it? It seems any new law would run into the same issues.

If another law isn’t the solution, how about a regulation requiring secure development practices? That was the approach of Executive Order 14028, issued in 2021 by the Biden administration. The order mandated “standards, procedures or criteria” regarding “secure software development environments”, but mandatory standards were quickly ruled out when it came time to implement the EO. NIST was directed to produce a “Secure Software Development Framework”, which turned out to be excellent. But it’s just a framework, not a mandate.

On the other hand, there are definitely cases in which a software developer should be held liable for defects in their software; if nothing else, the SolarWinds attacks (to which EO 14028 was a response) show that something more than general liability law is needed. Sinclair’s next sentence reads, “On top of that, software is typically licensed, not sold, under terms that disclaim warranties and cap liability (EULAs), which further limits vendor responsibility even when failures have real world consequences.”

EULA stands for “end user license agreement” – you know, those interminable statements you must attest you’ve read in great detail before you can start using even the most trivial piece of software. EULAs usually absolve the developer of liability for almost anything that can go wrong. This isn’t to say there shouldn’t be some reasonable protection for the developer in a EULA. But just as it’s wrongheaded to hold the developer 100% liable for every breach, it’s also wrongheaded to give them a “Get out of jail free” card.

Since general liability laws clearly weren’t enough to prevent atrocities like SolarWinds, I think there needs to be some regulation of EULAs, perhaps by the FTC.[i] But there don’t need to be new laws assigning 100% (or any other fixed percentage) of liability for software breaches to developers.

Tom Alrich’s Blog, too, is a reader-supported publication. You can view new posts for two months after they come out by becoming a free subscriber. You can also access all of my 1,300 existing posts dating back to 2013, as well as support my work, by becoming a paid subscriber for $30 a year (and if you feel so inclined, you can donate more than that!). Please subscribe.

If you would like to comment on what you have read here, I would love to hear from you. Please comment below or email me at [email protected].


[i] When a large company purchases software, they typically negotiate a contract with the supplier. If they do this, or at least if they’re allowed to do this, the government couldn’t – and shouldn’t – regulate the terms of that contract, even if one of the terms exempts the supplier from liability for defects.

However, the situation is very different for the rest of us. I doubt many people have ever read a complete EULA (I certainly haven’t). It’s ridiculous to pretend that checking a box to say you’ve read the EULA absolves the developer of liability for any defects in their software. This is a perfect example of a case in which my former professor Milton Friedman would have asserted that free markets alone cannot ensure an equitable outcome; there needs to be some regulation.