Attacking Supply Chains at the Source
The xz Utils attack almost succeeded. Will we be as lucky next time?
We’ve been very lucky. A couple of weeks ago, a supply-chain attack against the Linux xz Utils package, which includes the liblzma compression library, was discovered just weeks before the compromised version of the library would have been incorporated into the most widely used Linux distributions. The attack inserted a backdoor into sshd that would have given threat actors remote shell access on any infected system.
The details of the attack have been thoroughly discussed online. If you want a blow-by-blow exposition, here are two chronologies. ArsTechnica, Bruce Schneier, and other sources have good discussions of the attack and its implications. For the purposes of this article, here’s a brief summary.
The malware was introduced into xz Utils by one of its maintainers, an entity named Jia Tan. That’s almost certainly not a person’s name; the actual perpetrator is unknown. It’s likely that the attacker is a collective operating under a single name. Jia Tan began several years ago by submitting a number of changes and fixes to xz, which were included in the distribution, establishing a reputation for doing useful work. A coordinated attack against xz’s creator and maintainer, Lasse Collin, complained that Collin wasn’t approving patches quickly enough. This pressure eventually convinced him to add Jia Tan as a maintainer.
Over two years, Jia Tan gradually added compromised source files to xz Utils. None of the individual changes was obviously malicious or actionable; the attackers were slow, methodical, and patient, gradually introducing components of the malware and disabling tests that might have detected it. No single change was significant enough to attract attention, and the compromises were carefully concealed. For example, one test was disabled by the introduction of an innocuous single-character typo.
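To see how little it takes, here’s an illustrative sketch (not the actual xz code) of how that pattern can work in a build system: a CMake feature probe in which a single stray character is a syntax error, so the compile check always fails and the feature it guards is silently left out of the build. The feature name and probe program below are hypothetical.

```cmake
include(CheckCSourceCompiles)

# Hypothetical feature probe. If this tiny program fails to compile
# for ANY reason, the feature is reported as unavailable and the
# sandbox support it guards is quietly omitted from the build.
check_c_source_compiles("
    #include <sys/prctl.h>
    .
    int main(void)
    {
        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        return 0;
    }
" HAVE_SANDBOX)
```

The lone `.` is a one-character syntax error: the probe never compiles, `HAVE_SANDBOX` is never defined, and to anyone reading the build logs the result looks like an ordinary “not supported on this platform.”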
Only weeks before the compromised xz Utils would have become part of the general release of Red Hat, Debian, and several other distributions, Andres Freund noticed some performance anomalies with the beta distribution he was using. He investigated further, discovered the attack, and notified the security community. Freund made it clear that he is not a security researcher, and that there may be other problems with the code that he did not detect.
Is that the end of the story? The compromised xz Utils was never distributed widely, and never did any damage. However, many people remain on edge, with good reason. Although the attack was discovered in time, it raises a number of important issues that we can’t sweep under the rug:
- We’re looking at a social engineering attack that achieves its aims by bullying—something that’s all too common in the Open Source world.
- Unlike most supply chain attacks, which insert malware covertly by slipping it past a maintainer, this attack succeeded by installing a corrupt maintainer, compromising the release itself. You can’t go further upstream than that. And it’s possible that other packages have been compromised in the same way.
- Many in the security community believe that the quality of the malware and the patience of the actors is a sign that they’re working for a government agency.
- The attack was discovered by someone who wasn’t a security expert. The security community is understandably disturbed that they missed this.
What can we learn from this?
Everyone is responsible for security. I’m not concerned that the attack wasn’t discovered by a security expert, though that may be somewhat embarrassing. It really means that everyone is in the security community. It’s often said that “given enough eyeballs, all bugs are shallow.” You really only need one set of eyeballs, and in this case, those eyeballs belonged to Andres Freund. But that raises the question: how many eyeballs were watching? For most projects, not enough—possibly none. If you notice something that seems funny, look at it more deeply (getting a security expert’s help if necessary); don’t just assume that everything is OK. “If you see something, say something.” That applies to corporations as well as individuals: don’t take the benefits of open source software without committing to its maintenance. Invest in ensuring that the software we share is secure. The Open Source Security Foundation (OpenSSF) lists some suspicious patterns, along with best practices for securing a project.
It’s more concerning that a particularly abusive flavor of social engineering allowed threat actors to compromise the project. As far as I can tell, this is a new element: social engineering usually takes a form like “Can you help me?” or “I’m trying to help you.” However, many open source projects tolerate abusive behavior. In this case, that tolerance opened a new attack vector: badgering a maintainer into accepting a corrupted second maintainer. Has this happened before? No one knows (yet). Will it happen again? Given that it came so close to working once, almost certainly. Solutions like screening potential maintainers don’t address the real issue. The kind of pressure that the attackers applied was only possible because that kind of abuse is accepted. That has to change.
We’ve learned that we know much less about the integrity of our software systems than we thought. We’ve learned that supply chain attacks on open source software can start very far upstream—indeed, at the stream’s source. What we need now is to make that fear useful by looking carefully at our software supply chains and ensuring their safety—and that includes social safety. If we don’t, next time we may not be so lucky.