The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.

“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” an official with distributor OpenWall wrote in an advisory. “Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’” provided in recent updates. Those updates and fixes can be found here, here, here, and here.

On Thursday, someone using the developer’s name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.

“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.

One of the maintainers of Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.

“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Fedora maintainer said.

“He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise,” the maintainer added.

    • Buelldozer@lemmy.today

      Jia Tan, University of Hong Kong in China. He’s been the sole maintainer of the package for almost two years.

      • Dark Arc@social.packetloss.gg

        Looks like he’d done a lot for various US companies on his LinkedIn.

        I would not be surprised if he was previously legit but pressured into doing this by the CCP.

        • Takios@feddit.de

          Maybe he wasn’t sloppy by accident if he was indeed coerced by someone. I don’t think we’ll ever find out the backstory of this though.

          • anlumo@lemmy.world

            I’ve watched a rundown of what the backdoor does. It’s impossible that this was an accident. It hides a compiled library in test data and injects that into the ssh binary to override code there.

            • ghterve@lemmy.world

              They didn’t mean the backdoor was (or was not) an accident. They meant the backdoor was implemented sloppily enough to be discovered, and maybe that was not an accident (as in, he wanted it to be found, but still wanted to plausibly be seen as trying his best to keep those coercing him appeased).

      • WhatAmLemmy@lemmy.world

        It would make more sense to compromise developers in trusted positions, or steal their credentials, than going through the time and effort of building trusted users and projects only to burn them with easily spotted vulnerabilities.

        • jj4211@lemmy.world

          This wasn’t easily spotted. They use words like sloppy, but it all started with someone digging in because ssh session startup was about half a second slower than it used to be. I could easily imagine 99.99% of people shrugging and deciding that something in the chain of session startup just took a bit longer, for a reason not worth digging into.

          Also, this was a maintainer who only started two years ago. xz is much older than that; he just took over.

  • Cosmic Cleric@lemmy.world

    From the article…

    Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview. “BUT that’s only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”

    Is auditing for security reasons ever done on any open source code? Is everyone just assuming that everyone else is doing it, and hence no one is really doing it?


    EDIT: I’m not attacking open source, I’m a big believer in open source.

    I’m just trying to start a conversation about a potential flaw that needs to be addressed.

    Once the conversation was started I was going to expand the conversation by suggesting an open source project that does security audits on other open source projects.

    Please put the pitchforks away.

    • 5C5C5C@programming.dev

      You’re making a logical fallacy called affirming the consequent where you’re assuming that just because the backdoor was caught under these particular conditions, these are the only conditions under which it would’ve been caught.

      Suppose the bad actor had not been sloppy; it would still be entirely possible that the backdoor gets identified and fixed during a security audit performed by an enterprise grade Linux distribution.

      In this case it was caught especially early because the bad actor did not cover their tracks very well, but now that that has occurred, it cannot necessarily be proven one way or the other whether the backdoor would have been caught by other means.

      • FauxPseudo @lemmy.world

        Also they are counting the hits and ignoring the misses. They are forgetting that sneaking a backdoor into an open source project is extremely difficult because people are reviewing the code and such a thing will be recognized. So people don’t typically try to sneak back doors in. Also, backdoors have been discovered in an amazing amount of closed source projects where no one was even able to review the code.

        • Cosmic Cleric@lemmy.world

          They are forgetting that sneaking a backdoor into an open source project is extremely difficult because people are reviewing the code and such a thing will be recognized.

          Everyone assumes what you have stated, but how often does it actually happen?

          How many people actually do code reviews, how often, and how rigorously? Especially on high-volume projects?

          • SMillerNL@lemmy.world

            Depends on the project, but for a lot of projects code review is mandatory before merging. For XZ the sole maintainer can do whatever they want.

            • Cosmic Cleric@lemmy.world

              Depends on the project, but for a lot of projects code review is mandatory before merging. For XZ the sole maintainer can do whatever they want.

              I’ve done plenty of code reviews in my time, and I know one thing: the busier you are, the faster you go through code reviews, and the more likely it is that things get missed.

              I would hope that for the real serious shit (like security) the code reviews are always thorough and complete, but I know my fellow coding brethren, and we all know that’s not always the case. Time is a precious resource, and managers don’t always give you the time you need to do the job right.

              Personally I use a distro backed indirectly by a corporation and hope that each release gets the thorough review that it needs, but human nature is always a factor in these things as well, and honestly, there are times when everyone thinks everyone else is doing a certain task, and the task falls between the cracks.

      • TheKMAP@lemmynsfw.com

        Have those audits you allude to ever caught anything before it went live? Cuz this backdoor has been around for a month and Red Hat is affected, too. Plus this was the single owner of a package who is implicitly trusted; it’s not like it was a random contributor whose PRs would get reviewed.

        The code being open source helps people track it down once they try to debug an issue (performance issues and crashes, because in their setup the memory layout was not what the backdoor was expecting), that’s true. But what actually triggered the investigation was the bug. After that, it’s just a matter of time to trace it back to the backdoor. You underestimate reverse engineers. Or maybe I’m just spoiled.

        How long until US bans code from developers with ties to CN/RU?

        • Teppic@kbin.social

          How long until US bans code from developers with ties to CN/RU?

          That won’t happen, because it would effectively mean banning all FOSS, which isn’t remotely practical.

          • TheKMAP@lemmynsfw.com

            How do you propose we meaningfully fix this issue? Hoping random people catch stuff doesn’t count.

            • Teppic@kbin.social

              In time it may become a trade-off between the new (with associated features and speed) vs. the tried and tested/secure.
              To us now this sounds perverse, but remember that NASA generally uses very old hardware because they can be more certain the various bugs & features have been found and documented. In NASA’s case this is for reliability. I’ll concede ‘brute force’ does add another dimension when applying this logic to security.

              This may also become an AI arms race. Finding exploits is likely something AI could become very good at - but a better AI seeking to obfuscate?

      • jj4211@lemmy.world

        It’s maybe possible, but perhaps still unlikely.

        Overwhelmingly thorough security review is time consuming and expensive. It’s also not perfect, as evidenced by just how many security issues accidentally live long enough to land even in enterprise releases. That’s even without a bad actor trying to obfuscate the changes. I think this general approach had several aspects that would have made it likely to pass scrutiny:

        • It was in XZ, which was likely not perceived as a security critical library. A security person would recognize anything as potentially security critical, but they don’t always have the resources and so are directed to focus on the obviously security related code and the historical security-incident magnets.
        • It was carried out by someone who spent years building up an innocuous reputation. Investigation may even show previous “test samples” to be malicious but not caught, or else they were a red herring to get people used to random test samples getting placed in the project.
        • The only “source code” he touched was “just build scripts”. Even during a security audit, build shell scripts are likely going to be ignored; they are just build scripts, and while maybe you run some tests on all scripts, those tests aren’t going to catch this sort of misbehavior.
        • The actual runtime malicious code was delivered as portions of ostensibly throwaway test sample xz files. The malicious code is applied by a binary patch of the build output. A security audit won’t be thinking too hard about a sea of binary files that are just throwaway samples used as fodder for tests.

        So while I see the point about the logical fallacy, that it accidentally never got far enough to see whether the enterprise release process would have caught it, I think we know track records well enough to deem this approach likely to get through. Now that it has been caught, I could see some changes that may mitigate this in the future, like package build scripts deleting all test samples and skipping tests when building for release (sketched below), as well as more broad scrutiny.
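
        A minimal, hypothetical sketch of that mitigation idea (paths and flags here are illustrative, not taken from any actual distro’s packaging):

        # Hypothetical release-build step: discard bundled binary "test samples"
        # and skip the test suite, so nothing under tests/ can feed the build output.
        rm -rf tests/files
        ./configure --disable-dependency-tracking
        make
        make install DESTDIR="$PKGDIR"   # then package from $PKGDIR as usual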

        There’s also the reality that a lot of critical applications deem themselves too cool to settle for “old crusty enterprise distributions”. They think that approach is antiquated and living on the edge is better. Admittedly I doubt they’d go as far as Arch, Tumbleweed, or Rawhide, but this one could have easily made it into Debian testing, a Fedora release, or an Ubuntu release.

        • Cosmic Cleric@lemmy.world

          I think we know track records well enough to deem this approach likely to get through.

          That was my concern, and why I brought up my point.

          Human nature, especially when volunteer work versus paid work is being done, as well as someone who purposely over the long-term is trying to be devious, could be a potent combination for disaster.

          I still wonder if there should be an actual open source project that does nothing but security audits of other open source projects. Hence my original question, which was meant as an opener to a conversation I never got to elaborate on, because I was attacked almost immediately by people who are very sensitive about any criticisms/concerns about open source being raised in the open.

          • jj4211@lemmy.world

            The issue is that it implies that open source has a problem due to volunteers that is not found in closed source, which is not really the reality.

            You can look at a closed source vendor like Cisco and see backdoors, generally left over from developer access, yet open for abuse. The nature of those is so blatantly obvious that any open review would have spotted them instantly, yet there they were.

            With this, you had a much more deviously obfuscated attack that probably would have passed through even serious security audits unnoticed, yet it was caught because someone was curious about a slight performance degradation and investigated. Having been in the closed source world, I can tell you that they never would have caught something like this. Anyone even vaguely saying they wanted to spend some time investigating a session startup delay of half a second would be chastised for wasting time.

            Further, open source projects are also the fodder for security researchers to build their resumes. Hard to prove your mettle without works, and catching vulnerabilities in OSS code is a popular feather in their cap.

            It also implies that open source is strictly a volunteer affair. Most commercial applications of a Linux platform involve paid employees doing some enablement, and that differs from place to place. There’s of course Red Hat paying for security research, and Google and Microsoft too. I know at least one company that distrusts everything and repeats a whole bunch of security audits, including paying external companies to audit open source code. I would wager that folks downstream of, say, CentOS Stream or certain embedded platforms can feel pretty good about audits. Of course all bets are off when you go grab tarballs, npm, pip, etc.

            • Cosmic Cleric@lemmy.world

              The issue is that it implies that open source has a problem due to volunteers that is not found in closed source, which is not really the reality.

              I (partially) disagree. Fundamentally, my belief is that someone who gets paid to do the work is more rigorous doing it than someone who does it on a volunteer basis; it’s a human nature thing. Granted, I’m speaking very generally, and what I stated is not always true, but still.

              Also, corporations that write closed source programs are much more legally averse to being sued if their product fails (there’s a reason we’re seeing so many corporations slap arbitration clauses into their agreements these days; they’re risk-averse).

              Open source projects tend to just be more careful about their code base not being tainted, write in disclaimers (“as-is”) to protect themselves legally in the product-failure scenario, and call it a day (again, very generally speaking; I use Fedora specifically for a reason).

              And speaking of Fedora, I do agree with your point that some open source projects are actually done by paid coders. I just believe that’s more the outlier than the norm. Some of that work is done by corporate employees, but still on a volunteer basis.

              Not dismissing at all, I am thankful for corporations that actually spend time letting their employees do open source work, even if it’s just for their own direct benefit, as it also benefits everyone else.

      • Cosmic Cleric@lemmy.world

        You’re making a logical fallacy called affirming the consequent where you’re assuming that just because the backdoor was caught under these particular conditions, these are the only conditions under which it would’ve been caught.

        No, I’m actually making that comment based on a career as a software developer, who has actually worked on a few open source projects before.

          • Cosmic Cleric@lemmy.world

            What, experience doesn’t matter?

            As Groucho Marx would say, “I can believe you, or my lying eyes”.

            • UckyBon@lemmy.world

              Experience doesn’t matter if you don’t read Wikipedia links given to you by random people :)

              Edit:

              I’m actually making that comment based on

              has another tone to “in my experience as”

              Didn’t actually want to educate you, but I feel this edit won’t hurt. Literally.

              • Cosmic Cleric@lemmy.world

                Experience doesn’t matter if you don’t read Wikipedia links given to you by random people :)

                You’re assuming I don’t already know what’s being discussed in the link (or have read the link), but disagree with how it’s being applied to me.

                Also, experience doesn’t evaporate into the ether just because someone does not read a link. That’s a fallacy for sure.

    • perestroika@lemm.ee

      Having once worked on an open source project that dealt with providing anonymity - it was considered the duty of the release engineer to have an overview of all code committed (and to ask questions, publicly if needed, if they had any doubts) - before compiling and signing the code.

      On some months, that was a big load of work and it seemed possible that one person might miss something. So others were encouraged to read and report about irregularities too. I don’t think anyone ever skipped it, because the implications were clear: “if one of us fails, someone somewhere can get imprisoned or killed, not to speak of milder results”.

      However, in the case of a utility not directly involved with functions that are critical for security, it might be easier to pass through the sieve.

      • Cosmic Cleric@lemmy.world

        I don’t think anyone ever skipped it, because the implications were clear: “if one of us fails, someone somewhere can get imprisoned or killed, not to speak of milder results”.

        However, in the case of a utility not directly involved with functions that are critical for security, it might be easier to pass through the sieve.

        I’ve actually seen people checking in code that doesn’t get reviewed properly on mission critical apps before (like in the health industry).

        My understanding is basically the same as yours, and in theory I agree with you. However, the problem is we all tend to hand-wave away any possibility of bad things happening, because it’s open source, and don’t take into account human nature, especially when it comes to volunteer versus paid work.

    • uis@lemm.ee

      Auditing can only be done on open source code. No code = no audit. Reverse engineering doesn’t count.

      • Cosmic Cleric@lemmy.world

        True, but does it actually get done, or is everyone just assuming it gets done because it’s open source?

    • jj4211@lemmy.world

      The answer is the same as closed source software: sometimes.

      But that’s beside the point, a security audit is not perfect. Plenty of audited codebases are the source of security vulnerabilities in the wild. We know based on analysis that the malicious actor’s approach would have a high chance of successfully hiding from a typical security audit.

      • Cosmic Cleric@lemmy.world

        Oh I know security audits are not perfect, I’m just wondering if they actually get done, or everyone just assumes they get done because of “Open Source”, but they don’t.

  • ikidd@lemmy.world

    Long game supply chain attacks like this are pretty much going to be state actors. And I wouldn’t chalk it up to the usual malicious ones like China and Russia; this could be the NSA just as easily.

    • Dark Arc@social.packetloss.gg

      I honestly think the NSA has changed. If you look at the known backdoors, they haven’t been caught making any new ones since around 2010. Their MO also seems to be more hardware and encryption (more of an observational charter) than manipulation.

      There’s also evidence US Congress acted to stop the NSA from doing these underhanded tactics at least once: https://www.wired.com/story/nsa-backdoors-closed/

      They’re not idiots; lots of smart people there surely understand the risk of something like this to US national security interests. It’s not the NSA that’s been asking for encryption to be broken in recent years. They’ve been warning about quantum threats and, from what I’m aware of, actually taking on the defensive role they were set up to perform: https://gizmodo.com/nsa-plans-to-act-now-to-ensure-quantum-computers-cant-b-1757038212

      This seems like something that could actually be weaponized against predominantly western technology companies, so I’d be very surprised if it was them, and very surprised if they used someone who appears to be a Chinese-born resident to do it.

      • ikidd@lemmy.world

        I really can’t believe they’ve stopped. Their mentality is “national security has no morals.” They’ll do everything they can to facilitate that mission, though not getting caught is a big part of the facade they need to put on to keep, or rehabilitate, their image so they can keep doing this.

        Maybe they’re being more careful, and simple things like putting in timestamps that emulate working hours in other timezones are certainly the first thing they’re going to think about. That one has always cracked me up; security researchers point to it like it’s proof of something, which is ridiculous. Just as our people are smart, I don’t think the foreign actors are dumb either.

        And before you say it, I’d be all over not being paranoid if it hadn’t been proven to me time and again that these agencies won’t change, that they don’t give a shit about what’s right if it gets in the way of their mandate. The only thing that might change is how well they hide things now and intimidate their people into staying quiet. Because potential whistleblowers have seen the examples that have been made.

        • 5C5C5C@programming.dev

          Personally I suspect they’re getting all the information they care about via subpoenas on big data and social media companies. They don’t have a need to compromise security on a technical level anymore because the justice system itself is compromised. That means backdoors only benefit national enemies at this point, so the NSA of today would rather those not exist at all.

          Of course that’s not to say anyone should trust those agencies at their word on anything.

        • JasonDJ@lemmy.zip

          Backdoors at a nation-state level are a double-edged sword. In order to successfully implement a backdoor, you need to ensure that you are more clever than your adversaries, because those same backdoors can be used against you. You must assume that they will eventually discover them and be able to leverage them against you. Then you must be able to identify that one has been compromised, and then “responsibly disclose” the vulnerability before too much damage is done.

          Much better to be on the defensive. Discover 0days first, either accidental or intentional, and then use them until someone else discloses them and they get patched to hell.

      • ElCanut@jlai.lu

        That’s not true; the Shadow Brokers leaks, for example, contained 0-days found by the NSA well after 2010. And that’s only what got published; there’s probably more!

        • Dark Arc@social.packetloss.gg

          There is a difference between finding something you can take advantage of and putting it there, though, no? This sounds like the former.

          But still, it’s a good point, thanks.

          • ElCanut@jlai.lu

            Ah sorry, English is not my native language so I’m not sure I fully got what you meant. Your point was that they stopped inserting backdoors and instead concentrated on getting access by finding vulnerabilities?

            • Dark Arc@social.packetloss.gg

              Basically two points, they stopped inserting backdoors and their backdoors seem to have only ever been to show them what’s going on (so this just doesn’t look like them to me).

              I didn’t really comment on “what they do now” as much. I think they do continue to spy, finding preexisting vulnerabilities is definitely one way to spy. I wouldn’t be surprised if they report the worst ones in NATO systems to be repaired and keep the others for themselves.

              They also tap into weak points like Google and Apple’s notification services where things aren’t end to end encrypted to gather information. I believe this was revealed recently.

              Snowden I recall saying the modern NSA is more interested in metadata than what’s actually in the message as well.

              In general, I think they still do some shady stuff, but I don’t think they do shady stuff that risks compromising a system. This exploit is quite literally a system compromise as (if I understand it correctly) it allows bypassing sshd authentication.

      • uis@lemm.ee

        It’s not the NSA that’s been asking for encryption to be broken in recent years.

        I remember the NSA-backdoored crypto from 2013. If they get caught less, it doesn’t mean they make fewer backdoors.

        EDIT: it was discovered in 2007 and revoked as a standard in 2014.

        Also, they owned a corporation that made backdoored crypto algorithms until 2018. And the only reason they stopped is FOIA.

    • mister_monster@monero.town

      I don’t know, man. Imagine you could have ssh access to every Debian and Fedora server on the planet, and all you had to do was write tests for some compression library for two years and sneak in a clever patch. I’d guess such an exploit is worth millions. You wouldn’t work two years for millions of dollars?

      This is sophisticated but it doesn’t have to be a state actor.

    • uis@lemm.ee

      I think you are greatly underestimating FSB incompetence.

  • Xianshi@lemm.ee

    Thankfully this was discovered before hitting stable distros but I’m hoping it increases scrutiny across the board. We dodged a bullet on this one.

    • UckyBon@lemmy.world

      Across the board indeed. Scrutiny of code is one thing; where this story, as far as is known right now, really went south is the abuse of a trusted, but vulnerable, member of the community.

      I know the (negative) spotlight is targeting Jia Tan right now (and who knows if they (still) exist), but I really hope Larhzu, whose name is mentioned in the same articles, is doing okay.

      Mental health is a serious issue that, if you read the back story, is easily ignored or abused. And it wasn’t an unknown in this story. Don’t only check the code; check up on your people too.

  • fmstrat@lemmy.nowsci.com

    There are no known reports of those versions being incorporated into any production releases for major Linux distributions

    A stable release of Arch Linux is also affected.

    … BTW.

    • liara@lemm.ee

      The malicious code is only thought to have affected deb/rpm packaging (i.e., the backdoor only included itself with those packaging methods). Additionally, Arch doesn’t link ssh against liblzma, which means this specific vulnerability wasn’t applicable to Arch. Arch may have still been vulnerable in other ways, but this specific vulnerability targeted deb/rpm distros.

  • Technus@lemmy.zip

    The backdoor appears to specifically target RSA public key authentication, so they must have had a target in mind that they know uses RSA keys.
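
    A quick way to see which key types you have lying around locally (assuming OpenSSH’s ssh-keygen and the usual ~/.ssh layout):

    # Print size, fingerprint, comment, and type for each local public key;
    # entries ending in (RSA) are the kind this backdoor reportedly targeted.
    for k in ~/.ssh/id_*.pub; do ssh-keygen -lf "$k"; done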

    • subtext@lemmy.world

      Would another, less complex answer simply be that many (most?) people and organizations use RSA because it came first and elliptic-curve signing is not yet as prevalent?

      Going with Occam’s Razor here…

        • subtext@lemmy.world

          I genuinely don’t understand if I missed something here, and would love more explanation.

          • ghterve@lemmy.world

            I’m assuming the original post you replied to was meant to be a joke, since, like you pointed out, many or most people use RSA. I assume (using Occam’s Razor) that is more likely than them not knowing that and intending their post at face value.

            • subtext@lemmy.world

              It’s the 46 upvotes that have me concerned that many people do not in fact see it as a joke

    • JATth@lemmy.world

      I would highly recommend Curve25519 keys and the like, just because such keys are faster and less common than RSA public-private keys in today’s world. RSA 2048-bit keys are considered weak today, while Curve25519’s 256-bit keys remain stronger. Also, the ChaCha20-Poly1305 cipher has an interesting backstory and doesn’t necessarily need hardware acceleration (which, in theory, could be borked by the HW vendor) to obtain good performance.

      Unfortunately, some SSH front-ends don’t play nice with Curve25519 public-private keys yet… (I’m pointing at the PuTTY SSH client, but that may have improved since the last time I had to use it.)
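
      For reference, generating such a key with stock OpenSSH is a one-liner (the file path and comment below are just examples):

      # Create an Ed25519 (Curve25519-based) key pair with extra KDF rounds.
      ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519 -C "you@example.com"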

  • DAMunzy@lemmy.dbzer0.com

    A stable release of Arch Linux is also affected. That distribution, however, isn’t used in production systems.

    Shots fired!

    It seems WSL Ubuntu and Kali are safe, with versions 5.2.5 and 5.4.4 installed respectively.
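
    For anyone who wants to check their own install, something like this works on Debian-based systems such as those WSL images (the package name xz-utils is assumed):

    xz --version                       # reports the xz and liblzma versions
    dpkg -s xz-utils | grep '^Version'   # what the package manager thinks is installed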

      • Zink@programming.dev

        Is that when you install it in a VM, as if to show it that it’s not a good enough little distro to boot directly on your hardware?

        I suddenly feel so guilty!

    • HarriPotero@lemmy.world

      I think the AI that wrote the article misunderstood.

      Arch doesn’t build from release tarballs, but straight from git. Arch also doesn’t link sshd against liblzma. So while they’ve shipped the dirty version of xz utils, at least sshd is not affected.

      It’s possible that the dirty version affected some of the other things that link liblzma, like a handful of KDE components, for example.

      • d13@programming.dev

        the AI that wrote the article

        The linked article is by Dan Goodin from Ars Technica. He’s not immune to mistakes, but he’s been writing good articles about security for years.

        Can we please not accuse everybody of being AI just because they made a mistake?

        • chtk@feddit.nl

          Can we please not accuse everybody of being AI

          Suspicious. /s

        • HarriPotero@lemmy.world

          Well, he’s credited as the editor overseeing security stuff. Reading between the lines, I’d say he’s just taking responsibility for the article’s correctness.

          This article in particular is just so poorly written that you’d forgive me for assuming it wasn’t human-written.

      • jj4211@lemmy.world

        Also, the malicious code only activated if it detected being built as dpkg or rpm.

  • Flying Squid@lemmy.world

    Please help me as a novice Linux user- is this something that comes preinstalled with Mint Cinnamon? And if so, what can I do about it?

    • subtext@lemmy.world

      As the other person said it’s likely that xz is already installed on your system, but almost certainly a much older version than the compromised one. It’s likely that no action is required on your part assuming you’ve not been downloading tarballs of bleeding edge software.

      As the other person said, just keep doing updates as soon as Mint recommends them (since it’s based on Ubuntu LTS, it’s a lot less likely to have these bleeding edge vulnerabilities).

    • Secret300@sh.itjust.works

      You’re good. Even if you do use xz and ssh, the version with the vulnerability only made its way to rolling release distros or beta versions of distros like Fedora 40.

    • cley_faye@lemmy.world

      The library itself is very common and used by a lot of things (in this case it seems that the payload only activated when used by specific programs, like SSH).

      What you can do about it is keep your system up-to-date using your distribution’s update mechanisms. This kind of thing, when found out, is usually fixed quickly in security updates. In Mint (which I don’t use, but I believe is based on either Debian or Ubuntu, which use dpkg/apt), security updates are flagged differently and can be installed automatically, depending on your configuration.

      tl;dr: keep your system up-to-date; it will keep known vulnerabilities away as much as it can.
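
      On an apt-based system like Mint that boils down to something like the following (a sketch; the unattended-upgrades setup varies by distro):

      sudo apt update && sudo apt upgrade      # pull in pending updates, including security fixes
      # optional: have security updates install themselves automatically
      sudo apt install unattended-upgrades
      sudo dpkg-reconfigure -plow unattended-upgrades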

      • Flying Squid@lemmy.world

        Thanks. I do my best to regularly update, so here’s hoping it will not be a problem for me before an update fixes it!

          • Flying Squid@lemmy.world

            There are definitely times (at least based on the instructions I read) when I have had to use ssh for various reasons, so I think it will be a problem in the future if I don’t get a fix in an update. But I’m guessing a fix will be coming soon.

      • areyouevenreal@lemm.ee

        In this case, though, the backdoor was added recently, so updating could do the opposite of helping here. Luckily I don’t think any stable distros added the new version.

        • cley_faye@lemmy.world

          It was added recently, but at this point in the timeline, fixes are available for most mainstream distros at least. Except for rare cases where a fix can’t be made available quickly, this kind of publicity only happens when a fix is broadly available. There are extreme cases of course, but in this case, it’s fixed.

    • spikederailed@lemmy.world

      dpkg --list | grep xz

      should return what version of the xz package is on your system. Likely 5.4, in which case you should be okay.

  • Aatube@kbin.melroy.org

    openssh does not directly use liblzma. However, Debian and several other distributions patch openssh to support systemd notification, and libsystemd does depend on lzma.
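
    A quick check for whether your sshd picked up that indirect dependency (assuming the usual /usr/sbin/sshd path):

    ldd /usr/sbin/sshd | grep -E 'liblzma|libsystemd'   # no output means neither library is linked in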

    • uis@lemm.ee

      Had Red Hat’s patch written READY=1 into $NOTIFY_SOCKET instead of linking to libsystemd to do this, this backdoor would not have been possible.
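
      For illustration, that notification really is tiny; a rough shell sketch (assumes socat is installed and ignores abstract-namespace sockets) of doing it without libsystemd:

      # sd_notify readiness without libsystemd: read one environment variable,
      # send one "READY=1" datagram to the socket it names.
      [ -n "$NOTIFY_SOCKET" ] && printf 'READY=1' | socat - "UNIX-SENDTO:$NOTIFY_SOCKET"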

  • AutoTL;DR@lemmings.world (bot)

    This is the best summary I could come up with:


    Researchers have found a malicious backdoor in a compression tool that made its way into widely used Linux distributions, including those from Red Hat and Debian.

    An update the following day included a malicious install script that injected itself into functions used by sshd, the binary file that makes SSH work.

    So-called GIT code available in repositories aren’t affected, although they do contain second-stage artifacts allowing the injection during the build time.

    In the event the obfuscated code introduced on February 23 is present, the artifacts in the GIT version allow the backdoor to operate.

    “This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.

    The malicious versions, researchers said, intentionally interfere with authentication performed by SSH, a commonly used protocol for connecting remotely to systems.


    The original article contains 810 words, the summary contains 146 words. Saved 82%. I’m a bot and I’m open source!

    • Aatube@kbin.melroy.org

      So-called GIT code available in repositories aren’t affected

      I wonder what convinced the model to treat git as an acronym

      • TheGrandNagus@lemmy.world

        I imagine many aren’t familiar with British slang and therefore assume git must stand for something, especially considering software devs love their acronyms.

  • uis@lemm.ee

    Technically it breaks in through libsystemd, which is not used in upstream openssh. There are unofficial Red Hat patches that use this library instead of reading one environment variable and writing to one file.

  • TechNerdWizard42@lemmy.world

    Who wants to bet he received a nice lump sum of cash from a Five Eyes state to make an “accident”…