• Zak@lemmy.world · 4 months ago

    If someone can read my Signal keys on my desktop, they can also:

    • Replace my Signal app with a maliciously modified version
    • Install a program that sends the contents of my desktop notifications (likely including Signal messages) somewhere
    • Install a keylogger
    • Run a program that captures screenshots when certain conditions are met
    • [a long list of other malware things]

    Signal should change this because it would add a little friction to a certain type of attack, but a messaging app designed for ease of use and mainstream acceptance cannot provide a lot of protection against an attacker who has already gained the ability to run arbitrary code on your user account.

    • gomp@lemmy.ml · 4 months ago

      Those are outside Signal’s scope and depend entirely on your OS and your (or your sysadmin’s) security practices (e.g. I’m almost sure that on Linux you need extra privileges for those things on top of just read access to the user’s home directory).

      The point is, why didn’t the Signal devs code it the proper way and obtain the credentials every time (interactively from the user or automatically via the OS password manager) instead of just storing them in plain text?

      • Zak@lemmy.world · 4 months ago

        You’d need write access to the user’s home directory, but doing something with desktop notifications on modern Linux is as simple as

        dbus-monitor "interface='org.freedesktop.Notifications'" | grep --line-buffered "member=Notify\|string" | [insert command here]

        Replacing the Signal app for that user also doesn’t require elevated privileges unless the home directory is mounted noexec.

      • douglasg14b@lemmy.world · 4 months ago

        They’re arguing a red herring. They don’t understand security risk modeling; the argument about Signal’s scope just lets their broken premise dig deeper. It’s fundamentally flawed.

        It’s a risk and should be mitigated using common tools already provided by every major operating system (i.e. the OS keychain/credential store).

        • Liz@midwest.social · 4 months ago

          “Highways shouldn’t have guard rails because if you hit one you’ve already gone off the road anyway.”

        • gomp@lemmy.ml · 4 months ago

          I don’t see the reasoning in your answer (I do see its passive-aggressiveness, but chose to ignore it).

          I asked “why?”; does your reply mean “because lack of manpower”, “because lack of skill” or something else entirely?

          In case you are new to the FOSS world: something being “open source” doesn’t mean it cannot be criticized, or that people without the skill (or time!) to submit PRs must shut the fu*k up.

    • douglasg14b@lemmy.world · 4 months ago

      Not necessarily.

      https://en.m.wikipedia.org/wiki/Swiss_cheese_model

      If you read anything, at least read this link to self-correct.


      This is a common area where non-security professionals out themselves as not actually being security professionals: the broken, fallacious reasoning about security risk management. Generally the same “dismissive security by way of ignorance” premises.

      It’s fundamentally the same as “safety” (think OSHA and the CSB): the same thought processes, the same risk models, the same risk factors…etc

      And similarly the same negligence towards filling in the holes in your “Swiss cheese model”.

      “Oh that can’t happen because that would mean x,y,z would have to happen and those are even worse”

      “Oh that’s not possible because A happening means C would have to happen first, so we don’t need to consider this is a risk”

      …etc

      The logic you’re using is the same logic that the industry has decades of evidence showing to be wrong.

      Decades of evidence indicating that you are wrong, that you know infinitely less than you think you do, and that you most definitely are not capable of exhaustively enumerating all influencing factors. No one is. It’s beyond arrogant for anyone to think they could 🤦

      Thus, most risks are considered valid risks (though this doesn’t necessarily mean they can all be mitigated). Each risk is a hole in your model, and each hole is itself at risk of lining up with other holes and developing into an actual safety or security incident.

      In this case

      • Signal was alerted to this over 6 years ago.
      • The framework they use for the desktop app already has built-in features for this problem.
        • This is a common problem with common solutions that are industry-wide.
      • Someone has already made a pull request to enable the Electron safeStorage API, and Signal has ignored it.

      Thus this is just straight up negligence on their part.

      There’s not really much in the way of good excuses here. We’re talking about a run-of-the-mill problem that has baked-in solutions in most major frameworks, including the one Signal uses.

      https://www.electronjs.org/docs/latest/api/safe-storage

      • fuzzzerd@programming.dev · 4 months ago

        I was just nodding along reading your post, thinking: yup, agreed. Until I saw there was a PR to fix it that Signal ignored. That seems odd; there must be some mitigating circumstances explaining why they haven’t merged it.

        Otherwise that’s just inexcusable.