• Beej Jorgensen@lemmy.sdf.org
    2 months ago

    I’m on the “OK but keep an eye on it” train, here.

    Devs need feedback to know how people are using the product, and opt-out tracking is the best way to do it. In this case, it seems like my personal data is completely unidentifiable.

    I was coding in the IE6 era, so I’d really prefer to not end up in a browser engine monoculture again.

  • GolfNovemberUniform@lemmy.ml
    2 months ago

    There are definitely two kinds of people commenting on this post: the first supports telemetry (and Big Tech), and the other supports freedom and opt-in. This is interesting to see on something like Lemmy. I suspect the ones who support telemetry are devs, and that is a little concerning to me

    • Zaktor@sopuli.xyz
      2 months ago

      This isn’t even telemetry, it’s data collection for AI. That they refused to say that lets you know they think what they’re doing needs to be obfuscated.

      • Blisterexe@lemmy.zip
        2 months ago

        If they refused to say it, how do you know it’s the case? Also, how would the data described in the article be useful to an AI? Genuine question.

        • Zaktor@sopuli.xyz
          2 months ago

          In life, people will frequently say things to you that won’t be the whole truth, but you can figure out what’s actually going on by looking at the context of the situation. This is commonly referred to as “being deceptive” or sometimes just “lying”. Corporate PR and salespeople, the ones who put out this press release, do it regularly.

          You don’t need to record the content categories of searches to make a good tool for displaying websites; you need that to predict what users will search for. They’ve already said they want to focus on AI and linked to an example of the system they want to improve: their site recommender, complete with sponsored recommendations that could be sold at a higher price if the Mozilla AI could predict that “people in country X will soon be looking for vacations”.

  • Lexi Sneptaur@pawb.social
    2 months ago

    Importantly, if you have already opted out of sending data to Mozilla, this change will not affect you. It only sends data if you have the setting turned on. It takes just a few clicks to disable it entirely, and Mozilla deletes all records of your browsing within 30 days of turning the feature off. If you’re worried about it, do it now; it’s under Settings > Privacy & Security. Instructions are also linked in the blog post.

    • GolfNovemberUniform@lemmy.ml
      2 months ago

      I’m not a fan of the telemetry being enabled by default, but having the option to completely disable it makes it not that bad. Though Mozilla definitely doesn’t need search history data (unless law enforcement told them to collect it), so this change is kinda sus

          • fartsparkles@sh.itjust.works
            2 months ago

            Mozilla Foundation has a wholly owned subsidiary, Mozilla Corporation, which is for-profit.

            For instance, the revenue from Google, paid so that they’re the default search engine, goes to Mozilla Corporation. So search-related things will indeed be part of their for-profit arm.

              • fartsparkles@sh.itjust.works
                2 months ago

                It’s not a loophole. As a subsidiary, profits are still invested into the nonprofit, and they’re still guided by the Mozilla manifesto. It just lets them do more and raise more funds, which would be difficult with nonprofit status (selling default search engine placement, for instance). Here’s their original press release from when they incorporated Mozilla Corporation in 2005.

  • heavyboots@lemmy.ml
    2 months ago

    All we want is 1990s Google, guys. That’s really all we want. None of this AI BS that can’t find a country in Africa that starts with a K, just Google without the evil enshittification layer on top.

    • Eager Eagle@lemmy.world
      2 months ago

      I think people forget how awful Google was pre-2008. Not in terms of the bullshit they do nowadays, just in quality of results.

      • anachronist@midwest.social
        2 months ago

        I switched from AltaVista to Google in the early 2000s because the AltaVista index was stale and full of spam. Google’s search tools were comparatively primitive (AltaVista let you do things like word-stem searches), but the results were really good.

      • heavyboots@lemmy.ml
        2 months ago

        Huh. I used it pretty much since the start and I certainly don’t recall it being that bad? Like you got a lot of relevant content up front usually.

        • notfromhere@lemmy.ml
          2 months ago

          I feel like you had to learn how to use it: operators, phrasing, etc. They dumbed it down with search suggestions, then further by swapping search terms for synonyms, and now by outright ignoring terms. The height of Internet search was definitely pre-2008. More like 2005.

        • Eager Eagle@lemmy.world
          2 months ago

          If you had the right query, yes. But if you didn’t know the exact words used on the website, getting there took a number of attempts and some google-fu. By the early 2010s this was vastly improved.

  • onlinepersona@programming.dev
    2 months ago

    To improve Firefox based on your needs, understanding how users interact with essential functions like search is key.

    Buddy, I just want to type a search term and get results. Stop spying on my search. Your only job is to transfer it to the server and then present the result. I don’t need you to suggest some bullshit to me, or think of “ways to improve search”.

    This helps us take a step forward in providing a browsing experience that is more tailored to your needs, without us stepping away from the principles that make us who we are.

    No. What the fuck? They are sounding more and more like Google. We need a new alternative that isn’t built from Gecko or Blink or whatever the engines are called.

    Anti Commercial-AI license

    • FaceDeer@fedia.io
      2 months ago

      Buddy, I just want to type a search term and get results.

      Telemetry can help them do better at providing that. Devs aren’t magical beings, they don’t know what’s working and what’s not unless someone tells them.

      • Zaktor@sopuli.xyz
        2 months ago

        Telemetry doesn’t need topic categorization. This is building a dataset for AI.

          • Zaktor@sopuli.xyz
            2 months ago

            The example of the “search optimization” they want to improve is Firefox Suggest, which has sponsored results that could be promoted (and cost more) based on predicted interest from recent topic trends in your country. “Users in Belgium search for vacations more during X time of day” is exactly the sort of thing you’d use to make ads more valuable. “Users in France follow a similar pattern, but two weeks later” is even better. Similarly, predicting waves of infection from the rise and fall of “health” searches is useful for public health, but also for pushing or tabling ad campaigns.

    • interdimensionalmeme@lemmy.ml
      2 months ago

      I want an open source AI to sort my tabs, understand them, and answer my questions about their content. But running locally and offline

      • Zaktor@sopuli.xyz
        2 months ago

        Unless they’re going to publish their data, AI can’t be meaningfully open source. The code to build and train an ML model is mostly uninteresting. The problems come in the form of data and hyperparameter selection, which, either intentionally or unintentionally, do most of the shaping of the resulting system. When it’s published it’ll just be a Python project with some magic numbers and “put data here”, with no indication of what went into data selection or choosing those parameters.

        • interdimensionalmeme@lemmy.ml
          2 months ago

          I just want a command-line interface to my browser; then I’ll tell my local Mixtral 8x7B instance to “look in all my tabs and place all tabs about ‘magnetic loop antennas’ in a new window, ordered with the most concrete build instructions first”. 100% open source model. I’m looking into the Marionette protocol to accomplish this. It would be nice if it came with that out of the box.
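The idea splits naturally in two: Marionette (Firefox’s remote-automation protocol) to enumerate the open tabs, and a local model to rank them by topic. A minimal sketch of the ranking half in Python, with a crude keyword count standing in for the LLM relevance score — the tab list, `group_tabs` name, and scoring are all hypothetical:

```python
# Hypothetical sketch: filter a list of (title, url) tabs down to one topic.
# In the real setup the tab list would come from Firefox via Marionette and
# the relevance score from a local LLM; a keyword count stands in here.

def group_tabs(tabs, topic):
    """Return tabs whose title mentions the topic, most relevant first."""
    topic_words = topic.lower().split()
    scored = []
    for title, url in tabs:
        # Crude stand-in for an LLM score: count topic words in the title.
        score = sum(title.lower().count(w) for w in topic_words)
        if score > 0:
            scored.append((score, title, url))
    scored.sort(reverse=True)  # highest score first
    return [(title, url) for _, title, url in scored]

tabs = [
    ("Magnetic loop antenna build guide", "https://example.org/loop-build"),
    ("Cat pictures", "https://example.org/cats"),
    ("Small loop antennas explained", "https://example.org/loop-theory"),
]
print(group_tabs(tabs, "magnetic loop antenna"))
```

The off-topic tab is dropped and the remainder come back ordered, which is the shape of output you’d want before telling the browser to move them into a new window.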

          • Zaktor@sopuli.xyz
            2 months ago

            What does “open source” mean to you? Just free/noncorporate? Because a “100% open source model” doesn’t really make sense by the traditional definition. The “source” for a model is its data, not the code and not the model itself. Without the data you can’t build the model yourself, can’t modify it, and can’t inspect why it does what it does.

            • interdimensionalmeme@lemmy.ml
              2 months ago

              I think the model can be modified with LoRA without the source data? In any case, if the inference software is actually open source, all the necessary data is free of any intellectual property encumbrances, and it runs without internet access or non-commodity hardware…

              Then it’s open source enough to live in my browser.

              • Zaktor@sopuli.xyz
                2 months ago

                You can technically modify any network’s weights however you want with whatever data you have lying around, but without the core training data you can’t verify that your modifications aren’t hurting the original capabilities. Fine-tuning (which LoRA is for) isn’t the same thing as modifying a trained network: you’re still generally stuck with the original trained capabilities, just reworking the final layer(s) to redirect/tune them toward your problem. You can’t add pet faces into a human face detector, and if a new technique comes out that could improve accuracy, you can’t rebuild the model with it.

                In any case, if the inference software is actually open source and all the necessary data is free of any intellectual property encumbrances, it runs without internet access or non-commodity hardware.

                Then it’s open source enough to live in my browser.

                So just free/noncorporate. A model is effectively a binary and the data is the source (the actual ML code is the compiler). If you don’t get the source, it’s not open source. A binary can be free and non-corporate, but it’s still not source code.
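The fine-tuning point is easy to see numerically: LoRA never touches the frozen base weights W; it trains only a low-rank correction B·A, and the effective weights become W + B·A. A toy sketch in plain Python (2×2 weights, rank-1 update; all sizes and numbers made up for illustration):

```python
# Toy illustration of a LoRA-style update: the base weights W stay frozen,
# and fine-tuning only learns a low-rank correction B @ A added on top.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_apply(W, B, A, alpha=1.0):
    """Effective weights W' = W + alpha * (B @ A); W itself is untouched."""
    BA = matmul(B, A)
    return [[W[i][j] + alpha * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0],
     [0.0, 1.0]]      # frozen 2x2 base weights
B = [[1.0], [2.0]]    # 2x1 factor: the only trained parameters...
A = [[0.5, 0.5]]      # ...together with this 1x2 factor

W_eff = lora_apply(W, B, A)
print(W_eff)  # prints [[1.5, 0.5], [1.0, 2.0]]
```

Since only B and A are trained and W is fixed, whatever capabilities (and data choices) are baked into W stay baked in, which is the point being made above: you can redirect a model this way, but you can’t rebuild it without the original training data.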