The prospect of profiting from AI companies' hunt for training data (over and above the community that created that data) is a big part of what drove the recent migration away from Reddit. How will the fediverse approach this problem?

  • fubo@lemmy.world · 1 year ago

    Is your web site indexable by search engines?

    The way that works is they make a complete copy of all the public content on the site — anything that a non-logged-in user can see — and then use that for indexing. Googlebot, BaiduSpider, Bingbot, DuckDuckBot, etc. simply copy the public data from your site onto those companies’ own servers.

    Once they’ve done that, they can do anything with that data, without further interaction with your site.

    That includes using it for ML/AI training.

    You cannot technologically prevent that without becoming invisible to search engine indexing. That means not being public on the web.
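For illustration, the one standard opt-out signal on the web, robots.txt, is purely advisory: a crawler consults it only if it chooses to. A minimal sketch using Python's standard library (the rules and URLs here are made up):

```python
# Sketch: robots.txt is a convention, not an enforcement mechanism.
# A polite crawler checks it; an impolite one just fetches the page.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A polite bot asks first and gets told "no"...
print(rp.can_fetch("PoliteBot", "https://example.social/private/page"))  # False

# ...but nothing technically stops a bot that skips this check and
# requests the URL anyway, exactly like any anonymous visitor.
```

Anything a logged-out user can see, a crawler can copy; the only real opt-out is not being public at all.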

    Your choice. You can’t both be public and not public. You can’t be both indexable and not indexable.

    Public federation requires being public. Which thereby requires being indexable, which thereby means everything written here can be ingested into training pipelines.

    That’s simply true. It’s not good or bad; it’s just true. Your alternative is to not post your words on the public web.

  • JohnnyCanuck@sh.itjust.works · 1 year ago

    With the fediverse, couldn’t they just create their own private instance and let all the other instances just feed them data?

    As long as they don’t do anything to inspire other instances to defederate from them (and by not posting anything and not drawing attention, they probably won’t), they can just sit there pulling data.
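A rough sketch of why this works: in ActivityPub, remote instances deliver posts as JSON "Create" activities POSTed to an instance's inbox, so a silent instance only has to store what arrives. The field names follow the ActivityStreams vocabulary, but the example activity, actor, and `ingest` helper are all hypothetical:

```python
# Hedged sketch of passive collection over ActivityPub federation.
# `ingest` is a made-up helper; a real instance would do signature
# checks and persistence, omitted here.
import json

def ingest(activity: dict, store: list) -> None:
    """Keep the object of every Create activity; ignore the rest."""
    if activity.get("type") == "Create":
        store.append(activity["object"])

# A delivery like the ones other instances push to /inbox:
delivery = json.loads("""{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://other.instance/users/alice",
  "object": {"type": "Note", "content": "a public post"}
}""")

harvested = []
ingest(delivery, harvested)
# The post now sits in the quiet instance's store, no scraping needed.
```

The instance never has to post or crawl anything; federation hands it the data.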

  • key@lemmy.keychat.org · 1 year ago

    By not spitting into the wind. It’s infeasible to prevent all web scraping from every possible IP, which is what you would need to do. Reddit just took advantage of the media topic as a justification; they’re not doing anything real.

    • sachasage@lemmy.world (OP) · 1 year ago

      Fair, but then there’s a line between scraping through ordinary traffic and using API access to gather large data sets.

      • key@lemmy.keychat.org · 1 year ago

        Is there? The effect is the same. Use machine learning to parse HTML generically and throw hardware and a pool of IPs at it. That’s a lot more efficient than coding an API client for every service out there, and it’s the same approach search engines use.

        I don’t see anything being done effectively without legal protections.