But in a separate Fortune editorial from earlier this month, Stanford computer science professor and AI expert Fei-Fei Li argued that the “well-meaning” legislation will “have significant unintended consequences, not just for California but for the entire country.”
The bill’s imposition of liability on the original developer of any modified model will “force developers to pull back and act defensively,” Li argued. This will limit the open-source sharing of AI weights and models, which will have a significant impact on academic research, she wrote.
Holy shit this is a fucking terrible idea.
I read that as “incentivizing keeping AI in labs and out of the hands of people who shouldn’t be using it”.
That said, you’d think they would have learned from piracy by now: once it’s out there, it’s out there. You can’t put it back in the jar.
They should be doing the exact opposite and making it incredibly difficult not to open source these models. Major platforms open sourcing much of their systems is basically the only good part of the AI space.

Also, they used our general knowledge and culture to train the damn things. They should be open sourced for that reason alone. LLMs should be seen and treated like libraries, as collections of our common intellect, accessible by everyone.
Damn straight. I don’t fear AI, I fear an even more uneven playing field