An LLM can't make something original, it can only make something derivative. But that derivative work isn't the same as a human's derivative work, because a human isn't writing each word or phrase by algorithmically predicting the likely "correct" next word or phrase. What humans do is orders of magnitude more complex, though at times it can also amount to accidental or intentional plagiarism.
In short, an LLM's output is necessarily a string of preexisting human inputs. A human's output, while it can be informed by and reference other human inputs, can be an original analysis. The AI that is publicly available is not sophisticated enough to be more than fancy predictive text.
But when the answers aren't original thoughts but regurgitations of other people's thoughts about the book, then it's plagiarism. LLMs can't provide original output, only variations on what people have made available (whether legally or not). The answer might not even be correct or make any sense. It's just predictive text to a crazy degree.
When you copy someone's work without attribution, that's plagiarism. When your output is only possible because of someone else's work over which they own copyright, and the output replicates the copyrighted material, that's copyright infringement.
They never had to learn. Google spent a shitload of money on UI and UX, so we've hit a point where babies, who cannot talk, can navigate a tablet. If that's your version of the internet, your computer literacy goes way down.
The fact that the platform makes the community isn't necessarily something I'd consider at first, and I don't think Elon considered it either.
Small correction: it's actually the community that makes the platform. The community exists regardless of platform; the platform is there to help the community connect. The platform can help make new communities by facilitating connections, but the platform needs communities to exist. People will form communities tailored to their interests without the internet, and they've done it for millennia. If the platform makes it difficult for communities to connect, then the community will just go elsewhere.
That tracks with what I've heard from people in the industry. For Musk and now Huffman, it's some sort of ideological or philosophical thing in terms of how they've dramatically shifted the focus and operation of these previously (mostly) stable companies.
I agree, the decentralized aspect is a huge plus for this system. But I think the OP's approach is fundamentally misguided, and I have my suspicions for a few reasons.
It's a 45 minute meeting that provides an insight into Meta's operations. There's no need to contribute anything, just sit back and listen.
There's no reason to post about this and brag about it now. Compare this with what Christian did when Reddit tried to claim Apollo was blackmailing them. There's no leverage here, just some internet points.
We have one email and a response. Was there any further communication? How do we know this is all that was said? I could go further and question the legitimacy of this screencap but I'm willing to give OP the benefit of the doubt here.
As others have pointed out, how does shutting them out completely stay in keeping with fediverse principles? This is a legitimate question since, to me, it seems antithetical to the spirit of the fediverse to shut them out, despite the risks, before they demonstrate bad behavior here.
To quote OP's email, "Zero interest in having a conversation with #Meta 'off the record or otherwise.'" "Otherwise" here is...on the record. So OP also won't meet with them in a completely open meeting?
Look, I get it, I dislike Meta too. But this just seems like a misstep and bragging for zero actual gain.