Since you didn’t specify the kind of “AI” model, I’ll assume you mean LLMs. But this really applies to any so-called AI model. All of them are trained on stolen content. Every last one of them is a hurricane of millions upon millions of copyright violations. So the answer to your question is a loud, resounding “no”.
Okay, thanks. So you're saying no existing model is ethical. I was actually thinking of model biases that might skew the output in a Machiavellian direction.