Major search engines (like Google) have made it clear that they care about content quality and usefulness, not whether a human or an AI wrote it.
In practice, online platforms aim to filter out low-quality, spammy, or unhelpful material regardless of how it was produced, rather than blanket-banning all AI-generated content.
Responsible use of AI-generated content typically involves transparency about AI involvement (e.g., disclosing the use of AI where appropriate), rigorous human editing and fact-checking of the AI's output, and the addition of genuine human expertise or insight before publication.
Notably, a high "AI-generated" score from a detector does not automatically mean the content is poor or unethical; AI-assisted content can be acceptable if it is high quality and has been vetted by humans.