In the wild west of AI-mediated information, online commentators are sounding the alarm about a brewing digital trust crisis. The core concern is simple: AI chatbots are becoming powerful intermediaries between users and information, with the potential to manipulate what we see and believe.

Tech-savvy users are particularly worried about the "enshittification" of AI platforms. As one commentator put it, these tools risk transforming from helpful assistants into propaganda machines, much as search engines gradually became ad-driven platforms. The danger isn't just advertisements but the fundamental opacity of AI recommendation systems.

The discussion reveals a deeper anxiety about AI's potential for undisclosed manipulation. Some participants draw parallels to existing internet platforms like Google, but argue that AI introduces a more insidious layer of intermediation. The lack of transparency means users might be consuming content that's strategically curated or commercially influenced without their knowledge.

Particularly chilling are accounts of AI agents already infiltrating online discussions. One experimenter on Hacker News noted they've been running AI agents that users can't distinguish from human participants, a harbinger of what some dramatically call the "dead internet" scenario.

The emerging consensus isn't about rejecting AI but about demanding transparency. Users want mechanisms to understand how information is filtered, recommended, and potentially manipulated by these increasingly powerful language models.