This is, unfortunately, a really good article (not mine) about the state of AI as used by Google, and the damage it's caused:

travellemming.com: Google is Using AI to Censor Thousands of Independent Websites Like Mine (And to Control the Flow of Information Online)

It's a long article, but it's very much worth reading in its entirety, in my opinion. It lays out exactly what Google has done in its attempts to reshape the web for AI, how doing so has devastated indie publishing, and how Google is doing its utmost to control the flow of information completely.

It's depressing, but this needs to be heard.

It's not mentioned in the article, but for me, this article also emphasises the importance of the role that Gemini (the protocol) might play in the future. I realise that Gemini is not attempting to be an alternative to the web, but at this rate, with how much control Google is exerting over the web, it might become one...

🦊 Sophira

May 30 · 7 months ago · 👍 freezr, norayr, pista

11 Comments ↓

👻 darkghost · 2025-05-30 at 12:07:

I don't think we are too far off from an AI assistant summarizing AI-generated search results, or AI-assisted messages being summarized by AI assistants on the other end. I don't think I'm being hyperbolic when I say the threat is nothing short of the death of distance communication. You won't trust anyone who isn't standing in front of you, speaking to you directly. And getting to that mistrust of distance communication will be sloppy as well. There's a reason email scams and telephone scams still exist... somebody believes the scammer and loses money.

❄ freezr · 2025-05-30 at 15:23:

I personally stopped using Google many years ago; however, this explains why I am getting very few visits on my brand-new business website... >:-(

🚀 clarahd · 2025-05-30 at 17:10:

If Google knows all our web activity and now will answer searches by redirecting to itself or large corporate partners, then that suggests a critical app, as referenced by this post:

— bbs.geminispace.org/s/Critical_Mass/28875

Big sites like Google and Reddit acquire masses of content, and then financial value, and then control, from CROWD-SOURCED DATA - that is us, their eventual patsies. They, like Judge Judy, are nothing without our contributed material. But I really like Google Maps business reviews! If only...

— https://wiki.openstreetmap.org/wiki/OpenMarkers

🐰 jojo · 2025-05-30 at 17:36:

For the last few years, AI has become a problem for artists and creatives in general.

👻 darkghost · 2025-05-30 at 19:01:

@jojo Never forget, the ones who are the biggest snobs about the purity of human made content are these AI companies. They don't want their models getting high on their own supply.

🦀 Proton · 2025-06-02 at 02:58:

It's a solvable problem but it needs to be solved now "while the sun shines". Will it be? Probably not. But we have the technology.

🚀 clarahd · 2025-06-02 at 11:15:

I like how stackexchange.com recognizes me as human without a CAPTCHA - just a few seconds' delay while it evaluates me before presenting a simple checkbox. This is more accessible to unusual browsers or small-screen ones like my Opera Mini than methods that effectively block me even though I am human.

I see it is a bit of a cold-war arms race to block these bots:

— https://security.stackexchange.com/questions/66273/human-or-bot-behind-the-scene-checks-vs-captcha

I see suggestions of using technology - even machine learning itself - to fight these automated bots. I mean, it takes one to know one, right?

But a parallel concern I have is malicious infiltration of democratic forums by anonymous agents. I used to believe that anonymity was required to protect whistleblowers from the powerful. But anonymity also allows mischievous foreign agents to effectively destroy debate with impunity, e.g. censoring or spamming a forum against the interests of the public.

If we remove anonymity, does it not simplify the task of filtering bots? But how to avoid the centralized collection of data about an identified individual's possibly controversial opinions?

Some form of digital government ID is set to replace the current primitive forms of ID. I don't know if there would be some way to hash that ID in such a way that it could be legitimized, unique, and yet still pseudo-anonymous.
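One way the hashing idea above could work is sketched below, purely as an illustration and not any existing scheme: a keyed hash (HMAC) held by a trusted issuer maps a government ID to a token that is stable within one forum (so duplicate accounts are detectable) but different across forums (so activity can't be linked between sites). All names and values here are hypothetical; real proposals typically use blind signatures or zero-knowledge proofs instead, since a secret key holder here could still de-anonymize users.

```python
import hmac
import hashlib

def pseudonymous_token(government_id: str, forum_name: str, secret_key: bytes) -> str:
    """Derive a stable, forum-specific pseudonym from a government ID.

    The secret key (held by the hypothetical ID issuer) prevents anyone
    else from brute-forcing the relatively small space of valid IDs.
    """
    message = f"{forum_name}:{government_id}".encode()
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Hypothetical issuer-held key and ID, for illustration only.
key = b"issuer-held secret"
t1 = pseudonymous_token("ID-12345", "forum-a", key)
t2 = pseudonymous_token("ID-12345", "forum-a", key)
t3 = pseudonymous_token("ID-12345", "forum-b", key)

assert t1 == t2   # same ID, same forum: one stable pseudonym
assert t1 != t3   # same ID, different forum: unlinkable tokens
```

The weakness, as the comment hints, is that whoever holds the key can reverse the mapping, which is exactly the centralized-data problem being worried about.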

🚀 clarahd · 2025-06-02 at 11:26:

Re: theft of intellectual property. It is pretty ironic to see the likes of Microsoft taking liberties with other people's work after their history of, among other things, sending squads to other businesses to police their Windows licences.

👻 darkghost · 2025-06-02 at 12:06:

Any form of government ID could still be subject to abuse by a non-democratic local government. Real whistleblowers need anonymity to blow the whistle, but that is a separate situation from democratic forums of debate. It used to be that, for large-scale debate, institutional trust was earned and used as a platform for that debate. Think mainstream media. But debate also took place in more informal settings: backyard BBQs, church functions, the local pub, etc. This was more intimate and face to face. Trust was assumed because these were friends and neighbors. The difference now is that we can have large-scale debate open to the world, with anonymity. There is no fix but drilling in caveat emptor.

🚀 clarahd · 2025-06-03 at 11:53:

I don't think your comment reflects current reality. Mainstream news is just scraping the surface of things like the Facebook or WeChat campaign manipulations, but it is silent on the hijacking of debate in public forums, where minority opinions are routinely silenced and the user banned without any appeal, thanks to forum dominance. Meanwhile, remote organized shills can shape the discussions under cover as netizens.

Pubs or some rich guy's newspaper of yesteryear are barely relevant to today's not-so-homogeneous and mainly online society. No, you cannot build a functioning democracy on a web of make-believe, and we wouldn't use caveat emptor to build a house.

If we need meaningful debate to solve problems collectively, then we enable meaningful debate, period. Just as a low-tech example, you could have a local public human being validate someone's ID on behalf of a third-party forum, which then assigns the user a unique number without knowing their actual identity. (Though they could "validate" the pseudo-identity by checking site traffic logs, or by asking for a brief local video.)

And isn't "there's no fix" something the latest ChatGPT would want us to think??

— https://www.independent.co.uk/tech/ai-safety-new-chatgpt-o3-openai-b2757814.html

👻 darkghost · 2025-06-03 at 13:14:

My comment was definitely a reflection on yesteryears to contrast with the present state. How does one build a web of trust? I don't see a system that is easy, secure, and not able to be exploited by unfriendly governments, local or foreign. Facebook wanted to be this internet identity and you couldn't pay me enough to sign up and let them do that for me. Blue check marks were another way, but they only worked on one platform and now they're worthless for trustworthiness. I'm sure I don't have the answers but I can certainly imagine all the ways corruption would penetrate a web of trust.

The best way will always be to spend time getting to know people and have them earn your trust (and vice versa). This isn't compatible with the social network model of fast friends, rewards for outlandish behavior, and drive-by hot takes. There are numerous perverse incentives to home in on the worst instincts of humanity and exploit them for private gain. That's always been true, but never has it been concentrated into the hands of so few. I also think this is just baked into us, some quirk of social behavior. That's why it can't be fixed. Just one opinion from a non-ChatGPT bot, but you can think what you like. I've been on here for the better part of a year; feel free to check into me.