Big Tech May Have Already Fielded a New Weapon Against the Small Internet: AI
About 70-80% of the way through a recent episode of Security Now with Steve Gibson and Leo Laporte, I heard something that caused me to wonder how much longer the average Internet user will continue to be able to access any website he wants on the small Internet (AKA the personal Web or the Indie Web). Gibson read an email from a listener named Mark Jones, who related that Windows had blocked his website, midlandchemist.org (which now seems to be back online). After contacting Microsoft to ask why, Jones realized that no one there knew. Windows Defender is now AI-based rather than rule-based, so no one could tell Jones what he had done to cause his website to be blocked. This meant that in order to have his website added to Microsoft's whitelist, Jones would have to find a Microsoft employee willing to take responsibility for the action. At a huge corporation like Microsoft, with its giant bureaucracy, the probability of succeeding at that seems less than certain to me.
We should all be aware by now that big tech considers small websites to be in the way of its bid for control of the Internet. For example, small websites could in theory compete for top slots in search results, lowering Google's income from the sale of key search terms. In the 1990s and 2000s, everywhere I looked online I saw corporate-sponsored articles claiming that websites outside big tech's walled gardens were unsafe for Internet users. I have been commenting for years on search engines' discrimination against personal websites, and I am not the only one who is convinced that if big companies can use AI to make small websites less visible to Internet users, they will.
Others have picked up on the fact that Windows 11 is now blocking websites. Some are also aware that Windows Defender contains some form of what Microsoft is calling "AI". As far as I can determine from a cursory online search, Microsoft does not seem to have spent much effort advertising Windows Defender's new AI capability to Windows users. Perhaps that is because the bugs have not yet been worked out? Or perhaps Microsoft simply no longer cares about Windows users or what they think. (Did it ever???) Knowing Microsoft's history with Windows security, my guess is that the AI-related bugs in Windows Defender will never be completely removed. Although Microsoft's website does not specifically mention AI, one page does summarise Defender's role in protecting users on the Internet:
Microsoft Defender's web protection helps protect you against malicious sites that are being used for phishing or spreading malware. Web protection is currently available on Windows, iOS, and Android.
It does this by checking links you click on, or that an app tries to open on your device and comparing them against our constantly updated list of sites known to be dangerous. If we see you're on your way to a site we know is dangerous, we'll warn you...
You won't see web protection in action unless it detects that a dangerous link has been called.
The error message displayed by the Edge browser when you attempt to go to a URL that Microsoft's AI thinks is dangerous begins with "We Blocked a harmful link. Don't worry, we blocked this site before anything bad could happen". As you may have already noticed, when some new versions of web browsers block a website, you can't override them. At least, I haven't been able to find a way. And if I haven't, I am sure I am not the only one.
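The difference between the two approaches is easy to illustrate. The short sketch below is mine, not Microsoft's code; the blocklist entries, the toy "model", and the 0.8 threshold are all invented for illustration. A rule-based filter like the one described in Microsoft's own page can tell you exactly which entry or rule caused a block, while a trained classifier can only hand back a score:

    # Illustrative sketch only -- not Microsoft's code. The blocklist entries,
    # the toy "model", and the 0.8 threshold are invented for this example.
    import hashlib
    from urllib.parse import urlparse

    BLOCKLIST = {"known-bad.example", "phishing.example"}  # hypothetical entries

    def rule_based_verdict(url):
        """Old style: the decision comes with a reason you can act on."""
        host = urlparse(url).hostname or ""
        if host in BLOCKLIST:
            return True, f"host '{host}' is on the blocklist"
        return False, "no rule matched"

    def toy_model_score(url):
        """Stand-in for a trained classifier: it returns a score, not a reason."""
        # A real model would be millions of learned weights; hashing the URL
        # here just mimics a number that no human can explain after the fact.
        return hashlib.sha256(url.encode()).digest()[0] / 255

    def ai_based_verdict(url, threshold=0.8):
        """New style: all anyone can say is that the score crossed a threshold."""
        score = toy_model_score(url)
        return score >= threshold, f"model score {score:.2f} vs threshold {threshold}"

The contrast is the whole point: with the first function, a support engineer can tell a site owner exactly which blocklist entry to dispute; with the second, the honest answer is "the score came out high", which is more or less all Jones was able to get out of Microsoft.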
AI seems to be taking hold all across the corporate landscape. Most of us are aware by now that Internet search engines have been moving to AI. Microsoft's Copilot, an AI assistant for writing computer code, is widely known and beginning to be widely used. Big social media sites have either switched to AI for moderation or are in the process of switching. AI is also being used for network security applications. Another Microsoft web page says,
AI-powered security operations: Microsoft delivers innovations to effectively protect against today’s complex threat landscape. The AI-powered unified security operations platform offers an enhanced and streamlined approach to security operations by integrating security information and event management (SIEM), security orchestration, automation, and response (SOAR), extended detection and response (XDR), posture and exposure management, cloud security, threat intelligence, and AI into a single, cohesive experience, eliminating silos and providing end-to-end security operations (SecOps). The unified platform boosts analyst efficiency, reduces context switching, and delivers quicker time to value with less integration work.
This means Microsoft has another tool and another excuse for blocking the small Internet.
For decades, in all areas of our lives, we have heard excuses like, "I have no control over this. It's all up to the computer." Now even the people who run the computer can claim they have no control over it. If a blog is ever "de-indexed" from Bing again, Bing's engineers might try to spout rubbish like, "Company policy dictates that our AI makes all of those decisions. We have no authority to override them." That would be a lie, of course. Claims of having no control over the actions of a computer have always been lies. Someone can always make changes to a computer's programming, regardless of whether they call it "programming" or "training".
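If that sounds abstract, consider how little code a human override actually takes. The sketch below is hypothetical (the names and the stand-in model are mine, not anything Microsoft has published), but any company that wanted an escape hatch could layer a manually maintained allowlist on top of its AI filter so that a person's decision always wins:

    # Hypothetical sketch: a human-controlled allowlist layered on top of an
    # opaque AI verdict. The names and the stand-in model are invented.
    HUMAN_ALLOWLIST = {"midlandchemist.org"}  # an entry a person added on purpose

    def ai_thinks_dangerous(host):
        """Stand-in for whatever the trained model decides."""
        return True  # assume the model is wrongly flagging the site

    def should_block(host):
        # If a human has vouched for the site, the model never gets a vote.
        if host in HUMAN_ALLOWLIST:
            return False
        return ai_thinks_dangerous(host)

    print(should_block("midlandchemist.org"))  # False -- the human decision wins

Whether anyone at a large company is willing to put their name next to an entry on that list is a bureaucratic problem, not a technical one, which is rather the point.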
I will give one quick example. Years ago, I bought two new brake discs for my car, and while I was still in the auto parts store, I noticed the sales clerk had overcharged me on the sales tax. I pointed this out to him, but he said he had no control over it because the computer decided how much to charge and printed the receipts. He didn't dispute what I said, so he apparently knew his store was stealing from its customers. This made me so angry that I called the state tax office to complain. A few weeks later, I received a call at home from the store manager. He apologised profusely and asked what he could do to make his "mistake" up to me. I told him the only thing I wanted from him was to stop stealing from his customers. It is amazing how complete indifference and an inability to override the computer can suddenly turn into such concern and a quick reversal when one is facing a possible prison term.
The bottom line here is that "AI" is being used, and will very likely be used increasingly, in our computer networks, operating systems, web browsers, web servers, and other software to mitigate all manner of threats from the Internet. This means AI will be given more responsibility for blocking so-called "dangerous" websites. Increasingly, no one will know or care exactly why some websites are considered to be dangerous and others are not. Not to be overly alarmist, because anything could happen, but as anyone who has ever tried to reach a human being at Google to complain about a problem with one of its applications knows, no one is likely to be listening to our complaints. This means that unless we can find some credible threat to use against these big companies, we will be even less able than we are now to do anything about their actions.

This suggests to me that we should begin to prepare for a time when the small Internet will be even more isolated from the corporate Internet than it is now. I am not exactly sure how to do that, but my guess is that, as this issue receives more attention, the small Internet will begin to discuss it. To watch and participate in those discussions, stay tuned to sites like the Cheapskate's Guide, GRC.COM, Lemmy, and Mastodon while you can still reach them. You may also want to make sure you have access to a computer running Linux or a version of Windows older than Windows 11. Having one or more older web browsers installed on your computer may also be a good idea.