Imagined Future: Data democracy at Pint of Science 2074

A black and white image of a hammer used by the Luddites[1]: Enoch's Hammer, of Luddite fame, at the Tolson Memorial Museum.

1: A black and white image of a hammer used by the Luddites

1: Pint of Science Festival

2: Imagined futures: Tales from tomorrow

--- Wow! It’s really great to see so many of you turn out for this very, very special day! Here in 2074, today is the 50th birthday of our good friend Jo, and if that’s not a great reason to celebrate, I don’t know what is! Don’t worry, we’ll get to the singing and the cake in about 10 minutes! But first, I wanted to be _that guy_, who uses this chance to look at how the world has changed throughout Jo’s lifetime. And oh dear, what a different world it was back then compared to now!

May 15, 2024. In 2074, we look back at this time as the top of the “AI” hype cycle: Generative “AI” was the gold rush topic of the day, and the UK government had just doubled down on their own efforts to get “AI” ingrained into all spheres of life – and the public sector[1]: from healthcare and education to crime[2] “prevention”, immigration, and access to public services. We saw a lot of these things being implemented rather quickly. You know the saying: If all you have is a hammer, everything looks like a nail. As a result, a lot of public services got _“AI enhanced”_ – in the name of increasing productivity and saving money. And despite Brexit, many of these “AI” systems directly copied the most “successful” ones used across countries in the EU.

1: the UK government had just doubled down on their own efforts to get “AI” ingrained into all spheres of life – and the public sector

2: crime

These “AI systems” directly affected our friend Jo from early life onwards: Instead of investing in healthcare professionals, “AI” started to take over a lot of the NHS _“frontline”_ work. So, in 2028, when Jo’s parents were worried about their child needing support for their invisible disabilities, direct access to doctors and human experts was no longer an option. Instead, chatbots had taken over the initial triage[1] which – wrongly – did not provide any support or referrals to human experts for Jo.

1: chatbots had taken over the initial triage

As a result, Jo never got the accommodations they would have needed in school. Which set them up for more and more challenges going forward: Their struggles in school remained unaddressed, as the now “AI”-enhanced[1], predictive[2] evaluation in schools[3] did not take into account the possibility of any medical conditions that were not formally diagnosed.

1: “AI”-enhanced

2: predictive

3: in schools

So, in 2042, when Jo had aged out of the school system, they unfortunately – but unsurprisingly – kept struggling to find and keep employment. This was despite (or because of) the newly deployed “AI systems” to automatically assign - or rather force - people to take on jobs these systems assigned to them[1]. But at the end of the day, these systems tended to _“give up”_ on people whom they deemed not worth “investing in” any further. And so, Jo and their family spent the next few years fighting for various benefits. And what a fight it was – because “AI” did not make it any easier for them to get benefits. In – by then already time-honoured tradition[2] – the “AI systems” used across[3] the various government departments[4] kept accusing them of fraudulently receiving benefits[5].

1: newly deployed “AI systems” to automatically assign - or rather force - people to take on jobs these systems assigned to them

2: time-honoured tradition

3: used across

4: government departments

5: accusing them of fraudulently receiving benefits

Of course, it wasn’t just Jo and their family being affected by this – hundreds of thousands, if not millions, of people across the country struggled with the “AI bureaucracy”[1] that became more and more entrenched. By 2048, there was widespread discontent with “AI” and its automated oppression – leading thousands of people to protest across the country, calling for _another “AI” winter_[2], to save both society and the environment. Beyond those protests, it also led to direct action in a renewed “AI from Below”[3] movement. A movement that Jo and their friends joined early on and were quite instrumental in. They started by investigating how and why these systems fail, beyond the obvious political will for deploying them. Instead of taking a _“technology first”_ stance, they took a *“people & problem first”* viewpoint.

1: “AI bureaucracy”

2: _another “AI” winter_

3: “AI from Below”

Looking at the existing “AI” solutions, they quickly re-discovered one of the big reasons why these approaches failed: the underlying assumption that predicting towards an *“average”* or *“most likely”* outcome would automatically make it correct or even good. Because, for all the advanced technology and mathematics, that’s what those systems did at the end of the day. Of course, even early 21st century Sci-Fi authors such as Ted Chiang had warned about this. Already in 2023, Chiang wrote about how “AI” is “a blurry JPEG of the Web[1]”, referring to how these systems tend to only regurgitate existing data into average representations, creating “blurry” outcomes. He used this to refer to “ChatGPT” – which kids today can read about in the history section of *NeoWikipedia* – but this of course applies to all “AI” tools. And, as it turns out, none of us is really “average”. That’s why back when we had cars, you could move the seats in them back and forth to fit your height[2].

1: a blurry JPEG of the Web

2: you could move the seats in them back and forth to fit your height
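To make the “average” problem above concrete, here is a minimal sketch (not from the talk, with made-up numbers): under squared error, the single “best” prediction is the mean – yet when the data splits into distinct groups, that mean can be far from every actual person.

```python
# Made-up example: two distinct groups of heights, no one near the middle.
heights_cm = [155, 157, 158, 183, 185, 186]

# The "best" single prediction under squared error is the mean...
average = sum(heights_cm) / len(heights_cm)  # ≈ 170.7 cm
print(f"average prediction: {average:.1f} cm")

# ...but it is a poor fit for every individual in the data.
errors = [abs(h - average) for h in heights_cm]
print(f"closest anyone gets to the average: {min(errors):.1f} cm off")
```

The “optimal” average prediction here misses everyone by more than 12 cm – which is exactly why car seats were adjustable.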

They also found that “AI” not only generates *“average”* outcomes, but can only produce these averages based on historic data – with all the biases, discriminations and other issues that are within the data. So what if, *for example*, police tended to over-police marginalised communities[1]? Or if checks for social security fraud particularly targeted certain demographics[2], such as _“deprived neighbourhoods”_ or those with disabilities – like Jo experienced first-hand? In those cases, “AI” was particularly likely to decide against these communities. And even worse, all attempts to address those overtly prejudiced systems just pushed them to become *covertly prejudiced* instead, by picking up on secondary signals in the data, such as the way people write and talk!

1: over-police marginalised communities

2: social security fraud particularly targeted certain demographics
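A hypothetical sketch of how that covert prejudice works (everything here is invented for illustration): even when the sensitive attribute – say, neighbourhood – is dropped from the training data, a correlated proxy feature (here, an invented "informal_language" flag) lets a model reproduce the original biased audits.

```python
# Invented history: fraud audits targeted neighbourhood "A", whose
# residents also happen to write their claims more informally.
# The neighbourhood itself is NOT a feature - only the proxy is.
history = [
    {"informal_language": True,  "flagged": True},   # neighbourhood A
    {"informal_language": True,  "flagged": True},   # neighbourhood A
    {"informal_language": True,  "flagged": True},   # neighbourhood A
    {"informal_language": False, "flagged": False},  # neighbourhood B
    {"informal_language": False, "flagged": False},  # neighbourhood B
]

def learned_flag_rate(informal: bool) -> float:
    """Rate at which past cases with this proxy value were flagged."""
    group = [h for h in history if h["informal_language"] == informal]
    return sum(h["flagged"] for h in group) / len(group)

# A naive "model" trained on this history flags everyone matching the
# proxy - reproducing the over-policing without ever seeing the
# neighbourhood feature at all.
print(learned_flag_rate(True))   # 1.0 - informal writers always flagged
print(learned_flag_rate(False))  # 0.0
```

Real systems are of course far more complex, but the mechanism is the same: remove the overt signal, and the model learns to lean on whatever correlates with it.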

Jo and their friends thus set out to do their own data science research and create both the data and the “AI” deployments that would be useful to them – making use of their embodied, lived understanding of these existing limitations. Given that _nothing about us without us_[1] had been a rallying cry for the disability rights movement and other marginalised groups for decades, it was no surprise that the _“AI from below”_ movement started there in the early 2050s.

1: _nothing about us without us_

Collectively, they took inspiration from some of the examples of movement-driven data collections for advocacy and citizen-driven science that happened back in the early 21st century:

- They found communities that created their own census[1], to collect the “missing data” on how they were being disadvantaged and discriminated against.
- They found _gig workers_ for _ride share companies_ coming together, collecting & using their own driving data[2], to understand their hourly wages, ultimately unionising.

1: communities that created their own census

2: collecting & using their own driving data

- They found people with chronic conditions, who collectively engaged in patient-led research to create “AI” systems to provide the tools that the medical research system didn’t provide for them[1].

1: collectively engaged in patient-led research to create “AI” systems to provide the tools that the medical research system didn’t provide for them

- They found communities of trans people[1] and disabled people[2], collecting their own data to fight back against medicalisation and to instead create the research[3] they needed to actually improve their lives.

1: trans people

2: disabled people

3: create the research

By their nature, these efforts that Jo and the _AI from Below_ movement put together throughout the 2050s typically remained small and localised, and generally people looked at them in a somewhat bemused way. But slowly, this practice became more accepted: After years of _“using ‘AI’ to deploy interventions at scale”_, these large interventions became less and less common, as people recognised the benefits of small-scale, localised and community-driven “AI” approaches. Ultimately, even the government agreed, and finally stopped all the _“one-size-fits-all ‘AI’ deployments”_ in 2069.

Of course: These changes were also helped by the world slowly running out of resources such as water and electricity[1], which made such large efforts harder and harder. _Thanks, climate change!_ But that’s not to lessen the credit due to Jo and their fellow grassroots researchers. It was thanks to their efforts that we came to understand how local, situated knowledge could be paired with tailor-made “AI” solutions that were created by – and in service of – these communities.

1: slowly running out of resources such as water and electricity

Even here in 2074, this is a particularly old idea – going back over a hundred years: For science and technology to promote humanisation, they require people to be active participants in them – and not just mere _objects of scientific interest_. Thanks to the efforts of Jo – and their countless community collaborators – we are now more on that path than ever before. And with that, let us congratulate and celebrate Jo once again! Have a lovely birthday celebration! And now it’s time to sing and cut the cake!

--- Added 2024-05-15 23:00: In the Q&A at the event, the question of potential readings (beyond the sources linked above) came up. My recommended readings included:

- Dan McQuillan's Resisting AI: An anti-fascist approach to artificial intelligence[1], which provides a thorough review of the issues with "AI", and how mutual aid could help

1: Resisting AI: An anti-fascist approach to artificial intelligence

- Brian Merchant's Blood in the Machine[1], the newsletter and the book[2], which puts the Luddites' efforts for labour rights in the context of today's struggles against big tech

- Cory Doctorow's The Internet Con: How to Seize the Means of Computation[3] & Chokepoint Capitalism[4] to understand how big tech operates these days, and how they use AI/data

- We Need To Rewild The Internet[5], a fabulous essay by Maria Farrell and Robin Berjon

1: Blood in the Machine

2: the book

3: The Internet Con: How to Seize the Means of Computation

4: Chokepoint Capitalism

5: We Need To Rewild The Internet

Proxied content from gemini://tilde.club/~gedankenstuecke/blog/2024-05-15-imagined-futures-2074.gmi (external content)
