Are you feeding your algorithm or is it feeding you?
Analysing our public thought experiment, with Kyle Chayka
Could our internet be better? Why exactly should we be scared of AI? Are we shaping algorithms or are they shaping us? These kinds of existential dilemmas are what fuel us here at MØRNING. We believe the web can be a powerful tool for good, and that it’s our duty as brands and consumers to help right today’s wrongs and push culture forward.
The fact is, the world wide web is not delivering on its decentralised promises. The Metaverse hasn’t come to fruition. NFTs have become a dirty term. Screen fatigue is on the rise and social media is causing body, mind and soul issues. The internet - once a beautiful space for play and discovery - is now flattening culture, one core, advert, echo chamber and algorithm at a time. We’re all too familiar with the never-ending treadmill of the attention economy, forever getting in the way of our ability to think deeply, or act on what’s important to us (goodbye, latest abandoned creative project…). But as we reckon with the ramifications of this late-capitalist Web 2.0 and the looming threats of AI, the time to think and act has never been more urgent. It’s time to re-wild this digital hellscape we call home. Break a new day for the internet.
But staging a socio-digital revolution isn’t possible without some united energy. Do we even agree on the rights and wrongs of the digital universe living in our pockets? And if we do, then what comes next? MØRNING being MØRNING, we decided to stage a public thought experiment to find out. Step 1: erect three giant billboards in central London to see what the public really thinks about the internet. Step 2: speak to Kyle Chayka, author of Filterworld and bona fide expert on the internet’s impact on society, to get his hot take on our findings, and what comes next. Lucky you, here’s the full download…
QUESTION 1: ARE YOU FEEDING YOUR ALGORITHM OR IS IT FEEDING YOU?
PUBLIC RESULTS: Was this a trick question? Maybe. Around half of respondents said a resounding ‘both’: “the internet is now our nervous system - symbiotically connected”, “I feel like it's a mutually beneficial arrangement”, and, with added doom, “we are part of the same snake but ultimately I’m being eaten”. Meanwhile, around 30% gave a definitive yes to the latter: “I consume therefore I am”, as one respondent put it.
But there’s hope in the remaining 20% of respondents, who felt a greater sense of control over the content they consume. “I have free choice and the tools to do my own research, block accounts, preserve my privacy etc. It's just that most people can't be bothered to do that”, said one. “The ‘not interested’ feature keeps me on a slim diet”, said another.
KYLE CHAYKA SAYS: I’m not surprised that people feel they are being consumed by algorithmic feeds. Algorithmic recommendations work by digesting the data that all of us users put out and then serving it back to us in aggregate. We feel we can’t control the algorithm because we have no agency in its face, no way to talk back to it or modulate how it works. Recommendation algorithms work the same way for everyone, one equation fits all. This could change if platforms gave us the ability to modulate recommendation variables or dynamically adjust how feeds work; currently there are largely just two options: totally chronological, or totally algorithmic.
MØRNING SAYS: If we’re all feeling a lack of control over our algorithms, why are hardly any of us using the tools at our disposal to regain that control? If we’re all too addicted to our algorithms to want to change them, how can we ever reach a better symbiosis? As Kyle says, the answer lies in the agency platforms give us over our feeds. By being active social media users (choosing what we want to view, when we want to view it) rather than passive ones (consuming whatever we’re fed, whenever we’re fed it), we can keep a grip on our power over social media companies, and our sanity too. In a world that feels so out of our control, holding onto the agency we have over society, culture, the economy and our own mental health couldn’t be more critical.
QUESTION 2: WHAT DOES A ‘GOOD’ INTERNET LOOK LIKE?
PUBLIC RESULTS: “Freedom” was the most frequently used word in these answers. “Space for freedom and expression that puts the individual in control rather than controlling the individual”, said one respondent; “a space for people to learn, to utilise with free will”, said another. “Limits” were often seen as the paradoxical but inevitable means to this freedom: “An internet with time limits, information limits and room for people to get their imaginations back”, as one respondent said.
But how? Power and governance were referenced by the majority of respondents as the key to a free and fair internet. “Governed by the people, and not owned by big tech”, “democratic”, “decentralised”, “free of Web 2.0 monopolies” and “not for profit” were all cited in people’s visions of a “good internet”. But how can the public at large achieve this ‘good’ internet we all dream of? Are we too paralysed by our feeds to exercise the agency we have over our internet, or is the public’s sense of powerlessness real?
KYLE CHAYKA SAYS: I don’t think Internet governance will change much; the incumbents of Google, Meta, and Amazon are likely to stay more or less where they are. Government regulation could break down this monopolization, but I think the more likely future is that users gravitate away from experiences that get increasingly worse. (Which we’ve seen with Facebook, Instagram, and Google Search.) Users ultimately have freedom to go wherever they want — we don’t need to log on to Instagram or X. Right now, there are more democratic and decentralized options that people could use, including Urbit, Farcaster, Bluesky, and others. The problem is that they aren’t much fun to use yet.
More active users need to embrace their freedom and build new culture elsewhere online. I do think the noise and bustle of the currently popular platforms is a distraction from moving elsewhere. I think by “freedom” people might mean the freedom to connect directly with another person online, which has gotten harder and harder as things are more algorithmic.
MØRNING SAYS: We know what we want, we’re just not there yet: we’re already seeing an exciting new generation of Web 3.0 projects emerge, like Metalabel or Softer, which are helping foster a more equitable internet. But the comfort we gain from our existing social networks is preventing many of us from making any active leaps towards new digital spaces. That’s no bad thing: many of us need less screen time full stop. As Kyle says, maybe the most powerful catalyst for a better internet is our sheer dissatisfaction with our current platforms. Maybe a bad internet is what we all need to force us to break bad habits, and reassess how we want to use the internet at all. Maybe we need to fall out of love with the internet to start anew.
QUESTION 3: ARE YOU SCARED OF AI OR THE HUMANS MAKING IT?
PUBLIC RESULTS: Here we have an overwhelmingly unified response. Except for the roughly 15% of respondents who said “both”, almost everyone agreed that they were most scared of human involvement in AI. “The threat of AI was never man vs machine but man vs man. AI is an ugly mirror to society”, said one respondent. “I’m only ever scared of humans. Not just the ones that control the machines but the human history that is input into machines to define a future. Our past should be one of learning to be better, yet we seem doomed to repeat old mistakes”, said another.
Others cited the specific contexts we’re living in, like the reality of Naomi Klein’s ‘Doppelganger Effect’, or those leading the development of AI: “techno-optimists like Musk and Marc Andreessen harbour and spread dangerous libertarian worldviews informed by Dark Enlightenment thinkers like Nick Land. The disregard for human life at the heart of techno optimism and right Accelerationist thinking is a far more dangerous threat”, one respondent (quite chillingly) wrote.
KYLE CHAYKA SAYS: Humans are the ones making and releasing the AI tools without the necessary guardrails or sustainable business models. I don’t think the tools themselves are the problem; it’s how the entirety of human culture has been digested and regurgitated by them. People can resist AI by refusing to use it and pushing for proper licensing fees for training models. The public can shape public opinion and pressure politicians, ideally regulating AI before it gets too all-encompassing — though they failed to do that with social media. Particularly with AI, there is a philosophy of accelerationism which is inhuman and unsustainable; I think it lends to this sense that the future is impossible. There seems to be little human benefit to this technology, so why pursue it?
MØRNING SAYS: We’ve already seen tech giants derail EU plans to regulate advanced AI systems, leaving hugely powerful technology vulnerable to getting into the wrong hands (even worse than the tech giants themselves). But does the public have any power over the future of these technologies? In short, yes.
As Kyle says, the future in the hands of AI feels impossible, but it doesn’t have to be that way. There are things we can do to stop the damaging pursuit of tech: exercising our agency, giving ourselves time to think and act, engaging with movements that will help regulate the development of damaging technology. The bad news? Technology, yes, is terrifying. But the good news? We’re not totally powerless.
And thus concludes our thought experiment… for now. What do you think, should we be scared of the internet? Are you ready to re-think how you use technology? As always, thanks for reading, and don’t forget to catch us in the comments or over at @morning.fyi.
Until next time!