Predictive Media: Our changing relationship with technology
The following passage is a thought moment, and by no means an exhaustive attempt to place the idea within existing theories/fields. It would be interesting, and the published version of this will probably do so, to align it with media and cultural studies, queer theory, or perhaps discrimination studies. That said, here is a thought process…
I have been undertaking substantial research into artificial intelligence (AI) and automation since arriving here at Hans Bredow. I am beginning to think that automation/AI may not be the best or most appropriate way to frame our contemporary media lives. Those concepts are certainly part of our media lives, but there may be a better way to describe the entire environment, or ecosystem, as I have previously written.
What I do understand at this point is that media curation/recommendation is responding to us as humans, but we are also responding to how that technology is responding and adapting to us. This is a human/technology relationship, and one that is constantly being refined, modified, adapted and changed – not by either agent alone, but collectively as any two agents would negotiate a relationship.
This type of framing, then, suggests we should no longer be thinking about algorithmic media, or automated media alone. Perhaps what we should be thinking about is the relationship of processed and calculated digital media with its consumers – for this I will use the term predictive media.
I will attempt to explain how I have arrived at predictive media.
Artificial Intelligence (AI) Media
Media certainly isn’t in an AI moment – I’m not entirely sure I align with AI to be honest (or at least I am still working through the science/concept and its implications). Beyond its actual meaning, it feels as though it has become the new business catchphrase – “and put some AI in there with our big data and machine learning things”. If artificial intelligence is based on machine learning, the machine requires three phases of data processing: interpreting external data, learning from those data, and achieving a specific goal from those learnings. This implies that the machine has the capacity for cognitive processing, much like a human brain.
AI is completely reliant on data processing: it must produce a baseline, incorporate constant feedback data after decisions have been made, and recalculate to continually improve its understanding of the data. Often there is a human touch at many of these points, placing a cloud of doubt over the entire machine learning capacity. While this iterative process is very impressive when done well, there will always be data points that are indistinguishable to a computer.
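To make that loop concrete, here is a minimal Python sketch of those phases – interpret external data, learn from it, decide, then fold the feedback back in and recalculate. Everything in it (the data, the averaging ‘model’, the threshold) is an invented toy, not any real system’s implementation.

```python
# A toy sketch of the loop described above, with deliberately invented
# data: build a baseline from what has been seen, decide, then fold the
# feedback back in and recalculate. Not any real system's implementation.

def baseline(history):
    """Phases 1-2: interpret the external data and 'learn' (here: a mean)."""
    return sum(history) / len(history)

def decide(prediction, threshold=0.5):
    """Phase 3: act toward a goal, e.g. recommend an item or not."""
    return prediction >= threshold

history = [1, 0, 1, 1]                # past engagement signals (toy data)

for feedback in [0, 0, 1]:            # new signals arrive after each decision
    prediction = baseline(history)    # recalculate from everything seen so far
    print(f"predict={prediction:.2f} recommend={decide(prediction)}")
    history.append(feedback)          # incorporate the feedback and loop again
```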
We should instead be thinking about these processes as a series of decision points, to which we humans also contribute input data.
Say, for example, you are deciding whether to board a bus to travel into town. AI would process data like distance, timetable and the number of people on the bus, and recommend which bus you should catch. What it can’t tell you is whether the bus driver is drunk and driving erratically, whether the bus carries advertising you fundamentally disagree with, or whether you have 10 students travelling with you. In that scenario, it is the combination of AI processes and your human decision making that produces the best interpretation of which bus to catch into town.
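A toy sketch of that division of labour might look like the following, where the machine ranks buses on the data it can quantify and the human holds the knowledge it cannot see. The fields, weights and veto are all hypothetical:

```python
# Hypothetical bus data: the machine can see waiting times and crowding,
# but not the driver's state, the advertising, or your ten students.

buses = [
    {"id": "42A", "minutes_wait": 3, "crowding": 0.2},
    {"id": "42B", "minutes_wait": 8, "crowding": 0.3},
]

def machine_score(bus):
    """Lower is better: a naive blend of waiting time and crowding."""
    return bus["minutes_wait"] + 10 * bus["crowding"]

# Knowledge only the human holds: on machine_score alone, 42A would win.
human_veto = {"42A"}  # e.g. the driver appears to be driving erratically

candidates = [b for b in buses if b["id"] not in human_veto]
best = min(candidates, key=machine_score)
print(f"catch bus {best['id']}")  # the final call is machine and human together
```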
As I see it, we are not in a pure AI media moment – and that moment is a long way away, if it manifests at all.
Algorithmic Media
We have also seen the rise of algorithmic media, which often presents itself as recommender systems or the like: tools that suggest you should consume a particular type of media based on your past viewing habits or your demographic data.
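For readers who like to see the mechanics, here is a bare-bones sketch of that kind of suggestion logic – recommend what was consumed by people whose histories overlap with yours. The data and scoring are invented for illustration; real recommender systems are vastly more elaborate:

```python
# Invented viewing histories: recommend what was watched by users whose
# history overlaps with yours. Real recommenders are far more elaborate.

histories = {
    "you":   {"Dark", "Mindhunter"},
    "user2": {"Dark", "Mindhunter", "Ozark"},
    "user3": {"Bake Off"},
}

def recommend(user, histories):
    seen = histories[user]
    scores = {}
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(seen & items)   # how similar their habits are to yours
        if overlap == 0:
            continue                  # ignore users with no shared history
        for item in items - seen:     # things you have not yet watched
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", histories))  # prints ['Ozark'], driven by past habits
```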
Algorithmic media can be very useful, given our media-saturated lives of Netflix, Spotify, blogs, journalism, Medium, TikTok, and whatever else makes up our daily consumption habits. We need some help to sort, organise and curate our media lives to make the process manageable (and efficient).
Think of a Google search. It is often the case that we search for specific information based on our needs. Google knows the sorts of information we are interested in and will attempt to return results that are relevant and useful. Of course, a number of levers operate behind the mechanics of those results – commercial priorities, legislation, trends, and so on. Further, we have also seen how algorithms can be incredibly racist, selective, indeed chauvinistic.
Developers have started addressing some of these problems, given the algorithms are developed by humans, but there is still a long way to go with this work.
So in that sense, I’m not sure algorithmic media makes a whole lot of sense as a framing, due to the problems associated with it. It could be that by the time the algorithmic issues are entirely addressed, we will have moved on to our next media distribution and consumption phenomenon.
Predictive Media
So if this is our background (and I understand I have raced through media and technology history, and critical studies here – I will flesh this out in an upcoming article), my starting point is that humans have altered their relationship with technology.
Heather Ford and I are about to (hopefully!!) have an article published that explores the human/technology relationship in detail through newsbots, but I think it is broader than bot conversations alone.
Indeed, content producers adapt and shift their relationship with algorithms daily to ensure their content remains visible. But I think consumers are now beginning to shift their relationship with how technology displays information too. If not shifting it, we are certainly beginning to recognise the digital intermediary artefacts that impact, suspend, redirect, or omit our access to information.
Last week, Jessa Lingel published this cracking article on Culture Digitally, The gentrification of the internet. She likened our current internet to urbanisation, and made the argument that the process of gentrification is clearly in operation:
an economic and social process whereby private capital (real estate firms, developers) and individual homeowners and renters reinvest in fiscally neglected neighborhoods through housing rehabilitation, loft conversions, and the construction of new housing stock. Unlike urban renewal, gentrification is a gradual process, occurring one building or block at a time, slowly reconfiguring the neighborhood landscape of consumption and residence by displacing poor and working-class residents unable to afford to live in ‘revitalized’ neighborhoods with rising rents, property taxes, and new businesses catering to an upscale clientele
Perez, 2004, p.139
In her closing paragraphs, Jessa made a recommendation that is so obvious and excellent it makes you wonder why we haven’t done this before:
Be your own algorithm. Rather than passively accepting the networks and content that platforms feed us, we need to take more ownership over what our networks look like so that we can diversify that content that comes our way.
Lingel, 2019, n.p.
It made me think about food and supermarkets – certainly in Sydney we have two (maybe three) major supermarkets. But there is a growing trend to avoid them: to shop local, join food co-ops, and change our food consumption habits entirely. If those major chains want to push inferior products and punish their suppliers to increase the bottom line, as consumers we (in the privileged Australian context) have the option to purchase our food elsewhere.
Why wouldn’t we do the same with our digital internet services? Is this a solution to biased, mismatched, commercially oriented media algorithms and so-called AI?
Is Predictive Media the Solution?
I think we can apply the same approach towards predictive media.
We cannot consume the amount of media that is produced, which suggests we may be missing crucial information. We cannot trust automated media because it has proven to be incredibly biased. But perhaps, by changing our relationship with technology and understanding how it works a little better, we might find a satisfactory middle ground.
It is not only greater transparency that is required to address the problems of automated and algorithmic media, but also proactive engagement with those machines: training the programs to understand us better. Changing that relationship is difficult, however, if you don’t know it is an option. So perhaps the real call here is to establish alternative and transparent digital communication protocols that are easily accessible and decipherable for users. Through education, change is possible, and this may be a defence against the current trajectory of digital media.
The combination of increased understanding/transparency and more active engagement with training ‘our’ algorithms could be the basis for predictive media – media that helps us beyond a commercial platform’s profit lines and exposes us to more important and critical public affairs.
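To close with a speculative sketch: ‘training our own algorithm’ might look something like this, where the weights steering a feed are held, inspected and edited by the user rather than inferred silently by the platform. All names, items and numbers here are hypothetical:

```python
# Hypothetical sketch of user-held ranking weights: the 'training' of the
# feed is visible and editable by the user rather than inferred silently.

items = [
    {"title": "Celebrity gossip",   "engagement": 0.9, "public_affairs": 0.1},
    {"title": "Local council vote", "engagement": 0.3, "public_affairs": 0.9},
]

# The user's own, inspectable preferences: dial public affairs up or down.
my_weights = {"engagement": 0.2, "public_affairs": 0.8}

def rank(items, weights):
    """Order items by the user's declared weights, highest score first."""
    def score(item):
        return sum(weights[key] * item[key] for key in weights)
    return sorted(items, key=score, reverse=True)

for item in rank(items, my_weights):
    print(item["title"])  # public affairs now outranks pure engagement bait
```

None of this is a finished design, of course, but it points in the direction I mean: prediction in the service of a user who can see it and steer it.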