
Fishbowl 3.1: When “open” isn’t: open-washing and the politics of openness in the age of Generative AI. Anna-Maria Sichani, Marina Markellou, Douglas McCarthy

Open Source has demonstrated that massive benefits accrue to everyone after removing the barriers to learning, using, sharing and improving software systems. These benefits are the result of using licenses that adhere to the Open Source Definition. For AI, society needs at least the same essential freedoms of Open Source to enable AI developers, deployers and end users to enjoy those same benefits: autonomy, transparency, frictionless reuse and collaborative improvement.

Research & Education

Claudia Montanaro – What ‘Meaning’ Means in LLMs’ Research: An Interdisciplinary Conceptual Map

  • LLMs output meaningful ideas
  • Can LLMs make sense? Can they represent meaning? Noam Chomsky, in an interview, answers: “It’s like asking whether submarines swim”
  • Herman Cappelen – it can make language make sense
  • Difficulty in borrowing terms: key terms are borrowed from human cognition and are ill-defined
  • Method: a dataset of academic articles, coded using ATLAS.ti
  • Identified six clusters
  • Three main themes: meaning as measurable, meaning as an emergent property, meaning as something to be understood
  • Meaning and understanding in LLMs remain open questions; the call is for scholars from a range of fields to understand them together

Q: Can you explain the terms a little further? Unpack the network map: how does it relate to the table?


Maede Mirsonbol – Learning with GenAI Images: Supporting Higher Education Students’ Reflection on Inclusive Education

  • AI literacy in higher education: we need different frameworks to introduce to students
  • Relational Pedagogy – concept and context: Teacher <-> Learner <-> Content
  • Where to place AI?
  • Leaning on the UNESCO guidelines on AI and education
  • Student <-> Teacher <-> Text <-> AI student learning model
  • Two case studies: Lancaster University, England, and the University of Tartu, Estonia
  • Groups learn using a semiosis-based method to learn with images
  • Five steps of inclusion-as-diversity education: exploration (what is AI education?), co-creation (groups prompting),
  • Participants then create images of inclusivity (race, gender, access/communication)

Q: What is the take-up of this research in higher ed? Did participants understand diversity within the four categories: demographic, communication, species, nature/material/place?

Berk Alkoç – Generative AI in Design Bootcamps: A Critical Inquiry into Pedagogy, Dependency, and Creative Practice

  • Figma and Make Design: prompt with an idea and it will make an app, borrowing the Apple version/platform/tool
  • Bootcamps and education – the CareerFoundry UX Design bootcamp
  • Discourse analysis of the marketing and advertising of a number (10?) of UX design bootcamps – a determinism built in of “if you don’t know AI, you won’t get a job”
  • “How to create a persona in 5 minutes” – this demonstrates a ‘frictionless’ process, where the process is more important than the outcome
  • Are they training designers or AI operators?
  • Reference to Hooked
  • Design fixation – a pre-existing solution restricts creative thinking
  • Platform deactivation is very similar to the API reliability of platforms
  • Should we go backwards and ignore the algorithm? (Can we, though? We’ve ‘drunk the Kool-Aid’)

Q: what does the industry want/what does the environment look like? Is this quick turnaround what is needed as we build design expertise? what does a co-designer model look like?

Panel 5.8: Creative Work

Nirvi Maru, Vivian H. H. Chen – A Human-AI collaboration approach to creative workflow

  • A GenAI inflection point – democratisation vs. diminishment (erosion of human agency, homogenization), to be understood not as a binary
  • Literature approaches this as a discrete problem to be solved
  • scoping review – 216 papers
  • Conceptual fragmentation – collaboration, teaming, autonomy, etc.
  • Trust-calibration
  • Erosion of human agency – skill degradation
  • task collaboration misalignment – poor fit between AI capabilities and workflow
  • Reframing Human-AI Creative Collaboration (HAIC): interdependence, hidden costs, navigation tensions
  • Agency Automation Tension – tension needs to be managed not solved
  • Principle 1 – Role Design as Creative Act: we negotiate the ‘role’ within HAIC
  • Principle 2 & 3: Transparency and Human Experience. As a creative uses more AI, they trade off the critical aspects of design. Human experience should be centred.

Yunus Emre Öztaş – How Creative Workers Make Do with GenAI in Visual Media Production

  • Critical creative labour studies, mostly in cultural and creative industries – high value in autonomy within an environment of augmentation-replacement binary
  • A spectrum of GenAI use model (in iterative development): Human Creative Agency Dominant (task related use) <-> Machine agency dominated (task-agnostic use)
  • Modes of use are embedded in agency

Tolulope Oke, Robert Prey, Femke de Rijk – Beyond the Global North: Generative AI and the Future of Musicians’ Work in Nigeria

  • Based in the Nigerian music market, which is growing rapidly (reportedly three times faster than the rest of the world’s music industry)
  • There is a talented yet unstructured market (copyright and infrastructure are lacking)
  • Futures of the industry, which is usually dominated by the Global North, where GenAI becomes a crucial infrastructure
  • Africanfuturism – visions of the future, not concerned with what could have been but what is possible/what is the future
  • Decolonial AI: extends beyond data colonialism and into all material aspects of data generation (check this)
  • Future reference: Phillips Olajide, the first African-trained AI music generator
  • Korin AI – social-technical artefact that reflects and shapes how Nigerians imagine and navigate AI Futures
  • Jamai Fabuyi and legal contracts in Africa – interview here
  • Conclusions: an opportunity to restructure global industries. It’s not about whether it’s changing; it’s about how it’s changing.

Closing Panel with invited speakers – Pei Sze Chow, Maximilian Schich, Naureen Mahmood

Pei Sze Chow

  • Specificity & pluralism across the variety of talks across the past few days. Contractual obligations and the sorts of creative work and their ecologies.
  • Skills. De-skilling. Up-skilling. the ways in which creatives are adopting new skills, retool a skill-set, etc.
  • (My thought – can we stop with ‘democratisation’ as waves of media technologies emerge? We saw this in Web 2.0, social media, and in some respects through platformisation, and we know this is never the case)

Naureen Mahmood

  • Meshcapade (swing this to Rangi and see what he thinks)
  • Job displacement – every time there is a technology revolution, it’s never a loss but a change. “How we let that happen is up to us”
  • Fear around AI and how others are using it (i.e. Governments especially). This community has a lot to offer in terms of understanding what it actually is and the kinds of applications that are possible.

Maximilian Schich


The following passage is a thought moment, and by no means exhaustive of placing the idea within existing theories/fields. It would be interesting, and probably the published version of this will do so, to align it with media and cultural studies, queer theory or perhaps discrimination studies. That said, here is a thought process…

I have been undertaking substantial research into artificial intelligence (AI) and automation since arriving here at Hans Bredow. I am beginning to think that perhaps automation/AI isn’t the best or most appropriate way to frame our contemporary media lives. Those concepts certainly are a part of our media lives, but there may be a better way to describe the entire environment or ecosystem as I have previously written.

What I do understand at this point is that media curation/recommendation is responding to us as humans, but we are also responding to how that technology is responding and adapting to us. This is a human/technology relationship, and one that is constantly being refined, modified, adapted and changed – not by either agent alone, but collectively as any two agents would negotiate a relationship.

This type of framing, then, suggests we should no longer be thinking about algorithmic media, or automated media alone. Perhaps what we should be thinking about is the relationship of processed and calculated digital media with its consumers – for this I will use the term predictive media.

I will attempt to explain how I have arrived at predictive media.

Artificial Intelligence (AI) Media

Media certainly isn’t in an AI moment – I’m not entirely sure I align with AI, to be honest (or at least I am still working through the science/concept and its implications). Beyond its actual meaning, it feels as though it is the new business catchphrase – “and put some AI in there with our big data and machine learning things”. If artificial intelligence is based on machine learning, the machine requires three phases of data processing: interpreting external data, learning from those data, and achieving a specific goal from those learnings. This implies that the machine has the capacity for cognitive processing, much like a human brain.

AI is completely reliant on data processing: producing a baseline, incorporating constant feedback data after decisions have been made, and recalculating information to continually improve its understanding of the data. Often there is a human touch at many of these points, placing a cloud of doubt over the entire machine learning capacity. While this iterative process is very impressive when done well, there will always be data points that are indistinguishable to a computer.
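The loop just described – a baseline, constant feedback, and recalculation – can be sketched as a toy online learner. This is purely illustrative (the class, numbers, and update rule are my own invention, not any real system), but it shows the three phases in miniature:

```python
# Toy online learner: start from a baseline, fold in feedback, recalculate.
class OnlineMeanEstimator:
    """Maintains a running estimate that is revised as feedback arrives."""

    def __init__(self, baseline: float):
        self.estimate = baseline  # baseline built from historical data
        self.n = 1                # number of data points seen so far

    def feed(self, observation: float) -> float:
        # Recalculation phase: incremental mean update per new data point.
        self.n += 1
        self.estimate += (observation - self.estimate) / self.n
        return self.estimate

model = OnlineMeanEstimator(baseline=10.0)
for obs in [12.0, 11.0, 13.0]:   # the feedback phase, one decision at a time
    model.feed(obs)
print(round(model.estimate, 2))  # the estimate has drifted toward the feedback
```

The point of the sketch is the shape of the loop, not the arithmetic: every new observation nudges the machine’s picture of the world, which is exactly where human curation of the incoming data can quietly steer the result.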

We should instead be thinking about these processes as a series of decision points, of which we also have input data.

Say, for example, you are deciding whether to board a bus to travel into town. AI would process data like distance, the timetable and the number of people on the bus, and recommend which bus you should catch. What it can’t tell is whether the bus driver is drunk and driving erratically, whether the bus carries advertising that you fundamentally disagree with, or whether you have 10 students travelling with you. In that scenario, it is the combination of AI processes and your human decision-making that proves to be the best interpretation of which bus to catch into town.
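The bus scenario can be sketched as a toy script: an AI-style score ranks the buses from measurable data, while human-only knowledge (the erratic driver, the group of ten students) filters the options before the ranking is trusted. Every name, number, and weight below is invented for illustration:

```python
# Hypothetical data: three candidate buses with machine-measurable attributes.
buses = [
    {"id": "B1", "wait_min": 3, "crowding": 0.9},
    {"id": "B2", "wait_min": 8, "crowding": 0.2},
    {"id": "B3", "wait_min": 5, "crowding": 0.4},
]

def ai_score(bus):
    # The "AI" part: rank by the data a system can see (lower is better).
    return bus["wait_min"] + 10 * bus["crowding"]

# The human part: facts no data feed captures.
human_vetoes = {"B1"}          # e.g. the driver looks unsafe
travelling_with_group = True   # 10 students need room on board

def acceptable(bus):
    if bus["id"] in human_vetoes:
        return False
    if travelling_with_group and bus["crowding"] > 0.5:
        return False
    return True

# Human judgement filters; the AI ranking decides among what remains.
best = min((b for b in buses if acceptable(b)), key=ai_score)
print(best["id"])
```

The design choice mirrors the argument in the paragraph above: neither the score nor the veto alone picks the right bus; the decision is the product of both agents.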

As I see it, we are not in a pure Algorithmic Media moment – and this will be a long way away, if it manifests at all.

Algorithmic Media

We have also seen the rise of algorithmic media, which often presents itself as recommender systems or the like, which essentially suggests you should consume a particular type of media based on your past viewing habits or because of your demographic data.

Algorithmic media can be very useful, given our media-saturated lives of Netflix, Spotify, blogs, journalism, Medium, TikTok, and whatever else makes up our daily consumption habits. We need some help to sort, organise and curate our media lives to make the process possible (efficient).
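As a toy illustration of the recommender logic described above – suggesting media based on past viewing habits – here is a minimal content-based sketch. The catalogue, tags, and scoring rule are all invented; real platforms layer far more (and far murkier) signals on top:

```python
# Invented catalogue: each title carries a set of descriptive tags.
catalogue = {
    "DocuSeries": {"documentary", "history"},
    "SpaceOpera": {"scifi", "drama"},
    "RetroSynth": {"music", "scifi"},
}

# Tags derived from the user's past viewing habits.
watched_tags = {"scifi", "drama"}

def recommend(history_tags, items):
    # Score each title by how many tags it shares with the viewing history,
    # then return the best match.
    scored = {name: len(tags & history_tags) for name, tags in items.items()}
    return max(scored, key=scored.get)

print(recommend(watched_tags, catalogue))
```

Even this trivial version exposes the levers discussed below: whoever writes the tags and the scoring function decides what "relevant" means.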

Think of a Google search. We often search for specific information based on our needs. Google knows the sorts of information we are interested in and will attempt to return information that is relevant and useful. Of course, this result has a number of levers operating behind the mechanics, for example commercial priorities, legislation and trends. Further, we have also seen how algorithms can be incredibly racist, selective, indeed chauvinistic.

In some areas, developers have started addressing these problems, given the algorithms are developed by humans. But there is still a long way to go with this work.

So in that sense, I’m not sure algorithmic media makes a whole lot of sense, given the problems associated with it. It could be that by the time the algorithmic issues are entirely addressed, we will have moved on to our next media distribution and consumption phenomenon.

Predictive Media

So if this is our background (and I understand I have raced through media and technology history, and critical studies here – I will flesh this out in an upcoming article), humans have altered their relationship with technology.

Heather Ford and I are about to (hopefully!!) have an article published that explores the human/technology relationship in detail through newsbots, but I think it is broader than bot conversations alone.

Indeed, content producers adapt and shift their relationship with algorithms daily to ensure their content remains visible. But I think consumers are now beginning to shift their relationship with how technology displays information. If not shift, we are definitely recognising these digital intermediary artefacts that impact, suspend, redirect, or omit our access to information.

Last week, Jessa Lingel published this cracking article on Culture Digitally, The gentrification of the internet. She likened our current internet to urbanisation, and made the argument that the process of gentrification is clearly in operation:

an economic and social process whereby private capital (real estate firms, developers) and individual homeowners and renters reinvest in fiscally neglected neighborhoods through housing rehabilitation, loft conversions, and the construction of new housing stock. Unlike urban renewal, gentrification is a gradual process, occurring one building or block at a time, slowly reconfiguring the neighborhood landscape of consumption and residence by displacing poor and working-class residents unable to afford to live in ‘revitalized’ neighborhoods with rising rents, property taxes, and new businesses catering to an upscale clientele

Perez, 2004, p.139

In her closing paragraphs, Jessa made a recommendation that is so obvious and excellent, why haven’t we done this before?

Be your own algorithm. Rather than passively accepting the networks and content that platforms feed us, we need to take more ownership over what our networks look like so that we can diversify that content that comes our way. 

Lingel, 2019, n.p.

It made me think about food and supermarkets – certainly in Sydney we have two (maybe three) major supermarkets. But there is a growing trend to avoid them and shop local, shop in food co-ops, join food co-ops, and change our food consumption habits entirely. If those major chains want to push inferior products and punish their suppliers to increase the bottom line, as consumers we (in the privileged Australian context) have the option to purchase our food elsewhere.

Why wouldn’t we do the same with our digital internet services? Is this a solution to biased, mismatched, commercially oriented media algorithms and so-called AI?

Is Predictive Media the Solution?

I think we can apply the same approach towards predictive media.

We cannot consume the amount of media that is produced, which suggests we may be missing crucial information. We cannot trust automated media because it has proven to be incredibly biased. But perhaps in changing our relationship with technology and understanding how it works a little better, we might find a satisfactory medium.

It is not only greater transparency that is required to address the problems of automated and algorithmic media; it is also proactive engagement with those machines, training the programs to understand us better. But changing that relationship is difficult if you don’t know it is an option. So perhaps the real call here is to establish alternative and transparent digital communication protocols that are easily accessible and decipherable for users. Through education, change is possible, and this may be a defence against the current trajectory of digital media.

The combination of both increased understanding/transparency and more active engagement with training ‘our’ algorithms could be the basis for predictive media, where predictive media helps us beyond a commercial platform’s profit lines, and exposes us to more important and critical public affairs.

Original Image by Hello I’m Nik.