Recently, I was invited to deliver a keynote for a joint session with the News and Media Research Center and the Centre for Deliberative Democracy, exploring the ideas and concepts of digital intermediation.
The blurb:
How might generative artificial intelligence (AI) and automation be harnessed to produce social good? In an increasingly automated digital media world, user agency is challenged through the loss of interaction functionality on the platforms, technologies and interfaces of everyday digital media use. Instead, algorithmically designed decision-making processes assist users in making sense of these environments and in seeking out content that is relevant, of interest and entertaining. However, if the last five years are anything to go by, these sorts of recommendations, particularly across social media, have produced anything but social cohesion and unity amongst users, and have instead spread misinformation, vitriol and hurtful media. Would our society be different had we designed systems that, while still entertaining, place the wellbeing of humans at the forefront, ahead of content that is, for the most part, merely popular?
This presentation uses the lens of digital intermediation to explore how civic algorithms might be designed and implemented in digital spaces to improve social cohesion. By unpacking the technologies, institutions and automation surrounding the cultural production practices of digital intermediation, it becomes clearer how these levers can be adjusted to nudge and encourage platforms, users and content creators to engage in improved civic processes. As a digital intermediation challenge, creating and working with civic algorithms presents a potentially useful approach to improving the cornerstone of our democracies by ensuring citizens have access to accurate information, are engaging in the discussions that are important and relevant to them, and are operating within digital environments that value social good alongside commercial gains.
And here’s the recording of the session, slides included:
I have just completed a whirlwind European tour, giving lectures at some of the best media institutes this side of the planet. Thanks to all the folk who made this possible and took the time to promote my work. I'd like to reflect on that work, and the discussions I've had with many great people, in this post as I prepare this thinking for my next book – namely, who should be facilitating and innovating transparent automated media systems? I argue it should be Public Service Media (PSM).
The thrust of this latest research was to problematise the concept of the 'black box', which so many scholars have framed as something we have no control over and are almost helpless against.
I think some of the most important work in this space was undertaken by Frank Pasquale and his Black Box Society book, which highlights the role algorithms play in society from finance, legal and economic perspectives. His argument that algorithms control not only finance but our digital lives is a call for increased transparency and accountability from those who facilitate these technologies.
I also appreciate the work of many scholars who contribute to and develop this arena of scholarship. Safiya Noble has done amazing work here, and her book Algorithms of Oppression is a landmark piece of scholarship that brings to bear the real-world implications of how algorithms are not only biased, but racist and oppressive.
Noble’s book leads well into Taina Bucher’s equally groundbreaking book If…Then, which further develops the politics of algorithms and automated systems and offers media academics a framework for thinking through some of the implications of these socio-technical constructs.
I also find Christian Sandvig’s work incredibly inspiring here. While Sandvig’s work on algorithms and discrimination is super interesting, this particular piece on Auditing Algorithms sparked a particular interest in me in how to research algorithms.
But two things stand out to me across most of this literature, and this is perhaps where my ‘application brain’ is most curious. Firstly, most scholars tend to ignore user agency in these relationships, as if we are helplessly at the mercy of mathematical equations that are determining our society. Most (some) people are aware of the algorithm, and how to work alongside it these days, if our interfaces with platforms like Netflix, Spotify, YouTube and the like are anything to go by. Secondly, no one talks about who should be responsible for facilitating a better system. Should we simply make more policy that tries to tame today’s overlord digital tech companies, or should we be thinking five to ten years ahead about how that technology can be used for society’s benefit (and not in a Chinese Social Credit System sense, either)?
So that is what I have been talking about in the last few weeks, and I think it is really important to include in the automated media conversation. I have been developing a digital intermediation framework that incorporates a number of these actors, and trying to understand how the intermediation process occurs. Check this out:
This is a first pass at what will become an important tool for the facilitating organisation that should be leading and innovating in this space: public service media.
Work has already commenced in this space, and we can draw on the thoughts of Bodó et al. (2019):
Public service media have charters that oblige them to educate, inform, and sustain social cohesion, and an ongoing challenge for public service media is interpreting their mission in the light of the contemporary societal and technological context. The performance metrics by which these organizations measure the success of their algorithmic recommendations will reflect these particular goals, namely profitability, loyalty, trust, or social cohesion.
Bodó, B., Helberger, N., Eskens, S., & Möller, J. (2019). Interested in Diversity. Digital Journalism, 7(2), 206–229.
So then, how does PSM do this? One way is to embed it in editorial policies to ensure PSM employees operate accordingly. Another is to act on the PSM innovation remit and start teaching users how to work with algorithms effectively.
I don’t think ‘cracking open the black-box’ is all that useful to operationalise. They are often complex algorithmic formulas that require specialist expertise to design and interpret. But affording a control mechanism that enables users to ‘tweak’ how the algorithm performs may be not only possible, but crucial.
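To make that concrete, here is a minimal, purely illustrative sketch (not any platform’s actual system) of what such a user-facing control might look like: a single weight the user can adjust to trade off popularity against an assumed civic-value score when ranking a feed.

```python
# Hypothetical sketch of a user-adjustable ranking control: not any platform's
# real recommender, just an illustration of exposing one "tweak" to the user.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    popularity: float   # e.g. normalised view count, 0..1
    civic_value: float  # e.g. editorially assigned public-interest score, 0..1 (assumed)

def rank(items, civic_weight=0.5):
    """Score each item as a user-controlled blend of popularity and civic value.

    civic_weight is the single control exposed to the user: 0 reproduces a
    purely popularity-driven feed, 1 a purely public-interest feed.
    """
    score = lambda it: (1 - civic_weight) * it.popularity + civic_weight * it.civic_value
    return sorted(items, key=score, reverse=True)

feed = [Item("Shopping haul", 0.9, 0.1),
        Item("Local election explainer", 0.4, 0.9),
        Item("Unboxing video", 0.8, 0.2)]

# The user slides the control towards civic content.
for item in rank(feed, civic_weight=0.7):
    print(item.title)
```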
This is my focus for my last few weeks while I am working as a Research Fellow here in Hamburg.
EDIT: It is worth noting that News UK has teamed up with The Fifth to do exactly what this post describes. Read the Digiday article here.
In around 2017, Mike Williams and I had a few beers (can I say that?) in one of the studios at the ABC with a view to thinking through what was happening in the media at that time.
Instagram was ‘blowing up’, YouTube was going nuts, and a swag of micro platforms such as Vine, Musical.ly and others were fuelling the rise of branded content producers – otherwise known as solo content producers, otherwise known as influencers.
But what we were interested in that night was understanding how this exploding creative industry was running alongside the existing media organisations – if it was at all – and we both had a keen interest in how the ABC was shaping up in comparison.
One of the concepts we started throwing around was this idea that social media content producers now make their celebrity-ness online, build these massive audiences (or highly engaged audiences), and then often make the jump to traditional media. At the time, #7dayslater had just finished season 1 and I thought it was going to be a new production model the ABC would indeed pursue (but then, funding cuts).
What #7dayslater did represent, however, was the praxis between online content producers and media organisations such as the ABC. And so the first concept of the Digital First Personality was born.
Of course this concept only raised more questions that night, like:
Why would a content producer become popular with their own style, and then switch over to somewhere like the ABC (with a remit for public service)?
How could they maintain their platform salary if they were to go off brand with their audience (suddenly start talking about the ABC as part of their suite of everyday-ness)?
Should online content producers be trained by media organisations, and if so does that mean traditional celebrities should ‘learn’ social media?
Does the digital first personality become the new cultural intermediary?
Now that we have finished several beers, shall we go and have dumplings?
I’ve been thinking, researching and developing these questions for the last few years (beyond the call for dumplings), and have developed the concept of the digital first personality significantly. I first took it for a test drive with my MECO3602 Online Media students who bought into it and then also pulled the idea apart. I have presented the idea at a few conferences and have received some great feedback from colleagues along the way. Recently, I have resubmitted an article with major revisions to an A ranked journal, and am hopeful it will be published soon.
The last round of revisions with that journal really pushed me to think through some of the fundamental and theoretical concepts of the digital first personality. More broadly, I am beginning to draw connections between the digital first personality and microplatformization as part of the Digital Intermediation research project – how online content producers craft their skills as cultural intermediaries that are both experts at social influence and understanding platform automation, i.e. recommender systems. This is now starting to feed into the infrastructure work I am undertaking within the automated media space.
Here’s a basic introduction to how I am approaching the framework of the digital first personality:
Intermediation has traditionally been undertaken by a number of stakeholders, including institutions, humans and non-human actors, to transfer information from one group of individuals to another. Recently, two new actors have emerged within the digital media ecology through cultural intermediation: social media influencers and automated media systems engaging algorithms. Cultural intermediation as a framework is a useful way to understand emerging social and cultural forms that result from new media technologies. Cultural intermediation (Bourdieu, 1984), which describes how social capital can be exchanged between different stakeholder groups, also incorporates market economics (Smith Maguire and Matthews, 2014) and expertise exchange. The latest iteration of cultural intermediation includes the agency of platforms, social media influencers and, increasingly, algorithms. Understanding this new form of cultural intermediation is crucial to enable items of public importance to remain visible.
Social influencers, who have previously been referred to as microcelebrities (Marwick, 2013; Senft, 2013) and digital influencers (Abidin, 2016), are a particular subset of cultural intermediaries. Through their developed expertise in identifying ‘cool’ boundary objects, they are able to engage in multiple media production practices to demonstrate the value of those objects to their large audiences. Examples of this practice include Zoella, who often engages her audience with the products from her latest shopping haul (revealing the contents of one’s shopping bag), Evan’s Tube, who engages his younger audience with an ‘unboxing’ of the latest Lego kit, or Fun for Louis, who is often travelling to exotic locations to reveal their most appealing sides. In each instance of these social influencers producing content, they engage in high levels of media literacy to transfer the value of the chosen product or service to their large fan base: a trustworthy, word-of-mouth news sharing technique. They will typically do this across a number of social media platforms, including their TikTok channel for the behind-the-scenes content, the Instagram platform for the ‘hype’ photo or Insta-Story, and a YouTube video to engage their largest audience.
The second emerging aspect of cultural intermediation is the algorithmic arena, which to a large extent describes how automation is undertaken across digital media platforms. As Gillespie (2014: 167) notes, algorithms “are encoded procedures for transforming input data into a desired output, based on specified calculations”. Within a media ecology that sees significantly more content produced than can be consumed, algorithms, in one sense, are seen as mechanisms to assist users in finding and consuming content that is relevant to their interests. In most cases, this manifests as a recommender system, which is represented as ‘Recommended for you’, ‘Up Next’ or ‘You will Like’ types of automated mechanisms. However, there is an increasing body of literature, which is described in detail below, that challenges the bias, power and relationships with content, society and culture that are represented by automated media systems.
Cultural intermediation that combines both social influencers and algorithms, then, acts as a process for media visibility across emerging networked platforms. What has become the process of blending private with public media (Meikle, 2016) has, as Turner (2010) highlights through the demotic turn, enabled ordinary folk to become key influential media producers. However, these key actors within cultural intermediation are typically engaging with the content production and distribution process for the social media entertainment (Cunningham and Craig, 2017) benefits, such as increased social and economic capital. This cultural intermediation process is operationalised by what I argue is the digital first personality: those individuals who produce digital content for maximum visibility by engaging social influencer publication strategies that appease platform algorithms. In many cases, their media production focus is on commercial products and services to increase their social and economic capital. Within the social influencer genre that excludes fake news and disinformation, public issues, public affairs, news and current affairs are often ignored in favour of highly profitable alternatives.
So here is a beginning for a new area of research. I feel as though I have completed my fieldwork in digital agencies for now, but I can see a new space opening up that looks at the intersection of microplatformization and digital first personalities as the backbone of digital intermediation.
We have just returned from a week of interviews in Seoul, South Korea and Tokyo, Japan as part of our Australian Research Council funded Discovery Project, Media Pluralism and Online News. In this post I will focus on the South Korean case only, as we still require more work to understand the Japanese arena completely. During our time in South Korea, we interviewed key stakeholders from Daum, The Korea Herald, Yonhap News Agency, and the Korea Press Foundation.
The South Korean news media industry is unlike any other in the world, especially in terms of how Koreans access their news. Unlike other parts of the world that typically use Facebook, Twitter and increasingly messaging apps (Kalogeropoulos, 2018), South Korea has the news portals Naver and Daum. Around 70% of Koreans access news via Naver, 20% via Daum, and the rest by directly accessing news websites or messaging apps (Korea Press Foundation, 2018). This makes the market voice of Naver incredibly loud in the news media. But the news ecosystem in its entirety is also of interest for understanding how South Koreans access their news online.
Who are the Key Players in South Korean News?
The media industry in South Korea is governed by the Broadcasting Act (2008); the telecom and ISP industry is dominated by KT, SK and LG; and there are a number of television networks, newspapers and outlets. Within the online news sector, there are also the News Assessment Council and the news portals. While much work has already been done on the laws and incumbent stakeholders, there is little understanding of the news portals and the News Assessment Council – the area we focus on.
The News Assessment Council appoints a board of members drawn from the news industry and media experts, along with its own staff. It sometimes works as a proxy regulator for the news portals. Twice a year it accepts applications from news sources to become part of the news portals, and the portals sponsor the Council to remain in operation. Essentially, the News Assessment Council acts as a self-regulating body for the online news sector.
As news portals, Daum and Naver will pay a number of news providers to submit their news articles. News providers are required to accept the conditions of the portals to be published in that space. As the access data suggests, South Koreans consume most of their news via the portals (most significantly Naver), and a news organisation’s partnership with the news portals is crucial for those organisations to survive. The portal partnership enables the news organisation’s content to be searched on the portal and to receive better visibility through search engines. If the news organisation’s level of partnership is high enough, the portals will pay more to the news provider (news fees). While the subscription money is not that much, the real money comes from search, which then leads to larger traffic.
What are the news portals?
Both Naver and Daum are more than just news portals: they are a place where most Koreans undertake activities such as search, messaging, and they also include cash payment systems. They are an online destination for many users, making them an attractive space to also publish online news.
Users are presented with a series of categories on the news site including Breaking News, Society, Environment, and Lifestyle. The front page displays a selection of the top news articles and users are invited to either directly click on those articles or select from their categories of interest.
In talking with our interviewee at Daum, we established the following:
At Daum the breaking news priority is determined by their pre-determined categories on the main page. This is now based on how users access their information – the data is gleaned from browser behaviour rather than a logged-in state (they say for user privacy). So algorithms, huh?
Users are given a random number, but the number can then be reset to avoid privacy issues. There is no priority on sectors or genres; it is based on audience and customer choice. The introduction of the algorithm is not meant to be political, but to increase customer satisfaction.
Users can comment, share and vote up/down on each of those articles to determine where information will appear on the website.
News Aggregation
In talking with many interviewees, it became obvious that Yonhap News is the most consumed news service (the highest percentage at around 25% of all news consumed).
There might be a few reasons for this, including that the news agency operates 24/7 and can provide up-to-the-minute journalism. Users also trust Yonhap more than other news agencies, increasing their consumption rate. Further, Yonhap is not subject to the constraints that stop other outlets publishing news simultaneously across the news portals AND their own broadcast outlets.
So on the surface, it would appear that the self-regulatory body, the News Assessment Council, determines who can publish on the portals. Yonhap is the most consumed media source across those portals, and there is little to no intervention into community management of those conversations. Users determine, through popularity, where content will be displayed on the portals. This model was questioned by a number of stakeholders across the online news industry.
Media Diversity?
We will continue to analyse the preliminary data findings from this fieldwork over the coming months to determine the level of media diversity. Other factors that need to be included in the analysis, beyond the media environment, are user behaviour, the impact of the portal algorithms, user experience, and the age of the news consumers (apparently news manipulation is overblown because young people don’t read the comments and hardly access journalism).
One interesting item to really think through is the arrival of YouTube and Instagram as a key news source for people. Anecdotally, YouTube is an easier interface for older people to access information, and users trust information if it is sent to them via Instagram. The role of other platforms is certainly changing the diversity of the media landscape in South Korea.
No doubt we will publish an article or book chapter from the findings and you can continue to follow the Media Pluralism blog for updates on this research.
And of course if you have any first-hand experiences with South Korean News, or have insights you can offer, please leave a comment or question below.
I’m cooking up some ideas while I’m away on sabbatical at the Hans Bredow Institute. My core focus at this stage is the ‘how to’ of researching automation and algorithms. My current approach integrates reverse engineering with ethnography and design thinking. At this stage, I’m calling it Data Ethnography, and below I set out a guideline for what I think that should be.
No doubt this is the skeleton for a journal article, but what is here is the early development of a new method I am currently working on.
If you think this methodology could be useful, or you have any feedback or suggestions, please leave them below in the comments.
Why Data Ethnography?
Humanities and social science digital research methods have been interrupted by the prominence of privacy and surveillance concerns around platform interoperability that produces large quantities of personified data. The Facebook Cambridge Analytica scandal, especially the revelation of its ability to construct predictive models of its users’ behaviours, brought to public attention concerns over how platform user data is harvested, shared and manipulated by third party providers. The global pushback against the platform providers’ use of these data resulted in platforms closing down some access to application programming interfaces (APIs) to inhibit data manipulation. However, these restrictions also impact on how public benefit research is conducted, providing a useful prompt to rethink how humanities, social science and human-computer interaction scholars research the digital.
While the datafication of our digital lives has provided us with new insights, the digital methods that enable us to research our digital selves have always been mixed in order to understand the field of enquiry, along with its surrounding political, cultural and economic constructs. Increased digital practices built on sophisticated calculations, for example the use of algorithmic recommendations, connected devices, the internet of things, and the like, have impacted on our research environments, prompting the question: how do we research what we can’t see? This article provides evidence, from investigating the visual cultures that surround YouTube, that a new methodology is required to research the apparent ‘black boxes’ that operate alongside our digital selves: data ethnography. Data ethnography is the combination of stakeholder consultation, top level data analysis, persona construction, fine data analysis and, finally, topic or genre analysis. Data ethnography not only enables us to research what we cannot see, but also provides a useful way to understand government interoperability mandates and inform appropriate policy development.
Overview of Data Ethnography
Consultation
This methodology emerged from asking the question, what does the Australian YouTube visual culture look like? Building on the long-term participant observation that is synonymous with ethnography, a researcher is able to understand the norms, cultural affordances, communication practices, and so on. The researcher is required to both produce and consume videos on the platform to understand how users will create content to suit the platform constraints. Simultaneously, viewing the content provides insights into how viewing publics are constructed, how they communicate, what is considered important, norms and languages. In the context of YouTube, this included the platform, but also the intermediaries such as digital agencies, multichannel networks and other digital intermediaries such as influencers to highlight publication strategies. The combination of this ethnographic data provides a compelling starting point for the additional methods that emerge.
The video content was analysed using discourse analysis reflective of Jakobson (1960) to understand the video language function as referential, poetic, emotive, conative, phatic, and/or metalingual. As such, the discourse was analysed in one of four ways: contact enunciation – looking into the camera and addressing the audience; emotive enunciation – the expressive or affective dimension relating to the style of the YouTuber; genre – including thematic content, style and compositional structure; and enunciative contract – the reading contract (Véron, 1985) between the enunciator (addressor) and enunciatee (addressee). The discourse analysis enabled the vast number of YouTubers to be categorised into a smaller, more manageable group of users.
Building on the discourse analysis, I asked the users of the platform the following questions:
What is your gender?
What is your age?
How often do you use YouTube in a week?
What is your favourite category of YouTube video?
Are you likely to watch the next video that YouTube suggests for you?
Do you ever watch the trending videos?
When you enter this address into your web browser, what is the URL of the “up next” video that it suggests for you: https://youtu.be/4_HssY_Y9Qs
The results of these several questions then guided the following snowballing process of the additional methods.
Top Level Data Analysis
Before undertaking comprehensive data scraping processes that rely on platform data availability, it is useful to observe how various incidental metrics are available. In the case of YouTube, this related to likes, comments, views, and the like, which provide insights into what people are watching, how they engage with the content, and how they talk about the content. These top level metric data observations enable the researcher to direct the research or focus on areas of interest that are not otherwise obvious through the consultation phase of data ethnography. The top level metrics further support the user practices on how content is produced, published, shared, and consumed amongst a wide variety of users. Finally, the top level data analysis enables the researcher to ask questions such as what data are available, which processes might be automated, and how might these data be repurposed for other sorts of measurements.
For YouTube, the top level data analysis translated to the following areas of interest:
Views
Likes
Dislikes
Published On
Comment Numbers
Reaction to those comments
Comments on comments
On the YouTube platform, these are the metrics that are available without specialist data scraping processes; researchers with no data programming skills are able to extract these data.
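For researchers who do want to automate this step, a minimal sketch using the public YouTube Data API v3 videos.list endpoint might look like the following. The API key and video IDs are placeholders, and some of the metrics listed above, such as dislikes and comment-on-comment threads, sit behind other endpoints or are no longer exposed.

```python
# Sketch: pulling top-level metrics via the YouTube Data API v3 videos.list
# endpoint (part=snippet,statistics). API key and video IDs are placeholders.
import requests

API_KEY = "YOUR_API_KEY"      # assumption: an API key obtained from Google
VIDEO_IDS = ["4_HssY_Y9Qs"]   # video IDs of interest

def fetch_metrics(video_ids):
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "snippet,statistics", "id": ",".join(video_ids), "key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("items", []):
        stats = item["statistics"]
        yield {
            "title": item["snippet"]["title"],
            "published_on": item["snippet"]["publishedAt"],
            "views": int(stats.get("viewCount", 0)),
            "likes": int(stats.get("likeCount", 0)),
            "comments": int(stats.get("commentCount", 0)),
        }

for row in fetch_metrics(VIDEO_IDS):
    print(row)
```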
Persona Construction
Persona construction is a research approach that is based in human-computer interaction (HCI), user-centred design (UCD) and user experience (UX). Emerging from the Design Thinking field, which takes a human-centred approach to solving problems, persona construction is useful for understanding how problems can be addressed between human and machine interaction. “Design Thinking is an iterative process in which knowledge is constantly being questioned and acquired so it can help us redefine a problem in an attempt to identify alternative strategies and solutions that might not be instantly apparent with our initial level of understanding” (Interaction Design, n.p.). It can have between three and seven stages, but these stages are not sequential or hierarchical; rather, they are iterative, and the process typically does not abide by the dominant or common approaches to problem solving.
There are five phases in Design Thinking:
Empathise – with your users
Define – your user’s needs, their problem, and your insights
Ideate – by challenging assumptions and creating ideas for innovative solutions
Prototype – to start creating solutions
Test – solutions
Persona construction in Design Thinking sits in the second phase of the process, which enables the researcher to define user needs and problems alongside one’s insights. There are four types of personas: goal-directed, role-based, engaging, and fictional personas. The data ethnography methodology uses fictional personas: “The personas in the fiction-based perspective are often used to explore design and generate discussion and insights in the field” (Nielsen, 2013, p. 16). In this environment, a persona “is represented through a fictional individual, who in turn represents a group of real consumers with similar characteristics” (Miaskiewicz & Kozar, 2011, p. 419). Secondly, and similarly to ethnography, a persona is described in narrative form. This narrative has two goals: (1) to make the persona seem like a real person, and (2) to provide a vivid story concerning the needs of the persona in the context of the product being designed.
In the context of YouTube research, the key criteria for the fictional personas were:
Name
Age, gender
Marital status
Occupation
Hobbies
Technology familiarity
Devices used
To ensure the accuracy of the process, the research was conducted behind the university firewall, which has a large range of IP addresses. The research was conducted using Firefox under a new username for each persona; the researcher was not in a signed-in state for Google or YouTube, a new Google account was created for each persona, and the location of each user was set by suggesting a phone area code matching their country. Their interests (hobbies) became the search terms, and the algorithmically generated results were recorded in a pre-trained and post-trained state.
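As an illustration of how the persona criteria can be held constant across runs, the sketch below encodes each fictional persona and turns its hobbies into the search terms used for the pre-trained and post-trained recordings. The persona details here are invented for illustration only.

```python
# Sketch: encoding fictional personas so each run uses the same criteria and
# the hobbies double as the YouTube search terms. Personas are illustrative.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    age: int
    gender: str
    marital_status: str
    occupation: str
    hobbies: list = field(default_factory=list)
    tech_familiarity: str = "medium"
    devices: list = field(default_factory=list)

    def search_terms(self):
        # The persona's interests become the search terms fed to the platform.
        return list(self.hobbies)

personas = [
    Persona("Alex", 14, "male", "single", "student",
            hobbies=["bike riding", "Lego", "Playstation", "Phil and Dan"],
            devices=["phone", "laptop"]),
]

for p in personas:
    print(p.name, "->", p.search_terms())
```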
Fine Grained Data Scrape
By engaging the persona construction method, which reveals insights into how an algorithm will treat its users – or, within the context of this research, the sorts of results it will recommend – it is then possible to engage in a fine-grained data scrape. A fine grained data scrape is defined as ….[ref]. In this research, it became possible to understand which were the top related videos, which channels were the most viewed, and the sorts of networks that emerge around those videos. This process is most useful for not only identifying specific nodes or videos, but also clusters, which can be translated into thematic areas, issue publics (Burgess and Matamoros-Fernández, 2016), and audience clusters. I have previously written about the specific social network analysis (SNA) method so I will not go into that detail here, but in order to find these thematic clusters there is a process of data extraction, cleaning and processing which can be followed. SNA is defined as a computational practice “which draws on computational technologies to help identify, aggregate, visualise and interpret people’s social networking and social media data traces” (p.1). In the first instance, I engaged the YouTube Network Analysis Tool (Ref) to extract the network data of related videos to those which returned as popular in the persona construction method – a post-trained algorithm state. This digital method tool extracts the data as a Gephi file which can then be manipulated to provide a social network analysis (SNA) across the dataset.
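Once the related-video network has been exported, a minimal sketch of the cluster-finding step, assuming a Gephi-readable GEXF file and using the networkx library (one of several possible tools; the filename is illustrative), might be:

```python
# Sketch: reading an exported related-video network and surfacing clusters.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

graph = nx.read_gexf("related_videos.gexf")  # assumed export from the network tool

# Most-connected videos: likely candidates for the dominant nodes in the network.
top_nodes = sorted(graph.degree, key=lambda kv: kv[1], reverse=True)[:10]
print("Highest-degree videos:", top_nodes)

# Community detection approximates the thematic / audience clusters discussed
# above (greedy modularity is just one of several possible algorithms).
communities = greedy_modularity_communities(graph.to_undirected())
for i, community in enumerate(communities[:5]):
    print(f"Cluster {i}: {len(community)} videos")
```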
Topic Modelling
The final method to understand how users congregate around popular content on YouTube, and how they communicate about the material, was to engage in topic modelling.
Topic modelling attempts to understand how users talk about certain things in particular ways. Specifically, I was trying to understand how certain topics emerged in relationship to other topics, which can be understood through the Latent Dirichlet Allocation (LDA) topic modelling approach. Smith and Graham note, “Informally, LDA represents a set of text documents in terms of a mixture of topics that generate words with particular probabilities” through a predetermined number of topics. This provides the researchers with a “heuristic approach that aim[s] to maximise the interpretability and usefulness of the topics”.
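A minimal sketch of the LDA step, here using scikit-learn with placeholder comment text and a predetermined topic count, might look like this:

```python
# Sketch: LDA topic modelling over YouTube comment text with scikit-learn.
# The comments list and the number of topics are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "love this lego build so much",
    "the new playstation game looks amazing",
    "bike trails in this video are great",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # predetermined topic count
lda.fit(doc_term)

# Print the most probable terms per topic as a rough readout of the themes.
terms = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_id}: {top_terms}")
```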
For example, if we wanted to find out the popular topics discussed by a 14-year-old Australian boy, we would construct the persona with interests – which in turn become the search terms – of bike riding, Lego, Playstation, and Phil and Dan. The top YouTube channel recommendations for this user before the algorithm training were:
Social media audiences consume approximately three percent of the entire amount of content published across platforms (Bärtl, 2018). Of this three percent, a small number of popular digital influencers create that content – for example Casey Neistat, Logan Paul, or Zoella – which, arguably, leads to media homogenisation through the limited focus on popular themes and topics. Moreover, platform providers, such as YouTube and Instagram, operate on algorithmic recommender systems such as ‘trending’ and ‘up next’ mechanisms to ensure popular content remains highly visible. While platforms in the digital era exercise a social and political influence, they are largely free from the social, political and cultural constraints applied by regulators to the mass media. Beyond vague community guidelines, there remains very little media policy to ensure that the content produced by digital influencers and amplified by platforms is accurate, diverse enough to include the public interest, or indeed beneficial.
This project will research the content production process of automated media systems that engage digital influencers, or leading social media users, who interact with extraordinarily large and commercially oriented audiences. The evidence base will assist in developing theory on contemporary digital media and society, which will consequently shape how communities access public information. Instead of harnessing this knowledge for commercial imperatives, this research project will examine the findings in the context of socially aware digital influencers who occupy similar roles to those found in traditional media organisations. Further, this project will examine how algorithms are making decisions for media consumers based on commercial executions, which are often void of the social awareness associated with public affairs and issues.
At a time when mass media comes under scrutiny for its involvement in perpetuating misinformation around public issues, accurate media becomes increasingly crucial to the provision of educative material, journalistic independence, media pluralism, and universal access for citizens. At present, media organisations are attempting to repurpose traditional broadcast content on new media platforms, including social media, through automation built on somewhat experimental algorithms. In many cases, these organisations are failing in this new environment, with many automated media attempts appearing more as ‘experimental’. This should be an opportunity for media organisations to rethink how they produce content, and how new informed publics might be brought into being around that content.
Instead of thinking of automation as a solution to their increasing media environmental pressures, media organisations should be looking toward algorithms to curate and publish informative media for their audiences. This moment provides a unique opportunity to research the contemporary social media environment as media organisations experiment with automated media processes. It also challenges our understanding of automated media through popular vanity metrics such as likes and shares, in what Cunningham and Craig (2017) are calling ‘social media entertainment’. Under this moniker, these scholars highlight the intersection point of social media platforms, content production, and entrepreneurial influencers who commercialise their presence to develop their own self-branded existence. Abidin (2016) refers to these users as digital influencers, to include YouTube and Instagram superstars who demonstrate an unprecedented capacity to manifest new commercially oriented publics. Digital influencers are typically young social media users who commercially create content across a host of social media platforms, which is liked, commented on and shared by millions of fans. It is estimated the top ten 18-24 year old YouTubers are worth $104.3 million collectively (Leather, 2016), indicating a burgeoning new media market. This model of exercising digital influence within automated media systems has potential to translate into the support of an informed public sphere amid a chorus of social media communication noise.
The research is innovative in a number of ways. Firstly, it is groundbreaking through its approach of collecting and comparing datasets of contemporary social media practice from within the commercial and non-commercial media sectors. Secondly, it theoretically combines media studies, science and technology studies, sociology and internet studies to bolster the emerging field of contemporary life online: an interdisciplinary approach to everyday social media. Thirdly, methodologically it combines traditional qualitative methods such as interviews and focus groups, and blends these with contemporary digital ethnography techniques and emerging social network analysis. Fourth, this research contributes to the emerging field of automation and algorithmic culture, by providing a groundbreaking exploration of data science with traditional audience research: a field of particular importance for media organisations. Finally, the outcomes will provide innovative insights for digital agencies and leading media organisations.
Aims and Outcomes
The aims of the project are:
to understand how digital influencers operate across social media, in both commercial and non-commercial media environments;
to document how digital media agencies enable digital influencers to create large consumer based publics;
to examine and understand how algorithms are operating within large-scale media content production;
to identify how global media is incorporating digital influencer roles and automation (if at all) into their production methodologies; and
to provide a new theoretical framework, recommendations and a policy tool that enables media organisations to manifest and engage with their audiences on critical public issues.
The aims will be met by engaging in digital ethnography methods that document how digital influencers produce content and actively engage with their audiences in an online community. These users are responsible for creating discussion around a number of issues they deem to be important, yet are typically driven by commercial imperatives. The conversations inspired through influencer content production are then compounded by the digital agencies, who operate as amplifying agents for those messages, especially by ‘gaming’ the exposure mechanisms of YouTube and Instagram. This research will therefore ask: if this model can work in the commercial media environment, can socially aware digital influencers adopt the same techniques?
The primary research question is:
how do digital influencers operate to create large consumer based publics?
The research subquestions are:
how does automation operate in media content production and distribution?
how do automated media systems make content distribution decisions based on behavioural assumptions?
how can media organisations incorporate the successful methods of automation and digital influencers in their publishing practice?
Background
Digital influencers are social media users, typically ‘vloggers’ or video bloggers, who create content about products or lifestyles on popular themes including toys, makeup, travel, food and health, amongst other subject areas. Increasingly, digital influencers are using a number of social media platforms to build their brand and publish content to their niche, yet considerably large, audiences. This process of content production and distribution is emblematic of digital intermediation through social media platforms, which afford individuals the capacity to operate in a media ecology while being determined through algorithmic processes. As Gillespie (2014, p.167) notes, algorithms “provide a means to know what there is to know and how to know it, to participate in social and political discourse, and to familiarize ourselves with the publics in which we participate”. At the heart of these algorithmic platforms distributing trending and popular content are the digital influencers who are creating popular, entertaining media and represent the flow of traffic and information between increasingly large audiences.
Media organisations have been experimenting with both digital influencers and automation to create and distribute their branded content. In many cases, commercial media have employed the services of digital influencers to boost their traditionally produced media content, while deploying, in many ways, crude experiments in automation. Media brands consistently send digital influencers products and services to integrate into their ‘lifestyle’ videos and images. Recommender systems (Striphas, 2015), such as those used by distribution platforms like Netflix, have proved most popular, where content is suggested based on an audience member’s past viewing habits. Recommendation systems have been adopted across a number of media services including Spotify, Apple iTunes, and most news and media websites. The integration of chatbots is also rising, where the most interesting experiment has emerged from the public media sector through the ABC News Chatbot. Ford and Hutchinson (forthcoming) note that the ABC News Chatbot is not only an experiment in automated media systems, but also a process of educating media consumers on how to access crucial information from within a cornucopia of media.
The key theoretical problem demonstrated in these examples is an asymmetric distribution of agency when automated systems make ‘decisions’ that can be based on flawed normative or behavioural assumptions (Vedder 1999). At worst, there is no possibility to override the automated decision. That is why algorithmic recommendations are sensitive matters and should be explained to users (Tintarev & Masthoff 2015). But explaining and understanding recommendation systems requires deep technical knowledge as the results are produced by a series of complex and often counter-intuitive calculations (Koren et al 2009). Furthermore, recommendations are often the result of more than one algorithm applied in the online and offline processing of consumer behaviour data (Amatriain & Basilico 2015). The asymmetrical relationship this creates between users and media content providers is especially problematic due to the public complexion and social responsibility obligations that should be demonstrated by media organisations.
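To illustrate why these calculations resist simple explanation, here is a toy latent-factor sketch in the spirit of the matrix factorisation approaches Koren et al. describe (the data and dimensions are invented, not any provider’s actual model): the predicted preference is a dot product of learned factor vectors that have no obvious meaning to the user.

```python
# Toy latent-factor recommender in the spirit of matrix factorisation
# (Koren et al. 2009). The point: the predicted score is a product of learned,
# abstract factors, which is why the recommendation is hard to explain.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 4, 5, 2

# Observed ratings matrix (0 = unobserved); values are illustrative.
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 0],
              [1, 1, 0, 5, 4],
              [0, 0, 5, 4, 0]], dtype=float)

# Randomly initialised user and item factor matrices, refined by gradient steps.
U = rng.normal(scale=0.1, size=(n_users, n_factors))
V = rng.normal(scale=0.1, size=(n_items, n_factors))

for _ in range(2000):
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - U[u] @ V[i]
        U[u] += 0.01 * (err * V[i] - 0.02 * U[u])
        V[i] += 0.01 * (err * U[u] - 0.02 * V[i])

# Predicted score for an unobserved user/item pair: a dot product of opaque factors.
print("Predicted rating, user 0 / item 2:", round(float(U[0] @ V[2]), 2))
```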
Digital influencers as cultural intermediaries are tastemakers that operate across traditional media platforms such as television and radio, and have become more effective at their translation ability across social media platforms such as Instagram, Twitter and Vine for example. Digital intermediation is the next phase of this research, which builds on cultural intermediation, yet focuses on its relationship with automated media systems.