This article first appeared as the Editorial for Volume 13, Issue 4 of Policy & Internet Journal and is co-authored with Milica Stilinovic. I am sharing this pre-print here as a resource for others who may not have access to the copyedited and typeset version.
As our societies approach and engage with a number of policy moments, we are presented with opportunities to develop the discourse surrounding the sustained vitality of the worlds in which we live. One way of doing so is through extensive consultation with those whom the discussions impact the most. Often this includes industry and government stakeholders, but more important is the process of including the voices of the groups and individuals who will be left to function within and around these policy interventions. This article highlights several moments when there has been a successful consultation process within policy and internet arenas, alongside key examples of when user consultation failed or was marginalised, resulting in less than acceptable outcomes. Further to this, policy intervention is a unique opportunity for scholarly research to demonstrate some of its best impact: the inclusion of intensive and often publicly funded research within the societies it examines. This contemporary take on the inclusion of user perspectives and scholarly research provides the backdrop for the seven articles of this issue, which in many ways integrate this approach and highlight the importance of all voices within policy development for the internet and its surrounding technological and cultural spaces.
policy; user-centred design; platforms; communication; scholarly research
As world leaders gathered in Glasgow for the United Nations Climate Change Conference, or COP26 as it has become known, our attention was directed toward the impact of humanity on the enduring existence of our planet. We had not faced such a moment since the Paris Agreement of 2015, also known as COP21, where all countries agreed to limit global warming to well below 2 degrees, with an aim of 1.5 degrees. The 2021 gathering set four goals:
- Secure global net zero by mid-century and keep 1.5 degrees within reach
- Adapt and protect communities and natural habitats
- Mobilise Finance
- Work together to deliver the outcomes (ukcop26, 2021)
This list seems a feasible agenda for a gathering of this sort, yet the summit was not without its own controversies as leaders from most countries gathered to talk through their strategies for the next five years.
Central to these discussions was the communication of critical information about climate change. In the lead-up to the summit, and in the midst of the pandemic, we have seen a marked increase in misinformation, too often fuelled by disinformation. Entangled in this space are also “propaganda, selective reporting, conspiracy theory, inadvertent misinformation, and deliberate disinformation” (Angus et al., 2021, p. 1), where these scholars argue that misinformation is not new in itself but is being amplified by users and algorithmic distribution techniques. It is the input and output moments of information communication at COP26 that are crucial for citizens around the world, ensuring we are across the current debates, science and outcomes of this critical gathering for climate change.
Yet in all this discussion, one thing remains missing: us. This is a somewhat common practice at these often high-level policy-generating gatherings of our global leaders, where the focus can be on matters other than the titles on the banners. Aditya Chakrabortty (2021) of the Guardian makes a compelling argument to this effect: it is a great idea to get everyone wanting the same thing, but in reality it is unworkable. The key point to take away from his discussion is that any significant moment, movement or policymaking gesture should always try to make change with people and not just for them. While discussions of environmental decisions are underway at the global level, and continue to unfold at the time of writing, a user focus is still relatively absent, and this absence is emblematic of our communication and technology policy dialogue.
In our last Editorial, for Volume 13, Issue 3, we revisited Mark Deuze’s concept of living in media to make sense of some of the contemporary media and technology policymaking moments currently underway (Stilinovic and Hutchinson, 2021). Yet in the three months since its publication, top-down regulatory decisions have appeared across several of our favourite everyday platforms – a return to the enduring issues that plague Tumblr. In 2012, Tumblr changed its content publishing policy to remove all blogs related to self-harm, attempting to improve the so-called quality of its published content. This was driven internally, and the decision was made in the interest of ‘all’ users. However, as Tiidenberg, Hendry and Abidin (2021) articulate, this was catastrophic for users who had found community and belonging on Tumblr after being pushed away from other online spaces. The questions then emerge: where do these users go, and what happens when they cannot connect with other users? Could the policy of this one social media platform have been developed in a way that did not other, alienate or displace digital users? How might we learn from the outcomes of Tumblr and apply those insights to other aspects of our digital communication environments?
The stretch of platform-oriented regulation
The absence of users, particularly in the online environment, is nothing new, and we have seen it occur across a number of settings. To briefly return to Tumblr as one of the most fascinating case studies in recent times, it is useful to highlight the spectrum of issues at play when activities come under the policy-reshaping spotlight. Seko and Lewis (2018) made a critical observation of Tumblr and the fringe sorts of online communities that gather in that space, highlighting the importance of community-making through online cultural curation. In describing the unique reblogging affordance of the Tumblr platform, they show how online community has formed, consolidating what boyd and Marwick (2009) argued as avoiding a fear of being stigmatized online. Seko and Lewis argue “that unique affordances of the Tumblr platform shape distinct ways in which SI (self-injury) is told and shared. These affordances also imply the broader sociocultural context drawn by the particular audiences in making sense of the shared story” (2018, p. 182). The important takeaway from these scholars and others working in this space is the variety of users and how they rely on a broad range of sociocultural affordances in their online selves, communities and existence.
Tumblr, for its part, was trying to combat the public view that providing a place for those interested in self-harm would encourage the activity to occur. In fact, as the research indicates, this type of content provides a place for users to gather, provide support, share knowledge and feel included. Instagram, by contrast, has shown a consultative approach towards policy design, although it still has significant strides to make for that approach to work across its large user base. In 2019, the platform moved to restrict content relating to dieting, detoxing, weight loss and cosmetic surgery (iweigh, 2020). What is of note here is the consultative approach it used, which included the users themselves, body positivity campaigners (see, for example, Jameela Jamil) and academics researching in this space (see, for example, Dr Ysabel Gerrard), to produce policy that was inclusive and representative.
While this may seem a positive step forward, it also introduces the broader discussion of content moderation and the role it plays within policymaking, especially for areas such as young social media users and their health. Further to this, content moderation demonstrates the role played by those who administer or police these sorts of policies across social media, but also in other spaces of communication and technology. “Emerging policies will be limited if they do not draw on the kind of expansive understanding of content moderation that scholars can provide” (Gillespie et al., 2020, p. 2). These scholars highlight that what is happening on social media platforms is reflective of what then happens across other digital spaces – it is a call for academics to take an increasingly important role in policymaking. Yet they further argue that academic involvement in policymaking discussions “must be grounded in an understanding of moderation as an expansive socio-technical phenomenon, one that functions in many contexts and takes many forms” (p. 3). Beyond content moderation on social media platforms, this notion of a socio-technical phenomenon is a grounding standard for all policymaking processes.
While the current focus within the communication and media space is very much on the regulation of platforms as a key issue for a variety of governments, it is important to consider how this discussion extends beyond platforms alone and towards other aspects of policymaking. In this conversation, the user voice and perspective must be highlighted when these debates emerge and, indeed, when the act of policymaking occurs. Further to this, the inclusion of voices such as activists, campaigners, interest groups, lobbyists and academics, together with a holistic understanding of the social and cultural affordances of these spaces, is critical – these are the indicators of how good policy should be designed and implemented. This is where much of our policymaking falls short in the contemporary space. While policymaking accommodates certain sectors of the userbase, it is inherently ignorant of others, which often causes severe ruptures of the status quo – this has consistently been the experience in the Australian context.
The most recent case study to demonstrate this exact argument is the move to update advertising policy at Facebook. On January 19, 2022, Facebook will remove detailed targeting options “that relate to topics people may perceive as sensitive, such as options referencing causes, organizations, or public figures that relate to health, race or ethnicity, political affiliation, religion, or sexual orientation” (Mudd, 2021, n.p.). Facebook sees this as a positive move forward that will protect individuals from content that is not relevant to them, attempting a balanced policymaking approach:
“we want to better match people’s evolving expectations of how advertisers may reach them on our platform and address feedback from civil rights experts, policymakers and other stakeholders on the importance of preventing advertisers from abusing the targeting options we make available” (Mudd, 2021, n.p.)
Yet the outcome is less than desirable for a breadth of Facebook stakeholders. Jordan Taylor (2021) wrote on Twitter that this move may not be a good thing: “Public health researchers & communicators, local advocacy groups, and Big Pharma likely use this feature to target ads related to HIV treatment and prevention. Removing this targeting may make reaching marginalized communities more difficult for small advertisers”. This is but one area of a policy change that will impact a significant group of stakeholders on the platform. Kath Albury (2021) echoed Taylor’s concerns about the policy change by highlighting the continued difficulties academics and researchers face when trying to interface with social media platforms: “…these kinds of policies make it very hard to do health promotion – or recruit research participants”. In many cases, as Jean Burgess (2021) noted, these moves are going to cause harm if they are not connected to context and theories of power.
This is an important backdrop for this issue of Policy & Internet: the importance of users and their social and technological contexts, both when policymaking consultation emerges and through its implementation. Good policymaking is done with a sincere and thorough understanding of all those who will be affected by changes, and with a view to improving the status quo of the environment. It is also important to continually highlight that while this is happening in the media and communication space, particularly the social media space, it is representative of how policymaking should be undertaken more broadly. It is a continuing discussion between the users (Hoffmann, Proferes & Zimmer, 2018), the technologies (Hutchinson, 2021), the socio-cultures (Cabalquinto & Soriano, 2020), the moderators (Gerrard & Thornham, 2020), the researchers (Gillespie et al., 2020) and the policymakers that enables good policy design and implementation.
With a call for more scholarly research for and within policymaking as a thematic thread throughout this issue, the first article, by Arho Suominen and Arash Hajikhana (2021), systematically explores the implications of big data for policymaking processes. Through three key indices that have emerged from nine communities engaging in this research – namely the policy cycle, data-based decision-making, and productivity – the authors articulate a research agenda for further studies in public policy through big data. That is, these scholars ground the research agenda for policymaking in the context of what it seeks to regulate.
The second article in this issue turns its attention toward the Chinese context, specifically looking at the role social media has played in the country’s family planning policy. As the media landscape has shifted and changed over the last ten years, so too have the agenda-setting mechanisms for policy surrounding public issues, with a strong focus on the revision of the one-child policy. Deng et al. (2021) have applied text-mining analysis across a corpus of 74,000 Sina Weibo user comments on content published by The People’s Daily to understand how the agenda has shifted. They focus specifically on the interactions between users and state-run news media to reveal how other public issues emerged as important to Weibo users during this time and how approaches towards agenda setting have changed.
In our third article, Daniel Konikoff (2021) explores the space of toxicity and abusive content policy on Twitter, and argues for how we might reconceptualise policy within this communication space. Implementing a gatekeeping theory approach and engaging in discourse analysis, Konikoff presents a compelling argument that the policy and the technological affordances of the platform perpetuate hateful and vitriolic content on Twitter. This article signals how freedom of speech rhetoric is not a useful mechanism to limit hate speech, an argument that can also be applied to other social media platforms beyond Twitter itself.
Public policy discussions in the US and the UK come under the comparative lens in our fourth article by Barrett, Dommett and Kreiss (2021). These scholars identify six common ideals across the UK and the US for addressing digital threats to democracy: transparency, accountability, engagement, an informed public, social solidarity, and freedom of expression. They also argue that policymaking is out of step with current literature by demonstrating how the framing of these democratic ideals often leads to specific sorts of evocative practices and promotion of the ideals themselves. This work is especially important given the rise of right-wing politics and other democratic crises.
Costa e Silva and Lameiras (2021) provide a Portuguese perspective on civil society and internet governance in our fifth article of this issue. Taking a national-level perspective, these scholars critically examine the impact of participation on the conditions of civil society by reviewing activism within Portugal against the Portuguese and EU policy space. Through the integration of mobilization theory, they identify a lack of horizontal networking stemming from a lack of civil structural resources, such as knowledge and legitimization, which is often achieved through international collaboration. Ultimately, they argue in support of multistakeholderism as a core principle for European civil society, while the Portuguese case provides unique insights and a space for learning for the broader European context.
Lynette H. X. Ng and Araz Taeihagh (2021) take a more technical approach in the sixth article of this issue, How does fake news spread? Understanding pathways of disinformation spread through APIs. Focusing on an often-overlooked stakeholder in internet policy, these scholars ask how technology infrastructures affect the way disinformation spreads through our networks, through extensive analysis of how application programming interfaces (APIs) operate – especially via their code repositories on GitHub and the like – across Telegram, Twitter, Facebook and Reddit. Through this analysis, they outline a four-stage framework that enables disinformation to travel across social media platforms, with a view to providing policy input on how APIs can be better managed. This research provides unique insights into how platform governance can continue to evolve for improved regulation of social media networks.
The final article in this issue, by Danziger and Schreiber (2021), focusses on the scholarship of digital diplomacy while exploring the Twitter use of three countries: Israel, Turkey and Russia. Through content analysis and topic mapping, these scholars highlight how the countries’ Ministries of Foreign Affairs use Twitter to present an international and somewhat positive face to their audiences. What is of interest here is how these digital diplomacy efforts differ according to each Ministry’s strategies for promoting its foreign policy goals. By highlighting that Twitter is a legitimate platform to implement foreign state policy, and through a sociopragmatic framework, they show that each country is addressing foreign audiences within the international arena.
Albury, K. [@KathAlbury]. (2021). Thread of the day – these kinds of policies make it very hard to do health promotion – or recruit research participants [Tweet]. Twitter. https://twitter.com/KathAlbury/status/1458307404076969990
Angus, D., Bruns, A., Hurcombe, E., and Harrington, S. (2021). ‘Fake news’ on Facebook: a large-scale longitudinal study of problematic link-sharing practices from 2016 to 2020. In Selected Papers in Internet Research 2021: Research from the Annual Conference of the Association of Internet Researchers. AoIR – Association of Internet Researchers, United States of America. https://eprints.qut.edu.au/213835/
Barrett, B., Dommett, K., and Kreiss, D. (2021). The capricious relationship between technology and democracy: Analyzing public policy discussions in the UK and US. Policy & Internet. 13(4).
Burgess, J. [@jeanburgess]. (2021). Once again for the choir in the back: all platform governance strategies enacted without connection to a. context and b. a theory of power are c. not going to work and d. likely to cause harm [Tweet]. Twitter. https://twitter.com/jeanburgess/status/1458312489792200706
Cabalquinto, E. C., & Soriano, C. R. R. (2020). ‘Hey, I like ur videos. Super relate!’ Locating sisterhood in a postcolonial intimate public on YouTube. Information, Communication & Society. Advance online publication, 1–28.
Chakrabortty, A. (2021). Muddled, top-down, technocratic: why the green new deal should be scrapped. The Guardian. https://www.theguardian.com/commentisfree/2021/nov/11/green-new-deal-bad-idea-policy-left-joe-biden-john-mcdonnell?CMP=Share_iOSApp_Other
Costa e Silva, E., and Lameiras, M. (2021). What is the role of civil society in Internet governance? Confronting institutional passive perspectives with resource mobilization in Portugal. Policy & Internet. 13(4).
Danziger, R. and Schreiber, M. (2021). Digital Diplomacy: Face management in MFA Twitter accounts. Policy & Internet. 13(4).
Deng, W., Hsu, J. H., Löfgren, K., and Cho, W. (2021). Who is Leading China’s Family Policy Discourse? Policy & Internet. 13(4).
Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media’s sexist assemblages. New Media & Society, 22(7), 1266–1286.
Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., Roberts, S. T., Sinnreich, A., & Myers West, S. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4).
Hoffmann, A. E., Proferes, N., & Zimmer, M. (2018). “Making the World More Open and Connected”: Mark Zuckerberg and the discursive construction of Facebook and its users. New Media & Society, 20(1), 199–218.
Hutchinson, J. (2021). Digital intermediation: Unseen infrastructures for cultural production. New Media & Society. doi:10.1177/14614448211040247
iweigh. (2020). Instagram Bans Weight Loss Content for Under 18-year-olds. https://iweighcommunity.com/instagram-policy-change/
Konikoff, D. (2021). Gatekeepers of Toxicity: Reconceptualizing Twitter’s Abuse and Hate Speech Policies. Policy & Internet. 13(4).
Mudd, G. (2021). Removing Certain Ad Targeting Options and Expanding our Ad Controls. Available at: https://www.facebook.com/business/news/removing-certain-ad-targeting-options-and-expanding-our-ad-controls
Ng, L. H. X., and Taeihagh, A. (2021). How does Fake News Spread? Understanding pathways of disinformation spread through APIs. Policy & Internet. 13(4).
Seko, Y., & Lewis, S. P. (2018). The self-harmed, visualized, and reblogged: Remaking of self-injury narratives on Tumblr. New Media & Society, 20(1), 180–198.
Stilinovic, M. and Hutchinson, J. (2021). Living in media and the era of regulation: Policy & Internet during a pandemic. Policy & Internet. 13(3).
Suominen, A. and Hajikhana, A. (2021) Review of Big Data Analytics in Policy-Making: A Research Agenda. Policy & Internet. 13(4).
Taylor, J. [@nprandchill]. (2021). Facebook just announced they are removing ad targeting based on sexual orientation, but that may not be a good thing [Tweet]. Twitter. https://twitter.com/nprandchill/status/1458298483887247361
Tiidenberg, K., Hendry, N. A., & Abidin, C. (2021). Tumblr. New York: Polity Press.