Recently, I was invited to deliver a keynote for a joint session with the News and Media Research Center and the Centre for Deliberative Democracy to explore the ideas and concepts of digital intermediation.

The blurb:

How might generative artificial intelligence (AI) and automation be harnessed to produce social good? In an increasingly automated digital media world, user agency is challenged through the loss of interaction functionality on the platforms, technologies and interfaces of everyday digital media use. Instead, algorithmically designed decision-making processes assist users in making sense of these environments, helping them seek out content that is relevant, interesting and entertaining. However, if the last five years are anything to go by, these sorts of recommendations, particularly across social media, have caused anything but social cohesion and unity amongst users, and have instead spread misinformation, vitriol and hurtful media. Would our society be different had we designed systems that, while still entertaining, placed the wellbeing of humans at the forefront over content that is, for the most part, merely popular?

This presentation uses the lens of digital intermediation to explore how civic algorithms might be designed and implemented in digital spaces to improve social cohesion. By unpacking the technologies, institutions and automation surrounding the cultural production practices of digital intermediation, it becomes clearer how these levers can be adjusted to nudge and encourage platforms, users and content creators to engage in improved civic processes. As a digital intermediation challenge, creating and working with civic algorithms presents as a potentially useful approach towards improving the cornerstone of our democracies by ensuring citizens have access to accurate information, are engaging in the discussions that are important and relevant to them, and are operating within digital environments that value social good alongside commercial gains.

And here’s the recording of the session, slides included:

Floating in water

It’s been a while since I’ve published anything here. Hi, and welcome back!

In my academic career, much has been happening which may explain my silence. I’ve been super busy with research while also undertaking some of the most important leadership roles of my professional career. Right now, I hold the three executive roles of Editor-in-Chief of the Policy & Internet Journal, President of the Australian and New Zealand Communication Association (ANZCA), and the Chair of Discipline, Media and Communication at the University of Sydney.

In this post, I’m going to focus on the latter leadership role (Chair of Discipline) and describe how my first six months in the role have gone.

Where I’m at right now

8ish years ago, as a young Level B scholar and just after my own PhD journey, I sat with my Chair of Department and my Head of School to talk about my career goals as an academic. Today, as Chair of Discipline, I sat with my Head of School and one of our newest Level B colleagues and spoke about their goals for their academic career. I walked out chuffed with my progression in this life path, but also thankful for those in front of me who supported, guided, shielded, prompted, directed, and generally helped along the way.

For me, today was an important reminder of the trail we leave behind us and how working with our next generation is crucial for everyone. It’s only academia, but it’s bigger than that, too.

I also had the opportunity to sit with my Head of School (HoS) today and just get to know each other. It was lovely! It was also an opportunity for us both to share our stories on how we ended up where we are today. While I was fascinated with my HoS’s story, and some of the similarities, it was a moment to reflect on how I’ve grown and developed over the last while. I’m happy and focussed for the minute, and have a new-found passion in my professional life.

What I’ve learned in the last six months

It’s hard to try to compact what I’ve learned in the last six months into a few lines, but if I can summarise it – the ability to listen to a broad range of people, understand individual perspectives, and then make the best decision on how to proceed.

When I was working in live production, we would rehearse shows and, working as a collection of experts in each medium (audio, video, lighting, producing, etc.), we would work through a series of scheduled motions. Most times it was perfect, but there were also unexpected problems that popped up. That pressure and immediacy of making decisions that were informed and directed at success is not unlike the Chair role.

My first six months in the role were spent learning from my predecessor, while also meeting with everyone around me to understand their needs. There were a few calls I had to make in terms of direction, and most were fine, but it was really about listening.

Now, I move into a more driving mode and need to listen to what is at play across my own Discipline and think about how to position us for the next 12 months. This is tough, but again something that is not done in solo mode but with the input of everyone around me. It’s exciting, and a bit terrifying at the same time.

Where to from here?

So. I have some new skills and am keen to keep developing these. But as I look towards my next step after these leadership roles, I’m also focussed on succession planning. I’m beginning to think about who will be in the chair next and what they need to be ready to take over.

Just as others paved the way for me, it’s my responsibility to provide the opportunity to the next gen.

Providing opportunities is probably the most important thing I’ve learned in this gig.

Photo by Evie S. on Unsplash

Sydney Protests

I was asked to write an opinion piece for the Sydney Morning Herald last week as a reaction to the anti-lockdown protests in Sydney, which took place the week before.

I found it incredibly interesting how those who attended the event left a trail of ‘evidence’ across social media, and how that would be the first place that police agencies, who are incredibly angry, would go looking for that evidence. Their self-posted celebrations would ultimately be their downfall.

What is more interesting is that this is an event masterminded by a far-right group in Germany, which has brought quite disparate groups together to march on the status quo. They, at arm’s length, will not be touched by any form of Sydney-based policing.

Below is a version of the final article which appears here in the SMH.

Hiding in plain sight: Facebook a ‘honeypot’ for police to monitor protests


Calls for tighter regulation on Facebook are the standard reaction to the spread of disinformation across the social media giant, but considering the recent anti-lockdown protests, perhaps the platform should do nothing.

Activist groups are calling on Facebook to tighten measures on misinformation, claiming “disinfo kills”. Killer disinformation was potentially manifest in the anti-lockdown protests in Sydney last weekend, where the social media platform was likely used to promote and organise. The protest organisers are again encouraging their networks to take to the streets on Saturday to call for change against mandatory vaccinations and lockdowns. These protests have prompted renewed pressure to monitor Facebook’s capacity to attract problematic groups.

However, digital traces of the anti-lockdown protesters on Facebook serve as aids to law enforcement agencies seeking to identify and prosecute hundreds of individuals since the chaos erupted in Sydney. The platform also provides insights into the planned activities of these same groups.

The amplification of social media messaging left a trail of videos, images, chats and discussion among thousands of individuals who opposed the current public health orders to stay at home, wear a mask when in public and get vaccinated against COVID-19; a veritable honeypot of data for use by law enforcement agencies seeking to identify and prosecute those who flout public health orders.

So Facebook needs to weigh the potential for the disinformation it hosts to be destructive with the intel it can gather on groups who spread it. And the intel can be vast. We now know that a combination of activity on Instagram, Telegram and Facebook, supported by a German-based group, Freie Bürger Kassel (or the Free Citizens of Kassel), was able to mobilise thousands of individuals in a number of cities around the globe. The Worldwide Rally for Freedom, of which the Sydney protest formed a part, saw a collection of somewhat aligned cohorts of anti-vaxxers, conspiracy theorists, lockdown-opposers, health and wellness groups and far-right extremists come together to protest for their freedom.

This prompts the question: What will Facebook do to prevent these sorts of ill-intent events from occurring in the future? The answer is likely to be nothing. This is the approach that remains consistent with Facebook’s right to freedom of speech position, which enables a wide variety of opinion and conversation to continue. Given the array of horrific moments that have been broadcast live, organised and discussed by its users, one might ask why there isn’t more done to protect the safety of others on Facebook. But perhaps the best thing Facebook could do after the recent Worldwide Rally for Freedom is nothing.

While this approach may be counter to a growing public opinion of Facebook’s responsibility for safe and civil societies, the platform finds itself in a unique position that sees it collecting and profiling the personal data of those who seek to “remain free”. Inherently, through its crowd gathering and mobilisation applications – for example, Facebook Events – the platform is able to collect, sort, organise and archive the personal and network data of those who participated in the rally and documented their efforts on Facebook.

By not de-platforming, silencing or delisting the public event, has Facebook provided law enforcement the breadcrumb trail and the evidence it needs to identify those at the heart of the protest and to prosecute accordingly? This unique and insightful database may indeed be the last chance that police and law enforcement agencies have before these sorts of organisations disappear to the dark web. From there, it becomes increasingly difficult for the law to find and follow leaders of such groups.

The regulatory pressure of this moment places Facebook again in the challenging position to decide on how, in a post-Christchurch massacre world, to manage free speech against the negative and ill-conceived events that are harmful to our societies. But as we saw in the fallout of the ANOM app that brought down more than 200 members of Australia’s underworld, digital databases remain unfriendly to activities that contrast with lawful directives. In that sense, it is the users themselves who are undertaking the detective work.

Ironically, those who wish to be “free” are further incriminating themselves through their public digital traces left on social media platforms. NSW Police have used these traces to administer hundreds of fines to the anti-lockdown protesters.

Thanks for the fun times, my new friends!

You may notice I call it a sabbatical, even though technically it is called Special Studies Program (SSP) here at the University of Sydney. But, so a broader audience gets what’s going on here, let’s go with sabbatical for now…

Good times…

During my sabbatical, which I undertook from December 2018 to July 2019, I was embedded as a Visiting Research Fellow at the Hans Bredow Institute (now called the Leibniz Institute for Media Research) in Hamburg, Germany. I was working on the Algorithmed Public Sphere project alongside my two amazing colleagues Dr Cornelius Puschmann and Dr Felix Münch.

It was also a unique opportunity to meet a diverse group of like-minded researchers from around the world as we all converged on Hamburg to get going with some ground-breaking research. The powerhouse of researchers included Arjen van Dalen (Denmark), Christiano Ferri (Brazil), and Maris Männiste (Estonia).

The Algorithmed Public Sphere Fellows, 2019, L to R: Christiano Ferri, Jonathon Hutchinson, Cornelius Puschmann, Felix Münch, Maris Männiste (Arjen was missing that day).

Beyond having an amazing experience with these folk and learning about the bizarre similarities and differences of our countries, we shared insights into our research on automation, algorithms, media policy, and social cohesion. We also moved forward with some innovative digital methods, and have hatched a number of new research projects, including Bot visibility and authenticity: Automated social media conversation detection:

Bots are increasingly simple to produce and are used as key communication protocols for individuals and institutions across social media platforms, as one form of automated media production. Simultaneously, however, bot use is emerging as a relationship creator (Ford & Hutchinson, 2019) between consumers across platforms, skewing content visibility. Recent work by Münch et al. (forthcoming) identifies bots within the German Twittersphere, finding a high probability of bots within marketing and public relations (PR) conversations. Conversely, there is a low probability of bots communicating within the YouTuber creator conversations across the same Twittersphere. This observation supports the argument that YouTubers may have a better visibility strategy than bots, yet their content production is determined by their cultural, economic and political backgrounds. This project seeks to test bot-detection methods, for example the Botometer. It will design a ‘human’ baseline for bot detection within the German and Australian Twitterspheres that can be compared against the automated bot-detection processes currently utilised. It will produce a bot-detection classifier able to categorise accounts across a scale of malign, benign, or not likely automated.
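
To make the classifier idea a little more concrete, here is a rough sketch of how per-account bot scores (retrieved separately, for instance via Botometer) might be mapped onto the malign / benign / not likely automated scale. The thresholds, the flagged-link heuristic and the sample accounts are illustrative assumptions only, not the project’s actual method or results.

```python
# Illustrative sketch only: threshold values, field names and the flagged-link
# heuristic are assumptions, not methods or results from the project described above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Account:
    handle: str
    bot_probability: float        # 0.0-1.0, e.g. a Botometer-style score fetched elsewhere
    flagged_link_ratio: float     # share of posts linking to known spam/marketing domains
    human_coded_bot: Optional[bool] = None   # label from a manually coded 'human baseline'

def classify(account: Account,
             bot_threshold: float = 0.7,
             malign_link_threshold: float = 0.5) -> str:
    """Map an account onto the malign / benign / not likely automated scale."""
    if account.bot_probability < bot_threshold:
        return "not likely automated"
    # Automated accounts pushing a high share of flagged links are treated as malign;
    # other automated accounts (e.g. news or service bots) as benign.
    if account.flagged_link_ratio >= malign_link_threshold:
        return "malign"
    return "benign"

def agreement_with_baseline(accounts: List[Account], bot_threshold: float = 0.7) -> float:
    """Share of manually coded accounts where the automated call matches the human call."""
    coded = [a for a in accounts if a.human_coded_bot is not None]
    if not coded:
        return float("nan")
    hits = sum((a.bot_probability >= bot_threshold) == a.human_coded_bot for a in coded)
    return hits / len(coded)

if __name__ == "__main__":
    sample = [
        Account("@pr_campaign_bot", 0.92, 0.80, human_coded_bot=True),
        Account("@transit_updates", 0.88, 0.00, human_coded_bot=True),
        Account("@some_youtuber", 0.15, 0.10, human_coded_bot=False),
    ]
    for account in sample:
        print(account.handle, "->", classify(account))
    print("agreement with human baseline:", agreement_with_baseline(sample))
```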

Keynotes and Public Lectures

I also spent a small amount of time travelling to other European universities to strengthen networks and develop future research projects. I was invited to deliver a Keynote Lecture to the Baltic Film, Media, Arts and Communication Institute of Tallinn University in Estonia. My exceptional host was Dr Katrin Tiidenberg, who made me feel very much at home, but also exposed me to life in one of the post-Soviet countries (like, I saw a real KGB interrogation room!!). Thanks to everyone who came along and asked engaging questions to help me continue to think through my new research area.

The view of Old Town in Estonia.

I also had the pleasure of visiting a number of other universities, to catch up with friends, colleagues and hatch new ideas and projects, including:

  • The University of Amsterdam, Netherlands;
  • London School of Economics, United Kingdom;
  • City University, London, United Kingdom;
  • Alexander von Humboldt Institute, Berlin, Germany.

The result:

  • Hutchinson J. (2019). Towards transparent public automated media: Digital intermediation. Keynote Lecture. Tallinn University, Estonia. 16 May.
  • Hutchinson J. (2019). Towards transparent public automated media: Digital intermediation. Keynote Lecture. Leibniz Institute for Media and Communication, Hamburg, Germany. 15 May.

Methodology Masterclass

While I was in Tallinn, I also delivered a masterclass on Data Ethnography with a diverse group of folk, from lecturers across the Arts through to Masters students from Information Technology. While the participants genuinely enjoyed the class, I think I always take more from this workshop as I keep developing the method. Thanks everyone for coming along for the ride with me:

  • Hutchinson J. (2019). Data ethnography: How do we research what we can’t see? Postgraduate Masterclass. Tallinn University, Estonia. 17 May

Publications

I mean, this is what it all comes down to, right? I am most delighted to have had some time to finish those articles that were stuck on my hard drive, complete some new pieces, and then start work on the next few areas.

My time away during my SSP was spent for the most part writing and researching in my new emerging area of research, digital intermediation, which I argue highlights the new media ecology that incorporates the agency of digital agencies, automation and algorithms.

I bought some books, they were heavy to carry home.

I was also able to have three articles published as a precursor to this work, and commenced work on new research and writing in this space. As I return to work, I have two articles under review and one book proposal in with the editors of Media Series for MIT Press.

Published Journal Articles:

  • Hutchinson J. (2019). Micro-platformization for digital activism on social media. Information, Communication & Society. DOI: 10.1080/1369118X.2019.1629612
  • Ford H & Hutchinson J. (2019). Newsbots That Mediate Journalist and Audience Relationships. Digital Journalism. DOI: 10.1080/21670811.2019.1626752
  • Hutchinson J. (2019). Digital first personality: Automation and influence within evolving media ecologies. Convergence. DOI: 10.1177/1354856519858921

Journal Articles Under Review:

  • Hutchinson J. (2019). Data ethnography for digital intermediation: How do we research what we can’t see? Big Data & Society.
  • Hutchinson J. (2019). Theorizing digital intermediation: Automating our media. Media, Culture & Society.

Book (almost with commissioning editors):

  • Hutchinson J. Revealing digital intermediation: Towards transparent infrastructure. Distribution Matters book series, MIT Press.

Grants

I was also awarded a few smaller grants to assist in developing my research towards an external grant application. At this stage, I have focussed on an ARC Discovery Grant to be submitted in March 2020 for funding in 2021. I am also looking at other funding opportunities such as the Australian Communications Consumer Action Network (ACCAN) Grants Program for funding in July 2020.

SLAM Research Support Scheme: $2966.44.

This grant is being used for the research project Bot visibility and authenticity: Automated social media conversation detection, which is underway with colleagues from the Leibniz Institute for Media and Communication, Hamburg. This project seeks to understand the reliability of the Botometer project in comparison with the bot-detection methods we have already developed, to understand how perceived real communication occurs around events within the Australian and German Twitterspheres. The immediate output is a paper for the 2020 ICA Conference, with a view to continuing work on this project over the next several years.

Faculty Research Support Scheme: $5000.00

This grant is being used to bring several colleagues together on a project with a view to advancing the research towards a competitive ARC Discovery Grant application. The project’s title is Promoting digital equality through better platform algorithmic policy, and it brings together expertise from Political Science, Computational Science, Design and Media Studies. While in Europe, I received support for the project from Dr Jan Schmidt at the Leibniz Institute for Media and Communication, Hamburg, and Associate Professor Thomas Poell from the University of Amsterdam – both leading academics in this field who are interested in becoming international advisors on the project.

Current Thinking…

I continue working on my book, which will create the field of digital intermediation. I describe the book in the following way:

Our media consumption is increasingly curated and designed by digital infrastructures that are informed by economic and infrastructural environments that determine the creation of content and how that content is distributed. Often, this is represented through algorithmically calculated decisions: recommendation systems on media applications and platforms. While this can be seen as a useful mechanism to sort, curate and present a digestible media diet within a saturated media market, automation is also an unseen digital infrastructure that contributes to the decrease in diversification of our exposure to information. Social media platforms increasingly promote what they see to be important content, which is often aligned with their commercial interests. Smart TVs are purchased with a bundle of pre-installed applications that are often unable to be uninstalled. Connected devices and interoperable systems are developed on information efficiency calculations with little concern for user and information equality. It is the commercial operators such as Netflix, Prime, YouTube and Apple who are succeeding in the content exposure battle, crowding out other key content creators, media organisations and cultural institutions. This is a digital distribution problem: the mismanagement of automated infrastructures.

This book constructs a theoretical model of digital intermediation within increasingly automated media systems. Digital intermediation can be applied to the process of digital media communication across the majority of social media platforms, which now drive the news and media cycle, highlighting the agency of users that becomes restricted and refined by the digital intermediaries that create, publish and distribute content. Through digital intermediation, it is also possible to understand the strategies of its most successful social media users, the platforms that privilege this content production process, and to explain how some media is more visible than others. The book answers this question: How is media content produced nowadays, in what context(s), and within which structural pressures? Digital intermediation is a content production process that incorporates the culture and political economy that surround the technologies, online content producers, digital agencies and automation. The book describes these four unseen infrastructures of digital intermediation in detail by highlighting the production and distribution of content within our contemporary media ecology. The book then moves on to describe the cultural dimensions that surround how particular types of content are created as a means to represent our current societal understandings. Political economy is then used to frame the regulation and economic practices that surround the production and consumption of content produced and distributed across digital spaces through the digital intermediation process. Finally, the book provides a series of recommendations, including improved interface design that incorporates the dimensions of digital intermediation for content production and distribution, to encourage the education and involvement of user agency within these media ecologies.

I’m super focussed, enjoying teaching again, and ready to develop my skills in the research service roles (HDR Coordinator). I am also managing the three International Executive Roles and learning so much from being on these Boards. Until the next three years!


That day I cooked for the Institute, with Philip, and it rocked!

The following passage is a thought moment, and by no means an exhaustive attempt to place the idea within existing theories/fields. It would be interesting, and probably the published version of this will do so, to align it with media and cultural studies, queer theory or perhaps discrimination studies. That said, here is a thought process…

I have been undertaking substantial research into artificial intelligence (AI) and automation since arriving here at Hans Bredow. I am beginning to think that perhaps automation/AI isn’t the best or most appropriate way to frame our contemporary media lives. Those concepts certainly are a part of our media lives, but there may be a better way to describe the entire environment or ecosystem as I have previously written.

What I do understand at this point is that media curation/recommendation is responding to us as humans, but we are also responding to how that technology is responding and adapting to us. This is a human/technology relationship, and one that is constantly being refined, modified, adapted and changed – not by either agent alone, but collectively as any two agents would negotiate a relationship.

This type of framing, then, suggests we should no longer be thinking about algorithmic media, or automated media alone. Perhaps what we should be thinking about is the relationship of processed and calculated digital media with its consumers – for this I will use the term predictive media.

I will attempt to explain how I have arrived at predictive media.

Artificial Intelligence (AI) Media

Media certainly isn’t in an AI moment – I’m not entirely sure I align with AI to be honest (or at least I am still working through the science/concept and implications). Beyond its actual meaning, it feels as though it is the new business catchphrase – “and put some AI in there with our big data and machine learning things”. If artificial intelligence is based on machine learning, the machine requires three phases of data processing: to interpret external data, to learn from those data, and to achieve a specific goal from those learnings. This implies that the machine has the capacity for cognitive processing, much like a human brain.

AI is completely reliant on data processing: to produce a baseline, to incorporate constant feedback data after decisions have been made, and to recalculate information to continually improve its understanding of the data. Often, there is a human touch at many of these points, placing a cloud of doubt over the entire machine learning capacity. While this iterative process is very impressive when done well, there will always be data points that are indecipherable to a computer.

We should instead be thinking about these processes as a series of decision points, of which we also have input data.

Say, for example, you are deciding whether to board a bus to travel into town. AI would process data like distance, the timetable and the number of people on the bus, and recommend which bus you should catch. What it can’t tell is whether the bus driver is drunk and driving erratically, or that the bus carries advertising you fundamentally disagree with, or that you have 10 students travelling with you. In that scenario, it is the combination of AI processes and your own human decision making that proves the best judge of which bus to catch into town.
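
As a toy illustration of that blend of machine scoring and human judgement, here is a small sketch in which an automated recommender ranks buses on the data it can see, while the human gets the final veto on the factors it can’t. The buses, weights and veto logic are all invented for the example.

```python
# Toy illustration only: the buses, the scoring weights and the human 'veto' are invented.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Bus:
    route: str
    minutes_until_departure: int
    minutes_travel_time: int
    passengers_on_board: int

def machine_score(bus: Bus) -> float:
    """What an automated recommender can compute: soonest, fastest, least crowded wins."""
    return -(bus.minutes_until_departure + bus.minutes_travel_time
             + 0.2 * bus.passengers_on_board)

def human_acceptable(bus: Bus, vetoed_routes: Set[str]) -> bool:
    """What only the human knows: the driver looks unsafe, the advertising is objectionable,
    or the ten students travelling with you won't fit."""
    return bus.route not in vetoed_routes

def choose_bus(buses: List[Bus], vetoed_routes: Set[str]) -> Bus:
    # The human filter runs first; the machine then ranks whatever survives it.
    acceptable = [b for b in buses if human_acceptable(b, vetoed_routes)]
    return max(acceptable or buses, key=machine_score)

if __name__ == "__main__":
    buses = [
        Bus("370", minutes_until_departure=3, minutes_travel_time=15, passengers_on_board=10),
        Bus("422", minutes_until_departure=8, minutes_travel_time=20, passengers_on_board=12),
    ]
    # On the data alone the machine would pick the 370; with the human veto, the 422 wins.
    print(choose_bus(buses, vetoed_routes={"370"}).route)   # -> 422
```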

As I see it, we are not in a pure AI media moment – and this will be a long way away, if it manifests at all.

Algorithmic Media

We have also seen the rise of algorithmic media, which often presents itself as recommender systems or the like: systems that essentially suggest you should consume a particular type of media based on your past viewing habits or your demographic data.

Algorithmic media can be very useful, given our media saturated lives that have Netflix, Spotify, blogs, journalism, Medium, TikTok, and whatever else makes up our daily consumption habits. We need some help to sort, organise and curate our media lives to make the process possible (efficient).
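
For anyone who hasn’t peeked under the hood, the core of such a recommender can be sketched in a few lines. This is a deliberately simplified, hypothetical example that scores unseen titles by their overlap with past viewing habits; no real platform works this simply, and all the titles and tags are made up.

```python
# Deliberately simplified, hypothetical recommender: it scores unseen titles by how many
# genre tags they share with the viewing history. No real platform works this simply.
from collections import Counter
from typing import Dict, List, Set

CATALOGUE: Dict[str, Set[str]] = {
    "Cooking Show A": {"food", "reality"},
    "Baking Contest B": {"food", "competition", "reality"},
    "Political Drama C": {"politics", "drama"},
    "Sitcom D": {"comedy"},
}

def recommend(history: List[str], top_n: int = 2) -> List[str]:
    # Build a profile of how often each tag appears in the viewing history...
    profile = Counter(tag for title in history for tag in CATALOGUE.get(title, set()))
    unseen = [title for title in CATALOGUE if title not in history]
    # ...then rank unseen titles by how strongly their tags match that profile.
    return sorted(unseen, key=lambda t: -sum(profile[tag] for tag in CATALOGUE[t]))[:top_n]

if __name__ == "__main__":
    # A history of one cooking show surfaces more of the same before anything else.
    print(recommend(["Cooking Show A"]))   # -> ['Baking Contest B', 'Political Drama C']
```

Even at this toy scale, the tilt towards ‘more of the same’ is obvious, which is exactly the narrowing of exposure discussed below.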

Think of a Google search. It is often the case that we search for specific information based on our needs. Google knows the sorts of information we are interested in and will attempt to return information that is relevant and useful. Of course this result has a number of levers in operation behind the mechanics – for example commercial priorities, legislation, trends, etc. Further, we have also seen how algorithms can be incredibly racist, selective, indeed chauvinistic.

In some areas, developers have started addressing these problems, given the algorithms are developed by humans. But there is still a long way to go with this work.

So in that sense, I’m not sure algorithmic media makes a whole lot of sense, due to the problems associated with it. It could be that by the time the algorithmic issues are entirely addressed, we will have moved on to our next media distribution and consumption phenomenon.

Predictive Media

So if this is our background (and I understand I have raced through media and technology history, and critical studies here – I will flesh this out in an upcoming article), humans have altered their relationship with technology.

Heather Ford and I are about to (hopefully!!) have an article published that explores the human/technology relationship in detail through newsbots, but I think it is broader than bot conversations alone.

Indeed, content producers adapt and shift their relationship with algorithms daily to ensure their content remains visible. But I think consumers are now beginning to shift their relationship with how technology displays information. If not shift, we are definitely recognising these digital intermediary artefacts that impact, suspend, redirect, or omit our access to information.

Last week, Jessa Lingel published this cracking article on Culture Digitally, The gentrification of the internet. She likened our current internet to urbanisation, and made the argument that the process of gentrification is clearly in operation:

an economic and social process whereby private capital (real estate firms, developers) and individual homeowners and renters reinvest in fiscally neglected neighborhoods through housing rehabilitation, loft conversions, and the construction of new housing stock. Unlike urban renewal, gentrification is a gradual process, occurring one building or block at a time, slowly reconfiguring the neighborhood landscape of consumption and residence by displacing poor and working-class residents unable to afford to live in ‘revitalized’ neighborhoods with rising rents, property taxes, and new businesses catering to an upscale clientele

Perez, 2004, p.139

In her closing paragraphs, Jessa made a recommendation that is so obvious and excellent, why haven’t we done this before?

Be your own algorithm. Rather than passively accepting the networks and content that platforms feed us, we need to take more ownership over what our networks look like so that we can diversify that content that comes our way. 

Lingel, 2019, n.p.

It made me think about food and supermarkets – certainly in Sydney we have two (maybe three) major supermarkets. But there is a growing trend to avoid them and shop local, shop in food co-ops, join food co-ops, and change our food consumption habits entirely. If those major chains want to push inferior products and punish their suppliers to increase the bottom line, as consumers we (in the privileged Australian context) have the option to purchase our food elsewhere.

Why wouldn’t we do the same with our digital internet services? Is this a solution to biased, mismatched, commercially oriented media algorithms and the so-called AI?
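
One very literal way to ‘be your own algorithm’ is to re-order a feed yourself so that no single source dominates the top. The sketch below is a hypothetical round-robin diversifier over made-up posts, not a feature any platform currently offers.

```python
# Hypothetical 'be your own algorithm' sketch: interleave a feed so no single source
# dominates the top, instead of accepting the platform's engagement-ranked order.
from collections import defaultdict, deque
from typing import Deque, Dict, List, Tuple

Post = Tuple[str, str]   # (source, headline) – the example data below is invented

def diversify(feed: List[Post]) -> List[Post]:
    # Group posts by source, preserving each source's internal order...
    by_source: Dict[str, Deque[Post]] = defaultdict(deque)
    for post in feed:
        by_source[post[0]].append(post)
    # ...then round-robin across sources so the top of the feed is mixed.
    diversified: List[Post] = []
    while any(by_source.values()):
        for source in list(by_source):
            if by_source[source]:
                diversified.append(by_source[source].popleft())
    return diversified

if __name__ == "__main__":
    engagement_ranked = [
        ("MegaPlatform News", "Outrage headline 1"),
        ("MegaPlatform News", "Outrage headline 2"),
        ("MegaPlatform News", "Outrage headline 3"),
        ("Local Co-op Blog", "Community garden update"),
        ("Public Broadcaster", "Budget explainer"),
    ]
    for source, headline in diversify(engagement_ranked):
        print(f"{source}: {headline}")
```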

Is Predictive Media the Solution?

I think we can apply the same approach towards predictive media.

We cannot consume the amount of media that is produced, suggesting we may be missing crucial information. We cannot trust automated media because it has proven to be incredibly biased. But perhaps by changing our relationship with technology and understanding how these systems work a little better, we might find a satisfactory middle ground.

It is not only greater transparency that is required to address our problems with automated and algorithmic media, but also a proactive engagement with those machines to train the programs to understand us better. But changing that relationship is difficult if you don’t know it is an option. So perhaps the real call here is to establish alternative and transparent digital communication protocols that are easily accessible and decipherable for users. Through education, change is possible, and this may be a defence against the current trajectory of digital media.

The combination of both increased understanding/transparency and more active engagement with training ‘our’ algorithms could be the basis for predictive media, where predictive media helps us beyond a commercial platform’s profit lines, and exposes us to more important and critical public affairs.
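
To give that relationship a concrete shape, here is a speculative sketch of a ‘predictive media’ loop: the user’s explicit feedback (rather than just their clicks) trains visible topic weights, and public-affairs content is guaranteed a floor in the ranking regardless of those weights. Every topic, weight, learning rate and rule here is an assumption for illustration, not a description of any existing system.

```python
# Speculative sketch of a user-trainable 'predictive media' ranker. Topics, weights,
# the learning rate and the public-affairs floor are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Item:
    title: str
    topic: str
    is_public_affairs: bool

@dataclass
class UserModel:
    # Transparent, user-visible topic weights the user can inspect and deliberately train.
    weights: Dict[str, float] = field(default_factory=dict)
    learning_rate: float = 0.2

    def train(self, topic: str, liked: bool) -> None:
        """Explicit feedback ('show me more/less of this') updates the visible weights."""
        delta = self.learning_rate if liked else -self.learning_rate
        self.weights[topic] = self.weights.get(topic, 0.0) + delta

    def rank(self, items: List[Item], top_k: int = 3) -> List[Item]:
        ranked = sorted(items, key=lambda i: -self.weights.get(i.topic, 0.0))
        top, rest = ranked[:top_k], ranked[top_k:]
        # Guarantee at least one public-affairs item near the top, whatever the weights say.
        if rest and not any(i.is_public_affairs for i in top):
            promoted = next((i for i in rest if i.is_public_affairs), None)
            if promoted is not None:
                rest.remove(promoted)
                rest.append(top.pop())     # demote the last of the top items
                top.append(promoted)
        return top + rest

if __name__ == "__main__":
    user = UserModel()
    user.train("celebrity gossip", liked=True)
    user.train("celebrity gossip", liked=True)
    user.train("local politics", liked=False)
    feed = [
        Item("Red carpet recap", "celebrity gossip", False),
        Item("Another red carpet recap", "celebrity gossip", False),
        Item("Reality TV feud", "celebrity gossip", False),
        Item("Council votes on housing plan", "local politics", True),
    ]
    for item in user.rank(feed):
        print(item.title)   # the council story is held near the top despite the weights
```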

Original Image by Hello I’m Nik.

With a bit of time to stop and think over the Xmas break, I’ve had a thought or two on my academic trajectory for 2017 – which of course came to me over a few beers with friends and family…

I’ve just returned from holidays where I spent much time with family, friends and people I didn’t know at all. I kind of did an ‘East Coast Suburbs’ tour of Australia, and realised that with a young family of three boys, suburbia is pretty alright. Dudes can run, scooter, play basketball, hang, meet other young kids, etc., and rarely did we have that overwhelming city feeling of ‘Where’s Mr 4? Is he in someone’s car?’ kind of vibe that is just part of the tapestry of city living, I guess.

I also spent a lot of time with people not in my usual networks – weak ties, for the network studies people reading this. It was awesome, and as an ethnographer it really sharpened my skills for my upcoming field work this year. One of the most interesting realisations I had, while camping in one of the many coastal spaces in Northern NSW, was that most people couldn’t give a shit about what academics think, or where their trajectory is heading over the next few years.

I watched on in awe at this camping family that lived across the road from us in Minnie Waters. The parents owned a permanent caravan which they had owned for about 30 years, and there were generations of family that would pop in over the course of the week or so that we were there. They had 4WDs, boats, fishing and crabbing gear – all the toys that made their holiday life totally fun and enjoyable. They had friends and wives and girlfriends and kids… all the while laughing and hanging out with each other (and no doubt eating the best seafood).

It was at that moment that I thought ‘these people have a great life, and they just don’t care what stuff we academics are doing. Like, at all.’ So that was the impetus for the thinking behind this year’s academic goal:

Make my academic work have some real world impact!

In the day-to-day grind of academic life, I think we put our heads down and go for glory with our publications, grant writing, strategy development, research and teaching, and kind of forget the bigger picture (well, I certainly do at least). Some of the real world stuff that surfaced in the latter part of 2016 (stuff that has enormous real world impact) gathered much academic interest, but I never really saw much of it move from the ‘ivory tower’ into any kind of action. I may be wrong here, and tell me if I am in the comments, but I see this as a pretty big flaw of the academic profession.

I think we are pretty good at pointing out the shortcomings of politics, economics, cultures, governance, communication, etc., but I think we have a problem with converting that into any real impact. Certainly any impact that would have an effect on our fishing family in Minnie Waters.

I saw some great research last year that worked its way into the mainstream press and reached wider audiences, but on the whole, I only heard a lot of talk about all the bad stuff. I’m using this as a launch pad to try and use 2017 as a time to not only critically analyse shit, but to also make some kind of positive change with it.

Topics that are always coming up in my filter bubble include (to name a few):

  • Trump
  • Fake News
  • Alt Right
  • Algorithms (the algorithm made me do it)
  • Negative economic growth
  • Disastrous left-leaning parties

If we put all of this together, it could be a right disaster for our entire contemporary civilisation. Are we not in über privileged positions to have a bird’s eye view of this stuff, and should it not be our responsibility to feed this in to the public sphere?

It’s one thing to identify this, but another thing to action it. I’m not clear on how to do my part yet – maybe more media appearances? Maybe I will engage local community groups and feed information into them? Maybe I’ll run for politics, as Obama reckons I should?

But I certainly need to get out of my academic circles this year and talk more with ordinary people about how they can change things – you know, encourage people to lead their own shit.

Oh yeah, and I’m going to do this after I finish my book, submit my DECRA, take on my responsibilities as the new undergraduate coordinator, write a new university wide online course for social media, continue to teach my Social Media MOOC, coordinate and teach four subjects and undertake the field work for my new research project.

I do love how time off makes you put things into some sort of perspective 😉