Meta’s Instagram and Threads apps are “slowly” rolling out a change that will no longer recommend political content by default. The company defines political content broadly as being “potentially related to things like laws, elections, or social topics”.
Users who follow accounts that post political content will still see such content in the normal, algorithmically sorted ways. But by default, users will not see any political content in their feeds, stories or other places where new content is recommended to them.
For users who want to keep receiving political recommendations, Instagram has a new setting to turn them back on, making this an “opt-in” feature.
This change not only signals Meta’s retreat from politics and news more broadly, but also challenges any sense of these platforms being good for democracy at all. It’s also likely to have a chilling effect, stopping content creators from engaging politically altogether.
Politics: dislike
Meta has long had a problem with politics, but that wasn’t always the case.
In 2008 and 2012, political campaigning embraced social media, and Facebook was seen as especially important in Barack Obama’s success. The Arab Spring was painted as a social-media-led “Facebook Revolution”, although Facebook’s role in these events was widely overstated.
Since then, however, the spectre of political manipulation, particularly in the wake of the 2018 Cambridge Analytica scandal, has soured social media users’ views of politics on these platforms.
Increasingly polarised politics, vastly increased mis- and disinformation online, and Donald Trump’s preference for social media over policy, or truth, have all taken a toll. In that context, Meta has already been reducing political content recommendations on their main Facebook platform since 2021.
Instagram and Threads hadn’t been limited in the same way, but also ran into problems. Most recently, Human Rights Watch accused Instagram in December last year of systematically censoring pro-Palestinian content. With the new content recommendation change, Meta’s response to that accusation today would likely be that it is applying its political content policies consistently.
Instagram has no shortage of political content from advocacy and media organisations. Jakob Owens/Unsplash
How the change will play out in Australia
Notably, many Australians, especially in younger age groups, find news on Instagram and other social media platforms. Sometimes they are specifically seeking out news, but often not.
Not all news is political. But under the new default, none of the news Instagram recommends will be political. The serendipity of discovering political stories that motivate people to think or act will be lost.
Combined with Meta recently stating they will no longer pay to support the Australian news and journalism shared on their platforms, it’s fair to say Meta is seeking to be as apolitical as possible.
But with Meta positioning Threads as a potential new town square while Twitter/X burns down, it’s hard to see what a town square looks like without politics.
The lack of political news, combined with a lack of any news on Facebook, may well mean young people see even less news than before, and have less chance to engage politically.
Explaining the change, Instagram head Adam Mosseri argued: “Politics and hard news are important, I don’t want to imply otherwise. But my take is, from a platform’s perspective, any incremental engagement or revenue they might drive is not at all worth the scrutiny, negativity (let’s be honest), or integrity risks that come along with them.”
As with Facebook, politics has simply become too hard for Instagram and Threads. The political process and democracy can be pretty hard too, but it’s now clear that’s not Meta’s problem.
A chilling effect on creators
Instagram’s announcement also reminded content creators their accounts may no longer be recommended due to posting political content.
If political posts were preventing recommendation, creators could see the exact posts and choose to remove them. Content creators live or die by the platform’s recommendations, so the implication is clear: avoid politics.
Creators already spend considerable time trying to interpret what content platforms prefer, building algorithmic folklore about which posts do best.
While that folklore is sometimes flawed, Meta couldn’t be clearer on this one: political posts will prevent audience growth, and thus make an already precarious living harder. That’s the definition of a political chilling effect.
For the audiences who turn to creators because they are perceived to be relatable and authentic, the absence of political posts or positions will likely stifle political issues, discussion and thus ultimately democracy.
How do I opt back in?
For Instagram and Threads users who want these platforms to still share political content recommendations, follow these steps:
Go to your Instagram profile and click the three lines to access your settings.
Click on Suggested Content (or Content Preferences for some accounts).
Click on Political content, and then select “Don’t limit political content from people that you don’t follow”.
With much fanfare, Meta announced last week that they’re rolling out all sorts of generative AI features and experiences across a range of their apps, including Instagram. AI agents in the guise of celebrities are going to exist across Meta’s apps, with image generation and manipulation affordances of all sorts hitting Instagram and Facebook in particular.

At first glance, allowing generative AI tools to create and manipulate content on Instagram seems a little odd. In the book Instagram: Visual Social Media Cultures I co-authored with Tim Highfield and Crystal Abidin, one of the consistent tensions we examined within Instagram has been users holding on to a sense of authenticity while the whole platform is driven by a logic of templatability. Anything popular becomes a template, and can swiftly become an overused cliché. In that context, can generative AI content and tools be part of an authentic visual landscape, or will these outputs and synthetic media challenge the whole point of something being Instagrammable?
More than that, though, generative AI tools are notoriously fraught, often trained on such a broad range of indiscriminate material that they tend to reproduce biases and prejudices unless very carefully tweaked. So the claim I was most interested in was Meta’s assertion that they are “rolling out our new AIs slowly and have built in safeguards.” Many generative AI features aren’t yet available to users outside the US, so for this short piece I’m focused on the generative AI stickers which have rolled out globally for Instagram. Presumably these draw on the same underlying generative AI system, so seeing what gets generated with different requests is an interesting experiment, certainly in the early days of a public release of these tools.
Requesting an AI sticker in Instagram for ‘Professor’ produced a pleasingly broad range of genders and ethnicities. Most generative AI image tools have initially produced pages of elderly white men in glasses for that query, so it’s nice to see Meta’s efforts being more diverse. Queries for ‘lecturer’ and ‘teacher in classroom’ were similarly diverse.
Heading in to slightly more problematic territory, I was curious how Meta’s AI tools were dealing with weapons and guns. Weapons are often covered by safeguards, so I tested ‘panda with a gun’ which produced some pretty intense looking pandas with firearms. After that I tried a term I know is blocked in many other generative AI tools, ‘child with a gun’, and saw my first instance of a safeguard demonstrably in action, with no result and a warning that ‘Your description may not follow our Community Guidelines. Try another description.’
However, as safeguards go, this is incredibly rudimentary, as a request for ‘child with a grenade’ readily produced stickers, including several variations which did, indeed, show a child holding a gun.
The most predictable words are blocked (including sex, slut, hooker and vomit, the latter most likely relating to Instagram’s well documented challenges in addressing disordered eating content). Thankfully gay, lesbian and queer are not blocked. Oddly, gun, shoot and other weapon words are fine by themselves. And while ‘child with a gun’ was blocked, asking for just ‘rifle’ returned a range of images, several of which looked to me a lot like children holding guns. It may well be that the unpredictability of generative AI creations means much more robust safeguards are needed than just blocking some basic keywords.
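To see why verbatim keyword blocking is such a brittle safeguard, here is a toy sketch. The blocklist entries and matching logic are my own guesses for illustration, not Meta’s actual system, but the failure mode is the same one described above: the exact blocked phrase is caught, while a trivial rephrasing sails through.

```python
# Toy illustration of a naive keyword blocklist for generative AI prompts.
# Entries here are hypothetical, not Meta's actual blocklist.
BLOCKLIST = {"child with a gun", "sex", "naked"}

def is_blocked(prompt: str) -> bool:
    """Block a prompt only if it contains a blocklisted phrase verbatim."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_blocked("child with a gun"))      # True: the exact phrase is caught
print(is_blocked("child with a grenade"))  # False: a near-synonym passes
print(is_blocked("rifle"))                 # False: the concept isn't covered
```

Because the model itself can produce a prohibited image from an innocuous-looking prompt, safeguards arguably need to operate on the generated output, not just the input text.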
Zooming out a bit, in a conversation on LinkedIn, Jill Walker Rettberg (author of the new book Machine Vision) was lamenting that one of the big challenges with generative AI trained on huge datasets is the lack of cultural specificity. As a proxy, I thought it’d be interesting to see how Meta’s AI handles something as banal as flags. Asking for a sticker for ‘US flag’ produced very recognisable versions of the stars and stripes. ‘Australia flag’ basically generated a mush of the Australian flag, always with a union jack, but with a random number of stars, or simply a bunch of kangaroos. Asking for ‘New Zealand flag’ got a similar mix, again with random numbers of stars, but also with the Frankenstein’s monster that was a kiwi (bird) with a union jack on its arse and a kiwi fruit for a head; the sort of monstrous hybrid that only a generative AI tool, blessed with a complete and utter lack of comprehension of context, could create! (That said, when the query was Aotearoa New Zealand, quite different stickers came back.)
More problematically, a search for ‘the Aboriginal flag’ (keeping in mind I’m searching from within Australia and Instagram would know that) produced some weird amalgam of hundreds of flags, none of which directly related to the Aboriginal Flag in Australia. Trying ‘the Australian Aboriginal flag’ only made matters worse, with more union jacks and what I’m guessing are supposed to be the tips of arrows. At a time when one of the biggest political issues in Australia is the upcoming referendum on the Aboriginal and Torres Strait Islander Voice, this complete lack of contextual awareness shows that Meta’s AI tools are incredibly US-centric at this time.
And while it might be argued that generative AI is never that good with specific contexts, trawling through US popular culture queries showed Meta’s AI tools can give incredibly accurate stickers if you’re asking for Iron Man, Star Wars or even just Ahsoka (even when the query is incorrectly spelt ‘ashoka’!).
At the moment the AI Stickers are available globally, but the broader Meta AI tools are only available in the US, so to give Meta the benefit of the doubt, perhaps they’ve got significant work planned to understand specific countries, cultures and contexts before releasing these tools more widely. Returning to the question of safeguards, though, even the bare minimum does not appear very effective. While any term with ‘sexy’ or ‘naked’ in it seems to be blocked, many variants are not. Case in point, one last example: the query ‘medusa, large breasts’ produced exactly what you’d imagine, and if I’m not mistaken, the second sticker created in the top row shows Medusa with no clothes on at all. And while that’s very different from photographs of nudity, if part of Meta’s safeguards are blocking the term ‘naked’, but their AI is producing naked figures all the same, there are clearly lingering questions about just how effective these safeguards really are.
Facebook recently announced significant changes to Instagram for users aged under 16. New accounts will be private by default, and advertisers will be limited in how they can reach young people.
The new changes are long overdue and welcome. But Facebook’s commitment to children’s safety is still in question as it continues to develop a separate version of Instagram for kids aged under 13.
The company received significant backlash after the initial announcement in May. In fact, more than 40 US Attorneys General banded together to ask Facebook to stop building the under-13s version of Instagram, citing privacy and health concerns.
Privacy and advertising
Online default settings matter. They set expectations for how we should behave online, and many of us never shift away from these expectations by changing our default settings.
Adult accounts on Instagram are public by default. Facebook’s shift to making under-16 accounts private by default means these users will need to actively change their settings if they want a public profile. Existing under-16 users with public accounts will also get a prompt asking if they want to make their account private.
These changes normalise privacy and will encourage young users to focus their interactions more on their circles of friends and followers they approve. Such a change could go a long way in helping young people navigate online privacy.
Facebook has also limited the ways in which advertisers can target Instagram users under age 18 (or older in some countries). Instead of targeting specific users based on their interests gleaned via data collection, advertisers can now only broadly reach young people by focusing ads in terms of age, gender and location.
This change follows recently publicised research that showed Facebook was allowing advertisers to target young users with risky interests — such as smoking, vaping, alcohol, gambling and extreme weight loss — with age-inappropriate ads.
This is particularly worrying, given Facebook’s admission there is “no foolproof way to stop people from misrepresenting their age” when joining Instagram or Facebook. The apps ask for date of birth during sign-up, but have no way of verifying responses. Any child who knows basic arithmetic can work out how to bypass this gateway.
Of course, Facebook’s new changes do not stop Facebook itself from collecting young users’ data. And when an Instagram user becomes a legal adult, all of their data collected up to that point will then likely inform an incredibly detailed profile which will be available to facilitate Facebook’s main business model: extremely targeted advertising.
Deploying Instagram’s top dad
Facebook has been highly strategic in how it released news of its recent changes for young Instagram users. In contrast with Facebook’s chief executive Mark Zuckerberg, Instagram’s head Adam Mosseri has turned his status as a parent into a significant element of his public persona.
Since Mosseri took over after Instagram’s creators left Facebook in 2018, his profile has consistently emphasised he has three young sons, his curated Instagram stories include #dadlife and Lego, and he often signs off Q&A sessions on Instagram by mentioning he needs to spend time with his kids.
Adam Mosseri’s Instagram Profile on July 30 2021. Instagram
When Mosseri posted about the changes for under-16 Instagram users, he carefully framed the news as coming from a parent first, and the head of one of the world’s largest social platforms second. Similar to many influencers, Mosseri knows how to position himself as relatable and authentic.
Age verification and ‘potentially suspicious’ adults
In a paired announcement on July 27, Facebook’s vice-president of youth products Pavni Diwanji announced Facebook and Instagram would be doing more to ensure under-13s could not access the services.
Diwanji said Facebook was using artificial intelligence algorithms to stop “adults that have shown potentially suspicious behavior” from being able to view posts from young people’s accounts, or the accounts themselves. But Facebook has not offered an explanation as to how a user might be found to be “suspicious”.
Diwanji notes the company is “building similar technology to find and remove accounts belonging to people under the age of 13”. But this technology isn’t being used yet.
It’s reasonable to infer Facebook probably won’t actively remove under-13s from either Instagram or Facebook until the new Instagram For Kids app is launched — ensuring those young customers aren’t lost to Facebook altogether.
Despite public backlash, Diwanji’s post confirmed Facebook is indeed still building “a new Instagram experience for tweens”. As I’ve argued in the past, an Instagram for Kids — much like Facebook’s Messenger for Kids before it — would be less about providing a gated playground for children and more about getting children familiar and comfortable with Facebook’s family of apps, in the hope they’ll stay on them for life.
A Facebook spokesperson told The Conversation that a feature introduced in March prevents users registered as adults from sending direct messages to users registered as teens who are not following them.
“This feature relies on our work to predict people’s ages using machine learning technology, and the age people give us when they sign up,” the spokesperson said.
They said “suspicious accounts will no longer see young people in ‘Accounts Suggested for You’, and if they do find their profiles by searching for them directly, they won’t be able to follow them”.
Resources for parents and teens
For parents and teen Instagram users, the recent changes to the platform are a useful prompt to begin or to revisit conversations about privacy and safety on social media.
Regarding Instagram for Kids, a Facebook spokesperson told The Conversation the company hoped to “create something that’s really fun and educational, with family friendly safety features”.
But the fact that this app is still planned means Facebook can’t accept the most straightforward way of keeping young children safe: keeping them off Facebook and Instagram altogether.
When it was launched on October 6, 2010 by Kevin Systrom and Mike Krieger, Instagram was an iPhone-only app. The user could take photos (and only take photos — the app could not load existing images from the phone’s gallery) within a square frame. These could be shared, with an enhancing filter if desired. Other users could comment or like the images. That was it.
As we chronicle in our book, the platform has grown rapidly and been at the forefront of an increasingly visual social media landscape.
In 2012, Facebook purchased Instagram in a deal worth US$1 billion (A$1.4 billion), which in retrospect probably seems cheap. Instagram is now one of the most profitable jewels in the Facebook crown.
Instagram has integrated new features over time, but it did not invent all of them. Instagram Stories, for example, borrowed heavily from Snapchat. Similarly, IGTV is Instagram’s answer to YouTube’s longer-form video. And if the recently released Reels isn’t a TikTok clone, we’re not sure what else it could be.
Instagram is largely responsible for the rapid professionalisation of the influencer industry. Insiders estimated the influencer industry would grow to US$9.7 billion (A$13.5 billion) in 2020, though COVID-19 has since taken a toll on this as with other sectors.
As early as 2011, professional lifestyle bloggers throughout Southeast Asia were moving to Instagram, turning it into a brimming marketplace. They sold ad space via post captions and monetised selfies through sponsored products. Such vernacular commerce pre-dates Instagram’s Paid Partnership feature, which launched in late 2017.
The use of images as a primary mode of communication, as opposed to the text-based modes of the blogging era, facilitated an explosion of aspiring influencers. The threshold for turning oneself into an online brand was dramatically lowered.
Instagrammers relied more on photography and their looks — enhanced by filters and editing built into the platform.
As influencers commercialised Instagram captions and photos, those who had owned online shops turned hashtag streams into advertorial campaigns. They relied on the labour of followers to publicise their wares and amplify their reach.
Bigger businesses followed suit and so did advice from marketing experts for how best to “optimise” engagement.
In mid-2016, Instagram belatedly launched business accounts and tools, allowing companies easy access to back-end analytics. The introduction of the “swipeable carousel” of story content in early 2017 further expanded commercial opportunities for businesses by multiplying ad space per Instagram post. This year, in the tradition of Instagram corporatising user innovations, it announced Instagram Shops would allow businesses to sell products directly via a digital storefront. Users had previously done this via links.
Instagram isn’t just where we tell the visual story of ourselves, but also where we co-create each other’s stories. Nowhere is this more evident than the way parents “sharent”, posting their children’s daily lives and milestones.
Many children’s Instagram presence begins before they are even born. Sharing ultrasound photos has become a standard way to announce a pregnancy. Over 1.5 million public Instagram posts are tagged #genderreveal.
Sharenting raises privacy questions: who owns a child’s image? Can children withdraw publishing permission later?
Sharenting entails handing over children’s data to Facebook as part of the larger realm of surveillance capitalism. A saying that emerged around the same time as Instagram was born still rings true: “When something online is free, you’re not the customer, you’re the product”. We pay for Instagram’s “free” platform with our user data and our children’s data, too, when we share photos of them.
Ultimately, the last decade has seen Instagram become one of the main lenses through which we see the world, personally and politically. Users communicate and frame the lives they share with family, friends and the wider world.
Due to the global pandemic, this year’s International Communication Association conference was held online. This post shares the abstracts and short videos made for our roundtable on ‘Approaching Instagram: New Methods and Modes for Examining Visual Social Media’. Hopefully it might prove useful to those studying Instagram and thinking about methodologies. The participants in this roundtable were Crystal Abidin (Curtin University), Tim Highfield (University of Sheffield), Tama Leaver (Curtin University), Anthony McCosker (Swinburne University of Technology), Adam Suess (Griffith University), Katie Warfield (Kwantlen Polytechnic University) and Alice Witt (Queensland University of Technology).
The Panel Overview
Instagram has more than a billion users and, despite being owned by Facebook, remains a platform that’s vastly more popular with young people, and synonymous with the visual and cultural zeitgeist. However, compared to parent company Facebook, or competitors such as Twitter, Instagram has been comparatively under-studied and under-researched. Moreover, as Facebook/Instagram have limited researcher access via their APIs, newer research approaches have had to emerge, some drawing on older qualitative approaches to understand and analyse Instagram media and interactions (from images and videos to comments, emoji, hashtags, stories, and more). The eight initial participants in this roundtable, from Australia, Canada, Finland and the United Kingdom, have pioneered specific research methods for approaching Instagram (across quantitative and qualitative fields), and our intention is to broaden the discussion beyond (just) methods to larger questions and ideas of engaging with Instagram as a visual social media icon through which larger social and cultural changes and questions are necessarily explored.
Contributions set the scene for a larger discussion, examining the invisible labour of the ‘Instagram Husband’ as a highly important but almost always hidden figure, particularly mythologized by online influencers. Broader conceptual questions are also raised in terms of how the Instagram platform reconfigures time, from 24-hour Stories and looping Boomerangs to temporality measured relative to when content was posted, with Instagram becoming the centre of its own time and space. Another contributor argues that Instagram users are always creating and fashioning each other, not just themselves, using the liminal figures of the unborn (via ultrasounds) and the recently deceased as case studies where Instagram users are most obviously creating other people in their feeds. Another contributor asks how the world of art is being reconfigured by the aesthetics and practices of being ‘insta-worthy’. Another contribution asks how to move beyond hashtags as the primary method of discovering collections of content. On a different note, the practices of commercial wedding photographers are examined to ask how weddings are being reimagined and renegotiated in an era of social media visuality. Finally, important questions are raised about the content that is not visualized and not allowed on Instagram at all, and how these moderation practices can be mapped against the ‘black box’ of Instagram’s algorithms.
[1] The Instagram Husband / Crystal Abidin, Curtin University
Social media has become a canvas for the commemoration and celebration of milestones. For the highly prolific and commercially viable vocational internet celebrities known as Influencers, coupling up in a relationship is all the more significant, as it impacts their public personae, their market value to sponsors, and their engagement with followers. However, behind-the-scenes of such young women’s pristine posturing are often their romantic partners capturing viable footage from behind-the-camera, in a phenomenon known in popular discourse as “the Instagram husband”. These (often heterosexual male) romantic partners toggle between ‘commodity’ on camera and ‘creator’ off camera. Although the narrative of the Instagram Husband is usually clouded in the notions of sacrificial romance, the unremunerated work is fraught with strain. Between the domesticity of Influencers’ romantic coupling and the publicness of their branded individualism, this chapter considers the labour, tensions, and latent consequences when Influencers intertwine and commodify their relationships into viable entities. Through traditional and digital ethnographic research methods and in-depth data collection among Singaporean women Influencers and their (predominantly heterosexual) partners, the chapter contemplates the phenomenon of the Instagram Husband and its impact on visual representations of romantic relationships online.
[2] Examining Instagram time: aesthetics and experiences / Tim Highfield, University of Sheffield
Temporal concerns are critical underpinnings for the presentation and experience of popular social media platforms. Understanding and transforming the temporal is key to the operation of these platforms, becoming a means for platforms to intervene in user activity. On Instagram, temporality plays out in different ways. Ostensibly describing in-the-moment, as-it-happens sharing and live documentation, the Insta- of Instagram has long been complicated by features of the platform and cultural practices and norms which encourage different types of participation and temporal framing. This contribution focuses on how Instagram time is presented and experienced by the platform and its users, from how content appears in non-linear algorithmic feeds to aesthetics that suggest, or explicitly call back to, older technologies and eras. These create temporal contestation as, for example, the implied permanence of the photo stream is contrasted with the ephemerality of Stories, where content usually lasts for only 24 hours, and the trapped seconds-long loops of Boomerangs. This temporal contestation apparent between different features of the platform also plays out in Instagram’s aesthetics, which range from retro throwback filters to the explicit visuals of Story filters reminiscent of VHS tape and physical film. Such platformed approaches then raise questions for researchers about Instagram’s temporality, how it is experienced by its users, and how it is repositioned and reframed by the platform’s own architecture, displays, and affordances.
[3] Creating Each Other on Instagram / Tama Leaver, Curtin University
While visual social media platforms such as Instagram are, by definition, about connecting and communication between multiple people, most discussions about Instagram practices presume that accounts, profiles and content are managed by individual users with the agency to make informed choices about their activities. However, Instagram photos and videos more often than not contain other people, and thus the sharing of visual material is often a form of co-creation where the Instagram user is contributing to and shaping another person or group’s online identity (or performance). This contribution outlines some of the larger provocations that occur when examining the way loved ones use Instagram to visualize the very young and the recently deceased. Indeed, even before birth, the sharing of the 12- or 20-week ultrasound photos and gender reveal parties often sees an Instagram identity begin to be shaped by parents before a child is even born. At the other end of life, mourning and funereal media often provide some of the last images and descriptions of a person’s life, something that can prove quite controversial on Instagram. Considering these two examples, this contribution argues that content creation could, and probably should, be considered visual co-creation, and Instagram should be seen as a platform on which users fashion each other’s identities as much as their own.
[4] Navigating Instagram’s politics of visibility / Anthony McCosker, Swinburne University of Technology
This contribution reflects on several research projects that have had to negotiate Instagram’s deprecated API access, and its increasingly restrictive moderation practices limiting what the company sees as sensitive or harmful content. One project with Australian Red Cross was designed to access and analyse location data for posts engaging with humanitarian activity, in order to generate new insights and information about how to address humanitarian needs in particular locations. The other examined users’ engagement with content relating to depression through hashtag use. Both cases required the adjustment of methods to sustain the research beyond the API restrictions and enable future work to continue to draw insights about the respective research problems. I discuss the development of an inclusive hashtag practices method, data collaborative co-research practices, and visual methods that can account for situational and contextual analysis through a targeted sampling and theory building approach.
[5] Appreciating art through Instagram / Adam Suess, Griffith University
Instagram is an important application for art galleries, museums, and cultural institutions. For arts professionals it is a key tool for promotion, accessibility, participation, and the enhancing of the visitor experience. For arts educators it is an opportunity to influence the number of people who value the arts and seek lifelong learning through the aesthetic experience. Instagram also has pedagogical value in the gallery and is relevant for arts-based learning programs. There is limited research about the use of Instagram by visitors to art galleries, museums, and cultural institutions and the role it plays in their social, spatial, and aesthetic experience. This study examined the use of Instagram by visitors to the Gerhard Richter exhibition at the Queensland Gallery of Modern Art (14 October 2017 – 4 February 2018). The research project found that the use of Instagram at the gallery engaged visitors in a manner that transcended the physical space, evolving and extending their aesthetic experience. It also found that Instagram acted as a tool of influence shaping the way visitors engaged with art. This finding is significant for arts educators seeking to engage students and the community through Instagram, centred on their experience of art.
[6] Instagram Visuality and The West Coast Wedding / Katie Warfield, Kwantlen Polytechnic University
The intersection of artsy, youthful, beautiful, and playful aesthetics with corporate branding has established certain normative modes of visuality on the globally popular social media platform Instagram. Adopting a post-phenomenological lens alongside an intersectional feminist critique of the platform, this paper presents the findings of working with six commercial wedding photographers on the west coast of Canada whose photographs are often shared for clients on social media. Via interviews, photo elicitation, and participant observation, this paper teases apart the multi-stable and manifold socio-technical forces that shape Instagram visuality, or the sedimented ways of seeing shaped by Instagram and embodied and performed by image producers. This paper shows the habituation of these modes of seeing and argues that Instagram visuality is the result of various and complex intimate conversational negotiations between: discursive visual tropes (e.g. lighting, subject arrangement, and material symbols of union), material technological affordances (in-built filters, product tagging, and the grid layout of user pages), and sedimented discursive-affective “moods” (white material femininity and nature communion) that assemble to shape the normative depictions of west coast weddings.
[7] Probing the black box of content moderation on Instagram: An innovative black box methodology / Alice Witt, Queensland University of Technology
The black box around the internal workings of Instagram makes it difficult for users to trust that their expression through content is moderated, or regulated, in ways that are free from arbitrariness. Against the particular backdrop of allegations that the platform is arbitrarily removing some images depicting women’s bodies, this research develops and applies a new methodology for empirical legal evaluation of content moderation in practice. Specifically, it uses innovative digital methods, in conjunction with content and legal document analyses, to identify how discrete inputs (i.e. images) into Instagram’s moderation processes produce certain outputs (i.e. whether an image is removed or not removed). Overall, across two case studies comprising 5,924 images of female forms in total, the methodology provides a range of useful empirical results. One main finding, for example, is that the odds of removal for an expressly prohibited image depicting a woman’s body are 16.75 times higher than for one depicting a man’s body. The results ultimately suggest that concerns around the risk of arbitrariness and bias on Instagram, and, indeed, ongoing distrust of the platform among users, might not be unfounded. However, without greater transparency regarding how Instagram sets, maintains and enforces rules around content, and monitors the performance of its moderators for potential bias, it is difficult to draw explicit conclusions about which party initiates content removal, in what ways and for what reasons. This limitation, among others raised by this methodology, underlines that many vital questions of trust in content moderation on Instagram remain unanswered.
[X] Visualising the Ends of Identity: Pre-Birth and Post-Death on Instagram in Information, Communication and Society by me and Tim Highfield. This is one of the first Ends of Identity articles coming from our big Instagram dataset, analysing the first year (2014). We’ll be writing more looking at the three years we collected (2014–16) before the Instagram APIs changed and locked us out! Here’s the abstract:
This paper examines two ‘ends’ of identity online – birth and death – through the analytical lens of specific hashtags on the Instagram platform. These ends are examined in tandem in an attempt to surface commonalities in the way that individuals use visual social media when sharing information about other people. A range of emerging norms in digital discourses about birth and death are uncovered, and it is significant that in both cases the individuals being talked about cannot reply for themselves. Issues of agency in representation therefore frame the analysis. After sorting through a number of entry points, images and videos with the #ultrasound and #funeral hashtags were tracked for three months in 2014. Ultrasound images and videos on Instagram revealed a range of communication and representation strategies, most highlighting social experiences and emotional peaks. There are, however, also significant privacy issues, as a significant proportion of public accounts share personally identifiable metadata about the mother and unborn child, although these issues are not apparent in relation to funeral images. Unlike other social media platforms, grief on Instagram is found to be more about personal expressions of loss than affording spaces of collective commemoration. A range of related practices and themes, such as commerce and humour, were also documented as a part of the spectrum of activity on the Instagram platform. Norms specific to each collection emerged from this analysis, which are then compared with documented research about other social media platforms, especially Facebook.
Visual content is a critical component of everyday social media, on platforms explicitly framed around the visual (Instagram and Vine), on those offering a mix of text and images in myriad forms (Facebook, Twitter, and Tumblr), and in apps and profiles where visual presentation and provision of information are important considerations. However, despite being so prominent in forms such as selfies, looping media, infographics, memes, online videos, and more, sociocultural research into the visual as a central component of online communication has lagged behind the analysis of popular, predominantly text-driven social media. This paper underlines the increasing importance of visual elements to digital, social, and mobile media within everyday life, addressing the significant research gap in methods for tracking, analysing, and understanding visual social media as both image-based and intertextual content. In this paper, we build on our previous methodological considerations of Instagram in isolation to examine further questions, challenges, and benefits of studying visual social media more broadly, including methodological and ethical considerations. Our discussion is intended as a rallying cry and provocation for further research into visual (and textual and mixed) social media content, practices, and cultures, mindful both of the specificities of each form and, importantly, of the ongoing dialogues and interrelations between them as communication forms.
The answer, of course, is both. And 19-year-old Instagram model Essena O’Neill’s very public rejection of the inauthentic nature of social media last week can be read through both lenses.
On the one hand, O’Neill deleted her heavily trafficked Instagram, YouTube and Tumblr accounts, and re-directed her audience to her new blog decrying the artificiality of social media life. She was embraced by many for revealing the inner workings of a poorly understood social media marketplace. Deleting accounts with more than half a million followers certainly does make a statement.
On the other hand, O’Neill’s actions have also been interpreted as a rebranding effort, shifting away from the world of modelling toward a new online identity as a vegan eco-warrior.
Influencing the influencers
O’Neill was – and largely remains – what is referred to by marketers as an “influencer” or by some academics as a “microcelebrity”.
Given their large follower counts, influencers are very attractive channels for brands and marketers wanting to reach these “organic” social media audiences. Yet, while their social media channels often depict idyllic lives, O’Neill’s dramatic revelations have raised questions about the authenticity of many influencers.
Or, more specifically, questions about exactly what sort of money is changing hands, and how visible sponsored and paid posts ought to be on social media.
Clashes between authenticity and commerce have a long history on social media. A notable example occurred in 2009 when Nestlé courted influential “mommy bloggers”, effectively dividing the community between those happy to be flown to a Nestlé retreat and those who argued Nestlé’s history of unethical business practices in relation to breastfeeding were unforgivable.
More recently, influential YouTube star and fashion blogger Zoe “Zoella” Sugg faced a backlash following the revelations that her best-selling debut novel, Girl Online, was written at least in part by a ghostwriter.
Anthropologist and social media researcher Crystal Abidin has extensively studied and documented Singaporean influencers, noting a range of different practices, from explicit tags to implicit mentioning of brands, to indicate paid or sponsored posts.
Recognising these various tags and indicators requires a level of Instagram literacy that regular viewers will likely develop, but casual audiences could easily miss. Indeed, as Abidin and Mart Ots have argued, this lack of transparent standards can be understood as “the influencer’s dilemma”.
As Singaporean influencers have been around for a decade, some have aged sufficiently to shift from their own sponsored posts to endorsements featuring their children, becoming what Abidin describes as micro-microcelebrities.
Australia also has its own infant influencers, the most visible being PR CEO Roxy Jacenko’s daughter, four-year-old Instagram star Pixie Curtis. As second-generation influencers emerge, clear social norms about sponsorship and advertising transparency on Instagram become more pressing.
Leveraging authenticity
The newly launched Australian marketing company Tribe has positioned itself as a broker between influencers – “someone with 5000+ real followers” on Facebook, Twitter or Instagram – and brands.
As Tribe notes, the ACCC does not currently require individuals on social media to reveal paid posts. However, it does recommend influencers add #spon to sponsored posts to identify paid content.
Tribe influencer marketing in action. Tribe Group
The difference between a recommendation and a rule aside, a quick search reveals some 47,000 Instagram images tagged with #spon, yet many of these are not sponsored posts.
Top images tagged #spon on Instagram, 9 November 2015. Instagram
The top #spon-tagged posts on Instagram yesterday (9 November) feature influencers spruiking tea, videogames, resorts, beer and a mobile service provider, along with two pets sponsored by a dog show and, as seems fitting, a dog food company.
An explicit marker like #spon would at least make sponsored posts identifiable, but no such norm currently exists, and even Tribe only “strongly recommends” rather than mandates its use.
See through
In a post ironically titled “How To Make $$$ on Social Media”, Essena O’Neill notes that she was charging A$1,000 to feature a product on her Instagram feed, a fact she did not disclose until her recent rejection of her social media modelling past.
O’Neill’s own authenticity might not be helped by the fact that she took to Vimeo – another social media platform – and her own blog, to denounce social media.
This could be read as a clear reminder that social media isn’t inherently morally charged: the value of communication platforms depends in large part on what’s being communicated.
Moreover, O’Neill’s actions have inspired other Instagram users and influencers to add “honest” captions about the constructedness of their images; if nothing else, she has provoked a very teachable moment, potentially increasing the media literacy of many social media users.
Traditional media industries have long had regulations that ensure advertising and other content are clearly differentiated. While regulating social media is challenging, calling for social media influencers to self-regulate should not be.
Far from damaging their influence, such transparency may just add to what audiences perceive as their authenticity.
At this week’s fantastically engaging CCI Digital Methods Summer School held at Swinburne University in Melbourne, Tim Highfield and I presented a workshop about analysing visual social media, focusing on Instagram data collection and analysis. It was based, in part, on our recent First Monday paper, but also looked beyond that at ways of surfacing research questions and approaches. We were pleased with the interest in the workshop, and the really positive responses to it, so we’ve shared the slides here:
There will be more on Instagram from us later this year, but if you’re working on Instagram I’d love to hear what you’re doing; either leave a comment here or ping me an email if you want to get in touch.
Today First Monday published A Methodology for Mapping Instagram Hashtags by Tim Highfield and myself. This methodology paper explains the processes behind the various media we’ve been tracking as part of the Ends of Identity project, although the utility of the methods goes far beyond that. Beyond technical questions, we’ve included some important ethical and privacy questions that arose as we started to explore Instagram mapping. Here’s the abstract:
While social media research has provided detailed cumulative analyses of selected social media platforms and content, especially Twitter, newer platforms, apps, and visual content have been less extensively studied so far. This paper proposes a methodology for studying Instagram activity, building on established methods for Twitter research by initially examining hashtags, as common structural features to both platforms. In doing so, we outline methodological challenges to studying Instagram, especially in comparison to Twitter. Finally, we address critical questions around ethics and privacy for social media users and researchers alike, setting out key considerations for future social media research.
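For readers curious what a hashtag-first sampling step looks like in practice, here is a minimal, hypothetical sketch – not the paper’s actual pipeline. It simply extracts hashtags from post captions with a regular expression and keeps the posts matching a target tag; the post structure is assumed for illustration.

```python
import re

# Hashtags as runs of word characters following a '#'.
HASHTAG_RE = re.compile(r"#(\w+)")

def extract_hashtags(caption: str) -> set[str]:
    """Return the lowercased hashtags found in a caption."""
    return {tag.lower() for tag in HASHTAG_RE.findall(caption)}

def filter_by_hashtag(posts, target: str):
    """Keep only posts whose caption contains the target hashtag."""
    target = target.lower().lstrip("#")
    return [p for p in posts if target in extract_hashtags(p["caption"])]

# Hypothetical posts, standing in for whatever a data collection returns.
posts = [
    {"id": 1, "caption": "First scan today! #ultrasound #excited"},
    {"id": 2, "caption": "Beach day #sunset"},
    {"id": 3, "caption": "#Ultrasound at 20 weeks"},
]
matched = filter_by_hashtag(posts, "#ultrasound")  # posts 1 and 3
```

The lowercasing matters because hashtag matching on Instagram is case-insensitive; a real study would of course also grapple with the ethical and privacy questions the paper raises before collecting anything.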
The full paper is available at First Monday, fully open access, with a Creative Commons license. As always, your comments, thoughts and feedback are welcome here, or on Twitter.
If you’re interested, Axel Bruns did a great liveblog summary of the paper, and for the truly dedicated there is an mp3 audio copy of the talk. The paper itself is in the process of being written up and should be in full chapter form in a month or so; if you’d like to look over a full draft once it’s written up, just let me know.