As Australia and other countries debate the merits of banning kids under 14 from social media, Meta has announced a significant “reimagining” of teenagers’ experience of Instagram.
These new “Teen Accounts” will be set to private by default, have the maximum content and messaging restrictions possible, pause notifications at night, and add new ways for teens to indicate their content preferences.
Importantly, for kids under the age of 16, changing these default settings will now require parental permission.
The move, touted as giving “peace of mind” for parents, is a welcome step – but parents and guardians should use it to talk to their kids about online spaces.
What’s different about Teen Accounts?
Teen Accounts are a combination of new features and a repackaging of a number of tools that have already been in place, but haven’t had the visibility or uptake Meta would have preferred.
Bringing these incremental changes together under the umbrella of Teen Accounts should make them more visible to teens and caregivers. The changes include:

- under-18s will have accounts set to private by default, and under-16s will only be able to change that setting with parental permission
- teens will only be able to receive messages from people they are already following or are connected to
- content restrictions and the blocking of offensive words in comments and messages will be set to the maximum setting possible
- notifications from Instagram will be turned off between 10pm and 7am
- teens will be reminded to leave Instagram after 60 minutes of use on any given day.
Some of these tools are more useful than others. A reminder to leave Instagram after 60 minutes that teens can just click past sets a fairly low bar in terms of time management.
But default account settings matter: they shape a user’s experience of a platform. Having private accounts by default, with content and messaging protections at their strongest settings, will significantly shape teens’ time on Instagram.
Stopping under-16s from changing these settings without parental or guardian consent is the biggest change, and really does differentiate the teen experience of Instagram from the adult one.
Most of these changes focus on safety and age-appropriate experiences. But it is a positive step for Meta to also include new ways for teens to indicate the content they actually prefer, instead of just relying on algorithms to infer these preferences.
Do parents and guardians have to do anything?
In promoting Teen Accounts, head of Instagram Adam Mosseri emphasised the change is aimed at giving parents “peace of mind”. It doesn’t require explicit intervention from parents for these changes to occur.
“I’m a dad, and this is a significant change to Instagram and one that I’m personally very proud of,” noted Mosseri. This is part of a longer-term strategy of positioning Mosseri as a prominent parental voice to increase his perceived credibility in this domain.
Parents or guardians will need to use their own accounts for “supervision” if they want to know what teens are doing on Instagram, or have access to more granular controls. These include setting personalised time limits, seeing an overview of a teen’s activity, or allowing any of the default settings to change.
The real opportunity for parents here is to take these changes as a chance to discuss with their children how they’re using Instagram and other social media platforms.
No matter what safety measures are in place, it’s vital for parents to build and maintain a sense of openness and trust so young people can turn to them with questions, and share difficulties and challenges they encounter online.
Meta has said the shift to Teen Accounts will reduce the level of inappropriate content teens might encounter, but that can never be absolute.
These changes minimise the risks, but don’t remove them. Ensuring young people have someone to turn to if they see, hear, or experience something that’s inappropriate or makes them uncomfortable will always be incredibly important. That’s real peace of mind.
Can’t teens still lie about their age?
Initially, Teen Accounts will apply to teens signing up for the first time. The changes will also roll out for existing teen users whose birth date Instagram already has on file.
Mosseri and Antigone Davis, Meta’s global head of safety, have both said that over time Instagram will roll out new tools to identify teenagers even if they didn’t enter an accurate birth date. These tools are not active yet, but are expected next year.
This is a welcome change if it proves accurate. However, the effectiveness of inferring or estimating age is yet to be proven.
The bigger picture
Teen Accounts are launching in Australia, Canada, the United Kingdom and the United States this week, taking up to 60 days to reach all users in those countries. Users in the rest of the world are scheduled to get Teen Accounts in January 2025.
For a long time, Instagram didn’t do enough to look after the interests of younger users. Child rights advocates have mostly endorsed Teen Accounts as a significant positive change in young people’s experiences and safety on Instagram.
Yet it remains to be seen whether Meta has done enough to address the push in Australia and elsewhere to ban young people (whether under-14s or under-16s, depending on the proposal) from all social media.
Teen Accounts are clearly a meaningful step in the right direction, but it’s worth remembering it took Instagram 14 years to get to this point. That’s too long.
Ultimately, these changes should serve as a prompt for any platform open to kids or teens to ensure they provide age-appropriate experiences. Young users can gain a lot from being online, but we must minimise the risks.
In the meantime, if these changes open the door for parents and guardians to talk to young people about their experiences online, that’s a win.
Meta’s Instagram and Threads apps are “slowly” rolling out a change that will no longer recommend political content by default. The company defines political content broadly as being “potentially related to things like laws, elections, or social topics”.
Users who follow accounts that post political content will still see such content in the normal, algorithmically sorted ways. But by default, users will not see any political content in their feeds, stories or other places where new content is recommended to them.
For users who want political recommendations to remain, Instagram has a new setting where users can turn it back on, making this an “opt-in” feature.
This change not only signals Meta’s retreat from politics and news more broadly, but also challenges any sense of these platforms being good for democracy at all. It’s also likely to have a chilling effect, stopping content creators from engaging politically altogether.
Politics: dislike
Meta has a problem with politics, but that wasn’t always the case.
In 2008 and 2012, political campaigning embraced social media, and Facebook was seen as especially important in Barack Obama’s success. The Arab Spring was painted as a social-media-led “Facebook Revolution”, although Facebook’s role in those events was widely overstated.
Since then, however, the spectre of political manipulation in the wake of the 2018 Cambridge Analytica scandal has soured many social media users on politics on these platforms.
Increasingly polarised politics, vastly increased mis- and disinformation online, and Donald Trump’s preference for social media over policy, or truth, have all taken a toll. In that context, Meta has already been reducing political content recommendations on their main Facebook platform since 2021.
Instagram and Threads hadn’t been limited in the same way, but also ran into problems. Most recently, Human Rights Watch accused Instagram in December last year of systematically censoring pro-Palestinian content. With the new content recommendation change, Meta would likely respond that it is now applying its political content policies consistently.
How the change will play out in Australia
Notably, many Australians, especially in younger age groups, find news on Instagram and other social media platforms. Sometimes they are specifically seeking out news, but often not.
Not all news is political. But now, by default, none of the news recommended on Instagram will be political. The serendipity of discovering political stories that motivate people to think or act will be lost.
Combined with Meta recently stating they will no longer pay to support the Australian news and journalism shared on their platforms, it’s fair to say Meta is seeking to be as apolitical as possible.
But with Meta positioning Threads as a potential new town square while Twitter/X burns down, it’s hard to see what a town square looks like without politics.
The lack of political news, combined with a lack of any news on Facebook, may well mean young people see even less news than before, and have less chance to engage politically.
As Mosseri put it: “Politics and hard news are important, I don’t want to imply otherwise. But my take is, from a platform’s perspective, any incremental engagement or revenue they might drive is not at all worth the scrutiny, negativity (let’s be honest), or integrity risks that come along with them.”
As with Facebook, politics has become just too hard for Instagram and Threads. The political process and democracy can be pretty hard too, but it’s now clear that’s not Meta’s problem.
A chilling effect on creators
Instagram’s announcement also reminded content creators their accounts may no longer be recommended due to posting political content.
If political posts were preventing recommendation, creators could see the exact posts and choose to remove them. Content creators live or die by the platform’s recommendations, so the implication is clear: avoid politics.
Creators already spend considerable time trying to interpret what content platforms prefer, building algorithmic folklore about which posts do best.
While that folklore is sometimes flawed, Meta couldn’t be clearer on this one: political posts will prevent audience growth, and thus make an already precarious living harder. That’s the definition of a political chilling effect.
For the audiences who turn to creators because they are perceived as relatable and authentic, the absence of political posts or positions will likely stifle discussion of political issues and, ultimately, democracy.
How do I opt back in?
For Instagram and Threads users who want these platforms to still share political content recommendations, follow these steps:
1. Go to your Instagram profile and click the three lines to access your settings.
2. Click on Suggested Content (or Content Preferences for some users).
3. Click on Political content, and then select “Don’t limit political content from people that you don’t follow”.
With much fanfare, Meta announced last week that they’re rolling out all sorts of generative AI features and experiences across a range of their apps, including Instagram. AI agents in the visage of celebrities are going to exist across Meta’s apps, with image generation and manipulation affordances of all sorts hitting Instagram and Facebook in particular.

At first glance, allowing generative AI tools to create and manipulate content on Instagram seems a little odd. In the book Instagram: Visual Social Media Cultures, which I co-authored with Tim Highfield and Crystal Abidin, one of the consistent tensions we examined within Instagram was users holding on to a sense of authenticity whilst the whole platform is driven by a logic of templatability. Anything popular becomes a template, and can swiftly become an overused cliché. In that context, can generative AI content and tools be part of an authentic visual landscape, or will these outputs and synthetic media challenge the whole point of something being Instagrammable?
More than that, though, generative AI tools are notoriously fraught, often trained on such a broad range of indiscriminate material that they tend to reproduce biases and prejudices unless very carefully tweaked. So the claim that I was most interested in was the assertion that Meta are “rolling out our new AIs slowly and have built in safeguards.” Many generative AI features aren’t yet available to users outside the US, so for this short piece I’m focused on the generative AI stickers which have rolled out globally for Instagram. Presumably this is the same underlying generative AI system, so seeing what gets generated with different requests is an interesting experiment, certainly in the early days of a public release of these tools.
Requesting an AI sticker in Instagram for ‘Professor’ produced a pleasingly broad range of genders and ethnicities. Most generative AI image tools have initially produced pages of elderly white men in glasses for that query, so it’s nice to see Meta’s efforts being more diverse. Queries for ‘lecturer’ and ‘teacher in classroom’ were similarly diverse.
Heading into slightly more problematic territory, I was curious how Meta’s AI tools were dealing with weapons and guns. Weapons are often covered by safeguards, so I tested ‘panda with a gun’, which produced some pretty intense looking pandas with firearms. After that I tried a term I know is blocked in many other generative AI tools, ‘child with a gun’, and saw my first instance of a safeguard demonstrably in action, with no result and a warning that ‘Your description may not follow our Community Guidelines. Try another description.’
However, as safeguards go, this is incredibly rudimentary, as a request for ‘child with a grenade’ readily produced stickers, including several variations which did, indeed, show a child holding a gun.
The most predictable words are blocked (including sex, slut, hooker and vomit, the latter most likely relating to Instagram’s well documented challenges in addressing disordered eating content). Thankfully gay, lesbian and queer are not blocked. Oddly, gun, shoot and other weapon words are fine by themselves. And while ‘child with a gun’ was blocked, asking for just ‘rifle’ returned a range of images, several of which looked to me like children holding guns. It may well be that the unpredictability of generative AI creations means far more robust safeguards are needed than just blocking some basic keywords.
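Meta hasn’t said how its filtering actually works, but the pattern above is what you’d expect from a simple phrase blocklist applied to the prompt rather than to the generated image. As a purely illustrative sketch (the blocklist contents and matching logic here are my assumptions, not Meta’s implementation), it’s easy to see why this approach is so leaky:

```python
# A minimal sketch of prompt-side keyword filtering. This is NOT
# Meta's actual implementation, just an illustration of why blocking
# a handful of phrases is such a rudimentary safeguard.

BLOCKLIST = {"child with a gun", "sexy", "naked"}  # hypothetical entries

def is_blocked(prompt: str) -> bool:
    """Reject a prompt if it contains any blocklisted phrase."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(is_blocked("child with a gun"))      # True  -- caught by the list
print(is_blocked("child with a grenade"))  # False -- sails straight through
print(is_blocked("rifle"))                 # False -- single weapon words unlisted
```

Because a filter like this never inspects what the model actually draws, a prompt the list didn’t anticipate can still produce exactly the imagery the blocked term was meant to prevent.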
Zooming out a bit, in a conversation on LinkedIn, Jill Walker Rettberg (author of the new book Machine Vision) was lamenting that one of the big challenges with generative AI trained on huge datasets is the lack of cultural specificity. As a proxy, I thought it’d be interesting to see how Meta’s AI handles something as banal as flags. Asking for a sticker for ‘US flag’ produced very recognisable versions of the stars and stripes. ‘Australia flag’ basically generated a mush of the Australian flag, always with a union jack, but with a random number of stars, or simply a bunch of kangaroos. Asking for ‘New Zealand flag’ got a similar mix, again with random numbers of stars, but also with the Frankenstein’s monster that was a kiwi (bird) with a union jack on its arse and a kiwi fruit for a head; the sort of monstrous hybrid that only a generative AI tool blessed with a complete and utter lack of comprehension of context can create! (That said, when the query was Aotearoa New Zealand, quite different stickers came back.)
More problematically, a search for ‘the Aboriginal flag’ (keeping in mind I’m searching from within Australia and Instagram would know that) produced some weird amalgam of hundreds of flags, none of which directly related to the Aboriginal Flag in Australia. Trying ‘the Australian Aboriginal flag’ only made matters worse, with more union jacks and what I’m guessing are supposed to be the tips of arrows. At a time when one of the biggest political issues in Australia is the upcoming referendum on the Aboriginal and Torres Strait Islander Voice, this complete lack of contextual awareness shows that Meta’s AI tools are incredibly US-centric at this time.
And while it might be argued that generative AI tools are never that good with specific contexts, trawling through US popular culture queries showed Meta’s AI tools can give incredibly accurate stickers if you’re asking for Iron Man, Star Wars or even just Ahsoka (even when the query is incorrectly spelt ‘ashoka’!).
At the moment the AI Stickers are available globally, but the broader Meta AI tools are only available in the US, so to give Meta the benefit of the doubt, perhaps they’ve got significant work planned to understand specific countries, cultures and contexts before releasing these tools more widely. Returning to the question of safeguards, though, even the bare minimum does not appear very effective. While any term with ‘sexy’ or ‘naked’ in it seems to be blocked, many variants are not. Case in point, one last example: the query ‘medusa, large breasts’ produced exactly what you’d imagine, and if I’m not mistaken, the second sticker created in the top row shows Medusa with no clothes on at all. And while that’s very different from photographs of nudity, if part of Meta’s safeguards is blocking the term ‘naked’, but their AI is producing naked figures all the same, there are clearly lingering questions about just how effective these safeguards really are.
Facebook recently announced significant changes to Instagram for users aged under 16. New accounts will be private by default, and advertisers will be limited in how they can reach young people.
The new changes are long overdue and welcome. But Facebook’s commitment to children’s safety is still in question as it continues to develop a separate version of Instagram for kids aged under 13.
The company received significant backlash after the initial announcement in May. In fact, more than 40 US Attorneys General banded together to ask Facebook to stop building the under-13s version of Instagram, citing privacy and health concerns.
Privacy and advertising
Online default settings matter. They set expectations for how we should behave online, and many of us never change our default settings to shift away from them.
Adult accounts on Instagram are public by default. Facebook’s shift to making under-16 accounts private by default means these users will need to actively change their settings if they want a public profile. Existing under-16 users with public accounts will also get a prompt asking if they want to make their account private.
These changes normalise privacy and will encourage young users to focus their interactions more on their circles of friends and followers they approve. Such a change could go a long way in helping young people navigate online privacy.
Facebook has also limited the ways in which advertisers can target Instagram users under age 18 (or older in some countries). Instead of targeting specific users based on their interests gleaned via data collection, advertisers can now only broadly reach young people by focusing ads in terms of age, gender and location.
This change follows recently publicised research that showed Facebook was allowing advertisers to target young users with risky interests — such as smoking, vaping, alcohol, gambling and extreme weight loss — with age-inappropriate ads.
This is particularly worrying, given Facebook’s admission there is “no foolproof way to stop people from misrepresenting their age” when joining Instagram or Facebook. The apps ask for date of birth during sign-up, but have no way of verifying responses. Any child who knows basic arithmetic can work out how to bypass this gateway.
Of course, Facebook’s new changes do not stop Facebook itself from collecting young users’ data. And when an Instagram user becomes a legal adult, all of their data collected up to that point will then likely inform an incredibly detailed profile which will be available to facilitate Facebook’s main business model: extremely targeted advertising.
Deploying Instagram’s top dad
Facebook has been highly strategic in how it released news of its recent changes for young Instagram users. In contrast with Facebook’s chief executive Mark Zuckerberg, Instagram’s head Adam Mosseri has turned his status as a parent into a significant element of his public persona.
Since Mosseri took over after Instagram’s creators left Facebook in 2018, his profile has consistently emphasised he has three young sons, his curated Instagram stories include #dadlife and Lego, and he often signs off Q&A sessions on Instagram by mentioning he needs to spend time with his kids.
When Mosseri posted about the changes for under-16 Instagram users, he carefully framed the news as coming from a parent first, and the head of one of the world’s largest social platforms second. Similar to many influencers, Mosseri knows how to position himself as relatable and authentic.
Age verification and ‘potentially suspicious’ adults
In a paired announcement on July 27, Facebook’s vice-president of youth products Pavni Diwanji announced Facebook and Instagram would be doing more to ensure under-13s could not access the services.
Diwanji said Facebook was using artificial intelligence algorithms to stop “adults that have shown potentially suspicious behavior” from being able to view posts from young people’s accounts, or the accounts themselves. But Facebook has not offered an explanation as to how a user might be found to be “suspicious”.
Diwanji notes the company is “building similar technology to find and remove accounts belonging to people under the age of 13”. But this technology isn’t being used yet.
It’s reasonable to infer Facebook probably won’t actively remove under-13s from either Instagram or Facebook until the new Instagram For Kids app is launched — ensuring those young customers aren’t lost to Facebook altogether.
Despite public backlash, Diwanji’s post confirmed Facebook is indeed still building “a new Instagram experience for tweens”. As I’ve argued in the past, an Instagram for Kids — much like Facebook’s Messenger for Kids before it — would be less about providing a gated playground for children and more about getting children familiar and comfortable with Facebook’s family of apps, in the hope they’ll stay on them for life.
A Facebook spokesperson told The Conversation that a feature introduced in March prevents users registered as adults from sending direct messages to users registered as teens who are not following them.
“This feature relies on our work to predict people’s ages using machine learning technology, and the age people give us when they sign up,” the spokesperson said.
They said “suspicious accounts will no longer see young people in ‘Accounts Suggested for You’, and if they do find their profiles by searching for them directly, they won’t be able to follow them”.
Resources for parents and teens
For parents and teen Instagram users, the recent changes to the platform are a useful prompt to begin or to revisit conversations about privacy and safety on social media.
Regarding Instagram for Kids, a Facebook spokesperson told The Conversation the company hoped to “create something that’s really fun and educational, with family friendly safety features”.
But the fact that this app is still planned means Facebook can’t accept the most straightforward way of keeping young children safe: keeping them off Facebook and Instagram altogether.
When it was launched on October 6, 2010 by Kevin Systrom and Mike Krieger, Instagram was an iPhone-only app. The user could take photos (and only take photos — the app could not load existing images from the phone’s gallery) within a square frame. These could be shared, with an enhancing filter if desired. Other users could comment or like the images. That was it.
As we chronicle in our book, the platform has grown rapidly and been at the forefront of an increasingly visual social media landscape.
In 2012, Facebook purchased Instagram in a deal worth US$1 billion (A$1.4 billion), which in retrospect probably seems cheap. Instagram is now one of the most profitable jewels in the Facebook crown.
Instagram has integrated new features over time, but it did not invent all of them. Stories, for instance, borrowed heavily from Snapchat. Similarly, IGTV is Instagram’s answer to YouTube’s longer-form video. And if the recently released Reels isn’t a TikTok clone, we’re not sure what else it could be.
Instagram is largely responsible for the rapid professionalisation of the influencer industry. Insiders estimated the influencer industry would grow to US$9.7 billion (A$13.5 billion) in 2020, though COVID-19 has since taken a toll on this as with other sectors.
As early as 2011, professional lifestyle bloggers throughout Southeast Asia were moving to Instagram, turning it into a brimming marketplace. They sold ad space via post captions and monetised selfies through sponsored products. Such vernacular commerce pre-dates Instagram’s Paid Partnership feature, which launched in late 2017.
The use of images as a primary mode of communication, as opposed to the text-based modes of the blogging era, facilitated an explosion of aspiring influencers. The threshold for turning oneself into an online brand was dramatically lowered.
Instagrammers relied more on photography and their looks — enhanced by filters and editing built into the platform.
As influencers commercialised Instagram captions and photos, those who had owned online shops turned hashtag streams into advertorial campaigns. They relied on the labour of followers to publicise their wares and amplify their reach.
Bigger businesses followed suit and so did advice from marketing experts for how best to “optimise” engagement.
In mid-2016, Instagram belatedly launched business accounts and tools, allowing companies easy access to back-end analytics. The introduction of the “swipeable carousel” of story content in early 2017 further expanded commercial opportunities for businesses by multiplying ad space per Instagram post. This year, in the tradition of Instagram corporatising user innovations, it announced Instagram Shops would allow businesses to sell products directly via a digital storefront. Users had previously done this via links.
Instagram isn’t just where we tell the visual story of ourselves, but also where we co-create each other’s stories. Nowhere is this more evident than the way parents “sharent”, posting their children’s daily lives and milestones.
Many children’s Instagram presence begins before they are even born. Sharing ultrasound photos has become a standard way to announce a pregnancy. Over 1.5 million public Instagram posts are tagged #genderreveal.
Sharenting raises privacy questions: who owns a child’s image? Can children withdraw publishing permission later?
Sharenting entails handing over children’s data to Facebook as part of the larger realm of surveillance capitalism. A saying that emerged around the same time as Instagram was born still rings true: “When something online is free, you’re not the customer, you’re the product”. We pay for Instagram’s “free” platform with our user data and our children’s data, too, when we share photos of them.
Ultimately, the last decade has seen Instagram become one of the main lenses through which we see the world, personally and politically. Users communicate and frame the lives they share with family, friends and the wider world.
Due to the global pandemic, this year’s International Communication Association conference was held online. This post shares the abstracts and short videos made for our roundtable on ‘Approaching Instagram: New Methods and Modes for Examining Visual Social Media’. Hopefully it might prove useful to those studying Instagram and thinking about methodologies. The participants in this roundtable were Crystal Abidin (Curtin University), Tim Highfield (University of Sheffield), Tama Leaver (Curtin University), Anthony McCosker (Swinburne University of Technology), Adam Suess (Griffith University), Katie Warfield (Kwantlen Polytechnic University) and Alice Witt (Queensland University of Technology).
The Panel Overview
Instagram has more than a billion users and, despite being owned by Facebook, remains a platform that’s vastly more popular with young people and synonymous with the visual and cultural zeitgeist. However, compared to parent-company Facebook, or competitors such as Twitter, Instagram has been comparatively under-studied and under-researched. Moreover, as Facebook and Instagram have limited researcher access via their APIs, newer research approaches have had to emerge, some drawing on older qualitative approaches to understand and analyse Instagram media and interactions (from images and videos to comments, emoji, hashtags, stories, and more). The eight initial participants in this roundtable, from Australia, Canada, Finland and the United Kingdom, have pioneered specific research methods for approaching Instagram (across quantitative and qualitative fields). Our intention is to broaden the discussion beyond (just) methods to the larger questions and ideas involved in engaging with Instagram as a visual social media icon through which larger social and cultural changes and questions are necessarily explored.
Contributions set the scene for a larger discussion, examining the invisible labour of the ‘Instagram Husband’ as a highly important but almost always hidden figure, particularly mythologized by online influencers. Broader conceptual questions are also raised in terms of how the Instagram platform reconfigures time, from 24-hour Stories and looping Boomerangs to temporality measured relative to when content was posted, with Instagram becoming the centre of its own time and space. Another contributor argues that Instagram users are always creating and fashioning each other, not just themselves, using the liminal figures of the unborn (via ultrasounds) and the recently deceased as case studies where Instagram users are most obviously creating other people in their feeds. Another contributor asks how the world of art is being reconfigured by the aesthetics and practices of being ‘insta-worthy’. Another contribution asks how to move beyond hashtags as the primary method of discovering collections of content. On a different note, the practices of commercial wedding photographers are examined to ask how weddings are being reimagined and renegotiated in an era of social media visuality. Finally, important questions are raised about the content that is not visualized and not allowed on Instagram at all, and how these moderation practices can be mapped against the ‘black box’ of Instagram’s algorithms.
[1] The Instagram Husband / Crystal Abidin, Curtin University
Social media has become a canvas for the commemoration and celebration of milestones. For the highly prolific and commercially viable vocational internet celebrities known as Influencers, coupling up in a relationship is all the more significant, as it impacts their public personae, their market value to sponsors, and their engagement with followers. However, behind-the-scenes of such young women’s pristine posturing are often their romantic partners capturing viable footage from behind-the-camera, in a phenomenon known in popular discourse as “the Instagram husband”. These (often heterosexual male) romantic partners toggle between ‘commodity’ on camera and ‘creator’ off camera. Although the narrative of the Instagram Husband is usually clouded in notions of sacrificial romance, the unremunerated work is fraught with strain. Between the domesticity of Influencers’ romantic coupling and the publicness of their branded individualism, this chapter considers the labour, tensions, and latent consequences when Influencers commodify their relationships into viable entities. Through traditional and digital ethnographic research methods and in-depth data collection among Singaporean women Influencers and their (predominantly heterosexual) partners, the chapter contemplates the phenomenon of the Instagram Husband and its impact on visual representations of romantic relationships online.
[2] Examining Instagram time: aesthetics and experiences / Tim Highfield, University of Sheffield
Temporal concerns are critical underpinnings for the presentation and experience of popular social media platforms. Understanding and transforming the temporal is key to the operation of these platforms, becoming a means for platforms to intervene in user activity. On Instagram, temporality plays out in different ways. Ostensibly describing in-the-moment, as-it-happens sharing and live documentation, the Insta- of Instagram has long been complicated by features of the platform and by cultural practices and norms which encourage different types of participation and temporal framing. This contribution focuses on how Instagram time is presented and experienced by the platform and its users, from how content appears in non-linear algorithmic feeds to aesthetics that suggest, or explicitly call back to, older technologies and eras. These create temporal contestation as, for example, the implied permanence of the photo stream is contrasted with the ephemerality of Stories, where content usually lasts for only 24 hours, and the trapped seconds-long loops of Boomerangs. This temporal contestation between different features of the platform also plays out in Instagram’s aesthetics, which range from retro throwback filters to Story filters explicitly reminiscent of VHS tape and physical film. Such platformed approaches then raise questions for researchers about Instagram’s temporality, how it is experienced by its users, and how it is repositioned and reframed by the platform’s own architecture, displays, and affordances.
[3] Creating Each Other on Instagram / Tama Leaver, Curtin University
While visual social media platforms such as Instagram are, by definition, about connecting and communication between multiple people, most discussions about Instagram practices presume that accounts, profiles and content are managed by individual users with the agency to make informed choices about their activities. However, Instagram photos and videos more often than not contain other people, and thus the sharing of visual material is often a form of co-creation where the Instagram user is contributing to and shaping another person or group’s online identity (or performance). This contribution outlines some of the larger provocations that occur when examining the way loved ones use Instagram to visualize the very young and the recently deceased. Indeed, even before birth, the sharing of 12- or 20-week ultrasound photos and gender reveal parties often sees an Instagram identity begin to be shaped by parents before a child is even born. At the other end of life, mourning and funereal media often provide some of the last images and descriptions of a person’s life, something that can prove quite controversial on Instagram. Considering these two examples, this contribution argues that content creation could, and probably should, be considered visual co-creation, and Instagram should be seen as a platform on which users fashion each other’s identities as much as their own.
[4] Navigating Instagram’s politics of visibility / Anthony McCosker, Swinburne University of Technology
This contribution reflects on several research projects that have had to negotiate Instagram’s deprecated API access, and its increasingly restrictive moderation practices limiting what the company sees as sensitive or harmful content. One project with Australian Red Cross was designed to access and analyse location data for posts engaging with humanitarian activity, in order to generate new insights and information about how to address humanitarian needs in particular locations. The other examined users’ engagement with depression-related content through hashtag use. Both cases required the adjustment of methods to sustain the research beyond the API restrictions and enable future work to continue to draw insights about the respective research problems. I discuss the development of an inclusive hashtag practices method, data-collaborative co-research practices, and visual methods that can account for situational and contextual analysis through a targeted sampling and theory building approach.
[5] Appreciating art through Instagram / Adam Suess, Griffith University
Instagram is an important application for art galleries, museums, and cultural institutions. For arts professionals it is a key tool for promotion, accessibility, participation, and the enhancing of the visitor experience. For arts educators it is an opportunity to influence the number of people who value the arts and seek lifelong learning through the aesthetic experience. Instagram also has pedagogical value in the gallery and is relevant for arts-based learning programs. There is limited research about the use of Instagram by visitors to art galleries, museums, and cultural institutions and the role it plays in their social, spatial, and aesthetic experience. This study examined the use of Instagram by visitors to the Gerhard Richter exhibition at the Queensland Gallery of Modern Art (14 October 2017 – 4 February 2018). The research project found that the use of Instagram at the gallery engaged visitors in a manner that transcended the physical space, evolving and extending their aesthetic experience. It also found that Instagram acted as a tool of influence, shaping the way visitors engaged with art. This finding is significant for arts educators seeking to engage students and the community through Instagram, centered on their experience of art.
[6] Instagram Visuality and The West Coast Wedding / Katie Warfield, Kwantlen Polytechnic University
The intersection of artsy, youthful, beautiful, and playful aesthetics alongside corporate branding has established certain normative modes of visuality on the globally popular social media platform Instagram. Adopting a post-phenomenological lens alongside an intersectional feminist critique of the platform, this paper presents the findings of working with six commercial wedding photographers on the west coast of Canada whose photographs are often shared for clients on social media. Via interviews, photo elicitation, and participant observation, this paper teases apart the multi-stable and manifold socio-technical forces that shape Instagram visuality, or the visually sedimented ways of seeing shaped by Instagram and embodied and performed by image producers. This paper shows the habituation of these modes of seeing and argues that Instagram visuality is the result of various and complex intimate conversational negotiations between: discursive visual tropes (e.g. lighting, subject arrangement, and material symbols of union), material technological affordances (in-built filters, product tagging, and the grid layout of user pages), and sedimented discursive-affective “moods” (white material femininity and nature communion) that assemble to shape the normative depictions of west coast weddings.
[7] Probing the black box of content moderation on Instagram: An innovative black box methodology / Alice Witt, Queensland University of Technology
The black box around the internal workings of Instagram makes it difficult for users to trust that their expression through content is moderated, or regulated, in ways that are free from arbitrariness. Against the particular backdrop of allegations that the platform is arbitrarily removing some images depicting women’s bodies, this research develops and applies a new methodology for empirical legal evaluation of content moderation in practice. Specifically, it uses innovative digital methods, in conjunction with content and legal document analyses, to identify how discrete inputs (i.e. images) into Instagram’s moderation processes produce certain outputs (i.e. whether an image is removed or not removed). Overall, across two case studies comprising 5,924 images of female forms in total, the methodology provides a range of useful empirical results. One main finding, for example, is that the odds of removal for an expressly prohibited image depicting a woman’s body are 16.75 times higher than for one depicting a man’s body. The results ultimately suggest that concerns around the risk of arbitrariness and bias on Instagram, and, indeed, ongoing distrust of the platform among users, might not be unfounded. However, without greater transparency regarding how Instagram sets, maintains and enforces rules around content, and monitors the performance of its moderators for potential bias, it is difficult to draw explicit conclusions about which party initiates content removal, in what ways and for what reasons. This limitation, among others raised by this methodology, underlines that many vital questions of trust in content moderation on Instagram remain unanswered.
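For readers less familiar with the statistic, the 16.75 figure is an odds ratio, which compares odds rather than raw removal rates. A brief worked illustration (the abstract does not report the underlying removal probabilities, so the example rates below are hypothetical, chosen only to reproduce the reported ratio):

```latex
% Odds ratio (OR) for removal of images of women (w) versus men (m),
% where p_w and p_m are the respective probabilities of removal:
\[
\mathrm{OR} = \frac{p_w/(1-p_w)}{p_m/(1-p_m)}
\]
% Hypothetical example: removal rates p_w = 0.40 and p_m = 0.0383 give
%   (0.40/0.60) / (0.0383/0.9617) = 0.667 / 0.0398 \approx 16.75
% Note this is larger than the simple rate ratio 0.40/0.0383 \approx 10.4,
% which is why odds ratios should not be read as "X times more likely".
```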