As part of Curtin University’s new The Future Of podcast series, I was recently interviewed about my ongoing research into pre-birth and infancy at one end, and digital death at the other, in relation to our presence(s) online. You can hear the podcast here, or an embedded player should work below:
Facebook has always had a problem with kids.
The US Children’s Online Privacy Protection Act (COPPA) explicitly forbids the collection of data from children under 13 without parental consent.
Rather than go through the complicated verification processes that involve getting parental consent, Facebook, like most online platforms, has previously stated that children under 13 simply cannot have Facebook accounts.
Many children have utilised some or all of Facebook’s features using their parents’ or older siblings’ accounts as well. Facebook’s internal messaging functions, and the standalone Messenger app, have at times been shared by the named adult account holder and one or more of their children.
Sometimes this will involve parent accounts connecting to each other simply so kids can Video Chat, somewhat messing up Facebook’s precious map of connections.
Enter Messenger Kids, Facebook’s new Messenger app explicitly for the under-13s. Messenger Kids is promoted as having all the fun bits, but in a more careful and controlled space directed by parental consent and safety concerns.
To use Messenger Kids, a parent or caregiver uses their own Facebook account to authorise Messenger Kids for their child. That adult then gets a new control panel in Facebook where they can approve (or not) any and all connections that child has.
Kids can video chat, message, access a pre-filtered set of animated GIFs and images, and interact in other playful ways.
In the press release introducing Messenger Kids, Facebook emphasises that this product was designed after careful research, with a view to giving parents more control, and giving kids a safe space to interact and grow as budding digital creators. Which is likely all true, but only tells part of the story.
As with all of Facebook’s changes and releases, it’s vitally important to ask: what’s in it for Facebook?
While Messenger Kids won’t show ads (to start with), it builds a level of familiarity and trust in Facebook itself. If Messenger Kids allows Facebook to become a space of humour and friendship for years before a “real” Facebook account is allowed, the odds of a child signing up once they’re eligible become much greater.
Facebook playing the long game
In an era when teens are showing less and less interest in Facebook’s main platform, Messenger Kids is part of a clear and deliberate strategy to recapture their interest. It won’t happen overnight, but Facebook’s playing the long game here.
If Messenger Kids replaces other video messaging services, then it’s also true that any person kids are talking to will need to have an active Facebook account, whether that’s mum and dad, older cousins or even great grandparents. That’s a clever way to keep a whole range of people actively using Facebook (and actively seeing the ads which make Facebook money).
Facebook wants data about you. It wants data about your networks, connections and interactions. It wants data about your kids. And it wants data about their networks, connections and interactions, too.
When they set up Messenger Kids, parents have to provide their child’s real name. While this is consistent with Facebook’s real names policy, the flexibility to use pseudonyms or other identifiers for kids would demonstrate real commitment to carving out Messenger Kids as something and somewhere different. That’s not the path Facebook has taken.
Facebook might not use this data to sell ads to your kids today, but adding kids into the mix will help Facebook refine its maps of what you do (and stop kids using their parents’ accounts for Video Chat, messing up that data). It will also mean Facebook understands much better who has kids, how old they are, who they’re connected to, and so on.
One more rich source of data (kids) adds more depth to the data that makes Facebook tick. And make Facebook profit. Lots of profit.
Facebook’s main app, Messenger, Instagram, and WhatsApp (all owned by Facebook) are all free to use because the data generated by users is enough to make Facebook money. Messenger Kids isn’t philanthropy; it’s the same business model, just played out over a longer timescale.
Facebook isn’t alone in exploring variations of their apps for children.
Google, Amazon and Apple want your kids
As far back as 2013 Snapchat released SnapKidz, which basically had all the creative elements of Snapchat, but not the sharing ones. However, its kids-specific app was quietly shelved the following year, probably for lack of any sort of business model.
Since early 2017, Google has also shifted to allowing kids to establish an account managed by their parents. It’s not hard to imagine why, when many children now chat with Google daily using the Google Home speakers (which, really, should be called “listeners” first and foremost).
Google Home, Amazon’s Echo and Apple’s soon-to-be-released HomePod all but remove the textual and tactile barriers which once prevented kids interacting directly with these online giants.
A child’s Google Account also allows parents to give them access to YouTube Kids. That said, the content that’s permissible on YouTube Kids has been the subject of a lot of attention recently.
In short, if dark parodies of Peppa Pig, in which Peppa has her teeth painfully removed to the sounds of screaming, are going to upset your kids, it’s not safe to leave them alone to navigate YouTube Kids.
Nor will the space created by Messenger Kids stop cyberbullying; it might not be anonymous, but parents will only know there’s a problem if they consistently talk to their children about their online interactions.
Facebook often proves unable to regulate content effectively, in large part because it relies on algorithms and a relatively small team of people to very rapidly decide what does and doesn’t violate Facebook’s already fuzzy guidelines about acceptability. It’s unclear how Messenger Kids content will be policed, but the standard Facebook approach doesn’t seem sufficient.
At the moment, Messenger Kids is only available in the US; before it inevitably arrives in Australia and elsewhere, parents and caregivers need to decide whether they’re comfortable exchanging some of their children’s data for the functionality that the new app provides.
And, to be fair, Messenger Kids may well be very useful; a comparatively safe space where kids can talk to each other, explore tools of digital creativity, and increase their online literacies, certainly has its place.
Most important, though, is this simple reminder: Messenger Kids isn’t (just) screen time, it’s social time. And as with most new social situations, from playgrounds to pools, parent and caregiver supervision helps young people understand, navigate and make the most of those situations. The inverse is true, too: without discussion about new spaces and situations, the chances of kids getting into awkward, difficult, or even dangerous situations go up exponentially.
Messenger Kids isn’t just making Facebook feel normal, familiar and safe for kids. It’s part of Facebook’s long game in terms of staying relevant, while all of Facebook’s existing issues remain.
Tama Leaver is an Associate Professor in the Department of Internet Studies at Curtin University in Perth, Western Australia.
[This piece was originally published on the ABC News website.]
Last week’s Digitising Early Childhood conference here in Perth was a fantastic event which brought together so many engaging and provocative scholars in a supportive and policy/action-orientated environment (which I suppose I should call ‘engagement and impact’-orientated in Australia right now). For a pretty well-documented overview of the conference itself, you can see the quite substantial collection of tweets gathered via the #digikids17 hashtag on Twitter, which I’d really encourage you to look over. My head is still buzzing, so instead of trying to synthesise everyone else’s amazing work, I’m just going to quickly point to the material arising from my three different talks in case anyone wishes to delve further.
First up, here are the slides for my keynote ‘Turning Babies into Big Data—And How to Stop It’:
If you’d like to hear the talk that goes with the slides, there’s an audio recording you can download here. (I think these were filmed, so if a link becomes available at some point, I’ll update and post it here.) There was a great response to my talk, which was humbling and gratifying at the same time. There was also quite a lot of press interest, so here are the best pieces available online (which may prove a more accessible overview of some of the issues I explored):
- Rebecca Turner’s ABC article ‘Owlet Smart Sock prompts warning for parents, fears over babies’ sensitive health data’;
- Leon Compton’s interview on ABC Radio Tasmania ‘Social media and parenting: the do’s and don’ts’ (audio);
- Brigid O’Connell’s Herald Sun story, ‘Parents may be unwittingly turning babies into big data’ (paywalled); and
- Cathy O’Leary’s story in The West Australian, ‘Parents airing kids’ lives on social media’.
While our argument is still being polished, the slides for this version of my paper with Crystal Abidin, ‘From YouTube to TV, and back again: Viral video child stars and media flows in the era of social media’, are also available:
This paper began as a discussion after our piece about DaddyOFive in The Conversation, where the complicated questions about children in media first became prominent for us. Crystal wasn’t able to be there in person, but did a fantastic Snapchat-recorded 5-minute intro, while I brought home the rest of the argument live. Crystal has a great background page on her website linking this to her previous work in the area. There was also press interest in this talk, and the best piece to listen to (where you can hear Crystal and me in dialogue, even though we were recorded at different times, on different continents!) is:
- Brett Worthington’s Life Matters segment on Radio National, How social media videos turn children into viral sensations (audio)
Finally, as part of the Higher Degree by Research and Early Career Researcher Day which ended the conference, I presented a slightly updated version of my workshop ‘Developing a scholarly web presence & using social media for research networking’:
Overall, it was a very busy, but very rewarding conference, with new friends made, brilliant new scholarship to digest, and surely some exciting new collaborations begun!
Over the next month, I’m lucky enough to be involved in three separate events focused on infancy online, digital media and early childhood. The details …
Thinking the Digital: Children, Young People and Digital Practice – Friday, 8th September, Sydney – is co-hosted by the Office of the eSafety Commissioner; Institute for Culture and Society, Western Sydney University; and Department of Media and Communications, University of Sydney. The event opens with a keynote by visiting LSE Professor Sonia Livingstone, and is followed by three sessions discussing youth, childhood and the digital age in various forms. While Sonia Livingstone is reason enough to be there, the three sessions are populated by some of the best scholars in Australia, and it should be a really fantastic discussion. I’ll be part of the second session on Rights-based Approaches to Digital Research, Policy and Practice. There are limited places, and a small fee, involved if you’re interested in attending, so registration is a must! To follow along on Twitter, the official hashtag is #ThinkingTheDigital.
The day before this event, Sonia Livingstone is also giving a public seminar at WSU’s Parramatta City campus if you’re able to attend on the afternoon of Thursday, 7th September.
The following week is the big Digitising Early Childhood International Conference 2017, which runs 11-15 September and features a great line-up of keynotes as well as a truly fascinating range of papers on early childhood in the digital age. I’m lucky enough to be giving the conference’s first keynote on Tuesday morning, entitled ‘Turning Babies into Big Data–And How to Stop it’. I’ll also be presenting my paper with Crystal Abidin, ‘From YouTube to TV, and Back Again: Viral Video Child Stars and Media Flows in the Era of Social Media‘, on the Wednesday, and running a session on the final day called ‘Strategies for Developing a Scholarly Web Presence during a Higher Degree & Early Career’ as part of the Higher Degree by Research/Early Career Researcher Day. It should be a very busy, but also incredibly engaging week! To follow tweets from the conference, the official hashtag is #digikids17.
 Finally, as part of Research and Innovation Week 2017 at Curtin University, at midday on Thursday 21st September I’ll be presenting a slightly longer version of my talk Turning Babies into Big Data—and How to Stop It in the Adventures in Culture and Technology series hosted by Curtin’s Centre for Culture and Technology. This is a free talk, open to anyone, but please either RSVP to this email, or use the Facebook event page to indicate you’re coming.
A new piece in The Conversation from Crystal Abidin and me …
The “pranks” would routinely involve the parents fooling their kids into thinking they were in trouble, screaming and swearing at them, only to reveal “it was just a prank” as their children sob on camera.
Despite its removal, the content continues to circulate in summary videos from Philip DeFranco and other popular YouTubers who are critiquing the DaddyOFive channel. And you can still find videos of parents pranking their children on other channels around YouTube. But the videos also raise wider issues about children in online media, particularly where the videos make money. With over 760,000 subscribers, DaddyOFive is estimated to have earned between US$200,000 and US$350,000 each year from YouTube advertising revenue.
The rise of influencers
Kid reactions on YouTube are a popular genre, with parents uploading viral videos of their children doing anything from tasting lemons for the first time to engaging in baby speak. Such videos pre-date the internet, with America’s Funniest Home Videos (1989-) and other popular television shows capitalising on “kid moments”.
In the era of mobile devices and networked communication, the ease with which children can be documented and shared online is unprecedented. Every day parents are “sharenting”, archiving and broadcasting images and videos of their children in order to share the experience with friends.
One of us (Tama) has argued, though, that photos and videos shared with the best of intentions can inadvertently lead to “intimate surveillance”, where online platforms and corporations use this data to build detailed profiles of children.
YouTube and other social media have seen the rise of influencer commerce, where seemingly ordinary users start featuring products and opinions they’re paid to share. By cultivating personal brands and a sense of intimacy with their audiences, influencers can build followings strong enough for advertisers to invest in their content, usually through advertorials and product placements. While the DaddyOFive channel was clearly for-profit, the distinction between genuine and paid content is often far from clear.
From the womb to celebrity
As with DaddyOFive, these influencers can include entire families, including children whose rights to participate, or choose not to participate, may not always be considered. In some cases, children themselves can be the star, becoming microcelebrities, often produced and promoted by their parents.
South Korean toddler Yebin, for instance, first went viral as a three-year-old in 2014 in a video where her mom was teaching her to avoid strangers. Since then, Yebin and her younger brother have been signed to influencer agencies to manage their content, based on the reach of their channel which has accumulated over 21 million views.
On Friday, 7 April at 4pm I’ll be giving a public talk entitled “Saving the Dead? Digital Legacy Planning and Posthumous Profiles” as part of the John Curtin Institute of Public Policy (JCIPP) Curtin Corner series. It’ll touch on both ethical and policy issues relating to the traces left behind on digital and social media when someone dies. Here’s the abstract for the talk:
When a person dies, there exist a range of conventions and norms regarding their mourning and the ways in which their material assets are managed. These differ by culture, but the inescapability of death means every cultural group has some formalised rules about death. However, the comparable newness of social media platforms means norms regarding posthumous profiles have yet to emerge. Moreover, the usually commercial and corporate, rather than governmental, control of social media platforms leads to considerable uncertainty as to which, if any, existing laws apply to social media services. Are the photos, videos and other communication history recorded via social media assets? Can they be addressed in wills and be legally accessed by executors? Should users have the right to wholesale delete their informatic trails (or leave instructions to have their media deleted after death)? Questions of ownership, longevity, accessibility, religion and ethics are all provoked when addressing the management of a deceased user’s social media profiles. This talk will outline some of the ways that Facebook and Google currently address the death of a user, the limits of these approaches, and the coming challenges for future internet historians in addressing, accessing and understanding posthumous profiles.
Update (10 April 2017): the talk went well, thanks to everyone who came along. For those who’ve asked, the slides are available here.
A little article about digital death I wrote for The Conversation …
Facebook’s accidental ‘death’ of users reminds us to plan for digital death
The accidental “death” of Facebook founder Mark Zuckerberg and millions of other Facebook users is a timely reminder of what happens to our online content once we do pass away.
Earlier this month, Zuckerberg’s Facebook profile displayed a banner which read: “We hope the people who love Mark will find comfort in the things others share to remember and celebrate his life.” Similar banners populated profiles across the social network.
After a few hours of users finding family members, friends and themselves(!) unexpectedly declared dead, Facebook realised its widespread error. It resurrected those affected, and shelved the offending posthumous pronouncements.
For many of the 1.8-billion users of the popular social media platform, it was a powerful reminder that Facebook is an increasingly vast digital graveyard.
It’s also a reminder for all social media users to consider how they want their profiles, presences and photos managed after they pass away.
The legal uncertainty of digital assets
Your material goods are usually dealt with by an executor after you pass away.
But what about your digital assets – media profiles, photos, videos, messages and other media? Most national laws do not specifically address digital material.
Requests to access the accounts of deceased loved ones, even by their executors, are routinely denied on privacy grounds.
While most social networks, including Facebook, explicitly state you cannot let another person know or log in with your password, for a time leaving a list of your passwords for your executor seemed the only easy way to allow someone to clean up and curate your digital presence after death.
Five years ago, as the question of death on social media started to gain interest, this legal uncertainty led to an explosion of startups and services that offered solutions from storing passwords for loved ones, to leaving messages and material to be sent posthumously.
But as with so many startups, many of these services have stagnated or disappeared altogether.
Dealing with death
Public tussles with grieving parents and loved ones over access to deceased accounts have led most big social media platforms to develop their own processes for dealing with digital death.
Facebook now allows users to designate a “legacy contact” who, after your death, can change certain elements of a memorialised account. This includes managing new friend requests, changing profile pictures and pinning a notification post about your death.
But neither a legacy contact, nor anyone else, can delete older material from your profile. That remains visible forever to whoever could see it before you die.
The only other option is to leave specific instructions for your legacy contact to delete your profile in its entirety.
Instagram, owned by Facebook, allows family members to request deletion or (by default) locks the account into a memorialised state. This respects existing privacy settings and prevents anyone logging into that account or changing it in the future.
Twitter will allow verified family members to request the deletion of a deceased person’s account. It will never allow anyone to access it posthumously.
LinkedIn is very similar to Twitter and also allows family members to request the deletion of an account.
Google’s approach to death is decidedly more complicated, with most posthumous options being managed by the not very well known Google Inactive Account Manager.
This tool allows a Google user to assign the data from specific Google tools (such as Gmail, YouTube and Google Photos) to either be deleted or sent to a specific contact person after a specified period of “inactivity”.
The minimum period of inactivity that a user can assign is three months, with a warning one month before the specified actions take place.
But as anyone who has ever managed an estate would know, three months is an absurdly long time to wait to access important information, including essential documents that might be stored in Gmail or Google Drive.
If, like most people, the user did not have the Inactive Account Manager turned on, Google requires a court order issued in the United States before it will consider any other requests for data or deletion of a deceased person’s account.
Planning for your digital death
The advice (above) is for just a few of the more popular social media platforms. There are many more online places where people will have accounts and profiles that may also need to be dealt with after a person’s death.
Currently, the laws in Australia and globally have not kept pace with the rapid digitisation of assets, media and identities.
Just as it’s very difficult to legally pass on a Kindle library or iTunes music collection, the question of what happens to digital assets on social media is unclear to most people.
As platforms make tools available, it is important to take note and activate these where they meet (even partially) user needs.
Equally, wills and estates should have specific instructions about how digital material – photos, videos, messages, posts and memories – should ideally be managed.
With any luck the law will catch up by the time these wills get read.
Two new articles …
Visualising the Ends of Identity: Pre-Birth and Post-Death on Instagram in Information, Communication and Society, by me and Tim Highfield. This is one of the first Ends of Identity articles coming from our big Instagram dataset, analysing the first year (2014). We’ll be writing more looking at the three years we collected (2014-16) before the Instagram APIs changed and locked us out! Here’s the abstract:
This paper examines two ‘ends’ of identity online – birth and death – through the analytical lens of specific hashtags on the Instagram platform. These ends are examined in tandem in an attempt to surface commonalities in the way that individuals use visual social media when sharing information about other people. A range of emerging norms in digital discourses about birth and death are uncovered, and it is significant that in both cases the individuals being talked about cannot reply for themselves. Issues of agency in representation therefore frame the analysis. After sorting through a number of entry points, images and videos with the #ultrasound and #funeral hashtags were tracked for three months in 2014. Ultrasound images and videos on Instagram revealed a range of communication and representation strategies, most highlighting social experiences and emotional peaks. There are, however, also significant privacy issues as a significant proportion of public accounts share personally identifiable metadata about the mother and unborn child, although these issues are not apparent in relation to funeral images. Unlike other social media platforms, grief on Instagram is found to be more about personal expressions of loss rather than affording spaces of collective commemoration. A range of related practices and themes, such as commerce and humour, were also documented as a part of the spectrum of activity on the Instagram platform. Norms specific to each collection emerged from this analysis, which are then compared to document research about other social media platforms, especially Facebook.
Instagrammatics and digital methods: studying visual social media, from selfies and GIFs to memes and emoji in Communication, Research and Practice, also authored by Tim Highfield and me. This paper came out earlier in the year, but I forgot to mention it here. The abstract:
Visual content is a critical component of everyday social media, on platforms explicitly framed around the visual (Instagram and Vine), on those offering a mix of text and images in myriad forms (Facebook, Twitter, and Tumblr), and in apps and profiles where visual presentation and provision of information are important considerations. However, despite being so prominent in forms such as selfies, looping media, infographics, memes, online videos, and more, sociocultural research into the visual as a central component of online communication has lagged behind the analysis of popular, predominantly text-driven social media. This paper underlines the increasing importance of visual elements to digital, social, and mobile media within everyday life, addressing the significant research gap in methods for tracking, analysing, and understanding visual social media as both image-based and intertextual content. In this paper, we build on our previous methodological considerations of Instagram in isolation to examine further questions, challenges, and benefits of studying visual social media more broadly, including methodological and ethical considerations. Our discussion is intended as a rallying cry and provocation for further research into visual (and textual and mixed) social media content, practices, and cultures, mindful of both the specificities of each form, but also, and importantly, the ongoing dialogues and interrelations between them as communication forms.
At yesterday’s outstanding Controlling Data: Somebody Think of the Children symposium I presented the first version of my new paper “Intimate Surveillance: Normalizing Parental Monitoring and Mediation of Infants Online.” Here’s the abstract:
Parents are increasingly sharing information about infants online in various forms and capacities. In order to more meaningfully understand the way parents decide what to share about young people, and the way those decisions are being shaped, this paper focuses on two overlapping areas: parental monitoring of babies and infants through the example of wearable technologies; and parental mediation through the example of the public sharing practices of celebrity and influencer parents. The paper begins by contextualizing these parental practices within the literature on surveillance, with particular attention to online surveillance and the increasing importance of affect. It then gives a brief overview of work on pregnancy mediation, monitoring on social media, and via pregnancy apps, which is the obvious precursor to examining parental sharing and monitoring practices regarding babies and infants. The examples of parental monitoring and parental mediation will then build on the idea of “intimate surveillance” which entails close and seemingly invasive monitoring by parents. Parental monitoring and mediation contribute to the normalization of intimate surveillance to the extent that surveillance is (re)situated as a necessary culture of care. The choice to not survey infants is thus positioned, worryingly, as a failure of parenting.
An mp3 recording of the audio is available, and the slides are below:
The full version of this paper is currently under review, but if you’re interested in reading the draft, just email me.
I’m pleased to note that my chapter ‘Born Digital? Presence, Privacy, and Intimate Surveillance’ is out now in the Re-Orientation: Trans-lingual, Trans-cultural, Trans-media. Studies in narrative, language, identity, and knowledge collection edited by John Hartley and Weiguo Qu for Fudan University Press. The collection is the outcome of the fantastic Culture+8: New Times, New Zones symposium in 2014, which explored cultural synergies between different countries and locations in the +8 timezone, which includes Perth, where we hosted the event, and, of course, China.
My chapter is a key part of my Ends of Identity project; here I start to think about ‘intimate surveillance’ which is where parents and loved ones digitally document and survey their offspring, from sharing ultrasound photos to tracking newborn feeding and eating patterns. Intimate surveillance is a deliberately contradictory term: something done with the best of intentions but with possibly quite problematic outcomes. Here’s the full abstract:
The moment of birth was once the instant where parents and others first saw their child in the world, but with the advent of various imaging technologies, most notably the ultrasound, the first photos often precede birth (Lupton, 2013). In the past several decades, the question is no longer just when the first images are produced, but who should see them, via which, if any, communication platforms? Should sonograms (the ultrasound photos) be used to announce the impending arrival of a new person in the world? Moreover, while that question is ostensibly quite benign, it does usher in an era where parents and loved ones are, for the first years of life, the ones deciding what, if any, social media presence young people have before they’re in a position to start contributing to those decisions.
This chapter addresses this comparatively new online terrain, postulating the provocative term intimate surveillance, which deliberately turns surveillance on its head, raising the question of whether sharing affectionately, and with the best of intentions, can or should be understood as a form of surveillance. Firstly, this chapter will examine the idea of co-creating online identities, touching on some of the standard ways of thinking about identity online, and then starting to look at how these approaches do and do not explicitly address the creation of identity for others, especially parents creating online identities for their kids. I will then review some ideas about surveillance and counter-surveillance with a view to situating these creative parental acts in terms of the kids and others being created. Finally, this chapter will explore several examples of parental monitoring, capturing and sharing of data and media about their children, using various mobile apps, contextualising these activities not with a moral finger-waving, but by surfacing specific questions and literacies which parents may need to develop in order to use these tools mindfully, and ensure decisions made about their children’s online presences are purposeful decisions.
The chapter can be read here.