
Category Archives: Ends of Identity

ABC Perth Spotlight forum: how to protect your privacy in an increasingly tech-driven world

I was pleased to be part of ABC Radio Perth’s Spotlight Forum on ‘How to Protect Your Privacy in an Increasingly Tech-driven World’ this morning, hosted by Nadia Mitsopoulos, and also featuring Associate Professor Julia Powles, Kathryn Gledhill-Tucker from Electronic Frontiers Australia and David Yates from Corrs Chambers Westgarth.

You can stream the Forum on the ABC website, or download here.




Instagram’s privacy updates for kids are positive. But plans for an under-13s app means profits still take precedence


By Tama Leaver, Curtin University

Facebook recently announced significant changes to Instagram for users aged under 16. New accounts will be private by default, and advertisers will be limited in how they can reach young people.

The new changes are long overdue and welcome. But Facebook’s commitment to children’s safety is still in question as it continues to develop a separate version of Instagram for kids aged under 13.

The company received significant backlash after the initial announcement in May. In fact, more than 40 US Attorneys General who usually support big tech banded together to ask Facebook to stop building the under-13s version of Instagram, citing privacy and health concerns.

Privacy and advertising

Online default settings matter. They set expectations for how we should behave online, and many of us will never change them.

Adult accounts on Instagram are public by default. Facebook’s shift to making under-16 accounts private by default means these users will need to actively change their settings if they want a public profile. Existing under-16 users with public accounts will also get a prompt asking if they want to make their account private.

These changes normalise privacy and will encourage young users to focus their interactions more on their circles of friends and followers they approve. Such a change could go a long way in helping young people navigate online privacy.

Facebook has also limited the ways in which advertisers can target Instagram users under age 18 (or older in some countries). Instead of targeting specific users based on their interests gleaned via data collection, advertisers can now only broadly reach young people by focusing ads in terms of age, gender and location.

This change follows recently publicised research that showed Facebook was allowing advertisers to target young users with risky interests — such as smoking, vaping, alcohol, gambling and extreme weight loss — with age-inappropriate ads.

This is particularly worrying, given Facebook’s admission there is “no foolproof way to stop people from misrepresenting their age” when joining Instagram or Facebook. The apps ask for date of birth during sign-up, but have no way of verifying responses. Any child who knows basic arithmetic can work out how to bypass this gateway.

Of course, Facebook’s new changes do not stop Facebook itself from collecting young users’ data. And when an Instagram user becomes a legal adult, all of their data collected up to that point will then likely inform an incredibly detailed profile which will be available to facilitate Facebook’s main business model: extremely targeted advertising.

Deploying Instagram’s top dad

Facebook has been highly strategic in how it released news of its recent changes for young Instagram users. In contrast with Facebook’s chief executive Mark Zuckerberg, Instagram’s head Adam Mosseri has turned his status as a parent into a significant element of his public persona.

Since Mosseri took over after Instagram’s creators left Facebook in 2018, his profile has consistently emphasised he has three young sons, his curated Instagram stories include #dadlife and Lego, and he often signs off Q&A sessions on Instagram by mentioning he needs to spend time with his kids.

PHOTO: Adam Mosseri’s Instagram profile on July 30 2021. (Instagram)

When Mosseri posted about the changes for under-16 Instagram users, he carefully framed the news as coming from a parent first, and the head of one of the world’s largest social platforms second. Similar to many influencers, Mosseri knows how to position himself as relatable and authentic.

Age verification and ‘potentially suspicious’ adults

In a paired announcement on July 27, Facebook’s vice-president of youth products Pavni Diwanji announced Facebook and Instagram would be doing more to ensure under-13s could not access the services.

Diwanji said Facebook was using artificial intelligence algorithms to stop “adults that have shown potentially suspicious behavior” from being able to view posts from young people’s accounts, or the accounts themselves. But Facebook has not offered an explanation as to how a user might be found to be “suspicious”.

Diwanji notes the company is “building similar technology to find and remove accounts belonging to people under the age of 13”. But this technology isn’t being used yet.

It’s reasonable to infer Facebook probably won’t actively remove under-13s from either Instagram or Facebook until the new Instagram For Kids app is launched — ensuring those young customers aren’t lost to Facebook altogether.

Despite public backlash, Diwanji’s post confirmed Facebook is indeed still building “a new Instagram experience for tweens”. As I’ve argued in the past, an Instagram for Kids — much like Facebook’s Messenger for Kids before it — would be less about providing a gated playground for children and more about getting children familiar and comfortable with Facebook’s family of apps, in the hope they’ll stay on them for life.

A Facebook spokesperson told The Conversation that a feature introduced in March prevents users registered as adults from sending direct messages to users registered as teens who are not following them.

“This feature relies on our work to predict peoples’ ages using machine learning technology, and the age people give us when they sign up,” the spokesperson said.

They said “suspicious accounts will no longer see young people in ‘Accounts Suggested for You’, and if they do find their profiles by searching for them directly, they won’t be able to follow them”.

Resources for parents and teens

For parents and teen Instagram users, the recent changes to the platform are a useful prompt to begin or to revisit conversations about privacy and safety on social media.

Instagram does provide some useful resources for parents to help guide these conversations, including a bespoke Australian version of their Parent’s Guide to Instagram created in partnership with ReachOut. There are many other online resources, too, such as CommonSense Media’s Parents’ Ultimate Guide to Instagram.

Regarding Instagram for Kids, a Facebook spokesperson told The Conversation the company hoped to “create something that’s really fun and educational, with family friendly safety features”.

But the fact that this app is still planned means Facebook can’t accept the most straightforward way of keeping young children safe: keeping them off Facebook and Instagram altogether.

The Conversation

Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Life and Death on Social Media Podcast

As part of Curtin University’s new The Future Of podcast series, I was recently interviewed about my ongoing research into pre-birth and infancy at one end, and digital death at the other, in relation to our presence(s) online. You can hear the podcast here, or the embedded player should work below:


Facebook hoping Messenger Kids will draw future users, and their data

PHOTO: A Facebook logo is reflected in a person’s eye.

Facebook has always had a problem with kids.

The US Children’s Online Privacy Protection Act (COPPA) explicitly forbids the collection of data from children under 13 without parental consent.

Rather than go through the complicated verification processes that involve getting parental consent, Facebook, like most online platforms, has previously stated that children under 13 simply cannot have Facebook accounts.

Of course, that has been one of the biggest white lies of the internet, along with clicking the little button which says you’ve read the Terms of Use; many, many kids have had Facebook accounts — or Instagram accounts (another platform wholly-owned by Facebook) — simply by lying about their birth date, which neither Facebook nor Instagram seek to verify if users indicate they’re 13 or older.

Many children have utilised some or all of Facebook’s features using their parents’ or older siblings’ accounts as well. Facebook’s internal messaging functions, and the standalone Messenger app, have at times been shared by the named adult account holder and one or more of their children.

Sometimes this will involve parent accounts connecting to each other simply so kids can Video Chat, somewhat messing up Facebook’s precious map of connections.

Enter Messenger Kids, Facebook’s new Messenger app explicitly for the under-13s. Messenger Kids is promoted as having all the fun bits, but in a more careful and controlled space directed by parental consent and safety concerns.

To use Messenger Kids, a parent or caregiver uses their own Facebook account to authorise Messenger Kids for their child. That adult then gets a new control panel in Facebook where they can approve (or not) any and all connections that child has.

Kids can video chat, message, access a pre-filtered set of animated GIFs and images, and interact in other playful ways.

PHOTO: The app has controls built into its functionality that allow parents to approve contacts. (Supplied: Facebook)

In the press release introducing Messenger Kids, Facebook emphasises that this product was designed after careful research, with a view to giving parents more control and giving kids a safe space to interact and grow as budding digital creators. Which is likely all true, but only tells part of the story.

As with all of Facebook’s changes and releases, it’s vitally important to ask: what’s in it for Facebook?

While Messenger Kids won’t show ads (to start with), it builds a level of familiarity and trust in Facebook itself. If Messenger Kids allows Facebook to become a space of humour and friendship for years before a “real” Facebook account is allowed, the odds of a child signing up once they’re eligible become much greater.

Facebook playing the long game

In an era when teens are showing less and less interest in Facebook’s main platform, Messenger Kids is part of a clear and deliberate strategy to recapture their interest. It won’t happen overnight, but Facebook’s playing the long game here.

If Messenger Kids replaces other video messaging services, then it’s also true that any person kids are talking to will need to have an active Facebook account, whether that’s mum and dad, older cousins or even great grandparents. That’s a clever way to keep a whole range of people actively using Facebook (and actively seeing the ads which make Facebook money).

Mark Zuckerberg and wife Priscilla Chan sit on a couch next to each other while reading to their son Max.
PHOTO: Mark Zuckerberg and wife Priscilla Chan read ‘Quantum Physics for Babies’ to their son Max. (Facebook: Mark Zuckerberg)

Facebook wants data about you. It wants data about your networks, connections and interactions. It wants data about your kids. And it wants data about their networks, connections and interactions, too.

When they set up Messenger Kids, parents have to provide their child’s real name. While this is consistent with Facebook’s real names policy, the flexibility to use pseudonyms or other identifiers for kids would demonstrate real commitment to carving out Messenger Kids as something and somewhere different. That’s not the path Facebook has taken.

Facebook might not use this data to sell ads to your kids today, but adding kids into the mix will help Facebook refine its maps of what you do (and stop kids using their parents’ accounts for Video Chat and messing up that data). It will also mean Facebook understands much better who has kids, how old they are, who they’re connected to, and so on.

One more rich source of data (kids) adds more depth to the data that makes Facebook tick. And makes Facebook profit. Lots of profit.

Facebook’s main app, Messenger, Instagram, and WhatsApp (all owned by Facebook) are all free to use because the data generated by users is enough to make Facebook money. Messenger Kids isn’t philanthropy; it’s the same business model, just on a longer scale.

Facebook isn’t alone in exploring variations of their apps for children.

Google, Amazon and Apple want your kids

As far back as 2013 Snapchat released SnapKidz, which basically had all the creative elements of Snapchat, but not the sharing ones. However, their kids-specific app was quietly shelved the following year, probably for lack of any sort of business model.

A teen checks her Twitter notifications.
PHOTO: The space created by Messenger Kids won’t stop cyberbullying. (ABC News: Will Ockenden )

Since early 2017, Google has also shifted to allowing kids to establish an account managed by their parents. It’s not hard to imagine why, when many children now chat with Google daily using the Google Home speakers (which, really, should be called “listeners” first and foremost).

Google Home, Amazon’s Echo and soon Apple’s soon-to-be-released HomePod all but remove the textual and tactile barriers which once prevented kids interacting directly with these online giants.

A child’s Google Account also allows parents to give them access to YouTube Kids. That said, the content that’s permissible on YouTube Kids has been the subject of a lot of attention recently.

In short, if dark parodies of Peppa Pig, in which Peppa has her teeth painfully removed to the sound of screaming, are going to upset your kids, it’s not safe to leave them alone to navigate YouTube Kids.

Nor will the space created by Messenger Kids stop cyberbullying; it might not be anonymous, but parents will only know there’s a problem if they consistently talk to their children about their online interactions.

Facebook often proves unable to regulate content effectively, in large part because it relies on algorithms and a relatively small team of people to very rapidly decide what does and doesn’t violate Facebook’s already fuzzy guidelines about acceptability. It’s unclear how Messenger Kids content will be policed, but the standard Facebook approach doesn’t seem sufficient.

At the moment, Messenger Kids is only available in the US; before it inevitably arrives in Australia and elsewhere, parents and caregivers need to decide whether they’re comfortable exchanging some of their children’s data for the functionality that the new app provides.

And, to be fair, Messenger Kids may well be very useful; a comparatively safe space where kids can talk to each other, explore tools of digital creativity, and increase their online literacies, certainly has its place.

Most important, though, is this simple reminder: Messenger Kids isn’t (just) screen time, it’s social time. And as with most new social situations, from playgrounds to pools, parent and caregiver supervision helps young people understand, navigate and make the most of those situations. The inverse is true, too: a lack of discussion about new spaces and situations means the chances of kids getting into awkward, difficult, or even dangerous situations go up exponentially.

Messenger Kids isn’t just making Facebook feel normal, familiar and safe for kids. It’s part of Facebook’s long game in terms of staying relevant, while all of Facebook’s existing issues remain.

Tama Leaver is an Associate Professor in the Department of Internet Studies at Curtin University in Perth, Western Australia.

[This piece was originally published on the ABC News website.]


Reflections and Resources from the 2017 Digitising Early Childhood Conference

Last week’s Digitising Early Childhood conference here in Perth was a fantastic event which brought together so many engaging and provocative scholars in a supportive and policy/action-orientated environment (which I suppose I should call ‘engagement and impact’-orientated in Australia right now). For a pretty well document overview of the conference itself, you can see the quite substantial tweets collected via the #digikids17 hashtag on Twitter, which I’d really encourage you to look over. My head is still buzzing, so instead to trying to synthesise everyone else’s amazing work, I’m just going to quickly point to the material that arose my three different talks in case anyone wishes to delve further.

First up, here are the slides for my keynote ‘Turning Babies into Big Data—And How to Stop It’:

TL_KeynoteIf you’d like to hear the talk that goes with the slides, there’s an audio recording you can download here. (I think these were filmed, so if a link becomes available at some point, I’ll update and post it here.)  There was a great response to my talk, which was humbling and gratifying at the same time. There was also quite a lot of press interest, too, so here’s the best pieces that are available online (and may prove a more accessible overview of some of the issues I explored):

While our argument is still being polished, the slides for this version of Crystal Abidin and my paper From YouTube to TV, and back again: Viral video child stars and media flows in the era of social media are also available:

This paper began as a discussion after our piece about Daddy O Five in The Conversation, where the complicated questions about children in media first became prominent. Crystal wasn’t able to be there in person, but did a fantastic Snapchat-recorded 5-minute intro, while I brought home the rest of the argument live. Crystal has a great background page on her website, linking this to her previous work in the area. There was also press interest in this talk, and the best piece to listen to (and hear Crystal and me in dialogue, even though we were recorded at different times, on different continents!):

Finally, as part of the Higher Degree by Research and Early Career Researcher Day which ended the conference, I presented a slightly updated version of my workshop ‘Developing a scholarly web presence & using social media for research networking’:

Overall, it was a very busy, but very rewarding conference, with new friends made, brilliant new scholarship to digest, and surely some exciting new collaborations begun!

Keynotes

[Photo of the Digitising Early Childhood Conference Keynote Speakers]


Three Upcoming Infancy Online-related events

Over the next month, I’m lucky enough to be involved in three separate events focused on infancy online, digital media and early childhood. The details …

[1] Thinking the Digital: Children, Young People and Digital Practice – Friday, 8th September, Sydney – is co-hosted by the Office of the eSafety Commissioner; Institute for Culture and Society, Western Sydney University; and Department of Media and Communications, University of Sydney. The event opens with a keynote by visiting LSE Professor Sonia Livingstone, and is followed by three sessions discussing youth, childhood and the digital age in various forms. While Sonia Livingstone is reason enough to be there, the three sessions are populated by some of the best scholars in Australia, and it should be a really fantastic discussion. I’ll be part of the second session on Rights-based Approaches to Digital Research, Policy and Practice. There are limited places and a small fee involved if you’re interested in attending, so registration is a must! To follow along on Twitter, the official hashtag is #ThinkingTheDigital.

The day before this event, Sonia Livingstone is also giving a public seminar at WSU’s Parramatta City campus if you’re able to attend on the afternoon of Thursday, 7th September.

[2] The following week is the big Digitising Early Childhood International Conference 2017, which runs 11-15 September and features a great line-up of keynotes as well as a truly fascinating range of papers on early childhood in the digital age. I’m lucky enough to be giving the conference’s first keynote on Tuesday morning, entitled ‘Turning Babies into Big Data–And How to Stop It’. I’ll also be presenting Crystal Abidin and my paper ‘From YouTube to TV, and Back Again: Viral Video Child Stars and Media Flows in the Era of Social Media‘ on the Wednesday, and running a session on the final day called ‘Strategies for Developing a Scholarly Web Presence during a Higher Degree & Early Career’ as part of the Higher Degree by Research/Early Career Researcher Day. It should be a very busy, but also incredibly engaging week! To follow tweets from the conference, the official hashtag is #digikids17.

[3] Finally, as part of Research and Innovation Week 2017 at Curtin University, at midday on Thursday 21st September I’ll be presenting a slightly longer version of my talk Turning Babies into Big Data—and How to Stop It in the Adventures in Culture and Technology series hosted by Curtin’s Centre for Culture and Technology.  This is a free talk, open to anyone, but please either RSVP to this email, or use the Facebook event page to indicate you’re coming.

ACAT Poster


When exploiting kids for cash goes wrong on YouTube: the lessons of DaddyOFive

A new piece in The Conversation from Crystal Abidin and me …


DaddyOFive parents Mike & Heather Martin issue an apology for their prank videos. / YouTube

Tama Leaver, Curtin University and Crystal Abidin, Curtin University

The US YouTube channel DaddyOFive, which features a husband and wife from Maryland “pranking” their children, has pulled all its videos and issued a public apology amid allegations of child abuse.

The “pranks” would routinely involve the parents fooling their kids into thinking they were in trouble, screaming and swearing at them, only to reveal “it was just a prank” as their children sob on camera.

Despite its removal, the content continues to circulate in summary videos from Philip DeFranco and other popular YouTubers who are critiquing the DaddyOFive channel. And you can still find videos of parents pranking their children on other channels around YouTube. But the videos also raise wider issues about children in online media, particularly where the videos make money. With over 760,000 subscribers, it is estimated that DaddyOFive earned between US$200,000 and US$350,000 each year from YouTube advertising revenue.


Philip DeFranco / WOW… We Need To Talk About This…

The rise of influencers

Kid reactions on YouTube are a popular genre, with parents uploading viral videos of their children doing anything from tasting lemons for the first time to engaging in baby speak. Such videos pre-date the internet, with America’s Funniest Home Videos (1989-) and other popular television shows capitalising on “kid moments”.

In the era of mobile devices and networked communication, the ease with which children can be documented and shared online is unprecedented. Every day parents are “sharenting”, archiving and broadcasting images and videos of their children in order to share the experience with friends.

One of us (Tama) has argued that even photos and videos shared with the best of intentions can inadvertently lead to “intimate surveillance”, where online platforms and corporations use this data to build detailed profiles of children.

YouTube and other social media have seen the rise of influencer commerce, where seemingly ordinary users start featuring products and opinions they’re paid to share. By cultivating personal brands through creating a sense of intimacy with their consumers, these followings can be strong enough for advertisers to invest in their content, usually through advertorials and product placements. While the DaddyOFive channel was clearly for-profit, the distinction between genuine and paid content is often far from clear.

From the womb to celebrity

As with DaddyOFive, these influencers can include entire families, including children whose rights to participate, or choose not to participate, may not always be considered. In some cases, children themselves can be the star, becoming microcelebrities, often produced and promoted by their parents.

South Korean toddler Yebin, for instance, first went viral as a three-year-old in 2014 in a video where her mom was teaching her to avoid strangers. Since then, Yebin and her younger brother have been signed to influencer agencies to manage their content, based on the reach of their channel which has accumulated over 21 million views.


Baby Yebin / Mom Teaches Cute Korean baby Yebin a Life Lesson.

As viral videos become marketable and kid reaction videos become more lucrative, this may well drive more and more elaborate situations and set-ups. Yet, despite their prominence on social media, such children in internet-famous families are not clearly covered by the traditional workplace standards (such as child labour laws and the Coogan Law in the US) which historically protected child stars in mainstream media industries from exploitation.

This is especially concerning since not only are adult influencers featuring their children in advertorials and commercial content, but some are even grooming a new generation of “micro-microcelebrities” whose celebrity and careers begin in the womb.

In the absence of any formal guidelines for the child stars of social media, it is peers and corporate platforms that are policing the welfare of young children. Prominent YouTube influencers have rallied to denounce the parents behind DaddyOFive, accusing them of child abuse, and have also leveraged their influence to report them to child protective services. YouTube has also reportedly responded by initially pulling advertising from the channel. YouTubers collectively demonstrating a shared moral position is undoubtedly helpful.

Greater transparency

The question of children, commerce and labour on social media is far from limited to YouTube. Australian PR director Roxy Jacenko has, for example, defended herself against accusations of exploitation after launching and managing a commercial Instagram account for her young daughter Pixie, who at three years old was dubbed the “Princess of Instagram”. And while Jacenko’s choices for Pixie may differ from many other parents’, at least as someone in PR she is in a position to make informed and articulate choices about her daughter’s presence on social media.

Already some influencers are assuring audiences that child participation is voluntary, enjoyable, and optional by broadcasting behind-the-scenes footage.

Television, too, is making the most of children on social media. The Ellen DeGeneres Show, for example, regularly mines YouTube for viral videos starring children in order to invite them as guests on the show. Often they are invited to replicate their viral act for a live audience, and the show disseminates these program clips on its corporate YouTube channel, sometimes contracting viral YouTube children with high attention value to star in their own recurring segments on the show.


Sophia and Rosie Grace featured on Ellen after their viral Nicki Minaj video.

Ultimately, though, children appearing on television are subject to laws and regulations that attempt to protect their well-being. On for-profit channels on YouTube and other social media platforms, there is little transparency about the role children are playing, the conditions of their labour, and how (and if) they are being compensated financially.

Children may be a one-off in parents’ videos, or the star of the show, but across this spectrum, social media like YouTube need rules to ensure that children’s participation is transparent and their well-being paramount.

Tama Leaver, Associate Professor in Internet Studies, Curtin University and Crystal Abidin, Adjunct Research Fellow at the Centre for Culture and Technology (CCAT), Curtin University

This article was originally published on The Conversation. Read the original article.


Saving The Dead? Digital Legacy Planning and posthumous profiles

On Friday, 7 April at 4pm I’ll be giving a public talk entitled “Saving the Dead? Digital Legacy Planning and Posthumous Profiles” as part of the John Curtin Institute of Public Policy (JCIPP) Curtin Corner series. It’ll touch on both ethical and policy issues relating to the traces left behind on digital and social media when someone dies. Here’s the abstract for the talk:


When a person dies, there exist a range of conventions and norms regarding their mourning and the ways in which their material assets are managed. These differ by culture, but the inescapability of death means every cultural group has some formalised rules about death. However, the comparable newness of social media platforms means norms regarding posthumous profiles have yet to emerge. Moreover, the usually commercial and corporate, rather than governmental, control of social media platforms leads to considerable uncertainty as to which, if any, existing laws apply to social media services. Are the photos, videos and other communication history recorded via social media assets? Can they be addressed in wills and be legally accessed by executors? Should users have the right to wholesale delete their informatic trails (or leave instructions to have their media deleted after death)? Questions of ownership, longevity, accessibility, religion and ethics are all provoked when addressing the management of a deceased user’s social media profiles. This talk will outline some of the ways that Facebook and Google currently address the death of a user, the limits of these approaches, and the coming challenges for future internet historians in addressing, accessing and understanding posthumous profiles.

It’s being held in the Council Chambers at Curtin University. If you’d like to come along, please do, registration is open now (and free).

Update (10 April 2017): the talk went well, thanks to everyone who came along. For those who’ve asked, the slides are available here.


Facebook’s accidental ‘death’ of users reminds us to plan for digital death

A little article about digital death I wrote for The Conversation

Facebook’s accidental ‘death’ of users reminds us to plan for digital death

Tama Leaver, Curtin University

The accidental “death” of Facebook founder Mark Zuckerberg and millions of other Facebook users is a timely reminder of what happens to our online content once we do pass away.

Earlier this month, Zuckerberg’s Facebook profile displayed a banner which read: “We hope the people who love Mark will find comfort in the things others share to remember and celebrate his life.” Similar banners populated profiles across the social network.

After a few hours of users finding family members, friends and themselves(!) unexpectedly declared dead, Facebook realised its widespread error. It resurrected those affected, and shelved the offending posthumous pronouncements.

For many of the 1.8 billion users of the popular social media platform, it was a powerful reminder that Facebook is an increasingly vast digital graveyard.

It’s also a reminder for all social media users to consider how they want their profiles, presences and photos managed after they pass away.

The legal uncertainty of digital assets

Your material goods are usually dealt with by an executor after you pass away.

But what about your digital assets – media profiles, photos, videos, messages and other media? Most national laws do not specifically address digital material.

As most social networks and online platforms are headquartered in the US, they tend to have “terms of use” which fiercely protect the rights of individual users, even after they have died.

Requests to access the accounts of deceased loved ones, even by their executors, are routinely denied on privacy grounds.

While most social networks, including Facebook, explicitly prohibit sharing your password or letting another person log in to your account, for a time leaving a list of your passwords for your executor seemed the only easy way to allow someone to clean up and curate your digital presence after death.

Five years ago, as the question of death on social media started to gain interest, this legal uncertainty led to an explosion of startups and services that offered solutions from storing passwords for loved ones, to leaving messages and material to be sent posthumously.

But as with so many startups, many of these services have stagnated or disappeared altogether.

Dealing with death

Public tussles with grieving parents and loved ones over access to deceased accounts have led most big social media platforms to develop their own processes for dealing with digital death.

Facebook now allows users to designate a “legacy contact” who, after your death, can change certain elements of a memorialised account. This includes managing new friend requests, changing profile pictures and pinning a notification post about your death.

But neither a legacy contact, nor anyone else, can delete older material from your profile. That remains visible forever to whoever could see it before you died.

The only other option is to leave specific instructions for your legacy contact to delete your profile in its entirety.

Instagram, owned by Facebook, allows family members to request deletion or (by default) locks the account into a memorialised state. This respects existing privacy settings and prevents anyone logging into that account or changing it in the future.

Twitter will allow verified family members to request the deletion of a deceased person’s account. It will never allow anyone to access it posthumously.

LinkedIn is very similar to Twitter and also allows family members to request the deletion of an account.

Google’s approach to death is decidedly more complicated, with most posthumous options being managed by the little-known Google Inactive Account Manager.

This tool allows a Google user to assign the data from specific Google tools (such as Gmail, YouTube and Google Photos) to either be deleted or sent to a specific contact person after a specified period of “inactivity”.

The minimum period of inactivity that a user can assign is three months, with a warning one month before the specified actions take place.

But as anyone who has ever managed an estate would know, three months is an absurdly long time to wait to access important information, including essential documents that might be stored in Gmail or Google Drive.

If, like most people, the user did not have the Inactive Account Manager turned on, Google requires a court order issued in the United States before it will consider any other requests for data or deletion of a deceased person’s account.

Planning for your digital death

The advice (above) is for just a few of the more popular social media platforms. There are many more online places where people will have accounts and profiles that may also need to be dealt with after a person’s death.

Currently, the laws in Australia and globally have not kept pace with the rapid digitisation of assets, media and identities.

Just as it’s very difficult to legally pass on a Kindle library or iTunes music collection, the question of what happens to digital assets on social media is unclear to most people.

As platforms make tools available, it is important to take note and activate these where they meet (even partially) user needs.

Equally, wills and estates should have specific instructions about how digital material – photos, videos, messages, posts and memories – should ideally be managed.

With any luck the law will catch up by the time these wills get read.


Tama Leaver, Associate Professor in Internet Studies, Curtin University

This article was originally published on The Conversation. Read the original article.

