I was pleased to join Associate Professor Crystal Abidin as a panellist on the ABC Perth Radio Spotlight Forum on social media, child influencers and keeping your kids safe online. It was a wide-ranging discussion that really highlighted the community's interest in, and concern about, ensuring our young people have the best access to opportunities online while minimising the risks involved.
You can listen to a recording of the broadcast here.
Coroner finds social media contributed to 14-year-old Molly Russell’s death. How should parents and platforms react?
Last week, London coroner Andrew Walker delivered his findings from the inquest into 14-year-old schoolgirl Molly Russell’s death, concluding she “died from an act of self harm while suffering from depression and the negative effects of online content”.
The inquest heard Molly had used social media, specifically Instagram and Pinterest, to view large amounts of graphic content related to self-harm, depression and suicide in the lead-up to her death in November 2017.
The findings are a damning indictment of the big social media platforms. What should they be doing in response? And how should parents react in light of these events?
Social media use carries risk
The social media landscape of 2022 is different to the one Molly experienced in 2017. Indeed, the initial public outcry after her death prompted many changes to Instagram and other platforms to try to reduce material that glorifies depression or self-harm.
Instagram, for example, banned graphic self-harm images, made it harder to search for non-graphic self-harm material, and started providing information about getting help when users made certain searches.
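The last of those changes works like a search interstitial: certain queries surface support resources rather than content. Here is a deliberately simplified sketch of that pattern in Python; the query list, helpline message and function are invented for illustration and are not Instagram's actual code.

```python
# Hypothetical sketch of a search interstitial. Instagram's real
# implementation is not public; the query list and helpline message
# here are purely illustrative.
SENSITIVE_QUERIES = {"self-harm", "suicide"}

def search(query: str) -> str:
    """Return support information instead of results for flagged queries."""
    if query.lower().strip() in SENSITIVE_QUERIES:
        return "Help is available. In Australia, call Lifeline on 13 11 14."
    return f"Showing results for {query!r}"

print(search("self-harm"))  # support message, not content
print(search("sunsets"))    # normal results
```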
BBC journalist Tony Smith noted that the press team for Instagram’s parent company Meta requested that journalists make clear these sorts of images are no longer hosted on its platforms. Yet Smith found some of this content was still readily accessible today.
Also, in recent years Instagram has been found to host pro-anorexia accounts and content encouraging eating disorders. So although platforms may have made some positive changes over time, risks still remain.
That said, banning social media content is not necessarily the best approach.
What can parents do?
Here are some ways parents can address concerns about their children’s social media use.
Open a door for conversation, and keep it open
It’s not always easy to get young people to open up about what they’re feeling, but it’s clearly important to make it as easy and safe as possible for them to do so.
Research has shown creating a non-judgemental space for young people to talk about how social media makes them feel will encourage them to reach out if they need help. Also, parents and young people can often learn from each other through talking about their online experiences.
Try not to overreact
Social media can be an important, positive and healthy part of a young person's life. It is where their peers and social groups are found, and during lockdowns it was the only way many young people could support and talk to each other.
Completely banning social media may prevent young people from being a part of their peer groups, and could easily do more harm than good.
Negotiate boundaries together
Parents and young people can agree on reasonable rules for device and social media use. And such agreements can be very powerful.
They also present opportunities for parents and carers to model positive behaviours. For example, both parties might reach an agreement to not bring their devices to the dinner table, and focus on having conversations instead.
Another agreement might be to charge devices in a different room overnight so they can’t be used during normal sleep times.
What should social media platforms do?
Social media platforms have long faced a crisis of trust and credibility. Coroner Walker’s findings tarnish their reputation even further.
Now’s the time for platforms to acknowledge the risks present in the service they provide and make meaningful changes. That includes accepting regulation by governments.
More meaningful content moderation is needed
During the pandemic, more and more content moderation was automated. Automated systems work well when things are black and white, which is why they're good at spotting extreme violence or nudity. But self-harm material is often harder to classify and moderate, and its meaning often depends on the context in which it's viewed.
For instance, a picture of a young person looking at the night sky, captioned “I just want to be one with the stars”, is innocuous in many contexts and likely wouldn’t be picked up by algorithmic moderation. But it could flag an interest in self-harm if it’s part of a wider pattern of viewing.
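To illustrate why single-post rules fall short, here is a deliberately simplified keyword filter in Python. It isn't any platform's actual system; the term list and function are invented for illustration.

```python
# A deliberately simplified keyword-based moderation sketch.
# No real platform works like this; it only shows why judging one
# post in isolation misses context-dependent harm.
FLAGGED_TERMS = {"self-harm", "suicide"}  # illustrative blocklist

def flag_post(caption: str) -> bool:
    """Flag a post whose caption contains an explicit term."""
    text = caption.lower()
    return any(term in text for term in FLAGGED_TERMS)

print(flag_post("thinking about self-harm again"))        # True: caught
print(flag_post("I just want to be one with the stars"))  # False: slips through
```

The ambiguous caption slips through even though, as part of a wider viewing pattern, it may be the more worrying signal.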
Human moderators do a better job determining this context, but this also depends on how they’re resourced and supported. As social media scholar Sarah Roberts writes in her book Behind the Screen, content moderators for big platforms often work in terrible conditions, viewing many pieces of troubling content per minute, and are often traumatised themselves.
If platforms want to prevent young people seeing harmful content, they’ll need to employ better-trained, better-supported and better-paid moderators.
Harm prevention should not be an afterthought
Following the inquest findings, the new Prince and Princess of Wales astutely tweeted “online safety for our children and young people needs to be a prerequisite, not an afterthought”.
For too long, platforms have raced to get more users, and have only dealt with harms once negative press attention became unavoidable. They have been left to self-regulate for too long.
The foundation set up by Molly’s family is pushing hard for the UK’s Online Safety Bill to be accepted into law. This bill seeks to reduce the harmful content young people see, and make platforms more accountable for protecting them from certain harms. It’s a start, but there’s already more that could be done.
In Australia the eSafety Commissioner has pushed for Safety by Design, which aims to have protections built into platforms from the ground up.
If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.
QR code contact-tracing apps are a crucial part of our defence against COVID-19. But their value depends on being widely used, which in turn means people using these apps need to be confident their data won’t be misused.
WA Premier Mark McGowan’s government has enjoyed unprecedented public support for its handling of the COVID-19 pandemic thus far. But this incident risks undermining the WA public’s trust in their state’s contact-tracing regime.
While the federal government’s relatively expensive COVIDSafe tracking app — which was designed to work automatically via Bluetooth — has become little more than the butt of jokes, the scanning of QR codes at all kinds of venues has now become second nature to many Australians.
These contact-tracing apps work by logging the locations and times of people’s movements, with the help of unique QR codes at cafes, shops and other public buildings. Individuals scan the code with their phone’s camera, and the app allows this data to be collated across the state.
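SafeWA's internal data format isn't public, but a minimal sketch gives a sense of what each scan captures; the field names and helper function below are invented for illustration.

```python
# Illustrative sketch of a QR check-in record. SafeWA's actual schema
# is not public; all field names here are invented.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CheckIn:
    venue_id: str   # unique ID encoded in the venue's QR code
    user_id: str    # identifies the registered app user
    timestamp: str  # when the code was scanned

def scan(venue_id: str, user_id: str) -> CheckIn:
    """Record a scan: who was where, and when."""
    return CheckIn(venue_id, user_id, datetime.now(timezone.utc).isoformat())

# Each scan becomes one row in a central store that contact tracers
# can query by venue and time window.
print(json.dumps(asdict(scan("cafe-1234", "user-5678"))))
```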
That data is hugely valuable for contact tracing, but also very personal. Using apps rather than paper-based forms greatly speeds up access to the data when it is needed. And when trying to locate close contacts of a positive COVID-19 case, every minute counts.
But this process necessarily involves the public placing their trust in governments to properly, safely and securely use personal data for the advertised purpose, and nothing else.
Australian governments have a poor track record of protecting personal data, having suffered a range of data breaches over the past few years. At the same time, negative publicity about the handling of personal data by digital and social media companies has highlighted the need for people to be careful about what data they share with apps in general.
The SafeWA app was downloaded by more than 260,000 people within days of its release, in large part because of widespread trust in the WA government’s strong track record in handling COVID-19. When the app was launched in November last year, McGowan wrote on his Facebook page that the data would “only be accessible by authorised Department of Health contact tracing personnel”.
In spite of this, it has now emerged that WA Police twice accessed SafeWA data as part of a “high-profile” murder investigation. The fact the WA government knew in April that this data was being accessed, but only informed the public in mid-June, further undermines trust in the way personal data is being managed.
McGowan today publicly criticised the police for not agreeing to stop using SafeWA data. Yet the remit of the police is to pursue any evidence they can legally access, which currently includes data collected by the SafeWA app.
It is the government's responsibility to protect the public's privacy via carefully written, iron-clad legislation with no loopholes. Crucially, this legislation needs to be in place before contact-tracing apps are rolled out, not afterwards.
It may well be that the state government held off on publicly disclosing details of the SafeWA data misuse until it had come up with a solution. It has now introduced a bill to prevent SafeWA data being used for any purpose other than contact tracing.
This is a welcome development, and the government will have no trouble passing the bill, given its thumping double majority. Repairing public trust might be a trickier prospect.
Trust is a premium commodity these days, and to have squandered it without adequate initial protections is a significant error.
The SafeWA app provided valuable information that sped up contact tracing in WA during Perth’s outbreak in February. There is every reason to believe that if future cases occur, continued widespread use of the app will make it easier to locate close contacts, speed up targeted testing, and either avoid or limit the need for future lockdowns.
That will depend on the McGowan government swiftly regaining the public’s trust in the app. The new legislation is a big step in that direction, but there’s a lot more work to do. Trust is hard to win, and easy to lose.
Facebook Messenger and Instagram’s direct messaging services will be integrated into one system, Facebook has announced.
The merge will allow shared messaging across both platforms, as well as video calls and a range of tools drawn from each. It's currently being rolled out across countries on an opt-in basis, but hasn't yet reached Australia.
Facebook CEO Mark Zuckerberg announced plans in March last year to integrate Messenger, Instagram Direct and WhatsApp into a unified messaging experience.
At the crux of this was the goal to administer end-to-end encryption across the whole messaging “ecosystem”.
Ostensibly, this was part of Facebook’s renewed focus on privacy, in the wake of several highly publicised scandals. Most notable was its poor data protection that allowed political consulting firm Cambridge Analytica to steal data from 87 million Facebook accounts and use it to target users with political ads ahead of the 2016 US presidential election.
In a statement released yesterday on the new merge, Instagram CEO Adam Mosseri and Messenger vice president Stan Chudnovsky wrote:
… one out of three people sometimes find it difficult to remember where to find a certain conversation thread. With this update, it will be even easier to stay connected without thinking about which app to use to reach your friends and family.
While that may seem harmless, it's likely Facebook is actually attempting to make its apps inseparable, ahead of a potential anti-trust lawsuit in the US that may seek to force the company to sell off Instagram and WhatsApp.
Together, with Facebook, 24/7
The Messenger/Instagram Direct merge will extend to features rolled out during the pandemic, such as the “Watch Together” tool for Messenger. As the name suggests, this lets users watch videos together in real time. Now, both Messenger and Instagram users will be able to use it, regardless of which app they’re on.
But the merge also raises safety concerns. In the new merged messaging ecosystem, for example, a user you previously blocked on Messenger won't automatically be blocked on Instagram. Thus, the blocked person will be able to once again contact you. This could open the door to a plethora of unexpected online abuse.
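To picture the gap: if each app keeps its own blocklist and the merge joins the messaging layer without joining those lists, a blocked contact gets a fresh channel. A hypothetical sketch, not Facebook's actual logic:

```python
# Hypothetical sketch of per-app blocklists that don't carry across
# a merged inbox. This is not Facebook's actual logic.
messenger_blocked = {"harasser_01"}  # blocked on Messenger years ago
instagram_blocked = set()            # never blocked on Instagram

def can_message(sender: str, via: str) -> bool:
    """Each app consults only its own blocklist, even in a shared inbox."""
    blocked = messenger_blocked if via == "messenger" else instagram_blocked
    return sender not in blocked

print(can_message("harasser_01", via="messenger"))  # False: still blocked
print(can_message("harasser_01", via="instagram"))  # True: back in touch
```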
Why this is good for Mark Zuckerberg
This first step – and Facebook’s full roadmap for the encrypted integration of WhatsApp, Instagram Direct and Messenger – has three clear outcomes.
Firstly, end-to-end encryption means Facebook will have complete deniability for anything that travels across its messaging tools.
It won’t be able to “see” the messages. While this might be good from a user privacy perspective, it also means anything from bullying, to scams, to illegal drug sales, to paedophilia can’t be policed if it happens via these tools.
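A minimal sketch of the underlying idea, using the PyNaCl library: this is not Facebook's actual protocol (its end-to-end encryption is Signal-derived), but it shows why a relaying server can't read, and so can't police, message contents.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Not Facebook's actual Signal-derived protocol; it only demonstrates
# that a relaying server sees ciphertext, never the message.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 6?")

# The platform relays this opaque blob; without either private key,
# there is nothing here for it to inspect or moderate.
print(ciphertext.hex())

# Only Bob, holding his private key, can recover the plaintext.
print(Box(bob_key, alice_key.public_key).decrypt(ciphertext))  # b'meet at 6?'
```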
This would stop Facebook being blamed for hurtful or illegal uses of its services. As far as moderation goes, that content would effectively become "invisible" to Facebook (not to mention moderation is expensive and complicated).
This is all great news for Mark Zuckerberg, especially as Facebook stares down the barrel of potential anti-trust litigation.
Secondly, once the apps are merged, functionally they will no longer be separate platforms. They will still exist as separate apps with some separate features, but the vast amount of personal data underpinning them will live in one giant, shared database.
Deeper data integration will let Facebook know users more intimately. Moreover, it will be able to leverage this new insight to target users with more advertising and expand further.
Finally, and perhaps most concerning, is that by integrating its apps Facebook could legitimately respond to anti-trust lawsuits by saying it can’t separate Instagram or WhatsApp from the main Facebook platform – because they’re the same thing now.
And if they can’t be separated, there’s no way Facebook could sell Instagram or WhatsApp, even if it wanted to.
100 billion messages a day
With the sheer size of its user base, Facebook continues to either purchase or squash its competition. Concerns about the company being a monopoly aren't without merit.
Just a few months ago, Facebook released Reels, an Instagram-housed tool that bears a striking resemblance to TikTok, another social app sweeping the globe.
It seems this is just another example of Facebook trying to use the sheer size of its network to stifle growing competition, aided (perhaps unwittingly) by Donald Trump’s anti-China sentiment.
If competition is important for encouraging innovation and diversity, then this newest development from Facebook discourages both. It further entrenches Facebook and its services in the lives of consumers, making it harder to pull away. And that certainly isn't far from monopolistic behaviour.