Seeing the referendum to give a Voice to Aboriginal and Torres Strait Islander peoples profoundly defeated across Australia today is heart-breaking and confusing.
My heart goes out to all Australians feeling let down, but especially, of course, to the Indigenous people of this country for whom this would have been, at least, one small step in the right direction.
As someone who researches online communication, digital platforms, and how we tell stories to each other, I fear the impact of this referendum will be even wider still.
The rampant and unabashed misinformation and disinformation that washed over social media, and was then amplified and normalised as it was reported in mainstream media, is more than worrying.
Make no mistake, this was Australia’s Brexit. It was the pilot, the test, to see how far disinformation can succeed in campaigning in this country. And succeed it did.
In the UK, the pretty devastating economic impact of Brexit has revealed the lies that drove campaigning for it (as have former campaigners who admitted the truth was no barrier for them).
I fear most non-Indigenous Australians will not have as clear and unambiguous a sign that they’ve been lied to, at least this time.
In Australia, the mechanisms of disinformation have now been tested, polished, refined and sharpened. They will be a force to be reckoned with in all coming elections. And our electoral laws lack the teeth to do almost anything about that right now.
I do not believe that today’s result is just down to disinformation, but I do believe it played a significant role. I’m not sure if it changed the outcome, but I’m not sure it didn’t, either.
Research examining early campaigning around the Voice warned of unprecedented levels of misinformation. More will look back after this result.
But before another election comes along, we need more than just research. We need more than improved digital literacies, although those are profoundly necessary.
We need critical thinking like never before; we need to equip people to make informed choices by being able to spot bullshit in its myriad forms.
I am under no illusion that this means people will agree, but they deserve the tools to make an actually informed choice, not a coerced one. Social media isn't just entertainment; it's our political sphere. And messages don't stay on social media: they might start digital, but they travel across all media, old and new.
I know this is a rant after a profoundly disappointing referendum, and probably not the best expressed one. But there is so much work to do if this country is not to be even more assailed by weaponised disinformation at every turn.
I was pleased to join Sarah Tallier and the team from The Future Of podcast to discuss how Twitter has changed since being purchased by Elon Musk, what this means for Twitter as some form of public sphere, and what alternatives are emerging (Mastodon!).
Will Twitter ever be the same since Elon Musk’s takeover? And what impact will his changes have on users, free speech and (dis)information?
Twitter is one of the most influential speech platforms in the world – as of 2022, it had approximately 450 million monthly active users. But its takeover by Elon Musk has sparked concerns about social media regulation and Twitter’s ability to remain a ‘proxy for public opinion’.
To explore this topic, Sarah is joined by Tama Leaver, Professor of Internet Studies at Curtin University.
- Why does Twitter matter? [00:48]
- Elon rewinds content regulation [06:54]
- Twitter’s political clout [10:16]
- Make the internet democratic again [11:28]
- What is Mastodon? [15:29]
- Can we ever really trust the internet? [17:47]
And there’s a transcript here.
As new technologies emerge, educators can either help students think about the best practical and ethical uses of these tools, or hide their heads in the sand and hope it’ll be someone else’s problem.
It’s incredibly disappointing to see the Western Australian Department of Education force every state teacher into the heads-in-the-sand camp by banning ChatGPT in state schools.
Generative AI is here to stay. By the time they graduate, our kids will be in jobs where these systems are important creative and productive tools, in the workplace and in creative spaces.
Education should be arming our kids with the critical skills to use, evaluate and extend the uses and outputs of generative AI in an ethical way. Not be forced to try them out behind closed doors at home because our education system is paranoid that every student will somehow want to use these to cheat.
For many students, using these tools to cheat probably never occurred to them until they saw headlines about it in the wake of WA joining a number of other states in this reactionary ban.
Young people deserve to be part of the conversation about generative AI tools, and to help think about and design the best practical and ethical uses for the future.
Schools should be places where those conversations can flourish. Having access to the early versions of tomorrow’s tools today is vital to helping those conversations start.
Sure, getting around a school firewall takes one kid with a smartphone using it as a hotspot, or simply using a VPN. But they shouldn’t need to resort to that. Nor should students from more affluent backgrounds be more able to circumvent these bans than others.
Digital and technological literacies are part of the literacy every young person will need to flourish tomorrow. Education should be the bastion equipping young people for the world they’re going to be living in. Not trying to prevent them thinking about it at all.
[Image: “Learning with technology” generated by Lexica, 1 February 2023]
Update: Here’s an audio file of an AI speech synthesis tool by Eleven Labs reading this blog post:
I was pleased to join Associate Professor Crystal Abidin as a panellist on the ABC Perth Radio Spotlight Forum on social media, child influencers and keeping your kids safe online. It was a wide-ranging discussion that really highlighted community interest in, and concern about, ensuring our young people have the best access to opportunities online while minimising the risks involved.
You can listen to a recording of the broadcast here.
Coroner finds social media contributed to 14-year-old Molly Russell’s death. How should parents and platforms react?
Last week, London coroner Andrew Walker delivered his findings from the inquest into 14-year-old schoolgirl Molly Russell’s death, concluding she “died from an act of self harm while suffering from depression and the negative effects of online content”.
The inquest heard Molly had used social media, specifically Instagram and Pinterest, to view large amounts of graphic content related to self-harm, depression and suicide in the lead-up to her death in November 2017.
The findings are a damning indictment of the big social media platforms. What should they be doing in response? And how should parents react in light of these events?
Social media use carries risk
The social media landscape of 2022 is different to the one Molly experienced in 2017. Indeed, the initial public outcry after her death saw many changes to Instagram and other platforms to try and reduce material that glorifies depression or self-harm.
Instagram, for example, banned graphic self-harm images, made it harder to search for non-graphic self-harm material, and started providing information about getting help when users made certain searches.
BBC journalist Tony Smith noted that the press team for Instagram’s parent company Meta requested that journalists make clear these sorts of images are no longer hosted on its platforms. Yet Smith found some of this content was still readily accessible today.
Also, in recent years Instagram has been found to host pro-anorexia accounts and content encouraging eating disorders. So although platforms may have made some positive changes over time, risks still remain.
That said, banning social media content is not necessarily the best approach.
What can parents do?
Here are some ways parents can address concerns about their children’s social media use.
Open a door for conversation, and keep it open
It’s not always easy to get young people to open up about what they’re feeling, but it’s clearly important to make it as easy and safe as possible for them to do so.
Research has shown creating a non-judgemental space for young people to talk about how social media makes them feel will encourage them to reach out if they need help. Also, parents and young people can often learn from each other through talking about their online experiences.
Try not to overreact
Social media can be an important, positive and healthy part of a young person’s life. It is where their peers and social groups are found, and during lockdowns was the only way many young people could support and talk to each other.
Completely banning social media may prevent young people from being a part of their peer groups, and could easily do more harm than good.
Negotiate boundaries together
Parents and young people can agree on reasonable rules for device and social media use. And such agreements can be very powerful.
They also present opportunities for parents and carers to model positive behaviours. For example, both parties might reach an agreement to not bring their devices to the dinner table, and focus on having conversations instead.
Another agreement might be to charge devices in a different room overnight so they can’t be used during normal sleep times.
What should social media platforms do?
Social media platforms have long faced a crisis of trust and credibility. Coroner Walker’s findings tarnish their reputation even further.
Now’s the time for platforms to acknowledge the risks present in the service they provide and make meaningful changes. That includes accepting regulation by governments.
More meaningful content moderation is needed
During the pandemic, more and more content moderation was automated. Automated systems work well when things are black and white, which is why they’re good at spotting extreme violence or nudity. But self-harm material is often harder to classify and moderate, and frequently depends on the context in which it’s viewed.
For instance, a picture of a young person looking at the night sky, captioned “I just want to be one with the stars”, is innocuous in many contexts and likely wouldn’t be picked up by algorithmic moderation. But it could flag an interest in self-harm if it’s part of a wider pattern of viewing.
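To make that distinction concrete, here is a minimal sketch, in Python with entirely hypothetical rules (no platform’s actual moderation system looks like this), of why a per-post check can miss what a pattern across a viewing history reveals:

```python
# Toy illustration only: hypothetical rules, not any platform's real moderation.

SELF_HARM_TERMS = {"selfharm", "suicide"}  # stand-in for a trained classifier

def flag_single_post(caption: str) -> bool:
    """Naive per-item check: only fires on explicit terms."""
    words = {w.strip(".,!?").lower() for w in caption.split()}
    return bool(words & SELF_HARM_TERMS)

def flag_viewing_pattern(recent_captions: list[str], threshold: int = 3) -> bool:
    """Context-aware check: counts borderline items across a viewing history."""
    borderline_phrases = ("one with the stars", "disappear", "no one would notice")
    hits = sum(
        any(phrase in caption.lower() for phrase in borderline_phrases)
        for caption in recent_captions
    )
    return hits >= threshold

history = [
    "I just want to be one with the stars",
    "some days I want to disappear",
    "if I left, no one would notice",
]

print(flag_single_post(history[0]))   # False: innocuous in isolation
print(flag_viewing_pattern(history))  # True: the pattern is the signal
```

Even this toy version only works because it can see the viewing history, which is exactly the context a single-post filter never has.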
Human moderators do a better job determining this context, but this also depends on how they’re resourced and supported. As social media scholar Sarah Roberts writes in her book Behind the Screen, content moderators for big platforms often work in terrible conditions, viewing many pieces of troubling content per minute, and are often traumatised themselves.
If platforms want to prevent young people seeing harmful content, they’ll need to employ better-trained, better-supported and better-paid moderators.
Harm prevention should not be an afterthought
Following the inquest findings, the new Prince and Princess of Wales astutely tweeted “online safety for our children and young people needs to be a prerequisite, not an afterthought”.
For too long, platforms have raced to get more users, and have only dealt with harms once negative press attention became unavoidable. They have been left to self-regulate for too long.
The foundation set up by Molly’s family is pushing hard for the UK’s Online Safety Bill to be accepted into law. This bill seeks to reduce the harmful content young people see, and make platforms more accountable for protecting them from certain harms. It’s a start, but there’s already more that could be done.
In Australia the eSafety Commissioner has pushed for Safety by Design, which aims to have protections built into platforms from the ground up.
If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.
QR code contact-tracing apps are a crucial part of our defence against COVID-19. But their value depends on being widely used, which in turn means people using these apps need to be confident their data won’t be misused.
WA Premier Mark McGowan’s government has enjoyed unprecedented public support for its handling of the COVID-19 pandemic thus far. But the revelation that WA Police accessed data collected by the SafeWA contact-tracing app risks undermining the WA public’s trust in their state’s contact-tracing regime.
While the federal government’s relatively expensive COVIDSafe tracking app — which was designed to work automatically via Bluetooth — has become little more than the butt of jokes, the scanning of QR codes at all kinds of venues has now become second nature to many Australians.
These contact-tracing apps work by logging the locations and times of people’s movements, with the help of unique QR codes at cafes, shops and other public buildings. Individuals scan the code with their phone’s camera, and the app allows this data to be collated across the state.
That data is hugely valuable for contact tracing, but also very personal. Using apps rather than paper-based forms greatly speeds up access to the data when it is needed. And when trying to locate close contacts of a positive COVID-19 case, every minute counts.
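To see why app-based logs speed things up so much, here is a minimal sketch of the kind of query contact tracers can run once check-ins are collated. The schema, field names and two-hour window are hypothetical illustrations, not SafeWA’s actual design:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CheckIn:
    user_id: str        # hypothetical identifier for the person checking in
    venue_id: str       # decoded from the venue's unique QR code
    scanned_at: datetime

def close_contacts(log: list[CheckIn], case: CheckIn,
                   window: timedelta = timedelta(hours=2)) -> set[str]:
    """Everyone who scanned in at the same venue within `window` of the case."""
    return {
        c.user_id for c in log
        if c.venue_id == case.venue_id
        and abs(c.scanned_at - case.scanned_at) <= window
        and c.user_id != case.user_id
    }

# With paper forms, answering this question means collecting and reading
# sign-in sheets venue by venue; with a collated log it is a single query.
log = [
    CheckIn("alice", "cafe-42", datetime(2021, 2, 1, 9, 0)),
    CheckIn("bob",   "cafe-42", datetime(2021, 2, 1, 9, 45)),
    CheckIn("carol", "cafe-42", datetime(2021, 2, 1, 14, 0)),
]
print(close_contacts(log, log[0]))  # {'bob'}; carol is outside the window
```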
But this process necessarily involves the public placing their trust in governments to properly, safely and securely use personal data for the advertised purpose, and nothing else.
Australian governments have a poor track record of protecting personal data, having suffered a range of data breaches over the past few years. At the same time, negative publicity about the handling of personal data by digital and social media companies has highlighted the need for people to be careful about what data they share with apps in general.
The SafeWA app was downloaded by more than 260,000 people within days of its release, in large part because of widespread trust in the WA government’s strong track record in handling COVID-19. When the app was launched in November last year, McGowan wrote on his Facebook page that the data would “only be accessible by authorised Department of Health contact tracing personnel”.
In spite of this, it has now emerged that WA Police twice accessed SafeWA data as part of a “high-profile” murder investigation. The fact the WA government knew in April that this data was being accessed, but only informed the public in mid-June, further undermines trust in the way personal data is being managed.
McGowan today publicly criticised the police for not agreeing to stop using SafeWA data. Yet the remit of the police is to pursue any evidence they can legally access, which currently includes data collected by the SafeWA app.
It is the government’s responsibility to protect the public’s privacy via carefully written, iron-clad legislation with no loopholes. Crucially, this legislation needs to be in place before contact-tracing apps are rolled out, not afterwards.
It may well be that the state government held off on publicly disclosing details of the SafeWA data misuse until it had come up with a solution. It has now introduced a bill to prevent SafeWA data being used for any purpose other than contact tracing.
This is a welcome development, and the government will have no trouble passing the bill, given its thumping double majority. Repairing public trust might be a trickier prospect.
Trust is a premium commodity these days, and to have squandered it without adequate initial protections is a significant error.
The SafeWA app provided valuable information that sped up contact tracing in WA during Perth’s outbreak in February. There is every reason to believe that if future cases occur, continued widespread use of the app will make it easier to locate close contacts, speed up targeted testing, and either avoid or limit the need for future lockdowns.
That will depend on the McGowan government swiftly regaining the public’s trust in the app. The new legislation is a big step in that direction, but there’s a lot more work to do. Trust is hard to win, and easy to lose.