
Author Archives: Tama

The Future of Twitter (Podcast)

I was pleased to join Sarah Tallier and The Future Of team to discuss how Twitter has changed since being purchased by Elon Musk, what this means for Twitter as some form of public sphere, and what alternatives are emerging (Mastodon!).

We discuss:

Will Twitter ever be the same since Elon Musk’s takeover? And what impact will his changes have on users, free speech and (dis)information?    

Twitter is one of the most influential speech platforms in the world – as of 2022, it had approximately 450 million monthly active users. But its takeover by Elon Musk has sparked concerns about social media regulation and Twitter’s ability to remain a ‘proxy for public opinion’.

To explore this topic, Sarah is joined by Tama Leaver, Professor of Internet Studies at Curtin University.

  • Why does Twitter matter? [00:48]
  • Elon rewinds content regulation [06:54]
  • Twitter’s political clout [10:16]
  • Make the internet democratic again [11:28]
  • What is Mastodon? [15:29]
  • Can we ever really trust the internet? [17:47]

And there’s a transcript here.

Banning ChatGPT in Schools Hurts Our Kids

[Image: “Learning with technology” generated by Lexica, 1 February 2023]

As new technologies emerge, educators can either help students think about the best practical and ethical uses of these tools, or hide their heads in the sand and hope it’ll be someone else’s problem.

It’s incredibly disappointing to see the Western Australian Department of Education forcing every state teacher to join the heads in the sand camp, banning ChatGPT in state schools.

Generative AI is here to stay. By the time they graduate, our kids will be working in jobs where these tools are important for creativity and productivity, in workplaces and creative spaces alike.

Education should be arming our kids with the critical skills to use, evaluate and extend the uses and outputs of generative AI in an ethical way. Not be forced to try them out behind closed doors at home because our education system is paranoid that every student will somehow want to use these to cheat.

Using these tools to cheat probably never occurred to many students until they saw headlines about it in the wake of WA joining a number of other states in this reactionary ban.

Young people deserve to be part of the conversation about generative AI tools, and to help think about and design the best practical and ethical uses for the future.

Schools should be places where those conversations can flourish. Having access to the early versions of tomorrow’s tools today is vital to helping those conversations start.

Sure, getting around a school firewall takes one kid with a smartphone using it as a hotspot, or simply using a VPN. But they shouldn’t need to resort to that. Nor should students from more affluent backgrounds be more able to circumvent these bans than others.

Digital and technological literacies are part of the literacy every young person will need to flourish tomorrow. Education should be the bastion equipping young people for the world they’re going to be living in. Not trying to prevent them thinking about it at all.


Update: Here’s an audio file of an AI speech synthesis tool by Eleven Labs reading this blog post:

Watching Musk fiddle while Twitter burns

Seeing Elon Musk pledge to reinstate Trump on Twitter understandably starts another wave of folks leaving the platform, but if we all leave Twitter, won’t it just become what Trump dreamed Parler would be?


I’ve been on Twitter for more than 15 years, and it’s the platform that has most felt like home for the majority of that time. I’m heartbroken by what Musk has managed to do to the platform and the people who (mostly used to, now) run it in a few short weeks. His flagrant disregard for users or the platform itself is gutting. (I’m with Nancy Baym on what’s being lost here, even if the platform stays online and doesn’t fall over.)

For better or worse, the media broadly (and academia in many ways, to be fair) has used Twitter as a proxy for public opinion. That won’t change overnight. If mostly moderate and left-leaning voices leave, does that give Trump via Musk exactly what he always wanted?

Trump gets Twitter as a pulpit to say whatever half-formed thought escapes his head, and a crowd of MAGA voices to cheer him on at every step. While the echo chamber idea has been widely challenged, it feels like this could be how that chamber would actually cohere.

From outside the US, the fact that anyone, let alone a meaningful percentage, of US citizens believes the Biden election was ‘stolen’ feels like it shows exactly how powerful and destructive Trump’s Twitter can be.

I don’t want to be putting dollars in Musk’s pocket either as he burns users and employees alike, but I’m deeply conflicted about just leaving the space which still has 15 years of ‘public square’ reputation. I don’t have a solution, but I have many fears.

And, yes, like many I’ve set up on Mastodon to see how that space evolves.

Spotlight forum: social media, child influencers and keeping your kids safe online


I was pleased to join Associate Professor Crystal Abidin as panellists on the ABC Perth Radio Spotlight Forum on social media, child influencers and keeping your kids safe online. It was a wide-ranging discussion that really highlights community interest and concern in ensuring our young people have the best access to opportunities online while minimising the risks involved.

You can listen to a recording of the broadcast here.

Coroner finds social media contributed to 14-year-old Molly Russell’s death. How should parents and platforms react?

Tama Leaver, Curtin University

Last week, London coroner Andrew Walker delivered his findings from the inquest into 14-year-old schoolgirl Molly Russell’s death, concluding she “died from an act of self harm while suffering from depression and the negative effects of online content”.

The inquest heard Molly had used social media, specifically Instagram and Pinterest, to view large amounts of graphic content related to self-harm, depression and suicide in the lead-up to her death in November 2017.

The findings are a damning indictment of the big social media platforms. What should they be doing in response? And how should parents react in light of these events?

Social media use carries risk

The social media landscape of 2022 is different to the one Molly experienced in 2017. Indeed, the initial public outcry after her death saw many changes to Instagram and other platforms to try and reduce material that glorifies depression or self-harm.

Instagram, for example, banned graphic self-harm images, made it harder to search for non-graphic self-harm material, and started providing information about getting help when users made certain searches.

BBC journalist Tony Smith noted that the press team for Instagram’s parent company Meta requested that journalists make clear these sorts of images are no longer hosted on its platforms. Yet Smith found some of this content was still readily accessible today.

Also, in recent years Instagram has been found to host pro-anorexia accounts and content encouraging eating disorders. So although platforms may have made some positive changes over time, risks still remain.

That said, banning social media content is not necessarily the best approach.

What can parents do?

Here are some ways parents can address concerns about their children’s social media use.

Open a door for conversation, and keep it open

It’s not always easy to get young people to open up about what they’re feeling, but it’s clearly important to make it as easy and safe as possible for them to do so.

Research has shown creating a non-judgemental space for young people to talk about how social media makes them feel will encourage them to reach out if they need help. Also, parents and young people can often learn from each other through talking about their online experiences.

Try not to overreact

Social media can be an important, positive and healthy part of a young person’s life. It is where their peers and social groups are found, and during lockdowns was the only way many young people could support and talk to each other.

Completely banning social media may prevent young people from being a part of their peer groups, and could easily do more harm than good.

Negotiate boundaries together

Parents and young people can agree on reasonable rules for device and social media use. And such agreements can be very powerful.

They also present opportunities for parents and carers to model positive behaviours. For example, both parties might reach an agreement to not bring their devices to the dinner table, and focus on having conversations instead.

Another agreement might be to charge devices in a different room overnight so they can’t be used during normal sleep times.

What should social media platforms do?

Social media platforms have long faced a crisis of trust and credibility. Coroner Walker’s findings tarnish their reputation even further.

Now’s the time for platforms to acknowledge the risks present in the service they provide and make meaningful changes. That includes accepting regulation by governments.

More meaningful content moderation is needed

During the pandemic, more and more content moderation was automated. Automated systems work well when things are black and white, which is why they’re good at spotting extreme violence or nudity. But self-harm material is harder to classify, harder to moderate, and often depends on the context it’s viewed in.

For instance, a picture of a young person looking at the night sky, captioned “I just want to be one with the stars”, is innocuous in many contexts and likely wouldn’t be picked up by algorithmic moderation. But it could flag an interest in self-harm if it’s part of a wider pattern of viewing.

Human moderators do a better job determining this context, but this also depends on how they’re resourced and supported. As social media scholar Sarah Roberts writes in her book Behind the Screen, content moderators for big platforms often work in terrible conditions, viewing many pieces of troubling content per minute, and are often traumatised themselves.

If platforms want to prevent young people seeing harmful content, they’ll need to employ better-trained, better-supported and better-paid moderators.

Harm prevention should not be an afterthought

Following the inquest findings, the new Prince and Princess of Wales astutely tweeted “online safety for our children and young people needs to be a prerequisite, not an afterthought”.

For too long, platforms have raced to get more users, and have only dealt with harms once negative press attention became unavoidable. They have been left to self-regulate for too long.

The foundation set up by Molly’s family is pushing hard for the UK’s Online Safety Bill to be accepted into law. This bill seeks to reduce the harmful content young people see, and make platforms more accountable for protecting them from certain harms. It’s a start, but there’s already more that could be done.

In Australia the eSafety Commissioner has pushed for Safety by Design, which aims to have protections built into platforms from the ground up.


If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.

Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ABC Perth Spotlight forum: how to protect your privacy in an increasingly tech-driven world

I was pleased to be part of ABC Perth Radio’s Spotlight Forum on ‘How to Protect Your Privacy in an Increasingly Tech-driven World‘ this morning, hosted by Nadia Mitsopoulos, and also featuring Associate Professor Julia Powles, Kathryn Gledhill-Tucker from Electronic Frontiers Australia and David Yates from Corrs Chambers Westgarth.

You can stream the Forum on the ABC website, or download here.



Instagram’s privacy updates for kids are positive. But plans for an under-13s app means profits still take precedence


By Tama Leaver, Curtin University

Facebook recently announced significant changes to Instagram for users aged under 16. New accounts will be private by default, and advertisers will be limited in how they can reach young people.

The new changes are long overdue and welcome. But Facebook’s commitment to children’s safety is still in question as it continues to develop a separate version of Instagram for kids aged under 13.

The company received significant backlash after the initial announcement in May. In fact, more than 40 US Attorneys General who usually support big tech banded together to ask Facebook to stop building the under-13s version of Instagram, citing privacy and health concerns.

Privacy and advertising

Online default settings matter. They set expectations for how we should behave online, and many of us will never change our default settings at all.

Adult accounts on Instagram are public by default. Facebook’s shift to making under-16 accounts private by default means these users will need to actively change their settings if they want a public profile. Existing under-16 users with public accounts will also get a prompt asking if they want to make their account private.

These changes normalise privacy and will encourage young users to focus their interactions more on their circles of friends and followers they approve. Such a change could go a long way in helping young people navigate online privacy.

Facebook has also limited the ways in which advertisers can target Instagram users under age 18 (or older in some countries). Instead of targeting specific users based on their interests gleaned via data collection, advertisers can now only broadly reach young people by focusing ads in terms of age, gender and location.

This change follows recently publicised research that showed Facebook was allowing advertisers to target young users with risky interests — such as smoking, vaping, alcohol, gambling and extreme weight loss — with age-inappropriate ads.

This is particularly worrying, given Facebook’s admission there is “no foolproof way to stop people from misrepresenting their age” when joining Instagram or Facebook. The apps ask for date of birth during sign-up, but have no way of verifying responses. Any child who knows basic arithmetic can work out how to bypass this gateway.

Of course, Facebook’s new changes do not stop Facebook itself from collecting young users’ data. And when an Instagram user becomes a legal adult, all of their data collected up to that point will then likely inform an incredibly detailed profile which will be available to facilitate Facebook’s main business model: extremely targeted advertising.

Deploying Instagram’s top dad

Facebook has been highly strategic in how it released news of its recent changes for young Instagram users. In contrast with Facebook’s chief executive Mark Zuckerberg, Instagram’s head Adam Mosseri has turned his status as a parent into a significant element of his public persona.

Since Mosseri took over after Instagram’s creators left Facebook in 2018, his profile has consistently emphasised he has three young sons, his curated Instagram stories include #dadlife and Lego, and he often signs off Q&A sessions on Instagram by mentioning he needs to spend time with his kids.

[Image: Adam Mosseri’s Instagram profile on July 30, 2021. Source: Instagram]

When Mosseri posted about the changes for under-16 Instagram users, he carefully framed the news as coming from a parent first, and the head of one of the world’s largest social platforms second. Similar to many influencers, Mosseri knows how to position himself as relatable and authentic.

Age verification and ‘potentially suspicious’ adults

In a paired announcement on July 27, Facebook’s vice-president of youth products Pavni Diwanji announced Facebook and Instagram would be doing more to ensure under-13s could not access the services.

Diwanji said Facebook was using artificial intelligence algorithms to stop “adults that have shown potentially suspicious behavior” from being able to view posts from young people’s accounts, or the accounts themselves. But Facebook has not offered an explanation as to how a user might be found to be “suspicious”.

Diwanji notes the company is “building similar technology to find and remove accounts belonging to people under the age of 13”. But this technology isn’t being used yet.

It’s reasonable to infer Facebook probably won’t actively remove under-13s from either Instagram or Facebook until the new Instagram For Kids app is launched — ensuring those young customers aren’t lost to Facebook altogether.

Despite public backlash, Diwanji’s post confirmed Facebook is indeed still building “a new Instagram experience for tweens”. As I’ve argued in the past, an Instagram for Kids — much like Facebook’s Messenger for Kids before it — would be less about providing a gated playground for children and more about getting children familiar and comfortable with Facebook’s family of apps, in the hope they’ll stay on them for life.

A Facebook spokesperson told The Conversation that a feature introduced in March prevents users registered as adults from sending direct messages to users registered as teens who are not following them.

“This feature relies on our work to predict peoples’ ages using machine learning technology, and the age people give us when they sign up,” the spokesperson said.

They said “suspicious accounts will no longer see young people in ‘Accounts Suggested for You’, and if they do find their profiles by searching for them directly, they won’t be able to follow them”.

Resources for parents and teens

For parents and teen Instagram users, the recent changes to the platform are a useful prompt to begin or to revisit conversations about privacy and safety on social media.

Instagram does provide some useful resources for parents to help guide these conversations, including a bespoke Australian version of their Parent’s Guide to Instagram created in partnership with ReachOut. There are many other online resources, too, such as CommonSense Media’s Parents’ Ultimate Guide to Instagram.

Regarding Instagram for Kids, a Facebook spokesperson told The Conversation the company hoped to “create something that’s really fun and educational, with family friendly safety features”.

But the fact that this app is still planned means Facebook can’t accept the most straightforward way of keeping young children safe: keeping them off Facebook and Instagram altogether.


Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why the McGowan government will have an uphill battle rebuilding trust in the SafeWA app

QR code contact-tracing apps are a crucial part of our defence against COVID-19. But their value depends on being widely used, which in turn means people using these apps need to be confident their data won’t be misused.

That’s why this week’s revelation that Western Australian police accessed data gathered using the SafeWA app is a serious concern.

WA Premier Mark McGowan’s government has enjoyed unprecedented public support for its handling of the COVID-19 pandemic thus far. But this incident risks undermining the WA public’s trust in their state’s contact-tracing regime.

While the federal government’s relatively expensive COVIDSafe tracking app — which was designed to work automatically via Bluetooth — has become little more than the butt of jokes, the scanning of QR codes at all kinds of venues has now become second nature to many Australians.

These contact-tracing apps work by logging the locations and times of people’s movements, with the help of unique QR codes at cafes, shops and other public buildings. Individuals scan the code with their phone’s camera, and the app allows this data to be collated across the state.
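In essence, each scan produces one small record: who checked in, where, and when. Here is a minimal sketch of that idea in Python (the field names and venue IDs are illustrative assumptions, not SafeWA’s actual schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CheckIn:
    venue_id: str     # unique identifier encoded in the venue's QR code
    venue_name: str
    contact: str      # phone number or account details for contact tracers
    scanned_at: datetime

def scan(venue_id: str, venue_name: str, contact: str) -> CheckIn:
    """Create a check-in record at the moment a QR code is scanned."""
    return CheckIn(venue_id, venue_name, contact, datetime.now(timezone.utc))

# One scan at a cafe becomes one row in the state's central log.
log = [scan("VENUE-0042", "Example Cafe", "+61 400 000 000")]
```

Filtering that central log by venue and time window is what lets tracers find everyone who overlapped with a confirmed case.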

That data is hugely valuable for contact tracing, but also very personal. Using apps rather than paper-based forms greatly speeds up access to the data when it is needed. And when trying to locate close contacts of a positive COVID-19 case, every minute counts.

But this process necessarily involves the public placing their trust in governments to properly, safely and securely use personal data for the advertised purpose, and nothing else.

Australian governments have a poor track record of protecting personal data, having suffered a range of data breaches over the past few years. At the same time, negative publicity about the handling of personal data by digital and social media companies has highlighted the need for people to be careful about what data they share with apps in general.

The SafeWA app was downloaded by more than 260,000 people within days of its release, in large part because of widespread trust in the WA government’s strong track record in handling COVID-19. When the app was launched in November last year, McGowan wrote on his Facebook page that the data would “only be accessible by authorised Department of Health contact tracing personnel”.

[Image: Screenshot of Mark McGowan’s Facebook page announcing the SafeWA app. Source: Mark McGowan’s Facebook page]

In spite of this, it has now emerged that WA Police twice accessed SafeWA data as part of a “high-profile” murder investigation. The fact the WA government knew in April that this data was being accessed, but only informed the public in mid-June, further undermines trust in the way personal data is being managed.

McGowan today publicly criticised the police for not agreeing to stop using SafeWA data. Yet the remit of the police is to pursue any evidence they can legally access, which currently includes data collected by the SafeWA app.

It is the government’s responsibility to protect the public’s privacy via carefully written, iron-clad legislation with no loopholes. Crucially, this legislation needs to be in place before contact-tracing apps are rolled out, not afterwards.

It may well be that the state government held off on publicly disclosing details of the SafeWA data misuse until it had come up with a solution. It has now introduced a bill to prevent SafeWA data being used for any purpose other than contact tracing.

This is a welcome development, and the government will have no trouble passing the bill, given its thumping double majority. Repairing public trust might be a trickier prospect.

Trust is a premium commodity these days, and to have squandered it without adequate initial protections is a significant error.

The SafeWA app provided valuable information that sped up contact tracing in WA during Perth’s outbreak in February. There is every reason to believe that if future cases occur, continued widespread use of the app will make it easier to locate close contacts, speed up targeted testing, and either avoid or limit the need for future lockdowns.

That will depend on the McGowan government swiftly regaining the public’s trust in the app. The new legislation is a big step in that direction, but there’s a lot more work to do. Trust is hard to win, and easy to lose.

Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Future Of Children’s Online Privacy

I was delighted to join Dr Anna Bunn, Deputy Head of Curtin Law School, and the Future Of team for a podcast interview all about Children’s Online Privacy.

The half-hour podcast is online here in various formats, including show notes, or embedded in this post:

We discuss:

What’s the impact of parents sharing content of their children online? And what rights do children have in this space?

In this episode, Jessica is joined by Dr Anna Bunn, Deputy Head of Curtin Law School, and Tama Leaver, Professor of Internet Studies at Curtin University, to discuss “sharenting” – the growing practice of parents sharing images and data of their children online. The three examine the social, legal and developmental impacts a life-long digital footprint can have on a child.

  • What is the impact of sharing child-related content on our kids? [04:08]
  • What type of tools and legal protections would you like to see in the future to protect children? [16:30]
  • At what age can a child give consent to share content? [18:25]
  • What about the right to be forgotten? [21:11]
  • What’s best practice for sharing child-related content online? [26:01]

Web’s inventor says news media bargaining code could break the internet. He’s right — but there’s a fix

Tama Leaver, Curtin University

The inventor of the World Wide Web, Tim Berners-Lee, has raised concerns that Australia’s proposed News Media and Digital Platforms Mandatory Bargaining Code could fundamentally break the internet as we know it.

His concerns are valid. However, they could be addressed through minor changes to the proposed code.

How could the code break the web?

The news media bargaining code aims to level the playing field between media companies and online giants. It would do this by forcing Facebook and Google to pay Australian news businesses for content linked to, or featured, on their platforms.

In a submission to the Senate inquiry about the code, Berners-Lee wrote:

Specifically, I am concerned that the Code risks breaching a fundamental principle of the web by requiring payment for linking between certain content online. […] The ability to link freely — meaning without limitations regarding the content of the linked site and without monetary fees — is fundamental to how the web operates.

Currently, one of the most basic underlying principles of the web is there is no cost involved in creating a hypertext link (or simply a “link”) to any other page or object online.

When Berners-Lee first devised the World Wide Web in 1989, he effectively gave away the idea and associated software for free, to ensure nobody would or could charge for using its protocols.

He argues the news media bargaining code could set a legal precedent allowing someone to charge for linking, which would let the genie out of the bottle — and plenty more attempts to charge for linking to content would appear.

If the precedent were set that people could be charged for simply linking to content online, it’s possible the underlying principle of linking would be disrupted.

As a result, there would likely be many attempts by both legitimate companies and scammers to charge users for what is currently free.

While supporting the “right of publishers and content creators to be properly rewarded for their work”, Berners-Lee asks the code be amended to maintain the principle of allowing free linking between content.


Google and Facebook don’t just link to content

Part of the issue here is Google and Facebook don’t just collect a list of interesting links to news content. Rather, the way they find, sort, curate and present news content adds value for their users.

They don’t just link to news content, they reframe it. It is often in that reframing that advertisements appear, and this is where these platforms make money.

For example, this link will take you to the original 1989 proposal for the World Wide Web. Right now, anyone can create such a link to any other page or object on the web, without having to pay anyone else.

But what Facebook and Google do in curating news content is fundamentally different. They create compelling previews, usually by offering the headline of a news article, sometimes the first few lines, and often the first image extracted.
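These previews are typically assembled from metadata that publishers embed in their own pages, such as Open Graph tags. A minimal sketch of the idea in Python (illustrative only; real platform crawlers are far more elaborate and fall back to visible page content when tags are missing):

```python
# Extract the Open Graph tags a platform would typically use to build a
# link preview: og:title (headline), og:description (first lines) and
# og:image (lead image).
from html.parser import HTMLParser
import urllib.request

class OpenGraphParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.preview = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("property") in ("og:title", "og:description", "og:image"):
            self.preview[attrs["property"]] = attrs.get("content", "")

def build_preview(url: str) -> dict:
    """Fetch a page and return the metadata a preview card would show."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    parser = OpenGraphParser()
    parser.feed(html)
    return parser.preview
```

The point of the sketch is the asymmetry: the link itself is free to create, but the preview card is built from extracted content, and it is that extraction and reframing the code could reasonably target.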

For instance, here is a preview Google generates when someone searches for Tim Berners-Lee’s Web proposal:

[Image: Screen capture of the results page for the Google search ‘tim berners lee www proposal’. Source: Google]

Evidently, what Google returns is more of a media-rich, detailed preview than a simple link. For Google’s users, this is a much more meaningful preview of the content and better enables them to decide whether they’ll click through to see more.

Another huge challenge for media businesses is that increasing numbers of users are taking headlines and previews at face value, without necessarily reading the article.

This can obviously decrease revenue for news providers, as well as perpetuate misinformation. Indeed, it’s one of the reasons Twitter began asking users to actually read content before retweeting it.

A fairly compelling argument, then, is that Google and Facebook add value for consumers via the reframing, curating and previewing of content — not just by linking to it.

Can the code be fixed?

Currently in the code, the section concerning how platforms are “Making content available” lists three ways content is shared:

  1. content is reproduced on the service
  2. content is linked to
  3. an extract or preview is made available.

Similar terms are used to detail how users might interact with content.

[Image: Extract from the Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020 showing how ‘Making content available’ is defined, outlining three main ways platforms make news content available. Source: Australian Government]

If we accept most of the additional value platforms provide to their users is in curating and providing previews of content, then deleting the second element (which just specifies linking to content) would fix Berners-Lee’s concerns.

It would ensure the use of links alone can’t be monetised, as has always been true on the web. Platforms would still need to pay when they present users with extracts or previews of articles, but not when they only link to them.

Since basic links are not the bread and butter of big platforms, this change wouldn’t fundamentally alter the purpose or principle of creating a more level playing field for news businesses and platforms.


In its current form, the News Media and Digital Platforms Mandatory Bargaining Code could put the underlying principles of the world wide web in jeopardy. Tim Berners-Lee is right to raise this point.

But a relatively small tweak to the code would prevent this. It would allow us to focus more on where big platforms actually provide value for users, and where the clearest justification lies in asking them to pay for news content.


For transparency, it should be noted The Conversation has also made a submission to the Senate inquiry regarding the News Media and Digital Platforms Mandatory Bargaining Code.

Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
