In the Grey: Risk, Technology and the Conversations We Avoid
I was kindly invited to attend a workshop as an observer, there to support a colleague and stay in the background. When a question came up about Australia’s social media ban for under-16s, I was asked to step in and share my perspective. What interests me is not the ban itself, but what it reveals about our relationship with risk, technology and the conversations we are still reluctant to have.
We already know prohibition does not work. Public health has shown us this again and again. When risk cannot be eliminated, we use harm minimisation because it acknowledges reality rather than wishing it away. Young people live online. They learn, connect, create and build communities there. They are growing up in a world we designed for them, often without the governance, safeguards or cultural norms that should have accompanied it.
Which is why it feels unfair, even absurd, to respond by restricting their autonomy rather than interrogating the failures that sit with adults, systems and platforms. Tech companies have built spaces that are deeply engaging and, at the same time, deeply unsafe. Governments have lagged behind the technology curve for more than a decade. And culturally, we have outsourced responsibility to the “other”, whether that is the algorithm, the device or the person holding it.
Avoidance has consequences. It shows up in the conversations we postpone because they are uncomfortable. Consent. Drug and alcohol use. Mental ill health. Disordered eating. Racism and homophobia. Unrealistic beauty standards. Loneliness in a hyperconnected world. These are not problems created solely by social media, although social media can amplify them in ways we could not have imagined, even during the dot-com boom of the 1990s. They are problems we would rather not confront, because they force us to look at the gaps in our culture, our relationships, our homes and our systems of care.
This is where the ban falls short. It treats the symptom and ignores the conditions that allow harm to take root. It pushes behaviour into darker and less visible corners. It does nothing to strengthen digital literacy or support young people, families and communities to build confidence in navigating online spaces safely. And it fails to fund the upstream supports we already know are critical if we want to reduce harm in any meaningful way.
This sweeping policy approach also exposes its own contradictions. The ban does not even touch some of the more concerning online spaces, yet it sidelines those doing the good work on the ground. It overlooks the people having thoughtful, ongoing conversations with young people, along with the peer advocates and community organisations routinely turned to as trusted voices on youth mental health but somehow discounted when the conversation shifts to digital literacy, social media and inclusion. For many young people, particularly those in regional and remote communities or those who are already marginalised, these spaces are not optional. They are a lifeline, a place to find connection, identity and community.
Being online comes with inherent risk, and that risk is evolving rapidly in the age of AI, synthetic media and deepfakes. Banning access will not prepare young people for the world they are already inheriting. Conversation will. Education will. Honest engagement will. The work still sits with all of us, not with a ban, and we need to do better.