MozFest is a unique hybrid: part art, tech, and society convening, part maker festival, and the premier gathering for activists in diverse global movements fighting for a more humane digital world.

Secure your MozFest ticket

100s of sessions

Immersive sessions that teach privacy best practices, develop solutions to online misinformation and harassment, build open-source tools, support Trustworthy AI innovations, and more.


1,000s of participants

Artists, activists, technologists, designers, students, and journalists from across the world attend MozFest each year.


145+ Countries

MozFest welcomes activists from Taipei, coders from Berlin, educators from Nairobi, researchers from Brasilia, and others from regions and movements around the world.

Over the years, MozFest has fueled the movement to ensure the internet benefits humanity rather than harms it. We remain focused on building a healthier internet and more Trustworthy AI.

Join the internet health community for these exciting opportunities and events that explore what it takes to build a better web.

Sustain the momentum of MozFest

Lead a Working Group Project

Join an engaged community working to help technologists build more Trustworthy AI. The call for projects is now open!

Join the MozFest Slack

A rich network that celebrates the impact a healthy, open internet can have. Share ideas before, during, and after the festival.


Discover the award-winning MozFest Book, a celebration of our past and future.

Our Latest Dialogues and Debates

If you spend much time on social media or other corners of the internet, you’ve likely encountered AI systems with names like GPT-3 or DALL·E in recent months. These systems have become surprisingly good at creating convincing snippets of text or computer-generated images. But these models come with risks, too: for example, they can perpetuate harmful stereotypes against women and marginalized groups. How can these risks be addressed, and what part can regulation play?