The Zen of ML is a set of design principles that helps ML educators and self-learners prioritise responsible machine learning practices. The principles consider the end-to-end machine learning development cycle, from data collection to model evaluation and continuous deployment. Inspired by the Zen of Python, the Zen of ML can be viewed as a culture code that promotes the responsible development of ML products and projects. It is not binding or enforceable, but is intended to shape industry norms and offer a practical guide to building trustworthy AI.
The Zen of ML is a project of the Mozilla Trustworthy AI Working Group. We have developed draft design principles over the months leading up to MozFest. At the AIIRL hackathon we will review, evaluate and build on the design principles. The questions we seek to answer are:
- Do the design principles cover the key aspects of responsible ML?
- Are the design principles useful for new entrants into ML?
- Are the design principles useful for educators?
- Are the design principles well formulated?
- Do the design principles fulfill the requirements that we set out for them?
The hackathon will consist of interactive sessions where we evaluate the draft principles and improve them where necessary. At the end of the hackathon we hope to have a set of working design principles so that we can launch The Zen of ML 1.0 in the weeks that follow.
Practical experience with machine learning (beginner to advanced); designers and linguists welcome
March 13th 17:00 - 21:00 CET
The Nanny State is a workshop using design justice practices to explore the impact of surveillance and artificial intelligence on the labor industry, particularly on domestic workers such as nannies and housekeepers. The use of artificial intelligence (AI) in the labor sector is pervasive: there are examples of employers tracking labor productivity and health status, and even replacing core job activities, among others. AI captures the employee's digital footprint while simultaneously attempting to predict the employee's next move.
This storytelling workshop will (1) present the prevalence of home surveillance technology in households around Europe and (2) involve nannies, housekeepers, and other domestic laborers in the design of the AI technology that is currently driving their industry. MozFest participants interested in labor rights, surveillance, and algorithmic accountability will learn how to design more equitable hiring platforms. I plan to use design justice practices to ensure this session contains minimal jargon, and to apply participatory research principles throughout. This workshop aims to produce a UX field guide grounded in social justice for technologists and designers.
The agenda is broken down into three parts:
- A short introduction to the artificial intelligence and data behind child-care hiring apps.
- Best practices for analyzing and interpreting this data.
- In small groups, participants will discuss community-based alternatives for algorithmic accountability.
March 13th 09:00 - 17:00 CET
The future of digital storytelling will involve the increasing use of algorithmic tools, both to develop new forms of narrative and to find efficiencies in creative production. However, unsupervised algorithms trained on massive amounts of web-based text come with issues of bias, most harmfully pertaining to gender, race, and class. The Narrative Future of AI, a project of Mozilla's Trustworthy AI working group, is seeking to review the typical biases that arise when writing with AI Dungeon, an application built on the GPT-3 API. The outcome will be a series of science fiction stories, along with feedback from working group members on their observations of bias and problematic AI behaviours. This analysis will form the basis of our first set of recommendations for creative writing with advanced machine learning tools.
Participants will be presented with a short series of flash fiction stories created by working group members with AI Dungeon. They will be asked to comment on the text, highlighting biases they recognise or pointing out genre tropes that intersect with other forms of prejudice. They will also be able to create their own stories using a multiplayer scenario designed by members of the working group.
Reading, writing, critiquing, discussing
March 13th 10:00 - 18:00 CET
PRESC is a tool to help data scientists, developers, academics and activists evaluate the performance of machine learning classification models, specifically in areas which tend to be under-explored, such as generalizability and bias. Our current focus on misclassifications, robustness and stability will facilitate the inclusion of bias and fairness analyses in the performance reports, so that these can be taken into account when crafting or choosing between models.
PRESC is still a young project, and would benefit greatly from having its infrastructure and approaches to model evaluation tested and validated in a broader context. We invite you to contribute by:
- Test driving the tool on your dataset and model
- Contributing a dataset for us to use for future testing and development
- Making code contributions
- Providing your perspective, feedback, or recommendations based on your experience or industry
As the tool is currently accessible as a Python library API, some experience with Python and its data science stack (Pandas/NumPy/Scikit-learn) is necessary to run it. Aside from this, participants are welcome to interact with the project through the GitHub repo, such as by commenting on issues. High-level documentation on the evaluation approaches is also available in the repo for discussion.
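For participants unfamiliar with that stack, the kind of workflow PRESC builds on can be sketched as follows: training a scikit-learn classifier and inspecting its misclassifications by hand. This is an illustration using only the standard data science stack, not the PRESC API itself; see the repo for the actual library interface.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Train a simple classifier on a standard benchmark dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Confusion matrix: rows are true labels, columns are predictions.
print(confusion_matrix(y_test, y_pred))

# Indices of the misclassified test samples, for follow-up analysis
# (e.g. looking for patterns among the errors).
misclassified = np.flatnonzero(y_pred != y_test)
print(f"{len(misclassified)} of {len(y_test)} test samples misclassified")
```

A tool like PRESC goes beyond this kind of manual inspection by systematically characterising where and how such misclassifications occur.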
March 13th 16:00 - 00:00 CET
The MENA AI landscape appears to be vibrant, with many entities catalyzing smart technologies for digital transformation. Yet few sources exist on what constitutes that landscape. There have been efforts to bring together the pan-Arab AI community at conferences such as the Arab AI Summit hosted in Jordan in 2019 and the Arab IoT and AI Challenge in Egypt, but details about the key players and entities, policies and research that revolve around AI remain sparsely documented. In order to fully exploit the potential of existing capacities and understand gaps in practices, it is essential to map this ecosystem. In this discussion we aim to present research in which we conducted an initial mapping of AI entities in the MENA region. We hope this snapshot will be a first step from which we can foster a stronger understanding of how AI is being leveraged in the region.
We aim for the session to be a starting point for a crowdsourced mapping of AI ecosystems in the MENA region. We will launch a website that visualises and lists all entities found in the research; by the time of the hackathon it will already be live, but only at MVP stage. Through the hackathon we hope to do two things: make the mapping more automatic (e.g. finding AI entities working in MENA via web scraping rather than manually), and build new features for the website (e.g. more granular visualisation).
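The automated-mapping step could be sketched as below: extracting candidate AI entities from a directory-style page. The HTML here is a made-up stand-in, and the `class="entity"` markup is an assumption for illustration; a real crawler would fetch live pages (e.g. with `requests`) and use selectors tuned to each source site.

```python
from html.parser import HTMLParser

# Stand-in for a downloaded directory page; real pages would be fetched
# over HTTP and have site-specific structure.
SAMPLE_HTML = """
<ul class="directory">
  <li><a class="entity" href="/org/1">Example AI Lab (Jordan)</a></li>
  <li><a class="entity" href="/org/2">Sample Robotics Hub (Egypt)</a></li>
</ul>
"""

class EntityExtractor(HTMLParser):
    """Collects the text of every <a class="entity"> link on a page."""

    def __init__(self):
        super().__init__()
        self.entities = []
        self._capturing = False

    def handle_starttag(self, tag, attrs):
        if tag == "a" and dict(attrs).get("class") == "entity":
            self._capturing = True

    def handle_endtag(self, tag):
        if tag == "a":
            self._capturing = False

    def handle_data(self, data):
        if self._capturing and data.strip():
            self.entities.append(data.strip())

parser = EntityExtractor()
parser.feed(SAMPLE_HTML)
print(parser.entities)  # ['Example AI Lab (Jordan)', 'Sample Robotics Hub (Egypt)']
```

Scraped names like these would still need deduplication and manual verification before being added to the public mapping.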
Full-stack development (VueJS, Strapi), Web scraping
March 13th 08:00 - 16:00 CET