Build A Bot Facilitator Guide
What’s inside?
This facilitation guide includes a set of activities for children, families and parents to
experiment with the potential and peril of AI assistants. In this document there are three
workshops with facilitator guides, slide decks, worksheets and other materials. These
have all been designed as analog activities and do not require a computer.
Purpose:
Emerging technologies have the potential to make great contributions to society, and at
the same time there is an urgent need to address the embedded bias, dominant
narratives and the replication of real-world structural inequities they perpetuate. It’s
critical that all students, educators and families are knowledgeable about the ethical
implications of emerging technologies and have the agency to design, reflect and
participate in decision-making processes. The goal of this project is to provide K12
communities with learning experiences which provoke everyone to ask informed
questions about emerging technologies, and to interrogate and reflect on how our
positionalities are embedded in our design work.
Activity Usage
License: CC-BY-NC-SA under Creative Commons
These activities are licensed as CC-BY-NC-SA under Creative Commons. This license
allows you to remix, change, and build upon these activities non-commercially as long
as you include acknowledgment of the creators. Derivative works should include
acknowledgment of the authors and be licensed as CC-BY-NC-SA.
To acknowledge the creators, please include the text, “This Build a Bot Curriculum was
created by Ariam Mogos with key contributions from Laura McBain, Megan Stariha and
Carissa Carter from the Stanford d.school.” More information about the license can be
found at: https://creativecommons.org/licenses/by-nc-sa/3.0/
If you are interested in using this work for commercial (for-profit) purposes, please contact
Ariam Mogos ([email protected]) & Laura McBain
([email protected]).
To use and edit the activities in this document, make a copy of this document:
1. Make sure you are logged into your Google Account.
2. Go to File > Make a copy.
3. You will be prompted to name and save the materials to your Drive.
In order to access the slides, make sure to follow the steps above to add them to your
Google Drive account.
Thank you to our colleagues at the MIT Media Lab Personal Robots Group for this
fantastic facilitation guide template and their work around AI + Ethics ;)
Guiding Questions:
1. How might we support all children, families and educators to ask informed
questions about emerging technologies and participate in the design and
decision-making processes of emerging technologies?
2. How might we support children, families and educators in learning about
discriminatory design and how our positionalities (encompassing our identity and
social status) influence the creation of technology?
For youth:
● Explore curiosity about the way emerging technology works, and the implications
it has for different communities and society.
● Create analog-based solutions with concepts which underpin emerging
technology and examine the role of positionality (encompassing our identity and
social status) in design work.
● Embrace the creation of emerging technology grounded in social justice, which
centers the perspectives and experiences of non-dominant communities (Black
and Brown communities, persons with disabilities, LGBTQ, etc.), and
acknowledges the harm inflicted on them (goal for everyone in K12).
Activities:
1. Design lines for a bot. (Designed for ages: 14-18 years old)
In this hands-on activity, participants build experiences for their own AI assistant,
all while considering the various social implications on users.
2. Design datasets for a bot. (Designed for ages: 14-18 years old)
In this hands-on activity, participants build datasets for their own AI assistant, all
while considering the various social implications on users.
3. Design rules for a bot. (Designed for ages: 14-18 years old)
In this hands-on activity, participants build rules for their own AI assistant, all
while considering the various social implications on users.
Acknowledgments
Thank you to all the Black and Brown women whose exceptional scholarship, fight for
liberation and continuous advocacy have influenced and inspired this work. This work
honors those who came before: Safiya Noble, Ruha Benjamin, Simone Browne, Timnit
Gebru, Joy Buolamwini, Deb Raji, Rediet Abebe and many others.
BUILD A BOT.
Help! I need Some-bot-y.
Overview:
In this hands-on activity, participants build experiences for their own AI assistant, all
while considering various social implications on users.
Activity Outline:
Learning Objectives:
● The interactions we have with AI assistants like Alexa, Siri and Google Home are
designed by real people who create content in order to engage us, meet our
needs and influence our behavior.
● Social attachment, gender stereotypes, abuse, etc. are all critical issues to
consider when designing how our AI assistants interact with users.
● It’s important for us to examine how our positionalities (encompassing our identity
and social status) influence how we design experiences with technology, and the
unintended consequences it may have on people who have different lived
experiences.
Materials Needed:
Facilitation Guide:
1. (5 minutes) Pull up the deck or ask participants if they’re familiar with Alexa, Siri
or Google Home. What do these technologies help us do? If participants are
unsure, list a few and ask them to popcorn out a few more:
a. Play music
b. Turn the lights on
c. Make a grocery list
2. (15 minutes) How cool would it be to design lines for our own AI assistants and
be the bots ourselves? What if we could make them say anything? We’re going to
experiment rapidly and try it out!
Each team will have 15 minutes to experiment with the Everyday Request Cards
(front side only) and pick different movie lines for how they want their AI
assistant to respond to a user request. Once they pick a few lines for their request
cards, they can record their partner acting out the lines in bot mode and test it out!
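Facilitator aside: if participants ask how this works in real software, the analog card activity mirrors a simple lookup table from user requests to scripted lines. Below is a minimal sketch in Python; the request names and responses are hypothetical, chosen only for illustration.

# A scripted assistant: real people write every line it can say.
# All request names and lines here are hypothetical examples.
scripted_lines = {
    "play music": "I feel the need... the need for speed!",
    "turn the lights on": "Let there be light.",
    "make a grocery list": "As you wish.",
}

def respond(request):
    # The bot can only say what a writer put in the table.
    return scripted_lines.get(request, "Sorry, no one wrote me a line for that.")

print(respond("play music"))
print(respond("tell me a secret"))  # no writer covered this request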
3. (15 minutes) Once participants have completed the activity, refer back to the deck
or prompt them with the reflection: “What if some of these movie lines were
actual responses? What might happen?” Give participants 10 minutes to
brainstorm on Post-its or paper. Popcorn out ideas or facilitate a short activity
clustering participant ideas, and debrief.
4. (5 minutes) Share with them how real companies hire comedians, playwrights,
screenwriters and people good with language to write these lines for AI
assistants like Alexa, Siri and Google Home. Share some of the qualifications of
the job description:
Sample job description to write for Siri (published on Apple Jobs, 2020).
5. (10 minutes) Share two examples with participants of how the design of AI
assistants can cause harm to users:
b. Gender Stereotypes: This is when we design AI assistants with traditionally
female names, voices and gendered responses, which can reinforce gender
stereotypes and encourage sexist behavior from a user. It can also exclude
people who are gender non-conforming.
6. (25 minutes) Tell participants that now that we’ve thought about a couple of
implications, we’re going to take a different approach to designing our lines! Ask
participants to turn over their request cards and to read them over.
Some participants may have a sense of how their identity and social status
influence how they view the world, and many participants may not. Hand out
and review the Power and Positionality Guide to help students examine and
reflect on their designs during the activity.
Popcorn out or ask participants to hang up their designs and facilitate a gallery
walk.
7. (10 minutes) Lead participants into a short debrief around the activity. Here are a
few starter questions:
a. How did thinking about the implications affect the way you designed the
experience?
b. How did thinking about your positionality affect the way you designed the
experience?
c. What impact might that have on different users?
d. What other issues do you think are important to consider when designing
these types of experiences and interactions for users? Who do you think is
not considered or left out when these experiences are designed? Who else
do you think can be harmed?
BUILD A BOT.
Help! I need Some-bot-y.
Overview:
In this hands-on activity, participants build datasets for their own AI assistant, all while
considering the various social implications on users.
Activity Outline:
Learning Objectives:
Materials Needed:
Facilitation Guide:
For example, when we get our annual physical at school, a dataset is created for
each of us that includes our height, weight, pulse, blood pressure and more.
These are different types of information, but they’re all related or “correlated” to
help us understand if we’re in good physical health. Datasets tell stories (including
stories that are not being told). For example, what information is not
part of our annual physical that could also tell us about our health, and why
is it left out?
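Facilitator aside: for groups curious what a dataset looks like to a programmer, here is a minimal sketch in Python of the annual-physical example above. The field names are hypothetical; the point is that what gets left out of a dataset is a design choice too.

# A "dataset" from an annual physical: each field is a choice someone made.
# Field names and values are hypothetical, for illustration only.
physical_exam = {
    "height_cm": 152,
    "weight_kg": 45,
    "pulse_bpm": 72,
    "blood_pressure": "110/70",
}

# Fields that could also describe health but were never collected:
left_out = ["hours_of_sleep", "stress_level", "access_to_healthy_food"]

print("Recorded:", list(physical_exam.keys()))
print("Not recorded:", left_out)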
“What if a user asked your AI assistant, what’s a tree? How would you design
your AI assistant to respond? What would the dataset include?”
2. (10 minutes) Ask participants to only complete their own perspective. Call on
participants and popcorn out their datasets. Show participants the pre-made
dataset found in the slide deck:
3. (15 minutes) Ask participants to complete the perspectives of an ant, bird and
bear.
“If any of these animals were asked what’s a tree, how would they respond?”
Call on participants and popcorn out their datasets or pin them up and facilitate a
gallery walk. Show participants the pre-made dataset, found in the slide deck, of
how the Māori people of New Zealand might define a tree:
4. (10 minutes) Lead participants into a short debrief around the activity. Here are a
couple of starter questions:
Share with participants that we all create “datasets” which reflect our identities,
values, perspectives and biases. Some datasets have become the dominant
narrative, or primary view accepted by the world, and this has done a lot of
harm to non-dominant communities. Show participants the dominant or default
perspective:
How might we change that? AI assistants are goldmines of information. Share
with participants that examples of datasets we can find in our AI assistants come
from Wikipedia, Yelp, Google search results and more. Who decides where the
information comes from and whose perspective does that information represent?
What influence can this have on the world?
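Facilitator aside: a minimal sketch in Python of how the answer an assistant gives depends entirely on which source its designers wired in. The sources and definitions below are hypothetical stand-ins for the perspectives explored in this activity.

# An assistant's "knowledge" is whichever sources its designers chose.
# Both sources below are hypothetical, for illustration only.
default_source = {"tree": "A woody perennial plant with a trunk and branches."}
another_source = {"tree": "A living relative, connected to people, land and ancestors."}

def answer(question, source):
    # Swap the source and you swap the worldview the assistant presents.
    return source.get(question, "I don't know.")

print(answer("tree", default_source))   # the dominant or default narrative
print(answer("tree", another_source))   # a different, equally real story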
5. (5 minutes) Share one example of how the design of datasets for AI assistants
can cause harm to users:
a. Misinformation: “Iris” (Siri spelled backwards) is a popular voice assistant
for Android phones. Iris has given women misleading information about
emergency contraception and abortion services, and when women search
or ask for these services, it has quoted the Bible.
6. (35 minutes) Tell participants we’re going to dive into another part of the card
deck and think about the implications of how we evaluate, select and curate
datasets for our AI assistants. Ask participants to pull out the category cards and
pick one category (food and culture/news/health/history and geography). Select a
request card from the category and turn it over to review!
Some participants may have a sense of how their identity and social status
influence how they view the world, and many participants may not. Hand out
and review the Power and Positionality Guide to help students examine and
reflect on their designs during the activity.
7. (10 minutes) Lead participants into a short debrief around the activity. Here are a
few starter questions:
a. How did thinking about the implications affect the way you searched for,
selected and curated data sources?
b. How did thinking about your positionality affect the way you searched for,
selected and curated data sources?
c. What impact might that have on different users?
d. What other issues do you think are important to consider when selecting
and curating data sources for users?
e. Did the brainstorm bank help? If so, how? If not, why?
f. What reflections do you have about the way you think about technology
and information? How might it differ from others?
BUILD A BOT.
Help! I need Some-bot-y.
Overview:
In this hands-on activity, participants build rules for their own AI assistant, all while
considering the various social implications on users.
Activity Outline:
Learning Objectives:
● AI assistants like Alexa, Siri and Google Home have different rules that are
designed by real people, and those rules do not always keep our data safe.
● Every question we ask and every conversation we have with our AI assistants is
data. It might not seem important to us, but that data can be used to help us,
influence our behavior, or be used against us. Our AI assistants can also collect
data when we’re not engaging with them, and potentially use that data.
● It’s important for us to examine how our positionalities (encompassing our identity
and social status) influence how we design rules for technology, and the
unintended consequences it may have on people who have different lived
experiences.
Materials Needed:
Facilitation Guide:
“Do you think all our questions and conversations stay between us and our AI
assistants?”
Share with participants that AI assistants can be like lockers at school. They can
store all kinds of data. Some data is more sensitive than other data.
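Facilitator aside: a minimal sketch in Python of why “every question is data”: a toy assistant whose designers chose a rule that keeps everything it is asked. Everything here is hypothetical, for illustration only.

# A toy assistant that stores every request it hears, like a locker filling up.
conversation_log = []

def ask(question):
    conversation_log.append(question)  # the designers' rule: keep it all
    return "Here's what I found about: " + question

ask("what's the weather today?")
ask("where is the nearest health clinic?")  # sensitive, but stored the same way

# Who can open this locker? That depends on the rules the designers wrote.
print(conversation_log)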
2. (10 minutes) Share with participants that an announcement over the loudspeaker
in school has been made. Pick a premade announcement or design your own.
3. (10 minutes) Lead participants into a short debrief around the activity. Here are a
couple of starter questions:
a. How did it feel to have no control over what happens with your data at
school? What did it make you want to do?
b. What type of rules did you design?
Share with participants that every question we ask and every conversation we
have is data. It might not be data that’s important to us, but it can be used to
influence us or even used against us.
4. (10 minutes) Share with participants one example of how the design of data for
AI assistants can cause harm to users:
a. Data Privacy: According to VRT News, people are hired to listen to audio
files recorded by Google Home smart speakers and the Google
Assistant smartphone app. These audio files help Google improve
its search engine. While most of these recordings were made consciously,
some contained sensitive information and were never meant to
be recorded.
5. (35 minutes) Tell participants we’re going to dive into another part of the card
deck and think about the implications of the systems and rules we design for our
AI assistants. Ask participants to pull out the Rules cards and select a request
card. Turn it over and review!
Some participants may have a sense of how their identity and social status
influence how they view the world, and many participants may not. Hand out
and review the Power and Positionality Guide to help students examine and
reflect on their designs during the activity.
6. (10 minutes) Lead participants into a short debrief around the activity. Here are a
few starter questions:
a. How did thinking about the implications affect the way you designed rules
for your AI assistant?
b. How did thinking about your positionality affect the way you designed
rules for your AI assistant?
c. What impact might that have on different users?
d. What other issues do you think are important to consider when designing
rules for AI assistants?
e. What reflections do you have about the way you think about technology
and rules or systems?
Put it all together…
Ask participants to reflect on one positive impact their bot might make in the world.
What difference did thinking about implications and positionality make?