
ARTIFICIAL INTELLIGENCE
Classes XI & XII

AI PROJECTS COOKBOOK

Subject Code: 843
Foreword

“Harnessing the power of AI to solve world problems is not just a choice; it’s a
responsibility we owe to humanity’s future, a beacon of innovation, guiding us
towards a brighter, more sustainable world.”

Welcome to our AI Cookbook! In this collection, we present an assortment of AI
projects spanning various complexity levels, from low-code and no-code
solutions to high-level coding endeavours. Our aim with this cookbook is to
showcase the versatility and practicality of artificial intelligence in addressing
real-world challenges, particularly those aligned with the Sustainable
Development Goals (SDGs).

Within these pages, you will find an array of projects ranging from applications
designed to tackle pressing issues impacting our society to innovative chatbots and
mobile apps that enhance user experiences. Whether you're a novice enthusiast or
an experienced developer, there's something for everyone to explore and learn
from.

We hope this cookbook inspires you to delve into the realm of AI, experiment
with different tools and techniques, and ultimately contribute towards building a
better and more sustainable future for all.

Happy cooking!

AI Cookbook
"Unleash the Power of AI: Your Recipe Book for Intelligent Solutions”

Index

1. Project Guidelines
2. Sample Projects

Project Guidelines

(SDG Aligned)

Students can submit their projects on https://flaunch.io or mail them to
[email protected] for future opportunities like the 1M1B India AI Youth Awards.

1. Prepare for the Project

Identify a local issue affecting your school or community that could be solved using
artificial intelligence (AI).

While doing this, you will learn more about problems you can solve to improve lives and make
the world a better place. You will also learn many important skills including:
➢ how to work as a team member
➢ how to clearly identify an issue and who it is affecting (the user)
➢ how to brainstorm solutions and select the best one
➢ how to decide which type of AI may be useful for your proposed solution
➢ how to ethically gather and use data to train a computer to help solve the issue
➢ how to test the prototype with users and use their feedback to improve your solution
➢ how to pitch your solution to people who will be able to help you take action

2. Form a team

Getting to know the people in your team

Introduce yourselves within your team and find out:


➢ What each person loves doing
➢ What each person is good at
➢ What each person would like to get better at
➢ What each person knows and thinks about AI

AI project team roles


There are several roles team members can take on when working on an AI project. Below are a
few examples of roles and tasks. If you are forming a smaller team, members may have to take
on multiple roles.

Project leader
• Schedules and allocates tasks among the team
• Ensures tasks are completed on time
• Acts as the point of contact between the team and the teacher, users and stakeholders
• Resolves team issues

Data expert
• Decides on the type of data needed to train an AI model
• Collects data
• Ensures data is in a format that the team can work with
• Ensures data is ethically sourced and unfair bias is eliminated
• Works with the prototype builder to train the AI model

Information researcher
• Collects questions from the team that need answers
• Identifies where answers can be located (source)
• Searches for answers, writes up a report and passes information to the project reporter

Designer
• Works with the team and the user to create a process flow for the new user experience
• Plans the user interface for the prototype

Prototype builder/coder
• Works with the data expert to train/teach the computer
• Creates the prototype and codes if necessary

Tester
• Works with users to test the prototype
• Gets feedback from users and user sign-off when the prototype has met user requirements
• Creates an action plan on what needs to be fixed and prioritises requests for future improvements

Marketing/Communications leader
• Collates the team Project Logbook submission and creates the content for the video pitch
• Selects spokespeople within the team for various matters relating to the project

Video producer
• Films the activities of the team and edits these into a presentation for submission

Tasks are not solely the responsibility of those who are assigned to the specific role. Each team
member should involve other team members, users or stakeholders in order to achieve their
goals. Team members may also have more than one role. The project is a team effort and
requires collaboration and communication.
Team collaboration and communication

Collaboration is actively working with others, with an open mind to their ideas, to
accomplish your goal. As a team member, you need to be able to share your ideas and ask
questions so that your team and teacher understand your thinking.
A project plan will help you get started. Here are some tips:
➢ Add start and end dates for the project phases – background reading and learning about AI
and AI tools, forming a team, problem definition, understanding users, brainstorming a
solution, designing a solution, collecting and preparing data, training your model, building
your prototype, testing your solution, refining your solution, preparing your submissions
➢ It is important to have a clear plan of how you will communicate with each other to help
you work more effectively as a team and resolve issues more easily so that you can
achieve your goal.
➢ List tasks for each project phase and assign these to the team members with start and
end dates.

Here are some tips to get you started with a communications plan:
➢ Will you meet face-to-face, online or a mixture of each to communicate?
➢ How often will you come together to share your progress?
➢ Who will set up online documents and ensure that everyone is contributing?
➢ What tools will you use for communication?

During the project, it is also helpful to create minutes for team meetings, where you log the date
the meeting took place, who attended the meeting, who was invited but unable to attend the
meeting, the purpose of the meeting, the items discussed and the items that need to be done as
a result of the discussion.

3. Define the Problem


Discuss each issue in your list among your team and arrange the ideas so that similar issues are
grouped together. Here are some questions you can use to guide your discussion:
➢ Which of these issues really matter to you and which ones would you like to tackle?
➢ Why would you want to solve these problems?
➢ What are likely solutions for each problem?
➢ What impact will solving the problem have on the community?

➢ Have other people already come up with solutions to this problem?
➢ Do these solutions require AI?

Consider whether AI would be a good fit for the solution


AI mimics the way the human brain works. It learns through experience by accumulating data
and insights from every interaction, getting better all the time, rather than being programmed to
perform a logical set of rules.
When humans use AI to help them solve problems, the overall effect is greater efficiency,
better decision making and quicker implementation of solutions.
AI may be a good fit for your solution if it requires one of the following:
➢ Recognising digital images, videos and other visual inputs (image recognition)
➢ Recognising speech, tone of voice, words spoken, heart rate, gestures and responding
to these (natural interaction)
➢ Looking for patterns in large amounts of data and using these patterns to reason or make
decisions (recommender systems)
➢ Use of smart sensors to gather information about the environment (machine perception
and autonomous systems)

If you think your solution does not meet any of the above criteria, it probably will not benefit from
using AI. Discuss this with your teacher and try to find an alternative problem that requires an AI
solution.
Narrow down your list of issues to those that matter most to you and that you think you can solve
using AI.
➢ Vote on one problem that all or most of you would like to tackle, taking into consideration
the impact you are likely to make.
➢ Now break down the problem and think of the people affected (your users), what they
need and what you can do for them. Make sure that what you can do to help them is
something that can be measured. You should be able to express the users’ problem in
one sentence:

How can we help _____________ [a specific user or group of users] find a way to___________
[do what] so that they can do ________________ [something not done before that can be
measured].

This is your problem statement.

4. Understand your users
Before developing a solution, it is important to thoroughly understand both the problem and your
users. Understanding your users involves empathy - putting yourself in their shoes and connecting
with how they might be feeling about their problem, circumstance, or situation.
Who are your users?
➢ Who is the hero of your story? Hint: This is the user.
➢ What is their current struggle today?
➢ What problems do they face?
➢ What is one idea you have to solve their problem?
➢ How would that idea improve the user’s experience?

Observe

You can get to know your users better by actually meeting with them and observing them as they
carry out their tasks and as they interact with the people and tools in their environment.

Conducting interviews

You can conduct interviews with a number of users to find out more about their needs. Ask open-
ended questions about how they live and work. Listen to their stories to understand their hopes,
fears, and goals that motivate them.
Here are some questions you can ask:
➢ What’s their story?
➢ What is their experience, and what do they do, think and feel throughout it?
➢ What are the highs and lows of their experience?
➢ Whom do they work with?
➢ Who do they rely on and who relies on them?
➢ What’s expected of them?
➢ What are their needs?
➢ What problem are they solving?
➢ How do they define success?

Create an empathy map

Select a scenario and a user to focus on for your empathy map. You can create more than one
empathy map for different types of users.

Think about the perspective of the user. Write the name of your persona in the middle of the
map. Using the information from the scenario, add observations to the appropriate section of the
map.

You can share your Empathy Maps with your user(s) to double check your observations and
assumptions.

Refine your Problem Statement


With a better understanding of your user, review your problem statement. Re-state this as follows:

[a specific user or group of users] are experiencing issues with [problem] today
because of [cause].

5. Brainstorm the solution

Generate ideas
Once your team has a clear understanding of your user’s problems and challenges, it is time to
brainstorm a few possible ideas for solutions.
As you generate ideas keep your problem statement in mind and respond to the challenge
question:
How might we use the power of machine learning to help people increase their knowledge or
improve their skills?
Individually write down or draw your ideas for a solution. Set a timer for five minutes and come
up with as many ideas as you can.

Combine everyone’s ideas and group similar ideas together.

➢ Vote on the best ideas. Each participant should have 2 to 3 votes.

➢ Listen to each team member’s reasoning behind their votes. This is more important than
the votes themselves.
➢ Which idea would be best for the user and why?
➢ Can you get your user to tell you directly what it is they need or want most?
➢ Is there any other information that is needed?

Prioritise your ideas

When many ideas are being considered, it helps to put your ideas in a priority grid focusing on
importance to the user and on how easy it will be for the team to develop.
Value to the user = low to high
Ease of development by the team = easy to hard

Plot your ideas on the graph.


Focus on what is valuable to the user and easy to develop and implement. Avoid ideas that
fall in the ‘NOT A GOOD THING TO DO’ section of the grid (low value and hard to develop).

Choose an idea

Take another five minutes to refine ideas for your solution. Think of the solutions that can be
designed within a short timeframe (about one hour total) using one of the AI tools that you are
comfortable with. Pick the best idea.

Example:
My AI solution will help someone learn yoga poses. I will train an ML model using Teachable
Machine to recognize different yoga poses. The tool’s confidence level at identifying the pose will
help the person know if they have done the pose correctly and how they can adjust their position
to increase the machine’s confidence level. A high confidence level means that the person is
doing the pose correctly.

6. Design your solution
Document the steps that your users will now do using your AI tool. Describe what the
user does first, what happens next, and so on. You may also represent the steps as
sketches or a storyboard.
Example
The user does a yoga pose in front of the computer webcam.
The program indicates the name of the pose and the confidence level that the name
of the pose is correct.
The user keeps practicing the pose until the confidence level is high and they are
satisfied that they have done the pose correctly.
The user repeats this process with another pose.

7. Identify and collect data


Decide what type of data you need for your AI solution, such as images, sounds, text.
Decide what labels you would like the computer to assign to the images, sounds or
text when the computer recognises them. For instance, you could have the labels,
‘happy’, ‘sad’ and ‘neutral’ which the computer can output when you show it images of
people’s faces.

Collecting data
Data such as images, audio, video and statistics can be collected using recording devices
or sensors.
You can also create your own data set by observing behaviours and logging your
observations and information about your subjects in a table or spreadsheet. For
instance, you might want to log data about your run daily – start time, end time,
distance covered, location, etc.
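
For example, a daily log like this can be kept with Python's built-in csv module. The sketch below is illustrative only; the file name run_log.csv and the column names are assumptions, not a required format:

```python
# A minimal sketch of logging daily observations to a CSV file.
# The file name and columns are assumptions chosen for illustration.
import csv
import os
from datetime import date

LOG_FILE = "run_log.csv"
FIELDS = ["date", "start_time", "end_time", "distance_km", "location"]

def log_run(start_time, end_time, distance_km, location):
    """Append one day's run to the CSV log, writing a header row if the file is new."""
    is_new = not os.path.exists(LOG_FILE)
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(FIELDS)
        writer.writerow([date.today().isoformat(), start_time,
                         end_time, distance_km, location])

log_run("06:30", "07:10", 5.2, "City Park")
```

A spreadsheet works just as well; the point is to record the same fields consistently every day so the data can later train a model.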
You can also find data sets online, but you need to ensure that the data is from a credible
source and that you have permission from the owner to use it for your AI solution.
You can find some data sets from sites such as Kaggle, Google Data or the Atlas of
Living Australia.
Think of ethical considerations when collecting data. Will the process of collecting data
harm anything or anyone? For instance, when you capture photos of animals in the
wild, will this process endanger them or their habitats?
If you collect images of people, will this put their privacy at risk or damage their
reputation? How will you ensure privacy?
Other Considerations when working with data
➢ Is the data accurate and recent?

➢ Do you have a representative sample that is diverse enough to represent the
population you are examining?
➢ De-identifying: which features must be removed to safeguard the privacy of
people’s data?
➢ Are you taking care to avoid unfair bias?

8. Build your prototype


For teams who will not be building a working prototype, you may simply create a
concept of your prototype in this section.
Creating a concept of your prototype
Develop a concept of your prototype. Take your user experience design in Section 6
and expand on it to include a sketch of each screen that is part of your solution with
details of what the users will do as they interact with your solution. Show how users
will transition from one step to the next and include features of how your solution will
work (screens, buttons, overall layout, etc).
Document your prototype concept.
Provide a brief description that explains each step in the user’s experience. Include
information about how the solution uses data, makes decisions, and the final output
or action.
Choose your AI tools to build your solution
Solving a real-world problem with AI will typically require you to train AI models to
recognise and classify images, sound, text, numbers. There are also pre-trained
models that have been trained to classify particular data. The models can then be used
to initiate a set of actions which can be coded into programs.
There are a number of tools you could use to create and train your AI models as well
as build programs. You will encounter the tools listed below in the AI Foundations
course by IBM and ISTE. These tools do not require any prior coding knowledge and
their web sites come with good support materials for students and teachers.
Machine Learning for Kids (beginner, intermediate, advanced)
Machine Learning for Kids is an easy-to-use platform that you can use to create and
train your own model. You can collect and classify your data, or you can use several
pretrained models. While doing so, you will also learn about AI concepts and what
goes on under the covers, such as how the model was trained.
To build a program using your model, you have access to coding platforms from within
the Machine Learning for Kids site. These include Scratch, App Inventor and Python.
The platform also provides access to pre-trained models and can import TensorFlow
models, such as those created using Teachable Machine.
Machine Learning for Kids requires you to register an account on the IBM cloud so
that you can have free access to IBM Watson AI and work as a team on the project.

Teachable Machine (beginner)
Teachable Machine is a web-based tool for creating machine learning models. You
can easily train a computer to recognize your images and sounds without writing any
machine learning code. However, to create an application that uses your model, you
would need to do so in a separate coding platform capable of handling your model’s
TensorFlow format.
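
As a hedged illustration of that hand-off, Teachable Machine can export an image model in Keras format; a minimal sketch of loading such an export in Python might look like this (the file names keras_model.h5 and labels.txt follow Teachable Machine's usual Keras export, while the test image path and the [-1, 1] preprocessing are assumptions based on its sample code):

```python
# A minimal sketch, assuming a Teachable Machine image model exported in
# Keras format (keras_model.h5 plus labels.txt) and TensorFlow installed.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model("keras_model.h5", compile=False)
labels = [line.strip() for line in open("labels.txt")]

# Teachable Machine image models expect 224x224 RGB input scaled to [-1, 1].
image = Image.open("test_pose.jpg").convert("RGB").resize((224, 224))
x = (np.asarray(image, dtype=np.float32) / 127.5) - 1.0
x = np.expand_dims(x, axis=0)

predictions = model.predict(x)[0]
best = int(np.argmax(predictions))
print(f"{labels[best]} (confidence {predictions[best]:.2f})")
```

Running this prints the top label with its confidence, which is exactly the signal the yoga-pose example in Section 5 relies on.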
MIT App Inventor (beginner/intermediate)
MIT App Inventor is a visual coding tool, similar to Scratch, for creating fully functional
apps for Android smartphones and tablets. For those without Android smartphones,
there is an online emulator for Windows, Mac OS X and Linux machines. To enable
machine learning, you will need to import some AI extensions for MIT App Inventor.
IBM Watson Assistant (advanced)
IBM Watson Assistant is an AI product that lets you build intelligent chatbots that
handle conversational interactions with users on any topic. The chatbot can be
integrated into a web site, an app or a messaging channel.
Write down which AI tool(s) you will use to build your solution.
Creating your working prototype
➢ Gather the training data you need to train your model. If you are generating
data in real-time, for example, posing in front of a webcam, list the actions or
items you will show.
➢ Follow the instructions for the tool you are using and train your model.
➢ Test your model using new data to see how well it recognizes or responds to
the information.
➢ Add more training data as needed to increase the tool’s accuracy.
Write down what decisions or outputs your tool will generate and what further action
needs to be taken after a decision is made.

9. Test your prototype


When your prototype is ready, it is time to get some users involved in testing your
solution.
For teams who are not building a working prototype at this time, you can still get users
to review your concept. Show your users your concept and get feedback about
whether they think this would solve their problems if developed. Ask them for
improvements they would like to see.
For teams with a working prototype, your goal is to find out the following:
➢ How well does the prototype work and solve the users’ problem?
➢ What needs to change now to meet the minimum user requirements?
➢ What improvements can be made later?

You can keep refining your prototype so that it gets better and better; it is up to you
and your users to decide how many more improvements should be made based on
your available time and the cost of doing so.
Selecting test users/data
Describe which users/data you will select to test your solution, why they are the right
ones and whether they are representative of your subjects.
Observe your users during testing
Tips for testing your solution with your users:
➢ Take detailed notes as you observe your users.
➢ Allow your users to experience the solution without explaining it. Give only basic
information to get them started but let them explore how it functions. If you are
testing a prototype concept rather than a built prototype, allow your users to
examine the visual representations and read the explanations for each step.
➢ Allow your users to make mistakes while testing your prototype. Don’t correct
them right away if they do something wrong. This is valuable information that
you can use to determine if something is unclear about your solution or how
users might interpret it in a different way.
➢ Take note of their questions. These questions provide insight into areas that are
not clear in your design and can also provide inspiration for new features.
List your observations of your users as they tested your solution.

Ask users for feedback after testing


Ask your users about their experience or impressions as they are exploring the
prototype. You might ask the following questions or come up with your own:
➢ What were you thinking as you used this tool?
➢ How did this solution make you feel?
➢ What confused you?
➢ What surprised you?
➢ What do you wish the tool would do? Why?

Refining the prototype

Based on user testing, write down what recommendations you can act on now so
that the prototype can be used. Write down what recommendations you can leave for
later. After making changes to refine your prototype, iterate and test once again.

10. Reflect as a team
Take a moment to reflect on how team members collaborated with each other during
the project.
How did you actively work with each other and with your users and stakeholders? If
you were keeping a diary or log of your team meetings and to-do lists, you may include
those in this section too.

11. Reflect individually


A good way to identify what you have learned is to ask yourself what surprised you
during the project. List the things that surprised you and any other thoughts you might
have on the issue(s) in your local community or on what you have learned about AI.
Each team member should write their own individual reflection in this section.

12. Create a video pitch


Write your script
Imagine you are seeking media, support or funding for your solution. Your video pitch
is your chance to highlight the issue you want to solve and why it matters to you, to
your stakeholders and to your community or the world.
Start by writing a script for your video.
Introduce your team:
Talk about your team, how you came up with the idea and why the issue is really
important to you.
Introduce the problem and your solution: What is the problem you are trying to solve?
Who is affected? Who will be using your solution to help those affected by the
problem?
Include sketches of your ideas from your brainstorming, clips of you training your
model, evidence of your users testing the solution, screenshots of the solution, the
solution in use and the impact it has made.
Why are you the best people to deliver a solution:
Show your understanding of AI and of your users’ needs. Talk about the roles of the
team members, your commitment to the project, how you collaborated effectively and
how you have acted ethically and responsibly in developing the solution.
The plan ahead:
Talk about what further improvements you would like to do with your solution, how far
you would like to take it, what you need to achieve this and how passionate you are
about making this happen.

Start filming and editing your video
You can find tips and ideas on how to develop a pitch video here.
Here are some tips to consider when creating your video:

PRESENTATION
➢ Ensure your video follows a clear and logical sequence and is well-paced and
clearly communicated.
➢ Be illustrative. Use demonstrations of your prototype and/or visuals where
appropriate to illustrate examples.
➢ Present accurate science and technology and use appropriate language.
➢ Let your passion for your chosen topic/idea show through when presenting.
➢ Ensure your video has good sound and image quality.
➢ Keep your videos no longer than 3 minutes.

CONTENT
➢ Show how well your solution addresses the defined problem.
➢ Show how your solution meets user needs and provides a better user
experience. You can ask users to speak about the solution and how it will
improve their experience.
➢ Demonstrate the originality and creativity in your proposed problem and
solution.
➢ Provide insight into how well the team collaborated. Showcase team members
clearly illustrating their role in the project.
➢ Provide insight into the team members’ learning journey through the challenge
and how your AI knowledge and design thinking skills have developed.

Submitting your video


Upload your video to YouTube or Vimeo and share the URL in your Project Logbook.
If the video is private, please include the password with the link.

Sample Projects

Data Science
1. AI-Enabled Attendance System
2. Breast Cancer Detection Model

Natural Language Processing
3. AI Voice Assistant for People with Disabilities
4. Hate Speech Detection

Computer Vision
5. School Surveillance System
6. Lung Cancer Detection Model

Low Code (Google Dialogflow)
7. Leak Weak

No Code (MIT App Inventor)
8. Image Classification (Computer Vision)
9. Fake Voices: The Ethics of Deepfakes (NLP)
10. An app to track mood over a period and visualize the data (Data Science)

Project 1: AI-Enabled Attendance System

1. Problem Statement: Current transportation systems lack efficient means of
notifying parents of their child's safe arrival home from school. Manual methods
of ensuring child safety post-school hours are unreliable and prone to human error.
2. Users/Stakeholders:
• Parents seeking reassurance of their child's safety.
• Developers creating the facial attendance system.
• Schools or organizations implementing the
technology.

3. Objectives:
• To provide parents with real-time notifications upon
their child's arrival.
• Enhance child safety and provide peace of mind to
parents.

4. Features:
• Facial recognition technology for accurate child
identification.
• Automatic email notifications to parents upon
detection.
• User-friendly interface for easy setup and
monitoring.

5. AI Used:
• Computer vision algorithms for facial recognition.
• Machine learning for improving detection accuracy
over time.

6. Data Used:
• Facial data of registered children for identification.
• Historical data for algorithm training and
optimization.

7. Solution: • Develop a comprehensive facial attendance
system integrating advanced computer vision
technology.
• Implement a secure database of registered
children's facial data for accurate identification.
• Design an intuitive user interface allowing parents
to easily register their child's face and receive
notifications.
• Integrate automatic email notification functionality
to alert parents in real-time upon their child's safe
arrival home.
• Provide robust security measures to protect sensitive
data and ensure system reliability.
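
As a hedged sketch of the automatic email-notification step in the solution above, the alert could be sent with Python's standard smtplib and email modules; the SMTP server, credentials and addresses below are placeholders, not part of the original project:

```python
# A minimal sketch of the email-notification step, using only Python's
# standard library. The SMTP server, credentials and addresses are
# placeholders; a real deployment needs its provider's settings.
import smtplib
from email.message import EmailMessage

def notify_parent(parent_email: str, child_name: str) -> None:
    """Send a simple arrival notification to a parent's email address."""
    msg = EmailMessage()
    msg["Subject"] = f"{child_name} has arrived safely"
    msg["From"] = "[email protected]"   # placeholder sender
    msg["To"] = parent_email
    msg.set_content(f"{child_name} was recognised by the attendance camera just now.")

    with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
        server.login("[email protected]", "app-password")  # placeholder login
        server.send_message(msg)

notify_parent("[email protected]", "Asha")
```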
8. SDG Involved:
• Supports SDG 3 (Good Health and Well-being) by
enhancing child safety.
• Contributes to SDG 4 (Quality Education) by
promoting secure school-to-home transitions.

9. Future Scope:
• Expansion to include additional features such as
geolocation tracking.
• Integration with smart home systems for enhanced
security measures.

Link of Project on Github: https://github.com/1M1B/AI-Cookbook-Projects/tree/master/Project%201-%20AI-Enabled%20Attendance%20System

Project 2: Breast Cancer Detection Model

1. Problem Statement: The early detection of breast cancer is crucial for
effective treatment and improved survival rates.
However, traditional methods of diagnosis may be time-
consuming, expensive, and prone to human error.
Developing an accurate and efficient breast cancer
detection model using artificial intelligence (AI) can help
address these challenges.
2. Users/Stakeholders: • Patients: Individuals concerned about their breast
health.
• Healthcare Professionals: Doctors, radiologists, and
medical staff involved in breast cancer diagnosis and
treatment.
• Researchers: Professionals working in the field of
medical imaging and AI.
• Policy Makers: Government bodies and organizations
responsible for healthcare policy and funding.
3. Objectives: • Develop an AI-powered breast cancer detection
model capable of accurately analysing medical
images (e.g., mammograms, ultrasounds).
• Improve early detection rates and reduce false
positives/negatives.
• Provide a user-friendly interface for healthcare
professionals to interpret model results.
• Enhance the efficiency and accuracy of breast cancer
diagnosis.
• Contribute to the advancement of medical AI research
and technology.

4. Features: • AI Algorithm: Implementation of machine learning for diagnosis.
• User Interface: A user-friendly interface for healthcare
professionals to input data and interpret model
results.
• Real-time Analysis: Capability to analyse medical
data in real-time for timely diagnosis.
• Performance Metrics: Evaluation of model
performance using metrics such as sensitivity,
specificity, and accuracy.
5. AI Used: Machine Learning: Supervised learning algorithms (e.g.,
logistic regression, support vector machines) for
classification tasks.
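
As a hedged sketch of this supervised-learning approach, the snippet below trains a logistic regression classifier on scikit-learn's built-in (tabular, not image) breast cancer dataset and reports the sensitivity, specificity and accuracy metrics mentioned under Features; it is illustrative only, not the project's actual pipeline:

```python
# A minimal sketch using scikit-learn's built-in tabular breast cancer
# dataset; a real project would train on labelled medical images instead.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("accuracy   :", accuracy_score(y_test, y_pred))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```

With image data, the same train/evaluate pattern applies; only the model and features change.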

6. Data Used: Labelled Datasets: Annotated datasets with labels
indicating the presence or absence of breast cancer.

7. Solution: • Development of a breast cancer detection model
using AI algorithms trained on medical imaging data.
• Implementation of a user-friendly interface for
healthcare professionals to upload medical images
and receive model predictions.
• Integration with existing healthcare systems for
seamless adoption and use in clinical settings.

8. SDG Involved: Sustainable Development Goal 3: Ensure healthy lives
and promote well-being for all at all ages. By improving
early detection rates and accuracy in breast cancer
diagnosis, the project contributes to achieving this goal.

9. Future Scope: • Expansion to Other Modalities: Extend the model to
analyse medical imaging modalities (e.g., MRI scans)
for comprehensive breast cancer diagnosis.
• Integration with Electronic Health Records (EHR):
Integrate the model with EHR systems to streamline
patient data management and improve healthcare
workflows.
• Continuous Improvement: Continuously update and
refine the model using new data and advancements
in AI technology to enhance performance and
accuracy over time.

Link of Project on Github: https://github.com/1M1B/AI-Cookbook-Projects/tree/master/Project%202-%20Breast%20Cancer%20Detection%20Model

Project 3: AI Voice Assistant for People with Disabilities

1. Problem Statement: Many individuals with disabilities face challenges in
accessing and utilizing technology due to physical
limitations or barriers. Traditional voice assistants may not
adequately cater to their specific needs, leading to
difficulties in communication, accessing information, and
performing tasks independently.
2. Users/Stakeholders: • Individuals with disabilities (visual impairment, mobility
impairment, etc.)
• Caregivers and family members
• Developers and designers of the AI voice assistant
• Disability advocacy groups and organizations
3. Objectives: • To develop an AI voice assistant tailored to the needs
of people with disabilities.
• To improve accessibility and usability for individuals
with disabilities in utilizing technology.
• To enhance independence, productivity, and quality of
life for users with disabilities.

4. Features: • Natural language processing capabilities to
understand and respond to user queries effectively.

5. AI Used: The model incorporates several NLP-related
functionalities such as:
• Speech Recognition: The code uses the speech
recognition library to recognize spoken commands.
• Language Understanding: It interprets user queries
and commands to perform various actions, such as
opening websites or sending WhatsApp messages.
• Text Generation: The code utilizes the OpenAI API
for text generation, allowing the voice assistant to
provide responses to user queries and engage in
conversation.
• Text Manipulation: It includes functionalities for
extracting relevant information from user
messages, such as phone numbers or specific
message content.
• Text-to-Speech: The code utilizes the
win32com.client library to enable the voice
assistant to speak responses to the user.
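
A minimal sketch of how the speech-recognition and text-to-speech pieces above can fit together, assuming the SpeechRecognition and pywin32 packages, a working microphone, and Windows (for the SAPI voice); the OpenAI text-generation step is omitted because its API details vary by library version:

```python
# A minimal listen-and-speak loop; assumes the SpeechRecognition package,
# PyAudio for microphone access, and Windows SAPI via pywin32 for speech.
import speech_recognition as sr
import win32com.client

recognizer = sr.Recognizer()
speaker = win32com.client.Dispatch("SAPI.SpVoice")

def listen() -> str:
    """Capture one utterance from the microphone and return it as text."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # free web recognizer

def speak(text: str) -> None:
    speaker.Speak(text)

command = listen()
if "open website" in command.lower():
    import webbrowser
    webbrowser.open("https://example.com")  # placeholder URL
    speak("Opening the website for you.")
else:
    speak(f"You said: {command}")
```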

6. Data Used: Speech recognition: audio recordings paired with
transcriptions. OpenAI ChatGPT: a large corpus of text
conversations and corresponding responses for training.

7. Solution: An AI voice assistant will greatly benefit individuals with
disabilities, particularly those with mobility impairments or
visual impairments. Here's how:

1. Accessibility: People with mobility impairments may
find it challenging to interact with traditional input
devices like keyboards or mice. A voice assistant
allows them to control devices and access information
using voice commands, making technology more
accessible.

2. Hands-free Operation: For individuals with mobility
impairments who have limited or no use of their hands,
a voice assistant enables hands-free operation of
devices, allowing them to perform tasks such as
browsing the internet, sending messages, or
controlling smart home devices.

3. Assistive Technology: Voice assistants can serve
as assistive technology tools for individuals with visual
impairments by providing auditory feedback and
enabling voice-controlled navigation of digital content,
including websites and applications.

4. Communication Aid: For individuals with speech
impairments, voice assistants can act as
communication aids, allowing them to generate spoken
output by typing text input or selecting predefined
phrases using their voice.

Overall, the voice assistant implemented in the code
contributes to enhancing the accessibility and usability of
technology for people with disabilities, empowering them
to navigate digital environments and perform various tasks
more independently.
8. SDG Involved: Goal 3: Good Health and Well-being – By enhancing
accessibility to technology, the project contributes to
improving the quality of life and well-being of individuals
with disabilities.
Goal 10: Reduced Inequalities – The AI voice assistant
aims to reduce inequalities by providing equal access to
information and resources for people with disabilities.

9. Future Scope: • Expansion to support additional languages and
dialects.
• Integration with wearable devices for hands-free
operation.
• Continuous updates and improvements based on user
feedback and technological advancements.
• Collaboration with healthcare providers and assistive
technology experts to address specific needs and
challenges of different disability groups.

Link of Project on Github: https://github.com/1M1B/AI-Cookbook-Projects/tree/master/Project%203-%20AI%20Voice%20Assistant%20for%20People%20with%20Disabilities

Project 4: Hate Speech Detection Model

1. Problem Statement: Hate speech perpetuates discrimination and
prejudice, which can result in unequal
treatment and opportunities based on factors
like race, ethnicity, religion, gender, or sexual
orientation. It can lead to increased stress,
anxiety, depression, and reduced overall well-
being for those who are targeted by it.
2. Users/Stakeholders: • General public using social media platforms
like Twitter, Facebook, LinkedIn, etc.

3. Objectives: • Addressing the issue of hate speech often
involves efforts to educate and raise
awareness, promote tolerance and
inclusivity.
• Creating Public awareness and reporting
mechanisms for combating hate speech
and its effects.
• Contribute to the advancement of AI
research and technology.

4. Features:
1. Accessing the Tool
2. Customizing Preferences
3. Real-Time Monitoring
4. Receiving Alerts
5. Reviewing Detected Content
6. Decision-Making
7. User Feedback
8. Settings Adjustment
9. Educational Resources
10. Advocacy and Reporting
5. AI Used: Machine Learning: Supervised learning
algorithms (Decision Tree Classifier) for
classification tasks.
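
A hedged sketch of the Decision Tree Classifier approach named above, using scikit-learn; the tiny labelled sentences below are invented stand-ins for a real annotated dataset:

```python
# A minimal sketch of text classification with a decision tree; the tiny
# training set below is invented and stands in for a real labelled dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

texts = [
    "I hate people like you",          # hateful (toy example)
    "You are all worthless",           # hateful (toy example)
    "Have a wonderful day everyone",   # neutral
    "The match was really exciting",   # neutral
]
labels = ["hate", "hate", "neutral", "neutral"]

# Turn sentences into bag-of-words count vectors.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

model = DecisionTreeClassifier(random_state=0)
model.fit(X, labels)

test = vectorizer.transform(["what a wonderful exciting day"])
print(model.predict(test))  # expected: ['neutral']
```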

6. Data Used: Labeled Datasets: Annotated datasets with
labels indicating hate speech or neutral text.

7. Solution: Development of a hate speech detection model
using AI algorithms trained on labelled data.

8. SDG Involved: Sustainable Development Goal 3: Ensure
healthy lives and promote well-being for all at
all ages. By improving the mental health of people,
the project contributes to achieving this goal.

9. Future Scope: • Continuously update and refine the model
using new data and advancements in AI technology to
enhance performance and accuracy over time.

Link of Project on Github: https://github.com/1M1B/AI-Cookbook-Projects/tree/master/Project%204-%20Hate%20Speech%20Detection%20Model

Project 5: School Surveillance System

1. Problem Statement: Ensuring the safety and security of students
and staff within the school campus by detecting
the entry of unknown individuals.
2. Users/Stakeholders: • School administrators
• Teachers
• Students
• Parents
• Security personnel
3. Objectives: • Enhance security within the school
premises.
• Detect unauthorized entry promptly.
• Ensure the safety of students and staff.
• Provide peace of mind to parents and
guardians.
• Implement an efficient surveillance system
without compromising privacy.
4. Features: • Facial Recognition: Identify known
individuals such as students, teachers, and
staff.
• Unknown Person Detection: Detect and
alert security personnel about the presence
of unknown individuals.
• Database Management: Manage a
database of known faces and individuals
associated with the school.
• User-Friendly Interface: Easy-to-use
interface for administrators and security
personnel.

5. AI Used: • Facial Recognition: Deep learning
algorithms for accurate facial recognition.
• Object Detection: AI-based algorithms for
detecting and tracking individuals.
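
As a hedged illustration of the recognition step, the open-source face_recognition library (an assumption; the project's actual stack is in the GitHub repository below) can compare faces in a camera frame against a database of known individuals:

```python
# A minimal sketch with the open-source face_recognition library; file
# names are placeholders for a school's database of registered faces.
import face_recognition

# Encode one known face (e.g., a registered student photo).
known_image = face_recognition.load_image_file("student_asha.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face found in a frame captured at the school gate.
frame = face_recognition.load_image_file("gate_camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    if not match:
        print("Unknown person detected - alerting security personnel.")
```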

6. Data Used: Facial Data: Images of students, teachers, and
staff for training the facial recognition model.

7. Solution: Implement a comprehensive surveillance
system equipped with AI-based facial recognition and
object detection algorithms. Cameras strategically placed
at school entrances and exits continuously monitor the
surroundings. The system compares the detected faces
with a database of known individuals.
8. SDG Involved: • SDG 4: Quality Education: Ensure a safe
and secure learning environment for
students and staff.
• SDG 9: Industry, Innovation, and
Infrastructure: Utilize technology innovation
for enhancing security infrastructure.
• SDG 16: Peace, Justice, and Strong
Institutions: Strengthen security measures
to promote safety and justice within the
school community.
9. Future Scope: • Enhanced Accuracy: Continuously improve
the accuracy of facial recognition algorithms
to minimize false positives and negatives.
• Integration with Other Systems: Integrate
the surveillance system with other security
systems and school management software
for better coordination and efficiency.

Link of Project on Github: https://github.com/1M1B/AI-Cookbook-Projects/tree/master/Project%205-%20School%20Surveillance%20System

Project 6: Lung Cancer Detection Model

1. Problem Statement: Current lung cancer diagnosis methods rely
heavily on manual analysis of scans by
radiologists, which can be time-consuming,
prone to human error, and may miss early-stage
tumors. This can lead to delayed diagnoses and
hinder the effectiveness of treatment options.
2. Users/Stakeholders: • Patients: Individuals concerned about their
lung health.
• Healthcare Professionals: Doctors and
medical staff involved in lung cancer
diagnosis and treatment.
• Researchers: Professionals working in the
field of medical imaging and AI.
• Policy Makers: Government bodies and
organizations responsible for healthcare
policy and funding.
3. Objectives: • Develop an AI-powered lung cancer detection
model capable of accurately analyzing
medical X-ray images.
• Improve early detection rates and reduce
false positives/negatives.
• Enhance the efficiency and accuracy of lung
cancer diagnosis.
• Contribute to the advancement of medical AI
research and technology.

4. Features: • AI Algorithm: Implementation of machine
learning for diagnosis.
• Real-time Analysis: Capability to analyze
medical X-ray images in real-time for timely
diagnosis.
5. AI Used: Machine Learning: Supervised Learning
(Computer Vision using Python)
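
A hedged sketch of a small convolutional network for this task in Keras; the dataset directory layout (chest_xrays/ with one sub-folder per class), the image size, and the training settings are assumptions chosen for illustration:

```python
# A minimal binary image classifier in Keras; the dataset directory
# (chest_xrays/ with one sub-folder per class) is an assumed layout.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xrays/", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cancer vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```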

6. Data Used: Labelled Datasets: Annotated datasets with
images indicating the presence or absence of lung cancer.

7. Solution: • Development of a lung cancer detection
model using AI algorithms trained on medical imaging data.
• Integration with existing healthcare systems
for seamless adoption and use in clinical settings.

8. SDG Involved: Sustainable Development Goal 3: Ensure
healthy lives and promote well-being for all at all
ages. By improving early detection rates and
accuracy in lung cancer diagnosis, the project
contributes to achieving this goal.
9. Future Scope: • Integration with Electronic Health Records
(EHR): Integrate the model with EHR systems
to streamline patient data management and
improve healthcare workflows.
• Continuous Improvement: Continuously
update and refine the model using new data
and advancements in AI technology to
enhance performance and accuracy over
time.

Link of Project on Github: https://github.com/1M1B/AI-Cookbook-Projects/tree/master/Project%206-%20Lung%20Cancer%20Detection%20Model

Project 7: Leak Weak

1. Problem Statement: Many individuals, especially young girls and women,
have questions and concerns about menstruation but
may feel embarrassed or uncomfortable discussing them
openly. Access to accurate information about periods is
essential for menstrual health and hygiene.

2. Users/Stakeholders: • Adolescent girls
• Women of reproductive age
• Parents and caregivers
• Teachers and educators
• Healthcare professionals
3. Objectives: • Provide accurate and reliable information about
menstruation and menstrual health.
• Address common questions and concerns related to
periods in a confidential and non-judgmental manner.
• Promote menstrual hygiene practices and debunk
myths and misconceptions surrounding menstruation.
• Empower individuals to manage their menstrual
health confidently and comfortably.
4. Features: • Chatbot interface for asking questions and receiving
personalized responses.
• Comprehensive FAQ section covering a wide range of
topics related to periods.
• Educational articles and resources on menstrual
health and hygiene.
• Period tracking feature
5. AI Used: Natural Language Processing (NLP) for understanding
and processing user queries.

6. Data Used: Medical literature and resources on menstrual health and hygiene.

7. Solution: The mobile application will serve as a user-friendly
platform for individuals to access accurate information
and support regarding menstruation. The integrated
chatbot provides responses in real-time. The app will also
feature educational resources and period tracking tools.
8. SDG Involved: Goal 3: Good Health and Well-being (promoting
menstrual health and hygiene)
Goal 4: Quality Education (providing accurate and
accessible information about periods)

9. Future Scope: • Integration with telemedicine services for connecting
users with healthcare professionals for personalized
advice and consultations.
• Expansion of language support and cultural
adaptation to reach a wider audience.
• Collaboration with schools, NGOs, and government
agencies to promote menstrual health education and
awareness.
• Incorporation of advanced AI capabilities for
predictive analytics and personalized health
recommendations.
• Partnerships with menstrual product manufacturers
and retailers for product recommendations and
promotions.

Project Link (for Chatbot Ihita): https://console.dialogflow.com/api-client/demo/embedded/7776c630-6ede-4c1a-860a-f8bfc6e7bb46
In this case, the name of the chatbot is Ihita which means “Fighter / Warrior”.
Video Link: https://drive.google.com/file/d/1EB4qye2Wzhhoqqja0wFw81blYD1Io1XK/view?usp=sharing

Steps to create the chatbot using Dialogflow:

Use the following link to get started with Dialogflow:
https://dialogflow.cloud.google.com/#/getStarted
Before creating the chatbot in Dialogflow, make sure you are ready with the list of
questions and answers which you will use to train the chatbot. Some sample
questions are listed here.
1. Can I exercise while I am on my period?
2. What should I do to relieve my cramps?
3. What if I skip a period? Etc.
The list of questions with the answers can be accessed from this link:
https://drive.google.com/file/d/1_8wsyhP5FsnKK141wx43XJSCbL0Jl3ys/view?usp=sharing
Then, in Dialogflow, create an Agent by giving an appropriate name for the chatbot.
Once the Agent is created, it prompts you to create the intents.
An intent categorizes an end-user's intention for one conversation turn. For each
agent, you define many intents, where your combined intents can handle a complete
conversation. When an Agent is created, two intents are created by default: the
Default Welcome Intent and the Default Fallback Intent. The rest are to be created
depending on the idea behind the chatbot. In the Default Welcome Intent, when the
user gives a greeting message, your chatbot should respond with the kind of services
it can perform. So, give only one response and remove all the other default welcome
greetings.
All actions should be saved for the Agent to update itself.
The next step is to create as many intents as required by the objective of the chatbot.
Along with each intent, the training phrases with the respective answers should be
given and trained.
Entities can also be created in Dialogflow. An entity defines the type of information
you want to extract from user input. For example, vegetable could be the name of an
entity type. Clicking Create Entity in the Dialogflow Console creates an entity type.
You can see all the entities associated with an intent by clicking on it in Dialogflow.
Whereas intents represent the user's overall intention in their utterance, entities
represent key words in the utterance that you want to extract.
After this, choose the Integrations option from the menu (left-hand side) and choose
Web Demo. In the next screen, click on Enable, then click on the hyperlink to open
the chatbot in a separate tab. When the chatbot opens in a separate tab, greet it
with Hi or any welcome greeting and see the response.
Other than this method, the chatbot can be integrated into any other platform like
Google Sites, Weebly or Thunkable.
For example, if you want the chatbot in a Google Site, open the site, select the
Embed option, then choose the Embed code option, paste the embed code, click
Insert, then Next and finally Insert. The Dialogflow chatbot is now inserted in the
Google Site. You can test the chatbot by clicking the preview button and then
publish the site.

Project 8: Image Classification App Development

1. Problem Statement: There is a need to introduce students to the basics of
machine learning in a practical and engaging manner.
Additionally, providing hands-on experience in creating
apps that implement machine learning concepts through
image classification can enhance students'
understanding and interest in this field.
2. Users/Stakeholders: - Educators and teachers
- Students (elementary, middle, or high school level)
- Educational institutions
- Parents and caregivers
3. Objectives: 1. Introduce students to the fundamentals of machine
learning in a user-friendly way
2. Provide practical experience in app development
using MIT App Inventor and the LookExtension.
3. Foster creativity and problem-solving skills among
students through app creation.
4. Enhance understanding of image classification
concepts and their real-world applications.
4. Features: 1. Image classification functionality using smartphone or
tablet cameras.
2. Confidence level display for each classification result.
3. User-friendly interface for app development using MIT
App Inventor.
4. Integration of LookExtension for machine learning
capabilities.
5. Interactive learning modules to explain machine
learning concepts.
5. AI Used: - Machine learning algorithms for image classification.
- LookExtension from MIT App Inventor for implementing
machine learning in the app.

6. Data Used: - Training datasets for machine learning models (e.g.,
ImageNet, CIFAR-10).
- Custom image datasets collected by students for app
development and testing.
7. Solution: The solution involves guiding students through the
process of creating their own image classification apps
using MIT App Inventor and the LookExtension. By
allowing students to take photos with their devices and
classify objects with confidence levels, the app provides
a hands-on learning experience in machine learning
concepts.

8. SDG Involved:
- Goal 4: Quality Education (enhancing STEM education
and digital literacy)
- Goal 9: Industry, Innovation, and Infrastructure
(promoting innovation in technology education)
9. Future Scope: 1. Expansion of app functionality to include other
machine learning tasks (e.g., text recognition, sentiment
analysis).
2. Collaboration with educational institutions to integrate
the app development curriculum into existing STEM
programs.
3. Creation of online resources and tutorials for self-
paced learning and community engagement.
4. Integration of feedback mechanisms to gather user
insights and improve app usability.
5. Exploration of partnerships with industry professionals
to offer mentorship and career guidance opportunities.

Project Link: https://appinventor.mit.edu/explore/resources/ai/image-classification-look-extension

Project 9: Fake Voices: The Ethics of Deepfakes
1. Problem Statement: Introducing students to synthetic media and its potential
impacts, including deepfake technology.

2. Users/Stakeholders: • Educators
• Students

3. Objectives: • To teach students about coding and artificial intelligence.
• To provide a basic understanding of machine
learning.
• To prompt students to predict future uses and
abuses of synthetic media.
4. Features: • Creation of a smartphone app that can modify
speech rate and pitch.
• Group presentations on the future impacts of
deepfake media.

5. AI Used: Machine learning algorithms for modifying speech characteristics.

6. Data Used: Speech data for training the app.
Examples of deepfake media for discussion.

7. Solution: • Hands-on coding activities to create the app.
• Group discussions and presentations on the future
implications of synthetic media.

8. SDG Involved: • Goal 4: Quality Education - Enhancing students'
understanding of technology and its societal impacts.
• Goal 9: Industry, Innovation, and Infrastructure -
Introducing students to emerging technologies like
synthetic media.

9. Future Scope: • Expanding the curriculum to include more advanced
topics in artificial intelligence and media literacy.
• Integrating real-world case studies and examples of
synthetic media to deepen students' understanding.

Project Link: https://appinventor.mit.edu/explore/resources/ai/fake_voices_unit

Project 10: An app to track mood over a period and visualize the data

1. Problem Statement: Creating an app to track mood over a
period and visualize the data.

2. Users/Stakeholders: • Students
• Educators

3. Objectives: • Develop and/or use a model to generate data.
• Support explanations, predict
phenomena, analyze systems,
and/or solve problems through data
visualization.
• Enhance understanding of data
analysis and visualization
techniques.
4. Features: • Mood tracking functionality for a week
or month.
• Visualization of mood data over time
(happy, angry, sad).
• Step-by-step guide for app creation.
5. AI Used: No specific AI used in this scenario. The
focus is on data generation and
visualization.

6. Data Used: Self-generated data on mood (happy, angry, sad) over the tracking period.
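
Although the app itself is built in MIT App Inventor, the visualization idea can be sketched in a few lines of Python with matplotlib; the week of mood entries below is invented sample data:

```python
# A minimal sketch of visualizing a week of mood entries; the data below
# is invented sample data standing in for a student's real log.
from collections import Counter
import matplotlib.pyplot as plt

week = ["happy", "happy", "sad", "angry", "happy", "sad", "happy"]
counts = Counter(week)

moods = list(counts)
plt.bar(moods, [counts[m] for m in moods])
plt.title("Mood over one week")
plt.xlabel("Mood")
plt.ylabel("Number of days")
plt.show()
```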

7. Solution: • Creation of an app that tracks mood and visualizes the data.
• Step-by-step guide for students to
follow along and create the app.
8. SDG Involved: Goal 4: Quality Education - Enhancing
students' data literacy and analytical
skills.
Goal 3: Good Health and Well-being -
Promoting self-awareness and mental
health monitoring.

9. Future Scope: • Expanding the app to include more
sophisticated mood tracking features.
• Integrating machine learning
algorithms to predict mood patterns
based on various factors.
• Incorporating feedback mechanisms
for users to reflect on and improve
their mood over time.

Project Link: https://appinventor.mit.edu/explore/ai2/data_science_unit
