At the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), we bring the world together to tackle humanitarian emergencies and save the lives of people caught in crises.
Description of Activities on AI
Project 1: Global Tropical Cyclone Impact Model
Tropical cyclones (known as hurricanes or typhoons in certain regions) can cause significant humanitarian impact, with up to 800 million people affected yearly around the world. By using machine learning methods and existing forecasts, the impact of cyclones can now be anticipated, allowing humanitarian organizations to respond more quickly and efficiently.
The project initially aimed to develop a trigger mechanism for the tropical cyclone early action protocol of the Philippines Red Cross Forecast-based Financing project, in partnership with The Netherlands Red Cross’ data and digital initiative ‘510’. A model was developed that predicts the potential damage of a tropical cyclone before landfall. The model uses ‘inputs’ or ‘features’ derived from the cyclone track, such as wind speed and rainfall, as well as static features like topography, housing construction material, household relative wealth, and population density. Based on these inputs, the model predicts the percentage of completely damaged houses per municipality. The model is currently used in the OCHA anticipatory action framework for the Philippines, where pre-agreed financing is released for a pre-agreed action plan by UN agencies if a certain level of damage is predicted.
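To make the modelling step concrete, the sketch below shows how such a damage-prediction model might be trained on tabular data with one row per municipality per historical cyclone. The file name, column names, and the gradient-boosting model are illustrative assumptions, not the production pipeline.

```python
# Minimal sketch of training a damage-prediction model on tabular data.
# File name, column names and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# One row per municipality per historical cyclone: track-based features,
# static features, and the observed share of completely damaged houses.
df = pd.read_csv("historical_cyclone_impacts.csv")
features = [
    "max_wind_speed", "total_rainfall",            # track-based features
    "mean_slope", "share_light_roof_material",     # static features
    "relative_wealth_index", "population_density",
]
X, y = df[features], df["pct_houses_completely_damaged"]

# In practice, splitting by cyclone event avoids leakage between train and test.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)
print("MAE on held-out data:", mean_absolute_error(y_test, model.predict(X_test)))

# Before landfall, the same features are computed from the forecast track and
# the model predicts the expected damage per municipality (or grid cell).
```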
In collaboration with the ISI Foundation, the model has since been adapted to run on a grid (instead of the specific municipalities of the Philippines) and to use open global data sources, which enables its application globally and improves its accuracy. The grid-based model is currently being extended for use in Fiji to support the OCHA anticipatory action framework there, and will be applied in other contexts in the coming year.
Lessons Learned
One key challenge in the project was accessing and processing all the input data required to train and run the model. Most important is reliable impact data for historical cyclones. To be useful for model training, the impact data needs to be geographically specific, at the very least sub-national. However, this data is often unavailable, which limits the model’s accuracy. Historical forecasts of tropical cyclones can also be difficult to find and access in an easily usable format, further limiting the extent to which the model can be trained. OCHA and its partners are currently working to extend the grid-based model for global use. This requires finding and assessing the validity of global public datasets for model training. As the model is applied to further contexts, other country-specific datasets will be incorporated where possible. The aim is to make the model publicly available in a relatively easy-to-use format, such as a web application.
Project 2: AI-based content classification service
Manually analyzing a large volume of text data can be a tedious and time-consuming task. By leveraging the Azure OpenAI Service, the process becomes significantly more efficient. The service, built on the Microsoft Power Platform, enables end users to feed in text snippets or Excel spreadsheets and receive categorized content in return. This versatile service can be configured to analyze diverse content types, including news snippets, project descriptions, social media posts, or budgetary spreadsheets.
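As an illustration of the kind of call such a service makes under the hood, the sketch below classifies a single text snippet with an Azure OpenAI chat deployment. The endpoint, deployment name, and category list are placeholder assumptions; spreadsheet rows would simply be run through the same function one at a time.

```python
# Minimal sketch of a classification call via the Azure OpenAI chat API.
# Endpoint, deployment name and category list are illustrative placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

CATEGORIES = ["Climate adaptation", "Internet outage", "Data breach", "Other"]

def classify(snippet: str) -> str:
    """Ask the model to assign exactly one of the allowed categories."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # name of the Azure deployment (placeholder)
        messages=[
            {"role": "system",
             "content": "Classify the text into exactly one of these categories: "
                        + ", ".join(CATEGORIES)
                        + ". Reply with the category name only."},
            {"role": "user", "content": snippet},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(classify("Undersea cable damage cut connectivity across the region."))
```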
One notable application involves the Climate Team at OCHA, which uses the service to analyze historical data within the OCHA-administered Country-based Pooled Funds. Project summaries are fed into the service to identify activities specifically relevant to climate adaptation. This enhances OCHA’s ability to optimize the effectiveness and impact of these initiatives in the future.
Additionally, the Digital Services section at OCHA employs the service to analyze internet and cybersecurity incidents, categorizing them into types such as Internet Outage and Data Breach. Classification runs in real time through a Microsoft Power Automate flow, and the data feeds directly into a Power BI dashboard used for monitoring.
We are also exploring applications such as automatically sifting through social media postings to identify those most relevant to humanitarian work.
Lessons Learned
We aim to test the service with diverse content types to comprehensively understand its limitations. When using the service to identify climate adaptation activities, we encountered instances where the AI generated new categories despite explicit prompts to the contrary. To address this, we plan to explore fine-tuned models for specific variations. Engaging in more diverse applications will contribute to a deeper understanding of the service’s capabilities and areas for improvement.
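A simple interim safeguard, not necessarily part of the current service, is to validate the model's answer against the allowed list and fall back to a catch-all label when it invents a new category. A minimal sketch, with an illustrative category list:

```python
# Sketch of a post-hoc guard against invented categories: keep the model's
# answer only if it matches the allowed list, otherwise fall back.
ALLOWED = ["Climate adaptation", "Other"]  # illustrative category list

def enforce_categories(answer: str, allowed: list[str], fallback: str = "Other") -> str:
    for category in allowed:
        if answer.strip().lower() == category.lower():
            return category
    return fallback

print(enforce_categories("climate adaptation", ALLOWED))       # -> "Climate adaptation"
print(enforce_categories("Disaster risk reduction", ALLOWED))   # -> "Other"
```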
Project 3: Automated tagging of ReliefWeb Jobs
This project leverages a Large Language Model to automatically apply certain categories to jobs posted to ReliefWeb. The intention is to reduce human effort in areas where we believe AI can provide a suitable solution, thereby allowing our human capacity to focus on more impactful activities.
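A minimal sketch of what such tagging can look like: a single prompt that asks an LLM to pick from controlled vocabularies and return structured output. The client, model name, and example tag lists below are illustrative assumptions, not the production configuration.

```python
# Sketch of zero-shot tagging of a job posting against controlled vocabularies.
# The client, model name and vocabularies are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()  # stand-in for whichever LLM endpoint is used

CAREER_CATEGORIES = ["Information Management", "Logistics", "Human Resources"]
THEMES = ["Health", "Food Security", "Protection"]

def tag_job(title: str, description: str) -> dict:
    prompt = (
        "Tag this job posting. Choose one career category from "
        f"{CAREER_CATEGORIES} and any applicable themes from {THEMES}. "
        'Reply as JSON: {"career_category": "...", "themes": ["..."]}'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": f"{title}\n\n{description}"},
        ],
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)
```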
Lessons Learned
Over-engineering is an easy trap to fall into. In our case, we were able to back away from advanced work and costs (e.g. fine-tuning, dedicated services running at all times) when we simply tested the LLM directly and found the results to be satisfactory.
Project 4: ReliefWeb Q&A Chatbot
As a first version, the ReliefWeb Q&A Chatbot will allow a site visitor to ask questions about the (single) report page they are viewing. On average, a report has an attached file of approximately 17 pages, and the Q&A Chatbot will retrieve answers from that material.
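A minimal sketch of how the single-report flow might work under the hood: split the attached report text into chunks, embed them, retrieve the chunks most similar to the visitor's question, and answer only from those. The client, model names, and chunking scheme are illustrative assumptions rather than the actual implementation.

```python
# Sketch of single-document Q&A: chunk the report text, embed the chunks,
# retrieve the most relevant ones for a question and answer only from them.
import numpy as np
from openai import OpenAI

client = OpenAI()  # stand-in for the embedding / chat endpoint actually used

def chunk(text: str, size: int = 1500) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer(report_text: str, question: str, top_k: int = 3) -> str:
    chunks = chunk(report_text)
    chunk_vecs = embed(chunks)
    q_vec = embed([question])[0]
    # Cosine similarity between the question and each chunk.
    sims = chunk_vecs @ q_vec / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the report excerpts provided. "
                        "If the answer is not in them, say so."},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content
```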
Lessons Learned
Trying to provide Q&A (RAG) across many documents was challenging when users knew the content extremely well, as they did not find the results very useful (e.g. missing specific details or nuance). Focusing on a single report reduces that challenge and thus gives us time to gather feedback, better understand the technology, and investigate how to expand and grow the solution.