Introducing Signpost AI: An AI Lab for Humanitarian Aid

Generative AI (GenAI) is a technological disruptor that can reconfigure how the humanitarian sector approaches information and service provision. It offers compelling capabilities for addressing unmet client information needs globally, sustainably, and in a timely fashion. However, it also presents privacy, safety, and ethical challenges that are acutely amplified in the humanitarian context.

Signpost AI is undertaking frontier development and research, building a chatbot to explore whether an evidence-based, ethical, responsible, and transparent approach to AI can effectively leverage the opportunities of Generative AI while addressing its challenges and mitigating harms. In this introductory blog post, we introduce Signpost AI, our vision and principles, what we are doing, and how we are thinking about the challenges and opportunities of using Generative AI.

What is Signpost AI?

Signpost is the world’s first scalable community-led information program. Since 2015, it has become the largest such service in the aid sector by leveraging cutting-edge technology to reach vulnerable communities wherever they are, empowering them to understand their options, solve problems, make decisions for themselves, and access vital services. At Signpost AI, we aspire to transparently and ethically develop AI tools that empower our communities through effective, high-quality information and service provision.

As part of our vision, we are currently prototyping Signpostchat, a groundbreaking attempt to bring humanitarian information to a higher scale with multilingual, red-teamed, AI-powered chat functionality that is plug-and-play for any humanitarian context. Alongside this development, we are open-sourcing and documenting our full process of building Signpostchat. Sharing our work is a conversation starter; we are excited to have an open dialogue and collaborate on knowledge-sharing with humanitarian, academic, and industry partners on how to solve the age-old problems of scale, sustainability, and access in the aid sector.

The principles that guide our ethical and responsible approach to AI are firmly rooted in humanitarian, people-centered, and “do no harm” considerations:

  1. Ethical and Responsible: Ethical considerations are foundational to how we create our products. They are present in all of our technical, evaluative, and quality decision-making processes, ensuring that our AI portfolio is safe, equitable, human-centered, and does no harm.

  2. Transparent: We are dedicated to documenting and sharing all aspects of our AI work through blogs, case studies, and research papers. This includes disseminating technical process documentation, AI impact assessments, decision-making processes, and ethical frameworks. This extensive documentation serves not only as a guide for partners but also as a process philosophy that ensures our AI is open, accountable, and trustworthy.

  3. Evidence-based: We are committed to providing insights grounded in rigorous research, analysis, and empirical evidence. This reflects our dedication to using sound scientific methods to ensure the effectiveness, competence, and credibility of our information products and services.

  4. Collaborative: We believe that using AI solutions positively in the humanitarian space requires partnerships and collaborations based on inclusion and mutual knowledge sharing and production. These collaborations include a range of important stakeholders: communities, humanitarian organizations, academic research institutions, and technology partners.

Opportunities and Challenges of AI

Opportunities of using AI, leading to increased:

  • Scalability 

  • Efficiency

  • Availability (24/7)

  • Consistency

Challenges of using AI chatbots in the humanitarian space:

  • Accuracy and Reliability

  • Opaqueness and lack of AI Literacy

  • Data and Privacy Concerns

  • Safety and Ethical Challenges

  • Trust or “Losing the Human Touch”

Signpost is using its principles to practically tackle the challenges of implementing AI and to leverage its opportunities. In doing so, Signpost AI is owning both the benefits and risks of Generative AI to produce important lessons learned.

This is how we are framing our principles: to de-risk and mitigate not just the current problems of Generative AI but also the inevitable, unpredictable future ones.

Conclusion

AI is slated to become even more complex and widespread, and we do not want the humanitarian sector to be left behind. Signpost AI approaches this project with humility and a learning mindset. We want to learn how to responsibly and ethically develop, use, and govern this powerful technology to better help the communities that we work for and those we need to reach.

Sitting Generative AI out will only leave the humanitarian sector out of important future conversations that can impact our communities. This is why we are making our AI development an open book and extending an invitation to dialogue, converse, and learn together.
