Operationalizing Transparency and Explainability at Signpost AI

Introduction

Generative AI in the humanitarian context holds the potential to transform crisis response, aid distribution, information provision and more: it enables rapid data analysis, substantiates prediction models, automates personalized engagement, informs tailored programme design, and gathers and shares information at greater scale. [1]

Leveraging this potential requires an effective and responsible technology approach grounded in humanitarian principles. Responsible AI and AI Ethics efforts in the sector have variously mapped key principles such as privacy, accountability, safety, security, and human rights. [2][3] These principles serve as anchoring guidance throughout the project lifecycle of an AI implementation or development, including stages such as (i) problem and solution identification, (ii) the data journey of collection, protection and processing, (iii) the effects of the AI algorithm, and (iv) outputs. [4]

A Harvard Berkman Klein Center for Internet and Society analysis identified common themes across 36 prominent AI principle documents from various domains, eight of which are specifically applicable to the humanitarian sector [5]. Of these, the key principle pertinent to this research brief is transparency and explainability.

In this research article, we look at transparency and explainability in detail, both conceptually and empirically. In the next section, we define transparency and explainability in the context of AI and the humanitarian sector. In the final section, we examine the efforts Signpost AI (SPAI) is making to ensure transparency and explainability in its technology.

Defining Transparency & Explainability

The concepts of transparency and explainability are often discussed in parallel. In many contexts, the terms are used interchangeably or considered closely related. For our purposes, however, we examine them separately to gain a clearer understanding of what each entails and its practical implications.

Transparency

Transparency refers to the open sharing of information about an organization's actions, decisions, processes, and resource allocations to ensure accountability and build trust with affected communities, partners, and stakeholders. It involves providing accessible and accurate information about decision-making processes, a technology's internal mechanisms, and the impact of interventions. In computer science, transparency focuses specifically on the inner workings and underlying logic of a system or model. For the humanitarian use case, this definition needs to be broader, encompassing stakeholders, interactions, and wider organizational decision-making.

When it comes to AI, according to the OECD, transparency entails responsible disclosure about AI tools and systems as well as providing meaningful information to [6]:

  1. inform stakeholders of their interactions with AI systems within programs and in the workplace, and

  2. enable those adversely affected by an AI system to challenge its output.

Transparency has been described in the literature as the sharing of information about an organization’s experiences with AI applications, detailing which ones work, how they work, and which ones do not, along with the reasons why. This also includes disclosing how AI and algorithmic systems are used, how they are audited and assessed, and demonstrating compliance with regulations such as the EU’s AI Act, where applicable. [7]

Being a transparent, open organization about technology use enhances trust and accountability with clients, stakeholders, donors, and society at large. For example, transparency has been found to be key to generating trust through expectation management in chatbot interactions. [8] In practice, this means laying out what chatbots can and cannot do and making it clear when users are interacting with them.

Transparency is a key catalyst for fostering collaboration within the aid sector, as well as for developing partnerships between humanitarian and external organizations. Especially in the case of Generative AI, a largely untested technology on the ground, it is essential to be transparent about the policies, modalities, and safeguards being built with regard to due process, protection protocols, and recourse mechanisms. [9] Transparency about collaborative partnerships with technology companies is also crucial, given that such arrangements deal in the data of the world's most vulnerable people and can provide fertile ground for unsanctioned commodification by third-party data brokers and intermediaries.

Finally, transparency serves as an accountability yardstick for evaluating claims against actual results, such as assessing the usefulness of a given AI tool, the frequency of accurate outputs in its specific use case, and its added value relative to total costs. By fostering openly published results, transparency enables organizations and external stakeholders to effectively gauge the performance and impact of AI and technology.

Explainability

Explainability focuses on the ability to provide clear and meaningful explanations for the outputs, predictions, or behaviors of a system. Explainable systems aim to offer insight into how and why they arrived at particular results, empowering individuals to understand the rationale behind a system's actions and decisions. Explainability communicates the inner workings, reasoning, and justifications of complex AI systems, making them more open and offering insight into how organizations can think about algorithmic processing, outputs, and which data is being used and how. [10]

Explainability has become especially important for Generative AI, given that most Generative AI systems operate as “black boxes”: significant parts of how answers are generated are hidden from everyone, including experts. In essence, a black box hides the system and its processes, leaving only the inputs and outputs visible. Explainability for systems that rely on Generative AI is limited for both proprietary reasons (due to market pressures, companies do not want to publish what little they do know) and technical reasons (mapping computational reasoning in Generative AI systems is an as-yet unsolved technological and engineering challenge).

Given this black box problem, what does explainability mean for humanitarian organizations? With no definitive answers on the horizon, there are some partial ways of ensuring explainability. The OECD, for example, recommends the following [11]:

  1. provide simple, easy-to-understand information on sources of data/inputs and, where feasible, processes, and

  2. foster general understanding of AI systems, including their capabilities and limitations.

Explaining how a Generative AI algorithm operates, even at a high level, is useful for managing expectations and publicly brings its associated limitations and risks to light. It also raises the significant question of how the benefits of Generative AI can be weighed against its trade-offs and risks when the opacity of the technology could hide issues of the “unknown unknowns” kind.

It is pertinent here to outline three benefits of even this partial AI explainability (in addition to the ones above) for humanitarian organizations:

  1. Explainability improves technical compliance with regulations and ethical guidelines, especially those concerning harm, discrimination, and bias.

  2. It allows for better error detection and debugging processes for both staff and external stakeholders. It enables experts to refine, update, or adjust their AI systems to improve performance while correcting for red flags or outputs that might be biased, discriminatory, sexist, etc.

  3. Finally, it provides a body of technical work on Generative AI specific to the humanitarian context, where such literature is nascent and limited. Most such work is currently either high-level or borrowed from other contexts. Openness with this technical work allows others to learn from the successes and failures of those attempting to innovate with the technology in the sector.

A lack of transparency and explainability mechanisms for AI makes it impossible for organizations, stakeholders, and society at large to audit these technologies and provide evidence of their context-specific feasibility. AI systems can thus become obscure to those who use them, are impacted by their use, or monitor them, creating accountability challenges when systems cause harm. The opacity of these systems also precludes clients and users from recognizing if and why their rights were violated, and therefore from seeking redress for those violations. Even where understanding the system is possible, interactional expertise is required to translate its intricacies for the wider public. The high threshold of expertise needed to understand these systems in the first place can frustrate efforts to pursue remedies for harms they cause.

Transparency and Explainability at SPAI

As part of a humanitarian organization, Signpost, SPAI adopts transparency standards through regular reporting and the use of technologies such as dashboards and platforms that provide data on information provision. For its AI work, SPAI's main mechanisms of transparency and explainability have been openly publishing work-in-progress and technical details, mapping harms, and setting out clear data policies. This enhances accountability and creates space for improved cross- and inter-sector collaboration. Let us look at some select efforts being made at SPAI to operationalize transparency and explainability principles:

Ethical Platform: The Signpost AI Ethical Platform is built on four pillars, one of which is transparency: communicating openly and transparently for the purposes of responsibility and broader accountability [20]. Even though explainability is not explicitly mentioned, it is a major component of this pillar.

Open Publishing of Updates, Research, Best Practices and Limitations: The foremost mechanism for transparency and explainability at SPAI is its open publishing of documents and blogs. SPAI communicates its rationales for decision-making, works-in-progress, updates on successes and setbacks, and best practices being learned along the way. This work can be accessed on the open blog, which publishes posts regularly. [21] For one example of best practices, see the post on how to use and control generative AI models safely. Written by our Red Team specialist, it explains in simple, analogous language how technical concepts such as user and system prompts and prompt engineering can be understood for use in active projects (a minimal sketch of these concepts follows below).
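As a minimal illustration of the concepts that post covers, the sketch below shows how a system prompt can constrain a generative model while the user prompt carries the actual question. The prompt wording, the build_messages helper, and the message format are illustrative assumptions, not SPAI's production configuration.

```python
# Minimal sketch (hypothetical, not SPAI's production prompts): a system
# prompt constrains the model's behaviour, while the user prompt carries
# the question and any supporting context.

SYSTEM_PROMPT = (
    "You are an information assistant for a humanitarian service. "
    "Only answer using the provided context. If the answer is not in the "
    "context, say you do not know and refer the person to a human moderator. "
    "Never request or store personal or identifying information."
)

def build_messages(user_question: str, context: str) -> list[dict]:
    """Assemble the message list sent to a chat-completion style model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_question}"},
    ]

if __name__ == "__main__":
    messages = build_messages(
        "Where can I register for asylum support?",
        "Registration offices are listed on the local service map.",
    )
    for m in messages:
        print(f"[{m['role']}] {m['content'][:80]}")
```

Separating the two prompt roles is what makes the behaviour controllable: the system prompt stays fixed across conversations, while only the user content changes.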

A second category of open publishing includes more in-depth research pieces: literature reviews, technical explainers, and in-depth looks at various aspects of SPAI's work on the Signpost AI chatbot. Take two examples. The research piece on AI Literacy, part one of a series, looks at technological, digital, and AI literacy as a crucial piece in the ecosystem of AI development. It explores how SPAI is training its staff not just to better understand the potential and limits of Generative AI but to use it productively as part of the pilot, and details the steps SPAI took to foster such literacy [22].

The second article maps the thinking around operationalizing Generative AI in the humanitarian context [23]. It looks into the literature on the benefits of the technology as well as the associated risks, limitations, and trade-offs, and it highlights SPAI's mitigation efforts and its thought processes regarding those trade-offs. This broadly scoped, in-depth article takes a deep dive into the technical and social limitations of Generative AI and the ways in which SPAI is attempting to mitigate them.

Explaining Processes and Workflows: SPAI also publishes details on ongoing work. We will look at two examples here: one from a quality evaluation perspective and the other from a red-team perspective. In the former, our Protection Officer (PO) walks the reader through their weekly diaries, which contain details on how they track quality-related metrics of chatbot outputs, along with their insights (a hypothetical sketch of such a record appears below). The original weekly diaries themselves will be made available to humanitarian practitioners in the future.
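To make concrete what tracking quality-related metrics might look like, here is a hypothetical sketch of a per-response quality record a reviewer could log. The metric names and the QualityReview structure are invented for illustration and are not SPAI's actual rubric.

```python
# Hypothetical sketch of a per-response quality record; the metric names
# are illustrative, not SPAI's actual evaluation rubric.
from dataclasses import dataclass, asdict
import json

@dataclass
class QualityReview:
    response_id: str
    accurate: bool          # factually consistent with the knowledge base
    safe: bool              # no harmful or policy-violating content
    in_scope: bool          # stays within the service's mandate
    reviewer_notes: str = ""

review = QualityReview("resp-0042", accurate=True, safe=True, in_scope=False,
                       reviewer_notes="Answered a legal question outside scope.")
print(json.dumps(asdict(review), indent=2))
```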

There is also a Red Team Explainer, which explains how the Red Team evaluates chatbot outputs and how it is set up in relation to the Quality Team to ensure that chatbot evaluations are done collaboratively and effectively. The report contains the details other organizations need to set up their own red teams, the considerations they should keep in mind, what seems to be working for SPAI, and so on. [24]

Information and Outlining of Data Privacy & Protection Efforts: Data is key to AI. It is also a consequential element for users and other stakeholders in upholding their rights. As part of its effort to be transparent, SPAI makes its data privacy and protection policies public while also giving details on specific aspects of safeguarding data. For example, SPAI has released details on its techniques for anonymizing sensitive client data [25] (a generic sketch of the redaction idea follows below) and has delved into the frameworks that govern data privacy and protection while providing a peek into SPAI's own efforts [26]. SPAI has robust mechanisms on its websites and the platforms it uses to inform users and clients of their data rights. For example, Signpost AI chat agents explicitly highlight that users are speaking to non-human agents and should not divulge private information. These transparency efforts may place some burden on users, but they ensure that a practice of caution is enforced to minimize potential issues.
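As a generic illustration of the redaction idea (not the specific techniques documented in [25]), the sketch below shows rule-based replacement of obvious personal identifiers before text leaves the organization. The regular expressions and placeholder labels are assumptions for demonstration only.

```python
# Minimal sketch of rule-based redaction of obvious personal identifiers
# before text is sent to an external model. Illustrative only; SPAI's actual
# anonymization techniques are described in reference [25].
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact me at maria@example.org or +44 20 7946 0958."))
# -> "Contact me at [EMAIL] or [PHONE]."
```

Pattern-based redaction is only a first layer; names, locations, and free-text details usually require additional model- or reviewer-based checks.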

Open-Sourcing Code and Explaining AI Agents: SPAI upholds its explainability goals by open-sourcing and publicly releasing all SPAI chatbot agent code. [27] The referenced repository includes code and documentation for the AI chatbot architecture as well as the vector database that underpins the technology's RAG-based structure (a generic sketch of this pattern follows below). This open-source repository allows anyone not only to inspect the code but also to re-use or modify it constructively for their own projects in the humanitarian sector.
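For readers unfamiliar with retrieval-augmented generation (RAG), the sketch below illustrates the general pattern: embed a question, retrieve the closest knowledge-base articles from a vector store, and ground the model's answer in them. The embed and retrieve functions here are toy stand-ins, not the code in the repository [27].

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern:
# embed the question, rank knowledge-base articles by cosine similarity,
# and pass the best matches to the model as grounding context.
# The embedding below is a toy stand-in, not the Signpost implementation.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: a tiny bag-of-characters vector.
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

ARTICLES = [
    "How to register for asylum support in your area.",
    "Emergency shelter locations and opening hours.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    ranked = sorted(ARTICLES, key=lambda a: cosine(q, embed(a)), reverse=True)
    return ranked[:k]

context = retrieve("Where do I register for asylum support?")
print(context)  # The retrieved article(s) would be placed into the model prompt.
```

In the actual system, a real embedding model and the vector database replace these stand-ins; the architecture explainer referenced below describes that pipeline.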

Supporting this technical documentation are explainers. For example, the Agent Architecture explainer gives easy-to-understand details and visualizes the infrastructure of the Signpost AI agent technology. [28]

There is also a public-facing explainer that describes the technical aspects of the chatbot and how it works in simple, easy-to-understand language for the layperson. [29]

Technology Partnership Safeguards and Open Collaborations: SPAI is currently in partnerships with technology partners as well as research labs. Details of these collaborations are kept as above board as possible. Where possible, SPAI underlines the need to be extra vigilant and careful, given that the humanitarian sector interacts with the world's most vulnerable and marginalized populations. Within these partnerships, SPAI takes great care to anonymize and safeguard client data, given the opacity of technology company infrastructures and the third-party intermediaries that may inevitably become involved. This remains under careful review given the unknowns and opacities of Generative AI pipelines. SPAI strongly advocates with the tech community for more transparency in the handling, segregation, and protection standards for all humanitarian data they process. Through its efforts and conversations, SPAI hopes such collaborations will lead to greater openness and transparency in technology companies' tech stacks and data practices, specifically in offers or solutions that support humanitarian action.

Additionally, SPAI is forging research agreements with organizations that want, respectively, to (a) highlight the SPAI AI chatbot as a case study and (b) conduct novel technical research on methodologies for evaluating chatbot responses. In the latter case, all details from the research collaboration will be openly published and available to all, including the humanitarian and technology sectors.


As evidenced by the above examples, Signpost AI takes transparency and explainability seriously. It operationalizes these principles by adhering to data standards and disclosure policies, open-sourcing code, and publishing meaningful, accessible explanations, research, and details of how the SPAI AI system works so that people can be meaningfully informed. Figuring out whether Generative AI is appropriate and useful in the humanitarian context requires this adherence, so that the sector as a whole can learn from SPAI's experiences and take next steps in relation to this technology.

References

  1. Chatbots in humanitarian contexts - The Engine Room

  2. AI and Emerging Tech for Humanitarian Action: Opportunities and Challenges - World | ReliefWeb

  3. AI for humanitarian action: Human rights and ethics | International Review of the Red Cross

  4. A FRAMEWORK FOR THE ETHICAL USE OF ADVANCED DATA SCIENCE METHODS IN THE HUMANITARIAN SECTOR

  5. Ibid.

  6. Transparency and explainability (OECD AI Principle)

  7. Spencer, Sarah. 2024. “HPN Network Paper.”

  8. Chatbots in humanitarian contexts - The Engine Room

  9. Private tech, humanitarian problems: how to ensure digital transformation does no harm - Access Now

  10. What is Explainable AI (XAI)? | IBM

  11. Transparency and explainability (OECD AI Principle)

  12. Principled Artificial Intelligence | Berkman Klein Center

  13. AI Ethics Guidelines Global Inventory - AlgorithmWatch

  14. Montreal AI Ethics Institute

  15. OECD AI Principles overview

  16. AI for humanitarian action: Human rights and ethics | International Review of the Red Cross

  17. AI and Emerging Tech for Humanitarian Action: Opportunities and Challenges - The All India Disaster Mitigation Institute

  18. https://www.signpost.ngo/who-we-are

  19. A FRAMEWORK FOR THE ETHICAL USE OF ADVANCED DATA SCIENCE METHODS IN THE HUMANITARIAN SECTOR

  20. Introducing SignpostAI: An AI Lab for Humanitarian Aid

  21. SPAI Blog — signpostai

  22. First Steps in AI Literacy: Training Moderators — signpostai

  23. Navigating Generative AI at Signpost: Risks, Mitigations, Benefits and Trade-offs — signpostai

  24. Signpost AI Red Team: Metrics, Scope, and Workflows — signpostai

  25. Using AI to Anonymize Sensitive Client Data — signpostai

  26. https://www.signpostai.org/airesearchhub/data-privacy-and-protection-in-the-age-of-ai

  27. https://github.com/theirc/signpost-ai/

  28. Signpost AI Agent Architecture, Infrastructure and Workflow — signpostai

  29. Inner Workings of the Signpost AI Chatbot: Non-tech Explainer — signpostai
