Mapping AI Design Principles
Introduction
In 2021, only a handful of organizations were exploring the use of Artificial Intelligence (AI) and machine learning (ML) in the humanitarian sector. Things changed with the release of ChatGPT in November 2022. [1] Two years on, conversations about how to create, deploy and use AI have become ubiquitous. Philanthropists, funders, aid and humanitarian organizations are exploring use cases for Generative AI and how the technology might improve aid and humanitarian operations. Generative AI tools identify and encode patterns of relationships in huge amounts of data and then use that information to generate content in response to users' natural-language requests [2]. Using this technology to create and implement innovative and ethical AI solutions has the potential to bring transformative benefits to millions impacted by humanitarian crises. [3]
These potential benefits include improved efficiency, scale, accessibility, cost-effectiveness, and multilingual support [4]. Additionally, chatbot-type agents can help local organizations create content that amplifies local voices, reduce research overheads, and minimize the burden and cost of legal and administrative processes [5]. Some humanitarians have pointed out that Generative AI could help leapfrog "existing challenges and transform humanitarian aid systems" in key areas such as coordination and localization, response and anticipatory analysis, and resource planning and financing. [6]
Humanitarian actors need to exercise caution. In the rush to embrace AI and find effective solutions, there is a risk of overlooking fundamental values and neglecting equally important discussions about AI ethics and governance, humanitarian principles and obligations, and accountability. Adopting AI without setting rules and guidelines risks undermining core, collective commitments to foreground the voices and agency of vulnerable populations and to make aid more accountable. [7]
Hence, guidelines, shared standards and principles based on humanitarian values are urgently required across all aspects of AI technology use, development, testing and deployment. This is not only to enable fruitful use of this powerful technology but also to embed core humanitarian values in the technologies the sector uses. [8]
Signpost AI [9] is developing a retrieval-augmented generation (RAG) chatbot that leverages its database of localized knowledge to provide accurate, appropriate, safe, and relevant information [10]. As a humanitarian actor, it is not just grounding its AI development in humanitarian values but, in the process, also creating a shared blueprint for AI development in the sector. The latter goal is to help zero in on what safe and responsible AI development looks like.
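The retrieval-augmented pattern can be pictured with a short sketch. The snippet below is a minimal, hypothetical illustration, not Signpost AI's production pipeline: the knowledge base, the word-overlap scoring, and the prompt wording are all assumptions made for the example. The core idea it shows is that a user's question is matched against a curated, vetted knowledge base, and only the retrieved passages are handed to the language model so that its answer stays grounded in localized content.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Illustrative only: the knowledge base, scoring, and prompt are placeholders,
# not Signpost AI's actual implementation.

from collections import Counter

KNOWLEDGE_BASE = [
    "Shelter registration for new arrivals is open Monday to Friday at the community center.",
    "Free legal aid consultations are available every Wednesday; bring identity documents if you have them.",
    "Emergency medical services can be reached 24/7 through the local health hotline.",
]

def score(query: str, passage: str) -> float:
    """Crude relevance score: word overlap between the query and a passage."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most relevant to the user's question."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda passage: score(query, passage), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model's answer in retrieved, vetted content only."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you do not know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    question = "Where can I get legal aid?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)  # In production this prompt would be sent to an LLM.
```

In a real deployment the word-overlap scorer would typically be replaced with embedding-based semantic search, but the grounding principle is the same: the model answers from vetted, localized knowledge rather than from its own general training data.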
This paper focuses specifically on the design principles undergirding the development of AI in the humanitarian context. The next section provides a selective literature survey of humanitarian, technology, and design approaches, including Humanitarian Engineering, and gives examples of how Signpost AI implements these principles in the development of Signpost AI Chat. This paper is a first attempt to synthesize, articulate and map AI Design Principles. It then briefly touches on the challenges Signpost AI has faced in this regard.
Humanitarian Principles as Starting Points
The adoption, integration or creation of AI tools in humanitarian operations or programs should be rooted in basic humanitarian principles and commitments, e.g. do no harm [11], accountability [12], and localization [13]. Such fundamental starting points require that humanitarians do the following [14]:
Engage with communities who will be impacted by the use of AI and explain if and how its use will impact the support they receive
Incorporate views of communities into design of AI pilots. Create plans for recourse in case things go wrong
Ensure that AI accountability initiatives are linked with the ability of the communities to engage
Share resources with local and national organizations to increase access to AI knowledge, solutions and relationships with solution providers
Partner not only with technology companies headquartered in North America and Europe but also with those in the Global South
Adopt transparency: be open about how AI tools work and how they are used, and share learnings on what works and what does not
Philanthropists, humanitarians, and other stakeholders must work together to establish a strong foundation based on existing human and humanitarian-centered principles and practices. This foundation can then inform the development of principles related to technology and AI.
This idea, promoting the fundamental interests of people, is also key to the Montréal Declaration for the Responsible Development of Artificial Intelligence [15]. The Declaration's principles include:
Well-Being
Respect for Autonomy
Protection of Privacy and Intimacy
Solidarity
Democratic Participation
Equity
Diversity Inclusion
Prudence
Responsibility
Sustainable Development
Combining the values of a humanitarian mission with those of effective technology adoption and development requires a Humanitarian AI Ecosystem, within which the topic of this paper, Design Principles, would be one component. Such an ecosystem has been sketched in the field by Nasim Motalebi and Andrej Verity [16] and includes the following considerations:
Data and Technology: How are AI models and tools selected based on humanitarian needs and use cases?
Governance and AI Regulation: What are humanitarian strategies for data protection, privacy, and the legal and ethical use of Generative AI?
AI and Data Upskilling: How do we build AI capacity for upskilling and creating good practice?
Evaluation and Improvement: Evaluating the validity and robustness of AI outcomes
Coordination and Collaboration: Nurturing multi-stakeholder partnerships between humanitarian organizations, UN agencies, nonprofits and technology companies from around the world
Signpost AI has been publishing research and best practices across these five categories. For example:
The thinking on LLM selection for Signpost AI Chat [17]
How Signpost AI ensures data privacy and protection in the AI tool [18] [19]
How Signpost AI is building capacity through AI Literacy [20]
Documentation on how the AI tool is being evaluated and tested, early impressions, and what is being learned [21][22][23]
How Signpost AI is collaborating and partnering with technology and humanitarian organizations; of the many tech partners it is speaking with, two are MILA (https://mila.quebec/) and Google
Signpost AI conducted literature reviews on (i) sector-wide humanitarian principles related to AI and technology and (ii) ethical and responsible AI. Combining these with Signpost's own values and principles led to the creation of the Signpost AI Principles, which serve as the foundational basis for the Signpost AI Design Principles.
In the following section, we explore AI Design Principles, and clarify how Signpost AI has been working to apply these principles in line with the AI Principles mentioned above.
Digging Deep: AI Design Principles
In this section, we look at some AI and technology design and engineering principles and practices that attempt to foreground human and humanitarian considerations.
Humanitarian Engineering
Grounded in principles of social justice, appropriate technology, and respect for local context, Humanitarian Engineering seeks to use the power of technology as a force for positive change [24]. Emerging in the 2010s, it attempts to narrow the distance between engineering as a distinct profession and the humanitarian sector as a particular socio-political practice [25], synthesizing the two domains to draw on science and technology to meet the basic needs of marginalized and vulnerable people. Essentially, it is the creation of technologies to help people. To ensure that technological projects deliver tangible and direct impacts, it foregrounds a focus on people.
Many schemas exist for this field, but we will focus on one here: [26][27]
Focus on people
Relate, Listen, Ask, Cooperate, Empower
Understand Social and Physical Context
Be a Professional Humanitarian Engineer
Build Technological Capacity
Ensure Long-Term Positive Impact
Understand Impact on/from Social Context
Design for Sustainability
Assess Outcomes
Promote Human Dignity, Rights, and Fulfillment
These principles highlight the importance of people in the development of the technological product. Signpost AI currently has no dedicated document for humanitarian-specific AI design. Current work relies on the literature (some of which is highlighted in these pages) to determine what works best given organizational structure, staffing limits and workflows. Programmatically, Signpost AI began its work on the Signpost AI chatbot from similar principles, using the Signpost AI Principles.
There is an inherent tension here between the rapid, iterative pace of technological development and the deliberate, methodical operationalization of humanitarian ethics and principles. Signpost AI believes that a balance between the two can be struck, and the literature on Humanitarian Engineering points in that direction.
For example, in order to ensure that the AI tool will be safe, do no harm, and provide high-quality, reliable information to users, Signpost AI translates existing principles into practices for its AI tools [28][29] and bases its quality frameworks on fundamentally humanitarian principles [30].
Privacy By Design
Introduced in the late 1990s, the concept of Privacy by Design holds that privacy should be built into product, service, and system designs from the outset rather than added afterward. Its framework is generally considered best practice for privacy protection and includes: [31]
Proactive not Reactive; Preventative not Remedial
Privacy as the Default Setting
Privacy Embedded into Design
Full Functionality – Positive-Sum, not Zero-Sum
End-to-End Security – Full Lifecycle Protection
Visibility and Transparency – Keep it Open
Respect for User Privacy – Keep it User-Centric [32]
These principles promote privacy protection throughout the entire development process, from the initial design to the final deployment and beyond.
Signpost AI already uses the Privacy by Design framework in its efforts to uphold the highest standards of data privacy, security and protection. [33] Efforts include full documentation of data collection and minimization, of how data is secured and anonymized [34], and of how consent, purpose limitation and transparency are embedded throughout.
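As one concrete illustration of "privacy as the default setting" and data minimization, the sketch below shows how personally identifiable information might be redacted from a user message before it is logged or stored. It is a simplified, hypothetical example: the patterns and function names are assumptions for illustration, not Signpost AI's actual anonymization pipeline.

```python
# Illustrative data-minimization step: redact obvious PII before a message
# is logged or stored. Patterns are simplified examples, not production rules.

import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    message = "My number is +1 555 123 4567 and my email is amina@example.org"
    print(redact_pii(message))
    # -> "My number is [PHONE REDACTED] and my email is [EMAIL REDACTED]"
```

Applying such a step by default, before any persistence or analytics, is one way the "proactive not reactive" and "privacy as the default setting" principles can be made concrete in code rather than left as policy statements.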
Ethics By Design for AI Framework
Another useful framework for AI Design Principles comes from the Ethics by Design framework. It is an approach for systematically and comprehensively including ethical considerations in the design and development of new technological systems and devices. Established in 2020, this approach is based on the convictions that (a) technologies such as AI are not neutral, (b) values can be embedded into the design process, and (c) design choices can have significant ethical consequences. [35]
This approach ensures that ethical matters (and for Signpost AI purposes, humanitarian issues as well) are addressed throughout the length of the development process.
Requirements will pertain not only to the AI system itself but also to the processes and tools involved in its development. Initially, foundational moral values and general principles are transformed into ethical requirements tailored to the specific AI system being created. Next, the approach to building the system in a way that embodies these values is established. In this way, ethical requirements are converted into tangible tasks, goals, tools, functions, and constraints.[36]
Below is the Ethics by Design framework, followed by a brief explanation of Signpost AI practices:
Assessment: The system's objectives are assessed against foundational values
Instantiation: Moral values are instantiated as characteristics the AI system should possess
Mapping: These high-level ethical design requirements are mapped into specific procedures and actions to be taken during the design process
Application: How and where ethical requirements will be handled in one's own methodology is determined
Implementation: Ethics by Design protocols are implemented during the development process in accordance with the mapping done in the previous step
Signpost AI has done corresponding work across these steps. Throughout the design and development process, the chatbot is tested, evaluated and assessed on the basis of whether it can provide effective, safe, non-discriminatory, ethical responses to users. [37]
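One simplified way to picture the mapping and implementation steps is to encode ethical requirements as concrete, automated checks that every chatbot response must pass during testing and pilots. The sketch below is a hypothetical illustration under assumed requirements: the requirement names, forbidden phrases, and check functions are examples for this paper, not Signpost AI's actual evaluation rubric.

```python
# Hypothetical mapping of ethical requirements to automated checks that a
# chatbot response must satisfy; not Signpost AI's actual rubric.

from typing import Callable

FORBIDDEN_PHRASES = ["guaranteed asylum", "medical diagnosis:"]  # example policy terms

def is_grounded(response: str, sources: list[str]) -> bool:
    """Requirement: responses must reference at least one vetted source."""
    return any(src in response for src in sources)

def is_safe(response: str, sources: list[str]) -> bool:
    """Requirement: responses must avoid phrases the policy marks as harmful."""
    return not any(p in response.lower() for p in FORBIDDEN_PHRASES)

CHECKS: dict[str, Callable[[str, list[str]], bool]] = {
    "grounded_in_vetted_content": is_grounded,
    "avoids_harmful_claims": is_safe,
}

def evaluate(response: str, sources: list[str]) -> dict[str, bool]:
    """Run every ethical-requirement check and report pass/fail per requirement."""
    return {name: check(response, sources) for name, check in CHECKS.items()}

if __name__ == "__main__":
    sources = ["https://example.org/legal-aid"]
    reply = "Free legal aid is available on Wednesdays (source: https://example.org/legal-aid)."
    print(evaluate(reply, sources))
    # e.g. {'grounded_in_vetted_content': True, 'avoids_harmful_claims': True}
```

The point of such a harness is that abstract values ("do no harm", "provide reliable information") become named, testable requirements that can be run against every response during evaluation, rather than remaining aspirational statements.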
Conclusion
Signpost AI's pilot efforts directly test and evaluate whether the Signpost AI chatbot's responses adhere to the same high standards required of human moderators. Testing, evaluation and assessment measures are directly based on humanitarian values and principles, and as such the viability of the Signpost AI chatbot depends on it not violating them. Workflow, organizational, managerial and decision-making considerations are all aimed at ensuring that the eventual client is provided an enhanced service.
This is not to say Signpost AI has not faced challenges and limitations in its endeavors. It has been challenging to balance the impulse to develop AI tools quickly with the deliberation required to ensure that values are adhered to. This is somewhat easier in the development process, since the evaluation and measurement tools essential to making the AI tool successful were translated from existing values and principles. Managerially and operationally, it becomes more difficult given competing priorities and funding-related deadlines. This has only made it more necessary for Signpost AI to stay true to its values and principles throughout, even if implementations are imperfect.
Another concern is that starting with fixed values and principles may leave too little room for discovering new ethical issues and for reflection and deliberation. Signpost AI believes that, given its rapid-iteration development model, there is ample room for these processes: any emerging issue is quickly flagged and incorporated into the methodology.
AI Design Principles derived from humanitarian values are crucial: one cannot develop AI tools responsibly without relying on such a framework, and ensuring that AI is used and deployed safely and ethically requires such a document. At present, such a document exists only piecemeal, with a strong dependence on foundational values. In a future installment, using the literature outlined here, we will look at Signpost AI Design Principles (derived from its values) in more detail as we develop and implement them. This paper itself is part of mapping complementary work and developing a blueprint for AI Design Principles in the humanitarian context, based on Signpost AI best practices and learning.
References
Chow, Andrew. 2023. “How ChatGPT Managed to Grow Faster Than TikTok or Instagram”. Time.com. Accessed 1st December 2023 from https://time.com/6253615/chatgpt-fastest-growing
The clock is ticking to build guardrails into humanitarian AI
Navigating Generative AI at Signpost: Risks, Mitigations, Benefits and Trade-offs — signpostai
Four ways ChatGPT could help level the humanitarian playing field
Generative AI for Humanitarians - September 2023 - World | ReliefWeb
The clock is ticking to build guardrails into humanitarian AI
https://www.signpostai.org/blog/xy08iss5ax1ipjohs4u04s79joxk6d
From Principle to Practice: A User’s Guide to Do No Harm - GSDRC
The clock is ticking to build guardrails into humanitarian AI
Montréal Declaration for a Responsible Development of Artificial Intelligence
Generative AI for Humanitarians - September 2023 - World | ReliefWeb
https://www.signpostai.org/airesearchhub/data-privacy-and-protection-in-the-age-of-ai
https://www.signpostai.org/blog/9aoqq6o4b9on5tk2pixnjrc08df4l2
https://www.signpostai.org/airesearchhub/first-steps-in-ai-literacy-training-moderators
https://www.signpostai.org/blog/96lux07mcj9lrqyb59tkmj72vq5fb2
https://www.signpostai.org/blog/xy08iss5ax1ipjohs4u04s79joxk6d
https://www.signpostai.org/blog/signpost-ai-chat-pilot-early-results-and-feedback
Humanitarian Engineering Book | Electrical & Computer Engineering
Guiding Principles: Establishing a Constitution for Ethical AI Development — signpostai
https://www.signpostai.org/blog/xy08iss5ax1ipjohs4u04s79joxk6d
Gürses, S., Troncoso, C., Diaz, C.: Engineering privacy by design. Computers, Privacy & Data Protection 14(3), 25 (2011)
https://www.datagrail.io/blog/data-privacy/privacy-by-design/
https://www.signpostai.org/airesearchhub/data-privacy-and-protection-in-the-age-of-ai
https://www.signpostai.org/blog/9aoqq6o4b9on5tk2pixnjrc08df4l2
Brey, Philip, and Brandt Dainow. 2023. “Ethics by Design for Artificial Intelligence.” AI and Ethics. doi: 10.1007/s43681-023-00330-4.
Ibid.
https://www.signpostai.org/blog/hjb1trsrqtbza8iklrcz97ngdvz49t