First Steps in AI Literacy: Training Moderators

Introduction

Progress in artificial intelligence (AI) has given rise to a new form of digital divide, referred to as the AI divide. This divide describes the disparities in access, advantages, and opportunities related to AI technology among different regions, communities, and socioeconomic groups. The effects of this gap fall asymmetrically on marginalized communities [1] and vulnerable populations [2]. It can also exist within humanitarian organizations: a lack of AI knowledge and skills can create technical silos, hamper staff productivity, and slow down attempts to explore AI-related technical advancements. [3] The divide is compounded by a broader discourse of unfounded fears around AI.

It is crucial, therefore, to address this divide by fostering AI literacy and ensuring that a deeper understanding of AI is available to all. This serves both to equip individuals with AI skills and to allay fears around the technology.

In this short piece, we look at how Signpost AI is promoting AI literacy and developing capacities among its moderators for the pilot phase of its AI agent technology. These moderators are front-line workers who engage with our clients on a daily basis; they are humanitarians with expert knowledge. During the pilot, they will be testing the Signpost AI agent/chatbot tool. Equipping them with an AI education is also part of the pilot.

What is AI Literacy?

Digital literacy emerged as a concept in the 1970s, at a time when computers and computer applications were gaining popularity. It signified users' ability to grasp basic computer-related ideas and skills and to become competent in using computer systems relevant to their tasks. The importance of digital literacy only increased as computer technologies became crucial to new social and economic opportunities [4]. Recognizing this, the humanitarian sector treats digital literacy as a foundational tool for enhancing skills and developing the competencies that empower individuals to leverage technology and contribute towards sustainable, equal futures. [5] [6]

AI literacy is often viewed as an advanced form of digital literacy, describing the ability to understand, interact with, and critically evaluate AI systems and their outputs. [7] Although most framings of AI literacy predate the Generative AI era that began in 2022, the definition as a whole still broadly suffices with a few refinements. There are four aspects of AI literacy to consider:

  • Have knowledge of the basic functions of AI and how to use AI applications [8]

  • Application of AI knowledge and concepts in different contexts [9]

  • Ability to evaluate, appraise, predict and design with AI tools and applications [10]

  • Human-centered considerations (e.g. fairness, accountability, ethics, etc.) [11]

Global initiatives to promote AI literacy are gaining momentum [12], particularly in the educational sector [13] [14][15]. However, in the humanitarian sector, progress has been slower. With the rapid rise of generative AI, there is now a growing effort to establish AI literacy mechanisms as the sector seeks to integrate this technology into its workflows. [16]

There are three different audiences in the humanitarian context who would benefit from AI literacy: end-users, the humanitarian workforce, and humanitarian organization leadership:

  1. Users: AI literacy for users involves understanding several key aspects: (a) the capabilities, limitations, benefits, and risks associated with AI tools; (b) the available services and features that enable informed decision-making; (c) their rights regarding privacy, confidentiality, access, and consent in relation to AI usage; and (d) how their data is currently used and its potential future applications. This knowledge fosters greater empowerment, enhances access to services, and encourages stronger engagement.

  2. Humanitarian Workers: AI literacy would augment humanitarian workers' existing skills by providing a deeper understanding of (a) the features, (b) the limitations, and (c) the benefits, risks, and trade-offs associated with generative AI tools. This knowledge empowers them to critically evaluate these tools in their day-to-day settings. Additionally, it fosters informed discussions about ethical considerations, encourages responsible usage, and promotes a proactive approach to navigating the evolving landscape of AI technology in their work.

  3. Organizational Leadership: Enhancing education around generative AI can significantly improve future strategies, accountability, and decision-making among organizational leadership. By fostering a deep understanding of AI's implications, leaders can implement robust governance frameworks that promote ethical and responsible usage of AI tools throughout the organization. This proactive approach not only strengthens accountability but also ensures that all stakeholders are aligned with best practices in AI deployment, ultimately leading to more effective and trustworthy outcomes in humanitarian efforts. Such strategic foresight is essential for navigating the complex landscape of AI while maintaining the organization’s commitment to its mission and values.

AI literacy in an organization will only be effective if it is spread evenly across all three segments. For this paper, however, we will look specifically at Signpost humanitarian moderators and how Signpost AI's pilot preparations for its Generative AI agent tool include a grounded, practical AI literacy strategy.

AI Literacy for Signpost AI pilot moderators

Signpost is currently conducting a pilot to understand how the Signpost AI agent tool can help moderator workflows, enhance information access for those in need, and lay the groundwork for future innovation. The pilot is being conducted in four contexts (Greece, Italy, Kenya, and El Salvador) with moderators from those countries. One aspect of this pilot is preparing moderators to use the tool. In mapping the Risks, Mitigations, Benefits and Trade-offs of Generative AI [17], we learned that AI education of different groups is central to the deployment of AI technologies. Taking that lesson to heart, the Signpost AI team created a program to foster learning and AI education among moderators for the pilot.

All of the practical, grounded implementations listed during this AI training are based on two principles: maintaining a reliable, continuous feedback loop between moderators and country pilot leads, and Signpost cross-departmental collaboration on training materials. Let us take a look at how Signpost AI imparted AI literacy across its four aspects in the Greece context. All of the training materials listed below were a team effort, created in collaboration with country, program, quality, product, and Red teams.

  1. Trainings covering topics on: 

    • Overview of Generative AI

    • Overview of Signpost AI Chatbot

    • Hands-on Training with AI tool

  2. Documentation on Quality and Red Team Metrics

  3. Video Demos demonstrating specific workflow use of AI tool

  4. Knowledge Tests testing AI fundamentals as well as use-case specifics

  5. Biweekly and monthly Check-ins (open to questions)

Know and Understand Generative AI

Over the course of several months, the Greece Pilot lead led a training program focused on explaining the fundamentals of Generative AI. This knowledge and understanding of Generative AI came in two forms: (a) trainings and (b) knowledge tests.

There were a total of three training sessions, of which we discuss only two in this section. The first session, an hour long, took place a couple of months before the start of the pilot. It provided an overview of Generative AI, its key concepts, benefits, and risks, along with an overview of the Signpost AI agent, establishing a foundational understanding for the moderators.

The second training, lasting two hours, took place right before the start of the pilot. It reintroduced Generative AI fundamentals but focused more on the Signpost AI agent tool hosted on its customer service platform. The session went into detail on the tool's features and laid out key aspects of evaluating the agent, such as how to score it on Quality metrics. It also gave definitions and clear examples of prompting, Constitution rules, and prompt-tuning, as well as hallucinations. All sessions reserved time at the end for moderators to ask questions and give feedback.

After these training sessions, a knowledge test assessing the fundamentals and tool specifics was administered to all moderators. They were given a few days to take the test; upon completion, they received completion certificates. The current plan is to administer knowledge tests monthly to ensure that the moderators' understanding stays up to date.

Sample questions from the Knowledge Test

Use and Apply AI

To effectively prepare moderators for the pilot, text resources were provided for independent study, enabling them to gain a comprehensive understanding of the AI tool they would use to evaluate the outputs of AI agents. Essential materials included detailed information on quality metrics accompanied by specific examples, access to all presentation slides, a document featuring various case examples, and Protection Officer diaries. This approach not only equipped moderators with the necessary knowledge but also fostered deeper engagement with the AI tool, ensuring they were well prepared to assess its performance accurately and effectively. Ongoing support and opportunities for discussion were also encouraged to further enhance their learning experience.

Going over Quality Metrics

A final, third training session focused on hands-on use of the AI agent tool, so that any questions raised by the moderators could be resolved during that time.

Evaluate and Create AI

The third training session adopted a hands-on approach, during which the Protection Officer (PO) provided evaluation examples directly to the moderators. In this session, the PO also detailed the quality metrics, explaining how the moderators would assess the AI agent's performance. The learning objectives included specific, step-by-step instructions for evaluating the AI agent effectively. This practical training session aimed to equip participants with real-world experience and build their proficiency in using the AI tool. By structuring the training across three sessions, the program ensured that workers transitioned from theoretical knowledge to practical application, effectively preparing them for the seamless integration of AI technology into their daily workflows.

Finally, moderators had access to demonstration videos with commentary. These videos provided an asynchronous approach to learning, allowing moderators to go back and refresh their understanding of the tool.

AI Ethics

Emphasizing ethics and adherence to Signpost values and principles was a key focus in all training sessions and discussions. Protection Officers actively encouraged open dialogue, allowing ample space for concerns, questions, and feedback. To maintain this supportive environment, POs will remain in constant contact, ensuring moderators feel comfortable sharing their thoughts on the AI tool.

The training materials, including documents, videos, and meetings, are designed to promote transparency and educate moderators on the ethical dilemmas that may arise when using the Signpost AI agent tools.

Early Lessons

The significance of AI literacy is that it creates in workers a foundational understanding of Generative AI, decreases both hype-driven and genuine fears through hands-on engagement, increases curiosity, and creates the conditions for beneficial technological uptake. Even in the short time that the pilot has been ongoing, POs are already capturing early lessons on how to do AI literacy better:

Importance of Starting AI Literacy Programs Earlier: Feedback from moderators indicated that not all felt prepared to engage with the AI tool. This highlighted the need for additional support and emphasized that training should begin well in advance. Workers may require more resources to fully understand and feel comfortable using the AI tool.

AI Learning Never Stops: Given the rapid advancements in generative AI technology and the ongoing updates to the tool, learning must be a continuous process. Education cannot rely on one-off training sessions; instead, it should foster a culture of ongoing learning and adaptation. This approach is essential not only for keeping up with new developments but also for ensuring that workers’ skill sets are continually enhanced.

Digital Literacy Still Matters: While the focus is on AI literacy, we cannot overlook the importance of digital literacy. This gap can still exist within the workforce and must be acknowledged. If unaddressed, deficiencies in digital literacy could further widen the gap in AI literacy.

Be mindful of workloads: In the context of the pilot, Signpost AI has faced resource constraints, meaning moderators must carve out time from their daily responsibilities to test the AI tool. It is important that training is balanced and does not add to their workload while accommodating their schedules. To address this, Protection Officers in Greece and Italy ensured that the hands-on third session was offered multiple times to suit the availability of moderators.

Training Accessibility: Training should be clear and not solely text-based. It must accommodate various learning styles, ensuring that guides are accessible even to those with initially low digital and AI literacy. This foundational work can eventually facilitate the creation of AI literacy materials for end users as well. Furthermore, AI literacy initiatives should be inclusive, particularly for marginalized groups, by providing materials in multiple languages and formats to address diverse learning needs.

Generative AI presents humanitarians with a challenge that is not just technological in nature; it is not only about making the technology work in our contexts (though that is a big challenge in itself, given the statistical nature of the technology). It also includes issues of accountability and data governance. [18] Finding solutions to these problems requires fostering an ecosystem that enables technology and data processes, governance, technical capacities, and assessment strategies. AI literacy is a key pillar of such an ecosystem; Signpost AI's efforts at AI literacy highlighted here showcase a small component of this larger strategy to create such an ecosystem.

References

[1] AI literacy and the new Digital Divide - A Global Call for Action | UNESCO

[2] Wang, C., Boerman, S. C., Kroon, A. C., Möller, J., & H de Vreese, C. (2024). The artificial intelligence divide: Who is the most vulnerable? New Media & Society, 0(0). https://doi-org.libproxy.newschool.edu/10.1177/14614448241232345

[3] Generative AI for Humanitarians

[4] Leahy, D., & Dolan, D. (2010, September). Digital literacy: A vital competence for 2010?. In IFIP international conference on key competencies in the knowledge society (pp. 210–221). Berlin, Heidelberg: Springer.

[5] Digital Learning For All | United Nations

[6] Safe Space to Learn: Digital Literacy and Inclusion for Women and Girls in Humanitarian Settings | International Rescue Committee (IRC)

[7] A systematic review of AI literacy scales | npj Science of Learning

[8] Artificial intelligence and computer science in education: From kindergarten to university | IEEE Conference Publication

[9] Designing Digital Literacy Activities: An Interdisciplinary and Collaborative Approach | IEEE Conference Publication

[10] Educing AI-Thinking in Science, Technology, Engineering, Arts, and Mathematics (STEAM) Education

[11]  K-9 Artificial Intelligence Education in Qingdao: Issues, Challenges and Suggestions | IEEE Conference Publication

[12] Ng, D. T. K. et al. A review of AI teaching and learning from 2000 to 2020. Educ. Inf. Technol. 28, 8445–8501 (2023).

[13] Ng, D. T. K. et al. Artificial intelligence (AI) literacy education in secondary schools: a review. Interact. Learn. Environ. 31, 1–21 (2023).

[14] Steinbauer, G., Kandlhofer, M., Chklovski, T., Heintz, F. & Koenig, S. A differentiated discussion about AI education K-12. Künstl. Intell. 35, 131–137 (2021).

[15]  Hwang, Y., Lee, J. H. & Shin, D. What is prompt literacy? An exploratory study of language learners’ development of new literacy skills using generative AI. https://doi.org/10.48550/arXiv.2311.05373 (2023).

[16] Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel Kai Wah Chu, Maggie Shen Qiao, Conceptualizing AI literacy: An exploratory review, Computers and Education: Artificial Intelligence, Volume 2, 2021, 100041, ISSN 2666-920X, https://doi.org/10.1016/j.caeai.2021.100041

[17] Navigating Generative AI at Signpost: Risks, Mitigations, Benefits and Trade-offs — signpostai

[18] Generative AI for Humanitarians
