Foundational Model Selection: Tailoring AI for Humanitarian Aid

Imagine you’re standing at the crossroads of technology and humanity. You’ve got a global network of people in need on one side, and on the other side, an array of AI models that can offer solutions—if you choose the right one. Not an easy choice, right? Well, that’s exactly the challenge the protection officers leading Signpost AI’s pilot programs faced. They had to carefully select which AI model would empower individuals and communities in Greece, Italy, Kenya, and El Salvador.

So, how did they decide? Was it a high-tech game of rock-paper-scissors? Not quite. There were some key factors at play, including language capabilities, cultural understanding, and the ability to provide empathetic support. The pilot programs, which launched in Greece and Italy in early September and are gearing up for Kenya and El Salvador in early October, are testing AI’s potential to transform lives. And we’re here to spill the beans (not literally, though).

Before we dive into each country’s AI model selection, here’s a fun thought: AI is like a really smart assistant. But instead of ordering your favorite takeout or reminding you about that 3 p.m. meeting, this assistant helps people navigate displacement, find legal assistance, and access healthcare in some of the most challenging environments. Let’s explore how our protection officers are helping AI empower communities.

Greece and Italy: Claude, The Multilingual Master

Model Chosen: Claude by Anthropic

Pilot Lead: The protection officer overseeing Greece and Italy had a tough decision. These countries are like the Grand Central Station of refugee movements, with languages from all over the world flying around. Imagine trying to help someone in Greek, then switching to Arabic, then Italian, then—well, you get the picture.

The Challenge: Refugees and migrants from the Middle East, Africa, and beyond all need information in their own languages. It’s like hosting a dinner party where everyone speaks a different language, and you’re the host!

Why Claude? Enter Claude by Anthropic. This AI model wasn’t chosen because it wears a fancy monocle (although that would be cool), but because it excelled at multilingual communication. Claude could handle Arabic, Farsi, French, Greek, and Italian with the ease of a seasoned polyglot. Plus, it managed to stay unbiased—critical when you’re working with vulnerable populations who need accurate and sensitive responses.

Conclusion: With Claude’s help, the pilot lead in Greece and Italy can focus on providing practical solutions to complex challenges—like a translator who also knows the local bus schedule, the nearest healthcare center, and how to navigate legal systems. And that’s what makes AI truly empowering: it’s not just about tech; it’s about using tech to help people get the answers they need, faster and with dignity.

Kenya: Claude’s Empathy in Action

Model Chosen: Claude by Anthropic

Pilot Lead: Over in Kenya, where the pilot kicks off in early October, the protection officer had a different kind of challenge. While the multilingual aspect was still important, Kenya’s program needed a model with heart. Picture this: you’re helping someone who has just escaped a crisis. They’re vulnerable, scared, and need emotional support, not just information.

The Challenge: How do you offer empathy through AI? Can a model really comfort someone in distress? Spoiler alert: yes, it can!

Why Claude? Claude wasn’t just chosen for its language skills; it was picked because it knows how to listen. Claude’s empathy-driven responses align with Psychological First Aid (PFA), meaning it doesn’t just spit out facts—it offers comfort, clarity, and kindness. It’s like a virtual hug, but without the awkwardness of actually asking for one. Claude also has a knack for summarizing complex, often emotional stories into manageable pieces of advice, which is key in Kenya’s humanitarian setting.

Conclusion: In Kenya, AI isn’t just about giving information—it’s about empowering people to navigate their way out of crises with dignity. Claude is helping the protection officer do just that, providing not only facts but also the emotional support that makes all the difference.

El Salvador: GPT-4’s Spanish-Speaking Superpowers

Model Chosen: OpenAI GPT-4

Pilot Lead: Down in El Salvador, the protection officer had a straightforward need: find an AI model that could speak Spanish fluently. But not just any Spanish—the kind that understands local slang, dialects, and, most importantly, the cultural nuances of the region.

The Challenge: Spanish is more than just a language in El Salvador; it’s a lifeline for communicating about job opportunities, social programs, and local government services.

Why GPT-4? The protection officer chose OpenAI’s GPT-4 for its top-notch Spanish language processing. GPT-4 doesn’t just translate; it understands the intricacies of Salvadoran Spanish and tailors its responses accordingly. It’s like having a well-read local guide in your back pocket, ready to answer questions with cultural insight and precision.

Conclusion: In El Salvador, GPT-4 is helping people find the information they need to improve their lives, whether it’s finding a job or accessing social services. With GPT-4, AI is empowering communities by making vital resources more accessible in the language they understand best.

Key Criteria for Model Selection in Humanitarian Settings

So how did these protection officers choose their models? Did they just go with the AI equivalent of a gut feeling? Nope. Here are the key criteria they focused on when selecting the best model for each region.

1. Language Support

When it comes to humanitarian aid, language isn’t just a tool—it’s the bridge between help and hope. The ability to handle diverse languages was a top priority, especially in Greece and Italy, where Claude’s multilingual expertise shone. Similarly, GPT-4’s mastery of Spanish made it the clear choice for El Salvador.

2. Cultural Sensitivity and Context

It’s one thing for AI to understand words, but understanding context? That’s next level. The protection officers needed models that could handle cultural nuances. In El Salvador, GPT-4 doesn’t just speak Spanish—it speaks Salvadoran Spanish, picking up on the subtleties that make communication more meaningful. Claude’s ability to deliver culturally sensitive responses was also a win for Greece, Italy, and Kenya.

3. Empathy and Psychological Support

Now, here’s where things get even more human. AI isn’t just about rattling off facts—it’s about offering emotional support in critical situations. Claude’s alignment with Psychological First Aid (PFA) principles made it the ideal model for Kenya, where empathy is just as important as information.

4. Bias Mitigation and Ethics

Nobody wants AI to perpetuate harmful biases, especially in vulnerable populations. Claude’s focus on ethical AI and bias mitigation was crucial in its selection for Greece and Italy, ensuring that responses were fair, respectful, and inclusive.

5. Summarization and Accuracy

Ever tried to explain a complex issue, only to be met with a blank stare? Summarization is key in humanitarian settings where people need clear, actionable advice. Claude’s ability to distill long, often emotional user inputs into concise, actionable responses was a deciding factor in Kenya.
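The five criteria above can be thought of as a simple weighted scorecard. Here's a hypothetical sketch of what that might look like in code; the criteria weights and the 1-to-5 ratings are purely illustrative assumptions for this example, not Signpost AI's actual evaluation data or methodology.

```python
# Hypothetical scorecard for comparing candidate models against the
# five selection criteria. Weights and ratings are illustrative only.

CRITERIA_WEIGHTS = {
    "language_support": 0.25,
    "cultural_sensitivity": 0.20,
    "empathy_pfa": 0.20,
    "bias_mitigation": 0.20,
    "summarization_accuracy": 0.15,
}

def score_model(ratings: dict) -> float:
    """Weighted average of 1-5 ratings across the criteria."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Example: ratings a pilot team might assign for Kenya, where empathy
# and summarization carry extra importance.
claude_kenya = {
    "language_support": 4,
    "cultural_sensitivity": 4,
    "empathy_pfa": 5,
    "bias_mitigation": 5,
    "summarization_accuracy": 5,
}

print(round(score_model(claude_kenya), 2))  # → 4.55
```

In practice the weights would shift per region (language support would dominate in Greece and Italy, for instance), which is exactly why the same rubric produced different model choices across the pilots.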

AI for Empowerment, Not Just Information

AI isn’t just about automating tasks—it’s about empowering individuals and communities to take control of their own lives. Whether it’s Claude’s multilingual mastery in Greece and Italy, its empathy-driven support in Kenya, or GPT-4’s deep understanding of Spanish in El Salvador, these models are designed to help people, not just serve as tools.

By selecting AI models tailored to the unique needs of each region, the protection officers are enabling communities to access information that could change their lives. Whether it’s finding shelter, navigating legal systems, or simply getting the emotional support they need, these AI models are here to empower, not just inform. And hey, if Claude or GPT-4 could tell a joke, they might say, “Why did the AI go to Human Camp? To learn empathy, of course!” (Okay, maybe leave the jokes to us humans for now.)

As these pilots unfold, the protection officers will keep a close eye on how well these models perform in real-world settings. But one thing’s for sure—AI is here to help, one thoughtful, empathetic response at a time. Stay tuned for more updates!
