Translating Human Guidance to AI Agents

How do you convert guidance documents written for humans into instructions for AI chatbot agents? It is a little like turning a secret family recipe into the instructions on a microwaveable meal. Teaching AI agents to give accurate, contextual, and specific information in the right non-directive tone in a humanitarian setting requires peeling away the nuance and subjectivity of human language and transforming it into simple, structured prompts.

In this post, we’ll take a brief look at how we converted our Signpost Moderator Handbook, a guide intended for human moderators, into System Prompts: high-level rules and parameters designed to define an agent's capabilities, personality, and behavior.

The Signpost Moderator Handbook

The System Prompts of the Signpost AI agent ecosystem come from two places: our values and the Signpost Moderator Handbook. The handbook is designed to orient moderators starting in their role as Digital Community Liaisons. Their primary responsibilities are to:

  1. Guide people through information

  2. Build trust through good communication

  3. Capture trends to make information responsive

These are key responsibilities, and if the Signpost AI agent technology is to be successful, it must carry them out at close to the performance of human moderators.

The handbook is divided into six sections, each dedicated to a specific Signpost principle.

Each section outlines Do’s and Don’ts and other instructions for human moderators. Let’s take a brief look at one of them, the People-Centered Approach; it has five sub-principles:

  1. People Centered Approach

  2. Compassionate Communication

  3. Psychological First Aid (PFA)

  4. The Escalation Protocol

  5. Informality and Transparency

The People-Centered Approach sub-principle is further divided into two main instructions: (a) letting the user decide what to do and (b) letting users lead the conversation. Key points are distilled into the following checklist:

This checklist directs moderators to let users have agency over the information they share and how they make use of that information. It also affords users privacy while keeping them and their informational needs center stage in their interaction with Signpost. Let’s see how parts of this sub-principle are adapted into System Prompts for the AI agent.

Translating Human Guidance into AI Agent Guidance

Signpost principles and values are already embedded in the Moderator Handbook rules, so they do not necessitate specific value-laden prompts for the AI agent. The next step is translating this language meant for humans into System Prompts meant for AI systems. That means removing any anthropomorphic, unclear, or nuanced language and simplifying instructions to one idea per System Prompt.

Signpost AI exported the full text corpus of principles, instructions, and checklists through Zendesk, its customer service platform. We filtered and formatted this corpus into individual sentences and then de-duplicated the instructions that were already in the original agent System Prompts. This cleaning, filtering, formatting, and de-duplication was carried out manually by our Red Team using Google Sheets.
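The actual cleaning and de-duplication was done by hand in Google Sheets, but the core step, keeping only handbook sentences not already covered by existing System Prompts, can be sketched in a few lines of Python. The sentences below are illustrative; only the two quoted instructions appear in the handbook excerpts discussed in this post.

```python
import re

def normalize(sentence: str) -> str:
    """Lowercase and collapse whitespace so near-identical sentences compare equal."""
    return re.sub(r"\s+", " ", sentence.strip().lower())

def dedupe_instructions(handbook_sentences, existing_prompts):
    """Keep handbook sentences that are not already in the agent's System Prompts."""
    seen = {normalize(p) for p in existing_prompts}
    kept = []
    for sentence in handbook_sentences:
        key = normalize(sentence)
        if key not in seen:
            seen.add(key)
            kept.append(sentence)
    return kept

handbook = [
    "Listen to what people are saying and sympathize with them.",
    "Only ask questions that are strictly necessary to provide people with relevant information.",
    "Listen to what people are saying and sympathize with them.",  # exact duplicate
]
existing = [
    "Only ask questions that are strictly necessary to provide people with relevant information.",
]
print(dedupe_instructions(handbook, existing))
# → ['Listen to what people are saying and sympathize with them.']
```

A normalization pass like this only catches exact or near-exact repeats; spotting instructions that overlap in meaning but not wording is why the Red Team's manual review was still necessary.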

After reviewing which moderator values were missing from the original System Prompts, the Red Team translated Moderator Handbook instructions into System Prompts. In some cases this meant simplifying the original sentence and removing extraneous information; in others, it meant adding context and further instructions.

For example, the instruction “Listen to what people are saying and sympathize with them.” is converted into:

  • “Actively listen to what the user is saying and sympathize with them. Create a space that is safe and calm to stabilize the individual to connect them to help and resources from your knowledge base. It is important to manage expectations and not to make specific promises (e.g. 'Please tell us about your problems and we will try to help you/do our best to give you the right information').”

In the earliest translations, this System Prompt attempted to have the AI agent sympathize with the user while giving it specific sentences to manage user expectations. While this seemed useful at first, over time we moved away from such prompts, which were both long and contained more than one instruction.

Some moderation guidelines remained unchanged because of their simplicity and singular intention. For example, “Only ask questions that are strictly necessary to provide people with relevant information.” 

Others were translated and given more context without introducing double meaning. For example, “Give people the information they need to make decisions for themselves.” became:

  • Summarize the response to give the user the information they require to make informed decisions about their needs and that I am a helpful assistant that will not counsel the user based on personal opinion.

This initial translation remained a work in progress as the Signpost Quality and Red Teams carried out rapid, iterative evaluations and learned a great deal about the value of directness, simplicity, and conciseness. While the principles and values of the Moderator Handbook remain in spirit, the System Prompts in the latest versions of the AI agents have become crisper, simpler, and more to the point.
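Once each instruction is reduced to a single idea, the prompts can be assembled into one system message for the agent. The function and rule list below are a hypothetical sketch, not Signpost's actual production prompt; the instructions are drawn from the examples above.

```python
# Single-idea instructions distilled from the handbook (illustrative)
instructions = [
    "Actively listen to what the user is saying and sympathize with them.",
    "Only ask questions that are strictly necessary to provide people with relevant information.",
    "Give people the information they need to make decisions for themselves.",
]

def build_system_prompt(rules):
    """Join single-idea rules into one numbered system message."""
    numbered = [f"{i}. {rule}" for i, rule in enumerate(rules, 1)]
    return "Follow these rules:\n" + "\n".join(numbered)

print(build_system_prompt(instructions))
```

Keeping each rule as its own list entry makes it easy to add, drop, or reorder instructions between evaluation rounds without rewriting the whole prompt.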

Lessons Learned

There are a few lessons to be learned from this translation exercise, some of which emerged only after months of rapid evaluations. Unsurprisingly, these lessons intersect with what we learned from creating System Prompts ourselves.

  1. Clarity is King: Ambiguous language with multiple possible meanings needs to be simplified. While humans can use social cues and context to interpret such language, AI agents and LLMs require direct prompting. When translating human guidance, avoid unclear language and state instructions directly.

  2. Keep it Simple: LLMs underperform when given too much information. Each System Prompt must focus on one task or idea. Deciding how much context to include is both an art and a science; specific use-cases will require experimentation and extensive evaluation.

  3. Structured Format: A consistent format for prompts helps the AI agent recognize and respond to instructions appropriately. We recommend encapsulating human guidance documents in structured templates for ease of communicating instructions to AI agents.
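One way to realize the structured-template lesson is to give every translated instruction the same shape. The `PromptRule` dataclass below is a hypothetical sketch of such a template, not Signpost's actual schema; the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PromptRule:
    """One handbook instruction rewritten as a single-idea rule."""
    principle: str      # the Signpost principle it comes from
    instruction: str    # the directive itself, stated plainly
    example: str = ""   # optional phrasing the agent may reuse

    def render(self) -> str:
        """Emit the rule in a consistent, machine-friendly format."""
        text = f"[{self.principle}] {self.instruction}"
        if self.example:
            text += f" Example: {self.example}"
        return text

rule = PromptRule(
    principle="People-Centered Approach",
    instruction="Only ask questions that are strictly necessary to provide people with relevant information.",
)
print(rule.render())
# → [People-Centered Approach] Only ask questions that are strictly necessary to provide people with relevant information.
```

Because every rule renders the same way, the agent sees a uniform pattern, and reviewers can trace each prompt back to the handbook principle it encodes.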

Adapting human language into a form that machines can understand is a new frontier, and experimentation is essential to discover what works effectively. This task resembles how programmers develop algorithms to solve real-world problems; they must create routines that simplify complexity. Similarly, the art and science of translating human documents into machine instructions share this goal, but with one key difference: the (human) language itself remains unchanged.
