When your AI flight attendant sings a song about your plane swimming

Recently I came across an ad that incorporated a “Virtual Flight Attendant”. It was quite janky, but I was pleased to see it included a chatbot with both voice recognition and text chat. Naturally, the voice recognition didn’t work. Never fear! I was determined to find out what it knew.

First, I asked it to write a limerick about flight attendants. It produced something like this:

There once was a flight crew so quirky,
Serving peanuts and tea, rather perky.
They danced in the aisles,
With the widest of smiles,
In tutus, which made it quite jerky.

Nice! I then asked it to write a song about planes swimming in the ocean, which it proceeded to do in three verses and two choruses! This one was obviously ChatGPT-powered, and probably capable of saying rather terrible things. Lucky it was just an ad, I suppose.

Later, I watched a talk by Vilas Dhar, an AI ethicist, which convinced me of the need to work through an “ethical checklist” before deploying AI for widespread use. Doing so has many benefits, foremost among them fewer problems down the line when people mess with your hard work.

Vilas has an “Ethical Framework” that can be used at the design stage of an AI project to prevent embarrassing abuse and inappropriate use of your chatbot.

The framework covers three areas:

  1. Responsible Data Practices:
    • Examine the source and quality of the training data used for AI models.
    • Assess and mitigate explicit and implicit biases present in the dataset.
    • Consider how the data might perpetuate or amplify historical biases.
    • Explore opportunities to prevent biased decision-making in the future.
  2. Well-defined Boundaries for Safe and Appropriate Use:
    • Clearly define the intended goals and target population for the AI tool.
    • Identify the responsible way to ensure the tool serves the needs of the target population.
    • Consider the incentives and potential misuse cases of the tool.
  3. Robust Transparency:
    • Ensure transparency in how the AI tool arrives at its recommendations and outcomes.
    • Provide traceability and auditability of the AI system’s decision-making process.
    • Enable decision-makers to understand the inputs, analysis, and outputs of the tool.
    • Engage with a diverse range of stakeholders to promote equity and fairness.

The framework emphasizes the importance of starting with ethical and responsible data practices, clearly defining the boundaries and intended use cases of the AI tool, and maintaining robust transparency throughout the development and deployment process. It aims to provide a foundation for making informed decisions and creating AI tools that support an equitable, sustainable, and thriving future.

The talk also works through a scenario in which Sarah, the CTO of a technology company, has to address her company’s AI-driven chatbot giving inappropriate, inaccurate, and offensive responses to customers. Here’s a summary of how Sarah applied the framework:

  1. Responsible Data Practices:
    • Sarah discovered that the chatbot was trained on an unscrubbed dataset from internet conversations, which likely introduced biases and inappropriate content.
    • To address this, she directed her team to use a new dataset compiled from the company’s own customer interactions, after scrubbing personal information.
    • She also instructed the team to run the data through bias detection processes and filters (the first sketch after this list shows what such scrubbing and filtering might look like).
  2. Well-defined Boundaries for Safe and Appropriate Use:
    • Sarah found that customers were using the chatbot for far-ranging conversations beyond its intended scope of customer service.
    • To address this, she involved the customer support team and frontline workers to understand the typical conversations with customers.
    • Based on this input, they built new boundary conditions for the chatbot, limiting it to discuss only relevant and expertise-related topics (see the second sketch after this list).
  3. Robust Transparency:
    • Sarah realized there was no way to explain some of the chatbot’s insensitive outputs; the system lacked transparency.
    • To improve transparency, her team built multiple input-output checkpoints and an internal audit process to monitor the chatbot’s outputs regularly.
    • They also implemented a risk assessment and response framework, allowing users to flag inappropriate conversations in real time for immediate action (the third sketch after this list shows a minimal version).
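
To make step 1 concrete, here is a minimal sketch of the kind of PII scrubbing and content filtering Sarah’s team might have applied to the customer-interaction dataset. The regex patterns and blocklist are my own illustrations, not anything from the talk; a real pipeline would use a dedicated PII-detection tool and a trained bias/toxicity classifier rather than keyword matching.

```python
import re

# Hypothetical PII patterns; a real pipeline would use a dedicated
# PII-detection library rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}"),
}

# Toy stand-in for a trained bias/toxicity classifier.
BLOCKLIST = {"offensive_term_1", "offensive_term_2"}

def scrub_pii(text: str) -> str:
    """Replace anything matching a PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def passes_content_filter(text: str) -> bool:
    """Reject records containing blocklisted terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def clean_dataset(records: list[str]) -> list[str]:
    """Scrub PII first, then drop records that fail the content filter."""
    scrubbed = (scrub_pii(r) for r in records)
    return [r for r in scrubbed if passes_content_filter(r)]

print(clean_dataset(["Call me on (555) 123-4567 or jane@example.com."]))
# -> ['Call me on [PHONE] or [EMAIL].']
```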
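
For step 2, a boundary condition can be as simple as checking each message against an allowlist of support topics before it ever reaches the language model. This sketch assumes a keyword allowlist; the topic lists and refusal message are hypothetical, and a production system would more likely use an intent classifier.

```python
# Hypothetical allowlist of topics the support chatbot may discuss.
ALLOWED_TOPICS = {
    "baggage": {"bag", "baggage", "luggage", "suitcase"},
    "booking": {"book", "booking", "reservation", "ticket", "refund"},
    "flight_status": {"delay", "delayed", "gate", "departure", "arrival"},
}

REFUSAL = ("I can only help with baggage, bookings, and flight status. "
           "For anything else, please contact a human agent.")

def within_boundaries(message: str) -> bool:
    """Return True only if the message touches an allowed support topic."""
    words = set(message.lower().split())
    return any(words & keywords for keywords in ALLOWED_TOPICS.values())

def guarded_reply(message: str, model_reply) -> str:
    """Refuse off-topic requests instead of forwarding them to the model."""
    if not within_boundaries(message):
        return REFUSAL
    return model_reply(message)

# A song about swimming planes never reaches the model:
print(guarded_reply("Write me a song about planes swimming in the ocean",
                    lambda m: "(model output)"))
```

A check like this would have stopped my flight-attendant limerick cold.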
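
For step 3, traceability can start with something as simple as an append-only log of every input/output pair, with an ID the UI can attach to a “flag this conversation” control. Another sketch; the file name and record fields are made up.

```python
import json
import time
import uuid

AUDIT_LOG = "chatbot_audit.jsonl"  # hypothetical append-only log file

def log_exchange(user_msg: str, bot_msg: str) -> str:
    """Record an input/output pair so any insensitive reply can be traced."""
    entry_id = str(uuid.uuid4())
    entry = {
        "id": entry_id,
        "timestamp": time.time(),
        "input": user_msg,
        "output": bot_msg,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry_id  # the UI keeps this so the user can flag the exchange

def flag_exchange(entry_id: str, reason: str) -> None:
    """Append a flag event referencing a logged exchange for human review."""
    event = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "flags": entry_id,
        "reason": reason,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")
```

Appending flag events, rather than editing earlier records, keeps the log itself auditable.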

Sarah’s approach involved taking the chatbot offline, addressing the data issues, defining clear boundaries for appropriate use, and implementing measures for transparency and auditability. The scenario emphasizes the importance of integrating ethical analysis from the initial design phase and throughout deployment to avoid such issues in the future.

Thinking tools like these can be invaluable in IT projects. Consider using them in yours too.

