Coastal Media Brand

For designers, building trust in Artificial Intelligence (AI) products is paramount. AI presents unique challenges that call for transparent interfaces, clear feedback, and ethical safeguards to build user confidence. Prioritizing trust supports user adoption and satisfaction, which enhances the overall user experience of AI-powered products.

As AI Product Designer Ioana Teleanu explains in this video, AI can hallucinate! How can designers ensure our AI-enabled solutions are reliable and that users can trust them? Let’s find out.


“We all fear what we do not understand.”

― Dan Brown, The Lost Symbol

The best way to build trust with our users is to be as transparent as possible (without overwhelming the user with too much technical information).

  1. Clearly communicate:

    1. Where does your system get its data? Indicate sources where possible.

    2. What user-generated information does the system use? For example, does the system rely on other users to provide data? 

    3. How does your system learn from user data?

    4. What are the chances of errors?

  2. If your system relies on personal data (such as location data, demographic information or web usage metrics):

    1. Always collect this information with full consent.

    2. Ask users to explicitly opt in to sharing information, instead of requiring them to turn off a setting that is on by default.

    3. Allow the user to use your solution without providing any personal data.
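The opt-in principle above can be sketched in code. This is a minimal, hypothetical settings model (the names `PrivacySettings` and `personalize_greeting` are illustrative, not from any real product): every data-sharing flag defaults to off, and the feature still works when no personal data has been shared.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrivacySettings:
    """Hypothetical per-user consent model: every flag defaults to opted OUT."""
    share_location: bool = False
    share_demographics: bool = False
    share_usage_metrics: bool = False

def personalize_greeting(settings: PrivacySettings,
                         city: Optional[str] = None) -> str:
    """The feature degrades to a generic experience without personal data."""
    if settings.share_location and city:
        return f"Welcome back! Here's what's trending in {city}."
    return "Welcome back! Here's what's trending."

# A brand-new user has shared nothing, yet the product remains fully usable.
print(personalize_greeting(PrivacySettings()))
# Only after an explicit opt-in does location shape the experience.
print(personalize_greeting(PrivacySettings(share_location=True), "Lisbon"))
```

The design choice worth noting: consent lives in defaults (`False`), so a forgotten settings screen fails safe rather than silently collecting data.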

Characteristics of a Trustworthy AI System

The National Institute of Standards and Technology (NIST) defines seven characteristics of a trustworthy AI system:  

  1. Valid and reliable: Validity refers to the system’s ability to meet user needs. Reliability refers to the system’s ability to perform consistently without failure. To ensure your AI products are valid and reliable, define success criteria and metrics to measure the system’s performance, and continually assess the system to confirm it is performing as intended.

  2. Safe: AI systems must never cause harm to their users. Rigorously test and simulate real-world usage to detect cases where the system may cause harm, and address them through design. Designers, data scientists, and developers must work together to safeguard users. For example, you may restrict certain actions based on a user’s age or location, or display warnings prominently.

  3. Secure and resilient: A system is resilient if it can continue to perform under adverse or unexpected conditions and degrade safely and gracefully when this is necessary. For example, you might design a non-AI-based solution to allow the user to continue using the solution in case the AI system breaks down. 

  4. Accountable and transparent: Transparency refers to the extent to which users can get information about an AI system throughout its lifecycle. The more transparent a system is, the more likely people are to trust it. For example, the system can provide status updates on its functioning or information on its process so that people using the system can understand it better.

  5. Explainable and interpretable: An explainable system is one that reveals how it works. The system can offer descriptions tailored to users’ roles, knowledge, and skill levels. Explainable systems are easier to debug and monitor.

  6. Privacy-enhanced: Privacy refers to safeguarding users’ freedoms, identities, and dignity. There is a tradeoff between enhanced privacy and bias: allowing people to remain anonymous can limit the inclusive data AI needs to function with minimal bias.

  7. Fair with harmful bias managed: Fairness relates to equality and freedom from discrimination. Bias isn’t always negative, and fairness is a subjective concept that differs across cultures and even across specific applications, so harmful bias must be actively managed rather than assumed away.
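The graceful-degradation idea from the “secure and resilient” characteristic can be sketched as a fallback pattern. This is a hypothetical recommender (all function names are illustrative): when the AI model is unavailable, the product serves a simple rule-based list instead of failing outright.

```python
from typing import List

def ai_recommendations(user_id: str) -> List[str]:
    """Placeholder for a call to an AI model; it may fail or time out."""
    raise TimeoutError("model endpoint unavailable")  # simulate an outage

def popular_items() -> List[str]:
    """Non-AI fallback: a static, rule-based list of popular items."""
    return ["item-1", "item-2", "item-3"]

def recommend(user_id: str) -> List[str]:
    """Degrade gracefully: if the AI system breaks down, serve the fallback
    so the user can keep using the product."""
    try:
        return ai_recommendations(user_id)
    except Exception:
        # In a real system, log and alert here; the user still gets results.
        return popular_items()

print(recommend("u42"))  # falls back to the rule-based list
```

The user-facing promise is the point: the interface keeps working, and the system can (per the transparency characteristic) tell the user it is temporarily serving non-personalized results.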

Use this checklist to assess the reliability of your AI tool.

Get your free template for “Check Your AI Tool’s Reliability”


Unsupervised: AI Art that Sidesteps the Copyright Debate

Generative AI can create stunning works of art. Unsupervised, part of artist Refik Anadol’s project Machine Hallucinations, is a generative artwork. The abstract images are driven by the Museum of Modern Art’s (MoMA) data and guided by machine learning and intricate algorithms, showcasing the intersection of art and cutting-edge AI research.

Anadol trained a unique AI model to capture the machine’s “hallucinations” of modern art in a multi-dimensional space—data was collected from MoMA’s extensive collection and processed with machine learning models.

This project tackles the challenges of AI-generated art—it has huge potential for creative expression, but it raises concerns with transparency and ethics. Anadol’s work invites a conversation about the interplay between art, AI research, and technology’s far-reaching impact. 

The art copyright debate centers on attributing creative rights in AI-generated artworks. Traditionally, copyright law is based on human authorship. Unsupervised addresses this issue by openly acknowledging the collaborative role of its AI model, StyleGAN2 ADA, in creating the art. This approach avoids copyright complexities by recognizing both the AI and the human artist, Refik Anadol, as co-creators. In doing so, Unsupervised fosters a shared authorship model, providing transparency and clarity in navigating the evolving landscape of art copyright for AI-generated works. 


The Takeaway

In design, building trust with users is paramount—especially with AI, transparency plays a pivotal role. As designers, it’s essential to clearly communicate various aspects, such as the data sources, how the system learns from user data, and the probability of errors. 

A trustworthy AI system possesses several vital attributes. Firstly, it must be valid and reliable, meeting user needs and performing consistently. Safety is non-negotiable; rigorous testing is crucial to detect potential harm, and collaboration between designers, data scientists, and developers is vital to ensure user safety. 

Accountability and transparency are achieved through regular status updates and clear insights into the system’s processes. Explainability and interpretability make the system understandable, aiding in debugging and monitoring. 

Privacy-enhanced AI respects users’ privacy while managing biases, acknowledging the delicate balance between privacy and data inclusivity. Lastly, fairness, a nuanced concept varying across cultures, should be strived for, with careful management of biases to eliminate discrimination. As designers, understanding and implementing these principles are fundamental to crafting ethical and trustworthy AI systems.

References and Where to Learn More

The Verge documents how Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day

Vox explains how image-generating algorithms work in simple language in the video, AI art, explained

See Stable Diffusion Frivolous for a teardown of the class action lawsuit against Stable Diffusion, highlighting both the distrust in AI and the complex tech that fuels the distrust.

Ars Technica’s analysis of Stable Diffusion copyright lawsuits

Read the self-stated goals set by OpenAI, Google, Microsoft and Anthropic for the AI industry body Frontier Model Forum.

Scientific American’s analysis of Why People Don’t Trust AI and How We Can Change That.

Here’s more on the thought experiment, Squiggle Maximizer.

Watch this interesting conversation on the opportunities of Generative AI between Mark Rolston, Ina Fried and Tom Mason at the DLD Conference.

Learn more about Unsupervised, Machine Hallucinations.


© 2024 Coastal Media Brand. All Rights Reserved.