EXPLAINABLE ARTIFICIAL INTELLIGENCE

Case Study

Problem:

The goal is to identify mechanisms for explaining black-box deep learning models in order to increase transparency, trust, and adoption by government and the public.

Solution:

Evaluate explainable ML techniques on custom challenge problems that demonstrate trust and use.

    • Identify measures and metrics that capture the effectiveness of explanations of black-box models.
    • Identify challenge problems relevant to the 12 performer teams and to the measures/metrics of the XAI program.
    • Design, conduct, and manage multiple program-wide evaluations to collect results comparing the methods used by the performer teams.
    • Write reports keeping DARPA apprised of status, potential roadblocks, and noteworthy advances made by the performer teams.
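To make the "measures and metrics" bullet concrete, one common style of metric for explanation effectiveness is surrogate fidelity: how often a simple, interpretable explanation agrees with the black-box model it describes. The sketch below is purely illustrative; the toy models, names, and thresholds are assumptions for demonstration and are not artifacts of the XAI program.

```python
def black_box(x):
    """Stand-in for an opaque model: a nonlinear decision rule (illustrative)."""
    return 1 if x[0] * x[1] + 0.5 * x[0] > 1.0 else 0

def surrogate(x, weights=(0.9, 0.8), threshold=1.0):
    """Interpretable linear approximation offered as an 'explanation' (illustrative)."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > threshold else 0

def fidelity(bb, expl, samples):
    """Fraction of samples on which the explanation's prediction matches the black box."""
    agree = sum(1 for x in samples if bb(x) == expl(x))
    return agree / len(samples)

# Evaluate agreement over a small grid of inputs.
samples = [(a / 4, b / 4) for a in range(9) for b in range(9)]
print(f"surrogate fidelity: {fidelity(black_box, surrogate, samples):.3f}")
```

A metric like this is only one axis of evaluation; program-wide user studies (as described in the Outcome section) measure the human side, i.e., whether explanations actually improve understanding and appropriate trust.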

Outcome:

    • Identified key methods and mechanisms for making black-box models understandable and appropriately trusted, through experiments comprising user studies with 12,700 participants.
    • Successfully guided 11 of the 12 performer teams through evaluation to better understand AI explainability.
    • Led to the creation of follow-on programs based on the work performed on the XAI program.