Data

General Issues
Science & Technology
Specific Topics
Artificial Intelligence
Location
United Kingdom
Scope of Influence
National
Links
https://jefferson-center.org/citizens-juries-artificial-intelligence/
Video
Citizens Juries on Artificial Intelligence
Start Date
End Date
Ongoing
No
Time Limited or Repeated?
A single, defined period of time
Purpose/Goal
Research
Make, influence, or challenge decisions of government and public bodies
Make, influence, or challenge decisions of private organizations
Approach
Consultation
Research
Spectrum of Public Participation
Consult
Total Number of Participants
36
Open to All or Limited to Some?
Limited to Only Some Groups or Individuals
Recruitment Method for Limited Subset of Population
Stratified Random Sample
General Types of Methods
Deliberative and dialogic process
General Types of Tools/Techniques
Facilitate dialogue, discussion, and/or deliberation
Specific Methods, Tools & Techniques
Citizens' Jury
Legality
Facilitators
Facilitator Training
Professional Facilitators
Face-to-Face, Online, or Both
Face-to-Face
Types of Interaction Among Participants
Discussion, Dialogue, or Deliberation
Information & Learning Resources
Expert Presentations
Written Briefing Materials
Video Presentations
Decision Methods
Idea Generation
General Agreement/Consensus
Opinion Survey
Communication of Insights & Outcomes
Public Report
Primary Organizer/Manager
Jefferson Center
Type of Organizer/Manager
Non-Governmental Organization
Funder
The National Institute for Health Research, Greater Manchester Patient Safety Translational Research Centre, Information Commissioner’s Office
Type of Funder
Academic Institution
National Government
Staff
Volunteers
No
Evidence of Impact
Types of Change
Changes in people's knowledge, attitudes, and behavior
Implementers of Change
Stakeholder Organizations
Elected Public Officials

CASE

Citizens Juries on Artificial Intelligence

16 settembre 2019 Scott Fletcher Bowlsby
25 luglio 2019 Scott Fletcher Bowlsby
7 giugno 2019 Scott Fletcher Bowlsby
6 giugno 2019 Annie Pottorff

Citizen Jurors were charged with exploring and deliberating about how important it is to be able to understand how an Artificial Intelligence system reaches a decision (“explainability”), even if the ability to do so could make its decisions less accurate.




Problems and Purpose

Artificial Intelligence is becoming a common tool in almost every industry, from healthcare to transportation to manufacturing and human services. But many people remain unsure about what AI is and is not, or how it might or might not be used. Pop culture tends to portray AI as rogue cyborgs and evil computers (think HAL in 2001: A Space Odyssey), although in reality we encounter AI almost every day, in much more mundane situations: our emails are filtered in our inboxes, our music is suggested by Spotify, and Google completes our thoughts as we type. Part of the confusion and nervousness likely stems from people not knowing that AI is simply technology that makes it possible for “machines to learn from experience, adjust to new inputs and perform human-like tasks” (SAS).

The National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre (PSTRC) and the United Kingdom's Information Commissioner’s Office (ICO) wanted to explore what people expect to know about how an AI system reaches a decision. Both organizations recognized a need to learn how people might weigh the increased accuracy of an AI system against the ability to show how that system reached its decision. To answer these questions, they commissioned the Jefferson Center to work with its UK partner, Citizens Juries c.i.c., to design and implement a pair of Citizens Juries – one in Northern England (Manchester) and one in the West Midlands (Coventry). Jurors were charged with exploring and deliberating about four scenarios in which an AI system would make a decision, and then deciding how important it is to be able to understand how the AI system reached its decision (“explainability”), even if that ability could make its decisions less accurate.

Because AI concepts are still unfamiliar to many people, can be packed with complex science and data, and have potential legal and regulatory impacts that remain unclear, researchers decided to use Citizens Juries. This approach provided participants with a chance to learn about AI, its applications, and how it functions, then generate guidance about how automated decision-making programs might be most effectively used and overseen.

Background History and Context


Organizing, Supporting, and Funding Entities

The Juries were commissioned by the National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre (PSTRC) and the Information Commissioner’s Office (ICO). The Jefferson Center worked with its UK partner, Citizens Juries c.i.c., to design and facilitate the Juries.

Participant Recruitment and Selection

The first Jury was held in Coventry from February 18-22, 2019, and the process was repeated in Manchester from February 25 to March 1, 2019. Each Jury was made up of 18 people, recruited via radio, newspaper, and job advertisements to represent a cross-section of the public.

Methods and Tools Used

The event used a citizens' jury process. Over five days, Jurors learned about different AI systems and considered the trade-offs between AI accuracy and explanations for decisions made by AI in four different scenarios.

What Went On: Process, Interaction, and Participation

During the first two days, Jurors heard from four different expert witnesses who prepared them to explore the four scenarios:

  • Healthcare: diagnosing an acute stroke
  • Recruitment: screening job applications and shortlisting candidates
  • Healthcare: matching donated kidneys with potential recipients
  • Criminal Justice: evaluating whether or not someone will be charged with a minor offence or given the opportunity to participate in a rehabilitation program

The first topics included an introduction to AI and the laws relevant to it, such as data protection law, the information and explanations legally required for AI decisions, and the rights and responsibilities of AI software providers, citizens, and others.

Next, Jurors listened to two experts who were asked to present competing arguments for prioritizing either AI performance or explainability. Jurors deliberated, identified reasons to prioritize AI performance over explainability and vice versa, and identified potential trade-offs of both.

On the third and fourth days, Jurors considered the scenarios relating to healthcare, criminal justice, and recruitment. For each case, they:

  1. Read the scenario
  2. Watched a video (recorded specifically for the Jury) of a person in that field (such as a person who works with stroke patients)
  3. Listened to a presentation from Dr. Allan Tucker, of Brunel University London’s Department of Computer Science, who reviewed how AI systems would be used in that scenario and responded to juror questions
  4. Deliberated
  5. Responded to questions about the given scenario and developed a rationale for why AI decision-making systems should be used in it

On the final day of the Jury, participants worked together to create a few general conclusions about AI and AI explainability, discussed when it is essential to provide an explanation about an automated decision, and responded to a series of additional questions from the Jury conveners.

Influence, Outcomes, and Effects

Over the course of each Jury, participants learned more about Artificial Intelligence, discussed the drawbacks and limitations of AI systems, and explored the benefits and opportunities that these systems can offer.

Both Juries drafted a statement to their neighbors about the experience. As Manchester participants wrote, “This opened our eyes and gave us an inside view of how AI works on us, how we can protect ourselves, and the ways this will change our lives in the future. We are now more aware of the importance of AI for ourselves, for future generations, and for the world.”

The results of these Juries are informing guidance under development by the Information Commissioner’s Office and the Alan Turing Institute on citizens’ rights to an explanation when decisions that affect them are made using AI. The findings will be presented to a range of stakeholders, developers, researchers, and public and private interests through a series of roundtable workshops convened by the ICO. These meetings will explore how consumer and citizen perspectives align with, and diverge from, those of the people developing and deploying these technologies. Based on these discussions, the ICO will publish a report on potential policies for future oversight of automated decision-making programs later this year.

Analysis and Lessons Learned


See Also

References

Citizens Juries on Artificial Intelligence – The Jefferson Center

External Links

The final report

Greater Manchester PSTRC website

Notes