4th AAAI/ACM Virtual Conference on AI, Ethics, and Society, 19–21 May 2021

A single-track virtual conference. Program Co-chair: Prof Seth Lazar; Virtual Experience Chair: Dr Chelle Adamson. HMI papers will be presented by Dr Claire Benn, Dr Alban Grastien, Dr Atoosa Kasirzadeh, Dr Damian Clifford, Dr Pamela Robinson, and Elija Perrier.

Over the last few years, the world has awoken to the power that we have vested—often without thought or care—in the people and systems that collect, aggregate, analyse, and act on our data. At the same time, AI systems promise new ways to empower individuals and collectives to change society from the bottom up. International organisations, governments, universities, corporations, and philanthropists have recognised the urgent need to bring all of our intellectual tools to bear on charting a course through this uncertain new territory. Earlier iterations of this conference and others have seen the first fruits of these calls to action, as programs for research have been set out in many fields relevant to AI, Ethics, and Society.

The early days of shaking us awake are done: we now know well that we are increasingly reliant on AI systems that are radically changing the world around us, for better and worse. The next step is to chart a course forward, both by deepening our diagnosis of where we are now, and by developing new goals, models, and technical and regulatory systems to shape the future of AI and society toward how we collectively intend our societies to look.

To achieve these twin objectives—a richer understanding of where we are now, and technical and socio-technical paths forward—we must draw on insights from across disciplines. AIES is convened each year by program co-chairs from Computer Science, Law and Policy, the Social Sciences, and Philosophy. Our goal is to encourage talented scholars in these and related fields to submit their best work related to the morality, law, and political economy of data and AI. Papers should be tailored for a multi-disciplinary audience without sacrificing excellence. In addition to the community of scholars who have participated in these discussions from the outset, we want to explicitly welcome disciplinary experts who are newer to this topic, and see ways to break new ground in their own fields by thinking about data and AI.

The following list of topics and examples is intended to be illustrative, not exhaustive.

  • Empirical research into the impacts of AI systems.
    • Work bringing to light applications of AI with significant but insufficiently recognized impacts.
      • E.g. detailing new and underexplored uses of AI in government, defence, healthcare, finance, political campaigning, marketing, digital platforms and other areas.
    • Work advancing our theoretical understanding of how AI systems are changing societies.
      • E.g. exploring how data and AI-driven policy-making leads to changes in how governments see citizens (and vice versa); how industry shapes social environments so that they are more susceptible to datafication; how AI systems can react to, produce and reproduce social inequality and prejudice, including racism and misogyny; the social consequences of automation; political economy of big tech.
    • Work investigating public or professional resistance to the deployment of AI systems.
  • Evaluative research into AI impacts.
    • Work deepening the moral diagnosis of existing and feasible AI systems.
      • E.g. theoretical accounts of why surveillance may be resisted or embraced; how it reshapes subjectivity and behavior; the kinds of manipulation it enables; accounts of the nature of discrimination as practiced by AI systems; existential risks posed by the development of AI systems.
    • Work evaluating existing and feasible AI systems against existing legal and regulatory regimes.
      • E.g. assessing the feasibility of ‘black box’ AI systems complying with existing administrative law; data protection implications of existing AI systems; impact of AI systems on antitrust issues.
  • Evaluative research into the goals at which we should aim when redesigning AI systems.
    • Theoretical work aimed at addressing, understanding, or resolving evaluative uncertainty and disagreement about goals to aim at with AI systems.
      • E.g. determining how to think about discrimination in the age of AI; how to philosophically conceptualize alignment with human values.
    • Normative theory aiming to map out how AI systems could be used legitimately, and for social benefit.
      • E.g. re-examining the moral foundations of administrative law to devise standards for AI-assisted institutional decision-making.
  • Technical research into the representation, acquisition, and use of ethical knowledge by AI systems.
    • How can ethical knowledge be represented: as rules and constraints; as utility functions; as stories and scripts; as deep neural networks; etc.?
      • E.g. ethical knowledge is learned by humans from limited amounts of experience and pedagogy; what does this mean for representation?
    • How should key concepts such as fairness and bias be formalized to allow properties of intelligent systems to be evaluated and guaranteed?
      • E.g. establishing “best practices” for training set curation to prevent or reduce transmission of existing societal bias to a learning system.
  • Proposal and/or evaluation of technical methods for realising evaluative goals.
    • Work focusing on developing AI systems for specific application domains that advance valid evaluative goals.
      • E.g. ‘Mechanism Design for Social Good’ and related areas.
    • Work introducing mechanisms for procedural justice into AI systems as deployed in practice.
      • E.g. methods for making AI systems in practice better suited to democratic governance; design tools for introducing audit trails into AI systems; explainable AI with a social purpose.
  • Proposal and/or evaluation of sociotechnical methods for realising evaluative goals.
    • Work exploring the culture and practices of AI research and development to counteract structural injustice.
      • E.g. labor rights and employee activism in the tech sector; alternative socially-oriented methods for AI research and development such as data trusts and public benefit corporations; nature of collective mobilization in digitally distributed environments.
    • Work proposing and evaluating methods for responsible and inclusive innovation with active involvement from those affected by new technologies.
      • E.g. methods for participatory design and responsible innovation practices.
  • Proposal and/or evaluation of legal and regulatory approaches for realising evaluative goals.
    • Work exploring the relative merits of using legal instruments such as antitrust, consumer protection, and data protection to regulate the impacts of AI.
      • E.g. comparative analysis of data protection regimes; arguments for or against explicit regulation of automated decision-making; ongoing prospects for transnational regulation.
    • Work exploring the role of public law in constraining public use of AI and related technologies.
      • E.g. investigation of how administrative law needs to be revised to accommodate AI (or vice versa).

Contact:

https://hmi.anu.edu.au/contact

Organizing Institutions:

Association for the Advancement of Artificial Intelligence (AAAI)

Association for Computing Machinery (ACM)

ACM Special Interest Group on Artificial Intelligence (SIGAI)

Sponsors:

DeepMind Ethics & Society

Founders Pledge