Past Editions
We can harness unprecedented amounts of data using AI, creating opportunities to tackle major societal problems in domains such as health, well-being, and mobility. To make AI useful, we need new ways to combine the creative power of humans with the analytical capabilities of computers. When designing solutions and developing systems for social good, a key challenge is how to help designers, experts, and societal stakeholders work together with AI to prepare, realize, and evaluate design interventions. How can we reduce design complexity for large-scale social interventions?
Check out the recorded talks from the third edition of TAFF here!
Ben Shneiderman
University of Maryland
Chenhao Tan
University of Chicago
Juho Kim
School of Computing at KAIST
Aaron Halfaker
Microsoft Research
Nithya Sambasivan
Google
Trivik Verma
Delft University of Technology
Tim Kraska
MIT
Vanessa Murdock
Amazon
Mounia Lalmas
Head of Tech Research at Personalization at Spotify
Judith Redi
Head of Data Science at Miro
The adoption of artificial intelligence, data science, and data analytics, among other techniques, is now widespread across many contexts and domains: these systems help us decide which items to buy and what music to listen to, and they are also deployed in high-stakes domains such as education, healthcare provision, and criminal justice. The performance of such AI systems depends both on the learning algorithms and on the data used for their training and evaluation. The role of the algorithms is well studied. In contrast, research that focuses on the data used in AI systems is far less common, even though data is always at their core and is a crucial component for advancing and assessing the field. We took a multi-disciplinary view and explored lessons learned from success stories, as well as examples in which the irresponsible use of data can create and foster inequality and inequity, perpetuate bias and prejudice, or produce unlawful or unethical outcomes. We discussed and drew up guidelines for making the use of data a responsible practice.
Check out the recorded talks from the second edition of TAFF here!
Jahna Otterbacher
Cyprus Center for Algorithmic Transparency (CyCAT) at the OUC
Luke Stark
University of Western Ontario
Lora Aroyo
Google Research
Q. Vera Liao
IBM T.J. Watson Research Center
Elena Simperl
King's College London
Catherine D'Ignazio
MIT, Data + Feminism Lab
Solon Barocas
Cornell University, Microsoft
Alessandro Piscopo
BBC, Datalab
Krishnaram Kenthapadi
Amazon AWS AI
Seda Gürses
Delft University of Technology
The unprecedented rise in the adoption of artificial intelligence techniques in many contexts is concomitant with shortcomings of such technology with respect to robustness, interpretability, usability, and trustworthiness. Crowd computing offers a viable means to engage large numbers of human participants in data-related tasks and in user studies. In the context of overcoming the computational and interactional challenges facing the current generation of AI systems, recent work has shown how crowd computing can be leveraged to debug noisy training data in machine learning systems, to understand which machine learning models are more congruent with human understanding in particular tasks, or to advance our understanding of how AI systems can influence human behavior.
Check out the recorded talks from the first edition of TAFF here!
Matthew Lease
UT Austin, Amazon
Gianluca Demartini
University of Queensland
Mihaela Vorvoreanu
Microsoft
Shamsi Iqbal
Microsoft
Panos Ipeirotis
New York University
Michael Bernstein
Stanford University
Nithya Sambasivan
Google
Olga Megorskaya
Toloka
Simo Hosio
University of Oulu
Edith Law
University of Waterloo