The European Union’s Scientific Advisory Mechanism (SAM) provides independent scientific evidence and policy recommendations to the European institutions at the request of the College of Commissioners. The mechanism includes SAPEA (Science Advice for Policy by European Academies), which brings together around 110 academics from across Europe, offers outstanding expertise from the natural sciences, engineering and technology, the medical, health, agricultural and social sciences, and the humanities, and provides independent evidence reviews on request.
Given the rise in the frequency and cost of data security threats, it is critical to understand whether and how companies strategically adapt their operational workforce in response to data breaches. We study hiring in the aftermath of data breaches by combining information on data breach events with detailed firm-level job posting data.
Using a staggered difference-in-differences approach, we show that breached firms significantly increase their demand for cybersecurity workers. Firms’ responses also extend to promptly recruiting public relations personnel, often ahead of cybersecurity hires, in an effort to manage trust and alleviate negative publicity. Following a breach, the likelihood that a firm posts a cybersecurity job rises by approximately two percentage points, which translates into an average willingness to spend an additional $61,961 in annual wages on cybersecurity, public relations, and legal workers. While these hiring adjustments are modest for any individual firm, they aggregate to a potential impact of over $300 million on the overall economy. Our findings underscore the vital role of human capital investments in shaping firms’ cyber defenses and provide a valuable roadmap for managers and firms navigating cyberthreats in an increasingly digital age.
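The staggered difference-in-differences design described above can be sketched on simulated data. Everything below is illustrative, not the paper’s data or code: the number of firms, breach dates, and the two-percentage-point effect are assumptions baked into the simulation, and the estimator is a simple two-way fixed-effects regression fit by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_periods = 200, 8
firm = np.repeat(np.arange(n_firms), n_periods)
period = np.tile(np.arange(n_periods), n_firms)

# Staggered treatment: half the firms suffer a "breach" at a random period;
# the rest are never treated (breach period set past the sample window).
breach_period = np.full(n_firms, n_periods)
treated_firms = rng.choice(n_firms, n_firms // 2, replace=False)
breach_period[treated_firms] = rng.integers(2, n_periods - 1, treated_firms.size)
post = (period >= breach_period[firm]).astype(float)

# Outcome: probability of posting a cybersecurity job, built from firm and
# period effects plus a simulated 0.02 (two percentage point) breach effect.
firm_fe = rng.normal(0.10, 0.02, n_firms)
period_fe = np.linspace(0.0, 0.03, n_periods)
y = firm_fe[firm] + period_fe[period] + 0.02 * post + rng.normal(0, 0.01, firm.size)

# Two-way fixed-effects OLS: the post-breach indicator plus firm dummies
# and period dummies (one period dropped to avoid collinearity).
X = np.column_stack([
    post,
    np.eye(n_firms)[firm],
    np.eye(n_periods)[period][:, 1:],
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated effect on posting probability: {beta[0]:.3f}")
```

With homogeneous effects as simulated here, the regression should recover an estimate close to the simulated 0.02; with heterogeneous, staggered effects, more robust estimators would be needed in practice.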
Our society remains profoundly inequitable, due in part to biases in human and algorithmic decision-making. Addressing this, we propose machine learning and data science methods to improve the fairness of decision-making, focusing on applications in healthcare and public health. First, we develop scalable Bayesian methods for assessing bias in human decision-making and apply these methods to measure discrimination in police traffic stops across the United States. Second, we develop methods to address an important source of bias in algorithmic decision-making: when the target the algorithm is trained to predict is an imperfect proxy for the desired target. We show how to leverage plausible domain knowledge in two real-world settings, flood detection and medical testing, to detect and mitigate target variable bias.
In AI & ML, participatory approaches hold promise to lend agency and decision-making power to marginalized stakeholders. But what does meaningful participation look like in practice? This talk will first cover an in-depth case study of designing ML tools with, and in service of, activists who monitor gender-related violence. Drawing from intersectional feminist theory and participatory design, we develop methods for data collection, annotation, modeling, and evaluation that aim to prioritize activist expertise and sustainable partnerships. Then, we’ll consider what participatory approaches should look like in the age of foundation models. Participatory scholarship historically prioritizes local context, but foundation models are often disconnected from downstream contexts and users by design. I’ll discuss recent work in which we develop a blueprint for public participation that identifies more local, application-oriented opportunities for meaningful participation within the foundation model ecosystem.
While large deep learning models have become increasingly accurate, concerns about their lack of interpretability have taken center stage. In response, a growing subfield on the interpretability and analysis of these models has emerged.
Hundreds of techniques have been proposed to “explain” model predictions; however, what aims these explanations serve and how they ought to be evaluated are often left unstated. In this talk, I will first present a framework for quantifying the value of explanations, which allows us to compare different explanation techniques. I will then highlight the need for holistic evaluation of models, sharing two tales: (i) how geographically representative are the artifacts produced by text-to-image generation models, and (ii) how well can conversational LLMs challenge false assumptions?
This talk is with Dr. Pruthi, an Assistant Professor at the Indian Institute of Science, Bengaluru. He received his Ph.D. from the School of Computer Science at Carnegie Mellon University and is broadly interested in natural language processing and deep learning, with a focus on the inclusive development and evaluation of AI models.
What are the potential technical and policy research problems in the LLM space? What should the future of ethical AI be?
To establish Transparency Coalition.ai’s (TCAI) raison d’être, we will first describe how the current generation of Large Language Models is built on training data collected through a variety of mechanisms. These practices have resulted in a range of potential consumer harms, such as mis- and disinformation, deepfakes, and hallucinations.
In this talk, our speakers will highlight the need for regulatory action on training data collection and processing to create an ethical AI framework that protects consumers. They will survey how current regulatory approaches to AI fall short in specificity, timeliness, and potential impact, and will spotlight their work engaging and educating lawmakers and policymakers, their proposed policy initiatives, and learnings from the field.
Generative AI, especially tools like ChatGPT, has the potential to revolutionize K-12 education. These tools offer exciting ways to personalize learning and engage students, but they also bring a host of new challenges.
Questions arise regarding effective use and ethical considerations: How are students and teachers using ChatGPT? How much trust do they place in its accuracy? And what do students and teachers know (and not know) about how to be effective and responsible users?
In this talk, our speakers will explore how students and teachers utilize ChatGPT, how much they trust it, and what they understand about the effective and responsible use of AI. We will present findings from two focus group studies involving middle and high school educators and students, exploring the practical applications and ethical implications in classrooms.
This talk with the founders of foundry10 aims to foster collaborative discussions on responsible AI in K-12 education, encouraging attendees to share their experiences and insights.
XD works in multi-disciplinary teams of engineers, project managers, and data scientists to support the research and application of artificial intelligence solutions to the delivery of government services. Each team works with federal stakeholders across government and often with the support of outside partners, such as academic research groups, to apply the latest innovations in artificial intelligence to each project.
In this session, XD Emerging Technology Fellows join a roundtable discussion on issues of responsible AI, with the goal of exploring potential collaborations with faculty and students.
Predictive Risk Models (PRMs) have become commonplace in many government agencies, promising data-driven decision-making in high-stakes contexts such as criminal justice, child welfare, homelessness, and immigration. While such technology continues to be acquired and implemented rapidly throughout government because of the perceived benefits of cost reduction and better decision outcomes, recent research has pointed out several issues in how PRMs are developed. Notably, existing risk assessment approaches underlie much of the training data for these PRMs. But what exactly are these PRMs predicting? In this talk, I use empirical studies in the context of child welfare to deconstruct and interrogate what “risk” in PRMs actually means, and I offer provocative directions for the community to discuss how we can move beyond existing PRM development approaches.
Until the introduction of the European Union’s General Data Protection Regulation (GDPR) in May 2018, privacy and data protection had, with few exceptions, been the domain of legal and policy departments in only the largest corporations.
With the arrival of the GDPR and the subsequent introduction of similar regulations around the world, particularly the California Consumer Privacy Act (CCPA), much of this weight shifted to privacy programs and privacy engineering functions that sit much closer to product development. A massive increase in market adoption of machine learning and, more recently, the viral adoption of Large Language Models are now driving legislators to regulate the use of Artificial Intelligence, and it is these same privacy programs and engineering functions that are largely expected to pick up the slack. In this presentation, we will discuss experiences from privacy programs at two different high-tech companies: how these programs are organized, what they do, and some of the most pressing challenges, technical and otherwise, that they face in complying with the ongoing tsunami of privacy, data protection, and AI regulation.
In this talk, I will give an overview of some of the challenges that arise in supporting AI fairness, interpretability, and responsible AI more broadly in industry practice. I’ll examine these challenges through the lens of three case studies drawn from my own research experiences: disaggregated evaluations, dataset documentation, and interpretability tools. These examples illustrate the importance of interdisciplinary research and human-centered approaches to responsible AI.