Projects

End-to-end responsibility

Fairness, accountability, and transparency are as influenced by data collection and cleaning as they are by machine learning. For example, imputing missing values can hide a bias about what was missing and why; string-based entity resolution can introduce a bias against the short names common among some Asian nationalities; and injecting differential privacy can change fairness metrics. We’re working on tracking statistical properties through the entire data lifecycle with the notion of “Data Equity Systems.” We envision a database-style “optimizer” that can select algorithms to improve, say, fairness while maintaining accuracy.
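
To make the imputation example concrete, here is a minimal, purely illustrative sketch (synthetic data and invented numbers, not from any real study) of how global-mean imputation can shrink a real between-group gap when missingness depends on group membership:

    # Illustrative only: mean imputation can mask group differences that arise
    # from *why* values are missing (here, missingness depends on group).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.choice(["A", "B"], size=n)
    income = np.where(group == "A",
                      rng.normal(60_000, 5_000, n),
                      rng.normal(45_000, 5_000, n))
    # Group B's low incomes are more likely to go unreported.
    missing = (group == "B") & (income < 45_000) & (rng.random(n) < 0.7)
    observed = pd.Series(np.where(missing, np.nan, income))

    imputed = observed.fillna(observed.mean())          # global mean imputation
    df = pd.DataFrame({"group": group, "true": income, "imputed": imputed})
    print(df.groupby("group")[["true", "imputed"]].mean())
    # The imputed column shrinks the A-B gap, so a fairness audit run on the
    # cleaned data understates the disparity.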

Synthetic datasets for privacy and fairness

We’ve done some work on generalizing synthetic data algorithms so that we can take in arbitrary relational schemas and produce synthetic versions that are a) differentially private, b) of high utility for prediction tasks, and, potentially, c) fair. The key challenge is dealing with very high-dimensional data, like “the set of all movies you’ve watched.” We have a matrix factorization approach that scales on high-performance computing hardware and software while offering accuracy competitive with slower, smaller-scale approaches.
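
As a rough sketch of where differential privacy can enter a factorization-based generator (this is a toy illustration, not our algorithm or its privacy accounting), each user’s contribution to the shared item factors can be clipped and noised before the factors are released:

    # Toy sketch: factor a sparse user-item matrix with clipped, noised per-user
    # gradient contributions, then sample synthetic rows from the released factors.
    import numpy as np

    rng = np.random.default_rng(1)
    X = (rng.random((500, 200)) < 0.05).astype(float)   # toy "movies watched" matrix
    k, clip, sigma, lr, epochs = 16, 1.0, 0.8, 0.05, 30

    U = rng.normal(0, 0.1, (X.shape[0], k))             # per-user factors (kept private)
    V = rng.normal(0, 0.1, (X.shape[1], k))             # shared item factors (released)

    for _ in range(epochs):
        err = X - U @ V.T                               # reconstruction error
        grad_U = -(err @ V) / X.shape[0]                # user factors stay on the user side
        grad_V = np.zeros_like(V)
        for i in range(X.shape[0]):
            g = -np.outer(err[i], U[i])                 # user i's contribution to item factors
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # bound each user's influence
            grad_V += g
        grad_V = (grad_V + rng.normal(0, sigma * clip, V.shape)) / X.shape[0]
        U -= lr * grad_U
        V -= lr * grad_V

    # Sample synthetic "users" from the released item factors plus a simple model of
    # user factors (a real pipeline would also privatize this user-factor model).
    U_syn = rng.normal(U.mean(axis=0), U.std(axis=0) + 1e-6, (1000, k))
    synthetic = (1 / (1 + np.exp(-(U_syn @ V.T))) > 0.5).astype(float)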

“Nutritional Labels” on datasets for fairness, biases, fitness for use

More than just metadata, these are interactive widgets that let you explore “bad correlations” in the data and assess whether a dataset is appropriate for your purposes.
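
A minimal sketch of one check such a label might surface (the function, threshold, and the hypothetical candidates.csv file are illustrative assumptions, not part of the actual widgets):

    # Flag features whose correlation with a sensitive attribute exceeds a threshold.
    import pandas as pd

    def flag_bad_correlations(df: pd.DataFrame, sensitive: str, threshold: float = 0.3):
        """Return numeric features suspiciously correlated with `sensitive`."""
        corr = df.corr(numeric_only=True)[sensitive].drop(sensitive)
        return corr[corr.abs() >= threshold].sort_values(key=abs, ascending=False)

    # Hypothetical usage on a hiring dataset with a numeric 'gender' code:
    # print(flag_bad_correlations(pd.read_csv("candidates.csv"), sensitive="gender"))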

Learning fair representations from heterogeneous (urban) data

If you have access to 100 datasets, which ones should you include in your trained model? Conventionally, you either rely on domain knowledge to pick the “best” ones for your application, or you throw everything in and let the model sort it out. But the biases in all of those datasets have unpredictable effects downstream. Using techniques from multi-task learning, representation learning, and fair ML, we are learning fair representations of heterogeneous datasets that can be used in a variety of downstream applications to simplify network architectures, reduce training time, and improve accuracy, without destroying fairness.
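
One common recipe for this kind of setup (a sketch under generic assumptions, not necessarily the architecture we use) pairs a shared encoder and per-task heads with an adversary that tries to recover a protected attribute from the shared representation; a gradient-reversal layer pushes the encoder toward representations the adversary cannot exploit:

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class FairSharedEncoder(nn.Module):
        def __init__(self, in_dim, rep_dim, n_tasks, n_protected, lambd=1.0):
            super().__init__()
            self.lambd = lambd
            self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                         nn.Linear(128, rep_dim))
            self.task_heads = nn.ModuleList([nn.Linear(rep_dim, 1) for _ in range(n_tasks)])
            self.adversary = nn.Linear(rep_dim, n_protected)

        def forward(self, x):
            z = self.encoder(x)                                  # shared representation
            task_outputs = [head(z) for head in self.task_heads] # one head per dataset/task
            protected_logits = self.adversary(GradReverse.apply(z, self.lambd))
            return z, task_outputs, protected_logits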

Multi-view learning

We have what appears to be the world’s best mousetrap for learning joint representations over multiple views (e.g., text + image). We’re interested in expanding this approach to different kinds of heterogeneous data.

Multi-objective fairness in recommendation

In systems with multiple parties, what counts as individual or group fairness for one party may pull in a different direction than it does for another. We have been working to optimize fairness across multiple groups, each with different and competing goals (e.g., profit vs. privacy).
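
To illustrate the trade-off structure (with toy objective functions standing in for the actual per-group fairness metrics), scalarizing the competing objectives and sweeping the weight traces out approximate trade-off points:

    import numpy as np

    def provider_utility(x):          # toy stand-in, higher is better
        return 1.0 - (x - 0.8) ** 2

    def consumer_utility(x):          # toy stand-in, higher is better
        return 1.0 - (x - 0.2) ** 2

    xs = np.linspace(0, 1, 201)       # x = a single policy knob (e.g., exposure allocation)
    for w in np.linspace(0, 1, 6):    # weight on the provider objective
        scores = w * provider_utility(xs) + (1 - w) * consumer_utility(xs)
        best = xs[np.argmax(scores)]
        print(f"w={w:.1f} -> policy x={best:.2f}, "
              f"provider={provider_utility(best):.2f}, consumer={consumer_utility(best):.2f}")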

Explainable recommendations

In this project, we are exploring a new framework for explainable recommendation that involves both system designers and end users. The system designers will benefit from structured explanations that are generated for model diagnostics. The end users will benefit from receiving natural language explanations for various algorithmic decisions. We have developed post-hoc (summative) explanation generation and are currently working on pre-hoc (formative) generation as well as evaluating different methods for presenting explanations to the users. The project will also develop aggregated explainability measures and release evaluation benchmarks to support reproducible explainable recommendation research.

Fairness in search

We have been working to understand how to balance relevance and diversity in search. This has led to new metrics and frameworks that allow us to scan a given problem domain and the available data and estimate how fair an ML system can be. Our experiments with these constructs so far show that we can indeed build fair systems without sacrificing user satisfaction. We are now pursuing these fairness objectives in other ML applications and domains.
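
As a standard illustration of the relevance/diversity tension (Maximal Marginal Relevance, a textbook re-ranker, not our specific metrics), documents are selected greedily by trading relevance against similarity to what has already been chosen:

    def mmr(relevance, similarity, lam=0.7, k=10):
        """relevance: dict doc -> score; similarity: dict (doc, doc) -> score in [0, 1]."""
        selected, candidates = [], set(relevance)
        while candidates and len(selected) < k:
            def score(d):
                max_sim = max((similarity.get((d, s), similarity.get((s, d), 0.0))
                               for s in selected), default=0.0)
                return lam * relevance[d] - (1 - lam) * max_sim
            best = max(candidates, key=score)
            selected.append(best)
            candidates.remove(best)
        return selected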

Privacy-aware personalization

Creating personalized content or recommendations invariably requires more personal information. In the age of hyper-personalization, this raises the question, “To what end?” We want the benefits of personalization, but not at the expense of privacy. We are looking into how far we can push the boundary of one without breaking the other.
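
One textbook mechanism for quantifying that boundary (an illustration, not necessarily what we deploy) is randomized response: users report a preference with plausible deniability, and epsilon controls the privacy/accuracy trade-off:

    import math
    import random

    def randomized_response(likes_item: bool, epsilon: float) -> bool:
        """Report a (possibly flipped) preference; satisfies epsilon-local differential privacy."""
        p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
        return likes_item if random.random() < p_truth else not likes_item

    def debiased_rate(reports, epsilon):
        """Recover the population-level preference rate from noisy reports."""
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        observed = sum(reports) / len(reports)
        return (observed + p - 1) / (2 * p - 1)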

Auditing search and recommendation algorithms for problematic information

Search and recommendation algorithms play an increasingly important role in selecting, presenting, ranking, and recommending the information considered most relevant for us, a key factor in our ability to meaningfully participate in public life. Yet these systems have no built-in notion of whether that information is credible. How do algorithmically powered search systems come to filter, rank, and recommend results that score highly on classic information retrieval metrics but are misleading or even completely inaccurate? We have been building research infrastructure to audit search and recommendation systems and investigate their role in surfacing such misinformation.

Hybrid intelligent interactive systems for information narrative maps

We are creating computational tools and intelligent systems to detect and assess information narratives at scale. Current approaches to tackling information at scale not only ignore this narrative perspective, they also treat the investigation either as a pure ML problem or, when they do involve humans, treat them as bystanders who only step in at the end to correct the ML output. Neither approach is robust in dynamic real-world scenarios. In this project, we are adapting the “human is the loop” paradigm from visual data analytics to create a robust hybrid human-AI narrative sensemaking system. The focus is on recognizing human analysts’ work processes and communicating them back to the algorithm, enabling a hybrid human-AI co-learning system for narrative forensics.

Supporting scalable fact-checking of search results through human-AI intelligence

Today, search engines like Google and Bing are powered by manual fact-checks (see the ClaimReview markup). But manual fact-checking of search results is not scalable, especially given the deluge of misinformation and the increased pressure to debunk it quickly and accurately. How can we scale and sustain fact-checking online without compromising the quality of fact-checks? How can we move toward technology-enabled, human-centric fact-checking that scales and also supports fact-checkers’ professional values and code of principles? In this project, we are merging the human-centric capabilities of fact-checkers with the computational capabilities offered by Artificial Intelligence (AI) algorithms, while also drawing inspiration from Value Sensitive Design (VSD).
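
For readers unfamiliar with the markup, here is a simplified, hypothetical ClaimReview record (all values invented; see schema.org/ClaimReview for the full vocabulary). Fact-checkers publish structured data along these lines, and search engines ingest it to label results:

    claim_review = {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "url": "https://example-factchecker.org/reviews/12345",
        "claimReviewed": "Drinking bleach cures the flu.",
        "author": {"@type": "Organization", "name": "Example Fact-Checker"},
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": 1,
            "bestRating": 5,
            "worstRating": 1,
            "alternateName": "False",
        },
    }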

Data Statements

A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software. In this project, we are developing schemas and best practices for creating data statements for natural language processing datasets. This project was initiated by the Value Sensitive Design Lab and the Tech Policy Lab.
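
As a rough sketch of the kinds of fields a data statement records for an NLP dataset (loosely following published proposals; the schemas this project develops may differ):

    from dataclasses import dataclass

    @dataclass
    class DataStatement:
        curation_rationale: str        # why these texts, selected how
        language_variety: str          # e.g., "en-US, informal social media"
        speaker_demographics: str      # who produced the language
        annotator_demographics: str    # who labeled it, with what training
        speech_situation: str          # time, place, modality, intended audience
        text_characteristics: str      # genre, topics, structure
        provenance: str                # licensing, collection process, known gaps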

Diversifying Machine Learning Models

In order to learn a fair and inclusive representation of the world, algorithms require access to unbiased and diverse datasets. What if such a dataset could be created dynamically by people, without the need for centralized data and the privacy concerns that come with sharing it? Relying on the federated learning paradigm and human-centric AI, we explore the possibility of breaking away from centralized data repositories and relying directly on citizens to collaboratively train the algorithms that have societal impacts on them.
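
A minimal federated averaging sketch (FedAvg-style, with plain NumPy linear models as a stand-in for the actual models) shows the core idea: each participant trains locally on data that never leaves their device, and only model weights are averaged centrally:

    import numpy as np

    def local_update(weights, X, y, lr=0.1, steps=20):
        w = weights.copy()
        for _ in range(steps):
            grad = X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
            w -= lr * grad
        return w

    def federated_round(global_weights, clients):
        """clients: list of (X, y) pairs held locally; only weights are shared."""
        updates = [local_update(global_weights, X, y) for X, y in clients]
        return np.mean(updates, axis=0)

    # Hypothetical usage: start from zeros and run a few rounds.
    # w = np.zeros(d)
    # for _ in range(10):
    #     w = federated_round(w, clients)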

Surrogate Benchmark Initiative (SBI)

Computational science is being revolutionized by the integration of AI and simulation, in particular by deep learning surrogates that can replace all or part of traditional large-scale HPC computations. Surrogates can achieve remarkable performance improvements (e.g., several orders of magnitude) and so save both time and energy. The SBI project will create a community repository and FAIR (Findable, Accessible, Interoperable, and Reusable) data ecosystem for HPC application surrogate benchmarks, including data, code, and all the collateral artifacts the science and engineering community needs to use and reuse these datasets and surrogates. We intend for our repositories to generate active research from both the participants in our project and the broader community of AI and domain scientists.
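
A minimal sketch of the surrogate idea (the placeholder expensive_simulation function and all sizes are illustrative; the SBI benchmarks supply real simulation inputs and outputs): fit a cheap regressor to logged (input, output) pairs from an expensive simulation, then use it in place of the solver:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_simulation(params: np.ndarray) -> np.ndarray:
        # Placeholder physics: in practice this is hours of HPC time per run.
        return np.sin(params).sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    X_train = rng.uniform(-3, 3, size=(2_000, 4))      # sampled simulation inputs
    y_train = expensive_simulation(X_train)

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2_000)
    surrogate.fit(X_train, y_train.ravel())

    X_new = rng.uniform(-3, 3, size=(5, 4))
    print(surrogate.predict(X_new))                     # milliseconds instead of hours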