CS Events

PhD Defense

Practical Methods for Fair and Explainable Decision Making

 


Wednesday, September 25, 2024, 10:00am - 12:00pm

 

Speaker: Abraham Gale

Location: CoRE 305

Committee

Professor Amélie Marian (chair)

Professor David M. Pennock

Professor Yongfeng Zhang

Professor Nikhil Garg (External Member)

Event Type: PhD Defense

Abstract: Algorithmic decision making is used to make a wide variety of important decisions in today’s society, from bail decisions to school matchings. There is an increasing recognition of the importance of understanding exactly how algorithms make their decisions. This dissertation aims to bridge the gap between algorithm designers and stakeholders by providing metrics and tools to better understand the behavior of decision processes. Bridging this gap has the potential to empower stakeholders to make more informed decisions about how they interact with the algorithms and to increase their trust in systems that are increasingly critical to the running of society. For regular people to be full participants in algorithmic decision making, the algorithms chosen must be explainable and transparent, so that no expert knowledge is required to understand their outcomes. The users of decision systems must also feel that the algorithms that drastically affect the course of their lives are fair and reasonable. To further these explainability and fairness goals, this dissertation proposes explainable algorithms and metrics to promote decisions that are clearly fair and reasonable to the people they most affect.

The first stage of algorithmic decision making is when decision makers choose the decision model to be used as an input to the algorithmic decision process; in many scenarios, decisions are based on ranking functions that aim to order the participants in the decision. This dissertation provides explainable metrics to help designers understand the diversity, fairness, and imputed weights of the ranking function under development. These metrics do not require expert knowledge to understand what they are measuring and how they measure it. By providing explainable metrics, decision makers can more easily trust the claims the metrics make about the data. Specifically, this dissertation proposes transparent participation metrics that clarify the ranking process by assessing the contribution of each parameter used in the ranking function to the final ranked outcome, using information about the ranking functions themselves as well as observations of the underlying distributions of the parameter values involved in the ranking. To evaluate the outcome of the ranking process, diversity and disparity metrics are proposed to measure how similar the selected objects are to each other and to the underlying data distribution. The metrics are evaluated on synthetic data as well as on two real-world scenarios: high school admissions and decathlon scoring.

Once explainable metrics are used in the design process, decision makers may want to automatically optimize the metrics without compromising the explainability of the outcome of the decision process. One such explainable metric is disparity, which is closely related to the popular statistical parity measure of fairness. Optimizing this metric usually means minimizing the disparity between the population and the selected set. Traditionally, this has been done through an opaque transformation of the ranking function, in academic proposals for ranking with fairness constraints, or through a set-aside system, in real-world school matching systems. The same minimization can be achieved with bonus points without substantially increasing the complexity of the ranking function. Fairness goals are often added on top of an existing ranking function; by mirroring the language in which these ranking functions are usually described, the fairness goals can be easily combined with the existing ranking function without reducing explainability. The conversion of the fairness goals into a specific number of bonus points is performed by a novel algorithm named the Disparity Compensation Algorithm (DCA). DCA uses a sampling-based approach modeled after stochastic gradient descent to satisfy fairness goals orders of magnitude faster than competing algorithms, and it allows algorithm designers to accomplish fairness goals without compromising the explainability of the overall ranking function.
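
The sketch below (in Python) is a minimal illustration of the bonus-point idea described above, not the dissertation's DCA itself, which uses a sampling-based approach modeled after stochastic gradient descent. The function names (disparity, select_top_k, find_bonus) and the toy data are illustrative assumptions.

import random

def disparity(selected, population, group):
    """Absolute gap between a group's share of the selected set and its
    share of the overall population (a statistical-parity-style measure)."""
    sel_share = sum(1 for c in selected if c["group"] == group) / len(selected)
    pop_share = sum(1 for c in population if c["group"] == group) / len(population)
    return abs(sel_share - pop_share)

def select_top_k(candidates, k, bonus, group):
    """Rank by the existing score plus a flat bonus for one group; keep the top k."""
    ranked = sorted(candidates,
                    key=lambda c: c["score"] + (bonus if c["group"] == group else 0),
                    reverse=True)
    return ranked[:k]

def find_bonus(candidates, k, group, max_bonus=20):
    """Smallest integer bonus (0..max_bonus) that minimizes disparity for `group`."""
    best_bonus, best_gap = 0, float("inf")
    for b in range(max_bonus + 1):
        gap = disparity(select_top_k(candidates, k, b, group), candidates, group)
        if gap < best_gap:
            best_bonus, best_gap = b, gap
    return best_bonus, best_gap

# Toy example: group "B" scores slightly lower on average than group "A".
random.seed(0)
pool = [{"group": "A", "score": random.gauss(70, 10)} for _ in range(500)] + \
       [{"group": "B", "score": random.gauss(65, 10)} for _ in range(500)]
bonus, gap = find_bonus(pool, k=100, group="B")
print(f"bonus points: {bonus}, remaining disparity: {gap:.3f}")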

The need for fairness and explainability continues past the main part of the algorithmic decision process to the follow-up decisions that must be made in response to potential errors. Once an error has been detected, the outcome may need to be adjusted to avoid unfairly disadvantaging any participants because of the error. Adjusting the outcome to repair errors after the algorithm has already been completed requires a stronger explanation than simply running the algorithm as originally planned with no errors involved. Without this explanation, it can be difficult to convince both the harmed and unharmed participants that they are being treated fairly. A real-world example of this challenge arises in applications of deferred-acceptance (DA) matching algorithms, where errors or changes to the matching inputs are sometimes discovered only after the algorithm has been run and the results have been announced to participants. Mitigating the effects of errors is a different technical problem from the original match, since the decision makers are often constrained by the decisions already announced. This dissertation proposes models for this new problem, along with explainable mitigation strategies to go with these models. Three error scenarios are explored: in the first, one of the participants is removed after the match is complete, leaving the group matched to that participant unmatched; in the second, some participants are unfairly disregarded and not ranked by a participant on the other side; in the last, some participants are unfairly promoted over others. For each error type, the expected numbers of participants directly harmed or helped by the error, indirectly harmed or helped, and left with justified envy due to the error are calculated. Error mitigation strategies must then arbitrate between minimizing the extra resources needed to mitigate the error and ensuring that no participants are unnecessarily harmed by it.

This dissertation provides algorithms, metrics, and models for fairer and more explainable algorithmic decision making. In particular, by providing metrics to help design ranking functions, explainable optimization, and sound error mitigation, this dissertation allows for greater stakeholder participation in algorithmic decisions. As algorithms become increasingly cryptic, it is essential to remember that key decision algorithms are designed to help humans make decisions at scale. If the algorithms are not fair and explainable to the humans they represent, they will not be successful no matter how effective they are. When explainability and fairness are centered, algorithms can become natural extensions of the humans making the decisions.
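
For readers unfamiliar with the matching setting above, the following is a standard textbook sketch (in Python) of student-proposing deferred acceptance (Gale-Shapley), included only to make the first error scenario concrete; the toy instance and the "school withdraws after the match" check are illustrative assumptions, not the dissertation's models or mitigation strategies.

def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing deferred acceptance.
    student_prefs: {student: [schools in preference order]}
    school_prefs:  {school: [students in preference order]}
    capacity:      {school: number of seats}
    Returns {student: school or None}."""
    rank = {s: {st: i for i, st in enumerate(prefs)} for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # index of the next school to propose to
    held = {s: [] for s in school_prefs}            # tentatively accepted students per school
    free = [st for st in student_prefs]
    while free:
        st = free.pop()
        prefs = student_prefs[st]
        if next_choice[st] >= len(prefs):
            continue                                # student has exhausted their list
        school = prefs[next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: rank[school].get(x, float("inf")))
        if len(held[school]) > capacity[school]:
            free.append(held[school].pop())         # reject the least-preferred student
    match = {st: None for st in student_prefs}
    for school, students in held.items():
        for st in students:
            match[st] = school
    return match

# Toy instance: if school "s2" withdraws after results are announced,
# the students matched to it are the ones directly harmed (left unmatched).
students = {"a": ["s1", "s2"], "b": ["s1", "s2"], "c": ["s2", "s1"]}
schools = {"s1": ["a", "b", "c"], "s2": ["c", "b", "a"]}
match = deferred_acceptance(students, schools, {"s1": 1, "s2": 2})
harmed = [st for st, sc in match.items() if sc == "s2"]
print(match, "directly harmed if s2 is removed:", harmed)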

Contact: Professor Amélie Marian

Zoom Link: https://rutgers.zoom.us/j/96136168607?pwd=jHAygrNF8UeNN9qp5Fc16g0huDrXzY.1