Final ACO Doctoral Examination and Defense of Dissertation of Jad Salem, April 17, 2023

Title: Online Learning with a View Toward Fairness

Link to Thesis draft:

Date: April 17, 2023
Time: 8:30am EST
Location: Skiles 005 and Zoom (link TBD)

Committee:
Dr. Santosh Vempala, College of Computing, Georgia Institute of Technology
Dr. Mohit Singh, School of Industrial and Systems Engineering, Georgia Institute of Technology
Dr. Grigoriy Blekherman, School of Mathematics, Georgia Institute of Technology
Dr. Deven Desai, College of Business, Georgia Institute of Technology

Advisor: Dr. Swati Gupta, School of Industrial and Systems Engineering, Georgia Institute of Technology
Reader: Dr. Vijay Kamble, College of Business Administration, University of Illinois Chicago

The field of fairness in algorithmic decision-making can be traced back at least 25 years and has flourished over the past decade. The overwhelming majority of this research, however, concerns offline decision-making, where decisions are made en masse; in practice, decisions are often made in real time with only partial information. In this talk, I use the domains of hiring and pricing to explore the themes of uncertainty and fairness in online decision-making.

The first line of inquiry I will discuss revolves around uncertainty in online decision-making. Specifically, I focus on the task of online applicant screening, where uncertainty (e.g., noise in evaluations, or knowledge that certain pairwise rankings may be corrupted due to bias) is captured by a partial order (i.e., a partial ranking) on the set of applicants. Modeling the screening process as a k-secretary problem with partial ordinal information, I prove tight lower and upper bounds on the competitive ratio (i.e., the worst-case expected performance ratio).
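For background, the classical (k=1) secretary problem with a total order admits the well-known 1/e-threshold rule, which the thesis generalizes to partial ordinal information. The sketch below implements only that classical baseline, not the partial-order variant from the talk; the function names and the simulation setup are illustrative assumptions.

```python
import math
import random

def secretary_choice(values):
    """Classical 1/e-threshold rule: observe the first n/e candidates
    without accepting, then accept the first later candidate who beats
    everyone seen so far (taking the last candidate if none does)."""
    n = len(values)
    cutoff = max(1, math.floor(n / math.e))
    best_seen = max(values[:cutoff])
    for i in range(cutoff, n):
        if values[i] > best_seen:
            return i
    return n - 1  # forced to accept the final candidate

def success_rate(n=50, trials=20000, seed=0):
    """Estimate the probability of hiring the single best candidate
    over uniformly random arrival orders."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        values = rng.sample(range(n), n)  # a random total order of quality
        if values[secretary_choice(values)] == n - 1:
            wins += 1
    return wins / trials
```

Under a random arrival order, the estimated success probability hovers near 1/e ≈ 0.368, the optimal guarantee for the total-order setting; the thesis's competitive-ratio bounds quantify how this degrades when only a partial ranking is available.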

The second line of inquiry revolves around achieving comparative fairness (i.e., ensuring that similar individuals receive similar decisions) in an online learning setting. The challenge here is that stringent fairness constraints can prevent learning, resulting in poor performance. I therefore introduce a novel relaxation of individual fairness which allows decisions to vary monotonically over time. Under this constraint, I provide bandit convex optimization algorithms which attain order-optimal regret for $N=1,2$ and sublinear regret for $N>2$.
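To make the monotonicity relaxation concrete, here is a hypothetical one-dimensional sketch (not the thesis's algorithm): an ordinary gradient-style update whose iterates are projected so that each decision never falls below the previous one, i.e., decisions may only move monotonically upward over time. The function name, step size, and the choice of "nondecreasing" as the monotone direction are all assumptions for illustration.

```python
def monotone_decisions(gradient_estimates, x0=0.0, eta=0.1):
    """Illustrative monotonicity-constrained update: take a plain
    gradient step, then project onto the feasible set [x_prev, inf)
    so the decision sequence is nondecreasing over time."""
    decisions = []
    x = x0
    for g in gradient_estimates:
        candidate = x - eta * g   # unconstrained gradient step
        x = max(x, candidate)     # projection: never undercut the previous decision
        decisions.append(x)
    return decisions
```

For example, with gradient estimates [-1, 1, -1], the unconstrained step after round one would pull the decision back down, but the projection holds it in place, yielding the nondecreasing sequence [0.1, 0.1, 0.2]. The tension the talk addresses is visible even here: the constraint forgoes some per-round optimality in exchange for temporal consistency.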

Finally, I turn to the law: what is allowed in addressing fairness and bias in algorithmic decision-making? First, I discuss the legality of fairness interventions in hiring. In particular, I show that precedent supports the use of partial orders to account for uncertainty in hiring. Turning to dynamic pricing, I discuss open questions regarding (a) how privacy laws affect contextual pricing, and (b) how price gouging laws can be translated into temporal fairness constraints.