
Ethics and Innovation in the Age of AI

Bias and Fairness in Search Engines


Lead Researchers:

Yi Fang, Professor
Department of Computer Science and Engineering, Santa Clara University

Zhiqiang Tao, Assistant Professor
Rochester Institute of Technology

Research Overview

Search systems, from web engines and e-commerce platforms to those used in admissions, housing, and employment, play a critical role in shaping access to information, economic opportunity, and social mobility. They influence both the users making decisions and the providers seeking visibility. This project investigates how bias arises in these systems and develops methods that ensure fairer outcomes while maintaining strong system performance.

Research Questions:

  • How do biases emerge across core components of search systems, including query processing, document representation, ranking algorithms, and evaluation metrics?
  • What strategies can effectively mitigate these biases to ensure fair and equitable outcomes for both users and content providers?

Methodology

The researchers propose a novel machine learning framework that mitigates bias through automatically weighted loss functions and curriculum learning strategies. Using meta-learning to adjust ranking loss, the approach improves fairness metrics for minority groups while preserving competitive ranking performance. They also evaluate Large Language Models (LLMs) in text-ranking tasks, establishing benchmarks for assessing fairness in these emerging systems.
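The idea of an automatically weighted ranking loss can be illustrated with a minimal sketch. This is not the authors' actual framework: the function names are hypothetical, and the meta-learning step is replaced here by a simple heuristic that upweights groups with higher average loss before the next training round.

```python
import math

def pairwise_hinge(score_pos, score_neg, margin=1.0):
    """Standard pairwise hinge ranking loss for one (relevant, irrelevant) pair."""
    return max(0.0, margin - (score_pos - score_neg))

def weighted_ranking_loss(pairs, group_weights):
    """Sum of per-pair hinge losses, each scaled by its group's weight.

    pairs: list of (group, score_pos, score_neg) tuples
    group_weights: dict mapping group label -> weight
    """
    return sum(group_weights[g] * pairwise_hinge(sp, sn) for g, sp, sn in pairs)

def reweight(pairs, group_weights, lr=0.1):
    """One illustrative re-weighting step: exponentially upweight groups whose
    average loss is higher, then renormalize so the mean weight stays 1.
    (A crude stand-in for a meta-learned weight update.)"""
    avg = {}
    for g in group_weights:
        losses = [pairwise_hinge(sp, sn) for gg, sp, sn in pairs if gg == g]
        avg[g] = sum(losses) / len(losses) if losses else 0.0
    new = {g: w * math.exp(lr * avg[g]) for g, w in group_weights.items()}
    z = sum(new.values())
    return {g: w / z * len(new) for g, w in new.items()}
```

In this toy setting, if pairs from a minority group are ranked worse (higher loss), `reweight` raises that group's weight, so the next optimization step pays more attention to its errors.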

Key Findings

  • Improved fair exposure of minority candidates without reducing accuracy.
  • Addressed data selection bias in Pinterest’s ads ranking system.
  • Validated a modified unsupervised domain adaptation method.
  • Among the first to study fairness in LLMs for search and ranking.
  • Identified inherent LLM biases and proposed mitigation strategies.
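The "fair exposure" finding above refers to a standard notion in the fairness-in-ranking literature: items placed higher in a ranking receive more attention, commonly modeled with a logarithmic position discount. A small illustrative sketch (not the project's code) shows how exposure can be compared across groups:

```python
import math

def exposure_per_group(ranking):
    """Total position-discounted exposure for each group in a ranking.

    ranking: list of group labels, ordered from rank 1 downward.
    Uses the common 1 / log2(rank + 1) position discount.
    """
    exp = {}
    for rank, group in enumerate(ranking, start=1):
        exp[group] = exp.get(group, 0.0) + 1.0 / math.log2(rank + 1)
    return exp

def exposure_gap(ranking, group_a, group_b):
    """Absolute exposure difference between two groups (0 means parity)."""
    exp = exposure_per_group(ranking)
    return abs(exp.get(group_a, 0.0) - exp.get(group_b, 0.0))
```

For example, an interleaved ranking `["A", "B", "A", "B"]` yields a smaller exposure gap than a segregated one `["A", "A", "B", "B"]`, which is the kind of disparity a fairness-aware ranker aims to reduce without hurting accuracy.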

Future Implications:

To raise awareness and advance the field, the researchers delivered a tutorial on fairness in machine learning at CIKM 2022 and authored the book Fairness in Search Systems (Now Publishers, August 2025). The book identifies persistent challenges and promising research directions for more inclusive information retrieval. Building on their prior work in bias mitigation for LLMs, the researchers are extending their investigations to foundation models' reasoning and multi-modality capabilities.

Support from the Ciocca Center

With $35,000 in support from the Ciocca Center in 2021, the researchers conducted a preliminary study that produced promising results and demonstrated the importance of this research direction. They are deeply grateful for the Center’s support.