Bias as a Fix for Congestion
Economists describe overloaded labor and admissions markets as congested. Yet the term is usually deployed to mean “too many applicants,” glossing over the deeper constraint: decision‑makers cannot extract enough reliable information from superficially similar files without incurring prohibitive cost. I call this bottleneck evaluation congestion. The concept explains why organizations default to biased heuristics and why an analogous mechanism governs information markets such as web search. Google’s early PageRank, for instance, substituted hyperlink counts—a cheap, low‑variance surrogate—for direct page‑quality inspection, trading systematic bias for reduced uncertainty. Labor‑market screens built on prestige or referrals make the same trade‑off.
Evaluation Congestion
Assessment technologies are imperfect and expensive. As the candidate pool grows, evaluators face rising variance in judgments and sharply increasing marginal cost of deeper scrutiny. Evaluation congestion occurs when the expected benefit of additional information falls below (i) the extra effort required per file and (ii) the budgetary or temporal cost of expanding capacity—hiring reviewers, extending timelines, or convening extra meetings.
Employers often view false positives (FPs)—hiring poor performers—as costlier than false negatives (FNs)—overlooking strong candidates. With $C_{\text{FP}} > C_{\text{FN}}$, evaluators prefer filters that minimize FPs even if FN risk rises. This asymmetry pushes decision‑makers toward severe heuristic screens rather than noisier but inclusive evaluations.
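A numeric sketch of this asymmetry (all numbers and the noise model below are assumed for illustration, not taken from the text): with $C_{\text{FP}} = 3$ and $C_{\text{FN}} = 1$ and a quality bar of $0.5$, the cost-minimizing cutoff on a noisy quality proxy is weakly stricter than under symmetric costs.

```python
# Illustration with assumed numbers: quality q ~ Uniform(0,1), a noisy
# proxy of q, and a hiring bar theta = 0.5. None of these values come
# from the text.
import random

def expected_error_cost(cutoff, c_fp, c_fn, theta=0.5, n=20_000):
    """Monte Carlo expected error cost of admitting files with proxy >= cutoff."""
    random.seed(0)  # common random numbers across cutoffs
    total = 0.0
    for _ in range(n):
        q = random.random()
        proxy = q + random.gauss(0.0, 0.2)   # cheap, noisy screen
        if proxy >= cutoff and q < theta:    # false positive: weak hire
            total += c_fp
        elif proxy < cutoff and q >= theta:  # false negative: strong file excluded
            total += c_fn
    return total / n

cuts = [i / 20 for i in range(21)]
strict = min(cuts, key=lambda c: expected_error_cost(c, c_fp=3.0, c_fn=1.0))
symmetric = min(cuts, key=lambda c: expected_error_cost(c, c_fp=1.0, c_fn=1.0))
# With C_FP > C_FN the optimal screen is weakly stricter.
print(strict, symmetric)
```

Removing the asymmetry lets the cutoff relax back toward the bar $\theta$ itself; the skew in error costs alone is what pushes the screen tighter.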
Heuristic Gatekeeping in Practice
- Elite professional firms substitute alma mater prestige and “cultural fit” for direct skill tests (Rivera 2012).
- Faculty pipelines show extreme concentration: ten U.S. Ph.D. programs produce half of all tenure‑track hires (Clauset, Arbesman & Larremore 2015).
- Online labor markets reveal that employers open only a fraction of applications, guided by keyword filters and résumé parsers (Barach & Horton 2022).
In each case, biased heuristics shrink the evaluation set, reducing FP risk more cheaply than deeper reading could.
A Two‑Stage Selection Architecture
Selection in congested markets unfolds in two stages. A visibility gate governed by heuristics determines who is even considered, while a subsequent evaluation gate applies costlier, noisier judgment to those who pass. Because exclusion at stage one is absolute, most inequality originates there; bias is tolerated upstream to keep downstream variance and cost manageable. The formal model that follows makes this two‑stage structure explicit: the threshold $t$ captures the visibility gate, and the choice of effort $e$ together with capacity $M$ represents the noisy second‑stage evaluation.
LLM‑Generated Resumes: Escalating Evaluation Congestion
Large‑language‑model assistants now let applicants generate effectively unlimited variations of polished résumés, cover letters, and project portfolios at near‑zero cost. The traditional expense of crafting a high‑signal document collapses, so the visibility signal $s$ grows less informative for a given outlay $c(s)$. In the model, this is a mean‑preserving spread in the distribution of $s$ conditional on quality $q$: more low‑quality applicants reach any fixed threshold $t$, raising $N(t)$ and intensifying evaluation congestion.
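The mean-preserving-spread claim can be checked in a few lines, under an assumed parametrization ($q$ uniform, Gaussian noise in $s \mid q$, fixed gate $t = 0.7$; all invented for illustration):

```python
# Assumed parametrization: q ~ Uniform(0,1), s = q + noise, fixed gate t = 0.7.
# Widening the noise (same mean) stands in for cheap LLM polish.
import random

def passers(noise_sd, t=0.7, n=50_000, seed=1):
    random.seed(seed)  # common draws across the two scenarios
    total, low_q = 0, 0
    for _ in range(n):
        q = random.random()                  # latent quality
        s = q + random.gauss(0.0, noise_sd)  # visibility signal
        if s >= t:
            total += 1
            if q < 0.5:                      # low quality, yet visible
                low_q += 1
    return total, low_q

before = passers(noise_sd=0.05)  # hand-crafted files: s tracks q closely
after = passers(noise_sd=0.30)   # polished files: mean-preserving spread
# Both the queue N(t) and its low-quality share rise with the spread.
print(before, after)
```

The spread leaves the average signal untouched but fattens the upper tail, which is all that matters for a fixed threshold.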
Evaluators respond on two margins:
- Tighter heuristic gates. Because editing tools inflate document quality, committees further raise $t$ or introduce additional automated filters (e.g., psychometrics, video prompts) that are harder to spoof.
- Greater reliance on network signals. Referrals and alumni ties become relatively scarcer (and harder to fake) than textual polish, so weight on informal networks rises—exacerbating structural inequality.
Net effect: LLM‑aided résumé generation pushes the system farther along the bias‑dominance frontier, shifting costs from document preparation to social‑capital acquisition.
A Stylized Model of Evaluation Congestion
Environment
- Applicants draw latent quality $q \sim F$ on $[0,1]$.
- Each chooses a visibility signal $s \in [0,1]$ at convex cost $c(s)$. Signal and quality are independent.
- The evaluator can scrutinize at most $M$ files with effort $e \in [0,\bar{e}]$. Effort yields an unbiased but noisy estimate:
$$\hat{q} = q + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2(e)), \quad \sigma'(e) < 0$$
at cost $g(e)$.
- Capacity expansion raises $M$ at cost $k(M)$ with $k'(\cdot) > 0$.
- The evaluator sets a quality bar $\theta$. A hired candidate with $q < \theta$ is an FP; any excluded candidate with $q \geq \theta$ is an FN. Let the costs be $C_{\text{FP}} > 0$ and $C_{\text{FN}} > 0$ with $C_{\text{FP}} > C_{\text{FN}}$.
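The environment can be sketched as a data-generating process. The functional forms below ($F$ uniform, $\sigma(e) = 0.5/(1+e)$) are illustrative assumptions; the text only requires $\sigma'(e) < 0$ and convex costs.

```python
# Illustrative data-generating process for the environment. Assumed forms:
# F = Uniform(0,1) and sigma(e) = 0.5 / (1 + e); neither is pinned down
# by the model above.
import random

def sigma(e):
    return 0.5 / (1.0 + e)  # effort sharpens the estimate: sigma'(e) < 0

def noisy_estimate(q, e, rng):
    return q + rng.gauss(0.0, sigma(e))  # q_hat = q + eps, eps ~ N(0, sigma(e)^2)

rng = random.Random(0)
pool = [rng.random() for _ in range(1000)]                    # q ~ F on [0, 1]
estimates = [noisy_estimate(q, e=2.0, rng=rng) for q in pool]
# The estimate is unbiased: errors average out near zero.
mean_err = sum(est - q for q, est in zip(pool, estimates)) / len(pool)
```

Raising $e$ shrinks $\sigma(e)$ and tightens each $\hat{q}$ around the true $q$, at cost $g(e)$ per file.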
Timing
- The evaluator sets a visibility threshold $t$, review capacity $M$, and effort level $e$ before any applications are submitted.
- Observing these rules, applicants choose a signal $s$ to maximize their hiring probability net of cost.
- Evaluation proceeds. The evaluator reviews the top $M$ files with $s \ge t$, observes noisy estimates $\hat{q}$, and hires the best candidate.
- Payoffs are realized.
Evaluator's Problem
Let the queue length be $N(t) = N[1 - F_s(t)]$, where $N$ is the applicant pool and $F_s$ the distribution of submitted signals. The evaluator solves:
$$\max_{t,e,M} \quad \mathbb{E}[q_{\text{hire}}] - g(e) - k(M) - C_{\text{FP}} \Pr(q_{\text{hire}} < \theta) - C_{\text{FN}} \Pr(q_{\max} \geq \theta, \text{ not hired})$$
subject to $N(t) \leq M$.
Bias-Dominance Result
With convex $g$ and $k$ and $C_{\text{FP}} > C_{\text{FN}}$:
Proposition 1. Raising the threshold $t$ (tightening the gate) is cost-minimizing before either lowering effort $e$ or paying $k'(M)$ to expand capacity. Only when $t \to 1$ do the other margins become attractive.
Sketch. The marginal reduction in FP loss dominates the rise in FN loss until $t$ saturates, given $C_{\text{FP}} > C_{\text{FN}}$ and decreasing returns in $g, k$.
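A back-of-envelope comparison of the three margins, under assumed convex costs $g(e) = e^2$ and $k(M) = 0.05M^2$ and uniform signals (all hypothetical): tightening $t$ rebalances the queue at zero direct cost, while the alternatives bill against $k'(M)$ or against estimate variance.

```python
# Assumed convex costs (not from the text): g(e) = e^2, k(M) = 0.05 * M^2,
# signals uniform so that N(t) = N * (1 - t). The queue initially overflows
# capacity: N(t) = 200 files against M = 100 review slots.

def g(e): return e ** 2          # per-file scrutiny cost, convex
def k(M): return 0.05 * M ** 2   # capacity cost, convex

N_total = 1000
def queue(t):                    # N(t) = N * [1 - F_s(t)], F_s uniform
    return N_total * (1.0 - t)

t, e, M = 0.8, 1.0, 100

# Margin (a): tighten the gate until the queue just fits capacity.
t_new = 1.0 - M / N_total        # queue(t_new) == M; no g or k outlay,
                                 # only extra FN exposure, cheap while C_FN < C_FP
# Margin (b): buy capacity for the current queue instead.
capacity_outlay = k(queue(t)) - k(M)
# Margin (c): halve per-file effort, paying in variance sigma(e) rather than money.
effort_saving = g(e) - g(e / 2)

print(t_new, capacity_outlay, effort_saving)
```

This is an illustration of the proposition's logic, not a proof: the gate absorbs the overflow for free in direct cost, charging only the FN side of the asymmetric ledger.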
Applicant Strategy
Given $t$, a risk-neutral applicant maximizes:
$$U(s) = \Pr(s \geq t) w - c(s)$$
Because $\Pr(s \geq t)$ is a step function of $s$ when the threshold is known, the best response is to clear the gate exactly or to opt out: applicants who remain pool at $s = t$ and pay $c(t)$, participating only while $c(t) \leq w$. Tightening $t$ therefore raises wasteful signaling outlays.
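One way to read the applicant's problem: since $\Pr(s \geq t)$ is a step in $s$ for a known threshold, utility is $w - c(s)$ at or above the gate and $-c(s)$ below it. A sketch with an assumed cost $c(s) = s^2$ and prize $w = 0.5$ (both hypothetical):

```python
# Assumed convex cost c(s) = s^2 and prize w = 0.5 (both hypothetical).
# With a hard, known threshold, the best response is bang-bang:
# s = t (just clear the gate) or s = 0 (opt out).
def best_response(t, w, c=lambda s: s ** 2):
    return t if w - c(t) >= 0.0 else 0.0

w = 0.5
assert best_response(t=0.6, w=w) == 0.6  # c(0.6) = 0.36 <= w: pool at the gate
assert best_response(t=0.8, w=w) == 0.0  # c(0.8) = 0.64 >  w: exit
# Tightening t raises the outlay c(t) for every applicant who stays.
```

Pushing $t$ past the point where $c(t) = w$ does not just raise outlays; it triggers exit, shrinking the pool the evaluator ever sees.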
Welfare
Total surplus is:
$$W = \mathbb{E}[q_{\text{hire}}] - g(e) - k(M) - C_{\text{FP}} \Pr(\text{FP}) - C_{\text{FN}} \Pr(\text{FN}) - N \mathbb{E}[c(s)]$$
As $t$ rises, evaluation cost falls, but signaling waste, exclusion error, and error penalties change; signaling deadweight loss can exceed the evaluator's entire review budget.
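To make the last claim concrete with assumed numbers (pool size, prize, and all cost functions below are hypothetical): aggregate signaling waste under threshold pooling can outgrow the entire review budget $g(e)M + k(M)$.

```python
# Hypothetical numbers throughout: N = 2000 applicants, prize w = 0.5,
# c(s) = s^2, g(e) = e^2, k(M) = 0.05 * M^2, with e = 1 and M = 100.
def c(s): return s ** 2
def g(e): return e ** 2
def k(M): return 0.05 * M ** 2

N, w = 2000, 0.5

def signaling_waste(t):
    # Under threshold pooling, everyone who stays pays c(t);
    # applicants with c(t) > w drop out.
    return N * c(t) if c(t) <= w else 0.0

review_budget = g(1.0) * 100 + k(100)   # total scrutiny plus capacity cost
low = signaling_waste(0.5)              # modest gate: waste below the budget
high = signaling_waste(0.7)             # tight gate: waste exceeds the budget
print(low, high, review_budget)
```

The waste term scales with $N$, the budget only with $M$, so congested pools tip the comparison quickly.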
Analogy to PageRank
Map applicants to web pages, $s$ to link authority, $c(s)$ to SEO spending, $M$ to first-page slots, $\theta$ to minimum relevance, and $C_{\text{FP}}$ to user irritation at bad links. Google raises the link-authority bar to reduce FP cost, inducing an arms race in backlinks (higher $c(s)$) while overlooking newer but high-quality pages (FNs).
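The surrogate itself is a short algorithm. A minimal power-iteration PageRank on a toy four-page graph (the graph is invented for illustration) shows the FN mechanism: a new, high-quality page with no inbound links sits at the teleportation floor.

```python
# Minimal PageRank power iteration on an invented four-page web.
def pagerank(links, d=0.85, iters=100):
    """links[i] lists the pages that page i links to."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n            # teleportation floor
        for i, outs in enumerate(links):
            if outs:
                share = d * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:                            # dangling page: spread uniformly
                for j in range(n):
                    new[j] += d * rank[i] / n
        rank = new
    return rank

# Page 3 is new and high quality but has no inbound links yet.
links = [[1, 2], [2], [0], [0]]
r = pagerank(links)
# Page 3 sits at the teleportation floor (1 - d)/n: a surrogate-driven FN.
```

Link authority plays the role of $s$: cheap to compute per page, systematically biased against entrants regardless of their true $q$.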
Welfare Implications
Heuristic gatekeeping curbs false positives cheaply but imposes hidden losses:
- Invisible exclusion of strong but uncredentialed applicants.
- Strategic conformity as applicants over‑invest in positional signals.
- Structural inequality: favored signals reflect inherited advantage.
- Dynamic rigidity: feedback arrives only on a narrow slice of talent, slowing learning.
Single‑stage models with symmetric error costs ignore these distortions.
Future Directions
Future theory should endogenize (i) the visibility gate, (ii) conditional noisy evaluation, (iii) strategic signaling and exit, and (iv) asymmetric error costs. Bridging labor‑market heuristics with surrogate metrics like PageRank may reveal common welfare trade‑offs and guide reforms: shared assessment resources, richer low‑variance signals, or deliberate tolerance for greater variance rather than ever‑tighter bias.
Conclusion
“Congestion” is not merely a headcount problem; it is an accuracy‑and‑capacity problem shaped by asymmetric false‑positive and false‑negative costs. When evaluators cannot afford FP risk, they tighten heuristic filters, trading randomness for rigidity. Markets appear orderly only because they narrow their field of vision. Improving welfare requires investments that raise true discriminative precision or a willingness to accept more variance instead of defaulting to exclusionary bias.