underneath.news
underneath.news
What the story is actually about
Tuesday, May 12, 2026
Content powered byTranscengine™|For publishers →
PowerMay 10, 20266 min readAnalyzed by Transcengine™

The Hiring Algorithm That Answers to No One

Patternaccountability laundering

Job seekers are discovering their applications are filtered and rejected by AI screening systems before any human ever reads them. No explanation is offered, no appeal process exists, and there is no one to contact.

Companies have outsourced the legal liability of rejection to algorithmic black boxes. When a human rejects you, they can be questioned, deposed, and held to bias standards. When an AI does it, there is no one to call, no decision to defend, and no discrimination to prove. Automated hiring screens are not an efficiency upgrade. They are a liability shield dressed up as innovation.

Minimum Viable Truth

AI hiring filters exist to protect companies from accountability for rejection, not to identify better candidates.

Marcus spent four months applying to 200 jobs. He customized every resume. He researched every company. He followed up. In almost every case, he received the same automated response within minutes of submitting, sometimes seconds. A form rejection. No reason given. No human name attached.

When he finally landed an interview, the recruiter told him she had never seen his resume. Their ATS (applicant tracking system) had filtered him out automatically. She had pulled him from a different pool entirely, on a hunch, after a colleague mentioned his name.

He had made it not by being qualified, but by knowing someone who knew someone. The algorithm had already decided he wasn't worth a look.

The Screen Before the Screen

Most large employers no longer read resumes as a first step. They run them through automated screening software that scores candidates against a profile, filters by keyword match, ranks by inferred attributes, and passes a narrow shortlist to human eyes. The rest go into a folder no one opens.

This is framed as a solution to volume. Companies receive thousands of applications for a single role. No recruiter can read them all. AI makes the triage possible.

That framing is technically accurate. It is also a way of not saying the other thing.

What the Algorithm Actually Does

When a human hiring manager rejects your application, they are making a judgment call that can be questioned. If patterns emerge (Black applicants rejected at higher rates, women filtered out of technical roles, candidates with foreign-sounding names disappearing before interviews), that pattern can surface in litigation. A decision-maker can be deposed. An HR director can be put on the stand. There is a chain of accountability.

When an algorithm rejects you, the chain breaks.

The company can say: we did not reject you. The system scored you. The system is neutral. The system has no intent. You cannot depose a model. You cannot prove what a black-box ranking function was optimizing for. You cannot compel a vendor to disclose proprietary weights. And even if you could, the company can point to the vendor, and the vendor can point to the training data, and the training data points to the historical hiring decisions of companies like the one that just rejected you.

The feedback loop is perfect and perfectly insulated.

Why "Neutral" Is a Choice

The standard defense of automated screening is that it removes human bias. This defense depends on what you mean by bias.

If bias means a recruiter making a snap judgment based on a name, automated systems can reduce that specific failure. But they replace it with a different one: the assumption that whoever got hired in the past is the model for who should get hired in the future.

Every AI screening system is trained on historical data. Historical hiring data reflects historical biases. When the model learns what a "qualified" candidate looks like, it is learning from a dataset shaped by decades of decisions that courts have already found, repeatedly, to be discriminatory.

Neutrality is not a property of the training data. It is a marketing claim.

The Person on the Other End

There is no human to appeal to. This is not a design flaw. It is the design.

When rejection is automated, the company is not required to offer a reason. There is no fair chance ordinance for algorithms in most jurisdictions. There is no civil rights framework that has successfully compelled algorithmic transparency in private hiring. The vendors who build these systems consider their models trade secrets.

You applied. The system scored you. The score was not high enough. No further information is available.

This is not a neutral administrative outcome. It is a decision made about your life and livelihood by a system that has been deliberately placed outside the reach of accountability, and then described in press releases and HR conferences as a step toward fairness.

What "Efficiency" Costs

The efficiency framing is worth examining directly.

Automated screening does reduce recruiter workload. It does process high volumes. These are real things. But efficiency for whom? The company processes more applications with fewer hours of labor. The candidate spends weeks crafting materials that a model discards in milliseconds. The cost of the system's efficiency is externalized entirely onto the people it screens out. They never know why, never have recourse, and are left to iterate blindly.

When someone tells you a system is efficient, the useful question is: efficient for which party, and at whose expense?

The Accountability Gap

AI hiring tools are sold to HR departments as a way to make better decisions faster. What they also do, and what rarely appears in the sales deck, is transfer the legal and moral weight of rejection away from the employer.

The employer did not reject you. An algorithm did. The algorithm is a product. The product was built by a vendor. The vendor used industry-standard methodology. Everyone made reasonable decisions and no one is responsible for the outcome.

This structure is not unique to hiring. It is a pattern: using automation not to improve decisions but to diffuse accountability for them so thoroughly that no one can be held to anything.

When you cannot find anyone to answer for a decision that affected your life, the question worth asking is not: was this an accident?

The question is: who benefits from the fact that no one has to answer?

Editorial Note

underneath.news analyzes structural patterns, power dynamics, and the conditions that shape contemporary events. This is original analytical commentary, not reporting. We do not summarize, paraphrase, or replace coverage from any specific publication.

More Analyses

TechnologyMay 12, 2026

A Private Company Is Deciding Which Countries Get Powerful AI

Patternungoverned power concentration

China sought access to Anthropic's most advanced AI models. Anthropic said no. The decision was made internally, by company leadership, with no public process and no external oversight.

The question of which countries and populations get access to the most powerful AI systems is now being answered by private companies on the basis of their own strategic calculations. There is no democratic process governing these decisions, no international framework, and no accountability structure. A small number of companies in a small number of cities are deciding, unilaterally, which parts of the world get access to transformative technology and which do not. This is an extraordinary concentration of consequential power.

Minimum Viable Truth

The most important geopolitical decisions about AI access are being made by private companies with no democratic mandate and no requirement to explain themselves.

6 min read
PowerMay 12, 2026

You Are Paying for the War at the Grocery Store

Patterncost externalization

US inflation rose to 3.8% in April. Steel tariffs are raising the price of canned foods. Consumers are increasingly relying on credit to cover basic expenses, cycling through debt to manage costs that are rising faster than wages.

The Iran war and the tariff regime were decisions made by a small number of people at the top of a political system. The cost of those decisions is being paid by a large number of people at the bottom of an economic one. This is not a side effect. It is the standard architecture of how policy costs are distributed. The people who decide are rarely the people who pay.

Minimum Viable Truth

Inflation and rising consumer debt are not economic phenomena that happen to coincide with policy decisions. They are the mechanism by which the cost of those decisions is transferred from decision-makers to everyone else.

6 min read
PowerMay 12, 2026

OpenAI Is a Tool Until Someone Dies

Patternaccountability shield

Parents have filed a lawsuit against OpenAI after their teenager died following interactions with ChatGPT in which the chatbot provided information about drugs. The lawsuit argues the product was designed to build dependency and trust in a way that made it dangerous for vulnerable users.

OpenAI's legal defense will rest on a familiar structure: it is a tool, tools do not have intentions, and users are responsible for how they use tools. This defense collapses when examined against how the product is actually designed and marketed. ChatGPT is not designed to be a neutral information retrieval system. It is designed to be trusted, personable, emotionally attuned, and compelling. You cannot optimize a product to feel like a confidant and then disclaim responsibility for what it says in confidence.

Minimum Viable Truth

When a product is designed to be trusted, it inherits a duty of care. The tool defense does not survive the product design.

6 min read