
AI in Recruiting: The Gap Between Perception and Reality

What both job seekers and talent professionals get wrong about AI-powered resume screening

John Sasser
January 5, 2026
15 min read
AI · Recruiting · HR Tech

Job seekers are convinced AI is automatically rejecting their applications. Talent professionals insist AI screening barely exists. Both are partially right. Both are partially wrong.

The truth matters a lot more than either side realizes.

I conducted extensive research into the actual capabilities of major ATS platforms, current legislation, and documented evidence of AI adoption and bias. What I found explains the disconnect.


The Bottom Line Up Front

AI-powered resume screening has reached mainstream adoption. SHRM's 2025 Talent Trends report shows 43% of organizations now use AI for HR tasks, and 51% use it specifically in recruiting. That's nearly double the 26% reported in 2024.

75% of companies allow AI to reject candidates without human review at some stage. But fully autonomous rejection throughout the entire hiring process is less common, reported at 21-35% of companies.

What's actually happening is a hybrid system. AI filters aggressively at the top of the funnel while humans make final decisions on a narrowed pool.

This explains why candidates feel filtered out immediately while recruiters honestly believe they review everyone who matters.


What the Major ATS Platforms Actually Do

I examined six leading platforms: Greenhouse, Workday, iCIMS, Lever, Gem, and SmartRecruiters.

I found a consistent pattern. AI features exist and are growing, but most platforms position AI as assistive rather than autonomous.

Greenhouse: Explicitly Refuses to Auto-Reject

Greenhouse takes the strongest anti-automation stance. Their official documentation states: "Greenhouse never uses AI to rate candidates or auto-reject applications. We just don't think that any company that truly wants to find great candidates and deliver a fair experience would outsource their decisions to a bot."

Their native AI features are assistive tools that help recruiters work faster. Keyword suggestions, resume anonymization, talent filtering. They don't score or rank candidates. Humans assign all scores manually.

The third-party marketplace is different though.

Integrations like AI Screened can disqualify candidates who don't meet "Must Have" requirements. But these require separate subscriptions and explicit configuration.

Workday: Grades Candidates A Through D

Workday's approach differs significantly following its April 2024 acquisition of HiredScore. The system evaluates candidates against job requirements using an A, B, C, D grading system.

Official documentation emphasizes human-in-the-loop design: "The model's output does not supplant human judgment, nor is it recommending anyone. Recruiters and hiring managers always have complete discretion."

But there's a revealing caveat in Workday's Recruitment Privacy Statement: "In some recruitment exercises, for example, where we have particularly high volumes of applicants, it is not possible for every CV/resume or application to be reviewed by a member of the talent acquisition team during initial sifting stages."

Translation: high-volume scenarios allow AI screening without human review.

HiredScore AI is primarily available to enterprise organizations with 25,000+ employees. The platform publishes NYC Local Law 144 bias audit results. Workday currently faces a major class action lawsuit (Mobley v. Workday) alleging its AI tools led to discriminatory outcomes.

iCIMS: Strong Human-in-the-Loop Protections

iCIMS has mature native AI capabilities through its Talent Explorer platform. Their policy documentation is unambiguous: "There is no automated decision-making, and all decisions begin and/or end with human decision points."

The platform earned TRUSTe Responsible AI Certification in March 2025, the first enterprise recruiting software to do so.

Notably, iCIMS does not offer true automation: actions don't execute without user interaction.

Lever: Recently Added Native AI

Lever historically relied on third-party integrations for AI screening. This changed in Spring 2025 with "Talent Fit," an AI-powered matching engine that ranks candidates with transparent scoring and explanations.

AI features are not on by default in core Lever. Third-party integrations must be actively enabled by administrators.

Gem: Explainable AI with Clear Opt-In

Gem offers AI Sourcing Agents and AI Inbound Agents for application review. The system produces percentage-based match scores (e.g., 90% match) with explainable reasoning. Hovering over scores shows alignment to each requirement.
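
To make that concrete, here is a minimal sketch of how per-requirement matching can roll up into an explainable percentage score. The requirements, keyword checks, and rollup below are hypothetical illustrations, not Gem's actual algorithm:

```python
# Hypothetical sketch of explainable per-requirement match scoring.
# Not Gem's implementation; requirements and matching logic are invented.

def score_candidate(resume_text: str, requirements: dict) -> tuple:
    """Return an overall match percentage plus a per-requirement breakdown."""
    text = resume_text.lower()
    breakdown = {
        req: any(term in text for term in terms)  # requirement "hits" if any indicator term appears
        for req, terms in requirements.items()
    }
    overall = 100 * sum(breakdown.values()) / len(breakdown)
    return overall, breakdown

requirements = {
    "Python experience": ["python"],
    "Cloud platforms": ["aws", "gcp", "azure"],
    "Senior-level role": ["senior", "staff", "lead"],
}

score, breakdown = score_candidate("Senior engineer, Python and AWS.", requirements)
print(f"{score:.0f}% match")          # the headline number
for req, hit in breakdown.items():    # the "hover to see alignment" view
    print(f"  {'matched' if hit else 'missing'}: {req}")
```

The explainability comes from keeping the per-requirement results alongside the rollup, which is what lets a recruiter see why a score is what it is.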

AI inbound ranking is opt-in on a per-job basis. The company states: "Gem AI does not make hiring decisions... We do not make any automated decisions on advancing or rejecting those applicants."

Gem has completed third-party bias audits and implements proactive guardrails including PII removal before AI processing.

SmartRecruiters: "Winston" AI Companion

SmartRecruiters introduced Winston in October 2024, an AI suite comprising Winston Chat, Winston Match, and Winston Screen. Their tagline: "The AI That Keeps Hiring Human."

The privacy policy states AI features are opt-in: "Our Customer may opt in to use features that assist them in the recruitment process through automated means." An AI Control Center gives companies control over what Winston sees, learns, and acts on.

Winston Screen "automates initial screening using the criteria you set, so only qualified, engaged candidates make it through." That suggests some automated filtering capability, but documentation consistently emphasizes human final decisions.

The Pattern Across All Six Platforms

Platform          Native AI Scoring         Auto-Reject Capability     Default Status    Human Required
Greenhouse        No (explicit policy)      Via integrations only      AI features ON    Always
Workday           Yes (A/B/C/D grades)      Possible in high volume    Requires setup    Caveated
iCIMS             Yes (Candidate Ranking)   No                         Human-gated       Always
Lever             Yes (new in 2025)         Via third-party            Opt-in            Always
Gem               Yes (match %)             No                         Opt-in per job    Always
SmartRecruiters   Yes (Winston Match)       Filtering possible         Opt-in            Emphasized

Key finding: No major ATS platform enables fully autonomous rejection by default. All emphasize human-in-the-loop design.

But high-volume scenarios, third-party integrations, and employer configurations can enable automated filtering that candidates never see.


The Legal Landscape Is Evolving Rapidly

Federal Guidance Has Been Removed, But Laws Remain

In January 2025, the Trump administration's Executive Order 14179 led to the removal of key EEOC and DOL AI guidance documents from agency websites. The EEOC's May 2023 technical assistance on AI and Title VII was taken down on January 27, 2025.

Underlying anti-discrimination laws remain fully enforceable. Title VII, the ADA, the ADEA, and other federal statutes continue to apply to AI-driven hiring decisions.

The iTutorGroup settlement ($365,000 in August 2023) demonstrated this liability. The company programmed its recruiting software to automatically reject female applicants aged 55 and older and male applicants aged 60 and older. It was the EEOC's first case involving automated hiring discrimination.

State Laws Create a Compliance Patchwork

Jurisdiction   Law             Effective   Key Requirements
NYC            Local Law 144   July 2023   Annual bias audit, 10-day notice, publish results
Illinois       AIVIA           Jan 2020    Notice, consent, deletion rights for AI video
Illinois       HB 3773         Jan 2026    Notice for all AI hiring, anti-discrimination
Maryland       HB 1202         Oct 2020    Signed waiver for facial recognition
Colorado       SB 24-205       June 2026   Impact assessments, notice, appeal rights
California     CCPA ADMT       Jan 2027    Risk assessments, opt-out rights

NYC Local Law 144 is the most mature regulation but faces enforcement challenges.

A December 2025 State Comptroller audit found the NYC Department of Consumer and Worker Protection received only two complaints about automated employment decision tools (AEDTs) between July 2023 and June 2025, while auditors found "at least 17 instances of potential non-compliance."
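
For context on what those audits actually measure: a Local Law 144 bias audit centers on impact ratios, the selection rate for each demographic category divided by the rate for the most-selected category. A minimal sketch with invented numbers:

```python
# Impact ratios as computed in a Local Law 144-style bias audit.
# Counts are invented; a real audit uses a year of actual AEDT outcomes
# broken out by sex and race/ethnicity categories.

selections = {  # category -> (candidates advanced by the tool, total applicants)
    "Category A": (120, 400),
    "Category B": (75, 300),
    "Category C": (40, 250),
}

rates = {cat: advanced / total for cat, (advanced, total) in selections.items()}
top_rate = max(rates.values())

for cat, rate in rates.items():
    ratio = rate / top_rate
    # Ratios below 0.80 are the traditional EEOC "four-fifths rule" red flag.
    print(f"{cat}: selection rate {rate:.1%}, impact ratio {ratio:.2f}")
```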

Colorado's AI Act (effective June 2026) creates the most comprehensive requirements: annual impact assessments, risk management programs, mandatory disclosure, appeal processes, and reporting of discovered algorithmic discrimination to the Attorney General within 90 days.


What the Data Actually Shows

AI Adoption Has Doubled in One Year

SHRM's 2025 research reveals rapid acceleration:

  • 43% of organizations use AI for HR tasks in 2025 (up from 26% in 2024)
  • 51% use AI specifically for recruiting
  • 82% of AI-using companies apply it to resume review
  • Two-thirds only started using AI in the past year

The Autonomous Rejection Question

This is where the data gets nuanced. Both sides can find support.

From Resume.org's August 2025 survey:

  • 75% allow AI to reject without human review
  • But only 35% allow this at every stage of hiring
  • 39% limit AI rejections to resume screening only

From ResumeBuilder's 2024 survey:

  • 21% automatically reject at ALL stages without human review
  • 50% use AI rejection ONLY at initial resume screening
  • 29% maintain human oversight for ALL decisions

The reconciliation: Most companies allow some AI rejection, but it's primarily concentrated at the resume screening stage. Only 21-35% allow fully autonomous rejection throughout. This explains the perception gap.

Academic Research Confirms AI Bias

A University of Washington study (October 2024) analyzed over 3 million comparisons across 500+ job listings; a sketch of its name-swap method follows these results.

  • LLMs favored white-associated names 85% of the time
  • Female-associated names were favored only 11% of the time
  • Black male-associated names were never favored over white male-associated names
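
The study's basic method, comparing identical resumes that differ only in the name attached, is straightforward to sketch. Here score_resume is a hypothetical stand-in for whatever model is being audited:

```python
import random

# Paired name-swap audit: hold the resume constant, vary only the name,
# and count how often the model prefers each group's variant.

def score_resume(resume_text: str) -> float:
    """Hypothetical stand-in for the LLM or scoring model under audit."""
    raise NotImplementedError("plug in the system being tested")

def name_swap_audit(resume_body: str, group_a_names: list, group_b_names: list,
                    trials: int = 1000) -> float:
    """Fraction of trials where a group-A name outscores a group-B name on the same resume."""
    a_wins = 0
    for _ in range(trials):
        name_a = random.choice(group_a_names)
        name_b = random.choice(group_b_names)
        if score_resume(f"{name_a}\n{resume_body}") > score_resume(f"{name_b}\n{resume_body}"):
            a_wins += 1
    return a_wins / trials
```

On an unbiased scorer that win rate should hover near 50%; the 85% figure above is exactly the kind of departure this setup surfaces.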

Employers Acknowledge Bias But Implement Anyway

67% of companies using AI acknowledge their tools introduce bias. Specific concerns:

  • 47% believe AI leads to age bias
  • 44% cite socioeconomic bias
  • 30% mention gender bias
  • Only 4% say AI NEVER produces bias

Despite this, 89% of HR professionals using AI report it saves time. Efficiency drives adoption despite known concerns.

The Landmark Lawsuit to Watch

Mobley v. Workday, conditionally certified as a nationwide collective action in May 2025, could include "hundreds of millions" of rejected applicants. Workday reported 1.1 billion application rejections during the relevant period.

The court allowed a theory that Workday can be held liable as an "agent" of employers. This could transform vendor liability for years.


Distinguishing AI from Traditional Automation

Much of what companies call "AI" is actually enhanced automation or rule-based filtering.

Traditional ATS functions (not AI; sketched after this list):

  • Keyword matching/parsing
  • Yes/no eligibility questions ("knockout questions")
  • Rule-based filters (must have degree, X years experience)
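
A minimal sketch of that rule-based side, with invented criteria. The point is that identical inputs always produce identical outcomes:

```python
from dataclasses import dataclass

# Deterministic knockout filtering: traditional ATS behavior, no ML involved.
# The criteria are illustrative, not drawn from any specific platform.

@dataclass
class Application:
    has_degree: bool
    years_experience: int
    authorized_to_work: bool

def passes_knockouts(app: Application) -> bool:
    """Hard yes/no rules: the same application always gets the same answer."""
    return (app.has_degree
            and app.years_experience >= 3
            and app.authorized_to_work)

print(passes_knockouts(Application(True, 5, True)))  # True
print(passes_knockouts(Application(True, 2, True)))  # False: fails the experience rule
```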

Actual AI/ML capabilities (see the sketch after this list):

  • Semantic understanding via NLP
  • Pattern recognition learning from historical data
  • Predictive scoring based on past hiring outcomes
  • Video/audio analysis of interviews
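
ML scoring produces a graded number instead of a yes/no rule. As a crude stand-in for semantic matching, here is bag-of-words cosine similarity between a job description and a resume; production systems use learned embeddings, but the output has the same shape:

```python
import math
from collections import Counter

# Toy similarity scoring as a stand-in for ML-based matching. Real platforms
# use learned embeddings trained on historical data; the point here is the
# graded, threshold-free score, not this particular math.

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[term] * vb[term] for term in va)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

job = "senior python engineer building cloud data pipelines"
resume = "python engineer with cloud experience and data pipelines"
print(f"match score: {cosine_similarity(job, resume):.2f}")  # graded, not yes/no
```

Where exactly to cut that score into advance versus reject is a policy choice, and that is where the opacity described below creeps in.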

When job seekers complain about being "filtered by AI," they may actually be encountering traditional rule-based filtering that has existed for decades. Or they may be encountering genuine ML scoring. The distinction matters because rule-based filters are deterministic, while ML scoring can introduce bias and opacity.


Conclusion: The Reality Is Uncomfortable for Everyone

Job seekers are right that:

  • AI adoption doubled in one year
  • 75% of companies allow some AI rejection without human review
  • Documented bias exists: in controlled testing, LLMs favored white-associated names 85% of the time
  • Real legal cases have established discrimination via automated systems

Talent professionals are right that:

  • Major ATS platforms don't enable autonomous rejection by default
  • Human-in-the-loop design is the stated norm
  • Most AI rejection is concentrated at resume screening, not throughout
  • Only 21-35% use fully autonomous rejection at all stages

What both groups miss:

AI filters aggressively at the top of the funnel, often without human review, while humans engage with the resulting pool.

A recruiter honestly believing they review everyone may not realize AI narrowed the pool before they saw it.

A candidate convinced AI rejected them may have been filtered by a simple rule (wrong location, missing keyword) rather than machine learning.

For job seekers: AI filtering is real, but maximizing ATS compatibility still matters because much "AI" is actually rule-based parsing.

For talent professionals: Audit your actual technology stack, including third-party integrations, against emerging state requirements. Verify what autonomous actions are actually enabled.


Sources