Candidate Voice Report 2026: Independent Research on How Job Seekers Experience AI Screening, Voice Interviews, Video Interviews, and Assessment Platforms
The Candidate Voice Report is the first independent, structured study of how candidates actually experience AI in the hiring process — not how recruiters think they experience it. The 2026 edition surveys 2,500+ recent applicants across voice AI, chat AI, video interviewing, and skills assessment touchpoints. This page documents the research design, instrument, and publication plan. Field opens August 2026; results publish in Q1 2027.
Methodology: Mixed-methods study combining a structured online survey of 2,500+ candidates who applied to a U.S. or UK role in the prior 90 days that included an AI touchpoint (voice AI screen, chat screen, async video interview, or skills assessment), with 60 follow-up qualitative interviews stratified across modality, role type, and outcome (hired / progressed / rejected / withdrew). Field period August–October 2026; publication Q1 2027.
Key Findings
Candidate-side, not recruiter-perceived
Most existing 'candidate experience' data is collected by employers from candidates currently in their funnel, which under-represents negative experiences and over-weights the sentiment of candidates who were ultimately hired. This study surveys candidates independently, post-process, with no employer in the loop.
2,500+ candidates, 60 interviews
Target sample of 2,500+ recent applicants across the four primary AI modalities, paired with 60 in-depth interviews to surface the mechanisms behind survey patterns.
Modality-level reporting
Findings are reported separately for voice AI, chat AI, async video, and assessment — never as a single composite. Each modality has different completion-rate patterns, fairness considerations, and candidate-perception drivers.
Q1 2027 publication
Field August–October 2026; analysis November 2026 – January 2027; release Q1 2027 with full anonymized dataset, instrument, and code available for replication.
Why a Candidate-Side Study, and Why Now
The candidate-experience data the recruiting industry currently relies on has a structural problem: it is collected by employers, from candidates who are actively in the employer's funnel, often as part of an offer process or a post-hire onboarding flow. Three biases follow.
Response bias. Candidates who had a frustrating experience are less likely to complete an employer-administered survey — particularly if they suspect the survey is reviewed by the same recruiter they will interact with again.
Survivorship bias. Candidates who reach offer or hire are over-represented; candidates who dropped out at the AI-touchpoint stage are largely absent from employer-administered datasets.
Desirability bias. Candidates who are interviewing or considering an offer have strong incentives to report positively in any employer-channel survey, regardless of their actual experience.
Independent, post-process surveying mitigates all three. It is the only way to produce candidate-experience data that reflects the population of applicants rather than the population of finalists.
Candidate experience measured by employers is a measurement of finalists. Candidate experience measured independently is a measurement of applicants. These are different populations producing different conclusions.
Six Primary Research Questions
The 2026 instrument is built around six research questions:
1. Completion and drop-off. At what rate do candidates start vs. complete each AI touchpoint type, and what are the most common reasons candidates abandon mid-process? Reasons are surfaced via open-ended responses and coded. (A sketch of the rate calculation appears after this list.)
2. Perceived fairness. Do candidates perceive each modality as fair, transparent about how their data is used, and reflective of their actual capabilities? How do perceptions vary by role type and demographic segment?
3. Communication clarity. Were candidates told in advance that AI would be used? Were they given a clear option to request a human alternative? How well did the platform explain what was being evaluated?
4. Modality preference. Given equivalent role and stage, which AI modality do candidates prefer to encounter — and which do they actively avoid? How does preference vary by candidate age, role type, and prior experience with each modality?
5. Outcome correlation. For candidates who reached a hiring decision, does AI-touchpoint experience correlate with their willingness to apply to the same employer again, refer others, or accept the offer if extended?
6. Accommodations and accessibility. For candidates who needed accommodations (disability-related, language, technology access), did the AI process provide them effectively? Where did the process fail?
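To make the completion-and-drop-off question concrete, here is a minimal sketch, in Python, of how per-modality start-versus-completion rates and coded abandonment reasons could be tabulated. The record fields (`modality`, `started`, `completed`, `abandon_reason`) are illustrative assumptions, not the study's actual data schema.

```python
from collections import Counter, defaultdict

# Hypothetical respondent records; field names are illustrative, not the study schema.
responses = [
    {"modality": "voice_ai",    "started": True, "completed": True,  "abandon_reason": None},
    {"modality": "voice_ai",    "started": True, "completed": False, "abandon_reason": "too long"},
    {"modality": "async_video", "started": True, "completed": False, "abandon_reason": "tech failure"},
]

def completion_rates(records):
    """Per-modality completion rate among candidates who started the touchpoint."""
    started = Counter()
    completed = Counter()
    reasons = defaultdict(Counter)
    for r in records:
        if r["started"]:
            started[r["modality"]] += 1
            if r["completed"]:
                completed[r["modality"]] += 1
            elif r["abandon_reason"]:
                # In the actual analysis, open-ended reasons would be coded before tallying.
                reasons[r["modality"]][r["abandon_reason"]] += 1
    return {m: completed[m] / started[m] for m in started}, reasons

rates, abandon_reasons = completion_rates(responses)
print(rates)            # e.g. {'voice_ai': 0.5, 'async_video': 0.0}
print(abandon_reasons)  # most common coded abandonment reasons per modality
```

Computing rates only among candidates who started the touchpoint keeps drop-off analysis separate from candidates who never encountered the modality at all.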
Sampling and Recruitment
Eligibility: Adult job seekers in the U.S. or UK who applied to a role in the prior 90 days that included at least one AI touchpoint of an identified type (voice AI screen, chat screen, async video interview, or skills assessment). Verification questions confirm the AI modality before substantive questions begin.
Stratification: across modality (target n>=500 per modality), role type (frontline / hourly, professional / knowledge worker, technical, executive), outcome (hired / progressed past AI stage / rejected at AI stage / withdrew at AI stage), and demographic segments sufficient to support fairness-perception analysis without small-n disclosure risk. A quota-tracking sketch appears below.
Recruitment: independent survey panels with verified-applicant screening, supplemented by candidate community partnerships and outreach via professional networks. No employer-administered channels are used — this is a deliberate methodological choice to avoid the response biases described above.
Incentives: a flat completion incentive that does not vary by response content. Respondents are not told before completion that some modalities are studied more than others.
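As an illustration of how the stratification targets might be enforced during fielding, the sketch below tracks quota cells: a verified-eligible respondent is admitted only while their cell is open, and the field stays open until every modality clears the n>=500 floor. The cell keys and target numbers are hypothetical; the real plan also stratifies by demographic segment.

```python
from collections import Counter

MODALITY_TARGET = 500  # n >= 500 per modality, per the sampling plan

# Hypothetical quota cells: (modality, role_type, outcome) -> target n.
cell_targets = {
    ("voice_ai", "frontline", "rejected_at_ai_stage"): 40,
    ("voice_ai", "professional", "progressed"): 60,
    # ... remaining cells
}

cell_counts = Counter()
modality_counts = Counter()

def admit(modality, role_type, outcome):
    """Admit a verified-eligible respondent if their quota cell is still open."""
    cell = (modality, role_type, outcome)
    if cell_counts[cell] >= cell_targets.get(cell, 0):
        return False  # cell full: screen out to preserve the stratification
    cell_counts[cell] += 1
    modality_counts[modality] += 1
    return True

def modality_floor_met():
    """Check the n >= 500 floor for every modality before closing the field."""
    return all(modality_counts[m] >= MODALITY_TARGET
               for m in ("voice_ai", "chat_ai", "async_video", "assessment"))
```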
Instrument Design Principles
The instrument applies three design principles drawn from candidate-experience research methodology:
Separate experience from outcome. Candidates who were rejected often rate the process more negatively than candidates who advanced, even when the process was identical. The instrument structurally separates process-quality questions from outcome questions and asks about the process before asking about the outcome.
Ask about specific behaviors, not general satisfaction. 'Was the AI process fair?' is a poor question; it produces socially desirable responses and offers no remediation guidance. 'Did you understand how the AI was scoring your responses?' or 'Were you offered a clear option to request a human interviewer?' are concrete, actionable, and answerable.
Use modality-specific question batteries. Voice AI, chat AI, async video, and skills assessment have different candidate-experience surfaces and different failure modes. The instrument has a shared core plus modality-specific batteries; reporting is always at the modality level (a minimal sketch of this structure appears below).
The full instrument, including modality-specific batteries and the demographic and accommodations modules, will be published as an appendix to the final report.
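One way to picture the shared-core-plus-battery design is as a mapping from modality to its question set, assembled per respondent. The item identifiers below are invented placeholders, not items from the actual instrument.

```python
# Invented item identifiers; the real instrument will be published with the report.
SHARED_CORE = ["advance_notice_of_ai", "human_alternative_offered", "process_clarity"]

MODALITY_BATTERIES = {
    "voice_ai":    ["latency_perception", "interruption_handling"],
    "chat_ai":     ["response_relevance", "escalation_path"],
    "async_video": ["retake_policy_clarity", "camera_comfort"],
    "assessment":  ["time_limit_fairness", "role_relevance"],
}

def instrument_for(modality):
    """Assemble the question list a respondent sees: shared core first, then the battery."""
    return SHARED_CORE + MODALITY_BATTERIES[modality]

print(instrument_for("async_video"))
```

Keeping the core identical across modalities is what makes the modality-level comparisons in the report possible.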
Fairness, Compliance, and Demographic Reporting
Demographic data is collected to support fairness-perception analysis — specifically, whether candidates from different demographic segments report meaningfully different experiences with each AI modality. Demographic items are placed at the end of the instrument, are explicitly optional, and use response categories aligned with U.S. EEOC and UK Equality Act conventions.
Reporting practices follow established disclosure-control thresholds: no segment is reported below n=50, and intersectional segments below that threshold are aggregated rather than published. Findings on perceived fairness are reported as candidate perceptions of process fairness — they are not adverse-impact analyses of the underlying AI systems, which require employer-side outcome data the candidate-side instrument cannot provide.
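A minimal sketch of the disclosure-control rule described above, with hypothetical segment labels: segments below the n=50 floor are pooled into a single aggregate rather than published individually.

```python
MIN_CELL_N = 50  # disclosure-control floor from the reporting plan

# Hypothetical segment counts for one modality's fairness-perception table.
segment_counts = {"segment_a": 310, "segment_b": 120, "segment_c": 34, "segment_d": 22}

def reportable_segments(counts, floor=MIN_CELL_N):
    """Return publishable segments; pool below-floor segments into one aggregate."""
    published = {s: n for s, n in counts.items() if n >= floor}
    pooled = sum(n for n in counts.values() if n < floor)
    if pooled:  # aggregate small cells rather than publish them
        published["aggregated_small_segments"] = pooled
    return published

print(reportable_segments(segment_counts))
# {'segment_a': 310, 'segment_b': 120, 'aggregated_small_segments': 56}
```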
The study is structured to comply with applicable consent and privacy frameworks (informed consent at survey start, data minimization, candidate-controlled withdrawal). The privacy notice and data-handling policy will be public alongside the report.
What This Report Is, and What It Is Not
What it is: an independent, methodologically defensible measurement of how candidates experience AI hiring touchpoints, reported by modality, with the instrument and dataset open to scrutiny.
What it is not: a vendor scorecard. The report does not name individual vendors in the candidate-facing instrument because most candidates cannot reliably identify which vendor's platform they encountered (the platform is typically white-labeled inside the employer's branding). Vendor-level analysis would require employer cooperation we have not sought for this edition; it is a candidate-perception study, not a vendor-comparison study.
It is also not an indictment of AI in hiring. Several modalities are likely to perform well on candidate-perceived fairness, communication clarity, and accommodation. The report will publish what the data shows — favorable or unfavorable — modality by modality, without an editorial thesis to defend.
Why Independent Candidate Data Is Scarce
Candidate-experience research published by employers and vendors usually relies on data collected through employer-administered channels — post-application surveys, candidate-NPS instruments delivered through the ATS, or onboarding feedback after hire. The Talent Board's annual Candidate Experience Research and SHRM's Talent Acquisition Benchmarking Report remain the dominant industry baselines, and both rely heavily on employer-mediated channels. That structurally over-samples finalists and under-samples the candidates who dropped out at the AI-touchpoint stage — exactly the population a candidate-side study needs to hear from.
The regulatory frame around candidates and AI in hiring has moved fast. The U.S. EEOC's May 2023 technical assistance addressed automated systems in employment decisions; New York City Local Law 144 (effective July 5, 2023) requires bias audits and disclosures to candidates; and Illinois has regulated employer use of AI video interviews under the Artificial Intelligence Video Interview Act since January 1, 2020. Candidates today are both more aware of AI in hiring and entitled to information about it — but the candidate-side experience of the disclosed-AI hiring process is still under-measured. Pew Research surveys on AI in workplace decisions have shown that a majority of U.S. adults are uncomfortable with AI making final hiring decisions, with notable variance by age and prior workplace experience. Public-attitude data complements but does not substitute for direct, post-process measurement of candidates who actually went through an AI hiring touchpoint, which this report is designed to provide.
Limitations
Self-report and recall. All process-experience data is self-reported. Applications are limited to the prior 90 days to mitigate recall bias, and respondents are asked to anchor on a single specific recent application.
Vendor identification. Most candidates cannot reliably identify which vendor's platform delivered a given AI touchpoint (employer branding typically obscures the underlying vendor), so vendor-level findings are out of scope for this candidate-side study.
Panel coverage. Survey panels reach a substantial cross-section of recent applicants but under-represent specific populations: applicants without sustained internet access, applicants for whom English is not the dominant language (this edition is U.S./UK English-only), and applicants who fully exited the labor market after the experience.
Snapshot, not trend. The 2026 edition is a snapshot; longitudinal change will be measurable only with a 2027 follow-up.
Perception, not adverse impact. Findings on perceived fairness by demographic segment are candidate perceptions, not adverse-impact analyses of the underlying systems, which require employer-side outcome data the candidate-side instrument cannot provide.
Independence, Funding, and Respondent Care
The Candidate Voice Report 2026 is self-funded by Recruiting Tech Reviews. There is no vendor sponsorship, no paid inclusion, no employer co-funding, and no pre-publication editorial review by any party other than the authors. If any future funding arrangement is added prior to fielding, this section will be updated with the entity, the date, and the scope.
Recruiting Tech Reviews publishes editorial coverage of AI recruiting platforms whose employer customers are the source of candidate experiences in this study. Three safeguards keep the candidate-side study clean: the candidate-facing instrument does not name vendors and is not designed to produce vendor-by-vendor satisfaction rankings; the analysis team operates separately from the editorial review team during the analysis window; and findings are not pre-shared with any platform or employer before public release.
Participation is voluntary, consent is obtained at survey start, and respondents can withdraw at any time without penalty. No personally identifying information is collected; demographic items are explicitly optional and reported only at segment sizes large enough to protect respondent privacy.
How Buyers Should Use the Findings
When the report releases in Q1 2027, buyers can put the findings to three concrete uses:
1. Modality selection. Use the candidate-perception data alongside operational data (completion rates, time-to-hire impact) when choosing between modalities for a given role type. Modality preference varies meaningfully by candidate population.
2. Vendor evaluation pressure. Use the modality-level findings to ask vendors specific, evidence-anchored questions about how they address the most common candidate frustrations in their modality. Vendors who can point to a specific, recent product investment addressing those frustrations stand apart from those who cannot.
3. Accommodation and accessibility validation. Use the accommodations findings to build pass/fail accessibility requirements into the RFP, rather than treating accessibility as a feature checkbox. The data will surface which accommodation patterns are most commonly broken in the modality you are evaluating.
Participate or Partner
Candidates: when the field period opens in August 2026, eligible recent applicants will be invited via the survey panels and community partners we work with. Watch the site for the public participation page that opens at field launch.
Researchers: pre-publication briefing access is available to academic researchers, policy analysts, and credentialed journalists. The full instrument and sampling plan will be released alongside the report; the anonymized respondent-level dataset is available on request after publication.
Vendors: this is a candidate-side study and we do not solicit vendor sponsorship. Vendors who would like to discuss the methodology or contribute to instrument development (without any pre-publication review or sponsorship arrangement) are welcome to reach out via the consultation form.
Related Articles
Deeper coverage of each topic area in this report.
Editorial ranking of AI recruiting platforms on candidate-experience criteria.
Modality-specific deep dive on async video interviews — context for the video-modality battery in the survey instrument.
The modality comparison framework that informs the voice and chat batteries in the instrument.
ATS-context analysis of how candidate experience varies with platform integration.
Apply This Research
Get a research-backed evaluation for your program
Our research team builds custom shortlists and evaluation frameworks based on your ATS, hiring volume, and requirements — applying the same methodology behind this report.