A few months ago, we shared that Zapier's Talent Acquisition team was running an experiment: piloting AI-powered recruiter screens for select roles. We aimed to keep up with high application volume, reduce fraud, and give more candidates a real shot to show their potential beyond what's "perfect on paper." We said we'd be transparent about what we found.
Here's that update.
The problem got clearer
We continue to see thousands of applications arrive within 48 hours of opening many roles, with up to 30% showing signs of misrepresentation: bot submissions, fake identities, and AI-enabled candidates who look strong on paper but don't always hold up in practice.
We started this experiment thinking our core problem was volume and fraud. What we discovered was more meaningful: we were missing genuinely strong candidates. Not because they weren't qualified, but because our recruiters didn't have the bandwidth to reach them.
What we built—and how we built it
We chose our platform carefully. After evaluating multiple options, Ezra AI Labs stood out for its candidate experience, evaluation framework, and pace of innovation. They also brought us in as a design partner: iterating together on workflow, ATS integration, and scoring calibration, with a shared goal of accuracy and fairness—not just speed.
Here's how it works:
After an initial application review, most candidates are either declined or invited to an AI-led interview. A small number (strong referrals and sourced candidates we want to speak with) go directly to a live screen.
Candidates invited to Ezra receive a clear explanation of what it is, why we're using it, and the option to opt out with no penalty. (That one principle was non-negotiable throughout: fully opt-in. Candidate agency isn't a nice-to-have; it's foundational.)
If they participate, candidates complete a 15–20 minute structured conversation on their own schedule, in any time zone.
A Zapier recruiter reviews every output (transcript, video clips, summary, and score) before anyone advances. The AI has guardrails that prevent it from making any decision that resembles a hiring decision. That will always be a human call.
What the data actually showed
Across roughly 250 interviews during the pilot, a few things stood out clearly.
Recruiters got significant time back. Time from initial applicant review to completed recruiter screen shrank from 8 days to 2.75 days, a 66% reduction. Reviewing an Ezra screen takes significantly less time than a live screen. Across the pilot, that translated to 84 hours of recruiter capacity returned. With our recruiters typically running ~15–20 screens per week, we project that switching fully to Ezra would free up roughly 5–6 hours per recruiter per week. That's time that goes back to mid-funnel conversations, hiring manager relationships, and engaging candidates.
We screened 5x more candidates per role. Not 5% more. Five times more. And in our first pilot with software engineering roles, 30% of candidates who advanced to hiring managers were people we wouldn't have had the capacity to screen through our traditional process. That's the part that matters most to us. Uncovering "hidden gems" through agentic screens, simply because we're able to get more in-depth information from candidates, is a major benefit to candidates and recruiters alike.
Candidates responded better than we expected. Across all interviews, the average candidate rating was 4.5 out of 5 stars, with 86% giving a 4 or 5. Completion rates were 97%—meaning virtually everyone who started an AI agent-led interview finished it. The feedback themes were consistent: it felt more conversational than expected, the scheduling flexibility was valued, and candidates appreciated having space to share more than what a resume can hold. One candidate wrote: "It was genuinely impressive to see how thoughtfully the tool is designed to create consistency while still allowing space to tell my story."
And for our global hiring—especially in India—removing the time zone barrier made a real difference. Candidates could complete a screen on their own time rather than coordinating across a 10+ hour gap.
Fraud dropped out on its own. By comparing the pool against our fraud risk detector and ATS fraud detection, we found that potentially fraudulent candidates (bots, fake identities, etc.) overwhelmingly avoid completing the Ezra interview. Only about 5% of Ezra-interviewed candidates were flagged for cheating behavior. The cheat-detection architecture did its job, so our recruiters spent almost no time sorting through bad actors.
Ezra's candidate scoring aligned with ours. Candidates who scored well with Ezra also tended to score well in our recruiters' assessments. Our recruiters still make the final decision about whether to advance a candidate to the next stage.
What we're still figuring out
Honesty requires sharing the friction, too.
Opt-in rates varied more than expected, ranging from 35% to 81% depending on the role. We think this comes down to how clearly the why was communicated upfront: what we're testing, how it works, and why it fits who we are as an AI-first company. The clearer and more honest we are upfront, the more candidates engage.
Technical issues that surfaced during the pilot (link resets for candidates, occasional response lag) are being tackled by Ezra's team. Some candidates also noted that Ezra couldn't answer specific questions about Zapier, which is fair feedback. So we're building out the Zapier knowledge base in the platform to address that.
How recruiters used agentic screens varied. Some used Ezra for high-volume roles and saw major efficiency gains. Others used it more selectively and found a different value: surfacing a signal they wouldn't have caught, rather than saving raw hours. Both are valid. It's a good reminder that the right AI solution isn't always about speed; sometimes it's about improving decision-making quality.
The bigger picture for talent acquisition teams
Here's what we're building toward: every applicant can have a structured, consistent shot to show how they think, not just how they write (or generate) their resume and application. Recruiters review higher-quality signals, so they can focus on what only humans can do: building relationships, advising hiring teams, and making decisions.
The future isn't "AI replaces recruiters." It's human judgment, amplified by AI precision, with strong guardrails in place. Zapier will keep experimenting and iterating, with transparency and people at the center.
We're committed to shaping the future of agentic recruiting responsibly and for the mutual benefit of the company and our candidates.
If you're a TA or People leader thinking through your own version of this, we'd love to hear what you're learning.