Case Study · March 5, 2026 · 8 min read

CoRecruit: Building Conversational AI Screening for Wanderly

Travel nursing staffing runs on phone screens. Hundreds of them, every week, done by recruiters. We replaced the first stage entirely with a conversational AI agent - here's how it was built.

Wanderly is a travel nursing marketplace. Nurses find assignments; hospitals fill critical staffing gaps. The matching process requires understanding a nurse's credentials, preferences, license states, and availability - then connecting them to assignments that fit.

The first step in that process is a phone screen - hundreds of them every week, each conducted by a recruiter.

We were asked to reduce screening time without reducing quality. We ended up replacing the screening process entirely.

The Problem with the Existing Workflow

A recruiter phone screen for a travel nurse takes 15–25 minutes. It covers:

  1. Current license states and pending applications
  2. Specialty and unit experience (ICU vs. PCU, adult vs. pediatric)
  3. Preferred assignment length and start date
  4. Compensation expectations and benefits priorities
  5. Geographic flexibility and location preferences
  6. Prior travel experience and agency relationships

Most of this information is structured - the same questions, the same categories, answered in slightly different ways by each candidate. The recruiter's role is to collect it accurately and record it in the ATS.

The unstructured part - understanding what the candidate actually wants, detecting mismatches between expectations and available assignments, building rapport - is where recruiters add real value.

The screening call was spending 80% of its time on the structured part.

What We Built

CoRecruit is a conversational AI agent that conducts the initial screening via SMS and a short web-based conversation. It:

  • Introduces itself as a Wanderly assistant (not a human recruiter)
  • Asks the structured qualification questions conversationally, adapting order based on responses
  • Handles clarifications and follow-up questions in natural language
  • Detects incomplete or contradictory answers and re-asks tactfully
  • Produces a structured candidate profile in the ATS, formatted exactly as recruiters need it

The recruiter receives a completed profile before they speak to the candidate. The first call becomes a placement conversation, not a data collection exercise.

Technical Architecture

The core of CoRecruit is a multi-turn conversation engine built on top of Claude, with:

State management: The conversation maintains a structured data model of what's been collected and what's still needed. Each LLM call receives the current state and is instructed to advance it - not to generate free-form conversation.
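A minimal sketch of what such a state model might look like. The field names and prompt wording here are illustrative assumptions, not Wanderly's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical required-field list; the real schema mirrors the six
# screening topics (licenses, specialty, dates, pay, geography, history).
REQUIRED_FIELDS = [
    "license_states", "specialty", "assignment_length",
    "start_date", "pay_expectations", "location_preferences",
]

@dataclass
class ScreeningState:
    """Tracks what has been collected and what is still needed."""
    collected: dict = field(default_factory=dict)

    def missing_fields(self) -> list:
        return [f for f in REQUIRED_FIELDS if f not in self.collected]

    def is_complete(self) -> bool:
        return not self.missing_fields()

    def next_prompt_context(self) -> str:
        # Injected into each LLM call: advance the state toward
        # completion rather than generating free-form conversation.
        return (
            f"Collected so far: {self.collected}. "
            f"Still needed: {self.missing_fields()}. "
            "Ask about one missing field, conversationally."
        )
```

Keeping the state outside the model means the conversation can adapt its question order while the engine still knows, deterministically, when the screen is done.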

Extraction layer: After each candidate turn, a separate extraction pass parses the response against the expected schema. This separates "did we collect the data" from "how do we respond next."
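One way to implement that separation, assuming the extraction pass prompts the model to emit JSON for the expected schema. The schema and field names below are illustrative; the LLM call itself is out of frame:

```python
import json

# Hypothetical expected schema for one screening topic.
SCHEMA = {"license_states": list, "specialty": str, "start_date": str}

def parse_extraction(raw_llm_output: str) -> dict:
    """Keep only well-typed, on-schema fields; drop everything else."""
    try:
        data = json.loads(raw_llm_output)
    except json.JSONDecodeError:
        return {}
    return {
        k: v for k, v in data.items()
        if k in SCHEMA and isinstance(v, SCHEMA[k])
    }

def merge_turn(state: dict, raw_llm_output: str) -> dict:
    """Advance the collected state with whatever this turn yielded."""
    return {**state, **parse_extraction(raw_llm_output)}
```

Because the extraction pass can fail without breaking the conversation (it just yields nothing), the reply-generation turn and the data-collection check stay independently testable.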

Validation logic: Certain fields require validation - license states against NCLEX records, specialty claims against stated experience. These run as synchronous checks before confirming the information is recorded.
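A sketch of what those synchronous checks might look like. The state list is truncated, the thresholds are invented, and a real license lookup would call an external verification service rather than a local set:

```python
# Truncated stand-in for a real state-code reference table.
US_STATES = {"TX", "FL", "CA", "NY", "WA"}

def validate_license_states(states: list) -> list:
    """Return a list of issues; empty means the field can be confirmed."""
    return [
        f"Unrecognized state code: {s}"
        for s in states
        if s.upper() not in US_STATES
    ]

def validate_specialty(specialty: str, years_experience: int) -> list:
    # Example contradiction check: a specialty claim with no stated
    # experience behind it gets re-asked, not silently recorded.
    if specialty == "ICU" and years_experience < 1:
        return ["ICU specialty claimed with under a year of experience"]
    return []
```

Running these before confirmation is what lets the agent "re-ask tactfully" instead of writing a contradiction into the profile.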

ATS integration: Wanderly's ATS accepts structured data via webhook. The completed profile is formatted to match their existing field structure exactly, requiring no recruiter reformatting.
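The shape of that push, sketched with stdlib HTTP. The endpoint URL and ATS field names are placeholders; the point is the one-to-one mapping from internal state to the ATS's own fields:

```python
import json
import urllib.request

# Placeholder endpoint; the real webhook URL belongs to Wanderly's ATS.
ATS_WEBHOOK_URL = "https://ats.example.com/hooks/candidate-profile"

def to_ats_payload(state: dict) -> dict:
    """Map internal field names onto the ATS's exact field structure."""
    return {
        "candidate_licenses": state.get("license_states", []),
        "primary_specialty": state.get("specialty", ""),
        "desired_start": state.get("start_date", ""),
    }

def push_profile(state: dict) -> None:
    req = urllib.request.Request(
        ATS_WEBHOOK_URL,
        data=json.dumps(to_ats_payload(state)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # retry/rate-limit handling omitted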

What the Results Showed

After deployment:

  • Average time to completed candidate profile: 8 minutes (down from 20 minutes of recruiter time)
  • Completion rate for candidates who started the screening: 74%
  • Recruiter assessment of profile quality: comparable to manually conducted screens
  • Recruiter time freed per week per agent: ~6 hours

The 26% drop-off in completion rate was expected - some candidates prefer phone calls, and we built a fallback path that routed them directly to a recruiter.

What We Learned

Transparency matters more than we expected. Candidates who knew they were talking to an AI had higher completion rates than those who found out mid-conversation. Upfront disclosure was not a liability.

Conversation design is the hard part. The LLM was straightforward to integrate. Getting the conversation to feel natural, handle tangents gracefully, and not feel like a form disguised as chat - that took most of the engineering time.

Structured output requires structured prompting. Asking an LLM to "extract the candidate's license states" from a free-form response works poorly if the candidate said something ambiguous. The extraction prompts needed to enumerate edge cases explicitly.
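To make that concrete, here is the flavor of extraction prompt that tends to work: edge cases enumerated explicitly rather than left to the model's judgment. The wording and edge cases below are illustrative, not the production prompt:

```python
# Illustrative extraction prompt with edge cases spelled out.
LICENSE_EXTRACTION_PROMPT = """\
Extract the candidate's nursing license states from their message.
Return JSON: {"license_states": [...], "pending_states": [...]}.

Edge cases:
- "I have a compact license" -> record "COMPACT"; do not guess states.
- "I'm applying for California" -> pending_states, not license_states.
- "I used to be licensed in Texas" -> include only if stated as active.
- No license mentioned -> return empty lists; never infer from location.
"""

def build_prompt(candidate_message: str) -> str:
    return f"{LICENSE_EXTRACTION_PROMPT}\nCandidate message: {candidate_message}"
```

Each edge case in the prompt typically starts life as a real transcript where the naive prompt guessed wrong.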

Integration is underestimated. The ATS integration took as long as the conversation engine. Legacy systems have rigid field structures, undocumented edge cases, and rate limits that only appear in production.

The Broader Pattern

CoRecruit is a specific instance of a general pattern: take a high-volume, structured-but-conversational workflow, move the collection into AI, and give humans the synthesized output rather than the raw material.

This pattern applies in any domain where professionals spend significant time collecting information before they can do the work that actually requires their judgment. Loan origination. Insurance intake. Legal intake. IT helpdesk triage.

The technology is readily available. The design work is what makes it succeed.