
AI fluency, measured through conversation.
Learn the AI Persona and Skill Level of your next hire.
Invite candidates, see instant reports.
Candidate Conversation
Our language models are trained to measure real-life AI skills through conversation.
Can you show me how you'd use an AI assistant to expand a rough client email? Give a one-sentence draft and the exact prompt you'd use.
Draft: "Sorry for the delay — we'll get the report to you by EOD Thursday." Prompt: "Short, professional email apologizing for a 2-day delay on a client report. Max 3 sentences. Tone: warm, not over-apologetic." Role, length, tone, and concrete detail give the model constraints so the output is usable; I also add a folder of example client conversations as context for tone and style.
How do you keep up with the latest in AI, and how would you explain the current landscape — key players, how things are evolving — to a colleague who's not deep in it?
I follow OpenAI and Anthropic releases, plus things like The Batch and Benedict Evans for use cases rather than hype. For a colleague I'd frame it as: we're past "use AI or not" — it's about where ChatGPT or Claude add value versus where you keep humans in the loop, and how fast the tooling is changing. I'd give two concrete examples, like using Claude to draft client emails or Copilot for boilerplate, so it's relatable.
What do you see as the main limitations of current AI tools, and how do you decide when not to use them — or how to work around them when you do?
Hallucination, recency, and context windows filling up — I never treat output as final and I check facts and dates. I also watch for trust issues: the "helpful assistant" can slip into flattery or agreeing with me instead of pushing back, so I don't take its tone at face value. For anything high-stakes I draft with AI then validate myself; judgment stays with me.
Pay per assessment. Start free.
For tech companies and startups hiring dev, product, and data roles.
No credit card or subscription required.
Auto top-up with one toggle.
AI skills assessed through conversation
One AI talks, one scores. Evidence for every score; anti-gaming built in.
Conversation & Challenges
Conversation plus mini challenges: games, mock apps, know-how. No exam, no anxiety.
Evidence-Based Scoring
Scored against a rubric created in partnership with academics, psychologists, human-behaviour specialists, and industry experts. Every score tied to a quote.
Adaptive Depth
AI skills measurement for the age of AI. Chat is the natural medium: we probe when it matters and move on when we're done.
Anti-Gaming Protection
Copy-paste detection (AI-tool phrases, unnatural structure), style shifts (tone against the candidate's own baseline), and speed anomalies (long pauses, impossibly fast replies). All surfaced in the report.
| How we compare | AISA | Sapia | HackerRank | Codility | TestGorilla | iMocha | HireVue |
|---|---|---|---|---|---|---|---|
| Focused on technical & product roles | ✓ | Dev only | Dev only | Partial | Partial | Partial | |
| AI skills as core focus | ✓ (only AISA) | Add-on | MCQ | | | | |
| Conversational, invisible assessment (dialogue, not test or exam) | ✓ | Add-on | Video | Partial | | | |
| Adaptive depth (probes when needed) | ✓ | Fixed 5 Qs | ✓ | ✓ | | | |
| Quote-backed evidence for every score | ✓ | ✓ | ✓ | ✓ | Partial | Partial | ✓ |
Behind the rubric
AISA's assessment framework is developed by a team with deep roots in technical recruitment, behavioural science, and AI product leadership, drawing on 15 years of hiring experience: 3,000+ interviews, 300+ hires, and published research on recruitment methodology. The rubric is informed by backgrounds spanning the Metropolitan Police, Harvard, Crowdbotics (Silicon Valley), and the European School of Economics.