AI Skills Assessment by Role
A developer and a product manager both need AI skills, but the skills that matter are fundamentally different. AISA adapts its assessment focus, dimension weighting, and conversation topics to match each role. Same rubric. Different emphasis.
Every candidate is assessed across five dimensions: Prompting & Communication, Technical Understanding, Workflow & Application, Critical Thinking, and Safety & Responsibility. But the weight each dimension carries, and the conversation topics that surface evidence, shift based on role. A data scientist gets deeper technical probes; a PM gets harder questions about product scoping and vendor evaluation.
Choose a Role
Developers
Assess prompt engineering for code generation, AI-assisted debugging, model selection, and how AI fits into the development workflow. Not just whether they use Copilot, but whether they use it well.
Product Managers
Evaluate AI literacy for product scoping: can they write AI-aware PRDs, evaluate vendor claims, anticipate failure modes, and guide teams on responsible AI deployment?
Designers
Test generative AI fluency across the design process, from prompt craft for visual tools to evaluating AI outputs against brand, accessibility, and UX standards.
Data Scientists
Probe LLM and RAG workflow competency: embeddings, fine-tuning trade-offs, evaluation metrics, production system design, and the practical limits of context windows.
All Other Roles
Marketing, operations, finance, legal, HR, sales — any role where AI touches daily work. The same rubric adapts its conversation to match your team's professional context.
One Rubric, Five Dimensions
Every role is assessed against the same behavioral rubric. The difference is emphasis, not methodology.
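As a rough illustration of what "emphasis, not methodology" means, the same five-dimension scores can be combined with different role weights. The weights and scores below are invented for the sketch, not AISA's actual values:

```python
# Hypothetical sketch: one fixed rubric, re-weighted per role.
# Weights and scores are illustrative examples, not AISA's real numbers.

DIMENSIONS = [
    "Prompting & Communication",
    "Technical Understanding",
    "Workflow & Application",
    "Critical Thinking",
    "Safety & Responsibility",
]

# Every role keeps all five dimensions; only the emphasis shifts.
ROLE_WEIGHTS = {
    "developer":       [0.20, 0.30, 0.25, 0.15, 0.10],
    "product_manager": [0.20, 0.10, 0.25, 0.30, 0.15],
}

def weighted_score(role: str, scores: list[float]) -> float:
    """Combine per-dimension scores using the role's weights (weights sum to 1)."""
    weights = ROLE_WEIGHTS[role]
    return sum(w * s for w, s in zip(weights, scores))

# The same candidate profile lands differently depending on role emphasis:
scores = [4.0, 2.0, 4.0, 3.0, 3.0]
print(round(weighted_score("developer", scores), 2))        # → 3.15
print(round(weighted_score("product_manager", scores), 2))  # → 3.35
```

Here a candidate strong on prompting and workflow but weak on technical depth scores lower under developer weighting than under product-manager weighting, even though the rubric and raw scores are identical.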
Read more about our scoring methodology in the AISA Rubric.