Role Assessment

Generative AI Skills Assessment for Designers

Testing AI fluency in the design process, not just the tools

The strongest designers treat generative AI as an exploration accelerator, not a replacement for judgment. They can prompt Midjourney for 50 variations and explain why they rejected 49 of them. Designers who score below 5 on AISA tend to either avoid AI entirely or accept its first output uncritically. AISA surfaces this distinction through conversation about actual design decisions.

What We Assess

Specific AI competencies we probe through natural conversation, tailored for designers.

Visual AI Prompt Craft

Can they describe a desired visual outcome in terms a generative model can act on? We test specificity — not just "make a modern landing page" but prompts that include layout structure, color constraints, typography direction, and mood references. The best designers build prompt libraries and iterate systematically.
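
For illustration, here is a minimal prompt-template sketch in Python. The field names and example values are hypothetical, not a prescribed format; the point is that layout, color, typography, and mood become explicit parameters rather than whatever the model defaults to:

```python
# Minimal sketch of a reusable visual-AI prompt template.
# Field names and example values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class LandingPagePrompt:
    layout: str       # e.g. "hero with split two-column layout, sticky nav"
    palette: str      # e.g. "deep navy background, coral accent, off-white text"
    typography: str   # e.g. "geometric sans-serif headings, generous line height"
    mood: str         # e.g. "calm, editorial, Swiss-modernist"

    def render(self) -> str:
        # Assemble a single structured prompt string from the parameters.
        return (
            f"Modern SaaS landing page. Layout: {self.layout}. "
            f"Color: {self.palette}. Typography: {self.typography}. "
            f"Mood: {self.mood}."
        )

prompt = LandingPagePrompt(
    layout="hero with split two-column layout, sticky nav",
    palette="deep navy background, coral accent, off-white text",
    typography="geometric sans-serif headings, generous line height",
    mood="calm, editorial, Swiss-modernist",
)
print(prompt.render())
```

A template like this is what makes a prompt library possible: each field can be varied independently while everything else stays fixed.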

Design Process Integration

Where in the design pipeline does AI add value for them? We look for evidence of AI in mood boards, wireframing, icon generation, copy drafting, user flow ideation, or accessibility testing — not just final asset generation. The designers who score highest use AI at the stages where exploration speed matters most.

Critical Evaluation of Outputs

When an AI generates a design, can they articulate what is wrong with it? We assess whether they check for brand consistency, accessibility compliance (contrast ratios, text sizing), cultural sensitivity, and visual hierarchy. Accepting AI output without this filter is a liability, not a skill.
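
One of those checks is fully mechanical and worth automating. Here is a minimal sketch of the standard WCAG 2.x contrast-ratio calculation, in generic Python (an illustration, not part of the AISA assessment itself):

```python
# Minimal sketch of the WCAG 2.x contrast-ratio check.
# Relative luminance: linearize each sRGB channel, then weight by
# 0.2126 R + 0.7152 G + 0.0722 B. Ratio = (L_lighter + 0.05) / (L_darker + 0.05).

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    def linearize(channel: int) -> float:
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA requires 4.5:1 for normal text (3:1 for large text).
ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # mid-gray on white
print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} AA for normal text")
```

Mid-gray (#777777) on white comes out around 4.48:1, just below the AA threshold, which is exactly the kind of failure that "looks fine at first glance".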

Ethical & Brand Awareness

Do they consider copyright implications of AI-generated imagery, bias in representation, and brand dilution from over-reliance on generic AI aesthetics? Designers who score well understand that generative AI tends toward visual homogeneity and actively work against that tendency.

Dimension Focus

AISA scores across five dimensions: Prompting, Technical Understanding, Workflow, Critical Thinking, and Safety. Here is how the three most heavily weighted apply to designers; Technical Understanding and Safety carry the remaining 30%.

Workflow & Application

25%

Where does AI fit in their design process? We assess whether they use AI during ideation, prototyping, iteration, or asset production — and whether they can articulate why it belongs at that stage. A designer who uses AI to generate 20 layout concepts in 10 minutes, then refines the best one manually, demonstrates a fundamentally different workflow than someone who generates one image and ships it.

Prompting & Communication

23%

How do they prompt visual AI tools? Design-specific prompting requires describing spatial relationships, style references, brand constraints, and accessibility requirements in language the model understands. We probe whether they can iterate on prompts systematically — adjusting tone, composition, color palette — rather than regenerating randomly until something works.
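
As a sketch of what systematic iteration means in practice (the parameter names and the `generate` placeholder below are illustrative assumptions, not a real tool API):

```python
# Illustrative sketch of one-variable-at-a-time prompt iteration.
# `generate` is a hypothetical placeholder for whatever image model
# is being driven; it is not a real API.
base = {
    "composition": "centered product shot",
    "palette": "warm neutrals",
    "style_weight": 0.6,
    "negative": "text, watermark, clutter",
}

def generate(params: dict) -> str:
    # Placeholder for a real model call.
    return f"render({params})"

# Change exactly one parameter per round and record why, so each
# output can be attributed to a specific design decision.
iterations = [
    ("style_weight", 0.8, "previous output drifted off-brand"),
    ("composition", "rule-of-thirds, off-center", "hero felt static"),
    ("palette", "warm neutrals with coral accent", "needed a focal color"),
]

params = dict(base)
for key, value, rationale in iterations:
    params[key] = value
    print(f"{key} -> {value!r} ({rationale}): {generate(params)}")
```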

Critical Thinking

22%

Can they evaluate AI-generated designs against brand guidelines, accessibility standards, and UX principles? This is the dimension that separates a designer who uses AI from a designer who uses AI well. We look for evidence that they apply the same critical eye to AI outputs that they would to a junior designer's work.

What Good vs. Poor Looks Like

Patterns we see consistently in designer assessments. AI skill in design is about judgment, not just generation.

Strong signal (score 7-10)

  • Describes a systematic prompt iteration process — not just re-rolling until they get something they like, but adjusting specific parameters (style weight, composition guidance, negative prompts) based on what the previous output lacked.
  • Uses AI at specific stages of their design process with clear rationale. "I use it for rapid ideation in the divergent phase, then switch to manual work for refinement because the model cannot understand our design system tokens."
  • Evaluates AI outputs against concrete standards: WCAG contrast ratios, brand typography rules, responsive behavior, and cultural representation. Can articulate why a visually appealing AI output might still be unusable.
  • Thinks about copyright and originality. Understands that AI-generated assets may have licensing implications and has a workflow for ensuring final deliverables are original enough for commercial use.

Weak signal (score 1-4)

  • Describes AI use as "I put in a prompt and see what comes out." No iteration strategy, no understanding of why different prompts produce different results, no systematic approach to improving outputs.
  • Cannot explain where AI fits in their process. Uses it opportunistically for final assets rather than strategically for exploration. Treats AI as a production shortcut rather than a thinking tool.
  • Accepts AI visual outputs without critical review. Does not check accessibility, brand consistency, or cultural appropriateness. Ships whatever looks good at first glance.
  • No awareness of the homogeneity problem — that generative AI tends toward similar aesthetics (the "Midjourney look"), and that relying on it too heavily erodes distinctive brand identity.

The Conversation Approach

AISA does not ask designers to generate images during the assessment. Instead, it conducts a design-thinking conversation — exploring how they make decisions when AI is part of their toolkit.

A typical designer assessment explores their current AI toolkit, how they choose between tools for different tasks, a specific project where AI changed their process, and how they maintain quality standards when working with generative outputs. The conversation might present a design challenge — "your team needs to produce 40 product illustrations in two weeks" — and explore how they would approach it with and without AI.

The value is in the reasoning. Two designers might both use Midjourney, but one has a refined prompt vocabulary and an evaluation rubric, while the other is rolling dice. The conversation surfaces this difference in a way that portfolio reviews cannot — because portfolios show outcomes, not process. Learn more about why process visibility matters in our rubric methodology documentation.

The Hiring Context

Design hiring has traditionally centered on portfolios — polished case studies that show final outcomes. But AI changes what matters. A beautiful portfolio can now be generated in an afternoon with minimal design judgment. Conversely, a designer with a modest portfolio might have extraordinary AI-augmented process skills that would triple their output on your team.

The design industry is also navigating a generational divide. Senior designers who built their careers on manual craft are adapting to AI at different rates. Junior designers are AI-native but may lack the critical evaluation skills to maintain quality. Neither portfolio review nor whiteboard exercises reveal where a designer sits on this spectrum.

AISA provides a third signal. It does not replace portfolio review — it adds the dimension of process and judgment that portfolios hide. The report shows exactly how a designer thinks about AI in their workflow, with quotes and evidence that hiring managers can use alongside traditional assessment. For a comprehensive approach to design team hiring, see our AI skills gap implementation guide.

Why It Matters

Design teams are under pressure to produce more, faster. AI promises to deliver that speed, but only if designers know how to wield these tools without compromising quality. A designer who uses AI poorly ships generic, inaccessible, off-brand work faster — which is worse than shipping nothing. AISA gives design leaders evidence of whether a candidate can maintain design standards while using AI as a force multiplier. The report includes specific quotes showing how they think about quality, brand, and user experience when AI is in the loop.

Further Reading

Understand the methodology and context behind our AI skills assessment.

Start assessing designers

One conversation. Evidence-based scoring across five dimensions. A report you can actually use to make hiring decisions.