Certification Development

    The Science Behind PCSI Certifications

    Each PCSI certification follows a structured, research-based development process. This section offers a transparent view of where each credential stands, the progress made, and what remains before launch.

    We publish this so you can verify the rigor behind your credential.


    PCSI-GSAIL Development Progress

    Item Bank Development
    Launch: June 2026

    Detailed, real-time transparency into the development process for the Global Strategic AI Leadership certification.

    Development Lifecycle

    Every PCSI certification follows a five-phase sequential process. PCSI-GSAIL is currently in Phase 3.

    Research & Scoping
    Framework Development
    Item Bank Development
    Pilot Testing
    Certification Launch
    What the Exam Covers

    Framework Structure

    The exam is organized into seven competency domains, each covering a distinct area of professional practice. Domain weights determine what percentage of exam questions come from each area, ensuring the assessment reflects real-world job priorities.
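    To make the weighting concrete, here is a minimal Python sketch of how domain weights translate into question counts on a 110-item form. The domain names and percentages below are invented placeholders, not PCSI's published blueprint; only the form length comes from this page.

```python
# Placeholder domain weights in percent -- invented, NOT PCSI's actual blueprint.
weights = {"Strategy": 25, "Governance": 20, "Ethics": 15,
           "Implementation": 15, "Workforce": 10,
           "Measurement": 10, "Risk": 5}
total_items = 110  # questions per form, per the exam format section

# Each domain's share of the form, rounded to whole questions.
counts = {domain: round(pct * total_items / 100)
          for domain, pct in weights.items()}
print(counts)
```

    In practice rounding can leave the allocation an item over or under the form length, so blueprint builders adjust counts by hand; the placeholder weights above happen to land exactly on 110.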

    Blueprint Domain Weights

    Post-Launch Validation

    Expert Validation

    Building the Exam

    Assessment Development

    Are There Enough Questions?

    Blueprint Coverage

    Every competency domain must have enough high-quality items to build a reliable exam. This shows progress toward that goal across all seven domains.

    Is the Exam Fair?

    Item Quality Standards

    A fair exam includes questions at different difficulty levels, tests different types of thinking, and uses varied question formats. Here is a snapshot of how the item pool is balanced across these dimensions.

    All items undergo sensitivity and bias review before entering the operational pool. After pilot testing, additional statistical analysis will verify that no question systematically advantages or disadvantages any demographic group.

    What the Exam Looks Like

    Exam Structure and Passing Standard

    How the exam is structured, what it takes to pass, and how the passing standard will evolve after launch.

    Exam Format

    110 Questions Per Form
    60-65% Scenario-Based
    Multiple Equivalent Forms

    Each form includes a small number of unscored research items alongside scored questions. Candidates will not know which items are unscored.

    Provisional Passing Standard

    In effect at launch and during initial exam administration; it will be replaced by the Angoff-derived cut score once the post-launch expert study is complete.

    65% Total Score Required
    Scenario-based and judgment-driven exam, not recall-based
    Items test at Apply, Analyze, or Evaluate cognitive levels
    Within the 60 to 70% range typical for Angoff-derived cut scores
    3+ Advanced Items Correct
    Ensures capability in higher-complexity scenarios
    Prevents candidates from passing while avoiding every complex judgment question
    Set as a safeguard, not a second scoring system
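    The two-part provisional rule above can be sketched in a few lines of Python. This is an illustrative reading of the published standard, not PCSI's scoring code, and for simplicity it treats all 110 items as scored even though each form also carries a few unscored research items.

```python
def provisional_pass(total_correct: int, advanced_correct: int,
                     total_items: int = 110) -> bool:
    """Illustrative check of the provisional standard: 65% overall
    AND at least 3 advanced items correct."""
    meets_total = total_correct / total_items >= 0.65
    meets_advanced = advanced_correct >= 3
    return meets_total and meets_advanced

# 75/110 is about 68%, but only 2 advanced items correct fails the safeguard:
print(provisional_pass(75, advanced_correct=2))  # False
print(provisional_pass(75, advanced_correct=3))  # True
```

    The conjunctive design means a strong overall score cannot compensate for missing the advanced-item safeguard, which is exactly the behavior described above.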
    Passing Standard Lifecycle
    Provisional Model
    Exam Launch
    Expert Validation + Angoff
    Final Cut Score

    What Happens After Launch

    The exam launches with the provisional passing standard in place. After launch, the expert validation survey and a formal Modified Angoff study will be conducted to derive a final, defensible cut score.

    1. Expert Panel

    Experienced practitioners define expectations based on real-world AI-enabled HR decisions.

    2. Minimum Competence Defined

    The benchmark reflects safe, consistent practice, not advanced or exceptional performance.

    3. Independent Ratings

    Experts estimate how many minimally qualified candidates would answer each item correctly.

    4. Structured Re-rating

    Differences are examined and refined to improve consistency and judgment accuracy.

    5. Cut Score Derived

    Final estimates are aggregated to produce a defensible passing threshold.
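    The five steps above reduce to a small aggregation at the end: average each item's ratings across experts, then average across items to get the expected percent-correct of a minimally qualified candidate. The ratings below are invented for illustration; real panels rate every operational item.

```python
from statistics import mean

# Invented ratings: each expert's estimate of the proportion of minimally
# qualified candidates who would answer the item correctly (step 3 above).
ratings = {
    "item_01": [0.70, 0.65, 0.75],
    "item_02": [0.55, 0.60, 0.50],
    "item_03": [0.80, 0.85, 0.80],
}

# Step 5: average across experts per item, then across items.
item_means = {item: mean(r) for item, r in ratings.items()}
cut_score = mean(item_means.values())  # expected proportion correct
print(f"Derived cut score: {cut_score:.1%}")
```

    After the structured re-rating round, panels often drop or revisit outlier ratings before this final aggregation; the principle, though, is just this average.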

    Pilot Testing and Item Analysis

    Every item is analyzed for statistical quality after pilot administration. Items that do not meet quality thresholds are revised or removed before the exam goes live.
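    As a concrete sketch of what "statistical quality" means here, the snippet below runs a classical item analysis (difficulty and point-biserial discrimination) on an invented pilot response matrix. The flagging thresholds are common psychometric rules of thumb, not PCSI's published criteria.

```python
from statistics import mean, pstdev

# Invented pilot data: rows = candidates, columns = items; 1 = correct.
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]

totals = [sum(row) for row in responses]  # each candidate's total score
stats = []
for j in range(len(responses[0])):
    item = [row[j] for row in responses]
    p = mean(item)  # difficulty: proportion answering correctly
    # Point-biserial: correlation between item score and total score.
    cov = mean(i * t for i, t in zip(item, totals)) - p * mean(totals)
    r_pb = cov / (pstdev(item) * pstdev(totals))
    stats.append((p, r_pb))
    flag = "" if 0.20 <= p <= 0.90 and r_pb >= 0.20 else "  <- review"
    print(f"item {j+1}: difficulty={p:.2f}, discrimination={r_pb:.2f}{flag}")
```

    Items answered correctly by almost everyone (or almost no one), or whose score barely correlates with overall performance, would be flagged for revision or removal, which is the decision rule the paragraph above describes.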

    Score Equating Across Forms

    Statistical equating ensures every form holds candidates to the same standard: a slightly harder form does not penalize candidates, and a slightly easier form does not give them an advantage.
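    One common approach is linear (mean-sigma) equating, sketched below with invented summary statistics. PCSI has not published which equating model it uses, so treat this purely as an illustration of the idea.

```python
def linear_equate(score_b: float, mean_a: float, sd_a: float,
                  mean_b: float, sd_b: float) -> float:
    """Map a raw score on Form B onto the Form A scale by matching
    standardized positions: equal z-scores count as equal performance."""
    return mean_a + sd_a * (score_b - mean_b) / sd_b

# Invented numbers: Form B ran slightly harder, so its mean is lower.
equated = linear_equate(score_b=70, mean_a=74.0, sd_a=10.0,
                        mean_b=71.0, sd_b=10.0)
print(equated)  # 73.0 -- a 70 on the harder form counts like a 73 on Form A
```

    The effect is the fairness property described above: the same underlying performance maps to the same reported standard regardless of which form a candidate drew.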

    Post-Launch: Both the expert validation survey and the Angoff study will be conducted after the exam launches. The provisional standard remains in effect until then.