Once a test is developed and put into operational use, it requires ongoing care and attention to strengthen, or at a minimum maintain, its validity evidence.

For example, security analyses (‘data forensics’) should be conducted to detect and respond to anomalous results that might indicate validity has been compromised. Form- and item-level analyses should detect and address the normal effects of item exposure and changes in the underlying domain of interest. New items and tasks should be developed and piloted so that operational forms can be reassembled or replaced in response to shifts in psychometric characteristics or in the domain itself. Even the blueprint, domain analysis, and standard setting should be reviewed periodically to ensure they reflect the current state of the domain and the intended interpretation and use of the test scores.

Alpine staff have extensive experience working with test sponsors to design and implement maintenance activities aligned with the validity framework established at the outset of the program and with any adaptations made over time as program needs change. Such activities include ongoing psychometric consultation and program design (e.g., strategic and tactical initiatives); components of the test development process (e.g., domain analysis, blueprint development, content development and review, pre-testing and analysis, operational form assembly, standard setting); additional item- and form-level analyses (e.g., differential item functioning [DIF], drift, balance among operational forms); security analyses; and consequential validity analyses.
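To make one of these item-level analyses concrete, the sketch below illustrates a common approach to DIF detection: the Mantel-Haenszel procedure, which compares item performance for a reference and a focal group after matching examinees on ability (typically by total-score band). This is an illustrative sketch, not Alpine's specific methodology; the function name and the tuple layout of the input strata are assumptions for the example.

```python
import math

def mantel_haenszel_dif(strata):
    """Estimate the Mantel-Haenszel common odds ratio for one item.

    strata: list of (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
            counts, one tuple per ability stratum (e.g., total-score band).
    Returns (alpha_mh, delta_mh), where delta_mh is alpha_mh re-expressed on
    the ETS delta scale; negative delta suggests the item favors the
    reference group, positive delta the focal group.
    """
    numerator = 0.0
    denominator = 0.0
    for ref_ok, ref_miss, foc_ok, foc_miss in strata:
        total = ref_ok + ref_miss + foc_ok + foc_miss
        if total == 0:
            continue  # skip empty strata
        numerator += ref_ok * foc_miss / total
        denominator += ref_miss * foc_ok / total
    alpha_mh = numerator / denominator
    delta_mh = -2.35 * math.log(alpha_mh)  # ETS delta-scale transformation
    return alpha_mh, delta_mh

# Hypothetical counts for one item across two score bands:
alpha, delta = mantel_haenszel_dif([(40, 10, 30, 20), (30, 20, 20, 30)])
```

In operational practice the delta values are typically screened against classification rules (such as the ETS A/B/C categories) to flag items for content review rather than automatic removal.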