This SAT Vision Standard Document has so far established qualitative standards for building a training program in Vision Developer. Now let’s turn to the quantitative standards. No good quantitative standard can stand by itself.
The numbers below, and the baseline numbers that you will establish for your program, presume well-written tasks, elements, SKs, objectives, etc. Deviations from the quantitative standard indicate potential problems with the quality of these aspects of your program. For example, poorly written task statements will result in too few or too many tasks because the tasks are not aligned with the qualitative standards set forth above.
Benchmarking others who have developed a JTA hierarchy, an objectives hierarchy, and program hierarchy for the same or similar job can go a long way to establishing quantitative standards for your program.
While a job can vary in the number of functions and tasks, and tasks can vary in their number of elements and SKs, using an estimator as a guide may be useful when building JTA, Objectives, and Program Hierarchies. This section provides estimates for a typical job in terms of number of functions, tasks, elements, SKs, learning objectives, test questions, and courses.
Through benchmarking and other information, establish a baseline for the following data. Compare each project in VISION to its baseline to determine whether it needs closer evaluation.
Standards

1. Baselines are established for task analysis data.
2. 100% of tasks have DIF data and are marked completed.
3. 80% or more of tasks are selected for training.
4. Baselines are established for objective hierarchy data.
5. Baselines are established for program hierarchy data.
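Standards 2 and 3 are mechanically checkable once a project's task data is exported. Below is a minimal sketch of such a check; the field names (`dif_complete`, `analysis_status`, `selected`) are hypothetical placeholders, not actual VISION fields, so adapt them to whatever your export provides.

```python
# Hypothetical check of standards 2 and 3 against exported task records.
# Field names are placeholders -- adapt to your VISION project's export.

def check_task_standards(tasks):
    """Return pass/fail results for the task-level quantitative standards."""
    total = len(tasks)
    dif_done = sum(1 for t in tasks if t["dif_complete"])
    completed = sum(1 for t in tasks if t["analysis_status"] == "Completed")
    selected = sum(1 for t in tasks if t["selected"])
    return {
        "100% of tasks have DIF data": dif_done == total,
        "100% of tasks marked completed": completed == total,
        "80% or more selected for training": selected / total >= 0.80,
    }

# Toy data: all tasks DIF'ed and completed, but only 2 of 3 selected.
tasks = [
    {"dif_complete": True, "analysis_status": "Completed", "selected": True},
    {"dif_complete": True, "analysis_status": "Completed", "selected": True},
    {"dif_complete": True, "analysis_status": "Completed", "selected": False},
]
results = check_task_standards(tasks)
```

With the toy data above, the selection check fails (2 of 3 tasks selected, about 67%), flagging the project for closer evaluation.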
Best Practices
Tasks

Tasks—Establish baseline
Not all training programs, even those that are nearly identical, will have the exact same number of tasks.
We encourage you to start with what you have; establish the baseline number of tasks in the program and work from there. In future revisions, work towards aligning the task list with similar programs across industry.

Tasks per Organizer—Establish baseline
The number of tasks per organizer is an artifact of how the job analysis was organized. It may be by job role (preferred), by job tier, by system, etc.
The number of organizers will be indicative of how the analysis was performed.

Elements per task—Simple 6-8, Medium 10-12, Complex 12 and above
A Task must be performable within a single work shift and should have a manageable number of Elements. A right-sized task will likely have about 6-15 elements, depending on the complexity of the task. More than 15 suggests that the Task ought to be broken into Sub-Tasks.
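As a rough illustration only, the sizing rule above can be expressed as a small classifier. The guideline's bands overlap at 12 elements, so the cutoffs below are judgment calls, and the function itself is hypothetical, not part of VISION.

```python
# Approximate classifier for the element-count sizing guideline.
# Band edges are judgment calls where the guideline's ranges overlap.

def classify_task_size(element_count):
    if element_count > 15:
        return "split into Sub-Tasks"  # more than 15 elements: too large
    if element_count >= 12:
        return "complex"               # 12-15 elements
    if element_count >= 9:
        return "medium"                # roughly the 10-12 band
    return "simple"                    # roughly the 6-8 band
```

For example, a task with 18 elements would be flagged for splitting into Sub-Tasks, while one with 7 elements is simple.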

SKs per task—15
Many programs do not have SKs in their Task analysis. It is preferred to have them, and if they are present, establish a baseline through benchmarking and other information.

Tasks marked completed—100%
If an SME identifies a Task in the course of a job analysis, it ought to go through the complete selection process, even if it is ultimately not selected for training.
A selection status should be set, and the task analysis status set to Completed for all tasks in the analysis hierarchy.

Tasks with DIF—100%
All identified tasks should be rated for Difficulty, Importance, and Frequency (DIF).

Tasks selected for training—90%
Not all tasks will be selected for training, but a high percentage of them should be trained.
If the task selection rate is too low, it probably means that the identified tasks are too narrow in scope; that is, they are probably really elements or SKs.
If your tasks are too narrow and many of them go unselected for training, expect to find far fewer tasks in a similar project at another site.

Objectives

Number of Performance Objectives per Organizer—Establish baseline
The number of objectives per organizer is an artifact of how the training program has been organized. It may be by job role (preferred), or by job tier, or by system, etc.
While there is no standard baseline, every project is encouraged to establish its existing baseline so that future work in the objectives hierarchy can be measured against it. Deviations from the baseline indicate inconsistency in the design of the program.

Number of Cognitive Objectives per Organizer—Establish baseline

See above.
EOs from consolidated SKs—35%
The number of enabling objectives derived from consolidated SKs will indicate the relative complexity of the SKs.
A low number here indicates that SKs on average are complex, whereas a high number indicates they are relatively simple.

EOs from direct 1:1 SKs—65%
The number of enabling objectives derived directly from single SKs will indicate the relative complexity of the SKs.

Performance EOs—20-40% (Establish a baseline.)
Performance objectives are what students will be able to do on their jobs. They can be either cognitive or psychomotor in nature, but they differ from cognitive (or academic) objectives, which are written to test students' knowledge of a topic. The percentage of performance EOs will typically be lower than that of cognitive (academic) EOs and will depend on the complexity of the job and how much someone must know to perform the tasks.

Cognitive (Academic) EOs—60-80% (Establish a baseline.)
As noted above, the percentage here will depend on the complexity of knowledge necessary to perform the job.
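Once EOs are labeled by type, the 20-40% / 60-80% split above can be checked mechanically. A minimal sketch follows, assuming EOs are represented as plain type labels rather than an actual VISION data structure.

```python
# Hypothetical check of the performance/cognitive EO split against the
# 20-40% and 60-80% guideline bands. Labels are assumptions, not VISION data.

def eo_split(eo_types):
    total = len(eo_types)
    perf = eo_types.count("performance") / total
    cog = eo_types.count("cognitive") / total
    return {
        "performance_pct": round(perf * 100, 1),
        "cognitive_pct": round(cog * 100, 1),
        "within_bands": 0.20 <= perf <= 0.40 and 0.60 <= cog <= 0.80,
    }

# Toy data: 3 performance EOs and 7 cognitive EOs (a 30% / 70% split).
split = eo_split(["performance"] * 3 + ["cognitive"] * 7)
```

With the toy data, the split falls inside both guideline bands; a real project would compare against its own established baseline as well.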

Questions

Multiple Choice—50-95%
Matching, True/False, Scenario, Short Answer, Fill-in-the-blank, Essay—Establish baselines
The number of multiple-choice questions will likely vary by industry. The nuclear industry, for example, relies heavily on multiple-choice questions for exams (in the range of 90 to 95%). Base your percentage on what makes sense for your industry and how students are tested.
Based on the percentage of multiple-choice questions, establish baselines for the other question types.
Avoid true/false questions on exams as they can be easily guessed.
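A short sketch of how a question bank might be tallied against these guidelines. The `type` labels below are assumptions about how questions are tagged in your export, not a real VISION schema.

```python
from collections import Counter

# Hypothetical tally of exam questions by type: reports the multiple-choice
# share (to compare against your industry baseline) and flags true/false items.

def question_report(questions):
    counts = Counter(q["type"] for q in questions)
    total = len(questions)
    return {
        "mc_share": counts["multiple_choice"] / total,
        "true_false_present": counts["true_false"] > 0,
        "counts": dict(counts),
    }

# Toy bank: 9 multiple-choice and 1 short-answer question (90% multiple choice).
questions = [{"type": "multiple_choice"}] * 9 + [{"type": "short_answer"}]
report = question_report(questions)
```

The toy bank lands at a 90% multiple-choice share, in line with the nuclear-industry range cited above, and contains no true/false items.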

Program

Number of Training Units (TUs) per Organizer—Establish baseline

Establish a baseline.
Hands-on Instruction—20-40%
Hands-on instruction will likely be less than cognitive instruction but establish a baseline for your industry and program.

Cognitive Instruction—60-80%
Most training programs contain more cognitive lessons than hands-on. Establish a baseline for your industry.

Number of Objectives in a TU—10
Establish a baseline, but note that Training Units also represent a manageable chunk of content, so too many learning objectives in a single TU may indicate that the lesson is too large.