4 Test Administration

Chapter 4 of the Dynamic Learning Maps® (DLM®) Alternate Assessment System 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) describes general test administration and monitoring procedures. This chapter describes updated procedures and data collected in 2020–2021, including the DLM policy on virtual test administration, a summary of administration time, adaptive routing, Personal Needs and Preferences Profile selections, and teacher survey responses regarding user experience, remote assessment administration, and accessibility.

Overall, administration features remained consistent with the 2019–2020 intended implementation, including the availability of instructionally embedded testlets, spring operational administration of testlets, the use of adaptive delivery during the spring window, and the availability of accessibility supports.

For a complete description of test administration for DLM assessments, including information on available resources and materials and information on monitoring assessment administration, see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

4.1 Overview of Key Administration Features

This section describes DLM test administration for 2020–2021. For a complete description of key administration features, including information on assessment delivery, Kite® Student Portal, and linkage level selection, see Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017). Additional information about changes in administration can also be found in the Test Administration Manual 2020–2021 (DLM Consortium, 2021) and the Educator Portal User Guide (Dynamic Learning Maps Consortium, 2021d).

4.1.1 Test Windows

Instructionally embedded assessments were available for teachers to optionally administer between September 14 and December 21, 2020, and between January 1 and February 24, 2021. During the consortium-wide spring testing window, which occurred between March 8 and July 2, 2021, students were assessed on each Essential Element (EE) on the blueprint. Each state education agency sets its own testing window within the larger consortium spring window.

4.1.2 DLM Statement on Virtual Assessment Administration

In October 2020, DLM staff released a policy document stating that DLM assessments must be administered in person by a qualified test administrator, not virtually (e.g., over Zoom, Microsoft Teams, or Google Hangouts, with the test administrator not physically present during administration). This policy was supported by a resolution from the DLM Technical Advisory Committee, which agreed that a virtual administration would carry too many risks (e.g., student ability to access the content, test security, validity of score inferences). The policy does not require an in-school administration; for example, a test administrator could travel to the student’s house, or a separate off-site testing facility could be used.

4.2 Administration Evidence

This section describes evidence collected during the 2020–2021 operational administration of the DLM alternate assessment. The categories of evidence include data relating to administration time, device usage, and the adaptive delivery of testlets in the spring window.

4.2.1 Administration Time

Estimated administration time varies by student and subject. During the spring testing window, estimated total testing time was between 45 and 135 minutes per student, with each testlet taking approximately 5–15 minutes. Actual testing time per testlet varies depending on each student’s unique characteristics.

Kite Student Portal captured start and end dates and time stamps for every testlet. Actual testing time per testlet was calculated as the difference between start and end times. Table 4.1 shows the distribution of test times per testlet. Most testlets took approximately 2–3 minutes to complete. Testlets time out after 90 minutes.

Table 4.1: Distribution of Response Times per Testlet in Minutes
Grade Min Median Mean Max 25Q 75Q IQR
Elementary 0.08 2.27 3.08 88.10 1.40 3.60 2.20
Middle school 0.07 2.03 2.81 88.80 1.23 3.30 2.07
High school 0.08 2.28 3.08 89.12 1.38 3.62 2.23
Biology 0.25 2.20 2.92 41.45 1.43 3.38 1.95
Note. Min = minimum, Max = maximum, 25Q = lower quartile, 75Q = upper quartile, IQR = interquartile range
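
To make the calculation concrete, the following minimal sketch shows how per-testlet response times and the summary statistics in Table 4.1 could be computed from start and end time stamps. The file name and column names are hypothetical and do not reflect the actual Kite Student Portal extract layout.

```python
# Minimal sketch: compute per-testlet response times (in minutes) and
# summarize their distribution. Column names are hypothetical; the actual
# Kite Student Portal extract layout may differ.
import pandas as pd

# Assumed columns: grade_band, start_time, end_time (ISO 8601 timestamps)
records = pd.read_csv("testlet_times.csv", parse_dates=["start_time", "end_time"])

# Response time per testlet, in minutes
records["minutes"] = (records["end_time"] - records["start_time"]).dt.total_seconds() / 60

# Distribution statistics by grade band, mirroring Table 4.1
summary = records.groupby("grade_band")["minutes"].agg(
    Min="min",
    Median="median",
    Mean="mean",
    Max="max",
    Q25=lambda x: x.quantile(0.25),
    Q75=lambda x: x.quantile(0.75),
)
summary["IQR"] = summary["Q75"] - summary["Q25"]
print(summary.round(2))
```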

4.2.2 Device Usage

Testlets may be administered on a variety of platforms. In addition to start and end times, Kite Student Portal captured the operating system used for each testlet completed in 2020–2021. Although these data do not capture the specific devices used to complete each testlet (e.g., SMART Board, switch system), they do provide high-level information about how students access assessment content. For example, we can identify how often an iPad is used relative to a Chromebook or traditional PC. Figure 4.1 shows the number of testlets completed on each operating system, by linkage level. Overall, 34% of testlets were completed on a Chromebook, 33% were completed on a PC, 25% were completed on an iPad, and 9% were completed on a Mac. In general, PCs were the most common operating system at the lower linkage levels, whereas PCs and Chromebooks were used at similar rates at the higher linkage levels. This may reflect that testlets at the lower linkage levels are typically teacher-administered, whereas testlets at the higher linkage levels are typically computer-administered. Thus, these results may indicate that teachers and students tend to use different devices for accessing assessment content.

Figure 4.1: Distribution of Devices Used for Completed Testlets
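
For illustration, the breakdown shown in Figure 4.1 amounts to a cross-tabulation of completed testlets by operating system and linkage level. The sketch below assumes a hypothetical testlet extract with operating_system and linkage_level columns; it is not the actual reporting code.

```python
# Minimal sketch: tabulate completed testlets by operating system and linkage
# level. Column names are hypothetical, not the actual Kite export format.
import pandas as pd

records = pd.read_csv("testlet_records.csv")

# Counts of completed testlets by operating system and linkage level
counts = pd.crosstab(records["operating_system"], records["linkage_level"])

# Overall share of testlets completed on each operating system
share = records["operating_system"].value_counts(normalize=True).mul(100).round(1)
print(counts)
print(share)
```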

4.2.3 Adaptive Delivery

During the spring 2021 test administration, the science assessments were adaptive between testlets, following the same routing rules applied in prior years. That is, the linkage level associated with the next testlet a student received was based on the student’s performance on the most recently administered testlet, with the specific goal of maximizing the match of student knowledge and skill to the appropriate linkage level content.

  • The system adapted up one linkage level if the student responded correctly to at least 80% of the items measuring the previously tested EE. If the previous testlet was at the highest linkage level (i.e., Target), the student remained at that level.
  • The system adapted down one linkage level if the student responded correctly to less than 35% of the items measuring the previously tested EE. If the previous testlet was at the lowest linkage level (i.e., Initial), the student remained at that level.
  • Testlets remained at the same linkage level if the student responded correctly to at least 35% but less than 80% of the items on the previously tested EE.

The linkage level of the first testlet assigned to a student was based on First Contact survey responses. The correspondence between the First Contact complexity bands and the first assigned linkage level is shown in Table 4.2; an illustrative sketch of this assignment and routing logic follows the table.

Table 4.2: Correspondence of Complexity Bands and Linkage Levels
First Contact Complexity Band Linkage Level
Foundational Initial
Band 1 Initial
Band 2 Precursor
Band 3 Target
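
The assignment and routing rules above can be summarized in a few lines of code. The following sketch simply encodes the published rules (Table 4.2 and the adaptation thresholds) for illustration; it is not the DLM system’s implementation.

```python
# Illustrative sketch of the linkage level assignment and adaptive routing
# rules described above. This is not DLM's implementation; it simply encodes
# the published rules for demonstration.

LEVELS = ["Initial", "Precursor", "Target"]  # science linkage levels, lowest to highest

# Table 4.2: First Contact complexity band -> linkage level of first testlet
FIRST_TESTLET_LEVEL = {
    "Foundational": "Initial",
    "Band 1": "Initial",
    "Band 2": "Precursor",
    "Band 3": "Target",
}

def next_linkage_level(current_level: str, pct_correct: float) -> str:
    """Return the linkage level of the next testlet given the percentage of
    items answered correctly on the most recently administered testlet."""
    idx = LEVELS.index(current_level)
    if pct_correct >= 80:          # adapt up one level (capped at Target)
        idx = min(idx + 1, len(LEVELS) - 1)
    elif pct_correct < 35:         # adapt down one level (floored at Initial)
        idx = max(idx - 1, 0)
    return LEVELS[idx]             # otherwise, stay at the same level

# Example: a Band 2 student answers 85% of items correctly on the first testlet
first = FIRST_TESTLET_LEVEL["Band 2"]     # "Precursor"
second = next_linkage_level(first, 85.0)  # "Target"
print(first, "->", second)
```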

For a complete description of adaptive delivery procedures, see Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017). For a summary of student adaptive routing during the spring 2019 administration, see Chapter 4 of the 2018–2019 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2019).

Following the spring 2021 administration, analyses were conducted to determine the mean percentage of testlets that adapted up a linkage level, stayed at the same linkage level, or adapted down a linkage level from the first to second testlet administered for students within a grade band or course and complexity band. The aggregated results can be seen in Table 4.3.

Overall, results were similar to those found in previous years. For the majority of students across all grade bands who were assigned to the Foundational Complexity Band by the First Contact survey, testlets did not adapt to a higher linkage level after the first assigned testlet (ranging from 59% to 89%). A similar pattern was seen for students assigned to Complexity Band 3, with the majority of students not adapting down to a lower linkage level after the first assigned testlet (ranging from 63% to 80%). Consistent patterns were not as apparent for students assigned to Complexity Band 1 or Complexity Band 2; distributions across the adaptation categories were more variable across grade bands. Further investigation is needed to evaluate reasons for these different patterns.

The 2020–2021 results build on earlier findings from previous years of operational assessment administration and suggest that the First Contact survey complexity band assignment is an effective tool for assigning most students content at appropriate linkage levels. Most students assigned to the Foundational Complexity Band and Complexity Band 3 did not adapt, with between 11% and 41% of students adapting to the available adjacent linkage level, suggesting that the available content served the majority of students’ needs. Results also indicate that students assigned to Band 2 were more variable with respect to the direction in which they moved between the first and second testlets. Several factors may help explain these results, including greater variability in student characteristics within this group and content-based differences across grade bands. Further exploration is needed in this area. Finally, results show that students assigned to Band 1 tended to adapt up a linkage level more frequently, which is an expected finding given that Foundational and Band 1 students are both assigned content at the Initial linkage level. However, patterns of adaptation beyond the first adaptation opportunity (e.g., between the second and third testlets, or the third and fourth testlets) indicate that the majority of Band 1 students adapt back down to the Initial level during the assessment, rather than remaining at the Precursor level. Thus, changes to the assignment process are not planned. For a description of previous findings, see Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) and the subsequent annual technical manual updates (Dynamic Learning Maps Consortium, 2019, 2020).

Table 4.3: Adaptation of Linkage Levels Between First and Second Science Testlets (N = 31,574)
Complexity band Grade Adapted up (%) Did not adapt (%) Adapted down (%)
Foundational 3–5 40.8 59.2 —
Foundational 6–8 29.9 70.1 —
Foundational 9–12 31.7 68.3 —
Foundational Biology 11.1 88.9 —
Band 1 3–5 73.8 26.2 —
Band 1 6–8 63.8 36.2 —
Band 1 9–12 58.4 41.6 —
Band 1 Biology 31.2 68.8 —
Band 2 3–5 26.0 44.7 29.3
Band 2 6–8 37.2 41.2 21.5
Band 2 9–12 42.2 39.5 18.3
Band 2 Biology 14.8 37.0 48.1
Band 3 3–5 — 63.3 36.7
Band 3 6–8 — 67.1 32.9
Band 3 9–12 — 79.6 20.4
Band 3 Biology — 70.6 29.4
Note: Foundational and Band 1 correspond to testlets at the lowest linkage level, so testlets could not adapt down a linkage level. Band 3 corresponds to testlets at the highest linkage level in science, so testlets could not adapt up a linkage level. — = not applicable.

4.2.4 Administration Incidents

As in all previous years, testlet assignment during the spring 2021 assessment window was monitored for evidence that students were correctly assigned to testlets. Administration incidents that have the potential to affect scoring are reported to state education agencies in a supplemental Incident File. One incident occurred during the spring 2021 administration in which the picture response cards needed to respond to items were not included in the testlet information page for one testlet. The testlet information page provides test administrators with information specific to each testlet. Once the missing response cards were reported to the service desk, the testlet information page in question was immediately corrected to include the picture response cards. However, prior to the correction, seven students had taken the testlet. Because it is unknown whether the test administrators replaced the missing response card pictures with alternatives, and because students’ responses would have been impacted by the lack of response cards, state education agencies were given the option either to revert students to the end of the testlet completed immediately prior to the affected testlet and resume testing or to let students proceed forward as usual. For students who proceeded as usual, credit was given for all items on the affected testlet, to err on the side of benefiting the student. In total, one student returned to the affected testlet, and six students proceeded forward with a rescore.

As in previous years, an Incident File was delivered to state partners with the General Research File (see Chapter 7 of this manual for more information), which provided the list of students who did not have their assessment reset to the affected testlets, and therefore were potentially affected by the issue. States were able to use this file during the 2-week review period to make decisions about invalidation of records at the student level based on state-specific accountability policies and practices. Quality control procedures were also updated to ensure that all testlet information pages have the required materials included.

4.3 Implementation Evidence

This section describes evidence collected during the spring 2021 operational implementation of the DLM alternate assessment. The categories of evidence include a description of Kite system updates and survey data relating to user experience, remote assessment administration, and accessibility.

4.3.1 Kite System Updates

Several updates were made to the Kite system during 2020–2021 to improve its functionality. A new Student Roster and First Contact Survey Status extract was created to provide testing readiness information in one place. The extract includes the current grade in which the student is enrolled, all subjects in which the student is rostered, and the student’s First Contact survey status and completion date. A majority of the pages in Educator Portal that include tables were reorganized to take better advantage of horizontal space, and all tables in Educator Portal were updated to a standard user interface. The roster creation workflow was also updated so that users first enter roster information (roster name and subject) and then roster location (state, district, and school). Lastly, the voice generator used to create the spoken audio for text-to-speech on all testlets was updated to a more lifelike voice at a standard reading speed.

4.3.2 User Experience With the DLM System

User experience with the spring 2021 assessments was evaluated through the spring 2021 survey, which was disseminated to all teachers who had a student rostered for DLM assessments. As in previous years, the survey was distributed to teachers in Kite Student Portal, where students completed assessments. Each student was assigned a survey for their teacher to complete. The survey consisted of four blocks. Blocks A and C, which provide information used for the validity argument and information about teacher background, respectively, are administered in every survey. Block B is spiraled, and teachers are asked about one of the following topics per survey: accessibility, relationship to ELA instruction, relationship to mathematics instruction, or relationship to science instruction. Block N was added in 2021 to gather information about educational context during the COVID-19 pandemic.

A total of 9,399 teachers responded to the survey (with a response rate of 62%) about 18,502 students’ experiences.

Participating teachers responded to surveys for a median of 1 student. Teachers reported an average of 10 years of experience teaching science and working with students with significant cognitive disabilities; the median was 8 years of experience in science and 7 years of experience with students with significant cognitive disabilities. Approximately 25% indicated they had experience administering the DLM science assessment in all six operational years.

The following sections summarize user experience with the system, remote assessment administration, and accessibility. Additional survey results are summarized in Chapter 9 (Validity Studies). Survey results pertaining to the educational experience of students during the COVID-19 pandemic are described by Accessible Teaching, Learning, and Assessment Systems (2021). For responses to the prior years’ surveys, see Chapter 4 and Chapter 9 in the respective technical manuals (Dynamic Learning Maps Consortium, 2019, 2020).

4.3.2.1 Educator Experience

Survey respondents were asked to reflect on their own experience with the assessments as well as their comfort level and knowledge administering them. Most of the questions required teachers to respond on a 4-point scale: strongly disagree, disagree, agree, or strongly agree. Responses are summarized in Table 4.4.

Nearly all teachers (94%) agreed or strongly agreed that they were confident administering DLM testlets. Most respondents (86%) agreed or strongly agreed that the required test administrator training prepared them for their responsibilities as test administrators. Most teachers also responded that they had access to curriculum aligned with the content that was measured by the assessments (86%) and that they used the manuals and the Educator Resources page (90%).

Table 4.4: Teacher Responses Regarding Test Administration
Statement SD n (%) D n (%) A n (%) SA n (%) A+SA n (%)
I was confident in my ability to deliver DLM testlets 101 (1.6) 267 (4.1) 2,686 (41.3) 3,448 (53.0) 6,134 (94.3)
Required test administrator training prepared me for the responsibilities of a test administrator 254 (3.9) 647 (10.0) 3,211 (49.5) 2,370 (36.6) 5,581 (86.1)
I have access to curriculum aligned with the content measured by DLM assessments 219 (3.4) 664 (10.3) 3,407 (52.7) 2,180 (33.7) 5,587 (86.4)
I used manuals and/or the DLM Educator Resource Page materials 153 (2.4) 470 (7.2) 3,588 (55.3) 2,276 (35.1) 5,864 (90.4)
Note: SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

4.3.3 Remote Assessment Administration

Two questions on Block N of the survey asked test administrators where their student took assessments this year and, if the student took any tests remotely (i.e., at a location other than school but with a trained test administrator present), what their remote testing experience was like. As a reminder, the DLM policy on virtual assessment administration required an in-person test administrator, but that administration was not required to occur in school. Table 4.5 summarizes teacher responses regarding the setting of test administration. Most teachers (95%) responded that DLM assessments were administered to the student at school. Table 4.6 summarizes teachers’ responses about the experience of students who took DLM assessments remotely. Of the students who took assessments remotely, relatively few (less than 17% for any single circumstance, or about 3% of all students) used different accessibility supports than they would normally have access to, experienced technology difficulties, had to respond in a less preferred response mode, and/or had someone other than the teacher administer the assessments remotely (e.g., a paraeducator or other qualified test administrator).

Table 4.5: Teacher Responses Regarding Administration Setting
Setting n %
At school 17,077 94.7
At home 344 1.9
Testing facility not at school 142 0.8
Other 100 0.6
Not applicable 375 2.1

Table 4.6: Teacher Responses Regarding Circumstances Applicable to Remote Testing
Circumstance Yes n (%) No n (%) Unknown n (%)
Student used different accessibility supports when testing remotely than at school 554 (16.9) 2,373 (72.3) 356 (10.8)
Student experienced technology difficulties during assessments taken remotely 376 (10.6) 2,871 (81.0) 297 (8.4)
Student had to respond in a less preferred response mode because of remote arrangements 354 (10.3) 2,797 (81.1) 299 (8.7)
Someone other than the teacher administered the assessments remotely 202 (5.6) 3,176 (87.6) 249 (6.9)

4.3.4 Accessibility

Accessibility supports provided in 2020–2021 were the same as those available in previous years. The DLM Accessibility Manual (Dynamic Learning Maps Consortium, 2021c) distinguishes among accessibility supports that are provided in Kite Student Portal via the Personal Needs and Preferences Profile, supports that require additional tools or materials, and supports that are provided by the test administrator outside the system.

Table 4.7 shows selection rates for the three categories of accessibility supports. The most commonly selected supports were human read aloud, test administrator enters responses for student, and individualized manipulatives. For a complete description of the available accessibility supports, see Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

Table 4.7: Accessibility Supports Selected for Students (N = 75,714)
Support n %
Supports provided in Kite Student Portal
Spoken audio 15,172 20.0
Magnification 9,835 13.0
Color contrast 6,527 8.6
Overlay color 3,201 4.2
Invert color choice 2,083 2.8
Supports requiring additional tools/materials
Individualized manipulatives 35,577 47.0
Calculator 23,038 30.4
Single-switch system 2,695 3.6
Alternate form - visual impairment 1,795 2.4
Two-switch system 920 1.2
Uncontracted braille 44 0.1
Supports provided outside the system
Human read aloud 66,920 88.4
Test administrator enters responses for student 46,211 61.0
Partner assisted scanning 6,938 9.2
Language translation of text 1,260 1.7
Sign interpretation of text 1,122 1.5

Teachers were asked whether the student was able to effectively use available accessibility supports and whether the accessibility supports were similar to the ones used for instruction. The majority of teachers agreed that students were able to effectively use accessibility supports (93%).

Of the teachers who reported that their student was unable to effectively use the accessibility supports (7%), the most commonly reported reason was that the student could not provide a response even with the support provided (54%). These data are shown in Table 4.8.

Table 4.8: Reason Student Was Unable to Effectively Use Available Accessibility Supports
Reason n %
Even with support, the student could not provide a response 348 54.5
The student needed a support that wasn’t available or allowed 174 27.2
The student was unfamiliar with the support 114 17.8
The student refused the support during testing 99 15.5
There was a technology problem (e.g., KITE display, AAC device) 29 4.5

4.3.5 Data Forensics Monitoring

During the spring 2021 administration, two data forensics monitoring reports were made available in Educator Portal. The first report includes information about testlets completed outside of normal business hours. The second report includes information about testlets that were completed within a short period of time.

The Testing Outside of Hours report allows each state education agency to specify the days, and the hours within a day, during which testlets are expected to be completed. For example, a state could elect to flag any testlet completed outside of Monday through Friday from 6:00 a.m. to 5:00 p.m. local time. The Testing Outside of Hours report then identifies students who completed assessments outside of the defined expected hours. Overall, 2,735 science testlets (1%) were completed outside of the expected hours by 2,112 students (7%).

The Testing Completed in a Short Period of Time report identifies students who completed a testlet within an unexpectedly short period of time. The threshold for inclusion in the report was a testlet completion time of less than 30 seconds. The report is intended to help state users identify potentially aberrant response patterns; however, there are many legitimate reasons a testlet may be submitted in a short period of time. Overall, 9,235 testlets (3%) were completed in a short period of time by 2,527 students (8%).
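
For illustration, both flagging criteria reduce to simple checks on testlet time stamps. The sketch below assumes a hypothetical state configuration (Monday through Friday, 6:00 a.m. to 5:00 p.m.) and illustrative field names; it is not the Educator Portal implementation.

```python
# Minimal sketch of the two data forensics flags described above: testing
# outside of state-defined hours and testlet completion in under 30 seconds.
# Field names and the expected-hours configuration are illustrative only.
from datetime import datetime

# Example state configuration: Monday-Friday (weekdays 0-4), 6:00 a.m. to 5:00 p.m. local time
EXPECTED_DAYS = {0, 1, 2, 3, 4}
EXPECTED_START_HOUR = 6
EXPECTED_END_HOUR = 17
SHORT_COMPLETION_SECONDS = 30

def outside_expected_hours(completed_at: datetime) -> bool:
    """Flag testlets completed outside the state-defined days and hours."""
    return (
        completed_at.weekday() not in EXPECTED_DAYS
        or not (EXPECTED_START_HOUR <= completed_at.hour < EXPECTED_END_HOUR)
    )

def short_completion(started_at: datetime, completed_at: datetime) -> bool:
    """Flag testlets completed in less than 30 seconds."""
    return (completed_at - started_at).total_seconds() < SHORT_COMPLETION_SECONDS

# Example usage
start = datetime(2021, 4, 10, 20, 15, 0)   # Saturday, 8:15 p.m.
end = datetime(2021, 4, 10, 20, 15, 20)
print(outside_expected_hours(end))   # True: weekend and after hours
print(short_completion(start, end))  # True: 20 seconds
```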

4.4 Conclusion

During the 2020–2021 academic year, the DLM system was available during two testing windows: an optional instructionally embedded window and the spring window. Administration evidence was collected in the form of administration time data and adaptive delivery results. State education agencies received a file regarding a science scoring incident. Implementation evidence was collected in the form of teacher survey responses regarding user experience, remote assessment administration, accessibility, and Personal Needs and Preferences Profile selections. New data forensics monitoring reports were made available to state education agencies in Educator Portal.