11 Conclusion and Discussion
The Dynamic Learning Maps® (DLM®) Alternate Assessment System is based on the core belief that all students should have access to challenging, grade-level academic content. The DLM assessments provide students with the most significant cognitive disabilities the opportunity to demonstrate what they know and can do. The DLM system is designed to map students’ learning after a full year of instruction.
The DLM science assessment was administered operationally for the sixth time in 2020–2021. This technical manual update provides updated evidence from the 2020–2021 year. Due to the COVID-19 pandemic, limited studies were conducted to produce new evidence for the validity argument. COVID-19 also impacted participation, educational experiences, and performance (see Accessible Teaching, Learning, and Assessment Systems, 2021). Score reports included caveat language to support interpretation, urging caution when using results. As advised by the U.S. Department of Education, results should primarily be used “to provide information to parents, educators, and the public about student performance and to help target resources and supports” (Rosenblum, 2021). Because of this, the intended uses of the results were different than what would be intended under a normal administration year. Thus, the contents of this manual are meant to describe changes to the DLM Alternate Assessment System in 2020–2021 and the results that were provided, rather than contribute to the full assessment validity argument for the originally intended uses of results, including the inclusion of results in statewide accountability models. For evidence evaluating the originally intended uses, see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) and the subsequent technical manual updates (Dynamic Learning Maps Consortium, 2017, 2018a, 2018b, 2019). The contents of this manual are summarized in Table 11.1.
| Chapter | Description |
| --- | --- |
| 1 | Provides an overview of information updated for the 2020–2021 year |
| 2 | Not updated for 2020–2021 |
| 3 | Provides evidence collected during 2020–2021 of test content development and administration |
| 4 | Describes updates to the assessment administration |
| 5 | Describes the statistical model used to produce results based on student responses, along with a summary of item parameters |
| 6 | Not updated for 2020–2021 |
| 7, 8 | Describe results and analyses from the sixth operational administration |
| 9 | Provides additional studies from 2020–2021 focused on specific topics related to validity |
| 10 | Describes updates to professional development, participation rates, and evaluation results |
This chapter describes future research studies as part of ongoing and iterative processes of program responsiveness, validation, and evaluation.
11.1 Operational Assessment
As noted previously in this manual, 2020–2021 was the sixth year the DLM science assessment was administered operationally. The DLM Consortium is committed to continual improvement of assessments, teacher and student experiences, and technological delivery of the assessment system. Through formal research and evaluation as well as informal feedback, some improvements have already been implemented for 2021–2022.
Overall, there were no significant changes from previous years to 2020–2021 in the item-writing procedures, item flagging outcomes, standard setting, or the modeling procedure used to calibrate and score assessments.
Studies planned for 2021–2022 to provide additional validity evidence are summarized in the following section.
11.1.1 Future Research
The continuous improvement process also leads to future directions for research to inform and improve the DLM Alternate Assessment System in 2021–2022 and beyond. This manual identifies several areas for further investigation.
In 2021–2022, we will pilot a new method for collecting students’ opportunity to learn data and continue to collect data on students’ educational experiences amidst the COVID-19 pandemic. We also plan to collect parent feedback on score report contents and interpretability.
Additional Kite® enhancements will be implemented in future years, including updates based on teacher feedback to the Instruction and Assessment Planner used to create instructional plans and assign instructionally embedded testlets.
Other ongoing operational research is also anticipated to grow as more data become available. All future studies will be guided by advice from the DLM Technical Advisory Committee (TAC) and the state partners, using established processes.