08/19/2024 | Press release | Distributed by Public on 08/19/2024 14:44
Establishing strong school partnerships
Mount St. Joseph University has partnered with a local school district to embed early field experiences in local schools. Teacher candidates attend their math methods courses, taught by Mount St. Joseph faculty, on-site in the partner schools. Later in the program, candidates take their structured literacy course with an embedded practicum at the same school, under the direction of MSJ faculty and working with young children recommended by the school's teachers and administration.
The partnership clearly benefits the university by providing professors and candidates with consistent access to real students in real classrooms. However, as in all great partnerships, the district reaps benefits as well. In the early stages, school staff noticed practicum teachers from the program were having a positive impact on students' reading, including students who had long struggled with reading. They collected data and shared it with school leaders. As a result, the curriculum director himself took a training course on the science of reading and has revamped the school's entire approach to reading.
Despite this success, Dr. Saylor explains that her program did not enter the school with an agenda to change its instructional approach. Rather, faculty aimed to be supportive and helpful.
"We collaborate closely and purposefully with our school partners, and many of our current relationships feel like true partnerships among colleagues."
- Dr. Laura Saylor, Mount St. Joseph University
Field experiences early and often
The program seeks to ensure that candidates have numerous opportunities to hone their teaching skills in a range of settings, at least one of which must be in an urban school.
Provide a lesson planning template to guide instruction
The program requires that candidates use a specific lesson planning template. The template first asks candidates to detail standards and important information about students (to set the context), then asks for assessment data to drive instructional decisions.
One notable feature of this template is that the program requires candidates to support their instructional decisions by citing research. The faculty encourage candidates to be very specific in their application of the research. For example, a candidate cannot simply reference "Anita Archer" (who has written on explicit instruction among other topics), but rather needs to be precise about how they plan to implement explicit instructional techniques.
Planning for the lesson's methods comes after all of this initial planning, so the lesson is driven by the standards, students' needs, and students' progress. Lessons must reflect the research, rather than focusing on a fun activity that a teacher found online.
Observation instrument aligned with expectations for in-service teachers
The program developed an observation instrument known as the Dispositional, Instructional, Content-Specific Evaluation (DICE). This instrument addresses the same topics as the state's observation instrument for in-service teachers, but with modified expectations more appropriate for student teachers. Candidates are rated on a five-point scale (zero to four). A score of four identifies a top student teacher, one performing like a strong novice teacher in the field rather than a highly effective veteran.
The program is also careful to guard against score inflation on the DICE: candidates are warned not to expect scores of three or four on their first observations. The program has found that cooperating teachers and supervisors, too, tend to give higher scores than are warranted, and it pushes them to score more accurately, since honest scoring supports candidates' growth. To keep scoring consistent, the program offers in-person training for cooperating teachers and program supervisors on how to use the instrument and makes a recorded training available online as well. All observers complete a sample observation; if their scores aren't accurate, the program keeps working with them.