
The Harry H. Henney ’35 & Jeanette Henney Department of Education

The Harry H. Henney ’35 & Jeanette Henney Department of Education at Manchester prepares students for excellence in the field of teaching.

Students can choose from a variety of majors, including Educational Studies and teaching degrees. Through this rigorous program, Manchester graduates effective professionals prepared to work with children and young adults in diverse settings.

The Education Program embraces the University's liberal arts curriculum as well as the State of Indiana's licensing requirements. Manchester is accredited by the Council for the Accreditation of Educator Preparation (CAEP).

Accreditation

The Harry H. Henney ’35 & Jeanette Henney Department of Education is fully accredited by the Council for the Accreditation of Educator Preparation (CAEP). To ensure that Manchester University not only meets state and federal standards but also continues to provide its students with a quality program, the department completes several reports annually. The data collected through these reports assist the department in improving its program.

Measure 1: Completer Effectiveness & Impact on P-12 Student Learning

Completer Survey Data

Each year, the Indiana Department of Education sends surveys to administrators, as well as EPP completers, to determine the completers’ impact on P-12 student learning. Due to the low number of returns, the EPP began sending out its own surveys in hopes of obtaining additional feedback on how well it meets Measure 1: Completer Impact and Effectiveness. The following links provide the completer survey results from the Indiana Department of Education and the EPP.

EPP Survey Results:

Completer Survey Data 2016
Completer Survey Data 2017
Completer Survey Data 2018
Completer Survey Data 2019
Completer Survey Data 2020
Completer Survey Data 2021
Completer Survey Data 2022
Completer Survey Data 2023

Indiana Department of Education Results:

DOE Completer Satisfaction Data 2019
DOE Completer Satisfaction Data 2020
DOE Completer Satisfaction Data 2021
DOE Completer Satisfaction Data 2022
DOE Completer Satisfaction Data 2023
DOE Completer Satisfaction Data 2024

The EPP uses data from completer and employer surveys, edTPA outcomes, Praxis II scores and pass rates, Danielson Rubric feedback, and results from several key assessments each year to form a holistic view of areas needing improvement, areas of growth, and areas of continued success. In addition, the EPP presents this data to the Teacher Education Committee and the Teacher Advisory Council for stakeholder input on ways to address deficits and on proposed improvements.

Observation Data

During the student teaching semester, teacher candidates receive feedback from their cooperating teacher(s) and university supervisors using the Danielson Rubric. Consistent feedback with this tool gives teacher candidates guidance on areas for growth throughout the semester and gives the EPP information on the candidates’ impact on P-12 students.

Danielson 2017 Student Teachers
Danielson 2018 Student Teachers
Danielson 2019 Student Teachers
Danielson 2020 Student Teachers
Danielson 2021 Student Teachers
Danielson 2022 Student Teachers
Danielson 2023 Student Teachers
Danielson 2024 Student Teachers


Completer Effectiveness

Each year, the Indiana Department of Education sends surveys to administrators of EPP completers with one year of teaching experience to help determine Measure 1: Completer Impact and Effectiveness. Below are the results from these surveys. Please note that any category with fewer than 10 responses is not reflected, for privacy reasons.

2017 Teacher Effectiveness
2018 Teacher Effectiveness
2019 Teacher Effectiveness
2020 Teacher Effectiveness
2021 Teacher Effectiveness
2022 Teacher Effectiveness
2023 Teacher Effectiveness
2024 Teacher Effectiveness


Teacher Candidate Impact on Student Learning

Prior to the implementation of edTPA, which began with the 2021 completers, the EPP used its own Senior Comprehensive Exam (SCE) rubric to evaluate teacher candidates’ impact on P-12 student learning during the student teaching semester. The EPP adopted edTPA in 2021 to obtain a reliable and valid measurement of that impact.

Results Prior to Implementation of edTPA
SCE Impact on Student Learning
EPP Report 2019 Manchester
2019 Impact on Student Learning Data

edTPA Results Disaggregated by Licensure Level
2020-21 Impact on Student Learning Data
2021-22 Impact on Student Learning Data
2022-23 Impact on Student Learning Data
2023-24 Impact on Student Learning Data


Measure 2: Satisfaction of Employers & Stakeholder Involvement

Employer Satisfaction of Completers

Each year, the Indiana Department of Education sends surveys to administrators of EPP completers to gauge their satisfaction with the completers’ preparation. Due to the low number of returns, the EPP began sending out its own surveys in hopes of obtaining additional feedback on how well it meets Measure 2: Satisfaction of Employers & Stakeholder Involvement. The following links provide the employer survey results from the Indiana Department of Education and the EPP.

EPP Survey Results:

Employer Survey Data 2016
Employer Survey Data 2017
Employer Survey Data 2018
Employer Survey Data 2019
Employer Survey Data 2020
Employer Survey Data 2021
Employer Survey Data 2022

Indiana Department of Education Survey Results:

DOE Employer Satisfaction Data 2018
DOE Employer Satisfaction Data 2019
DOE Employer Satisfaction Data 2020
DOE Employer Satisfaction Data 2021
DOE Employer Satisfaction Data 2022


Teacher Education Committee

Membership

The Teacher Education Committee is composed of:

  • the Chair of the Department of Education
  • the Director of Teacher Education
  • four to five faculty members representing two or more colleges with teacher licensure programs, seeking a balance in representation among all-grade, secondary, and elementary
  • the Registrar or designee
  • two students majoring in teacher education, representing two different licensing levels
  • if the Chair of the Department is also the Director of Teacher Education, one additional representative from the Education Department will be appointed

Chair

The Director of Teacher Education shall act as chair of the Teacher Education Committee.

Duties

Policy formation, advisory, external liaison, appellate jurisdiction

  1. to serve as liaison between the Manchester University faculty and the external licensing and accrediting agencies.
  2. to review as needed, at least every five years, all teaching certification patterns to determine their effectiveness in meeting:

(1) the needs of prospective teachers,
(2) the graduation requirements of Manchester University,
(3) the requirements of the certification bulletin of the State of Indiana,
(4) the accreditation standards of appropriate accrediting organizations.

  3. to recommend policy regarding the teacher education program and teacher certification patterns.
  4. to make minor modifications in teacher certification patterns after consultation with the department(s) affected, within the guidelines established by Manchester University and the certification bulletin of the State of Indiana.
  5. to review recommendations from the director of teacher education regarding all candidates for admission to teacher education, student teaching and final certification. Student representatives will not function in this duty.
  6. to serve as an appellate body for students who have not been recommended by the director of teacher education. Student representatives will not function in this duty.
  7. to advise the director of teacher education.

Input and Output

  1. The committee may receive proposals from departments, divisions, and academic officers.
  2. The committee shall report to the faculty through the chair. Policy items may be brought to the faculty via the Executive Committee or forwarded to other committees.
  3. The committee shall process through the Academic Policies Committee all recommendations for adding and dropping teacher certification patterns, with final action to be taken by the faculty.
  4. The chair shall report promptly to the faculty minor modifications in teacher certification patterns.
  5. The committee shall submit a copy of the minutes to the chair of the Department of Education to keep the chair fully informed of committee action. The chair shall also submit a copy of the minutes, excluding executive sessions, for the file in the Office of Academic Affairs.

Measure 3: Candidate Competency at Completion

Graduation Rates

Each academic year, the Henney Department of Education collects data on graduation rates of teacher candidates admitted to the teaching programs. The tables below compare the cohort of students admitted into the program with those who graduate with a degree in education. It should be noted that some students switch from the licensure track to Educational Studies after being admitted into the program, as they prefer to teach in a non-traditional setting, often choosing graduate school or working with children or young adults outside the classroom. The following data reflect the number of students graduating in our licensure-track program.

Additionally, beginning with the 2018 graduates, the Henney Department of Education implemented a policy requiring teacher candidates to pass their content licensure tests before earning permission to student teach. As the data show, this initially resulted in a drop in teaching graduates, though the graduation rate is now trending upward. The EPP is confident the content exam policy is an important one, and it values the shift in culture that has taken place: teacher candidates are preparing for and taking the tests earlier, giving them more opportunities to retake and pass the tests if needed. While this affects the graduation rate, administrators and clinical faculty, as well as other stakeholders, affirm that it ensures the preparedness of student teachers.

Ability to be Licensed

Annual Reports

In compliance with federal regulations, the following links include Manchester University’s Educator Preparation Provider (EPP) Annual Reports, Traditional Title II reports and Alternative Title II reports.

Annual Reports
2008-2009
2009-2010
2010-2011
2011-2012
2012-2013
2013-2014
2014-2015
2015-2016
2016-2017
2017-2018
2018-2019
2019-2020
2020-2021
2021-2022
2022-2023
2023-2024

Traditional Title II Reports
2013
2014
2015
2016
2017
2018
2019
2020
2021
2022
2023
2024

Alternative Title II Reports
2013
2014
2015
2016
2017
2018
2019
2020
2021
2022
2023
2024

Measure 4: Ability to be Hired in Area of Licensure

Employment Rates

The following data on the employment of Henney Department of Education graduates are collected by the Manchester University Office of Institutional Effectiveness through surveys sent to recent graduates of the institution. Employment and all other data collected are self-reported.

Since 2016, over 90% of graduates of the program have reported being employed or continuing their education in graduate school. Only 75% of respondents reported employment in 2015; the EPP is unsure why the 2015 rate fell outside the 90-100% range seen in the other years.

CAEP 2019 Self Study

List of Accredited Initial Programs CAEP Visit 2019

K-6: Elementary Education
5-12: English/Language Arts, Social Studies, Mathematics
P-12: Physical Education, Health and Physical Education, Special Education, High Ability

Standard 1: Content and Pedagogical Knowledge

Notes from CAEP:

Making a case: In Standard 1, the provider makes the case for candidate competencies at the point reached by exit from the program through data from common assessments. The EPP argues that candidates prepared in initial programs can effectively engage with all P-12 students and are competent in the four InTASC categories (the learner and learning; content; instructional practice; and professional responsibility) and that they are prepared in their specialty/licensure area (components 1.1 and 1.3). Candidates prepared in advanced programs apply their knowledge and skills so that learning and development opportunities for P-12 students are enhanced through data literacy, use of research, data analysis and evidence, collaborations with colleagues and community, appropriate use of technology for the candidate’s field, and applications of professional dispositions, laws, and policies.

The provider demonstrates that candidates are able to apply the necessary knowledge and skills for success in their own professional P-12 practice, including use of research and evidence (component 1.2), a commitment to challenging college- and career-ready level standards for all their students (component 1.4), and appropriate use of technology in instruction (component 1.5).

Initial candidates’ ability to teach diverse students effectively, adapting their repertoire of skills as needed, is an overarching theme for Standard 1. For advanced preparation candidates, the principal focus is professional knowledge and skills that equip them to support the needs of diverse P-12 student learners through their specialty field.

The guiding questions for initial and advanced preparation may help focus the selection of evidence and the EPP’s inquiry into its message:

How do candidates:

  • demonstrate an understanding of the 10 InTASC standards at the appropriate progression level(s) in the following categories: the learner and learning; content; instructional practices; and professional responsibility?
  • use research and evidence to measure their P-12 students’ progress and their own professional practices?
  • apply content and pedagogical knowledge as reflected in outcomes assessments in response to standards of Specialized Professional Associations (SPA), the National Board for Professional Teaching Standards (NBPTS), states, or other accrediting bodies (e.g., the National Association of Schools of Music, NASM)?
  • demonstrate skills and commitment that afford all P-12 students access to rigorous college- and career-ready standards (e.g., Next Generation Science Standards, National Career Readiness Certification, Common Core Standards)?
  • apply technology standards as they design, implement, and assess learning experiences to engage students and improve learning; and enrich professional practices?
  • ensure that candidates use research and evidence to develop an understanding of the teaching profession?

The EPP should reflect on:

  • STRENGTHS AND CHALLENGES: What strengths and areas of challenge have you discovered about candidate content and pedagogical knowledge and its applications as you analyzed and compared the results of your disaggregated data by program and by demographics? What questions have emerged that need more investigation? How are you using this information for continuous improvement?
  • TRENDS: What trends have emerged as you compared program and demographic data about candidate content and pedagogical knowledge and its applications across evidence sources and programs? What questions have emerged that need more investigation? How are you using this information for continuous improvement?
  • IMPLICATIONS: What implications can you draw or conclusions can you reach across evidence sources about candidate content and pedagogical knowledge and its applications? What questions have emerged that need more investigation? How have data-driven decisions on changes been incorporated into preparation?

Through the lens of CAEP Standard 1: Content and Pedagogical Knowledge, the MU EPP reflects consistently on its program with a focus on the following questions:

  1. Do candidates adequately demonstrate progression in understanding the four InTASC categories: the learner and learning; content; instructional practices; and professional responsibility? (CAEP 1.1, 1.2, 1.3, 1.4, 1.5)
  2. Do MU program completers demonstrate proficient content knowledge and understanding of engaging pedagogy (e.g., inquiry) and ongoing, standards-aligned assessment to ensure K-12 students’ mastery of rigorous academic standards? (CAEP 1.1, 1.4, 1.5)
  3. Do MU program completers intentionally design technology-infused, student-centered learning experiences inclusive of all students, with the goal of deepening students’ understanding of content and skills? (CAEP 1.3, 1.4, 1.5)
  4. Do Manchester University program completers professionally and effectively use evidence and peer-reviewed research to make pedagogical decisions that impact student learning and improve their own professional understanding of teaching? (CAEP 1.2, 1.3, 1.4, 1.5)

Summary Statement:

As candidates progress through the Manchester University EPP, they must develop a deep comprehension of content knowledge and pedagogy and the ability to apply them. Several evidence packets support the EPP’s analysis of CAEP Standard 1, including the Danielson, Candidate Recruitment and Completion (CRC), SPA Reports, and Employer and Completer Satisfaction (ECS) packets. Reviewers will see corresponding evidence packet titles in parentheses where appropriate.

Standard 1.1 Candidates progress through a program in which the InTASC framework is intentionally supported; all required courses, key assessments, formal observation forms, and disposition rubrics reflect the ten InTASC standards. With multiple measures, the EPP is confident in completers’ understanding of four areas: the learner and learning, content, instructional practices, and professional responsibility. Table 1A provides the alignment between the InTASC standards and courses required of all majors, and it demonstrates the intentionality of the program in preparing candidates. Because candidates must maintain a minimum 2.5 GPA overall and in their majors, the program ensures an adequate understanding of the InTASC standards as well as comprehension of content knowledge and pedagogy.

Beyond required mastery of coursework, EPP assessments supporting candidates’ understanding of the learner and learning include the Danielson Framework used to evaluate student teaching (Danielson packet) and candidate performance on the integrated unit plan included in the SCE: Impact on Student Learning capstone project (SCE packet), both of which require candidates to differentiate lessons and measure impact on P-12 students’ learning. Additionally, GPA comparisons reflect candidates’ progression through required coursework (CRC packet).

Candidates’ content knowledge is demonstrated through assessments such as the program admission criteria (CRC packet). In accordance with the Indiana Rules for Educator Preparation and Accountability (REPA), candidates must obtain a minimum passing score on the CASA in reading, mathematics, and writing. The CASA provides evidence of application of the basic skills needed by educators.

Prior to the 2017-2018 school year, the EPP did not require the successful completion of content exams before student teaching. To assure clinical faculty of the candidates’ content knowledge, the EPP now requires candidates to pass the Pearson Content Area Assessments to earn final approval for student teaching. These content exams include the elementary education generalist, which consists of four subtests: reading and English language arts; mathematics; science, health, and physical education; and social studies and fine arts. Secondary and all-grade teacher candidates must pass the Content Area Assessment aligned with their content of study, such as English Language Arts, Mathematics, or Social Studies: Historical Perspectives. The EPP disaggregates test data by licensure area. The Praxis II and Pearson Content Exam Table provides a summary of pass rates for subject-specific licensure tests, comparing MU candidates’ scores with state averages (CRC packet).

The EPP evaluates candidates’ instructional practices through the development of curriculum and the appropriate teaching of content.  In the fall of 2016, the EPP moved from a holistic grading of candidates’ knowledge and skills during student teaching to the Danielson Framework to track criterion-specific skills; published research attests to the validity and reliability of the Danielson Framework.  While the EPP still uses the holistic evaluation tool to assign a student teaching grade, all supervising instructors and clinical faculty complete the Danielson framework for assessment purposes (Danielson packet).  This research-based evaluation tool and trained supervisors provide a more reliable approach to evaluating candidates’ instructional practices.

Professional responsibility develops over the course of the program and is measured through dispositional rubrics developed by the EPP (Attachment CAEP 2A Candidate Profiles for Recommendation to Program). Progression through the program requires candidates to demonstrate improvement, as they should earn ratings of proficient or distinguished in the first five dispositions by the midway point of their sophomore year. Clinical faculty also evaluate candidates’ professional stance through an EPP-developed evaluation form. Interviews with the DTE occur prior to admission to the program and again when the candidate seeks permission to student teach; at these points, the DTE and candidate review feedback on the candidate’s dispositions and create a remediation plan if it is needed (CRC packet). The Checkpoint Matriculation Data table included in the Program Admission and Completion packet provides disaggregated data regarding candidates who begin the program versus those who finish the program. The data are based on disposition rubrics (professional responsibility) as well as GPA and test scores (content knowledge).

Standard 1.2 While the MU EPP understands that only a few teacher candidates may enter the field of educational research, it believes all educators, whether kindergarten teachers or high school English teachers, must at minimum be good consumers of trends in pedagogy. In addition, teacher candidates should be able to consider the relationship between the pedagogical decisions they make and the impact they have as educators on their students. As a result, the most comprehensive use of research occurs during the candidates’ student teaching experience as they conduct an impact on student learning action research project, which requires them to design a research-based, standards-driven unit plan to measure their students’ academic growth. Not only do they investigate peer-reviewed, research-based pedagogy, but they also write a literature review, teach the unit in their student teaching setting, and analyze the collected data. This capstone project builds upon the skills developed throughout the entirety of the program; instead of designing a hypothetical unit plan, candidates design, teach, and analyze the unit in an authentic setting: student teaching. A rubric developed by the EPP is used to evaluate the impact on student learning capstone project (SCE packet).

Education faculty infuse opportunities for candidates to read peer-reviewed research articles as well as use published and original research to deepen their understanding of pedagogy and the impact instruction has on student performance.  The following examples illustrate a variety of ways required courses support experiences with research and evidence to measure student progress. In EDUC 111 and EDUC 211, candidates explore teaching practices based on scientific research.  In EDUC 237, teacher candidates examine the different parts of a research article including the literature review and methodology.  They work to analyze the purpose of the research as well as the application of the study for a classroom teacher.  Since the last accreditation cycle in 2011, the EPP has introduced a new assessment course (EDUC 245) required of all teacher candidates.  In EDUC 245, candidates examine the construction, reliability, and validity of assessments.

To prepare candidates for successful student teaching and completion of the SCE, candidates enroll in EDUC 362 in the fall of their junior year, a course in which they use research to support lesson practices, literature choices, and strategy usage. Their culminating project is a research-based action plan. In the spring, junior candidates enroll in either EDUC 342 (secondary and all-grade candidates) or EDUC 340 (elementary candidates), which builds upon that foundation. The final project is an integrated unit plan that incorporates peer-reviewed research to support pedagogical decisions, assessment selections, and adaptations/modifications (SCE packet). This scaffolded experience has given the program increased confidence in candidates’ content and pedagogical understanding.

Standard 1.3 To obtain an Indiana teaching license, candidates must pass the Pearson content examination for their content area(s) as well as the appropriate Pearson pedagogy exam (CRC packet). As explained in Standard 1.1, the EPP now requires candidates to pass all content exams prior to student teaching. Table 3 in the Candidate Recruitment and Completion (CRC) packet provides data on candidates’ pass rates for all Pearson exams, both content and pedagogy. While the program has only had the opportunity to collect one cycle of data, the initial impact was significant: 74 percent of the cohort either self-selected out or did not pass the content exams prior to student teaching.

The EPP has collaborated with faculty within the disciplines to submit appropriate SPA reports for programs with more than five completers for the period of 2012-2013, 2013-2014, and 2014-2015. Each program has submitted an alignment of its content with the standards of its discipline’s professional organization, such as the National Council of Teachers of English (NCTE). To intentionally align programs with standards, each syllabus identifies the content and pedagogical standards supported by course content (SPA Reports packet). Besides periodic review throughout the academic year, the EPP holds a full-day retreat at the end of each academic year to examine the scope and sequence of the content; additionally, it revisits the alignment of courses to the InTASC standards, the content standards for each SPA report, and the CAEP standards.

Besides measuring candidates’ knowledge, the EPP also measures their ability to apply that knowledge and pedagogy. During student teaching, candidates complete the Impact on Student Learning Project (SCE packet), which requires them to apply content and pedagogical knowledge in an authentic teaching experience. They analyze their impact on student learning through the lens of pedagogical best practice, collecting data before the unit is taught as well as after it has concluded. Using peer-reviewed research, they analyze the effectiveness of their teaching and make suggestions for future instruction. Two members of the EPP use an EPP-developed rubric to evaluate the capstone projects.

Clinical faculty and university supervisors observe teacher candidates during the student teaching experience and complete the content-specific student observation tool, a discipline-specific rubric designed to evaluate the teacher candidate’s ability to teach content effectively (CRC packet). Using the standards identified by the discipline’s professional organization, the EPP can determine application of specific content knowledge.

For the last three cycles of student teachers, the EPP has implemented the use of the Danielson Framework observation tool (Danielson packet). The Danielson provides both the cooperating teacher and the university supervisor a valid and reliable tool for evaluating student teachers in multiple domains including content knowledge and pedagogy.  Through student teaching observations, the EPP documents the teacher candidates’ content and pedagogical understanding.

Since the implementation of CAEP accreditation in the state of Indiana, the Indiana Department of Education has collected employer survey data (ECS packet). While the return rate has been extremely low, with only 7 employers of MU completers returning surveys, the IDOE has provided the EPP with employer satisfaction data. Questions on the survey correspond with the InTASC standards.

Standard 1.4 Throughout the teacher preparation program, the EPP scaffolds the development of lesson plans and unit plans focused on the Indiana state content standards. Beginning in EDUC 245: Educational Assessment, teacher candidates unpack the academic standards, create measurable learning objectives, and design appropriate and authentic assessments by which to measure those objectives. In the fall of their third year in the program, all teacher candidates enroll in EDUC 362: Literacy and English Language Learners, a course that builds upon curriculum design and assessment related to the rigorous Indiana academic standards. Teacher candidates not only design a standards-based lesson plan but also record themselves teaching the lesson and then reflect on the process, self-evaluating the effectiveness of their teaching of the academic standards. When they complete their junior interview with the Director of Teacher Education, teacher candidates must reflect on their experience of designing lessons and assessments and teaching them.

Supported by their development in previous required courses, teacher candidates create an integrated unit plan during their literacy class (EDUC 340: Literacy Block or EDUC 342: Literacy in the Content Area). This assignment requires teacher candidates to write a one- to two-week unit plan incorporating individual, standards-based lesson plans that focus on a research-based best practice. The finished plan includes pre- and post-assessments, individual plans, and the supporting documents required to teach the unit. Ultimately, it serves as the foundation for the SCE Impact on Student Learning capstone project completed the following year (SCE packet).

As previously mentioned, the Indiana Department of Education recently began collecting employer survey data; despite the low return, the survey does hold great potential for providing feedback to the EPP.

Standard 1.5 Question 14 on the employer survey specifically asks employers to evaluate completers’ ability to design, implement, and assess learning experiences according to technology standards (ECS packet). Because Indiana does not have a statewide approach to integrating technology, administrators’ perceptions of completers’ use of technology may vary. It should be noted that the EPP is taking strides to integrate technology throughout its program. By using Google Hangouts to connect with classrooms across the state and Twitter to chat with authors, the EPP introduces candidates to the integration of technology.

Based on feedback from the Teacher Advisory Council in the fall of 2017, the EPP created an e-learning assignment in the required literacy courses EDUC 340: Literacy Block and EDUC 342: Literacy in the Content Area; all teacher candidates take one of these courses. Teacher candidates modify a lesson plan from their unit plan by recasting it in the e-learning lesson plan format (SCE packet). Supported by a completer of the Manchester University program, preservice teachers have a model for an e-learning format that couples nicely with the program’s lesson plan format. Teacher candidates submit the same lesson plan in two formats: face-to-face and e-learning.

Equally important, for the past few years the EPP has hosted tech summits each fall to provide teacher candidates with current best practices for incorporating technology into their classrooms. Classroom teachers provide instruction through half-day professional development workshops required of all upperclassmen enrolled in education majors; these workshops provide authentic opportunities for teacher candidates to develop a deeper understanding of using technology to support K-12 students (SCE packet).

Summary Case for Meeting Standard 1:

Using multiple data points, the EPP has met CAEP Standard 1 by measuring candidates’ (1.1) understanding of the InTASC standards, (1.2) use of research and evidence to develop an understanding of the teaching profession and to measure their impact on P-12 students’ learning, (1.3) application of content and pedagogical knowledge, (1.4) demonstration of skills and commitment so that all P-12 students have access to rigorous standards, and (1.5) modeling and application of technology standards as they design, implement, and assess learning. Throughout the analysis of the evidence, clear strengths and challenges emerged.

Evidence for CAEP Standard 1 is located in corresponding evidence packets, and summaries of key findings include the following:

Strengths and Challenges

Based on data collected through the SCE Impact on Student Learning capstone project (SCE packet), candidates adequately demonstrate progression in understanding the four InTASC categories:  the learner and learning; content; instructional practices; and professional responsibility (Q1) (CAEP 1.1, 1.2, 1.3, 1.4, 1.5).

SCE: Due to the low numbers in the MU teaching program, trends are represented for candidates across all areas of licensure. Additionally, the data presented reflect EPP feedback on the initial submission of the project; therefore, the trends shown on the initial SCE evaluation tend to be lower than the EPP would like.

Candidates in the MU EPP have the following strengths: (A) Averages on criterion 8, Implications for Teaching and Professional Development (InTASC 2, 4; CAEP 2.3, 3.6), tend to be the highest (1.7/3.0); candidates consistently demonstrate the ability to analyze data, reflect on their impact on student growth, and make observations about their teaching and professional growth. Accordingly, candidates must consider how they used data to drive instruction and make pedagogical decisions based on data collected from assessments aligned with learning objectives.

The MU EPP has identified the following challenges: (A) The EPP will pay close attention to the performance of Physical Education and Health candidates, as their average scores tend to be lower than those of other disciplines. The 2017 comparison shows the PE and Health candidates’ scores ranged between 9.83 and 10, while the other disciplines ranged between 13 and 17. Because the Physical Education and Health department oversees its own program, the EPP will continue to work with the PE/Health faculty to infuse the knowledge and skills needed to successfully complete the SCE.

Danielson Framework: Based on analysis of the Danielson framework data, the SCE Impact on Student Learning capstone project, the content-specific student teaching rubric, and the Pearson content and pedagogy scores, MU program completers demonstrate proficient content knowledge and understanding of engaging pedagogy (e.g., inquiry) and ongoing, standards-aligned assessment to ensure K-12 students’ mastery of rigorous academic standards (Q2) (CAEP 1.1, 1.4, 1.5).

The EPP recognizes that MU program completers need additional opportunities to learn to intentionally design technology-infused, student-centered learning experiences inclusive of all students, with the goal of deepening students’ understanding of content and skills. Of the criteria identified on the surveys conducted by the EPP and the Indiana DOE, infusing curriculum with technology is an area in which candidates do not feel confident (Q3) (CAEP 1.3, 1.4, 1.5).

Reflecting on data collected for the SCE and the Danielson framework, MU program completers professionally and effectively use evidence and peer-reviewed research to make pedagogical decisions that impact student learning and improve their own professional understanding of teaching (Q4) (CAEP 1.2, 1.3, 1.4, 1.5). The EPP believes the implementation of the SCE and the Danielson Framework has increased the accountability of candidates.

Candidates in the MU EPP have the following strengths: (A) Overall performance of candidates tends to be relatively high. All but three of the components indicate candidates are proficient in the four domains: (1) Planning and Preparation; (2) Classroom Environment; (3) Instruction; and (4) Professional Responsibilities. (B) Similar to the information reflected in the employer satisfaction survey, candidates excel in relationships and recognize their professional responsibilities to P-12 students, parents, and colleagues. In the category Danielson 2a: Creating an Environment of Respect and Rapport, candidates averaged 3.45/4.0. On Danielson 4e: Growing and Developing Professionally and Danielson 4f: Showing Professionalism, candidates averaged 3.3/4.0. These three areas reflect the emphasis the EPP places on relationships; because candidates work closely with the four full-time faculty and the Field Experience and Assessment Coordinator, they see these elements modeled.

Candidates in the MU EPP have the following challenges: (A) Candidates seem to struggle to apply their experiences with developing assessments: they averaged only 2.88/4.0, a rating of basic, on Danielson 1f: Designing Student Assessments, and performance on Danielson 3d: Using Assessment in Instruction was not much better, with an average of 3.02/4.0, just above basic. These scores reflect the feedback from employers on the IDOE survey (Employer and Completer Satisfaction packet). The lowest score on that survey was on Q12, candidates’ ability to analyze student assessment data to improve classroom instruction; the average was 2.71/4.0, indicating employers disagree with the statement.

Candidate Recruitment and Completion: The particular evidence included in the Candidate Recruitment and Completion packet is the use of the Pearson CASA, content, and pedagogy exams required for progression through and completion of the MU teacher preparation program. Additionally, Table 3 in the CRC packet shows the number of candidates who were admitted to the program versus the number who actually completed the program.

According to the data, the MU EPP has the following strengths:

  • Candidates who complete the program have demonstrated the mastery of content knowledge needed to teach effectively. In 2017, nine of thirteen candidates originally admitted to the program were not given permission to student teach, primarily because of failure to pass the content exams. This number may alarm stakeholders and the institution; however, the EPP believes it will lead to a stronger cohort of completers. Candidates demonstrating comprehension of content knowledge assure clinical faculty that the focus can be on pedagogy and classroom management.
  • The dispositions measured provide a positive picture of the candidates overall. For all three cohorts measured, scores tend to be high, ranging from 2.88 to 4.0 out of 4.0 if the one outlier of a low-performing PE candidate is removed. While no apparent trend distinguishes licensure areas, it is important to note candidates score at proficient or distinguished in most of the categories. They score especially high in patience and respectful attitude.

Employer and Completer Satisfaction:  While only 7 employers submitted the survey administered by the Indiana Department of Education, the EPP finds its feedback valuable.

The data show that one employer was extremely dissatisfied with one of the completers: on every question, that employer’s responses ranged from strongly disagree to disagree. The other 6 responses, however, indicate strengths in the program:

  • Completers work effectively with other professionals (Q17) and with parents/guardians (Q18). Both of these questions earned an average response of 3.28/4.0. If the EPP removes the one dissatisfied employer, the ratings for these two questions rise to 3.5/4.0.
  • Overall satisfaction with the training of the first-year teachers is also quite positive. The EPP earned an average of 3.14/4.0. Once again, removing the very dissatisfied employer, the average increases to 3.5/4.0. (While the EPP cannot do so, it wishes it could match candidates to the surveys to compare the employers’ assessments of the completers with the other data collected while they were candidates in the program.)

Implications

To increase candidate performance in CAEP Standard 1, the MU EPP has identified the following plans of action for the 2018-2019 academic year:

(1) Candidates clearly understand the 10 InTASC standards. During the sophomore interview with the DTE, candidates can talk about the standards; however, anecdotally, candidates’ responses tend to be based on the observation that the InTASC standards are “just good teaching.” To help candidates articulate the importance of the InTASC standards, the EPP will develop a plan of action for making the standards visible. After initial brainstorming sessions, the EPP will consider creating InTASC posters and YouTube videos to be accessed by candidates. More importantly, the EPP will work to infuse the standards into required assessments: candidates will have to identify the InTASC standards being met in their work. The EPP will also develop a rubric for the sophomore and junior interviews with the DTE; through the interviews, the DTE will assess candidates’ ability to articulate the InTASC standards.

(2) The EPP will work with providers to identify opportunities in the clinical experiences for the EPP to evaluate candidates on their performance (CAEP 1.2, 2.1). Currently, performance as an educator is not assessed until the senior year during the SCE and the student teaching clinical experience. The EPP recognizes, however, the need to integrate performance-based assessments throughout the program so candidates develop the skills and knowledge assessed during the final stage of the program. According to the employer survey, candidates were not fully prepared to enter the classroom. This particular question (Q2) asked if MU first-year teachers met expectations of a beginning teacher for content preparation and knowledge. The EPP’s average was a 2.83/4.0, placing it in the high end of “disagree.” The EPP would like to increase this average as well as the 2.83 average for “understanding how students learn and develop at the grade level they are teaching” (Q1). As the EPP revises its program, it will examine ways to create more intentional experiences in these areas.

(3) While the SCE provides the EPP with an adequate evaluation of candidates’ ability to have an impact on P-12 students through research-based best practices and application of content knowledge and pedagogy, the EPP believes revising the SCE into a work sample format would provide candidates with an even more authentic project reflecting the work required of classroom teachers.

(4) The EPP will continue to develop and administer its own employer survey. Based on the candidate survey it piloted in the spring of 2018, an employer survey will be created and sent to employers of first- and third-year completers. All first- and third-year completers will be sent the completer survey in addition to the survey MU sends all graduates.

(5) To continue implementing the Danielson model effectively, the EPP sent one of the university supervisors to regional training in April 2018. This supervisor will train other supervisors and clinical faculty on the use of the Danielson Framework. In addition to the training for evaluators, the EPP will introduce the Danielson Framework to candidates earlier in their program. The Danielson is an excellent measure of curriculum design, teaching, and professionalism.

(6) In the spring of 2018, candidates were required to create plans for e-learning days as part of their integrated unit plans. They are also held accountable for integrating technology into their lesson plans while student teaching and while completing their impact on student learning projects; however, the EPP recognizes the need for a better plan for infusing technology into the program, both as consumers of technology and as educators responsible for using technology in P-12 classrooms. The EPP will continue to collaborate with clinical faculty and administrators, as well as other EPPs, to create a formal plan of action for technology. By the end of the 2018-2019 academic year, the EPP will have a clear, measurable plan of action in place.

(7) The EPP will closely track the trend in the number of candidates admitted to the program versus the number of completers. Understanding the reasons for the trend will be critical to the future of the program. With regard to the Pearson content exams, the DTE has organized study tables for candidates. She is also working with disciplines across the institution to identify how to better prepare candidates. Intentionally including disciplinary faculty in the preparation of candidates is critical to the development of their content knowledge.

Standard 1: SPA Report Section

(Q1) Based on the analysis of the disaggregated data, how have the results of specialty licensure area or SPA evidence been used to inform decision making and improve instruction and candidate learning outcomes?

Each of the teacher preparation programs with five or more completers for the data collection cycle (academic years 2014-2015, 2015-2016, and 2016-2017) completed a SPA report; this required each program area to collect evidence related to candidates’ mastery of content specific to the accreditation standards. Unfortunately, when the current Director of Teacher Education (DTE) and the Field Experience and Assessment Coordinator (FEAC) began their positions in the fall of 2015, they found a haphazard collection of information and no structure for systematically collecting data. Since then, they have worked to develop a Quality Assurance System (QAS), which outlines key checkpoints, the type of assessment collected, and the path for collection. The EPP requested a shared drive on the institution’s internal network, and each program has access to key assessments. Programs are able to upload data and work together as a unit to make use of the disaggregated data. The EPP, as well, has access to each of the shared drives and can make comparisons across programs. This, however, has come about slowly, and the DTE and FEAC continue to search for more efficient and appropriate ways to help programs use data to drive their instruction. Additionally, the EPP is working with the Office of Institutional Effectiveness to tie the SPA report data collection to the departments’ annual reports required by that office. Adding lines to the current form and asking licensure areas to reflect on key assessments will ensure more systematic and institutional ownership of the SPA reports; the responsibility will no longer rest on the DTE’s and FEAC’s shoulders.

It should be noted that the state shifted from requiring the Praxis II to the Pearson content exams to evaluate candidates’ content knowledge; the same shift occurred for the pedagogy exams. This shift applies to all programs in the EPP applying for SPA recognition.

Elementary Education (ACEI) – Undergraduate/Initial

Status of program regarding SPA report:  National Recognition with Conditions

Resubmitted March 2018

Data were collected from the Pearson Elementary Education Generalist content exams, which consist of four separate subtests (Reading and English Language Arts; Mathematics; Science, Health, and Physical Education; and Social Studies and Fine Arts). Alignment between the licensure exams and ACEI standards 1.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 3.1, 3.2, 3.3, 3.4, 3.5, 4.0, 5.1, and 5.2 supports the use of the Pearson content exams as a good measure of candidates’ content knowledge. The EPP reported the following numbers of completers: 17 for 2014-2015, 4 for 2015-2016, and 9 for 2016-2017. Clearly, the EPP has struggled with enrollment in the elementary program. While the state of Indiana has seen this overall drop as well, the EPP was concerned by the low numbers. Despite the low numbers, though, Manchester candidates still tend to score higher than the state average on the Pearson exams. Additionally, candidates typically pass the pedagogy portion of the exam on the first or second attempt, indicating the EPP is preparing them with the skills to teach.

It should be noted that only 4 elementary education majors completed the program in 2018 even though the cohort began with 13. Since the SPA report for ACEI was originally submitted, the EPP implemented the requirement of passing the content exams prior to student teaching. Currently, the EPP has 12 candidates preparing to student teach and complete the program in 2019; however, not all have passed their content exams yet.

English/Language Arts (NCTE) – Undergraduate/Initial

Status of program regarding SPA report:  National Recognition

Data points used for the NCTE SPA report include the Praxis II English Language, Literature, and Composition: Content Knowledge exam. While candidates do relatively well, the English education program recognized the need for more intentional alignment between course content and the content assessed by teacher licensure exams. As a result, since the resubmission that earned it national recognition, the department has collaboratively aligned required courses and key assessments with the standards assessed on the Pearson content exam (since the initial SPA report was submitted, Pearson has replaced Praxis II). More importantly, the English education program recognized that its previous assessment of candidates was unintentional and often simply told accreditors what they wanted to hear. During the initial response to conditions, the program created new key assessments to reflect mastery of content knowledge. The program also collaborated with the faculty in the Education Department to support the key assessment of the unit plan and the SCE Impact on Student Learning capstone project.

Of all the programs, the mindset shift in the English/Language Arts program was the most dramatic and transformative. The faculty in the program reported that, after poring over reviewers’ comments and thinking differently about assessment, they finally understood the purpose of collecting data to use for programmatic shifts. As a result, the English/Language Arts program, despite earning National Recognition, continues to realign courses and key assessments to better prepare English/Language Arts candidates for the classroom. The EPP uses this program as a model for using data and assessment to change programs.

Health Education (AAHPERD/AAHE) Undergraduate/Initial

Status of program regarding SPA report:  National Recognition with Conditions

Resubmitted March 2018

While this program received recognition with conditions, the EPP is quite impressed with the report resubmitted in March 2018, and it anticipates better results next month. Much like the English/Language Arts program, the faculty in the Health program took ownership of the resubmission, spending hours, even days, reviewing the reviewers’ comments, examining the collection points and resulting data, and revising the program based on evidence. For the SPA report, the Health program reported a total of 6 completers for the data collection cycle. During the cycle, Indiana’s requirement shifted from the Praxis II content exam to the Pearson content exam, so the data could not be fully compared.

The Health program also heavily revised its key assessments, focusing on the written assessment, the health lesson plan, the teaching experience, and the annotated bibliography project. Each was more carefully aligned with the AAHE standards. To clearly reflect student performance, many of the revisions focused on articulating expectations in the scoring guides: instead of general, broad statements, the program wrote specific, performance-based descriptors.

This particular program should be commended for intentionally focusing its revisions on better alignment with the required standards. For several years, candidates in this program struggled to pass the licensure exams. The Health education program, however, has carefully aligned student learning goals with the required standards; as a result, the rate of passing the licensure tests on the first attempt reflects this focus.

Gifted Children (NAGC/CEC) – Undergraduate/Initial

Status of program regarding SPA report:  Further Development Required

Not resubmitted according to state status of Gifted Children SPA reports

The EPP received communication from the NAGC indicating that, after the Fall 2017 cycle, the NAGC will no longer review gifted education teacher preparation programs as part of the CAEP accreditation process. To complete the program review before the onsite accreditation visit, the program may choose a program review conducted by Indiana or the national review with feedback option offered by CAEP. Due to low numbers and confusion about the status of the NAGC, the EPP did not submit a response to conditions.

Mathematics (NCTM) – Undergraduate/Initial

Status of program regarding SPA report:  Further Development Required

Not resubmitted – missed deadline for resubmission – few candidates in program

According to the consultant hired by the EPP, the NCTM SPA report was in poor shape. The EPP had to respond by September 15, 2017 (feedback was received in August). Because the report earned a “further development required” decision, it could not respond on March 15, 2018; the math education program will have to wait a full year to submit a new report. Unfortunately, the program has too few completers to submit a SPA report, so it will submit a low-enrollment program report to the state of Indiana by September 15, 2018.

Physical Education (NASPE) – Undergraduate/Initial

Status of program regarding SPA report:  National Recognition with Conditions

Resubmitted March 2018

The NASPE SPA resubmission centered on the revision of assessment tools. Major revisions were made to the rubrics for the key assessments. For example, Assessments 3, 4, 5, 6, 7, and 8 were aligned with the NASPE standards/elements on the rubrics, giving candidates an understanding of the connection and purpose of each assessment. Other changes included revisions to the teaching lesson evaluation (Assessment 8) to show alignment between the assignment guide sheet and the rubric used to evaluate the assignment; in this case, the description was improved to align with the standards and elements, specifically highlighting advanced lesson plan preparation prior to candidates teaching the lesson to children and being evaluated on their pedagogical performance. Assessment 7 was better aligned with the NASPE standards and elements to provide more breadth and depth. Again, expectations were clarified by using performance-based descriptors with varying degrees on the rubric.

The Physical Education program is especially proud of its 100% pass rate on the licensure exams.  Typically, completers of this program pass the standardized tests on the first attempt, indicating the program is directly aligned with NASPE standards.

Social Studies (NCSS) – Undergraduate/Initial

Status of program regarding SPA report:  National Recognition with Conditions

Resubmitted March 2018

Unfortunately, between the submission of the first response to conditions in March 2017 and the reviewers’ response, the history faculty member responsible for writing the report died in a car accident.  The history (social studies) program consequently had to work in a different manner to understand what this person had envisioned and submitted.  Working without the author’s original input taught the program the importance of collaboration within the specific licensure area and with the EPP at large, and a team approach has since taken shape within the history education program.  Regardless of these circumstances, the NCSS response to conditions submitted in March 2018 is a much stronger report than the one previously submitted. 

Data sources from the history education program, like those of the other programs, involve the content licensure exams, Praxis II and Pearson.  For the submission, the program reported 5 completers (4 in 2014-2015, 0 in 2015-2016, and 1 in 2016-2017).  Data tables were updated to reflect subtests and alignment to specific NCSS standards and elements.  Additional documents were added, including the ETS Social Studies Content Study Guide and the blueprint used for the Pearson Historical Perspective Assessment.  Due to the small size of the different licensure programs, all secondary candidates enroll in the same courses, such as EDUC 342 Literacy in the Content Area.  The EPP is working to revise the key assessment of the unit plan to differentiate among content areas. 

Special Education (CEC) – Undergraduate/Initial

Status of program regarding SPA report:  National Recognition with Conditions

Resubmitted March 2018

For the data cycle used for the SPA report, the Special Education program reported the following number of program completers:  11 in 2014-2015, 3 in 2015-2016, and 6 in 2016-2017.  Key data points for the CEC SPA report include Praxis II/Pearson content scores as well as GPA in required courses.  Additional data came from the following rubric-based assessments:  unit plan, student teaching evaluation rubric, reader case study, and the SCE:  Impact on Student Learning capstone project.  The program recognizes the difficulty it created by shifting from one evaluation tool to another for the student teaching experience; during this data collection cycle, it moved from the RISE model to program-specific content rubrics and the Danielson Framework.  While the shift does not allow for a clear comparison of data, the EPP believes content-specific rubrics and the Danielson Framework provide candidates and the program with more reliable and valid data to use in program decisions.

In the revisions, the program made major changes to rubrics and collected additional evidence for reviewers.  It has been paramount to the program to carefully align the assessments, particularly the content exams, with the CEC standards.  These standards are also aligned to course content, providing candidates with a clear view of where content is covered and how it connects to licensure.  In regard to key assessments such as the unit plan, the EPP is currently exploring ways to align these assessments more closely with specific programs.  All candidates are enrolled in the same courses, such as EDUC 340 Literacy Block, the course in which they write the integrated unit plan.  The EPP, though, must differentiate assignments and rubrics to meet the needs of each program, such as those outlined by the CEC.

(Q2) Based on analysis of specialty licensure area data, how have individual areas used data for change?

Based on the process of writing the initial SPA reports, which coincided with changes in the EPP’s accreditation team [new Director of Teacher Education (DTE) and new Field Experience and Assessment Coordinator (FEAC)], faculty in the EPP and licensure programs have identified specific changes which must take place in the next two years:

  1. Prior to the new team, assessment of programs was unintentional, more of an afterthought simply to write a SPA report.  As a result, programs searched for data and created alignment with standards after the fact.  The DTE and FEAC have committed to working with the programs to create a more systematic, intentional way of organizing data.  Several key elements are in place: (A) the purchase of the CORE software program will allow the EPP to track key data points and report them to the licensure programs; (B) shared drives on the institution’s internal system have been created so programs can upload assessment data and tables and track candidates’ progress in licensure areas; (C) the EPP is working with the Office of Institutional Effectiveness to add SPA-specific questions to the forms used for departmental reports.  Intentional data collection will result, and programs will reflect on the data when they submit their annual reports institutionally.  Streamlining data collection and reflecting on the collected data at the end of each year will give the programs a much more intentional use of data for changes.


  2. The DTE and FEAC will work with programs to establish regular meeting times throughout the academic year to ensure alignment between required courses and program-specific standards. Each of the programs will need to revisit the identified assessments and follow CAEP guidelines for validity and reliability of the assessment tools.


  3. The EPP intends to revise its teacher preparation program, and key stakeholders in this revision are content faculty directly involved in licensure programs.  The EPP would like to increase the focus on clinical field experiences, making them the core of the program.  Course work will revolve around the intentional clinical placements.  Input from all stakeholders will be critical to the revisions of the program.  The EPP will organize a team including content faculty, practitioners, completers, administrators, and other community partners, and this committee will consider the demands of the 21st century classroom teacher.  Using the CAEP and InTASC standards as well as specialized program standards, the team will redesign the program.  The Manchester University EPP believes its current program is too traditional.  The process of writing SPA reports has revealed how unintentional much of the data collection has been.  Candidates must see a deep connection between their program and the realities of the profession.


  4. Based on data collection from the SPA reports, the programs and the EPP believe the SCE Impact on Student Learning capstone project must be redesigned so it better reflects the realities of teaching.  Shifting the focus from writing a research paper to producing a work sample will offer candidates a more realistic experience in planning curriculum and assessments as well as using data to drive instruction.  The different licensure programs will be engaged in this revision as well.

(Q4) How are SPA reports that are not nationally recognized being addressed?

As explored in Q1, all Manchester University licensure programs have submitted the appropriate SPA reports for review.  Currently, only the English Language Arts program has earned National Recognition.  The following programs have earned National Recognition with Conditions:  Elementary Education, Health Education, Physical Education, Social Studies Education, and Special Education.  The responses to conditions were submitted by March 15, 2018, and the EPP will learn of their status in just a few weeks.  The initial Gifted Education and Mathematics Education reports earned “further development required” decisions.  For a variety of reasons, these SPA reports were not resubmitted.  However, the EPP is working with faculty to establish a timeline and an appropriate plan of action for completing these SPA reports in the future. 

The EPP just learned the Indiana Department of Education will require state program reviews for small programs, so it is currently working with the IDOE to determine the correct processes to follow.

Standard 2: Partnerships for Clinical Preparation

Notes from CAEP:  2013 initial preparation-The provider ensures that effective partnerships [components 2.1 and 2.2] and high-quality clinical practice [component 2.3] are central to preparation so that candidates develop the knowledge, skills, and professional dispositions necessary to demonstrate positive impact on all P-12 students’ learning and development.

2016 advanced level preparation-The provider ensures that effective partnerships [component A.2.1] and high-quality clinical practice [component A.2.2] are central to preparation so that candidates develop the knowledge, skills, and professional dispositions appropriate for their professional specialty field.

Making a case: High quality clinical practice is a unique and critical feature for both initial and advanced preparation programs. Standards 2 and A.2 are the places to demonstrate that the provider has partnerships with P-12 schools that are beneficial to both parties (component 2.1 and A.2.1). The provider explains how collaborative partnerships are conducted, monitored, and evaluated, and how this evidence has led to changes in programs for both initial preparation and for advanced preparation. The EPP provides examples of beneficial collaboration and how the provider and schools work together (e.g., the process for co-selection of mentor (co-op) teachers and university supervisors-component 2.2).

The clinical experiences are addressed in component 2.3 for initial preparation and component A.2.2 for advanced level preparation. For initial clinical experiences, what associations does the provider find between the particular aspects of its preparation (such as breadth, depth, diversity, coherence, and duration) and candidate outcomes, such as completion and licensure? For advanced preparation, EPPs should document the opportunities for candidates to practice their developing knowledge and skills, and address what faculty have learned from the relationship of culminating experiences with candidate success in problem-based tasks or research characteristic of their professional specialization.

The guiding questions may help focus the selection of evidence and the EPP inquiry of its message:

  • How do clinical partners co-construct mutually beneficial P-12 school and community arrangements, including technology-based collaborations, for clinical preparation and share responsibility for continuous improvement of candidate preparation?
  • What are the mutually agreeable expectations for candidate entry, preparation, and exit to ensure that theory and practice are linked, to maintain coherence across clinical and academic components of preparation, and to share accountability for candidate outcomes?
  • How do clinical partners co-select, prepare, evaluate, support, and retain high-quality clinical educators, both provider- and school-based, who demonstrate a positive impact on candidates’ development and P-12 student learning and development?
  • What are the multiple indicators and appropriate technology-based applications used to establish, maintain, and refine criteria for selection, professional development, performance evaluation, continuous improvement, and retention of clinical educators in all clinical placement settings?
  • How does the provider work with partners to design clinical experiences of sufficient depth, breadth, diversity, coherence, and duration to ensure that candidates demonstrate their developing effectiveness and positive impact on all students’ learning and development?
  • How are clinical experiences, including technology-enhanced learning opportunities, structured to have multiple performance-based assessments at key points within the program to demonstrate candidates’ development of the knowledge, skills, and professional dispositions (as delineated in Standard 1) that are associated with a positive impact on the learning and development of all P-12 students?

The EPP should reflect on:

  • STRENGTHS AND CHALLENGES-What strengths and areas of challenge have you discovered in your clinical experiences and in your partnership arrangements as you analyzed and compared the results of your disaggregated data by program and by demographics? What questions have emerged that need more investigation? How are you using this information for continuous improvement?
  • TRENDS-What trends have emerged as you compared program and demographic data describing clinical experiences across evidence sources and programs? What questions have emerged that need more investigation? How are you using this information for continuous improvement?
  • IMPLICATIONS-What implications can you draw or conclusions can you reach across evidence sources about your school/district partnerships and your clinical experiences? What questions have emerged that need more investigation? How are you using this information for continuous improvement? How have data-driven decisions on changes been incorporated into preparation?

Summary Statement:

Partnerships with P-12 schools and other community organizations are important to the development of teacher candidates.  The Clinical Partnership Evidence (CP) Packet articulates the MU EPP’s reflection on the following questions related to CAEP standard 2:

  1. Are clinical partnerships co-constructed and mutually beneficial to P-12 school and community arrangements, and do they offer opportunities for technology-based collaborations?
  2. In what ways does the MU EPP work with stakeholders to develop mutually agreeable expectations for candidate entry, preparation, and exit in order to link theory and practice as well as to share accountability for candidate outcomes?
  3. To what extent does the MU EPP work with partners to design clinical experiences of sufficient depth, breadth, diversity, coherence, and duration to ensure that candidates demonstrate their developing effectiveness and positive impact on all students’ learning and development?
  4. How are clinical experiences, including technology-enhanced learning opportunities, structured to have multiple performance-based assessments at key points within the program to demonstrate candidates’ development of the knowledge, skills, and professional dispositions (as delineated in Standard 1) that are associated with a positive impact on the learning and development of all P-12 students?

Standard 2.1 Because of the institution’s rural setting, with the next town more than fifteen miles away, the EPP relies heavily on the local school system for many of the clinical experiences.  Changes in DTEs and administration in the local schools, as well as the geographical location of MU, have posed some challenges for the EPP.  However, to proactively address these issues, the EPP has held face-to-face meetings between the current DTE, the field experience coordinator, other EPP faculty, and administrators and clinical faculty.  It also hosts biannual meetings of the Teacher Advisory Council and an annual community partnership lunch.  Both groups involve collaboration among important stakeholders; as a result, the EPP has increased communication and improved the quality of the experiences.

Each of the required education courses includes a field experience.  The EPP has structured field experiences in foundational courses such as EDUC 111, 211, 237, and 245 around focused observations.  Once candidates have been admitted to the program, they move into small-group teaching experiences.  Today’s structured junior-level field experience is based on one of the most mutually beneficial partnerships the EPP has experienced.  The clinical experience emerged organically in February 2011 when the local community schools eliminated nineteen paraprofessional positions in the elementary school.  Third-grade classrooms heavily dependent on paraprofessionals to run reading groups were hit hardest by the sudden budgetary cut.  For the remainder of the year, the teacher candidates enrolled in EDUC 340:  Literacy Block ran the reading groups in collaboration with the clinical faculty.  This co-constructed clinical experience has grown into an organized daily clinical experience Monday through Thursday throughout the academic year.  Teacher candidates work with third-grade teachers to establish skills-based curriculum and interventions during Response to Intervention (RTI).

In the spring of 2018, responding to a request from the intermediate school in the local school corporation, the EPP collaborated with clinical faculty to provide small groups of fifth-grade students with support in literacy two days a week and in math skills the other two days.  While the fifth-grade partnership is a new development, feedback from the classroom teachers as well as the teacher candidates indicates a positive experience for everyone involved.  At the end of the spring semester 2018, an intern within the EPP held focus groups and recorded the participants’ responses; additionally, representatives from the EPP and clinical faculty created a plan of action for the 2018-2019 school year.

Collaborations of this kind meet the needs of both parties, and because they are co-constructed with stakeholders, they are more intentional than placements that simply put students in a classroom for observation.  Frequent meetings with the clinical faculty and the candidates allowed candidates to discuss data collection, design interventions, and see their impact on student learning.  The collaborative effort also provided insight into the professionalism involved in teaching.  Using this experience as a model, the Field Experience and Assessment Coordinator and the DTE will continue to partner with other clinical sites.

The culminating field experience occurs during the candidates’ senior year with the student teaching experience, most commonly taking place in the spring semester.  While student teaching is only one semester, the candidates have obligations which span the entire academic year.  In the fall prior to student teaching, candidates participate in the first four days of the clinical placement’s academic year.  This experience provides candidates with insight into the setting up of a classroom and the beginning days of a school year, when the community of the classroom is established.  Throughout the fall semester, elementary candidates spend a minimum of six Tuesdays and Thursdays as well as a full week in their student teaching classrooms.  They also spend one week in a Spanish-immersion elementary school in the urban setting of Fort Wayne Community Schools.  Secondary and all-grade candidates, because of their course schedules, do not have designated days like the elementary candidates; instead, they spend a minimum of 25 hours in classrooms throughout the fall semester.  During the January session prior to the student teaching experience, secondary and all-grade candidates are placed in Jefferson Middle School, also part of Fort Wayne Community Schools.  The student teaching clinical experience begins on the first day of the semester and ends on the last day of final exams.

Meetings between clinical faculty, university supervisors, clinical administrators, and the EPP provide opportunities to co-construct the experiences.

Standard 2.2  The preparation of teacher candidates is dependent upon high quality clinical experiences.  As a result, the EPP networks intentionally with school systems to develop scaffolded opportunities as identified in Standard 2.1.  This requires frequent communication between the Field Experience and Assessment Coordinator and the Director of Teacher Education to develop a cadre of high quality clinical educators to serve as mentors for teacher candidates.

The EPP has established criteria for administrators to use when selecting clinical faculty for field experiences.  Through memorandums of understanding signed by both the Director of Teacher Education and the school systems’ superintendents, the EPP articulates the expectations for clinical faculty (Evidence:  Memorandum).  Evidence also exists in the Clinical Partnership packet showing minutes from community partnership luncheons and the Teacher Advisory Council.  These two groups include administrators from all surrounding school corporations, and the agendas often include discussion of candidate dispositions as well as the important skills and knowledge candidates must possess.  Equally important, the groups discuss the training and qualities required of clinical educators to support the candidates’ clinical experiences.  Prior to the spring of 2018, the EPP did not have a formal way of collecting feedback regarding clinical faculty.  As the EPP makes better use of the online platform CORE, it can collect feedback regarding the clinical faculty from university supervisors as well as the teacher candidates.

Standard 2.3 With the addition of the Field Experience and Assessment Coordinator, the EPP continues to work toward more intentional clinical experiences.  Each of the required courses includes a clinical experience.  The foundational courses, taken in the first and second years of the program, require simple observation; the third year transitions to small-group instruction of third- or fifth-grade students.  Through collaboration with the clinical faculty, teacher candidates are introduced to the elements of planning for instruction based on authentic assessments.

The Teacher Education Student Handbook offers an outline of the clinical experiences, and the Field Experience and Assessment Coordinator communicates frequently with partners to ensure candidates are meeting the requirements and the clinical faculty are providing the candidates with the appropriate depth in experience.  The outlined expectations along with the EPP-created evaluation form provide coherence in the experiences.  In the Spring of 2018, the EPP purchased the same online platform (CORE) used by the School of Pharmacy.  Through this system, candidates can log their hours and clinical faculty can provide immediate feedback to the EPP regarding the candidates’ performance and dispositions. With more open lines of communication, the EPP anticipates an increase in the retention of clinical faculty and better experiences for the candidates.

To bring diversity to the clinical experiences, senior candidates have an intensive introduction to an urban clinical setting through the program’s partnership with Fort Wayne Community Schools, the largest public school system in Indiana.  Elementary candidates spend one full week during the fall methods block in Lindley Elementary, a Spanish-immersion elementary school.  Secondary and all-grade candidates spend the January session immediately prior to student teaching in a classroom at Jefferson Middle School.  Both intensive experiences prior to student teaching offer teacher candidates settings different from the ones they have previously experienced.

Strengths and Challenges:  Like other institutions, especially those located in a rural setting such as North Manchester, the EPP struggles with scheduling clinical experiences outside the local public schools because teacher candidates are dependent upon transportation and hampered by traditional course schedules not conducive to clinical experiences that require flexibility.  However, the small size of the institution as well as of the local community is also a strength.  Administrators, clinical faculty, and members of the EPP know each other professionally and personally.  When opportunities arise for collaborative projects, it is common practice for one of the parties to reach out to the other.  Regarding the research questions, the EPP is confident it continues to improve the clinical experiences for candidates, and it has a clear plan of action for how it will increase collaboration with partners.

With the new position of the Field Experience and Assessment Coordinator and the community partnership lunches, the EPP has increased co-constructed and mutually beneficial partnerships with clinical partners (Q1).  The Memorandums of Understanding (2.1, 2.2) and other evidence located in the Clinical Partnership Packet, such as minutes from the Teacher Advisory Council and the community partnership luncheon, support the commitment to co-constructing partnerships.  The feedback gathered by the intern regarding the two RTI settings indicates several strengths: the ability to “work one-on-one and give the students the attention they need to help them succeed;” “coming up with own lessons and implementing;” “working on ‘teacher-skills’ and being a professional;” “seeing progress with the students;” and “using cooperative teaching and being able to have that extra help/support of another peer.”  Not only do the candidates find the experience beneficial, but the clinical faculty have committed to continuing the partnership because of the benefits afforded the 3rd and 5th grade students.  The EPP is currently working on creating online surveys using the CORE software program, which will give the EPP more definitive data regarding the candidates’ perception of their experiences.  The CORE program will also allow clinical faculty to complete online evaluations and provide feedback on the clinical experience in general.  Ultimately, the partnership will continue to be mutually beneficial.

Integrating technology continues to pose a challenge for the EPP, and using it for collaborative clinical experiences is no different; however, the EPP does collaborate with classroom teachers through technology.  For example, the EPP frequently uses FaceTime or Google Hangouts to collaborate with clinical faculty or classrooms of students.  One of the ideas explored during the community partnership lunch was a professional development experience delivered through technology.  Administrators in the community partnership lunch as well as the members of the Teacher Advisory Council are also committed to exploring a variety of new partnerships.

The EPP has identified the need to define mutually beneficial partnerships and consider more quantitative ways to measure the effectiveness of the clinical experiences for candidates, the P-12 students, the EPP, and the clinical settings.  The survey and focus groups conducted by the intern this spring provide a foundation for future assessment practices; however, the EPP understands it must create evaluation tools to measure effectiveness for this particular question and standard.

Examining the minutes of the Teacher Advisory Council, the community partnership lunch, and the Teacher Education Committee provides adequate evidence that the MU EPP collaborates with stakeholders to develop mutually agreeable expectations for candidate entry, preparation, and exit in order to link theory and practice as well as to share accountability for candidate outcomes (Q2).  The best examples of the EPP adequately answering question 2 are the junior and senior clinical experiences.  Through the self-study, the EPP has identified a need to deepen the relationship with partners regarding clinical experiences in the first two years of the program.  Both the Field Experience and Assessment Coordinator and the DTE have committed to scheduling meetings with clinical sites to connect expectations with clinical experiences, incorporating all stakeholders’ insights.

Until recently, field experiences were dictated by the EPP with little input from the partners.  Often, the EPP provided the guidelines to the administrators, who then placed candidates.  Through intentional planning and feedback during meetings with stakeholders, the EPP has moved to a more collaborative approach to constructing the field experiences.  In regard to question 3, “to what extent does the MU EPP work with partners to design clinical experiences of sufficient depth, breadth, diversity, coherence, and duration to ensure that candidates demonstrate their developing effectiveness and positive impact on all students’ learning and development,” the EPP understands the need to focus attention on the first two years of clinical experiences.  It believes having candidates in the field in the first semester is critical to developing and retaining high-quality teaching candidates; however, it also recognizes the current foundational field experiences need more focus and intentionality than the outlined expectations and minimum hours currently provide.  One of the major questions the EPP continues to wrestle with is how to increase the rigor and caliber of the experiences when many candidates are simply exploring teaching as an option.

Finally, the EPP is comfortable with many of the current clinical experiences, including technology-enhanced learning opportunities; however, it is working towards multiple performance-based assessments at key points within the program to demonstrate candidates’ development of the knowledge, skills, and professional dispositions (as delineated in Standard 1) in order to have a positive impact on the learning and development of all P-12 students (Q4).  Currently, assessments associated with the clinical experiences are subjective evaluations of the candidates’ dispositions rather than performance-based assessments.  Two questions emerged during the self-study: (1) how can the EPP involve partners in the design of performance-based experiences other than observations, and (2) who will be responsible for designing the assessments used for the performance-based assignments?

Implications

In the fall of 2018, the EPP will work with stakeholders to define “co-constructed” and “mutually beneficial.”  Once the definitions have been established, the EPP will work with stakeholders to determine appropriate evaluation tools for measuring the benefit of the clinical experiences.  The EPP has realized the current disposition rubric needs to be better aligned with program outcomes and the InTASC and CAEP standards.  Clinical faculty, members of the EPP, and the Teacher Advisory Council will work together in the fall of 2018 to revise the disposition rubrics.  Clinical faculty will also need to be trained on the evaluation of dispositions.


Standard 3: Plan for Recruitment of Diverse Candidates who Meet Employment Needs

Standard 4: Program Impact

Notes from CAEP: Program Impact (NOTE: The role of states in generating evidence for various components of Standard 4 is dynamic and promises to continue to be for some years in the future as states sort out how best to fulfill their program approval, licensure, and data gathering responsibilities. For that reason, resources available to EPPs to document program impact are highly varied. CAEP has provided suggestions in the Evidence Guide, and in questions and answers online, describing examples for Standard 4 under various conditions. In addition, the Board of Directors has adopted a transition policy for Standard 4 under which it will not be necessary for EPPs to fully meet all four components under initial preparation until self-studies are submitted for academic year 2019/2020.)

2013 initial preparation-The provider demonstrates the impact of its completers on P-12 student learning and development [component 4.1], classroom instruction [component 4.2] and schools [component 4.3], and the satisfaction of its completers [component 4.4] with the relevance and effectiveness of their preparation.

2016 advanced level preparation–The provider documents the satisfaction of its completers from advanced preparation programs [component 4.2] and their employers [component 4.1] with the relevance and effectiveness of their preparation.

[NOTE: Under CAEP Board policy, all components of Standard 4 and Standard A.4 must be met for full accreditation.]

Making a case: In Standard 4, the provider demonstrates that the pre-service preparation covered in Standard 3 and Standard 1 equips pre-service teachers to have a positive impact on P-12 student learning and development for all students. The provider should present its evidence that completers are having a positive impact on P-12 student learning. The four components of initial preparation Standard 4 comprise four of the eight CAEP key indicator measures, ones that are important measures of EPP performance and part of the EPP’s continuing accountability to its stakeholders and the public. Effective teaching or other education professionals’ performance is a fundamental goal of the CAEP Standards; therefore, the provider must meet this standard to be accredited. For completers from advanced preparation programs, the Standard 4 case should include evidence of the satisfaction of completers and employers with the relevance and effectiveness of their preparation.

The guiding questions may help focus the selection of evidence and the EPP inquiry of its message:

  • How does the provider document, using multiple measures, that program completers contribute to an expected level of student-learning growth?
  • How does the provider demonstrate:
  • through structured and validated observation instruments and/or student surveys that completers effectively apply the professional knowledge, skills, and dispositions that the preparation experiences were designed to achieve?
  • using measures that result in valid and reliable data and including employment milestones such as promotion and retention, that employers are satisfied with the completers’ preparation for their assigned responsibilities in working with P-12 students?
  • using measures that result in valid and reliable data, that program completers perceive their preparation as relevant to the responsibilities they confront on the job, and that the preparation was effective?

EPPs should reflect on:

  • STRENGTHS AND CHALLENGES-What strengths and areas of challenge have you discovered about the impact of completers who are employed in the education professional positions for which they were prepared as you analyzed and compared the results of your disaggregated data by program and by demographics? What questions have emerged that need more investigation? How are you using this information for continuous improvement?
  • TRENDS-What trends have emerged about completer performance and completer/employer satisfaction with preparation as you compared program and demographic data across evidence sources and programs? What questions have emerged that need more investigation? How are you using this information for continuous improvement?
  • IMPLICATIONS-What implications can you draw or conclusions can you reach across evidence sources about completer performance and completer/employer satisfaction with preparation? What questions have emerged that need more investigation? How are you using this information for continuous improvement? How have data-driven decisions on changes been incorporated into preparation?

Summary Statement:

Reviewers will find analysis of the MU EPP’s evidence for CAEP standard 4 in the Employer and Completer Satisfaction (ECS) evidence packet, the Danielson Framework evidence packet, and the SCE Impact on Student Learning Project evidence packet.  The EPP asked the following questions:

  1. What multiple measures does the EPP use to evaluate whether program completers contribute to an expected level of student-learning growth?
  2. Based on the data collected from the Danielson Framework, do completers effectively apply the professional knowledge, skills, and dispositions expected of MU completers?
  3. Based on the employer satisfaction data provided by the Indiana Department of Education, are employers satisfied with the completers’ preparation for their assigned responsibilities in working with P-12 students?
  4. Do completers perceive their program adequately prepared them to be successful in the classroom?  Was their preparation effective and relevant?
  5. Are program completers hitting milestones in their employment?  Are they earning recognition for their work?

Standard 4.1 The EPP continues to develop ways to track program completers’ impact on P-12 students’ learning.  As candidates progress through the program, they must complete several clinical experiences which require them to work with data and research-based best practices to impact P-12 students’ learning.  In the junior year, candidates work all year long with small groups of students, tailoring lessons to meet individual needs.  Through collaboration with clinical faculty, candidates explore effective pedagogy as it relates to students’ performance on assessments.  During student teaching, the candidate completes an extensive impact on student learning research project which requires the candidate to collaborate with the cooperating teacher to identify a unit plan, design assessments including pre- and post-tests, select a best practice for teaching content, create a literature review of the research associated with the best practice, implement the unit, and analyze collected data to determine the impact on student growth (SCE packet).  This capstone project provides the EPP with a deep understanding of candidates’ ability to impact student learning.
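The narrative does not specify how the capstone quantifies “impact on student growth,” so the following is only an illustrative sketch of one common pre-/post-test measure, the normalized gain, and not the EPP’s prescribed method:

\[ g \;=\; \frac{\text{post-test \%} \;-\; \text{pre-test \%}}{100\% \;-\; \text{pre-test \%}} \]

For example, a hypothetical student who scores 60% on the pre-test and 84% on the post-test shows a normalized gain of (84 - 60)/(100 - 60) = 0.60, a form that lets growth be compared across students who begin at different levels.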

Another measure of candidates’ impact on learners is completed by the university supervisor and the clinical faculty with the Danielson framework rubric.  Assessing candidates with this valid and reliable tool provides insight into observable teaching behaviors impacting student-learning growth.  Each of the four domains covered by the rubric lends itself to creating positive learning experiences for all P-12 students, and the student teacher is held accountable for impacting learning (Danielson packet).

Additionally, the only way to measure the impact completers have on P-12 students once they leave the program is through employer survey data (IDOE Employer survey results in ECS packet).  The process has only been used for the past year in the state of Indiana, and the IDOE struggles to get a high return rate.  The first series of data is provided in the evidence IDOE Employer survey results.

Finally, the successful passing of the Pearson pedagogy exam required for all Indiana teaching licenses indicates completers have an understanding of effective teaching practices (CRC packet).  Completers of the MU EPP between 2016 and 2018 have an 86 percent pass rate on the first attempt at the pedagogy exams, indicating sufficient preparation in effective teaching practices.

Standard 4.2 Because independent research has indicated the Danielson framework is a valid and reliable tool for measuring teacher effectiveness, the EPP uses the framework to evaluate skills and dispositions of its student teachers (Danielson packet).  The EPP introduces the framework to candidates when they enroll in the literacy courses during their junior year.  As they write their curriculum units, they do so through the lens of the Danielson framework, using the tool to guide how they plan curriculum, how they would teach the curriculum, and the type of learning environment they would need to create so all students are successful.  They self-evaluate their own work using the Danielson framework rubric.  This practice using the tool allows candidates to consider how they will be evaluated during their student teaching experience.

Other valid and reliable instruments include the Pearson CASA, content, and pedagogy exams expected of all completers.  Data collected from these standardized tests indicate the MU EPP’s candidates are performing at or above state averages.  For example, for 2018, the MU EPP has an institutional pass rate of 85% for the math CASA compared to the state pass rate of 78%.  MU EPP’s institutional pass rate for the writing CASA is 89%, eight percentage points higher than the state pass rate.  Only one percentage point separates MU’s institutional pass rate for the reading CASA (90%) from the state’s pass rate (91%) (CRC packet).

The EPP also uses EPP-constructed rubrics including disposition evaluations, departmental approval rubrics, and content-specific observation rubrics used by clinical faculty during student teaching.  Each of these rubrics continues to undergo scrutiny by outside partners such as TEC and TAC, and the EPP is working closely with the Office of Institutional Effectiveness to establish a consistent and streamlined way to collect data and to intentionally use it to drive programmatic decisions.

Standard 4.3 In the past two years, the Indiana Department of Education created and distributed an online survey sent to administrators across the state for each first-year teacher they employ.  The response rate has been rather low; however, the IDOE does provide EPPs with the data (ECS packet).  The only data point provided thus far is the survey results from 2016-2017.  Only 7 employers of MU completers responded to the survey.  The respondents rated EPP first-year teachers 3.28/4.0 in both their ability to work effectively with other professionals and with parents/guardians, reflecting strong approval of the professionalism of completers.  Overall, employers rated their satisfaction with the MU teacher preparation program at 3.14/4.0.

In the fall of 2018, the Manchester University EPP will distribute its own survey to employers of 2017 graduates who are currently employed as first year teachers.  The survey is based on the InTASC standards and CAEP cross-cutting themes of diversity and technology.  Currently, the EPP is working with the Alumni Office and the IDOE to locate employers’ contact information for successful distribution of the survey.

Standard 4.4 Annually, as part of the Manchester University assessment plan, the Office of Institutional Effectiveness surveys recent alumni.  The EPP receives disaggregated data regarding completers’ perceptions of their preparation (ECS packet).  Overall, the completers rate their teaching program relatively high.  Multi-year data indicate a positive trend in completers being prepared for their careers.  In 2014, only 41% of alumni felt extremely well-prepared; the number dropped to 25% in 2015 but has since rebounded to 75% in 2016 and 69% in 2017.  This increase is important to the EPP because it indicates a positive move, and it also reflects a change in the EPP.  In the fall of 2015, a new DTE was named, and two new hires were made in the program:  a Field Experience and Assessment Coordinator and a new faculty member.

In the spring of 2018, the EPP independently surveyed first-year teachers regarding their perception of their preparation.  The survey reflects questions asked on the employer survey the EPP will send to administrators in the fall of 2018.  The Likert scale uses the following ratings:  1-extremely satisfied, 2-moderately satisfied, 3-slightly satisfied, 4-neither satisfied nor dissatisfied, 5-slightly dissatisfied, 6-moderately dissatisfied, and 7-extremely dissatisfied.  Only 25% of the completers submitted the surveys, so the data do not give a full picture of the program; however, of those who did complete the online survey, 75% were moderately satisfied with their teacher preparation program, and 25% were extremely satisfied.
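Because the narrative does not report the underlying respondent counts, the following is a hypothetical sketch of why percentages from such a small return should be read cautiously.  The uncertainty of a sample proportion can be approximated by its standard error:

\[ \mathrm{SE}(\hat{p}) \;=\; \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}} \]

With an assumed n = 8 respondents and \(\hat{p} = 0.75\), the standard error is roughly 0.15, so a rough 95% interval spans from about 45% to essentially 100%; the reported satisfaction figures are therefore indicative rather than precise.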

Most importantly, during the spring of 2017, the IDOE surveyed recent completers of the MU teaching program using the following scale:  1-strongly disagree, 2-disagree, 3-agree, 4-strongly agree; the results are reported in Attachment CAEP 4A.

Trends, Strengths, and Challenges

The EPP uses the IDOE employer survey and the SCE:  Impact on Student Learning capstone project to evaluate the way program completers contribute to an expected level of student-learning growth; however, it has indicated that it needs to add depth to the data collection (Q1).  In the fall of 2015, the Vice President and Dean of Academic Affairs appointed a new DTE, and the EPP created the position of Field Experience and Assessment Coordinator.  When the two stepped into their new roles, they found an inadequate and fractured data collection system.  Since then, the EPP has worked collaboratively to realign courses with InTASC standards as well as the CAEP criteria.   While the lack of data posed a great challenge, the EPP sees the accreditation process as an opportunity to redesign a better teacher preparation program.  It acknowledges it has work to do, but it also knows it is on the right track with online surveys to send to completers and employers and with the SCE:  Impact on Student Learning.  Both hold great promise for rich data collection as the EPP continues to look for more efficient ways to collect this data.

Based on the data collected from the Danielson Framework, completers effectively apply the professional knowledge, skills, and dispositions expected of MU completers (Q2).  In the fall of 2014, the EPP began intentionally focusing on the type of completer it wanted to graduate.  The Danielson framework reflects the criteria of highly effective teachers, and using this valid and reliable assessment tool assures the EPP of its candidates’ effectiveness.

Additionally, the EPP has focused on graduating candidates of ability and conviction who understand and engage the whole learner in meaningful development of deep understanding of content and skills.  To fully evaluate this vision and driving mission, the EPP developed the capstone project required to graduate; the impact on student learning project previously mentioned in standard 4.1 aligns with this mission as well as CAEP standard 4.2 (SCE packet).  Since the implementation of this capstone project, the EPP has confidence in the completers’ ability to apply their professional knowledge, skills, and dispositions in an educational setting.

Based on the employer satisfaction data provided by the Indiana Department of Education, employers are satisfied with the completers’ preparation for their assigned responsibilities in working with P-12 students (Q3).

Survey data collected by Manchester University, the EPP, and the Indiana DOE indicate completers perceive their program was effective and relevant and that it adequately prepared them for success in the classroom (Q4).  One of the scores the institution values in the survey is the net promoter score, a number which reflects how likely completers are to recommend Manchester University to other people.  Institutionally, the net promoter score in 2017-2018 was 32.1, while the EPP’s net promoter score was 57.9.  While the EPP has much room for improvement, it is satisfied with its rating relative to the overall institution.  According to the IDOE, beginning teachers perceive they have the ability to differentiate instruction to meet all students’ learning needs, scoring 4.0/4.0.  Additionally, completers perceive they exhibit ethical practice (3.95/4.0) and recognize the importance of continued professional development (3.95/4.0).  The EPP believes these high scores reflect the intentional focus on professional opportunities and frequent interactions with clinical faculty.
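For readers unfamiliar with the net promoter metric cited above, and assuming the institution follows the conventional net promoter methodology (the narrative does not specify the scale used), the score is typically computed from a 0-10 “likelihood to recommend” question:

\[ \mathrm{NPS} \;=\; \frac{\#\,\text{promoters (ratings 9-10)} \;-\; \#\,\text{detractors (ratings 0-6)}}{\#\,\text{respondents}} \times 100 \]

As a purely hypothetical illustration, 19 respondents including 13 promoters and 2 detractors would yield (13 - 2)/19 × 100 ≈ 57.9, matching the EPP’s reported score; the actual respondent counts behind the institutional and EPP scores are not given here.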

Manchester University proudly counts program completers who hit milestones in their employment, including National Board Certified alumnae Michele Keim and Lauren Bailey (Q5).  Numerous completers are also earning recognition for their excellent teaching (Q5).  Most recently, MU alum James Butler earned teaching recognition in the Austin Public Schools in Austin, TX.  He has also gained the school system’s attention for his work with mindfulness in Austin classrooms.  A more comprehensive list of contributions, milestones, and accolades is kept by both the EPP and the Alumni Office.

Implications:  Through analysis of CAEP standard 4, the EPP recognizes it is on the right track for collecting important data regarding candidates’ and completers’ impact on P-12 learners.  It uses the Danielson framework to measure important attributes of candidates, including their ability to impact students’ learning.  The EPP also expects completers to submit a comprehensive impact on student learning project which requires candidates to actively collaborate with classroom teachers to design a unit of study and ultimately measure their impact on P-12 students using multiple measures.  Survey data from both alumni and employers indicate the EPP is adequately preparing the completers to teach.

However, after analysis, the EPP has committed to several projects which will improve the quality of its completers.  First, the EPP will continue to collaborate with stakeholders such as members of the TEC and TAC as well as those attending the community partnership lunches.  Through a collaborative effort which links current clinical faculty and settings to courses within the program, the EPP can ensure a more intentional reflection of university coursework in the realities of teaching.  By using clinical faculty to support the development of, and provide feedback on, the impact on student learning project, the EPP can strengthen the authenticity of the project.  Revising the current SCE:  Impact on Student Learning so it is more authentic as a work sample will help candidates see a correlation between their preparation and the realities of teaching.  The EPP hopes, as well, that this revised SCE:  Impact on Student Learning will give candidates more intentional opportunities to create and implement assessments to drive instruction.  Based on the employers’ survey conducted by the IDOE, this is one area the EPP must improve.

As indicated in the reflection in CAEP standard 3, the EPP will continue to develop authentic clinical experiences which reflect the realities of teaching in the 21st century.  Through more structured, authentic experiences, candidates will have an opportunity to observe, practice, and apply key elements of effective teaching.  Additionally, the EPP will continue to explore ways to create professional learning communities within the clinical experiences.  Building on co-constructed clinical experiences outlined in CAEP standard 3 will provide the EPP with an authentic program better preparing candidates for the classroom.  Work in the fall of 2018 will include identifying interested stakeholders who will work with the EPP to create these opportunities.  The EPP believes this intentionality will allow candidates and completers to feel better prepared for the classroom and will increase their positive perceptions of the program.

Additionally, the EPP will collaborate with completers and employers to create a more authentic impact on student learning project.  While the current project has a firm foundation, the EPP will create a timeline for moving the SCE:  Impact on Student Learning to the format of a work sample such as those used by the NBTC.  This new alignment will increase the authenticity of this important capstone project.

Standard 5: Provider Quality, Continuous Improvement, and Capacity

Notes from CAEP:  2013 initial preparation-The provider maintains a quality assurance system [component 5.1] comprised of valid data from multiple measures [component 5.2 and outcomes measures in 5.4], including evidence of candidates’ and completers’ positive impact on P-12 student learning and development [NOTE: This is a cross reference to preservice impact on P-12 student learning from component 3.5 and to inservice impact from Standard 4]. The provider supports continuous improvement that is sustained and evidence-based, and that evaluates the effectiveness of its completers [component 5.3 and the evidence for Standard 4]. The provider uses the results of inquiry and data collection to establish priorities, enhance program elements and capacity, and test innovations to improve completers’ impact on P-12 student learning and development [component 5.3].

2016 advanced level preparation (identical to initial)-The provider maintains a quality assurance system comprised of valid data from multiple measures, including evidence of candidates’ and completers’ positive impact on P-12 student learning and development. The provider supports continuous improvement that is sustained and evidence-based, and that evaluates the effectiveness of its completers. The provider uses the results of inquiry and data collection to establish priorities, enhance program elements and capacity, and test innovations to improve completers’ impact on P-12 student learning and development.

Making a case: Standard 5 occupies a pivotal position in CAEP standards. It describes the capacity of the EPP to reach its mission and goals through purposeful use of evidence, and it provides a source of evidence that informs all other CAEP standards. This dual function is described in the rationale for Standard 5 of the 2013 CAEP standards for initial preparation, from which the paragraph below is excerpted: Program quality and improvement are determined, in part, by characteristics of candidates that the provider recruits to the field [i.e., Standard 3]; the knowledge, skills, and professional dispositions that candidates bring to and acquire during the program [i.e., Standard 1]; the relationships between the provider and the P-12 schools in which candidates receive clinical training [i.e., Standard 2]; and subsequent evidence of completers’ impact on P-12 learning and development in schools where they ultimately teach [i.e., Standard 4]. To be accredited, a preparation program must meet standards on each of these dimensions and demonstrate success in its own continuous improvement efforts.

Effective organizations use evidence-based quality assurance systems and data in a process of continuous improvement. These systems and data-based continuous improvement are essential foundational requirements for CAEP accreditation. The self-study report provides an opportunity for the EPP to describe how well its quality assurance system is working in terms of responding to faculty questions about the effectiveness of preparation and the EPP’s use of that capacity to investigate innovations and inform continuous improvement.

Every provider has a set of procedures, processes, and structures-reporting lines, committees, offices, positions, policies-to ensure quality in hiring, admissions, courses, program design, facilities, and the like. It is the faculty’s way to insure that it has, for example, an appropriate curriculum, faculty, candidates, or program design. In an effective modern education organization, these procedures and structures are supported by a strong and flexible data generation and accessing capacity that-through disaggregation of data by demographic groups and individual preparation programs, different modes of delivery, and different campuses-can answer faculty questions about how well the EPP’s mission is accomplished and its goals met. That same system can serve, as well, to provide evidence and complete analyses of it for accreditation purposes.

For example, in the typical quality assurance system the provider attempts to ensure and monitor faculty quality through such activities as recruitment and search procedures, workload policies, faculty development support, promotion and tenure procedures, and post-tenure reviews. It monitors candidate quality by admissions standards, support services, advisement, course grade requirements, student teaching reviews, state license requirements, institutional standards, hiring rates, and so forth. And it attempts to ensure and monitor the quality of the educator preparation program itself through committees and administrators who review course syllabi, student course evaluations, employer surveys, state program approval reviews, and action research projects. All of these are sustained by extensive and accessible data.

The guiding questions for Standard 5 differ from those under Standards 1-4 because of these distinctions in the purposes of Standard 5. They are as follows:

  • THE QUALITY ASSURANCE SYSTEM-How well is the quality assurance system working for the EPP and how do you know? [component 5.1] Is it able to answer faculty questions about the adequacy of candidate preparation in particular areas (e.g., common core state standards, use of data to monitor student progress, creating assessments appropriate for different instructional purposes)? What modifications has the faculty identified and carried out to change or increase the capabilities of the system?
  • DATA IN THE QUALITY ASSURANCE SYSTEM-What strengths and weaknesses in the quality assurance system do faculty find when they use data and analyses from the system? [component 5.2] Are the data relevant, verifiable, representative, cumulative, and actionable? Can findings be triangulated with multiple data so they can be confirmed or found conflicting? What investigations into the quality of evidence and the validity of their interpretations does the EPP conduct?
  • USE OF DATA FOR CONTINUOUS IMPROVEMENT: What is the evidence that the EPP has improved programs in its continuous improvement efforts? [component 5.3] How have perspectives of faculty and other EPP stakeholders been modified by sharing and reflecting on data from the quality assurance system? [component 5.5] What “innovations” or purposeful changes has the EPP investigated, and what were the results? [component 5.3]
  • OUTCOME MEASURES: What has the provider learned from reviewing its annual outcome measures over the past three years? These are the measures in component 5.4 (initial level, reported to CAEP annually) and A.5.4 (advanced level, not in the annual report to CAEP):
      • Licensure rate
      • Completion rate
      • Employment rate
      • Consumer information, such as places of employment and initial compensation (including student loan default rates)

Summary Statement:  In the fall of 2015, the EPP underwent major changes in personnel.  The two individuals charged with accreditation left the institution rather abruptly, and they had been responsible for implementing the data collection the EPP established for the previous NCATE accreditation visit.  The individuals who assumed these positions, the Director of Teacher Education (DTE) and the Field Experience and Assessment Coordinator (FEAC), are new to their roles, and both are new to the accreditation process.  In the interest of full disclosure, the EPP continues to work on establishing a solid, effective system for analyzing data.  Including multiple stakeholders within the institution as well as outside Manchester University, such as the Teacher Advisory Council and other community partners, continues to be critical as the EPP works toward a more intentional process of using data to make programmatic improvements.  The use of the CORE software program and other electronic platforms, such as a shared drive for internal data collection, will provide a more streamlined approach to data collection.

As the MU EPP has spent the last two years working to understand the CAEP accreditation process and investigating its quality assurance system, it has learned a great deal about itself and the direction it needs to take in the future.  Using the guiding questions provided by CAEP, the MU EPP considered the following questions:

  1. Is the quality assurance system working for the MU EPP and what evidence exists to demonstrate how well? (CAEP 5.1)
  2. What assessment changes has the faculty identified based on the quality assurance system?
  3. What strengths and weaknesses in the quality assurance system did the EPP identify when it examined the data and analyses from the system? (CAEP 5.2)
  4. Does the EPP collect relevant data that is verifiable, representative, cumulative, and actionable?
  5. How has the EPP used data to improve programs or what innovations has it investigated? (CAEP 5.3)
  6. What has the MU EPP learned from reviewing annual outcome measures (such as licensure rate, completion rate, employment rate, and student loan default rates) over the past three cycles? (CAEP 5.4)
  7. In what ways have faculty perspectives changed by sharing and reflecting on data from the quality assurance system? (CAEP 5.5)

The following evidence packets lend themselves to understanding the EPP’s quality assurance system:  the Quality Assurance System (QAS) evidence packet, the SCE Impact on Student Learning evidence packet, the Danielson Framework evidence packet, the Candidate Recruitment and Completion (CRC) evidence packet, and the Employer and Completer Satisfaction (ECS) evidence packet.  Each of these is included in the supporting evidence documents and is referenced throughout this narrative in parentheses.

Accreditation reports such as program annual reports, traditional Title II and Alternative Title II reports, survey data, observation data, institutional hiring rates, and student loan default rates can be found at the MU EPP Accreditation page on the Manchester University web page (https://www.manchester.edu/academics/colleges/college-of-education-social-sciences/academic-programs/education/education-home/accreditation).

Standard 5.1 The EPP designed the Quality Assurance System (QAS) to be an intentional, sequential, and consistent assessment of the program’s learning goals, course content, licensure programs, and Manchester University candidates and completers.  Using the multiple measures outlined in the previous standards and in the Teacher Education Student Handbook and the Student Teaching Handbook, the EPP regularly monitors candidates’ progress through the program (handbooks found in Supplemental Evidence packet). 

At the core of the QAS is the development of the whole candidate, reflected in the belief that growth should be supported by scaffolded experiences in clinical placements, courses, and dispositional reflection.  The QAS recognizes that nearly 100% of MU candidates are traditional, recent high school graduates attending college for the first time.  Their view of teaching is from the student’s point of view; therefore, the EPP must support their professional development as they master content knowledge and pedagogy and hone their professional dispositions.  Feedback to candidates is important, and the QAS allows the EPP to monitor progress in order to support this development.  The FEAC updates files based on Pearson testing results, GPA, field experience completion and evaluation, and dispositional checks completed by clinical faculty and other stakeholders (QAS packet).  Data is recorded as it is collected.  For example, each Friday, the FEAC receives testing results from Pearson.  These results are recorded immediately in the candidates’ files as well as on the spreadsheet used to monitor progression toward admission to the program or approval for student teaching.  When appropriate, candidates’ progress and data are discussed at weekly department meetings, and the Teacher Education Committee (TEC) must grant approval for admission to the program based on the data collected.

Besides tracking candidate performance on required elements to progress through the program, such as test scores and dispositions, the EPP has identified key assessments related to InTASC standards and individual Specialized Professional Associations (SPAs).  The EPP has submitted all required programs for SPA review and awaits final decisions on those that had to submit rejoinders (see CAEP Standard 1 SPA section).

Annually, as outlined in CAEP 4.4, the EPP uses data from the alumni survey collected by the institution as well as data provided by the Indiana Department of Education employer satisfaction survey to monitor its effectiveness (ECS packet).  This data is analyzed and reflected upon in the EPP’s annual report; it is then used in the institutional annual report for the Office of Institutional Effectiveness.

Additionally, the EPP serves as an important part of the institution’s accreditation through the Higher Learning Commission (HLC), and in July 2017, Manchester University successfully submitted the mid-cycle Assurance Review under the HLC’s Open Pathway for re-accreditation (https://www.manchester.edu/about-manchester/institutional-effectiveness).  The DTE was a member of the institutional committee charged with writing the review.  In general, multiple stakeholders participate in the quality assurance system at Manchester University, and the EPP takes part in annual program evaluation through the HLC.

Standard 5.2  To support the development of candidates to completion, the EPP uses data which is relevant, verifiable, representative, and actionable.  Each program is held accountable through the following measures:

  1. Using EPP-designed rubrics based on InTASC standards, the EPP measures candidates’ dispositions.
  2. Knowledge is measured using standardized tests created by Pearson, including the CASA, Pearson content exams, and Pearson pedagogy exams.
  3. EPP-designed rubrics measure candidates’ performance in clinical experiences.
  4. The Danielson Framework, a nationally accepted, valid, and reliable measure of teaching performance, is introduced to all candidates during the junior year as they design their curriculum unit and is used to evaluate them as student teachers.
  5. Based on individual Specialized Professional Association standards, such as those of NCTM and NCTE, content rubrics have been developed and are used to evaluate candidates’ content knowledge during the student teaching clinical experience.
  6. An EPP-designed rubric is used by a team of EPP faculty to evaluate the candidates’ Senior Comprehensive Evaluation (SCE): Impact on Student Learning, the capstone project required for graduation.
  7. Major checkpoints are established and monitored closely (QAS Outline).

The EPP meets weekly to discuss accreditation requirements as well as candidates’ progression through the program.  Interventions and remediation plans are discussed, and programmatic goals and curriculum are reflected upon and analyzed on an ongoing basis.  As part of the EPP’s quality assurance system, the Teacher Education Committee meets monthly to examine candidates’ Pearson test scores, to admit candidates to the program, to consider curricular changes, and to assist in program review (CRC packet).  Each semester, the Teacher Advisory Council meets to discuss program goals and candidates’ performance on all measures, and to provide feedback to the EPP regarding the development of high-quality completers.

Standard 5.3 As mentioned in Standard 5.2, the EPP meets weekly to discuss its program goals, candidates’ progress, and innovative ideas to test.  Over the past three years, the EPP has struggled to find its footing with the assessment system.  Because of personnel turnover and the new DTE’s and FEAC’s inexperience with the accreditation process, the EPP has worked hard to establish a solid system of data collection to ensure high-quality completers enter the workforce.  It believes it has taken important steps in establishing a QAS that will allow for informed decisions.

Twice a year, the EPP meets with the Teacher Advisory Council (TAC), a group of alumni, administrators, clinical faculty, Manchester University administrators, and members of the EPP.  Not only does this group reflect on the program’s impact on candidates and completers, but it also suggests innovative ideas to meet the demands of the market and best prepare 21st century teachers.  Two important ideas emerged from these meetings, and the EPP is monitoring their impact closely.  The first is the requirement to pass content exams prior to student teaching.  The second is the integration of e-learning lesson design into the integrated curriculum unit all candidates create in the literacy course in which they enroll during their junior year.  While the EPP finds it relatively easy to monitor the first requirement by tracking candidates who apply to student teach against those who earn permission and complete the student teaching clinical experience, it finds it more difficult to assess the e-learning lesson innovation.  It is still considering ways to assess candidate growth in this area.

Adopted in the fall of 2015, the policy requiring candidates to pass the Pearson content exams for approval to student teach went into effect with the 2018 cohort of completers.  As discussed earlier, only 36% of the original potential candidates gained permission to student teach and completed the program as planned.  The low pass rate shocked the EPP as well as other stakeholders; however, based on feedback from the TAC as well as the administrators in attendance at the community partnership lunch held in spring of 2018, the EPP believes this is a step in the right direction.  Clinical faculty are hesitant to turn over their classrooms to candidates who are under-qualified in their content areas; requiring candidates to pass the content exams relieves this apprehension and allows clinical faculty to help student teachers focus on pedagogy.

The EPP continues to search for ways to creatively and authentically prepare candidates to use technology throughout their instruction.  It recognizes that many of the school corporations in Indiana have adopted e-learning days.  In the fall of 2017, the EPP invited a program completer (a current practitioner recognized by her corporation as a technology expert) to present at the annual tech summit the EPP requires of its candidates.  Not only did she present the SAMR model, but she also worked with candidates to create effective e-learning lessons.  She shared her guidelines for designing e-learning lessons with the EPP, which then integrated the format into the curriculum unit assignment in the literacy courses.

Through analysis of the SCE data, the EPP is revising the capstone project during the 2018-2019 school year to reflect a work sample model.  Currently, while the project involves creating and teaching a curriculum unit in the student teaching clinical placement, it results in a mini-thesis rather than a work sample such as those completed by NBCT applicants.  Instead of a research paper containing literature review, methodology, data analysis, and implications sections, the EPP is examining ways to create a work sample that involves recording and reflecting on the teaching of lessons within the unit and provides evidence of impact on student learning.  Feedback from recent completers as well as clinical faculty will help the EPP design a more effective and authentic tool for measuring candidates’ impact on student learning and their ability to reflect on their obligations as professionals.

Standard 5.4  The responsibility for the EPP’s quality assurance system rests primarily with the DTE and the FEAC; however, they frequently include stakeholders in the data analysis and in the implementation of program changes.  Because Indiana is not an edTPA state, individual institutions must develop their own methods for measuring completer impact on P-12 students.  The candidates’ capstone project, Senior Comprehensive Evaluation: Impact on Student Learning, serves as a key piece of evidence of completers’ impact on student learning (SCE packet).  It also serves as the program’s graduation requirement for the university.

Building on different experiences prior to the senior year, teacher candidates develop a standards-based curriculum grounded in a researched best practice and develop appropriate assessments to measure student growth.  In the fall prior to the student teaching experience, the teacher candidates collaborate with their student teaching clinical faculty to develop a unit they will use during student teaching.  A key piece of this is the focus on a researched best practice and the assessments used to measure student growth.  Currently, data on the project is collected when the project is turned in; however, the majority of candidates rewrite their papers prior to presenting their research posters to faculty, peers, and their families.  The data is alarming, as the average score per rubric standard ranges from 1.32 to 1.75 out of 3.0.  Candidates struggle with differentiated instruction and with using data to drive instruction.  These numbers mirror the results of the employer survey conducted by the Indiana Department of Education (ECS packet).

While the state of Indiana does require schools to complete surveys regarding impact on student growth, the return rate has been relatively small (only 7 respondents in August 2017), leaving EPPs with little information.  The survey indicated an overall rating of 2.71/4.0 on Q12 (analyzes student assessment to improve classroom instruction).  During departmental meetings, the EPP reflected on this feedback and determined to explore ways to provide more authentic experiences with analyzing data to make pedagogical decisions.  While EDUC 245: Assessment is a cornerstone course, it is often taken prior to admission to the program.  Building on the foundational content covered in the course, the EPP will look for clinical experiences and assignments that require candidates to use data to drive instruction.  This, coupled with the revision of the SCE: Impact on Student Learning capstone project to a work sample, should improve candidates’ impact on student learning.

Standard 5.5  As mentioned in Standard 5.3, twice a year the Teacher Advisory Council spends an evening collaborating and examining programmatic needs (CAC packet).  For nearly three decades, the EPP has worked to maintain a balanced membership of alumni, practitioners, administrators, EPP faculty, senior institutional leadership (such as the Dean of the College of Education and Social Sciences and the Vice President of Student and Academic Affairs), and clinical supervisors.  Typically, the Council works in small groups to analyze data, discuss ways to deepen understanding of InTASC or CAEP standards, and identify areas the EPP can improve through the integration of current best practices.  For example, the EPP brought the proposal to require passing the Pearson content licensure exams prior to student teaching to the Council for feedback.

Since the last accreditation cycle in 2011, the EPP has convened a cadre of area administrators (separate from the TAC) that meets at least once a year to offer verbal feedback regarding the program (Clinical Partnership packet).  The group focuses on integrating authentic and current clinical practices into the preparation program.  Convening this group has established open lines of communication, and members of the EPP frequently collaborate with these administrators on projects, both short-term, such as the Akron family literacy night, and long-term, as reflected in the RTI experience at Manchester Intermediate School, which emerged from the needs of clinical faculty.

The EPP also receives annual feedback from program completers and alumni through the institution’s survey (ECS packet).  During the annual department retreat and subsequent department meetings, the EPP examines and reflects upon the survey feedback, focusing particularly on the completers’ perceptions of how well the program prepared them for the classroom.  When necessary, the EPP makes curricular or programmatic changes.

Strengths and Challenges:  As mentioned in the first section of the CAEP self-study, the EPP shares the institution’s deep roots in teacher preparation.  Because thousands of Manchester (College) University educators fill classrooms across the world, the EPP values relationships with its alumni.  It intentionally involves alumni, employers, clinical faculty, and community partners in the decision-making process.  As a result, the following strengths exist in the Manchester University EPP Quality Assurance System:

  1. The EPP is headed in the right direction with its Quality Assurance System (Q1, considered throughout the analysis process).  It has built upon previously implemented assessments, and it believes the system is more intentional than it has ever been.  The program reflects the InTASC standards, individual SPA organizations’ standards, Indiana professional educator standards, and CAEP standards.  With the support of the Manchester University Office of Institutional Effectiveness and the Vice President of Academic Affairs, the EPP has purchased the CORE software package, which will streamline the collection and analysis of data.  Additionally, despite being novices to accreditation processes, the DTE and the FEAC now have a much better understanding of CAEP expectations.  The EPP appreciates the direct alignment between standards, program goals, assessments, and intentional changes based on this information.  It has a clearer picture of where it is headed as an EPP.
  2. Multiple measures exist to monitor candidates’ progress and completion (Q2 and Q4).  These measures include Pearson standardized tests (CASA, content exams, pedagogy exams), disposition checks, clinical field experience evaluations by clinical faculty, GPAs both in the major and overall, the SCE Impact on Student Learning capstone project with an EPP-created rubric, performance on the Danielson Framework, and employer and completer survey responses as well as the graduating senior and recent graduate surveys administered by the institution.

    The EPP meets weekly as a unit to discuss the progress of candidates and to consider curriculum and program changes.  The EPP also holds annual retreats during the summer to spend uninterrupted time diving deeply into the data.  Important changes emerge from this introspection and collaboration among colleagues.

  3. Open lines of communication exist for all stakeholders.  Alumni frequently email or mail notes/cards to members of the EPP.  These forms of communication provide anecdotal feedback to the program in the form of positive reinforcement or suggestions for program changes.  Alumni have a vested interest in the success of candidates, and as a result, they support candidates in their development both in program suggestions and in the hiring of completers.  One school system has five administrators who are MU completers and who make a practice of hiring MU practitioners.
  4. Manchester University has a solid reputation among local school systems for producing high-quality, professional completers (Q6).  This is reflected in the hiring rate above 95% for completers as well as the 3.14/4.0 overall satisfaction rating of the program reported on the employer survey.
  5. One of the challenges facing the MU EPP is the small size of the program.  The small number of candidates makes data comparisons between the different programs difficult.

Implications:  Despite the strengths of the EPP’s QAS, the program has set the following goals for itself:

  1. Use the CAEP standards, feedback from the SPA reports, stakeholders’ visions, and other measures to redesign the program based on final outcomes.  In the fall of 2018, the program will begin to use information from the CAEP self-study (both what exists and what it believes is missing) to frame the discussion and vision of the 21st century practitioner.  It would like to frame its program around clinical experiences co-created with community partners, and it would like to use the work sample model to document candidates’ reflection on their teaching and professional development.
  2. Based on data analysis, the EPP has created a few innovations (Q5), which it is currently monitoring for effectiveness.  These include the requirement to pass the Pearson content exams prior to student teaching and the implementation of the e-learning lesson in the integrated lesson plan in the literacy courses during the junior year.
  3. The EPP will continue to explore ways to increase candidates’ content knowledge based on the Pearson content exams (Q7).  Candidates tend to perform well on the pedagogy exam, but many struggle to pass the content exams.  To increase the number of completers, the EPP will need to work with content faculty to align coursework with the content standards assessed on the exams.
  4. More intentional collaboration with content faculty associated with licensure programs will need to take place in order to help departments take ownership of these programs (Q3 and Q7).  For example, during the writing of the SPA reports, the chair of the English department recognized the importance of the department taking ownership of the alignment between standards, assessments, and program course requirements.  With support from only the DTE, the chair successfully resubmitted the NCTE SPA report and earned national recognition.

Evidence Packets

Evidence Packet 1: SCE Impact on Student Learning
SCE Impact on Student Learning Data
SCE Impact on Student Learning Data July 2018

Evidence Packet 2: Candidate Recruitment and Retention
Candidate Profiles for Recommendation to Program
Candidate Dispositions Evaluation spreadsheet
Licensure Tests and GPAs 2016-2018
Content and Pedagogy Scores 2016-18
Content Specific Rubrics
SCE Impact on Student Learning Data July 2018

Evidence Packet 3: Danielson Framework Data
Danielson Framework
Danielson Framework Comparison Data Spreadsheet
Danielson Framework Data Evidence Packet July 2018

Evidence Packet 4: Employer and Completer Satisfaction
IDOE Teacher Survey
IDOE Principal Survey
Teacher Survey Data Aug. 2017
Principal Survey Data Aug. 2017
Employer and Completer Satisfaction (ECS) July 2018

Evidence Packet 5: Clinical Partnerships
Clinical Partnership (CP) July 2018


Science of Reading

The science of reading refers to a body of research from the fields of education, cognitive psychology, developmental psychology, and neuroscience that explains how individuals learn to read and identifies best practices for reading instruction. In the Henney Department of Education, our mission is to bridge theory and practice to prepare students for vocations that promote individual well-being, community engagement, peace, and social justice. The department received an initial Lilly Endowment grant of $75,000 to aid in further incorporating the science of reading into its curriculum, as well as another Lilly Endowment grant of $500,000 as part of Lilly’s Advancing the Science of Reading initiative.

The Henney Department of Education has received A+ ratings from the National Council on Teacher Quality. See the report below:
National Council on Teacher Quality

Meet the Faculty