Education & Social Sciences

Standard 5:  Provider Quality, Continuous Improvement, and Capacity


Notes from CAEP:  2013 initial preparation: The provider maintains a quality assurance system [component 5.1] comprised of valid data from multiple measures [component 5.2 and outcome measures in 5.4], including evidence of candidates' and completers' positive impact on P-12 student learning and development [NOTE: This is a cross-reference to preservice impact on P-12 student learning from component 3.5 and to inservice impact from Standard 4]. The provider supports continuous improvement that is sustained and evidence-based, and that evaluates the effectiveness of its completers [component 5.3 and the evidence for Standard 4]. The provider uses the results of inquiry and data collection to establish priorities, enhance program elements and capacity, and test innovations to improve completers' impact on P-12 student learning and development [component 5.3].

2016 advanced-level preparation (identical to initial): The provider maintains a quality assurance system comprised of valid data from multiple measures, including evidence of candidates' and completers' positive impact on P-12 student learning and development. The provider supports continuous improvement that is sustained and evidence-based, and that evaluates the effectiveness of its completers. The provider uses the results of inquiry and data collection to establish priorities, enhance program elements and capacity, and test innovations to improve completers' impact on P-12 student learning and development.

Making a case: Standard 5 occupies a pivotal position in the CAEP standards. It describes the capacity of the EPP to reach its mission and goals through purposeful use of evidence, and it provides a source of evidence that informs all other CAEP standards. This dual function is described in the rationale for Standard 5 of the 2013 CAEP standards for initial preparation, from which the paragraph below is excerpted: "Program quality and improvement are determined, in part, by characteristics of candidates that the provider recruits to the field [i.e., Standard 3]; the knowledge, skills, and professional dispositions that candidates bring to and acquire during the program [i.e., Standard 1]; the relationships between the provider and the P-12 schools in which candidates receive clinical training [i.e., Standard 2]; and subsequent evidence of completers' impact on P-12 learning and development in schools where they ultimately teach [i.e., Standard 4]. To be accredited, a preparation program must meet standards on each of these dimensions and demonstrate success in its own continuous improvement efforts."

Effective organizations use evidence-based quality assurance systems and data in a process of continuous improvement. These systems and data-based continuous improvement are essential foundational requirements for CAEP accreditation. The self-study report provides an opportunity for the EPP to describe how well its quality assurance system responds to faculty questions about the effectiveness of preparation, and how the EPP uses that capacity to investigate innovations and inform continuous improvement.

Every provider has a set of procedures, processes, and structures (reporting lines, committees, offices, positions, policies) to ensure quality in hiring, admissions, courses, program design, facilities, and the like. These are the faculty's means of ensuring that the EPP has, for example, an appropriate curriculum, faculty, candidates, or program design. In an effective modern education organization, these procedures and structures are supported by a strong and flexible capacity for generating and accessing data that, through disaggregation by demographic group, individual preparation program, mode of delivery, and campus, can answer faculty questions about how well the EPP's mission is accomplished and its goals are met. That same system can also provide evidence, and analyses of it, for accreditation purposes.

For example, in the typical quality assurance system the provider attempts to ensure and monitor faculty quality through such activities as recruitment and search procedures, workload policies, faculty development support, promotion and tenure procedures, and post-tenure reviews. It monitors candidate quality by admissions standards, support services, advisement, course grade requirements, student teaching reviews, state license requirements, institutional standards, hiring rates, and so forth. And it attempts to ensure and monitor the quality of the educator preparation program itself through committees and administrators who review course syllabi, student course evaluations, employer surveys, state program approval reviews, and action research projects. All of these are sustained by extensive and accessible data.

The guiding questions for Standard 5 differ from those under Standards 1-4 because of these distinctions in the purposes of Standard 5. They are as follows:

  • THE QUALITY ASSURANCE SYSTEM: How well is the quality assurance system working for the EPP, and how do you know? [component 5.1] Is it able to answer faculty questions about the adequacy of candidate preparation in particular areas (e.g., Common Core State Standards, use of data to monitor student progress, creating assessments appropriate for different instructional purposes)? What modifications has the faculty identified and carried out to change or increase the capabilities of the system?
  • DATA IN THE QUALITY ASSURANCE SYSTEM: What strengths and weaknesses in the quality assurance system do faculty find when they use data and analyses from the system? [component 5.2] Are the data relevant, verifiable, representative, cumulative, and actionable? Can findings be triangulated across multiple sources of data so they can be confirmed or found conflicting? What investigations into the quality of evidence and the validity of their interpretations does the EPP conduct?
  • USE OF DATA FOR CONTINUOUS IMPROVEMENT: What is the evidence that the EPP has improved programs in its continuous improvement efforts? [component 5.3] How have perspectives of faculty and other EPP stakeholders been modified by sharing and reflecting on data from the quality assurance system? [component 5.5] What "innovations" or purposeful changes has the EPP investigated, and what were the results? [component 5.3]
  • OUTCOME MEASURES: What has the provider learned from reviewing its annual outcome measures over the past three years? These are the measures in component 5.4 (initial level, reported to CAEP annually) and A.5.4 (advanced level, not in the annual report to CAEP):
      • Licensure rate
      • Completion rate
      • Employment rate
      • Consumer information, such as places of employment and initial compensation (including student loan default rates)

Summary Statement:  In the fall of 2015, the EPP underwent major changes in personnel.  The two individuals charged with accreditation left the institution rather abruptly, and they were the ones responsible for implementing the data collection system the EPP had established for the previous NCATE accreditation visit.  The individuals who assumed these positions are new to their roles as DTE and Field Experience and Assessment Coordinator (FEAC); both are new to the accreditation process.  In full disclosure, the EPP continues to work on establishing a solid, effective system for analyzing data.  Including multiple stakeholders within the institution as well as outside Manchester University, such as the Teacher Advisory Council and other community partners, continues to be critical as the EPP works toward a more intentional process of using data to make programmatic improvements.  The use of the CORE software program and other electronic platforms, such as a shared drive for internal data collection, will provide a more streamlined approach to managing data.

As the MU EPP has spent the last two years working to understand the CAEP accreditation process and investigating its quality assurance system, it has learned a great deal about itself and the direction it needs to head in the future.  Using the guiding questions provided by CAEP, the MU EPP considered the following:

  1. Is the quality assurance system working for the MU EPP and what evidence exists to demonstrate how well? (CAEP 5.1)
  2. What assessment changes has the faculty identified based on the quality assurance system?
  3. What strengths and weaknesses in the quality assurance system did the EPP identify when it examined the data and analyses from the system? (CAEP 5.2)
  4. Does the EPP collect relevant data that is verifiable, representative, cumulative, and actionable?
  5. How has the EPP used data to improve programs or what innovations has it investigated? (CAEP 5.3)
  6. What has the MU EPP learned from reviewing annual outcome measures (such as licensure rate, completion rate, employment rate, and student loan default rates) over the past three cycles? (CAEP 5.4)
  7. In what ways have faculty perspectives changed by sharing and reflecting on data from the quality assurance system? (CAEP 5.5)

The following evidence packets lend themselves to understanding the EPP’s quality assurance system:  the Quality Assurance System (QAS) evidence packet, the SCE Impact on Student Learning evidence packet, the Danielson Framework evidence packet, the Candidate Recruitment and Completion (CRC) evidence packet, and the Employer and Completer Satisfaction (ECS) evidence packet.  Each of these is included in the supporting evidence documents and is referenced throughout this narrative in parentheses.

Accreditation reports such as program annual reports, traditional Title II and Alternative Title II reports, survey data, observation data, institutional hiring rates, and student loan default rates can be found at the MU EPP Accreditation page on the Manchester University web page (https://www.manchester.edu/academics/colleges/college-of-education-social-sciences/academic-programs/education/education-home/accreditation).

Standard 5.1 The EPP designed the Quality Assurance System (QAS) to be an intentional, sequential, and consistent assessment of the program’s learning goals, course content, licensure programs, and Manchester University candidates and completers.  Using the multiple measures outlined in the previous standards and in the Teacher Education Student Handbook and the Student Teaching Handbook, the EPP regularly monitors candidates’ progress through the program (handbooks found in Supplemental Evidence packet). 

At the core of the QAS is the development of the whole candidate, and this is reflected in the belief that growth should be supported by scaffolded experiences in clinical placements, courses, and dispositional reflection.  The QAS recognizes that nearly 100% of MU candidates are traditional recent high school graduates attending college for the first time.  They have experienced teaching only from the student’s point of view; therefore, the EPP must support their professional development as they master content knowledge and pedagogy as well as hone their professional dispositions.  Feedback to candidates is important, and the QAS allows the EPP to monitor progress in order to support this development.  The FEAC updates files based on Pearson testing results, GPA, field experience completion and evaluation, and dispositional checks completed by clinical faculty and other stakeholders (QAS packet).  Data is recorded as it is collected.  For example, each Friday the FEAC receives testing results from Pearson.  These results are recorded immediately in the candidates’ files as well as on the spreadsheet used to monitor progression toward admission to the program or approval for student teaching.  When appropriate, candidates’ progress and the related data are discussed at weekly department meetings, and the Teacher Education Committee (TEC) must grant approval for admission to the program based on the data collected.

Besides tracking candidate performance on the elements required to progress through the program, such as test scores and dispositions, the EPP has identified key assessments related to InTASC standards and individual Specialized Professional Associations (SPAs).  The EPP has submitted all required programs for SPA review and awaits final decisions on those that had to submit rejoinders (see CAEP Standard 1 SPA section).

Annually, as outlined in CAEP 4.4, the EPP uses data from the alumni survey administered by the institution as well as data provided by the Indiana Department of Education employer satisfaction survey to monitor its effectiveness (ECS packet).  This data is analyzed and reflected upon in the EPP’s annual report; it is then used in the institutional annual report for the Office of Institutional Effectiveness.

Additionally, the EPP serves as an important part of the institution's accreditation through the Higher Learning Commission (HLC); in July 2017, Manchester University successfully submitted the mid-cycle Assurance Review under the HLC’s Open Pathway for reaccreditation (https://www.manchester.edu/about-manchester/institutional-effectiveness).  The DTE was a member of the institutional committee charged with writing the review.  In general, multiple stakeholders comprise the quality assurance system at Manchester University, and the EPP participates in annual program evaluation through the HLC.

Standard 5.2  To support the development of candidates through to completion, the EPP uses data which is relevant, verifiable, representative, and actionable.  Each program is held accountable through the following measures:

  1. Using EPP-designed rubrics based on InTASC standards, the EPP measures candidates’ dispositions.
  2. Knowledge is measured using standardized tests created by Pearson, including the CASA, Pearson content exams, and Pearson pedagogy exams.
  3. EPP-designed rubrics measure candidates’ performance in clinical experiences.
  4. The Danielson Framework, a nationally accepted, valid, and reliable measure of teaching performance, is introduced to all candidates during the junior year as they design their curriculum unit and is used to evaluate them as student teachers.
  5. Based on individual Specialized Professional Association standards, such as those of NCTM and NCTE, content rubrics have been developed and are used to evaluate candidates’ content knowledge during the student teaching clinical experience.
  6. An EPP-designed rubric is used by a team of EPP faculty to evaluate the candidates’ Senior Comprehensive Evaluation (SCE): Impact on Student Learning, the capstone project required for graduation.
  7. Major checkpoints are established and monitored closely (QAS Outline).

The EPP meets weekly to discuss accreditation requirements as well as candidates’ progression through the program.  Interventions and remediation plans are discussed, and programmatic goals and curriculum are reflected upon and analyzed on an ongoing basis.  As part of the EPP's quality assurance system, the Teacher Education Committee meets monthly to examine candidates’ Pearson test scores, to admit candidates to the program, to consider curricular changes, and to assist in program review (CRC packet).  Each semester, the Teacher Advisory Council meets to discuss program goals and candidates’ performance on all measures, and to provide feedback to the EPP regarding the development of high-quality completers.

Standard 5.3 As mentioned in Standard 5.2, the EPP meets weekly to discuss its program goals, candidates’ progress, and innovative ideas to test.  Over the past three years, the EPP has struggled to find its footing with the assessment system.  Because of the turnover in personnel and the new DTE’s and FEAC’s lack of experience with the accreditation process, the EPP has worked hard to establish a solid system of data collection to ensure that high-quality completers enter the workforce.  It believes it has taken important steps in establishing a QAS which will allow for informed decisions.

Twice a year, the EPP meets with the Teacher Advisory Council (TAC), a group of alumni, administrators, clinical faculty, Manchester University administrators, and members of the EPP.  Not only does this group reflect on the program’s impact on candidates and completers, but it also suggests innovative ideas to meet the demands of the market and best prepare 21st-century teachers.  Two important ideas emerged from these meetings, and the EPP is monitoring their impact closely.  The first is the requirement that candidates pass the content exams prior to student teaching.  The second is the integration of e-learning lesson design into the integrated curriculum unit all candidates create in the literacy course in which they enroll during their junior year.  While the EPP finds it relatively easy to monitor the first new requirement by tracking candidates who apply to student teach against those who earn permission and complete the student teaching clinical experience, it finds it more difficult to assess the e-learning lesson innovation.  It is still considering ways to assess candidate growth in this area.

Adopted in the fall of 2015, the policy requiring candidates to pass the Pearson content exams for approval to student teach went into effect with the 2018 cohort of completers.  As discussed earlier, only 36% of the original potential candidates gained permission to student teach and completed the program as planned.  The low pass rate shocked the EPP as well as other stakeholders; however, based on feedback from the TAC as well as from the administrators in attendance at the community partnership lunch held in the spring of 2018, the EPP believes this is a step in the right direction.  Clinical faculty are hesitant to turn over their classrooms to candidates whose content knowledge is unproven; requiring passage of the content exams relieves that apprehension and allows clinical faculty to help student teachers focus on pedagogy.

The EPP continues to search for ways to creatively and authentically prepare candidates to use technology throughout their instruction.  It recognizes that many of the school corporations in Indiana have adopted e-learning days.  In the fall of 2017, the EPP invited a program completer (a current practitioner recognized by her corporation as a technology expert) to present at the annual tech summit it requires of its candidates.  Not only did she present the SAMR model, but she also worked with candidates to create effective e-learning lessons.  She shared the guidelines for designing e-learning lessons with the EPP, which then integrated the format into the curriculum unit assignment in the literacy courses.

Through the analysis of the SCE data, the EPP is revising the capstone project during the 2018-2019 school year to reflect a work sample model.  Currently, while the project involves creating and teaching a curriculum unit in the student teaching clinical placement, it results in a mini-thesis rather than a work sample such as those completed by NBCT applicants.  Instead of a research paper containing literature review, methodology, data analysis, and implications sections, the EPP is examining ways to create a work sample that involves recording and reflecting on the teaching of lessons within the unit and documenting evidence of impact on student learning.  Feedback from recent completers as well as clinical faculty will help the EPP design a more effective and authentic tool for measuring candidates’ impact on student learning and their ability to reflect on their obligations as professionals.

Standard 5.4  The responsibility for the EPP’s quality assurance system rests primarily with the DTE and the FEAC; however, they frequently include stakeholders in the data analysis and in the implementation of program changes.  Because Indiana is not an edTPA state, individual institutions must develop their own methods for measuring completer impact on P-12 students.  The candidates’ capstone project, Senior Comprehensive Evaluation: Impact on Student Learning, serves as a key piece of evidence for completers’ impact on student learning (SCE packet).  It also serves as the program's graduation requirement for the university.

Based on different experiences prior to the senior year, teacher candidates develop a standards-based curriculum unit grounded in a researched best practice and develop appropriate assessments to measure student growth.  In the fall prior to the student teaching experience, the teacher candidates collaborate with their student teaching clinical faculty to develop a unit they will use during student teaching.  A key piece of this is the focus on a researched best practice and the assessments used to measure student growth.  Currently, data on the project is collected only when the project is turned in; however, the majority of candidates rewrite their papers prior to presenting their research posters to faculty, peers, and their families.  The data is alarming: the average score per standard on the rubric ranges from 1.32 to 1.75 out of 3.0.  Candidates struggle with differentiated instruction and with using data to drive instruction.  These numbers mirror the results of the employer survey conducted by the Indiana Department of Education (ECS packet).

While the state of Indiana does require schools to complete surveys regarding completers’ impact on student growth, the return rate has been relatively low (only 7 respondents in August 2017), leaving EPPs with little information.  The survey indicated an overall rating of 2.71/4.0 on Q12 ("analyze student assessment to improve classroom instruction").  During departmental meetings, the EPP reflected on this feedback and determined to explore ways to provide more authentic experiences with analyzing data to make pedagogical decisions.  While EDUC 245: Assessment is a cornerstone course, it is often taken prior to admission to the program.  Building on the foundational content covered in that course, the EPP will look for clinical experiences and assignments which require candidates to use data to drive instruction.  This, coupled with the revision of the SCE: Impact on Student Learning capstone project into a work sample, should improve candidates’ impact on student learning.

Standard 5.5  As mentioned in Standard 5.3, twice a year the Teacher Advisory Council spends an evening collaborating and examining programmatic needs (CAC packet).  For nearly three decades, the EPP has worked to maintain a balanced membership of alumni, practitioners, administrators, EPP faculty, senior institutional leadership (such as the Dean of the College of Education and Social Sciences and the Vice President of Student and Academic Affairs), and clinical supervisors.  Typically, the Council works in small groups to analyze data, discuss ways to deepen understanding of the InTASC or CAEP standards, and identify areas the EPP can improve through the integration of current best practices.  For example, the EPP brought the proposal to require passage of the Pearson content licensure exams prior to student teaching to the Council for feedback.

Since the last accreditation cycle in 2011, the EPP has convened a cadre of area administrators who meet at least once a year (separate from the TAC) to offer verbal feedback regarding the program (Clinical Partnership packet).  The group focuses on integrating authentic and current clinical practices into the preparation program.  Open lines of communication have been established through this group, and members of the EPP frequently collaborate with these administrators on projects, both short-term, such as the Akron family literacy night, and long-term, such as the RTI experience at Manchester Intermediate School, which emerged from the needs of clinical faculty.

The EPP also receives annual feedback from program completers and alumni through the institution’s survey (ECS packet).  During the annual department retreat and subsequent department meetings, the EPP examines and reflects upon the survey feedback, focusing particularly on completers’ perceptions of how well the program prepared them for the classroom.  When necessary, the EPP makes curricular or programmatic changes.

Strengths and Challenges:  As mentioned in the first section of the CAEP self-study, the EPP shares the institution’s deep roots in teacher preparation.  Because thousands of Manchester (College) University educators fill classrooms across the world, the EPP values relationships with its alumni.  It intentionally involves alumni, employers, clinical faculty, and community partners in the decision-making process.  As a result, the following strengths exist in the Manchester University EPP Quality Assurance System:

  1. The EPP is headed in the right direction with its Quality Assurance System (Question 1, considered throughout the analysis process).  It has built upon previously implemented assessments, and it believes the system is more intentional than it has ever been.  The program reflects the InTASC standards, individual SPA organizations’ standards, Indiana professional educator standards, and CAEP standards.  With the support of the Manchester University Office of Institutional Effectiveness and the Vice President of Academic Affairs, the EPP has purchased the CORE software package, which will streamline the collection and analysis of data.  Additionally, despite being novices to accreditation processes, the DTE and the FEAC now have a much better understanding of CAEP expectations.  The EPP appreciates the direct alignment between standards, program goals, assessments, and intentional changes based on this information.  It has a clearer picture of where it is headed as an EPP.
  2. Multiple measures exist to monitor candidates’ progress and completion (Q2 and Q4).  These measures include Pearson standardized tests (CASA, content exams, pedagogy exams), disposition checks, clinical field experience evaluations by clinical faculty, GPAs both in the major and overall, the SCE Impact on Student Learning capstone project with an EPP-created rubric, performance on the Danielson Framework, and employer and completer survey responses as well as the graduating senior and recent graduate surveys administered by the institution.

    The EPP meets weekly as a unit to discuss the progress of candidates and to consider curriculum and program changes.  It also holds annual retreats during the summer to spend uninterrupted time diving deeply into the data.  Important changes emerge from this introspection and collaboration among colleagues.

  3. Open lines of communication exist for all stakeholders.  Alumni frequently email or mail notes and cards to members of the EPP.  These forms of communication provide anecdotal feedback to the program in the form of positive reinforcement or suggestions for program changes.  Alumni have a vested interest in the success of candidates, and as a result, they support candidates’ development both through program suggestions and through the hiring of completers.  One school system has five administrators who are MU completers and who make a practice of hiring MU practitioners.
  4. Manchester University has a solid reputation among local school systems for producing high-quality, professional completers (Q6).  This is reflected in the above-95% hiring rate for completers as well as in the 3.14/4.0 overall satisfaction rating of the program reported on the employer survey.
  5. One of the challenges facing the MU EPP is the small size of the program.  The small number of candidates makes data comparisons between the different licensure programs difficult.

Implications:  Despite the strengths of the EPP’s QAS, the program has set the following goals for itself:

  1. Use the CAEP standards, feedback from the SPA reports, stakeholders’ visions, and other measures to redesign the program based on final outcomes.  In the fall of 2018, the program will begin to use information from the CAEP self-study (both what exists and what it believes is missing) to frame the discussion and vision of the 21st-century practitioner.  It would like to frame its program around clinical experiences co-created with community partners, and it would like to use the work sample model to document candidates’ reflection on their teaching and professional development.
  2. Based on data analysis, the EPP has created a few innovations (Q5) which it is currently monitoring for effectiveness.  These include the requirement to pass the Pearson content exams prior to student teaching and the implementation of the e-learning lesson in the integrated lesson plan in the literacy courses during the junior year.
  3. The EPP will continue to explore ways to increase candidates’ content knowledge as measured by the Pearson content exams (Q7).  Candidates tend to perform well on the pedagogy exam, but many struggle to pass the content exams.  To increase the number of completers, the EPP will need to work with content faculty to align coursework with the content standards assessed on the exams.
  4. More intentional collaboration with the content faculty associated with licensure programs will need to take place in order to help departments take ownership of these programs (Q3 and Q7).  For example, during the writing of the SPA reports, the chair of the English department recognized the importance of the department taking ownership of the alignment between standards, assessments, and program course requirements.  With support from only the DTE, the chair successfully resubmitted the NCTE SPA report and earned national recognition.