Education & Social Sciences

Standard 4: Program Impact


Notes from CAEP: Program Impact (NOTE: The role of states in generating evidence for the components of Standard 4 is dynamic and promises to remain so for some years as states sort out how best to fulfill their program approval, licensure, and data-gathering responsibilities. For that reason, the resources available to EPPs to document program impact are highly varied. CAEP has provided suggestions in the Evidence Guide, and in questions and answers online, describing examples for Standard 4 under various conditions. In addition, the Board of Directors has adopted a transition policy for Standard 4 under which it will not be necessary for EPPs to fully meet all four components under initial preparation until self-studies are submitted for academic year 2019/2020.)

2013 initial preparation: The provider demonstrates the impact of its completers on P-12 student learning and development [component 4.1], classroom instruction [component 4.2], and schools [component 4.3], and the satisfaction of its completers [component 4.4] with the relevance and effectiveness of their preparation.

2016 advanced-level preparation: The provider documents the satisfaction of its completers from advanced preparation programs [component 4.2] and their employers [component 4.1] with the relevance and effectiveness of their preparation.

[NOTE: Under CAEP Board policy, all components of Standard 4 and Standard A.4 must be met for full accreditation.]

Making a case: In Standard 4, the provider demonstrates that the pre-service preparation covered in Standards 1 and 3 equips pre-service teachers to have a positive impact on P-12 student learning and development for all students. The provider should present its evidence that completers are having a positive impact on P-12 student learning. The four components of initial preparation Standard 4 are four of the eight CAEP key indicator measures; they are important measures of EPP performance and part of the EPP's continuing accountability to its stakeholders and the public. Effective performance by teachers and other education professionals is a fundamental goal of the CAEP Standards; therefore, the provider must meet this standard to be accredited. For completers from advanced preparation programs, the Standard 4 case should include evidence of the satisfaction of completers and employers with the relevance and effectiveness of their preparation.

The guiding questions may help focus the selection of evidence and the EPP's inquiry into the case it makes:

  • How does the provider document, using multiple measures, that program completers contribute to an expected level of student-learning growth?
  • How does the provider demonstrate:
      • through structured and validated observation instruments and/or student surveys, that completers effectively apply the professional knowledge, skills, and dispositions that the preparation experiences were designed to achieve?
      • using measures that result in valid and reliable data, and including employment milestones such as promotion and retention, that employers are satisfied with the completers' preparation for their assigned responsibilities in working with P-12 students?
      • using measures that result in valid and reliable data, that program completers perceive their preparation as relevant to the responsibilities they confront on the job, and that the preparation was effective?
EPPs should reflect on:
  • STRENGTHS AND CHALLENGES-What strengths and areas of challenge have you discovered about the impact of completers who are employed in the education professional positions for which they were prepared as you analyzed and compared the results of your disaggregated data by program and by demographics? What questions have emerged that need more investigation? How are you using this information for continuous improvement?
  • TRENDS-What trends have emerged about completer performance and completer/employer satisfaction with preparation as you compared program and demographic data across evidence sources and programs? What questions have emerged that need more investigation? How are you using this information for continuous improvement?
  • IMPLICATIONS-What implications can you draw, or conclusions can you reach, across evidence sources about completer performance and completer/employer satisfaction with preparation? What questions have emerged that need more investigation? How are you using this information for continuous improvement? How have data-driven decisions on changes been incorporated into preparation?

Summary Statement:

Reviewers will find analysis of the MU EPP’s evidence for CAEP standard 4 in the Employer and Completer Satisfaction (ECS) evidence packet, the Danielson Framework evidence packet, and the SCE Impact on Student Learning Project evidence packet.  The EPP asked the following questions:

  1. What multiple measures does the EPP use to evaluate whether program completers contribute to an expected level of student-learning growth?
  2. Based on the data collected from the Danielson Framework, do completers effectively apply the professional knowledge, skills, and dispositions expected of MU completers?
  3. Based on the employer satisfaction data provided by the Indiana Department of Education, are employers satisfied with the completers' preparation for their assigned responsibilities in working with P-12 students?
  4. Do completers perceive their program adequately prepared them to be successful in the classroom? Was their preparation effective and relevant?
  5. Are program completers hitting milestones in their employment? Are they earning recognition for their work?

Standard 4.1 The EPP continues to develop ways to track program completers’ impact on P-12 students’ learning.  As candidates progress through the program, they must complete several clinical experiences which require them to work with data and research-based best practices to impact P-12 students’ learning.  In the junior year, candidates work all year long with small groups of students, tailoring lessons to meet individual needs.  Through collaboration with clinical faculty, candidates explore effective pedagogy as it relates to students’ performance on assessments.  During student teaching, the candidate completes an extensive impact on student learning research project which requires the candidate to collaborate with the cooperating teacher to identify a unit plan, design assessments including pre- and post-tests, select a best practice for teaching content, create a literature review of the research associated with the best practice, implement the unit, and analyze collected data to determine the impact on student growth (SCE packet).  This capstone project provides the EPP with a deep understanding of candidates’ ability to impact student learning.
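
The SCE packet does not prescribe a single statistic for quantifying "impact on student growth," so the following is offered only as an illustrative sketch of one common way to summarize the pre- and post-test data the project collects: the normalized gain,

  g = (post − pre) / (100 − pre),

where pre and post are a student's percent-correct scores on the two assessments. For example, a student who moves from 40% on the pre-test to 70% on the post-test earns g = (70 − 40) / (100 − 40) = 0.5, meaning the student closed half of the available room for growth. Averaging g across a class yields one defensible growth figure a candidate could report in the capstone analysis.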

Another measure of candidates’ impact on learners is completed by the university supervisor and the clinical faculty with the Danielson framework rubric.  Assessing candidates with this valid and reliable tool provides insight into observable teaching behaviors impacting student-learning growth.  Each of the four domains covered by the rubric lends itself to creating positive learning experiences for all P-12 students, and the student teacher is held accountable for impacting learning (Danielson packet).

Additionally, the only way the EPP can currently measure the impact completers have on P-12 students once they leave the program is through employer survey data (IDOE Employer survey results in ECS packet). The process has been in place for only the past year in the state of Indiana, and the IDOE struggles to obtain a high response rate. The first set of data is provided in the evidence IDOE Employer survey results.

Finally, successful passing of the Pearson pedagogy exam required for all Indiana teaching licenses indicates completers have an understanding of effective teaching practices (CRC packet). Completers of the MU EPP between 2016 and 2018 have an 86 percent pass rate on the first attempt at the pedagogy exams, indicating a sufficient grasp of effective teaching practices.

Standard 4.2 Because independent research has indicated the Danielson framework is a valid and reliable tool for measuring teacher effectiveness, the EPP uses the framework to evaluate skills and dispositions of its student teachers (Danielson packet).  The EPP introduces the framework to candidates when they enroll in the literacy courses during their junior year.  As they write their curriculum units, they do so through the lens of the Danielson framework, using the tool to guide how they plan curriculum, how they would teach the curriculum, and the type of learning environment they would need to create so all students are successful.  They evaluate their own work using the Danielson framework rubric.  This practice with the tool allows candidates to anticipate how they will be evaluated during their student teaching experience.

Other valid and reliable instruments include the Pearson CASA, content, and pedagogy exams expected of all completers.  Data collected from these standardized tests indicate the MU EPP's candidates are performing at or above state averages.  For example, for 2018, the MU EPP has an institutional pass rate of 85% on the math CASA, compared with a state pass rate of 78%.  The EPP's institutional pass rate on the writing CASA is 89%, eight percentage points higher than the state pass rate.  Only one percentage point separates MU's institutional pass rate on the reading CASA (90%) from the state's (91%) (CRC packet).

The EPP also uses EPP-constructed rubrics including disposition evaluations, departmental approval rubrics, and content-specific observation rubrics used by clinical faculty during student teaching.  Each of these rubrics continues to undergo scrutiny by outside partners such as TEC and TAC, and the EPP is working closely with the Office of Institutional Effectiveness to establish a consistent and streamlined way to collect data and to intentionally use it to drive programmatic decisions.

Standard 4.3 Over the past two years, the Indiana Department of Education has created and distributed an online survey sent to administrators across the state for each first-year teacher they employ.  The response rate has been rather low; however, the IDOE does provide EPPs with the data (ECS packet).  The only data point provided thus far is the survey results from 2016-2017.  Only seven employers of MU completers responded to the survey.  The respondents rated EPP first-year teachers 3.28/4.0 in both their ability to work effectively with other professionals and to work effectively with parents/guardians, reflecting strong approval of the professionalism of completers.  Overall, employers rated their satisfaction with the MU teacher preparation program 3.14/4.0.

In the fall of 2018, the Manchester University EPP will distribute its own survey to employers of 2017 graduates who are currently employed as first year teachers.  The survey is based on the InTASC standards and CAEP cross-cutting themes of diversity and technology.  Currently, the EPP is working with the Alumni Office and the IDOE to locate employers’ contact information for successful distribution of the survey. 

Standard 4.4 Annually, as part of the Manchester University assessment plan, the Office of Institutional Effectiveness surveys recent alumni.  The EPP receives disaggregated data regarding completers' perceptions of their preparation (ECS packet).  Overall, completers rate their teaching program relatively highly.  A five-year average of the data indicates a positive trend in completers feeling prepared for their careers.  In 2014, only 41% of alumni felt extremely well prepared; the figure dropped to 25% in 2015 but has since rebounded, to 75% in 2016 and 69% in 2017.  This increase is important to the EPP because it indicates a positive move, and it also reflects a change in the EPP.  In the fall of 2015, a new DTE was named, and two new hires were made in the program:  a Field Experience and Assessment Coordinator and a new faculty member.

In the spring of 2018, the EPP independently surveyed first-year teachers regarding their perception of their preparation.  The survey mirrors the questions asked on the employer survey the EPP will send to administrators in the fall of 2018.  The Likert scale uses the following ratings:  1-extremely satisfied, 2-moderately satisfied, 3-slightly satisfied, 4-neither satisfied nor dissatisfied, 5-slightly dissatisfied, 6-moderately dissatisfied, and 7-extremely dissatisfied.  Only 25% of the completers submitted the survey, so the data does not give a full picture of the program; however, of those who did complete the online survey, 75% were moderately satisfied with their teacher preparation program, and 25% were extremely satisfied.

Most importantly, during the spring of 2017, the IDOE surveyed recent completers of the MU teaching program using the following scale:  1-strongly disagree, 2-disagree, 3-agree, 4-strongly agree (Attachment CAEP 4A).  According to the results from

Trends, Strengths, and Challenges

The EPP uses the IDOE employer survey and the SCE:  Impact on Student Learning capstone project to evaluate the way program completers contribute to an expected level of student-learning growth; however, the EPP recognizes that it needs to add depth to its data collection (Q1).  In the fall of 2015, the Vice President and Dean of Academic Affairs appointed a new DTE, and the EPP created the position of Field Experience and Assessment Coordinator.  When the two stepped into their new roles, they found an inadequate and fractured data collection system.  Since then, the EPP has worked collaboratively to realign courses with the InTASC standards as well as the CAEP criteria.  While the lack of data posed a great challenge, the EPP sees the accreditation process as an opportunity to redesign a better teacher preparation program.  It acknowledges it has work to do, but it also knows it is on the right track with the online surveys it will send to completers and employers and with the SCE:  Impact on Student Learning.  Both hold great promise for rich data collection as the EPP continues to look for more efficient ways to collect these data.

Based on the data collected from the Danielson Framework, completers effectively apply the professional knowledge, skills, and dispositions expected of MU completers (Q2).  In the fall of 2014, the EPP began intentionally focusing on the type of completer it wanted to graduate.  The Danielson framework reflects the criteria of highly effective teachers, and using this valid and reliable assessment tool assures the EPP of its candidates’ effectiveness. 

Additionally, the EPP has focused on graduating candidates of ability and conviction who understand and engage the whole learner in meaningful development of deep understanding of content and skills.  To fully evaluate this vision and driving mission, the EPP developed the capstone project required for graduation; the impact on student learning project previously mentioned in standard 4.1 aligns with this mission as well as with CAEP standard 4.2 (SCE packet).  Since the implementation of this capstone project, the EPP has confidence in completers' ability to apply their professional knowledge, skills, and dispositions in an educational setting.

Based on the employer satisfaction data provided by the Indiana Department of Education, employers are satisfied with the completers' preparation for their assigned responsibilities in working with P-12 students (Q3).

Survey data collected by Manchester University, the EPP, and the Indiana DOE indicate completers perceive their program was effective and relevant and that it adequately prepared them for success in the classroom (Q4).  One of the scores the institution values in the survey is the net promoter score, a number that reflects how likely completers are to recommend Manchester University to other people.  Institutionally, the net promoter score in 2017-2018 was 32.1, but the EPP's net promoter score was 57.9.  While the EPP has much room for improvement, it is satisfied with its rating relative to the overall institution.  According to the IDOE, beginning teachers perceive they have the ability to differentiate instruction to meet all students' learning needs, scoring a 4.0/4.0.  Additionally, completers perceive they exhibit ethical practice (3.95/4.0) and recognize the importance of continued professional development (3.95/4.0).  The EPP believes these high scores reflect the intentional focus on professional opportunities and frequent interactions with clinical faculty.
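
The report does not specify how the institution computes its net promoter score; under the standard definition, offered here only as a reference point, respondents answer a 0-10 "how likely are you to recommend" item and

  NPS = %Promoters (ratings of 9-10) − %Detractors (ratings of 0-6),

with passive responses (7-8) counting toward the total but toward neither group.  Using hypothetical figures: if 20 completers respond, 13 as promoters (65%) and 2 as detractors (10%), then NPS = 65 − 10 = 55.  Read this way, the EPP's 57.9 would indicate that promoters outnumber detractors by roughly 58 percentage points.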

Manchester University is proud of program completers who have hit milestones in their employment, including National Board Certified alumnae Michele Keim and Lauren Bailey (Q5).  Numerous completers are also earning recognition for their excellent teaching (Q5).  Most recently, MU alum James Butler earned teaching recognition in the Austin Public Schools in Austin, TX.  He has also gained the attention of the school system for his work with mindfulness in Austin classrooms.  A more comprehensive list of contributions, milestones, and accolades is kept by both the EPP and the Alumni Office.

Implications:  Through analysis of CAEP standard 4, the EPP recognizes it is on the right track for collecting important data regarding candidates’ and completers’ impact on P-12 learners.  It uses the Danielson framework to measure important attributes of candidates, including their ability to impact students’ learning.  The EPP also expects completers to submit a comprehensive impact on student learning project which requires candidates to actively collaborate with classroom teachers to design a unit of study and ultimately measure their impact on P-12 students using multiple measures.  Survey data from both alumni and employers indicate the EPP is adequately preparing the completers to teach. 

However, after analysis, the EPP has committed to several projects that will improve the quality of its completers.  First of all, the EPP will continue to collaborate with stakeholders such as members of the TEC and TAC as well as with those attending the community partnership lunches.  Through a collaborative effort linking current clinical faculty and settings to courses within the program, the EPP can ensure university coursework more intentionally reflects the realities of teaching.  By using clinical faculty to support the development of, and provide feedback on, the impact on student learning project, the EPP can strengthen the project's authenticity.  Revising the current SCE:  Impact on Student Learning so it is more authentic as a work sample will help candidates see a correlation between their preparation and the realities of teaching.  The EPP hopes, as well, that this revised SCE:  Impact on Student Learning will give candidates more intentional opportunities to create and implement assessments to drive instruction.  Based on the employer survey conducted by the IDOE, this is one area the EPP must improve.

As indicated in the reflection in CAEP standard 3, the EPP will continue to develop authentic clinical experiences which reflect the realities of teaching in the 21st century.  Through more structured, authentic experiences, candidates will have an opportunity to observe, practice, and apply key elements of effective teaching.  Additionally, the EPP will continue to explore ways to create professional learning communities within the clinical experiences.  Building on co-constructed clinical experiences outlined in CAEP standard 3 will provide the EPP with an authentic program better preparing candidates for the classroom.  Work in the fall of 2018 will include identifying interested stakeholders who will work with the EPP to create these opportunities.  The EPP believes this intentionality will allow candidates and completers to feel better prepared for the classroom and will increase their positive perceptions of the program.

Additionally, the EPP will collaborate with completers and employers to create a more authentic impact on student learning project.  While the current project has a firm foundation, the EPP will create a timeline for moving the SCE:  Impact on Student Learning to the format of a work sample such as those used by the NBTC.  This new alignment will increase the authenticity of this important capstone project.