Washburn University

Standard 1 Addendum

The following comments are based on a summary of questions and requests for additional information for Standard 1.

Summary Evaluation of Teacher Candidates (SETC) and Teacher Candidate Summary Evaluation Form (TCSEF)
The EPP-created assessment, the Teacher Candidate Summary Evaluation Form (TCSEF), does not have evidence to show it is a valid instrument for assessing Standard 1.

Student teaching data for the self-study were based on the Teacher Candidate Summary Evaluation Form (TCSEF). During the student teaching experience, a mentor teacher and a university supervisor observe teacher candidates’ performance in a classroom setting and submit evaluations at the end of the experience. The EPP reviewed the TCSEF as part of a revision process for EPP-created assessment tools (AE 1.1). Implementation of this process led to a newly structured evaluation tool, the Summary Evaluation of Teacher Candidates (SETC) (AE 1.2, AE 1.3). Initial inter-rater reliability calculations for the SETC (AE 1.4) showed that nine indicators were at .80 or above and three indicators fell below the .80 level. To improve the strength of agreement for the indicators falling below .80, the Field Placement Coordinator will review the SETC tool with mentor teachers and university supervisors and discuss the meaning of the constructs and how they align with professional standards (e.g., InTASC; Kansas State Professional Standards); the meaning of the elements for expected performance along each construct; and the expected performance at each level (i.e., unacceptable, developing, and target) for candidates at the student teaching stage of teacher preparation. The EPP will repeat the reliability calculations using spring 2018 data in order to determine whether agreement improved for the indicators below the .80 level.
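The addendum does not specify which agreement statistic underlies the .80 threshold; one common and simple choice for a three-level scale like the SETC's is percent exact agreement between the two raters, computed per indicator. The sketch below is illustrative only (the function name and all ratings are hypothetical, not EPP data):

```python
# Hypothetical per-indicator inter-rater agreement check for a 3-level scale
# (1 = unacceptable, 2 = developing, 3 = target). The EPP's actual statistic
# is not specified in the addendum; percent exact agreement is one common choice.

def percent_agreement(mentor_ratings, supervisor_ratings):
    """Proportion of candidates for whom both raters gave the same rating."""
    if len(mentor_ratings) != len(supervisor_ratings):
        raise ValueError("rating lists must be the same length")
    matches = sum(m == s for m, s in zip(mentor_ratings, supervisor_ratings))
    return matches / len(mentor_ratings)

# Example: ten candidates on one SETC indicator (fabricated ratings).
mentor     = [3, 3, 2, 3, 2, 3, 3, 2, 3, 3]
supervisor = [3, 3, 2, 3, 3, 3, 3, 2, 3, 2]
agreement = percent_agreement(mentor, supervisor)  # 8 of 10 match -> 0.80
flagged = agreement < 0.80  # indicators below .80 would be flagged for rater retraining
```

An indicator whose agreement falls below the threshold would then be targeted for the construct-alignment discussions with mentor teachers and university supervisors described above.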

Content validity ratios (CVR) were used to establish the validity of SETC indicators (AE 1.5). All but one indicator reached the required content validity ratio (i.e., .49 for a sixteen-member expert panel). Since SETC indicators are aligned with InTASC standards and Kansas State Professional Standards, the EPP decided to use feedback from the expert panel to reword the remaining indicator and provide more specific elements to define it. While the EPP wants candidates to be “teacher leaders” in the profession, we understand that at the practicum and student teaching levels, candidates are focused on the curriculum and school setting where they teach. As a result, we decided to restructure the concept of leadership and provide examples of what we expect “leadership” to look like (e.g., participating in IEP, GEI, and PLC meetings; attending school functions) at the teaching methods and practicum levels of teacher preparation.
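Content validity ratios are most commonly computed with Lawshe's formula, CVR = (n_e − N/2)/(N/2), where n_e is the number of panelists rating an indicator "essential" and N is the panel size; assuming that is the method behind the .49 criterion cited above, the calculation can be sketched as follows (the panelist counts are hypothetical):

```python
# Lawshe-style content validity ratio (an assumption about the EPP's CVR method):
# CVR = (n_e - N/2) / (N/2), where n_e is the number of expert panelists rating
# an indicator "essential" and N is the panel size. CVR ranges from -1 to +1.

def content_validity_ratio(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical example for a sixteen-member panel like the one described above:
# 12 of 16 panelists rate an indicator "essential".
cvr = content_validity_ratio(12, 16)  # (12 - 8) / 8 = 0.5, clearing the .49 criterion
```

A CVR of 0 means exactly half the panel rated the indicator essential; an indicator falling below the criterion, like the leadership indicator discussed above, would be reworded rather than retained as-is.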

As part of the assessment tool review and revision process, candidates’ performance ratings were changed to unacceptable (1), developing (2), or target (3) (previously, ratings fell into four levels of performance, with Advanced as level 4). As in the past, SETC data will be collected electronically and summarized under the supervision of the Assessment Coordinator. These data are presented to EPP faculty and the University Teacher Education Committee (UTEC) during biannual assessment retreats. Data are evaluated for patterns and trends in candidates’ performance as these relate to the need for change in courses and clinical experiences completed prior to the student teaching experience.

Kansas Performance Teaching Portfolio (KPTP)/College and Career Readiness
No evidence was presented that candidates demonstrate skills and commitment that afford all P-12 students access to rigorous college and career-ready standards across multiple indicators.

The Kansas Performance Teaching Portfolio (KPTP) is a teacher work sample structured around four tasks and specific focus areas. This document allows teacher candidates to demonstrate their ability to use contextual factors to plan, implement, and reflect on a unit of study. Candidates submit unit plans, along with reflections on planning and teaching based on KPTP prompts.

Candidates are explicitly directed to design units with objectives that align with state standards (see
http://community.ksde.org/Default.aspx?tabid=4754). Discipline-specific state standards are based on the rigor identified by the Next Generation Science Standards; the National Council of Teachers of Mathematics; the College, Career, and Civic Life (C3) Framework for Social Studies Standards: Guidance for Enhancing the Rigor of K-12 Civics, Economics, Geography, and History; and the Common Core Standards.

Candidates’ demonstration of skills and commitment to all P-12 students’ access to rigorous college and career ready standards is evident in Task 2, Focus Area B of the KPTP (AE 1.6). Teacher candidates are required to demonstrate the ability to provide different approaches to learning that meet the diverse needs of learners; create instructional opportunities that are equitable and adapted to diverse learners; and show an understanding of a variety of instructional strategies that develop various kinds of student learning, including critical thinking and problem solving. Candidates must also demonstrate that they are able to plan effective instruction based on knowledge of students, the content, and curriculum outcomes, as well as the ability to integrate across and within content fields. Finally, they must demonstrate skill in using instructional tools and technology to gather, analyze, and present information. Three semesters of KPTP data (AE 1.7) indicate that, program-wide, candidates consistently performed beyond the midpoint level on the KPTP evaluation, and this is true for all individual programs as well.

Candidates’ ability to plan for and implement instruction based on rigorous standards for all students is also evaluated using specific Teacher Candidate Summary Evaluation Form (TCSEF) indicators (see AE 1.6 for a summary of indicators specific to evaluating college and career readiness). Observations of candidates’ performance using rigorous standards to teach all students show strong evidence that EPP candidates know how to plan learning experiences along rigorous content standards. They are also able to structure the learning environment and plan differentiated lessons appropriate for the learning needs of a diverse group of students, so that all have access to learning experiences along rigorous standards. Candidates were rated between the target and advanced levels of performance program-wide and for each individual licensure area.

Content and Pedagogical Knowledge

There is no program-wide data that demonstrates that candidates apply content and pedagogical knowledge in response to standards.

Content and pedagogical data are included in the original self-study report. Three years of Praxis II content knowledge data (Chart 1.1b, original self-study) represent EPP-wide pass rates of 96%, 97%, and 98% for 2013-2014, 2014-2015, and 2015-2016, respectively. This level of content knowledge is supported by data from Chart 1.1d (self-study report), which indicate that, EPP-wide, candidates know the content they teach at the target level and above, as observed by mentors and university supervisors during student teaching. Data from this chart also indicate candidates’ content pedagogical knowledge and skills, based on their use of methods and techniques (including the use of technology) to meet students’ learning needs. A holistic evaluation of candidates’ pedagogical knowledge based on KPTP ratings is provided in file 1.1d (original self-study), where demonstration of proficiency in designing, implementing, and evaluating instruction is summarized. Here, candidates consistently performed beyond the state passing score, and even beyond the EPP’s required passing score, which is higher than that required by the state.

Technology: Experiences, Use, and Assessment

No specifications for how ISTE Standards are addressed in the course work.
Limited data related to candidate ability to model and apply technology standards in their practice.

Candidates have multiple technology experiences aligned with the ISTE Standards in courses required for their specific licensure areas. These experiences are aligned and described in Self-Study Chart 1.1, Technology ISTE Standards. This evidence file includes each ISTE Standard, the product candidates produce in the course, and a description of how the activity aligns with the standard. Individual course instructors monitor candidates’ performance in these courses.

The EPP ensures that all candidates are able to model and apply technology standards by assessing their knowledge and skills in ED 300, Integrating Technology into the Curriculum, and during the student teaching experience. All candidates must receive a grade of C or better in ED 300 prior to completing an EPP licensure program. ED 300 is a skill- and work-based course that is assessed using a specification process for the evaluation of candidates’ skills and knowledge. Candidates are given feedback on all work and must make revisions until the work addresses all Badge Level Requirements (see examples of ED 300 projects, AE 1.8 CAEP Categories for Modeling and Applying Technology). Three semesters of candidates’ grades in ED 300 (see AE 1.9) point to a high level of technology knowledge and skill, with 95% of candidates receiving grades of A or B (82% A grades/13% B grades).

Candidates’ ability to employ technology is also assessed during the student teaching experience. They are evaluated on their ability to use technology to design learning experiences, implement lessons, and assess student learning. The ability to use technology to plan learning experiences is assessed by KPTP Task 2, Focus Area B, Standard 12 (see AE 1.7; AE 2.4). The EPP recognizes that multiple indicators/standards are evaluated under Task 2/Focus Area B; however, Standard 12 relates specifically to the integration of technology in planning. Mean performance on Task 2/Focus Area B reached the high midpoint level of performance.

The EPP gains further confidence that program candidates are able to use technology to help students learn by reviewing evaluations of the technology skill and knowledge observed in practice. The mean performance recorded using the Teacher Candidate Summary Evaluation Form (TCSEF) Construct 2.2 ranges from 3.4/4 to 3.6/4, exceeding the EPP’s expectation of target (3.0) performance.

Candidates are required to submit an assessment plan as part of the KPTP portfolio, which is assessed as Task 3, Focus Area E, Analysis of Assessment Procedures. EPP candidates are rated beyond the midpoint level on the KPTP evaluation regarding their use of assessment (1.2E, self-study). Candidates use knowledge and skills acquired in ED 300 while earning the Gain Score Badge (see AE 1.8) to develop and report data for the KPTP assessment. They are required to calculate and record student performance digitally, and must perform these and related tasks at least at the C level before the required badge is awarded. Gain score data are submitted electronically as part of the KPTP work sample.

Standard 2 Addendum

The following comments are based on a summary of questions and need for additional information for Standard 2.

Candidate Field Experiences with Diverse Students
The EPP does not provide data that each candidate has a field experience placement in a school with diverse and disadvantaged students.

As mentioned in the self-study, candidates’ field experiences are logically sequenced, as candidates move from experiences based primarily on observation to teaching practice in student teaching. The EPP is assured that candidates have experiences that include diverse students (i.e., diversity by race/ethnicity, socioeconomic status, and ability/disability) primarily because of the close connection between course content and the field/practicum experiences associated with courses required of all candidates (see AE 2.1). Since Topeka USD 501 is the most frequently used district for field experiences, candidates often have experiences with students fitting more than one diversity characteristic (e.g., race and ability/disability). In addition to courses required of all students, candidates have diversity practicum experiences specific to their licensure areas. For example, the first practicum experience for all elementary licensure candidates takes place in Topeka USD 501. Approximately 60% of placements for secondary English methods take place in USD 501 schools. Three courses completed by Physical Education majors have practicum experiences in Topeka USD 501. The EPP tracks field and practicum experiences in the Education Data Management System (EDMS) in order to manage the nature of candidates’ practicum and field experiences as they proceed through the teacher preparation program.

Collaboration With Partners
The EPP does not describe how it provides data on field experiences to partners so that they can collaborate in analyzing the data and making decisions about continuous improvement.
There is a lack of evidence on how data are shared with and used by school-based clinical educators to modify selection criteria, assignments, and clinical experiences.

The EPP has a longstanding collaborative relationship with surrounding districts. Much of our collaborative work has become routinized and continues to be beneficial to both the EPP and the schools with whom we work. Committees have been organized, however, that focus on ongoing and specific discussions about field experiences and the preparation of teachers. These include the Field Experience Committee and the Advisory Board. School-based and EPP personnel changes over the years have required the reconfiguration of these committees, in terms of both membership and charge.

The Advisory Committee of 2010 was led by the Education Department Chair and the Field Placement Coordinator, and included members from local school districts. Meeting minutes (AE 2.2) show discussions of State and NCATE accreditation, licensure expectations, disposition form revision, and field experiences and related data. Time for feedback, questions, and suggestions on these topics was included in the minutes. The Field Experiences Committee included local district representatives as well, and was chaired by Catherine Hunt, music education professor and member of the University Teacher Education Committee (UTEC). The meeting minutes record discussion of changes in field experience placement procedures and cooperating teacher survey data, as well as the presentation of instructional strategies by an EPP faculty member (See 5.5, pp. 10-11).

Currently, the Education Department Chair chairs the Advisory Council. Membership includes EPP faculty, the Field Placement Coordinator, and representatives from local school districts. The newly formed Council focuses on issues and concerns addressed earlier by the Field Experience Committee and the Advisory Committee, along with other issues important to teacher preparation. In short, the Council takes a more holistic view of the shared role of EPP faculty and school-based personnel in the preparation of future teachers. Minutes show discussions of topics ranging from recruitment and the content of initial field experiences to professional development and the importance of data-driven decision making. Meeting minutes also point to district members who actively provide input on issues related to the preparation of educators (see 5.5, pp. 1-9). The make-up and range of issues embraced by this group are important to note, since the placement of candidates in many field and practicum experiences begins at the district level, with many members of the Council.

The EPP has discussed next-steps that could guide collaboration with the Council, and is prepared to suggest the following to the group:

    Evaluation of Clinical Educators
    How are clinical educators evaluated to ensure quality?

    The EPP recently reviewed and revised the process for selecting university supervisors and the expected responsibilities for those chosen for these positions (AE 2.3). These processes and qualifications apply to university supervisors for student teaching and to EPP-based supervisors for the elementary licensure program literacy practicum. Currently, the only qualification listed for mentor teachers is that they are certified, and this requirement is listed in the Memorandum of Understanding (MOU) for each district where the EPP places practicum students. The expected responsibilities of mentor teachers are listed in the Field Experience Handbook (5.1, pp. 4-5) and are discussed during orientations each semester. EPP faculty and the Director of Student Field Experiences have relied on district personnel (many of whom are Advisory Council members) to identify high-quality mentor teachers. We often place candidates in field and practicum experiences in the classrooms of program graduates in whose mentoring skills we are confident. Further, some faculty gather candidates’ evaluations of mentors and use this information when deciding whether to return to a teacher as a mentor.

    The evaluation tools for clinical educators are in Stage 2 of the Process for Revision of Washburn Educator Preparation Program Assessment Tools (see AE 1.1). The evaluation tools for university supervisors and mentor teachers have been evaluated using the CAEP Evaluation Tool for EPP-Created Assessments, and are set to be reviewed by a panel of experts (including members of our partner districts). A content validity ratio will be calculated for the tools, and we have one year of pilot data to use for reliability tests. We plan to make final revisions to the tools during the spring of 2018 and to begin collecting data at the end of the spring semester.

    Technology in Clinical Practice

    Data on candidates' technology use/integration during clinical practice
    What evidence does the EPP have to demonstrate candidates' use of technology to enhance student learning?

    The EPP is fortunate to have partner districts with technology resources for our students to use during practicums and student teaching (see Self-study 2.1 Instructional Strategies, p. 5). Candidates’ demonstration of the use and integration of technology in practice is documented by the evaluation of their performance using indicators on the student teaching evaluation form and the Kansas Performance Teaching Portfolio (KPTP) (AE 2.4). The EPP recognizes that multiple indicators/standards are evaluated under Task 2/Focus Area B; however, Standard 12 relates specifically to the integration of technology in planning. Mean performance on Task 2/Focus Area B reached the high midpoint level of performance. The EPP gains further confidence that program candidates are able to use technology to help students learn by reviewing evaluations of the technology skill and knowledge observed in practice. The mean performance recorded using the Teacher Candidate Summary Evaluation Form (TCSEF) Construct 2.2 ranges from 3.4/4 to 3.6/4, exceeding the EPP’s expectation of target (3.0) performance.

    Standard 3 Addendum

    The following comments are based on a summary of questions and need for additional information for Standard 3.

    School District Staffing Needs

    What effort is the EPP taking to gather the staffing needs of the schools they serve?
    What formal mechanism is used for obtaining needs of partner schools?

    Washburn University’s Career Services hosts two interview days per year exclusively for education program completers. Representatives from partner districts, as well as representatives from other districts interested in Washburn teacher candidates, attend these events. The EPP routinely requests information regarding district needs during Interview Days. At the statewide level, the Council of Education Deans (COED) discusses district needs during monthly meetings. This group consists of deans and department chairs from all public universities in the state. State data on personnel needs are part of the discussions during these meetings, and COED focuses on what universities need to do in order to address state education personnel needs. Lastly, the EPP’s Advisory Council, which is made up of representatives from surrounding districts, offers a third opportunity for discussion of district personnel needs.

    Candidate Demographics

    • What are the disaggregated demographics of candidate diversity?
    As discussed in the self-study, the EPP is unable to obtain person-by-person diversity data from Washburn University's Office of Strategic Analysis and Reporting (SAR). Data are presented holistically (i.e., as percentages of the ethnic/racial backgrounds of candidates). Further, students are not required to identify their race/ethnicity, which results in an “unknown” background category. However, the EPP has been able to make use of these data, as well as its own diversity data collection efforts, to gain a clearer understanding of candidate diversity. For example, during the 2016-2017 academic year, we began collecting student background data from those completing EPIC, the first required course in the program (AE 3.1). Based on these data, the percentage of students self-reporting as students of color is higher than the 15% we set as a minimum for admitted candidates of color. EPIC research data will be compared to data on candidates applying for admission beginning in 2018 (the year we expect this cohort of students to apply). We are also able to track the racial/ethnic background of scholarship recipients in order to determine the impact of the change in scholarship policy on diversity, from initial enrollment to program admission and completion (AE 3.2). We will be able to determine whether the higher percentages of candidates of color who receive scholarships persist through program admission and completion, and thus whether the availability of scholarship support has a positive impact on our efforts to increase the number of students of color in the program.

    Candidate’s Impact on Student Learning
    What evidence does the EPP have that their candidates make a positive contribution to student learning?
    Washburn EPP candidates analyze the impact of their instruction on student learning as part of the KPTP work sample. Candidates determine the percentage of learning gained between the beginning of their units (as determined by pretest scores) and the end of the unit (as determined by posttest scores) by calculating a gain score. Based on a summary of gain score data, student teacher candidates overall consistently showed positive learning gains for their students; only two student teacher candidates showed negative gain scores across 10 semesters of impact data (see Self-study 4.1, Academic Gain Scores Reported by Student Teachers Based on KPTP Data).
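    The KPTP gain-score formula is not reproduced in this addendum; the sketch below illustrates the pre/post comparison described above under the simplifying assumption that the gain is the posttest-minus-pretest difference in percentage points (function names and all scores are hypothetical):

```python
# Illustrative gain-score calculation for a KPTP-style unit. The exact KPTP
# formula is not given in the addendum; a simple posttest-minus-pretest
# difference in percentage points is assumed here.

def gain_score(pretest_pct, posttest_pct):
    """Gain for one student: posttest minus pretest, in percentage points."""
    return posttest_pct - pretest_pct

def class_mean_gain(pre_scores, post_scores):
    """Mean gain across a class; a positive value indicates overall learning growth."""
    gains = [gain_score(pre, post) for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical unit: five students' pre/post percentages.
pre  = [40, 55, 60, 35, 50]
post = [70, 80, 85, 60, 75]
mean_gain = class_mean_gain(pre, post)  # 26.0 percentage points, a positive gain
```

    A negative mean gain for a candidate's unit, like the two cases noted above, would indicate that students scored lower on the posttest than on the pretest.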

    Knowledge of Law, Ethics, and Professional Standards
    How does the EPP assess candidates' knowledge of law, ethics and professional standards prior to student teaching? What data demonstrate candidates' abilities in these areas?

    Evidence 3.3, Ethics, provides a summary of the multiple courses that address law, ethics, and/or professional standards, some of which are specific to a licensure area. Candidates across licensure areas enroll in three courses that address these areas. Instructors for ED 225, Becoming an Educational Professional; ED 302, Exceptionality; and ED 385, Educational Foundations monitor candidates’ performance in law, ethics, and/or professional standards (AE 3.3, AE 3.4, AE 3.5, AE 3.6). Candidates are required to achieve a grade of C or better in these courses. On assessments specific to law, ethics, and/or professional standards, candidates performed at the 70% level or higher in all three courses. The exception was ED 302B during the fall 2016 semester, when candidates averaged 63% on the relevant course material; these candidates’ final exam performance, however, was at the 81% level. Candidates in one licensure area had a mean performance of 22/50 (44%) on the ED 385 Legal Case Study during the spring of 2016; this occurred, however, because one of the two students did not complete the required work. Performance across licensure areas for that semester was 44/50, or 88%. All data for courses with specific law, ethics, and/or professional standards learning experiences point to candidates who know this material and how it relates to the education profession.

    Disposition Assessment
    How are candidates' dispositions assessed and monitored at multiple decision points?
    What is assessed at each of the program phases and how are candidates' dispositions assessed and monitored at each?
    Who is responsible for collecting and monitoring these dispositions?
    It is not clear from the evidence related to 3.3 that the EPP has established clear decision points for assessing non-academic traits of their candidates, nor has the EPP articulated an established process for monitoring and assessing these dispositions.
    Are the faculty involved in the review process?

    The EPP has established non-academic criteria that are listed on evaluation tools used to collect data at multiple points during the professional education program. Candidates are introduced to the expected dispositions during EPIC, the first required course. As part of the admissions process (Phase I of the EPP’s Assessment System), candidates select a reference to complete the Professional Reference Form. This form is submitted electronically and reviewed by the EPP’s Undergraduate Program Committee. A summary of these data indicates that candidates have dispositions fitting those entering the education profession (AE 3.7). The Professional Reference Form is currently at Stage 3 of the Process for Revision of Washburn Educator Preparation Program Assessment Tools (AE 1.2). This is the process the EPP uses to ensure that EPP-generated assessment tools meet the sufficient level or higher on the CAEP Evaluation Tool Rubric and that tools are valid and reliable. To date, the Form has been evaluated using the CAEP Evaluation Tool Rubric and aligned with InTASC Standards, and it is currently being reviewed by an expert panel for validity. Existing data will be used to establish the Form’s reliability.

    Non-academic data are also collected during Phase III of the EPP’s Assessment System, at the end of student teaching. Review of these data supports the initial non-academic data from Phase I. Candidates demonstrate positive dispositions beyond the target level, which is the expectation of the EPP faculty (AE 3.12). The full faculty reviews these data. Like the Professional Reference Form, the Professional Conduct and Disposition Form is at Stage 3 of the Process for Revision of Washburn Educator Preparation Program Assessment Tools. The Form has been evaluated using the CAEP Evaluation Tool Rubric and aligned with InTASC Standards; an expert panel has reviewed the tool for validity; and reliability calculations have been completed. Faculty will review these data to determine what changes are needed in the assessment tool (AE 5.1).

    The EPP is reviewing processes to improve the submission of disposition data at Phase II of the Assessment System. Phase II data are submitted by mentor teachers during methods practicums, but submission has not been consistent across licensure programs. Candidates work closely with methods professors, who are able to monitor their dispositions. Problematic performance and non-academic attitudes and behaviors can be observed, especially given the size of methods classes and the close connection between university faculty and school-based mentor teachers. Still, the EPP plans to improve the process of collecting disposition data using the Professional Conduct and Disposition Form at Phase II of the Assessment System, so that we are better able to document candidate dispositions throughout the professional preparation program.

    Program Progression: Skill and Knowledge Development

    What evidence does the EPP have that their candidates have solid content pedagogy and pedagogical skills?
    What evidence does the EPP have that their candidates are prepared to teach college and career readiness standards?
    What evidence does the EPP have that their candidates can integrate the use of technology into their instruction/assessment?
    At what point are candidates counseled out of the program?
    The EPP does not systematically document that their candidates have reached high standards for content knowledge in fields where certification is sought and can teach effectively with positive impacts on P-12 learning.

    The EPP’s program is organized around courses and field/practicum experiences designed to support candidates’ development into professional educators. The Washburn Educator Preparation Provider Program Levels of Progression (1.1 Progression Levels) summarizes the professional development the EPP envisions for program candidates. The Pre-Admission, Evaluation of Professional Practice, and Candidacy Completion levels identify required courses, practicum/field experiences, and assessments that are aligned with InTASC Standards. The EPP has identified specific criteria for monitoring candidate performance, and it collects data at specific points in the program in order to ensure that candidates develop solid content knowledge and pedagogical knowledge and skills (AE 3.8). Candidates may be counseled out of the program at any level of development when it is evident that they are not meeting required content/pedagogical knowledge, skill, and/or dispositional criteria and that remediation will not improve the situation.

    Data point to EPP candidates who have strong content knowledge in their licensure areas, as indicated by their content grade point averages at the Evaluation of Professional Practice level of the professional program (3.11 Student Teaching Admissions); by the comparison of program completers’ content GPAs with those of peer program majors (Self-study 1.1c GPA Comparison); and by candidates’ high level of performance on Praxis II content exams (Self-study 1.1b Praxis II Content).

    Candidates demonstrate high levels of professional and general pedagogical knowledge beginning with admission to the EPP (AE 3.9 Admissions-Professional GPA). This knowledge continues to develop and is assessed as candidates apply for admission to student teaching (AE 3.11 Admission to Student Teaching – Professional GPA), and later by performance on the Professional Learning and Teaching (PLT) exam at program completion (Self-study 1.1a PLT Performance). Candidates’ content pedagogical skills and knowledge are first assessed in licensure specific methods courses (AE 3.10 Teaching Methods) and later in student teaching (1.1d InTASC Proficiency; 1.2F Proficiency in Designing, Implementing, and Evaluating Instruction; AE 1.7 Skills and Commitment to Access to Rigorous College and Career Ready Standards). All data suggest candidates have pedagogical and content pedagogical knowledge and skills appropriate for their licensure areas.

    Candidates are introduced to standards associated with college-ready skills in ED 225, Becoming an Educational Professional (AE 3.3), and continue to develop knowledge of college and career ready standards and of how to plan lessons in which all students have access to these standards (AE 3.10 Teaching Methods; AE 1.7 Skills and Commitment to Access to Rigorous College and Career Ready Standards). Candidates learn how to use technology in multiple courses, but are assessed across all licensure programs in ED 300, Technology (AE 1.9). Their ability to integrate technology into instruction and to use technology to support assessment activities during student teaching overall meets EPP expectations (1.1d InTASC Proficiency – TCSEF Construct 2.2; AE 2.4).

    From a holistic perspective, Washburn EPP candidates have strong content knowledge and pedagogical knowledge and skills. In most instances, they exceed criteria set by the EPP and the Kansas State Department of Education (1.2F Proficiency in Designing, Implementing, and Evaluating Instruction).

    Support for Candidates

    What support measures are in place for candidates in need of remediation?
    A number of supports are in place for candidates needing remediation. Washburn's Center for Student Success and Retention (CSSR) offers support services for traditional students, as well as support and services specific to the needs of first-generation students and adult learners, as they enroll in their first year at Washburn. The university also sponsors the University Tutoring and Writing Center to assist candidates academically. In addition to tutoring, the Center offers academic workshops and referrals to other campus services that can support candidates academically. For candidates experiencing social and/or emotional issues that interfere with their academic performance and/or disposition development, the University offers short-term counseling at the University Counseling Center. Beyond these structured supports, EPP faculty are always willing to work with candidates to help them identify impediments to knowledge, skill, and disposition development and to develop ways to remove obstacles to success.

    Standard 4 Addendum

    The following comments are based on a summary of questions and requests for additional information for Standard 4.

    Impact Pilot Study
    What additional documentation is available to confirm the approval of, and clarify the implementation of the pilot study?
    The EPP submitted an application to the Washburn Institutional Review Board (IRB) and to the Topeka USD 501 Office of Assessments and Demographics prior to beginning the pilot impact study. Both the IRB and USD 501 approved the study (AE 4.1 USD 501 Application and Approval; Self-study 4.1). Initial data regarding the impact of EPP program completers on student learning and development are reported in the Standard 4 Impact Study Overview and Results (AE 4.2). The EPP decided to start this work with USD 501 given its proximity to campus and the number of program completers who teach in the district. We plan to extend this work to other school districts after evaluating the pilot study.
    How representative is the completer data?
    All participants are currently employed by USD 501. Completer data for the Impact Study represent multiple grade levels and multiple, though not all, licensure areas offered by the EPP. The pilot includes 10 participants licensed to teach in the following areas:
    Elementary/B-Grade 3 Emphasis
    Elementary/Adaptive Special Education Emphasis
    Three Elementary/Middle Level History Emphasis
    Elementary/Middle Level Math Emphasis
    Secondary History (6-12)
    Art (P-12)
    Physical Education (P-12)
    They teach in an early childhood center, elementary schools, and middle schools. Subject areas taught include all elementary subjects, math, reading, writing, history, art, and physical education.

    What valid and reliable measures of completer impact on P-12 student learning are used and what data is available to demonstrate completers' impact?
    What valid and reliable instruments are used to measure completers' application of knowledge, skills, and dispositions from their preparation programs?
    To date, survey and interview data from completers and their administrators have been collected and summarized. Standardized assessment data for each participant are expected in February 2018. The survey schedules used to collect data for the Impact Study are listed as Evidence 4.1 in the submitted self-study. Both academic and nonacademic data are or will be collected from multiple sources in order to measure completers' application of the knowledge, skills, and dispositions acquired as they completed the EPP program.

      How is the survey administered and what instructions are provided to potential respondents?
      Both program completers and building administrators received a letter of consent prior to participation in the Impact Study (AE 4.3). Participating administrators were asked to complete a Teacher Effectiveness survey based on the program completer working in their building. The administrator consent letter identified the purpose of the study and the use of the data collected. Administrator participants were advised of the anonymity of the information solicited (both personal and school location) and that participation was voluntary.
      USD 501 provided work email addresses of EPP alumni. Impact Study investigators sent surveys and letters of consent to these addresses. Completers who returned the survey were contacted to determine their interest in continuing with the study. Investigators reviewed the letter of consent with potential participants and arranged a phone interview time. The letter of consent for program completers also identified the purpose of the study. Permission was requested to collect impact data specific to the students the completer teaches, information confirming the completer's school placement, and review of an administrator's evaluation of their teaching. The letter of consent also asked completers to agree to an interview in order to determine their perceptions of their preparation to become an effective teacher, how they apply professional knowledge and skills in their current teaching practice, and their perceptions of the impact of professional dispositions on teaching effectiveness. As with the administrators' consent letter, program completers were advised of the anonymity of the information solicited (both personal and school location) and that participation was voluntary.

      What analyses have been conducted on the data presented as evidence?
      Data were analyzed using the percentages of completers responding to specific questions (e.g., research-based instructional strategies/interventions or techniques) regarding their preparation and performance. Percentages of the extent to which participants (both completers and administrators) agreed, strongly agreed, disagreed, or strongly disagreed were also calculated for InTASC- and CAEP-aligned statements regarding completers' preparation and performance. Perception data (for both completers and administrators) were summarized as average ratings on a 1-5 scale (e.g., perception of professional knowledge as it relates to teaching). EPP Impact Study investigators analyzed the Certified Staff Summative Evaluation by finding the percentage of participants rated at each level available on the evaluation (i.e., Unsatisfactory, Basic, Proficient, Distinguished). Standardized test data will be analyzed for the number and percentage of completers' students who are at, above, or below standard.
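      The percentage and average-rating calculations described above amount to simple tabulation of Likert-style responses. A minimal sketch, using hypothetical responses rather than the study's actual data:

```python
from collections import Counter

def likert_summary(responses, scale=(1, 2, 3, 4, 5)):
    """Return the percentage of respondents at each scale point
    and the mean rating across all respondents."""
    counts = Counter(responses)
    n = len(responses)
    percentages = {level: 100 * counts.get(level, 0) / n for level in scale}
    mean_rating = sum(responses) / n
    return percentages, mean_rating

# Ten hypothetical completer ratings on a 1-5 perception item
pct, mean = likert_summary([5, 4, 4, 5, 3, 4, 2, 5, 4, 4])
```

      If 4 = agree and 5 = strongly agree, the combined "agreed or strongly agreed" figure is simply `pct[4] + pct[5]`; the same tabulation, with four levels instead of five, covers the Certified Staff Summative Evaluation ratings.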

      What data is available specific to high needs schools?
      USD 501 Topeka Public Schools is an urban school district located in northeast Kansas. The district serves more than 14,000 students and employs more than 1,300 teachers and 1,100 support staff. USD 501 includes four high schools, seven middle schools, 21 elementary schools, and four special schools. The district reports that all students are involved in career- and college-ready programs. The Kansas State Department of Education (KSDE) lists the USD 501 student population as 51.2% male and 48.8% female. White students make up 37.6% of the population; African American, 17.4%; Hispanic, 31.8%; and 13.2% are reported as other. Seventy-one percent of USD 501 students are economically disadvantaged, 13% are English Language Learners, and 19.6% are students with disabilities. Participants were employed at eight different USD 501 schools, all of which were high-need, given the percentage of economically disadvantaged students attending these schools. As a result, all data for the study are specific to high-need schools.

      What data is available specific to licensure areas?
      The pilot data are not disaggregated by licensure area. The EPP felt the promise of anonymity would be compromised by identifying performance and response by licensure area.
      Insufficient evidence was provided to demonstrate the EPP's ability to measure completers' teaching effectiveness.
      Data quality information on instruments was not sufficient.
      What data supports completers' teaching effectiveness after their exit from the programs?
      A summary of data collected to date is provided in AE 4.2. The return rate for administrator surveys was 30%, and the rate for program completers was 100%. Administrators are satisfied with EPP completers' teaching performance. Ninety-three percent of participants were evaluated at the proficient level or higher on all areas of the Certified Staff Summative Evaluations completed by building administrators. Eighty percent of completers agreed or strongly agreed that the EPP prepared them to understand and implement the skills and knowledge required by the teaching profession (e.g., knowledge of student learning; the influence of school, family, cultural, and community knowledge on the quality of education; use of assessment data for instruction; content and pedagogical knowledge). Seventy-three percent of building administrators agreed or strongly agreed that completers were prepared in these areas. Program completers use a number of research-based instructional strategies/interventions or techniques, which are listed in the data summary (AE 4.2). When asked specifically about impact on student learning and development, completers responded that they are having a positive impact on student learning and achievement. This perception was corroborated by building administrators, who evaluated completers at the proficient level or higher on the student achievement section of the Certified Staff Summative Evaluation.
      Completers hold leadership positions at their schools, including coaching, development of district-level in-services, and service on building- and district-level committees. All participants who agreed to a review of their evaluations (7 of 10) received renewed contracts for the 2017-2018 year.

      Standard 5 Addendum

      The following comments are based on a summary of questions and requests for additional information for Standard 5.

      Disaggregated Data

      What data evidence is available that is disaggregated by license area and other factors such as race, ethnicity, gender, etc.?

      The EPP uses multiple measures of assessment in order to determine candidates' performance. These data are disaggregated by license area (e.g., Standard 1 Data). Multiple examples are included in the original self-study and in this addendum. Data collected to improve operational effectiveness are disaggregated by race, gender, language, exceptionality, and/or socioeconomic status (e.g., AE 3.1 EPIC Follow-up; AE 3.2 Scholarship Race/Ethnicity; 2.3 School Demographics).

      Data Collection, Analysis, and Use
      What content area data analysis has been conducted and what decisions have been made based on the analyses?
      What systemic data analyses processes are in place?
      How and when are data reported and shared?
      How has data been linked to program improvements or changes?
      What documentation does the EPP have that program changes are linked back to specific data?
      The EPP does not have documentation that program changes made are linked back to specific data.

      Data are systematically collected, analyzed, and shared as outlined in 5.1 Assessment Overview and Calendar. Program-wide data are reported to EPP faculty during biannual retreat meetings. Data are also housed on the EPP's data storage system for review by faculty in charge of licensure-specific programs. Beginning in 2018, each licensure program will complete and submit an annual assessment report to both the EPP and the university. Programs will be required to analyze candidate performance data for the prior academic year against state licensure-specific standards. This annual report will ensure that data relevant to each licensure program are reviewed systematically each year, which the EPP believes will make the review of candidate performance and program effectiveness more efficient.

      Program improvement and/or changes are made when indicated by data. These changes may be at the EPP level for all licensure programs or may be changes specific to one program. Data Driven Changes (AE 5.2) provides examples of program changes linked to specific data. Data for these changes can be provided at the site visit.

      Valid and Reliable Assessment Tools

      What has the EPP done to ensure rubrics meet adequate level according to the CAEP Assessment rubric?
      How will the EPP ensure the validity and reliability of the EPP created assessments so they can use data for improvement purposes?
      What research method was used to establish reliability and what steps were taken to ensure the reliability of the data?
      Can the EPP share the plan they used to ensure validity for each EPP created assessment?
      Can the EPP provide documentation of the type of validity that was established for each EPP created assessment?
      The EPP does not provide sufficient evidence of the reliability and validity of the EPP developed instruments being used in the assessment process.

      The EPP has developed a process for ensuring that all EPP generated assessment tools are valid and reliable, and that they meet at least the sufficient level as defined by the CAEP Evaluation Tool for EPP Created Assessments. The Process for Revision of Washburn Educator Preparation Program Assessment Tools (AE 1.2) is a four-stage process the EPP uses for all EPP created assessment tools. The process begins with the review and evaluation of an assessment tool’s constructs and indicators. All assessments are aligned with appropriate professional standards (e.g., InTASC Standards, Kansas Educator Preparation Program Standards for Professional Education). Tools are then sent to an expert panel for review, in order to establish content validity. Expert panels are made up of university personnel (e.g., university supervisors, university professors) and school-based personnel (e.g., teachers and school administrators). A content validity ratio (CVR) based on the number of panel members will be established for each assessment tool indicator.
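      The CVR arithmetic underlying the panel review is Lawshe's formula, CVR = (n_e − N/2)/(N/2), where n_e is the number of panelists rating an indicator essential and N is the panel size. A minimal sketch (the panel counts below are hypothetical illustrations, not the EPP's actual ratings):

```python
def content_validity_ratio(n_essential: int, n_panel: int) -> float:
    """Lawshe's CVR: ranges from -1 (no panelist rates the indicator
    'essential') to +1 (every panelist rates it 'essential')."""
    half = n_panel / 2
    return (n_essential - half) / half

# Hypothetical 16-member panel: 12 of 16 rate the indicator essential.
cvr = content_validity_ratio(12, 16)   # (12 - 8) / 8 = 0.50
retained = cvr >= 0.49                 # criterion cited in the self-study
```

      Under these assumptions, an indicator endorsed by 12 of 16 panelists yields a CVR of .50, just above the .49 criterion, while one endorsed by only 11 (.375) would be flagged for rewording, as was done with the leadership indicator.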

      Reliability is established using one of two methods: inter-rater reliability based on the percentage of exact agreement for tools with more than one rater, or the split-half method to determine the correlation between ratings on assessment tools with a single rater/evaluator. The EPP looks for a .80 level of agreement in order to establish indicator reliability. Reliability and validity data are reviewed by the EPP faculty most closely associated with the data generated by the assessment (e.g., the Director of Field Experiences, the Assessment Coordinator, or faculty appointed by the Department Chair). Changes are made to piloted assessment tools and, where necessary, tests for reliability and validity are repeated. The EPP has established a timeline for review of all assessment tools created by the unit. To date, five evaluation tools are at various stages of the revision process. All five have been reviewed against the CAEP Evaluation Rubric and can be provided at the site visit; two of these are included in the submitted evidence. Expert panels have reviewed several tools, and reliability and validity tests have been conducted for several assessment tools (see AE 1.4 and 1.5). The EPP is currently reviewing reliability and validity data to determine ways to increase the level of agreement for all indicators to .80 for reliability and to ensure that all indicators reach the expected CVR.
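      The two reliability checks described above can be sketched as follows; the rating vectors are hypothetical illustrations, not SETC data. Exact agreement applies to paired mentor/supervisor ratings, while the split-half method (here, an odd-even split with the Spearman-Brown correction) applies to single-rater tools:

```python
def exact_agreement(rater_a, rater_b):
    """Proportion of indicators on which two raters gave identical scores."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(score_matrix):
    """Odd-even split of item scores per candidate, correlated across
    candidates, then stepped up with the Spearman-Brown correction."""
    odd = [sum(row[0::2]) for row in score_matrix]
    even = [sum(row[1::2]) for row in score_matrix]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

# Hypothetical 12-indicator ratings (1=unacceptable, 2=developing, 3=target)
mentor     = [3, 2, 3, 3, 2, 3, 3, 1, 3, 2, 3, 3]
supervisor = [3, 2, 3, 2, 2, 3, 3, 2, 3, 2, 3, 3]
agreement = exact_agreement(mentor, supervisor)  # 10/12, above the .80 level
```

      In this hypothetical pair, the raters match on 10 of 12 indicators, so the indicator set would meet the EPP's .80 agreement criterion overall while the two disputed indicators would be candidates for the construct-review discussion described above.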

      Stakeholder Feedback
      What evidence is available to demonstrate that data was discussed and feedback was received from stakeholders?
      How has the EPP used the stakeholder feedback to inform program improvement?
      The EPP does not have evidence of how they incorporate the feedback received from stakeholders to make program changes or decisions.
      EPP does not demonstrate how stakeholders input or feedback is used to inform program improvement.

      The EPP has historically sought input from program partners and has at times shared data during meetings with advisory groups (AE 2.2 Advisory Committee Minutes; 5.5 Meeting Minutes: Advisory Board Meetings (pp. 1-9), Field Experiences Committee (pp. 10-11), and Unit Assessment Committee (pp. 12-16)). Meeting minutes document that information was given and input received during these meetings, and data were shared and discussed at some of them. The EPP, however, is committed to developing a more systematic process for sharing data and information with stakeholders/partners, one that allows for ongoing feedback and input into multiple aspects of the professional educator program. The EPP has identified the Advisory Council as the body where a systematic partnership can reside and is committed to working with this group toward that end.

        AE 1.1 Process for Revision of Washburn Educator Preparation Program Assessment Tools
        AE 1.2 Summary Evaluation of Teacher Candidates
        AE 1.3 Crosswalk TCSEF and SETC
        AE 1.4 Reliability Data
        AE 1.5 Validity Data
        AE 1.6 College and Career Ready Standards
        AE 1.7 Demonstration of College and Career Ready
        AE 1.8 CAEP technology
        AE 1.8 Integration of Tech
        AE 1.9 ED 300 Grades F15-F16
        AE 2.1 Sequence of School Field Experiences for All Students
        AE 2.2 Advisory Committee Minutes
        AE 2.3 University Supervisor Process
        AE 2.4 Use of Technology in Clinical Practice
        AE 3.1 EPIC Follow-up Survey Results FS 16-17
        AE 3.2 Scholarship Awards 2017 Race Ethnicity Data
        AE 3.3 ED 225 Law Ethics
        AE 3.4 ED 302A Teaching Exceptional Learners ECU.Elementary
        AE 3.5 ED 302B Secondary P12
        AE 3.6 Foundations Law and Ethics
        AE 3.7 Prof.Reference.Summary.Data
        AE 3.8 Progression Levels
        AE 3.9 Admissions General and Professional Knowledge
        AE 3.10 Demonstration of Performance Teaching Methods
        AE 3.11 Student Teaching Admissions Data-1
        AE 3.12 Candidate Dispositions
        AE 4.1 USD 501 Research Application
        AE 4.1 USD 501 Letter of Approval
        AE 4.2 Standard 4 Responses to Questions Based Upon Impact Study OVERVIEW
        AE 4.3 Impact Study Letters of Consent
        AE 5.2 Data-Driven Changes
        AE 5.4 Student Teaching Summary Eval
        Washburn University Final Addendum Narrative