In Module 7, we discussed the implementation of the curriculum plan. We looked at why people resist change and at the role of teachers, students, administrators and parents in ensuring the successful implementation of change. In this chapter, we will focus on determining whether the curriculum plan implemented has achieved its goals and objectives as planned. In other words, the curriculum has to be evaluated to determine whether all the effort, in terms of finance and human resources, has been worthwhile.
Various stakeholders want to know the extent to which the curriculum has been successfully implemented. The information collected from evaluating a curriculum forms the basis for making judgements about how successfully the programme has achieved its intended outcomes and about the worth or value of the programme. What is evaluation? Evaluation is the process of collecting data on a programme to determine its value or worth, with the aim of deciding whether to adopt, reject or revise the programme. Programmes are evaluated to answer the questions and concerns of various parties.
The public want to know whether the curriculum implemented has achieved its aims and objectives; teachers want to know whether what they are doing in the classroom is effective; and the developer or planner wants to know how to improve the curriculum product.

• McNeil (1977) states that “curriculum evaluation is an attempt to throw light on two questions: Do planned learning opportunities, programmes, courses and activities as developed and organised actually produce desired results? How can the curriculum offerings best be improved?” (p. 134).
• Ornstein and Hunkins (1998) define curriculum evaluation as “a process or cluster of processes that people perform in order to gather data that will enable them to decide whether to accept, change, or eliminate something - the curriculum in general or an educational textbook in particular” (p. 320).
• Worthen and Sanders (1987) define curriculum evaluation as “the formal determination of the quality, effectiveness, or value of a programme, product, project, process, objective, or curriculum” (pp. 22-23).
• Gay (1985) argues that the aim of curriculum evaluation is to identify the curriculum’s strengths and weaknesses as well as the problems encountered in implementation; to improve the curriculum development process; and to determine the effectiveness of the curriculum and the returns on the finance allocated.
• Oliva (1988) defines curriculum evaluation as the process of delineating, obtaining and providing useful information for judging decision alternatives. The primary decision alternatives to consider based upon the evaluation results are: to maintain the curriculum as is; to modify the curriculum; or to eliminate the curriculum.
Evaluation is a disciplined inquiry to determine the worth of things. ‘Things’ may include programmes, procedures or objects. Generally, research and evaluation are different even though similar data collection tools may be used. They may differ on three dimensions:

• First, evaluation need not have as its objective the generation of knowledge; evaluation is applied while research tends to be basic.
• Second, evaluation presumably produces information that is used to make decisions or that forms the basis of policy. Evaluation yields information that has immediate use, while research need not.
• Third, evaluation is a judgement of worth. Evaluation results in value judgements, while research need not and, some would say, should not.

As mentioned earlier, evaluation is the process of determining the significance or worth of programmes or procedures. Scriven (1967) differentiated between formative evaluation and summative evaluation. These terms have come to mean different things to different people, but in this chapter Scriven’s original definitions will be used.

8.2.1 Formative evaluation

The term formative indicates that data is gathered during the formation or development of the curriculum so that revisions to it can be made. Formative evaluation may include determining who needs the programme (e.g. secondary school students), how great the need is (e.g. students need to be taught ICT skills to keep pace with the expansion of technology) and how to meet the need (e.g. introduce a compulsory subject on ICT for all secondary school students). In education, the aim of formative evaluation is usually to obtain information to improve a programme.
In formative evaluation, experts would evaluate the match between the instructional strategies and materials used and the learning outcomes the curriculum aims to achieve. For example, it is possible that in a curriculum plan the learning outcomes and the learning activities do not match: you want students to develop critical thinking skills, but there are no learning activities which provide opportunities for students to practise critical thinking. Formative evaluation by experts is useful before full-scale implementation of the programme.
Review of the curriculum plan by experts may provide useful information for modifying or revising selected strategies. In formative evaluation, learners may also be included to review the materials to determine whether they can use the new materials: for example, do they have the relevant prerequisites, and are they motivated to learn? From these formative reviews, problems may be discovered. For example, a curriculum document may contain spelling errors, a confusing sequence of content, or inappropriate examples or illustrations.
The feedback obtained could be used to revise and improve instruction, or to decide whether or not to adopt the programme before full implementation.

8.2.2 Summative evaluation

The term summative indicates that data is collected at the end of the implementation of the curriculum programme. Summative evaluation can occur just after new course materials have been implemented in full (i.e. to evaluate the effectiveness of the programme), or several months to years after the materials have been implemented in full.
It is important to specify what questions you want answered by the evaluation and what decisions will be made as a result of it. You may want to know whether learners achieved the objectives or whether the programme produced the desired outcomes; for example, did the use of specific simulation software in the teaching of geography enhance the decision-making skills of learners? These outcomes can be determined through formal assessment tasks such as marks obtained in tests and examinations. Also of concern is whether the innovation was cost-effective.
Was the innovation efficient in terms of time to completion? Were there any unexpected outcomes? Besides quantitative data to determine how well students met specified objectives, data could also include qualitative interviews, direct observations and document analyses.

How should you go about evaluating a curriculum? Several experts have proposed different models describing how and what should be involved in evaluating a curriculum. Models are useful because they help you define the parameters of an evaluation, the concepts to study and the procedures to be used to extract important data.
Numerous evaluation models have been proposed, but three models are discussed here.

8.3.1 Context, Input, Process, Product (CIPP) Model

Daniel L. Stufflebeam (1971), who chaired the Phi Delta Kappa National Study Committee on Evaluation, introduced a widely cited model of evaluation known as the CIPP (context, input, process and product) model. The approach, when applied to education, aims to determine whether a particular educational effort has resulted in a positive change in a school, college, university or training organisation.
A major aspect of Stufflebeam’s model is centred on decision making, or the act of making up one’s mind about the programme introduced. For evaluations to be done correctly and to aid the decision-making process, curriculum evaluators have to:

• first, delineate what is to be evaluated and determine what information has to be collected (e.g. how effective the new science programme has been in enhancing the scientific thinking skills of children in the primary grades);
• second, obtain or collect the information using selected techniques and methods (e.g. interview teachers, collect test scores of students);
• third, provide or make available the information (in the form of tables, graphs) to interested parties.
To decide whether to maintain, modify or eliminate the new curriculum or programme, information is obtained by conducting the following four types of evaluation: context, input, process and product. Stufflebeam’s model of evaluation relies on both formative and summative evaluation to determine the overall effectiveness of a curriculum programme (see Figure 8.1). Evaluation is required at all levels of the programme implemented.
Figure 8.1 Formative and summative evaluation in the CIPP Model

a) Context Evaluation (What needs to be done, and in what context?)

This is the most basic kind of evaluation, with the purpose of providing a rationale for the objectives. The evaluator defines the environment in which the curriculum is implemented, which could be a classroom, school or training department. The evaluator determines needs that were not met and the reasons why they are not being met.
Also identified are the shortcomings and problems in the organisation under review (e.g. a sizeable proportion of students in secondary schools are unable to read at the desired level, the ratio of students to computers is large, a sizeable proportion of science teachers are not proficient enough to teach in English). Goals and objectives are specified on the basis of context evaluation. In other words, the evaluator determines the background against which the innovations are being implemented.
The techniques of data collection would include observation of conditions in the school, background statistics of teachers and interviews with the parties involved in implementing the curriculum.

b) Input Evaluation (How should it be done?)

The purpose of input evaluation is to provide information for determining how to utilise resources to achieve the objectives of the curriculum. The resources of the school and various designs for carrying out the curriculum are considered. At this stage the evaluator decides on the procedures to be used.
Unfortunately, methods for input evaluation are lacking in education. The prevalent practices include committee deliberations, appeals to the professional literature, the employment of consultants and pilot experimental projects.

c) Process Evaluation (Is it being done?)

Process evaluation is the provision of periodic feedback while the curriculum is being implemented.

d) Product Evaluation (Did it succeed?)

Product evaluation addresses the outcomes of the initiative. Data is collected to determine whether the curriculum accomplished what it set out to achieve (e.g. to what extent students have developed more positive attitudes towards science). Product evaluation involves measuring the achievement of objectives, interpreting the data and providing decision makers with information that will enable them to decide whether to continue, terminate or modify the new curriculum.
For example, product evaluation might reveal that students have become more interested in science and more positive towards the subject after the introduction of the new science curriculum. Based on these findings, the decision may be made to implement the programme throughout the country.

8.3.2 Case Study: Evaluation of a Programme on Technology Integration in Teaching and Learning in Secondary Schools

The integration of information and communication technology (ICT) in teaching and learning is growing rapidly in many countries.
The use of the internet and other computer software in teaching science, mathematics and the social sciences is more widespread today. Evaluating the effectiveness of such a programme using the CIPP model would involve examining the following:

Context: Examine the environment in which technology is used in teaching and learning
• How did the real environment compare to the ideal? (e.g. the programme required five computers in each classroom, but there were only two computer labs of 40 units each for 1,000 students)
• What problems are hampering the success of technology integration? (e.g. technology breakdowns, not all schools had internet access)
• About 50% of teachers do not have basic computer skills

Input: Examine what resources are put into technology integration (identify the educational strategies most likely to achieve the desired result)
• Is the content selected for using technology appropriate?
• Have we used the right combination of media (internet, video clips, etc.)?

Process: Assess how well the implementation works (uncover implementation issues)
• Did technology integration run smoothly?
• Were there technology problems?
• Were teachers able to integrate technology in their lessons as planned?
• What are the areas of the curriculum in which most students experienced difficulty?

Product: Address the outcomes of the learning (gather information on the results of the educational intervention to interpret its worth and merit)
• Did the learners learn using technology? How do you know?
• Does technology integration enhance higher-order thinking?

8.3.3 Stake’s Countenance Model

The model proposed by Robert Stake (1967) suggests three phases of curriculum evaluation: the antecedent phase, the transaction phase and the outcome phase.
The antecedent phase includes conditions existing prior to instruction that may relate to outcomes. The transaction phase constitutes the process of instruction, while the outcome phase relates to the effects of the programme. Stake emphasises two operations: descriptions and judgements. Descriptions are divided according to whether they refer to what was intended or what was actually observed. Judgements are separated according to whether they refer to the standards used in arriving at the judgements or to the actual judgements themselves.

Figure 8.3 Stake’s Countenance Model: Antecedents, Transactions, Outcomes

8.3.4 Eisner’s Connoisseurship Model

Elliot Eisner, a well-known art educator, argued that learning is too complex to be broken down into a list of objectives and measured quantitatively to determine whether it has taken place. He argued that the teaching of small, manageable pieces of information prohibits students from putting the pieces back together and applying them to new situations. As long as we evaluate students based on small bits of information, students will only learn small bits of information.
Eisner contends that evaluation has and always will drive the curriculum. If we want students to be able to solve problems and think critically, then we must evaluate problem solving and critical thinking, skills which cannot be learned by rote practice. So, to evaluate a programme, we must attempt to capture the richness and complexity of classroom events. He proposed the Connoisseurship Model, in which he claimed that a knowledgeable evaluator can determine whether a curriculum programme has been successful, using a combination of skills and experience.
The word ‘connoisseurship’ comes from the Latin word cognoscere, meaning ‘to know’. For example, to be a connoisseur of food, paintings or films, you must have knowledge about and experience with different types of food, paintings or films before you are able to criticise them. To be a food critic, you must be a connoisseur of different kinds of food. To be a critic, you must be aware of and appreciate the subtle differences in the phenomenon you are examining. In other words, the curriculum evaluator must seek to be an educational critic.