Year : 2010, Volume : 1, Issue : 1
First page : ( 15) Last page : ( 43)
Print ISSN : 2231-0681. Online ISSN : 2231-069X. Published online : 2010 June 1.

Conceptual Compendium on Complex Issues in Evaluation of Training

Satpathy LT COL J*

*GSO-1, Education, 27 HQ, Mtn Div, 99 APO

Abstract

The courses of training are the mainstay of training methodology in establishments. In the conventional sense, once the objectives of training are decided, a curriculum is designed hypothetically and implemented. More emphasis is given to planning and implementation, ignoring evaluation and objective assessment of the effectiveness and outcome of the training programme being implemented. The need for an evaluation system is based on the hypothesis that the success of any training programme depends upon the validity of the said programme. It is, therefore, necessary that each training course should have an in-built monitoring and evaluation system. This will ensure modification/improvement of the process, content and organizational effectiveness to meet the ever-changing requirements of the environment in which a trainee has to perform/transfer his learning. The important aspect is the behavioral aspect of training. It will answer questions about the consistency of the process of training. Ultimately, formative evaluation assesses and improves course validity or the validity of the process. The content, process of implementation, outcome and relevance to the organization have never been tested after implementation of the curriculum, thereby leaving the test of validity to subjective guesswork and conjecture. In this paper, the author dwells on the genesis of formative evaluation, organisational change, individual change, instructional design, cognitive load theory and the design of instruction, learning design, the ADDIE model, purposes of the technique, systematic inquiry, competence, integrity/honesty, responsibilities, classification and evaluation of approaches, pseudo-evaluation, objectivist/elite/quasi-evaluation, strengths and limitations, and research issues and significance in the evaluation of training programmes.


Introduction

Evaluation is the systematic determination of the merit, worth and significance of something or someone, using criteria against a set of standards. Evaluation is often used to characterize and appraise subjects of interest in a wide range of human enterprises. The field and practice of formative evaluation has its background in a diverse array of disciplines, and the focus of its many practitioners reflects this varied framework. This follows the generally accepted premise that

“The best information for evaluation of training can be acquired by using objective, criterion-referenced measures.” Disciplines and areas from which formative evaluation draws its methods and processes include human resource management, industrial psychology, human resource development, educational psychology, human engineering, instructional systems, and communications.

Formative evaluation seeks to strengthen or improve a programme or intervention by examining, among other things, the delivery of the programme, the quality of its implementation and the organisational context, personnel, structures and procedures. As a change-oriented evaluation approach, it is especially attuned to assessing, in an ongoing way, any discrepancies between the expected direction and outputs of the programme and what is happening in reality, to analysing strengths and weaknesses, to uncovering obstacles, barriers or unexpected opportunities, and to generating understandings of how the programme could be implemented better.

Formative evaluation is responsive to the dynamic context of a programme and attempts to ameliorate the messiness that is an inevitable part of complex, multi-faceted programmes in a fluid environment. Formative evaluation pays special attention to the delivery and intervention system, but not exclusively. In formative evaluation, the evaluator has to analyse intervention logic, outcomes, results and impacts. Formative evaluation activities include the collection and analysis of secondary data over the lifecycle of the programme and timely feedback of evaluation findings to programme actors to inform ongoing decision-making and action (a form of operational intelligence). It requires an effective secondary data collection strategy, incorporating routinised monitoring of secondary data alongside more tailored evaluation activities. Feedback is primarily designed to fine-tune the implementation of the programme, although it may also contribute to policy-making at the margins through piecemeal adaptation.

Evaluators conducting formative evaluation ask different kinds of questions and use a variety of methods to address them. Questions are commonly open-ended and exploratory, aimed at uncovering the processes by which the programme takes shape, establishing what has changed from the original design and why, or assessing organisational factors such as the extent of ‘buy-in’ by staff to the programme's goals and intended outcomes. Formative evaluation investigates the relationship between inputs and outcomes, which involves the formulation and measurement of early or short-term outcome measures. These often have a process flavour and serve as interim markers of more tangible longer-term outcomes.

Formative evaluation's concern with the efficiency and effectiveness of project management can be addressed through management-oriented methods such as flow charting, PERT/CPM (Programme Evaluation and Review Technique and Critical Path Method) and project scheduling. The measurement of interim or short-term outcome measures, which capture steps in the theory of how change will be achieved over the long term, involves the construction of qualitative or process indicators and the use of basic forms of quantitative measurement. Formative evaluation may be planned and managed in a variety of ways. The prevailing practice has been to prioritize the information needs of staff (policy makers, programme managers) as those primarily responsible for programme steerage, leaving unspecified the roles that staff play in reshaping plans and strategies in response to feedback.
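
By way of illustration only (not drawn from the source literature), the following Python sketch shows the Critical Path Method referred to above, using hypothetical course-development activities and durations: a forward pass computes earliest start and finish times, a backward pass computes latest times, and activities with zero slack constitute the critical path.

```python
from collections import defaultdict

# Hypothetical training-course activities: name -> (duration in days, predecessors)
activities = {
    "design_curriculum": (5, []),
    "prepare_materials": (7, ["design_curriculum"]),
    "train_instructors": (3, ["design_curriculum"]),
    "pilot_course":      (4, ["prepare_materials", "train_instructors"]),
    "formative_review":  (2, ["pilot_course"]),
}

def critical_path(acts):
    # Forward pass: earliest start (ES) and earliest finish (EF).
    es, ef = {}, {}
    def forward(name):
        if name in ef:
            return ef[name]
        dur, preds = acts[name]
        es[name] = max((forward(p) for p in preds), default=0)
        ef[name] = es[name] + dur
        return ef[name]
    for a in acts:
        forward(a)
    project_length = max(ef.values())

    # Backward pass: latest finish (LF) and latest start (LS).
    successors = defaultdict(list)
    for name, (_, preds) in acts.items():
        for p in preds:
            successors[p].append(name)
    lf, ls = {}, {}
    def backward(name):
        if name in ls:
            return ls[name]
        dur, _ = acts[name]
        lf[name] = min((backward(s) for s in successors[name]), default=project_length)
        ls[name] = lf[name] - dur
        return ls[name]
    for a in acts:
        backward(a)

    # Activities with zero slack (ES == LS) lie on the critical path.
    critical = [a for a in acts if es[a] == ls[a]]
    return project_length, critical

length, path = critical_path(activities)
print(f"Project length: {length} days; critical activities: {path}")
```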

Literature Overview

Newer conceptions of formative evaluation, such as the ‘Mutual Catalytic Model’ (Chacon-Moscoso, 2002), emphasise a more inclusive approach to the involvement of participants and seek to elicit their participation as collaborators in the evaluation process rather than simply as providers of information. The role of the evaluator changes from gathering secondary data and communicating evaluation findings to engaging programme participants in a form of evaluative inquiry. Organisational actors are helped to generate their own secondary data and feedback through collective learning processes. TenBrink (1974) connotes formative evaluation as ‘the process of obtaining information and using it to form judgments that in turn are to be used in decision-making.’ M. C. Alkin (1974) connotes formative evaluation as ‘the process of ascertaining the decision areas of concern, selecting appropriate information, and collecting and analyzing information in order to report summary secondary data useful to decision makers in selecting among alternatives’. Cronbach (1975) broadly connotes formative evaluation as ‘collection and use of information to make decisions about an educational program’. Although definitions focus on the attainment of objectives, the passing of judgment, or the process of scientific inquiry, deciding a course of action is a common thread in most definitions of formative evaluation (Stevens, Lawrenz, and Sharp, 1997). The terms formative evaluation, assessment and measurement appear to be used interchangeably. Choppin (1985) attempts to ‘maintain the semantic distinctions’ between these terms and identifies the ‘ultimate objective’ of each.

The concept of formative evaluation and metamorphosis has been the object of significant public debate during the past two decades. Murphy (1993) identified three waves of reform in the 1980s. During the 1990s, the trend continued with major and minor reform efforts, with advances in digital design generating a resurgence of expertise innovations for education. Metamorphosis and reform efforts were targeted at education, particularly the K-12 systems, in a variety of categories. Papagiannis, Easton, and Owens (1992) delineated categories focusing on educational governance, instructional methodology and funding or resource issues. They distinguish between ‘the fundamental issue of educational governance from equally important but derivative issues of instructional methodology, administrative educational organisation and resource use’. Elmore, Peterson and McCarthey (1992) were particularly interested in the structure of the subjects they studied, specifically taught grouping, the allocation of time to content or subject matter, and instructors relating to groups of taughts. In ‘Educational Restructuring: A Study’, Cawelti (1994) described five categories of reform:

  • Curriculum/Teaching,

  • Educational institution organisation,

  • Community Outreach,

  • Know-How and

  • Monetary Incentives.

Integrating evaluation with program development is critical to producing educational programs that have demonstrable impact. Scriven (1967) was the first to connote two types of educational program evaluation: formative and summative. Patton (1994) outlined their sequential nature: first, formative secondary data are collected and used to prepare for the summative evaluation; then, a summative evaluation is conducted to provide secondary data for external accountability. Extension evaluators recognize this sequence but place importance on the second phase due to the need for impact secondary data to address accountability (Voichick, 1991). Patton and others emphasize that evaluation should be an integral part of the program development process and place equal or greater weight on the first phase, formative evaluation. According to Patton (1994), formative evaluation should provide feedback on the original program and improve program implementation, while a summative evaluation should determine whether desired outcomes are achieved and can be attributed to the revised program.

Chambers (1994) argued that it is not timing, but the use of evaluation secondary data, that distinguishes formative from summative. He emphasizes that formative evaluation provides secondary data with which to modify the initial intervention and its delivery so that the final intervention is more effective, as revealed by the summative evaluation. Scheirer (1994) recommends using formative evaluation in a pilot situation to collect information on the feasibility of activities and their acceptance by recipients, suggesting qualitative methods to gather secondary data. In sum, these researchers suggest that formative evaluation should examine the effect of the program, the process of delivery and the reaction of participants.

Researchers suggest using focus groups as the formative evaluation method for planning a short-term nutrition intervention (Crockett, Heller, Merkel, and Peterson, 1990; Iszler et al., 1995). Although a formative evaluation had been reported of a short-term program similar to the one the researcher wished to design (Crockett, Heller, Skauge and Merkel, 1992), the report failed to disclose any secondary data or secondary data collection method that would have provided insights on how to improve the materials for the summative evaluation. These reports also did not provide a comprehensive model that combined program development and evaluation steps.

In program development, first the inputs are collected and considered. Receiver inputs include measures of participants’ initial skills, attitudes, beliefs, and habits. Evaluator inputs include consideration and choice of communication channels (such as interpersonal or mass media), the source or sender of the message, and the content and format of the message. The situational factors to consider include the time and place of delivery, the repetitiveness of the message, and whether the message is one-way or reciprocal. All inputs can influence responses to the programme.

Next, an educational program is delivered with some level of interaction, and participants’ attention to and comprehension of different aspects of the program are measured. The final step is outcome measurement. Acceptance or rejection of the program is determined at the cognitive (knowledge), affective (attitude) and behavioral levels. This information guides program modification, closing the feedback loop. The model integrates program development and formative evaluation. The model specifies conducting an educational intervention or program. Any level of a pilot program desired by the evaluator can fit the model. However, offering a pilot program within the desired situational context and administering pilot instruments is critical to good formative evaluation. Researchers disagree with Iszler et al. (1995), who suggested that merely exposing members of the target audience to program ideas in a focus group setting is sufficient formative evaluation.
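
As a purely illustrative aid (the cited studies do not prescribe any particular implementation), the following Python sketch records hypothetical pre- and post-programme scores for pilot participants at the cognitive, affective and behavioral levels and flags the levels whose mean gain falls below an assumed threshold, closing the feedback loop described above.

```python
from dataclasses import dataclass

@dataclass
class PilotOutcome:
    """Hypothetical pre/post scores for one participant on a 0-100 scale."""
    cognitive_pre: float    # knowledge
    cognitive_post: float
    affective_pre: float    # attitude
    affective_post: float
    behavioral_pre: float   # reported or observed practice
    behavioral_post: float

    def gains(self):
        return {
            "cognitive": self.cognitive_post - self.cognitive_pre,
            "affective": self.affective_post - self.affective_pre,
            "behavioral": self.behavioral_post - self.behavioral_pre,
        }

def components_needing_revision(outcomes, min_mean_gain=5.0):
    """Average gains across participants; any level below the (assumed)
    threshold is flagged for programme modification."""
    totals = {"cognitive": 0.0, "affective": 0.0, "behavioral": 0.0}
    for o in outcomes:
        for level, gain in o.gains().items():
            totals[level] += gain
    means = {level: total / len(outcomes) for level, total in totals.items()}
    return [level for level, mean in means.items() if mean < min_mean_gain]

# Example with two hypothetical pilot participants
pilot = [
    PilotOutcome(40, 70, 50, 55, 30, 32),
    PilotOutcome(55, 80, 60, 62, 35, 36),
]
print(components_needing_revision(pilot))  # e.g. ['affective', 'behavioral']
```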

By offering the pilot program within the situational context, this study discovered (a) problems attracting the desired target audience, (b) unintended changes in program delivery, and (c) serious problems with evaluation instruments, delivery methods and materials that could only have been detected from experiencing a full pilot program. Scheirer (1994) called delivery of the program to other than the intended audience, and alterations in program delivery (unknown to program developers) to suit site conditions, Type III errors. She suggested that evaluation of the process of program delivery is needed to detect these.

Outcome secondary data are enriched by using qualitative methods to secure participant feedback. These methods provided critical secondary data that explained why certain things did not work. This type of secondary data would have helped Crockett et al. (1992) understand the reasons for the lack of program impact. Our qualitative outcome secondary data were richer because participants experienced the full pilot program. Combining quantitative and qualitative measures within the model framework led to a more rigorous examination of the acceptance and impact of a pilot educational program. Formative evaluation involves many different tasks: identification of evaluation goals, planning secondary data collection, contributing to methodological choices, making value judgments and generating evaluation findings.

Depending on the topic of interest, there are professional groups that look to the quality and rigor of the evaluation process. One guiding principle within the evaluation community has been that evaluations be useful. The Committee on Standards for Educational Evaluation has developed standards for educational programmes, personnel and taught evaluation. The standards are broken into four sections: Utility, Feasibility, Propriety and Accuracy. Various institutions have prepared their own standards, more or less related to those produced by the Committee. They provide guidelines about basing value judgments on systematic inquiry, evaluator competence and integrity, respect for people, and regard for the general and public welfare.

Formative evaluation is a type of evaluation that has the purpose of improving programmes. It goes under other names such as developmental evaluation and implementation evaluation. It can be contrasted with other types of evaluation that have other purposes, in particular process evaluation and outcome evaluation. An example of this is its use in instructional design to assess ongoing projects during their construction to implement improvements. Formative evaluation can use any of the techniques that are used in other types of evaluation: surveys, interviews, secondary data collection and experiments (where these are used to examine the outcomes of pilot projects).


Genesis of Formative Evaluation

Much of the foundation of the field of instructional design was laid in World War II, when the military faced the need to rapidly train large numbers of people to perform complex technical tasks, from field-stripping a carbine to navigating across the ocean to building a bomber.

Drawing on research and theories of operant conditioning, training programs focused on observable behaviors. Tasks were broken down into subtasks, and each subtask was treated as a separate learning goal. Training was designed to reward correct performance and remediate incorrect performance. Mastery was assumed to be possible for every taught, given enough repetition and feedback. After the war, the success of the wartime training model was replicated in business and industrial training, and to a lesser extent in the primary and secondary classroom. The approach is still common in military educational administration.

Benjamin Bloom (1956) published an influential taxonomy of what he termed the three domains of learning: cognitive (what we know or think), psychomotor (what we do, physically) and affective (what we feel, or what attitudes we have). These taxonomies still influence the design of instruction. During the latter half of the 20th century, learning theories began to be influenced by the growth of digital computers. In the 1970s, many instructional design theorists began to adopt an information-processing-based approach to the design of instruction. Merrill (1971) developed Component Display Theory (CDT), which concentrates on the means of presenting instructional materials (presentation techniques). Later, in the 1980s, cognitive load theory began to find empirical support for a variety of presentation techniques.


Organisational Change

The studies identified above focus on metamorphosis plans that use one or two interventions to change the educational process. Cuban (1988) describes these methods as First Order attempts at change. First order changes are piecemeal changes that attempt to ‘make what already exists more efficient and more effective, without disturbing the basic educational organisational features, without substantially altering the ways in which adults and children perform their roles’ (Cuban, 1988). Second Order changes, on the other hand, are systemic in nature and attempt to ‘alter the fundamental ways in which educational organisations are put together’ (Cuban, 1988). A number of researchers in the field (Morgan, 1971, 1994; Branson, 1987; Reigeluth and Garfinkle, 1994; Pogrow, 1996) have promoted second order or systemic change.

Both Branson (1987) and Reigeluth and Garfinkle (1994) have used a transportation metaphor to describe the level of change necessary for systemic restructuring. Reigeluth and Garfinkle (1994) related how, as our society changed and moved from an agrarian base to an industrial and then to an information base, our transportation modes changed from the horse to the train, and eventually to the car and plane. These changes were not just visible indicators of the change process, but integral components that were necessary for the innovations to succeed (Reigeluth and Garfinkle, 1994). Branson's (1987) metaphor of the change between prop-driven aircraft and jet-powered aircraft served to highlight his upper-limit hypothesis of the current state of education. His basic proposition was that just as piston-engine aircraft went through a continuous development process until they reached the practical upper limit of performance, so has the current educational system. No more gains in performance can be reached under the current design philosophy of education (Branson, 1998).

The system itself virtually ensures that approximately 6.25% of children in educational institutions will have instructors in the lower quartile of performance two years in a row, leading to a learning deficit that is ‘virtually unrecoverable’ (Branson, 2000, p.197). Therefore, a completely new system must be designed. Pogrow (1996) and Reigeluth and Garfinkle (1994) concur with Branson. Pogrow stated that systemic change must occur for real progress to be made. He is particularly insistent about the need for systematic planning for specific learning outcomes as a prerequisite for an improved system.


Individual Change

One of the problems associated with systemic or deep change efforts is the individual's ability to deal with change. Cawelti (1994) addressed this personal change issue in terms of high educational institution instructors by stating ‘some change theorists argue that it is better to undertake a full transformation, but practitioners find it difficult to manage such comprehensive change. Experience has shown that difficulties and resistance sometimes arise from implementing even a single element, such as a schedule change or establishment of standards’.

Various theories have been presented that address the problems of individuals attempting to change their behaviors. Prochaska et al. (1994) developed a model of personal change known as the trans-theoretical approach. They believe that change requires time and energy and often requires multiple attempts to achieve success. Rather than label the unsuccessful attempts as failures, they believe these attempts are part of an iterative process. Information, strategies and tactics developed during early change attempts may be reinterpreted and recycled as part of a subsequent attempt at personal change.

Vohs and Heatherton (2000) have researched the problem of self-regulatory failure under the construct of resource depletion. Their studies of self-regulatory failure investigated how ‘self-regulatory resources can be depleted or fatigued by self-regulatory demands. Hence, the active effort required to control behavior in one domain leads to diminished capacity for self-regulation in other domains’ (Vohs and Heatherton, 2000). In the case of a classroom instructor, this could happen quite easily, given the continual changes of schedules, instructional programs, staffing changes, and know-how implementation. Researchers (Sweller, 1989) have used the construct of cognitive load to address the difficulty individuals have with learning new material or implementing new processes. Taken together, the constructs of self-regulatory failure due to resource depletion and cognitive load provide a strong theoretical lens through which the individual change process can be viewed.


Instructional Design

Instructional Design is the practice of arranging media (communication expertise) and content to help taughts and instructors transfer knowledge most effectively. The process consists broadly of determining the current state of taught understanding, defining the end goal of instruction and creating some media-based ‘intervention’ to assist in the transition. Ideally, the process is informed by pedagogically tested theories of learning and may take place in taught-only, instructor-led or community-based settings. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed.

As a field, instructional design is historically and traditionally rooted in cognitive and behavioral psychology. However, because it is not a regulated, well-understood field, ‘instructional design’ has been co-opted by or confused with a variety of other ideologically based and/or professional fields. Instructional design, for example, is not graphic design, although graphic design (from a cognitive perspective) could play an important role in Instructional Design.


Cognitive Load Theory and Design of Instruction

Cognitive load theory developed out of several empirical studies of taughts as they interacted with instructional materials. Researchers began to measure the effects of working memory load and found that the format of instructional materials has a direct effect on the performance of the taughts using those materials. While the media debates of the 1990s focused on the influences of media on learning, cognitive load effects were being documented in several journals.

By the mid- to late 1990s, researchers had discovered several learning effects related to cognitive load and the design of instruction (the split-attention effect, the redundancy effect and the worked-example effect). Later, researchers began to attribute learning effects to cognitive load. Mayer and his associates soon developed a Cognitive Theory of Multimedia Learning. Rather than attempting to substantiate the use of media, cognitive load learning effects provided an empirical basis for the use of instructional strategies. Mayer asked the instructional design community to reassess the media debate and to refocus their attention on what was most important: learning.

In the past decade, cognitive load theory has begun to be internationally accepted and has begun to revolutionize how practitioners of instructional design view instruction. Recently, human performance experts have taken notice of cognitive load theory and have begun to promote this theory base as the science of instruction, with instructional designers as the practitioners of this field. Finally, Clark, Nguyen and Sweller (2007) described how instructional designers can promote efficient learning using evidence-based guidelines derived from cognitive load theory.


Learning Design

The Learning Design specification supports the use of a wide range of pedagogies in online learning. Rather than attempting to capture the specifics of particular pedagogies, it does this by providing a generic and flexible language designed to enable many different pedagogies to be expressed. The approach has the advantage over alternatives that only one set of learning design and runtime tools then needs to be implemented in order to support the desired wide range of pedagogies. The language was originally developed at the Open University of the Netherlands (OUNL), after extensive examination and comparison of a wide range of pedagogical approaches and their associated learning activities, and after several iterations of the developing language to obtain a good balance between generality and pedagogic expressiveness.

A criticism of Learning Design theory is that learning is an outcome. Instructional Design focuses on outcomes but, while properly accounting for a multi-variate context that can only be predictive, acknowledges that (given the variability in human capability) a guarantee of reliable learning outcomes is improbable. We can only design instruction; we cannot design learning (an outcome). The former is the metaphor for Instructional Design; the latter is the metaphor for Learning Design.

ADDIE Model:

The most common model used for creating instructional materials is the ADDIE Model. This acronym stands for the five phases contained in the model:

Analyze - Analyze taught characteristics, task to be learned, etc.

Design - Develop learning objectives, choose an instructional approach

Develop - Create instructional or training materials

Implement - Deliver or distribute the instructional materials

Evaluate - Make sure the materials achieved the desired goals
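
Purely as an illustration of how the five phases can be treated as an iterative checklist (the phase names are from the model above; the artefact names are hypothetical), a minimal Python sketch follows.

```python
# A minimal, hypothetical representation of the ADDIE phases and the
# artefacts each phase is expected to yield before the next one begins.
ADDIE_PHASES = [
    ("Analyze",   ["taught characteristics", "task analysis", "learning need"]),
    ("Design",    ["learning objectives", "instructional approach"]),
    ("Develop",   ["instructional materials", "assessment items"]),
    ("Implement", ["delivered course", "delivery records"]),
    ("Evaluate",  ["formative findings", "revision list"]),
]

def next_incomplete_phase(completed_artefacts):
    """Return the first phase whose expected artefacts are not all present."""
    for phase, artefacts in ADDIE_PHASES:
        missing = [a for a in artefacts if a not in completed_artefacts]
        if missing:
            return phase, missing
    return None, []

# Usage: a course team that has finished analysis but only part of design
done = {"taught characteristics", "task analysis", "learning need",
        "learning objectives"}
phase, missing = next_incomplete_phase(done)
print(phase, missing)  # Design ['instructional approach']
```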

Most of the current instructional design models are variations of the ADDIE model. A sometimes-utilized adaptation of the ADDIE model is the practice known as rapid prototyping, although rapid prototyping is considered a somewhat simplistic type of model. At the heart of instructional design is the analysis phase. After thoroughly conducting the analysis, the researcher can then choose a model based on the findings. That is the area where most practitioners get snagged: they simply do not do a thorough enough analysis. Proponents suggest that, through an iterative process, the verification of design documents saves time and money by catching problems while they are still easy to fix. This approach is not novel to the design of instruction; it appears in many design-related domains, including software design, architecture, transportation planning, product development, message design, user experience design, etc.

Systems Approach Model:

Another well-known instructional design model is the Systems Approach Model. Walter Dick, Lou Carey, and James Carey originally published the model in 1978. Dick and Carey made a significant contribution to the instructional design field by championing a systems view of instruction, as opposed to viewing instruction as a sum of isolated parts. The model addresses instruction as an entire system, focusing on the interrelationship between context, content, learning and instruction. According to Dick and Carey, ‘Components such as the instructor, taughts, materials, instructional activities, delivery system, and learning and performance environments interact with each other and work together to bring about the desired taught learning outcomes.’

The components of the Systems Approach Model, also known as the Dick and Carey Model, are as follows:

Identify Instructional Goal(s)

Conduct Instructional Analysis

Analyze Taughts and Contexts

Write Performance Objectives

Develop Assessment Instruments

Develop Instructional Strategy

Develop Instructional Materials

Conduct Formative Evaluation

Revise Instruction

Conduct Summative Evaluation


Miscellaneous Models

Formative evaluation developed relatively late in the course of evaluation's emergence as a discipline because of growing frustration with an exclusive emphasis on outcome evaluation as the only purpose for evaluation activity. Outcome evaluation looks at the intended or unintended positive or negative consequences of a program, policy or educational organisation. While outcome evaluation is useful where it can be done, it is not always the best type of evaluation to undertake.

For instance, in many cases it is difficult or even impossible to undertake an outcome evaluation because of either feasibility or cost. In other cases, even where outcome evaluation is feasible and affordable, it may be a number of years before the results of an outcome evaluation become available. Therefore, attention has turned to using evaluation techniques to maximise the chances that a program will be successful instead of waiting until the results of a program are available to assess its usefulness. Formative evaluation therefore complements outcome evaluation rather than being an alternative to it.

Learning theories play an important role in the design of instructional materials. Theories such as behaviorism, constructivism, social learning and cognitivism help shape and connote the outcome of instructional materials. Other models of instructional design include the Smith/Ragan Model and the Morrison/Ross/Kemp Model. Since instructional design deals with creating useful instruction and instructional materials, many other areas are related to the field, such as Assessment, Confidence-Based Learning, Educational Animation, Educational Psychology, Educational Know-How, E-Learning, Electronic Portfolios, Evaluation, Instructional Know-How, M-Learning, Multimedia Learning, Online Education, Storyboarding, Training, Inter-disciplinary Teaching, Rapid Prototyping and Understanding by Design.

Those undertaking a formative evaluation may need to be specific about the roles each participant group can and should play, and how the knowledge of all participant groups can be brought together in ways that contribute to improved programme performance. Evaluators need also to consider the different needs of participant groups for different kinds of information, and to determine what kinds of feedback mechanisms or fora are appropriate in each case. Evaluators may want evidence of the progress being made towards objectives and early identification of problematic areas of implementation where intervention is required. Formal presentations, tied into the decision-making cycle, are effective for this purpose. For evaluators and practitioner staff, the kinds of evaluation findings they find useful are often those which illuminate the organisation's culture, policies and procedures and how these are impacting on their work, or the extent to which perceptions and experiences of the programme are shared by those delivering the programme and its recipients or beneficiaries.

They look to evaluation to enhance the quality of secondary data that will allow them to make quick decisions in an environment where they are commonly being asked to do more and more with fewer and fewer resources. Evaluators need to create fora that bring local programme actors together to engage in dialogue, drawing on evaluation findings and reflecting on their own knowledge and experience. In essence, the evaluator is seeking to create a collective learning process through which participants share meanings, understand the complex issues of programme implementation, examine linkages between actions, activities and intended outcomes, develop a better understanding of the variables that affect programme success and failure, and identify areas where they may need to modify their thinking and behaviours.

Formative evaluation has of late become the recommended method of evaluation in education. In this context, an evaluator would analyze the performance of a taught during the teaching/intervention process and compare this secondary data to the baseline secondary data. There are four visual criteria that can be applied: 1) change in mean, 2) change in level or discontinuity of performance, 3) change in trend or rate of change, and 4) latency of change.
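
By way of illustration only (the literature does not prescribe a single computation), the following Python sketch compares a hypothetical baseline phase with an intervention phase on the first three visual criteria: change in mean, change in level at the phase boundary, and change in trend (slope); latency of change is left to visual inspection.

```python
def slope(values):
    """Least-squares slope of values against their index (a simple trend measure)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0

def visual_criteria(baseline, intervention):
    """Change in mean, level (at the phase boundary) and trend between phases."""
    return {
        "change_in_mean": sum(intervention) / len(intervention)
                          - sum(baseline) / len(baseline),
        "change_in_level": intervention[0] - baseline[-1],
        "change_in_trend": slope(intervention) - slope(baseline),
    }

# Hypothetical weekly performance scores for one taught
baseline = [42, 44, 43, 45]
intervention = [50, 53, 57, 60, 64]
print(visual_criteria(baseline, intervention))
```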

Another method of monitoring progress in formative evaluation is the use of the number-point rule. In this method, if a certain pre-specified number of secondary data points collected during the intervention are above the goal, then the evaluators need to consider raising the goal or discontinuing the intervention. If secondary data points vary widely, evaluators can discuss how to motivate a taught to achieve more consistently.
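
A minimal sketch of the number-point rule described above, with a hypothetical goal value, required number of points and variability threshold, might look as follows.

```python
def number_point_rule(points, goal, required_above):
    """If at least `required_above` intervention points exceed the goal,
    suggest raising the goal (or ending the intervention); if the points
    vary widely, suggest working on consistency instead."""
    above = sum(1 for p in points if p > goal)
    spread = max(points) - min(points)
    if above >= required_above:
        return "raise goal or discontinue intervention"
    if spread > 0.5 * goal:  # arbitrary threshold for "high variability"
        return "discuss how to achieve more consistently"
    return "continue intervention and keep monitoring"

# Hypothetical intervention-phase scores against a goal of 60
print(number_point_rule([55, 62, 58, 65, 61], goal=60, required_above=3))
```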

Formative evaluation is a process of ongoing feedback on performance. Its purposes are to identify aspects of performance that need to improve and to offer corrective suggestions. Be generous with formative evaluation. Share your observations and perceptions with the taught. You might simply share your observation and then ask the taught if (s)he can think of a better approach for the next time. Formative evaluation need not make a judgment. When giving formative feedback, offer alternatives to the taught. Use the taught's patient management documentation as well as your observations of performance to offer formative evaluation. The taught's charting reveals educational organisational skills, priorities, thought process, and judgment. Over the duration of the taught's experience with you, point out improvement to the taught.


Purposes of Technique

Large scale, medium to long term schemes are often designed and implemented in dynamic, fluid contexts characterized by imperfect information, changing policy agendas and goal posts, unpredictable environmental conditions and moving target groups of intended beneficiaries. Formative evaluation is a strategy for dealing with a context of this kind. It starts from the premise that no matter how comprehensive and considered the programme design, it will invariably require steerage and possibly redirection and will be considerably strengthened by opportunities for participant reflection on what is working, what is not going to plan, and what kinds of changes need to be made.

Instructional strategies for successfully teaching concepts are found throughout the instructional design literature. These strategies primarily consist of presenting taughts with definitions, examples, and non-examples. While examples are important presentation instruments, theorists suggest that examples should not be reused in the assessment phase of instruction. The rationale is that encountered examples could be memorized, thus activating different cognitive processes than those required for concept attainment. Consequently, test items referring to encountered examples may have less value in assisting instructors in discerning whether a taught has attained a target concept. There appears to be evidence supporting the notion that examples are not sufficient discriminators for judging a taught's level of concept attainment.

Formative evaluation is prospective in orientation and is conceived within a continuous cycle of information gathering and analysis, dialogue and reflection, and decision-making and action. It has commonalities with forms of evaluative inquiry that draw on organisational learning models and processes, giving it a strong developmental focus for the organisation as a whole and for organisational members. Formative evaluations that are inclusionary and participative, involving local programme actors as active contributors and participants in the evaluation process, bring pragmatic benefits in addition to enhancing professional development and organisational capacity. Including staff as collaborators is likely to facilitate the collection not only of more reliable secondary data, but of secondary data that are actively used to improve daily programme activities at the local level.

Formative evaluation is a problem-solving process used in educational organisations to identify performance problems or opportunities, their causes, and appropriate solutions that, when implemented, will improve performance as evidenced by formative evaluation secondary data. As is often true of an emerging field, many definitions of formative evaluation are in existence. Each emphasizes different aspects of the broad field of human performance. Formative evaluation can help to strengthen horizontal structures and processes by creating and fostering feedback mechanisms and fora, enabling lessons to be shared. It can cultivate much thicker networks of professional and informal contacts between levels of decision-making through facilitating intra- and inter- organisational dialogue and learning.

Formative evaluation has important catalytic effects, mobilizing staff around a course of action and engaging management thinking about future options. Patton introduced the idea of ‘process use’ to describe the utility to participants of being involved in the planning and implementation of an evaluation, irrespective of the findings and recommendations that result. The developmental and capacity-building benefits accrue to staff as a side effect of a participative, formative evaluation. Although formative evaluation is contrasted with summative evaluation, the distinction is not always helpful or apposite. The process of formative evaluation may be an important component in summative evaluation. Formative evaluation can produce early outcome measures which serve as interim markers of programme effects. By tracking changes and linkages between inputs, outputs and outcomes, it can help to identify causal mechanisms that can inform summative assessment. In some contexts, a more fruitful approach would be to see both types of evaluation as part of the same exercise.

Stolovitch and Keeps (1999) connote Formative Evaluation as ‘the application of what is known about human and educational organisational behavior to the enhancement of accomplishments, economically and effectively, in ways that are valued within the work setting’ thus emphasizing the purpose of formative evaluation. A definition by Harless (1995) includes the purpose and mentions the process in a broad nature. Harless (1995) states that Formative Evaluation is ‘an engineering approach to attaining desired accomplishments from human performers by determining gaps in performance and designing cost-effective and efficient interventions’. To synthesize these definitions, formative evaluation can be connoted as the process of analyzing performance-related secondary data to identify a performance gap, its causes, and solutions to close the gap, implementing the solutions, and evaluating the outcome and process used. These are primarily for improving human performance within an educational organisation.

The American Evaluation Association has created a set of Guiding Principles for evaluators. The order of these principles does not imply priority among them; priority will vary by situation and evaluator role. The principles are as follows:

Systematic Inquiry: Evaluators conduct systematic, secondary data-based inquiries about whatever is being evaluated.

Competence: Evaluators provide competent performance to participants.

Integrity/Honesty: Evaluators ensure the honesty and integrity of the entire evaluation process.

Respect for People: Evaluators respect the security, dignity and self-worth of the respondents, program participants, clients, and other participants with whom they interact.

Responsibilities for General Welfare: Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.


Classification and Evaluation of Approaches

Evaluation approaches are conceptually distinct ways of thinking about, designing and conducting evaluation efforts. Most of the evaluation approaches in use today make truly unique contributions to solving important problems, while others refine existing approaches in some way. Two classifications of evaluation approaches, by House and by Stufflebeam and Webster, can be combined into a manageable number of approaches in terms of their unique and important underlying principles.

Studies consider all major evaluation approaches to be based on a common ideology, liberal democracy. Important principles of this ideology include freedom of choice, the uniqueness of the individual, and empirical inquiry grounded in objectivity. It is contended that they are all based on subjectivist ethics, in which ethical conduct is based on the subjective or intuitive experience of an individual or group. One form of subjectivist ethics is utilitarian, in which ‘good’ is determined by what maximizes some single, explicit interpretation of happiness for society as a whole. Another form of subjectivist ethics is intuitionist/pluralist, in which no single interpretation of ‘the good’ is assumed and these interpretations need not be explicitly stated or justified.

These ethical positions have corresponding epistemologies (philosophies of obtaining knowledge). The objectivist epistemology is associated with the utilitarian ethic. In general, it is used to acquire knowledge capable of external verification (inter-subjective agreement) through publicly inspectable methods and secondary data. The subjectivist epistemology is associated with the intuitionist/pluralist ethic. It is used to acquire new knowledge based on existing personal knowledge and experiences that are (explicit) or are not (tacit) available for public inspection.

Stufflebeam and Webster place approaches into one of three groups according to their orientation toward the role of values, an ethical consideration. The political orientation promotes a positive or negative view of an object regardless of what its value actually might be. They call this pseudo-evaluation. The questions orientation includes approaches that might or might not provide answers specifically related to the value of an object. They call this quasi-evaluation. The values orientation includes approaches primarily intended to determine the value of some object. They call this true evaluation.

Pseudo-Evaluation: Politically controlled and public relations studies are based on an objectivist epistemology from an elite perspective. Although both approaches seek to misrepresent value interpretations about some object, they go about it somewhat differently. Information obtained through politically controlled studies is released or withheld to meet special interests. Public relations information is used to paint a positive image of an object regardless of the actual situation. Neither of these approaches is acceptable evaluation practice, although a few examples of their use can be identified.


Objectivist/Elite/Quasi-Evaluation

As a group, these approaches represent a highly respected collection of disciplined inquiry approaches. They are considered quasi-evaluation approaches because particular studies can legitimately focus only on questions of knowledge without addressing any questions of value. Such studies are, by definition, not evaluations. These approaches can produce characterizations without producing appraisals, although specific studies can produce both. Each of these approaches serves its intended purpose well.

Experimental research is the best approach for determining causal relationships between variables. The potential problem with using this as an evaluation approach is that its highly controlled and stylized methodology may not be sufficiently responsive to the dynamically changing needs of most human service programs. Management information systems can give detailed information about the dynamic operations of complex programs. However, this information is restricted to readily quantifiable secondary data usually available at regular intervals.

Testing programs are familiar to just about anyone who has attended an educational institution, served in the military, or worked for a large educational organisation. These programs are good at comparing individuals or groups to selected norms in a number of subject areas or to a set of standards of performance. However, they only focus on tested performance, and they might not adequately sample what is taught or expected. Objectives-based approaches relate outcomes to prespecified objectives, allowing judgments to be made about their level of attainment. Unfortunately, the objectives are often not proven important, or they focus on outcomes too narrow to provide the basis for determining the value of an object.

Content analysis is a quasi-evaluation approach because content analysis judgments need not be based on value statements. Instead, they can be based on knowledge. Such content analyses are not evaluations. On the other hand, when content analysis judgments are based on values, such studies are evaluations.

Objectivist, Mass, Quasi-Evaluation: Accountability is popular with constituents because it is intended to provide an accurate accounting of results that can improve the quality of products and services. However, this approach quickly can turn practitioners and consumers into adversaries when implemented in a heavy-handed fashion.

Objectivist, Elite, True Evaluation: Decision-oriented studies are designed to provide a knowledge base for making and defending decisions. This approach usually requires the close collaboration between an evaluator and decision-maker, allowing it to be susceptible to corruption and bias. Policy studies provide general guidance and direction on broad issues by identifying and assessing potential costs and benefits of competing policies. The drawback is these studies can be corrupted or subverted by the politically motivated actions of the participants.

Objectivist, Mass, True Evaluation:

Consumer-oriented studies are used to judge the relative merits of goods and services based on generalized needs and values, along with a comprehensive range of effects. However, this approach does not necessarily help practitioners improve their work, and it requires a very good and credible evaluator to do it well.

Subjectivist, Elite, True Evaluation:

Accreditation/Certification programs are based on self-study and peer review of educational organisations, programs, and personnel. They draw on the insights, experience, and expertise of qualified individuals who use established guidelines to determine if the applicant should be approved to perform specified functions. However, unless performance-based standards are used, attributes of applicants and the processes they perform often are overemphasized in relation to measures of outcomes or effects. Connoisseur studies use highly refined skills of individuals intimately familiar with the subject of the evaluation to critically characterize and appraise it. This approach can help others see programs in a new light, but it is difficult to find a qualified and unbiased connoisseur.

Subjectivist, Mass, True Evaluation

The adversary approach focuses on drawing out the pros and cons of controversial issues through quasi-legal proceedings. This helps ensure a balanced presentation of different perspectives on the issues, but it is also likely to discourage later cooperation and heighten animosities between contesting parties if ‘winners’ and ‘losers’ emerge. Client-Centered studies address specific concerns and issues of practitioners and other clients of the study in a particular setting. These studies help people understand the activities and values involved from a variety of perspectives. However, this responsive approach can lead to low external credibility and a favorable bias toward those who participated in the study.

Measurement implies a numerical assignment of value, using instruments such as rulers, stopwatches, etc. ‘Measurement is rarely carried out for its own sake. It may be included in an assessment or formative evaluation, but is more to be regarded as a basic research procedure.’ Choppin argues that the term assessment should be reserved for application to people, including grading, certifying, etc. Assessment may often utilize a test for measurement, but it rarely has ‘much in common with scientific measurement.’ The term formative evaluation Choppin reserves for application to ‘abstract entities such as programs, curricula and educational organisational variables’.


Circumstances in Which Applied

Commentators argue that all curricular initiatives operate in conditions of uncertainty and that formative evaluation is a desirable corrective or steerage component of all programmes (Sanderson, 2002). Formative evaluation is particularly relevant to programmes whose goals and objectives cannot be well specified in advance, are open to interpretation by actors at different levels of the system, or seem likely to change over the lifetime of the programme. In many programmes, the objective is to introduce changes in the innovative behaviour of educational institutions and regions and to launch a process of building up collective learning. Formative evaluation can be a driver of, and contributor to, the organic learning and knowledge creation processes that exist within regions and networks, and should itself be understood as a developmental process.

Formative evaluation has most relevance at the ex ante and mid-term phases, and indeed some programmes evolve continuously, never reaching a stage of being finished or complete. Formative evaluation activities may be extended throughout the life of a programme to help guide this evolution. Ex post evaluations may draw on evidence from formative evaluation, although their primary focus is summative. Formative evaluation is ideally built into the programme design as an ongoing activity rather than inserted into a particular phase. It may, however, take a particular form at different stages of the evaluation lifecycle. At the needs assessment stage in an ex ante evaluation, formative evaluation can determine who needs the programme, how great the need is, and what might work to meet the need.

Formative evaluation can inform evaluability assessment. Working with participants in the early stages of clarifying goals and strategies, making them realistic and evaluable, and establishing how much consensus there is among goals and interventions and where the differences lie constitutes the essential groundwork for a formative evaluation. Evaluability assessment becomes an improvement-oriented experience that leads to significant programme changes and shared understandings, rather than just being seen as a planning exercise preparing for summative evaluation. Formative evaluation follows the lifecycle of the initiative through implementation, tracking the fidelity of the programme to goals and objectives, investigating the process of delivery, diagnosing the way component parts of the programme come together and reinforce or weaken one another, and addressing problems as they emerge. Programme implementation is in large part about ongoing adaptation to local conditions. Methods used to study implementation should also be open-ended, discovery oriented and capable of describing developmental processes and changes.

Main Steps

  • The first step is gaining the commitment of key participants and programme actors at all levels to a formative evaluation as a collective learning and change-oriented process. This may require among other things negotiation about access and the use of information, clarification of roles and relationships, and agreement about what kinds of information will be relevant for which kinds of participants.

  • Building evaluation into programme design so that it is perceived as an essential tool for managing the programme and helping it to adapt to local conditions within a dynamic environment. This includes laying the basis for formative evaluation in the early stages of needs assessment and evaluability assessment, and embedding formative evaluation into ongoing organisational processes and structures. Successful formative evaluation depends on the early adoption of an effective secondary data collection strategy and, in many cases, a management information secondary database which allows evaluators easy access to well-organised programme information.

  • Creating an evaluation infrastructure to support formative evaluation as a learning, change-oriented, developmental activity. This includes working with programme staff on an ongoing basis to: create a culture that supports risk-taking, reduces fear of failure, and values lessons learned from mistakes; establish channels of communication that support the dissemination of information and allow organisational members to learn from one another in ways that contribute to new insights and shared understandings; create new opportunities for shared learning and knowledge creation; and modify systems and structures that inhibit organisational learning.

  • A fourth step entails finding out about the decision-making cycle, the different participant groups and their respective information needs and interests. These include policy makers and programme makers at the central level, local site programme managers, and operational staff. Each set of participants will ask different questions of the evaluation and have preferences for the way that findings are presented and/or communicated. Where there is a lack of appropriate mechanisms or opportunities for feedback, the evaluator will need to establish a structured way to provide relevant participants with feedback.

  • Formative evaluation involves an ongoing cycle of secondary data gathering and analysis. The choice of methods will be determined largely by the questions being addressed and the methodological preferences of different participants. Most formative evaluations use a variety of methods. Where a collaborative, participative approach is taken to formative evaluation, the methods are likely to include those which foster and support interaction, dialogue, learning and action.

  • There are different views as to whether the evaluator's responsibility stops with feeding back findings and facilitating processes of learning among programme actors, or whether she or he also has a role to play in follow-through action. Where the evaluator is external to the organisation, the role is likely to be limited to the former.

Formative evaluators may be internally located, especially where the preferred model of formative evaluation is influenced by organisational learning concepts and practices. In these circumstances, the formative evaluation cycle is likely to include shared responsibility for implementing the action plan and monitoring its progress. Formative evaluation implies determining value and worth and often involves making comparisons to other programs, curricula, or educational organisational schemes. Choppin writes, ‘Just as assessment may be characterized as a routine activity in which most evaluators will be involved, formative evaluation is an activity primarily for those engaged in research and development.’

Formative evaluation can serve several important purposes in the development of instruction, including (but not limited to) goal refinement, documentation, determination of impact, and program improvement. Hanson (1978) states, ‘Good formative evaluation is central to the continued development of a profession.’ Formative evaluation is becoming more mainstream as funding agencies require affirmation that their money is being spent on projects or programs that are ‘effective’. Even with growing support for formative evaluation, it has yet to gain acceptance among the masses. Threats to this acceptance are lack of formative evaluation knowledge, time constraints, budgetary constraints, lack of personnel, or a negative predisposition to formative evaluation. Formative evaluation should not be treated as a standardized, prepackaged process.


Strengths and Limitations

Formative evaluation provides a rich picture of a programme as it unfolds. It is a source of valuable learning, not just for the programme itself but for future programmes as well. Formative evaluation is highly complementary to summative evaluation and is essential for understanding why a programme succeeds or fails and what complex factors are at work. Large-scale programmes are often marked by a discrepancy between the formal programme theory and what is implemented locally. Formative evaluation can help determine whether the substantive theory behind the programme is flawed, whether the evaluation was deficient, or whether implementation failed to pass some causal threshold.

To be effective and achieve its purpose of programme improvement, formative evaluation requires strong support from the top as well as bottom-up support. Programme decision-makers and others who will need to act on its findings must endorse it. Support may be withdrawn, overtly or covertly, if the findings expose weaknesses in programme design or implementation, especially where the organisational culture is one of blame and discourages innovation or learning from mistakes. Research findings suggest that programme managers are more receptive to ‘bad news’ communicated by internally located evaluators (‘one of us’) than by independent evaluators.

Formative evaluation can serve an important developmental or capacity-building purpose, for the organisation as a whole and for individual members, where it is seen as a form of organisational learning. It is, however, time- and labour-intensive in comparison with most forms of summative evaluation. It relies primarily on qualitative methods that make heavy demands on time and evaluation expertise, both at the data-gathering stage and during analysis. Depending on the audience for the findings, the reliance on qualitative methods may fail to meet some participants' expectations of robust quantitative measures of progress.


Summary of Approaches

Formative evaluation is made up of five broad phases: performance analysis, cause analysis, intervention selection and design, implementation, and formative evaluation. Various models exist, specific to the formative evaluation process, which address each of these phases. The majority of models, however, emphasize the performance analysis, cause analysis and intervention selection phases. The behavior engineering model (Gilbert, 1996), the performance improvement process (Harless, 1994) and the performance analysis flow diagram (Mager, 1997) all address the phases of performance analysis, cause analysis and intervention. In contrast, few models specific to the formative evaluation process exist which address the implementation phase. The reason so few formative evaluation-specific implementation models have been developed is that formative evaluation practitioners are frequently hired as external consultants whom educational organisations bring in to identify a performance problem and its causes and to propose solutions.
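As an illustrative aid only, the phase coverage described above can be made concrete in a short sketch. The Python structure below is an assumption introduced for clarity; it is not part of any of the cited models:

# Illustrative sketch only: which of the five broad phases each model named
# above addresses, and which phases are left uncovered. The representation
# (lists, sets, function name) is assumed for clarity, not drawn from the models.

PHASES = [
    "performance analysis",
    "cause analysis",
    "intervention selection and design",
    "implementation",
    "formative evaluation",
]

MODEL_COVERAGE = {
    "Behavior engineering model (Gilbert, 1996)":
        {"performance analysis", "cause analysis", "intervention selection and design"},
    "Performance improvement process (Harless, 1994)":
        {"performance analysis", "cause analysis", "intervention selection and design"},
    "Performance analysis flow diagram (Mager, 1997)":
        {"performance analysis", "cause analysis", "intervention selection and design"},
}

def uncovered_phases(coverage):
    """Return the phases not addressed by any of the listed models."""
    covered = set().union(*coverage.values())
    return [phase for phase in PHASES if phase not in covered]

if __name__ == "__main__":
    # Mirrors the gap discussed in the text: implementation and formative
    # evaluation are rarely addressed by model-specific guidance.
    print(uncovered_phases(MODEL_COVERAGE))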

The final phase, formative evaluation, is rarely addressed in any detail by formative evaluation-specific models. Instead, practitioners tend to utilize models that were either designed for a specific intervention (Kirkpatrick's four-level framework, for example, was designed to evaluate training) or adopted from other disciplines such as programme evaluation or human resource development.
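Because Kirkpatrick's four-level framework is referred to above, a brief sketch of its use may be helpful. The level names (Reaction, Learning, Behavior, Results) are standard; the recording structure shown here is only an illustrative assumption:

# Sketch of recording evidence against Kirkpatrick's four levels. The level
# names are standard; the TrainingEvaluation class is a hypothetical structure.

from dataclasses import dataclass, field

KIRKPATRICK_LEVELS = ("Reaction", "Learning", "Behavior", "Results")

@dataclass
class TrainingEvaluation:
    programme: str
    evidence: dict = field(default_factory=dict)  # level name -> list of findings

    def record(self, level, finding):
        if level not in KIRKPATRICK_LEVELS:
            raise ValueError(f"Unknown level: {level}")
        self.evidence.setdefault(level, []).append(finding)

    def levels_without_evidence(self):
        return [lvl for lvl in KIRKPATRICK_LEVELS if lvl not in self.evidence]

if __name__ == "__main__":
    ev = TrainingEvaluation("Map reading course")          # hypothetical course
    ev.record("Reaction", "End-of-course feedback forms collected")
    ev.record("Learning", "Pre/post test scores compared")
    print(ev.levels_without_evidence())                    # ['Behavior', 'Results']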

Yet, even with the use of outside models, formative evaluation is conducted infrequently at best. There are many barriers to evaluation actually happening as part of the formative evaluation process; when it comes to evaluating, the educational organisation's cards are stacked against it. An additional impediment is the time and cost required to complete a formative evaluation. Although steps can be taken to speed up the process, formative evaluations have historically remained expensive in both time and money. Couple this with a dynamic environment, and by the time the results of a formative evaluation are available, the educational organisation may have moved on to another intervention or moved entirely away from the particular performance issue under evaluation.

Another major barrier to formative evaluation is a workforce that is sensitive to the repercussions of downsizing. The threat of accountability that could result from a formative evaluation looms heavy. This perceived threat is even more prevalent in educational organisations where the hierarchical framework has been flattened, forcing decisions to be made at the job-performance level. Perhaps carrying the largest repercussions for practitioners is the fact, noted earlier, that no evaluation model specific to the formative evaluation process exists. Because the evaluation models used by practitioners come from outside disciplines and fields, and no strategic guidance is available to help the potential evaluator decide which model would be most effective in a given situation, it has been said that few people actually know how to conduct a formative evaluation (Gordon, 2000).

In light of the above, a formative evaluation model is needed that is closely related to the human performance expertise process and that addresses the aforementioned barriers to conducting formative evaluations. The purpose is to develop and formatively evaluate an evaluation model designed to guide practitioners in determining the degree of success of the interventions they propose to close performance gaps, and to provide guidance in examining and evaluating the process they use to determine which solutions to implement.
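One possible way of operationalising the degree to which a performance gap has been closed is a simple gap-closure ratio; the formula below is an illustrative assumption, not a measure prescribed by the model discussed here:

# Illustrative only: one way to quantify the degree to which a performance gap
# was closed. The ratio (post - baseline) / (target - baseline) is an assumed
# operationalisation, not part of the formative evaluation model itself.

def gap_closure(baseline, target, post_intervention):
    """Fraction of the original performance gap closed by the intervention.

    0.0 means no improvement over baseline, 1.0 means the target was reached,
    and values above 1.0 mean the target was exceeded.
    """
    gap = target - baseline
    if gap == 0:
        raise ValueError("No performance gap: target equals baseline")
    return (post_intervention - baseline) / gap

if __name__ == "__main__":
    # Hypothetical figures: trainees averaged 55% before training, the required
    # standard is 80%, and they averaged 75% afterwards.
    print(round(gap_closure(55, 80, 75), 2))  # 0.8, i.e. 80% of the gap closed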

As such, the model is designed for use after an intervention has been implemented and informs the owner of a performance problem about the value of the intervention that was employed. Additionally, the model provides information about how each phase in the formative evaluation process was conducted. The primary intended users of the model are individuals who serve as evaluators of an implemented intervention.

These individuals may be either internal or external (consultants) to an educational organisation. Additionally, the model can be used by individuals who have been involved in the entire formative evaluation process (from front-end analysis through implementation) or by persons who are not familiar with any decisions made prior to the implementation of the intervention. The formative evaluation of any new educational process or product should be undertaken with the understanding that a system is always nested within a super-system, that change requires time, effort, energy and resources, and that ultimately it is the individual who must make the change.

While the focus of the model is on conducting a summative evaluation (depending on the amount of time elapsed since the intervention was implemented), portions of the model can be used in a formative manner to inform persons conducting front-end analysis of the thoroughness of their process. Finally, information collected using the model provides continuous-improvement feedback to the formative evaluation community. The model is intended primarily to support the novice practitioner, although aspects of it may be helpful across the whole continuum of users, from novice through expert. For this study, a practitioner is defined as someone with an understanding of the foundational knowledge and skills required to implement the formative evaluation process but with little to no practical experience in applying it. An expert is defined as a person with the knowledge and skills of formative evaluation and one or more years of experience in applying the process in real-world settings.

Specifically, the model needs to be developed and tested against the following criteria; a minimal sketch for recording responses to these questions appears after the list:


Criterion One: Effectiveness

Does use of the model provide a means to identify the degree to which the performance gap was closed?

Did use of the model provide information about the thoroughness of each phase of the formative evaluation process?

Are there additional, unanticipated benefits or limitations involved in using the model?


Criterion Two: Efficiency

What is the time required to conduct formative evaluation using this model?

What is the cost required to conduct formative evaluation using this model?


Criterion Three: Usefulness

What are the barriers to the use of this model?

Do the sponsors of the formative evaluation find the information to be useful?

How difficult is it to use the model?

To what degree does the model guide the user in conducting the formative evaluation?

How intuitive is the model's representation? (Are expectations clear to both a novice and an expert?)
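As indicated above, a minimal sketch of how responses to the three criteria might be recorded during testing of the model is given here. The structure, the wording of the shortened questions and the function name are illustrative assumptions:

# Assumed structure for recording responses to the effectiveness, efficiency
# and usefulness questions while the model is tested; illustrative only.

CRITERIA = {
    "Effectiveness": [
        "Degree to which the performance gap was closed identified?",
        "Information on the thoroughness of each phase provided?",
        "Unanticipated benefits or limitations observed?",
    ],
    "Efficiency": [
        "Time required to conduct the formative evaluation?",
        "Cost required to conduct the formative evaluation?",
    ],
    "Usefulness": [
        "Barriers to the use of the model?",
        "Sponsors find the information useful?",
        "Difficulty of using the model?",
        "Degree to which the model guides the user?",
        "Intuitiveness of the model's representation?",
    ],
}

def blank_response_sheet(criteria):
    """Create an empty response sheet keyed by criterion and question."""
    return {criterion: {question: None for question in questions}
            for criterion, questions in criteria.items()}

if __name__ == "__main__":
    sheet = blank_response_sheet(CRITERIA)
    # Hypothetical entry recorded during a trial of the model.
    sheet["Efficiency"]["Time required to conduct the formative evaluation?"] = "12 person-days"
    print(sheet["Efficiency"])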

Research Issues: A study can investigate the question ‘What actually occurs within a work site during the implementation of a formative evaluation system?’ To document the process of implementation and support the improvement of the programme, additional questions can be posed:

  • In what ways has formative evaluation changed the work of the individual?

  • In what ways has formative evaluation changed the group work?

  • In what ways has the work relationship between colleagues changed due to formative evaluation?

  • As a whole, is formative evaluation a help or a hindrance to the individual?

  • What could be done to make formative evaluation a more useful tool for the user?

  • How can formative evaluation developers incorporate flexibility into their tools?

  • What do evaluators and administrators view as the return on investment for formative evaluation?

  • What performance improvement techniques are applicable to the use of formative evaluation?


Significance

Change in educational systems has taken a myriad of forms, ranging from management and funding reforms to the adoption of new educational-institution calendars and media. Regardless of the context or type of change, however, little research has been done on the implementation process as it evolves. This study is an attempt to gain further information about the implementation process of a formative evaluation template for evaluators. Specifically, a study can attempt to explicate the implementation process as it is experienced by a work group within an educational institution. This study of mandated change will provide insight into how formative evaluation changes the culture of practice within the educational setting. Information derived from the study would shed light on how individuals adapt or reinvent knowledge tools for their own use.


Conclusion

In what way has formative evaluation changed group work? In what way has the working relationship between colleagues changed as a result of formative evaluation? What could be done to make formative evaluation a more useful tool for the user? It is common for special education instructors to collaborate with each other and with specialists in developing formative evaluation for their students. An implication for the development of formative evaluation is that the researcher must make a number of choices regarding the interface design, data storage and initial training of the users. The issues of time, training and knowledge barriers for the respondents in this study point toward a novel way of developing the user interface.

Researchers strongly agree that, within step one, involving multiple parties in the formative evaluation is a good idea, with participants clarifying and focusing the evaluation's intent, primary goals and objectives up-front. Development possibilities permitting, the tool could be ‘packaged’ online so that a user could create a formative evaluation ‘project’ within the tool structure, and the multiple participants could then be given access to that project's files (via user names and passwords).

The benefit of a software-based tool (versus a paper-based one) is the ability to collaborate. The steps in this tool are accessible to any person granted password access: the instructional designer(s), researcher(s), evaluator(s) or participants. The evaluation algorithm is the organizational center of the tool. All deadlines and appointments (entered throughout the tool) are automatically placed on the evaluation algorithm, which is accessible by all with password access, modifiable and printable. As researchers work their way through the tool, they should keep in mind that the tool is not only meant to guide the user through the evaluation process but also to serve as a collaborative and organizational aid within that process.
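The collaborative arrangement described above can be sketched as a simple data model. Every class and field name below is hypothetical; the sketch illustrates the idea of a password-protected project whose deadlines land on a shared, modifiable schedule, not the tool's actual implementation:

# Hypothetical data model for the collaborative, software-based tool described
# above: a project with password-controlled participants and a shared schedule
# onto which every deadline entered in the tool is automatically placed.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Participant:
    username: str
    role: str              # e.g. instructional designer, researcher, evaluator
    password_hash: str     # stand-in for whatever authentication the tool uses

@dataclass
class Deadline:
    description: str
    due: date

@dataclass
class EvaluationProject:
    name: str
    participants: list = field(default_factory=list)
    schedule: list = field(default_factory=list)   # the shared, printable calendar

    def add_deadline(self, description, due):
        """Place a deadline entered anywhere in the tool on the shared schedule."""
        self.schedule.append(Deadline(description, due))

if __name__ == "__main__":
    project = EvaluationProject("Pilot course evaluation")          # hypothetical
    project.participants.append(Participant("lead_eval", "evaluator", "<hashed>"))
    project.add_deadline("Circulate draft evaluation proposal", date(2010, 7, 15))
    print([d.description for d in project.schedule])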

Determining the goals and objectives of the product/program being evaluated helps to discern what the product/program was designed to accomplish. This helps when formulating the evaluation questions as well as the instrument questions. Identifying all participants involved in the product/program and the evaluation helps to ensure that the necessary people have input. This list of participants is used later when determining who receives the evaluation proposal and who receives a copy of the final report.

As straightforward as this step seems, adjustments may be necessary. The instruments may need adjustment if a question proves confusing, or a question may need to be added. Time schedules may need adjustment depending on the overall schedule of the project or on individuals' schedules, which are bound to change. This step allows for the specification of these changes and of the party responsible for them; the information is automatically added to the evaluation algorithm. This step also requires the information gathered from the selected data sources to be organized around the questions that the evaluation sought to answer. Several common analysis techniques are listed in this step.
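As an illustration of organising the gathered information around the evaluation questions, the brief sketch below groups hypothetical findings by question; the record format is an assumption, not part of the tool itself:

# Sketch only: grouping (question, source, finding) records by the evaluation
# question they answer. Data values are hypothetical.

from collections import defaultdict

def organise_findings(records):
    """Group (question, source, finding) tuples by evaluation question."""
    by_question = defaultdict(list)
    for question, source, finding in records:
        by_question[question].append((source, finding))
    return dict(by_question)

if __name__ == "__main__":
    records = [
        ("Were the course objectives met?", "post-course test", "78% pass rate"),
        ("Were the course objectives met?", "instructor interview", "objectives judged realistic"),
        ("Was the material pitched correctly?", "trainee questionnaire", "item 4 reported as confusing"),
    ]
    for question, evidence in organise_findings(records).items():
        print(question, "->", evidence)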

The components of instruction needing revision should be listed in this step, along with the responsible persons or departments and the deliverables. This step in the evaluation tool also allows the revision and its due date to be placed directly on the evaluation algorithm, which is accessible by all who are coordinating on the project. If that option is selected, the tool skips back so that revisions can be made to the original evaluation plan (i.e. an abbreviated secondary evaluation).
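A corresponding sketch for this revision-listing step, again with hypothetical names and data, might record each component needing revision together with the responsible party, deliverable and due date placed on the shared schedule:

# Assumed structure for the revision step: each item names the component of
# instruction needing revision, who is responsible, the deliverable and the
# due date placed on the shared evaluation schedule.

from dataclasses import dataclass
from datetime import date

@dataclass
class RevisionItem:
    component: str        # the piece of instruction needing revision
    responsible: str      # person or department
    deliverable: str
    due: date

def overdue(items, today):
    """Return revision items whose due date has passed."""
    return [item for item in items if item.due < today]

if __name__ == "__main__":
    items = [
        RevisionItem("Module 3 practical exercise", "Training wing",
                     "Revised exercise sheet", date(2010, 8, 1)),   # hypothetical
    ]
    print(overdue(items, date(2010, 9, 1)))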

Recommendations for Future Research

Finally, through the continued use of expert review, case studies and case scenarios, future research should focus on further validating the components of the model and on using the model in a variety of cases (e.g. internal versus external evaluators, various types of organizations, and evaluators with varied experience levels). A research agenda should also include meta-analysis of formative evaluation processes, findings and impacts.

One: Validation of Components of the Model

Two: Refinement of the ‘Process Priority’ within the Formative Evaluation Model

Three: Use of the Model in a Variety of Case Types

Four: Meta-Analysis of Formative Evaluation Processes, Findings and Impacts

The need for a prescriptive, systematic formative evaluation modus operandi prompted the idea behind this developmental dissertation. The actual focus of the dissertation, however, was on the justification behind such a need: the proof that such an accessible tool did not already exist in the complete and procedural form that the researcher envisioned. (Granted, numerous books speak volumes about the process of formative evaluation. The goal, however, was to find or create a tool that would help those active in the field spend less time reading books and more time conducting formative evaluation with the envisioned tool.) Unable to locate such a tool, the first phase was to find or develop a model comprehensive of the formative evaluation process. The second phase was to develop a prototype of a tool around that model. The third phase was to have the feasibility, usability and necessity of such a tool evaluated by those who know best: formative evaluation and instructional design experts.

The expert review of the prototype was extremely successful, not because the tool was without fault but because the review process brought out what was in need of improvement, what would work well, and what could be done to enhance the tool further. The findings of this expert review will allow for the future development of the formative evaluation tool. The result of this process is a prototype of a formative evaluation tool that has its place in the world of formative evaluation. Although developed for formative evaluation, the experts agreed that it is applicable to summative evaluation as well. And although the tool might not be useful to all target users, other potential users were identified in the expert review process.

The ultimate success or failure of such a tool will depend upon its own thorough formative evaluation. This process was outlined previously and can be undertaken with confidence that the 14 steps on which the tool is based were selected from a thorough review of the literature. The expert review of the tool provides confidence in the content and structure of the prototype. The process by which the prototype was researched, developed and evaluated is the issue at hand; this researcher feels the process was not only successful but also rewarding.


