The ultimate purpose of evaluation is to improve the instruction that is being designed.

Throughout the development process, the STARS team has followed the approach to instructional design outlined by Dick & Carey. Iterative evaluation and revision throughout the design and development process is a hallmark of the Dick & Carey model; however, the model does conclude with a formal, specified "Evaluation" phase. Specifically, this is what the model states about that phase:
Design and Conduct Summative Evaluation
Evaluation is the process of determining the effectiveness of the instruction. Summative evaluation is the review of the finished instructional product. Other types of evaluation take place in earlier stages of the Dick and Carey model.
To elaborate on the terminology, the following are generally accepted definitions of evaluation and its sub-types.


Evaluation, the process of determining a system's effectiveness, is typically split into three categories:
  • formative evaluation: the evaluation of the instruction performed while the instruction is being formed
  • summative evaluation: the evaluation of the instruction performed after the instruction has been implemented
  • learner evaluation: determining the performance change of the learners due to the instruction implemented

Formative Evaluation:

Formative evaluation is used throughout the instructional design process to check and refine the design. The evaluation works with the design and development stages to form a recursive system of revision. Formative Evaluation consists of several stages:
  • design review - checks whether the designed instruction meets the requirements identified in the analysis
  • expert review - a Subject-Matter Expert (SME) reviews the implementation to determine whether the content is accurate and consistent
  • subject testing - tests the instruction with various groups (one-on-one, small group, and field trials)
  • ongoing evaluation - continually examines the design with respect to possible changes in content, audience, or size

Summative Evaluation:

Summative Evaluation is carried out after an instructional design has been implemented. It tests the effectiveness of the design and seeks to determine whether the desired changes have occurred. In short, have the goals and objectives been met?

For this project, we have attempted to capture a measure of both types of evaluation through a feedback survey and a sample assignment that requires learners to apply the skills presumably gained by completing the tutorial. Two participants provided links to screencasts that they had successfully created. However, both respondents also indicated that they had prior experience using Camtasia, so it is not possible to conclude that the tutorial played a role in the successful completion of their screencasts.

Several survey questions address formative evaluation by asking for feedback on the design of the tutorial itself, specifically by prompting users to use the space provided "to tell us what you liked best about the tutorial or what aspects of it you found to be the most helpful" and to "tell us what you liked least or found the least helpful about the tutorial." Other questions approximate summative evaluation by asking users to self-rate, via a Likert-scale response, their ability to successfully complete a screencast as a result of having completed the tutorial; this information is requested for each of the aspects of screencasting addressed in the tutorial. For example, one question asks users to provide a response ranging from "I am very confident that I could make the necessary plans for an effective screencast" to "I am confident that I could NOT make the necessary plans for an effective screencast."
For the purposes of completing this assignment, collection of feedback data was cut off at noon on April 18, 2011. As of that point, six individuals from the learner group, a pre-identified cohort of twelve USC faculty and staff members within the College of Education, had completed the feedback survey, and two had submitted links to their completed screencasts, which is our primary intended means of summative evaluation. The details of their comments are provided below and are also available here as a PDF.

Evaluation Survey Results:

The survey first asked learners about their backgrounds in the following areas:
  • Familiarity with screencasting before the tutorial
  • Familiarity with Camtasia Studio before the tutorial
  • Level of computer literacy and technical adeptness

The graphs below represent how the learners, after completing the tutorial, rated their confidence in completing the various stages of the screencasting process using Camtasia Studio. Due to the voluntary nature of project participation, these four questions collectively comprise one of the main approaches we used to approximate a measure of summative evaluation. The rating scale that respondents were given ranged from 1 to 5, with these values associated with the lowest and highest scores:

  • 1=Very confident they could complete the stage of development
  • 5=NOT confident they could complete the stage of development


The following is a list of the text responses provided to the open-ended questions within the survey:

Additional questions about Camtasia Studio not addressed in the tutorial:

  • How do you set the timing to fade in and out?
  • How do you add a title page/transition to the capture?

What worked well:
  • Separate videos for instruction and examples
  • Website design pleasing to the eye and easy to follow
  • Use of closed captions in videos
  • Narrator's voice clear and professional
  • Videos easy to follow

Suggestions for improvement:
  • Streamline content into one video
  • Mac vs. PC interface differences not made clear at the beginning of the tutorial
  • Should use the same version of Camtasia that the target audience would be most likely to use
  • Videos sometimes stagnant at points where there is no narration

Plans for revision:

We plan to monitor the feedback collection process for additional responses in the near future, as a larger number of responses may more clearly establish items of consistent concern to users. The information gathered so far already points to issues we clearly need to address, chief among them clarifying the interface differences between the Mac and PC versions of Camtasia. Additionally, we will examine the places in the videos where there is no narration to determine whether content that should be provided is missing. Other issues, such as whether to combine the content into one video or maintain the separate resources as they presently are, are likely best kept under consideration until additional feedback is obtained. We will also monitor the pattern of responses associated with the Editing video to determine whether users continue to frequently rate their confidence in successfully completing that phase with the lowest possible score. Another issue that could merit attention is providing greater context for the videos through supplemental text on the main tutorial pages.

Overall, at this stage of the Evaluation Phase, which followed a necessarily brief Implementation Phase and involved learners with a wide range of technical and computer skills, we are pleased with the results. We plan to continue gathering formative and summative evaluation data as more USC College of Education faculty and staff have time to complete the tutorial.
