This page contains three significant artifacts from my teaching contexts, along with teaching observations that both validate and offer guidance on my teaching. While many potential artifacts from my teaching experiences were available to choose from, I have chosen those that best reflect and demonstrate my teaching philosophy.
Context: In Software Engineering as a Human Activity, I drew heavily on what I learned from "Why Relying Solely on the Lecture Method Can Diminish Learning" to implement a lecture that made extensive use of "course flipping". The lecture followed a prior lecture on Sackman's study, which claimed "order of magnitude" differences in programmer variability.
The goal of this lecture was to present a counterpoint to this existing study through evidence from replication studies as well as potential validity issues with the original paper. The materials from this lecture are as follows:
This lesson plan was experimental in nature; it was the first time I had attempted active learning activities beyond "think-pair-share". The concepts for these activities came from two workshops I attended recently. The first introduced the concept of course flipping, where, roughly speaking, students perform the lecture at home and the assignments during class. The second was an Introduction to Teaching workshop, from which I borrowed the Bingo activity as well as the data analysis group exercise.
One of the goals of this lecture was to ensure that students had actually read the paper before class. Too often students do not do the prerequisite work required before a lecture, and they get away with this tactic because the lecture is largely passive. Because "course flipping" is not commonly used in the department, I felt it necessary to send an e-mail to students before lecture indicating, among other things, that active participation would be required.
This lesson plan embodies my personal teaching philosophy in several ways. First, it minimizes the use of PowerPoint, employing it solely as a data presentation tool. Second, it offers a brainstorming activity that asks students to hypothesize about the lecture topic, even if they do not yet know the correct answer. I find this sort of foreshadowing useful for providing context and intuition for the material presented in class. Finally, it echoes my belief that a lecture should be more than a re-reading of what is already present in existing textbooks and readily accessible on the Internet. Instead, the lecture material in this lesson plan is dedicated to providing insight, intuition, and expertise, aspects of learning that cannot easily be acquired without interaction with a subject matter expert.
As this was my first in-depth application of course flipping, I encountered some difficulties. While I am comfortable estimating lecture times for traditional seminars, I underestimated the time required to set up and complete the activities. I also did not anticipate the surprising amount of interaction from the students, given that previous passive lectures elicited few questions or comments. By the metric of voluntary class questions alone, course flipping was a success in the classroom. The final difficulty was that creating a course-flipped version of a lecture takes a substantial amount of time compared with a traditional lecture, perhaps two to three times as long. This initial investment seems well worth the effort, however.
Overall, I was incredibly satisfied with the use of course flipping in the classroom, and I intend to incorporate this methodology to varying degrees in other lectures.
Context: The following grading rubric derives from a single question on a final exam I designed for Object-Oriented Languages and Systems in Fall 2010. Since questions are often re-used by the department from semester to semester, the full exam is not available. However, this single question illustrates my general philosophy with respect to student assessment. The goal of this question is to allow students to demonstrate their understanding of a portion of the Bridge Pattern by writing only a minimal amount of code during an open book, open Internet, open laptop, timed exam.
Prior grading rubrics, which extensively and meticulously enumerated all errors and point values as a way of minimizing bias, were extremely time-consuming and frustrating to implement in practice. By their very nature, they were also subjective: the point values themselves are somewhat arbitrary, despite the outward appearance of objectivity. This grading system also resulted in a substantial number of students "fishing for points", which was itself exhausting. The knowledge that this would happen also led me to procrastinate in grading.
There had to be a better way. My goals for this grading rubric were to make the grading process more efficient for myself, provide accurate and informative assessments for the student to review, and minimize bickering over arbitrary points in favor of big-picture ideas. The grading rubric utilized in this assessment mimics my teaching philosophy of student assessment, heavily borrowed from Dr. Winston's ideas in Skills, Big Ideas, and Getting Grades Out of the Way. In this approach, the student's response is classified into groupings such as thorough understanding, adequate understanding, needs work, and so on. In this way, a student's demonstrated understanding of a concept is weighted far more heavily than nitpicky, trivial details such as simple calculation and syntax errors. More importantly, this particular question echoes my philosophy that exam questions are meant to demonstrate understanding of existing material, not to "trick" students with questions they have never seen before. In this artifact, students are clearly informed that the Bridge Pattern is a potential exam question, without being given the exact question.
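To illustrate the kind of minimal code such a question calls for, the core of the Bridge Pattern can be sketched as below. This is my own hedged illustration, not the actual exam question; the names (Shape, Renderer, Circle, and so on) are hypothetical. The pattern decouples an abstraction (Shape) from its implementation (Renderer) so the two class hierarchies can vary independently.

```java
// Implementation hierarchy: how a shape gets rendered.
interface Renderer {
    String render(String shapeName);
}

class TextRenderer implements Renderer {
    public String render(String shapeName) {
        return "drawing " + shapeName + " as text";
    }
}

class VectorRenderer implements Renderer {
    public String render(String shapeName) {
        return "drawing " + shapeName + " as vectors";
    }
}

// Abstraction hierarchy: what shapes exist. Each shape holds a
// reference to a Renderer -- this reference is the "bridge".
abstract class Shape {
    protected final Renderer renderer;

    protected Shape(Renderer renderer) {
        this.renderer = renderer;
    }

    public abstract String draw();
}

class Circle extends Shape {
    public Circle(Renderer renderer) {
        super(renderer);
    }

    public String draw() {
        return renderer.render("circle");
    }
}

public class BridgeDemo {
    public static void main(String[] args) {
        // The same Circle abstraction works with either implementation.
        System.out.println(new Circle(new TextRenderer()).draw());   // drawing circle as text
        System.out.println(new Circle(new VectorRenderer()).draw()); // drawing circle as vectors
    }
}
```

A response of roughly this size is enough to show whether a student grasps the separation of the two hierarchies, which is exactly the understanding the rubric rewards.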
I have found this grading rubric to be extremely successful for both myself and the students. Students are happy because their responses are assessed holistically; they can focus on demonstrating understanding without worrying about minor and trivial errors, since such responses still fall into the bucket of thorough understanding. I am happy because it affords me time to provide better, personalized feedback without having to meticulously provide individual point breakdowns for each and every error. In turn, this approach has eliminated the procrastination issues I experienced with more traditional grading rubrics, which students appreciate because they receive their feedback more quickly. Finally, students also appreciate that exams are fairly predictable in their content. In lectures, I will usually directly inform students of the scope of the exam and the types of questions they are expected to answer.
Teaching Context: This programming project was offered as an assessment tool in Architecture of Parallel Computers in Spring 2011. The project was made available after students had been exposed to the general topic of "threading" through past homework assignments and lectures. The project was provided in two parts. In the first part, students created a Simple Thread Pool Manager. In the second part, they built on their first project to implement a more complex work-stealing algorithm.
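The essence of the first part can be sketched as follows. This is my own simplified illustration rather than the project's reference solution; the class name SimpleThreadPool and its interface are hypothetical. Worker threads repeatedly pull Runnable tasks from a shared blocking queue and execute them.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal thread pool sketch: a fixed set of worker threads
// shares one blocking queue of submitted tasks.
class SimpleThreadPool {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();
    private final Thread[] workers;
    private volatile boolean running = true;

    SimpleThreadPool(int nThreads) {
        workers = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            workers[i] = new Thread(() -> {
                // Keep polling until shutdown is requested and the queue drains.
                while (running || !tasks.isEmpty()) {
                    try {
                        Runnable task = tasks.poll(50, TimeUnit.MILLISECONDS);
                        if (task != null) task.run();
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            workers[i].start();
        }
    }

    void submit(Runnable task) {
        tasks.add(task);
    }

    // Lets queued tasks finish, then waits for every worker to exit.
    void shutdown() throws InterruptedException {
        running = false;
        for (Thread w : workers) {
            w.join();
        }
    }
}

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        SimpleThreadPool pool = new SimpleThreadPool(4);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown();
        System.out.println(completed.get()); // 100
    }
}
```

Even this small sketch surfaces the issues students wrestled with: shutdown ordering, the race between the running flag and a non-empty queue, and interruption handling.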
The goal of this project was to provide students with a practical application of threading: first through a project that draws directly on material presented in lecture, and then by requiring students to complete some outside research (in this case, reading an article) to implement an extension to this algorithm on their own.
This project exhibits my personal belief that textbook knowledge is only a small component of the overall learning process. Mastery of the subject can only be accomplished by applying and extending the theoretical concepts discussed in lecture. The concept of threading is simple to explain in theory, but actual implementations are nuanced and error-prone, and students who study the subject only theoretically do not appreciate these nuances. The goal of this project, therefore, was to provide students with an experimental environment in which they could apply theoretical concepts to a practical work-stealing algorithm.
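The core idea behind the work-stealing extension can be sketched as follows. This is a hedged illustration under my own simplifying assumptions, not the algorithm from the assigned article: each worker owns a deque, takes tasks from its own head, and, when idle, steals from the tail of another worker's deque. (A production implementation would park idle threads rather than busy-wait as this sketch does.)

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicInteger;

public class StealDemo {
    // Runs nTasks increment tasks over nWorkers work-stealing workers
    // and returns how many tasks were actually executed.
    static int runTasks(int nWorkers, int nTasks) throws InterruptedException {
        @SuppressWarnings("unchecked")
        ConcurrentLinkedDeque<Runnable>[] queues = new ConcurrentLinkedDeque[nWorkers];
        for (int i = 0; i < nWorkers; i++) {
            queues[i] = new ConcurrentLinkedDeque<>();
        }

        AtomicInteger executed = new AtomicInteger();
        AtomicInteger done = new AtomicInteger();

        // Deliberately unbalanced: every task starts on worker 0's deque,
        // so the other workers must steal to share the load.
        for (int t = 0; t < nTasks; t++) {
            queues[0].addFirst(executed::incrementAndGet);
        }

        Thread[] workers = new Thread[nWorkers];
        for (int i = 0; i < nWorkers; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                while (done.get() < nTasks) {
                    Runnable task = queues[id].pollFirst();          // own work first
                    for (int v = 1; task == null && v < nWorkers; v++) {
                        // Idle: steal from the tail of another worker's deque.
                        task = queues[(id + v) % nWorkers].pollLast();
                    }
                    if (task != null) {
                        task.run();
                        done.incrementAndGet();
                    }
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        return executed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(4, 1000)); // 1000: every task runs exactly once
    }
}
```

Taking from one's own head while stealing from a victim's tail keeps owner and thief contending on opposite ends of the deque, which is precisely the kind of nuance that is easy to state in lecture and hard to get right in code.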
I determined that these goals were satisfied by comparing performance on a related exam question, which tested only the theoretical component of the topic, with the questions elicited from students during this course project. Although many students performed well on that particular exam question, a disproportionate number of students had difficulty completing this project. This confirms my hypothesis that exams are only one component of verifying student understanding, and that it is possible to perform well academically without being able to implement the idea in practice.
As a result of this and other projects, I continue to use practical projects as an assessment tool rather than relying on exams and homework alone. I have found that many students who perform poorly on exams do well in project environments, and the converse is true as well. Because of this, I now place less emphasis on exams and focus more on projects as a metric for assessing student performance.
I have also included two formal teaching evaluations from my teaching experiences. The reflections for these teaching evaluations are embedded within the evaluation itself.