soxteaching: chem, baseball, stories
In this post I will take an objective look at some performance data, a topic about which I preach constantly to my students. Before doing so, I must comment on my data-collection process, and specifically on the spreadsheet that has become “data central” for all of my AP endeavors. [LINK] A few years ago I realized that the only way “the deal” could work was if the exam and the grading standards matched the national AP equivalents. I also realized that “the deal” had to be reasonable and reachable. For those who might be interested in how an AP Chemistry exam score is computed, please take a look at the linked spreadsheet.
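For readers who want the gist without opening the spreadsheet, the general shape of the calculation is a weighted composite of the multiple-choice and free-response sections, mapped to a 1–5 score by cut points. Here is a minimal sketch of that idea; the weights and cut ranges below are hypothetical placeholders, not the College Board's actual values (those vary by year and are in the linked spreadsheet).

```python
def composite_score(mc_correct, fr_points, mc_weight=1.0, fr_weight=1.25):
    """Combine multiple-choice and free-response sections into one composite.

    The weights here are illustrative only; real AP weightings differ.
    """
    return mc_weight * mc_correct + fr_weight * fr_points

def ap_score(composite, cuts=(100, 80, 60, 40)):
    """Map a composite to a 1-5 AP score using descending (hypothetical) cut points."""
    for score, cut in zip((5, 4, 3, 2), cuts):
        if composite >= cut:
            return score
    return 1

# Example: 45 multiple-choice correct and 38 free-response points
c = composite_score(45, 38)   # 45*1.0 + 38*1.25 = 92.5
print(c, ap_score(c))         # 92.5 falls in the "4" band under these cuts
```

The spreadsheet does the same thing with the real weights and the published cut ranges for each year's exam.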
The first chart shows that students’ scores on the course’s final exam have increased over the years. The two indicators shown tend to track together, and the graph confirms this across the years. I have taught 191 students in this course at my school, so there is enough of a sample to say that the trend has some validity.
I am not exactly sure of the reasons behind the trend, but I do have an idea of the contributing factors. First, the expectations of students entering the course have changed over the years. I took over the course from a teacher for whom performing on the national exam was not an objective, and the course grades were rather high. In contrast, entering students now have little reason to expect anything but a challenging course focused on preparing for the exam. Second, in the third year of the course we began to invite sophomores who showed exceptional promise. This essentially cherry-picked the best science students in the school, some of whom might never have taken the course otherwise, since seniors must choose among competing courses. Their performance not only lifted the average; they also created an environment in which seniors kept studying toward the end of the course, serving as energizers in a room full of slugs. The final factor in play is that I have become better at coaching students for the exam, with a deeper intuitive sense of which topics to focus on and which are not relevant. In summary, a combination of these factors has produced the success shown above.
The second chart shows the students’ performance on the course final exam and on the national exam. This is where I get to see whether or not the course exam predicts success on the actual AP exam. As you can see, the results are a bit mixed, although they show a correlation. As I examine this I feel the urge to defend what happened each year, but ultimately it is my job to control these factors better. A prime example is what happened in 2011, when six seniors were caught cheating on the final exam, with one being expelled right before graduation. As it turned out, course performance that year was overly elevated in enough students that by the end of the year they were unable (uninterested?) to perform on the actual exam. Another factor in play is that, as a human being who cares for his students, I fully know what a score means, and I am sometimes caught trying to find ways to elevate student scores as a result.
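The “predicts success” question can be put in numbers with a Pearson correlation coefficient between the two sets of scores. A quick sketch, using made-up score pairs rather than my actual class data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical score pairs for illustration, not the real class data
course_final = [62, 75, 81, 58, 90, 70]   # course-final percentages
national     = [3, 4, 5, 2, 5, 3]         # AP scores for the same students

print(round(pearson(course_final, national), 2))
```

A value near 1.0 would mean the course final is a strong predictor of the national result; the mixed years in the chart are exactly the ones that pull this number down.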
Overall, I am struck by how long it takes to master the process of coaching students in this subject for this exam. I certainly don’t want to teach this course for the remainder of my career, as it is very demanding and lacks some of the fun of a less-intense course, but I want to continue tweaking and optimizing how this works. As the data shows, something seems to be going well and you gotta like that.