Sunday, October 23, 2005

Circular Reform II

The meat.
The Poobah feeds us what he thinks good curricular reform looks like. We need to get away from spoon-feeding lecture material and move toward problem-based learning. We need to encourage life-long learning. We need to promote horizontal and vertical integration, i.e. I need to talk to more 4th years. We need to get away from the 2+2 formula of medical education, where there are two years of preclinical class-work and two years of clinical apprenticeship in the hospital. We should replace it with a 2x2 formula that promotes more integration between the preclinical and clinical worlds. I wonder if he's aware that, on a strictly mathematical level, 2x2 = 2+2? Obviously, this is deeper than math. Of course, I have no idea how 2x2 might work, and Poobah gives no pointers.

Poobah also encourages our initiative to promote good teaching and our huge push for professionalism. Little does he know what our professionalism teaching actually includes (see Hubris). Finally, he says that we shouldn't be seduced by false outcomes, things like standardized exam (board) scores, student satisfaction, and residency results (the Match). After all, we're amazing students, so we should do well on the boards regardless.

He talks about the need for humanism and the goodness of having a standardized patient interview as part of the 2nd step of the boards. He talks about the possibility of changing pre-medical requirements to start out with better-trained students, or other larger reforms. In response to questions, he complains about the loss of low-income students, who don't understand that they'll be able to pay off the enormous debt burden that medical school entails, and speculates unproductively about how medical schools can decrease their costs.

Ahem.

1. There's no data. I complained about this on the way out, and another student said, "There is data, he just didn't show it to us. If he had, it would have taken six hours." Maybe, but without the data, the hour-long speech is a waste of time. As Poobah said, one of the major obstacles to curriculum reform is student opposition; withholding the evidence is a good way to guarantee it.

2. I have very little to say to a fourth year, thank you very much. I mean, we can talk, but when we do, I don't learn things that I need to know. Unlike undergrad, there isn't stuff you need to start right now or you'll be screwed, because all of that is built into the curriculum.

3. 2+2 is not on the table. Sure, the inspiration for this system (the Flexner Report) is almost 100 years old, but that doesn't automatically make it wrong. More significantly, the dean's office told the MSTPs that this is unlikely to change. Since we do 2+4+2, a major disruption of the 2+2 system would affect us deeply. This isn't to say that they would never do anything to inconvenience us, but the only detail they gave about the reform was that 2+2 was unlikely to change, which makes me feel more confident that it will be protected.

4. The reason DMS couldn't tell us what the reforms would be is that they pitched them as student-originated and student-driven, so I resent this meeting's possible implication that reform will be centrally designed and executed.

5. Problem-based learning is not a panacea. For the non-medical types, PBL is a system where, rather than sitting in lecture, students are put in small groups and assigned problems to work through. In medicine, these normally take the form of cases. The varied parts of the case serve as teachable moments, e.g. we have a patient with poor circulation and use it as an excuse to talk about hemoglobin. At the same time, we can talk about the physiology of circulation, how to interview this patient, what parts of the history are significant, how to deal with this patient's access to care, etc., etc.

This sounds great, right? Kind of. It isn't the best environment for everyone. Some people love lecture (I go back and forth). If you don't learn from lecture, you just don't show up. If you don't learn from small group, they still take attendance, so you get to suffer through however many hours of groupthink, then go home and try to learn it your way. Regardless, some things are just better lectured. The tie-ins can be somewhat contrived. Each small group has to be led by someone, who will be either a professor (expensive) or a TA (useless). There's also no way to standardize what a small group will cover. Do you want to be treated by someone whose group shortchanged the hemoglobin, or the history-taking? What happens when they have a patient that isn't a case? There are thousands of pathogens and drugs that 2nd years have to know cold - how is that case-able? Switching to case-based learning would also require Dupont to build us a huge new expensive building to accommodate the numerous small groups, making the transition that much more expensive.

Every generation has its own educational fads. Whole language. Integrated Math. International Baccalaureate. Self Esteem (ugggh). How do we know that PBL is a real advance and not a random gyration or cul-de-sac of edu-bureaucra-somethingorother?

6. My last question is not solely rhetorical. How do we assess whether a pedagogy works? It's not trivial. One way is to look at internal grades. Early in Harvard's New Pathway program, for instance, rather than teaching students the names, origins, insertions, and actions of the muscles, they just taught them that muscles have names, origins, insertions, and actions. These students were quite noticeable in their 3rd and 4th years, as they were the ones that had no f'ing idea what they were doing. We don't really have internal grades, so this won't work too well.

Since medical school involves a lot of material, one simple and obvious way to test multiple pedagogies is to give a standardized test, like the US Medical Licensing Exam, and see whether a pedagogy improves scores. But Poobah said that board scores don't matter because we're such stellar students that we'd do well anyway. No. Look at Baylor. They teach to the boards to an outrageous degree, and their board scores are significantly higher than, say, ours, since we basically ignore said boards. Thus, pedagogy can have an impact on board scores. Second, it's ridiculous for a person who is (somewhat) involved in running the USMLE to say that it doesn't measure anything. Emphasize touchy-feely all you like, but there is a body of knowledge that doctors MUST possess. The degree to which a school imparts that information is relevant, even if it is not the whole story.


MBA programs are ranked by how much money their graduates make after 3 years. The analogous system for medical schools would see where people go for their residency. This seems more reasonable than board scores - the application process involves interviews, recommendation letters, and descriptions of our performance in clinical clerkships. If I've learned nothing, continued my unprofessionality, and turned into a personality-free robot, they'll notice. There are problems with this, obviously. It's less quantifiable than boards, unless you want to assign points based on how prestigious the specialty and location of the residency are, which would itself be arbitrary. Since residency is a matter of matching, in which students rank their preferences, you could see what percentage of students get their first choice, but what if I rank a place first because I know they're the only place that will take me? Finally, part of the reason I'll get into a specific residency is because I went to Dupont. The system is sticky - difficult to change. Still, looking at the change in our performance, relative to ourselves, provides some quick feedback.
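For concreteness, here is a minimal sketch of that percent-first-choice metric. The students, programs, and rank lists are all invented for illustration; real Match data obviously isn't published in this form.

# Toy illustration of the "% who got their first choice" metric discussed
# above. All names and numbers are made up.

def percent_first_choice(rank_lists, match_results):
    """Percent of matched students whose result equals their #1-ranked program."""
    matched = [s for s in match_results if match_results[s] is not None]
    if not matched:
        return 0.0
    first_choice = sum(1 for s in matched if match_results[s] == rank_lists[s][0])
    return 100.0 * first_choice / len(matched)

# Four hypothetical students. Note the caveat from above: student B ranked
# only the one program they knew would take them, so they count as a
# "first choice" success - which is exactly why this number flatters everyone.
rank_lists = {
    "A": ["MGH", "Hopkins", "Dupont"],
    "B": ["Dupont"],
    "C": ["UCSF", "Penn", "Michigan"],
    "D": ["Mayo", "Duke"],
}
match_results = {"A": "Hopkins", "B": "Dupont", "C": "UCSF", "D": None}

print(percent_first_choice(rank_lists, match_results))  # ~66.7 (2 of the 3 who matched)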

One could argue that the true measure of success lies in the future, some 10 years hence, when they see what kind of doctors we are. This is horribly non-quantifiable. How are they going to assess our competence then if they admit it's impossible to measure competence now? Is it going to be outcomes, in that our school is better if we place more professors? That ignores the fact that most people don't want to be professors. Should we do take-home pay? How uncivilized. It does, however, have the advantage of being a realistic assessment of your value to society.

Once you eliminate ways to compare programs, their relative value depends entirely upon reputation. For instance, we have a reputation as a "Top 10 medical school," when, in fact, we're not. Not even close. But the strength of the Dupont brand is such that we seem that way. Or something. The whole 'not top 10' is based on US News's rankings (http://www.usnews.com/usnews/edu/grad/rankings/med/brief/mdrrank_brief.php) which, in the absence of better data, will be the way med schools are ranked (which should be incentive for developing alternate rankings). Let's look at the rankings and methodology (http://www.usnews.com/usnews/edu/grad/rankings/about/06med_meth_brief.php): Reputation. Reputation. (This counts for 40% of the score, btw.) NIH grant dollars, total and per researcher. How this affects the quality of my MD-only colleagues' education, I couldn't say. Note that the total is more heavily weighted than the per-researcher figure, thus encouraging schools to add mediocre scientists.

The next part is hilarious. Acceptance rate - what this has to do with quality, again, is unclear. Plus it encourages schools to drum up applications. Also, is this based on primary applications, or secondaries? A primary application costs $30 and all you have to do is check another box on the common app. To do a secondary, you actually have to want to go to the school. MCAT - yes, let's pass over board scores in favor of a test that wonders whether you remember your cyclohexane chair conformations (http://www.cem.msu.edu/~reusch/VirtualText/sterism2.htm) from OChem. Undergraduate GPA - encourages schools to accept people that avoided PChem and/or classes that they thought would prove difficult - these are precisely the sort you want for your physician, no?

Faculty/student ratio is interesting. Note that's faculty members per student. We're not in undergrad anymore, Dorothy. And while I would appreciate getting picked apart by 9.5 professors if I went to Harvard, I'm not sure I would notice if it were only the 4.5 that would be after me at Hopkins. Again, this encourages schools to hire more, crappier professors, or to relabel research assistants and other non-helpful people as 'teaching faculty.' Next to these metrics, board scores and %1st pick for residency seem downright brilliant.
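To make the incentive problem concrete, here is a toy version of a weighted composite score in the spirit of the US News formula. Only the 40% reputation figure comes from the methodology page cited above; every other weight, both schools, and all the numbers are invented, so treat this purely as an illustration of why weighting total NIH dollars (rather than dollars per researcher) rewards simply hiring more people.

# Toy composite score in the spirit of a US News-style ranking. The 40%
# reputation weight is stated in the methodology above; everything else
# here is an assumption for illustration only.

WEIGHTS = {
    "reputation": 0.40,           # peer + residency-director surveys (stated above)
    "nih_total": 0.25,            # assumed illustrative weight
    "nih_per_faculty": 0.10,      # assumed illustrative weight
    "selectivity": 0.15,          # MCAT / GPA / acceptance rate, assumed
    "faculty_per_student": 0.10,  # assumed
}

def composite(metrics, maxima):
    """Normalize each metric against the best school, then take a weighted sum."""
    return sum(WEIGHTS[k] * (metrics[k] / maxima[k]) for k in WEIGHTS)

# Two invented schools. "Padded U" hires 200 extra mediocre researchers:
# total NIH dollars go up, dollars per faculty member go down, and the
# faculty/student ratio inflates too.
lean_u   = {"reputation": 4.0, "nih_total": 300e6, "nih_per_faculty": 600e3,
            "selectivity": 0.90, "faculty_per_student": 4.5}
padded_u = {"reputation": 4.0, "nih_total": 360e6, "nih_per_faculty": 450e3,
            "selectivity": 0.90, "faculty_per_student": 5.5}

maxima = {k: max(lean_u[k], padded_u[k]) for k in WEIGHTS}
print(round(composite(lean_u, maxima), 3), round(composite(padded_u, maxima), 3))
# Padding the faculty wins (~0.94 vs ~0.98), because the total-dollars term
# outweighs the per-researcher term.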


Satisfaction
The third criterion rejected by Poobah is student satisfaction. I've heard the argument before: I have no basis for comparison. I don't know whether my level of knowledge is actually good or competitive, only how it measures up in the eyes of the very people I'm rating. Just because I had a good time in class doesn't mean I got anything out of it. But, as Poobah says, we're good students. We went to top schools. We take our knowledge and try to think about problems. We have a basis for comparison - it's called undergraduate. I've been taught physiology before, and I know when they're doing a bad job. We're here to learn, and we can tell the difference between when a professor is imparting useful information, imparting details about their research, and goofing off, and we rate them appropriately.
