Tuesday, May 30, 2006

Posner on Supreme Court Clerk Books

Judge Posner reviews two books about Supreme Court clerks here. A hilarious bit:
According to Courtiers, Stevens's clerks rewrite his drafts extensively, thus producing an inversion of the normal relation of clerk-author to justice-editor. In another inversion, Justice Harry Blackmun, a genuine eccentric, left the opinion-writing to his clerks after his first years on the Court and concentrated on cite-checking their drafts. He was by all accounts an awesome cite-checker.

Monday, May 29, 2006

Brinkley's Katrina book

Wilfred McClay has a scathing review of Douglas Brinkley's latest book-of-the-month:
The Great Deluge: Hurricane Katrina, New Orleans, and the Mississippi Gulf Coast is the historian Douglas Brinkley's bid for literary and historical greatness. * * *

One can be excused for wondering from the outset whether enough time has passed for anything of this epic scale to be written about these tragic and infuriating events -- or whether Mr. Brinkley is the man for the job. Let me confess that I haven't read all of the writings of Douglas Brinkley. I doubt that anyone -- perhaps not even Mr. Brinkley himself -- has ever done that. He is a veritable ... deluge of literary productivity, with books to his credit on a dizzying array of subjects, ranging from Beat poetry to Jimmy Carter, and from Henry Ford to, most recently, the failed Democratic presidential candidate John Kerry. Indeed, the range of his literary productions is so wide as to seem indiscriminate. But his best-known writings seem to have three things in common.

First and foremost is their relentless mediocrity. I cannot think of a historian or public intellectual who has managed to make himself so prominent in American public life without having put forward a single memorable idea, a single original analysis, or a single lapidary phrase -- let alone without publishing a book that has had any discernable impact. Mr. Brinkley is, to use Daniel Boorstin's famous words, a historian famous for being well-known.

Friday, May 26, 2006

The Shangri-La Diet

In the mail: The Shangri-La Diet, by Seth Roberts. Seth Roberts is a psychology professor at Berkeley who has gotten a lot of publicity for his attempts at self-experimentation. His most famous experiment involved figuring out how to limit his appetite by changing his diet -- which led to what he calls the Shangri-La Diet.

The concept is pretty simple: 1) Your body has a "set point" at any given time -- i.e., a given weight that the body "wants" to reach. If the set point is high, then you are hungrier. And vice versa. 2) The set point (and hence appetite, and hence weight gain or loss) rises whenever your body thinks that there is lots of flavorful food around. And it goes down when your body thinks that there is less flavorful food (i.e., a lack of flavorful food tells your body that you are having to resort to undesirable food, which signals starvation conditions, which in turn lowers your appetite).

And finally: 3) The way to trick your body into thinking that there is less flavorful food around is to consume a few hundred calories a day from two sources that have little flavor: oil, or sugar water. The calories here are essential: you can't substitute saccharin-sweetened water, because without real calories your body won't "think" that flavorless food is its only source of calories, and so it won't lower your set point.
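Purely to make the claimed mechanism concrete, here is a toy simulation of the feedback loop described above -- my own illustration, not anything from the book, with every number invented:

    # Toy model of the set-point claim -- all numbers invented for illustration.
    def simulate(days, flavorless_calories_per_day):
        set_point = weight = 180.0  # hypothetical starting values, in pounds
        for _ in range(days):
            # Claims 2-3: flavorless calories signal "scarce food," which
            # nudges the set point down a little each day (capped for sanity).
            set_point -= 0.02 * min(flavorless_calories_per_day / 100.0, 3.0)
            # Claim 1: appetite pulls weight toward the set point.
            weight += 0.1 * (set_point - weight)
        return round(weight, 1)

    print(simulate(90, 0))    # control: weight stays at 180.0
    print(simulate(90, 300))  # 300 flavorless calories/day: weight drifts down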

Needless to say, the idea that you can lower your appetite and lose weight by drinking sugar water or oil is somewhat weird and controversial. The typical thing to do at this point would be to report on the results of trying it myself. But I don't need to lose any weight, and don't really care to try. So I'll point out three sources of support that Roberts does -- or doesn't -- bring to bear.

First, Roberts does cite a lot of studies and experiments to support the notion that 1) our bodies regulate weight via a set point, and 2) it is possible to shift this set point by eating either flavorful or bland foods. Fair enough.

Second, Roberts cites a bunch of anecdotes from people -- often commenters on various blogs -- who claim that they have lost weight or found their appetite suppressed on the diet. I'm distinctly less impressed with this source of evidence. Even assuming that all of the commenters are telling the truth, there are two problems: Some of the results may be psychosomatic, and some of the claimed results simply aren't all that impressive. Roberts does report some anecdotes of people who have lost more than 20 pounds. But he also includes a lot of anecdotes like this:

I've lost about 3 pounds in ten days. Not earth shattering, but slow and steady is good I think.

This sort of comment is meaningless. I myself lost 6 pounds the other day in under an hour and a half. It's true. But that was only because I went for a 9-mile run in the middle of the day. I gained all the weight back once I drank a lot of water to replace the sweat that I had lost. The point is, your weight can fluctuate widely depending on how much water you have drunk, what time of day it is, whether you have been sweating, and whether you just ate a large meal (or did the opposite). All of which is to say that a loss of three pounds in ten days doesn't necessarily mean anything.

Third, the best source of evidence would be something that Roberts doesn't have: A double-blind experiment in which a large sample of people is randomly divided into two groups, one assigned to drink genuine sugar water or oil, the other a calorie-free substitute; at the end, researchers would determine which group lost more weight. (Incidentally, this would probably be the first experiment in history where researchers had to use a placebo for sugar water.)
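The design is simple enough to sketch in code. What follows is just my illustration of the proposed study -- the outcome numbers are fabricated placeholders, not data:

    import random
    import statistics

    def run_trial(participants):
        random.shuffle(participants)        # random assignment
        half = len(participants) // 2
        # Neither group (nor the staff doing the weigh-ins) knows which
        # drink is which: that's the double-blind part.
        return participants[:half], participants[half:]

    def weight_change(group, true_effect):
        # Placeholder outcome model: individual noise plus any true effect.
        return [random.gauss(0, 4) + true_effect for _ in group]

    treated, controls = run_trial(list(range(200)))
    losses_treated = weight_change(treated, true_effect=-5)   # assumes the diet works
    losses_control = weight_change(controls, true_effect=0)   # placebo
    print(statistics.mean(losses_treated) - statistics.mean(losses_control))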

I guess we'll have to wait for that experiment. For now, Roberts' book is flawed but very intriguing.

Tuesday, May 23, 2006

Katrina Reporting

I wonder if there's ever been as great a disparity between media coverage and reality as there was over Hurricane Katrina.

(Via Instapundit).

Season Finale of 24

Don't read this unless you want to read spoilers about 24's season finale last night.





24 is always suspenseful and entertaining. But last night's finale had several glaring plot holes.

1. In one scene, Chloe asks for permission to bring in her ex-husband Morris, who sells shoes. In the next scene, he's already at CTU wearing a badge. How'd that happen so quickly?

2. In a previous episode, there were F-18 jets ready to shoot down a plane that Jack Bauer was on -- that is, they were very near Jack Bauer at that moment. In this episode, Jack Bauer is somehow able to *drive* to the site of a Russian sub, while the F-18s are supposedly more than 20 minutes away. Given that F-18s can fly at more than a thousand miles per hour, 20 minutes of flight time covers well over 300 miles. How was it that Jack Bauer could drive somewhere outside their range?

3. At the end, Chinese agents capture Jack Bauer. A) How did they even know he was still alive? B) How did they deploy a kidnapping team so quickly, and how did they know where he was? They couldn't have known that he would be under arrest at that particular place.

All of that said, last night's episode didn't quite equal the king of all plot holes from this season of 24: The episode where it is revealed that the First Lady's assistant -- Evelyn -- somehow managed to record a private phone conversation between the President and Henderson (how on earth?!? and why would she believe she had reason to tap the President's phone, even if she knew how?), and then deposited the tape recording in a safety deposit box at a local bank (how did she find time to do this on such a chaotic day?).

Monday, May 22, 2006

Keane

One of my favorite bands -- Keane -- has a new song out from their forthcoming album: "Is It Any Wonder." I downloaded it on iTunes. The intro sounds very much like the intro to U2's song "Zoo Station" from Achtung Baby -- the instrumentation is similar, and they're even in the same key. But the rest is different. Can't wait for the full album.

Other music that I've gotten recently, all of which I liked a lot:

Switchfoot, Nothing is Sound. Excellent rock album. I couldn't do better than this review.

Plumb's Chaotic Resolve. Some varied styles here -- a few hard rock tunes that remind me of Evanescence, but some lighter pop songs as well. Like everything that I listen to, this album has lots and lots of catchy hooks.

Audioslave's first album. Hard-driving rock. I think this group -- composed of Chris Cornell from Soundgarden and the band members from Rage Against the Machine -- will have more staying power than other combination groups like Velvet Revolver (Scott Weiland from Stone Temple Pilots and the band members from Guns N' Roses). I was surprised to hear so many religious references. E.g., "Show Me How to Live" ("Nail in my hand / From my creator / You gave me life / Now show me how to live"), or "Light My Way" ("In my hour of need / On a sea of grey / On my knees I pray to you / Help me find the dawn / Of the dying day / Won't you light my way").

Rebecca St. James, If I Had One Chance To Tell You Something. A heavier rock sound than her previous electronic-driven albums. Some songs sound a bit like Kelly Clarkson or maybe Avril Lavigne.

Death Cab for Cutie -- I got their song "Soul Meets Body" from iTunes. Catchy and folksy.

Revis -- I've downloaded several of their songs from iTunes. They're a hard rock band with an anthemic sound. I'm surprised that they haven't gotten a lot more radio play. They're as good as Nickelback, for example, and a lot better than some current rock bands with whiny lead singers who sound as if they're about 15.

Sunday, May 21, 2006

Microwaved Water

This page -- which purports to show a science experiment proving that microwaved water killed a plant -- has been making the email rounds. It struck me as a hoax, but unlike most email hoaxes, I couldn't find any webpages (such as Snopes or similar pages) debunking it. Too new, perhaps?

UPDATE: My brother points me to this link.

Footnotes vs. Endnotes

I don't like endnotes in books. I much prefer footnotes.

Why? Because whenever I read a non-fiction book -- no matter the subject -- I constantly want to check the notes to see what the author is citing. It's orders of magnitude easier to glance down to the bottom of the page than it is to flip back to the endnotes.

That said, I realize that some publishers seem to believe (why, I don't know) that the presence of footnotes is intimidating to some readers, or that it is technically easier to paginate the book if the endnotes are in a separate location.

So I've resigned myself to the fact that most non-fiction books have endnotes. That said, there are many different styles of endnotes, and some are so inconvenient that I wonder why the publisher included them at all.

At the worst end of the scale, one book that I read recently had endnotes that took the following form: They were not numbered. Nor were there numbers in the text. Nor were the endnotes labeled by the page in the text. Instead, the sole means of categorizing the endnotes was by chapter number (not title). Then, each endnote began with a short description of the textual information that it was meant to support.

For example, the author would have written a bit of text about City X's employment rate in the 1970s. From reading the text, you'd have no idea whether there was an endnote for this proposition or not. If you wanted to check, you had to do the following: 1. Flip backwards to figure out which chapter you were reading. 2. Flip to the endnotes to find the notes for Chapter 1 (or 2, etc.). 3. Read through the endnotes for Chapter 1 until you found something that began, "City X's employment rates: _____." 4. Stop reading the endnotes when you got to material that looked unfamiliar (indicating that there was no endnote for this particular proposition). 5. If you did find an endnote, the author might have cited a short form (e.g., "Jones"). 6. Now, if you want to know what "Jones" refers to, you have to read back through the endnotes in reverse order until you find the first full citation to Jones (there being no bibliography).

It was a dreadfully inconvenient setup. It combined every conceivable flaw -- lack of endnote numbers, lack of full citation forms or bibliography, lack of page references, lack of chapter titles, and even lack of any indication in the text of where an endnote exists.

At the other end of the spectrum, endnotes are most useful when they have the following features:

1. Endnote numbers (this should go without saying; I despise the endnotes that start out with substantive descriptions, which make it difficult for the reader to figure out what goes with what).

2. At the top of each page in the endnotes section, you find the heading, "Notes for pages 67-74," or something like that. Otherwise, you might turn back to the endnotes, only to have to flip back to the text to figure out which chapter you were reading. It's difficult to keep your place while you're doing all of this.

3. If any endnote contains a short form of a cite, then add a bibliography to the book. Otherwise, readers who want to know a full cite will have to waste unbelievable amounts of time trying to figure out where the full cite first appeared.

4. It is MUCH better if the endnotes contain the full citations, by the way. I've seen books with endnotes and a bibliography, and the endnotes invariably end up all being in short form, which means that you have to flip to TWO separate locations to find the reference for a bit of text. Again, it can be hard to keep your place while you're doing this.

Saturday, May 20, 2006

Plagiarism

I strongly object to the fact that some people define "plagiarism" as broadly as this:
The term "plagiarism" applies to "the imitation of structure, research, and organization," notes Laurie Stearns, a copyright lawyer in "Copy Wrong: Plagiarism, Process, Property, and the Law," an essay appearing in the California Law Review in 1992. "Even facts or quotations can be plagiarized," writes Ms. Stearns, "through the trick of citing to a quotation from a primary source rather than to the secondary source in which the plagiarist found it in order to conceal reliance on the secondary source."
Perhaps I should note that I have seen that quotation numerous times; most recently, I clicked through to it from this page.

Now: Isn't that definition of plagiarism a bit too broad?

Let's think about the possible rules and scenarios:

1. Anytime you cite or quote a book/article/source, you are obliged ALSO to cite the book/source/article that first led you there. Thus, if I'm reminded from an op-ed in the Oil Trough, Arkansas Baptist Church newsletter that a particular quotation comes from Shakespeare, then forever after, I can't quote Shakespeare directly; I have to give some sort of credit to the newsletter.

Such a rule would be impossible to implement if you do any sort of meaningful research on a subject. When you research a law review article, for example, you might find one book about your topic in the library; that book cites 20 other books and articles that seem relevant. You go read those, and come back home with a list of 100 books and articles that need to be checked out. After you take the time to investigate those 100 books and articles, you're starting to see a lot of overlap in the sources cited, but you now have still another 100 books and articles to check out. And so it goes, until it seems you've reached the end of every trail (including the rabbit trails) that opened up along the way.

Now, at that point, it would be absolutely unthinkable to try to go back and reconstruct, for each source, which other source first led you to it. And it wouldn't even make sense. If I'm researching judicial review, I would find that lots and lots of people cite James Bradley Thayer's famous article. Why should any one of them get the credit for introducing me to it (even if I could remember where I first heard of it)?

This rule would also be impossible to enforce. Who on earth could ever know that I first stumbled on Thayer's article from one particular source rather than another?

Moreover, it's not the case that you're passing off someone else's research as your own. When you go back and read the original source, you ARE doing research of your own.

2. If a source is particularly obscure, such that you might never have heard of it elsewhere, you should give credit to the person who brought it to your attention.

That seems like a rule that would be more workable. Say that there's this great newspaper article from 1789 discussing judicial review, and it comes from an obscure Pennsylvania newspaper that no one had ever heard of until Prof. Jones uncovered it. In that case, it seems fair to credit Prof. Jones with the discovery, so that you don't inaccurately imply that you independently found that article in your own perusal of Pennsylvania newspapers from the 1700s.

Even there, though, it doesn't make sense to throw around "plagiarism" charges willy-nilly when the source is reasonably discoverable. That is, there's no reason that you should be forced to cite the first article/book that happened to lead you to a source that you WOULD eventually have discovered on your own or through any number of other resources.

3. You should cite the secondary source when you haven't actually checked out the original source for yourself.

This is perhaps the most defensible rule. Scholars should always check out the original sources for themselves whenever possible. (An exception would be where a historian has relied on archival documents, such that it would be immensely impractical and unnecessary for you to recreate that research.)

But frankly, I suspect that a lot of scholars don't actually check the original source. I came across this phenomenon once in a law review article that I edited while in school. The author had inaccurately described what happened in an old Supreme Court case, in the midst of a passage that also cited Laurence Tribe's treatise a few times. When checking all of the citations, I corrected the inaccuracy after reading the original Supreme Court decision, and then was surprised to see the same inaccuracy in a footnote in Laurence Tribe's treatise. Clearly, that author had merely copied Tribe's description of the case. (Perhaps Tribe, like some map companies, introduces minor inaccuracies into his treatise in order to sniff out such instances of copying.)

That's the only sort of "cite copying" that I would label as plagiarism: Copying what one author says about an original source without even bothering to look up that original source for yourself. In such an instance, you're representing someone else's research as if it were your own.

UPDATE: Michelle Dulak Thomson writes with the following comment:
I agree with you, mostly. You surely don't have to present the whole chain of discovery by which you came in contact with every primary source. On the other hand, if a primary source is very rarely cited and difficult to access, it's common courtesy either to mention the secondary source that led you to it, or else to make clear that you found it independently of said secondary source. (If you're working that far into the archives, you generally know who's cited what, and it's best to make clear that you aren't poaching other people's work if you really aren't.)

And these steps insulate you from the charge that you never looked at the primary source at all -- which, as you say, is serious bad behavior, but I think very common.

In short: only cite sources you've actually seen; and for the more obscure ones, either credit the scholar who led you to them, or make clear that you found them independently of that scholar's work. I think those two rules keep you free of plagiarism.

How's My Driving?

A creative idea; I think I like it.
'How's My Driving?' for Everyone (and Everything?)

LIOR STRAHILEVITZ
University of Chicago Law School

U Chicago Law & Economics, Olin Working Paper No. 290
U of Chicago, Public Law Working Paper No. 125
NYU Law Review, November 2006


Abstract:
This is a paper about using reputation tracking technologies to displace criminal law enforcement and improve the tort system. The paper contains an extended application of this idea to the regulation of motorist behavior in the United States and examines the broader case for using technologies that aggregate dispersed information in various settings where reputational concerns do not adequately deter antisocial behavior.

The paper begins by exploring the existing data on “How’s My Driving?” programs for commercial fleets. Although more rigorous study is warranted, the initial data is quite promising, suggesting that the use of “How’s My Driving?” placards in commercial trucks is associated with fleet accident reductions ranging from 20% to 53%. The paper then proposes that all vehicles on American roadways be fitted with “How’s My Driving?” placards so as to collect some of the millions of daily stranger-on-stranger driving observations that presently go to waste. By delegating traffic regulation to the motorists themselves, the state might free up substantial law enforcement resources, police more effectively dangerous and annoying forms of driver misconduct that are rarely punished, reduce information asymmetries in the insurance market, improve the tort system, and alleviate road rage and driver frustration by providing drivers with opportunities to engage in measured expressions of displeasure.

The paper addresses obvious objections to the displacement of criminal traffic enforcement with a system of “How’s My Driving?”-based civil fines. Namely, it suggests that by using the sorts of feedback algorithms that eBay and other reputation tracking systems have employed, the problems associated with false and malicious feedback can be ameliorated. Indeed, the false feedback problem presently appears more soluble in the driving context than it is on eBay. Driver distraction is another potential pitfall, but available technologies can address this problem, and the implementation of a “How’s My Driving?” for Everyone system likely would reduce the substantial driver distraction that already results from driver frustration and rubbernecking. The paper also addresses the privacy and due process implications of the proposed regime. It concludes by examining various non-driving applications of feedback technologies to help regulate the conduct of soldiers, police officers, hotel guests, and participants in virtual worlds, among others.
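The eBay comparison suggests how the false-feedback problem might be damped in practice. Here is a minimal sketch of one standard technique, a Bayesian average, offered as my own illustration rather than anything from the paper:

    # Bayesian-average reputation score: a prior pulls drivers with few
    # reports toward a baseline, so a single malicious report can't sink them.
    PRIOR_MEAN = 0.9     # assumed baseline rate of "good driving" reports
    PRIOR_WEIGHT = 20    # how many pseudo-reports the prior is worth

    def reputation(positive_reports, total_reports):
        return (PRIOR_MEAN * PRIOR_WEIGHT + positive_reports) / (PRIOR_WEIGHT + total_reports)

    print(reputation(0, 1))     # one bad report: ~0.86, not 0.0
    print(reputation(5, 100))   # persistently bad driver: ~0.19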

Thursday, May 18, 2006

Spectrum Commons

An interesting paper on spectrum commons:
The Spectrum Commons in Theory and Practice

JERRY BRITO
George Mason University - Mercatus Center - Regulatory Studies Program
February 28, 2006


Abstract:
The radio spectrum is a scarce resource that has been historically allocated through command-and-control regulation. Today, it is widely accepted that this type of allocation is as inefficient for spectrum as it would be for paper or land. Many commentators and scholars, most famously Ronald Coase, have advocated that a more efficient allocation would be achieved if government sold the rights to the spectrum and allowed a free market in radio property to develop.

A new school of scholars, however, has begun to challenge the spectrum property model. While they agree with Coase that command-and-control spectrum management is highly inefficient, they instead propose to make spectrum a commons. They claim that new spectrum sharing technologies allow a virtually unlimited number of persons to use the same spectrum without causing each other interference and that this eliminates the need for either property rights in, or government control of, spectrum.

This Article aims to show that, despite the rhetoric, the commons model that has been proposed in the legal literature is not an alternative to command-and-control regulation, but in fact shares many of the same inefficiencies of that system. In order for a commons to be viable, someone must control the resource and set orderly sharing rules to govern its use. If the government is the controller of a commons - as proponents of a spectrum commons suggest it should be - then in allocating and managing the commons the government will very likely employ its existing inefficient processes.

Medical Costs

This interesting interview provides a clue as to why U.S. medical costs are so much higher than Europe's, and why that doesn't lead to longer lifespans. According to this one expert, it's not that we're getting too little medical care, but too much:
For three decades Nortin Hadler, a professor of medicine at the University of North Carolina at Chapel Hill, has been rigorously examining statistics generated by his medical colleagues’ practices and arriving at startling conclusions about their effectiveness. To take just one example, Hadler is credited with leading a complete rethinking about the treatment of back pain, which he finds excessive. He wrote the editorial accompanying a landmark study in The Journal of the American Medical Association two years ago suggesting that the benefits of surgery for back pain are overrated. He has also taken on heart treatment, testifying before Congress and the Social Security Advisory Board and publishing papers arguing that very little data back up the value of modern treatments like bypass surgery and angioplasty. He took his case about cardiac care and other health issues to the public in The Last Well Person: How to Stay Well Despite the Health-Care System (McGill-Queen’s University Press, 2004).

Q: Your book makes the case that too many people are having bypass surgery without much advantage. Under what circumstances do you think bypass surgery is appropriate?

H: None. I think bypass surgery belongs in the medical archives. There are only two reasons you’d ever want to do it: one, to save lives, the other to improve symptoms. But there’s only one subset of the population that’s been proved to derive a meaningful benefit from the surgery, and that’s people with a critical defect of the left main coronary artery who also have angina. If you take 100 60-year-old men with angina, only 3 of them will have that defect, and there’s no way to know without a coronary arteriogram. So you give that test to 100 people to find 3 solid candidates—but that procedure is not without complications. Chances are you’re going to do harm to at least one in that sample of 100. So you have to say, “I’m going to do this procedure with a 1 percent risk of catastrophe to find the 3 percent I know I can help a little.” That’s a very interesting trade-off.

* * *

Q: If the data are not prompting so much interventional cardiology, what is?

H: Money. Interventional cardiology is what supports almost every hospital in America—it’s an enormous part of our gross domestic product. Every year in this country we do about half a million bypass grafts and 650,000 coronary angioplasties, with the mean cost of the procedures ranging from $28,000 to $60,000. There are a lot of people involved in this transfer of wealth. But no Western European nation has such a high rate of those procedures—and their longevity is higher than ours.
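Hadler's screening trade-off is worth restating in plain numbers (the figures are his, from the exchange above; the arithmetic sketch is mine):

    # Restating Hadler's trade-off: screen 100 men with angina to find the
    # ~3 who benefit, at a ~1% risk of harm from the arteriogram itself.
    cohort = 100
    candidates = round(0.03 * cohort)   # ~3 with the left-main defect
    harmed = round(0.01 * cohort)       # ~1 harmed by the test
    print(f"Test {cohort} men to find {candidates} who benefit a little,")
    print(f"while harming roughly {harmed} along the way.")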

Lactic Acid

Some scientists evidently hadn't learned the principle that correlation is not causation.

Wednesday, May 17, 2006

Interesting Paper on Residential Segregation

This looks interesting:
Separate When Equal? Racial Inequality and Residential Segregation

PATRICK J. BAYER
Yale University - Department of Economics; National Bureau of Economic Research (NBER)
HANMING FANG
Yale University - Department of Economics; National Bureau of Economic Research (NBER)
ROBERT MCMILLAN
University of Toronto - Department of Economics
October 2005

Yale Economic Applications and Policy Discussion Paper No. 9


Abstract:
Standard intuition suggests that residential segregation in the United States will decline when racial inequality narrows. In this paper, we hypothesize that the opposite will occur. We note that middle-class black neighborhoods are in short supply in many U.S. metropolitan areas, forcing highly educated blacks either to live in predominantly white high-socioeconomic status (SES) neighborhoods or in more black lower-SES neighborhoods. Increases in the proportion of highly educated blacks in a metropolitan area may then lead to the emergence of new middle-class black neighborhoods, causing increases in residential segregation. We formalize this mechanism using a simple model of residential choice that permits endogenous neighborhood formation. Our primary empirical analysis, based on across-MSA evidence from the 2000 Census, indicates that this mechanism does indeed operate: as the proportion of highly educated blacks in an MSA increases, so the segregation of blacks at all education levels increases. Time-series evidence provides additional support for the hypothesis, showing that an increase in black educational attainment in a metropolitan area between 1990-2000 significantly increases segregation. Our analysis has important implications for the evolution of both residential segregation and racial socioeconomic inequality, drawing attention to a negative feedback loop likely to inhibit reductions in segregation and racial inequality over time.

Friday, May 12, 2006

The Narnia movie

Tilda Swinton, who played the White Witch in the Narnia movie, is at it again:
Speaking about The Chronicles of Narnia, which in the run-up to its release was given a huge push by the Church, Swinton said: . . . "At least we made her whiter than white, the ultimate white supremacist, and we managed to railroad the knee-jerk attempt to make her look like an Arab."
I find this simply unbelievable. Who would have ever suggested that the "White Witch" should "look like an Arab"?

Thursday, May 11, 2006

More on Luttig

Kevin Drum opines:
Luttig, a super-conservative judge and a devout believer that the administration needed extraordinary powers to fight terrorism, suddenly discovered that he had been suckered. When this administration says something is critical to the war on terror, what it really means is that it's politically convenient. If something else is politically convenient tomorrow, they'll flip 180 degrees without batting an eye.

For some reason, Luttig found this unacceptable. And so he's gone.
I don't get this at all. Is there any reason, beyond sheer speculation, to believe that Luttig's disagreement with the Bush administration actually caused him to resign? How could it? The Bush administration has absolutely no authority or pull over Luttig. From Luttig's perspective, the Bush administration will be history by the beginning of 2009, but at age 51, he could have been a federal judge for another 30+ years. I don't see any reason that a sitting federal judge would feel the need to resign after disagreeing with a presidential administration.

Wednesday, May 10, 2006

Judicial News

In judicial news today:

1. A very odd move:
Prominent federal appeals court judge J. Michael Luttig, who was on the short list for a seat on the Supreme Court, has delivered a letter of resignation to President Bush, effective immediately.

Luttig, 51, has taken a position as senior vice president and general counsel of Boeing Co., and will move with his family to Chicago.
I suspect that salary was involved. Reminds me of a story involving Luttig and John Roberts from a few years back:
[Luttig] has a sense of humor: A few years ago, he "applied" for a first-year associate's job at Hogan & Hartson, which, he suspected, would pay more than the salary of a federal appeals judge. Responding in kind, Hogan partner John Roberts Jr. turned Luttig down, informing him that first-year associates normally lack life tenure, are not assigned a battery of law clerks to assist them, and don't wear black robes -- even on casual Fridays.
I wouldn't be surprised to see more moves like Luttig's in the future, if judicial salaries fall even further behind the private sector.


2. A former partner from my law firm is being nominated to the Tenth Circuit. In addition to his many accomplishments, he is a very congenial person, and I'm sure he'll make a fine judge:
President Bush is expected to nominate attorney and legal scholar Neil Gorsuch to take a seat on the 10th Circuit Court of Appeals in Denver, Sen. Wayne Allard announced late Tuesday.

Gorsuch, who has a long paper trail of writings opposing euthanasia and judicial activism, currently serves in the U.S. Department of Justice as a principal deputy to the associate attorney general.

* * *

Allard, a Republican, praised Neil Gorsuch's legal credentials. Gorsuch earned a law degree from Harvard University and a doctorate in legal philosophy from the University of Oxford. He has been a clerk to two U.S. Supreme Court justices, fellow Coloradan Byron White and current Associate Justice Anthony Kennedy.

Tuesday, May 09, 2006

Expertise

The latest Levitt-Dubner column:
Ericsson and his colleagues have thus taken to studying expert performers in a wide range of pursuits, including soccer, golf, surgery, piano playing, Scrabble, writing, chess, software design, stock picking and darts. They gather all the data they can, not just performance statistics and biographical details but also the results of their own laboratory experiments with high achievers.

Their work, compiled in the "Cambridge Handbook of Expertise and Expert Performance," a 900-page academic book that will be published next month, makes a rather startling assertion: the trait we commonly call talent is highly overrated. Or, put another way, expert performers — whether in memory or surgery, ballet or computer programming — are nearly always made, not born. And yes, practice does make perfect. These may be the sort of clichés that parents are fond of whispering to their children. But these particular clichés just happen to be true.
* * *

"I think the most general claim here," Ericsson says of his work, "is that a lot of people believe there are some inherent limits they were born with. But there is surprisingly little hard evidence that anyone could attain any kind of exceptional performance without spending a lot of time perfecting it." This is not to say that all people have equal potential. Michael Jordan, even if he hadn't spent countless hours in the gym, would still have been a better basketball player than most of us. But without those hours in the gym, he would never have become the player he was.

* * *
And it would probably pay to rethink a great deal of medical training. Ericsson has noted that most doctors actually perform worse the longer they are out of medical school. Surgeons, however, are an exception. That's because they are constantly exposed to two key elements of deliberate practice: immediate feedback and specific goal-setting.

The same is not true for, say, a mammographer. When a doctor reads a mammogram, she doesn't know for certain if there is breast cancer or not. She will be able to know only weeks later, from a biopsy, or years later, when no cancer develops. Without meaningful feedback, a doctor's ability actually deteriorates over time. Ericsson suggests a new mode of training. "Imagine a situation where a doctor could diagnose mammograms from old cases and immediately get feedback of the correct diagnosis for each case," he says. "Working in such a learning environment, a doctor might see more different cancers in one day than in a couple of years of normal practice."
Interesting observations there, particularly about the abilities of doctors.

I think that the key claim above -- that experts are "nearly always made, not born" -- may be overstated, however. My personal experience, as always, influences my views of what seems plausible here. And I just don't find it plausible that experts can be "made, not born." The evidence seems to show merely that expert performers have spent years practicing their discipline. But that is consistent with the more common-sense theory that experts are "born with talent that they then have to exercise," as opposed to the sweeping claim that experts are "made, not born."

When I was a graduate teaching assistant, it was my great displeasure to teach a beginning guitar class for 20-25 music education majors at the University of Georgia. I taught this class three quarters a year for two years. (It was a miserable experience, because 1) few of the students wanted to take a guitar class in the first place -- it was simply a requirement of their major; and 2) all of the students came to class with a guitar, usually of extremely poor quality, with which they could make noise whenever they were bored, which was often, because I would inevitably have to move around the room examining each student's playing individually.)

Anyway, I must have taught well over a hundred students to play the guitar, at least in a rudimentary fashion. And the thing that most impressed me was the vast range of ability the students displayed. Out of 20 students, none of whom had touched a guitar before in their lives, there would be a few who were just naturals, who could instantly mimic anything that I showed them, and who could make remarkable progress without any sign of having practiced during the week. And there would be a handful of students at the bottom for whom everything was an immense struggle, and who -- despite the ability to play some other instrument such as the saxophone or piano -- simply could not get their fingers to play even the simplest chord on the guitar. In other words, students could have very different levels of ability even as absolute beginners -- which indicates to me that there really is such a thing as talent.

Now, even the best student would obviously never become an expert guitar player without practice. In that sense, practice is essential. But I doubt that the students with zero talent could ever become candidates for playing at Carnegie Hall, even if they practiced diligently for years. Experts are born AND made, in other words.

To be fair, Dubner and Levitt do make this point when they concede that few of us could hope to play basketball like Michael Jordan, no matter how much we practice. But that undermines their own suggestion that experts are "made, not born." If expertise were something that could simply be "made" through enough practice, then any of us could become a Michael Jordan (or a Yo-Yo Ma) given enough practice.
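Put a bit more formally, the "born AND made" view amounts to a multiplicative model: skill is roughly talent times some increasing function of practice. A toy version, with parameters invented purely for illustration:

    import math

    # Toy "born AND made" model: skill = talent * f(practice hours).
    def skill(talent, practice_hours):
        return talent * math.log1p(practice_hours)

    print(skill(1.0, 10_000))   # modest talent, huge practice: ~9.2
    print(skill(3.0, 0))        # natural talent, zero practice: 0.0
    print(skill(3.0, 10_000))   # same practice, more talent: ~27.6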


TV Online

I'm happy to see that Fox is now selling TV shows on iTunes. I think iTunes already sells episodes of "Lost." Now that they're going to sell "24" and "Prison Break," I can cancel my cable subscription.

Sunday, May 07, 2006

Ali G

Ali G interviews Noam Chomsky here. Pretty funny; a bit off-color at times.

Monday, May 01, 2006

Pizza Recipe

One of the things that I like to make is pizza. I'm not as thorough as this guy, but I still enjoy it. When I have time to make whole wheat crust, that's the best. Sometimes, though, I end up buying a ready-made crust from the grocery store. I don't like buying ready-made crust for anything -- whether pizzas or pies -- but at least it's better than buying a ready-made pizza. Half a loaf is better, or something like that.

Here's what I made this weekend that was just delicious. First, the sauce. The best recipe for sauce that I've found so far is this one, which I made without the anchovy sauce or the cayenne pepper. Then I caramelized about three onions by letting them stew in a little oil and sugar for probably 20 minutes or so. (I was making three 12-inch pizzas, by the way).

At the same time, I grilled a few chicken breasts on our George Foreman grill, and then sliced them into small pieces. And all the while, I had cut up some Roma tomatoes, covered them with garlic salt, and was letting them drain on paper towels. (Putting fresh tomatoes on pizza will just make it soggy unless you drain the water out.) As for the cheese, I chopped up some whole-milk mozzarella -- the kind that is soft and comes in a large lump.

Then I cooked the pizzas at 550 degrees -- the highest my oven will go, unfortunately -- until the crust started to brown and the cheese looked good and melted.

I guess that's not much of a "recipe" -- too much eyeballing and guessing. But that's how my wife and I end up cooking a lot of the time.

Which reminds me, I'm not nearly as good at eyeballing as my grandfather was. He could measure things without using a measuring cup. I remember one time when I was a kid -- probably 10 or 12 -- and I was making some biscuits. My grandfather was making something else in the kitchen at the same time. I needed to measure out 1/2 cup of shortening to cut into the flour, but I couldn't find the 1/2-cup measure. I made some remark about not being able to find it, and then my grandfather said, "How much do you need?" "1/2 cup." He dipped a kitchen spoon into the shortening can and said, "There you go." I remember thinking, "No way. That's not going to be right." So I kept looking, and finally found the measuring cup. I put the shortening in it to double-check, and sure enough, my grandfather was right on the money. I was in awe. Of course, he had been cooking for about 60 years at that point.