Tuesday, April 30, 2013

What, we didn't repeal Obamacare?

If you want to know what a challenge the Obama administration faces in implementing its signature health-care law, this statistic might help: Fewer than six in 10 Americans know that the Obamacare law is still on the books. Seven percent think the Supreme Court struck it down; 12 percent say Congress repealed Obamacare. ...

Most Americans likely to access new health care programs under the Affordable Care Act—either through subsidized private insurance or the Medicaid expansion—say they don’t have enough information to understand “how it [the health law] will impact you and your family.”
--Sarah Kliff, Washington Post, on the prospect that a lot of people are going to be surprised that they are paying a fine for not complying with the health insurance mandate. HT: Marginal Revolution

The bottom line on the Reinhart-Rogoff controversy

In the end, all the corrections advocated by the critics shift the average GDP growth for very-high-debt nations to 2.2 percent, from a negative 0.1 percent in Reinhart and Rogoff’s original work. The finding remains that economic growth is lower in very-high-debt countries (see chart). It has been disappointing to watch those on the left seize on the embarrassing Excel errors but ignore this bigger picture.

The negative correlation between public debt and economic growth is also present in the other samples studied by Reinhart and Rogoff, and it is a finding whose broad contours are consistent with other empirical analyses. Although the critics are right that it’s difficult to pin down the exact strength of the debt-growth correlation, that’s no reason to discard the balance of evidence suggesting that it is negative.

Equally, it’s time to abandon the more specific claim that there is a threshold of 90 percent of GDP beyond which the negative effects of public debt on economic growth become particularly evident. This was always a stretch, and is now quite clearly inconsistent with the balance of the evidence. Unfortunately, it’s the sort of sound bite that the media and our politicians find irresistible.

Lost in all this sound and fury is the real question that we should be debating: Is it appropriate to infer that high debt is driving slower growth, and hence governments need to take greater care before taking on debt? Or is lower GDP growth, or perhaps some other factor, the reason that debt burdens rise? If the observed correlations reflect the latter reason (and there are hints that it may), then the whole exercise has little relevance to public policy.
--Betsey Stevenson and Justin Wolfers, Bloomberg, on the big picture

Sunday, April 28, 2013

AER publication = 0.77 years of life

What would you give for a tangible achievement that carries enormous professional prestige?

If you’re an economist, and the achievement is a paper published in the American Economic Review, the answer seems to be three-quarters of a year of your life. Or, looking at it a different way, about three-quarters of a thumb. ...

The researchers–Arthur Attema, Werner Brouwer, and Job van Exel–conducted a survey of 69 of their colleagues (a small number, they concede). All had already published at least one recent article in one of six major economics journals.

In their questionnaire, they described an admittedly far-fetched scenario. If a medicine existed that would give them a day-long surge in brainpower—enough to formulate an article good enough for one of four major journals—would they take it? If so, would they still do so if they knew it would slightly reduce their lifespan? ...

Specifically, the average study participant was willing to give up 0.77 years for a paper published in the American Economic Review, but only 0.55 years for the Quarterly Journal of Economics, 0.42 years for the Review of Economic Studies, and 0.38 years for the European Economic Review.

Interestingly, the researchers found this willingness to give up a few months was not related to any anticipated higher income such an article might bring. ...

A separate, only slightly less macabre question measured the value of their thumb. Right-handed economists were asked: “Suppose you can live either 20 more years without your right thumb, or a shorter period with your right thumb. How long should the latter period be such that you are indifferent between these options?” (Left-handed economists were asked about their left thumbs.)

The researchers found that, on average, study participants were willing to give up 1.02 years of life in exchange for keeping their dominant-hand thumb. Since they’re willing to give up 0.77 years for publication in AER, “we can infer that a publication is worth about three quarters of a thumb,” the researchers conclude.
--Tom Jacobs, Pacific Standard, on work-life tradeoffs. HT: ACT
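
A quick note on the arithmetic behind that closing inference, using only the two averages quoted above (no figures beyond those reported in the excerpt): 0.77 years of life for an AER publication ÷ 1.02 years for the dominant-hand thumb ≈ 0.75, which is where the “about three quarters of a thumb” figure comes from.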

Thursday, April 25, 2013

Is the FAA intentionally delaying flights?

The Federal Aviation Administration claims the sequester spending cuts are forcing it to delay some 6,700 flights a day, but rarely has a bureaucracy taken such joy in inconveniencing the public. ...

...FAA regional employees wrote to blow the whistle on their bosses. As one email put it, "the FAA management has stated in meetings that they need to make the furloughs as hard as possible for the public so that they understand how serious it is."

Strategies include encouraging union workers to take the same furlough day to increase congestion. "I am disgusted with everything that I see since the sequester took place," another FAA employee wrote. "Whether in HQ or at the field level it is clear that our management has no intention of managing anything. The only effort that I see is geared towards generating fear and demonstrating failure."
--Review & Outlook, WSJ, on a shadow strike at the FAA. Editorial page content, so take with a grain of salt.

Why did universities start giving legacy children admissions preference?

Legacy preference for university admissions was devised in 1925 at Yale University, where the proportional number of Jews in the student body was growing at a rate that became alarming to the school's administrators. However, even prior to that year, Yale had begun to incorporate such amorphous criteria as 'character' and 'solidity', as well as 'physical characteristics', into its admissions process as an excuse for screening out Jewish students; but nothing was as effective as legacy preference, which allowed the admissions board to summarily pass over Jews in favor of 'Yale sons of good character and reasonably good record', as a 1929 memo phrased it. Other schools, including Harvard, soon began to pursue similar policies for similar reasons, and Jewish students in the Ivy League schools were maintained at a steady 10% through the 1950s.

In 1990, the US Office for Civil Rights concluded an investigation into whether Harvard discriminated against Asians. It found that most of the underrepresentation could be explained by the fact that few Asians were recruited athletes or children of alumni.
--Rohin Dhar, Priceonomics blog, on the legacy of legacy preference

Wednesday, April 24, 2013

You don't need to cool down after exercise

Most of us were taught in elementary school gym classes that the body requires a formal period of cooling down after a workout or competition. Instructors told us that by slowing to a jog or otherwise lessening the intensity of the workout, followed by stretching or otherwise transitioning out of physical activity, we would prevent muscle soreness, improve limberness and speed physiological recovery. ...

But under scientific scrutiny, none of those beliefs stand up well.

In a representative study published last year in The Journal of Human Kinetics, a group of 36 active adults undertook a strenuous, one-time program of forward lunges while holding barbells, an exercise almost guaranteed to make untrained people extremely sore the next day. Some of the volunteers warmed up beforehand by pedaling a stationary bicycle at a very gentle pace for 20 minutes. Others didn’t warm up but cooled down after the exercise with the same 20 minutes of easy cycling. The rest just lunged, neither warming up nor cooling down.

The next day, all of the volunteers submitted to a pain threshold test, in which their muscles were prodded until they reported discomfort. The volunteers who’d warmed up before exercising had the highest pain threshold, meaning their muscles were relatively pain-free.

Those who’d cooled down, on the other hand, had a much lower pain threshold; their muscles hurt. The cool-down group’s pain threshold was, in fact, the same as that of the control group. Cooling down had bought the exercisers nothing in terms of pain relief.

Similarly, in two other studies published last year, one in The Journal of Human Kinetics and the other in The Journal of Strength and Conditioning Research, professional soccer players in Spain underwent a series of physical tests to benchmark their vertical leap, sprinting speed, agility and leg muscle flexibility, and then completed a normal soccer practice. Afterward, some of the players simply stopped exercising and sat quietly on a bench for 20 minutes, while others formally cooled down with 12 minutes of jogging and 8 minutes of stretching. ...

It turned out that there were almost no differences between the two groups of players. ...

The available data “quite strongly suggest a cool-down does not reduce postexercise soreness,” says Rob Herbert, a senior research fellow at Neuroscience Research Australia and senior author of what is probably the foundational study of cooling down, from 2007. In that experiment, healthy adults walked backward downhill on a treadmill for 30 minutes, courting sore muscles and curious stares from fellow gymgoers. Some of the volunteers first walked forward for 10 minutes as a warm-up; others did the same afterward, to cool down. Others didn’t warm up or cool down.

Two days later, the group that had cooled down was every bit as sore as the control group.
--Gretchen Reynolds, NYT, on yet another way to save time on your workout

Tuesday, April 23, 2013

Manhattan is a relative bargain if you make more than $100,000

New Yorkers assume that we live in the most expensive city in the country, and cost-of-living indexes tend to back up that assertion. But those measures are built around the typical American’s shopping habits, which don’t really apply to the typical New Yorker — especially not college-educated New Yorkers with annual household incomes in the top income quintile, or around $100,000. According to a recent study by Jessie Handbury, an economist at the University of Pennsylvania’s Wharton School, people in different income classes do indeed have markedly different purchasing habits. That may not be surprising, but once you account for these different preferences, it turns out that living in New York is actually a relative bargain for the wealthy.

... Remarkably, she found that for households earning above $100,000, grocery costs are 20 percent lower in cities with a high per-capita income (like New York) than in cities with a low per-capita income (like New Orleans). There’s evidence that the same forces hold true for other products that cater to upper-income people, from high-end retail to beauty services. The average manicure, for example, is about $3 cheaper in New York City than in each of the rest of the top 10 biggest cities in the United States, according to Centzy, a company that collects data on the prices of services. ...

Of course, not everything that wealthy New Yorkers spend money on is cheaper here. Housing, after all, is absurdly expensive, even for the rich. ... Still, it’s somewhat unfair to compare housing costs here to those in a place like Buffalo, or even Atlanta, since perks like access to amenities and unusually lucrative jobs are baked into the cost of New York real estate. ... Regardless, the rent burden isn’t actually as onerous as people assume: the typical resident here pays roughly the same share of her income in rent as does her counterpart in Los Angeles, Chicago, Philadelphia and Houston, according to N.Y.U.’s Furman Center for Real Estate and Urban Policy.

Professional-class workers who like to moan about the cost of living in New York — and I’m including myself in this group — don’t realize how spoiled we are by both variety and competitive pricing. Truthfully, things seem more expensive here because there’s just way more high-end stuff around to tempt us, and we don’t do the mental accounting to adjust sticker prices for the higher quality. We see a sensible shoe with a $480 price tag or an oatmeal cookie for $4 and sometimes don’t register that these are luxury versions of normal items available from Payless or Entenmann’s. The problem, in part, is that people tend to anchor their own expectations for what they should buy based on what their neighbors are buying, not what some abstract, median American buys. It’s a phenomenon known by some as affluenza...
--Catherine Rampell, NYT, on not crying poverty wolf in Manhattan

Monday, April 22, 2013

The Chinese origins of ketchup

Yes, dear reader, the word ketchup originally meant “fish sauce” in a dialect of Fujian province, the humid coastal region that also gave us the word “tea” (from Fujianese te). ...

The story begins more than 500 years ago, when this province on the South China Sea was the bustling center of seafaring China. Fujianese-built ships sailed as far as Persia and Madagascar and took Chinese seamen and settlers to ports throughout Southeast Asia. Down along the Mekong River, Khmer and Vietnamese fishermen introduced them to their fish sauce, a pungent liquid with a beautiful caramel color that they made (and still make) out of salted and fermented anchovies. This fish sauce is now called nuoc mam in Vietnamese or nam pla in Thai, but the Chinese seamen called it ke-tchup, “preserved-fish sauce” in Hokkien—the language of southern Fujian and Taiwan. ...

What may be surprising—given fish sauce’s heady scent and England’s reputation for bland food—is that while buying all these barrels of arrack [a liquor made of fermented red rice] from Chinese merchants in Indonesia, British sailors also acquired a taste for ke-tchup. By the turn of the 18th century, fish sauce and arrack had become as profitable for British merchants as they were for Chinese traders. ...

The great expense of this Asian import soon led to recipes in British and then American cookbooks for cooks attempting to make their own ketchup. Here’s one from a 1742 London cookbook in which the fish sauce has already taken on a very British flavor, with “eschallots” (shallots) and mushrooms...

The mushrooms that played a supporting role in this early recipe soon became a main ingredient, and from 1750 to 1850 the word ketchup began to mean any number of thin dark sauces made of mushrooms or even walnuts. ...

It wasn’t until the 19th century that people first began to add tomato to ketchups, probably first in Britain. This early recipe from 1817 still has the anchovies that betray its fish-sauce ancestry...

By the mid-1850s, the anchovies had been dropped, and it was only in 1890 that the need for better preservation (and the American sweet tooth) led American commercial ketchup manufacturers like Heinz to greatly increase the sugar in ketchup, leading to our modern sweet and sour formula.

Saturday, April 20, 2013

Boston runs on Dunkin'

It was clear amid the chaos Friday which was the hometown coffee chain.

On block after block of Boston’s Financial District and Downtown Crossing, Starbucks shops went dark as the city locked down, spurred by a manhunt for the second Marathon bombing suspect. Dunkin’ Donuts stayed open.

Law enforcement officials asked the chain to keep some restaurants open in locked-down communities to provide hot coffee and food to police and other emergency workers, including in Watertown, the focus of the search for the bombing suspect. Dunkin’ is providing its products to them at no charge.
--Erin Ailworth, Boston Globe, on police and their preferred coffee and donuts

Monday, April 15, 2013

The case for legalized polygamy

Recently, Tony Perkins of the Family Research Council reintroduced a tired refrain: Legalized gay marriage could lead to other legal forms of marriage disaster, such as polygamy. Rick Santorum, Bill O’Reilly, and other social conservatives have made similar claims. It’s hardly a new prediction—we’ve been hearing it for years. Gay marriage is a slippery slope! A gateway drug! If we legalize it, then what’s next? Legalized polygamy?

We can only hope.

Yes, really. While the Supreme Court and the rest of us are all focused on the human right of marriage equality, let’s not forget that the fight doesn’t end with same-sex marriage. We need to legalize polygamy, too. Legalized polygamy in the United States is the constitutional, feminist, and sex-positive choice. More importantly, it would actually help protect, empower, and strengthen women, children, and families. ...

The case for polygamy is, in fact, a feminist one and shows women the respect we deserve. Here’s the thing: As women, we really can make our own choices. We just might choose things people don’t like. If a woman wants to marry a man, that’s great. If she wants to marry another woman, that’s great too. If she wants to marry a hipster, well—I suppose that’s the price of freedom.

And if she wants to marry a man with three other wives, that’s her damn choice.

We have a tendency to dismiss or marginalize people we don’t understand. We see women in polygamous marriages and assume they are victims. ... All marriages deserve access to the support and resources they need to build happy, healthy lives, regardless of how many partners are involved. Arguments about whether a woman’s consensual sexual and romantic choices are “healthy” should have no bearing on the legal process. And while polygamy remains illegal, women who choose this lifestyle don’t have access to the protections and benefits that legal marriage provides.
--Jillian Keenan, Slate, embracing the slippery slope

Friday, April 12, 2013

Evidence of America's deteriorating mental health

The high prevalence of mental illness in the United States isn’t only because we’ve gotten better at detecting mental illness. More of us are mentally ill than in previous generations, and our mental illness is manifesting at earlier points in our lives. One study supporting this explanation compared the anxiety scores of children with psychological problems in 1957 to the scores of today’s average child. Today’s children—not specifically those identified as having psychological problems, as were the 1957 children—are more anxious than those in previous generations.

Another study compared cohorts of American adults on the personality trait of neuroticism, which indicates emotional reactivity and is associated with anxiety. Americans scored higher on neuroticism in 1993 than they did in 1963, suggesting that as a population we are becoming more anxious. Another study compared the level of narcissism among cohorts of American college students between 1982 and 2006 and found that more recent cohorts are more narcissistic.

An additional study supports the explanation that more people are diagnosed with mental illness because more of us have mental illness: The more recently an American is born, the more likely he or she is to develop a psychological disorder. Collectively, this line of research indicates that more is going on than simply better detection of mental illness.
--Robin Rosenberg, Slate, on reverse progress

Sunday, April 7, 2013

The pain of love unrequited

So I ignored Kevin’s texts and calls, patiently waiting for him to realize we really were supposed to be together. When I was back home for New Year’s I made sure every status advertised my whereabouts for the night. How else was he going to burst in at midnight to tell me he couldn’t live without me? Spoiler alert: he didn’t.

Subsequently, I decided to move to New York, where 20-somethings who no longer believe in love go to pursue more attainable goals, like being a stand-up comic. One day I awoke to an e-mail from my parents; the basketball hoop in my front yard had been knocked over during a storm and they decided to remove it completely.

I took this as a sign to officially abandon the plan. This time I cut Kevin out of my life completely and began to focus on more important things, like my blossoming waitressing career.
--Marina Shifrin, NYT, on making a stone of your heart

The diversity universities don't want

Last year [2009], two Princeton sociologists, Thomas Espenshade and Alexandria Walton Radford, published a book-length study of admissions and affirmative action at eight highly selective colleges and universities. ... But what was striking, as Russell K. Nieli pointed out last week on the conservative Web site Minding the Campus, was which whites were most disadvantaged by the process: the downscale, the rural and the working-class.

This was particularly pronounced among the private colleges in the study. For minority applicants, the lower a family’s socioeconomic position, the more likely the student was to be admitted. For whites, though, it was the reverse. An upper-middle-class white applicant was three times more likely to be admitted than a lower-class white with similar qualifications. ...

But cultural biases seem to be at work as well. Nieli highlights one of the study’s more remarkable findings: while most extracurricular activities increase your odds of admission to an elite school, holding a leadership role or winning awards in organizations like high school R.O.T.C., 4-H clubs and Future Farmers of America actually works against your chances. Consciously or unconsciously, the gatekeepers of elite education seem to incline against candidates who seem too stereotypically rural or right-wing or “Red America.”

This provides statistical confirmation for what alumni of highly selective universities already know. The most underrepresented groups on elite campuses often aren’t racial minorities; they’re working-class whites (and white Christians in particular) from conservative states and regions.

Saturday, April 6, 2013

Life advice from the elderly

[The] Legacy Project [is] a study of almost 1,500 people, ranging from their 70s to over 100, who shared their wisdom about life. [Cornell professor Karl Pillemer's] work resulted in the 2011 book “30 Lessons for Living: Tried and True Advice from the Wisest Americans.” ...

His research began with a simple question: “What are the most important lessons you have learned over your life?” ...

One unanimous refrain included just three simple words: Life is short. A retired engineer told Pillemer that “it passes in a nanosecond.” A 99-year-old woman said, “I don’t know what happened, but the next thing you know you are 100.”

That firm appreciation of life’s fleeting nature led to a list of surprising lessons for Pillemer and his research team. Though many survey participants had lived through hard economic times, instead of urging younger people to get steady, well-paying jobs, they consistently said, “Do something you enjoy.”

“Based on this extremely acute awareness of the shortness of life, everybody argued you should find work you love; work ought to be chosen for its intrinsic value, and for its sense of enjoyment, sense of purpose. And life was much too short to spend doing something you don’t like, even for a few years.”

Similarly, respondents surprised Pillemer when he asked them to name their biggest regrets. Instead of listing concerns like affairs, addictions, or shady business dealings, almost unanimously they answered: “I wish I had not spent so much time worrying.”

“The idea behind that again related to shortness of life. … The argument they make is that the mindless and ruminative worry over things one can’t control so effectively poisons life that it’s a waste of a precious lifetime.”

Another standout lesson from the survey involved the notion of being responsible for one’s own happiness. While it sounds like a cliché, said Pillemer, “It’s a critical part of their lived reality, and their argument is as follows: Younger people tend to be happy ‘if only’. … Their view from later life is that this has to morph into being happy in spite of things.”
--Colleen Walsh, Harvard Gazette, on how to live

Thursday, April 4, 2013

The life of Roger Ebert

[Roger] Ebert, 70, who reviewed movies for the Chicago Sun-Times for 46 years and on TV for 31 years, and who was without question the nation’s most prominent and influential film critic, died Thursday in Chicago. He had been in poor health over the past decade, battling cancers of the thyroid and salivary gland. ...

On Tuesday, Mr. Ebert blogged that he had suffered a recurrence of cancer following a hip fracture suffered in December, and would be taking “a leave of presence.” In the blog essay, marking his 46th anniversary of becoming the Sun-Times film critic, Ebert wrote “I am not going away. My intent is to continue to write selected reviews but to leave the rest to a talented team of writers hand-picked and greatly admired by me.” ...

He had a good eye. His Sept. 25, 1967 review of Warren Beatty and Faye Dunaway in “Bonnie and Clyde” called it “a milestone” and “a landmark.”

“Years from now it is quite possible that ‘Bonnie and Clyde’ will be seen as the definitive film of the 1960s,” he wrote, “showing with sadness, humor and unforgiving detail what one society had come to.”

It was. Though of course Ebert was not infallible -- while giving Mike Nichols’ “The Graduate” four stars in the same year, he added that the movie’s “only flaw, I believe, is the introduction of limp, wordy Simon and Garfunkel songs.” ...

All that need be mentioned of Ebert’s social life was that in the early 1980s he briefly went out with the hostess of a modest local TV show called “AM Chicago.” Taking her to the Hamburger Hamlet for dinner, Ebert suggested that she syndicate her show, using his success with Siskel as an example of the kind of riches that awaited. While she didn’t return his romantic interest, Oprah Winfrey did follow his business advice.
--Neil Steinberg, Chicago Sun-Times, on my favorite movie critic