
Restarting Nuclear Power

Derrick Freeman - October 20, 2014

Japanese prime minister Shinzo Abe has endorsed the restart of the nuclear reactors at Kyushu Electric's Sendai power plant in southern Japan. This is a remarkable turnaround -- considering all of Japan's 48 nuclear reactors were shut down after an earthquake and tsunami led to meltdowns at the Fukushima Daiichi plant in March 2011 -- with lessons for policymakers here in the U.S.

Japan's decision comes after its newly formed Nuclear Regulatory Authority (NRA) issued a more than 400-page safety report in July showing that Kyushu Electric's safety assessment met the new regulatory standards, followed by a month-long public comment period. The NRA replaced the patchwork of bureaucrats who had been responsible for oversight of the nuclear industry before the accident and is widely viewed as much stricter than the previous regime. The Japanese government has worked hard to restore confidence in the nuclear-power industry, for example by requiring all communities within 30 kilometers of a plant to submit evacuation plans for approval.

Tokyo's green light, however, is the first of many hurdles the nuclear industry must clear to get back online. The next for Kyushu Electric is to gain consent from the governor of Kagoshima Prefecture and the mayor of Satsumasendai, where the plant is located.

Interestingly, support for the restart of Japan's nuclear reactors runs highest in the communities that host them. Overall, however, the public remains wary. Before the accident, nearly two-thirds of the public supported building new nuclear reactors; a national public poll taken in July and published by the Asahi newspapers found 59 percent opposition to the restart at Sendai.

Nonetheless, there are compelling reasons for Japan to flick the "on" switch for nuclear energy. Since the mothballing of its reactors, the Japanese economy has been in decline, and the country's utility sector has experienced tremendous losses each year. Nuclear power, once 30 percent of Japan's electricity generation, has had to be replaced with other fuels, primarily fossil. Resource-poor Japan has been obliged to depend more on imported natural gas, coal, and oil to meet its electricity needs. This reliance comes at a very heavy price for both Japan and its citizens. Natural-gas prices in Japan hit a record high of $20.125 per million BTU earlier this year, whereas the United States' Henry Hub Natural Gas Spot Price has averaged $4.675 through August 2014. Higher import prices for fossil fuels and a weaker yen have led to Japan running trade deficits for the first time in three decades.

Imported fossil fuels generated 88 percent of Japan's electricity last year, compared with 62 percent in 2010, according to Japan's Ministry of Economy, Trade, and Industry. Household consumers have felt the pinch of these expensive imports as their electricity rates have soared, rising 19.4 percent, while industrial users were hit with a 28.4 percent rise.

Along with prices, greenhouse-gas emissions are also soaring. They are up 7.4 percent since fiscal year 2010 despite a decline in manufacturing production, the aggressive implementation of efficiency measures in households, and a significant ramp up in renewable energy. Last November, Japan announced that it would target a 3.8 percent emissions cut by 2020 versus 2005 levels. This amounts to a 3 percent rise from the U.N. benchmark year of 1990, rather than the 25 percent cut Tokyo previously promised to meet its Kyoto Protocol commitment. "Given that none of the nuclear reactors is operating, this was unavoidable," Nobuteru Ishihara, Japan's Environment Minister, has said.

The Sendai plant restart is months away if it happens, and uncertainty still surrounds Japan's energy future. But it appears that, almost four years removed from the devastation at Fukushima, Japan is turning the corner and moving to restore public confidence in its nuclear power infrastructure.

The United States is 35 years removed from Three Mile Island, a much less dangerous accident, yet the anti-nuclear backlash it engendered has yet to abate. However, there are some bright spots in the U.S. nuclear-power sector, with five reactors under construction including the Tennessee Valley Authority's Watts Bar 2 plant, which will be the first unit to come online in America since Watts Bar 1 did in 1996. Watts Bar 2 should begin operation in the latter part of 2015.

Ironically, Japan appears to be doing a more expeditious job of dispelling public fears in large part by installing a strong, U.S.-style nuclear regulatory system. This suggests that Germany may have acted too hastily in announcing the shutdown of all its nuclear power plants after the Fukushima accident. In any event, let's hope Japan's example inspires the United States to proceed with building new nuclear plants -- and restoring U.S. global leadership in safe nuclear technology.

Derrick Freeman is a senior fellow and director of the Energy Innovation Project at the Progressive Policy Institute.

Search Engines, Free Advertising, and the Content Industry

Mytheos Holt, R Street - October 17, 2014

In 1925, a music industry professional complained to the New York Times that the new medium of radio was destroying the industry's business model by making songs too widely available to the public for free. The Times quoted the unnamed professional saying:

The public will not buy songs it can hear almost at will by a brief manipulation of the radio dials. If we could control completely the broadcasting of our compositions we would endeavor to prevent this saturation of the radio listeners with any particular song. . . .

We are striving to have the copyright law, which protects us from the free use of our compositions by the makers of phonograph records and music rolls, construed to cover broadcasting. The law specifies that we must be compensated if any of our songs are used by some one else for profit to them. We contend that the radio station is an enterprise founded for gain. It is not controlled by those purveying sets or parts, it is employed by organizations which use it as a medium of institutional advertising.

The music industry professional got his wish as far as copyright, and has turned out to be right in another way as well, though surely not in a way he would have expected. Radio is treated as free advertising -- and primarily for music producers! This is, in fact, the main reason why terrestrial radio stations are presently statutorily exempt from paying royalties.

Today, the story of radio's transition from content industry bête noire to one of its core advertisers is being replayed in the case of another medium that content industry professionals once lambasted as nothing but a gateway for pirates: search engines.

In a report released today by Google, the company lays out the case that search engines aren't a major driver of piracy. The report claims that search is responsible for just 16 percent of the traffic to sites that host pirated content. By contrast, studies have shown that 64 percent of the traffic to legitimate sites comes from search engines.

To take one example, "katy perry" gets searched for 200,000 more times on Google than "free katy perry mp3." What's more, under new changes to the company's search algorithm, legitimate sources of Katy Perry's music will show up first on both searches, at no cost to Perry herself (though individual content salesmen such as Apple, Amazon or Spotify can pay to have their digital storefronts advertised prominently).

Starting in 2012, Google began "downranking" sites that receive a high volume of Digital Millennium Copyright Act take-down complaints, meaning that those results are automatically ranked lower in Google's search algorithm. The new tweaks will go further to prioritize results for legitimate sources of film, music and other copyrighted content, as well as offering users multiple sources from which that content can be purchased, rented or streamed. This would apply even for obvious piracy-oriented searches, such as "the big lebowski torrent."

In short, for content producers, search engines serve as a form of free advertising, paid for either by middleman online retailers or by the search engine itself. As the Google report puts it:

Piracy often arises when consumer demand goes unmet by legitimate supply. As services ranging from Netflix to Spotify to iTunes have demonstrated, the best way to combat piracy is with better and more convenient legitimate services. The right combination of price, convenience and inventory will do far more to reduce piracy than enforcement can.

Consumers have a huge appetite for content, and there's evidence that they're willing to pay a lot for it. A report from May 2013 found that the most frequent consumers of pirated digital files actually spend 300 percent more on content than so-called "honest" consumers. This tendency for piracy itself to serve as a form of free advertising is why savvy media producers, such as the makers of HBO's Game of Thrones, find high piracy rates flattering, rather than alarming. Once HBO's new stand-alone streaming service launches, these users of pirated content easily could turn into legitimate consumers.

Search engines thus have a huge opportunity to exploit a market with an above-average appetite for content and expose it to more ways to purchase that content. This benefits search engines like Google, but it also benefits the content industry itself.

Of course, as the 1925 Times quote demonstrates, the content industry hasn't always been eager to embrace innovation. The disruptive effects of the Internet have shaken a lot of established content industry business models, which has led some of those industries into efforts at outright censorship, both through abuse of the DMCA's take-down system and through attempts to bake censorship into the Internet itself, via new legislation and trade agreements.

Google's report provides some truly lurid examples of what that "abuse" looks like, such as a film company allegedly trying to get a major newspaper's review of their film blocked in search results. Techdirt has also outlined some truly ridiculous examples of DMCA takedown requests. In view of these shameless attempts at censorship, Google's decision to protect against DMCA abuse from both directions is prudent. It remains to be seen whether these safeguards will continue to hold, but the proliferation of information about fair use on the Internet suggests reason for optimism.

Distinguishing between fighting piracy as an industry, and fighting individual pirates, who are rarely the hardened criminals that content industry advocates paint them as, could be a major step toward a better Internet both for consumers and producers.

This piece originally appeared at the R Street Institute's blog. Mytheos Holt is an R Street associate fellow.

The TPP Can Promote Medical Innovation

Eric V. Schlecht - October 17, 2014

This month, international trade representatives will head to Australia to continue negotiating the Trans-Pacific Partnership (TPP). Many believe this process is close to completion. A finalized TPP holds real hope for boosting global commerce and driving sustained economic growth for all countries involved.

It also has the potential to fuel medical innovation and bolster public health long into the future. But in order to realize that goal, some critical details need to be ironed out -- namely, the deal must ensure greater transparency and efficiency in how medicines make their way to patients in TPP markets. Strong provisions in the TPP will ensure that American companies are treated fairly when they sell their innovations abroad, and that patients in TPP countries have access to important medical treatments.

To their credit, U.S. negotiators seem keen to include these measures in the final deal. But they've recently come under attack from powerful U.S. interest groups that want to weaken protections for American companies. This would be a grave mistake.

The U.S. leads the world in the development of new drugs. We've produced more than 400 approved medicines since the turn of the century, and we've put another 5,000 medicines into clinical trials around the globe. These advances don't come easy. Our pharmaceutical industry invests more than $50 billion in research and development each year -- more than any other country by far -- to continue churning out medical advancements.

And of course, these breakthroughs aren't just reserved for American patients. They're used worldwide to improve -- and save -- countless lives. But when these treatments become available beyond our shores, they are often prevented from making their way to patients due to inefficient, slow, and complicated government approval, pricing, and reimbursement systems.

New Zealand is one of the major economies participating in the TPP, and it demonstrates the risks negotiators will be running if the proper protocols aren't required. The country's public-health system, PHARMAC, is charged with drug regulatory approvals and pricing and reimbursement policies. Because it was made an independent body in 2001, PHARMAC lacks basic transparency and accountability measures for its decisions.

It routinely denies foreign companies the most basic considerations -- such as guidelines for registering new drugs -- and reasonable timelines for approval or reimbursement decisions.

This has effectively denied patients and doctors the opportunity to weigh in on decisions that affect public health -- and as a result, these decisions tend to focus narrowly on cost at the expense of access to new medicines. One analysis found that of 83 prescription medicines registered with neighboring Australia between 2000 and 2006, only 22 were reimbursed in New Zealand. Overall, there is an average lapse of three years between the time a drug is approved in the first country and in New Zealand. This is surely part of the reason why New Zealand ranks a dismal 14th out of 19 OECD countries in terms of the annual number of patient deaths from treatable conditions.

The TPP must require that these decisions no longer be made in a vacuum. Governments' systems for pricing and reimbursing medicines for the purpose of public programs should be transparent, timely, and predictable. Drug makers should be allowed to appeal rate and approval decisions to an independent administrative body. And all decisions need to be substantiated by the latest science. Failing to provide these basic protections to drug companies severely limits their ability to research and develop drugs.

These are essentially the provisions enshrined in KORUS, America's landmark free-trade deal with South Korea, which came into force in 2012. This agreement was created with the goals of preventing arbitrary decisions, preserving patient access to a wide array of high-tech medicines, and retaining the incentives for future innovation.

A cadre of high-profile U.S. interest groups has mounted a concerted effort against including such strong provisions in the new TPP deal. This campaign includes the AARP, the public-employee union AFSCME, and the private-sector AFL-CIO and SEIU labor organizations. They've all repeatedly petitioned federal trade officials to back down from including these important protections in the deal.

They're worried that these provisions would somehow open the door for drug manufacturers here in America to challenge the payment policies of Medicare, Medicaid, and other public insurance programs. That fear is baseless.

In a letter reported by Inside U.S. Trade, the U.S. trade representative explicitly addressed these concerns: "These are straightforward provisions that will not require any changes to any U.S. healthcare laws nor will they affect the U.S. Government's ability to pursue the best healthcare policy for its citizens, including future reforms or decisions on healthcare expenditures."

Unlike many other nations, the U.S. government does not dictate drug prices through its government insurance programs. Reimbursement rates are, by design, largely tied to prices on the open market. In America, government health-care programs are intended for specific segments of the population, and the prices under those programs are tied to reported prices based on commercial sales in competitive markets.

In fact, Medicare Part D is a "best practice" example of this: It embraces private competition, not government price controls; it's completely delivered through private plans; and the government pays for plans based on competitive bids.

The TPP should include the same strong transparency provisions that were inscribed in the U.S. trade agreement with South Korea. This will not only ensure that citizens in TPP partner countries have timely access to safe, effective medicines, but also that U.S. companies can continue to produce them long into the future.

Eric V. Schlecht is a writer who has worked on budget and economic issues in Washington, D.C., for more than 20 years. He has served in leadership offices in both the U.S. Senate and House of Representatives.

Does Eminent Domain Even Raise Revenue?

Dean Stansel & Carrie Kerekes - October 17, 2014

Proponents of eminent domain for private development -- i.e., of forcibly taking private property and giving it to another private party -- claim it will generate more revenue for state and local governments. The Supreme Court even based its landmark 2005 case Kelo v. City of New London on this assertion, holding that the alleged economic benefits for communities legally justify these takings as "public use."

The claim that eminent domain leads to higher revenues has largely gone unchallenged. We recently examined the available data, and our study finds virtually no evidence that eminent-domain activity for private development is associated with higher government revenue. To the contrary, we find some evidence that eminent domain is associated with lower growth of government revenue in the future.

In other words, governments' primary justification for taking property from private owners like Susette Kelo and transferring ownership to big companies like Pfizer is based on faulty assumptions. In fact, the redevelopment plan for which Ms. Kelo's house (and those of her neighbors in New London, Conn.) was taken never happened. The land was actually used as a temporary dump for storm debris in the aftermath of Hurricane Irene in 2011.

Confiscating someone's home or business and using the land as a dump is an egregious property-rights violation. Even if eminent domain for private development did achieve the objective of producing higher revenues for state and local governments, it would be an abhorrent activity. However, it also has serious negative implications for the future economic prosperity of the community.

Private-property rights are the foundation of a successful market economy. Any encroachments on private-property rights -- like eminent domain -- hamper economic growth and result in lower standards of living than we would otherwise enjoy.

For example, in countries like Cuba and North Korea, where private-property rights are very insecure, entrepreneurs are less willing to invest in the new machines and equipment they need to expand their businesses. Individuals in these countries have a reasonable expectation that any machinery or equipment, or overall business or land itself, may at some point be taken from them by government predation or by individual criminals.

Fortunately, property rights in the United States are relatively secure -- but things are heading in the wrong direction. The Fraser Institute publishes an annual index that ranks countries according to their economic freedom using data in five areas: size of government, legal system and property rights, sound money, freedom to trade internationally, and regulation. In the recently released 2014 Economic Freedom of the World report, the United States fell to 12th, down from the second spot in 2000 and the seventh spot in 2008. In the area of "legal system and property rights," the United States fell all the way to 36th.

Our study's findings confirm that policymakers and the public are right to be skeptical of attempts to justify the seizure of private property with the promise of future financial windfalls. In reality, these encroachments may hamper economic growth and lead to lower standards of living for more than just those who have lost their homes or businesses.

Dean Stansel and Carrie Kerekes are economics professors at Florida Gulf Coast University and authors of a new study entitled "Takings and Tax Revenue: Fiscal Impacts of Eminent Domain," published by the Mercatus Center at George Mason University.

When 'Niceness' Becomes Tyranny

Thomas K. Lindsay - October 16, 2014

Take this quiz. In which of the following venues -- (a) The New York Times or (b) Fox News -- did the following report appear? "Teacher[s] . . . [are] frightened of the pupils and fawn on them." The "students make light of their teachers. . . . And, generally, the young copy their elders and compete with them in speeches and deeds, while the old come down to the level of the young; imitating the young, they are overflowing with . . . charm, and that's so that they won't seem to be unpleasant or despotic."

The answer is "none of the above." This account is nearly 2,500 years old, coming from Socrates in Plato's Republic, and is part of an analysis of how democratic freedom, taken to its extreme, can culminate in collective tyranny.

What has been the effect of our Niceness Crusade on today's children? A New Yorker piece blames parents for creating young children who are, as the article's title states it, "Spoiled Rotten." ("Why do kids rule the roost?" asks the subheadline.) What kind of college students do such children then grow up to become? A recent New Republic article by a former Yale professor worries that today's students at elite universities are "entitled little sh[**s]." Another study, of Bowdoin College students, conducted by the National Association of Scholars, finds these students guilty of "knowingness," which is "the antithesis of humility," the "enemy of education," and "a formula for intellectual complacency." Contrast this with Socrates' famous formulation, which served for centuries as liberal education's animating principle: "The unexamined life is not worth living for a human being."

Why might today's parents fail to exercise the leadership necessary to enforce the discipline necessary to their children's maturation? How have the relations between the young and old been turned upside down, with the older, more experienced generation now fearing to offend the younger, less-experienced generation, rather than vice versa?

Doubtless, a variety of factors are at play here, but, for Socrates, democratic justice, which he finds to be the principle of freedom, degenerates -- as do all political principles -- through being taken to its extreme. Liberty, which in the highest sense consists in freely choosing to restrain one's passions in order to pursue the just course, degenerates into license, which is liberty unrestrained by any purposes higher than freedom itself; that is, license is irresponsible freedom.

So unquenchable can become democracy's passion for freedom-as-unrestraint, argues Socrates, that, by virtue of its logic, it extends freedom ever wider, eventually to the animals themselves: "There come to be horses and donkeys who have gotten the habit of making their way quite freely and solemnly, bumping into whomever they happen to meet on the roads, if he doesn't stand aside, and all else is similarly full of freedom." This fantastic scenario is meant intentionally to be dreamlike, but it resonates with us today when we consider the principles animating the animal-rights movement. Some recall the saga of the ill-fated "Baby Fae," a newborn whose heart condition led doctors to take the desperate, ultimately unsuccessful, measure of transplanting a baboon's heart to her in hopes of saving her life. This produced outcries from the animal-rights movement, which critiqued the morality, as one scholarly paper puts it, "of the taking of an innocent animal's life to attempt to save the life of an innocent human." Similar human-animal equations appear regularly from PETA, such as its "Holocaust on Your Plate" campaign.

The political consequences of taking freedom to its extremes, according to Socrates, are that democracy's citizens "end up . . . by paying no attention to the laws, written or unwritten, in order that they may avoid having any master at all." Under the new dispensation, then, the old come down to the level of the young; parents, to their children; human beings, to animals; and -- thanks to today's popularization of moral relativism -- objective Truth falls to subjective choice. All this in order that all may be fully "free."

How might we rediscover a sound basis for teaching and practicing self-restraint? We could start by reading Plato, who teaches that, while it is in one sense natural for us to "want what we want, when we want it," human nature at its deepest longs for something more, something higher. We long to discover and participate in a good of such nobility that it trumps our lower desires.

There was a time, of course, not so long ago, when many college students could be expected to study (because it was required) Plato's Republic. But those bad old days of making students read things they might not want to read, such as difficult Platonic dialogues, surrendered to the same passion that Plato finds threatens to transform democratic liberty into tyrannical license. In the name of "student choice," our university elders, not wanting to seem "unpleasant or despotic," abandoned required core curriculums a half-century ago, replacing them with their intellectually spineless shadows -- "general education" and "distribution requirements."

It is difficult to envision today's universities restoring the type of rigorous, required core curriculum in which students would be compelled to encounter a text like the Republic, through which they might receive the greatest gift of all -- coming better to understand themselves and what they believe through engaging in a serious conversation with a mind greater than their own who challenges them to examine their unexamined assumptions. But if universities do not take this courageous step, they doom their students to lives suffocated by prejudice, by the "knowingness" and sense of entitlement that is death to intellectual as well as political liberty. If American higher education, which has come so much in our increasingly secular society to be the chief crafter of the culture, fails to seek to arrest the degeneration for which it is in some part culpable, Socrates would argue that we can next expect a culture in which "insolence" will come to be labeled "good education; anarchy, freedom; wastefulness, magnificence; and shamelessness, courage."

From this shift in the culture, Socrates concludes, comes "the beginning, so fair and heady, from which tyranny . . . naturally grows." Which leaves a question for today's parents and educators: Are the benefits of our aimless "niceness" toward our children worth the price? If not, then for their sake, as well as ours, we adults might consider acting again like grown-ups.

Thomas K. Lindsay directs the Center for Higher Education at the Texas Public Policy Foundation. He was deputy chairman of the National Endowment for the Humanities under George W. Bush. He recently published Investigating American Democracy with Gary D. Glenn (Oxford University Press).

Jacob Anbinder - October 16, 2014

New York's transit system is the lifeblood of America's largest metropolis. Comprising subways, commuter trains, and ferries, it's famously vast, and millions of commuters rely upon it every day.

Salt Lake City's is . . . not. Utah's capital features light rail, bus rapid transit, and even a commuter rail line. But these transportation options -- many of which are less than a decade old -- are hardly integral parts of the city's identity.

But not so fast, spoiled New Yorkers. Salt Lake's relatively modest transit network actually outperforms its New York counterpart on one essential measure: providing access to a high percentage of the region's jobs. That's according to data in the Access Across America report, the latest by transportation planner David Levinson and the Accessibility Observatory at the University of Minnesota.

The study, which ranked urban areas by their transit systems' ability to provide access to jobs, revealed some surprising truths about the impact of mass transit on urban mobility.

1. Percentages tell a different story.

Levinson and his colleagues ranked 46 of the largest American Metropolitan Statistical Areas (a definition used by the Census Bureau) by the number of transit-accessible jobs in the urban area in six different increments of time -- from ten minutes to one hour. The calculations were "worker-weighted," meaning they accounted for the actual residential patterns of each city.
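The "worker-weighted" idea can be sketched in a few lines. This is an illustrative reconstruction of the concept only, not the study's actual methodology or code; the function name and toy numbers are hypothetical.

```python
# Sketch of a "worker-weighted" accessibility measure (our reconstruction):
# each area's transit-accessible job count is averaged, weighted by how
# many workers actually live there, so dense residential areas count more.

def worker_weighted_accessibility(blocks):
    """blocks: list of (resident_workers, jobs_reachable) tuples."""
    total_workers = sum(workers for workers, _ in blocks)
    if total_workers == 0:
        return 0.0
    weighted_sum = sum(workers * jobs for workers, jobs in blocks)
    return weighted_sum / total_workers

# Toy example: a dense core block and a sparse suburban block.
blocks = [
    (8000, 120000),  # core: many residents, many reachable jobs
    (2000, 10000),   # suburb: few residents, few reachable jobs
]
print(worker_weighted_accessibility(blocks))  # 98000.0
```

The weighting matters: a simple unweighted average of the two blocks would be 65,000 reachable jobs, but because most workers live in the well-served core, the worker-weighted figure is substantially higher.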

In terms of raw numbers, New York was the victor in every time segment, with 1.2 million jobs reachable in one hour. In fact, there was an incredible amount of consistency among a core group of cities -- New York, Chicago, San Francisco, and Washington, D.C., all finished in the top five every time. That should come as little surprise, given that those four cities are among the densest and most transit-reliant in the country. Conversely, Birmingham, Ala., and Riverside, Calif., were in or near the bottom of the list every time.

Ranked by percentage of jobs accessible through transit, however, the list tells a different story. New York still does pretty well (ranking sixth), and San Francisco still makes an appearance at number three, but Salt Lake City takes first, with San Jose, Milwaukee, and Denver rounding out the top five. (Sorry, Riverside -- you're still last.)

Salt Lake's first-place finish is well deserved -- a one-hour commute on mass transit puts Salt Lakers in reach of an incredible 25.42 percent of the region's jobs. New York? Just below 15 percent.

2. It pays to think regionally.

Even more amazing than Salt Lake's ranking in the study, however, is the fact that those 129,000 transit-accessible jobs can be reached using just one agency: the Utah Transit Authority. In achieving this, Salt Lake was not alone. Of the five top-ranked urban areas in the one-hour category (ranked by percentage of accessible jobs), three have just one major transit agency.
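As a back-of-envelope check of the figures above (our own arithmetic, not a number taken from the report), 129,000 reachable jobs at a 25.42 percent share implies a regional job base of roughly half a million:

```python
# Implied total jobs in the Salt Lake region, derived from the article's
# two figures (129,000 reachable jobs = 25.42% of all jobs). This is our
# own arithmetic, not a statistic reported in the study itself.
reachable_jobs = 129_000
accessible_share = 0.2542

implied_total_jobs = reachable_jobs / accessible_share
print(round(implied_total_jobs))  # roughly 507,000
```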

It may seem counterintuitive, but it's a fact that speaks to the power of strong regional planning on issues of transportation, especially when addressing longer commutes, which are more likely to cross jurisdictional boundaries. Unlike Riverside, say, where the transit systems are separated by county lines, Salt Lake, Milwaukee, and Buffalo can better coordinate their bus schedules to optimize commutes. A unified transit system is not a panacea -- poor-performing Birmingham also has just one -- but it can go a long way toward improving regional job accessibility.

3. Population density doesn't play as big a role as you might think . . .

You would think that population density and job accessibility go hand-in-hand. In denser cities, not only would one expect a greater concentration of jobs, but also a more substantial mass-transit network to bring people to them. Yet, as the graph below shows, the statistical relationship between the two is limited at best. (The same is true if you measure job accessibility in raw numbers.)

About 8 percent of jobs are transit-accessible within an hour in the Los Angeles area, for example. That's roughly the same as greater Cleveland, which is only one-third as dense.

4. . . . but investment in heavy rail does.

Population density is, of course, just part of the equation. The kind of transportation used to commute also has a major impact on job accessibility. Looking at the raw number of accessible jobs, the urban areas that "outperform" their counterparts of similar densities tend to have established, extensive heavy-rail systems.

The Boston area and greater Kansas City have roughly the same population density, for example, but an hour on mass transit puts a commuter within reach of up to five times as many jobs in the former as in the latter. The Washington, D.C., area has a lower population density than New Orleans, but its subway-centric transit network can reach ten times as many jobs in one hour.

Interestingly, though, when looking at job accessibility as a percentage rather than as a raw number, heavy rail is not always the reason for a city's high ranking. Heavy-rail-reliant San Francisco and New York still have an edge, but Milwaukee, Buffalo, Portland, and Denver, which lead their peers of comparable density, rank high despite having no such advantage.

5. People like to stick to their cars . . .

Perhaps the most interesting element of the study, however, is the relationship between the ideal situation it describes and Americans' actual commuting habits. In reality, outside of a few large metropolises with well-established rail networks, most American cities have only a very meager number of mass-transit commuters. This remains the case even when the city in question has a high proportion of transit-accessible jobs.

Despite Salt Lake City's progress in making jobs accessible via public transportation, for example, less than 5 percent of the region's commuters use mass transit to get to work. Same goes for San Jose, ranked number two percentage-wise.


6. . . . unless everybody is taking the train.

Looking at the raw numbers of transit-accessible jobs, however, the situation is quite different.

Here, there appears to be a strong correlation -- at least preliminarily -- between the total number of transit-accessible jobs in a given city and the percentage of that city's commuters who use mass transit to get to work. This is the case in both the 30-minute and 60-minute categories.

It will take a far more rigorous statistical analysis to determine if there truly is a relationship between the two measures. (For starters, the correlation is weaker when cities with very low transit usage are examined as a discrete group.) Still, it seems logical to assume that a virtuous circle exists between the sheer number of transit-accessible jobs and the proportion of people who use transit to travel to them.
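That caveat can be illustrated with a quick sketch. The city figures below are invented for illustration (no city in the study is represented); only the method -- comparing the correlation across the full sample with the same correlation inside a low-usage subgroup -- mirrors the reasoning above.

```python
# Pearson correlation between transit-accessible jobs and transit mode
# share, computed on hypothetical data to illustrate the caveat above:
# a correlation that looks strong overall can weaken considerably when
# cities with very low transit usage are examined as a discrete group.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical pairs: (jobs accessible in 60 min, transit mode share %)
cities = [(1_200_000, 25.0), (500_000, 10.0), (300_000, 6.0),
          (150_000, 2.8), (120_000, 4.0), (90_000, 3.9), (60_000, 2.5)]
jobs, share = zip(*cities)
print(round(pearson_r(jobs, share), 2))        # close to 1 for the full sample

# Restrict to the low-usage cities (share under 5 percent):
low_usage = [(j, s) for j, s in cities if s < 5.0]
jobs_lo, share_lo = zip(*low_usage)
print(round(pearson_r(jobs_lo, share_lo), 2))  # much weaker within the subgroup
```

The point of the sketch is only that subgroup structure matters: a handful of rail-heavy outliers can carry an apparently strong sample-wide correlation.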

When transit agencies focus on moving large numbers of passengers, the frequency of trains and buses usually increases. This, in turn, improves the public's perception of transit's reliability, which encourages greater ridership. Employers, eager to have access to a broader pool of talent, continue to locate their companies close to this well-used transit network.

7. If you build it, they still might not come.

Still, the report is perhaps most useful in showing that there's no magic formula for good transit investment. Salt Lake City is the perfect case in point. Thanks to regional planning that takes into account the entire Wasatch Front, the Utah Transit Authority has established a reputation as a national leader in smart mass-transit investment. Still, the city's low rates of transit ridership show that more work is needed to effect lasting change in Salt Lakers' commuting habits.

New York City faces a different problem. Its residents don't need to be encouraged to use the subways and buses, but a profound lack of regional cooperation has for years harmed the city's potential for growth. New York's impressive statistic isn't that 1.2 million jobs are transit-accessible -- it's that the city managed to achieve that number despite having no meaningful coordination between New Jersey Transit, the Port Authority, and the various agencies of the MTA.

For some cities, success is driven by delegating transit-planning powers to a regional body that can predict the needs of the entire urban area and plan its investments accordingly. For others, it's the existence of a local transportation culture in which people consider mass transit the default option and their car the alternative, rather than vice versa.

The ideal, of course, is to find the place where those two trends intersect. But as the report shows, that ideal may be for now the most elusive destination of all.

Jacob Anbinder is a policy associate at the Century Foundation, the New York-based think tank, where he writes about transportation, infrastructure, and urban policy. All graphic data is from the American Community Survey 2013 1-Year Estimates (using "urbanized area" as the geographic category) and "Access Across America: Transit 2014," prepared by Andrew Owen and David Levinson for the Accessibility Observatory at the University of Minnesota.

Bringing Competition to Internet Service

Joshua Breitbart - October 15, 2014

If you live in an American city, chances are you're getting a raw deal -- paying more for broadband, and yet getting slower service, than your urban counterparts around the world. Part of the reason is that the urban broadband market in the U.S. is effectively a duopoly, as the chairman of the Federal Communications Commission (FCC) noted last month. Without competition, there's less incentive for Internet Service Providers (ISPs) to increase speeds, improve service, or slash prices. What's a city to do?

Bigger cities in the U.S. have relied almost exclusively on private companies to deliver broadband to residents, but the shine has come off this apple in recent years. Early on, the phone and cable companies leveraged their existing wires to squelch competition and dominate the broadband market. In the mid-2000s, cities like Philadelphia and San Francisco hoped companies like EarthLink or MetroFi would deliver citywide Wi-Fi to disrupt the ISP duopoly, but they didn't. Verizon and AT&T have rolled out some fiber-optic upgrades, but they have passed over many cities and neighborhoods. As a sign of desperation, one mayor threw himself in a freezing lake in a failed attempt to get Google to build a fiber-optic network in his town.

Taking the opposite approach, nearly 400 local governments have chosen to become public ISPs, according to the Institute for Local Self-Reliance. If the FCC strikes down a series of state bans on municipal broadband, hundreds more cities may pursue this model, but it is unlikely this solution will work for major metros where two companies have already built networks and acquired customers. The cities that have tried this route so far have generally been ones the national ISPs have passed over; the largest cities with municipal networks are Chattanooga, Tenn., and Lafayette, La., with populations of roughly 170,000 and 125,000 respectively. Many smaller cities will not have the technical capacity or political will to take this leap.

Instead of getting caught between views of broadband as a wholly public utility or as a totally private amenity, big cities need to cultivate private-sector, non-profit, and cooperative broadband solutions neighborhood by neighborhood. The key is providing an "open access" network -- infrastructure that multiple service providers can use without each having to invest in their own citywide network. Cities can piece a network like this together the way they accumulate park land and affordable housing: through requirements on private developers and strategic use of public assets.

The citywide open-access network connects to the Internet backbone, then to key points in neighborhoods, like our libraries, firehouses, schools, and media centers. (Many cities already operate "institutional networks" that connect these community anchors, but they cannot use them to deliver Internet service per agreements with the cable companies.) The network doesn't offer Internet service, merely the opportunity to move data from one point in the city to another point at very low cost. Whether the data is heading to or coming from the Internet isn't the city's concern.

The price of Internet bandwidth varies widely across a city, as does the possible speed. Right now, it's only universities, major financial corporations, some hospitals, and Big Internet that get access to the speed and volume pricing of the backbone. It should be more like getting a street vendor license or a hack license, and open to that level of entrepreneurial effort. And those top speeds should not be available only in a central business district, but also in at least one spot in every neighborhood.

Cities can build these networks piece by piece, using "dig once" policies that coordinate infrastructure projects. If you are going to dig up the streets or lay new pipes for any purpose, the city should also install fiber-optic lines and conduits for future lines. As Columbia Telecommunications Corporation describes in its "Gigabit Communities" report, even if the local government doesn't make immediate use of these assets, it can lease access to private providers, lowering a company's construction costs and minimizing disruptions for residents. Cities can be more aggressive in expanding the network with broadband-related requirements on new development, such as rights of way for rooftop wireless links or a mandated fiber-optic tie-in, much as we might require a developer to connect to sewage and water systems.

Any efforts to streamline construction or add zoning requirements should not be at the expense of due process, however. Local policymakers -- even community boards and block captains -- need a basic literacy in broadband deployment to serve their appropriate function of oversight and public participation.

While cities can hope to connect every neighborhood, they need to target their effort where it is needed most or can do the most good. They can divide the city into more manageably sized markets for issuing franchises to access city light poles, streets, and sewers. Google Fiber divided Kansas City into "fiberhoods," where a critical mass of committed subscribers would determine whether the company would build to that area. Only later did it realize the level of outreach and organizing needed to promote broadband in chronically underserved areas. Cities can be more proactive, designating underserved areas as "broadband enterprise zones," where traditional economic development incentives such as tax breaks or loans help ISPs start or expand service. (My colleagues and I have proposed a methodology for identifying these zones, and a model policy framework can be adapted from the Center for Social Inclusion's concept of an "Energy Investment District.") Low-income areas are already targeted for digital literacy programs, and occasionally for reduced service rates or other subsidies, but usually with the mistaken idea that the current service options are sufficient.

Even with no local government support, entrepreneurial providers like WasabiNet in St. Louis and BKFiber in Brooklyn are taking advantage of new wireless networking technologies to compete for customers in neighborhoods that have been chronically underserved. Community-based organizations like Red Hook Initiative in Brooklyn and Allied Media Projects in Detroit are also constructing neighborhood-scale wireless networks, using free software and teaching tools developed with the Open Technology Institute, where I work. Instead of public property, these organizers build on their relationships with residents, churches and other partners to install equipment.

Red Hook Initiative's project has boosted BKFiber by becoming a paying customer, raising the company's profile in the community, and helping it gain access to various rooftops to place equipment. Community-based, sometimes rather informal projects face considerable organizational and regulatory challenges, but they are increasingly within reach for neighborhood associations or cooperatives wishing to sponsor hotspots, develop a resilient emergency communication system, or share connections to the Internet. Cities that wanted to see more of these projects could fund them as education or job training and provide access to city property, so long as the process for doing so was transparent and continued to incorporate community participation.

Broadband is an essential service. Municipal governments have both a moral obligation and an economic motivation to connect all residents, but the ways for them to do this will vary from town to town. Not all governments will become Internet service providers, but all should take an active role in ensuring a vibrant and competitive broadband marketplace for their residents. None can rest while the current duopoly remains in place.

Joshua Breitbart is a senior research fellow with New America's Open Technology Institute.

Deadly Force, in Black and White

Ryan Gabrielson, Ryann Grochowski Jones & Eric Sagara, ProPublica - October 14, 2014

Young black males in recent years were at a far greater risk of being shot dead by police than their white counterparts -- 21 times greater i, according to a ProPublica analysis of federally collected data on fatal police shootings.

The 1,217 deadly police shootings from 2010 to 2012 captured in the federal data show that blacks, age 15 to 19, were killed at a rate of 31.17 per million, while just 1.47 per million white males in that age range died at the hands of police.

One way of appreciating that stark disparity, ProPublica's analysis shows, is to calculate how many more whites over those three years would have had to have been killed for them to have been at equal risk. The number is jarring -- 185, more than one per week.

ProPublica's risk analysis on young males killed by police certainly seems to support what has been an article of faith in the African American community for decades: Blacks are being killed at disturbing rates when set against the rest of the American population.

Our examination involved detailed accounts of more than 12,000 police homicides stretching from 1980 to 2012 contained in the FBI's Supplementary Homicide Report. The data, annually self-reported by hundreds of police departments across the country, confirms some assumptions, runs counter to others, and adds nuance to a wide range of questions about the use of deadly police force.

Colin Loftin, University at Albany professor and co-director of the Violence Research Group, said the FBI data is a minimum count of homicides by police, and that it is impossible to precisely measure what puts people at risk of homicide by police without more and better records. Still, what the data shows about the race of victims and officers, and the circumstances of killings, are "certainly relevant," Loftin said.

"No question, there are all kinds of racial disparities across our criminal justice system," he said. "This is one example."

The FBI's data has appeared in news accounts over the years, and surfaced again with the August killing of Michael Brown in Ferguson, Missouri. To a great degree, observers and experts lamented the limited nature of the FBI's reports. Their shortcomings are inarguable.

The data, for instance, is terribly incomplete. Vast numbers of the country's 17,000 police departments don't file fatal police shooting reports at all, and many have filed reports for some years but not others. Florida departments haven't filed reports since 1997, and New York City last reported in 2007. Information contained in the individual reports can also be flawed. Still, many of the reporting police departments are in larger cities, and at least 1,000 departments filed a report or reports over the 33 years.

There is, then, value in what the data can show while accepting, and accounting for, its limitations. Indeed, while the absolute numbers are problematic, a comparison between white and black victims shows important trends. Our analysis included dividing the number of people of each race killed by police by the number of people of that race living in the country at the time, to produce two different rates: the risk of getting killed by police if you are white and if you are black.
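The arithmetic behind those rates can be sketched in a few lines. The two published rates are the ones quoted above; the helper function simply shows how a per-million rate is derived from a death count and a population figure (in the actual analysis, those inputs come from the FBI reports and Census estimates).

```python
# Reproduce the headline figures from the published rates. The inputs a
# real analysis would feed to rate_per_million come from the FBI's
# Supplementary Homicide Report and Census population estimates.

def rate_per_million(deaths, population):
    """Homicides per million people in a demographic group."""
    return deaths / population * 1_000_000

# Published rates for males age 15-19, 2010-2012:
black_rate = 31.17  # deaths per million
white_rate = 1.47   # deaths per million

# The "21 times greater" risk ratio is simply the ratio of the two rates.
print(round(black_rate / white_rate, 1))  # 21.2
```

Dividing one rate by the other is the risk ratio described in the footnote below; the wide 95 percent confidence interval reported there reflects how statistically rare such shootings are.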

David Klinger, a University of Missouri-St. Louis professor and expert on police use of deadly force, said racial disparities in the data could result from "measurement error," meaning that the unreported killings could alter ProPublica's findings.

However, he said the disparity between black and white teenage boys is so wide, "I doubt the measurement error would account for that."

ProPublica spent weeks digging into the many rich categories of information the reports hold: the race of the officers involved; the circumstances cited for the use of deadly force; the age of those killed.

Who Gets Killed?

The finding that young black men are 21 times as likely as their white peers to be killed by police is drawn from reports filed for the years 2010 to 2012, the three most recent years for which FBI numbers are available.

The black boys killed can be disturbingly young. There were 41 teens 14 years or younger reported killed by police from 1980 to 2012. Twenty-seven of them were black; eight were white; four were Hispanic; and one was Asian.

That's not to say officers weren't killing white people. Indeed, some 44 percent of all those killed by police across the 33 years were white.

White or black, though, those slain by police tended to be roughly the same age. The average age of blacks killed by police was 30. The average age of whites was 35.

Who is killing all those black men and boys?

Mostly white officers. But in hundreds of instances, black officers, too. Black officers account for a little more than 10 percent of all fatal police shootings. Of those they killed, though, 78 percent were black.

White officers, given their great numbers in so many of the country's police departments, are well represented in all categories of police killings. White officers killed 91 percent of the whites who died at the hands of police. And they were responsible for 68 percent of the people of color killed. Those people of color represented 46 percent of all those killed by white officers.

What were the circumstances surrounding all these fatal encounters?

There were 151 instances in which police noted that teens they had shot dead had been fleeing or resisting arrest at the time of the encounter. Sixty-seven percent of those killed in such circumstances were black. That disparity was even starker in the last couple of years: of the 15 teens shot fleeing arrest from 2010 to 2012, 14 were black.

Did police always list the circumstances of the killings? No. In many deadly shootings, the circumstances were listed as "undetermined"; 77 percent of those killed in such instances were black.

Certainly, there were instances where police truly feared for their lives.

Notably, though, the data show that police reported exactly that as the cause of their actions in far greater numbers after the 1985 Supreme Court decision in Tennessee v. Garner, which held that police could justify using deadly force only if the suspect posed a threat to the officer or others. From 1980 to 1984, "officer under attack" was listed as the cause for 33 percent of the deadly shootings. Twenty years later, looking at data from 2005 to 2009, "officer under attack" was cited in 62 percent of police killings.

Does the data include cases where police killed people with something other than a standard service handgun?

Yes, and the Los Angeles Police Department stood out in its use of shotguns. Most police killings involve officers firing handguns xl. But from 1980 to 2012, 714 involved the use of a shotgun. The Los Angeles Police Department has a special claim on that category, accounting for 47 cases in which an officer used a shotgun. The next highest total came from the Dallas Police Department: 14.

This piece originally appeared at ProPublica, a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


i ProPublica calculated a statistical figure called a risk ratio by dividing the rate of black homicide victims by the rate of white victims. This ratio, commonly used in epidemiology, gives an estimate for how much more at risk black teenagers were of being killed by police officers. Risk ratios can have varying levels of precision, depending on a variety of mathematical factors. In this case, because such shootings are rare from a statistical perspective, a 95 percent confidence interval indicates that black teenagers are at between 10 and 40 times greater risk of being killed by a police officer. The calculation used 2010-2012 population estimates from the U.S. Census Bureau's American Community Survey.







xl Calculated from the "Weapon Used by Offender" variable. Ranked based on frequency of reported shotgun homicides by police agencies.




Airbnb Regulated Into Legality

Ann C. Logue, R Street - October 10, 2014

One of the many oddities of San Francisco is that the city is full of libertarians who love regulation. You can do your own thing, unless you're a tech bro, a landlord, or a big corporation, in which case you must be legislated into submission. The city's housing market is distorted by a series of regulations that seemed like good ideas at the time, such as rent control and tight zoning. Instead of making the city more charming and affordable, they have created tension between those who can afford housing (the very rich, the long-tenured tenant) and those who can't. The result is a nasty edge to daily life in an otherwise gorgeous city.

The twin pressures of rent control and a booming economy have created occupations that can scarcely be imagined elsewhere, such as the master tenant: a person who holds a large rent-controlled apartment and makes a living by subletting rooms at market rate. Sure, the subtenants can complain, but they aren't likely to in a city where the shortage of housing is a serious issue.

Then there's Airbnb. The zoning and construction limits that affect the housing market also affect the hotel market. In 2007, Airbnb was formed in this world of semi-anarchy: a service that allowed people to rent out rooms to visitors. The host received more money per night than he or she would from taking on a roommate. The money offset the very high cost of living in SF, and the visitor saved money on hotel bills.

Win-win? Not quite. With no regulation, participating in Airbnb raised questions: could renters rent out space in their apartments without violating their leases? What if the renter moved in with her boyfriend but kept the rent-controlled lease to make a living as a full-time hotelier? Could landlords kick out tenants in order to rent out apartments to short-term guests? Would the hosts have recourse against crazy, violent or thieving guests -- or squatters? Likewise, would the guests be protected against difficult hosts? And was the city due taxes for the lodging services? If so, should it go after the hosts, the guests or Airbnb itself to collect?

Excessive regulation led to the creation of Airbnb, and less-excessive regulation may just save it. On Oct. 7, the San Francisco Board of Supervisors passed legislation allowing residents to rent out rooms if they register with the city and hold $500,000 in liability insurance. Also, Airbnb must remit lodging taxes to the city. Airbnb is now legal, and guests and hosts alike, at the very least, know where they stand relative to the law.

Regulation is such a complicated beast. It would be nice to say that there should be no regulation whatsoever, but let's face it: some people will behave badly unless they are given limits. On the other hand, too much regulation creates its own issues. Rent control is a bad idea; it is an economic transfer from the landlord to the long-term tenant with no social advantages, as the tenants receive the benefit without regard to need. As with any transfer payment, once it's in place, the beneficiaries form a tight constituency to keep it. No politician has the will to take on an issue like rent control, and there's no time machine to undo it.

On the other hand, there's the very interesting phenomenon of creativity acting in response to constraints. Because regulation creates problems, it creates demand for work-arounds to solve them. Airbnb is one example. Another, also from SF, is Uber: restrictions on the number of taxis meant that people who lived in San Francisco's neighborhoods could not get cabs. The taxi drivers would rather serve tourists than troll for passengers in the Fog Belt. The market for medallions may be limited, but other forms of on-demand transportation solved the problem.

Maybe that's the secret to economic growth in Northern California. We like to think that a high-tax, high-regulation jurisdiction would be a terrible place to do business, but people are flocking to San Francisco and surrounding cities in the hope of hitting it big. The tight regulations force creative thinking to work around them -- and maybe lead to their destruction.

This piece originally appeared on the R Street Institute's blog.

Iliya Atanasov & H. Bradlee Perry - October 10, 2014

Public pension systems are now putting lots of money into hedge funds, hoping to earn higher returns and bolster their dangerously underfunded plans. But pension funds shouldn't be wasting resources on these expensive investments.

Hedge funds may indeed provide a worthwhile "hedge" during a stock-market decline -- their values tend to fall less than stocks' -- or if an extended period of low equity returns and high volatility ensues. But that advantage is more than wiped out when stocks experience their normal strong rebound, as has happened over the last seven years. For the long term, which is of overriding importance to pension funds, hedge funds are not the way to go.

For the fifth consecutive year, hedge funds underperformed the stock market in 2013. Their average total return was 9.1 percent, versus 32.4 percent for the S&P 500. That makes the longer-term numbers look like this:

It's always hard to pick the right time frame for a fair comparison. We've shown three periods here. The first starts before hedge funds became broadly popular and had a brief period of really good performance -- the era when David Swensen made hay for Yale. It also starts just as tech stocks began to gain great attention.

The second one starts after the stock market had experienced a strong rebound from the tech crash. And the third period, while rather short, has the classic fairness of going from one market peak to another.

The main conclusion from these numbers is that the big advantage of hedge funds is long over. More recently, passive portfolios of large-cap stocks have done much better, and these numbers do not even account for the huge difference in fees. True, the best hedge funds will likely continue to outperform the market. But they're hard to identify in advance and almost impossible to get into for all but the biggest and most prestigious accounts.

To a great extent, hedge funds' underperformance is due to the flood of money into the field and the proliferation of funds. Too much money -- and not just from pension funds -- is chasing the available smart trades. Thus, much of it goes into mediocre strategies or causes a drift in investment styles at otherwise shrewd firms that don't have the discipline to close their funds. Excessive inflows are a classic red flag for an investment strategy even before the bad numbers turn up.

A second factor working against hedge funds is that securities markets have been much more correlated -- that is, a decision to invest in one security instead of another matters less than it used to -- mainly because of the dominant influence of macro factors and government intervention on all markets since the recession. With less of a difference between various types of investments, there have been many fewer advantageous trading opportunities. Maybe someday the world will settle down, with smaller influences becoming more important again. But odds are that that time is some way off. With Europe still on its knees and trouble brewing in China, the kind of environment we've had since 2007 will likely persist.

Finally, many hedge funds are not really designed to hedge at all. Hedging is about reducing the potential downside on an investment to some preset risk level. Yet many, if not most, hedge funds are little more than glorified trading houses or mutual funds, at several times the cost. Their fee structures reflect it, incentivizing the chase for more assets and higher returns instead of accomplishing risk-management goals and consistent return targets.

For these reasons, hedge funds in general should be considered an unrewarding and risky arena -- one that public pension funds should steer clear of.

Iliya Atanasov is a senior fellow on finance at Pioneer Institute, a Boston-based think tank, and H. Bradlee Perry is an investment consultant and advisor to Marble Harbor Investment Counsel.

Price Tags on Health Care? Only In Massachusetts

Martha Bebinger, WBUR - October 9, 2014

Without much fanfare, Massachusetts launched a new era of health care shopping last week.

Anyone with private health insurance in the state can now go to his or her health insurer's website and find the price of everything from an office visit to an MRI to a Cesarean section. For the first time, health care prices are public.

It's a seismic event. Ten years ago, I filed Freedom of Information Act requests to get cost information in Massachusetts -- nothing. Occasionally over the years, I'd receive manila envelopes with no return address, or secure .zip files with pricing spreadsheets from one hospital or another.

Then two years ago, Massachusetts passed a law that pushed health insurers and hospitals to start making this once-vigorously guarded information more public. Now as of Oct. 1, Massachusetts is the first state to require that insurers offer real-time prices by provider in consumer-friendly formats.

"This is a very big deal," said Undersecretary for Consumer Affairs and Business Regulation Barbara Anthony. "Let the light shine in on health care prices."

There are caveats.

1.) Prices are not standard; they vary from one insurer and provider to the next. I shopped for a bone density test. The low price was $16 at Tufts Health Plan, $87 on the Harvard Pilgrim Health Care site, and $190 at Blue Cross Blue Shield of Massachusetts. Why? Insurers negotiate their own rates with physicians and hospitals, and these vary too. Some of the prices include all charges related to your test; others don't (see No. 2).

2.) Posted prices may or may not include all charges, for example the cost of reading a test or a facility fee. Each insurer is defining "price" as it sees fit. Read the fine print.

3.) Prices seem to change frequently. The first time I shopped for a bone density test at Blue Cross, the low price was $120. Five days later it had gone up to $190.

4.) There is no standard list of priced tests and procedures. I found the price of an MRI for the upper back through Harvard Pilgrim's Now iKnow tool. That test is "not found" through the Blue Cross "Find a Doc" tool.

5.) Information about the quality of care is weak. Most of what you'll see are patient satisfaction scores. There is little hard data about where you'll get better care. This is not necessarily the insurer's fault, because the data simply doesn't exist for many tests.

6.) There are very few prices for inpatient care, such as a surgery or an illness that would keep you in the hospital overnight. Most of the prices you'll find are for outpatient care.

These tools are not perfect, but they are unlike anything else in the country. While a few states are moving toward more health care price transparency, none have gone as far as Massachusetts to make the information accessible to consumers. Tufts Health Plan Director of Commercial Product Strategy Athelstan Bellerand said the new tools "are a major step in the right direction." Bellerand added: "They will help patients become more informed consumers of health care."

Patients can finally have a sense of how much a test or procedure will cost in advance. They can see that some doctors and hospitals are a lot more expensive than others. For me, a bone density test would cost $190 at Harvard Vanguard and $445 at Brigham and Women's Hospital.

The most frequent early users of the newly disclosed data are probably providers. Anthony says some of the more expensive physicians and hospitals react with, "I don't want to be the highest priced provider on your website. I thought I was lower than my competitors."

Anthony is hoping that will generate more competition and drive down prices.

"I'm just talking about sensible rational pricing, which health prices are anything but," she added.

Take, for example, the cost of an upper back MRI.

"The range here is $614 to $1,800, so three times," said Sue Amsel, searching "Now I Know," the tool she manages at Harvard Pilgrim. "That to me is a very big range."

In this case, the most expensive MRI is at Boston Children's Hospital and the lowest cost option is at New England Baptist, with no apparent difference in quality.

"It's not just for choosing. It's primarily for getting you the information, about whatever you're having done, so you can plan for it," she said.

Most of us don't have to plan for anything except our co-pay. But about 15 percent of people with commercial insurance are in high-deductible plans, in which patients pay the full cost of an office visit or test up to the amount of their deductible, and that share is growing.

"As more and more members are faced with greater and greater cost share, this sort of information is really important," said Bill Gerlach, director of member decision support at Blue Cross.

To use these tools, you'll log in to your insurer's website. If you have a high deductible, the online calculator shows how much you've spent so far this year toward your deductible. If your coverage does not include a deductible, the tool will calculate the balance toward your out-of-pocket maximum.

All these numbers are confusing. Most of us haven't thought about shopping for health care or paid attention to how much we spend. The state and most of the insurers are rolling out education campaigns to help us wrestle with the previously hidden world of health care prices.

One last tip: Each insurer uses a different title for its calculator. Look for the Blue Cross cost calculator under "Find a Doctor." It's not as easy to find as Tufts' "Empower Me" page or Harvard Pilgrim's "Now iKnow."

Both Tufts and Harvard Pilgrim used Castlight Health to build and now run their shopping tools while Blue Cross contracted with Vitals.

Aetna was the first insurer in Massachusetts to offer cost and quality comparisons through its Member Payment Estimator. It's not clear if all insurers doing business in the Bay State met the Oct. 1 deadline, but all of the major players did. There is no penalty for those who failed to do so.

This story originally appeared at Kaiser Health News and is part of a reporting partnership that includes WBUR, NPR and Kaiser Health News. Kaiser Health News is an editorially independent program of the Henry J. Kaiser Family Foundation, a nonprofit, nonpartisan health policy research and communication organization not affiliated with Kaiser Permanente.

Robert VerBruggen - October 8, 2014

Yesterday, RealClearBooks ran my 20th-anniversary review of Richard Herrnstein and Charles Murray's The Bell Curve. One thing I didn't have room to cover, though, was one of the more striking assertions in the book: The authors claim that there isn't much of a racial gap in income once IQ is taken into account.

As it happens, there's an updated version of the data set Herrnstein and Murray used, the National Longitudinal Survey of Youth (NLSY): after the original 1979 cohort, the survey began following a new cohort of adolescents (ages 12-16) in 1997. I dug into the new data to see what I could find.

Having majored in journalism, I kept my analysis simple: I merely grabbed all the black and non-Hispanic white males who had both test-score data from when they were young (a percentile rank on the math and verbal portions of the ASVAB, a battery of cognitive tests) and income data from 2011 (when they were around their late 20s). Someone with more statistical training can no doubt take this analysis further -- with weights, adjustments for age, controls for other variables, etc. -- but I found some striking things just in the raw data.

The first thing I did, ignoring test scores, was to rank white and black men by income separately. Here are the results:

The difference may not be visually striking, but it's substantial, with the median white male earning about 30 percent more ($35,000) than the median black male ($27,000). This won't be a surprise to anyone familiar with inequality statistics.

Also note that the maximum income is the same for both groups. This isn't a weird coincidence; it happens because the data are "top-coded": to protect the privacy of survey participants, the NLSY groups together the top 2 percent of earners and assigns each of them the mean income for that group.
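The top-coding rule is easy to mimic. Below is a minimal Python sketch on made-up incomes (not NLSY data): pool the top 2 percent of values and give each member of the pool its mean.

```python
# Toy illustration of NLSY-style top-coding (made-up incomes, not survey data):
# the top 2 percent of earners are pooled, and each is assigned the pool's mean.

def top_code(incomes, top_share=0.02):
    """Replace the top `top_share` of values with their collective mean."""
    ranked = sorted(incomes)
    cutoff = max(1, round(len(ranked) * top_share))  # pool at least one value
    body, top = ranked[:-cutoff], ranked[-cutoff:]
    top_mean = sum(top) / len(top)
    return body + [top_mean] * len(top)

incomes = list(range(10_000, 110_000, 1_000))  # 100 toy incomes
coded = top_code(incomes)
print(coded[-3:])  # → [107000, 108500.0, 108500.0]
```

This is why the scatter plot shows a flat ceiling: every top earner in each group reports the identical, averaged value.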

Here is a much messier graph, with all of the men plotted by ASVAB percentile and income (and the top-coding much more obvious). The lines approximate the median income for each ASVAB score.

A lot of interesting stuff here. First, if you look just at the folks rich enough to be top-coded, you see that they come disproportionately from the top half (and especially the top quarter) of the ASVAB distribution. For the top four-fifths of the ASVAB distribution, blacks and whites who score the same indeed tend to earn similar incomes, with the median lines weaving back and forth across each other. At the bottom of the ASVAB distribution, however, blacks earn much less than whites who score the same. (The income spike for blacks with very high ASVAB scores seems to stem from a low sample size in that range.) And especially for whites, the income advantage of a higher ASVAB score isn't as great as you might expect -- whites with low scores tend to earn around $30,000, whites with high scores closer to $40,000, for example.
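The median lines described above boil down to a simple grouping. Here's a hedged Python sketch on made-up data (the `median_by_band` helper and the toy records are illustrative, not the NLSY extract): bucket respondents into ASVAB score bands and take the median income within each band.

```python
from statistics import median

def median_by_band(records, band_width=10):
    """records: (asvab_percentile, income) pairs -> median income per score band."""
    bands = {}
    for pct, income in records:
        bands.setdefault((pct // band_width) * band_width, []).append(income)
    return {start: median(vals) for start, vals in sorted(bands.items())}

# Made-up respondents: (ASVAB percentile, income)
toy = [(5, 28_000), (8, 31_000), (55, 33_000), (58, 37_000), (95, 41_000)]
print(median_by_band(toy))  # → {0: 29500.0, 50: 35000.0, 90: 41000}
```

Computing this separately for each racial group, then plotting band against median, reproduces the weaving median lines in the raw data.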

Obviously, there are a million ways we can go from here: education, incarceration, the recession, employment discrimination, affirmative action, etc. I'll be interested to see the data that's collected a few years from now, when the educated have had more time to establish careers and the economy has (hopefully) recovered. But to avoid stretching my abilities even further, I'll just encourage others to explore the data. You can customize your own data set here, download the spreadsheet I used here, and see the R code I used to make the above plots here.

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Obamacare Cancellations: Blame 'AV Drift'

Robert F. Graboyes - October 8, 2014

For the fifth autumn in a row, the Affordable Care Act (ACA) is spurring waves of health-insurance policy cancellations. As I explained in a recent video and article, some of the most recent cancellations (with more to come in future years) result from an odd and unnecessary regulation that sets up a conflict between the ACA's "metallic tiers" (bronze, silver, etc.) and a phenomenon we can call "actuarial value drift."

To understand the problem, let's begin with some history. In 2010, ACA supporters -- including the president -- told Americans, "If you like your health insurance, you can keep your health insurance." They pointed to the law's "grandfathering" provision, which allowed people to keep existing plans that lacked some ACA bells and whistles.

But in specifying the precise conditions under which you could keep your plan, this regulation virtually guaranteed you couldn't. Its restrictions effectively said, "If you like your insurance, you can keep your insurance -- as long as your employer isn't worried about cost and your insurer doesn't mind losing money."

The White House acknowledged this three months into the ACA, and the grandfathered policies soon began dropping away. Some remain on life support (via administrative delays), along with "grandmothered" policies, an ad hoc variation added to slow the arrival of cancellation notices.

At least grandfathering cancellations are time-limited; once grandfathered policies are gone, the problem is over. Metallic-tier cancellations, however, will be a permanent fixture, re-emerging every year.

Here's why: Under the ACA, insurers must restrict individual and small-group policies to four narrow tiers of "actuarial value" (AV) -- the average percentage of medical expenses a plan pays. The ranges are bronze (58 to 62 percent), silver (68 to 72 percent), gold (78 to 82 percent), and platinum (88 to 92 percent).

ACA supporters have referred to canceled plans as "subpar." But ironically, while a plan paying 59 percent of medical expenses is okay (because it lies in the ACA-approved bronze range), a plan paying 96 percent of expenses is considered too generous; any plan paying more than 92 percent is forbidden.

Narrow tiers would be bad enough in a static world -- like restricting men's trousers to sizes 18, 28, 38, or 48. Pity the guy with the 33" waist.

But it's even worse. A plan's AV changes over time, partly because the calculation is recalibrated annually by the Department of Health and Human Services (HHS). Annual increases in the ACA's out-of-pocket maximum will push a plan's AV downward. Future medical-care price increases will have a similar effect. The problem may go critical in 2017, when many of the ACA's temporary fixes fall away.
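Using only the percentage bands quoted above, a few lines of Python make the gap-and-drift problem concrete. The `tier_for` helper is an illustrative sketch, not anything from HHS's actual AV calculator.

```python
# Actuarial-value bands from the article; values are percentages.
TIERS = {"bronze": (58, 62), "silver": (68, 72),
         "gold": (78, 82), "platinum": (88, 92)}

def tier_for(av):
    """Return the metallic tier an actuarial value falls in, or None if it
    lands in a gap between tiers or outside them entirely (non-compliant)."""
    for name, (lo, hi) in TIERS.items():
        if lo <= av <= hi:
            return name
    return None

print(tier_for(59))  # → bronze (compliant)
print(tier_for(65))  # → None: the gap between bronze and silver
print(tier_for(57))  # → None: a bronze plan that drifted one point below its band
```

Note that only 20 of the 100 possible AV percentages are legal; a plan whose AV slips even slightly outside its four-point band becomes non-compliant, which is exactly the drift problem.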

With narrow tiers, AV drift forces insurers to either cancel policies or undergo considerable effort to push them back into compliance. Adjustments may render policies unprofitable. "Fixing" a plan means a new actuarial analysis and a costly, time-consuming process of submitting the amended plan to state and local officials for approval.

AV-drift cancellations will be worse for patients than the grandfathering cancellations. ACA plans have notoriously narrow provider networks, so unlike with earlier cancellations, switching plans nowadays often means switching doctors and hospitals. Even the act of tweaking an existing plan back into compliance may require the insurer to drop some doctors and hospitals from its network. Prepare for recurring news stories about cancer patients losing long-time oncologists.

Why did HHS decide to limit purchasers to such narrow AV bands? Ostensibly, narrow tiers were intended to provide "ease of comparison" -- ironic, given the mass confusion surrounding the exchanges. Commenters warned HHS that AV gaps could cause problems, but the agency decided consumers couldn't handle the range of choices they routinely enjoy in computers and cars and other types of insurance.

Insurance broker and InsureBlog contributor Pat Paule warned early on about AV drift (here and here). He and I co-authored a similar article here. This memo from the University of Virginia shows the somersaults required when a plan drifts out of its tier -- and the negative financial impact on those insured. Duke University's Chris Conover noted that if HHS had defined the tiers without gaps between, the drift problem wouldn't exist.

How big is this problem already, and more important, how much bigger will it get? Regrettably, these questions are impossible to answer, as HHS has not provided adequate ACA data. Insurance expert Robert Laszewski recently noted that we don't even have a reliable estimate of the single most basic number -- the number of people enrolled in ACA plans.

Robert F. Graboyes is a senior research fellow with the Mercatus Center at George Mason University.

An Easy Source of Wi-Fi Spectrum

Alan Daley - October 7, 2014

The Federal Communications Commission (FCC) is considering allowing the company Globalstar to use its existing 22 MHz of satellite spectrum to support Wi-Fi services in areas where its other services are in less demand. The proposal would increase U.S. Wi-Fi capacity by one-third in the 2.4 GHz band, is structured to avoid technical conflicts with other wireless broadband, and would deliver strong economic benefits. Because this satellite spectrum is adjacent to existing Wi-Fi spectrum, subscribers could immediately access the spectrum using their normal Wi-Fi devices.

When it comes to scarce spectrum, opportunities like this seldom arrive. And yet some doubters persist, as seen in recent stories about a short-seller who claims Globalstar's spectrum is "useless," in part because of the FCC's limits on its use. Meanwhile, consumer demand for wireless broadband is expected to increase twenty-fold over the next five years.

One potential source of bandwidth is underused TV stations. To mine that ore, the FCC is running a complex auction that might free up some spectrum for wireless broadband use. In contrast, the use of satellite spectrum for Wi-Fi is voluntary, simple, and entirely consistent with Congress's and the FCC's quest for additional broadband spectrum.

Allowing for more efficient use of assigned satellite spectrum makes sense because satellite services are not as intensely used in urban markets, due to the presence of many alternative wireline and wireless broadband service providers. This means that this satellite spectrum is not being fully utilized in non-rural markets -- which is exactly where Wi-Fi services are in the most demand.

Wi-Fi traffic is expected to leapfrog the volume of wireline Internet traffic by 2018, which makes getting more spectrum devoted to Wi-Fi a necessity. New Wi-Fi spectrum would give the public faster upload and download speeds, more consistent Internet responsiveness, and more Wi-Fi hotspots via satellite. It could boost GDP by $11 billion and add 90,000 jobs to the economy.

If the proposal is approved, the company "commits to deploy 20,000 free [terrestrial low power service] access points in the nation's public and non-profit schools, community colleges and hospitals." Further, the proposal would require Globalstar to provide its mobile satellite services free of charge to its customers in any federally declared disaster -- precisely when satellite may be the only game in town.

Congress and the FCC need to get more spectrum onboard for wireless broadband services. The Globalstar proposal asks little or nothing from the FCC, the taxpayer, or the consumer. The benefits it delivers will be widely spread among the public.

Again, opportunities like this seldom come our way. The proposal should be welcomed and permission granted expeditiously.

Alan Daley writes for the American Consumer Institute Center for Citizen Research, a nonprofit educational and research organization.

Bing Bai & Taz George, Urban Institute - October 6, 2014

The housing bust, and the stunted recovery that followed, were back-to-back blows to many African American and Hispanic communities, where more households entered the market just before the crash, only to be locked out once prices began to bounce back. We visualized this story by mapping over 100 million mortgages from 2001 to 2013 and found that it held true in virtually every part of the country.

Housing prices have improved since the recession, but the number of mortgages taken out to purchase a home is still far below where it was at the peak of the market: 2.7 million in 2013, compared to 6.2 million in 2005. The decline is much sharper for African American and Hispanic borrowers (73 percent) than for white borrowers (48.2 percent). Credit availability is a big part of the story. In 2005, lending standards were extraordinarily relaxed, with lenders issuing mortgages to many households that would not typically be considered creditworthy. Today, the opposite is true; standards have tightened considerably, and the shift has disproportionately affected African American and Hispanic households, which tend to have lower credit profiles.

United States

Loans to African-Americans and Hispanics 2005-2013: -73.0 percent

Loans to non-Hispanic whites: -48.2 percent

Difference: 24.8 percentage points
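For readers checking the arithmetic, these figures combine two different measures: the declines are percent changes in loan counts, while the "difference" rows subtract one percent change from another, yielding percentage points. A quick Python sketch:

```python
def pct_decline(before, after):
    """Percent decline from `before` to `after`."""
    return (before - after) / before * 100

overall = pct_decline(6.2, 2.7)  # millions of purchase mortgages, 2005 vs. 2013
gap = 73.0 - 48.2                # minority decline minus white decline, in points
print(round(overall, 1), round(gap, 1))  # → 56.5 24.8
```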

Still, some metropolitan areas' recoveries stood out as extraordinarily uneven, with mortgages for white and Asian borrowers experiencing far smaller declines than for African American and Hispanic borrowers. Here, we rank the top 5 least equitable recoveries.

Number five: Detroit-Warren-Dearborn, MI

Loans to African-Americans and Hispanics 2005-2013: -81.3 percent

Loans to non-Hispanic whites: -46.8 percent

Difference: 34.6 percentage points


Number four: San Francisco-Oakland-Hayward, CA

Loans to African-Americans and Hispanics 2005-2013: -84.8 percent

Loans to non-Hispanic whites: -49.8 percent

Difference: 35.0 percentage points


Number three: San Jose-Sunnyvale-Santa Clara, CA

Loans to African-Americans and Hispanics 2005-2013: -88.4 percent

Loans to non-Hispanic whites: -52.6 percent

Difference: 35.8 percentage points


Number two: Santa Rosa, CA

Loans to African-Americans and Hispanics 2005-2013: -87.1 percent

Loans to non-Hispanic whites: -47.9 percent

Difference: 39.3 percentage points


Number one: Grand Rapids-Wyoming, MI

Loans to African-Americans and Hispanics 2005-2013: -65.9 percent

Loans to non-Hispanic whites: -21.4 percent

Difference: 44.5 percentage points

Bing Bai is a research associate in the Urban Institute's Housing Finance Policy Center, and Taz George is a research assistant in the Institute's Metropolitan Housing and Communities Policy Center. This piece originally appeared on the Urban Institute's MetroTrends blog.
