
Green-Tip Ammo: An Update

Robert VerBruggen - March 3, 2015

Regarding my story from this morning, I was able to talk to someone with knowledge of the situation, who spoke on condition of anonymity. Green-tip rounds "were classified as AP [armor piercing] in 1986 because the steel penetrator is what is considered the core," my source said. "It's the regulatory process, and everyone can argue semantics and perhaps it's not written very well, but that is the story behind it. ... Having the additional component behind the tip isn't enough to get it out of AP classification."

The source also said that the Treasury Department (which then housed the ATF) corresponded with the legislators who were drafting the law in 1985, and in those discussions it was made clear that green-tip ammo would be classified as armor-piercing. "Apparently that didn't prevent anything from moving forward," the source said.

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Is This Ammo 'Armor Piercing'?

Robert VerBruggen - March 3, 2015

[Update: After reading this post, please see this one for additional comments from a source with knowledge of the situation.]

The Bureau of Alcohol, Tobacco, Firearms, and Explosives is trying to ban "green tip" ammo, a common round for AR-15 rifles. This move wouldn't be possible without a decision that was made in the mid-1980s -- and may have been made in error.

In 1986, Congress and President Reagan prohibited "armor piercing" bullets. The law didn't define the term literally -- any rifle round that can reliably take down a deer will also pierce "bulletproof" vests -- but rather focused on rounds that can be fired from a handgun and are made of unusually hard metals. (Despite their lower lethality, handguns are easier to conceal and more popular with criminals.) And even when it came to these, the Treasury secretary, whose department then included ATF, could exempt ammo he considered to be "primarily intended to be used for sporting purposes." Shortly thereafter, the fateful decision was made: Green-tip rounds, which contain steel, were classified as "armor piercing" but given the sporting-purposes exemption.

The ammo is popular with everyone from target shooters to hunters. But ATF wants to yank the exemption, noting that, while handguns capable of firing these rifle rounds were not commercially available in the 1980s, they are available today.

Critics have been asking: Are these rounds really "armor piercing," legally speaking? Or did the ATF steal a base 29 years ago, making the legality of this ammo reliant on a subjective analysis of "sporting purposes," rather than on the simple text of the statute?

Here's the definition from the 1986 law:

The term 'armor piercing ammunition' means a projectile or projectile core which may be used in a handgun and which is constructed entirely (excluding the presence of traces of other substances) from one or a combination of tungsten alloys, steel, iron, brass, bronze, beryllium copper, or depleted uranium.

(This definition was supplemented in the 1990s, after the decision had been made, but the addition isn't relevant to green-tip ammo.)

As noted above, green-tip ammo does contain steel -- but just in its famous colored "penetrator" tip. It also contains a lot of lead, a softer metal not restricted in the law. It cannot truthfully be said that the projectile or its core is made up "entirely" of the regulated metals. Yet the ATF's proposal states as fact that the ammo has a "steel core" -- and a recent press release seems to warp the standard, saying bullets are covered if they merely "include" the named hard metals.

Three decades out, it's not clear what the ATF and the Treasury secretary were thinking. I can't find any trace of it in the bureau's industry circulars, for example, including the 1986 one focused on armor-piercing ammo. And while federal guides for firearm dealers have long listed green-tip ammo as an exemption (here's the one from 1988-1989), they do not explain why an exemption was necessary. Now that the exemption is in jeopardy, this is of more than just historical import.

Yesterday morning, I got in touch with ATF to find out what the bureau's argument might be. The bureau was not able to give me a comment within the day, but I will update this post when it sends one.

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Five Things to Know About the Supreme Court Case Challenging the Health Law

Julie Rovner, Kaiser Health News - March 2, 2015

The Affordable Care Act is once again before the Supreme Court.

On March 4, the justices will hear oral arguments in King v. Burwell, a case challenging the validity of tax subsidies helping millions of Americans buy health insurance if they don't get it through an employer or the government. If the court rules against the Obama administration, those subsidies could be cut off for everyone in the three dozen states using the federal exchange website. A decision is expected by the end of June.

Here are five things you should know about the case and its potential consequences:

1. This case does NOT challenge the constitutionality of the health law.

The Supreme Court has already found the Affordable Care Act is constitutional. That was settled in 2012's NFIB v. Sebelius.

At issue in this case is a line in the law stipulating that subsidies are available to those who sign up for coverage "through an exchange established by the state." In issuing regulations to implement the subsidies in 2012, however, the IRS said that subsidies would also be available to those enrolling through the federal health insurance exchange. The agency noted Congress had never discussed limiting the subsidies to state-run exchanges and that making subsidies available to all "is consistent with the language, purpose and structure" of the law as a whole.

Last summer, the U.S. Court of Appeals for the Fourth Circuit in Richmond ruled that the regulations were a permissible interpretation of the law. While the three-judge panel agreed that the language in the law is "ambiguous," they relied on so-called "Chevron deference," a legal principle that takes its name from a 1984 Supreme Court ruling that held that courts must defer to a federal agency's interpretation as long as that interpretation is not unreasonable.

Those challenging the law, however, insist that Congress intended to limit the subsidies to state exchanges. "As an inducement to state officials, the Act authorizes tax credits and subsidies for certain households that purchase health insurance through an Exchange, but restricts those entitlements to Exchanges created by states," wrote Michael Cannon and Jonathan Adler, two of the fiercest critics of the IRS interpretation, in an article in the Health Matrix: Journal of Law-Medicine.

In any case, a ruling in favor of the challengers would affect only the subsidies available in the states using the federal exchange. Those in the 13 states operating their own exchanges would be unaffected. The rest of the health law, including its expansion of Medicaid and requirements for coverage of those with pre-existing conditions, would remain in effect.

2. If the court rules against the Obama administration, millions of people could be forced to give up their insurance.

A study by the Urban Institute found that if subsidies in the federal health exchange are disallowed, 9.3 million people could lose $28.8 billion of federal help paying for their insurance in just the first year. Since many of those people would not be able to afford insurance without government help, the number of uninsured could rise by 8.2 million people.

A separate study from the Urban Institute looked at those in danger of losing their coverage and found that most are low- and moderate-income white working adults who live in the South.

3. A ruling against the Obama administration could have other effects, too.

Experts say disallowing the subsidies in the federal exchange states could destabilize the entire individual insurance market, not just the exchanges in those states. Anticipating that only those most likely to need medical services will hold onto their plans, insurers would likely increase premiums for everyone in the state who buys their own insurance, no matter where they buy it from.

"If subsidies [in the federal exchange] are eliminated, premiums would increase by about 47 percent," said Christine Eibner of the RAND Corporation, who co-authored a study projecting a 70 percent drop in enrollment.

Eliminating tax subsidies for individuals would also impact the law's requirement that most larger employers provide health insurance. That's because the penalty for not providing coverage only kicks in if a worker goes to the state health exchange and receives a subsidy. If there are no subsidies, there are also no employer penalties.

4. Consumers could lose subsidies almost immediately.

Supreme Court decisions generally take effect 25 days after they are issued. That could mean that subsidies would stop flowing as soon as July or August, assuming a decision in late June. Insurers can't drop people for non-payment of their premiums for 90 days, although they have to continue to pay claims only for the first 30.

Although the law's requirement that individuals have health insurance would remain in effect, no one is required to purchase coverage if the lowest-priced plan in their area costs more than eight percent of their income. So without the subsidies, and with projected premium increases, many if not most people would become exempt.

5. Congress could make the entire issue go away by passing a one-page bill. But it won't.

All Congress would have to do to restore the subsidies is pass a bill striking the line about subsidies being available through exchanges "established by the state." But given how many Republicans oppose the law, leaders have already said they will not act to fix it. Republicans are still working to come up with a contingency plan should the ruling go against the subsidies. Even that will be difficult given their continuing ideological divides over health care.

States could solve the problem by setting up their own exchanges, but that is a lengthy and complicated process and in most cases requires the consent of state legislatures. And the Obama administration has no power to step in and fix things either, Health and Human Services Secretary Sylvia Burwell said in a letter to members of Congress.

This piece originally appeared at Kaiser Health News (KHN), a nonprofit national health policy news service at which Julie Rovner is the Robin Toner distinguished fellow. 

A New Bipartisan Agenda

Patrick Horan - February 28, 2015

This week, No Labels held a cocktail reception at the Library of Congress. The purpose was to honor the members of Congress who have received the organization's "Problem Solvers Seal of Approval" -- and to promote a four-part outline the group calls the National Strategic Agenda.

Founded in 2010 by veteran Democrats and Republicans, No Labels seeks to move America to a "politics of problem solving" and away from constant gridlock. Former senator Joe Lieberman (D., Conn.) and former Utah governor Jon Huntsman Jr. (R.) serve as co-chairs, and the group has state- and university-level chapters as well.

The reception featured a line of prominent speakers, including Lieberman and Huntsman, No Labels vice chairs Al Cardenas and Mack McLarty, and freshman senator Cory Gardner (R., Colo.). Senator Gardner said that receiving the No Labels Seal of Approval was pivotal in his race last November against incumbent Democratic senator Mark Udall. "If you run as a problem solver and talk about the Seal of Approval, then you start governing that way and leading that way," he said.

The National Strategic Agenda comprises four goals: First, create 25 million jobs over the next ten years; second, secure Social Security and Medicare for the next 75 years; third, balance the federal budget by 2030; and fourth, make America energy-secure by 2024. Embracing these goals, which enjoy bipartisan support, is intended to be the first step toward finding common ground.

"There is currently no framework for decision-making in this country, and what No Labels is advocating for is a new framework where leaders first start with agreeing to goals and then to policy," No Labels co-founder Nancy Jacobson told me. "That's why both Gingrich and Clinton agree to our approach. It's exactly what they did in the '90s -- first, they agreed to the goal of a balanced budget, and then they got to deciding on specifics." Following the 1994 Republican Revolution, despite partisan gridlock and government shutdowns in 1995 and 1996, President Clinton signed the bipartisan Balanced Budget Act of 1997. Welfare reform was another achievement both parties embraced during this time.

Fifty-eight National Strategic Agenda supporters -- 34 Republicans and 24 Democrats -- won their congressional races on Election Day 2014. Only four lost. This large number of victories could be a sign of voters tiring of the stalemate that has defined Congress over the past few years.

Although voters generally want a government divided across party lines to ensure no single party has too much power, a December 2014 Pew poll shows that 71 percent of Americans think a failure of Republicans and Democrats to work together over the next two years would hurt the nation "a lot" and 16 percent believe it will hurt "some." Those who have embraced the mission of working across party and ideological lines have formed the Problem Solvers Caucus, currently chaired by Rep. Kurt Schrader (D., Ore.) and Rep. Reid Ribble (R., Wis.).

No Labels hopes the National Strategic Agenda can influence the debate in 2016. The group will award Problem Solver Seals of Approval during the primaries to candidates committed to the goals.

"We think the first 100 days of the next president's term is the best chance to get to serious problem solving," said Jacobson. "We also believe the next president will be that person who can articulate better than his or her competitors why he or she is the true problem solver."

Pat Horan is a research associate at RealClearPolitics and a contributor at RealClearHistory. He is a recent graduate of the College of the Holy Cross.

Robert VerBruggen - February 27, 2015

The FCC has decided to push forward with net neutrality (see our update today for two perspectives). This means that, assuming the decision holds up, Internet-service providers will no longer be able to charge companies like Netflix for higher speeds.

The fear is that, absent regulations, ISPs will abuse the ability to charge for priority, extorting Internet businesses and blocking services that compete with products the ISPs offer. Others counter that "fast lanes" are actually the most user-friendly way of addressing the fact that some customers use far more Internet service than others. The alternatives -- charging based on usage directly, or cutting speeds for the heaviest users after they hit a bandwidth cap -- make people think twice every time they click a cat video. Maybe it's best if people who sign up for bandwidth-heavy services like Netflix pay for their usage through their bills to those companies, rather than through their Internet bills.

Without taking sides, it's worth putting some hard numbers on the light-vs.-heavy-user problem. Here's a chart from a 2014 report by the networking-equipment company Sandvine. Along the X axis, North American Internet subscribers are ranked from heaviest to lightest, and the Y axis shows their cumulative share of traffic. "Upstream" refers to data sent from the customer to someone else (e.g. outgoing e-mails); "downstream" refers to data that flows to the user's computer (e.g. websites and streaming media). For individual customers, as opposed to businesses like Netflix, most data flows downstream, as seen in the fact that the downstream and aggregate numbers are basically the same.

Half of users account for more than 95 percent of traffic.

Later in the report, the company breaks users into three groups: heavy users of media (who may have "cut the cord" and canceled their cable, replacing it with Internet streaming), moderate users, and light users. The heavy users, the top 15 percent when it comes to audio and video streaming, use a mean of 212 GB per month in total. Almost three-quarters of this traffic is specifically for streaming, which they spend an estimated average of 100 hours per month doing. The bottom 15 percent of media streamers use, on average, less than 5 GB per month. And the 70 percent of users who fall in the middle use 29 GB per month; almost half of their traffic is for streaming, even though they spend only 9 hours a month doing it.
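To put those figures side by side, here's a back-of-the-envelope computation. The group totals, streaming shares, and hours are the Sandvine estimates quoted above; the per-hour rates are my own arithmetic, not a figure from the report:

```python
# Figures quoted from the Sandvine report above; the per-hour rates are
# derived here for illustration, not taken from the report itself.
groups = {
    # name: (mean total GB/month, share of traffic that is streaming, streaming hours/month)
    "heavy":    (212, 0.75, 100),
    "moderate": (29,  0.50, 9),
}

def streaming_rate(total_gb, streaming_share, hours):
    """GB of streaming traffic consumed per hour spent streaming."""
    return total_gb * streaming_share / hours

for name, (gb, share, hours) in groups.items():
    print(f"{name}: ~{streaming_rate(gb, share, hours):.1f} GB per streaming hour")
```

Interestingly, both groups come out to roughly 1.6 GB per hour of streaming; on these numbers, the heavy users differ mainly in how many hours they watch, not in how data-hungry each hour is.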

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Common Core Confusion: Blame Supporters

Neal McCluskey - February 26, 2015

As students across the country head into Common Core testing, a new poll reveals that Americans are confused about what, exactly, the Core is. But don't blame them. Blame Core advocates, whose rush to install nationwide curriculum standards has left Americans befuddled and angry.

What is the Core? Supposedly, just reading and math standards -- basic guidelines about what students should be able to do -- voluntarily adopted by states. But that is not how the public perceives it. According to a new Fairleigh Dickinson University poll, only 17 percent of Americans hold favorable opinions of the seemingly innocuous Core, and two-thirds think it covers specific content: at least one topic out of sex education, evolution, global warming, and the American Revolution.

Some of the public is misinformed. Unfortunately, that is in part because what Core advocates tell us is often quite misleading.

Start by looking at what the pollsters -- whose press release was very pro-Core -- assert. While it is true the Core does not explicitly tackle the four hot-button subjects mentioned above, it touches all of them, in one case forcefully. The English portion has sections on "literacy in history/social studies, science, and technical subjects," and more explicitly says that students in grades 11 and 12 will "analyze ... U.S. documents of historical and literary significance ... including the Declaration of Independence ... for their themes, purposes, and rhetorical features." Inextricably connected to the "themes" and "purposes" of the Declaration is, of course, the American Revolution. Yet the pollsters suggest it is flat wrong to think the Core includes this topic.

Then there are those national Common Core-aligned tests millions of students are facing. If they ask about global warming or sex education, those topics essentially become Core content.

The Core is not simply guidelines, but content, and that is without even mentioning the math standards, which are much more specific when it comes to dictating material than the reading standards.

Of course, the average person -- with a job, family, and countless political issues vying for his or her attention -- has little time to research any given topic, so some confusion is to be expected. But the way the Core became policy -- rushed through the back door -- made public understanding essentially impossible.

The key was the 2009 federal "stimulus," which allocated $4.35 billion for what became the Race to the Top (RTTT) program. While all eyes were on the Great Recession, RTTT made states compete to win federal dough, and among several things, they had to promise to adopt standards common to a "majority" of states -- a parameter only the Core met -- to truly compete. And applications were due before the final version of the Core was even published.

After RTTT came No Child Left Behind waivers, cementing adoption by giving states only two standards options: Either adopt the Core, which most states had already promised under RTTT, or have their own standards certified by a state university system as "college- and career-ready."

Core supporters almost certainly lobbied to include the Core in RTTT, and both RTTT and waivers were in line with what the National Governors Association and the Chief State School Officers had called for in their 2008 report "Benchmarking for Success: Ensuring U.S. Students Receive a World-Class Education." This was not truly a voluntary adoption of common standards, but was pushed by federal "incentives," including funding and regulatory relief.

Finally, much blame for confusion lies at the feet of advocates who, in trying to quell a revolt that erupted when the Core hit districts and the public finally became aware of it, have tried to sell the Core as both content-heavy and content-bereft. All things to all people.

A good example is E.D. Hirsch, author of the famous book Cultural Literacy: What Every American Needs to Know. In 2013 Hirsch endorsed the standards, writing that "they break the fearful silence about the critical importance of specific content." Then what did he write? The Core actually contains no "specific historical, scientific, and other knowledge that is required for mature literacy." It just embraces the idea of content.

Or consider former Secretary of Education William Bennett, who in 2014 wrote that he once polled hundreds of people about what all students should know, and "almost every person agreed on ... the Bible, Shakespeare, America’s founding documents ... 'Huckleberry Finn' and classical works of mythology and poetry." He then asked, "Why ... is Common Core drawing such heavy fire?" Answer: "A myth persists that [it] involves a required reading list."

See why the public is confused?

Neal McCluskey is the associate director of the Cato Institute's Center for Educational Freedom and author of Behind the Curtain: Assessing the Case for National Curriculum Standards.

Robert VerBruggen - February 25, 2015

Yes, says the National Student Clearinghouse -- an organization founded by the "higher education community" -- in a report we feature in White Papers & Research today. The organization points out that a substantial proportion of students graduate from schools other than the one they started at, which can throw off the Department of Education's attempts to track completion.

The official Ed Department statistic is that around 40 percent of full-time, four-year college students fail to graduate within six years, a number I've used in my writing in the past. The NSC's number, based on extensive data tracking students over time, is under 20 percent at both public and private four-year schools. The situation is worse at two-year schools and for part-time students, though.

To get a slightly different perspective, I pulled some numbers from the Census's Current Population Survey. Specifically, I was interested in seeing how many people had only "some college" despite being old enough that they should have graduated -- and how many of them went on to graduate later. Here's a chart that tracks the cohort of people who were 24 in 2003 (around the time they should have graduated if they started at age 18 and went for six years) through the time they were 35 in 2014. It represents those with "some college" as a percentage of people who reported either some college or at least a two-year degree.

This method overstates the dropout problem by including older students, but it's interesting in that it shows college completion continuing to rise significantly into the late 20s, and somewhat into the 30s.
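For readers who want to replicate the metric, here is a minimal sketch of the calculation described above -- "some college" as a share of everyone reporting either some college or at least a two-year degree. The cohort counts are hypothetical placeholders, not actual CPS tabulations:

```python
# "Some college" as a percentage of (some college + two-year degree or higher).
# The counts below are hypothetical, for illustration only; real figures
# would come from CPS educational-attainment tabulations.
def some_college_share(some_college, assoc_or_higher):
    """Percent of college starters (by this proxy) who haven't finished."""
    return 100 * some_college / (some_college + assoc_or_higher)

# hypothetical counts (thousands) for one birth cohort at two ages
print(round(some_college_share(1200, 2400), 1))  # at age 24
print(round(some_college_share(900, 2700), 1))   # at age 35
```

With these placeholder counts, the share falls from a third at age 24 to a quarter at 35, mirroring the pattern in the chart: completion keeps rising well past the traditional six-year window.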

Even then, however, about one-quarter of people who've started college haven't finished. So, things may not be as bad as we thought, but they're not exactly great, either. A whole lot of people are investing time and money in college without graduating.

You can see the spreadsheet where I compiled my Census data here. Hat tip to Bloomberg.

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

'Was the Ferguson Grand Jury Misled?'

Robert VerBruggen - February 24, 2015

That's the question posed by Caleb Mason at The Crime Report. It's a technical question, but a good one. The problem stems from a failure of Missouri legislators to update a law that the Supreme Court invalidated.

The provision in question appeared to give police the right to kill any felon if it was necessary to prevent his escape, regardless of the seriousness of the felony or the threat the felon posed. A number of people called attention to the rule as the Michael Brown case unfolded, especially when the narrative held that Brown had been shot in the back.

Problem was, the rule on Missouri's books wasn't the rule courts were supposed to apply. In the 1985 case Tennessee v. Garner, the Supreme Court had held that cops can't kill fleeing felons unless it's necessary to prevent escape and the felon poses a serious danger.

One Saturday morning in August, I set out to find the state's official jury instructions on the matter. It was difficult, because Missouri doesn't make its instructions publicly available without a huge fee, but after some Googling I was able to find a police-department document summarizing them. I whipped up a quick blog post showing that, as the Supreme Court requires, the "fleeing felon" rule is not applied in Missouri courts. Andrew Branca of Legal Insurrection, who has a "well-equipped" legal library, later posted the full text.

Apparently that was more effort than the prosecutors in the case bothered to put in. They actually instructed jurors in the fleeing-felon rule until the very last day, at which point they handed out updated instructions.

Mason's post reveals the text of those revised instructions:

A law enforcement officer need not retreat or desist from efforts to effect the arrest, or from efforts to prevent the escape from custody, of a person he reasonably believes to have committed an offense because of resistance or threatened resistance of the arrestee. The use of force, including deadly force by a law enforcement officer in making an arrest or in preventing an escape after arrest, is lawful in certain situations. An arrest is lawful if the officer reasonably believes that the person being arrested has committed or is committing a crime.

The officer is entitled to use such force as reasonably appears necessary to effect the arrest or prevent the escape. A law enforcement officer is not entitled to use deadly force unless he reasonably believes that the arrestee was attempting to escape by use of a deadly weapon or that the arrestee would endanger life or inflict serious physical injury unless arrested without delay; and even then, the officer may use force only if he reasonably believes the use of such force is immediately necessary to effect the arrest or prevent the escape.

What's particularly odd is that these are not the official instructions either. I think two words of difference are especially important. From those instructions as provided by Branca:

In making a lawful arrest or preventing escape after such an arrest, a law enforcement officer is entitled to use such force as reasonably appears necessary to effect the arrest or prevent the escape.

A law enforcement officer in making an arrest need not retreat or desist from his efforts because of resistance or threatened resistance by the person being arrested.

But in making an arrest or preventing escape, a law enforcement officer is not entitled to use deadly force, that is, force which he knows will create a substantial risk of causing death or serious physical injury, unless he reasonably believes that the person being arrested is attempting to escape by use of a deadly weapon or that the person may endanger life or inflict serious physical injury unless arrested without delay (emphasis added).

And, even then, a law enforcement officer may use deadly force only if he reasonably believes the use of such force is immediately necessary to effect the arrest or prevent the escape.

These instructions aren't great by any stretch; they still say an officer is "entitled to use such force as reasonably appears necessary to effect the arrest or prevent the escape," which is not true. Here, however, the word "but" makes clear that the second rule stated (that lethal force isn't okay unless the person poses a threat or is escaping with a deadly weapon) is an exception to the first. And the word "is" makes clear that the escape-with-a-deadly-weapon must be in progress to justify deadly force in response, whereas the prosecutors' instructions imply it's okay to shoot someone who tried to escape with a deadly weapon previously but now is escaping without one. (That the sentence switches tenses makes this especially problematic: "unless he reasonably believes that the arrestee was attempting to escape by use of a deadly weapon.")

It's unlikely that this affected the grand jury's decision. Brown was not shot in the back, and Darren Wilson didn't claim he killed Brown to prevent an escape. Mason himself says the decision was "probably correct." But as Mason notes, the jury didn't have to believe Wilson, and it's at least possible that something important hinged on these instructions. In particular, Brown may have tried to escape with a deadly weapon by grabbing Wilson's gun but then have run away without it.

At the very least, this is incredibly embarrassing for the prosecutors. All they had to do was take the pre-written jury instructions and use them to instruct the jury. Instead, they gave the jury a summary of the law that had been outdated for 30 years, and failed to give completely accurate information even when they found their mistake.

[Update: Fun fact: In a 2014 bill that became law a few months before the Brown shooting without the governor's signature (in Missouri a bill becomes a law if the governor ignores it), the Missouri legislature changed the text of this provision slightly. The new text, which goes into effect in 2017, still does not reflect Tennessee v. Garner.]

[Update II: I see a few objections are coming up in the comments, so I'll address them briefly. First of all, there are quotation marks around my title because it's the title from Mason's piece. I think it's a legitimate question even if, on closer inspection, the answer turns out to be "not really." Second, yes, I am aware that a regular jury and a grand jury are not the same thing, and nowhere have I claimed that prosecutors are legally required to use the official jury instructions to explain the law to a grand jury. (Obviously they're not, as they didn't, twice.) What I am saying is, given that the law needed to be explained, simply using the official instructions (A) would have stopped the initial mistake from happening and (B) would not have raised the questions that the "revised" instructions do. I am further aware that Tennessee v. Garner had to do with civil liability, but this is irrelevant, not only because the ruling flatly said that the fleeing-felon rule is "unconstitutional" (there are interesting arguments to be made about whether defendants can rely on unconstitutional laws), but because Missouri criminal proceedings actually follow the ruling (see the Branca post linked above for a discussion).]

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

Robert VerBruggen - February 20, 2015

The Koch Brothers have teamed up with some liberal groups, including the Center for American Progress and the ACLU, to start a justice-reform initiative called the Coalition for Public Safety. Cue the stories about "strange bedfellows" and a "bipartisan consensus" that we're locking too many people up for too long.

There is a significant opportunity here: Most Americans are now okay with legalizing marijuana, for example, and surveys indicate overwhelming support for reducing sentences for "low-risk, nonviolent" offenders. But as I've noted before, the public is definitely not okay with reducing sentences across the board: More than 60 percent of Americans still think their local courts are not harsh enough with criminals, a 25- to 30-point drop since the '70s, '80s, and early '90s, but still a majority. Even the broad notion that we have "too many people in prison" nets just 45 percent support, with 41 percent believing we have the right amount or not enough. (The rest say they don't know.)

So, before we get too excited, we might want to get a grip on how many offenders are plausible candidates for reduced sentences. Here are some numbers we should bear in mind as we think through a strategy for reform.

First of all, here are the broad offense categories for the prison population -- state and federal, sentences of one year or more -- as of 2012. "Public order" offenses include immigration violations, as well as "weapons, drunk driving, and court offenses; commercialized vice, morals, and decency offenses; and liquor law violations."

For state prisoners released in 2012, the median time served was 28 months for violent offenses (ranging from 17 months for assault to 153 for murder and nonnegligent manslaughter), 12 months for property offenses, 13 months for drug offenses, and 12 months for public-order offenses. About three-quarters of released state prisoners, meanwhile, are arrested again within five years.

Given marijuana-law trends, drugs seem to have the most potential. What kinds of drugs are involved? Sometimes pot, but usually cocaine, which Americans, rightly or wrongly, absolutely do not support legalizing. Here's a chart from a 2004 Justice Department report:

What exactly were they doing with those drugs? What was their criminal history? How long were their sentences? Here's another helpful chart from the same report:

There's plenty of room for change here -- legal pot, lighter sentences for possession of other drugs, etc., could reduce the prison population by thousands. But most prisoners aren't there for drugs, and most drug prisoners have prior offenses and were charged with more than simple possession. It will be a challenge to sort out which offenders we can let out without risking a public backlash, and it's good to see some bipartisan cooperation on this project.

Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen

[Note: The numbers in the paragraph following the first chart have been corrected.]

America: Land of the 'Mostly Free'

Jim DeMint - February 20, 2015

When measuring which nations are the most economically free, you expect America, "land of the free," to finish at or near the top. But in a new study, it didn't.

For the last 21 years, the Heritage Foundation and the Wall Street Journal have collaborated on the annual Index of Economic Freedom, which ranks the economies of 178 nations from the most to the least free. These range from famously free-wheeling economies such as Hong Kong (No. 1 on the index) and Singapore (2) to oppressive states like Zimbabwe, Cuba, and North Korea (ranked 175th, 177th, and 178th, respectively).

Since 2008, economic freedom has been steadily declining in the United States. In last year's index, America fell from the ranks of the world's ten freest economies. Our score improved slightly in the 2015 edition, but we're still stuck in 12th place, right behind Denmark.

I'm happy for the eleven nations that finished ahead of us, but the fact remains that there is no reason we can't be doing better. A lot better.

A country's ranking is based on several indicators of how freely citizens are allowed to buy, sell, build businesses, and live their lives without undue burdens from the government or lawlessness. How strong are property and labor rights? What's the size of the government? How high are taxation and spending, and how much corruption is there? These are judged via our published methodology, independent of the editors' opinions on one country or another.

The reasons for America's long economic stagnation are as familiar as our government's refusal to address them: a massive regulatory state that slows innovation, grant programs that pick winners and losers in the marketplace, and a Byzantine tax code as long as seven copies of War and Peace stacked together.

Too many corporate interests manage to escape the tangled web of big government only by virtue of their political clout: Lobbying power wins these companies special tax carve-outs and subsidies, as well as a role in helping bureaucrats craft regulations. Their smaller competitors -- and the taxpayers -- aren't so lucky.

The index is widely read across the globe, and has been used in various countries to build better policies. I hope that our elected leaders can take similar inspiration to boost America's standing.

According to the Bureau of Labor Statistics, more than 92 million Americans are out of the labor force. I'm sure many of them feel the sting of the policies that put us in 12th place. The American public doesn't need an index to know that our economy isn't that great, or how tough it is to start a business under miles of red tape.

That's a big reason why the voters sent a new crop of leaders to Washington last November, ones who promised to change the status quo. The 114th Congress is primed to reform the situation with job creation, all-of-the-above energy solutions, deregulation, and tax reform -- as long as the White House doesn't stand in the way.

Unfortunately, the current administration doesn't seem headed in that direction. President Obama recently promised more vetoes in his State of the Union address than any chief executive in living memory. Revealingly, while visiting Australia (No. 4) last year, he made a point of mocking Prime Minister Tony Abbott for prioritizing the Australian economy over radical environmental initiatives. As long as we have leaders who needlessly antagonize our allies because of their pro-growth policies, it's unlikely that the U.S. will be breaking back into the top ten anytime soon.

The president has a choice to make. Will he spend the final two years of his presidency scrambling to save Obamacare from itself and issuing rafts of executive orders? Or will he cooperate with the new Congress on commonsense reforms to give all Americans -- businesses, workers, and families alike -- more economic freedom?

If he chooses the latter, I'll be the first to congratulate him. Americans deserve better than 12th place.

Jim DeMint is the president of the Heritage Foundation.

Patent Litigation: Slow, Costly, Error-Prone

Alan Daley - February 20, 2015

Litigation between patent holders and those accused of infringement is resolved in federal district courts. These courts impose large, unproductive process costs that eventually hit consumers, and outcomes there vary widely depending on a number of legally irrelevant factors. It is long past time for Congress to address this.

First of all, some judges issue a "summary judgment" -- meaning a verdict without a full trial -- far more often than others do. A few do so more than half the time. That may seem arbitrary, but it can legitimately save costs and time.

At the opposite end of the spectrum are trials where a jury decides the award. While 24 percent of trials with damages during the years 1995 to 1999 were jury trials, that figure rose to 61 percent in the 2010-13 time frame. In the latter period, the median damage award was $4.3 million -- with jury trials giving a median of $15 million, compared to a paltry $400,000 for bench trials.

Further, getting a trial underway can take a long time. A "time to trial" of two to four years is not uncommon. Big time delays can warp litigants' view of what is a reasonable settlement. Some are more able to endure years without cash flow, and that may play a role in their decision to settle.

With anomalies like these, it can be tempting for the plaintiff's legal team to go court shopping. The payoff can be huge from getting the case heard before the right judge. Indeed, patent cases seem to pile up in particular courts: The district courts of Texas Eastern, Texas Northern, Florida Middle, and Delaware are atypically "favorable venues for patent holders with shorter time-to-trial, higher success rates and greater median damages awards."

A district-court decision isn't the end of the ordeal, however: A surprising 71 percent of district-court patent decisions are appealed to the Federal Circuit, and in 75 percent of those appeals, the higher court reverses at least part of the original decision. That means 53 percent of district-court decisions require some sort of correction. This likely stems not from attorney or judge ineptness but from other factors, like jury emotions and the vagueness of the law. We should be grateful that patent infringement is not a capital offense.
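The 53 percent figure follows directly from multiplying the two rates cited above; a quick sanity check (a sketch using only the percentages from this piece):

```python
# Share of district-court patent decisions that are appealed to the
# Federal Circuit, and share of those appeals in which the higher court
# reverses at least part of the decision (both figures cited above).
appealed = 0.71
reversed_in_part = 0.75

# Fraction of ALL district-court decisions that end up needing correction:
corrected = appealed * reversed_in_part
print(round(corrected * 100))  # prints 53
```

In other words, just over half of all district-court patent decisions are eventually revised in some respect, which is the basis for the "can't get it right more than half the time" complaint below.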

This area is ripe for reform. Something is very wrong when courts can't get it right more than half the time.

Alan Daley writes for the American Consumer Institute Center for Citizen Research, a nonprofit educational and research organization.

A Small Change to Make Trucking More Efficient

Karen Kerrigan - February 20, 2015

Moving massive quantities of goods to store shelves, or parts and equipment to manufacturers, is a complex undertaking. Improvements to the process, even the smallest efficiencies, can translate into big savings for the nation's small businesses and consumers.

Innovations such as computerized tracking and real-time traffic information have been helpful. Another way to improve efficiency is to slightly expand the length of "less than truckload" (LTL) shipping trailers. These are the shorter trailers used by trucking companies that specialize in combining multiple customers' shipments to fill an entire trailer. We usually see these trailers pulled in pairs along our nation's highways.

There is an effort underway in Congress to increase the maximum trailer length from 28 feet to 33 feet while leaving intact restrictions on the weight of the trailers. This legislation would make shipping more efficient and less expensive for small businesses, and it could reduce the number of trailers and trucks needed to move goods across the country and across town. Consequently, fewer miles would be driven and fewer gallons of fuel would be used.

The benefits of modernizing the rules are significant. Fewer trucks on the road would ease traffic congestion. Our highway and bridge infrastructure -- already in dire need of attention -- would get needed relief as well, since current weight restrictions would stay in place while the number of trucks fell. Most drivers would not notice this modest increase in length: we are already accustomed to trailers as long as 53 feet, mostly used by FTLs (full-truckload shippers that fill an entire trailer). Slightly longer trailers could even be safer, according to a study conducted by John Woodrooffe at the University of Michigan, because the extended wheelbase would make them more stable.

Those extra five feet could make a big difference in the industry and in the prices we eventually pay for shipped goods. Most LTL shipments are shrink-wrapped onto pallets that measure 40 by 48 inches. Extending trailers by five feet would allow room for two more pallets, so tandems would allow 18 percent more volume. It's almost like shipping two extra pallets for free, because most loads fill the trailer before they reach the weight limit. As a result, that efficiency would benefit the more than 9 million daily consumers of LTL freight transportation as well as shippers.
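The pallet and volume figures above can be checked with back-of-the-envelope arithmetic. This sketch assumes pallets are loaded two across with their 48-inch side running the length of the trailer -- a common pattern, but an assumption here for illustration; the helper name is hypothetical:

```python
PALLET_DEPTH_IN = 48       # pallets measure 40 x 48 inches; assume the
PALLETS_ACROSS = 2         # 48-inch side runs lengthwise, two abreast

def pallets_per_trailer(length_ft):
    """Pallet positions in a trailer of the given length (floor loading)."""
    rows = (length_ft * 12) // PALLET_DEPTH_IN
    return rows * PALLETS_ACROSS

current = pallets_per_trailer(28)   # 14 pallets
proposed = pallets_per_trailer(33)  # 16 pallets: two more per trailer
extra_volume_pct = round((33 / 28 - 1) * 100)  # about 18 percent
print(current, proposed, extra_volume_pct)
```

Under these assumptions, each 33-foot trailer fits two more pallets than a 28-footer, and the tandem's cubic capacity grows by roughly 18 percent, matching the figures cited above.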

As the population and the economy continue to grow, it is critical that we embrace ways to make LTL trucking -- really, the circulatory system of the daily economy -- more efficient. Lengthening trailers by just five feet would have value for everyone from the parent hitting the corner market to the small manufacturer who needs parts delivered on time and with less cost.

Over the years Congress has modified highway legislation and regulations to keep up with the times. Standard FTL trailers once were 40 feet long, but they have incrementally increased to the current 53-foot standard.

Science and data support the benefits of longer LTL trailers, from road and bridge infrastructure to highway safety and transportation productivity. It's time for Congress to update the 1982 regulations that have limited LTL trailers to 28 feet. Twin 33-foot trailers are a commonsense solution for 21st-century businesses and our economy.

Karen Kerrigan is president and CEO of the Small Business and Entrepreneurship Council.

Presidents' Politicized Funding Priorities

Thomas Stratmann & Joshua Wojnilower - February 19, 2015

The Congressional Budget Office recently projected that discretionary outlays by the federal government will exceed $12.5 trillion over the next decade. With such large sums at stake, should we grant greater control over the bureaucracy to presidents?

For many scholars and political observers, the answer is clearly "yes." Legislators have a powerful incentive to win their next election, and the most influential members can use the power of the purse to secure their political futures. Over the long term, this can lead to a seemingly unfair distribution of federal funding across the nation. Presidents, so the thinking goes, will limit Congress's inefficient spending due to their nationwide constituency.

Our new research, however, demonstrates that presidents actually target politically important constituencies with federal funds just as much as Congress does. In 2009 and 2010, congressional districts within strongly Democratic states received approximately 50 percent more federal project-grant funding than comparable districts within strongly Republican states.

Although Congress authorizes the overall amount of project-grant funding, federal agencies exercise discretion as to the particular distribution. Presidents then exert influence over the federal agencies through their political appointees. This funding advantage represents $26 million more per district, or $37.50 more per capita, per year on average.
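The per-district and per-capita figures above are mutually consistent; a quick check (a sketch using only the numbers cited in this piece) shows the population they imply is roughly the size of a congressional district:

```python
extra_per_district = 26_000_000  # dollars per district per year, on average
extra_per_capita = 37.50         # dollars per person per year, on average

# Population implied by dividing the district figure by the per-capita one:
implied_population = extra_per_district / extra_per_capita
print(round(implied_population))  # prints 693333
```

That works out to about 693,000 people -- close to the average U.S. congressional district of roughly 710,000 after the 2010 census.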

While these sums seemingly pale in comparison with the trillions noted above, imagine the difference $100 million could make for your district over the next four years. Would those changes influence your vote in an election? The evidence suggests that most, if not all, presidential administrations believe it might.

Our own research covers only two calendar years due to data limitations, but it is consistent with the results of several other studies that used longer time horizons as well as different combinations of presidential and congressional control. The presidential incentives driving our main results appear persistent regardless of which administration or party is in office.

This research, while significant in its own right, highlights important principles for a broader array of decisions from mandating vaccinations to education reform. In response to an undesirable outcome, such as a seemingly unfair distribution of federal funding, our initial inclination is often to make stark personnel changes in the decisionmaking body. Such action generally presumes the undesirable outcome was due to individual incompetency or greed. In contrast, our research demonstrates that institutional incentives can largely determine outcomes regardless of which individual or individuals are in charge. We should therefore seek to understand existing institutional incentives better before we make dramatic changes.

Over the coming decade, presidents and Congresses will influence the distribution of at least $12.5 trillion in federal funds. Our results show that presidents target politically important constituencies to influence voter behavior in presidential and congressional elections. Hence, granting greater control over the bureaucracy to presidents will not necessarily result in a more efficient distribution of federal funds. Perhaps scholars and pundits should reconsider whether granting presidents greater control over the bureaucracy is truly preferable.

Thomas Stratmann is a scholar with the Mercatus Center at George Mason University, where he is also a professor of economics. Joshua Wojnilower is currently a third-year Ph.D. student in the economics department at George Mason University. Both are coauthors of a new working paper published by the Mercatus Center on "Presidential Particularism: Distributing Funds Between Alternative Objectives and Strategies."

What John Marshall Would Think of the ACA

Thomas K. Lindsay - February 18, 2015

Across the country, state legislators have introduced over 200 measures aiming to prevent the enforcement of federal laws and regulations they believe to be unconstitutional. Here in Texas, 25 such measures have been filed. But these efforts face long-ago-constructed hurdles, one of which is the law-school orthodoxy regarding the constitutional vision of the nation's greatest Supreme Court chief justice, John Marshall. Both defenders and critics of federal expansion tend to view him as a champion of Big Government. Against this tide stand two legal scholars, Robert Natelson and David Kopel, who argue that both sides get Marshall wrong, and that we must get him right to reeducate ourselves in the constitutional basis for individual liberty and limited government.

In a thought-provoking exercise, Natelson and Kopel delve into the seminal opinions currently seen as proof of his judicial activism. They construct what a Marshall opinion on Obamacare would look like, drawing "chiefly from direct quotation and paraphrases of Marshall's own words." They ask, "Would the nationalist justice who, according to the New Deal Supreme Court, 'described the Federal commerce power with a breadth never yet exceeded,' agree that federal control of health care was within that power?"

Their answer is a resounding no. They find Marshall "far more restrained ... than the caricature drawn by case book editors and law professors," whose abridged accounts of Marshall's opinions "depict him as an activist in the cause of federal power."

Obamacare's defenders offer three arguments for the law's constitutionality. First, they point to Congress's power to "provide for ... the general Welfare." In weighing this claim, Natelson and Kopel's constructed "Marshall opinion" begins with his statement in McCulloch v. Maryland that the touchstone for proper interpretation is the "contemporaneous exposition" of those who ratified the Constitution. When it came to the General Welfare Clause, contemporaneous accounts, by both opponents and defenders of the proposed Constitution, asked whether it would lead to the virtually unlimited power this clause today is read to provide. Defenders allayed these fears by stipulating that the clause was not a "grant of power" but, instead, "served merely as a limit on the taxing power." In fact, the expansive interpretation underpinning Obamacare was advanced by Justice Story in Brown v. United States (1814). It was rejected by every other justice on the Supreme Court. In sum, Marshall's reading of the General Welfare Clause provides no basis for the constitutionality of Obamacare.

The second argument in the defense of Obamacare is Congress's constitutional power to "regulate Commerce ... among the several States." Here again, both Obamacare's defenders and critics read Marshall wrongly, asserting that, under Marshall's ruling in Gibbons v. Ogden, the federal power can legitimately reach to any economic activity deemed to "substantially affect" interstate commerce. Both focus on Marshall's statement in Gibbons that "commerce undoubtedly is traffic, but it is something more: it is intercourse." Employing Gibbons, the Court has over the last century validated congressional action in a number of areas previously regarded as the province of the states.

But while Marshall's Gibbons opinion finds the Commerce Clause includes the power to regulate navigation, this very opinion also lists powers reserved to the states alone, among which are "health laws of every description." This portion of his opinion is ignored by those who deem him a defender of expansive federal government.

The third plank in the Obamacare defense holds that the law is "necessary and proper for carrying into Execution" the federal government's interstate-commerce power. Marshall's written opinions deny such an expansive interpretation of the Necessary and Proper Clause. To be sure, he grants in McCulloch that there is no provision in the Constitution that "excludes incidental or implied powers; and which requires that everything granted shall be expressly and minutely described." He adds in Cohens v. Virginia (1821) that any power granted under the Constitution "carries with it all those incidental powers which are necessary to its complete and effectual execution." But his very defense of incidental powers defines them as necessarily less important than the principal powers they serve. Health care constitutes one-sixth of the economy; as such, it cannot be a mere "incident" of interstate commerce.

Nevertheless, the Obama administration offered this clause to the Court in defending the requirement that individuals purchase health insurance, arguing that such purchases "affect" interstate commerce. In rejecting this interpretation, Natelson and Kopel's construction of Marshall's opinion relies on "the intention of the makers, that is to say, the ratifiers of the Constitution."

Here Natelson and Kopel look to Marshall's anonymous newspaper essays published in defense of McCulloch. They find that scholars generally miss the meaning of key terms employed by Marshall. For example, in defending the constitutionality of "incidental powers," Marshall stipulated in an essay that these must be "the natural, direct, and appropriate means, or the known and usual means, for the execution of the given power." This might seem to contradict the text of McCulloch itself, which said that "incidental" frequently means "no more than that one thing is convenient, or useful, or essential to another." But "convenient" did not for Marshall have the expansive meaning some give it now. Johnson's contemporaneous Dictionary of the English Language (Marshall's favorite dictionary) defines "convenient" as "fit; suitable; proper; well-adapted." Sheridan's 1789 Dictionary concurs. Nor did the term, "appropriate," when defining incidental powers, carry the wide meaning attributed to it presently. Bacon's 1786 Abridgment of the Law (cited in 55 Supreme Court cases, most recently in 2001) declares "the incident must be one without which the principal would labor under 'great prejudice.'"

On this basis, McCulloch held incorporation of a national bank constitutional as an "incidental power" to Congress's enumerated powers. Incorporation was deemed "not of higher dignity" but, instead, "of inferior importance" to the enumerated powers. Therefore, while Obamacare's defenders correctly assert that its required purchase of health insurance "affects" interstate commerce, so also do many other activities. Yet the clear intent of those who ratified the Constitution was that -- as Marshall himself testified at the Virginia Ratifying Convention -- any law "affecting contracts, or claims, between citizens of the same state would go beyond the delegated powers and would be considered by the judges as an infringement of the Constitution." Thus, the claim that Obamacare is "incidental" to Congress's enumerated powers falls. Obamacare's provisions are "at least as substantive and independent" as the regulations governing interstate commerce; they are not merely "incidental" to them.

Finally, McCulloch holds that, for a power to be truly incidental to Congress's enumerated powers, "the end must be legitimate," meaning, "within the scope of the Constitution." But Obamacare's self-confessed purpose is to "protect patients" through regulating health care and health insurance. "That the Act was designed truly to regulate commerce is mere pretext," the authors write, as is the administration's contention that "penalties imposed on those who do not purchase insurance are taxes rather than penalties. ... Those exactions are penalties alone."

Will Natelson and Kopel's carefully reasoned arguments reach the law schools, sparking much-needed debate? No one knows. But as state legislators continue their long march against federal overreach, they should be encouraged to know that, in their battle to restore limited government and individual liberty, America's greatest chief justice is not, as they thought, an enemy, but a true friend.

Thomas K. Lindsay directs the Centers for Tenth Amendment Action and Higher Education at the Texas Public Policy Foundation. He was deputy chairman of the National Endowment for the Humanities under George W. Bush.

Federal Foot-Dragging on Clean Fuels

Steven J. Levy - February 17, 2015

In a room full of energy investors on Wall Street, Vice President Joe Biden recently said: "I'm no investment banker, but I wouldn't go long on investments that lead to more carbon pollution. I'd bet on clean energy."

Based on this statement and others like it by the Obama administration -- including comments that Obama made as a presidential candidate at a Pennsylvania biodiesel plant -- investors across the country have taken serious and long-term financial risks to enter the biodiesel industry. There are good reasons to support biodiesel: It is a true advanced biofuel, with the Environmental Protection Agency (EPA) having certified that it reduces greenhouse-gas emissions by up to 86 percent relative to traditional diesel. This is a win for both industry and the environment, and it creates U.S. jobs while lessening our dangerous dependence on petroleum.

But the administration's recent decisions on renewable fuels -- or, in some cases, the lack thereof -- are confounding. Up until 2013, with support from the Obama administration, the biodiesel industry was on a path toward strong, sustainable growth. That was thanks in part to a robust Renewable Fuels Standard (RFS), the federal policy passed under President George W. Bush and supported by Obama (then a senator) that requires blending biodiesel and other clean fuels into the U.S. fuel supply. But today, that growth has ground to a halt, and hundreds of businesses around the country supported by this industry are hurting.

The most recent surprise to the biodiesel industry was a unilateral decision by the EPA to streamline Argentinian biodiesel imports into the United States. Under the RFS, sellers must prove that their fuels were grown on land that was cleared or cultivated before late 2007, but the new decision relaxes the verification and reporting requirements for Argentinian exporters. There was no public hearing or comment.

What was so bizarre about this change was that it came at a time when the U.S. biodiesel industry is in turmoil as a result of the EPA's disastrous ongoing delays in administering the RFS. For the second consecutive year, the administration has failed to finalize the standards, meaning the market has no way of knowing how much biodiesel and other renewable fuels will be required on an annual basis going forward.

This one-two punch has wreaked havoc on the markets. The result has been less biodiesel blending, causing some biodiesel producers to close and many others to slow or suspend operations. In addition to putting financial strains on the industry, it has also meant that America's diesel fuel is emitting considerably more greenhouse gases into the atmosphere.

Like all biodiesel supporters, I am hopeful that the Obama administration will reassert its leadership in this area. For starters, the EPA needs to immediately finalize a robust RFS that allows the biodiesel industry to keep expanding and maximize its capabilities. The fact is, thousands of investors all across this nation decided to take up the White House's challenge to produce, distribute, supply, and consume fuel that reduces our dependence on oil and helps to slow the pace of global climate change. But like every young energy industry that has come before, biodiesel needs stable policy to grow and mature.

Steven J. Levy is chairman of the National Biodiesel Board.
