"We lose revenue on every single package we deliver." It was a clear and unmistakable point made by the U.S. Postal Service's inspector general during a recent House Oversight Committee hearing. Federal officials, industry representatives, and members of Congress were discussing the shipping rates that USPS charges to foreign sellers — artificially low rates that are one reason the agency is having so much financial trouble.
The rates come from the Universal Postal Union (UPU), a 141-year-old institution, comprising 192 countries, that works to set international postal prices. Each nation is granted voting power on these rates, and the Department of State handles the negotiations for the U.S.
As a result of this archaic arrangement, the U.S. now has rates that do not cover the costs of delivery. This leaves the Postal Service effectively subsidizing mail coming from foreign senders — costing the agency more than $300 million since 2010.
Such losses are particularly troubling considering the Postal Service’s continuing financial turmoil. The agency has posted losses in each of the last eight years, amounting to more than $50 billion. The outlook for the USPS in 2015 remains grim, as the organization reported a loss of $2.25 billion in the first two quarters.
To better negotiate on USPS's behalf, the State Department needs more accurate knowledge of service costs, which the Postal Service has failed to fully analyze. While the Postal Service has said that a move toward shape-based pricing could be an option to help alleviate costs, the reality is that greater systemic cost-tracking concerns persist within the agency.
A recent report commissioned by the USPS inspector general found that the agency needed to employ more thorough and granular cost data to measure how each of its products functions within an increasingly dynamic market. Further, Robert Taub of the Postal Regulatory Commission has pointed out that the pricing regime does not reflect the burden of USPS’s institutional costs.
Intensifying the problems in international mailing is the Postal Service’s agreement to deliver lightweight packages known as ePackets from China at specialized rates: USPS collects just 94 cents on average, losing about a dollar on each package. Someone sending a similar package (up to 4.4 pounds) from one state to another could pay more than $10. Many small businesses have found that sellers from China now have a tremendous ability to undercut prices through low USPS rates.
A better pricing system is sorely needed, both to secure the long-term health of the USPS and to eliminate artificial and detrimental constraints on American businesses.
David Williams is president of the Taxpayers Protection Alliance.
Medicare hits an important marker today — its 50th birthday. To make sure it reaches its 100th, policymakers must remain vigilant in improving the program for generations to come.
On July 30, 1965, President Lyndon Johnson signed Medicare into law to provide health-care access to millions of Americans at a time when over half of the elderly population had no health insurance. Medicare was a game-changer back then, providing quality health care to older Americans who previously could not afford it.
Now, Medicare faces a landscape changed by demographic forces and rising health-care costs. As the venerable program turns 50, it faces questions beyond whether or not to join AARP. Policymakers need to take action to guarantee the program's financial stability for the next 50 years and beyond, as well as to improve the overall fiscal situation.
While the growth of health-care costs has slowed in the last few years, federal spending on health care remains on a dangerous path. Due to an aging population, fast-growing per-beneficiary costs, and expanding coverage, this spending is expected to grow substantially faster than the economy, continuing to crowd out other national priorities and investments.
The number of people age 65 or older will rise by more than 75 percent over the next 25 years. Meanwhile, per-beneficiary costs are projected to return to growing faster than the economy, with annual growth averaging over 4 percent through 2040. As a consequence, Medicare spending is set to consume an additional 2 percent of GDP within 25 years and double as a share of the economy by 2051, helping to push debt to levels never seen before in our history.
Devoting more and more resources to Medicare and other health spending will also squeeze public investments that can help grow the economy. Medicare spending currently accounts for 16 percent of federal non-interest spending, and is on pace to consume nearly one-quarter by 2040.
The increased spending will also drain Medicare's coffers, which could jeopardize benefits. Medicare's Hospital Insurance Trust Fund is projected to run out of money within 15 years, causing an automatic 14 percent cut in Medicare Part A benefits.
It doesn't have to be this way. Almost every other developed nation manages to spend far less money on health care while still delivering high-quality care.
Promising reforms are currently underway that seek to shift the U.S. health-care system away from the volume-based fee-for-service payment model that dominates today and encourages excessive services without regard for quality. Initiatives such as Accountable Care Organizations and bundled payments, both in Medicare and the private sector, hold great potential to moderate cost growth and improve the quality of care delivered in this country.
To achieve their full promise, though, policymakers will need to intervene over time to improve these models and double down on what works. Dartmouth-Hitchcock Health, the Dartmouth Institute for Health Policy & Clinical Practice, and the Campaign to Fix the Debt released a paper earlier this year with a number of suggestions for how to bolster the shift away from fee-for-service reimbursement and toward rewarding quality and coordinated care.
Moreover, there is much spending within Medicare today that provides little value or promotes waste. Beneficiary cost-sharing in Medicare is outdated and bears little connection to the value of care received. In many cases, the government unnecessarily pays more for a brand-name drug when an equivalent lower-cost alternative is available or for identical care provided in a hospital outpatient facility as opposed to a physician's office.
The Independent Payment Advisory Board and the Center for Medicare and Medicaid Innovation also have an important role to play in controlling federal health-care costs and promoting valuable reforms, and thus should be kept intact.
The Medicare Trustees recently warned that "even under current projections, Medicare faces a substantial financing gap," and that "we will need all of current law's cost containment and more to ensure that it remains on a financially secure footing."
Health care, particularly Medicare, will be the largest driver of the national debt in the years to come. Taking action in the near term to bend Medicare's cost curve to make it less of a drain on the budget and economy, while ensuring its sustainability for generations to come, would be a very appropriate birthday present.
As lawmakers sing Medicare's praises this week, let's see if they truly honor it by working to strengthen it.
Loren Adler is the research director at the Committee for a Responsible Federal Budget, a nonpartisan organization committed to educating the public about issues that have significant fiscal-policy impact.
A $57 million experiment to deliver better, more efficient care at federally funded health centers struggled to meet its goals and is unlikely to save money, says a new government report.
The test to coordinate treatment for high-risk Medicare patients in hundreds of communities was one of many demonstrations run by the Department of Health and Human Services' innovation center.
The Affordable Care Act created the lab and gave it $10 billion over a decade to test new ways to improve care and save money.
As the trial wound down last fall, 69 percent of the clinics that hadn't dropped out had obtained full accreditation as "medical homes" — primary care practices that coordinate care across the maze of specialists, hospitals and emergency rooms.
HHS had hoped for 90 percent.
Another goal was to cut unnecessary hospital visits. But admissions and emergency-room care rose in centers that were part of the experiment compared with results in those that weren't. So did expenses.
"It appears that the demonstration will not achieve cost savings," concluded the RAND Corp., an independent research group, in a study commissioned by HHS' Centers for Medicare and Medicaid Services, or CMS. HHS recently posted the report on its website.
There had been talk of extending the three-year demonstration. But the health law requires HHS to stop experiments that don't show signs of saving money or improving care. The program ended in October.
"They're saying that they actually saw potential increases in utilization and costs, which is not what CMS was hoping to see," said Dr. Eric Schneider, senior vice president for policy at the Commonwealth Fund. "But it's not necessarily surprising."
The project steered extra money to community health centers — nonprofit clinics that receive federal funds and care mainly for the poor. Medical homes typically appoint case managers to try to get people with diabetes, asthma and other chronic conditions to take medicine, eat well and stay out of the hospital.
HHS' demonstration may have helped clinics identify patients with pent-up medical needs, causing the spike in treatment and costs, analysts said. Poor people served by federally funded centers often have trouble getting medical care.
The clinics "were developed to care for more-indigent people," said Dr. Katherine Kahn, who led the RAND evaluation. "It's not even entirely clear that one should expect lower costs initially."
Medical homes have been widely promoted as one solution to America's disconnected health system, which by some estimates wastes 30 percent of its spending on unnecessary treatment, fraud and administrative lard.
By making primary care doctors quarterbacks for treating the chronically ill, the thinking goes, patients can be kept healthy and away from expensive providers.
Health-reform specialists cautioned against counting the RAND report as a strike against medical homes.
"It would be a mistake to say we can conclude that the medical home model does not work," said Dr. Marshall Chin, a professor at the University of Chicago medical school who reviewed drafts of the RAND study.
Indeed, the model was barely tested. Even among clinics that did qualify as medical homes, most weren't certified until late in the program. Becoming a medical home, which requires patient-tracking software, referral protocols, and lots of training, was more difficult and took longer than many expected.
HHS did not respond to requests for an interview. The agency has compared the innovation lab to a venture capital fund, in which some investments are expected to fail as the cost of finding high-payoff winners. Republicans have criticized it as a waste.
"No one study should define the value" of medical homes, said Amy Simmons Farber, spokeswoman for the National Association of Community Health Centers.
The report paints a picture of understaffed clinics struggling to file reports and participate in conference calls for the experiment while they did their normal jobs of caring for patients and trying to get paid by insurance plans.
"Sometimes we're so caught up in all of these different requirements to be in our phone calls or webinars or whatever, and nobody has any time left to do the work that needs to be done," said one unidentified nonprofit executive helping guide the clinics, who was quoted in the RAND study. "And I think that that's kind of what the health centers are feeling, too."
Others suggested that the extra money from HHS — a median of only $26,000 a year per clinic — wasn't nearly enough to make a difference. Instead of trying to transform 500 outpatient centers, some said, HHS should have focused the same amount of money on fewer clinics.
"That's going to go nowhere in supporting the kind of staff or information technology or the time it will take to reconfigure the clinic," Chin said.
Other research shows mixed results on medical homes.
Many analysts believe the x-factor for successful medical homes and other payment innovation is giving doctors financial incentives to change referral patterns by letting them share savings from cutting unneeded care. The experiment with federally funded community clinics did not share savings with doctors.
"Transforming to a medical home — it's not fixing one thing," said Kahn. "It's really changing everything about how the clinic works. On every single level."
This piece originally appeared at Kaiser Health News (KHN), a nonprofit national health policy news service. KHN's coverage of aging and long-term care issues is supported in part by a grant from The SCAN Foundation.
Comedian John Oliver's rant against mandatory minimums is making the media rounds. Watching it, I was struck by the story of Kevin Ott (3:32), who says he was given life in prison for three ounces of meth. Oliver scoffs that "we're treating him like he's Season 5 Walter White when he's barely Episode 1 Jesse Pinkman."
Oliver's right: The drug war isn't going well to say the least, and some aspects of mandatory minimums need reform. But was Ott really put away for life just for having three ounces of meth? And how much meth is three ounces anyway?
A key fact is that Ott had a significant criminal history. He had previously been convicted of battering his wife and of drug and weapon violations. During his final arrest he was found with a loaded handgun in addition to the drugs, despite the fact that he was under court supervision. But it was drugs alone that got him the life sentence in 1997: Oklahoma's "three strikes" law applied whenever a person was convicted of a drug felony and had two previous drug-related convictions on his record. (The law was weakened somewhat just a few months ago.)
As for the amount of meth, an appeals-court decision reports the precise amount as 102.8 grams, actually closer to four ounces than three. (There are about 28 grams in an ounce.) The document also says meth is normally sold in 1/16- or 1/8-ounce packages, which amounts to 1.75 to 3.5 grams. Yes, Ott was more akin to early Jesse Pinkman than to late Walter White — and perhaps drugs should be legalized entirely — but 102.8 grams is a fair amount of meth to have sitting around at a single point in time: about 30-60 sales, worth thousands of dollars. The court said, quoting from a previous decision: "This is not a minor drug offense but a major crime."
Illegal drugs are typically sold in fractions of an ounce, sometimes even fractions of a gram. (This report about the drug trade in Ohio has some more up-to-date numbers collected during interviews with drug users.) When something is sold in quantities that small, even seemingly tiny amounts can be substantial.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
The age of technology may be upon us, but not all are convinced we should cast our votes online. The Heritage Foundation has released a paper, "The Dangers of Internet Voting," chronicling other countries' experiences with online voting and arguing that America is not ready for it.
We talked with Hans von Spakovsky, the paper's author, to learn more. The interview has been edited for length and clarity.
Who is proposing online voting, and how likely is it to become reality?
There was a big push in the country in the late 1990s and early 2000s over this, and then that kind of subsided when a series of reports came out from the National Science Foundation and other task forces that said this is dangerous, we shouldn't do this. But in recent years many states have allowed, not Internet voting, but the return of voted ballots by e-mail, and other states are considering joining them. Other folks are talking about using Internet voting for primaries.
There's also a real push with regard to overseas voters. I'll be the first to say we have a real problem with military voters, but proposals to allow them to vote over the Internet or to allow e-mail return of ballots is not a good idea, because it would make our systems very vulnerable.
What would the process for online voting be like?
The District of Columbia several years ago was going to provide an online voting capability that would allow people to go to a website, register, check in, and then cast a ballot, and they'd be able to cast it from their home computers or their work computers. They were very confident that they had a very secure system and opened it up in a mock election, and they challenged hackers to try to get into it. It was almost immediately breached, and the election officials didn't even realize that folks had been able to get into the system.
Can you elaborate on the various cybersecurity issues here? Is there an argument that the increase in turnout would be worth the risk?
The problem with Internet voting is kind of inherent in the technology itself. Hardware and software engineers and computer scientists almost overwhelmingly say there's almost nothing that can be done, given the current state of the technology — the way the Internet is designed — to actually make a safe system. Those risks far outweigh any possibility that it might increase turnout, and actually, there's evidence from some other countries that have actually tried Internet voting that it doesn't really increase voter turnout. It just makes it easier for people who would vote anyway to cast their ballot, but it does it at a much greater risk.
Can you summarize some of the other countries' experiences?
Estonia in 2005 became the first country to offer Internet voting in a national election. They've used it a number of times since then. And they've done that despite the fact that a team of computer scientists at the University of Michigan — who, by the way, were the same people who easily breached the proposed District of Columbia system — went in and identified numerous major security risks and vulnerabilities in the Estonian system and recommended its immediate termination. The biggest problem they saw was that hackers, particularly dedicated, well-organized hackers, such as a foreign agency, perhaps in Russia or China, could not only get into the system and manipulate election results, but likely would be able to do it without detection, and that makes that kind of system even more dangerous.
The same kind of system was proposed in the neighboring country of Latvia, and there, they said we are not doing this, because with the current technology, it's not possible to ensure the security of the Internet voting process.
Some of the problems you highlight happened more than a decade ago. Do you know if these countries have improved their systems since then?
There's no indication that they have. Other countries that have tried it have stopped doing it after having problems. France tried to do this just two years ago in a mayoral election in Paris. This was for a primary election. Again, the backers of the system said it was fraud-proof, that it was ultra-secure; however, reporters were able to breach the security of the system and vote several times using different names, and in fact, one of the reporters was able to vote five times in that primary under the name of the former French president, Nicolas Sarkozy.
Why has there not been another test program since D.C.'s was hacked in 2010?
That one was particularly interesting, because when the Michigan team got into the D.C. system, they found hackers from other parts of the world trying to get into the system, and that exposes one of the great dangers of an Internet voting system.
Everyone knows very well the huge breaches of security we just had at not only the Office of Personnel Management, but now the IRS. It was suspected in the OPM breach that this was part of a special team that the Chinese government set up some years ago. There have been a number of newspaper articles that have talked about this — how professional hackers are being used by the Chinese government. This kind of system in a U.S. election would be a prime target, not just for individual hackers, but for a government trying to get into the system to manipulate elections.
What is your recommendation for those who want to switch to online voting? How can online voting be safe?
Given current technology, online voting cannot be safe. All they have to do is read the various reports that have been done by people who are experts in the field — computer scientists, software engineers, who almost overwhelmingly say that the current system is not able to be secure.
And to those who say we do a lot of e-commerce now over the Internet — that system itself is not very secure. There are billions of dollars of fraud committed with e-commerce, and the requirements for that are quite different. If someone has breached your bank account through the Internet, when you go and check your bank statement, you'll be able to figure that out. If someone intercepts the vote over the Internet that you're trying to cast at a website, there's no way for you to check whether that's happened, or whether your vote has been changed or not. There is just no way to combine security and the anonymity that is required for the secret ballot.
Is there anything I didn't ask but should have?
This is not really a partisan issue. A lot of election issues, particularly regarding the rules, seem to unfortunately devolve down into different party issues. This is not one of those. This is something that people of all political parties ought to realize would be a very dangerous development in America, and it is not one that we should encourage.
Courtney Such is a RealClearPolitics intern.
When he announced his candidacy for president in mid-June, Donald Trump made the provocative assertion that Mexican immigrants are "bringing crime." The comment gained greater resonance when, two weeks after Trump's speech, Kathryn Steinle was shot and killed in San Francisco by an illegal alien from Mexico. The alien, Francisco Sanchez, had been in local police custody back in April, and federal officials intended to deport him. But Sanchez was instead released due to San Francisco's "sanctuary city" policy.
In response to the resulting outcry, some mainstream media outlets correctly noted that, although good data are hard to come by, the overall immigrant crime rate does not appear to be especially high. But then advocates of mass immigration went much further, making wild claims that Mexican immigrants have a minuscule crime rate that somehow even suppresses native crime. Only an uninformed rabble-rouser would worry about criminals crossing our borders, according to immigration enthusiasts.
The truth is more complex. In a detailed report, a colleague and I have explained why it is very difficult to measure immigrant crime. There is research showing that immigrants do commit a disproportionate share of crime, but there is also research showing that the opposite is the case. Census Bureau data collected on the institutional population (such as those in prisons and jails) might be a way to at least measure incarceration rates in an unbiased fashion. But as we explained in the report mentioned above, the Bureau's ability to record whether the institutionalized are immigrant or native broke down in the past, and it is still not clear whether this problem has been entirely corrected.
There is also the issue of what should be the proper benchmark for measuring immigrant crime. As we point out in our crime study:
In social science research, raw numbers need to be placed into some kind of context, often by comparing one population of interest to another. Assuming one can measure immigrant crime, the next question that arises is: To what should it be compared? This is an important question because crime rates among natives differ widely by group. For example, the share of native-born black men arrested or incarcerated is dramatically higher than for all other groups… However, the discrimination and racism black Americans have experienced and the severe social problems that exist in some black communities make this population unique when it comes to the issue of crime. One can reasonably ask whether it makes sense to compare immigrants, who are overwhelmingly not black, to black Americans who have a unique historical experience.
Data collected by the Census Bureau in 2013 shows that 23 per 1,000 male Mexican immigrants ages 18 to 40 are institutionalized (mainly in jails or prisons; few people at that age are in nursing homes or similar institutions). This compares to 31 per 1,000 for native-born men in this age group. However, looking at only non-black native men (18-40) shows an incarceration rate of 20 per 1,000. This is somewhat lower than the rate of Mexican-born men and a good deal lower than the 38 per 1,000 for U.S.-born men of Mexican ancestry. It is also worth noting that Mexican men are included in the figure for non-black natives; if they are excluded then the rate for natives would be 18 per 1,000. The rate for native-born whites alone is 16 per 1,000.
All this matters because studies that examine what happens to crime rates in predominately black areas when immigrants move in are looking at communities with crime rates that reflect the marginalization and unique situation of black Americans. When it comes to crime, these communities are statistical outliers. So even if crime falls as the immigrants arrive, it is somewhat misleading because the baseline rate was unusually high in the first place. Further, the impact of Mexican immigration on other communities, with much lower pre-existing crime rates, could be very different.
Two other points are worth making with regard to immigrant crime. First, the crime rate of immigrants generally, or illegal immigrants in particular, is irrelevant to the issue of sanctuary cities, which as a matter of policy release illegal immigrants from jails even after Immigration and Customs Enforcement asks them to hold these individuals. That policy is directly responsible for Steinle's death and for the deaths of many others over the years — regardless of statistics about overall crime rates. The public is right to be outraged.
Second, immigration is supposed to benefit our country. Therefore the goal of policy is to select immigrants that have much lower crime rates than natives, not rates that are somewhat higher or even somewhat lower than natives'. Given the strong correlation between crime and educational attainment, moving away from our current system that selects immigrants based primarily on whether they have a relative in the U.S. to one that emphasizes education levels would be one way to move toward such a goal.
Steven A. Camarota is director of research at the Center for Immigration Studies. This piece originally appeared on CIS's blog.
OB/GYN Jen Gunter tells us that the Planned Parenthood fetal-tissue donation debate has nothing to do with "baby parts," because medical professionals don't use the word "baby" until the child has been born. The specimens are instead the "products of conception."
This is not how language works. The medical community is perfectly free to restrict words' meanings in its own conversations and publications, but it has no right to impose those restrictions on the wider debate. And even a cursory analysis of common usage reveals there's nothing unusual about referring to an unborn child as a "baby," even in contexts that have nothing to do with the politically charged issue of abortion.
Anyone who's ever known a pregnant woman has heard her talk about how she can "feel the baby kick" in her stomach. There is an example of this usage from 1947, and Google Ngram data on American English show the construction has only become more popular since then.
We can trace this type of thing back further — again using nothing but Google's Ngram tool — if we include the etymologically related "babe," which until the mid-1800s was more common than "baby" in the U.S. From 1806: "The uncommon motion of the babe in her womb, was a token of the extraordinary emotion of her spirit under a divine impulse." It's also been common for decades to refer to a miscarriage as "losing the baby."
Again, the medical community can use language however it wants. And none of this speaks to the broader question of whether what Planned Parenthood is doing is immoral or illegal. But linguistic preferences do not magically become facts when the people holding them are doctors.
Robert VerBruggen is editor of RealClearPolicy. Twitter: @RAVerBruggen
The Survey on the Future of Government Service, released last week by Vanderbilt University's Center for the Study of Democratic Institutions, reveals significant problems with the federal workforce. According to the data, collected from 3,551 federal executives, the civil service is struggling to recruit and retain America's best and brightest — and agencies are plagued by underperforming employees who are difficult to fire.
We have seen the by-products of this malfunctioning personnel system for years. The Department of Veterans Affairs has lurched from one crisis to another. Government-wide improper payments reached a new high of $124.7 billion in 2014, fueled by mistakes made by the Department of Health and Human Services and the U.S. Treasury. The General Services Administration, for its part, is unable to provide a correct inventory of the number of federal properties, let alone unload the unneeded ones.
The true insight of this survey is that these crises are predictable; our current civil-service system is not structured to be highly productive. Politicians have loaded the system with other objectives, such as job security.
Here are four more notable findings from the study.
1. The federal workforce is inadequately skilled and likely to stay that way.
Recruitment is a problem for the public sector — 42 percent of federal executives believe their agency is unable to recruit the best employees. Troublingly, 39 percent of respondents think inadequately skilled federal workers represent a significant obstacle for agency mission fulfillment.
Recruitment is hindered by a lack of opportunity (cited by 54 percent of respondents), "rigid civil service rules" (54 percent), and salary (53 percent). However, only 32 percent of federal executives report lacking a qualified applicant pool. So not all is lost; high performers are still interested in public-sector jobs despite these negatives. But there are barriers (like the cumbersome USAjobs site and baroque agency hiring practices) that keep employers from effectively landing them.
2. Underperforming federal managers and employees are seldom fired.
Even if agencies streamlined recruitment, they still would be stuck with low-performing employees who are nearly impossible to dismiss. Some 64 percent of respondents said subpar managers are rarely (if ever) dismissed, and 70 percent said the same for non-managers. Private companies face far fewer obstacles, with 52 percent of private-sector executives surveyed saying non-managers could be reassigned or dismissed within six months. Only 4 percent of public-sector executives said the same.
3. Federal executives do not feel that they have been properly trained.
Not only do the executives feel their workforce is inadequate, but many also don't feel up to the task of managing those employees. Fewer than three-quarters of career executives and just 45 percent of appointees felt they had received "sufficient training and guidance on how to manage" federal employees.
The appointee/career split is stark, but unsurprising. Political appointees often come to their positions with little background in the civil service and its maddeningly complex thicket of statutes and rules.
Though career executives felt more confident in their abilities than appointees, the percentages are still distressingly low. It prompts the question: How are people reaching such high levels without sufficient training?
4. Agencies are poaching leaders from one another.
Some agencies are lucky enough to hire the best and brightest. But once they do, the battle to keep them begins. Of the executives polled, 42 percent of appointees and 39 percent of career executives said they've been approached about other positions within the last year. Who was the top poacher? Other federal agencies.
Having executives with experience at multiple agencies is a good thing, so long as there are enough of them to go around. As agencies struggle to pull in new talent, they are turning to beggar-thy-neighbor hiring to get their workforce up to par.
This behavior is unsurprising. Since the 1960s, federal spending has quadrupled while federal-employee counts have remained steady. Congress and the federal courts have created a complicated system of hiring and firing that prevents agencies from acquiring and maintaining a skilled workforce.
The new survey reveals a high degree of variability among agencies, demonstrating that the situation is not hopeless. When asked about employee retention, 66 percent of the executives from one agency said they were able to retain top employees, while only 30 percent of executives at another agency reported the same. Some agencies, like the Federal Trade Commission, are doing a particularly good job. In the Best Places to Work Index of 2014, the FTC scored highly, and the survey confirmed the agency's executives felt it could recruit top performers.
This variability was also found in a recent GAO survey that gauged federal employees' level of engagement. The agency breakdowns are similar to those in the new study, with the VA and Department of Defense scoring low while the FTC maintains high engagement.
As the Vanderbilt survey points out, we can easily examine which agencies are succeeding to determine best practices to implement at others. We have the data for this type of reform; we just need to use it.
Chloe Booth is a research assistant, and Kevin R. Kosar is the director of the Governance Project, at the R Street Institute.
Reducing young people's access to marijuana was one rationale offered by the movement to legalize and regulate the drug. Therefore, a crucial question is whether that promise has been kept — both in states that run "medical marijuana" programs and in states that have legalized recreational marijuana.
Many early studies of medical marijuana found little effect on juveniles. But the most recent, comprehensive, and methodologically careful studies, reported in June, show exactly what opponents feared — an adverse impact on youth from both medical marijuana and outright legalization.
The clearest finding of a negative impact is from the school-based survey "Monitoring the Future," which was used to examine California's decision to decriminalize marijuana in 2010. Youth who were 10th graders when the law changed showed, in comparison with youth in other states, 25 percent higher current use of marijuana by the time they were in 12th grade.
That study was complemented by a sophisticated longitudinal analysis using data from the National Survey on Drug Use and Health, which was able to capture school dropouts — likely heavier users. Medical-marijuana laws, the authors conclude, "amplify" rates of youth marijuana use, arguably because they allay social stigma and ease fears of negative health outcomes.
These results fit in with previous research better than one might think. Media reporting on these studies is biased; commonly, reports with "good news" for legalization are featured, while those finding danger are ignored. The actual academic literature on this subject is highly contested, which isn't surprising because studying the impact of liberalized marijuana laws is not easy.
For example, it's difficult to "bound" the impact of more accessible marijuana, which readily moves across state lines and is used by neighboring youth. In addition, the specifics of the programs (eligibility, penalties, etc.) matter. With medical marijuana in particular, because of different rules of eligibility and distribution, lumping all programs together and looking for their effect turned out to be not particularly revealing.
But that's not to say we knew nothing until last month. Earlier research showed that a generalized decline in perceptions of risk in using marijuana, as well as norms of social disapproval, imply greater marijuana use, and likely follow from official approval. A study of marijuana legalization in Colorado examined these declines, while also presenting some evidence of increased marijuana abuse and dependence. Moreover, some studies found increases in adult use, as well as youth initiation, in association with medical marijuana.
Further, most studies found that states with medical-marijuana programs had significantly higher rates of youth use, and correspondingly lower perceptions of risk in using the drug, though these differences seemed to have pre-dated the programs. There was also evidence that increased childhood exposure to marijuana edibles was associated with medical-marijuana programs — episodes of poisoning increased at four times the rate in states with such programs compared with nationwide increases.
Certainly, it's a broad literature with many conflicting results, and until June the evidence was less than convincing that medical-marijuana programs produced greater youth marijuana use. But now, as we have seen, the research profile has changed. Emerging studies of marijuana commercialization show pronounced negative effects. And more comprehensive studies of even medical marijuana show harm to youth.
The legalization movement must confront this new reality. Expressions of relief that their "reforms" do not actively damage youth must be revisited, as current evidence has undermined that comforting assurance. It remains stunning that the media decline to report these troubling findings to the public.
David W. Murray and John P. Walters direct Hudson Institute's Center for Substance Abuse Policy Research. They both served in the Office of National Drug Control Policy during the George W. Bush administration.
The Fourth Amendment protects people from unreasonable searches and seizures, requiring that warrants for these activities be backed up by probable cause. But the proliferation of computers and electronic data has raised new questions. What is an unreasonable search and seizure of computer files?
We recently spoke with Orin Kerr of George Washington University Law School, who argues in a new paper that electronic searches and seizures should be limited by what he calls the "ongoing-seizure approach": Searches and seizures become unreasonable when the government uses data that extends beyond the limits of the warrant. The conversation has been edited for clarity and brevity.
In your paper, you repeatedly mention that Riley v. California was something of a game-changer when it came to electronic seizures. What was the Riley decision? And how did it affect your views on computer searches and the Fourth Amendment?
Riley v. California dealt with how the Fourth Amendment applies to searches incident to arrest. The traditional rule is that when somebody is arrested, the government can search everything on their person for evidence, with no limitations. The question in Riley was whether that rule applies when the item is a cell phone. And the Supreme Court said there's a different rule for cell phones because of the nature of computer searches: Computer and cell-phone searches are so different, so much information is stored there — and such personal information — that if the government wants to search a cell phone incident to arrest, they need a warrant. And the result is a computer-specific rule: one rule for physical searches, another rule for computer searches.
This doesn't really change my view of computer searches, because the Court adopted the approach that I've been saying they should adopt, so I was pleased to see that. It's the first Supreme Court decision on computer search and seizure, and it really points out an important dynamic, which is that computer searches are different in terms of how they're carried out than physical searches. So we need new rules on the traditional limits of the Fourth Amendment in this current environment.
Your paper advocates an "ongoing-seizure approach." Can you tell us about that?
Here's the basic idea: When the government executes a computer search, they usually go into the suspect's home, seize all of their computers, and then take them away for searching later. And they need to do that for practical reasons. It turns out it just takes too long — it can take weeks to search the target's computer — so they usually seize all and search later. And what that means is that the government has access to all of this "nonresponsive" information, information that doesn't relate to the warrant, that they can search at their leisure back in the government's lab.
My argument is that the government is allowed to seize all that nonresponsive information, but they're not allowed to use information that they find that's outside the scope of the warrant when they search through the electronic information.
That means that if the government gets a warrant for fraud records, they can go into a house, seize the computers, and search the computers for fraud records, but they can't use that search for fraud records as an excuse to look for everything else on the computer. They can't turn that into a general search. When they're back in the lab and they're searching the computer for weeks, they might come across information about other crimes or even just information that's embarrassing. I think that when the government tries to use that information in the ongoing seizure of the nonresponsive information, it becomes an unreasonable seizure that the Fourth Amendment prohibits.
Near the end of the paper, you mention that you're not suggesting that the data be destroyed afterwards — you're just saying that it shouldn't be used. What is the difference between having it not be used at all in the future and just destroying it?
I'm skeptical that there's a requirement of destruction, although you could have it. Clearly, if the item is destroyed, it can't be used, but it actually is tricky to figure out what it means to destroy data. Does it mean zeroing out the hard drive? What if there are other copies of the file? I think use is a clearer idea. We could say that disclosure is use, or we could say use as evidence is use.
Use is in some ways a simpler concept to follow, and also it doesn't have a time element. If there's a Fourth Amendment rule that the government has to destroy the nonresponsive record at some time, when do they have to do that? Is it a week? Is it a month? Is it a year?
What if the government needs the original computer to show that there was not exculpatory evidence on the nonresponsive files? If you're a defendant charged in court, you're going to say, "I want to see the full computer because I think all the evidence is showing that I didn't commit the crime." And so there are reasons, for trial integrity purposes, to keep the full computer, at least while the case is pending. After the case is over, it's a different story.
So my approach isn't necessarily rejecting destruction — I just don't think you need it in order to ensure that computers aren't searched in an unlimited way.
Could you connect this to some of the political debates that we've seen over the past couple of months over topics like the NSA?
The NSA debates are mostly over what a search or seizure is, not so much when a search or seizure is reasonable. For example, the Section 215 debates about collecting metadata are about whether non-content records held by the phone company are protected at all. If they're protected by the Fourth Amendment, then the program is very likely unconstitutional. The real debate is what is a search, not what is a reasonable search.
This paper, in contrast, is about what is a reasonable search. Everybody agrees that the contents of your hard drives or the contents of your cell phone are fully protected by the Fourth Amendment, whether in your home or in your pocket or even in the cloud.
There might be a similarity in that the big question is: What do you do with information that's not actually evidence of a crime or not actually incident to a terrorist attack? In all of these cases you've got so much data out there. Some of the data is responsive to the government's concerns, some of it is not. If the government necessarily gets lots of information in the hunt for the important information — they need to get the whole haystack to find the needle — the broad question is similar: What do you do with the data, once the government has found its needle or it turns out there is no needle?
Matthew Disler is a RealClearPolitics intern.
The Supreme Court's recent decision upholding federal subsidies to help low-income Americans buy health insurance means health reform is here to stay, and states have no reason to delay taking up the option under health reform to expand their Medicaid programs. At the same time, Medicaid continues to face attacks from critics who would cut it deeply or undermine it structurally.
With all this in mind, and with Medicaid turning 50 this month, now's a good time to take stock of the program. One aspect of Medicaid is especially worth considering: According to a significant body of recent research, it has long-term benefits for the millions of children it has served in the past and the 32 million kids it serves today.
For starters, Medicaid provides cost-efficient and effective coverage for all its beneficiaries, including children; covering a child under Medicaid costs 27 percent less than under private insurance. And participation among children is very high: More than 87 percent of eligible kids participate in Medicaid or the Children's Health Insurance Program (CHIP).
By ensuring that families and children can access primary and preventive care, in addition to emergency care like hospital visits, Medicaid helps people of all ages live healthier lives. For children, the benefits begin even before birth. Comprehensive health coverage for a pregnant woman improves her child's cognitive ability and educational outcomes, the research shows.
Largely because they have access to preventive and primary care, children who are eligible for Medicaid are generally healthier, miss fewer days of school due to illness or injury, and perform at a high level in the classroom.
And these benefits extend up the educational ladder. People eligible for Medicaid in childhood are less likely to drop out of high school and likelier to earn a bachelor's degree than those who weren't eligible.
Covering more low-income children on Medicaid between 1980 and 1990 had an impact equivalent to cutting today's high-school dropout rate by 9.7-14 percent and raising the college-completion rate by 5.5-7.2 percent, one recent study demonstrated.
Those results are dramatic. In fact, the scale of gains from Medicaid access is similar to those from educational reforms like reducing class sizes and adopting schoolwide performance standards.
The results also show that, in addition to transforming the lives of individual kids, covering children produces a workforce with higher skill levels, which is important for fueling stronger economic growth.
Medicaid's also a powerful tool for expanding opportunity for low-income kids. Medicaid coverage narrows the gap in college graduation rates between low-income and higher-income children, research shows.
The benefits of Medicaid in childhood also extend to a healthier and more prosperous adulthood.
Children eligible for Medicaid for more of their childhood were hospitalized 8 to 13 percent less and visited the emergency room 3 to 4 percent less at age 25, a recent study reported. Along with improving overall health and quality of life, this drop in hospitalizations and trips to the emergency room generated considerable savings for the government.
Finally, children eligible for Medicaid have higher earnings as adults, according to a May 2015 study. Like the drop in hospitalizations, the higher incomes of these adults help pay for the program. Each of these adults contributed $186 more in taxes through age 28 for each additional year they benefited from Medicaid.
Research on Medicaid's long-term benefits is part of a growing body of work showing that safety-net programs promote opportunity for their beneficiaries. In recent months, our research has highlighted the long-term benefits of other safety-net programs.
As Medicaid approaches its 50th birthday, the program clearly has wide-ranging benefits for kids — just one part of an impressive legacy of providing access to health care for millions of Americans while cutting the number of uninsured Americans.
Judith Solomon is vice president for health policy at the Center on Budget and Policy Priorities.
Thirty-five years ago, President Jimmy Carter signed the Staggers Rail Act, which largely deregulated freight railroads. Deregulation reduced rail rates for most shippers, restored railroads to profitability, and eliminated the risk that taxpayers would be on the hook for future railroad bailouts. But unfortunately, several contentious issues perpetually threaten to prompt ill-considered legislation or renewed regulation. A recent report from a Transportation Research Board committee, on which I served, proposes targeted solutions to these problems.
Railroad deregulation was a response to a well-known crisis. By the late 1970s, one-fifth of the nation's track was operated by bankrupt railroads. One-third of the largest railroads were losing money. The federal government spent $7 billion to bail out several Northeastern railroads and combine them to form Conrail. Railroads faced a sea of red ink in spite of the fact that rail rates were rising faster than inflation. The industry's woes even pervaded popular culture as well-known singers like Jimmy Buffett and Arlo Guthrie crooned matter-of-factly about dying railroads.
Bipartisan majorities in Congress chose deregulation to prevent future bailouts. Deregulation generated large productivity increases that allowed railroads to reduce rates substantially for most shippers — and freight railroads became profitable, eliminating the danger that they would require ongoing taxpayer subsidies. Their improved ability to attract capital allowed railroads to invest in maintaining and upgrading the rail system, improving service and safety.
In 2012, Congress appropriated funds for the Transportation Research Board (part of the National Academy of Sciences) to convene the committee I served on. The report addresses the topics that have created the most acrimonious debate since deregulation, including: maximum rate protections, mandated switching, shipper service complaints, railroad merger approvals, and annual calculations of railroad "revenue adequacy."
Maximum rate protections. The Staggers Act eliminated rate regulation for the majority of rail shipments. But shippers who lack good transportation alternatives to a single railroad can have their rates reviewed by the Surface Transportation Board. Because railroads incur very high fixed costs to build and maintain the network, they will inevitably have to charge different shippers different markups beyond the marginal cost of serving each shipper. Rate regulation is supposed to ensure that these markups are not "too high" — clearly a distributional issue that requires policymakers to make subjective value judgments.
To determine whether a rate is eligible for review, and then to judge whether it is reasonable, regulators compare the rate to a "cost" figure that pretends many costs of providing the rail network can be allocated to individual shippers or shipments, even though those costs are not caused by an individual shipper or shipment. These cost figures are inherently arbitrary.
For example, the regulators' "cost" calculations imply that railroads lose money on about 20 percent of traffic because it is priced below the cost of providing the service. Railroads have been accused of a lot of evil things since deregulation, but intentionally losing money is not one of them! The nonsensical numbers clearly suggest that the system overestimates the cost of many shipments.
The report recommends that regulators use the rates charged for similar shipments in markets where the railroad faces competition as a benchmark for determining whether a rate is eligible for challenge, instead of comparing rates to arbitrary and misleading cost figures. Rate challenges would go to an arbitrator instead of regulatory hearings. This change would provide a transparent mechanism for determining whether a rate can be challenged, and it would get regulators out of the business of conducting individual rate cases.
Mandated switching. Shipper groups want regulators to increase their competitive options by ordering a railroad to physically transfer cars to a nearby competing railroad when the shipper is served by only one railroad, so the customer can access the other railroad's network and prices despite not being located close enough to contract with that railroad for the entire length of the shipment. Regulators have usually declined. The report recommends that shippers should be allowed to propose switching as a remedy in arbitration.
This proposal could allow some increase in the use of mandatory switching, but only in individual cases where a clear problem has been demonstrated to exist — a shipper is "captive" to one railroad and the rate has been judged unreasonable.
Shipper service complaints. Shipper complaints about the responsiveness and timeliness of rail service ebb and flow. Unfortunately, the evidence about alleged service problems is anecdotal, because regulators do not collect shipment-level data on the timeliness of service, like the on-time data collected for airline flights.
The report recommends that regulators should collect these data to help determine whether there is a significant problem. It also recommends a top-to-bottom review of all rail industry data collection to eliminate data reporting requirements that no longer serve a useful purpose.
Railroad merger approvals. The Surface Transportation Board reviews proposed railroad mergers under a vague "public interest" standard that lets regulators consider virtually any factors they believe might be relevant. The report recommends that merger-review authority should be transferred to the Department of Justice's Antitrust Division, which reviews mergers in other transportation industries solely for their effects on competition.
Railroad revenue adequacy. The Surface Transportation Board annually calculates whether individual major railroads are earning revenues adequate to let them attract capital to maintain and improve the rail network. This calculation was important information to have when railroads were going bankrupt and the government wanted to see if deregulation would improve their financial health. Now, the annual calculation has turned into a highly contentious event, because regulators hinted in the past that they might regulate rates more strictly once railroads became "revenue adequate."
The report recommends that this annual ritual should be eliminated, thus eliminating the danger that it could be used as a vehicle to impose public-utility style rate of return regulation on railroads. Instead, the Department of Transportation should undertake a broader assessment of the industry's financial health over a number of years.
Most of these proposals would require legislative changes, and most would require some new (one-time) regulatory proceedings. All of them would help preserve the benefits of railroad deregulation by laying to rest the persistent problems that threaten to derail it.
Jerry Ellig is a senior research fellow with the Mercatus Center at George Mason University and a member of the committee that produced the Transportation Research Board report, "Modernizing Freight Rail Regulation," released in June.
In May, Republicans in Congress announced a joint budget resolution that, if enacted, would repeal Obamacare and balance the federal books in ten years. That is all well and good. Unfortunately, when they pass health-care legislation that actually has a chance of becoming law, they fail to pay for their promises. Can they be trusted to repeal and replace Obamacare with fiscally responsible, patient-centered health reform?
Last month, the Congressional Budget Office estimated that repealing Obamacare would increase the federal deficit by $353 billion over ten years, not counting the economic growth that would result from repeal. Factoring in such growth, the deficit would still rise by $137 billion. So, if Republicans actually repeal Obamacare, they would still have to cut $137 billion of spending from elsewhere in the budget.
Yet the Republicans have not even proposed minuscule spending cuts to pay for their current health-related bills. Their latest lapse involves the medical-device excise tax. This is a 2.3 percent tax on medical devices — from pacemakers to MRI scanners — to help pay for Obamacare.
On June 18, every Republican in the House of Representatives who was present voted to kill the tax, as did about one-fifth of Democrats. With those 46 Democrats joining the majority, the votes in favor added up to 280, just eight short of the number needed to override the promised presidential veto. The bill awaits a vote in the Senate.
President Obama has promised to veto the bill because it is fiscally irresponsible. The CBO estimates that device-tax repeal will increase the deficit by $24 billion in the next ten years. Spending offsets? Zero. Nada. Zilch.
If there is a chance to get rid of any part of Obamacare, it should be taken at the earliest opportunity. So, by all means, Congress should eliminate the medical-device tax. And if a repeal bill can get enough Democrats to override the president’s veto, better yet.
However, Congress has no excuse for avoiding the spending offsets necessary to prevent the deficit from rising. Indeed, finding spending cuts is easier now than it was a few years ago, when the device tax was expected to generate much more revenue than it has. Repealing the medical-device tax without enacting spending offsets does nothing to repeal Obamacare; it just gives us a deficit-financed Obamacare.
This episode is the second time in 2015 that the Republican-majority Congress has voted to increase deficit spending on health care. In April, they jacked up Medicare spending on physicians’ fees — winning the praise of physician lobbyists. At least that time around, they found a few pennies on the dollar to pay for the increase. Still, the CBO estimates the so-called Medicare “doc fix” will add $141 billion to the cumulative ten-year deficit.
In the grand scheme of government expenditures, or even just health spending, these are small sums. Anyone earnestly looking for spending offsets will have little trouble finding them. For 2016, the medical-device tax repeal will cost the federal government just $1.8 billion of revenue, while it will spend over $1 trillion on Medicare and Medicaid.
President Obama himself has proposed a way to cut Medicaid spending that should appeal to conservatives. In his February 2012 budget, he proposed reforms to "provider taxes." Because the federal government automatically matches (or, in most states, more than matches) each dollar the state pays for Medicaid, hospitals and state politicians have figured out a neat trick to maximize federal payments. Hospitals agree to a special state "tax," and the money flows into the state Medicaid program — and thereby attracts more federal dollars. Most of that money becomes hospital revenue, so hospitals actually earn more than they are "taxed."
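The round trip described above can be sketched with illustrative numbers. (The federal match rate and the share returned to hospitals below are assumptions chosen for illustration, not figures from this article.)

```python
# Illustrative sketch of the Medicaid "provider tax" round trip described above.
# All parameter values are hypothetical assumptions, not figures from the article.

def provider_tax_round_trip(tax, federal_match_rate, share_returned_to_hospital):
    """Return (federal_dollars_drawn, hospital_net_gain) for a given provider 'tax'."""
    state_spending = tax  # the hospital's "tax" becomes state Medicaid spending
    # Each state dollar draws federal dollars at the federal matching rate:
    # if the federal share of total spending is r, federal = state * r / (1 - r).
    federal_match = state_spending * federal_match_rate / (1 - federal_match_rate)
    total_program_dollars = state_spending + federal_match
    # Most of the money flows back to the hospital as Medicaid payments.
    hospital_revenue = total_program_dollars * share_returned_to_hospital
    return federal_match, hospital_revenue - tax

# Example: a $100 "tax," a 60 percent federal share, 90 percent returned to the hospital.
federal, net_gain = provider_tax_round_trip(100.0, 0.60, 0.90)
print(federal)   # federal dollars drawn by the $100 of state spending
print(net_gain)  # the hospital ends up ahead despite having been "taxed"
```

Under these assumed numbers, $100 of hospital "tax" draws $150 in federal matching funds, and the hospital recoups $225 of the $250 total, a net gain of $125 on a $100 "tax" — which is why the arrangement attracts more federal dollars than the state actually spends.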
Congress could stop this abuse and thereby save $22 billion over ten years. All it has to do is steal the Medicaid proposal from President Obama’s 2012 budget, and it would pay for almost all of the revenue lost from repealing the medical device tax.
The Obama administration is not known for fiscal discipline, but even the president has had enough of Republicans’ fiscally reckless approach to health spending. It is long past time for congressional Republicans to walk the talk on balancing the budget.
John R. Graham is an Independent Institute senior fellow and a senior fellow at the National Center for Policy Analysis.
Ann Coulter makes important points in her new book about our chaotic, overloaded immigration system. She is right that we have too much immigration, especially of the poor and poorly educated. That immigration is widening the gap between rich and poor and accelerating our decline into a country of haves and have-nots. That the job prospects of young blacks and many others are damaged by the influx of workers willing to work for whatever they can get. That poor enforcement of the law undermines public confidence in our government. That political correctness has muzzled liberals who were once committed to limited population growth in the name of environmental conservation. That immigration is transforming the electorate by expanding the numbers of people who depend on government programs and therefore are likely to vote for Democrats.
These are big issues that need far more examination than they receive from our news media. But the problem is that Coulter writes with such venomous hostility toward immigrants and their liberal enablers that most people will turn away from her screed as they would from a street-corner rant. Instead of creating space for the national discussion we badly need to have, she will once again stake her claim to the true-believing and legitimately frustrated Americans who make all her books bestsellers.
Coulter is the shock jock of the printed page. She writes with wit, hyperbole, and Cassandra's fascination with impending doom. Liberals think global warming is cooking our goose, but Coulter is convinced that immigration will get us first. She thinks it has become a sort of national self-immolation, brought to us courtesy of the soft-headed advocates of open borders and immigration unconstrained by law.
The title of Coulter's new book introduces her gloomy thesis: "¡Adios, America! The Left's Plan to Turn Our Country into a Third World Hellhole."
Coulter doesn't seem to like any immigrants except those like her Northern European ancestors. But she is particularly nasty to those from Mexico, the largest immigrant group by far, whose numbers have grown from about 700,000 in 1970 to more than 10 million today.
Coulter takes credit for Donald Trump's Mexican-immigrants-are-rapists rant. "Where do you think all that spicy stuff about Mexican rape culture came from?" she tweeted. Sure enough, Trump called the book "a great read."
Trump's obnoxious denunciation of Mexican immigrants at least included the caveat that "some, I assume, are good people." It was a concession not supported by Coulter, who prefers this description of our southern neighbors: "Mexicans specialize in corpse desecration, burning people alive, rolling human heads onto packed nightclub dance floors, dissolving bodies in acid, and hanging mutilated bodies from bridges."
This is nasty stuff. It's malicious hysteria. Coulter's reporting would benefit from a trip to the Mexican state of Jalisco, where tens of thousands of Americans have flocked to a retirement community that was featured on last night's PBS NewsHour. One of the retirees, who happened to be a native of Great Britain, summarized the contentment of her contemporaries when she said the Mexicans who work there "have compassion written into their DNA."
But Coulter sees immigrants from many lands as genetically or culturally predisposed to rape. She tells gruesome stories of brutal sexual attacks committed by the Hmong, tribal people admitted to the United States in order to shield them from retaliation for helping American forces in Southeast Asia. One of their most important advocates was Michael Johns, a former aide to President Reagan who said that to deny them asylum would be "a betrayal." That came in a 1995 article in William F. Buckley's National Review.
Coulter missed all that. Her story is that the Hmong were admitted under the 1965 immigration legislation that knocked out the old system that had favored Northern Europeans. That bill's principal Senate sponsor was Ted Kennedy. Therefore, Coulter erroneously concludes, Kennedy is the sponsor of the Hmong and is responsible for the rapes any of them committed. "Thank you, Teddy Kennedy," she writes sarcastically.
Kennedy is Coulter's public enemy No. 1. No. 2 is the New York Times, which she accuses of tailoring its immigration coverage to the open-border specifications of controversial billionaire Mexican businessman Carlos Slim. In 2009, when the Times was in financial peril, Slim lent the paper $250 million.
Coulter's conspiratorial theory is that in return for the money, the Times sold its journalistic soul. She spins a post-hoc-ergo-propter-hoc fantasy in which the Times was vigilant against illegal immigration until it cashed Slim's check. Her conclusion: "What a difference one thieving Mexican billionaire makes!"
As someone who has reported on Latin American immigration and politics for years, I am familiar with the style and tenor of Coulter's book. Ironically, her views from the strident right are reminiscent of a leftist tract that has long been a bestselling denunciation of the United States and Europe and all their imperialist works. The Open Veins of Latin America, by Eduardo Galeano, was aptly described by The Economist as "written in powerful prose, with intoxicating passion. But it is also a work of crude propaganda, a mix of selective truths, exaggeration and falsehood, caricature and conspiracy theory."
The same can be said of ¡Adios, America!
Jerry Kammer is a senior research fellow for the Center for Immigration Studies.
The unemployment rate has dropped to 5.3 percent, which is near the level some economists consider "full employment" and is substantially lower than the 10 percent peak in October 2009. Total nonfarm job creation has been 11.9 million since then, an average of almost 2 million jobs per year.
In the Obama administration's telling, these numbers prove that liberal policies of spending, taxing, and regulating create jobs. Unfortunately, a closer look at the data shows otherwise.
Many Americans have dropped out of the labor force, bringing the labor-force participation rate to a 40-year low of 62.6 percent after declining by 2.4 percentage points since October 2009. The broader "U-6" unemployment rate — which includes involuntary part-timers and those "marginally attached to the labor force" — now stands at 10.5 percent. It was below 9 percent for all of 2006 and 2007. These weak labor-market signals more accurately reflect Americans’ attitudes: A majority believe the economy is "getting worse," according to a recent Gallup poll.
Historical context is also important. Consider that after a severe recession in the early 1980s, the unemployment rate peaked at 10.8 percent in December 1982. But during the next six years, average annual job creation was 2.8 million, for a total of about 17 million — 5 million more than during the last six years, despite the fact that the U.S. population was only 80 percent of what it is now — while the participation rate increased by 2.4 percentage points, to 69 percent.
Austan Goolsbee, former chairman of the Council of Economic Advisers, and other liberal policy wonks claim that today's declining labor-force participation rate is simply a natural demographic phenomenon from an aging population. However, the share of the labor force that is at least 55 years old has not changed, and the share of the total civilian non-institutional population in that demographic has actually increased 1.5 percentage points since October 2009.
Further, the participation decline since October 2009 is not limited to the aging. The 16-19 age group's participation rate has fallen by 1.6 percentage points, the 20-24 age group's by 0.7 percentage points. Declining labor participation reduces the on-the-job training that is vital to increasing these groups’ lifetime earning potential.
The federal minimum-wage increase in 2009 and a host of other liberal policies arbitrarily increased the cost of employing the least educated and least skilled, and pushed many of them out of the labor market. This in turn forced many of them into government assistance, which starts a downward spiral of dependency that’s difficult to escape.
Perhaps an even greater threat to the nation's future prosperity is seen in those in their prime earning (and childrearing) years: 25 to 54. Their labor-force participation rate declined 1.7 percentage points. While some rationally choose to stay home or go to college after unsuccessful job searches, the loss of lifetime earnings and the burden of student loans (made artificially attractive by federal assistance) could have long-term consequences for many.
The increased cost of doing business from higher income-tax rates, Obamacare, stifling banking and environmental regulations, and other big-government policies has contributed to many Americans living in their parents' garage. This is in stark contrast to the Reagan administration's pro-growth policies of lower taxes and lighter regulation, an era when people started businesses in their garage.
Variations of these policies, along with a sensible lawsuit climate, have produced the successful Texas model, which has created 40 percent of all U.S. net nonfarm jobs since the start of the Great Recession while maintaining a 64.4 percent participation rate.
It’s time to implement time-tested, pro-growth policies that will invigorate the economy by getting government out of the way so Americans have the opportunity to fulfill their hopes and dreams.
Vance Ginn is an economist in the Center for Fiscal Policy at the Texas Public Policy Foundation, a non-profit, free-market research institute based in Austin. He may be reached at email@example.com.