Texas, the country's second-largest state, will convene its next legislative session in January. Among the bills already introduced during the "pre-filing" period, two seek slightly less than $3 billion in "Tuition Revenue Bonds" for new college-campus construction, which the universities would repay by raising student tuition.
On its face, such expansion seems natural for the Lone Star State, whose population is booming. Voting with their feet for the jobs created by Texas's relatively low taxes and common-sense regulatory environment, many Americans have been flocking to the state recently (487 a day, net). A bigger population means more college students, who would seem to require more college spaces -- more classrooms, laboratories, dorms, and the like -- to accommodate them. But is this necessarily the case?
To ensure that the tuition hikes that fund new buildings are fully warranted, two criteria should be met going forward. First, it must be shown that existing space is being used fully. Second, the state's growth in the college-age population must be weighed against the growth in the number of college students now taking some courses online. Online classes lessen the need for new brick-and-mortar classrooms.
As it considers its new campus construction proposals, Texas can learn from the example of the Pennsylvania State University System, which, after a building boom in the last decade, has been forced to cut costs. As a result, efficiency in allocating space has grown in importance. A recent report demonstrates that "facilities are second only to personnel in campus expenditures. ... On a five-million-square-foot campus, one percent of underutilized lab and office space equals about $3.7-million in wasted construction costs." Moreover, "maintenance, utilities, and renewal costs can compose about 70 percent of the lifetime costs of a building." Accordingly, says the University of Michigan's Paul Courant, if universities "make better use of existing space, we can save substantial funds."
With such savings, universities would experience less pressure to increase tuition prices, which have been escalating at unsustainable rates. In Texas, between 2003 and 2009, statewide average academic charges for a student taking 15 semester credit hours at a public university increased 72 percent in constant dollars. Nationwide, according to one study, average tuitions have risen 440 percent -- faster than general inflation and faster than health-care cost increases over the same period. To pay for tuition, students and their parents have taken on historic levels of debt. At $1.2 trillion, total student-loan debt is now, for the first time in history, greater than total national credit-card debt.
How much have campus construction projects contributed to this crisis? According to architectural planners Philip Parsons and Gregory Janks, college building has far exceeded the growth in the student population. After four decades of massive building, "the space per student has in some cases tripled since the 1970s," Parsons estimates. "Colleges have been prodigal." Janks agrees: "The mind-set that many institutions have had is that each institution needs to be complete unto itself, with one of every shiny toy that it can get, which means that there is often duplication of facilities on a regional basis. That leads to massive inefficiencies." Add to this the fact that, over the past few decades, class schedules "have narrowed to the middle of the day," leaving classroom space unused during the early morning and evening hours.
Like Penn State, New Jersey's Kean University discovered space inefficiencies. Only 11 percent of its classrooms were being used on Friday afternoons, and only 8 percent on Saturdays. Had it not expanded its class schedule, Kean would have been forced to hike tuition by nearly 20 percent. Instead, under its new, expanded schedule, classroom utilization on Fridays is now nearly 50 percent, and Saturday utilization totals 16 percent. The result? Kean has been able to accept more than 700 additional students without any new construction and with a tuition hike under 5 percent.
Kean's efforts speak forcefully to Texas. Kean trumpets its space-economizing measures in keeping college affordable for its students, one-quarter of whom are either first-generation Americans or first-generation college students. Texas is also home to a large and growing number of students who fall into these categories.
Another successful space- and tuition-saving initiative comes from BYU-Idaho, which has adopted a year-round academic calendar divided into three 14-week semesters; each student is assigned to a "track" and takes classes during two of them. This move is expected to increase enrollment by as much as 50 percent while generating savings: under the old system, buildings sat half-empty in the summer while salaried personnel continued working with far fewer students. BYU-Idaho projects that through this measure it can save 20 percent of these fixed costs per student while also raising teachers' salaries by 15 percent and giving faculty the month of August off.
As I have shown previously, online learning will further reduce the need for new buildings and thus lower college costs. Growing enrollment in online courses suggests that caution is called for before we declare the next new-classroom-building project shovel-ready. For over a decade, the Babson Group has tracked online learning nationwide. It finds that "the rate of growth in online enrollments is ten times that of the rate in all higher education." Over 6 million students enrolled in at least one online course during the fall 2010 term, an increase of 560,000 students over the previous year. With each additional online course taken, the need for space decreases.
Through incentivizing universities both to maximize their use of existing space and to offer additional courses online, the Texas legislature would go no small way toward ensuring a more affordable college education for Texas students and therewith smaller student-debt loads.
Thomas K. Lindsay directs the Center for Higher Education at the Texas Public Policy Foundation and is editor of SeeThruEdu.com. He was deputy chairman of the National Endowment for the Humanities under George W. Bush. He recently published Investigating American Democracy with Gary D. Glenn (Oxford University Press).
You'd think the GOP would side with ride-sharing companies such as Uber, Lyft, and Sidecar over the taxi cartels and the local governments that are trying to protect them. After all, the Republican party campaigns on smaller government, less regulation, and entrepreneurship. And this summer, the Republican party and RNC chairman Reince Priebus issued a petition on the GOP's website calling on readers to support the ride-sharing company Uber against "taxi-unions and liberal bureaucrats."
However, as Josh Barro noted recently in The New York Times, Republicans have not always lived up to their rhetoric when it comes to legalizing ride-sharing. He pointed to a recent study by the R Street Institute (where I work) and Engine, a group that promotes policies favoring start-ups, that graded 50 cities on how friendly they were to ride-sharing services and for-hire transportation more broadly. The study found no correlation between how friendly cities were to ride-sharing and the direction they leaned politically. For example, the three cities that earned A grades (Washington, Fresno, and Minneapolis) and the two that earned F grades (Portland and Las Vegas) are all decidedly blue in their voting patterns.
As for the South, the scores ranged from B+'s for Virginia Beach, Louisville, and Raleigh, to D-'s for San Antonio and Kansas City, Mo. As the battles over ride-sharing heat up all over the South, here's an opportunity for Southern Republicans to prove that they really are for limited government and that it's not just a catchphrase.
Lawmakers, regulators, and city officials are discussing or have recently discussed ride-sharing in Florida, Georgia, North Carolina, South Carolina, Tennessee, and Virginia. Most of these states are solidly Republican at the state -- and in many cases the local -- level. But the reaction to ride-sharing has been mixed, with some regulators and lawmakers wanting to ban the practice and others supporting a lighter touch that reflects the unique nature of these services: both Uber and Lyft, for example, use smartphone apps to schedule and dispatch rides, whereas traditional taxis are dispatched by telephone or hailed off the street.
All too often, state and local officials have been quick to place ride-sharing services in the box of unlicensed taxi operators. Major cities across the country restrict the supply of taxi licenses, sometimes forcing operators to buy taxi "medallions" from the local government. In New York City, medallions have topped $1 million, and an entire industry has sprung up around financing them. Traditional taxi operators also submit to extensive local regulation on everything from where stickers are placed to how much they can charge in fares. Ride-sharing companies argue that they are already much more transparent than taxis, both in the prices they charge and in offering a rating system for every driver and every passenger.
If Republicans in the South want to show they're truly for free markets and limited government, they should support efforts both to avoid overregulating these new services and to repeal the anti-competitive measures already on the books for taxis and limos. They should work to ensure that all segments of the for-hire driver market compete on an even playing field. For example, there should be uniform minimum requirements with regard to such things as background checks and liability insurance.
Bold Southern Republicans should even consider abolishing taxi medallions entirely. Let taxis compete with ride-sharing services (and each other) on rates, and loosen the numerous regulations companies have to comply with that have nothing to do with public safety.
The American people are right to ask whether conservative politicians stand for less government, or for cronyism and big business. If Republicans in the solid-red South decide to lead the way in creating a level playing field for ride-sharing companies and traditional taxi cabs to compete, it would go a long way toward proving conservatives can walk the walk, and not just talk the talk.
Kevin Boyd is an associate policy analyst with the R Street Institute. He lives in Louisiana.
You hear it so often it's almost a cliché: The nation is facing a serious shortage of doctors, particularly doctors who practice primary care, in the coming years.
But is that really the case?
Many medical groups, led by the Association of American Medical Colleges, say there's little doubt. "We think the shortage is going to be close to 130,000 in the next 10 to 12 years," says Atul Grover, the group's chief public policy officer.
But others, particularly health care economists, are less convinced. "Concerns that the nation faces a looming physician shortage, particularly in primary care specialties, are common," wrote an expert panel of the Institute of Medicine (IOM) in a report on the financing of graduate medical education in July. "The committee did not find credible evidence to support such claims."
Gail Wilensky, a health economist and co-chair of the IOM panel, says previous predictions of impending shortages "haven't even been directionally correct sometimes. Which is we thought we were going into a surplus and we ended up in a shortage, or vice versa."
Those warning of a shortage have a strong case. Not only are millions of Americans gaining coverage through the Affordable Care Act, but 10,000 baby boomers are becoming eligible for Medicare every day. And older people tend to have more medical needs.
"We know essentially with the doubling of the population over the age of 65 over the course of a couple of decades, they're driving the demand for services," says Grover.
In addition to a numerical shortage, there's also a mismatch between what kind of doctors the nation is producing and the kind of doctors it needs, says Andrew Bazemore, a family physician with the Robert Graham Center, an independent project of the American Academy of Family Physicians.
"We do a lot of our training in the northeastern part of our country, and it's not surprising that the largest ratio of physicians and other providers, in general, also appear in those areas," says Bazemore. "We have shown again and again that where you train matters an awful lot to where you practice." That ends up resulting in an oversupply in urban centers in the Northeast and an undersupply elsewhere.
Even aside from geography, there are other questions, he says, such as "do the providers reflect the populations they serve? And that means by their race and ethnicity, by their age, by their gender?"
While few dispute the idea that there will be a growing need for primary care in the coming years, it is not at all clear whether all those primary care services have to be provided by doctors.
"There are a lot of services that can be provided by a lot of people other than primary care doctors," says Wilensky. That includes physician assistants, nurse practitioners, and even pharmacists and social workers.
"How many physicians we ‘need' depends entirely on how the delivery system is organized," Wilensky says. "What we allow other health care professionals to do; whether they are reimbursed in a reasonable way that will increase the interest in having people go into those professions."
Currently, physicians who are specialists make considerably more than those who practice primary care, which many experts say is a huge deterrent to doctors becoming generalists, particularly when they have large medical school loans to pay off.
At the same time, "team-based care," in which a physician oversees a group of health professionals, is considered by many to be not only more cost-effective, but also a way to lower the number of doctors the nation needs to train.
"All of the efforts to the future…are to mold and morph our medical system into one that is less ‘single-combat warriors' practicing medicine here and there, and physicians and others practicing in efficient systems," says Fitzhugh Mullan, a professor of medicine and health policy at George Washington University.
Until that happens, though, Atul Grover of the AAMC says the nation needs to be training far more physicians.
"We don't think we should put patients at risk by saying 'Let's not train enough doctors just in case everything lines up perfectly and we don't need them,'" Grover said in a recent appearance on C-SPAN.
Wilensky is among those who find that attitude wasteful. "Are you really serious?" she says. "You're talking about somebody who is potentially 12 to 15 years post high school, to invest in a skill set that we're not sure we're going to need?"
And it's not just the individuals who could be at risk for wasteful spending. "Training another doctor isn't cheap," says Mullan. "Isn't cheap for the individual doing the training, isn't cheap for the institution providing the education, and ultimately isn't cheap for the health system. Because the more doctors we have, the more activity there will be."
Princeton health economist Uwe Reinhardt points out that groups like the AAMC have a self-interest in saying there's a shortage, to move more money towards the medical schools and hospitals it represents.
"Anything that would move money their way they would favor," he says.
Reinhardt also says that a small shortage of physicians would probably be preferable to a surplus, because it would spur innovative ways to provide care.
"My view is whatever the physician supply is, the system will adjust. And cope with it," he says. "And if it gets really tight, we will invent stuff to deal with it."
This article was produced by Kaiser Health News, on whose website it originally appeared, with support from The SCAN Foundation. Kaiser Health News (KHN) is a nonprofit national health policy news service.
Air traffic congestion often raises safety concerns for passengers. In the last year, U.S. airlines flew 753 million passengers domestically and internationally. As Thanksgiving Day approaches, airline travel will reach its most hectic pace across the country, with Los Angeles International and Chicago O’Hare predicted to be the two busiest domestic airports. On top of the holiday bustle, there are reports that flight delays could soon reach their worst levels in twenty years. While air safety should always be the regulatory priority, recent policy changes at the Federal Aviation Administration (FAA) have raised some serious questions, and the flying public deserves answers.
Recall that Ronald Reagan fired over 12,000 striking government air traffic controllers in 1981. Now most of the controllers hired to replace the strikers face mandatory retirement. In fact, according to a report from the U.S. Department of Transportation’s Office of Inspector General, more than 11,700 air traffic controllers will retire by 2021. While that should be enough to get the FAA geared up to meet this growing challenge, a string of problems – agency mismanagement and overspending, a proposal to sideline the current training program, and the turning away of potentially prime candidates for the training program – is impeding the effort to get more controllers into the airport towers where they are needed.
Recent news reports offer a quick reminder of how air traffic controller shortages can create adverse consequences for both travelers and airlines – as when last year’s sequestration made flight delays and cancellations commonplace, or when fire recently struck an airport control tower in Chicago. These examples provide ample evidence of the harms that can occur in the face of a shortage. They also demonstrate how a problem at one airport can produce cascading problems, including delays and cancellations, at airports throughout the nation.
At a time when the FAA needs to ramp up hiring and training to fill the growing void, its proposal to do away with the current air traffic controller training program defies logic. First, since training can take two or more years to complete, there is an immediate need to keep the process moving in order to minimize the shortage of trained controllers. Impending shortages that the FAA should have foreseen would bring stress for existing controllers and flight delays for passengers, which in turn would mean increased safety risks for passengers and needless costs for airlines. The resulting costs, which could reach billions of dollars, would be passed on to consumers in the form of higher airfares.
Second, the FAA has abandoned its commonsense practice of recruiting students from flight schools and tapping already-trained veterans leaving the military – candidates with direct knowledge and experience. Instead, the agency is now recruiting off-the-street hires with no previous experience, whose training takes twice as long and costs more.
All in all, the timing of the FAA’s decision does not coincide with the needs of the flying public, and it is inconsistent with the agency’s focus on public safety. It amounts to regulatory malpractice and a problem that policymakers will need to take quick action to fix.
With nearly 90,000 flights in the U.S. each day, having more eyes on the sky seems as important as ever. The FAA’s decision to throw the baby out with the bathwater seems irresponsible, and it could jeopardize public safety. At the very least, its actions would increase flight delays and airline costs, which would ultimately cost consumers more in lost time and higher prices.
Indeed, if flight delays and cancellations increased by a mere 1 percent, the cost to American consumers would, by my estimates, run well over $1 billion in lost time, and it would also mean increased costs for airlines. In short, passengers lose – and all of the impending airline delays, as well as the potential safety risks associated with increased traffic congestion, could have been avoided.
The FAA needs to revisit its proposal to shut down its established air traffic controller training program and instead direct its attention to accelerating training immediately. That effort would avoid stretching controllers too thin, sparing the public a great deal of misery and cost from delays as the busy holiday season approaches and, more importantly, for years to come.
Steve Pociask is president of the American Consumer Institute Center for Citizen Research, a nonprofit educational and research organization. For more information about the Institute, visit www.theamericanconsumer.org.
“This bill [Affordable Care Act] was written in a tortured way to make sure the C.B.O. did not score the mandate as taxes. If C.B.O. scored the mandate as taxes, the bill dies.”—Jonathan Gruber
Shockingly, political considerations were in play in the passage of the Affordable Care Act. This was not particularly hard to see at the time. But Jonathan Gruber, he of the stupid American voter and architect of Obamacare, has exploded any remaining pretense.
One man in particular should be paying attention.
In his opinion upholding the Affordable Care Act’s individual mandate, Chief Justice John Roberts noted that it’s not the Court’s “job to protect the people from the consequences of their political choices.”
Clearly, the “political choices” the Chief Justice had in mind were electoral. The people elect their legislative and executive branch representatives, and must live with the consequences of the collective decisions made by those representatives – at least until they have a chance to throw them out of office.
What Roberts failed to appreciate—or willfully ignored—was that his decision did, in fact, offer protection from political choices, though of a different sort. Roberts’s opinion upholding the constitutionality of the mandate hinged on Congress’s authority to tax. What Congress labeled a “penalty” in the language of the law could be construed as a “tax,” according to Roberts, and thus was within Congress’s power. Roberts wrote, “That choice [of label] does not, however, control whether an exaction is within Congress’s constitutional power to tax.” In other words, Congress could do that which it said it was not doing. An interesting interpretation, for sure, but also one that protected Congress “from the consequences of [its] political decisions.”
Congressional Democrats certainly could have written a bill expressly levying a new tax to promote the purchase of health insurance, without any question of constitutionality. But they didn’t. They chose to write Obamacare with penalties rather than taxes. The bill’s authors, and Jonathan Gruber, recognized the political cost of increasing taxes—both to the bill (it would not have passed) and to themselves (fear of Democratic electoral losses).
This is all relevant today because the Supreme Court has agreed to hear King v. Burwell, a case challenging the federal government’s ability to subsidize insurance (“premium assistance”) through insurance exchanges set up by the federal government. The plain text of Obamacare provides subsidies to those enrolled through exchanges “established by the State.” Those challenging the implementation of the law argue that the IRS is not authorized to issue subsidies through the federally established exchanges.
In recently revealed comments from 2012, Gruber offers Americans insight into the naked political considerations underlying this provision. Gruber argued the law was explicitly designed to “squeeze” states into setting up their own exchanges. The political calculation was that the cost of not expanding coverage through exchanges would be too great for governors and state legislators: “What’s important to remember politically about this is if you're a state and you don’t set up an exchange, that means your citizens don't get their tax credits—but your citizens still pay the taxes that support this bill. …I hope that that's a blatant enough political reality that states will get their act together...” The law could have been written to clarify that the federal government could offer subsidies through its own exchanges – but, by political choice, it wasn’t.
Democrats were wrong in their calculations, however: 36 states have not established exchanges. The federal government established them in those states instead. If the law were applied as written, citizens in those states could not access subsidies through the federal insurance exchanges. Not one to be constrained by the language of a law, however, the Obama administration and the IRS have chosen to provide subsidies for those on federal exchanges, deciding “the state” really means “the federal government too.”
Again the Court is faced with political decisions and their consequences. Will Chief Justice Roberts note that the decisions of those 36 states not to set up state exchanges are the consequence of the political choices made by the people in those states?
Or will he and the Court override those political decisions? Will he again protect Congress from the consequences of its own political choices?
He would be wrong to do so – just as he was wrong in 2012.
Joel Scanlon is the Director of Studies at the Hudson Institute.
Over the weekend I spoke on a panel at the Millennial Success Conference hosted by GenFKD. FKD stands for “Financial Knowledge Development”; the organization is funded at least in part by The Home Depot founder Bernie Marcus, who beamed in a video message about entrepreneurship.
The panel was on “Millennial Identity,” and under the tutelage of RCP’s David DeRosiers, I sat alongside fellow Millennials Elizabeth Plank, Spencer Carnes, and Gabrielle Jackson. We generally agreed the defining historical moments for Millennials were 9-11 and the financial crisis of 2008, events creating profound turbulence for our generation. During the Q&A portion a member of the audience asked how we balance the notion of corporate responsibility--specifically mentioning Apple’s labor practices in China--while in pursuit of success.
Jackson, a thoughtful rising star in DC, mentioned that she faced this quandary while working at a PR firm whose clients included Wal-Mart. This flummoxed her a bit, since she’d previously spoken out about the corporate behemoth’s labor practices. Wal-Mart is a frequent target for critics who question the fairness of its wages and health-care offerings and its unparalleled ability to drive mom-and-pop stores out of business.
While there wasn’t time, I wanted to expound on that scenario a bit to bring in another economic consideration: the significant "consumer surplus" wrought by Wally World’s driving prices to rock bottom. Sure, wages are low, but on balance the economic gains are wonderful for consumers. And since consumers of Wal-Mart goods tend to be the poorest among us, that’s an added net benefit to society.
As I’ve written elsewhere, I was one of eight children in a low-income family, and during my early childhood Wal-Mart greatly enhanced the quality of our lives. Yes, people complain about the quality of the products vs. traditional mom-and-pop shops, but if those products were above our price point, they were totally irrelevant for us. And that meant higher-paying jobs at those mom-and-pop shops were out of reach for many workers, too.
Quantitatively, my story is one of millions that aggregate to some $50 billion in savings for American consumers each year, according to a study highlighted by Gregory Mankiw, chairman of Harvard University’s Department of Economics. That means $50 billion more in Americans’ pockets to be used for many other purposes, whether education, travel, business creation, you name it.
The study’s authors, economists with Massachusetts Institute of Technology and the United States Department of Agriculture, write that “while we do not estimate the costs to workers who may receive lower wages and benefits, we find the effects of supercenter entry and expansion to be sufficiently large so that overall we find it to be extremely unlikely that the expansion of supercenters does not confer a significant overall benefit to consumers.”
They break out consumers by income bracket and show that supercenters yield, in economic-speak, an increasing “compensating variation” as a shopper’s income declines. In plain English, that means the personal economic benefit grows powerfully as one moves down the income scale -- by nearly 50 percent from the highest bracket to the lowest.
Like any firm believer in free markets, I despise crony capitalism and unsafe, exploitative labor and environmental practices around the world. We live in an imperfect world, though to cite Rev. Martin Luther King, Jr., “The arc of the moral universe is long, but it bends towards justice.” The Millennial generation is infused with a profound reverence for social justice; I would argue that globalization enhances social justice around the world. For every cherry-picked, soulless mogul, there's also a Bill Gates curing diseases and alleviating poverty.
If we take the world’s current GDP of roughly $85 trillion and divide it by 7 billion people, that’s only about $12,000 per person per year -- hardly enough to live well by developed-world standards. We need to increase GDP rather than enact redistributionist, sclerotic policies in the utopian hope of creating social justice. Global poverty can be slain through government reforms that allow free markets to flourish and increase global GDP. Yet some 1.5 billion people still live under communism, and India is a massive democracy plagued by crony capitalism and red tape. While we abhor nightmare scenes such as collapsing factories in Bangladesh, the developed world shows us the exciting possibilities. And Wal-Mart is certainly one of them.
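For readers who want to check that division, here is a minimal back-of-the-envelope sketch; the $85 trillion and 7 billion figures are the round numbers used above, not precise estimates:

```python
# Rough world per-capita GDP from round figures.
world_gdp_usd = 85e12   # roughly $85 trillion in world GDP
population = 7e9        # roughly 7 billion people

per_capita = world_gdp_usd / population
print(f"${per_capita:,.0f} per person per year")  # → $12,143 per person per year
```

The point survives the arithmetic: dividing today's output evenly would leave everyone near the global average, so raising total output matters more than redistributing it.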
MIT professor Jon Gruber is getting a lot of flak lately. As the intellectual architect of ObamaCare, he has shocked a lot of people with his video confessions that passing health reform required “deception” because the public is too “stupid” to understand what needs to be done.
I believe voters are smart. And that with three simple (and very transparent) reforms we could replace the mess that is ObamaCare with a health system the public would readily accept:
1. Replace all the ObamaCare mandates and subsidies with a universal tax credit that is the same for everyone.
2. Allow Medicaid (or private insurance that looks very much like Medicaid) to compete with other insurance, with everyone having the right to buy in or get out.
3. Denationalize and deregulate the exchanges.
You could have a very workable health care system by making these changes and these changes alone.
Technical problems with the online exchanges would be gone. Virtually every problem with the online exchanges has one and only one cause: people at different income levels and in different insurance pools get different subsidies from the federal government.
In theory, when you apply for insurance on an exchange, the exchange needs to check with the IRS to verify your income; it needs to check with Social Security to see how many different employers you work for; it needs to check with the Department of Labor to see if those employers are offering affordable, qualified insurance; and it has to check with your state Medicaid program to see if you are eligible for that.
To make matters worse, everyone’s subsidy is almost certain to be wrong – leading to refunds or extra taxes next April 15th.
With a universal tax credit, it wouldn’t matter where you work or what your employer offers you. It wouldn’t matter what your income is. It wouldn’t matter if you qualify for Medicaid.
All the perverse outcomes in the labor market would be gone. As is well known, employers have perverse incentives to keep the number of employees small, to reduce their hours of work, to use independent contractors and temp labor instead of full-time employees, to end insurance for below-average-wage employees, and to self-insure while the workforce is healthy and pay fines instead of providing the insurance the law requires.
With a universal tax credit and no mandate, all of these perversions would be gone. The subsidy for private health insurance would be the same for all: whether employees work more or fewer than 30 hours a week; whether their workplace has more or fewer than 50 employees; and whether they obtain insurance at work or on their own.
The “race to the bottom” in the health insurance exchanges would end. Health insurers are choosing narrow networks in order to keep costs down and premiums low. They are doing that on the theory that only sick people pay attention to networks and the healthy buy on price; and they are clearly trying to attract the healthy and avoid the sick.
The perverse incentives that are causing these results have one and only one cause: when individuals enter a health plan, the premium the insurer receives is different from the enrollee’s expected medical costs.
Precisely the opposite happens in the Medicare Advantage program, where Medicare makes a significant effort to pay insurers an actuarially fair premium. The enrollees themselves all pay the same premium, but Medicare adds an additional sum, depending on the enrollee’s expected costs.
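A minimal sketch of that risk-adjustment logic, with deliberately hypothetical dollar amounts (neither the flat premium nor the expected costs below are actual Medicare figures):

```python
# Sketch of the risk-adjusted payment described above: every enrollee
# pays the same community premium, and the payer adds a top-up so the
# insurer's total receipt matches the enrollee's expected medical cost.
# All dollar amounts are hypothetical, not actual Medicare rates.
COMMUNITY_PREMIUM = 4000  # flat premium every enrollee pays (hypothetical)

def insurer_receives(expected_cost):
    top_up = expected_cost - COMMUNITY_PREMIUM  # negative for the healthy
    return COMMUNITY_PREMIUM + top_up           # equals expected cost

print(insurer_receives(3000))    # healthy enrollee: insurer gets 3000
print(insurer_receives(25000))   # sick enrollee: insurer gets 25000
```

Because the insurer's receipt equals expected cost for every enrollee, it has no financial reason to court the healthy and dodge the sick.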
What I call “change of health status insurance” would accomplish the same result. The only difference is that the extra premium adjustments would be paid by one insurer to another and the amount paid would be determined in the marketplace — not by Medicare.
People would no longer be trapped in one insurance system rather than another. If you are offered affordable coverage by an employer, you cannot get subsidized insurance in the exchange. If you are eligible for Medicaid, you are not allowed into the exchange. And if your income is below 100% of the poverty level, you are not allowed into the exchange -- even if you aren’t eligible for Medicaid.
To make matters worse, eligibility for one system versus another will change frequently for millions of people because of fluctuations in their incomes.
With a universal tax credit that is independent of income, it would not matter where people get their insurance. People could join a plan and stay there.
Note: This change would work best if the universal tax credit is set at the level the CBO estimates a new enrollee in Medicaid will cost. Currently, that’s about $2,500 for an adult and $8,000 for a family of four.
There you have it: three simple, transparent changes, and millions of problems vanish in a heartbeat.
Every year, the College Board publishes its Trends in College Pricing and Trends in Student Aid reports. And every year, the news is the same: the price of college is up; debt is up; and the benefits of a college education are moving farther and farther away from the average family.
On one level, this year’s reports are no different. Average tuition and fees for in-state students at public four-year colleges increased 2.9 percent over the past year. The average tuition at two-year colleges increased 3.3 percent, while tuition at private nonprofit colleges increased 3.7 percent. And roughly 60 percent of students who earned bachelor’s degrees in 2012-13 from the institutions at which they began their studies graduated with debt. They borrowed an average of $27,300—an increase of 13 percent over five years. And the grand total of student debt is up too.
But there is also what appears to be a bit of good news. The rate of price increase is actually down, and the sum of what students borrowed this year was 13 percent lower than their borrowing in 2010-11.
Some are greeting these reports with optimism, taking them as evidence that calls for dramatic higher ed reforms are premature. According to Inside Higher Ed, “Justin Draeger, president of the National Association of Student Financial Aid Administrators, said this year’s reports are good news over all. They’re also a good reminder of why permanent changes, such as cuts to Pell Grants, shouldn’t be made in response to acute budgetary problems.”
But when one digs into the data, even the “good” news reveals itself to be superficial.
First, it is important to remember why tuition and fees had been growing at such an alarming rate over the past several years, especially at public institutions. The Great Recession put a tremendous amount of strain on college and university budgets, as state appropriations evaporated and families’ finances suffered. Instead of making the tough decisions that would allow them to maintain academic quality while cutting back, many schools made up their financial shortfalls by passing the costs on to students in the form of higher tuition and fees. The decline in the rate of price increase this year doesn’t reflect institutions’ learning to control costs; it is simply the process of returning toward the pre-recession status quo.
Furthermore, much of the fall in total student borrowing is the result of sharp declines in enrollment. As the College Board notes, “Growth in full-time equivalent (FTE) postsecondary enrollment of 16% over the first three years, followed by a decline of 4% over the next three years, contributed to this pattern [of declining student borrowing].” Far from representing a success, this fact illustrates that more and more families feel higher education is out of reach. Even when it comes to the per-student decrease in borrowing, which is unaffected by enrollment, grant aid has simply taken up much of the slack. Less borrowing has little to do with greater cost effectiveness.
And all of this comes at a time when the average American’s income remains stagnant. So, even as growth in tuition and fees slows, a college education continues to become increasingly less affordable for most Americans.
Finally, it is also important to remember what the College Board’s reports don’t measure: what students are getting for all of this money. The publication of these reports comes not long after Richard Arum and Josipa Roksa released Aspiring Adults Adrift, the follow-up book to their 2011 study on the limited learning that occurs on college campuses. What they found is that today’s graduates are entering the world less prepared than ever. They lack the skills to be useful employees and the knowledge to be informed citizens. College price growth isn’t just outpacing inflation; it’s outpacing student learning by light-years.
A slower increase in tuition and fees and less student borrowing are surely good news. But the fundamental problems plaguing higher education remain as acute as ever. Far from dulling our desire for higher ed reform, a deeper look into the data should spur the country to stop focusing on the symptoms and begin tackling the root causes of our higher ed crisis.
Last Friday, America’s four postal employee unions organized a mass protest against Postmaster General Patrick Donahoe’s plan to shut down 80 distribution centers in January 2015. The postal workers, quite understandably, see their livelihoods at stake. Many reformers, however, see the rising share of public sector unionization as a drain on our tax dollars and a likely source of government growth—which, as new research reveals, may not be the case.
Regardless of where one falls on controversies like the postal workers' protest or the attempted recall of Wisconsin Governor Scott Walker in 2012, most of us recognize the need for states to keep their promises to government workers, retirees, and citizens who rely on essential state services like education, Medicaid and public safety. In a study published today by the Mercatus Center at George Mason University, we outline just how challenging this can be for policymakers. Public sector unions are highly effective at securing pay and benefits for their members, but appear to have no effect on overall government spending. This leaves an obvious question: How are we paying for everything?
In our new research, we examine public sector union lobbying and collective bargaining activity. Because unions have several tools at their disposal to influence policy, it is difficult to gauge each tool’s effect on workers and taxpayers. To address this, we measured the impact of unions’ collective bargaining rights and political contributions on state budgets and employee compensation. After controlling for a number of factors, we made two important findings:
First, political activity by public sector unions works. Specifically, more collective bargaining tends to mean more government jobs, and more union political spending tends to mean higher growth in employees’ incomes. Rather than demonize unions, we should recognize that they are responding to strong political incentives. Their job is to take care of their members, and they do this extremely well. In economic terms, public sector unionization functions as a “club good” where members pay dues and, in return, receive higher salaries.
Second, while many public sector union critics believe they are a driving force behind government growth—according to the numbers we examined—union political activity does not appear to lead to higher state government spending. Instead, our findings suggest that it is geared toward securing a larger share of an existing pie, rather than growing the government pie. There appears to be a tradeoff between spending on public services and spending on employees.
We also find similar results for teachers’ unions: They take care of their members, and the data clearly indicate that stronger unions and more activity translate into higher salaries for teachers. But, again, the data do not show increased teachers’ union spending leading to increases in overall state spending. So it’s reasonable to wonder whether in-classroom funding is suffering.
In our current economic environment -- where wages are stagnant and state budgets are already being squeezed by declining revenue -- these findings are doubly important for policymakers. Budgets are unlikely to rise, so increased public sector union activity seems likely to come at the cost of other services. As a result, we can expect to hear more stories like those coming from Detroit, San Bernardino, and Stockton, California -- municipal bankruptcies driven in large part by policymakers’ inability to balance union priorities with financial commitments to the general public.
While our data indicate that the unions may not drive much new spending growth, they carve out such a large share of budgets for their members that municipal governments seem destined to fail. If the nationwide pension crisis—which could very well be related to the dynamic we’ve uncovered—is any indication, the longer politicians wait to address the problem, the more painful the fix will be for public workers and retirees.
The scene from failing cities is not all that different from what we’re seeing this week with postal employees: Their unions are fighting hard to protect their members and are willing to go down swinging to get the job done. But with states either unable or unwilling to increase the overall size of government, the result for American taxpayers is an increasingly squeezed public sector that is being asked again and again to do more with less.
Policymakers have a different job: to balance the priorities of different interest groups and the general public. Let’s hope they’re up to the challenge.
Scott Beaulier is chair of the economics and finance division and director of the Johnson Center at Troy University. George Crowley is an assistant professor of economics in the Johnson Center at Troy University. They are the authors of a new working paper published by the Mercatus Center at George Mason University on “Public-Sector Unions and Government Policy: The Effects of Political Contributions and Collective Bargaining Rights Reexamined.”
The reaction to this week’s joint announcement by the U.S. and China on plans to drastically cut emissions has been mixed. According to the fact sheet released by the White House, under the agreement the U.S. agrees to cut net greenhouse gas emissions to 26-28 percent below 2005 levels by 2025. President Xi Jinping of China announced his intention to halt the increase in China's CO2 emissions by 2030, with an attempt to peak earlier, and to increase the non-fossil fuel share of China's energy usage to around 20 percent by 2030.
One major step for the U.S. is the EPA’s recently released Clean Power Plan, with the goal of reducing power sector emissions for existing power plants to 30% below 2005 levels by 2030. However, critics of the proposal have already voiced numerous concerns about the legality and feasibility of the plan, as well as concerns about the plan's impact on the reliability of the power grid. Reliability will be a key issue since the plan intends to dramatically decrease the share of coal fired generation relative to the nationwide electric power generation mix in favor of renewables.
The shale renaissance that created an abundant supply of natural gas in the U.S. has been one of the key factors in the switch away from the use of coal in electricity generation. In fact, over the last decade, the increase in electricity generated by natural gas reduced the share of coal in electricity generation by 10 percentage points. A recent estimate by the Government Accountability Office states that, since 2012, 13 percent of the country’s coal capacity has either been retired or is planned to be retired by 2025.
At the same time, the country’s nuclear generation capacity is also under threat. In addition to low natural gas prices, subsidies for renewable energy undermine the value of nuclear plants, causing premature retirement of these plants.
Diversity of supply (or integration of different fuels and technologies) plays a key role in lowering the cost of electricity generation, as well as maintaining reliability, and also reduces the variability in monthly power bills. This past winter’s polar vortex was a perfect case study for how delivery and price issues in one fuel source can impact electricity consumers. The situation could have been worse if the system had not had other fuel sources, mainly coal, to provide a relief valve for power generation in the Midwest and East.
In fact, a recent study conducted by IHS Energy shows how valuable a diverse power supply is for U.S. electricity generation and, consequently, the U.S. economy. Comparing the current mix of supply with a hypothetical case in which there is no meaningful contribution from coal and nuclear, the study found that the cost of generating electricity would be $93 billion higher per year without coal and nuclear. The study also calculates the macroeconomic impacts of a less diverse energy supply. The increase in the cost of electricity would reduce real U.S. GDP by nearly $200 billion, lead to roughly 1 million fewer jobs, and reduce the typical household’s annual disposable income by around $2,100 within the three years after the power price changes.
Then there is the issue of efficiently integrating renewable power sources into the nation’s power grid. The aging of the nation's infrastructure has been a concern for the last decade without any major action to address the issue. In fact, according to the 2013 Report Card for America’s Infrastructure, conducted every 4 years by the American Society of Civil Engineers, the country’s grade for energy and the national power grid is D+, which means poor and at risk. Similarly, a new assessment by the grid overseer North American Electric Reliability Corp. argues that the surge toward natural gas and renewable energy, driven by cheap gas and new government rules and policies, is creating reliability concerns -- especially in the Midwest, New York and Texas -- and weakening buffers against blackouts. Furthermore, this analysis did not include the impact of the EPA's Clean Power Plan, which can only exacerbate reliability concerns.
As any smart investor would know, it is not wise to put all your eggs in one basket. Unfortunately, the current regulatory climate both at the state and federal levels is encouraging the trend of decreasing the diversity of our power supply in electricity generation. A closer look at policies that encourage the phase out of certain fuels, like the Clean Power Plan and state level renewable portfolio standards, is warranted. While fighting climate change is a noble goal, there need to be smart, cost effective ways of dealing with the problem. As a new paper by Hugh Byrd and Steve Matthewman concluded: “no matter how smart a city may be, it becomes dumb when the power goes out.”
Dr. Pınar Çebi Wilber is a senior economist for the American Council for Capital Formation, a nonprofit, nonpartisan organization promoting pro-capital formation policies and cost-effective regulatory policies.
Vacations are different from weekends.
Two days off at the end of the week is nice, but anyone who's ever felt Sunday-afternoon angst knows you need a lot more than that to get a true respite from the working world. Three or four days in, you finally start relaxing. After a week, you stop compulsively checking your e-mail. Another week and PowerPoints and conference calls begin to seem a fanciful memory, not unlike payphones and smoking in restaurants.
Unfortunately, the depth of the relaxation is matched only by the harshness of the wake-up call. Upon returning to the office, you are at once inundated and overwhelmed, unable to remember what you were working on or, worse still, why it was important.
For most Americans, a really long vacation might last for a couple of weeks. Now imagine how you'd feel if you were out of work for a year. You'd notice your skills depreciating. Your self-esteem might take a hit. And if you didn't have a set job to return to, you might start to doubt if you'd be able to find a job at all.
If you can appreciate these feelings, you can begin to get a sense of why long-term unemployment is important.
Long-Term Unemployment at Historic Highs
You may have heard that unemployment has been dropping recently. It's true. At 5.8 percent, it's only a quarter above its 2007 average.
But seven years after the onset of the Great Recession, long-term unemployment remains at 1.9 percent. Although it has steadily fallen from its record of 4.5 percent in 2010, it's still higher than it was at any time between 1983 and the Great Recession.
In normal times, most unemployment is of the short-term variety, which is defined as spells lasting 26 weeks or less. Take a look at the figure below, which comes from my new report, "Uncovering the Labor Market Recovery," published last week by the Century Foundation. From 2000 to 2007, long-term unemployment constituted less than a fifth of total unemployment, on average.
But in the aftermath of the recession it spiked, grabbing a 45 percent share in 2010. Four years later, the long-term unemployed are still a third of all unemployed. That means one in every three unemployed workers has been without a job for 27 weeks or more. That's 2.9 million people.
So while the short-term unemployment rate is just 4.1 percent above its pre-recession average, the long-term unemployment rate is still elevated by a staggering 130 percent. As of October, the average unemployed worker had been out of work for 32.7 weeks -- nearly a doubling of the average duration of unemployment in 2007.
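Those figures hang together; here is the back-of-the-envelope arithmetic, using the rates quoted in the text:

```python
# Back-of-the-envelope check of the long-term unemployment figures above.
lt_rate = 1.9        # long-term unemployment rate, percent of labor force
total_rate = 5.8     # overall unemployment rate, percent

# Share of all unemployment that is long-term:
share = lt_rate / total_rate
print(f"{share:.0%}")             # about one in three

# "Elevated by 130 percent" implies a pre-recession baseline of roughly:
baseline = lt_rate / 2.3          # 130% above baseline means 2.3x baseline
print(round(baseline, 2))         # just under 1 percent of the labor force
```

The implied baseline of about 0.8 percent matches the 1980-2007 average cited later in the piece.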
Duration matters. Like a summer vacation on steroids, time spent out of work causes productive potential to erode. What's more, the long-term unemployed often must switch industries or change occupations, which requires not just retaining old skills but acquiring new ones.
Employers know this. So the longer someone sits on the sidelines, the larger the stigma they must overcome in making their case to recruiters. And the longer the drought, the more skills dry up, which makes finding a job harder still -- a vicious cycle.
Effects of Long-Term Unemployment Can Be Permanent
The unemployed themselves aren't the only ones hurt by this unforgiving pattern. At the macro level, sustained labor underutilization can permanently harm an economy's productive capacity. Economists refer to this as hysteresis, a term borrowed from physics that describes situations in which temporary perturbations have permanent consequences. In our case, persistence in long-term unemployment may mean strained safety nets and diminished living standards for years to come.
It's too early to tell whether the Great Recession's long-term unemployment legacy will linger. But the next figure suggests just how unusual our present situation is. It shows the evolution of the long-term unemployment rate in the five years following the month in which the overall unemployment rate peaked during the three most recent recessions (excluding the minor recession of 2001).
In each case, long-term unemployment considerably exceeded pre-recession levels. But in the 1981-82 and 1990-91 recessions, it declined to normal levels within about two and a half years. (From 1980 to 2007, the long-term unemployment rate averaged 1.0 percent.)
But the Great Recession was different. Not only did the long-term unemployment rate reach nearly twice the level it did during the two previous recessions, but now, five years out, it's still double its pre-recession norm. That's not a good sign.
What Makes the Great Recession Different?
So why has the Great Recession been different for long-term unemployment -- and what does this imply for policy? These questions are not easy, and remain fairly unresolved. However, a recent Brookings paper by Princeton economists Alan Krueger, Judd Cramer, and David Cho offers a few important insights.
In one sense, the long-term unemployed are a different species from those unemployed only for short periods. They have much greater difficulty finding jobs; indeed, the authors find that, from 2008 to 2012, only about one in ten long-term unemployed returned to full-time work within a year. Not surprisingly, the long-term unemployed are also more prone to stop looking for work -- that is, to withdraw from the labor market.
Consequently, the long-term unemployed exert little pressure on hiring or wages, which may help explain why we have not experienced deflation even as the unemployment rate remained high. For purposes of prices and wages, the long-term unemployed have seemingly had the effect of artificially inflating the unemployment rate.
But in another important sense, the long-term unemployed are just like the rest of us. The study finds that, contrary to popular conception, the long-term unemployed are spread widely across demographic and occupational groups. In other words, the long-term unemployed are unlucky.
When consumer demand collapsed during the Great Recession, unemployment spiked. When it did, some laid-off workers got trapped in the long-term unemployment vortex, often through no fault of their own. Many remain mired there today.
Employment policies should serve the needs of these hard-luck workers. Education and training programs that equip displaced workers with new skills are a good place to start, as are incentives that militate against employer biases. But as the Brookings study suggests, the diversity of the long-term unemployed necessitates a potluck of solutions, carefully calibrated to individual circumstances.
Most importantly, we must act now: Getting the involuntarily idle back to work is not just for their sake; it's for ours too. Let's hope it doesn't take much longer.
Mike Cassidy is a policy associate at the Century Foundation.
J. Wellington Wimpy, the glutton from the comic strip Popeye, is famous for saying "I'd gladly pay you Tuesday for a hamburger today."
As part of the Balanced Budget Act of 1997, Congress based the Medicare "Sustainable Growth Rate" (SGR) on economist Robert C. Higgins's formula for calculating how quickly a corporation's sales can grow. It hoped to prevent Medicare payments to health providers from rising faster than the rate of growth (or contraction) of the Gross Domestic Product, weighted for the number of Medicare beneficiaries. Were spending to rise or fall faster than that in a given year, payment rates for the following year would be adjusted to maintain budget neutrality, paralleling Wimpy's offer, albeit in reverse.
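The budget-neutrality adjustment described above can be sketched in a few lines, with the caveat that the actual statutory formula is cumulative and far more involved:

```python
# Greatly simplified sketch of the SGR's budget-neutrality idea: if this
# year's spending overshoots the GDP-based target, next year's payment
# rate is scaled down to compensate (and scaled up after an undershoot).
# The real statutory formula is cumulative and far more involved.
def next_year_rate(target_spending, actual_spending, current_rate):
    return current_rate * (target_spending / actual_spending)

# Spending ran 5% over target, so rates get trimmed about 4.8%:
print(round(next_year_rate(100.0, 105.0, 1.0), 3))  # → 0.952
```

The catch, as the article goes on to argue, is that the penalty falls on all providers next year, so no individual provider has a reason to restrain services today.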
But like a "Guess What Happens Next?" segment from America's Funniest Home Videos, you already know this won't turn out pretty. The idea that a future penalty, imposed on all providers, could somehow limit current services provided by individuals defies common sense. Higgins, for example, did not see his formula as a reason for corporations to raise their prices whenever sales didn't meet targets.
"You mean, if I have a hamburger now, the ones sold next Tuesday might be an ounce smaller? Who cares? I'm hungry today, and there might not even be a next Tuesday!"
So the Medicare SGR may have actually (at least initially) encouraged overutilization by creating the expectation that future reimbursements could be lower for the same work. Indeed, with the exceptions of 2000 and 2001, each year has been slated for a reimbursement cut (see chart) -- but Congress hasn't had the stomach to enforce it since 2002, recognizing the threat posed to health-care access for seniors, a key constituency.
The nearly annual financial patch is now lovingly called the "Doc Fix." However, since Congress has only rarely allocated money to offset these costs, the SGR formula does not incorporate Doc Fixes into its baseline and now produces yearly cuts of 20 to 35 percent, which would bankrupt most practices if they ever occurred. What could actually have been a real incentive to control costs has, for nearly twelve years, become nothing more than an expensive game of crying wolf.
Building on its "success" with the SGR, Congress has since, through various laws (the 2006 Tax Relief and Health Care Act, the 2009 Health Information Technology for Economic and Clinical Health Act, and the 2010 Affordable Care Act), overlaid it with similarly well-intentioned but dubious mechanisms attempting to encourage cost containment -- but now also "quality" care -- by tying Medicare payments to "performance." These include the Physician Quality Reporting System, the "Value-Based Modifier," and standards for demonstrating "Meaningful Use" of electronic health records.
Were it only as simple as Dana Carvey might have put it, impersonating President George H.W. Bush: "Quality gooood ... Gooood, spending too much baaaaad." Unfortunately, the overly complex and time-consuming processes that have since taken shape -- with looming potential combined penalties of around 10 percent for failure to achieve statutory goals -- will almost certainly give Congress yet another chance to blink.
Measuring a provider's quality in a credible way would require actual chart review by qualified peers of a sizable number of different types of patient encounters over time. Such a comprehensive assessment could determine whether reasonable care had been given over a reasonable timeframe in a reasonably cost-effective way based on individual patients' circumstances. Unfortunately, this is an extremely labor-intensive and thus expensive process.
So we instead are asking computers to divine "quality" from claims or registry data using unproven surrogate measures. For example, one measure uses the glycosylated hemoglobin (A1c) blood test as a surrogate for good control of diabetes. This unfairly penalizes providers who have higher-than-average proportions of Medicare patients who either are non-adherent or do not respond to accepted treatment regimens -- something not under the provider's control. More concerning, the mere existence of such a measure may cause providers to focus solely on the test at the expense of other non-measured yet vital aspects of diabetic treatment -- the "treating to the test" phenomenon (similar to "teaching to the test" in education).
And providers aren't the only ones concerned. The well-respected Robert Wood Johnson Foundation strongly cautions, "The adoption of flawed measurement approaches that do not accurately discriminate between providers can undermine professional and public support for provider accountability, reward indiscriminately, and divert attention from more appropriate and productive quality improvement efforts."
In addition, data just disclosed by the Centers for Medicare and Medicaid Services (CMS) at a November 4 meeting indicate that only about 2 percent of 500,000 participating providers have to date attested to meeting the most current (Stage 2) of standards related to "Meaningful Use" of electronic health records, raising concerns about the complexity of this program as well.
One might conclude the government is actually relying on the inability of providers to navigate such complexity to generate the maximum penalties, regardless of quality or cost containment, as a surefire means of stabilizing the Medicare Trust Fund. After all, rather than addressing the standards themselves, CMS still recoups through audits vast sums from providers who fail to understand or consistently implement complex Medicare documentation requirements that have been in place since the late 1990s -- a period notable for Susan Powter's "Stop the Insanity" weight-loss craze.
We need to follow her advice.
Yes, it is very important to get quality and value for the money we spend on health care. And it's not as if the programs put in place had no potential at all. But any surrogate quality measures need to be validated -- perhaps through small regional pilots -- by comparing their results with those of more accepted means (such as chart review) before they are applied to an industry that constitutes one-sixth of the U.S. economy. Furthermore, rather than passing hard-and-fast laws with unachievable deadlines on these issues and treating providers as the enemy, Congress could increase its odds of success by giving CMS more general directives to innovate to restrain costs (perhaps with flexible targets) in actual partnership with those rendering care.
The Medicare SGR Repeal and Beneficiary Access Improvement Act of 2014, sponsored by Senate Finance Committee chairman Ron Wyden (D., Ore.), would have eliminated the SGR, limited many of the above-referenced draconian penalties, and -- at least according to early estimates that included other fixes -- cost a relatively modest $131 to $180 billion over ten years. The bill almost passed with bipartisan support, but was scuttled at the last minute in favor of another temporary "fix," with Congress unable to reach agreement on a funding mechanism during an election year.
Interestingly, based on the Congressional Budget Office’s own estimates (see page 2) for the next decade, the Wyden bill's cost to taxpayers represents just shy of 2 percent of total Medicare outlays. I may be going out on a limb here, but if that money can't be found in the budget, providers might be willing to accept the cut (or some portion) instead if -- in fairness -- other sectors of Medicare (hospitals, etc., that have been spared the SGR over the same twelve years in favor of modest annual increases) did so as well. However, this would have to be in exchange for sidelining or scaling back the above-discussed programs and similar onerous and costly ones affecting the other sectors, possibly including abandoning the counterproductive "ICD-10" disease-classification system as I have previously advocated, making all of this potentially a financial wash.
Of course, the next cycle of crying wolf has already begun. CMS has just announced that the SGR will yield a cut of 21.2 percent when the most recent patch expires in April 2015, which almost no one believes will happen. Given Congress's track record, can a similar "fix" for the program penalties be far behind?
It’s time to say enough is enough. Now that the election is over, the lame-duck Congress has one final chance to address these issues, and the outcome of the election shouldn't change the bipartisan resolve to get this done. However they fund it, legislators need to swap their hamburgers for some spinach so they can be strong to the finish like Popeye.
Craig H. Kliger is an ophthalmologist and executive vice president of the California Academy of Eye Physicians and Surgeons.
Totaling more than $111 million, the 2014 North Carolina Senate contest between Kay Hagan and Thom Tillis is the most expensive Senate election in the nation's history (not adjusted for inflation). As we examined earlier this week, outside money has been flowing into American politics in the wake of the Supreme Court's Citizens United decision in 2010.
When candidate and independent spending are combined, 2014 ranks among the most expensive Senate cycles, if not the most expensive, in history. But understanding campaign spending takes more than a simple tally of total dollars. Spending differences across states can arise for a variety of reasons, including geographic size, population size, and the expense of media markets.
As a result, a more useful metric for understanding the magnitude of campaign activity is spending per voter, and 2014 offers an interesting case: Alaska. This year, Alaska saw a highly competitive Senate race in which both outside groups and candidates spent substantial amounts of money. Alaska ranks 47th in population, with just over 700,000 residents and an estimated 503,000 eligible voters. After adjusting spending (both candidate and independent expenditures) for each state's estimated voting-eligible population, Alaska's 2014 Senate race, unsurprisingly, ranks as the most expensive in U.S. history.
Alaska ranked only sixth most expensive in 2014 in total dollars, with about $60 million spent. But it jumps to first place in dollars spent per voter. Candidates and outside groups spent roughly $120 per voter in Alaska this year, about double the next most expensive race, Montana in 2012, where candidates and outside groups spent $66.50 per voter. By comparison, the $111 million Senate race in North Carolina -- with a voting-eligible population of about 6,826,610 -- works out to only $16.25 per voter. That's still far above the median spending per race for all three cycles ($7.30 per voter) but certainly serves to put the spending in context.
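The per-voter figures above are simple division: total candidate and outside spending over the state's estimated voting-eligible population. A minimal Python sketch, using the approximate totals cited in this piece (the spending figures and voter counts are taken from the text, not from an official dataset), reproduces the comparison:

```python
def spending_per_voter(total_spending, eligible_voters):
    """Total candidate + outside spending divided by the state's
    estimated voting-eligible population (VEP)."""
    return total_spending / eligible_voters

# Approximate figures cited in the piece.
alaska = spending_per_voter(60_000_000, 503_000)             # roughly $120 per voter
north_carolina = spending_per_voter(111_000_000, 6_826_610)  # roughly $16 per voter

print(f"Alaska 2014:         ${alaska:.2f} per voter")
print(f"North Carolina 2014: ${north_carolina:.2f} per voter")
```

The per-voter metric is why a $60 million race in a small state can dwarf a $111 million race in a large one: the denominator differs by more than a factor of thirteen.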
Relative to 2012 and 2014, in terms of both combined and per-voter spending, 2010 could be considered one of the cheaper cycles for Senate races thus far.
These data lend some support to the observation that, since Citizens United (and, more recently, McCutcheon v. FEC), independent expenditures are quickly outpacing contributions to candidates. But given changes in reporting requirements and limited data, there is still a lot we don't know about outside spending.
All in all, candidate and outside-group spending totaled just over a billion dollars in Senate races in 2014. The fact that North Carolina alone accounted for more than ten percent of that spending is astonishing, but no less remarkable is the intensity of spending per voter in Alaska. If spending continues to grow as it has over the last three election cycles, both of those records will likely be shattered in 2016.
Grace Wallack is a research and editorial assistant in governance studies at the Brookings Institution's Center for Effective Public Management. John Hudak is a fellow in governance studies and managing editor of the FixGov blog, where this piece originally appeared.
The U.S. is preparing to deploy another 3,000 troops to the Ebola-stricken nations of West Africa to join the 700 already there -- but the International Monetary Fund has beaten the Pentagon to the punch, bending its own byzantine rules to push $130 million to Guinea, Liberia, and Sierra Leone.
Already in early July, the IMF's staff had completed a thorough review of the fiscal impact of the Ebola outbreak. The decline in receipts from trade, income, and other taxes and from mining royalties was estimated at $46 million in Liberia alone, or roughly 20 percent of the country's GDP. The additional spending on emergency health, security, and food imports added another $20 million. The damage to the other two nations was equally devastating.
Realizing the scope of the problem, the IMF on October 9 changed its rules and allowed the three most threatened countries to receive the money almost instantly under the aptly named Rapid Credit Facility. So today, as help reaches West Africa from around the world, the governments of Liberia, Guinea, and Sierra Leone can continue to function and avoid the social unrest that would have compounded the health-care crisis.
U.S. ambassador to the United Nations Samantha Power recently traveled to the region and noted "positive signs" in West Africa. What American officials are reluctant to mention is that the money behind much of the progress has come through the one international institution that works and yet has been denied the full support of the U.S. Later this month at the G20 Summit in Brisbane, Australia, the president will likely again hear from other global leaders that the U.S. commitment to the IMF remains unfulfilled.
Congress has so far refused to ratify an IMF reform that was due to take effect in 2012. Instead of a slight increase in the U.S. stake in the IMF, the Treasury has been compelled to provide a temporary loan, which in effect blocks the very reform the U.S. itself pushed through in the wake of the Lehman collapse. Little wonder that other members, who have all held up their end of the bargain, are now busy developing alternative financial institutions.
If we allow our leadership in this essential global institution to lapse, we will be left with just the Army, Navy, and Air Force to meet every major global challenge. We have to recognize that despite their many drawbacks, the post-World War II international financial institutions have worked, have delivered, and will be needed again in the future. The deadline for Congress to act on our commitment is December 31. It will be a tall order for the lame-duck legislature. Will the world keep waiting on us?
Gary Litman is vice president of international strategic initiatives at the U.S. Chamber of Commerce.
Will allowing the government-sponsored enterprises (GSEs) to guarantee smaller down payment loans in an effort to increase mortgage availability lead to more defaults? Some skeptics have raised this concern in response to the Federal Housing Finance Agency's recent move to encourage lenders to issue mortgages with down payments as low as 3 percent. Based on a review of the performance of low-down-payment GSE mortgages in recent years, however, these fears are not well founded.
Fannie Mae and Freddie Mac (the GSEs), the guarantors of most of the nation's mortgage debt, currently purchase only loans that have at least a 5 percent down payment. Prior to late 2013, however, Fannie Mae guaranteed loans with down payments between 3 and 5 percent. By examining the performance of these pre-2013 loans, we can get a sense of how likely it is that borrowers with similar loans will default going forward.
The default rates of 3-5 percent and 5-10 percent down-payment GSE loans are similar.
Loans originated in recent years with down payments between 3 and 5 percent (the 95-97 LTV category) exhibit default rates similar to those of loans with slightly larger down payments of 5 to 10 percent (the 90-95 LTV category).
Of loans originated in 2011 with a down payment between 3 and 5 percent, only 0.4 percent of borrowers have defaulted. For loans with slightly larger down payments, between 5 and 10 percent, the default rate was exactly the same. The story is similar for loans made in 2012: 0.2 percent of the 3-5 percent down-payment group defaulted, versus 0.1 percent of the 5-10 percent down-payment group.
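Because loan-to-value (LTV) is just the loan amount as a share of the purchase price, a down payment of d percent corresponds to an LTV of 100 minus d. A short Python sketch makes that mapping and the comparison explicit; the default rates are the figures cited above, while the bucket labels are simply an illustration of the LTV arithmetic:

```python
def ltv_from_down_payment(down_payment_pct):
    """A d-percent down payment implies borrowing the remaining
    (100 - d) percent of the purchase price, i.e. an LTV of 100 - d."""
    return 100.0 - down_payment_pct

# A 3-5 percent down payment maps to the 95-97 LTV bucket;
# a 5-10 percent down payment maps to the 90-95 LTV bucket.
assert ltv_from_down_payment(3) == 97.0
assert ltv_from_down_payment(5) == 95.0
assert ltv_from_down_payment(10) == 90.0

# Default rates cited in the article, by origination year (percent).
default_rates = {
    2011: {"95-97 LTV": 0.4, "90-95 LTV": 0.4},
    2012: {"95-97 LTV": 0.2, "90-95 LTV": 0.1},
}
for year, buckets in default_rates.items():
    gap = abs(buckets["95-97 LTV"] - buckets["90-95 LTV"])
    print(year, buckets, f"gap: {gap:.1f} percentage points")
```

The point of the comparison is that the gap between the two buckets is at most a tenth of a percentage point in these cohorts.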
While this database is limited to 30-year, fixed-rate, amortizing mortgages (interest-only mortgages, 40-year mortgages, and negative-amortization loans are excluded), it is representative of GSE loans made in the post-crisis period.
A borrower's credit is a stronger indicator of default risk than down-payment size for these loans.
The pattern is consistent even in the years leading up to the crisis, when overall default rates were much higher. In 2007, the worst issue year, 95-97 LTV loans in any given FICO bucket performed only marginally worse than the 90-95 LTV loans, and FICO score was a larger determinant of performance. For example, 95-97 LTV loans with a 700-750 FICO score had a default rate of 21.3 percent, versus 18.2 percent for 90-95 LTV loans. However, 95-97 LTV loans with a FICO score above 750 had a 13.5 percent default rate, much lower than that of 90-95 LTV loans with a 700-750 FICO score.
The GSEs' risk-based pricing means only a small group of lower-risk borrowers will end up with these loans.
This analysis tells us that there is likely to be minimal impact on default rates as low-down-payment GSE lending gravitates toward borrowers with otherwise strong credit profiles. And this makes sense, because GSE loans are priced on the basis of risk (including loan-level pricing adjustments and mortgage-insurance costs), while Federal Housing Administration (FHA) loans are not. Thus, borrowers with high LTVs and low FICO scores will find it more economically favorable to obtain an FHA loan.
Furthermore, in recent years, a minuscule number of these loans were put back by Fannie Mae following a default, an action taken when Fannie determines that a delinquent loan was irresponsibly underwritten. The putback rate on 95-97 LTV loans over the entire 1999-2013 period was 0.5 percent, little different from the 0.4 percent for the 90-95 LTV bucket.
Those who have criticized low-down-payment lending as excessively risky should know that, if the past is a guide, only a narrow group of borrowers will receive these loans, and the overall impact on default rates is likely to be negligible. This low-down-payment lending was never more than 3.5 percent of Fannie Mae's book of business and in recent years has been even less. If executed carefully, this constitutes a small step forward in opening the credit box -- one that safely, but only incrementally, expands the pool of who can qualify for a mortgage.
Taz George, Laurie Goodman, and Jun Zhu are researchers in the Urban Institute's Housing Finance Policy Center. This piece originally appeared on the Urban Institute's MetroTrends blog.