
GAO Defends Annual Weapons Review: Let’s Look at All the Facts

Posted by Shelby Oakley


Peter Levine, former top staffer at the Senate Armed Services Committee and now with the Institute for Defense Analyses, penned a piece for us critiquing GAO's annual report on major weapon systems. GAO begs to differ with Levine's conclusions and assertions. Below is its reply, written by Shelby Oakley, a director at GAO. Read on! The Editor.

Here at the Government Accountability Office, when it comes to our non-partisan audit work, all facts matter. In fact, they play the central role in our quality control process, in which we ensure through multiple reviews and multiple reviewers that our stated facts can stand up to scrutiny. Peter Levine's June 6 op-ed on GAO's annual DoD weapon system assessment fails to mention key facts from the work and lacks important context.

First, the piece mischaracterizes the overall message of the work as focusing on cost growth in major defense acquisition programs (MDAPs). In fact, as the title states, the work focused on the lack of consistent implementation of knowledge-based practices and what that portends for the future performance of the MDAP portfolio. A significant portion of the report is focused on the implications of DoD's programs not adhering to knowledge-based acquisition practices. For example, we found that newer programs showed cost growth between 2017 and 2018, whereas previously these programs had shown cost decreases.

These newer programs began after the 2009 Weapon Systems Acquisition Reform Act (WSARA), an important piece of legislation that directed DoD to follow knowledge-based practices in acquiring major weapon systems. (Editor's note: Levine played an important role in writing and supporting WSARA.) We attribute the deteriorating performance of these newer programs to the inconsistent implementation of these practices, which our work has shown leads to unfavorable cost and schedule outcomes.

As Mr. Levine points out, we acknowledge in the report that DoD has room to reverse the declining cost trends we observed through strong leadership and insistence that programs implement knowledge-based best practices. 

Second, Mr. Levine stated that we provided no evidence of cost growth in DoD's MDAP portfolio. We disagree. Consistent with the methodology we employ each year, we assessed costs in a number of different ways. The assertion that "the supposed cost growth … does not exist at all" and is due solely to increases in quantities indicates a fundamental misreading of our report.

The suggestion that we should confine our cost analyses exclusively to buying power outcomes would not provide the full picture. While we agree that analysis is important, we also believe it is important to provide more in-depth information and context to help Congress, DoD, and the general public. Our report does this by assessing multiple factors that constitute cost performance. This year, we found that the portfolio's costs have increased even though the total number of MDAPs has decreased since 2017.

Further, the average age of programs has steadily increased since 2012. This reflects Pentagon decisions to introduce new capabilities through additions to existing programs rather than by starting new programs. F-35 Block IV and DDG 51 Flight III are examples. Taking this approach means older programs, and their associated costs, remain in the portfolio longer, which leads to a general increase in the cost of the overall weapon system portfolio. When assessing the estimated total acquisition cost of the 2018 portfolio against the cost of portfolios from prior years, cost growth is approximately $8 billion since 2017. 

Another part of our analysis focused on how costs have changed over the past year. For this analysis, we modified the data set to account for programs that were newly entering and exiting the portfolio. These changes increased the cost of the portfolio by $26.6 billion since 2017. This is an overall increase in the cost of the portfolio, which provides useful information on how the investments are changing. It does not reflect nor attempt to convey undesirable cost growth. 

Of course, quantity changes also affect the overall cost of the MDAP portfolio. Excluding quantity changes — increases and decreases — in individual programs shows an overall buying power gain in the portfolio totaling $3.9 billion, which we report and on which Mr. Levine focused. The insight missed in looking only at this aggregate figure is that this buying power gain is attributed to only 25 programs, most of which are older programs (pre-WSARA), such as the Air Force’s C-130J and the Navy’s CVN-78.

On the other hand, since 2017, 53 programs suffered decreases in buying power, including 39 programs that had no changes in quantities. Further, over half of the 28 programs begun after WSARA suffered buying power losses. While the overall buying power number is a good one, the degradation in the newer programs raises concerns about the direction in which the portfolio's performance is heading. True to the intent of this report, such analysis provides decision-makers and policymakers information so they can take action early to reverse declining trends in weapons programs.

Further, decisions DoD makes to increase quantities in a program do not necessarily correspond with the program becoming "more efficient," as Mr. Levine's piece suggests. Decisions to increase quantities in an older program can be indicative of acquisition shortfalls in newer programs. Sometimes, DoD accompanies these quantity increases with new and often unproven capability additions and design changes that bypass key steps in the knowledge-based acquisition process. The Navy's DDG 51 destroyer program offers an example of both of these dynamics. The Navy previously ended the DDG 51 program with the expectation that it would acquire new DDG 1000 ships. However, following DDG 1000 acquisition shortfalls, the Navy restarted DDG 51 ship construction and, more recently, has added significant new capabilities, which it has not yet fully demonstrated.

Also, the article was off the mark in its comments about our assessment of sole-source contracts and the extent to which five defense companies capture the bulk of the dollars. A competitive environment saves money, and our analysis this year sheds light on the state of that environment. This was the first time we conducted an analysis along these lines for our annual report, so that we could begin to bring attention to this issue. A comparison to prior years' reports was not within our scope.

Finally, Mr. Levine takes issue with our excluding DoD's "middle-tier" acquisition programs from the report. While Congress required the department to develop guidance for middle-tier pathways in the National Defense Authorization Act for fiscal 2016, the Office of the Secretary of Defense did not grant authority to the military departments to start middle-tier acquisition programs until April 2018. Our analysis—as it is each year—is based on the most recent available DoD Selected Acquisition Reports (SAR) to Congress, in this case dated December 2017. It is clear, then, that middle-tier programs did not have data from a comparable time frame, which is why we could not have incorporated these programs into our analysis this year.

Yet, the same week as Mr. Levine's piece was published, we issued a report on DoD's acquisition reform efforts. We recommended steps the Pentagon should take to ensure that the performance of middle-tier programs is consistently measured going forward. DoD agreed with that and our other recommendations in the report. The concerns that we "may choose to exclude" middle-tier programs from future weapon systems assessments are completely unfounded. In fact, Congress has already asked us to assess middle-tier programs as part of our future annual assessments, and we plan to do so in our 2020 report.

These are the facts, and the facts are important. We stand by our work.
