Data Availability, Information Quality, Accountability...

Software Accountability

One of the Strategic Goals and Objectives defined on the OSTP website is to "Energize and nurture the processes by which government programs in science and technology are resourced, evaluated, and coordinated." I have yet to see adequate evaluation of IT projects in the government, so I suggest a more robust process for evaluating technology programs, particularly those involving software development. Anyone with experience developing software for the government has heard of programs that cost millions of dollars and result in a software product that is never used. There are many reasons for such failures:

- the end product doesn't satisfy the user's needs (e.g., due to poor requirements or lack of communication with end users)

- the user's needs changed during the development of the product (e.g., due to a change in command)

- the product was developed only to spend allocated money, without a need ever existing

- the end product is of poor quality

Regardless of the reason, millions of taxpayer dollars are being spent on programs that end in failure. The government should have a more robust system for evaluating software development programs that answers the following:

- Is the end product actually being used?

- Does the end product satisfy the user's needs?

- If there was a failure, what was the cause?

- Was the money allocated wisely?

- Is the quality of the product satisfactory?

These evaluations will help prevent failures from occurring in the future.

To expand on the software quality aspect of the proposed evaluation, I suggest the government impose software quality standards to which all government IT initiatives must conform. In particular, software testing is one of the only areas of IT where industry lags behind academia: formal methods for evaluating software quality exist today but are applied to only a small portion of software projects. The government should define acceptable testing approaches, guidance for testing software, and the minimum criteria a software product must meet to be accepted (e.g., the level of detail for model-based approaches and specific coverage criteria). This would provide a basis for ensuring software quality and help prevent waste.
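To make the idea of a "minimum coverage criterion" concrete, here is a minimal sketch of how such an acceptance gate could work in practice. Everything in it is a hypothetical illustration (the function, the tests, and the branch-tracking mechanism are invented for this example, not part of any actual government standard); real programs would use a coverage measurement tool rather than hand instrumentation.

```python
# Illustrative sketch: enforcing a "100% branch coverage" acceptance
# criterion on a tiny function. All names here are hypothetical examples.

covered = set()  # records which branch IDs the test suite actually exercised

def classify_reading(value, limit):
    """Hypothetical sensor check: flag readings that exceed a limit."""
    if value > limit:
        covered.add("over")    # branch 1: reading exceeds the limit
        return "ALARM"
    else:
        covered.add("within")  # branch 2: reading is within the limit
        return "OK"

# A minimal test suite that exercises both branches.
assert classify_reading(120, 100) == "ALARM"
assert classify_reading(80, 100) == "OK"

# The acceptance gate: reject the product if any branch went untested.
required_branches = {"over", "within"}
assert covered == required_branches, "branch coverage below 100%"
print("branch coverage: 100%")
```

If the second test were deleted, the final assertion would fail, and under the proposed standard the product would not be accepted until the gap was closed.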

The government has a responsibility to ensure our tax dollars are spent responsibly. These suggestions provide a first step toward holding the government accountable for the money it spends.


Comment

    I emphatically agree. Having been a high-tech government contractor for 23 years, I have seen plenty of government programs purchase faulty products and fund errant programs without being questioned regarding functionality or even validity.

    In 1999 I successfully and singlehandedly terminated the acquisition of a U.S. Navy navigational radar that, if implemented, would have caused multiple collisions at sea involving U.S. Navy and foreign naval vessels -- and this entirely due to terribly faulty software.

    This required diligence and courage on my part, as my job and my life were both threatened in consideration of my "whistle-blowing." (Nobody likes a whistle-blower, right?) What I felt compelled to do, per my job description and my desire to prevent scores of unnecessary deaths at sea, should not have been necessary.

    Yet this is what it took to arrest an implementation that would have caused many deaths at sea, because the U.S. Federal Government quality control system currently in place doesn't discriminate between faulty and functional -- only between who does and doesn't stand to make a lot of money. I include in this infrastructure Congressional and U.S. Navy personnel who fully knew about this lethal problem yet chose to do nothing, preferring their personal financial gain.

    The U.S. government's software testing infrastructure is effectively non-existent as it currently stands, and Mr. Donley's suggestion here constitutes the first effort I have yet seen to change the current software testing process from worthless to worthwhile. In consideration of the breadth and gravity of the problem as it currently stands, Mr. Donley is fully deserving of the Nobel Prize.