Thursday 6 December 2007

Program verification in the future

How do you envision the ways program verification and performance tuning will be accomplished in the future? Will it still be a work of art? Will it always be the result of one's experience? Or do you forecast more and more automatic ways (algorithms?) to accomplish these tasks?

One vision of the future of program verification was presented by J. Strother Moore in 2005. Moore stated that "the verification community seeks to build a system that can follow the reasoning of an average human engaged in a creative, computational problem". My reading around this subject indicates that the holy grail defined by Moore is still a long way off. On his weblog, Wesner Moise (who worked on Excel at Microsoft for six years) states that "it seems that just a few people are driving the field of software verification forward".

The verification of programs is much more complicated than writing the programs themselves. Much of the literature on the World Wide Web comes from international conferences on the subject, attended largely by professors from university computer science departments. The field is highly theoretical and requires an excellent grasp of mathematics to comprehend fully. While I can type 'C# hello world tutorial' into Google and see a list of about a million websites that will teach me how to implement this simple program, I am sure that a similar search for program verification tutorials would be fruitless.

It is because of this inherent complexity that I think program verification will continue to depend more on personal skill and experience than on automatic means.
Algorithms do contribute - for example, Microsoft's Visual Studio development environment pre-compiles code as you write it and identifies potential problems before letting you build the code into a library. I maintain, though, that if the methods of program verification were easy they would be taught at undergraduate level alongside database design and software engineering; verification will therefore continue to come from testing and good design.
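The idea behind pre-compilation checks can be sketched in a few lines. This is not how Visual Studio works internally; it is a minimal illustration of the same principle, using Python's built-in compile() in place of a real pre-compiler to reject code with detectable problems before it ever runs:

```python
def check_source(source):
    """Return None if the source compiles cleanly, else the error message."""
    try:
        # compile() parses and byte-compiles without executing anything,
        # so detectable problems surface before the code is ever run.
        compile(source, "<snippet>", "exec")
        return None
    except SyntaxError as exc:
        return str(exc)

good = "total = 1 + 2"
bad = "total = 1 +"  # incomplete expression - caught before running

print(check_source(good))  # no problem found
print(check_source(bad))   # a SyntaxError message, flagged statically
```

Real tools go much further (type checking, data-flow analysis), but the principle is the same: a whole class of errors is caught mechanically, before testing even begins.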

Testing only goes so far, though. A partner company recently released a new version of their indexing database, and they fired about six million different test cases at it as part of the verification. All this means is that case six million and one - which could still occur at a customer site - may cause the software to fail. The best form of verification, in my opinion, is to adopt sound design methodologies - for example a bottom-up methodology, where software is built from smaller components that are individually easy to test and whose errors are easy to locate. This robustness comes from the experience of system architects and from the discipline of developers in making sure the code they write works properly when they write it, rather than from automatic means.
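The bottom-up argument above can be sketched concretely. The names and the toy indexing logic below are illustrative only, not taken from any real product; the point is that each small component is trivially testable on its own, so a failure in the composed system is easy to localise:

```python
def tokenize(text):
    """Split text into lowercase words - a small, easily tested component."""
    return [word.lower() for word in text.split()]

def build_index(documents):
    """Map each word to the set of document ids containing it,
    built on top of the already-tested tokenize()."""
    index = {}
    for doc_id, text in documents.items():
        for word in tokenize(text):
            index.setdefault(word, set()).add(doc_id)
    return index

# Each component is verified in isolation first...
assert tokenize("Program Verification") == ["program", "verification"]

# ...so when the composed system misbehaves, the faulty layer is obvious.
docs = {1: "program verification", 2: "program testing"}
index = build_index(docs)
assert index["program"] == {1, 2}
assert index["testing"] == {2}
```

No six million test cases are needed per component; because each layer is small, its correct behaviour can be argued about and checked directly.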

References:

Moore, J. S. (2005) A Mechanized Program Verifier [Online]. Austin: University of Texas.
Available from: http://www.cs.utexas.edu/users/moore/publications/zurich-05/talk.html (Accessed 6 December 2007)

Moise, W. (2006) Program Verification [Online]. Seattle, USA.
Available from: http://wesnerm.blogs.com/net_undocumented/2006/12/program_verific.html (Accessed 6 December 2007)
