Y2K is the common abbreviation for the year 2000 problem.
The generally used argument for the problem was as follows: computer programs could stop working or produce erroneous results because they stored years with only two digits, so the year 2000 would be represented as '00' and would not follow 1999 (i.e., '99') in numerical sequence. This would cause date comparisons to fail and thus cause the programs themselves to fail.
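A minimal sketch of that failure mode, assuming years kept as bare two-digit values (the function name and sample values here are illustrative, not taken from any actual affected system), shows how both ordering and simple date arithmetic break at the century rollover:

    def years_between(earlier_yy, later_yy):
        # Naive arithmetic on two-digit years, as many legacy systems did.
        return later_yy - earlier_yy

    print("00" > "99")            # False: as text, '00' sorts before '99'
    print(years_between(99, 0))   # -99 instead of the expected 1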
The underlying programming problem was quite real. It dated back to the 1960s, when computer memory and storage were scarce and expensive, and saving two digits in every date field was a worthwhile economy. Most programmers at the time did not expect their programs to remain in use for decades, so the shortcut was not considered a serious problem. Programmers began recognizing it as a looming issue in the 1980s, but inertia and apathy caused it to be mostly ignored.
However, no computer programmer in his right mind ever stored a date as two ASCII characters purely for efficiency, as there are much more compact ways of doing it: two ASCII characters occupy sixteen binary bits, which, treated as a plain binary number, could distinguish 65,536 different years. Interpreting a shorthand date given only by the last two digits of the year has instead led to more sophisticated algorithms (such as date windowing against a pivot year) being developed. Storage of a combined date and time within a fixed-width binary field is still a significant issue, because such times are often counted relative to some defined origin, such as the starting date of an operating system's clock. Roll-over of such date-time representations is still a problem, but it happens at varying dates depending on the origin and the field width.
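A minimal sketch of that roll-over behaviour, assuming for illustration a signed 32-bit count of seconds measured from 1 January 1970 (the epoch and field width are assumptions chosen for the example, not taken from the text above), shows how the failure date depends entirely on the chosen origin and field size:

    from datetime import datetime, timedelta, timezone

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)   # assumed origin for this example
    MAX_COUNT = 2**31 - 1                               # largest value a signed 32-bit field holds

    rollover = EPOCH + timedelta(seconds=MAX_COUNT)
    print(rollover)   # 2038-01-19 03:14:07+00:00 -- the counter wraps after this instant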
Another related problem was that the year 2000 was a leap year, contrary to the simplified rule of "every fourth year, but not century years": century years divisible by 400 are leap years, so 2000 qualified even though 1900 did not.
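A minimal sketch of the full Gregorian rule (the function is illustrative, not drawn from any particular affected system); software that stopped at the "not for a century" clause would have treated 29 February 2000 as an invalid date:

    def is_leap(year):
        # Full Gregorian rule: divisible by 4, except century years,
        # except century years divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    print(is_leap(1900))   # False: a century year not divisible by 400
    print(is_leap(2000))   # True: the exception to the exception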
Some industries started experiencing year 2000 problems early in the 1990s, as software that handled future dates began processing dates past 1999. For example, in 1993, some people with financial loans due in 2000 received (incorrect) notices that they were 93 years past due. As the decade progressed, more and more companies experienced problems and lost money due to erroneous date data. In another example, meat-processing companies destroyed large amounts of good meat because the computerized inventory system incorrectly identified it as expired.
As 1999 approached, identifying and correcting or replacing affected computer systems and computerized devices became the major focus of information technology departments in most large companies and organizations. Millions of lines of programming code were reviewed and fixed during this period, and many corporations replaced major software systems outright with new ones that did not have the date-processing problems.
It was the media hype story of 1999, yet when January 1, 2000 finally came there were hardly any major problems, even though a large number had been expected. Ironically, many people were upset that there appeared to have been so much hype over nothing, precisely because the vast majority of problems had been fixed correctly. Some more sophisticated critics have suggested that much of the preventative effort was unnecessary: it would have been cheaper to spend less on examining non-critical systems for flaws and simply to fix the few that failed after the event. Such conclusions are easy to draw with the benefit of hindsight, but in any case the overhaul of many systems involved replacement with new, improved functionality, so in many cases the expenditure proved useful regardless.
Some items of interest: