You will be hearing about the "Year 2000" problem constantly in the news this year. And you will hear a lot of conflicting information in the process. There is also a good bit of "end of the world" rhetoric floating around on the Internet. What should you believe?
In this edition of How Stuff Works, we will discuss the Year 2000 problem (also known as the Y2K problem) so that you understand exactly what is happening and what is being done about it. You can also explore a variety of links. From this information you can draw your own informed conclusions.
What Is the Y2K Problem?
The cause of the Y2K problem is pretty simple. Until recently, computer programmers have been in the habit of using two-digit placeholders for the year portion of the date in their software. For example, the expiration date for a typical insurance policy or credit card is stored in a computer file in MM/DD/YY format (e.g., 08/31/99). Programmers have done this for a variety of reasons, including:
- That's how everyone does it in everyday life. When you write a check by hand and use the "slash" format for the date, you write it that way.
- It takes less space to store 2 digits instead of 4 (not a big deal now because hard disks are so cheap, but it was once a big deal on older machines).
- Standards agencies did not recommend a 4-digit date format until recently.
- No one expected a lot of this software to have such a long lifetime. People writing software in 1970 had no reason to believe the software would still be in use 30 years later.
The 2-digit year format creates a problem for most programs when "00" is entered for the year. The software does not know whether to interpret "00" as "1900" or "2000". Most programs therefore default to 1900. That is, the code that most programmers wrote either prepends "19" to the two-digit year, or makes no assumption about the century at all, which amounts to the same thing. This wouldn't be a problem except that programs perform lots of calculations on dates. For example, to calculate how old you are, a program takes today's date and subtracts your birth date from it. That subtraction works fine on two-digit years until today's date and your birth date fall in different centuries. Then the calculation no longer works. For example, if the program thinks that today's date is 1/1/00 and your birthday is 1/1/65, it may calculate that you are -65 years old rather than 35 years old. As a result, date calculations give erroneous output, and software crashes or produces the wrong results.
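To make that concrete, here is a small sketch in modern Python (purely illustrative - not code from any real legacy system) showing how the two-digit subtraction goes wrong:

```python
# Illustrative sketch of the two-digit year bug (not real legacy code).

def age_from_two_digit_years(today_yy, birth_yy):
    # Legacy-style logic: the century is never stored, so the program
    # effectively treats every year as 19YY and just subtracts.
    return today_yy - birth_yy

# Born in 1965, "today" is 1999: the math works.
print(age_from_two_digit_years(99, 65))   # 34

# "Today" rolls over to 00 (meant to be 2000): the result goes negative.
print(age_from_two_digit_years(0, 65))    # -65 instead of 35
```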
The important thing to recognize is that that's it. That is the whole Year 2000 problem. Many programmers used a 2-digit format for the year in their programs, and as a result their date calculations won't produce the right answers on 1/1/2000. There is nothing more to it than that.
The solution, obviously, is to fix the programs so that they work properly. There are a couple of standard solutions:
- Recode the software so that it understands that years like 00, 01, 02, etc. really mean 2000, 2001, 2002, etc. (an approach often called "windowing" - see the sketch after this list).
- "Truly fix the problem" by using 4-digit placeholders for years and recoding all the software to deal with 4-digit dates. [Interesting thought question - why use 4 digits for the year? Why not use 5, or even 6? Because most people assume that no one will be using this software 8,000 years from now, and that seems like a reasonable assumption. Now you can see how we got ourselves into the Y2K problem...]
Either of these fixes is easy to do at the conceptual level - you go into the code, find every date calculation and change each one to handle things properly. It's just that there are millions of places in software that have to be fixed, and each fix has to be made by hand and then tested. For example, an insurance company might have 20 or 30 million lines of code that perform its insurance calculations. Inside that code there might be 100,000 or 200,000 date calculations. Depending on how the code was written, programmers may have to go in by hand and modify each point in the program that uses a date. Then they have to test each change. The testing is the hard part in most cases - it can take a lot of time.
If you figure it takes one day to make and test each change, there are 100,000 changes to make, and a person works 200 days a year, then it will take 500 people a year to make all the changes. If you also figure that most companies don't have 500 idle programmers sitting around for a year and have to go hire those people, you can see why this becomes a pretty expensive problem. If you figure that a programmer costs something like $150,000 per year (once you include everything like salary, benefits, office space, equipment, management, training, etc.), you can see that it can cost a company tens of millions of dollars to fix all of the date calculations in a large program.
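Spelled out, that back-of-the-envelope estimate looks like this (the figures are the same assumed values used above):

```python
# Back-of-the-envelope cost estimate using the assumptions in the text.
changes = 100_000               # date calculations to fix
days_per_change = 1             # make the change and test it
workdays_per_year = 200
cost_per_programmer = 150_000   # fully loaded annual cost, in dollars

person_years = changes * days_per_change / workdays_per_year
total_cost = person_years * cost_per_programmer

print(person_years)   # 500.0 person-years of work
print(total_cost)     # 75000000.0 - tens of millions of dollars
```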