howtogeek — 2013-05-08T13:39:01-04:00 — #1
geek — 2013-05-08T16:15:19-04:00 — #2
Well, at least the 2012 apocalypse didn't happen.
I never believed that the Y2K problem was as bad as everybody was claiming... it seemed just like a marketing and publicity stunt to get everybody to pay for an upgrade to a new version of whatever software it was. And the consultants at the time wanted the big fat contracts for compliance checking, so they played it up even more.
iszi — 2013-05-08T16:36:05-04:00 — #3
geekintexas — 2013-05-08T22:51:38-04:00 — #4
03:14:07 Tuesday, January 19 2038.
That's weird. I always thought it was in October, 2038. Sure you're converting your numbers right?
iszi — 2013-05-09T01:30:14-04:00 — #5
Watch the Numberphile video. Those guys are pretty spot-on.
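If you'd rather verify it than take anyone's word for it, here's a minimal C sketch that just feeds the signed 32-bit maximum into gmtime() - nothing clever, and it assumes only that your platform's time_t can hold the value, which even a 32-bit one can, just barely:

```c
/* Convert 2^31 - 1 seconds past the Unix epoch (1970-01-01 00:00:00 UTC)
 * into a calendar date. */
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t t = 2147483647;                 /* INT32_MAX seconds since the epoch */
    char buf[64];
    strftime(buf, sizeof buf, "%H:%M:%S %A, %B %d %Y", gmtime(&t));
    printf("%s UTC\n", buf);               /* 03:14:07 Tuesday, January 19 2038 */
    return 0;
}
```

So it's the January date, not October.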
ultimape — 2013-05-09T02:46:04-04:00 — #7
I'm 3 months into a job involving COBOL. The Y2K problem was actually quite real and serious.
Lots of legacy systems written in COBOL are so well tested and run-in that nobody really bothers to update them anymore - which meant no one had the skills to go through and read the code. There were no active developers. Even when a shop did have the skill set, we're talking about systems with hundreds of thousands of lines of code that might reference the date in any number of ways.
If it weren't for all the effort everybody put into updating the code, some programs would have crashed outright or, worse, done some really crazy calculations and maybe even divided by zero. Imagine the inconvenience of all the bank tellers at your bank (and the ATMs) being unable to do their jobs because the system is down... now consider that it could take months to fix the issue once it happened - and multiply that by every financial institution more than 25 years in business (or any of them who buy products from IBM or Diebold).
For a one-time issue like this, it makes sense to hire contractors who can get the job done once and correctly. Hiring a newbie who doesn't understand any COBOL would cause all sorts of problems down the line. We're talking about banking systems, utility billing and management applications, and lottery programs - all of which deal with millions of dollars and can't afford to be broken.
While COBOL excels at keeping track of financial data, it has some pretty big limitations on how it can represent other information, and it is very strict about changing data types. Part of the problem wasn't so much the code: all of the data that had already been written out to files was hardcoded with the same date limitation - so it isn't just a matter of fixing a few bits of code here or there; you also have to include a reader that handles the divergent datasets, otherwise it just blows up.
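To make that last point concrete, here's a toy sketch of the kind of reader you end up needing - the field layouts and the pivot window are invented for illustration, not taken from any real copybook:

```c
/* Sketch of a reader that accepts both the legacy 2-digit-year records
 * ("YYMMDD") and the remediated 4-digit-year records ("YYYYMMDD").
 * Layouts and the 1950/2049 pivot are assumptions. */
#include <stdio.h>
#include <string.h>

/* Expand a 2-digit year with a fixed pivot: 00-49 -> 2000s, 50-99 -> 1900s. */
static int expand_year(int yy) {
    return (yy < 50) ? 2000 + yy : 1900 + yy;
}

static int parse_record_date(const char *field, int *y, int *m, int *d) {
    size_t len = strlen(field);
    if (len == 6 && sscanf(field, "%2d%2d%2d", y, m, d) == 3) {   /* legacy layout */
        *y = expand_year(*y);
        return 0;
    }
    if (len == 8 && sscanf(field, "%4d%2d%2d", y, m, d) == 3)     /* remediated layout */
        return 0;
    return -1;                                                    /* anything else blows up */
}

int main(void) {
    int y, m, d;
    if (parse_record_date("991231", &y, &m, &d) == 0)
        printf("legacy:     %04d-%02d-%02d\n", y, m, d);          /* 1999-12-31 */
    if (parse_record_date("20000101", &y, &m, &d) == 0)
        printf("remediated: %04d-%02d-%02d\n", y, m, d);          /* 2000-01-01 */
    return 0;
}
```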
This is all based on code that was first written 30 years ago and siloed inside these financial institutions (and really old government programs that were required to be written in COBOL). Modern COBOL is light years ahead of that. Legacy code is a pain in the butt!
COBOL is going to outlive us all.
nsdcars5 — 2013-05-09T05:35:41-04:00 — #8
03:14:07 Tuesday, January 19 2038.
Exactly one hour and forty-one minutes before I turn 37. Also, has anyone noticed that the hh:mm forms the value of pi to 2 decimal places?
EDIT: What the heck?
ron_mcnichol — 2013-05-09T11:18:35-04:00 — #9
A single 32-bit integer, you say? You'd think that with RAM the size of disk drives and disk drives in the TB range, they'd have fixed this problem by storing at least 64-bit integers after Y2K. 128-bit integers? We've come a LONG way since I had to write a bootstrap on a single 51-column card (my card reader was set up to read 80-column cards after the stub was removed, leaving 51 columns). Union 76 gasoline transactions. The customer kept the stub.
The Spectra 70 was pre-programmed to read a single punched card. The bootstrap would then load the program. Memory was in 8K increments. A BIG system would have 32K. Disk drives? Sorts were done on card sorters, or on 3 tape drives if you were lucky. Up to 80 passes of the cards to sort the whole deck (usually fewer - say, on employee # or account #, depending on the data. Don't drop that deck!). Back then, space WAS at a premium.
bobpopeyeowen — 2013-05-09T12:34:27-04:00 — #10
The gap between the times you stated works out at exactly 0x7FFFFFFF seconds, which is indeed the maximum value for a signed 32-bit integer (a quick back-of-the-envelope check follows below). However, it seems to me the problem will occur at least 25 seconds before the time you specify. The following is an extract from the Wikipedia entry under Leap Second:
A leap second is a one-second adjustment that is occasionally applied to Coordinated Universal Time (UTC) in order to keep its time of day close to the mean solar time. This is necessary because the duration of one mean solar day is slightly longer than 24 hours (86400 SI seconds). Therefore, if the UTC day were defined as precisely 86400 SI seconds, the UTC time-of-day would slowly drift apart from that of solar-based standards, such as Greenwich Mean Time (GMT) and its successor UT1. The purpose of a leap second is to compensate for this drift, by occasionally scheduling some UTC days with 86401 or 86399 SI seconds. Because the Earth's rotation speed also varies in response to climatic and geological events, UTC leap seconds are irregularly spaced and unpredictable. Insertion of each UTC leap second is usually decided about six months in advance by the International Earth Rotation and Reference Systems Service (IERS), when needed to ensure that the difference between the UTC and UT1 readings will never exceed 0.9 second. Between their adoption in 1972 and June 2012, 25 leap seconds have been scheduled, all positive; negative leap seconds have not been needed.
At that rate no doubt there will be one or two more leap seconds before 2038. We better hurry if we want to be ready in time!
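As a quick back-of-the-envelope check on that 0x7FFFFFFF figure (plain division, no calendar library involved):

```c
/* Check that 0x7FFFFFFF seconds really does span from the 1970 epoch
 * into January 2038. */
#include <stdio.h>

int main(void) {
    long max = 0x7FFFFFFF;                    /* 2,147,483,647 seconds */
    double days  = max / 86400.0;             /* ~24,855.1 days        */
    double years = days / 365.2425;           /* ~68.05 years          */
    printf("%ld seconds = %.1f days = %.2f years after 1970\n", max, days, years);
    return 0;
}
```

About 68 years after the start of 1970 does indeed land in mid-January 2038.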
furrycanary — 2013-05-09T16:09:42-04:00 — #11
More than twelve years on, those of us who worked on removing the ‘millennium bug’ from legacy systems are still getting it in the neck both ways. Thanks to your comments, it is clear that we made two unforgivable mistakes:
- We wanted 'big fat contracts'. (TRANSLATION: We undertook paid employment.)
- The collapse of the banking system and all computerised civilisation didn’t occur, proving that the ‘millennium bug’ was just 'a marketing and publicity stunt'. (TRANSLATION: We did what we were paid for.)
In hindsight then, it is quite clear what we ought to have done.
1. Undertaken the work for no pay.
2. Failed to fix the problem.
Thanks for clearing that up. Incidentally, the clearly intentional irony in your having chosen to call yourself 'geek' is truly brilliant.
iluv9mm — 2013-05-10T02:25:28-04:00 — #12
I know of numerous vital corporate systems that were proven to fail in the year 2000. The only reason they didn't fail is that they were very easy to fix.
Most of our databases were fine - SQL Server already used a 64-bit datetime structure that stores date/time to the millisecond from roughly the years 1750 to 9999, and most other Microsoft programs used the 32-bit structure that will max out around 2038, so all Microsoft had to do was roll out patches that changed the default date conversions when people typed in 2-digit years - no biggie.
Some programmers, however, chose to use 6-digit character-string or integer variables to store dates in programs where the time portion messes up date calculations and comparisons, because they were too lazy to trim the time from the normal datetime variables when they displayed them or stored them in the database.
All we had to do was identify their DIY year and date types, change them to standard datetime types, and search out every place they were used to correct the calculations and conversions.
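To show what those DIY types actually break - the variable names and pivot window here are made up for illustration - a 6-digit YYMMDD integer sorts 2000-01-01 before 1999-12-31, which is exactly the kind of comparison we had to hunt down:

```c
/* Why a 6-digit YYMMDD integer breaks comparisons across the century:
 * "000101" (2000-01-01) is numerically smaller than "991231" (1999-12-31).
 * The pivot-window expansion is an assumed remediation. */
#include <stdio.h>

static long expand_yymmdd(long yymmdd) {
    long yy = yymmdd / 10000, mmdd = yymmdd % 10000;
    long yyyy = (yy < 50) ? 2000 + yy : 1900 + yy;   /* 00-49 -> 2000s */
    return yyyy * 10000 + mmdd;                      /* YYYYMMDD       */
}

int main(void) {
    long newyear = 101, nye = 991231;                /* 2000-01-01 and 1999-12-31 */
    printf("naive compare:    %s\n",
           newyear > nye ? "correct order" : "WRONG - Jan 2000 sorts earlier");
    printf("expanded compare: %s\n",
           expand_yymmdd(newyear) > expand_yymmdd(nye) ? "correct order" : "WRONG");
    return 0;
}
```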
One thing no one thought of was our backup software, which keeps the last 3 years of data online and keeps 10 years offline on a tape robot that can load history data back online as needed, then take it offline again once it hasn't been accessed for three months or something like that.
Well, when Y2K came, we installed and retested all the stuff we had modified and then went home after working for something like 48 hours straight.
Well, an auditor called us about 10 hours later and said that the history data was completely missing. So our manager runs down there and finds the silly tape robot in the process of securely erasing all its tapes and piling them in the destruction bin, since it thought the data and tapes were 2000 years old (or -2000 years old, which made it go wacko or something).
Luckily there was an off-site backup system that was manual, so most of those tapes were good, and the routine to get rid of the disk-resident data was still sitting there waiting for the backup to happen. I think we lost something like 2 days' worth of data because of bad tapes, which wasn't fun at all: typing it back in from paper printouts, scanning PCs for files still sitting in temp folders, etc.
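For what it's worth, here's a hedged guess at the kind of arithmetic the robot was doing - the threshold and field names are invented, but with 2-digit years the "age" of 1997 data computed in the year 2000 comes out nonsensical either way:

```c
/* With 2-digit years, data written in 1997 looks either -97 or 97 years
 * "old" in the year 2000, depending on sign handling, and either value can
 * trip a "past retention, destroy" rule. Numbers are illustrative. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int current_yy = 0;    /* the year 2000, stored as "00" */
    int tape_yy    = 97;   /* a tape written in 1997        */

    int age = current_yy - tape_yy;        /* -97: nonsense               */
    int abs_age = abs(age);                /* 97: far past any retention  */

    printf("computed age: %d years (|age| = %d)\n", age, abs_age);
    if (abs_age > 10 || age < 0)
        printf("retention policy: erase tape and send to destruction bin\n");
    return 0;
}
```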
ultimape — 2013-05-10T20:20:42-04:00 — #13
Modern COBOL date functions return a 4-digit year and expect a 4-character field to insert it into - it's a built-in function.
It is not that developers were lazy, or even that it was an oversight in the system. Back when these systems were first minted, hard drives were monstrosities the size of refrigerators that only held 20 MB; those 2 extra digits actually mattered quite a bit if you had 100,000+ customer records. It was a good decision at the time.
Consider the math.
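Roughly, and with assumed numbers (record count, date fields per record, and drive size are illustrative):

```c
/* Rough cost of storing the century digits: 2 extra bytes per date field
 * across a large customer file, measured against a 20 MB drive. */
#include <stdio.h>

int main(void) {
    long records     = 100000;                 /* assumed customer record count  */
    long date_fields = 5;                      /* assumed date fields per record */
    long bytes_saved = 2;                      /* "19"/"20" century digits       */
    long drive_bytes = 20L * 1024 * 1024;      /* a 20 MB drive                  */

    long saved = records * date_fields * bytes_saved;
    printf("bytes saved: %ld (%.1f%% of a 20 MB drive)\n",
           saved, 100.0 * saved / drive_bytes);
    return 0;
}
```

Call it a megabyte - nearly 5% of a 20 MB drive spent on digits that were always going to be "19".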
andrewbrmn — 2013-05-10T23:11:01-04:00 — #14
As one of the programmers who coded both the Y2K bug and the 2038 bug, just let me say we absolutely knew what we were doing and what the consequences would be. We proceeded as we did anyway for two reasons. 1. There was no other way around the hardware limitations. 2. We fully expected all of our code would be rewritten prior to Y2K. (Ultimately it was.) We didn't do what we did in secret either. We were pointing out the potential problem from the earliest design phase of our earliest software right up through 12/31/1999. Finally, the problem was not just in COBOL. It existed in every storage-conscious data center regardless of whether the programming was Y2K compliant. And don't forget all of the early-generation PCs that needed replacing.
jan_heckman — 2013-06-16T16:10:11-04:00 — #15
It fell to me in 1990 to write a company-wide time-planning and accounting system. Many attempts had gone before. It had to be easy to use and work without delay, using the network storage of those days. Being an Atari ST geek, I ported and tweaked VDI and AES back to the PC to produce a modeless dialog on PCs, had to write my own memory allocation system, and bit-shifted my storage down to a minimum to keep traffic low. I never fell for the Y2K bug - too obvious - but hey, I never anticipated my system would see the year 2038. Everything up to that point was taken care of.
The endgame came around 2003. Somebody noted that not all of my code was documented and, OH MY, no familiar database backend was anywhere in sight. Now the work I once did single-handed no longer gets done properly, but hey, consultants rejoice.
Of course you work with the times...
viggenboy — 2013-06-27T11:15:22-04:00 — #16
It really winds me up when people say "oh, that Y2K thing was a lot of fuss about nothing". Well, in the end, yes it was - but that's because people like me and many, many thousands of others spent years and many millions of dollars making sure that it was not a problem.
My team started working on Y2K as early as 1994 to make sure that everything was in place and ready so that nothing untoward happened.
kjetilho — 2013-09-17T12:21:07-04:00 — #17
When these systems reach that date (or the software runs calculations that pass that projected date), there will be an integer overflow which will effectively reset the time back to January 1 1970.
Nope - it will be reset to 1901-12-13, since it goes from 2147483647 to -2147483648.
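A minimal sketch of that wrap, assuming a platform whose time_t is 64-bit and whose gmtime() accepts pre-1970 values (glibc does):

```c
/* One tick past the signed 32-bit maximum lands on the signed 32-bit
 * minimum, which gmtime() places at 1901-12-13 20:45:52 UTC. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static void show(int32_t seconds) {
    time_t t = seconds;                                  /* widen for conversion */
    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&t));
    printf("%12ld -> %s UTC\n", (long)seconds, buf);
}

int main(void) {
    show(INT32_MAX);   /*  2147483647 -> 2038-01-19 03:14:07 */
    show(INT32_MIN);   /* -2147483648 -> 1901-12-13 20:45:52 */
    return 0;
}
```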