Time And Date


Users, the filesystem, the scheduler, system applications, and some user applications will all need to know the date and/or time to various accuracies. Some applications will need to be started, sent signals, or sent messages when the clock strikes a certain time. To provide these services, the OS needs to obtain the current time during boot and maintain this time.

There are three important subdivisions of "time" in a system: human time, filesystem time, and scheduler timeslice "ticks". Human time can be measured in seconds (or even days, for longer periods), whereas ticks must always be 0.01 seconds or smaller. An important design decision is whether to keep these time formats separate or to merge them together.

Maintaining The Time Internally

During operation, a typical OS will use a local hardware timer to drive its own internal timekeeping code -- often called the "system clock". On an x86 system, there are two to four timers available: the PIT, the RTC, the TSC, and the local APIC timer (older systems may not have a TSC or an APIC). These timers can produce an interrupt (or can even be polled) at regular intervals to inform the OS of the passage of time. Using at least one of these timers is probably necessary to control and generate scheduler timeslice ticks, and it takes very little additional code in the driver to also use that timer to update the system clock. The OS may only need to know how many times the chosen timer ticks per second and keep a counter -- or the system clock may be designed to tick at the same rate as the chosen timer.
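
To make this concrete, here is a minimal sketch (in C, with hypothetical names) of a timer interrupt handler driving the system clock, assuming a timer programmed to fire a fixed HZ times per second and a boot-time second count taken from the RTC:

  #define HZ 100  /* timer interrupts per second (an assumed rate) */

  static volatile unsigned long long ticks;   /* 1/HZ intervals since boot */
  static unsigned long long boot_seconds;     /* read from the RTC at boot */

  void timer_interrupt_handler(void)
  {
      ticks++;                     /* the whole cost of updating the clock */
      /* scheduler_tick();           the same interrupt can drive timeslices */
  }

  unsigned long long system_time_seconds(void)
  {
      return boot_seconds + ticks / HZ;
  }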

However, some of these timers are only accurate to within a handful of seconds per day, so it may be desirable to compare the system clock against a reference periodically, even while the system is running.

It is also possible not to use a timer to update the system clock at all, and instead consult an "accurate" external reference every time the system clock is read. Depending on the reference used, this method may only be accurate to 1 second, it may be slow, it may be needed often (especially for filesystem accesses), and the format it returns is often not what the OS wants.

Obtaining The Initial Time And Date

When the computer is turned off, system software obviously cannot keep an internal clock updated using a timer. So when the computer is rebooted, the OS needs some other method of obtaining an initial time and date. There are really only two automated methods; if both fail, the only fallback is to ask a user to enter the date and time.

Battery-Backed Clock

For x86 PCs, there's a special "Real Time Clock" (RTC), which is "combined" with the system's CMOS. It has its own battery so that it keeps running when the computer is turned off and the contents of its memory are not lost. See the CMOS article for information on reading the time and date from the RTC.
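
As a rough illustration (not a complete driver), reading the RTC boils down to port I/O on the CMOS index/data ports, 0x70 and 0x71. This sketch assumes inb(port)/outb(port, value) helpers and an RTC left in BCD mode; a real driver should check status register B for BCD vs. binary and 12- vs. 24-hour mode:

  static unsigned char cmos_read(unsigned char reg)
  {
      outb(0x70, reg);             /* select a CMOS register */
      return inb(0x71);            /* read its contents */
  }

  static unsigned char bcd_to_binary(unsigned char bcd)
  {
      return (bcd >> 4) * 10 + (bcd & 0x0F);
  }

  void rtc_read_time(int *hour, int *minute, int *second)
  {
      while (cmos_read(0x0A) & 0x80)
          ;                        /* wait for "update in progress" to clear */
      *second = bcd_to_binary(cmos_read(0x00));
      *minute = bcd_to_binary(cmos_read(0x02));
      *hour   = bcd_to_binary(cmos_read(0x04));
  }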

Almost any other system besides x86 will also have some kind of battery-backed Date/Time clock.

The downside is that the battery will eventually die and may not get replaced. It is a good idea to check the values read from any battery-backed clock for sanity.

Network Time

Getting some kind of network time is a superior method for achieving accuracy and consistency across machines. The downside is that the end-user's computer may not be (currently) connected to a network, or the server may go down.

Internet

NTP is a good protocol with a very large pool of free servers which may be automatically selected by DNS. It uses UDP on port 123. (Wikipedia article)

The TIME protocol is extremely simple: attach a TCP socket to port 37 on a NIST server, read a 32-bit big-endian value (seconds since midnight, January 1, 1900, UTC), and close the socket. This has caveats: it is not as accurate as NTP; the value will roll over in 2036; and, as stated on the server page, using the complete machinery of TCP to retrieve only 32 bits is not kind to the server's bandwidth. The Unix rdate command uses the TIME protocol.
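
A minimal TIME client is little more than "connect, read 4 bytes, close". This sketch uses POSIX sockets and subtracts the 2208988800 seconds between the protocol's 1900 epoch and the Unix 1970 epoch; the server address is whatever you choose to point it at:

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  long time_protocol_query(const char *server_ip)
  {
      struct sockaddr_in addr;
      uint32_t value;
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0)
          return -1;

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_port = htons(37);                /* TIME protocol port */
      inet_pton(AF_INET, server_ip, &addr.sin_addr);

      if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
          read(fd, &value, 4) != 4) {
          close(fd);
          return -1;
      }
      close(fd);
      /* big-endian seconds since 1900 -> Unix seconds since 1970 */
      return (long)(ntohl(value) - 2208988800UL);
  }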

LAN

The official NTP software can easily be configured to serve NTP on a LAN (or indeed the Internet).

It is very easy to write a TIME server (it is about as easy as a server can be), and using TCP to retrieve 32 bits often doesn't matter on a LAN.
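
For illustration, a server sketch under the same POSIX assumptions -- error handling omitted for brevity -- is just "accept, write 4 big-endian bytes, close":

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <time.h>
  #include <unistd.h>

  void time_protocol_serve(void)
  {
      struct sockaddr_in addr;
      int listener = socket(AF_INET, SOCK_STREAM, 0);

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_port = htons(37);                /* TIME protocol port */
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      bind(listener, (struct sockaddr *)&addr, sizeof(addr));
      listen(listener, 8);

      for (;;) {
          int client = accept(listener, NULL, NULL);
          /* Unix seconds since 1970 -> TIME protocol seconds since 1900 */
          uint32_t value = htonl((uint32_t)(time(NULL) + 2208988800UL));
          write(client, &value, 4);
          close(client);
      }
  }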

It's just as easy to write your own protocol for LAN use. You can make the resolution and range better than the NIST service offers, but it may be hard to equal NTP's accuracy. There is at least one professional precedent: diskless Plan 9 computers get the time from their file server.

Internal OS Time Formats

Choosing a good time format, and writing the code to support it, can be more complex than it first seems. As noted above, humans, the filesystem, and the scheduler all have somewhat different needs when it comes to time: they operate on different timescales. You can create a separate format for each, or it can make more sense for the OS to maintain a single universal time format with higher accuracy. All possible choices have downsides, either in complexity or in wasted computer resources. It is important to note that there is no standard yet, and there may be no best choice.

Human Timescales

In the short term, humans are comfortable dealing with seconds. A time format specified to finer than one second is partially wasted on humans; in fact, presenting users with too much precision can confuse them and decrease their productivity.

On the other hand, once a file (for example) is more than a year old, a user will no longer care about what second it was created. So in the longer term, users are going to be much more interested in a time specified in days.

This may argue for a system time kept in seconds -- or perhaps a more flexible format that starts in seconds and then switches to something like a Julian Day Number for longer periods of time; or for a format specified in intervals shorter than a second, which only displays a limited amount of the time information actually available.

Filesystem Timestamps

Almost every filesystem uses a predefined time format, and once again there is no standard. If your OS only supports one filesystem, it may be smart to match your OS time format to the one that filesystem uses, so that you never need to do conversions. Many filesystems use time formats that are specified in seconds (or coarser), which can cause "less than perfect" results for utilities such as make -- if the utility is badly designed. When you design an OS, you will need to decide whether to coddle the bad design decisions of other people, to help make their software work.
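
As one concrete example of a predefined format, here is how FAT packs its timestamps into two 16-bit words -- note the 2-second resolution in the time word, which is exactly the kind of quirk that trips up second-comparing utilities:

  #include <stdint.h>

  /* FAT time: bits 15-11 hours, 10-5 minutes, 4-0 seconds/2 */
  uint16_t fat_pack_time(int hour, int minute, int second)
  {
      return (uint16_t)((hour << 11) | (minute << 5) | (second / 2));
  }

  /* FAT date: bits 15-9 years since 1980, 8-5 month, 4-0 day */
  uint16_t fat_pack_date(int year, int month, int day)
  {
      return (uint16_t)(((year - 1980) << 9) | (month << 5) | day);
  }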

Scheduler Timeslices

On a multitasking OS, the length of time each thread is allowed to run is often based on a small unit of time called a "timeslice". This is an extremely important function of an OS, so it is quite important to have some kind of counter available that measures time on this timescale. One major obstacle to having a universal time format for your OS is variable-length timeslices, because they may make it very difficult to establish the minimum length of time that your universal time format needs to represent.
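
A sketch of how timeslice accounting might hang off the timer interrupt, assuming fixed-length slices (all names here are hypothetical):

  #define TIMESLICE_TICKS 5        /* e.g. 5 ticks at 100 Hz = 50 ms slices */

  static volatile int slice_remaining = TIMESLICE_TICKS;

  void scheduler_tick(void)        /* called from the timer interrupt */
  {
      if (--slice_remaining <= 0) {
          slice_remaining = TIMESLICE_TICKS;
          /* schedule();             switch to the next runnable thread */
      }
  }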

Historical Dates

Keep in mind that accounting packages, databases, and other programs may need to store dates from the past century, such as birthdates. It is even possible to imagine wanting to store actual historical dates from many centuries ago. So, once again, you may need to make a less-than-optimal choice of time format in order to support this feature.

Example Time Formats

The *NIX time format keeps track of seconds since the start of 1970 in a 32-bit value. The traditional signed count overflows in January 2038 (an unsigned count lasts until 2106), and even the signed form cannot store dates before December 1901.

The Windows time format uses a 64-bit unsigned value counting the number of 100-nanosecond (100 × 10⁻⁹ second) intervals since January 1, 1601. This value becomes obsolete in roughly 58 thousand years. Why 1601? It can be considered the start of a 400-year cycle of leap years, which makes conversion into a date simpler, and it is very close to the actual beginning of the use of the Gregorian calendar (the one most people use). The slight drawback is that you need 8 bytes of storage everywhere you want to store a date -- which is a lot of places, and which you could otherwise use for storing things such as file version numbers.

BCOS uses something very similar, except using signed milliseconds from Y2K rather than unsigned 100-nanosecond intervals from 1601.
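
Converting between the first two example formats above is a matter of one offset and one scale factor: there are 11644473600 seconds between the 1601 and 1970 epochs, and 10,000,000 100-nanosecond intervals per second:

  #include <stdint.h>

  #define EPOCH_GAP_SECONDS 11644473600LL   /* 1601-01-01 to 1970-01-01 */
  #define INTERVALS_PER_SEC 10000000ULL     /* 100 ns units per second */

  uint64_t unix_to_windows_time(int64_t unix_seconds)
  {
      return (uint64_t)(unix_seconds + EPOCH_GAP_SECONDS) * INTERVALS_PER_SEC;
  }

  int64_t windows_to_unix_time(uint64_t windows_time)
  {
      return (int64_t)(windows_time / INTERVALS_PER_SEC) - EPOCH_GAP_SECONDS;
  }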

"Scientific" Format

There is in fact a measurement of time used in astronomy, called the Julian Day Number (JDN). For reasons involving historical calendar systems, it measures time in days, as a floating point value, starting from noon UTC on January 1, 4713 BC. At the very least, you can consider it a standard in an area that is sorely lacking in standards, and because it is a standard, there is code available for converting JDNs to calendar dates. In any case, once you have divided the seconds and fractions of seconds out of any time format, you are left with "days" -- which means that the difference between your number and the Julian Day Number is merely an offset. It is also possible to truncate JDNs and use them as integers. See the Julian Day Number article for some date calculation code examples.
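
For reference, the classic Fliegel & Van Flandern integer algorithm converts a Gregorian date to a JDN using only truncating integer division (gregorian_to_jdn(2000, 1, 1) yields 2451545):

  long gregorian_to_jdn(long year, long month, long day)
  {
      long a = (month - 14) / 12;  /* -1 for January/February, 0 otherwise */
      return (1461 * (year + 4800 + a)) / 4
           + (367 * (month - 2 - 12 * a)) / 12
           - (3 * ((year + 4900 + a) / 100)) / 4
           + day - 32075;
  }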


Which Time?

Once you've determined what format to use for keeping track of time, it's important to decide which time you'll keep track of. In general there are three different times: the user's "wall clock" time, local standard time, and UTC (Coordinated Universal Time).

At any instant, UTC is the same everywhere around the world. Local standard time depends on which time zone you're in (for example, my local standard time is always UTC + 9.5 hours). Wall clock time is the same as local standard time unless daylight savings is in effect (for example, my wall clock time is UTC + 9.5 hours, except during summer when it becomes UTC + 10.5 hours).
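
In code, the layering is just a pair of offsets on top of UTC -- expressed in seconds, so that half-hour zones like UTC + 9.5 work. A trivial sketch (real systems look these values up in a time zone database):

  long utc_to_wall_clock(long utc_seconds, long tz_offset_seconds, int dst_active)
  {
      /* wall clock = UTC + zone offset + daylight savings correction */
      return utc_seconds + tz_offset_seconds + (dst_active ? 3600 : 0);
  }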

OSes created before the internet existed were built assuming that the user would set the computer's battery-backed clock to wall clock time, so that the computer could easily function as a clock for its owner. The OS assumed it might need to take responsibility for adjusting the battery-backed clock itself for daylight savings. Old versions of Windows do this. This causes problems when the computer dual-boots two OSes that both expect to adjust the battery-backed clock for daylight savings (so that it's accidentally changed by two hours instead of one). See http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html for a detailed discussion of this problem.

With distributed computing, however, comes the need to synchronize multiple computers' clocks. The obvious way to do that is to set the clocks on all systems to UTC, with the OS taking responsibility for converting UTC to wall clock time (including all the complexity of daylight savings) before displaying the time to the user. This is what Linux does. If a system dual-boots both types of OS, and both read the battery-backed clock, there is obviously an unfixable conflict between them.

Some solutions:

  • do not use the battery-backed clock
  • make using the battery-backed clock optional
  • allow the superuser to set a flag choosing which way the battery-backed clock is interpreted

The stupid thing to do is to force the clock to be interpreted only one way or the other. As an indication of how unworkable this is, even Windows can use the clock either way (though, of course, it doesn't support its non-default mode very well).

Complexities

Regardless of what you do, you will eventually need to convert the time format your OS uses into other time formats. Unfortunately, time itself is not a simple thing, and conversion between time formats can be quite complex (especially if done accurately). There are also some problems in maintaining the OS's time that don't involve conversions. Things to watch out for include:

Time Zones
This is mostly for converting between local standard time and UTC. Most OSs have a database so that each time zone can be given a name or location (for example, "Adelaide, South Australia" rather than "UTC +9.5 hours"). A map of timezones and their locations can be found here: [1] (1.23MB)
Daylight Savings
This is a nightmare. Some countries have different time zone rules for each state, and some have different rules for each local area within a state (the USA is particularly messed up). Worse still, some areas decide what they are going to do each year, so it's impossible to work it out in advance, and for most areas the daylight savings rules are subject to the whims of politicians. For some OSs, daylight savings information is kept in the same database as time zone information, so that a user can tell the OS where they are and the OS can figure out the appropriate time zone and daylight savings rules from that. Daylight savings can be especially problematic for dates in the past.
Leap Years
As you all know, a year isn't exactly 365 days. The Gregorian rule is that a year divisible by 4 is a leap year, unless it is a century year not divisible by 400. For example, the years 2004, 1996 and 1968 are leap years; the years 1700, 1800, and 1900 are not; but 2000, 1600 and 2400 are. This keeps the date in synchronization with the seasons. (The rule is shown as code after this list.)
Leap Seconds
Due to standards bodies, atomic clocks, and gravitational influences, a day is not exactly 86400 seconds long (and on average each day is slightly longer than the last). To account for this, an occasional leap second is added -- historically around one second every year or two, at irregular intervals. A list of when leap seconds have been added can be found at http://tf.nist.gov/pubs/bulletin/leapsecond.htm. This is beyond the timekeeping accuracy of any computer clock, but may be an issue if you want very accurate answers when subtracting one time from another in your format.
Calendars
Most of the world uses the Gregorian calendar, but some people don't, and some use other calendars in conjunction with the Gregorian calendar. If you intend to make your OS international, or if you convert your time format into Gregorian dates before 1920, you may want to research other calendars and (for past dates) the history of calendars. An excellent starting point can be found at [2].
Fixing Drift
Any timer in a system may run slow or fast, and that may be detectable -- either by the user or the OS. It may be desirable to add or subtract a small extra amount of time on each timer tick.
Accuracy
Unfortunately, the electronics in PCs aren't as accurate as they could be, and over time (regardless of everything else) the computer's clock will drift. Some OSs ignore this problem and allow the user to change or adjust the time whenever they like. This causes problems with some utilities (if you've ever got a "modification time in the future" error message from make, you'll know why). Other OSs (often those designed for servers that are never turned off) adjust the time in a more subtle way, with many tiny changes rather than one larger sudden change; an example is the adjtimex utility on *nix systems. However, enforcing such subtle changes can make life difficult for users if the internal clock is far off. I (eekee) once had a situation where the system clock was off by 8 years, and ntpd was patched to ignore its option to allow sudden changes. ntpd made tiny changes, effectively accelerating the time to about 3 times normal speed, so the computer's clock wouldn't have been correct for over 2.5 years! If an OS doesn't set its idea of the time from an authoritative external source on boot (and sometimes it can't, because there's no network), then it must either query the user on boot or allow the user to adjust the time whenever they like. And some users will get very annoyed at having to manually enter the time on every boot.
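
For reference, the Gregorian leap year rule from the list above, as code:

  int is_leap_year(int year)
  {
      if (year % 400 == 0) return 1;    /* 2000, 1600, 2400 are leap years */
      if (year % 100 == 0) return 0;    /* 1700, 1800, 1900 are not */
      return year % 4 == 0;             /* 2004, 1996, 1968 are */
  }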
