Unix Timestamps Explained: What They Are and How to Use Them
Published 2026-04-04
If you work with databases, APIs, or log files, you've likely seen numbers like 1712345678 used to represent dates. These are Unix timestamps, and they're one of the most common ways computers track time.
What is a Unix Timestamp?
A Unix timestamp (also called Epoch time or POSIX time) is the number of seconds that have elapsed since January 1, 1970 00:00:00 UTC — a date known as the "Unix Epoch." (Leap seconds are ignored, so every day counts as exactly 86,400 seconds.)
For example:
- 0 = January 1, 1970 00:00:00 UTC
- 1000000000 = September 9, 2001
- 1700000000 = November 14, 2023
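Here's a quick illustration of both directions in TypeScript, using the built-in Date object. Date counts in milliseconds, so we multiply or divide by 1000 to move between it and Unix seconds:

```typescript
// Current Unix timestamp: Date.now() returns milliseconds,
// so divide by 1000 and truncate to whole seconds.
const nowSeconds: number = Math.floor(Date.now() / 1000);

// Converting a Unix timestamp (seconds) back to a date:
// the Date constructor expects milliseconds, hence the * 1000.
const date: Date = new Date(1700000000 * 1000);

console.log(nowSeconds);         // e.g. 1712345678
console.log(date.toISOString()); // "2023-11-14T22:13:20.000Z"
```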
Why Use Timestamps?
- Universal — no timezone confusion, no date format ambiguity
- Easy math — calculating time differences is just subtraction (see the sketch after this list)
- Compact — a single integer vs. a long date string
- Sortable — timestamps sort chronologically as numbers
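A short TypeScript sketch of the "easy math" and "sortable" points, using arbitrary example values:

```typescript
// Two events as Unix timestamps (seconds).
const start = 1700000000;
const end = 1700003600;

// Easy math: a time difference is plain subtraction.
const elapsedSeconds = end - start;         // 3600
const elapsedHours = elapsedSeconds / 3600; // 1

// Sortable: numeric order is chronological order.
const events = [1712345678, 1000000000, 1700000000];
events.sort((a, b) => a - b); // [1000000000, 1700000000, 1712345678]
```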
Seconds vs. Milliseconds
Unix timestamps are traditionally in seconds (10 digits, like 1712345678). But JavaScript and some APIs use milliseconds (13 digits, like 1712345678000). Always check which one you're working with!
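One common defensive pattern is to normalize whatever you receive to seconds. The helper below, toSeconds, is a hypothetical example; its cutoff at 1e12 is a heuristic assumption (13-digit millisecond values exceed it, 10-digit second values don't), not a standard:

```typescript
// Hypothetical helper: normalize a timestamp to seconds.
// Assumption: values >= 1e12 are milliseconds (true for any
// millisecond timestamp after September 2001).
function toSeconds(ts: number): number {
  return ts >= 1e12 ? Math.floor(ts / 1000) : ts;
}

toSeconds(1712345678);    // 1712345678 (already seconds)
toSeconds(1712345678000); // 1712345678 (milliseconds converted)

// JavaScript's Date works in milliseconds:
Date.now();                    // 13 digits, e.g. 1712345678000
Math.floor(Date.now() / 1000); // 10 digits, Unix seconds
```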
The Year 2038 Problem
32-bit systems store timestamps as a signed 32-bit integer, which maxes out at 2,147,483,647, a value reached on January 19, 2038 at 03:14:07 UTC. One second later, such a system overflows and wraps around to December 13, 1901. Most modern systems use 64-bit timestamps, which won't overflow for roughly 292 billion years.
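You can simulate the wraparound in TypeScript by coercing to a 32-bit signed integer with | 0. This is a quick illustration of the arithmetic, not how a real 32-bit kernel stores time:

```typescript
// Simulate a signed 32-bit counter at the 2038 boundary.
// `| 0` coerces a number to a 32-bit signed integer.
const max32 = 2147483647;         // 2^31 - 1
const wrapped = (max32 + 1) | 0;  // -2147483648 (overflow!)

console.log(new Date(max32 * 1000).toISOString());
// "2038-01-19T03:14:07.000Z" — the last representable second

console.log(new Date(wrapped * 1000).toISOString());
// "1901-12-13T20:45:52.000Z" — where an overflowed clock lands
```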
Convert Timestamps
Use our free Unix Timestamp Converter to convert between timestamps and human-readable dates instantly.