Understanding Timestamp Precision
Timestamp precision refers to the level of detail at which a timestamp measures time. Different applications require different levels of precision, from basic second-level accuracy to ultra-precise nanosecond measurements. Understanding these precision levels is crucial for choosing the right format for your use case.
The four main precision levels are:
- Seconds (10 digits) - Standard Unix timestamp
- Milliseconds (13 digits) - JavaScript, Java default
- Microseconds (16 digits) - High-precision systems
- Nanoseconds (19 digits) - Ultra-precise timing
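All four levels can be derived from a single clock reading by integer scaling. A minimal Python sketch (the digit counts shown hold for present-day dates):

```python
import time

# One instant, expressed at all four precision levels by integer scaling
now_ns = time.time_ns()  # nanoseconds since the Unix epoch

print(now_ns // 1_000_000_000)  # seconds       (10 digits)
print(now_ns // 1_000_000)      # milliseconds  (13 digits)
print(now_ns // 1_000)          # microseconds  (16 digits)
print(now_ns)                   # nanoseconds   (19 digits)
```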
The Four Precision Levels
1. Seconds (10 Digits)
Standard Unix Timestamp - The original and most common format.
Format
Example: 1704067200
Represents: January 1, 2024, 00:00:00 UTC
Precision: 1 second
Digit Count: 10 digits
Characteristics
- Range: December 13, 1901 to January 19, 2038 (32-bit signed)
- Range: approximately ±292 billion years from the epoch (64-bit signed)
- Storage: 4 bytes (32-bit) or 8 bytes (64-bit)
- Quantization error: ±0.5 seconds (from rounding to whole seconds)
When to Use
- ✅ Event logging (user registration, login times)
- ✅ Database timestamps (created_at, updated_at)
- ✅ File modification times
- ✅ Scheduling tasks (cron jobs, batch processes)
- ✅ General timestamping where sub-second precision isn't needed
Code Examples
C/C++
```c
#include <time.h>
#include <stdio.h>

int main() {
    time_t timestamp = time(NULL);
    printf("Current timestamp: %ld\n", timestamp);
    // Output: 1704067200 (10 digits)
    return 0;
}
```
Python
```python
import time

timestamp = int(time.time())
print(f"Current timestamp: {timestamp}")
# Output: 1704067200 (10 digits)
```
PHP
```php
<?php
$timestamp = time();
echo "Current timestamp: $timestamp\n";
// Output: 1704067200 (10 digits)
?>
```
SQL
```sql
-- Most databases store TIMESTAMP with second precision by default
-- (MySQL shown)
SELECT UNIX_TIMESTAMP();
-- Output: 1704067200
```
2. Milliseconds (13 Digits)
JavaScript/Java Standard - Adds three decimal places for millisecond precision.
Format
Example: 1704067200000
Represents: January 1, 2024, 00:00:00.000 UTC
Precision: 0.001 seconds (1 millisecond)
Digit Count: 13 digits
Characteristics
- Range: ±8,640,000,000,000,000 milliseconds from the epoch (the ECMAScript Date limit, roughly ±273,790 years)
- Storage: 8 bytes (64-bit integer or double)
- Quantization error: ±0.0005 seconds (0.5 milliseconds)
- Resolution: 1/1,000th of a second
When to Use
- ✅ Web applications (JavaScript Date.now())
- ✅ Performance monitoring (API response times)
- ✅ Animation timing (frame rates, transitions)
- ✅ Event tracking (click times, user interactions)
- ✅ Trading systems (stock prices, order execution)
- ✅ Real-time communications (chat applications)
Code Examples
JavaScript
```javascript
// Get current timestamp in milliseconds
const timestamp = Date.now();
console.log(timestamp);
// Output: 1704067200000 (13 digits)

// Create Date from millisecond timestamp
const date = new Date(1704067200000);
console.log(date.toISOString());
// Output: 2024-01-01T00:00:00.000Z
```
Java
```java
// Get current timestamp in milliseconds
long timestamp = System.currentTimeMillis();
System.out.println(timestamp);
// Output: 1704067200000 (13 digits)

// Create Date from millisecond timestamp
Date date = new Date(1704067200000L);
System.out.println(date);
```
Python
```python
import time

# Get timestamp in milliseconds
timestamp_ms = int(time.time() * 1000)
print(f"Millisecond timestamp: {timestamp_ms}")
# Output: 1704067200000 (13 digits)
```
Node.js
```javascript
// High-resolution time in milliseconds
const start = performance.now();
// ... some operation ...
const end = performance.now();
console.log(`Operation took ${end - start} milliseconds`);
```
3. Microseconds (16 Digits)
High-Precision Systems - Six decimal places for microsecond precision.
Format
Example: 1704067200000000
Represents: January 1, 2024, 00:00:00.000000 UTC
Precision: 0.000001 seconds (1 microsecond)
Digit Count: 16 digits
Characteristics
- Range: Extremely wide (±292,471 years from epoch)
- Storage: 8 bytes (64-bit integer)
- Quantization error: ±0.0000005 seconds (0.5 microseconds)
- Resolution: 1/1,000,000th of a second
When to Use
- ✅ Database systems (PostgreSQL TIMESTAMP, MySQL DATETIME(6))
- ✅ Scientific computing (physics simulations)
- ✅ Network protocols (packet timestamping)
- ✅ Audio/video processing (frame synchronization)
- ✅ High-frequency trading (microsecond-level execution)
- ✅ Distributed systems (event ordering, causality)
Code Examples
Python
```python
import time

# Get timestamp in microseconds
timestamp_us = int(time.time() * 1_000_000)
print(f"Microsecond timestamp: {timestamp_us}")
# Output: 1704067200000000 (16 digits)

# Using datetime
from datetime import datetime
dt = datetime.now()
timestamp_us = int(dt.timestamp() * 1_000_000)
print(f"Microsecond timestamp: {timestamp_us}")
```
Go
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Get current timestamp in microseconds
	timestamp := time.Now().UnixMicro()
	fmt.Printf("Microsecond timestamp: %d\n", timestamp)
	// Output: 1704067200000000 (16 digits)
}
```
PostgreSQL
```sql
-- PostgreSQL stores timestamps with microsecond precision
SELECT EXTRACT(EPOCH FROM NOW()) * 1000000;
-- Output: 1704067200000000

-- Create timestamp with microsecond precision
SELECT to_timestamp(1704067200.123456);
-- Output: 2024-01-01 00:00:00.123456+00
```
C++
```cpp
#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;

    // Get microsecond timestamp
    auto now = system_clock::now();
    auto micros = duration_cast<microseconds>(
        now.time_since_epoch()
    ).count();

    std::cout << "Microsecond timestamp: " << micros << std::endl;
    // Output: 1704067200000000 (16 digits)
    return 0;
}
```
4. Nanoseconds (19 Digits)
Ultra-Precise Timing - Nine decimal places for nanosecond precision.
Format
Example: 1704067200000000000
Represents: January 1, 2024, 00:00:00.000000000 UTC
Precision: 0.000000001 seconds (1 nanosecond)
Digit Count: 19 digits
Characteristics
- Range: ±292 years from epoch (64-bit signed)
- Storage: 8 bytes (64-bit integer)
- Quantization error: ±0.0000000005 seconds (0.5 nanoseconds)
- Resolution: 1/1,000,000,000th of a second
When to Use
- ✅ Performance profiling (CPU cycle measurements)
- ✅ Hardware instrumentation (oscilloscopes, logic analyzers)
- ✅ Kernel development (scheduler timestamps)
- ✅ Real-time systems (robotics, aerospace)
- ✅ Cryptographic timestamping (blockchain, security)
- ✅ Physics experiments (particle detection)
Code Examples
Go
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Get current timestamp in nanoseconds
	timestamp := time.Now().UnixNano()
	fmt.Printf("Nanosecond timestamp: %d\n", timestamp)
	// Output: 1704067200000000000 (19 digits)

	// Benchmark operations
	start := time.Now()
	// ... some operation ...
	elapsed := time.Since(start).Nanoseconds()
	fmt.Printf("Operation took %d nanoseconds\n", elapsed)
}
```
Rust
```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    // Get nanosecond timestamp
    let duration = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap();

    let nanos = duration.as_nanos();
    println!("Nanosecond timestamp: {}", nanos);
    // Output: 1704067200000000000 (19 digits)
}
```
C++
```cpp
#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;

    // Get nanosecond timestamp
    auto now = system_clock::now();
    auto nanos = duration_cast<nanoseconds>(
        now.time_since_epoch()
    ).count();

    std::cout << "Nanosecond timestamp: " << nanos << std::endl;
    // Output: 1704067200000000000 (19 digits)
    return 0;
}
```
Linux (C)
```c
#include <time.h>
#include <stdio.h>

int main() {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);

    long long nanos = (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    printf("Nanosecond timestamp: %lld\n", nanos);
    // Output: 1704067200000000000 (19 digits)
    return 0;
}
```
Precision Comparison Table
| Level | Precision | Digits | Example | Use Cases | Languages/Systems |
|---|---|---|---|---|---|
| Second | 1s | 10 | 1704067200 | Logs, databases, scheduling | C, PHP, Python, SQL |
| Millisecond | 1ms (10⁻³s) | 13 | 1704067200000 | Web apps, trading, APIs | JavaScript, Java |
| Microsecond | 1μs (10⁻⁶s) | 16 | 1704067200000000 | HFT, audio/video, networks | Python, Go, PostgreSQL |
| Nanosecond | 1ns (10⁻⁹s) | 19 | 1704067200000000000 | Profiling, hardware, crypto | Go, Rust, C++ |
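Because adjacent levels differ by exactly three digits, the digit count alone identifies a timestamp's precision for present-day dates. A small Python sketch of that heuristic (the function name is illustrative):

```python
def detect_precision(timestamp: int) -> str:
    """Guess a Unix timestamp's precision level from its digit count.

    Valid for present-day dates; e.g. a 1970s second-level timestamp
    would have fewer than 10 digits.
    """
    levels = {10: 'seconds', 13: 'milliseconds',
              16: 'microseconds', 19: 'nanoseconds'}
    digits = len(str(abs(timestamp)))
    if digits not in levels:
        raise ValueError(f"Unexpected digit count: {digits}")
    return levels[digits]

print(detect_precision(1704067200))           # seconds
print(detect_precision(1704067200000000000))  # nanoseconds
```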
Converting Between Precision Levels
Scaling Up (Adding Precision)
```javascript
// Second to Millisecond
const seconds = 1704067200;
const milliseconds = seconds * 1000;
// 1704067200000

// Millisecond to Microsecond
const microseconds = milliseconds * 1000;
// 1704067200000000

// Microsecond to Nanosecond (use BigInt: the result
// exceeds Number.MAX_SAFE_INTEGER)
const nanoseconds = BigInt(microseconds) * 1000n;
// 1704067200000000000n
```
Scaling Down (Reducing Precision)
```javascript
// Nanosecond to Microsecond (BigInt literal: 19 digits
// exceed Number.MAX_SAFE_INTEGER; BigInt division truncates)
const nanos = 1704067200123456789n;
const micros = nanos / 1000n;
// 1704067200123456n

// Microsecond to Millisecond
const millis = micros / 1000n;
// 1704067200123n

// Millisecond to Second
const secs = millis / 1000n;
// 1704067200n
```
Python Conversion Utility
```python
class TimestampConverter:
    """Convert between timestamp precision levels using integer math.

    Floats are avoided deliberately: multiplying a 19-digit nanosecond
    value by 0.000001 would lose precision.
    """

    # Units per second at each precision level
    _SCALE = {
        'seconds': 1,
        'milliseconds': 1_000,
        'microseconds': 1_000_000,
        'nanoseconds': 1_000_000_000,
    }

    @classmethod
    def convert(cls, timestamp, from_precision, to_precision):
        """Scale up by multiplication, down by floor division."""
        src = cls._SCALE[from_precision]
        dst = cls._SCALE[to_precision]
        if dst >= src:
            return timestamp * (dst // src)
        return timestamp // (src // dst)

# Usage: convert 1704067200 seconds to milliseconds
ms = TimestampConverter.convert(1704067200, 'seconds', 'milliseconds')
print(ms)  # 1704067200000
```
Performance Considerations
Storage Requirements
| Precision | 32-bit | 64-bit | Database Storage |
|---|---|---|---|
| Seconds | 4 bytes | 8 bytes | TIMESTAMP (4-8 bytes) |
| Milliseconds | ❌ Overflow | 8 bytes | BIGINT (8 bytes) |
| Microseconds | ❌ Overflow | 8 bytes | BIGINT (8 bytes) |
| Nanoseconds | ❌ Overflow | 8 bytes | BIGINT (8 bytes) |
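The overflow entries above can be checked directly: a 13-digit millisecond value no longer fits in a signed 32-bit integer. A quick Python demonstration using the standard `struct` module:

```python
import struct

ms = 1704067200000  # millisecond timestamp (13 digits)

# A signed 32-bit integer tops out at 2**31 - 1 = 2,147,483,647
print(ms > 2**31 - 1)              # True: milliseconds need 64 bits

print(len(struct.pack('<q', ms)))  # 8 bytes: fits in 64 bits

try:
    struct.pack('<i', ms)          # 4 bytes: raises struct.error
except struct.error as exc:
    print("32-bit overflow:", exc)
```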
Processing Speed
```javascript
// Benchmark: Different precision levels
const iterations = 1000000;

// Seconds (fastest)
console.time('Seconds');
for (let i = 0; i < iterations; i++) {
  const ts = Math.floor(Date.now() / 1000);
}
console.timeEnd('Seconds');
// ~10ms

// Milliseconds (fast)
console.time('Milliseconds');
for (let i = 0; i < iterations; i++) {
  const ts = Date.now();
}
console.timeEnd('Milliseconds');
// ~12ms

// Microseconds (slower)
console.time('Microseconds');
for (let i = 0; i < iterations; i++) {
  const ts = performance.now() * 1000;
}
console.timeEnd('Microseconds');
// ~25ms
```
Memory Impact
```python
import sys

# Storage comparison: Python int objects grow with magnitude
second_ts = 1704067200
millisecond_ts = 1704067200000
microsecond_ts = 1704067200000000
nanosecond_ts = 1704067200000000000

print(f"Second:      {sys.getsizeof(second_ts)} bytes")
print(f"Millisecond: {sys.getsizeof(millisecond_ts)} bytes")
print(f"Microsecond: {sys.getsizeof(microsecond_ts)} bytes")
print(f"Nanosecond:  {sys.getsizeof(nanosecond_ts)} bytes")

# Exact sizes vary by Python version; in packed arrays and
# databases, smaller integers = better performance
```
Accuracy vs. Precision
Understanding the Difference
- Precision: How finely you can measure (the number of digits)
- Accuracy: How close your measurement is to the true value
Example:
Precision: Nanosecond timestamp (19 digits)
Accuracy: System clock may only be accurate to ±50ms
Result: High precision, low accuracy
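One way to see this gap in practice: Python's `time.get_clock_info` reports what the platform claims about the clock behind `time.time()`, which is often far coarser than the 19 digits a nanosecond timestamp can carry.

```python
import time

# What the platform reports for the wall clock behind time.time()
info = time.get_clock_info('time')
print("implementation:", info.implementation)
print("resolution:", info.resolution, "seconds")
print("adjustable:", info.adjustable)  # True if NTP etc. can step it
```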
System Clock Limitations
| System | Typical Resolution | Accuracy |
|---|---|---|
| Windows | 15.6ms | ±10-50ms |
| Linux | 1μs - 1ms | ±1-10ms |
| macOS | 1μs | ±1-10ms |
| Real-Time OS | 1ns - 1μs | ±1μs |
Testing Your System's Resolution
```python
import time

def measure_clock_resolution():
    """Measure actual system clock resolution"""
    samples = []
    prev = time.time()

    for _ in range(100000):
        current = time.time()
        if current != prev:
            samples.append(current - prev)
            prev = current

    if samples:
        min_diff = min(samples)
        print(f"Minimum time difference: {min_diff * 1000:.6f}ms")
        print(f"Approximate resolution: {min_diff * 1_000_000:.2f}μs")

measure_clock_resolution()
```
Best Practices
1. Choose Appropriate Precision
```python
import time

# ✅ GOOD: Match precision to use case
user_login_time = int(time.time())  # Seconds are enough

# ❌ BAD: Unnecessary precision
user_login_time = int(time.time() * 1_000_000_000)  # Overkill!
```
2. Store Consistently
```sql
-- ✅ GOOD: Consistent precision across table
CREATE TABLE events (
    id BIGINT PRIMARY KEY,
    created_at BIGINT,  -- All in milliseconds
    updated_at BIGINT   -- All in milliseconds
);

-- ❌ BAD: Mixed precision
CREATE TABLE events (
    id BIGINT PRIMARY KEY,
    created_at INT,     -- Seconds
    updated_at BIGINT   -- Milliseconds (inconsistent!)
);
```
3. Document Your Choice
```javascript
/**
 * Timestamp precision: Milliseconds (13 digits)
 * Format: Unix timestamp * 1000
 * Example: 1704067200000 = Jan 1, 2024 00:00:00.000 UTC
 */
const timestamp = Date.now();
```
4. Handle Conversion Carefully
```python
# ✅ GOOD: Explicit conversion
def seconds_to_milliseconds(seconds):
    """Convert seconds to milliseconds"""
    return int(seconds * 1000)

# ❌ BAD: Implicit/unclear
def convert(ts):
    return ts * 1000  # What precision is this?
```
5. Validate Precision
```javascript
function validateTimestamp(timestamp, expectedPrecision) {
  const digitCount = timestamp.toString().length;

  const expectedDigits = {
    'seconds': 10,
    'milliseconds': 13,
    'microseconds': 16,
    'nanoseconds': 19
  };

  if (digitCount !== expectedDigits[expectedPrecision]) {
    throw new Error(
      `Invalid ${expectedPrecision} timestamp: expected ` +
      `${expectedDigits[expectedPrecision]} digits, got ${digitCount}`
    );
  }

  return true;
}

// Usage
validateTimestamp(1704067200000, 'milliseconds'); // ✅ Pass
validateTimestamp(1704067200, 'milliseconds');    // ❌ Error
```
Common Pitfalls
1. Precision Loss in Floating Point
```javascript
// ❌ BAD: JavaScript Number precision limit
const nanos = 1704067200123456789; // 19 digits
console.log(nanos);
// Output: 1704067200123456800 (last digits lost!)

// ✅ GOOD: Use BigInt for nanoseconds
const nanosBig = 1704067200123456789n;
console.log(nanosBig.toString());
// Output: 1704067200123456789 (exact)
```
2. Timezone Confusion
```python
import datetime

# ❌ BAD: utcnow() returns a NAIVE datetime; .timestamp() then
# interprets it as local time and shifts the result by your UTC offset
timestamp = datetime.datetime.utcnow().timestamp()

# ✅ GOOD: Use a timezone-aware UTC datetime
utc_time = datetime.datetime.now(datetime.timezone.utc)
timestamp = utc_time.timestamp()
```
3. Overflow Issues
```c
// ❌ BAD: 32-bit overflow with milliseconds
int32_t timestamp_ms = time(NULL) * 1000;  // Overflow!

// ✅ GOOD: Use 64-bit for higher precision
int64_t timestamp_ms = (int64_t)time(NULL) * 1000;
```
Related Tools
Use our free tools to work with different timestamp precisions:
- Unix Timestamp Converter - Convert between precision levels
- Batch Timestamp Converter - Convert multiple timestamps
- Timestamp Format Builder - Create custom formats
- Current Timestamp - Get timestamps in all precisions
Conclusion
Understanding timestamp precision levels is essential for modern software development. Choose the right precision level based on your specific requirements:
- Seconds: General-purpose timestamping, logs, databases
- Milliseconds: Web applications, APIs, real-time features
- Microseconds: High-frequency trading, scientific computing
- Nanoseconds: Performance profiling, hardware instrumentation
Remember:
- Higher precision = More storage + More processing
- Match precision to actual system accuracy
- Be consistent across your application
- Document your choice for future developers
Last updated: January 2025