Your Node.js application logs are critical for debugging production issues, monitoring performance, and maintaining system health. But if your logging library becomes a bottleneck, it defeats the purpose.
Pino solves this with a fundamentally different approach to logging:
- 5x faster than Winston with minimal CPU overhead
- Structured JSON output that's immediately machine-readable
- Asynchronous by design to prevent event loop blocking
- Production-ready with built-in security and performance optimizations
This guide covers everything from basic setup to advanced production patterns, including integration with observability platforms for comprehensive monitoring.
What Makes Pino Different
Pino is built around a simple principle: logging should never slow down your application. Unlike traditional loggers that perform expensive formatting operations in the main thread, Pino takes a minimalist approach.
Core Design Principles:
- JSON-first: Every log is structured data, not formatted text
- Async transports: Heavy operations happen in worker threads
- Minimal serialization: Only essential data transformation
- Zero-cost abstractions: Logs below threshold level have no performance impact
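The last point is observable directly: calls below the configured level short-circuit before any serialization or I/O happens. Here is a minimal sketch using Pino's `isLevelEnabled()` method; `buildExpensiveDiagnostics()` is a hypothetical helper standing in for any costly payload you might want to guard:

const pino = require('pino')
const logger = pino({ level: 'info' })

// Below-threshold calls return almost immediately: no serialization, no I/O
logger.debug('This is close to a no-op at the info level')

// For genuinely expensive payloads, guard the work explicitly
if (logger.isLevelEnabled('debug')) {
  logger.debug(buildExpensiveDiagnostics(), 'Cache state dump') // hypothetical helper
}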
Performance Comparison
Here's how Pino compares to popular alternatives in typical high-volume scenarios (indicative benchmark figures; exact numbers vary by workload and configuration):

| Library | Logs/Second | CPU Usage | Memory Overhead |
|---|---|---|---|
| Pino | 50,000+ | 2-4% | ~45MB |
| Winston | ~10,000 | 10-15% | ~180MB |
| Bunyan | ~15,000 | 8-12% | ~150MB |
These numbers translate to concrete benefits:
- Faster response times under high load
- Lower infrastructure costs through reduced resource usage
- Better application stability with minimal logging overhead
Installation and Basic Setup
Install Pino with npm:
npm install pino
For development with readable output:
npm install --save-dev pino-pretty
First Logger Implementation
const pino = require('pino')
const logger = pino()
logger.info('Application started')
logger.error('Database connection failed')
Default output is structured JSON:
{"level":30,"time":1690747200000,"pid":12345,"hostname":"server-01","msg":"Application started"}
{"level":50,"time":1690747201000,"pid":12345,"hostname":"server-01","msg":"Database connection failed"}
Each log includes:
- `level`: numeric severity (30 = info, 50 = error)
- `time`: timestamp in milliseconds since the Unix epoch
- `pid`: process ID for multi-process debugging
- `hostname`: server identification
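These defaults are adjustable. As a small sketch, here is how you could switch to ISO timestamps and drop the pid/hostname fields, using Pino's documented timestamp and base options:

const pino = require('pino')

const logger = pino({
  timestamp: pino.stdTimeFunctions.isoTime, // time becomes an ISO-8601 string
  base: null // omit the default pid and hostname bindings
})

logger.info('Application started')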
Environment-Adaptive Configuration
Create a logger that adapts to your deployment environment:
const pino = require('pino')
function createLogger() {
const isDevelopment = process.env.NODE_ENV === 'development'
const isTest = process.env.NODE_ENV === 'test'
return pino({
level: process.env.LOG_LEVEL || (isDevelopment ? 'debug' : 'info'),
// Pretty output for development
transport: isDevelopment ? {
target: 'pino-pretty',
options: {
colorize: true,
ignore: 'pid,hostname',
translateTime: 'yyyy-mm-dd HH:MM:ss'
}
} : undefined,
// Disable in tests unless explicitly needed
enabled: !isTest,
// Add application context
base: {
env: process.env.NODE_ENV,
version: process.env.APP_VERSION
}
})
}
module.exports = createLogger()
Log Levels and Strategic Usage
Pino uses numeric levels that correspond to severity. Understanding when to use each level is crucial for effective monitoring and debugging.
Standard Levels
| Level | Numeric | Purpose | Example Use Cases |
|---|---|---|---|
| fatal | 60 | Application crash imminent | Database connection lost, critical service down |
| error | 50 | Errors requiring investigation | API failures, validation errors, exceptions |
| warn | 40 | Potential issues | Deprecated API usage, resource limits approaching |
| info | 30 | Significant application events | User authentication, service starts, key operations |
| debug | 20 | Detailed debugging information | Function entry/exit, variable states |
| trace | 10 | Very detailed execution flow | Loop iterations, deep debugging |
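Pino isn't limited to these six levels: the customLevels option lets you define your own. A brief sketch; the `audit` level name and its value of 35 (sitting between info and warn) are illustrative choices:

const pino = require('pino')

const logger = pino({
  customLevels: { audit: 35 }, // adds logger.audit(), ranked between info and warn
  level: 'info'
})

logger.audit({ actor: 'usr_123', action: 'export_report' }, 'Audit event recorded')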
Level Configuration Strategy
const logger = pino({ level: 'info' })
// These won't appear in logs (below threshold)
logger.trace('Entering user validation function')
logger.debug('Checking user permissions')
// These will appear (at or above threshold)
logger.info({ userId: 123 }, 'User login successful')
logger.error({ err: error }, 'Payment processing failed')
Runtime Level Changes
Enable dynamic log level adjustment for production debugging:
const express = require('express')
const logger = require('./logger')
const app = express()
app.use(express.json()) // required so req.body below is parsed
// NB: in production, protect this admin endpoint with authentication
app.post('/admin/log-level', (req, res) => {
const { level } = req.body
const validLevels = ['trace', 'debug', 'info', 'warn', 'error', 'fatal']
if (!validLevels.includes(level)) {
return res.status(400).json({ error: 'Invalid log level' })
}
logger.level = level
logger.info({ newLevel: level }, 'Log level changed')
res.json({ message: `Log level changed to ${level}` })
})
Structured Logging for Observability
Structured logging makes your logs immediately useful for monitoring, alerting, and debugging. Instead of parsing text strings, you get queryable data.
Basic Structured Patterns
const logger = pino()
// Traditional string interpolation (avoid this)
logger.info(`User ${userId} completed order ${orderId} for $${amount}`)
// Structured approach (better)
logger.info({
userId: 'usr_123',
orderId: 'ord_456',
amount: 99.99,
currency: 'USD',
paymentMethod: 'credit_card'
}, 'Order completed successfully')
Error Logging with Context
When logging errors, include relevant context for faster debugging:
async function processPayment(orderId, userId) {
try {
const result = await paymentService.charge(orderId)
logger.info({
orderId,
userId,
paymentId: result.id,
amount: result.amount,
duration: result.processingTime
}, 'Payment processed successfully')
return result
} catch (error) {
logger.error({
err: error,
orderId,
userId,
operation: 'payment_processing',
paymentProvider: 'stripe'
}, 'Payment processing failed')
throw error
}
}
Performance Monitoring Through Logs
Track application performance with structured timing data:
async function performDatabaseQuery(query) {
const startTime = process.hrtime.bigint()
try {
const result = await db.query(query)
const duration = Number(process.hrtime.bigint() - startTime) / 1000000 // Convert to ms
logger.info({
operation: 'database_query',
table: query.table,
duration: Math.round(duration * 100) / 100,
recordCount: result.length,
performanceCategory: categorizePerformance(duration)
}, 'Database query completed')
return result
} catch (error) {
const duration = Number(process.hrtime.bigint() - startTime) / 1000000
logger.error({
err: error,
operation: 'database_query',
table: query.table,
duration: Math.round(duration * 100) / 100
}, 'Database query failed')
throw error
}
}
function categorizePerformance(ms) {
if (ms < 100) return 'fast'
if (ms < 500) return 'normal'
if (ms < 1000) return 'slow'
return 'critical'
}
Child Loggers for Context Management
Child loggers inherit parent configuration while adding contextual information. This creates consistent context across related operations without repetitive logging.
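The mechanics are simple: `child()` returns a new logger whose bindings are merged into every log line it emits, and children can nest. A minimal sketch:

const pino = require('pino')
const logger = pino()

const dbLogger = logger.child({ service: 'database' })
const queryLogger = dbLogger.child({ table: 'users' }) // bindings accumulate

queryLogger.info('Query executed')
// Output includes both bindings: ..."service":"database","table":"users","msg":"Query executed"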
Request-Scoped Logging
const express = require('express')
const { v4: uuidv4 } = require('uuid')
const logger = require('./logger')
const app = express()
// Create request-scoped logger middleware
app.use((req, res, next) => {
const requestId = req.headers['x-request-id'] || uuidv4()
req.log = logger.child({
requestId,
method: req.method,
path: req.path,
userAgent: req.headers['user-agent'],
ip: req.ip
})
req.log.info('Request started')
next()
})
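// A possible extension (a sketch, not part of the original middleware): also log
// when each response finishes, so status code and duration carry the same
// requestId via the request-scoped child logger
app.use((req, res, next) => {
  const start = Date.now()
  res.on('finish', () => {
    req.log.info({ statusCode: res.statusCode, durationMs: Date.now() - start }, 'Request completed')
  })
  next()
})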
// Route handlers automatically have contextual logging
app.get('/users/:id', async (req, res) => {
const { id } = req.params
req.log.debug({ userId: id }, 'Fetching user data')
try {
const user = await getUserById(id)
req.log.info({ userId: id, fetchDuration: 45 }, 'User retrieved')
res.json(user)
} catch (error) {
req.log.error({ err: error, userId: id }, 'User fetch failed')
res.status(500).json({ error: 'Internal server error' })
}
})
Service-Specific Loggers
Create specialized loggers for different application components:
// logger.js
const pino = require('pino')
const baseLogger = pino()
module.exports = {
auth: baseLogger.child({ service: 'auth' }),
database: baseLogger.child({ service: 'database' }),
payment: baseLogger.child({ service: 'payment' }),
notification: baseLogger.child({ service: 'notification' })
}
// auth-service.js
const { auth: logger } = require('./logger')
class AuthService {
async authenticateUser(email, password) {
logger.info({ email }, 'Authentication attempt')
try {
const user = await this.validateCredentials(email, password)
logger.info({ userId: user.id, email }, 'Authentication successful')
return this.generateToken(user)
} catch (error) {
logger.warn({ email, reason: error.message }, 'Authentication failed')
throw new AuthenticationError('Invalid credentials')
}
}
}
Custom Serializers for Security and Performance
Serializers transform objects before logging, giving you control over what data appears in logs. This is essential for security, performance, and consistency.
Security-Focused Serializers
Protect sensitive information with custom serialization:
const logger = pino({
serializers: {
user: (user) => ({
id: user.id,
username: user.username,
email: user.email ? maskEmail(user.email) : undefined,
role: user.role,
// Never log password, apiKey, tokens, etc.
lastLogin: user.lastLogin
}),
request: (req) => {
  // Copy headers as well: a shallow spread shares the same headers object,
  // so deleting keys from it would mutate the original request
  const safe = { ...req, headers: req.headers ? { ...req.headers } : undefined }
  if (safe.headers) {
    // Remove sensitive headers from the copy
    delete safe.headers.authorization
    delete safe.headers['x-api-key']
    delete safe.headers.cookie
  }
  return safe
},
error: (err) => {
  // message and stack are non-enumerable on Error objects, so a plain
  // spread would silently drop them; copy them explicitly
  const safeError = { ...err, type: err.name, message: err.message, stack: err.stack }
  // Sanitize error messages that might contain sensitive data
  if (safeError.message) {
    safeError.message = safeError.message
      .replace(/password=\w+/gi, 'password=***')
      .replace(/token=[\w-]+/gi, 'token=***')
  }
  return safeError
}
}
})
function maskEmail(email) {
const [local, domain] = email.split('@')
return `${local.slice(0, 2)}***@${domain}`
}
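Note that serializers are keyed by property name: they run only when the logged object contains a matching top-level key. A quick illustration, where `dbUser` is a stand-in for a user record fetched elsewhere:

logger.info({ user: dbUser }, 'Profile loaded')    // `user` serializer runs: masked and trimmed
logger.info({ account: dbUser }, 'Profile loaded') // no matching key: logged unserialized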
Global Data Redaction
Use Pino's built-in redaction for automatic sensitive data removal:
const logger = pino({
redact: {
paths: [
'password',
'token',
'apiKey',
'creditCard.number',
'ssn',
'*.password',
'*.token',
'req.headers.authorization',
'req.headers.cookie'
],
remove: true // Completely remove these fields
}
})
// This will automatically remove sensitive fields
logger.info({
user: {
id: 123,
email: 'user@example.com',
password: 'secret123', // This won't appear in logs
apiKey: 'sk_live_abc123' // This won't appear either
}
}, 'User data processed')
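If removing fields makes logs harder to read, Pino's redact option also accepts a censor value that replaces matches instead of deleting them:

const pino = require('pino')

const logger = pino({
  redact: {
    paths: ['password', '*.password', 'req.headers.authorization'],
    censor: '[REDACTED]' // keep the field visible as a placeholder
  }
})

logger.info({ password: 'secret123' }, 'Login attempt')
// => ..."password":"[REDACTED]"...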
HTTP Request Logging
Effective HTTP logging provides insights into API performance, user behavior, and system issues.
Express.js Integration
Use `pino-http` for automatic request/response logging:
npm install pino-http
const express = require('express')
const pinoHttp = require('pino-http')
const logger = require('./logger')
const app = express()
app.use(pinoHttp({
logger,
// Custom log levels based on response status
customLogLevel: (req, res, err) => {
if (res.statusCode >= 400 && res.statusCode < 500) return 'warn'
if (res.statusCode >= 500 || err) return 'error'
if (res.statusCode >= 300 && res.statusCode < 400) return 'silent'
return 'info'
},
// Custom success message with timing; pino-http supplies responseTime
// as the third argument (res.responseTime is not a real property)
customSuccessMessage: (req, res, responseTime) => {
  return `${req.method} ${req.url} completed in ${responseTime}ms`
}
}))
app.get('/api/users/:id', async (req, res) => {
// req.log is automatically available with request context
req.log.info({ userId: req.params.id }, 'Processing user request')
try {
const user = await getUserById(req.params.id)
res.json(user)
} catch (error) {
req.log.error({ err: error }, 'User retrieval failed')
res.status(500).json({ error: 'Internal server error' })
}
})
Performance-Optimized HTTP Logging
For high-traffic APIs, optimize logging performance:
const pinoHttp = require('pino-http')
const httpLogger = pinoHttp({
logger: pino({ level: 'info' }),
// Skip logging for health checks and static assets
autoLogging: {
ignore: (req) => {
return req.url === '/health' ||
req.url.startsWith('/static/') ||
req.url.match(/\.(css|js|png|jpg|ico)$/)
}
},
// Minimal serialization for performance
serializers: {
req: (req) => ({
method: req.method,
url: req.url,
id: req.id
}),
res: (res) => ({
statusCode: res.statusCode
})
}
})
app.use(httpLogger)
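pino-http also assigns each request an id (surfaced as `req.id`). If you want to reuse an upstream correlation ID instead of the default incrementing counter, the genReqId option controls generation; a short sketch:

const { randomUUID } = require('crypto')
const pinoHttp = require('pino-http')

const httpLogger = pinoHttp({
  // Prefer a caller-supplied correlation ID, otherwise mint a UUID
  genReqId: (req) => req.headers['x-request-id'] || randomUUID()
})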
Production Configuration Best Practices
Production logging requires careful configuration for performance, security, and reliability.
Environment-Specific Setup
function createProductionLogger() {
const isDevelopment = process.env.NODE_ENV === 'development'
const isProduction = process.env.NODE_ENV === 'production'
const config = {
level: process.env.LOG_LEVEL || (isDevelopment ? 'debug' : 'info'),
formatters: {
level: (label) => ({ level: label.toUpperCase() }),
bindings: (bindings) => ({
pid: bindings.pid,
hostname: bindings.hostname,
env: process.env.NODE_ENV,
version: process.env.APP_VERSION
})
}
}
if (isDevelopment) {
config.transport = {
target: 'pino-pretty',
options: { colorize: true, translateTime: 'yyyy-mm-dd HH:MM:ss' }
}
} else if (isProduction) {
// Production optimizations
config.redact = {
paths: ['password', 'token', 'apiKey', '*.password', '*.token'],
remove: true
}
}
return pino(config)
}
module.exports = createProductionLogger()
High-Performance Async Logging
For maximum performance, use asynchronous file destinations:
const pino = require('pino')
const SonicBoom = require('sonic-boom')
// High-performance file destination
const dest = new SonicBoom({
dest: '/var/log/app/application.log',
sync: false, // Async writes
append: true,
mkdir: true
})
const logger = pino({
level: 'info',
timestamp: pino.stdTimeFunctions.epochTime, // Faster timestamps
}, dest)
// Graceful shutdown handling
process.on('SIGTERM', () => {
  dest.flushSync() // flush() is callback-based in sonic-boom; the sync variant is simpler here
  dest.end()
  process.exit(0)
})
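An alternative that avoids managing SonicBoom directly is Pino's built-in pino/file transport target, which moves file writes into a worker thread:

const pino = require('pino')

const logger = pino({
  level: 'info',
  transport: {
    target: 'pino/file', // bundled with pino, runs in a worker thread
    options: { destination: '/var/log/app/application.log', mkdir: true }
  }
})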
Error Handling and Crash Safety
Ensure logs are captured even during application crashes:
const logger = pino()
// Handle uncaught exceptions. (pino.final was removed in Pino v7+; with the
// default synchronous stdout destination, a direct fatal log is written before exit.)
process.on('uncaughtException', (error) => {
  logger.fatal({ err: error }, 'Uncaught exception')
  process.exit(1)
})
// Handle unhandled promise rejections
process.on('unhandledRejection', (reason) => {
  logger.fatal({ reason }, 'Unhandled promise rejection')
  process.exit(1)
})
// Graceful shutdown
process.on('SIGTERM', () => {
  logger.info('Received SIGTERM, shutting down gracefully')
  logger.flush() // only needed when an asynchronous destination is configured
  process.exit(0)
})
Integration with SigNoz for Complete Observability
SigNoz provides a unified platform for logs, metrics, and traces, making it ideal for comprehensive Node.js application monitoring with Pino.
Why SigNoz with Pino
SigNoz offers several advantages for Pino integration:
- Unified Dashboard: View logs alongside traces and metrics
- OpenTelemetry Native: Seamless integration with modern observability standards
- SQL-like Querying: Query structured log data with familiar syntax
- Real-time Monitoring: Live log streaming and alerting
- Open Source: Cost-effective alternative to commercial solutions
Setup with OpenTelemetry Transport
Install required packages:
npm install pino-opentelemetry-transport
Configure Pino with OpenTelemetry transport:
const pino = require('pino')
const logger = pino({
transport: {
targets: [
{
target: 'pino-opentelemetry-transport',
options: {
resourceAttributes: {
'service.name': 'nodejs-api',
'service.version': process.env.APP_VERSION || '1.0.0',
'deployment.environment': process.env.NODE_ENV || 'development'
}
},
level: 'info'
},
// Keep console output for development
...(process.env.NODE_ENV === 'development' ? [{
target: 'pino-pretty',
level: 'debug',
options: { colorize: true }
}] : [])
]
}
})
module.exports = logger
Environment Configuration for SigNoz
Set environment variables for SigNoz integration:
# For SigNoz Cloud
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://ingest.{region}.signoz.cloud:443/v1/logs"
export OTEL_EXPORTER_OTLP_HEADERS="signoz-access-token=YOUR_ACCESS_TOKEN"
export OTEL_RESOURCE_ATTRIBUTES="service.name=nodejs-api,service.version=1.0.0"
# For self-hosted SigNoz
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="http://your-signoz-instance:4318/v1/logs"
export OTEL_RESOURCE_ATTRIBUTES="service.name=nodejs-api,service.version=1.0.0"
Complete Application Example
// app.js
const express = require('express')
const pinoHttp = require('pino-http')
const logger = require('./logger')
const app = express()
// Add request logging middleware
app.use(pinoHttp({ logger }))
app.get('/api/users/:id', async (req, res) => {
const { id } = req.params
const startTime = Date.now()
req.log.info({ userId: id }, 'Fetching user data')
try {
const user = await getUserById(id)
const duration = Date.now() - startTime
req.log.info({
userId: id,
userEmail: user.email,
fetchDuration: duration,
cacheHit: user.fromCache
}, 'User data retrieved')
res.json(user)
} catch (error) {
const duration = Date.now() - startTime
req.log.error({
err: error,
userId: id,
duration,
operation: 'user_fetch'
}, 'Failed to retrieve user data')
res.status(500).json({ error: 'Internal server error' })
}
})
app.listen(3000, () => {
logger.info({
port: 3000,
env: process.env.NODE_ENV,
version: process.env.APP_VERSION
}, 'Server started successfully')
})
Get Started with SigNoz
You can choose between various deployment options in SigNoz. The easiest way to get started is SigNoz Cloud; we offer a 30-day free trial with access to all features.
Those who have data privacy concerns and can't send their data outside their infrastructure can opt for either the enterprise self-hosted or BYOC offering.
Those who have the expertise to manage SigNoz themselves, or who just want to start with a free self-hosted option, can use our community edition.
We hope this answered your questions about Pino logging in Node.js. If you have more, feel free to use the SigNoz AI chatbot or join our Slack community.
Common Issues and Troubleshooting
Logs Stop Writing Suddenly
Problem: Pino stops logging without errors, requiring application restart.
Solution: Always handle stream errors and implement fallbacks:
const pino = require('pino')
const fs = require('fs')
const path = require('path')
// Ensure log directory exists
const logDir = path.dirname('./logs/app.log')
if (!fs.existsSync(logDir)) {
fs.mkdirSync(logDir, { recursive: true })
}
const dest = pino.destination({
dest: './logs/app.log',
sync: false,
mkdir: true
})
// Handle destination errors
dest.on('error', (err) => {
console.error('Log destination error:', err)
// Implement fallback logging mechanism
})
const logger = pino(dest)
Memory Leaks Under High Load
Problem: Memory usage increases continuously under high logging volume.
Solution: Implement proper backpressure handling:
const SonicBoom = require('sonic-boom')
const dest = new SonicBoom({
dest: './app.log',
sync: false,
minLength: 4096, // Buffer before writing
maxWrite: 16384 // Maximum write size
})
// Monitor backpressure
dest.on('drain', () => {
console.log('Log buffer drained')
})
const logger = pino(dest)
// Graceful shutdown
process.on('SIGTERM', () => {
  dest.flushSync() // flush() takes a callback in sonic-boom, so use the sync variant here
  dest.end()
})
Transport Configuration Issues
Problem: Complex transport setups fail silently.
Solution: Validate configuration and add error handling:
function createSafeLogger() {
try {
return pino({
transport: process.env.NODE_ENV === 'development'
? { target: 'pino-pretty', options: { colorize: true } }
: { target: 'pino/file', options: { destination: './logs/app.log' } }
})
} catch (error) {
  console.error('Logger creation failed, falling back to stdout JSON:', error)
  // Fall back to a transport-free logger; plain stdout cannot fail the same way
  return pino()
}
}
Migration from Other Loggers
From Winston to Pino
Winston and Pino have different approaches, but migration patterns are straightforward:
// Winston pattern
const winston = require('winston')
const logger = winston.createLogger({
level: 'info',
format: winston.format.json(),
transports: [
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'combined.log' })
]
})
// Pino equivalent
const pino = require('pino')
const logger = pino({
level: 'info',
transport: {
targets: [
{ target: 'pino/file', options: { destination: './combined.log' }, level: 'info' },
{ target: 'pino/file', options: { destination: './error.log' }, level: 'error' }
]
}
})
Migration Strategy
- Replace logger creation with Pino initialization
- Convert string interpolation to structured logging
- Update error logging to use Pino's error serializers
- Test performance impact and adjust configuration
- Update monitoring configurations to handle new log format
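For steps 2 and 3, the conversion typically looks like this (a sketch; `err` and `orderId` are stand-ins for your own variables):

// Winston-style string interpolation:
// logger.error(`Payment failed for order ${orderId}: ${err.message}`)

// Pino-style structured call: put the error under the conventional `err` key
// so Pino's standard error serializer captures message, stack, and type
logger.error({ err, orderId }, 'Payment failed')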
Key Takeaways
Pino represents a fundamental shift in Node.js logging philosophy, prioritizing performance without sacrificing the structured data needed for modern observability.
Performance Benefits:
- 5x faster than traditional loggers with minimal CPU overhead
- Asynchronous architecture prevents event loop blocking
- Memory-efficient design maintains stability under load
Production Readiness:
- Structured JSON output enables powerful querying and analysis
- Child loggers provide excellent context management
- Built-in security features protect sensitive data
Modern Integration:
- Native OpenTelemetry support enables trace correlation
- Seamless integration with observability platforms like SigNoz
- Extensive ecosystem of transports and plugins
For new projects, start with Pino from day one using the patterns demonstrated in this guide. For existing applications, the performance benefits justify gradual migration, starting with new features and high-traffic endpoints.
As Node.js applications continue to scale, Pino's combination of speed, structure, and extensibility makes it the ideal logging solution for performance-critical applications.
Frequently Asked Questions
What is Pino package?
Pino is a high-performance Node.js logging library optimized for speed and structured output. It produces JSON logs by default, uses asynchronous I/O, and focuses on minimal overhead to prevent logging from becoming an application bottleneck.
What is the common log format in Pino?
Pino uses NDJSON (Newline Delimited JSON) format by default. Each log entry includes standard fields like `level`, `time`, `pid`, `hostname`, and `msg`, making logs immediately machine-readable for automated processing and analysis.
What is Pino pretty?
`pino-pretty` is a development tool that transforms Pino's JSON output into human-readable, colorized text. Use it during development for easier log reading, but avoid it in production, where it adds overhead and defeats Pino's performance advantages.
What are the benefits of Pino logger?
Key benefits include: 5x faster performance than Winston, minimal CPU and memory overhead, structured JSON logging by default, child logger support for context management, built-in security features with data redaction, and seamless integration with modern observability platforms.
Can Pino handle asynchronous logging?
Yes, Pino excels at asynchronous logging through worker threads and non-blocking I/O operations. This prevents logging from blocking the main event loop, maintaining application performance even under high logging volume.
Is it possible to change the log level in Pino dynamically?
Yes, you can change Pino's log level at runtime using `logger.level = 'newLevel'`. This is valuable in production for temporarily increasing verbosity to debug issues without restarting the application.
What is a child logger?
A child logger inherits parent configuration while adding consistent context fields to every log message. Perfect for request-scoped logging, service-specific logging, or adding persistent context like user IDs or request IDs.
What is Pino transport?
Pino transport handles where and how logs are processed and output. Transports can send logs to files, external services, or format them for different purposes. Pino 7+ uses worker threads for transports to minimize performance impact.
What is Pino HTTP?
`pino-http` is Express.js middleware that automatically logs HTTP requests and responses. It provides detailed request/response logging with minimal setup and integrates with Pino's performance optimizations.
What are the three main log levels in Pino?
While Pino supports six levels, the three most commonly used are: `info` for general application information and significant events, `warn` for potential issues requiring attention, and `error` for actual failures requiring investigation.