Performance Benchmarks

Arc has been benchmarked with ClickBench, the industry-standard analytical database benchmark.

ClickBench Results

Arc is the fastest time-series database on ClickBench, completing all 43 queries in 107.66 seconds total (36.43s cold run).

Official Rankings (Cold Run)

| Database | Cold Run | vs Arc | Architecture |
|----------|----------|--------|--------------|
| Arc | 36.43s | 1.0x | DuckDB + Parquet + HTTP API |
| VictoriaLogs | 113.8s | 3.1x slower | LogsQL engine |
| QuestDB | 223.2s | 6.1x slower | Columnar time-series |
| Timescale Cloud | 626.6s | 17.2x slower | PostgreSQL extension |
| TimescaleDB | 1022.5s | 28.1x slower | PostgreSQL extension |

Key Achievements
  • Cold Run: 36.43s (fastest cold start among time-series databases)
  • Complete Benchmark: 107.66s total for all 43 queries (3 runs each)
  • Arc is the only time-series database to complete ClickBench in under 2 minutes via HTTP API

Test Environment

Hardware: AWS c6a.4xlarge

  • CPU: 16 vCPU AMD EPYC 7R13 Processor
  • RAM: 32GB
  • Storage: EBS gp2 (500GB, 1500 baseline IOPS)
  • Network: Up to 12.5 Gbps
  • Cost: ~$0.62/hour

Configuration

  • Workers: 32 (cores × 2)
  • Query Cache: Disabled (per ClickBench rules)
  • Storage: Local Parquet files on EBS gp2
  • Query Method: HTTP REST API (POST /query with JSON; see the sketch after this list)
  • DuckDB: Default settings (no tuning)
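
For orientation, here is a minimal sketch of issuing one benchmark-style query over the HTTP API. The `/query` endpoint appears elsewhere on this page; the host/port, the JSON body shape (a `sql` field), and the `hits` table name are assumptions for illustration, not Arc's confirmed API contract.

```python
import requests

# Assumed default host/port for a local Arc instance.
ARC_URL = "http://localhost:8000"

# POST a SQL query as JSON; the {"sql": ...} body shape is a guess.
resp = requests.post(
    f"{ARC_URL}/query",
    json={"sql": "SELECT COUNT(*) FROM hits"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # JSON-encoded result rows
```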

Dataset

The benchmark uses the standard ClickBench dataset: the hits table, approximately 100 million rows (99,997,497) of anonymized web analytics data, stored in Arc as Parquet files on local EBS storage.

ClickBench Query Results

All 43 analytical queries completed successfully. Results show 3 runs per query:

Query Performance (seconds)

| Query | Run 1 | Run 2 | Run 3 | Best | Notes |
|-------|-------|-------|-------|------|-------|
| Q0 | 0.3385 | 0.2606 | 0.3211 | 0.2606 | Simple aggregation |
| Q1 | 0.5951 | 0.5608 | 0.5928 | 0.5608 | COUNT with GROUP BY |
| Q2 | 0.5631 | 0.3030 | 0.1436 | 0.1436 | Aggregation with filter |
| Q3 | 0.1057 | 0.0905 | 0.0764 | 0.0764 | Fast filter scan |
| Q4 | 0.3500 | 0.3335 | 0.3234 | 0.3234 | Multiple GROUP BY |
| Q5 | 0.5696 | 0.5386 | 0.5515 | 0.5386 | Complex aggregation |
| Q6 | 0.0598 | 0.0590 | 0.0593 | 0.0590 | Selective filter |
| Q7 | 0.0564 | 0.0548 | 0.0550 | 0.0548 | Simple scan |
| Q8 | 0.4429 | 0.4716 | 0.4487 | 0.4429 | JOIN operation |
| Q9 | 0.5682 | 0.5676 | 0.5602 | 0.5602 | Heavy aggregation |
| Q10 | 0.1413 | 0.1428 | 0.1408 | 0.1408 | String operations |
| Q11 | 0.1875 | 0.2139 | 0.1815 | 0.1815 | Complex filter |
| Q12 | 0.5742 | 0.5466 | 0.5648 | 0.5466 | Window functions |
| Q13 | 0.9176 | 0.8787 | 0.8699 | 0.8699 | Multiple JOINs |
| Q14 | 0.5764 | 0.5977 | 0.6207 | 0.5764 | Subqueries |
| Q15 | 0.3892 | 0.4011 | 0.4074 | 0.3892 | DISTINCT operations |
| Q16 | 1.0798 | 1.0383 | 1.0153 | 1.0153 | Heavy computation |
| Q17 | 0.7985 | 0.7727 | 0.7853 | 0.7727 | String matching |
| Q18 | 3.3340 | 3.3020 | 3.3478 | 3.3020 | Complex analytics |
| Q19 | 0.0757 | 0.0683 | 0.0570 | 0.0570 | Simple filter |
| Q20 | 1.0360 | 0.9106 | 0.9079 | 0.9079 | Aggregation pipeline |
| Q21 | 0.8482 | 0.8400 | 0.8520 | 0.8400 | GROUP BY with filter |
| Q22 | 1.7228 | 1.6782 | 1.7208 | 1.6782 | Multiple aggregations |
| Q23 | 0.5097 | 0.5317 | 0.5237 | 0.5097 | Complex WHERE |
| Q24 | 0.1973 | 0.2058 | 0.2073 | 0.1973 | Simple aggregation |
| Q25 | 0.3004 | 0.2941 | 0.2923 | 0.2923 | String operations |
| Q26 | 0.1375 | 0.1461 | 0.1384 | 0.1375 | Fast lookup |
| Q27 | 0.9975 | 0.9866 | 0.9847 | 0.9847 | Complex JOIN |
| Q28 | 9.1263 | 9.1334 | 9.1713 | 9.1263 | Heavy analytics |
| Q29 | 0.0787 | 0.0802 | 0.0787 | 0.0787 | Simple filter |
| Q30 | 0.7854 | 0.6878 | 0.5742 | 0.5742 | Scan with filter |
| Q31 | 0.6781 | 0.6799 | 0.6920 | 0.6781 | Aggregation |
| Q32 | 1.9562 | 1.9239 | 1.9322 | 1.9239 | Window functions |
| Q33 | 2.3368 | 2.2877 | 2.3325 | 2.2877 | Complex analytics |
| Q34 | 2.3724 | 2.3640 | 2.3611 | 2.3611 | Multiple GROUP BY |
| Q35 | 0.5792 | 0.7450 | 0.5765 | 0.5765 | Aggregation pipeline |
| Q36 | 0.1609 | 0.1560 | 0.1666 | 0.1560 | Simple scan |
| Q37 | 0.1366 | 0.1455 | 0.1282 | 0.1282 | Fast filter |
| Q38 | 0.1007 | 0.1072 | 0.0992 | 0.0992 | Selective scan |
| Q39 | 0.2687 | 0.2780 | 0.2750 | 0.2687 | String operations |
| Q40 | 0.0651 | 0.0633 | 0.0686 | 0.0633 | Simple lookup |
| Q41 | 0.0757 | 0.0642 | 0.0626 | 0.0626 | Fast aggregation |
| Q42 | 0.2365 | 0.2269 | 0.2251 | 0.2251 | Final query |

  • Total Cold Run (First Run): 36.43s
  • Average Query Time: 0.847s
  • Fastest Query: 0.0548s (Q7)
  • Slowest Query: 9.1263s (Q28)

Why Arc is Fast

1. DuckDB Query Engine

Arc leverages DuckDB's columnar execution engine, providing:

  • Vectorized execution: Operators process values in batches (vectors) rather than one row at a time
  • Parallel query execution: Utilize all CPU cores automatically
  • Advanced optimizations: Join reordering and predicate/filter pushdown
  • SIMD instructions: Use modern CPU features (AVX2, AVX-512) to process multiple values per instruction
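
A minimal sketch of what this looks like in practice, using DuckDB's Python API directly against Parquet (the file path and column names are hypothetical):

```python
import duckdb

# DuckDB scans the Parquet files with vectorized, multi-threaded
# execution and pushes the WHERE filter down into the scan.
# 'data/hits/*.parquet' and the column names are illustrative only.
result = duckdb.sql("""
    SELECT url, COUNT(*) AS hits
    FROM read_parquet('data/hits/*.parquet')
    WHERE event_time >= TIMESTAMP '2024-01-01'
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10
""").fetchall()
```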

2. Parquet Columnar Storage

  • Columnar format: Read only columns needed for queries
  • Compression: 80% smaller than raw data (Snappy/ZSTD)
  • Predicate pushdown: Skip entire row groups based on statistics
  • Efficient scans: DuckDB reads Parquet natively
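
For example, writing Parquet with ZSTD compression and bounded row groups (so scans can skip row groups via min/max statistics) might look like the following sketch; the table contents and row-group size are illustrative assumptions:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical table for illustration.
table = pa.table({
    "event_time": pa.array([1_700_000_000 + i for i in range(1_000_000)], pa.int64()),
    "value": pa.array([float(i % 100) for i in range(1_000_000)]),
})

# ZSTD compression shrinks the file; bounded row groups give the query
# engine finer-grained min/max statistics for predicate pushdown.
pq.write_table(table, "metrics.parquet", compression="zstd", row_group_size=122_880)
```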

3. Stateless Architecture

  • No warm-up needed: Query engine starts fresh for each query
  • Direct Parquet access: No intermediate indexes or caches
  • HTTP API overhead included: Real-world performance with full stack

4. Query Optimization

Arc uses DuckDB's query optimizer, which includes:

  • Column pruning (only read needed columns)
  • Row group filtering (skip irrelevant data)
  • Join optimization (intelligent join ordering)
  • Parallel execution (use all CPU cores)
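
These optimizations are visible in the query plan. A sketch using DuckDB's EXPLAIN against a hypothetical Parquet file (for a Parquet scan, the plan lists the projected columns and pushed-down filters):

```python
import duckdb

# EXPLAIN returns (key, plan-text) rows; printing the plan text shows
# column pruning and filter pushdown. File and column names are hypothetical.
print(duckdb.sql("""
    EXPLAIN
    SELECT COUNT(*)
    FROM read_parquet('metrics.parquet')
    WHERE value > 50
""").fetchall()[0][1])
```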

Performance Characteristics

Analytical Workload Strengths

Arc excels at:

  • Aggregations: GROUP BY queries across millions of rows
  • Window functions: OVER/PARTITION BY operations
  • JOINs: Multi-table analytical queries
  • Scans: Full table scans with filters
  • Time-series analytics: Time bucketing and rollups
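
As an example of the time-series rollup pattern, here is a minimal sketch in SQL that DuckDB accepts (the file, table, and column names are hypothetical, not a documented Arc schema):

```python
import duckdb

# Hypothetical 1-minute rollup: date_trunc buckets timestamps, and the
# aggregates run across all buckets in parallel.
duckdb.sql("""
    SELECT date_trunc('minute', event_time) AS minute,
           AVG(value) AS avg_value,
           MAX(value) AS max_value
    FROM read_parquet('metrics.parquet')
    GROUP BY minute
    ORDER BY minute
""").show()
```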

Query Patterns

Best Performance:

  • Aggregations on few columns (0.05-0.5s)
  • Selective filters (0.05-0.1s)
  • Time-series rollups (0.1-0.5s)

Good Performance:

  • Multi-table JOINs (0.5-1.0s)
  • Complex GROUP BY (0.5-1.5s)
  • Window functions (1.0-2.0s)

Acceptable Performance:

  • Heavy analytics on full dataset (2.0-10.0s)

Comparison with Other Databases

vs VictoriaLogs (3.1x faster)

  • Architecture: LogsQL vs SQL
  • Storage: Custom format vs Parquet
  • Query interface: Custom API vs HTTP REST + SQL

vs QuestDB (6.1x faster)

  • Architecture: Custom columnar engine vs DuckDB
  • Storage: Proprietary vs Parquet
  • Query complexity: Limited SQL vs Full SQL

vs TimescaleDB (28.1x faster)

  • Architecture: PostgreSQL extension vs Purpose-built
  • Storage: Row-based vs Columnar
  • Query engine: General-purpose vs Analytics-optimized

Write Performance

Arc achieves exceptional write throughput through its MessagePack columnar binary protocol.

Write Benchmarks - Format Comparison

| Wire Format | Throughput | p50 Latency | p95 Latency | p99 Latency | Notes |
|-------------|------------|-------------|-------------|-------------|-------|
| MessagePack Columnar | 2.42M RPS | 1.74ms | 28.13ms | 45.27ms | Zero-copy passthrough + auth cache (RECOMMENDED) |
| MessagePack Row | 908K RPS | 136.86ms | 851.71ms | 1542ms | Legacy format with conversion overhead |
| Line Protocol | 240K RPS | N/A | N/A | N/A | InfluxDB compatibility mode |

Columnar Format Advantages:

  • 2.66x faster throughput vs row format (2.42M vs 908K RPS)
  • 78x lower p50 latency (1.74ms vs 136.86ms)
  • 30x lower p95 latency (28.13ms vs 851.71ms)
  • 34x lower p99 latency (45.27ms vs 1542ms)
  • Near-zero authentication overhead with 30s token cache
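
The key idea is that the columnar layout batches all values for each field into contiguous arrays before encoding. A minimal sketch with the `msgpack` Python package; the field names (`measurement`, `columns`) are assumptions to illustrate the row-vs-column layout difference, not Arc's actual wire schema:

```python
import msgpack

# Row layout (legacy): one map per record, so keys repeat and the
# server must convert record by record.
rows = [{"time": 1_700_000_000 + i, "host": "web-1", "value": float(i)}
        for i in range(3)]
row_payload = msgpack.packb(rows)

# Columnar layout: one array per field, so keys appear once and whole
# columns can pass through toward Parquet without per-row conversion.
columnar = {
    "measurement": "cpu",          # hypothetical field name
    "columns": {                   # hypothetical field name
        "time": [1_700_000_000, 1_700_000_001, 1_700_000_002],
        "host": ["web-1", "web-1", "web-1"],
        "value": [0.0, 1.0, 2.0],
    },
}
col_payload = msgpack.packb(columnar)
```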

Test Configuration:

  • Hardware: Apple M3 Max (14 cores)
  • Workers: 400
  • Protocol: MessagePack columnar binary streaming
  • Deployment: Native mode
  • Storage: MinIO

MessagePack Columnar vs Line Protocol: 9.7x faster

Query Format Performance

Arc supports two query result formats: JSON and Apache Arrow.

Apache Arrow vs JSON Benchmarks

| Result Size | JSON Time | Arrow Time | Speedup | Size Reduction |
|-------------|-----------|------------|---------|----------------|
| 1K rows | 0.0130s | 0.0099s | 1.31x | 42.8% smaller |
| 10K rows | 0.0443s | 0.0271s | 1.63x | 43.4% smaller |
| 100K rows | 0.3627s | 0.0493s | 7.36x | 43.5% smaller |

Test Configuration:

  • Hardware: Apple M3 Max
  • Query: SELECT * FROM cpu LIMIT N
  • Endpoints: /query (JSON) vs /query/arrow (Arrow IPC)

Key Findings:

  • Arrow format is 7.36x faster for large result sets (100K+ rows)
  • Payloads are 43% smaller with Arrow
  • Zero-copy conversion to Pandas/Polars
  • Columnar format stays efficient end-to-end
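
A sketch of consuming the Arrow endpoint from Python. The `/query/arrow` path comes from the test configuration above; the host/port and JSON request shape are assumptions:

```python
import pyarrow as pa
import requests

# Hypothetical request body shape; /query/arrow returns an Arrow IPC stream.
resp = requests.post(
    "http://localhost:8000/query/arrow",
    json={"sql": "SELECT * FROM cpu LIMIT 100000"},
    timeout=60,
)
resp.raise_for_status()

# Decode the IPC stream into a table; to_pandas() is near zero-copy
# for numeric columns.
df = pa.ipc.open_stream(resp.content).read_all().to_pandas()
print(df.shape)
```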

When to use Arrow:

  • Large result sets (10K+ rows)
  • Wide tables with many columns
  • Data pipelines feeding into Pandas/Polars
  • Analytics notebooks and dashboards

Reproducibility

All benchmarks are reproducible. See Running Benchmarks for instructions.

What This Means

Arc's ClickBench performance demonstrates:

  1. Production-Ready Analytics: Handle complex queries on 100M+ row datasets
  2. Cost-Effective: Fast queries on commodity hardware (AWS c6a.4xlarge)
  3. No Tuning Required: Default DuckDB settings perform excellently
  4. Stateless Efficiency: No warm-up or pre-loading needed
  5. Real-World Performance: HTTP API overhead included in all measurements

Tip: For maximum query performance, enable automatic compaction to merge small files into optimized 512MB chunks.

Next Steps