# Getting Started

Get Arc up and running in 5 minutes.
## Prerequisites

- 4GB RAM minimum, 8GB+ recommended
- Docker, Kubernetes, or Linux (Debian/RHEL)
## Quick Start

### Docker

```bash
docker run -d \
  --name arc \
  -p 8000:8000 \
  -v arc-data:/app/data \
  ghcr.io/basekick-labs/arc:25.12.1
```

### Kubernetes

```bash
helm install arc https://github.com/basekick-labs/arc/releases/download/v25.12.1/arc-25.12.1.tgz
kubectl port-forward svc/arc 8000:8000
```

### Debian/Ubuntu

```bash
wget https://github.com/basekick-labs/arc/releases/download/v25.12.1/arc_25.12.1_amd64.deb
sudo dpkg -i arc_25.12.1_amd64.deb
sudo systemctl enable arc && sudo systemctl start arc
```

### RHEL/Fedora

```bash
wget https://github.com/basekick-labs/arc/releases/download/v25.12.1/arc-25.12.1-1.x86_64.rpm
sudo rpm -i arc-25.12.1-1.x86_64.rpm
sudo systemctl enable arc && sudo systemctl start arc
```
Verify it's running:

```bash
curl http://localhost:8000/health
```
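If you're scripting the setup, a small readiness poll avoids racing the container or service start. This is a minimal sketch, assuming `/health` returns HTTP 200 once Arc is ready:

```python
import time
import requests

# Poll the health endpoint until Arc responds (assumes HTTP 200 once ready).
for attempt in range(30):
    try:
        if requests.get("http://localhost:8000/health", timeout=2).status_code == 200:
            print("Arc is up")
            break
    except requests.ConnectionError:
        pass  # server not accepting connections yet
    time.sleep(1)
else:
    raise RuntimeError("Arc did not become healthy within 30 seconds")
```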
## Get Your Admin Token

Arc generates an admin token on first startup. Copy it immediately - you won't see it again!

### Docker

```bash
docker logs arc 2>&1 | grep -i "admin"
```

### Kubernetes

```bash
kubectl logs -l app=arc | grep -i "admin"
```

### Native (systemd)

```bash
sudo journalctl -u arc | grep -i "admin"
```
You'll see output like:

```text
======================================================================
FIRST RUN - INITIAL ADMIN TOKEN GENERATED
======================================================================
Initial admin API token: arc_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
======================================================================
```
Save it:

```bash
export ARC_TOKEN="arc_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
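To confirm the token works before you start writing data, you can issue a trivial query against the query endpoint used later in this guide. A minimal sketch, assuming a 2xx response means the token was accepted:

```python
import os
import requests

token = os.getenv("ARC_TOKEN")

# Run a trivial query; a 2xx response indicates the token was accepted.
response = requests.post(
    "http://localhost:8000/api/v1/query",
    headers={"Authorization": f"Bearer {token}"},
    json={"sql": "SELECT 1", "format": "json"},
)
print(response.status_code)
```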
## Write Data

### MessagePack (9.47M rec/s)

```python
import msgpack
import requests
from datetime import datetime
import os

token = os.getenv("ARC_TOKEN")

data = {
    "m": "cpu",
    "columns": {
        "time": [int(datetime.now().timestamp() * 1000)],
        "host": ["server01"],
        "usage_idle": [95.0],
        "usage_user": [3.2]
    }
}

response = requests.post(
    "http://localhost:8000/api/v1/write/msgpack",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/msgpack"
    },
    data=msgpack.packb(data)
)
```
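The columnar payload batches naturally: each key in `columns` holds a parallel array, so a single request can carry many rows. A minimal sketch, assuming every column array must have the same length:

```python
import os
import time

import msgpack
import requests

token = os.getenv("ARC_TOKEN")
now_ms = int(time.time() * 1000)

# Three rows in one request: every column array has the same length.
batch = {
    "m": "cpu",
    "columns": {
        "time": [now_ms, now_ms + 1000, now_ms + 2000],
        "host": ["server01", "server01", "server02"],
        "usage_idle": [95.0, 94.1, 88.7],
        "usage_user": [3.2, 4.0, 9.5],
    },
}

response = requests.post(
    "http://localhost:8000/api/v1/write/msgpack",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/msgpack",
    },
    data=msgpack.packb(batch),
)
response.raise_for_status()
```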
### Line Protocol

```bash
curl -X POST "http://localhost:8000/api/v1/write" \
  -H "Authorization: Bearer $ARC_TOKEN" \
  -H "Content-Type: text/plain" \
  --data-binary "cpu,host=server01 usage_idle=95.0,usage_user=3.2"
```
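The same endpoint can be driven from code, and line protocol conventionally takes multiple points per request, one per line. A minimal Python sketch, assuming newline-separated points are accepted as in the standard line protocol format:

```python
import os
import requests

token = os.getenv("ARC_TOKEN")

# One point per line; newline-separated points share a single request body.
lines = "\n".join([
    "cpu,host=server01 usage_idle=95.0,usage_user=3.2",
    "cpu,host=server02 usage_idle=88.7,usage_user=9.5",
])

response = requests.post(
    "http://localhost:8000/api/v1/write",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "text/plain",
    },
    data=lines,
)
response.raise_for_status()
```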
### Python SDK

```python
from arc_client import ArcClient

with ArcClient(host="localhost", token="your-token") as client:
    client.write.write_columnar(
        measurement="cpu",
        columns={
            "time": [1704067200000],
            "host": ["server01"],
            "usage_idle": [95.0],
        },
    )
```
## Query Data

### curl

```bash
curl -X POST http://localhost:8000/api/v1/query \
  -H "Authorization: Bearer $ARC_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT * FROM default.cpu LIMIT 10", "format": "json"}'
```
### Python

```python
import requests
import os

token = os.getenv("ARC_TOKEN")

response = requests.post(
    "http://localhost:8000/api/v1/query",
    headers={"Authorization": f"Bearer {token}"},
    json={"sql": "SELECT * FROM default.cpu LIMIT 10", "format": "json"}
)
print(response.json())
```
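For anything beyond a quick smoke test, check the HTTP status before parsing and surface the error body when a query fails. A minimal sketch; the exact shape of the JSON payload depends on Arc's response format, so treat the final print as illustrative:

```python
import os
import requests

token = os.getenv("ARC_TOKEN")

response = requests.post(
    "http://localhost:8000/api/v1/query",
    headers={"Authorization": f"Bearer {token}"},
    json={"sql": "SELECT * FROM default.cpu LIMIT 10", "format": "json"},
    timeout=30,
)

if response.ok:
    payload = response.json()
    print(payload)  # structure depends on Arc's JSON response format
else:
    # Surface the server's error message rather than a bare status code.
    print(f"Query failed ({response.status_code}): {response.text}")
```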
### Arrow (2.88M rows/s)

```python
import requests
import pyarrow as pa
import os

token = os.getenv("ARC_TOKEN")

response = requests.post(
    "http://localhost:8000/api/v1/query/arrow",
    headers={"Authorization": f"Bearer {token}"},
    json={"sql": "SELECT * FROM default.cpu LIMIT 10000"}
)

reader = pa.ipc.open_stream(response.content)
df = reader.read_all().to_pandas()
print(df.head())
```
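Because the Arrow endpoint hands you a `pyarrow` stream, you can skip pandas entirely and persist the result directly, for example to Parquet. A small sketch using pyarrow's standard IPC and Parquet APIs:

```python
import os

import pyarrow as pa
import pyarrow.parquet as pq
import requests

token = os.getenv("ARC_TOKEN")

response = requests.post(
    "http://localhost:8000/api/v1/query/arrow",
    headers={"Authorization": f"Bearer {token}"},
    json={"sql": "SELECT * FROM default.cpu LIMIT 10000"},
)

# read_all() returns a pyarrow.Table; write it straight to Parquet.
table = pa.ipc.open_stream(response.content).read_all()
pq.write_table(table, "cpu_snapshot.parquet")
print(table.schema)
```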
## Next Steps

- Python SDK - Official client with DataFrame support
- Telegraf Integration - Collect system metrics
- Apache Superset - Build dashboards
- Configuration - Tune Arc for your workload
## Troubleshooting

### Docker

```bash
docker logs arc
```

### Kubernetes

```bash
kubectl logs -l app=arc
kubectl describe pod -l app=arc
```

### Native

```bash
sudo journalctl -u arc -n 50
sudo systemctl status arc
```
## Common Issues

**Authentication errors:** Make sure `ARC_TOKEN` is set and included in the `Authorization` header of every request.

**No data returned:** Data may not have been flushed to storage yet. Force a flush:

```bash
curl -X POST http://localhost:8000/api/v1/write/line-protocol/flush \
  -H "Authorization: Bearer $ARC_TOKEN"
```