High-signal load testing for HTTP, WebSocket, SSE, and gRPC from the CLI.
Crankfire lets you describe realistic workloads against these protocols using one cohesive config model (choose one protocol per run). It’s built for engineers who care about proper arrival modeling, protocol‑aware metrics, and tight CI/CD integration—without running a cluster or a web UI.
👉 Docs: https://torosent.github.io/crankfire/
Use Crankfire when you need more than a simple curl loop, but don’t want the overhead of a heavyweight load-testing platform.
- Multi‑protocol coverage – HTTP, WebSocket, SSE, and gRPC share the same configuration and reporting engine (select the protocol mode per run).
- Realistic traffic patterns – Ramp/step/spike load phases plus uniform or Poisson arrivals (see the example after this list).
- Production‑grade metrics – HDR histogram percentiles (P50/P90/P95/P99), per‑endpoint stats, and protocol‑specific error buckets.
- Live dashboard or JSON – Watch tests in your terminal, or export structured JSON for automation.
- Auth & data built‑in – OAuth2/OIDC helpers and CSV/JSON feeders for realistic test data.
- Request chaining – Extract values from responses (JSON path, regex) and use them in subsequent requests.
- HAR import – Record browser sessions and replay them as load tests with automatic filtering.
- Single binary – Written in Go with minimal runtime dependencies.
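
For example, a Poisson-arrival run at a fixed target rate uses only the standard CLI flags (the API URL below is a placeholder):

crankfire \
  --target https://api.example.com \
  --concurrency 20 \
  --rate 100 \
  --duration 1m \
  --arrival-model poisson
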
See the full feature overview in the docs.
| Feature | HTTP | WebSocket | SSE | gRPC |
|---|---|---|---|---|
| Basic Load Testing | ✅ | ✅ | ✅ | ✅ |
| Authentication | ✅ | ✅ | ✅ | ✅ |
| Data Feeders | ✅ | ✅ | ✅ | ✅ |
| Request Chaining | ✅ | — | — | — |
| HAR Import | ✅ | — | — | — |
| Thresholds/Assertions | ✅ | ✅ | ✅ | ✅ |
| Retries | ✅ | ❌ | ❌ | ❌ |
| Protocol-Specific Metrics | - | Messages sent/received, bytes | Events received, bytes | Calls, responses |
| Dashboard Support | ✅ | ✅ | ✅ | ✅ |
| HTML Report | ✅ | ✅ | ✅ | ✅ |
| JSON Output | ✅ | ✅ | ✅ | ✅ |
| Rate Limiting | ✅ | ✅ | ✅ | ✅ |
| Arrival Models | ✅ | ✅ | ✅ | ✅ |
| Load Patterns | ✅ | ✅ | ✅ | ✅ |
| Multiple HTTP Endpoints | ✅ | — | — | — |
- Performance Testing: Measure API response times and throughput
- Load Testing: Simulate concurrent users and high traffic
- Stress Testing: Find breaking points with accurate tail-latency percentiles (HDR histograms)
- CI/CD Integration: Automated performance regression testing with JSON output
- Failure Analysis: Status bucket breakdown highlights protocol-specific failure patterns
Homebrew:

brew tap torosent/crankfire
brew install crankfire

Docker:

docker run ghcr.io/torosent/crankfire --target https://example.com --total 100

Go install:

go install github.com/torosent/crankfire/cmd/crankfire@latest

Build from source:

git clone https://github.com/torosent/crankfire.git
cd crankfire
go build -o build/crankfire ./cmd/crankfire

Basic test (100 requests, 10 concurrent workers):

crankfire --target https://example.com --concurrency 10 --total 100

Rate-limited test with the live dashboard:

crankfire \
--target https://api.example.com \
--concurrency 20 \
--rate 100 \
--duration 30s \
  --dashboard

POST request with a JSON body:

crankfire \
--target https://api.example.com/users \
--method POST \
--body '{"name":"crankfire"}' \
--header "Content-Type=application/json" \
  --total 100

Run from a config file:

crankfire --config loadtest.yaml

Generate an HTML report:

crankfire --target https://example.com --total 1000 --html-output report.html

Record a browser session and replay it as a load test:
crankfire --har recording.har --har-filter "host:api.example.com" --total 100

See the Getting Started guide for a step‑by‑step walkthrough.
| Flag | Description | Default |
|---|---|---|
| `--target` | Target URL to test | (required unless using `--har`) |
| `--method` | HTTP method (GET, POST, PUT, DELETE, PATCH, etc.; case-insensitive) | GET |
| `--header` | Add HTTP header (`Key=Value`, repeatable; last wins) | - |
| `--body` | Inline request body | - |
| `--body-file` | Path to request body file | - |
| `--concurrency`, `-c` | Number of parallel workers | 1 |
| `--rate`, `-r` | Requests per second (0 = unlimited) | 0 |
| `--duration`, `-d` | Test duration (e.g., 30s, 1m) | 0 |
| `--total`, `-t` | Total number of requests | 0 |
| `--timeout` | Per-request timeout | 30s |
| `--retries` | Number of retry attempts | 0 |
| `--arrival-model` | Arrival model (`uniform` or `poisson`) | uniform |
| `--html-output` | Generate an HTML report at the specified file path | - |
| `--json-output` | Output results as JSON | false |
| `--dashboard` | Show live terminal dashboard | false |
| `--log-errors` | Log each failed request to stderr | false |
| `--config` | Path to config file (JSON/YAML) | - |
| `--har` | Path to HAR file to import as endpoints | - |
| `--har-filter` | Filter HAR entries (e.g., `host:example.com` or `method:GET,POST`) | - |
| `--feeder-path` | Path to CSV/JSON file for per-request data injection | - |
| `--feeder-type` | Feeder file type (`csv` or `json`) | - |
| `--protocol` | Protocol mode (`http`, `websocket`, or `sse`) | http |
| `--ws-messages` | WebSocket messages to send (repeatable) | - |
| `--ws-message-interval` | Interval between WebSocket messages | 0 |
| `--ws-receive-timeout` | WebSocket receive timeout | 10s |
| `--ws-handshake-timeout` | WebSocket handshake timeout | 30s |
| `--sse-read-timeout` | SSE read timeout | 30s |
| `--sse-max-events` | Max SSE events to read (0 = unlimited) | 0 |
| `--grpc-proto-file` | Path to `.proto` file for gRPC | - |
| `--grpc-service` | gRPC service name (e.g., `myapp.Service`) | - |
| `--grpc-method` | gRPC method name (e.g., `GetData`) | - |
| `--grpc-message` | JSON message payload for gRPC (supports templates) | - |
| `--grpc-metadata` | gRPC metadata (repeatable, `Key=Value`) | - |
| `--grpc-timeout` | gRPC call timeout | 30s |
| `--grpc-tls` | Use TLS for gRPC | false |
| `--grpc-insecure` | Skip TLS certificate verification | false |
| `--threshold` | Performance threshold (repeatable, e.g., `http_req_duration:p95 < 500`) | - |
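
The WebSocket and SSE modes reuse the same load controls. As a minimal sketch (the wss:// endpoint, the /events path, and the ping payload are placeholders):

crankfire \
  --protocol websocket \
  --target wss://api.example.com/ws \
  --ws-messages '{"type":"ping"}' \
  --ws-message-interval 1s \
  --concurrency 10 \
  --duration 30s

crankfire \
  --protocol sse \
  --target https://api.example.com/events \
  --sse-max-events 50 \
  --concurrency 10 \
  --duration 30s
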
For a complete CLI reference and configuration guide, see Configuration & CLI Reference.
Define pass/fail criteria for your load tests. Perfect for catching performance regressions in CI pipelines.
crankfire --target https://api.example.com \
--concurrency 50 \
--duration 1m \
--threshold "http_req_duration:p95 < 500" \
--threshold "http_req_failed:rate < 0.01" \
--threshold "http_requests:rate > 100"Or in a config file:
target: https://api.example.com
concurrency: 50
duration: 1m
thresholds:
- "http_req_duration:p95 < 500" # 95th percentile under 500ms
- "http_req_failed:rate < 0.01" # Less than 1% failures
- "http_requests:rate > 100" # At least 100 RPSThe test exits with code 1 if any threshold fails, making it ideal for CI/CD gates. See Thresholds Documentation for details.
Example JSON config:
{
"target": "https://api.example.com/users",
"method": "POST",
"headers": {
"Content-Type": "application/json",
"Authorization": "Bearer token123"
},
"body": "{\"name\":\"test\"}",
"concurrency": 50,
"rate": 200,
"duration": "1m",
"timeout": "5s",
"retries": 3
}

Example YAML config:
target: https://api.example.com/users
method: POST
headers:
Content-Type: application/json
Authorization: Bearer token123
body: '{"name":"test"}'
concurrency: 50
rate: 200
duration: 1m
timeout: 5s
retries: 3

Simulate a realistic traffic curve with a warm-up phase followed by a sustained load period.
target: https://api.example.com
concurrency: 50
load_patterns:
- name: "warmup"
type: ramp
from_rps: 10
to_rps: 100
duration: 30s
- name: "sustained"
type: step
steps:
- rps: 100
        duration: 2m

Distribute traffic across multiple endpoints to simulate real user behavior (e.g., 80% browsing, 20% purchasing).
target: https://api.example.com
concurrency: 20
endpoints:
- name: "list-products"
weight: 8
method: GET
path: /products
- name: "create-order"
weight: 2
method: POST
path: /orders
body: '{"product_id": "123", "quantity": 1}'Inject dynamic data from a CSV file into request bodies or URLs.
orders.csv:
product_id,quantity
101,2
102,1
103,5
config.yaml:
target: https://api.example.com/orders
method: POST
feeder:
type: csv
path: ./orders.csv
body: '{"product_id": "{{.product_id}}", "quantity": {{.quantity}}}'Extract values from responses and use them in subsequent requests. Perfect for workflows like login → use token → access protected resources.
target: https://api.example.com
concurrency: 10
endpoints:
- name: "login"
weight: 1
method: POST
path: /auth/login
body: '{"email": "[email protected]", "password": "secret"}'
extractors:
- jsonpath: $.token
var: auth_token
- jsonpath: $.user.id
var: user_id
- name: "get-profile"
weight: 3
method: GET
path: /users/{{user_id}}
headers:
Authorization: "Bearer {{auth_token}}"See the Request Chaining documentation for JSON path, regex extraction, defaults, and error handling.
Load test a gRPC service using a Protocol Buffers definition.
target: localhost:50051
protocol: grpc
grpc:
proto_file: ./proto/service.proto
service: myapp.OrderService
method: CreateOrder
message: '{"user_id": "123", "item": "book"}'
  timeout: 2s

--- Load Test Results ---
Total Requests: 1000
Successful: 998
Failed: 2
Duration: 10.2s
Requests/sec: 98.04
Latency:
Min: 12ms
Max: 156ms
Mean: 45ms
P50: 42ms
P90: 68ms
P99: 112ms
Status Buckets:
HTTP 404: 1
HTTP 500: 1
crankfire --target https://example.com --total 100 --json-output

{
"total": 100,
"successes": 100,
"failures": 0,
"min_latency_ms": 12.4,
"max_latency_ms": 156.8,
"mean_latency_ms": 45.2,
"p50_latency_ms": 42.1,
"p90_latency_ms": 68.3,
"p99_latency_ms": 112.7,
"duration_ms": 10245.3,
"requests_per_sec": 9.76,
"status_buckets": {
"http": {
"404": 2,
"500": 1
}
},
"endpoints": {
"list-users": {
"total": 600,
"successes": 600,
"failures": 0,
"p99_latency_ms": 95.1,
"requests_per_sec": 58.6
},
"create-order": {
"total": 400,
"successes": 392,
"failures": 8,
"p99_latency_ms": 180.4,
"requests_per_sec": 39.1,
"status_buckets": {
"http": {
"503": 8
}
}
}
}
}
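
The JSON is straightforward to consume in scripts; for example, with jq (the 200 ms budget below is arbitrary):

# Print the P99 latency from a run
crankfire --target https://example.com --total 1000 --json-output | jq '.p99_latency_ms'

# Exit non-zero if P99 exceeds the budget
crankfire --target https://example.com --total 1000 --json-output | jq -e '.p99_latency_ms < 200'

For pass/fail gating, the built-in --threshold flags are usually simpler; jq is handy for custom automation and reporting.
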
Generate a standalone HTML report with interactive charts and detailed statistics.

crankfire --target https://example.com --total 1000 --html-output report.html

The report includes:
- Summary Cards: Key metrics at a glance
- Interactive Charts: RPS and latency percentiles over time
- Latency Statistics: Detailed percentile breakdown
- Threshold Results: Pass/fail status for configured thresholds
- Endpoint Breakdown: Per-endpoint performance metrics
The dashboard provides a real-time, interactive view of your load test with the following features:
- Sparkline Latency Graph: Visual representation of latency over time
- RPS Gauge: Current requests per second with percentage indicator
- Metrics Table: Real-time statistics including total requests, success/failure counts, and latency percentiles
- Status Buckets: Protocol-specific failure codes with counts
- Endpoint Breakdown: Weighted endpoint view with live share, RPS, and tail latency
- Test Summary: Elapsed time, total requests, and success rate
crankfire --target https://api.example.com \
--concurrency 20 \
--rate 100 \
--duration 60s \
--dashboard
Use Crankfire responsibly. You MUST:
- Only use Crankfire against systems you own or have explicit written permission to test
- Verify you have authorization before running any load tests
- Be aware that unauthorized load testing may violate terms of service, acceptable use policies, or laws
- Consider the impact on production systems and third-party services
Security Best Practices:
- Never commit authentication credentials to version control
- Use environment variables for secrets: `CRANKFIRE_AUTH_CLIENT_SECRET`, `CRANKFIRE_AUTH_PASSWORD`, `CRANKFIRE_AUTH_STATIC_TOKEN` (see the example below)
- Avoid passing secrets via CLI flags (they appear in shell history and process lists)
- Use short-lived tokens with minimal scopes
- Prefer `oauth2_client_credentials` over the legacy `oauth2_resource_owner` password flow
- For gRPC, always enable TLS verification in production (`insecure: false`)
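
For example, exporting the secret before the run keeps it out of flags and committed files (the value is a placeholder, and loadtest.yaml is assumed to enable one of the OAuth2 helpers; see the Authentication docs for the exact keys):

# The client secret is read from the environment, not from a flag or the config file
export CRANKFIRE_AUTH_CLIENT_SECRET="replace-me"
crankfire --config loadtest.yaml
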
Rate Limiting:
- Start with conservative rate and concurrency settings
- Gradually increase load while monitoring target system health
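
For instance, a cautious first pass before raising --rate between runs (the target and numbers are placeholders):

# Low fixed rate with the dashboard open to watch latency and error buckets
crankfire --target https://api.example.com --concurrency 5 --rate 10 --duration 1m --dashboard
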
Misuse of this tool may result in service disruptions, account termination, legal action, or criminal charges. The authors and contributors are not responsible for misuse of this software.
MIT