# Troubleshooting
Tips and answers to FAQs about how to run Electric successfully.
## Local development

### Slow shapes / slow HMR / slow dev server — why is my local development slow?
Sometimes people encounter mysterious slow-downs with Electric in local development — slow shape loading, sluggish HMR (Hot Module Replacement), or an unresponsive development server. This commonly happens when your web app is subscribed to 6 or more shapes. The slow-down is caused by a limitation of the legacy HTTP/1.1 protocol.
With HTTP/1.1, browsers only allow 6 simultaneous requests to a given backend, because each HTTP/1.1 request uses its own expensive TCP connection. As shapes are loaded over HTTP, this browser restriction means at most 6 shapes can be receiving updates at any one time; all other requests pause until a connection frees up.
This also affects your development server (Vite, webpack, etc.) because the browser's TCP connection limit is shared across all requests to your dev server — including HMR updates, asset loading, and shape sync. If Electric shapes are holding connections open, your HMR may take minutes instead of milliseconds.
Luckily, HTTP/2, introduced in 2015, fixes this problem by multiplexing requests over a single TCP connection. This allows an essentially unlimited number of concurrent requests. HTTP/2 is standard across the vast majority of hosts now. Unfortunately it's not yet standard in local dev environments.
#### Solution — run Caddy

To fix this, you can set up a local reverse proxy using the popular Caddy server. Caddy automatically sets up HTTP/2 and proxies requests to Electric, getting around the browser's 6-request limitation with HTTP/1.1.
- Install Caddy for your OS — https://caddyserver.com/docs/install
- Run `caddy trust` so Caddy can install its certificate into your OS. This is necessary for HTTP/2 to Just Work™ without SSL warnings/errors in your browser — https://caddyserver.com/docs/command-line#caddy-trust

Note — it's important to run Caddy directly on your computer and not in e.g. a Docker container, as otherwise Caddy won't be able to use HTTP/2 and will fall back to HTTP/1.1, defeating the purpose of using it!
Once you have Caddy installed and have added its certs — you can run this command to start Caddy listening on port 3001 and proxying shape requests to Electric on port 3000. If you're loading shapes through your API or framework dev server, replace 3000 with the port that your API or dev server is listening on. The browser should talk directly to Caddy.
```shell
caddy run \
  --config - \
  --adapter caddyfile \
  <<EOF
localhost:3001 {
  reverse_proxy localhost:3000
  encode {
    gzip
  }
}
EOF
```

Now change your shape URLs in your frontend code to use port 3001 instead of port 3000 and everything will run much faster 🚀
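Switching ports only means rewriting the URL the browser talks to. As a minimal sketch (the `viaProxy` helper is hypothetical, not part of the Electric client):

```typescript
// Hypothetical helper: rewrite a shape URL so the browser talks to the
// Caddy proxy (port 3001) instead of Electric directly (port 3000).
function viaProxy(shapeUrl: string, proxyPort = 3001): string {
  const url = new URL(shapeUrl)
  url.port = String(proxyPort)
  return url.toString()
}

console.log(viaProxy('http://localhost:3000/v1/shape?table=items'))
// prints http://localhost:3001/v1/shape?table=items
```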
### SSE connections — why is my client falling back to long polling?

When using Server-Sent Events (SSE) mode for live updates (`liveSse: true`), you might see a warning in the console:
```
[Electric] SSE connections are closing immediately (possibly due to proxy buffering or misconfiguration).
Falling back to long polling.
```

This happens when the Electric client detects that SSE connections are closing immediately after opening, which typically indicates proxy buffering or caching issues.
#### Solution — configure your proxy for SSE streaming
SSE requires proxies to support streaming responses without buffering the complete response. Here's how to configure common proxies:
##### Caddy

Add `flush_interval -1` to your `reverse_proxy` configuration:
```
localhost:3001 {
  reverse_proxy localhost:3000 {
    # SSE: disable internal buffering so events are flushed immediately
    flush_interval -1
  }
  encode gzip
  # Helpful headers for streaming
  header {
    Cache-Control "no-cache, no-transform"
    X-Accel-Buffering "no"
  }
}
```

##### Nginx
Disable proxy buffering for SSE endpoints:
```nginx
location /v1/shape {
    proxy_pass http://localhost:3000;
    proxy_buffering off;  # Disable buffering for SSE streaming
    proxy_http_version 1.1;
    # Preserve Electric's cache headers for request collapsing
    proxy_cache_valid 200 1s;
}
```

Important: Do NOT disable caching entirely! Electric uses cache headers to enable request collapsing/fanout for efficiency. Your proxy should:
- Support streaming (not buffer complete responses)
- Respect Electric's cache headers for request collapsing
- Flush SSE events immediately as they arrive
#### How the client handles SSE issues
When SSE connections close immediately, the Electric client:
- Retries with exponential backoff (0-200ms, 0-400ms, 0-800ms)
- After 3 consecutive short connections, automatically falls back to long polling
- Continues working normally in long polling mode (slightly less efficient)
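The backoff and fallback behaviour above can be sketched as follows (a simplified illustration of the heuristic described here, not the actual client source):

```typescript
// Jittered exponential backoff: attempt 0 waits 0-200ms, attempt 1
// waits 0-400ms, attempt 2 waits 0-800ms.
function backoffDelayMs(attempt: number): number {
  const cap = 200 * 2 ** attempt
  return Math.random() * cap
}

// After 3 consecutive short-lived SSE connections, give up on SSE
// and fall back to long polling.
function shouldFallBackToLongPolling(consecutiveShortConnections: number): boolean {
  return consecutiveShortConnections >= 3
}
```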
To verify your SSE setup is working, check that:
- Console shows no fallback warnings
- Network tab shows a persistent SSE connection (not rapidly reconnecting)
- `shapeStream.isConnected()` returns `true` after initial sync
### Shape logs — how do I clear the server state?
Electric writes shape logs to disk.
During development, you may want to clear this state. However, just restarting Electric doesn't clear the underlying storage, which can lead to unexpected behaviour.
#### Solution — clear shape logs

You can remove the storage directory (`ELECTRIC_STORAGE_DIR`) to delete all shape logs. This ensures that subsequent shape requests will be re-synced from scratch.
##### Using Docker

If you're running using Docker Compose, the simplest solution is to bring the Postgres and Electric services down, using the `--volumes` flag to also clear their mounted storage volumes:
```shell
docker compose down --volumes
```

You can then bring a fresh backend up from scratch:

```shell
docker compose up
```

### Unexpected 409 — why is my shape handle invalid?
If, when you request a shape, you get an unexpected 409 status despite the shape existing (for example, straight after you've created it), e.g.:
```
url: http://localhost:3000/v1/shape?table=projects&offset=-1
sec: 0.086570622 seconds
status: 200

url: http://localhost:3000/v1/shape?table=projects&offset=0_0&handle=17612588-1732280609822
sec: 1.153542301 seconds
status: 409
conflict reading Location

url: http://localhost:3000/v1/shape?table=projects&offset=0_0&handle=51930383-1732543076951
sec: 0.003023737 seconds
status: 200
```

This indicates that your client library or proxy layer is caching requests to Electric and responding to them without actually hitting Electric for the correct response. For example, when running unit tests your library may be maintaining an unexpected global HTTP cache.
#### Solution — clear your cache
The problem will resolve itself as client/proxy caches empty. You can force this by clearing your client or proxy cache. See Control messages for more context on 409 messages.
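If the stale cache lives in your HTTP client, one workaround is to bypass caching for shape requests entirely. A sketch, assuming your version of the Electric TypeScript client supports a custom `fetchClient` option (check the client docs for your version):

```typescript
// Merge caller options with `cache: 'no-store'` so every request
// bypasses any client-side HTTP cache.
function withNoStore(init: Record<string, unknown> = {}): Record<string, unknown> {
  return { ...init, cache: 'no-store' }
}

// Usage sketch (`fetchClient` is assumed here; verify it against your
// client version's documentation):
// const stream = new ShapeStream({
//   url, params,
//   fetchClient: (input, init) => fetch(input, withNoStore(init)),
// })
```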
## Production

### WAL growth — why is my Postgres database storage filling up?
Electric creates a logical replication slot in Postgres to stream changes. This slot tracks a position in the Write-Ahead Log (WAL) and prevents Postgres from removing WAL segments that Electric hasn't yet processed. If the slot doesn't advance, WAL accumulates and consumes disk space.
#### Understanding replication slot status
Run this query to check your replication slot's health:
```sql
SELECT
  slot_name,
  active,
  wal_status,
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal,
  pg_size_pretty(safe_wal_size) AS safe_wal_remaining,
  restart_lsn,
  confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'electric%';
```

Key columns:
| Column | Meaning |
|---|---|
| `active` | `true` if Electric is currently connected |
| `wal_status` | Current WAL retention state (see below) |
| `retained_wal` | Total WAL size held by this slot |
| `confirmed_flush_lsn` | Last position Electric confirmed processing |
Understanding `wal_status` values:

| Status | Meaning | Action |
|---|---|---|
| `reserved` | Normal — WAL is within `max_wal_size` | None required |
| `extended` | Warning — exceeded `max_wal_size` but protected by slot limits | Monitor closely |
| `unreserved` | Danger — WAL may be removed at next checkpoint | Urgent: slot will be invalidated |
| `lost` | Critical — required WAL was removed, slot is invalid | Must recreate slot |
#### Common causes and solutions

##### Electric is disconnected
When Electric isn't running, its replication slot remains but becomes inactive. WAL accumulates indefinitely until Electric reconnects or the slot is removed.
Solution: If stopping Electric for an extended period, remove the replication slot:
```sql
SELECT pg_drop_replication_slot('electric_slot_default');
```

When Electric restarts, it will recreate the slot and rebuild shape logs from scratch.
##### Slot is active but not advancing

If `active = true` but `confirmed_flush_lsn` isn't advancing, verify Electric is processing changes:
- Check for errors in Electric logs — storage issues or database connectivity problems can prevent processing
- Verify shaped tables are in the publication:

  ```sql
  SELECT * FROM pg_publication_tables WHERE pubname LIKE 'electric_publication%';
  ```

- Test that changes flow through — make a change to a shaped table and check if `confirmed_flush_lsn` advances:

  ```sql
  -- Note the current position
  SELECT confirmed_flush_lsn FROM pg_replication_slots WHERE slot_name = 'electric_slot_default';

  -- Make a change to a table with an active shape
  UPDATE your_shaped_table SET updated_at = now() WHERE id = 1;

  -- After a few seconds, check if position advanced
  SELECT confirmed_flush_lsn FROM pg_replication_slots WHERE slot_name = 'electric_slot_default';
  ```

- Check Electric's storage — if `ELECTRIC_STORAGE_DIR` has disk space or permission issues, Electric can't flush data and won't acknowledge progress
##### High write volume
If your database has heavy write activity, there will always be some lag between writes and Electric's acknowledgment. This is normal, but you should configure limits to prevent unbounded growth.
Solution: Set `max_slot_wal_keep_size` to cap WAL retention:
```sql
-- Limit each slot to 10GB of WAL (adjust based on your needs)
ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
SELECT pg_reload_conf();
```

WARNING: If a slot exceeds this limit, Postgres will invalidate it at the next checkpoint. Electric will detect this, drop all shapes, and recreate the slot. This is generally preferable to filling your disk.
#### Recommended PostgreSQL settings

| Setting | Recommended Value | Purpose |
|---|---|---|
| `max_slot_wal_keep_size` | 10GB - 50GB | Prevents any single slot from causing unbounded WAL growth. Default is `-1` (unlimited). |
| `wal_keep_size` | 2GB (RDS default) | Minimum WAL retained regardless of slots |
For AWS RDS, these can be set in your parameter group. Note that `max_slot_wal_keep_size` requires PostgreSQL 13+.
#### Monitoring replication health
Electric exposes metrics for monitoring replication slot health. If you have Prometheus configured, watch these metrics:
- `electric.postgres.replication.slot_retained_wal_size` — bytes of WAL retained by the slot
- `electric.postgres.replication.slot_confirmed_flush_lsn_lag` — bytes between Electric's confirmed position and current WAL
Set alerts when retained WAL exceeds your threshold or when lag grows continuously.
#### Quick diagnostic checklist

- Is the slot active? — `active = true` means Electric is connected
- Is `confirmed_flush_lsn` advancing? — should increase after changes to shaped tables
- What's the `wal_status`? — `reserved` is healthy, `extended` needs attention
- Is `max_slot_wal_keep_size` set? — prevents unbounded growth (default is unlimited)
- Any errors in Electric logs? — storage or connectivity issues prevent processing
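If you poll slot status programmatically, the checklist can be folded into a small helper. A sketch — field names match the aliases in the status query above, and the classification follows the `wal_status` table; the three-level scale is illustrative:

```typescript
// Classify a pg_replication_slots row into a rough health level.
type SlotRow = { active: boolean; wal_status: string }

function slotHealth(row: SlotRow): 'ok' | 'warning' | 'critical' {
  // Required WAL removed (or about to be): the slot needs urgent attention.
  if (row.wal_status === 'lost' || row.wal_status === 'unreserved') return 'critical'
  // WAL growing past max_wal_size, or Electric disconnected: monitor closely.
  if (row.wal_status === 'extended' || !row.active) return 'warning'
  return 'ok'
}
```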
### Database permissions — how do I configure PostgreSQL users for Electric?

Electric requires specific PostgreSQL permissions to function correctly, including the `REPLICATION` role attribute and appropriate table permissions.

#### Solution — see the PostgreSQL Permissions guide
See the PostgreSQL Permissions guide for detailed instructions on:
- Quick start setup for development and production
- Different permission levels (superuser, dedicated user, least-privilege)
- How to handle `REPLICA IDENTITY FULL` requirements
#### Common permission errors

##### Error: "insufficient privilege to create publication"

Cause: The user doesn't have `CREATE` privilege on the database.
Solution: Either:

- Grant `CREATE` privilege: `GRANT CREATE ON DATABASE mydb TO electric_user;`
- Or use manual publication management (create the publication as a superuser and set `ELECTRIC_MANUAL_TABLE_PUBLISHING=true`)
##### Error: "publication not owned by the provided user"
Cause: The publication exists but is owned by a different user.
Solution: Change the publication owner:
```sql
ALTER PUBLICATION electric_publication_default OWNER TO electric_user;
```

##### Error: "table does not have its replica identity set to FULL"

Cause: The table hasn't been configured with `REPLICA IDENTITY FULL`.
Solution: Set replica identity manually:
```sql
ALTER TABLE schema.tablename REPLICA IDENTITY FULL;
```

##### Error: "permission denied for table"

Cause: The Electric user doesn't have `SELECT` permission on the table.
Solution: Grant appropriate permissions:
```sql
GRANT SELECT ON schema.tablename TO electric_user;
```

##### Error: "must be owner of table"

Cause: You attempted an operation that requires ownership (e.g., `ALTER TABLE ... REPLICA IDENTITY FULL` or adding the table to a publication).
Solution: Run as the table owner (or superuser), or transfer ownership:
```sql
ALTER TABLE schema.tablename OWNER TO electric_user;
```

### IPv6 support

If Electric or Postgres are running behind an IPv6 network, you might need some additional network configuration.
#### Postgres running behind IPv6 network

In order for Electric to connect to Postgres over IPv6, you need to set `ELECTRIC_DATABASE_USE_IPV6` to `true`.
##### Local development

If you're running Electric on your own computer, check whether you have IPv6 support by opening test-ipv6.com. If you see "No IPv6 address detected" on that page, consider SSHing into another machine or using a VPN service that works with IPv6 networks.
When running Electric in a Docker container, there's an additional hurdle in that Docker does not enable IPv6 out-of-the-box. Follow the official guide to configure your Docker daemon for IPv6.
##### Cloud

If you're running Electric with a cloud provider, you need to ensure that your VPC is configured with IPv6 support. Check your cloud provider's documentation to learn how to set it up.
#### Electric running behind IPv6 network

By default Electric only binds to IPv4 addresses. You need to set `ELECTRIC_LISTEN_ON_IPV6` to `true` to bind to IPv6 addresses as well.
### Missing headers — why is the client complaining about missing headers?

When Electric responds to shape requests, it includes headers that the client requires in order to follow the shape log. It's common to run Electric behind a proxy to authenticate users and authorise shape requests. However, the proxy might not preserve these response headers, in which case the client may complain about missing headers.
#### Solution — configure proxy to keep headers

Verify the proxy configuration and make sure it doesn't remove any of the `electric-...` headers.
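As a quick sanity check, you can fetch a shape through the proxy and diff the response headers against the ones you expect. A sketch — the header names listed are examples of Electric's `electric-` prefixed headers; confirm the full set for your version against the HTTP API docs:

```typescript
// Return the required Electric headers that are missing from a response,
// given the lowercase-insensitive list of header names it carried.
function missingElectricHeaders(headerNames: string[]): string[] {
  const present = new Set(headerNames.map((name) => name.toLowerCase()))
  // Example header names (assumption: verify against your Electric version).
  const required = ['electric-handle', 'electric-offset', 'electric-schema']
  return required.filter((name) => !present.has(name))
}

// Usage sketch:
// const res = await fetch('http://localhost:3001/v1/shape?table=items&offset=-1')
// console.log(missingElectricHeaders([...res.headers.keys()])) // should be []
```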
### 414 Request-URI Too Long — why are my subset snapshot requests failing?

When using subset snapshots (via `requestSnapshot` or `fetchSnapshot`), you might encounter a 414 Request-URI Too Long error:
```
Bandit.HTTPError: Request URI is too long
```

This happens when the subset parameters (especially WHERE clauses with many values) exceed the maximum URL length. This is common when:

- Using `WHERE id = ANY($1)` with hundreds of IDs (typical in join queries)
- TanStack DB generates large filter lists from JOIN operations
- Any query with many positional parameters
#### Solution — use POST requests for subset snapshots

Instead of sending subset parameters as URL query parameters (GET), send them in the request body (POST). The Electric server supports both methods.

##### TypeScript Client

Set `subsetMethod: 'POST'` on the stream to use POST for all subset requests:
```typescript
const stream = new ShapeStream({
  url: 'http://localhost:3000/v1/shape',
  params: { table: 'items' },
  log: 'changes_only',
  subsetMethod: 'POST', // Use POST for all subset requests
})

// All subset requests will now use POST
const { metadata, data } = await stream.requestSnapshot({
  where: "id = ANY($1)",
  params: { '1': '{id1,id2,id3,...hundreds more...}' },
})
```

Or override per-request:
```typescript
const { metadata, data } = await stream.requestSnapshot({
  where: "id = ANY($1)",
  params: { '1': '{id1,id2,id3,...}' },
  method: 'POST', // Use POST for this request only
})
```

##### Direct HTTP
Use POST with subset parameters in the JSON body:
```shell
curl -X POST 'http://localhost:3000/v1/shape?table=items&offset=123_4&handle=abc-123' \
  -H 'Content-Type: application/json' \
  -d '{
    "where": "id = ANY($1)",
    "params": {"1": "{id1,id2,id3,...}"},
    "order_by": "created_at",
    "limit": 100
  }'
```

See the HTTP API documentation for more details.
#### Future change
In Electric 2.0, GET requests for subset snapshots will be deprecated. Only POST will be supported. We recommend migrating to POST now to avoid future breaking changes.