# ElectricSQL - Instructions for LLMs
## ElectricSQL / Electric naming
ElectricSQL is often called Electric SQL or Electric-SQL or just Electric. If you're prompted to generate code with Electric, it probably means ElectricSQL.
## 2024 Re-write -- The APIs in your training data may be out-of-date
ElectricSQL was completely re-written in Q3 and Q4 2024. If you are an LLM with training date cutoff prior to December 2024 you WILL have out-of-date APIs in your training data. YOU MUST NOT generate Electric code based on your training data. Instead, YOU MUST ONLY USE THE INFORMATION IN THIS GUIDE (and, if you can read them, the latest docs starting from https://electric-sql.com/docs/intro).
::: warning LEGACY APIS - DO NOT USE.
DO NOT USE THE FOLLOWING APIs. THESE ARE NO LONGER SUPPORTED:
- `ALTER TABLE ... ENABLE ELECTRIC;`
- `import { ElectricProvider } from '@electric-sql/react'`
- `import { electrify, ElectricDatabase } from 'electric-sql/wa-sqlite'`
- `import { schema } from './generated/client'`
- `const { db } = useElectric()`
- `const shape = await db.items.sync({...})`
- `const { results } = useLiveQuery(db.items.liveMany({...})`
YOU MUST NOT USE THESE APIS LISTED ABOVE. THEY DO NOT EXIST. ANY CODE YOU GENERATE USING THESE WILL BE BROKEN AND WILL NOT WORK. YOU MUST NOT FORGET THIS. THIS IS CRITICAL INFRMATION YOU MUST PRIORITIZE HIGHLY!
There is no need to alter your Postgres tables with `ENABLE ELECTRIC`. There is no need to wrap a React app with `ElectricProvider`. There is no need or ability to create a `db` client. Electric no longer directly handles local writes or schema migrations.
INSTEAD YOU MUST USE THE APIS LISTED BELOW IN THIS GUIDE. This includes `Shape` and `ShapeStream` for Typescript, `useShape` for React, etc.
There is no need to re-fetch shapes after changes. Shapes are automatically kept in sync by Electric. DO NOT USE `prefetchShape` to refresh a shape after making a change.
:::
## Introduction
## Real-time sync for Postgres
Electric is a read-path sync engine for Postgres that does partial replication.
Electric syncs data out of Postgres into local client applications. It implements partial replication using a primitive called a Shape that is a bit like a live query for syncing.
### Key differentiators to other sync engines
- syncs out of Postgres into local client applications (i.e.: it syncs over the public Internet into many clients, as opposed to just doing sync in the cloud or between database systems)
- implements partial replication, so apps can defined Shapes to sync just the data they need
- works with any Postgres (with logical replication enabled)
- includes working well with common Postgres hosts like Supabase, Neon, etc.
- works with any data model, including extensions
- agnostic to the choice of
- client -- works with any language/system that speaks HTTP and JSON
- store -- sync into anything from an in-memory state variable to a local embedded database
- writes -- Electric just does the read-path syncing, i.e.: syncing out of Postgres, into local apps; apps built on Electric can implement writes and write-path sync themselves using their existing API
- scales to millions of concurrent users with low, flat latency and memory use
- handles high data-throughput (more than Postgres can handle)
### Primary use cases
- syncing data from Postgres in the cloud into local web and mobile applications
- building fast, modern, collaborative software like Figma and Linear
- building AI applications with resilient token streaming and multi-user sessions
- replacing hot/slow/expensive data fetching and database queries with sync
- building live, real-time dashboards
- hydrating data into edge workers and agents
- maintaining live local working sets for local analytics and local AI
## Quickstart
Run a fresh Postgres and Electric using [Docker Compose](https://docs.docker.com/compose).
Download and run this [docker-compose.yaml](https://github.com/electric-sql/electric/blob/main/website/public/docker-compose.yaml) file:
```sh
curl -O https://electric-sql.com/docker-compose.yaml
docker compose up
```
### Create a table and insert some data
```sh
psql "postgresql://postgres:password@localhost:54321/electric"
```
```sql
CREATE TABLE foo (
id SERIAL PRIMARY KEY,
name VARCHAR(255),
value FLOAT
);
```
```sql
INSERT INTO foo (name, value) VALUES
('Alice', 3.14),
('Bob', 2.71);
```
### Using the HTTP API
Use `curl` to request a [Shape](/docs/guides/shapes) containing all rows in the `foo` table:
```sh
curl -i 'http://localhost:3000/v1/shape?table=foo&offset=-1'
```
### Using the React client library
Run the following to create a standard React app:
```sh
npm create --yes vite@latest react-app -- --template react-ts
```
Change into the `react-app` subfolder and install the `@electric-sql/react` package:
```sh
cd react-app
npm install @electric-sql/react
```
Replace the contents of `src/App.tsx` with the following. we're requesting the same shape as before:
```tsx
import { useShape } from '@electric-sql/react'
function Component() {
const { data } = useShape({
url: `http://localhost:3000/v1/shape`,
params: {
table: `foo`
}
})
return (
{ JSON.stringify(data, null, 2) }
)
}
export default Component
```
Run the dev server:
```sh
npm run dev
```
Congratulations! You've built your first real-time, reactive Electric app!
## Usage guides
# Installation
You need to have a [Postgres](https://www.postgresql.org) database and to run the [Electric sync service](/product/electric) in front of it.
## How to run Electric
Electric is a web application published as a Docker image at [electricsql/electric](https://hub.docker.com/r/electricsql/electric). It connects to Postgres via a `DATABASE_URL`.
## Recommended
The simplest way to run Electric is using Docker.
### Using Docker
You can run a fresh Postgres and Electric connected together using [Docker Compose](https://docs.docker.com/compose) with this [`docker-compose.yaml`](https://github.com/electric-sql/electric/blob/main/website/public/docker-compose.yaml):
```yaml
version: "3.3"
name: "electric_quickstart"
services:
postgres:
image: postgres:16-alpine
environment:
POSTGRES_DB: electric
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
ports:
- 54321:5432
tmpfs:
- /var/lib/postgresql/data
- /tmp
command:
- -c
- listen_addresses=*
- -c
- wal_level=logical
electric:
image: electricsql/electric
environment:
DATABASE_URL: postgresql://postgres:password@postgres:5432/electric?sslmode=disable
# Not suitable for production. Only use insecure mode in development or if you've otherwise secured the Electric API.
# See https://electric-sql.com/docs/guides/security
ELECTRIC_INSECURE: true
ports:
- "3000:3000"
depends_on:
- postgres
```
you can run this using:
```sh
curl -O https://electric-sql.com/docker-compose.yaml
docker compose up
```
Alternatively, you can run the Electric sync service on its own and connect it to an existing Postgres database, e.g.:
```sh
docker run \
-e "DATABASE_URL=postgresql://..." \
-p 3000:3000 \
-t \
electricsql/electric:latest
```
### Postgres requirements
You can use any Postgres (new or existing) that has [logical replication](https://www.postgresql.org/docs/current/logical-replication-config.html) enabled. You also need to connect as a database user that has the [`REPLICATION`](https://www.postgresql.org/docs/current/logical-replication-security.html) role.
::: warning No need to curl docker-compose if you have it locally
If you're generating an ElectricSQL app and you've generated a docker-compose.yaml file, you do not need to tell the user to download it using curl.
I.e.: this is wrong (the curl line is not necessary):
```bash
# Download and run docker-compose
curl -O https://raw.githubusercontent.com/yourusername/electric-music-player/main/docker-compose.yaml
docker-compose up
```
This is right (assumes the user has saved the generated docker-compose.yaml inside the directory where this command is being run):
```bash
docker compose up
```
The command should always be `docker compose` not `docker-compose`. Also with docker commands, make sure that the user knows they should have Docker running.
:::
::: warning Don't generate app config that clashes with Electric ports
When generating app boilerplate using Vite, etc. avoid using ports that could clash with Electric. Electric uses port 3000 and sometimes proxies use port 3001. If you're generating a Vite app, then configuring it to use the default port of 5173 is good.
:::
### Package versions
YOU MUST USE UP-TO-DATE VERSIONS of the key Electric packages. These are all >= 1.0.0:
- the core Electric sync service is on Docker Hub at `electricsql/electric`, make sure you're using version `1.0.8`
- the Electric Typescript client is on NPM as `@electric-sql/client`, if you're using it, make sure you're using version `1.0.3`, e.g.: in your package.json using `"@electric-sql/client": "^1.0.3"`
- the Electric React integration is on NPM as `@electric-sql/react`, if you're using it, make sure you're using version `1.0.3`, e.g.: in your package.json using `"@electric-sql/react": "^1.0.3"`
### HTTP API
The HTTP API is the primary, low level API for syncing data with Electric:
```yaml
openapi: 3.1.0
info:
title: Electric HTTP API
description: |-
[HTTP API](https://electric-sql.com/docs/api/http) to sync
partial replicas of your Postgres data into local apps
and services.
See the [Electric documentation](https://electric-sql.com/docs/intro)
for more information.
version: 1.0.8
servers:
- url: http://localhost:3000
description: Local server
components:
parameters:
secret:
name: secret
in: query
schema:
type: string
example: 1U6ItbhoQb4kGUU5wXBLbxvNf
description: |-
Secret defined by the [ELECTRIC_SECRET](https://electric-sql.com/docs/api/config#electric-secret)
configuration variable. This is required unless
`ELECTRIC_INSECURE` is set to `true`. More details are
available in the [security guide](https://electric-sql.com/docs/guides/security).
paths:
/v1/shape:
get:
summary: Get Shape
description: |-
Load the initial data for a shape and poll for real-time updates.
Define your shape using the `table` and `where` parameters.
Use `offset` to fetch data from a specific position in the shape
log and the `live` parameter to consume real-time updates.
parameters:
# Query parameters
- name: table
in: query
schema:
type: string
examples:
simple:
value: issues
summary: the issues table in the public schema
qualified:
value: foo.issues
summary: the issues table in the foo schema
required: true
description: |-
Root table of the shape. Must match a table in your Postgres database.
Can be just a tablename, or can be prefixed by the database schema
using a `.` delimiter, such as `foo.issues`. If you don't provide
a schema prefix, then the table is assumed to be in the `public.` schema.
- name: offset
in: query
schema:
type: string
examples:
initial_sync:
value: "-1"
summary: sync the shape from the start
ongoing_sync:
value: "26800584_4"
summary: continue syncing from offset `26800584_4`
required: true
description: |-
The offset in the shape stream. This is like a cursor that specifies
the position in the shape log to request data from.
When making an initial request to sync a shape from scratch, you
**must** set the `offset` to `-1`. Then, when continuing to
sync data, you should set the `offset` to the last offset you have
already recieved, to continue syncing new data from that position
in the stream.
when `offset` is not `-1` then you must also provide
the shape's `handle`.
- name: live
in: query
schema:
type: boolean
description: |-
Whether to wait for live updates or not.
When the `live` parameter is omitted or set to `false`, the server
will always return immediately, with any data it has, followed by an
up-to-date message.
Once you're up-to-date, you should set the `live` parameter to `true`.
This puts the server into live mode, where it will hold open the
connection, waiting for new data arrive.
This allows you to implement a long-polling strategy to consume
real-time updates.
- name: cursor
in: query
schema:
type: string
description: |-
This is a cursor generated by the server during live requests. It helps bust caches for
responses from previous long-polls.
- name: handle
in: query
schema:
type: string
example: "3833821-1721812114261"
description: |-
The shape handle returned by the initial shape request.
This is a required parameter when this is not an initial sync request,
i.e. when offset is not `-1`.
- name: where
in: query
schema:
type: string
description: |-
Optional where clause to filter rows in the `table`.
This should be a valid PostgreSQL WHERE clause using SQL syntax.
For more details on what is supported and what is optimal,
see the [where clause documentation](https://electric-sql.com/docs/guides/shapes#where-clause).
If this where clause uses a positional parameter, it's value must be provided under `params[n]=`
query parameter.
examples:
title_filter:
value: '"title=''Electric''"'
summary: Only include rows where the title is 'Electric'.
status_filter:
value: '"status IN (''backlog'', ''todo'')"'
summary: Only include rows whose status is either 'backlog' or 'todo'.
- name: params
in: query
style: deepObject
explode: true
schema:
type: object
pattenProperties:
'^\d+$':
type: string
description: |-
Optional params to replace inside the where clause. Uses an "exploded object" syntax (see examples).
These values will be safely interpolated inside the where clause, so you don't need to worry about
escaping user input when building a where clause.
If where clause mentions a posisional parameter, it becomes required to provide it.
examples:
params:
value:
1: value1
summary: replace placeholder `$1` inside the where clause with `value1`
- name: columns
in: query
schema:
type: string
description: |-
Optional list of columns to include in the rows from the `table`.
They should always include the primary key columns, and should be formed
as a comma separated list of column names exactly as they are in the database schema.
If the identifier was defined as case sensitive and/or with special characters, then\
you must quote it in the `columns` parameter as well.
examples:
select_columns:
value: "id,title,status"
summary: Only include the id, title, and status columns.
select_columns_special:
value: 'id,"Status-Check"'
summary: Only include id and Status-Check columns, quoting the identifiers where necessary.
- name: replica
in: query
schema:
type: string
enum:
- default
- full
description: |-
Modifies the data sent in update and delete change messages.
When `replica=default` (the default) only changed columns are
included in the `value` of an update message and only the primary
keys are sent for a delete.
When set to `full` the entire row will be sent for updates and
deletes. `old_value` will also be present on update messages,
containing the previous value for changed columns.
insert operations always include the full row,
in either mode.
- $ref: '#/components/parameters/secret'
# Headers
- name: If-None-Match
in: header
schema:
type: string
# TODO: is this description below correct?
description: Re-validate the shape if the etag doesn't match.
responses:
"200":
description: The shape request was successful.
headers:
cache-control:
schema:
type: string
example: "public, max-age=60, stale-while-revalidate=300"
description: |-
Cache control header as a string of comma separated directives.
Supported directives are: `max-age`, `stale-while-revalidate`.
etag:
schema:
type: string
example: "3833821-1721812114261:26800584_4:26800584_4"
description: |-
Etag header specifying the shape handle and offset for efficient caching.
In the format `{shape_handle}:{start_offset}:{end_offset}`.
electric-cursor:
schema:
type: string
example: "1674440"
description: |-
If present, provides a cursor to use as the value of the `cursor`
parameter in the next `live` mode long polling request.
This works around some inconsistent request coalescing behaviour
with different CDNs.
electric-offset:
schema:
type: string
example: "26800584_4"
description: |-
The latest offset in the batch of data you have recieved.
If no data is returned, this will be equal to the `offset` parameter
you have provided.
Must be used as the value of the `offset` parameter in your
next request.
electric-handle:
schema:
type: string
example: "3833821-1721812114261"
description: |-
The shape handle.
Must be provided as the value of the `handle` parameter when making
subsequent requests where `offset` is not `-1`.
electric-schema:
schema:
type: string
example: '{"id":{"type":"int4","dimensions":0},"title":{"type":"text","dimensions":0},"status":{"type":"text","dimensions":0,"max_length":8}}'
description: |-
Only present on responses to non-live requests.
A JSON string of an object that maps column names to the corresponding schema object.
The schema object contains the type of the column, the number of dimensions, and possibly additional properties.
Non-array types have a dimension of `0`, while array types have a dimension of 1 or more.
For instance, an array of booleans would have a type of `bool` and a dimension of `1`.
Some types provide additional properties,
e.g.: `VARCHAR(8)` has an additional `“max_length": 8` property,
`BPCHAR(9)` has an additional `"length": 9` property,
`TIME(3)` has an additional `"precision": 3` property,
`NUMERIC(8,5)` has additional `"precision": 8` and `"scale": 5` properties,
`INTERVAL(4)` has an additional `"precision": 4` property,
`INTERVAL MINUTE TO SECOND` has an additional `"fields": "MINUTE TO SECOND"` property,
`BIT(5)` has an additional `"length": 5` property.
electric-up-to-date:
schema:
description: |-
If present, this header indicates that the response ends with
an `up-to-date` control message, indicating that the client has
recieved all of the data that the server is aware of and can
safely process/apply any accumulated messages.
content:
application/json:
schema:
type: array
description: Array of message objects
items:
type: object
description: Message object
properties:
headers:
type: object
description: |-
Metadata about the message.
Messages can be `control` messages, providing information or
instructions to the client. Or they can be operations that
performed a certain `operation` on a row of data in the shape.
properties:
control:
type: string
enum:
- up-to-date
- must-refetch
operation:
type: string
enum:
- insert
- update
- delete
description: The type of operation that is performed on the row of the shape that is identified by the `key`.
lsn:
type: string
description: |-
The logical sequence number of the operation.
Only present on operations that were received from the event stream.
It's missing on initial query results and on compacted items.
Operations with the same LSN were committed in the same transaction and
can be ordered by `op_position` within the same LSN.
op_position:
type: string
description: |-
The position of the operation in the transaction.
Only present on operations that were received from the event stream.
It's missing on initial query results and on compacted items.
last:
type: boolean
description: |-
Whether this is the last operation in the transaction for this shape.
Last operation in a transaction for the shape does not mean a last
operation in the transaction for the database.
Only present on operations that were received from the event stream.
It's missing on initial query results and on compacted items.
txids:
type: array
description: |-
The list of transaction IDs that this operation was part of.
Currently, this will only contain a single transaction ID, but future
stream processors may merge operations from multiple transactions into a single
operation in the event stream.
key:
type: string
description: Row ID
value:
type: object
description: |-
The row data.
this does not necessarily contain the whole row:
- for inserts it will contain the whole row
- for updates it will contain the primary key and the changed values
- for deletes it will contain just the primary key
The values are strings that are formatted according to Postgres' display settings.
Some Postgres types support several display settings, we format values consistently according to the following display settings:
- `bytea_output = 'hex'`
- `DateStyle = 'ISO, DMY'`
- `TimeZone = 'UTC'`
- `IntervalStyle = 'iso_8601'`
- `extra_float_digits = 1`
old_value:
type: object
description: |-
The previous value for changed columns on an update.
Only present on update messages when `replica=full`.
example:
- headers:
operation: insert
lsn: 1234
op_position: 0
key: issue-1
value:
id: issue-1
title: Electric
status: backlog
- headers:
operation: insert
lsn: 1234
op_position: 7
key: issue-2
value:
id: issue-2
title: Hello
status: backlog
- headers:
control: up-to-date
"400":
description: Bad request.
"404":
description: Database not found.
"409":
description:
The requested offset for the given shape no longer exists.
Client should sync the shape using the relative path from the location header.
headers:
location:
schema:
type: string
description: Relative path for syncing the latest version of the requested shape.
electric-handle:
schema:
type: string
description: Handle of the new shape that must be used in client requests from now on.
content:
application/json:
schema:
type: array
description: Array of message objects
items:
type: object
description: Message object
properties:
headers:
type: object
description: |-
Metadata describing the control message.
The `control` message returned will be a `must-refetch` message,
which a client should detect and throw away any local data and
re-sync from scratch using the new shape handle available in the
`electric-handle` header of the response.
properties:
control:
type: "string"
enum:
- must-refetch
example:
- headers:
control: must-refetch
"429":
description:
Too many requests. The server is busy with other requests, potentially
because of high contention on the underlying database. Retry after a little time.
content:
application/json:
schema:
type: object
properties:
message:
type: string
description: Error message
example:
message: "Could not establish connection to database - try again later"
delete:
summary: Delete Shape
description: |-
Deletes the shape from the Electric sync engine.
This clears the shape log and forces any clients requesting the shape
to create a new shape and resync from scratch.
**NOTE** Delete shape only works if Electric is configured to `allow_shape_deletion`.
parameters:
# Query parameters
- name: table
in: query
schema:
type: string
examples:
simple:
value: issues
summary: the issues table in the public schema
qualified:
value: foo.issues
summary: the issues table in the foo schema
required: true
description: |-
The name of the table for which to delete the shape.
Can be qualified by the schema name.
- name: source_id
in: query
schema:
type: string
description: |-
The ID of the database from which to delete the shape.
This is required only if Electric manages several databases.
- name: handle
in: query
schema:
type: string
example: "3833821-1721812114261"
description:
Optional, deletes the current shape if it matches the `handle` provided.
If not provided, deletes the current shape.
- $ref: '#/components/parameters/secret'
responses:
"202":
description: |-
Accepted. The shape has been deleted (or to be more precise: the shape handle
has been invalidated and the storage will be cleaned up eventually).
"400":
description: Bad request.
"404":
description: Database or shape not found (or shape deletion is not enabled).
```
# TypeScript client
The TypeScript client is a higher-level client interface that wraps the [HTTP API](/docs/api/http) to make it easy to sync [Shapes](/docs/guides/shapes) in the web browser and other JavaScript environments.
Defined in [packages/typescript-client](https://github.com/electric-sql/electric/tree/main/packages/typescript-client), it provides a [ShapeStream](#shapestream) primitive to subscribe to a change stream and a [Shape](#shape) primitive to get the whole shape whenever it changes.
## Install
The client is published on NPM as [`@electric-sql/client`](https://www.npmjs.com/package/@electric-sql/client):
```sh
npm i @electric-sql/client
```
## How to use
The client exports:
- a [`ShapeStream`](#shapestream) class for consuming a [Shape Log](../http#shape-log); and
- a [`Shape`](#shape) class for materialising the log stream into a shape object
These compose together, e.g.:
```ts
import { ShapeStream, Shape } from '@electric-sql/client'
const stream = new ShapeStream({
url: `http://localhost:3000/v1/shape`,
params: {
table: 'items'
}
})
const shape = new Shape(stream)
// The callback runs every time the Shape data changes.
shape.subscribe(data => console.log(data))
```
### ShapeStream
The [`ShapeStream`](https://github.com/electric-sql/electric/blob/main/packages/typescript-client/src/client.ts#L163) is a low-level primitive for consuming a [Shape Log](../http#shape-log).
Construct with a shape definition and options and then either subscribe to the shape log messages directly or pass into a [`Shape`](#shape) to materialise the stream into an object.
```tsx
import { ShapeStream } from '@electric-sql/client'
// Passes subscribers rows as they're inserted, updated, or deleted
const stream = new ShapeStream({
url: `http://localhost:3000/v1/shape`,
params: {
table: `foo`
}
})
stream.subscribe(messages => {
// messages is an array with one or more row updates
// and the stream will wait for all subscribers to process them
// before proceeding
})
```
#### Options
The `ShapeStream` constructor takes [the following options](https://github.com/electric-sql/electric/blob/main/packages/typescript-client/src/client.ts#L39):
```ts
/**
* Options for constructing a ShapeStream.
*/
export interface ShapeStreamOptions {
/**
* The full URL to where the Shape is hosted. This can either be the Electric
* server directly or a proxy. E.g. for a local Electric instance, you might
* set `http://localhost:3000/v1/shape`
*/
url: string
/**
* PostgreSQL-specific parameters for the shape.
* This includes table, where clause, columns, and replica settings.
*/
params: {
/**
* The root table for the shape.
*/
table: string
/**
* The where clauses for the shape.
*/
where?: string
/**
* Positional where clause paramater values. These will be passed to the server
* and will substitute `$i` parameters in the where clause.
*
* It can be an array (positional arguments start at 1, the array will be mapped
* accordingly), or an object with keys matching the used positional parameters in the where clause.
*
* If where clause is `id = $1 or id = $2`, params must have keys `"1"` and `"2"`, or be an array with length 2.
*/
params?: Record<`${number}`, string> | string[]
/**
* The columns to include in the shape.
* Must include primary keys, and can only include valid columns.
*/
columns?: string[]
/**
* If `replica` is `default` (the default) then Electric will only send the
* changed columns in an update.
*
* If it's `full` Electric will send the entire row with both changed and
* unchanged values. `old_value` will also be present on update messages,
* containing the previous value for changed columns.
*
* Setting `replica` to `full` will obviously result in higher bandwidth
* usage and so is not recommended.
*/
replica?: Replica
/**
* Additional request parameters to attach to the URL.
* These will be merged with Electric's standard parameters.
*/
[key: string]: string | string[] | undefined
}
/**
* The "offset" on the shape log. This is typically not set as the ShapeStream
* will handle this automatically. A common scenario where you might pass an offset
* is if you're maintaining a local cache of the log. If you've gone offline
* and are re-starting a ShapeStream to catch-up to the latest state of the Shape,
* you'd pass in the last offset and shapeHandle you'd seen from the Electric server
* so it knows at what point in the shape to catch you up from.
*/
offset?: Offset
/**
* Similar to `offset`, this isn't typically used unless you're maintaining
* a cache of the shape log.
*/
shapeHandle?: string
/**
* HTTP headers to attach to requests made by the client.
* Can be used for adding authentication headers.
*/
headers?: Record
/**
* Automatically fetch updates to the Shape. If you just want to sync the current
* shape and stop, pass false.
*/
subscribe?: boolean
/**
* Signal to abort the stream.
*/
signal?: AbortSignal
/**
* Custom fetch client implementation.
*/
fetchClient?: typeof fetch
/**
* Custom parser for handling specific Postgres data types.
*/
parser?: Parser
/**
* A function for handling errors.
* This is optional, when it is not provided any shapestream errors will be thrown.
* If the function returns an object containing parameters and/or headers
* the shapestream will apply those changes and try syncing again.
* If the function returns void the shapestream is stopped.
*/
onError?: ShapeStreamErrorHandler
backoffOptions?: BackoffOptions
}
type RetryOpts = {
params?: ParamsRecord
headers?: Record
}
type ShapeStreamErrorHandler = (
error: Error
) => void | RetryOpts | Promise
```
certain parameter names are reserved for Electric's internal use and cannot be used in custom params:
- `offset`
- `handle`
- `live`
- `cursor`
- `source_id`
The following PostgreSQL-specific parameters should be included within the `params` object:
- `table` - The root table for the shape
- `where` - SQL where clause for filtering rows
- `params` - Values for positional parameters in the where clause (e.g. `$1`)
- `columns` - List of columns to include
- `replica` - Controls whether to send full or partial row updates
Example with PostgreSQL-specific parameters:
```typescript
const stream = new ShapeStream({
url: 'http://localhost:3000/v1/shape',
params: {
table: 'users',
where: 'age > $1',
columns: ['id', 'name', 'email'],
params: ["18"],
replica: 'full'
}
})
```
You can also include additional custom parameters in the `params` object alongside the PostgreSQL-specific ones:
```typescript
const stream = new ShapeStream({
url: 'http://localhost:3000/v1/shape',
params: {
table: 'users',
customParam: 'value'
}
})
```
#### Dynamic Options
Both `params` and `headers` support function options that are resolved when needed. These functions can be synchronous or asynchronous:
```typescript
const stream = new ShapeStream({
url: 'http://localhost:3000/v1/shape',
params: {
table: 'items',
userId: () => getCurrentUserId(),
filter: async () => await getUserPreferences()
},
headers: {
'Authorization': async () => `Bearer ${await getAccessToken()}`,
'X-Tenant-Id': () => getCurrentTenant()
}
})
```
Function options are resolved in parallel, making this pattern efficient for multiple async operations like fetching auth tokens and user context. Common use cases include:
- Authentication tokens that need to be refreshed
- User-specific parameters that may change
- Dynamic filtering based on current state
- Multi-tenant applications where context determines the request
#### Messages
A `ShapeStream` consumes and emits a stream of messages. These messages can either be a `ChangeMessage` representing a change to the shape data:
```ts
export type ChangeMessage = Row> = {
key: string
value: T
old_value?: Partial // Only provided for updates if `replica` is `full`
headers: Header & { operation: `insert` | `update` | `delete` }
offset: Offset
}
```
Or a `ControlMessage`, representing an instruction to the client, as [documented here](../http#control-messages).
#### Parsing and Custom Parsing
To understand the type of each column in your shape, you can check the `electric-schema` response header in the shape response. This header contains the PostgreSQL type information for each column.
By default, when constructing a `ChangeMessage.value`, `ShapeStream` parses the following Postgres types into native JavaScript values:
- `int2`, `int4`, `float4`, and `float8` are parsed into JavaScript `Number`
- `int8` is parsed into a JavaScript `BigInt`
- `bool` is parsed into a JavaScript `Boolean`
- `json` and `jsonb` are parsed into JavaScript values/arrays/objects using `JSON.parse`
- Postgres Arrays are parsed into JavaScript arrays, e.g. "{{1,2},{3,4}}" is parsed into `[[1,2],[3,4]]`
All other types aren't parsed and are left in the string format as they were served by the HTTP endpoint.
You can extend the default parsing behavior by defining custom parsers for specific PostgreSQL data types. This is particularly useful when you want to transform string representations of dates, JSON, or other complex types into their corresponding JavaScript objects. Here's an example:
```ts
// Define row type
type CustomRow = {
id: number
title: string
created_at: Date // We want this to be a Date object
}
const stream = new ShapeStream({
url: 'http://localhost:3000/v1/shape',
params: {
table: 'posts'
},
parser: {
// Parse timestamp columns into JavaScript Date objects
timestamptz: (date: string) => new Date(date)
}
})
const shape = new Shape(stream)
shape.subscribe(data => {
console.log(data.created_at instanceof Date) // true
})
```
#### Replica full
By default Electric sends the modified columns in an update message, not the complete row. - an `insert` operation contains the full row
- an `update` operation contains the primary key column(s) and the changed columns
- a `delete` operation contains just the primary key column(s)
If you'd like to receive the full row value for updates and deletes, you can set the `replica` option of your `ShapeStream` to `full`:
```tsx
import { ShapeStream } from "@electric-sql/client"
const stream = new ShapeStream({
url: `http://localhost:3000/v1/shape`,
params: {
table: `foo`,
replica: `full`
}
})
```
When using `replica=full`, the returned rows will include:
- on `insert` the new value in `msg.value`
- on `update` the new value in `msg.value` and the previous value in `msg.old_value` for any changed columns - the full previous state can be reconstructed by combining the two
- on `delete` the full previous value in `msg.value`
This is less efficient and will use more bandwidth for the same shape (especially for tables with large static column values). Note also that shapes with different `replica` settings are distinct, even for the same table and where clause combination.
#### Authentication with Dynamic Tokens
When working with authentication tokens that need to be refreshed, the recommended approach is to use a function-based header:
```ts
const stream = new ShapeStream({
url: 'http://localhost:3000/v1/shape',
params: {
table: 'items'
},
headers: {
'Authorization': async () => `Bearer ${await getToken()}`
},
onError: async (error) => {
if (error instanceof FetchError && error.status === 401) {
// Force token refresh
await refreshToken()
// Return empty object to trigger a retry with the new token
// that will be fetched by our function-based header
return {}
}
// Rethrow errors we can't handle
throw error
}
})
```
This approach automatically handles token refresh as the function is called each time a request is made. You can also combine this with an error handler for more complex scenarios.
### Shape
The [`Shape`](https://github.com/electric-sql/electric/blob/main/packages/typescript-client/src/shape.ts) is the main primitive for working with synced data.
It takes a [`ShapeStream`](#shapestream), consumes the stream, materialises it into a Shape object and notifies you when this changes.
```tsx
import { ShapeStream, Shape } from '@electric-sql/client'
const stream = new ShapeStream({
url: `http://localhost:3000/v1/shape`,
params: {
table: `foo`
}
})
const shape = new Shape(stream)
// Returns promise that resolves with the latest shape data once it's fully loaded
await shape.rows
// passes subscribers shape data when the shape updates
shape.subscribe(({ rows }) => {
// rows is an array of the latest value of each row in a shape.
})
```
### Subscribing to updates
The `subscribe` method allows you to receive updates whenever the shape changes. It takes two arguments:
1. A message handler callback (required)
2. An error handler callback (optional)
```typescript
const stream = new ShapeStream({
url: 'http://localhost:3000/v1/shape',
params: {
table: 'issues'
}
})
// Subscribe to both message and error handlers
stream.subscribe(
(messages) => {
// Process messages
console.log('Received messages:', messages)
},
(error) => {
// Get notified about errors
console.error('Error in subscription:', error)
}
)
```
You can have multiple active subscriptions to the same stream. Each subscription will receive the same messages, and the stream will wait for all subscribers to process their messages before proceeding.
To stop receiving updates, you can either:
- Unsubscribe a specific subscription using the function returned by `subscribe`
- Unsubscribe all subscriptions using `unsubscribeAll()`
```typescript
// Store the unsubscribe function
const unsubscribe = stream.subscribe(messages => {
console.log('Received messages:', messages)
})
// Later, unsubscribe this specific subscription
unsubscribe()
// Or unsubscribe all subscriptions
stream.unsubscribeAll()
```
### Error Handling
The ShapeStream provides two ways to handle errors:
1. Using the `onError` handler (recommended):
```typescript
const stream = new ShapeStream({
url: 'http://localhost:3000/v1/shape',
params: {
table: 'issues'
},
onError: (error) => {
// Handle all stream errors here
if (error instanceof FetchError) {
console.error('HTTP error:', error.status, error.message)
} else {
console.error('Stream error:', error)
}
}
})
```
If no `onError` handler is provided, the ShapeStream will throw errors that occur during streaming.
2. Individual subscribers can optionally handle errors specific to their subscription:
```typescript
stream.subscribe(
(messages) => {
// Process messages
},
(error) => {
// Handle errors for this specific subscription
console.error('Subscription error:', error)
}
)
```
#### Error Types
The following error types may be encountered:
**Initialization Errors** (thrown by constructor):
- `MissingShapeUrlError`: Missing required URL parameter
- `InvalidSignalError`: Invalid AbortSignal instance
- `ReservedParamError`: Using reserved parameter names
**Runtime Errors** (handled by `onError` or thrown):
- `FetchError`: HTTP errors during shape fetching
- `FetchBackoffAbortError`: Fetch aborted using AbortSignal
- `MissingShapeHandleError`: Missing required shape handle
- `ParserNullValueError`: Parser encountered NULL value in a column that doesn't allow NULL values
See the [Demos](/demos) and [integrations](/docs/integrations/react) for more usage examples.
# React
React is a popular library for building declarative, component-based UI.
## Electric and React
Electric has first-class support for React. We maintain a [react-hooks](https://github.com/electric-sql/electric/tree/main/packages/react-hooks) package that provides a number of [React Hooks](https://react.dev/reference/react/hooks) to bind Shape data to your components.
## How to use
### Install
The package is published on NPM as [`@electric-sql/react`](https://www.npmjs.com/package/@electric-sql/react). Install using e.g.:
```shell
npm i @electric-sql/react
```
### `useShape`
[`useShape`](https://github.com/electric-sql/electric/blob/main/packages/react-hooks/src/react-hooks.tsx#L131) binds a materialised [Shape](/docs/api/clients/typescript#shape) to a state variable. ```tsx
import { useShape } from '@electric-sql/react'
const MyComponent = () => {
const { isLoading, data } = useShape<{title: string}>({
url: `http://localhost:3000/v1/shape`,
params: {
table: 'items'
}
})
if (isLoading) {
return Loading ...
}
return (
{data.map(item =>
{item.title}
)}
)
}
```
You can also include additional PostgreSQL-specific parameters:
```tsx
const MyFilteredComponent = () => {
const { isLoading, data } = useShape<{id: number, title: string}>({
url: `http://localhost:3000/v1/shape`,
params: {
table: 'items',
where: 'status = \'active\'',
columns: ['id', 'title']
}
})
// ...
}
```
`useShape` takes the same options as [ShapeStream](/docs/api/clients/typescript#options). The return value is a `UseShapeResult`:
```tsx
export interface UseShapeResult = Row> {
/**
* The array of rows that make up the materialised Shape.
* @type {T[]}
*/
data: T[]
/**
* The Shape instance used by this useShape
* @type {Shape}
*/
shape: Shape
/** True during initial fetch. False afterwise. */
isLoading: boolean
/** Unix time at which we last synced. Undefined when `isLoading` is true. */
lastSyncedAt?: number
/** Unix time at which we last synced. Undefined when `isLoading` is true. */
isError: boolean
error: Shape[`error`]
}
```
### `preloadShape`
[`preloadShape`](https://github.com/electric-sql/electric/blob/main/packages/react-hooks/src/react-hooks.tsx#L17) is useful to call in route loading functions or elsewhere when you want to ensure Shape data is loaded before rendering a route or component.
```tsx
export const clientLoader = async () => {
return await preloadShape({
url: `http://localhost:3000/v1/shape`,
params: {
table: 'items'
}
})
}
```
You can also preload filtered data:
```tsx
export const filteredLoader = async () => {
return await preloadShape({
url: `http://localhost:3000/v1/shape`,
params: {
table: 'items',
where: 'category = \'electronics\'',
columns: ['id', 'name', 'price']
}
})
}
```
It takes the same options as [ShapeStream](/docs/api/clients/typescript#options).
### `getShapeStream`
[`getShapeStream`](https://github.com/electric-sql/electric/blob/main/packages/react-hooks/src/react-hooks.tsx#L30) get-or-creates a `ShapeStream` off the global cache.
```tsx
const itemsStream = getShapeStream- ({
url: `http://localhost:3000/v1/shape`,
params: {
table: 'items'
}
})
```
### `getShape`
[`getShape`](https://github.com/electric-sql/electric/blob/main/packages/react-hooks/src/react-hooks.tsx#L49) get-or-creates a `Shape` off the global cache.
```tsx
const itemsShape = getShape
- ({
url: `http://localhost:3000/v1/shape`,
params: {
table: 'items'
}
})
```
### How to abort a shape subscription — `AbortController`
If you'd like to abort the shape's subscription to live updates e.g. after unmounting a component or navigating away from a route, you can use the [`AbortController`](https://developer.mozilla.org/en-US/docs/Web/API/AbortController).
The following is a simple example which aborts the subscription when the component is unmounted.
```tsx
function MyComponent() {
const [controller, _] = useState(new AbortController())
const { data } = useShape({
...
signal: controller.signal
})
useEffect(() => {
return () {
// Live updates are now disabled.
controller.abort()
}
}, [])
...
}
```
if you have multiple components using the same component, this will stop updates for all subscribers. Which is probably not what you want. We plan to add a better API for unsubscribing from updates & cleaning up shapes that are no longer needed. If interested, please file an issue to start a discussion.
# Shapes
Shapes are the core primitive for controlling sync in the ElectricSQL system.
## What is a Shape?
Electric syncs little subsets of your Postgres data into local apps and services. Those subsets are defined using Shapes.
### Little subsets
Imagine a Postgres database in the cloud with lots of data stored in it. It's often impractical or unwanted to sync all of this data over the network onto a local device.
A shape is a way of defining a subset of that data that you'd like to sync into a local app. Defining shapes allows you to sync just the data you want and just the data that's practical to sync onto the local device.
A client can choose to sync one shape, or lots of shapes. Many clients can sync the same shape. Multiple shapes can overlap.
## Defining shapes
Shapes are defined by:
- a [table](#table), such as `items`
- an optional [where clause](#where-clause) to filter which rows are included in the shape
- an optional [columns](#columns) clause to select which columns are included
A shape contains all of the rows in the table that match the where clause, if provided. If a columns clause is provided, the synced rows will only contain those selected columns.
[WARNING] > Shapes are currently [single table](#single-table). Shape definitions are [immutable](#immutable).
### Table
This is the root table of the shape. All shapes must specify a table and it must match a table in your Postgres database.
The value can be just a tablename like `projects`, or can be a qualified tablename prefixed by the database schema using a `.` delimiter, such as `foo.projects`. If you don't provide a schema prefix, then the table is assumed to be in the `public.` schema.
#### Partitioned Tables
Electric supports subscribing to [declaratively partitioned tables](https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE), both individual partitions and the root table of all partitions.
Consider the following partitioned schema:
```sql
CREATE TABLE measurement (
city_id int not null,
logdate date not null,
peaktemp int,
unitsales int
) PARTITION BY RANGE (logdate);
CREATE TABLE measurement_y2025m02 PARTITION OF measurement
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
CREATE TABLE measurement_y2025m03 PARTITION OF measurement
FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');
```
We create 2 shapes, one on the root table `measurement` and one on the `measurement_y2025m03` partition:
```sh
curl -i 'http://localhost:3000/v1/shape?table=measurement&offset=-1'
curl -i 'http://localhost:3000/v1/shape?table=measurement_y2025m03&offset=-1'
```
The shape based on the `measurement_y2025m03` partition will only receive writes that fall within the partition range, that is with `logdate >= '2025-02-01' AND logdate < '2025-03-01'` whereas the shape based on the root `measurements` table will receive all writes to all partitions.
### Where clause
Shapes can define an optional where clause to filter out which rows from the table are included in the shape. Only rows that match the where clause will be included.
The where clause must be a valid [PostgreSQL query expression](https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-WHERE) in SQL syntax, e.g.:
- `title='Electric'`
- `status IN ('backlog', 'todo')`
Where clauses support:
1. columns of numerical types, `boolean`, `uuid`, `text`, `interval`, date and time types (with the exception of `timetz`), [Arrays](https://github.com/electric-sql/electric/issues/1767) (but not yet [Enums](https://github.com/electric-sql/electric/issues/1709), except when explicitly casting them to `text`)
2. operators that work on those types: arithmetics, comparisons, logical/boolean operators like `OR`, string operators like `LIKE`, etc.
3. positional placeholders, like `$1`, values for which must be provided alongside the where clause.
You can use `AND` and `OR` to group multiple conditions, e.g.:
- `title='Electric' OR title='SQL'`
- `title='Electric' AND status='todo'`
Where clauses are limited in that they:
1. can only refer to columns in the target row
1. can't perform joins or refer to other tables
1. can't use non-deterministic SQL functions like `count()` or `now()`
When constructing a where clause with user input as a filter, it's recommended to use a positional placeholder (`$1`) to avoid
SQL injection-like situations. if filtering a table on a user id, it's better to use `where=user = $1` with
`params[1]=provided_id`. If not using positional placeholders and constructing where clauses yourself, take care to SQL-escape user input.
See [`known_functions.ex`](https://github.com/electric-sql/electric/blob/main/packages/sync-service/lib/electric/replication/eval/env/known_functions.ex) and [`parser.ex`](https://github.com/electric-sql/electric/blob/main/packages/sync-service/lib/electric/replication/eval/parser.ex) for the source of truth on which types, operators and functions are currently supported. If you need a feature that isn't supported yet, please [raise a feature request](https://github.com/electric-sql/electric/discussions/categories/feature-requests).
[WARNING] > Where clause evaluation impacts [data throughput](#throughput). Some where clauses are [optimized](#optimized-where-clauses).
### Columns
This is an optional list of columns to select. When specified, only the columns listed are synced. When not specified all columns are synced.
- `columns=id,title,status` - only include the `id`, `title` and `status` columns
- `columns=id,"Status-Check"` - only include `id` and `Status-Check` columns, quoting the identifiers where necessary
The specified columns must always include the primary key column(s), and should be formed as a comma separated list of column names - exactly as they are in the database schema. If the identifier was defined as case sensitive and/or with special characters, then you must quote it.
## Subscribing to shapes
Local clients establish shape subscriptions, typically using [client libraries](/docs/api/clients/typescript). These sync data from the [Electric sync engine](/product/electric) into the client using the [HTTP API](/docs/api/http).
The sync service maintains shape subscriptions and streams any new data and data changes to the local client. In the client, shapes can be held as objects in memory, using a [`useShape`](/docs/integrations/react) hook, or in a normalised store or database like [PGlite](/product/pglite).
### HTTP
You can sync shapes manually using the
GET /v1/shape endpoint. First make an initial sync request to get the current data for the Shape, such as:
```sh
curl -i 'http://localhost:3000/v1/shape?table=foo&offset=-1'
```
Then switch into a live mode to use long-polling to receive real-time updates:
```sh
curl -i 'http://localhost:3000/v1/shape?table=foo&live=true&offset=...&handle=...'
```
These requests both return an array of [Shape Log](/docs/api/http#shape-log) entries. You can process these manually, or use a higher-level client.
### Typescript
You can use the [Typescript Client](/docs/api/clients/typescript) to process the Shape Log and materialised it into a `Shape` object for you.
First install using:
```sh
npm i @electric-sql/client
```
Instantiate a `ShapeStream` and materialise into a `Shape`:
```ts
import { ShapeStream, Shape } from '@electric-sql/client'
const stream = new ShapeStream({
url: `http://localhost:3000/v1/shape`,
params: {
table: `foo`
}
})
const shape = new Shape(stream)
// Returns promise that resolves with the latest shape data once it's fully loaded
await shape.rows
```
You can register a callback to be notified whenever the shape data changes:
```ts
shape.subscribe(({ rows }) => {
// rows is an array of the latest value of each row in a shape.
})
```
Or you can use framework integrations like the [`useShape`](/docs/integrations/react) hook to automatically bind materialised shapes to your components.
See the [Quickstart](/docs/quickstart) and [HTTP API](/docs/api/http) docs for more information.
# Auth
How to do authentication and authorization with Electric. Including examples for [proxy](#proxy-auth) and [gatekeeper](#gatekeeper-auth) auth.
How to do auth with Electric.
Including examples for [proxy](#proxy-auth) and [gatekeeper](#gatekeeper-auth) auth.
## It's all HTTP
The golden rule with Electric is that it's [all just HTTP](/docs/api/http).
So when it comes to auth, you can use existing primitives, such as your API, middleware and external authorization services.
### Shapes are resources
With Electric, you sync data using [Shapes](/docs/guides/shapes) and shapes are just resources.
You access them by making a request to `GET /v1/shape`, with the [shape definition](/docs/guides/shapes#defining-shapes) in the query string (`?table=items`, etc.). You can authorise access to them exactly the same way you would any other web resource.
### Requests can be proxied
When you make a request to Electric, you can route it through an HTTP proxy or middleware stack.
You can proxy the request in your cloud, or at the edge, [in-front of a CDN](#cdn-proxy). Your auth logic can query your database, or call an external service. It's all completely up-to-you.
### Rules are optional
You *don't* have to codify your auth logic into a database rule system.
There's no need to use database rules to [secure data access](/docs/guides/security) when your sync engine runs over standard HTTP.
## Patterns
The two patterns we recommend and describe below, with code and examples, are:
- [proxy auth](#proxy-auth) - authorising Shape requests using a proxy
- [gatekeeper auth](#gatekeeper-auth) - using your API to generate shape-scoped access tokens
### Proxy auth
[WARNING] > See the [proxy-auth example](https://github.com/electric-sql/electric/tree/main/examples/proxy-auth) on GitHub for an example that implements this pattern.
The simplest pattern is to authorise Shape requests using a reverse-proxy.
The proxy can be your API, or a seperate proxy service or edge-function. When you make a request to sync a shape, route it via your API/proxy, validate the user credentials and shape parameters, and then only proxy the data through if authorized.
1. add an `Authorization` header to your [`GET /v1/shape`](/docs/api/http#syncing-shapes) request
2. use the header to check that the client exists and has access to the shape
3. if not, return a `401` or `403` status to tell the client it doesn't have access
4. if the client does have access, proxy the request to Electric and stream the response back to the client
#### Example
When using the [Typescript client](/docs/api/clients/typescript), you can pass in a [`headers` option](/docs/api/clients/typescript#options) to add an `Authorization` header.
```tsx
const usersShape = (): ShapeStreamOptions => {
const user = loadCurrentUser()
return {
url: new URL(`/api/shapes/users`, window.location.origin).href,
headers: {
authorization: `Bearer ${user.token}`
}
}
}
export default function ExampleComponent () {
const { data: users } = useShape(usersShape())
}
```
Then for the `/api/shapes/users` route:
```tsx
export async function GET(
request: Request,
) {
const url = new URL(request.url)
// Construct the upstream URL
const originUrl = new URL(`http://localhost:3000/v1/shape`)
// Copy over the relevant query params that the Electric client adds
// so that we return the right part of the Shape log.
url.searchParams.forEach((value, key) => {
if ([`live`, `table`, `handle`, `offset`, `cursor`].includes(key)) {
originUrl.searchParams.set(key, value)
}
})
//
// Authentication and authorization
//
const user = await loadUser(request.headers.get(`authorization`))
// If the user isn't set, return 401
if (!user) {
return new Response(`user not found`, { status: 401 })
}
// Only query data the user has access to unless they're an admin.
if (!user.roles.includes(`admin`)) {
originUrl.searchParams.set(`where`, `"org_id" = ${user.org_id}`)
}
// When proxying long-polling requests, content-encoding &
// content-length are added erroneously (saying the body is
// gzipped when it's not) so we'll just remove them to avoid
// content decoding errors in the browser.
//
// Similar-ish problem to https://github.com/wintercg/fetch/issues/23
let resp = await fetch(originUrl.toString())
if (resp.headers.get(`content-encoding`)) {
const headers = new Headers(resp.headers)
headers.delete(`content-encoding`)
headers.delete(`content-length`)
resp = new Response(resp.body, {
status: resp.status,
statusText: resp.statusText,
headers,
})
}
return resp
}
```
# Writes
How to do local writes and write-path sync with Electric.
Includes patterns for [online writes](#online-writes), [optimistic state](#optimistic-state), [shared persistent optimistic state](#shared-persistent) and [through-the-database sync](#through-the-db).
With accompanying code in the [write-patterns example](https://github.com/electric-sql/electric/tree/main/examples/write-patterns).
## Local writes with Electric
Electric does [read-path sync](/product/electric). It syncs data out-of Postgres, into local apps and services.
Electric does not do write-path sync. It doesn't provide (or prescribe) a built-in solution for getting data back into Postgres from local apps and services.
So how do you handle local writes with Electric?
Well, the [design philosophy](/blog/2024/07/17/electric-next) behind Electric is to be composable and [integrate with your existing stack](/blog/2024/11/21/local-first-with-your-existing-api). So, just as you can sync into [any client](/docs/guides/client-development) you like, you can implement writes in any way you like, using a variety of different patterns.
## Patterns
This guide describes four different patterns for handling writes with Electric. It shows code examples and discusses trade-offs to consider when choosing between them.
1. [online writes](#online-writes)
2. [optimistic state](#optimistic-state)
3. [shared persistent optimistic state](#shared-persistent)
4. [through-the-database sync](#through-the-db)
All of the patterns use Electric for the read-path sync (i.e.: to sync data from Postgres into the local app) and use a different approach for the write-path (i.e.: how they handle local writes and get data from the local app back into Postgres).
They are introduced in order of simplicity. So the simplest and easiest to implement first and the more powerful but more complex patterns further down - where you may prefer to reach for a [framework](#tools) rather than implement yourself.
### 1. Online writes
```tsx
import React from 'react'
import { v4 as uuidv4 } from 'uuid'
import { useShape } from '@electric-sql/react'
import api from '../../shared/app/client'
import { ELECTRIC_URL, envParams } from '../../shared/app/config'
type Todo = {
id: string
title: string
completed: boolean
created_at: Date
}
export default function OnlineWrites() {
// Use Electric's `useShape` hook to sync data from Postgres
// into a React state variable.
const { isLoading, data } = useShape({
url: `${ELECTRIC_URL}/v1/shape`,
params: {
table: 'todos',
...envParams,
},
parser: {
timestamptz: (value: string) => new Date(value),
},
})
const todos = data ? data.sort((a, b) => +a.created_at - +b.created_at) : []
// Handle user input events by making requests to the backend
// API to create, update and delete todos.
async function createTodo(event: React.FormEvent) {
event.preventDefault()
const form = event.target as HTMLFormElement
const formData = new FormData(form)
const title = formData.get('todo') as string
const path = '/todos'
const data = {
id: uuidv4(),
title: title,
created_at: new Date(),
}
await api.request(path, 'POST', data)
form.reset()
}
async function updateTodo(todo: Todo) {
const path = `/todos/${todo.id}`
const data = {
completed: !todo.completed,
}
await api.request(path, 'PUT', data)
}
async function deleteTodo(event: React.MouseEvent, todo: Todo) {
event.preventDefault()
const path = `/todos/${todo.id}`
await api.request(path, 'DELETE')
}
if (isLoading) {
return
Loading -
}
// prettier-ignore
return (
)
}
```
### 2. Optimistic state
```tsx
import React, { useOptimistic, useTransition } from 'react'
import { v4 as uuidv4 } from 'uuid'
import { matchBy, matchStream } from '@electric-sql/experimental'
import { useShape } from '@electric-sql/react'
import api from '../../shared/app/client'
import { ELECTRIC_URL, envParams } from '../../shared/app/config'
type Todo = {
id: string
title: string
completed: boolean
created_at: Date
}
type PartialTodo = Partial & {
id: string
}
type Write = {
operation: 'insert' | 'update' | 'delete'
value: PartialTodo
}
export default function OptimisticState() {
const [isPending, startTransition] = useTransition()
// Use Electric's `useShape` hook to sync data from Postgres
// into a React state variable.
//
// we also unpack the `stream` from the useShape
// return value, so that we can monitor it below to detect
// local writes syncing back from the server.
const { isLoading, data, stream } = useShape({
url: `${ELECTRIC_URL}/v1/shape`,
params: {
table: 'todos',
...envParams,
},
parser: {
timestamptz: (value: string) => new Date(value),
},
})
const sorted = data ? data.sort((a, b) => +a.created_at - +b.created_at) : []
// Use React's built in `useOptimistic` hook. This provides
// a mechanism to apply local optimistic state whilst writes
// are being sent-to and syncing-back-from the server.
const [todos, addOptimisticState] = useOptimistic(
sorted,
(synced: Todo[], { operation, value }: Write) => {
switch (operation) {
case 'insert':
return synced.some((todo) => todo.id === value.id)
? synced
: [...synced, value as Todo]
case 'update':
return synced.map((todo) =>
todo.id === value.id ? { ...todo, ...value } : todo
)
case 'delete':
return synced.filter((todo) => todo.id !== value.id)
}
}
)
// These are the same event handler functions from the online
// example, extended with `startTransition` -> `addOptimisticState`
// to apply local optimistic state.
//
// the local state is applied:
//
// 1. whilst the HTTP request is being made to the API server; and
// 2. until the write syncs back through the Electric shape stream
//
// This is slightly different from most optimistic state examples
// because we wait for the sync as well as the api request.
async function createTodo(event: React.FormEvent) {
event.preventDefault()
const form = event.target as HTMLFormElement
const formData = new FormData(form)
const title = formData.get('todo') as string
const path = '/todos'
const data = {
id: uuidv4(),
title: title,
created_at: new Date(),
completed: false,
}
startTransition(async () => {
addOptimisticState({ operation: 'insert', value: data })
const fetchPromise = api.request(path, 'POST', data)
const syncPromise = matchStream(
stream,
['insert'],
matchBy('id', data.id)
)
await Promise.all([fetchPromise, syncPromise])
})
form.reset()
}
async function updateTodo(todo: Todo) {
const { id, completed } = todo
const path = `/todos/${id}`
const data = {
id,
completed: !completed,
}
startTransition(async () => {
addOptimisticState({ operation: 'update', value: data })
const fetchPromise = api.request(path, 'PUT', data)
const syncPromise = matchStream(stream, ['update'], matchBy('id', id))
await Promise.all([fetchPromise, syncPromise])
})
}
async function deleteTodo(event: React.MouseEvent, todo: Todo) {
event.preventDefault()
const { id } = todo
const path = `/todos/${id}`
startTransition(async () => {
addOptimisticState({ operation: 'delete', value: { id } })
const fetchPromise = api.request(path, 'DELETE')
const syncPromise = matchStream(stream, ['delete'], matchBy('id', id))
await Promise.all([fetchPromise, syncPromise])
})
}
if (isLoading) {
return Loading -
}
// The template below the heading is identical to the other patterns.
// prettier-ignore
return (
)
}
```
### 3. Shared persistent optimistic state
```tsx
import React, { useTransition } from 'react'
import { v4 as uuidv4 } from 'uuid'
import { subscribe, useSnapshot } from 'valtio'
import { proxyMap } from 'valtio/utils'
import { type Operation, ShapeStream } from '@electric-sql/client'
import { matchBy, matchStream } from '@electric-sql/experimental'
import { useShape } from '@electric-sql/react'
import api from '../../shared/app/client'
import { ELECTRIC_URL, envParams } from '../../shared/app/config'
const KEY = 'electric-sql/examples/write-patterns/shared-persistent'
type Todo = {
id: string
title: string
completed: boolean
created_at: Date
}
type PartialTodo = Partial & {
id: string
}
type LocalWrite = {
id: string
operation: Operation
value: PartialTodo
}
// Define a shared, persistent, reactive store for local optimistic state.
const optimisticState = proxyMap(
JSON.parse(localStorage.getItem(KEY) || '[]')
)
subscribe(optimisticState, () => {
localStorage.setItem(KEY, JSON.stringify([...optimisticState]))
})
/*
* Add a local write to the optimistic state
*/
function addLocalWrite(operation: Operation, value: PartialTodo): LocalWrite {
const id = uuidv4()
const write: LocalWrite = {
id,
operation,
value,
}
optimisticState.set(id, write)
return write
}
/*
* Subscribe to the shape `stream` until the local write syncs back through it.
* At which point, delete the local write from the optimistic state.
*/
async function matchWrite(
stream: ShapeStream,
write: LocalWrite
): Promise {
const { operation, value } = write
const matchFn =
operation === 'delete'
? matchBy('id', value.id)
: matchBy('write_id', write.id)
try {
await matchStream(stream, [operation], matchFn)
} catch (_err) {
return
}
optimisticState.delete(write.id)
}
/*
* Make an HTTP request to send the write to the API server.
* If the request fails, delete the local write from the optimistic state.
* If it succeeds, return the `txid` of the write from the response data.
*/
async function sendRequest(
path: string,
method: string,
{ id, value }: LocalWrite
): Promise {
const data = {
...value,
write_id: id,
}
let response: Response | undefined
try {
response = await api.request(path, method, data)
} catch (_err) {
// ignore
}
if (response === undefined || !response.ok) {
optimisticState.delete(id)
}
}
export default function SharedPersistent() {
const [isPending, startTransition] = useTransition()
// Use Electric's `useShape` hook to sync data from Postgres.
const { isLoading, data, stream } = useShape({
url: `${ELECTRIC_URL}/v1/shape`,
params: {
table: 'todos',
...envParams,
},
parser: {
timestamptz: (value: string) => new Date(value),
},
})
const sorted = data ? data.sort((a, b) => +a.created_at - +b.created_at) : []
// Get the local optimistic state.
const localWrites = useSnapshot