Save data in SQLite tables, vector databases, and buckets
Call `insert` or `upsert` in your response handler to write a row to your crawler's SQLite database. The `row` object must conform to your schema to be written successfully.
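A minimal sketch of what this looks like; the `insert` function here is a local stand-in for the one Crawlspace provides to your handler, and the row shape (`title`, `price`) is a hypothetical schema:

```typescript
// Hypothetical row shape matching a schema with `title` and `price` columns.
type Row = { title: string; price: number };

const table: Row[] = [];

// Stand-in for the SDK-provided insert: appends a schema-conforming row.
function insert(row: Row): void {
  table.push(row);
}

// Inside your response handler, call insert with the values you scraped:
insert({ title: "Example product", price: 19.99 });
```

A row that doesn't conform to the schema would be rejected by the real implementation.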
Booleans will be saved as a `0` or a `1`.
Pass in a `Date` object and the date will get saved in an SQLite-compatible format.
Call `JSON.stringify()` before writing, and `JSON.parse()` after reading.
For arrays, pass another Zod schema to `z.array()` as its parameter to specify the type of items inside the array.
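The conversions above can be sketched in plain TypeScript. SQLite has no native boolean, date, or object column types, so values are stored in compatible forms; ISO-8601 is one SQLite-compatible date representation (the exact format Crawlspace writes is an assumption here):

```typescript
// Booleans become 0 or 1 in SQLite.
const inStock = true;
const storedBool = inStock ? 1 : 0;

// Dates can be stored as ISO-8601 strings, which SQLite's date
// functions understand.
const crawledAt = new Date("2024-01-15T12:00:00Z");
const storedDate = crawledAt.toISOString(); // "2024-01-15T12:00:00.000Z"

// Objects and arrays: stringify before writing, parse after reading.
const tags = ["sale", "electronics"];
const storedTags = JSON.stringify(tags);
const readTags: string[] = JSON.parse(storedTags);
```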
All columns are nullable (there is no `NOT NULL` constraint); Crawlspace enforces nullable columns to simplify migrations between deployments.
Unique

If you'd like a column to have unique values, append `.describe('unique')` to your Zod object. This is useful when upserting data to your table. If you attempt to insert (not upsert) a duplicate value into a unique column, the insertion will fail.
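The insert-versus-upsert behavior on a unique column can be simulated in a few lines. This is illustrative only, not the Crawlspace implementation; rows here are keyed by a hypothetical unique `url` column:

```typescript
type Row = { url: string; title: string };

// Simulated table with a unique constraint on `url`.
const byUrl = new Map<string, Row>();

// insert fails on a duplicate value in the unique column.
function insert(row: Row): void {
  if (byUrl.has(row.url)) {
    throw new Error(`duplicate value for unique column: ${row.url}`);
  }
  byUrl.set(row.url, row);
}

// upsert replaces the existing row on conflict instead of failing.
function upsert(row: Row): void {
  byUrl.set(row.url, row);
}

insert({ url: "https://example.com/a", title: "first" });
upsert({ url: "https://example.com/a", title: "updated" }); // succeeds

let insertFailed = false;
try {
  insert({ url: "https://example.com/a", title: "dup" });
} catch {
  insertFailed = true; // duplicate insert rejected
}
```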
Every table includes these default columns:

- `id`: The auto-incrementing primary key.
- `trace_id`: A hexadecimal value used to trace the datum's request.
- `atch_key`: The path of an optional object stored in a bucket.
- `url`: The URL of the request.
- `created_at`: When the row was created.
- `updated_at`: When the row was updated. Only relevant for upserts.

To enable the vector database, set `vector.enabled` to `true` in your crawler's config.
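The flag might look like this in a JSON-style config; the exact file format and key placement are assumptions, so check your crawler's config schema:

```json
{
  "vector": {
    "enabled": true
  }
}
```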
Pass your embeddings via the `insert.embeddings` property. You can use the built-in `ai.embed` convenience method for generating the embeddings themselves, or you can generate them with a third-party provider like OpenAI.
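A sketch of attaching embeddings to an insert. `embedStub` below is a local stand-in for `ai.embed` (or a third-party provider), and the payload shape is an assumption based on the `insert.embeddings` property described above:

```typescript
// Stand-in for an embedding model: real providers return high-dimensional
// float vectors; this returns a tiny deterministic one for illustration.
function embedStub(text: string): number[] {
  return Array.from(text).map((ch) => ch.charCodeAt(0) / 255);
}

const row = { url: "https://example.com/a", title: "Example product" };

// Hypothetical insert payload: the row plus its embedding vector,
// which would be stored in the vector database.
const payload = {
  row,
  embeddings: embedStub(row.title),
};
```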
To enable the bucket, set `bucket.enabled` to `true` in your crawler's config. Use the `attach` property in your response handler to put an object in your crawler's bucket.
The object's key is saved in the `atch_key` column in SQLite. You can change the key, as well as include arbitrary metadata with the object.
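A sketch of attaching an object with a custom key and metadata. The `attach` function here is a local stand-in, and its options shape is an assumption; the real `attach` property and its exact signature come from Crawlspace's SDK:

```typescript
type Attachment = {
  key: string;
  body: string; // stand-in for the object's bytes
  metadata?: Record<string, string>;
};

// Simulated bucket keyed by object path.
const bucket = new Map<string, Attachment>();

// Stand-in for attach: stores the object and returns its key, which
// would be saved in the row's atch_key column.
function attach(att: Attachment): string {
  bucket.set(att.key, att);
  return att.key;
}

const key = attach({
  key: "screenshots/example-com.png", // custom key instead of the default
  body: "<binary data>",
  metadata: { contentType: "image/png" }, // arbitrary metadata
});
```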