
Exporting

Two formats, three ways to call it (UI, REST, MCP), one consistent shape.

CSV vs JSON

Pick the format based on what you're feeding next:

Format   When to use
CSV      Excel, Google Sheets, pivot tables, anything that wants a flat grid.
JSON     Backends, scripts, anything that wants structure preserved (file metadata, AI scores, nested arrays).

CSV flattens. JSON does not.

CSV: payload-key column flattening

A submission's payload is a JSON object — keys are field names, values can be strings, numbers, arrays. CSV has no concept of nested structure, so we flatten:

  • Scalar fields become single columns (name, email, subject).
  • Array fields (a multi-checkbox value, for example) get joined with |. So {"interests": ["A","B"]} lands in the interests column as A|B.
  • File fields become two columns: cv (the original filename) and cv_url (a signed URL with a 15-minute TTL, issued at export time).
  • Metadata columns (ip, user_agent, submitted_at, status) come last.

Column order: schema-defined fields first (in their builder order), then files, then metadata. Stable across exports — your downstream pipelines won't break when someone adds a new field at the bottom.

If two submissions have different schemas (you added a field after the form was already collecting data), the union of all keys becomes the column set. Missing values are empty cells, not null.
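The flattening rules above can be sketched in a few lines. This is a simplified illustration, not the exporter's actual code; flatten_payload and the files/metadata shapes are hypothetical:

```python
def flatten_payload(payload, files=None, metadata=None):
    """Flatten one submission into a CSV-ready row dict.

    Insertion order mirrors the documented column order:
    schema fields first, then files, then metadata.
    """
    row = {}
    for key, value in payload.items():
        if isinstance(value, list):
            # Array fields join with "|": ["A", "B"] -> "A|B"
            row[key] = "|".join(str(v) for v in value)
        else:
            row[key] = value
    for f in files or []:
        row[f["field"]] = f["original_name"]        # e.g. cv -> cv.pdf
        row[f["field"] + "_url"] = f["signed_url"]  # e.g. cv_url -> https://...
    row.update(metadata or {})                      # ip, submitted_at, ... last
    return row

row = flatten_payload(
    {"name": "Ada", "interests": ["A", "B"]},
    files=[{"field": "cv", "original_name": "cv.pdf", "signed_url": "https://..."}],
    metadata={"ip": "203.0.113.4"},
)
```

After this, `row["interests"]` is the joined string `"A|B"` and `row` iterates in the stable column order described above.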

CSV encoding

UTF-8 with a BOM (\xEF\xBB\xBF). The BOM is for Excel — without it, Excel on Windows opens UTF-8 CSVs as Windows-1252 and mangles every accent. Numbers and Sheets handle the BOM transparently. Almost every modern CSV parser does too; if yours doesn't, strip the first three bytes.

Line terminator: \r\n. Quote char: ". Escape: doubled quote ("").
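If your parser does choke on the BOM, the fix is a one-liner in most languages. In Python, for instance, the utf-8-sig codec consumes it transparently (a generic snippet, not specific to this API):

```python
import csv
import io

# Simulate an exported CSV: UTF-8 with BOM, CRLF line endings, doubled quotes.
raw = b'\xef\xbb\xbfname,subject\r\nAda,"She said ""hi"""\r\n'

# "utf-8-sig" strips the BOM if present; plain "utf-8" would leave it
# glued to the first header name as "\ufeffname".
with io.TextIOWrapper(io.BytesIO(raw), encoding="utf-8-sig", newline="") as fh:
    rows = list(csv.DictReader(fh))

print(rows[0]["name"])     # first header is clean, no stray \ufeff
print(rows[0]["subject"])  # doubled quotes unescape to: She said "hi"
```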

JSON: full fidelity

JSON exports keep everything:

{
  "exported_at": "2026-05-07T11:30:00Z",
  "form_id": "01J3W...",
  "form_name": "Contact",
  "filters": { "since": "2026-04-01", "folder": "inbox" },
  "submissions": [
    {
      "id": "01K2M...",
      "status": "received",
      "submitted_at": "2026-04-15T09:12:33Z",
      "payload": { "name": "Ada", "email": "ada@example.com", "interests": ["A","B"] },
      "files": [{ "id": "01K8...", "original_name": "cv.pdf", "size_bytes": 84112, "signed_url": "https://..." }],
      "metadata": { "ip": "203.0.113.4", "user_agent": "...", "ai_moderation_score": 0.02 }
    }
  ]
}

Use this when you need everything — including AI moderation scores, full file metadata, and nested array structure.
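Consuming the JSON shape from a script is direct. A minimal sketch against a trimmed-down inline copy of the structure above (the threshold is an arbitrary example):

```python
import json

# A single-submission export in the shape shown above.
doc = json.loads("""
{
  "form_name": "Contact",
  "submissions": [
    {
      "id": "01K2M...",
      "payload": {"name": "Ada", "interests": ["A", "B"]},
      "metadata": {"ai_moderation_score": 0.02}
    }
  ]
}
""")

for sub in doc["submissions"]:
    # Arrays survive intact -- no "|" splitting needed.
    interests = sub["payload"].get("interests", [])
    score = sub["metadata"].get("ai_moderation_score")
    if score is not None and score > 0.5:  # example threshold, not a product default
        print(f"{sub['id']}: flagged ({score})")
```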

Filters

Both formats accept the same filter set:

Filter   Type                                   Default
since    ISO date                               none (all time)
until    ISO date                               now
folder   inbox | spam | all                     inbox
status   received | processed | failed | spam   (any)

You'll see these in the export modal in the UI. Filters compose; only rows matching all filters land in the export.
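When calling the REST endpoint, the same filters become query parameters. A generic sketch with Python's urllib (the form id is the sample one used elsewhere in this page):

```python
from urllib.parse import urlencode

# Compose filters into query params; a row must match ALL of them.
filters = {"since": "2026-04-01", "folder": "inbox", "status": "received"}
query = urlencode({"format": "csv", **filters})
url = f"https://formspring.io/api/v1/forms/01J3W/submissions/export?{query}"
print(url)
```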

From the dashboard

Open a form, then Export in the toolbar. Pick format, set filters, hit Download. Exports under 5 MB stream directly. Larger ones queue a background job and email you when ready (the email contains a one-time download URL valid for 24 hours).

From the REST API

GET /api/v1/forms/{form}/submissions/export?format=csv&since=2026-04-01&folder=inbox

Streams the response — no buffering. Run it through curl --output to write to disk:

curl -H "Authorization: Bearer $TOKEN" \
  "https://formspring.io/api/v1/forms/01J3W/submissions/export?format=csv&since=2026-04-01" \
  --output contact-april.csv

Set ?format=json for JSON. Same filter params.

From MCP

For agents:

export_submissions(form_id="01J3W", format="csv", since="2026-04-01", limit=500)

Capped at 500 rows per call (the agent context isn't a place for million-row exports — use the REST API for that). Returns the export inline as a string.
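Agent-side, the inline string parses like any other CSV. A sketch, with a hardcoded sample standing in for an actual export_submissions result:

```python
import csv
import io

# Stand-in for the string returned by export_submissions(...).
inline_csv = "name,interests\r\nAda,A|B\r\n"

rows = list(csv.DictReader(io.StringIO(inline_csv)))
interests = rows[0]["interests"].split("|")  # undo the "|" join from flattening
print(rows[0]["name"], interests)
```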
