AI columns and formulas


Lido lets you call large language models directly from spreadsheet formulas. Use this to enrich rows with AI output (summaries, classifications, translations, structured extraction) without leaving the sheet. This article covers every AI formula, when to use which, and how to set up credentials.



Which AI formula should I use?


  • Send a text prompt to OpenAI and write the response to a cell → GPT
  • Analyze an image or PDF with OpenAI → GPTVISION
  • Send a text prompt (with optional file) to Anthropic → CLAUDE
  • Send a text prompt (with optional file) to Google Gemini → GEMINI
  • Send a text prompt (with optional file) to AWS Bedrock → BEDROCK
  • Have GPT analyze a URL's content → GPTURL
  • Extract structured rows from text into a table → EXTRACTTOTABLE
  • Extract data from a file into a single row → EXTRACTFROMFILETOROW


For document extraction at scale, use the Data Extractor (UI, workflow node, or API) instead — it's tuned for that and reuses one configuration across surfaces.



Before you start


You need a credential for the AI provider you want to call:


  • OpenAI (GPT, GPTVISION, GPTURL) — get an OpenAI API key and add it as a credential in workspace settings.
  • Anthropic (CLAUDE) — get an Anthropic API key.
  • Google (GEMINI) — get a Google API key with Generative AI access.
  • AWS Bedrock (BEDROCK) — get AWS access keys for an account with Bedrock model access.


EXTRACTTOTABLE and EXTRACTFROMFILETOROW accept any AI credential — pick one based on which provider you want to use.



Calling a single model


GPT — OpenAI text generation


=GPT(credential, prompts, output_ref)
=GPT(credential, prompts, output_ref, model, max_tokens, response_format, temperature, usage_output, tools)


prompts can be a string or an array of messages. output_ref is the cell to write the response into. model defaults to a current GPT model; specify one like "gpt-4o" to pin it.
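For example, to generate a tagline per row while pinning the model and capping tokens (the credential ID "openai-cred" and the Products column names are placeholders — substitute your own):

=GPT("openai-cred", "Write a one-line tagline for: " & Products[@Name], Products[@Tagline], "gpt-4o", 100)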


GPTVISION — analyze images and PDFs


=GPTVISION(files, credential, prompt, output_ref)
=GPTVISION(files, credential, prompt, output_ref, page_ranges, model, max_tokens, response_format, image_detail, usage_output)


files is one or more file URLs. Use page_ranges like "1-3" to limit which PDF pages get analyzed.
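A sketch of pulling a value from the first page of an invoice PDF (the credential ID, Invoices column names, and page range are illustrative):

=GPTVISION(Invoices[@FileURL], "openai-cred", "What is the invoice total? Return only the number.", Invoices[@Total], "1")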


CLAUDE — Anthropic


=CLAUDE(credential, prompt, output_ref)
=CLAUDE(credential, prompt, output_ref, model, max_tokens, system_message, files, page_ranges, usage_output)


CLAUDE accepts files in the same call. Useful when you want a single model to read a document and reason about it.
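For instance, having Claude read a contract PDF with a system message (all identifiers here are placeholders, and the model ID is an example — use one your credential supports):

=CLAUDE("anthropic-cred", "List any termination clauses in this contract.", Contracts[@Clauses], "claude-3-5-sonnet-latest", 1000, "You are a meticulous legal assistant.", Contracts[@FileURL], "1-5")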


GEMINI — Google


=GEMINI(credential, files, prompt, output_ref)
=GEMINI(credential, files, prompt, output_ref, page_ranges, model, max_tokens, response_format, response_schema, usage_output)


response_schema accepts a JSON schema for structured output — useful when you want the model to return JSON in a fixed shape.
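A sketch of structured output with a schema, assuming standard spreadsheet quote-escaping (doubled quotes inside strings) and that unused optional arguments can be skipped with empty commas; the credential ID, column names, and model behavior are placeholders:

=GEMINI("google-cred", Receipts[@FileURL], "Extract the merchant name and total.", Receipts[@JSON], , , , "json", "{""type"":""object"",""properties"":{""merchant"":{""type"":""string""},""total"":{""type"":""number""}}}")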


BEDROCK — AWS


=BEDROCK(credential, prompt, output_ref)
=BEDROCK(credential, prompt, output_ref, model, max_tokens, system_message, files, page_ranges, usage_output)


Same shape as CLAUDE. Use when your security or compliance setup requires AI inference inside AWS.
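A minimal example (the credential ID and Incidents column names are placeholders):

=BEDROCK("bedrock-cred", "Summarize this incident report in two sentences: " & Incidents[@Report], Incidents[@Summary])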


GPTURL — analyze a URL


=GPTURL(credential, url, prompt, output_ref)
=GPTURL(credential, url, prompt, output_ref, model, max_tokens)


Fetches the URL's content and runs your prompt against it. Good for "summarize this article" workflows.
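For example, summarizing each linked page in a table (credential ID and column names are placeholders):

=GPTURL("openai-cred", Articles[@URL], "Summarize this page in three bullet points.", Articles[@Summary])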



Extraction formulas


EXTRACTTOTABLE — text → structured table


=EXTRACTTOTABLE(credential, input_content, output_table_ref)
=EXTRACTTOTABLE(credential, input_content, output_table_ref, prompt, model, max_tokens, usage_output)


Send unstructured text (an email body, an article, a transcript) and get back a structured table. Use the optional prompt to specify what columns to extract.
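A sketch of pulling line items out of order emails — here "OrderItems" is assumed to be the name of the destination table, and the credential ID and Emails column are placeholders:

=EXTRACTTOTABLE("openai-cred", Emails[@Body], OrderItems, "One row per ordered item, with columns: SKU, Quantity, UnitPrice.")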


EXTRACTFROMFILETOROW — file → one row


=EXTRACTFROMFILETOROW(credential, file_source, output_ref)
=EXTRACTFROMFILETOROW(credential, file_source, output_ref, prompt, model, max_tokens, usage_output)


Send a file URL and get back a single row of structured data. Use the optional prompt to describe the fields.
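For instance, filling in invoice fields from an uploaded file. The references below are placeholders, and exactly how the returned row maps onto your columns depends on your table setup:

=EXTRACTFROMFILETOROW("openai-cred", Invoices[@FileURL], Invoices[@InvoiceNumber], "Extract: invoice_number, date, total.")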



Patterns


AI per row in a table


Add a computed column to a table:


=GPT("openai-cred", "Summarize in one sentence: " & Articles[@Body], Articles[@Summary])


Every row gets a one-sentence summary. Lido evaluates it once per row.


Classify and route


=GPT("openai-cred",
"Classify this support ticket as one of: Billing, Bug, Feature Request, Other. Return only the category. Ticket: " & Tickets[@Body],
Tickets[@Category])


Then use the Category column to filter, sort, or trigger different downstream actions per category.


Structured JSON output


For machine-readable output, set response_format to "json" and ask for a specific shape:


=GPT("openai-cred",
"Return JSON with keys 'sentiment' and 'urgency' (each 1-5). Text: " & Reviews[@Text],
Reviews[@Analysis], "gpt-4o", 200, "json")



Tips


  • Pin the model when you care about consistency. The default model can change as providers update.
  • Use usage_output to track token consumption and cost per row.
  • Cache when you can. AI calls cost money and time. If the prompt and inputs haven't changed, there's no reason to call the model again — reuse the previous result.
  • For document extraction at scale, use the Data Extractor, not these formulas. The Data Extractor is tuned for that job and lets you share one configuration across the spreadsheet UI, workflows, and the API.
  • Add credentials before writing formulas. The credential ID is required.



Common mistakes


  • No credential ID. Every AI formula starts with the credential ID. Add the credential in your workspace settings first; copy the ID into the formula.
  • Putting an AI formula in a column where it'll re-run on every refresh. AI calls cost money. Either put the formula in a plain column where the user triggers it, or accept the cost.
  • Asking for "JSON" without response_format: "json". The model may return prose with JSON inside. Set response_format to lock the output shape.
  • Hitting rate limits with big tables. Every AI provider enforces rate limits. For large tables, break the work into batches with LIMIT or use a workflow with the Prompt AI node and concurrency settings.
  • Reaching for GPT or CLAUDE when you need extraction. The Data Extractor handles file-to-rows extraction far better. See Extract data from PDFs and documents.




  • Tables: how columns work
  • Actions: how spreadsheet automation works
  • Extract data from PDFs and documents
  • Expressions and the formula bar
  • Build your first workflow (Prompt AI node)


Updated on: 16/04/2026
