
Introducing Data Notebooks

Alexander Netz
Founder

Today, we're introducing Data Notebooks, a new way to explore and analyze your data using SQL and Python, entirely in your browser.

Data exploration shouldn't require spinning up servers, managing dependencies, or uploading your files to third-party services. With Data Notebooks, everything runs locally via WebAssembly. Your data never leaves your device.

What are Data Notebooks?

Data Notebooks are browser-based notebooks for SQL and Python. Think of them as Jupyter notebooks, but with SQL as a first-class citizen and no backend to manage.

Conceptually:

  • Each notebook is a sequence of cells, each either SQL or Python.
  • SQL cells behave like dbt models: they create named views you can reference in later cells.
  • Python cells let you do anything SQL can't: custom transformations, visualizations, or calling external APIs.
  • Everything runs in your browser using DuckDB (for SQL) and Pyodide (for Python).

When to use it

Data Notebooks are useful when you have a file and want to explore it quickly. Some examples:

  • You got a CSV export and want to run a few queries before deciding what to do with it.
  • You need to check the shape of a Parquet file before loading it into your data warehouse.
  • You want to build a quick chart for a meeting without setting up a BI tool.
  • You're working with sensitive data that can't leave your machine.

If you need a full data pipeline or collaboration features, you probably want something else. But for quick, local exploration, Data Notebooks get out of your way.

How it works

Say you have a CSV export from your CRM with sales data. Open a notebook and drag the file into the sidebar. In this example, our file is called q4_sales.csv, so we query it as 'q4_sales.csv'.

SQLView: by_region
SELECT region, SUM(quantity * price) AS revenue
FROM 'q4_sales.csv'
GROUP BY region
ORDER BY revenue DESC

Each SQL cell has a name. When you run this cell, it creates a view called by_region. If you've used dbt, this will feel familiar: each cell behaves like a model. You can reference by_region in any subsequent cell, just like you would reference a table. This lets you build up your analysis step by step, with each cell adding to what came before.

SQL
SELECT * FROM by_region WHERE revenue > 50000

When you need Python, every view is automatically available as a pandas DataFrame with the same name. No conversion or import needed.

Python
import matplotlib.pyplot as plt
plt.bar(by_region['region'], by_region['revenue'])
plt.show()

That's the core idea: SQL cells create views, views can reference other views, and Python has access to all of them. You build up a data pipeline where each step is visible and editable.

Key features

SQL cells as views

Each SQL cell automatically materializes a named view. Write a query, give it a name, and reference it downstream. No manual CREATE VIEW statements. Just write your SELECT and move on.

Python when you need it

SQL handles most data work. When you need more flexibility (custom parsing, ML models, complex visualizations) drop into Python. Every SQL view is automatically available as a pandas DataFrame.

Privacy by default

Your data stays on your device. We don't upload files to any server. The entire notebook engine runs in your browser via WebAssembly. This makes Data Notebooks a good fit for sensitive data that shouldn't leave your machine.

Standard notebook format

Notebooks are saved in nbformat v4.5, the same format Jupyter uses. You can export them, version them in git, or open them in other tools.
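nbformat v4 is plain JSON on disk, which is what makes notebooks easy to inspect, diff, and version. A minimal sketch of the structure, using only the standard library (field names follow the Jupyter format; the cell source is invented):

```python
import json

# Minimal nbformat 4.5 document: format version fields, notebook
# metadata, and a list of cells.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {
            "cell_type": "code",
            "id": "cell-1",          # cell ids are required as of 4.5
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": ["SELECT 1"],  # invented cell content
        }
    ],
}

# Round-trip through JSON, as saving and reopening an .ipynb would.
text = json.dumps(notebook, indent=1)
loaded = json.loads(text)
print(loaded["nbformat"], loaded["nbformat_minor"])  # 4 5
```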

Get started

You can try Data Notebooks right now:

  1. Open a notebook
  2. Drop in a CSV, Parquet, or JSON file
  3. Start writing SQL or Python

No account required. No data uploaded. Just you and your data.

We're excited to see what you build. If you have feedback or questions, reach out. We'd love to hear from you.
