If you've ever tried to pull a serious volume of data out of a Power BI semantic model through the existing Execute Queries API, you know the pain. JSON payloads balloon. The 100,000-row cap hems you in. Data types get squishy on the way out. For dashboard-style queries, fine. For data engineering, ETL, or anything Direct Lake-shaped, less fine.
Microsoft has just put a new option into preview: an Execute DAX Queries REST API that returns results in Apache Arrow IPC format instead of JSON. It's the same API behind DAX Query View in Power BI Desktop and the service — now exposed publicly so you can wire it into your own solutions.
What's actually new
The headline change is the wire format. Where the existing Execute Queries API hands back JSON, the new API streams results as Apache Arrow IPC — a columnar binary format with native data-type fidelity. That sounds like a plumbing detail, but it has real consequences: smaller payloads, faster parsing, no type guessing, and much more headroom on result sizes.
How much bigger? The default resultSetRowCountLimit is one million rows, configurable per request, against the old API's hard cap of 100,000. Record batches are LZ4-frame compressed on the wire — pyarrow handles this automatically; for .NET you'll want the Apache.Arrow.Compression NuGet package.
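The announcement doesn't pin down the full request schema, so treat the sketch below as illustrative only: resultSetRowCountLimit is the knob named above, but the surrounding queries wrapper is an assumption carried over from the existing Execute Queries API.

```python
# Hypothetical request body. Only resultSetRowCountLimit is confirmed
# by the announcement; the "queries" wrapper is assumed to mirror the
# existing Execute Queries API.
body = {
    "queries": [
        {"query": "EVALUATE TOPN(500000, 'Sales')"}
    ],
    "resultSetRowCountLimit": 500000,  # default: 1,000,000
}
```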
The other notable change: a single request can carry one DAX query with multiple EVALUATE statements, and the response comes back as concatenated Arrow IPC streams — one per result set. So a workflow that previously meant several round trips can collapse into one.
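Reading those concatenated streams with pyarrow is a short loop: open a stream, read it into a table, repeat until the bytes run out. A minimal sketch, assuming the raw response bytes are already in hand (pyarrow decompresses the LZ4 frames transparently):

```python
import io

import pyarrow as pa
import pyarrow.ipc  # makes pa.ipc available

def read_result_sets(payload: bytes) -> list[pa.Table]:
    """Split concatenated Arrow IPC streams into one Table per EVALUATE."""
    buf = io.BytesIO(payload)
    tables = []
    while buf.tell() < len(payload):
        try:
            reader = pa.ipc.open_stream(buf)
        except pa.ArrowInvalid:
            break  # remaining bytes aren't another stream
        tables.append(reader.read_all())
    return tables
```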
A few constraints to be aware of. Per the announcement blog, the API targets Power BI Premium and Microsoft Fabric capacities — not Pro. The caller has to be able to read Arrow streams, but that's a low bar in practice, since Arrow has libraries for Python, C#, Java, JavaScript and most other things you'd plausibly be writing in. There's also a rate limit of 120 query requests per minute per user, and the usual tenant settings (XMLA endpoints, Dataset Execute Queries REST API) need to be enabled. Only DAX queries and INFO functions are supported — no MDX or DMV.
Why this matters for builders
The previous API was clearly designed for visual-style queries — a few thousand rows powering a chart or a card. The new one looks engineered for the cases people have been hacking around for years: extracting large amounts of data from a semantic model into something else.
That "something else" is increasingly a Fabric lakehouse. The pattern Microsoft is pushing in the announcement is telling: run a DAX query against a semantic model, get Arrow back, convert to a pandas DataFrame, write it out as a V-Ordered Delta table, then consume it from a Direct Lake model. In other words, the semantic model becomes a data source for Fabric, not just a sink.
For anyone building reusable analytics components, finance close processes, or auditable data extracts, that's a meaningful upgrade. You stop fighting the API and start using it the way you actually wanted to.
Getting started in a Fabric notebook
The path of least resistance is a Fabric notebook, because the runtime hands you authentication for free — no Entra ID app registration required. The notebook's built-in credential provider can request a token for the Power BI resource (https://analysis.windows.net/powerbi/api) and you're off.
From there, the workflow is straightforward:
- Acquire an access token via notebookutils.credentials.getToken.
- Define a helper that POSTs to /v1.0/myorg/groups/{workspace_id}/datasets/{dataset_id}/executeDaxQueries with the DAX in the body.
- Open the response as an Arrow IPC stream with pyarrow and read it into a pandas DataFrame.
- Optionally persist it as a Delta table with V-Order enabled, ready for Direct Lake consumption.
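Put together, the round trip fits in a couple of dozen lines. A sketch under preview assumptions: the endpoint path and token resource come from the steps above, the request body shape is assumed, read_result_sets is the helper sketched earlier, and notebookutils is available in a Fabric notebook without any import.

```python
import requests

workspace_id = "<workspace-guid>"  # placeholders
dataset_id = "<dataset-guid>"

# Fabric notebooks hand out tokens without an app registration.
token = notebookutils.credentials.getToken(
    "https://analysis.windows.net/powerbi/api"
)

url = (
    "https://api.powerbi.com/v1.0/myorg/groups/"
    f"{workspace_id}/datasets/{dataset_id}/executeDaxQueries"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"queries": [{"query": "EVALUATE 'Sales'"}]},  # body shape assumed
)
resp.raise_for_status()  # note: DAX errors can still hide behind a 200

# One Arrow table per EVALUATE statement; a single one here.
df = read_result_sets(resp.content)[0].to_pandas()
```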
One detail worth noting if you go the Delta route: column names from DAX often contain spaces, brackets, and other characters Delta won't accept, so a quick regex pass to sanitise column names is part of the recipe. Microsoft's sample uses a one-liner to swap problem characters for underscores.
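A sketch of that clean-up plus the Delta write, continuing from the df above. The character class is an assumption based on Delta's documented naming restrictions, and the V-Order write option name should be verified against Microsoft's sample and current Fabric documentation:

```python
import re

def sanitise(name: str) -> str:
    # Swap characters Delta rejects (spaces, brackets, etc.) for underscores.
    return re.sub(r"[ ,;{}()\n\t=\[\]]+", "_", name).strip("_")

df.columns = [sanitise(c) for c in df.columns]

# Persist as a V-Ordered Delta table, ready for a Direct Lake model.
# "spark" is the ambient session in a Fabric notebook; the option name
# below is an assumption to check against current Fabric docs.
(spark.createDataFrame(df)
    .write.format("delta")
    .option("parquet.vorder.enabled", "true")
    .mode("overwrite")
    .saveAsTable("sales_extract"))
```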
A subtler point on error handling: query errors come back as HTTP 200 with an "error rowset" inside the Arrow stream, identified by IsError=true in the schema metadata. Don't trust the status code alone — check the metadata.
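In practice that means inspecting each table's schema metadata before trusting the data. A minimal sketch, reusing the pyarrow import from earlier; pyarrow surfaces metadata keys and values as bytes, and the IsError key comes from the announcement:

```python
def raise_if_error(table: pa.Table) -> pa.Table:
    meta = table.schema.metadata or {}
    if meta.get(b"IsError") == b"true":
        # The error rowset carries the details; the HTTP status was 200.
        raise RuntimeError(f"DAX query failed: {table.to_pylist()}")
    return table
```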
What to watch for
A few things to think about before you build a production pipeline on top of this:
- It's preview. Behaviour, limits, and shape can shift before general availability.
- Premium or Fabric capacity is required — factor that into any architecture you propose.
- You're still hitting a semantic model, which means model size, refresh state, and DAX performance all still matter. Arrow makes the transport faster; it doesn't make a slow measure quick.
- Outside of Fabric notebooks, authentication is the usual Power BI REST song-and-dance — user tokens or service principals via Entra ID, with the relevant tenant settings enabled (see the sketch after this list).
- Service principals don't get RLS, which is the same caveat as the older API.
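For the service-principal route, a minimal MSAL sketch. The client ID, secret, and tenant are placeholders, and the scope is the Power BI resource with the standard .default suffix:

```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app-client-id>",
    client_credential="<app-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
result = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)
token = result["access_token"]  # on failure, inspect result["error"]
```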
A quietly significant API
It's tempting to file this under "internal plumbing now exposed", because that's literally what it is. But the move from JSON to Arrow, combined with multi-EVALUATE queries and the much larger row ceiling, nudges semantic models toward being a more general-purpose data interface — not just the thing that feeds a visual.
For teams already invested in Power BI, that's an opening to consolidate. The semantic model you've spent years curating — with its measures, security, and business logic — becomes addressable from any Arrow-aware client. Python notebooks, C# services, JavaScript apps, Spark jobs. The same canonical numbers, fewer parallel definitions floating around the organisation.
This is the kind of change that won't trend on LinkedIn but will show up in architecture diagrams across the European Microsoft community over the next year — quietly making integration patterns simpler at the exact moment Fabric and Direct Lake are pulling the data and BI worlds closer together. Conversations like this one — where a small API change reshapes how teams build — are the bread and butter of the ECS community year-round.