
Distributed Query

Learn how to configure and run Spice in distributed mode to handle larger scale queries across multiple nodes.

Preview

Multi-node distributed query execution based on Apache Ballista is available as a preview feature in Spice v1.9.0.

Overview

Spice integrates Apache Ballista to schedule and coordinate distributed queries across multiple executor nodes. This integration is useful when querying large, partitioned datasets in data lake formats such as Parquet, Delta Lake, or Iceberg. For smaller workloads or non-partitioned data, a single Spice instance is typically sufficient.
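
For example, a Spicepod for a partitioned data lake dataset might look like the following sketch. The bucket path, dataset name, and exact schema fields are illustrative assumptions, not taken from a specific deployment:

# spicepod.yaml: illustrative partitioned Parquet dataset (placeholder names and paths)
version: v1
kind: Spicepod
name: distributed_demo
datasets:
  - from: s3://my-bucket/events/
    name: my_dataset
    params:
      file_format: parquet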

Architecture

A distributed Spice cluster consists of two components:

  • Scheduler – Plans distributed queries and manages the work queue for the executor fleet. Single instance per cluster.
  • Executors – One or more nodes responsible for executing physical query plans.

The scheduler holds the cluster-wide configuration for a Spicepod, while executors connect to the scheduler to receive work.

Network Ports

Spice separates public and internal cluster traffic across different ports:

Port    Service          Description
50051   Flight SQL       Public query endpoint
8090    HTTP API         Public REST API
9090    Prometheus       Metrics endpoint
50052   Cluster Service  Internal scheduler/executor communication (mTLS enforced by default)

Internal cluster services are isolated on port 50052 with mTLS enforced by default.

Secure Cluster Communication (mTLS)

In distributed mode, Spice uses mutual TLS (mTLS) for secure communication between the scheduler and executors. Internal cluster communication includes highly privileged RPC calls, such as fetching the Spicepod configuration and expanding secrets. mTLS ensures that only authenticated nodes can join the cluster and access sensitive data.

Certificate Requirements

Each node in the cluster requires:

  • A CA certificate (ca.crt) trusted by all nodes
  • A node certificate with the node's advertise address in the Subject Alternative Names (SANs)
  • A private key for the node certificate

Production deployments should use the organization's PKI infrastructure to generate certificates with proper SANs for each node.
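
One way to confirm that a node certificate carries the expected SANs is to inspect it with openssl; the certificate filename below is a placeholder:

# Print the Subject Alternative Names from a node certificate
openssl x509 -in executor1.crt -noout -text | grep -A1 "Subject Alternative Name"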

Development Certificates

For local development and testing, the Spice CLI provides commands to generate self-signed certificates:

# Initialize CA and generate CA certificate
spice cluster tls init

# Generate certificate for the scheduler node
spice cluster tls add scheduler1

# Generate certificate for an executor node
spice cluster tls add executor1

Certificates are stored in ~/.spice/pki/ by default.
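
After the commands above, the directory should contain the CA material plus a certificate and key per node, named after the node names passed to spice cluster tls add (the exact file listing below is an assumption based on those names):

ls ~/.spice/pki/
# ca.crt  ca.key  executor1.crt  executor1.key  scheduler1.crt  scheduler1.key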

warning

CLI-generated certificates are not intended for production use. Production deployments should use certificates issued by the organization's PKI or a trusted certificate authority.

Insecure Mode

For local development and testing, mTLS can be disabled using the --allow-insecure-connections flag:

spiced --role scheduler --allow-insecure-connections

warning

Do not use --allow-insecure-connections in production environments. This flag disables authentication and encryption for internal cluster communication.
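
An executor joining a scheduler that runs without mTLS would be started the same way, omitting the certificate flags. This sketch assumes the flag applies to executors as well and that the scheduler address then uses plain http:

spiced --role executor \
--scheduler-address http://scheduler1:50052 \
--allow-insecure-connections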

Getting Started

Cluster deployment typically starts with a scheduler instance, followed by one or more executors that register with the scheduler.

The following examples use CLI-generated development certificates. For production, substitute certificates from your organization's PKI.

Generate Development Certificates

spice cluster tls init
spice cluster tls add scheduler1
spice cluster tls add executor1

Start the Scheduler

The scheduler is the only spiced process that needs to be configured (i.e., have a spicepod.yaml in its working directory). Override the Flight bind address when the endpoint must be reachable from outside localhost:

spiced --role scheduler \
--flight 0.0.0.0:50051 \
--node-mtls-ca-certificate-file ~/.spice/pki/ca.crt \
--node-mtls-certificate-file ~/.spice/pki/scheduler1.crt \
--node-mtls-key-file ~/.spice/pki/scheduler1.key
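
The scheduler also serves the public HTTP API from the ports table; a quick reachability check might look like the following, assuming the runtime's health endpoint is available on the default port 8090:

curl http://localhost:8090/health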

Start Executors

Executors connect to the scheduler's internal cluster port (50052) to register and pull work. Executors do not require a spicepod.yaml; they fetch the configuration from the scheduler. Each executor automatically selects a free port if the default is unavailable:

spiced --role executor \
--scheduler-address https://scheduler1:50052 \
--node-mtls-ca-certificate-file ~/.spice/pki/ca.crt \
--node-mtls-certificate-file ~/.spice/pki/executor1.crt \
--node-mtls-key-file ~/.spice/pki/executor1.key

Specifying --scheduler-address implies --role executor.
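
Additional executors follow the same pattern: generate a certificate for the new node name and start another spiced process pointed at the same scheduler. The node name executor2 below is illustrative:

spice cluster tls add executor2

spiced --role executor \
--scheduler-address https://scheduler1:50052 \
--node-mtls-ca-certificate-file ~/.spice/pki/ca.crt \
--node-mtls-certificate-file ~/.spice/pki/executor2.crt \
--node-mtls-key-file ~/.spice/pki/executor2.key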

Query Execution

Queries run against the scheduler endpoint. The EXPLAIN output confirms that distributed planning is active; Spice includes a distributed_plan section showing how stages are split across executors:

EXPLAIN SELECT count(id) FROM my_dataset;

Limitations

  • Accelerated datasets are not yet supported; distributed query currently targets partitioned data lake sources. Accelerator support is planned for future releases; follow the release notes for updates.
  • As a preview feature, clusters may encounter stability or performance issues.