Nano Banana 2 handles concurrency at the stage level using a configurable worker pool model. Each pipeline stage has a concurrency parameter in its configuration that controls how many parallel goroutines (in Go), threads (in Python), or async tasks (in Rust) process records simultaneously. Records are dispatched to available workers from an internal channel, and each worker processes one record at a time. This model makes it straightforward to increase parallelism for CPU-bound stages by raising the concurrency value without changing any application code.
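The worker-pool model above can be sketched in Python as follows. This is an illustrative sketch, not Nano Banana 2's actual implementation: the `run_stage` function, its parameter names, and the sentinel-based shutdown are all assumptions made for the example.

```python
import queue
import threading

def run_stage(records, transform, concurrency=4):
    """Hypothetical stage runner: `concurrency` workers each pull one
    record at a time from an internal queue and apply `transform`."""
    in_q = queue.Queue()
    results = []
    results_lock = threading.Lock()

    def worker():
        while True:
            record = in_q.get()
            if record is None:           # sentinel: no more records
                break
            out = transform(record)      # one record per worker at a time
            with results_lock:
                results.append(out)

    workers = [threading.Thread(target=worker) for _ in range(concurrency)]
    for w in workers:
        w.start()
    for r in records:
        in_q.put(r)                      # dispatch to available workers
    for _ in workers:
        in_q.put(None)                   # one shutdown sentinel per worker
    for w in workers:
        w.join()
    return results
```

Raising `concurrency` increases parallelism without touching the transformation itself, which mirrors how a stage's concurrency parameter works: for example, `run_stage(range(10), lambda r: r * 2, concurrency=8)` changes only the worker count, not the logic.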
Thread safety within transformation functions is the developer’s responsibility. Nano Banana 2 guarantees that each worker calls the transformation function with a single record and does not share that record across workers, but it does not protect against data races involving shared state introduced by the developer, such as a counter or cache that multiple workers update simultaneously. The recommended approach is to treat each stage’s transformation function as stateless, deriving all output from the input record alone. If shared state is unavoidable, synchronize access to it with the concurrency primitives appropriate to your language, such as a mutex or lock around every read and write.
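The shared-counter case mentioned above can be sketched like this. The `SafeCounter` class and the `transform` function are hypothetical examples, not part of the framework; they show a stateless transformation that also touches shared state through an explicitly synchronized wrapper.

```python
import threading

class SafeCounter:
    """Shared state updated by multiple workers, guarded by a lock.
    Without the lock, concurrent increments could race and lose updates."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:          # serialize updates to shared state
            self._value += 1

    @property
    def value(self):
        return self._value

counter = SafeCounter()

def transform(record):
    counter.increment()           # side effect on shared state: synchronized
    return record.upper()         # the output itself is derived from the
                                  # input record alone (stateless)
```

The same pattern applies to a shared cache: the transformation stays a pure function of its input, and any cross-worker state is reached only through a synchronized access point.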
Backpressure is handled automatically between stages. If a downstream stage processes records slower than an upstream stage produces them, the internal channel between those stages fills up, and the upstream workers block until space becomes available. This prevents unbounded memory growth at the cost of reduced upstream throughput when the system is under pressure. You can observe backpressure via the buffer utilization metrics exposed on the metrics endpoint. If backpressure is consistently high on a particular stage boundary, it signals that you need to increase the concurrency of the downstream stage or optimize its processing logic.
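The blocking behavior at a stage boundary can be demonstrated with a bounded queue standing in for the internal channel. This is a minimal sketch under assumed names (`producer`, `consumer`, a buffer of size 2), not the framework's internals: when the downstream side is slow, the upstream `put` blocks as soon as the buffer is full, which bounds memory at the cost of upstream throughput.

```python
import queue
import threading
import time

# Bounded buffer between two stages; capacity 2 is arbitrary for the demo.
buf = queue.Queue(maxsize=2)

def producer(n):
    """Upstream stage: blocks on put() whenever the buffer is full."""
    for i in range(n):
        buf.put(i)                # backpressure point: blocks when full

def consumer(n, out):
    """Downstream stage: deliberately slow to create backpressure."""
    for _ in range(n):
        time.sleep(0.01)          # simulate expensive per-record work
        out.append(buf.get())
```

Running both on separate threads, the producer races ahead by at most two records and then waits, so memory stays bounded even though the consumer lags. `buf.qsize()` sampled over time plays the role of the buffer utilization metric: a buffer that is persistently near capacity indicates the downstream stage needs more concurrency or cheaper processing.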