Hi everyone,
We are doubling down on improving the speed of flow execution and the builder, and we will soon release updated benchmarks.
This update mainly affects the self-hosted version of Activepieces, because unlike the cloud, it does not reuse sandboxes across runs.
The main bottleneck on the cloud is strict sandboxing: we still have to clean up after each run. That said, we have upgraded all of our infrastructure, and you should expect Activepieces to be snappier there too.
What Changed in Execution
- Zero database calls before execution
  - Before: about 10 database calls were made before a flow could start.
  - Now: no database calls are needed.
- Database calls replaced with Redis
  - Workers now have direct access to Redis.
  - Most database calls were removed and replaced with fewer than 3 Redis cache calls; the rest of the data is cached locally.
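The pattern behind this change can be sketched as a two-tier cache: check process-local memory first and only fall back to Redis on a miss. This is a minimal illustration, not the actual Activepieces code; all names here are hypothetical, and the `RedisLike` stand-in replaces a real Redis client so the snippet is self-contained.

```typescript
// Sketch of a two-tier lookup: an in-process cache in front of Redis.
// The Redis client is stood in by a Map so the sketch runs without a server.

type RedisLike = { get(key: string): Promise<string | null> };

class TwoTierCache {
  private local = new Map<string, string>();

  constructor(private redis: RedisLike) {}

  // Check the local cache first; fall back to Redis and memoize the result.
  async get(key: string): Promise<string | null> {
    const hit = this.local.get(key);
    if (hit !== undefined) return hit;        // no network round-trip
    const value = await this.redis.get(key);  // one Redis call, first time only
    if (value !== null) this.local.set(key, value);
    return value;
  }
}

// Stand-in for a real Redis connection.
const fakeRedis = new Map<string, string>([["flow:123", '{"steps":[]}']]);
const cache = new TwoTierCache({ get: async (k) => fakeRedis.get(k) ?? null });
```

After the first lookup of a key, subsequent reads never leave the process, which is how repeated execution metadata lookups stay off both the database and, mostly, Redis.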
- Better handling of webhook surges that slowed down the cloud
  - Before: whenever a surge of webhooks came in, runs were inserted straight into the database, slowing everything down.
  - Now: runs are not inserted into the database immediately; instead, all updates are queued in Redis for faster processing.
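The queue-then-flush idea can be sketched as follows, assuming a hypothetical `RunUpdate` shape. In production the queue would live in Redis and the flush would be a bulk insert; here both are in-memory stand-ins so the snippet runs standalone.

```typescript
// Sketch of the surge-handling pattern: the webhook hot path only
// enqueues, and a background consumer drains updates to the database
// in batches. The queue and database are in-memory stand-ins.

type RunUpdate = { runId: string; status: string };

const queue: RunUpdate[] = [];             // a Redis list in production
const insertedBatches: RunUpdate[][] = []; // stands in for the database

// Hot path: stays fast under a surge because it never touches the DB.
function onWebhook(update: RunUpdate): void {
  queue.push(update); // LPUSH, in Redis terms
}

// Background consumer: one bulk insert per batch instead of one per run.
function flushBatch(batchSize: number): number {
  const batch = queue.splice(0, batchSize);
  if (batch.length > 0) insertedBatches.push(batch);
  return batch.length;
}
```

The design choice is that a burst of webhooks only grows the queue, which is cheap, while the database sees a steady stream of batched writes regardless of incoming traffic.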
- Intensive tasks moved from the app to the worker
  - Before: the app and the worker shared the workload about 50/50. This meant you had to scale both together, which was not happening in practice.
  - Now: most of the heavy, intensive tasks have been moved to the worker, which makes it easy to scale horizontally.
-
Shared Cache Volume (Beta — not in cloud yet)- Before: Each worker had its own local cache of pieces and flows stored on disk. This caused the first request handled by a new worker to be slow, because it had to build its cache from scratch.
- Now: You can mount the same shared network volume across all workers. This lets them reuse the same cache and automatically coordinate access, which removes the “cold start” delay.
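One way such coordination can work, sketched here with hypothetical names and paths, is a build-once cache directory guarded by a lock file created with Node's exclusive `wx` flag. This is an illustration of the idea, not the actual Activepieces implementation.

```typescript
// Sketch of a shared on-disk cache: the first worker to need a piece
// builds it under an exclusive lock file; every later lookup (from any
// worker mounting the same volume) finds it already built. Paths and
// names are illustrative, not the real Activepieces cache layout.
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// In production this would be the mounted shared network volume.
const cacheDir = fs.mkdtempSync(path.join(os.tmpdir(), "shared-cache-"));

function cachedPiecePath(pieceName: string, build: () => string): string {
  const target = path.join(cacheDir, pieceName);
  if (!fs.existsSync(target)) {
    const lock = target + ".lock";
    // "wx" fails if the lock already exists, so only one worker builds;
    // a real implementation would wait for the winner rather than throw.
    fs.writeFileSync(lock, String(process.pid), { flag: "wx" });
    fs.writeFileSync(target, build()); // the cold start happens once, cluster-wide
    fs.unlinkSync(lock);
  }
  return target; // warm path: no rebuild needed
}
```

With every worker pointing at the same volume, only the very first request cluster-wide pays the build cost; a freshly started worker inherits a warm cache instead of rebuilding its own.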
Impact
On two machines with 4 CPUs and 16 GB of RAM each, we were able to achieve 120 requests per second with a simple webhook-and-response flow. The slowest request finished in 300 ms.
Next Steps
We are not done yet — this update is just to unblock more use cases. We are working on a 10x improvement next, along with tests to prevent regressions so performance will keep getting better, not worse.