ActivePieces slow processing vs N8N

Hello, as mentioned in another post, I have been doing some testing with my ActivePieces setup vs my N8N setup. While I love the interface and building flows much more than in N8N… the processing time is just too slow for me to use it in prod.

I ran this test.
Same resources for both: 16 vCPU + 32 GB RAM

ActivePieces: Main + 10 workers (20 concurrency each)
N8N: Main + 10 workers (15 concurrency each)

Where N8N takes 0.5-1s to process a single request, Activepieces needs about 15s.

For the next test, I made a BULK stress test with 77 requests.
When I sent them from my API Gateway to N8N, it took less than 20 seconds to receive and complete all the tasks.

But when I sent them from my API Gateway to ActivePieces, it took a minute just to add them to BullMQ and about 2 minutes to process all the tasks and complete them.
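For reference, the bulk test can be reproduced with something like the sketch below, assuming a Node 18+ environment with global fetch. The endpoint URL and payload are placeholders, and it only measures how long the 77 webhooks take to be accepted, not how long the flows take to finish.

```ts
// Fire 77 concurrent webhook calls and time how long they take to be accepted.
// WEBHOOK_URL is a placeholder for the gateway/webhook endpoint used in the test.
const WEBHOOK_URL = process.env.WEBHOOK_URL ?? 'http://localhost:8080/webhook/test';
const TOTAL_REQUESTS = 77;

async function run(): Promise<void> {
  const start = Date.now();

  // Send all requests at once, the way an API gateway fanning out would.
  const results = await Promise.allSettled(
    Array.from({ length: TOTAL_REQUESTS }, (_, i) =>
      fetch(WEBHOOK_URL, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ requestId: i }),
      }),
    ),
  );

  const accepted = results.filter((r) => r.status === 'fulfilled').length;
  console.log(`${accepted}/${TOTAL_REQUESTS} accepted in ${Date.now() - start} ms`);
}

run().catch(console.error);
```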

Why?

Hi @yukyo,

That’s super interesting; I love it.

I have many ideas to make it faster. Do you mind decreasing Activepieces to a single worker and benchmarking again?

I am curious how much time it would take with a single worker. Please make one warm-up call before you benchmark Activepieces.

Also, are you in the Discord community? If so, please add me since we have a big project for next month regarding these problems, and I would love to stay in touch to improve it together.


Explanation:

The difference is that our sandboxing method requires a separate process. So if you call node index.js, booting Node itself has a large overhead (plus separate namespaces). This is great for isolation and for supporting multi-tenancy.
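To make that overhead concrete, here is a minimal sketch of process-per-execution sandboxing: every run boots a fresh Node.js process, standing in for node index.js inside a sandbox. The inline snippet and timing are illustrative only; the real sandbox layers namespaces and file-system setup on top of this.

```ts
// Minimal sketch of process-per-execution isolation (Node 18+).
// Each call pays the full cost of starting a new Node.js process.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

async function runIsolated(code: string): Promise<string> {
  // `node -e <code>` stands in for `node index.js` inside a sandbox.
  const { stdout } = await execFileAsync(process.execPath, ['-e', code]);
  return stdout.trim();
}

async function main(): Promise<void> {
  const start = Date.now();
  const result = await runIsolated('console.log(21 * 2)');
  // Expect tens to hundreds of milliseconds just for process startup.
  console.log(`result=${result}, separate process took ${Date.now() - start} ms`);
}

main().catch(console.error);
```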

n8n uses a lightweight in-process isolation (vm2), which prevents it from supporting multi-tenancy with arbitrary npm packages or file-system access.
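For contrast, here is a minimal sketch of lightweight in-process isolation using Node's built-in vm module; vm2 builds on the same idea with extra hardening. This only illustrates the approach, it is not n8n's actual implementation, and vm on its own is explicitly not a security boundary.

```ts
// Minimal sketch of in-process isolation: no new process, so almost no
// startup cost, but also no file-system, CPU, or memory isolation.
import { runInNewContext } from 'node:vm';

function runInline(code: string): unknown {
  const start = Date.now();
  // The code runs inside the host process with a fresh, empty context.
  const result = runInNewContext(code, {}, { timeout: 1000 });
  console.log(`in-process run took ${Date.now() - start} ms`);
  return result;
}

console.log(runInline('21 * 2')); // near-instant after warm-up
```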

There are many trade-offs to consider:

Security: Process isolation + namespaces is stronger, because you are isolating the file system, CPU, and memory.

For context: the vm2 npm package that n8n uses has already been compromised (known sandbox escapes). With a code piece you can shut down the server, so in a multi-tenant environment you have to disable code pieces and other pieces such as bash or anything that accesses the file system.

Speed: Isolation without a new process is the clear winner.

Current AP Approach

The current speed breakdown:

  • Sandbox Preparation: The first request on each server is always slow; the current optimization is to cache the prepared sandbox for future use (see the sketch after this list).
  • Sandbox Cold-start Time: Running a new process has an overhead. It’s fast in absolute terms, but it’s so CPU-intensive that the throughput stays low.
  • Flow Execution: We are fast here, as with any JS program.
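A hedged sketch of the sandbox-preparation cache mentioned in the first bullet: the cache key and the preparation step are placeholders, but the pattern of memoizing the expensive preparation per worker is the point.

```ts
// Cache the (expensive) sandbox preparation per worker, keyed by a
// hypothetical cacheKey such as "flow version + installed pieces".
type PreparedSandbox = { path: string; preparedAt: number };

const sandboxCache = new Map<string, Promise<PreparedSandbox>>();

async function prepareSandbox(cacheKey: string): Promise<PreparedSandbox> {
  // Stand-in for the real work: copying files, installing pieces, etc.
  await new Promise((resolve) => setTimeout(resolve, 500));
  return { path: `/tmp/sandboxes/${cacheKey}`, preparedAt: Date.now() };
}

function getSandbox(cacheKey: string): Promise<PreparedSandbox> {
  // Caching the promise (not the value) also deduplicates concurrent
  // requests that arrive before the first preparation has finished.
  let prepared = sandboxCache.get(cacheKey);
  if (!prepared) {
    prepared = prepareSandbox(cacheKey);
    sandboxCache.set(cacheKey, prepared);
  }
  return prepared;
}
```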

Future Approach

Dedicated Runners:
If we remove the multi-tenancy requirement, then people who want speed can spin up dedicated workers for their own use in a trusted environment. Since there is no need for isolation there, the worker will be able to process at least 50x faster.

So what we are planning is that you can do something similar to GitLab/GitHub self-hosted runners, where you connect your processing machine. Since it’s for your own use and a trusted environment, it will be extremely fast.

Job Affinity:
We will send jobs to the same workers whenever possible, so they can reuse the same cache.
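A conceptual sketch of what job affinity could look like: a stable hash maps each flow to a fixed worker, so jobs for the same flow keep hitting the same warm cache. The queue-per-worker naming and the worker count are assumptions for illustration, not the actual scheduler.

```ts
// Stable routing: the same flowId always maps to the same worker index,
// so that worker's cached sandbox can be reused across jobs.
import { createHash } from 'node:crypto';

const WORKER_COUNT = 10;

function workerIndexFor(flowId: string): number {
  const digest = createHash('sha1').update(flowId).digest();
  return digest.readUInt32BE(0) % WORKER_COUNT;
}

function queueNameFor(flowId: string): string {
  return `flow-jobs-${workerIndexFor(flowId)}`;
}

console.log(queueNameFor('flow_123')); // always the same queue for this flow
```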

I always wanted to write a blog about this topic. If anyone is interested, please let me know.


Hi @yukyo

We have landed a fix for the webhook processing bottleneck: the webserver now just pushes the request to the queue and the workers handle it.
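A hedged sketch of that accept-fast, process-later split using BullMQ (the queue the thread refers to); the queue name, port, and handler body are illustrative, not the actual Activepieces code.

```ts
// Webserver accepts the webhook, enqueues it, and returns immediately;
// a separate worker process pulls jobs and runs the flow.
import { createServer } from 'node:http';
import { Queue, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 };
const webhookQueue = new Queue('webhook-jobs', { connection });

createServer(async (req, res) => {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  await webhookQueue.add('webhook', { body: Buffer.concat(chunks).toString() });
  res.writeHead(202).end('queued');
}).listen(8080);

new Worker(
  'webhook-jobs',
  async (job) => {
    // Stand-in for the actual flow execution.
    console.log(`processing job ${job.id}`, job.data);
  },
  { connection, concurrency: 20 },
);
```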

Next is to improve the throughput of the workers.
