Plenty of API calls fail because the server on the other side is overloaded. Anyone who has worked with OpenAI knows those models often return 500 errors for exactly that reason, and very often simply repeating the same request right away fixes the problem.
If we could add a setting to automatically re-execute a module on failure (I'm specifically eyeing OpenAI here), it would help massively.
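For anyone who needs this behavior outside the platform in the meantime, what's being requested is essentially retry-with-exponential-backoff. A minimal Python sketch of the idea (all names here are illustrative, not part of any platform API):

```python
import time

def call_with_retry(fn, max_attempts=3, base_delay=0.01):
    """Call fn, retrying on any exception with exponential backoff.

    Illustrative sketch of the requested auto-retry setting;
    fn, max_attempts, and base_delay are made-up names.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error
            # back off before retrying: base_delay, 2x, 4x, ...
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated flaky endpoint: fails twice with a 500-style
# error, then succeeds on the third call.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("500: model overloaded")
    return "completion"

result = call_with_retry(flaky_request)  # succeeds on attempt 3
```

A real implementation would retry only on retryable statuses (429/5xx) rather than every exception, but the shape is the same.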
A workaround for this specific case: allow the ChatGPT piece in your flow to continue if it errors, then add a branch that triggers on the kind of response the ChatGPT piece returned. Inside that branch, recreate the same ChatGPT piece so it runs only when the first one failed (you can probably add a second fallback the same way). This prevents the issue from breaking the flow.
This would be great. I left the following reply to a similar request here: Retry after fail - #9 by jerbecca
"Yes, this is essential. If I’m building something that I’m charging a client for, I need to know that if there is a failure, the action will eventually be carried out. Zapier and others have similar fail-safes. For Zapier, it’s called “autoreplay” and it will only replay actions that fail due to a Stopped/Errored status.
This is one of the most important updates, in my opinion. Thanks for your consideration."