{"level":30,"time":1754842293855,"pid":10,"hostname":"048f18435ebc","msg":"Received message"}
{"level":30,"time":1754842293855,"pid":10,"hostname":"048f18435ebc","msg":"Session not found"}
{
  message: "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
  type: 'invalid_request_error',
  param: 'max_tokens',
  code: 'unsupported_parameter'
}
So this looks like a bug; could someone please review it? It seems GPT-5 changed the way it accepts parameters, which may be clashing with what was used previously.
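For anyone hitting this in their own integrations, here is a minimal sketch of the workaround. Only the two parameter names come from the error message above; the helper function and the model-prefix check are my own assumptions, not Activepieces code:

```python
def build_completion_params(model: str, prompt: str, limit: int) -> dict:
    """Build chat-completion kwargs, picking the token-limit parameter
    name the model expects.

    Newer reasoning models (GPT-5 among them, per the error above)
    reject `max_tokens` with code `unsupported_parameter` and expect
    `max_completion_tokens` instead; older chat models still take
    `max_tokens`. The prefix list below is illustrative, not exhaustive.
    """
    params = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if model.startswith(("gpt-5", "o1", "o3")):
        params["max_completion_tokens"] = limit
    else:
        params["max_tokens"] = limit
    return params
```

The resulting dict can then be passed to the SDK call (e.g. `client.chat.completions.create(**params)`), so the same flow works against both old and new models.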
We have tried re-creating the flow and disconnecting/reconnecting the API, but the issue persists. We primarily use the "Ask GPT" and "Audio Transcribe" pieces.
If this isn't fixed, many of us will have to leave AP, not just me. It's a horrible situation for our workflow-creation plans; we're afraid to keep using it without proper support for the latest OpenAI model.
I'm not sure why, but your Universal AI piece also seems to take longer to produce output. There may be something causing that, or it may just be my observation.
There also seems to be an issue with the preview option: when I ran a preview, the output was empty, but at run time it worked properly.
It seems there is a bug in the preview part of the OpenAI GPT-5 module. As far as I remember, the same thing happened a few months back with an older OpenAI version, and a later release fixed it.
The problem here is the output-token limit: the completion budget covers both reasoning tokens and visible output tokens. GPT-5 models are heavy on reasoning, so I recommend either increasing the max-tokens limit or using a different model.
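To make that budget arithmetic concrete, here is a small sketch (the function name and the example numbers are mine, purely illustrative):

```python
def visible_output_budget(max_completion_tokens: int, reasoning_tokens: int) -> int:
    """Reasoning tokens count against the completion limit, so the
    visible answer only gets whatever budget remains (never negative)."""
    return max(0, max_completion_tokens - reasoning_tokens)

# With a 1000-token limit and 900 tokens spent on reasoning, only
# 100 tokens are left for the visible answer; if reasoning exceeds
# the limit entirely, the visible output can come back empty.
```

That last case, reasoning consuming the whole budget, would also explain seeing an empty result even though the request itself succeeded.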