Bug - GPT-5 - sometimes showing blank output

I've found that GPT-5 sometimes returns blank output.

When I checked the console, the error was:

{"level":30,"time":1754842293855,"pid":10,"hostname":"048f18435ebc","msg":"Received message"}
{"level":30,"time":1754842293855,"pid":10,"hostname":"048f18435ebc","msg":"Session not found"}
{
  message: "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
  type: 'invalid_request_error',
  param: 'max_tokens',
  code: 'unsupported_parameter'
}
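The error itself points at the fix: newer reasoning models reject the legacy `max_tokens` parameter on the Chat Completions API and expect `max_completion_tokens` instead. A minimal sketch of what a compatibility shim could look like (the `build_request` helper and the model-prefix list are illustrative assumptions, not actual Activepieces code):

```python
# Illustrative sketch of the parameter change described in the error above.
# Newer reasoning models reject `max_tokens` on the Chat Completions API
# and require `max_completion_tokens` instead; older chat models still
# accept `max_tokens`. The prefix list below is an assumption for the demo.
NEEDS_MAX_COMPLETION_TOKENS = ("o1", "o3", "gpt-5")

def build_request(model: str, prompt: str, limit: int) -> dict:
    """Build a Chat Completions payload, choosing the token-limit
    parameter name the target model accepts."""
    params = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if model.startswith(NEEDS_MAX_COMPLETION_TOKENS):
        params["max_completion_tokens"] = limit  # new-style models
    else:
        params["max_tokens"] = limit  # legacy chat models
    return params
```

With a shim like this, `build_request("gpt-5", ..., 2048)` would send `max_completion_tokens` while `build_request("gpt-4o", ..., 2048)` keeps `max_tokens`, which is presumably what the piece has to do internally to support both model families.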

So this looks like a bug. Could someone please review? I think GPT-5 has changed the way it accepts parameters, so it may be clashing with what we have used in the past.


Which version of the OpenAI piece does the flow use?

@kishanprmr In my admin area it says it is:

OpenAI
@activepieces/piece-openai
0.5.3

Hi @andish12
Can you try replacing the OpenAI action (or creating a new one) in your flow? It should work then. Let me know if the issue persists.

I have seen this issue in a newly created action too.

Likewise - since the GPT-5 update, our OpenAI pieces have been returning blank outputs.

We have tried re-creating the flow and disconnecting/reconnecting the API, but we still have the same issue. We primarily use the "Ask GPT" and "Audio Transcribe" pieces.

The bug still exists.

We released version 0.6.0 of the OpenAI piece. Could you try it?

Screenshot below

If this isn't fixed, many of us, not only me, will have to leave AP. It's a very bad situation for our workflow plans; we are afraid of using it because there is no proper support for the latest OpenAI models.

Can you share which action you are using and with what values? What Activepieces version are you on?

We recommend using Universal AI pieces like Text AI and Utility AI.

I have no idea why, but your Universal AI piece seems to take longer to produce output. I'm not sure if something is causing that or if it's just a wrong observation on my part.

Also, there seems to be an issue with the preview option: when I tested, the preview was empty, but at run time it went through properly.

It seems there is a bug in the preview part of the OpenAI GPT-5 module.

It happened before as well, and a later version fixed it; as far as I remember, the situation was like this with an older OpenAI piece version a few months back.

The problem here is the output token budget: the limit covers reasoning tokens plus visible output tokens. GPT-5 models are heavy on reasoning, so I recommend either increasing the max tokens limit or using a different model.

This was reported by many users on the OpenAI forum: What is going on with the GPT-5 API? - API - OpenAI Developer Community