Hi team,
I am constantly receiving the same error with ChatGPT: my prompt is being truncated. Is there a limit on the number of characters sent to ChatGPT via the prompt?
Could you please have a look? I do need an “unlimited length”.
Also, when running it manually, I sometimes have success and sometimes get a timeout error.
The response from ChatGPT, by the way, is “Undefined”, which makes sense, as nothing reached OpenAI.
Hi @Bram
The (truncated) part is coming from the Activepieces user interface; it’s mainly there to avoid loading huge text in the browser so it doesn’t crash. It’s not the issue.
Can you click on the enlarge button? It has a larger limit. Do you still see the truncated part?
Hi @abuaboud,
Thanks for your reply. I’ve enlarged it and the truncated marker didn’t show up, so that is a good thing: I saw everything.
But I can’t see any other errors, which makes sense. Do you know what I should change? Here is a screenshot. I also noticed that the duration is just 307 ms… which is 0.307 seconds…
@abuaboud I think I found the error. I copied the prompt, which included all the data from the dynamic fields too, and tried it again in a new Flow; the error below came up. The total number of tokens exceeds the 8192-token limit… But those tokens are supposed to be for the output, right? Not the input? Or is it the same limit for both?
Is there a way we could tackle this so more context could be allowed in the prompt?
I will also try to see if I can solve this issue some other way, but could you please confirm this is the issue?
{"token":400,"error":{"message":"This model's maximum context length is 8192 tokens. However, you requested 9897 tokens (7849 in the messages, 2048 in the completion). Please reduce the length of the messages or completion.","type":"invalid_request_error","param":"messages","code":"context_length_exceeded"}}
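To answer the input-vs-output question: the context window covers both, so the prompt tokens plus the requested completion tokens must fit inside the model's limit. A minimal sketch of that budget check, using the numbers straight from the error above (the `fits` helper is just for illustration, not part of any API):

```python
# Context-window budget: the prompt (messages) and the requested
# completion share the same token limit.
CONTEXT_LIMIT = 8192      # model's maximum context length (from the error)
prompt_tokens = 7849      # tokens in the messages (from the error)
max_completion = 2048     # tokens reserved for the completion (from the error)

def fits(prompt_tokens, max_completion, limit=CONTEXT_LIMIT):
    """Return True if the request fits inside the context window."""
    return prompt_tokens + max_completion <= limit

total = prompt_tokens + max_completion
print(total)                                 # 9897, matching the error message
print(fits(prompt_tokens, max_completion))   # False: 9897 > 8192

# One fix: shrink the completion budget to whatever space is left.
adjusted_completion = CONTEXT_LIMIT - prompt_tokens
print(adjusted_completion)                   # 343 tokens left for the reply
```

So the 2048 completion tokens are reserved up front, and the 7849 prompt tokens leave too little room, which is exactly what the error says.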
I think I have the answer to my question… I will need to lower my token count.
There are models which can handle more tokens, but they are not made available via the API.
https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4
Not sure GPT-3.5 will give me the quality I am looking for, so I will lower the tokens.
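One way to lower the token count is to trim the prompt before sending it. As a rough rule of thumb, English text averages about four characters per token; for exact counts you would use OpenAI's tiktoken library. A sketch under that heuristic (the `trim_prompt` helper and the 4-chars-per-token figure are assumptions, not anything from the Activepieces or OpenAI APIs):

```python
# Rough prompt trimming: cap the prompt length so the estimated
# prompt tokens leave room for the completion budget.
CONTEXT_LIMIT = 8192       # model's maximum context length
COMPLETION_BUDGET = 2048   # tokens reserved for the completion
CHARS_PER_TOKEN = 4        # rough average for English text

def trim_prompt(text, limit=CONTEXT_LIMIT, completion=COMPLETION_BUDGET):
    """Truncate text so its estimated token count fits the context window."""
    max_prompt_tokens = limit - completion          # 6144 tokens for the prompt
    max_chars = max_prompt_tokens * CHARS_PER_TOKEN # ~24576 characters
    return text[:max_chars]

long_prompt = "x" * 50_000
trimmed = trim_prompt(long_prompt)
print(len(trimmed))   # 24576 characters, roughly 6144 estimated tokens
```

This is only an estimate; a prompt full of code or non-English text tokenizes less efficiently, so an exact tokenizer is the safer choice when you are close to the limit.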
Maybe it would be good to have the error message show up in the failed tasks, indicating that it has to do with the token limit, as it was quite a search to find the error.
The error is fixed now, as seen in Issue #2899.
Now you can see the error instead of “undefined”.