ChatGPT Instruct Model - Possible Bug!


I saw that the new gpt-3.5-turbo-instruct model was listed in the ChatGPT piece, so I was messing around to see how it fared. However, on running the sample data test, I get the following error each time:

  "token": 404,
  "error": {
    "message": "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?",
    "type": "invalid_request_error",
    "param": "model",
    "code": null

I assume this is a bug or something not set up correctly in Activepieces. It looks like it is calling the wrong API endpoint.

Kind regards

Was going to start a topic about this. Hope this can be solved asap.


Thank you @ahwork20230915

Hello @GunnerJnr!

I opened an issue on GitHub for this; we will look into it from there and notify you once it is solved.

Thank you. That is greatly appreciated.

Hey all, this is not an Activepieces bug. According to the models page in OpenAI’s docs, this model is only compatible with the legacy text completions endpoint, not the chat endpoint.
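To make the distinction concrete, here is a minimal sketch (not Activepieces code; the helper name and model lists are illustrative assumptions) of routing a model name to the correct OpenAI endpoint, which is exactly where the 404 above comes from:

```python
# Chat models go to /v1/chat/completions; instruct/completion models
# like gpt-3.5-turbo-instruct must go to the legacy /v1/completions.
CHAT_MODELS = {"gpt-3.5-turbo", "gpt-4"}
COMPLETION_MODELS = {"gpt-3.5-turbo-instruct"}

def endpoint_for(model: str) -> str:
    """Return the OpenAI API endpoint that accepts the given model."""
    if model in CHAT_MODELS:
        return "https://api.openai.com/v1/chat/completions"
    if model in COMPLETION_MODELS:
        return "https://api.openai.com/v1/completions"
    raise ValueError(f"unknown model: {model}")

print(endpoint_for("gpt-3.5-turbo-instruct"))
# Sending gpt-3.5-turbo-instruct to the chat endpoint instead is what
# triggers the "not a chat model" invalid_request_error shown above.
```

So the fix on the Activepieces side would be either a separate completions action or endpoint routing by model, rather than a change on OpenAI's end.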

I moved this topic to Discussions to discuss whether we’ll add a new action for completions or wait for OpenAI to make it compatible with Chat.

I do hope we take the initiative instead of waiting, provided the task is not too time-consuming.

Hi @GunnerJnr and @ahwork20230915 It’s possible to use gpt-3.5-turbo-instruct within the OpenRouter piece instead of OpenAI. Here is a screenshot:



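For reference, a rough sketch of the request the OpenRouter piece would send for this model. OpenRouter exposes an OpenAI-compatible API; the base URL and the `openai/gpt-3.5-turbo-instruct` model slug are assumptions from OpenRouter's docs, and no network call is made here, we only build the payload:

```python
import json

# OpenRouter's OpenAI-compatible completions endpoint (assumed base URL).
url = "https://openrouter.ai/api/v1/completions"

payload = {
    "model": "openai/gpt-3.5-turbo-instruct",  # assumed OpenRouter slug
    "prompt": "Say hello",
    "max_tokens": 16,
}

# An actual call would POST this JSON with an "Authorization: Bearer <key>" header.
print(url)
print(json.dumps(payload))
```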
Awesome, thanks man.
