I have an article creator posting to WordPress, and it works well, except that it sometimes cuts off the article before it’s finished. So I sometimes get articles that suddenly stop in the middle of a sentence.
How can I avoid it?
Hi @Preben - this is interesting.
Let’s inspect this together. How long was the already-produced content (in characters or words) before the cut-off?
Do you suspect length to be the issue, or something else? For example, if one article is cut off at 1,000 words but another successfully produced 2,000 words, then it’s most likely not a capability limit of OpenAI per se.
Another possibility is that the last step didn’t run successfully and therefore didn’t complete the article, perhaps because of OpenAI rate limiting?
Articles cut off (in the middle of a sentence):
819 words = 5793 characters
945 words = 5789 characters
914 words = 6029 characters
I wonder if I have the token setting wrong?
What are the best token settings for the different OpenAI models?
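A quick sanity check on those numbers: a common rule of thumb for English text is roughly 4 characters per token (this is only a heuristic, not an exact tokenizer; a library like tiktoken gives precise counts). A minimal sketch applying it to the character counts reported above:

```python
# Rough sketch: convert the reported character counts to approximate
# token counts using the ~4 characters-per-token rule of thumb.

def estimate_tokens(text: str) -> int:
    """Approximate token count for English text (heuristic, not exact)."""
    return len(text) // 4

# Character counts from the cut-off articles reported above:
for chars in (5793, 5789, 6029):
    print(chars, "chars ~", estimate_tokens("x" * chars), "tokens")
```

All three cut-offs land in a narrow band around 1,450 to 1,500 tokens, which is the kind of clustering you would expect if a fixed max-tokens setting (rather than a model capability) is stopping the completion.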
Can you show me your flow here?
If it is of any help: using the long-form article example you have as a template (with modified instructions and a modified flow to output to a Google Sheet instead of posting to WordPress), it successfully generated the word counts stated below.
Note: I have only tested with 2 articles so far, and these were the results (with no cut-offs). It is worth noting that I am using the largest GPT-4 model, ‘gpt-4-0613’, so I haven’t tried any other models. I suspect your issue could be the token limit: your prompts will use up many tokens, and only what is left over can be used to write the article. I don’t know how this works in ActivePieces with looping, though; perhaps the token budget resets each loop? Perhaps @ashrafsam can clarify? It should also be noted this was only through ‘test flow’ before publishing.
Kind regards.
It should work as expected, and it worked for you @GunnerJnr, so I’d like to see what the flow looks like for @Preben.
On a side note, we recently added the OpenRouter piece that allows you to use the LLM of your choice. As far as I know, Claude AI has the ability to generate longer texts with only one request, so it might be interesting to you!
@GunnerJnr, yes, I am aware of that solution. Unfortunately it prevents me from telling OpenAI to include a link to a certain site, etc., since it would include that link in every loop.
However, I think I have fixed it by entering a number of tokens in that setting; it was empty before.
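For anyone hitting the same problem: the OpenAI API tells you explicitly when a completion was stopped by the token limit, via the `finish_reason` field on the response (`"length"` means it hit max tokens, `"stop"` means it finished naturally). A minimal sketch, using a plain dict that mimics the shape of a chat-completion response rather than a live API call:

```python
# Sketch: detect a token-limit cut-off from an OpenAI-style response dict,
# instead of discovering it in the published article.
# `response` here mimics the JSON shape of a chat-completion response.

def was_truncated(response: dict) -> bool:
    """True if the model stopped because it hit the max-tokens limit."""
    return response["choices"][0]["finish_reason"] == "length"

cut_off = {"choices": [{"finish_reason": "length",
                        "message": {"content": "…text that stops mid-sen"}}]}
finished = {"choices": [{"finish_reason": "stop",
                         "message": {"content": "…a complete article."}}]}

print(was_truncated(cut_off))   # truncated by max tokens
print(was_truncated(finished))  # completed normally
```

If your flow can branch on this, you could retry with a higher token setting, or ask the model to continue, whenever `finish_reason` comes back as `"length"`.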
Now I am just fighting OpenAI’s stubbornness in not following all instructions.
My flow is in essence:
I feed it a keyword, a mention of what I want the article to be about, a URL, the anchor text I want it to use, some embed code, and things like this.
Then:
OpenAI: Write an engaging SEO optimized blog title for (keyword), that will be about (what the article should be about)
OpenAI: Write a blog outline for a blog post with this title (title), about (keyword) and (what the article should be about)
OpenAI: Write a blog post with this outline for a post with this title (title), about … etc. Include a link to (url) with this anchor text (anchor). Include this embed code in the article.
In essence, that is the flow. I have more instructions, but that is basically it.
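The three chained steps above can be sketched as plain Python, with each step’s output fed into the next prompt. Everything here is illustrative: the keyword, topic, URL, and anchor values are made up, and `call_openai` is a stand-in for the real OpenAI step in the flow.

```python
# Illustrative sketch of the title -> outline -> article chain described above.
# `call_openai` is a placeholder for the actual OpenAI step; here it just
# echoes a tag so the data flow between steps is visible.

def call_openai(prompt: str) -> str:
    """Stand-in for an OpenAI completion call (hypothetical)."""
    return f"[output for: {prompt[:60]}...]"

# Hypothetical inputs, matching the kinds of values the flow is fed:
keyword, topic = "garden ponds", "building a small backyard pond"
url, anchor = "https://example.com", "pond liners"

title = call_openai(
    f"Write an engaging SEO optimized blog title for {keyword}, "
    f"that will be about {topic}")
outline = call_openai(
    f"Write a blog outline for a blog post with this title {title}, "
    f"about {keyword} and {topic}")
article = call_openai(
    f"Write a blog post with this outline {outline} for a post with this "
    f"title {title}. Include a link to {url} with this anchor text {anchor}.")
```

Note that the final prompt carries the full title and outline text, so by the last step a sizeable share of the token budget is already spent on the prompt before the article itself is written.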
I am still trying to adjust the prompts etc to get better results.
This is so interesting. Unfortunately, Claude doesn’t allow me to sign up yet. I’m abroad…
Feel free to open a new thread if you need something else, some community members like @thisthatjosh have great experience in prompt engineering for content creation!
This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.