Hi there Adam.
With regard to the Chatty DeepSeek thing… this is interesting.
I didn't realise that you can create a model (in this instance using Ollama) and… change how it behaves!
So, following this video example, you could create a DeepSeek R1 model that follows the Claude system prompt, which is passed to the model with each request.
Wow.
So bye-bye Chatty R1, hello Claude-style replies… running on the R1 model.
… or… you COULD make it speak PiRaTe!
Not sure how that would work out for coding, but…
This video shows how to do it…
And it's that simple.
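If I've understood the video right, the whole thing boils down to a small Modelfile. Here's a minimal sketch, assuming deepseek-r1 has already been pulled from the Ollama library (the name claude-style-r1 is just my own example, not from the video):

```
# Modelfile — minimal sketch (names here are my examples, not from the video)
# Assumes the base model is already pulled: ollama pull deepseek-r1
FROM deepseek-r1

# The SYSTEM text is sent along with every request;
# paste the Claude system prompt (the <claude_info> text mentioned below) between the quotes.
SYSTEM """
(paste the Claude system prompt here)
"""
```

Then build it and chat with it:

```
ollama create claude-style-r1 -f ./Modelfile
ollama run claude-style-r1
```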
I might try that later tonight.
And here is a link to a smaller model, Llama 3.1, using the Claude Sonnet prompt, which another user has created and uploaded based on this video.
Note: Llama is currently at 3.2 and this user-created model is 9 months old. Also, the Anthropic model isn't open source (apparently; correct me if I'm wrong).
So creating an updated version of this on 3.2, instead of the older 3.1 model I was using (not much at all lately), should be easy.
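If I've got the Modelfile format right, the update is basically a one-line swap of the base model, then a rebuild (assuming that user's Modelfile looks like the sketch above):

```
# In that user's Modelfile, change the base model line (was: FROM llama3.1)
FROM llama3.2
```

…then re-run ollama create under a new name, same as above.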
Note: the text to copy for this can be found on the Anthropic site here, within the <claude_info> … </claude_info> section.
So, based on what you experienced, if you could find the system prompts used in those two models you favour, you could do the same and create your own personality, or styled model, of DeepSeek R1, to see how it delivers code to be parsed by Pinegrow.
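As a rough way to test that (the model name and prompt below are just my examples), you could fire a one-off prompt at the custom model and see whether the output is something Pinegrow will open cleanly:

```
# One-off prompt against the custom model; the reply should be plain markup, not chat
ollama run claude-style-r1 "Write a complete, self-contained index.html with a hero section and inline CSS. Output only the code, no explanation."
```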
Challenge accepted?

PS.
I just tried Devstral's own online chat agent, asking about its own system prompt…
https://chat.mistral.ai/chat
… it had no idea…
Suspicious!
It DID, however, have the exact same info for… Claude… so yeah, maybe don't bother.