Using Claude Opus/Sonnet for writing up ProRealTime code

Viewing 8 posts - 1 through 8 (of 8 total)
    #259170

    Hi guys, Emma here from the U.K., starting out developing code for some automated trading to back-test a particular strategy. Has anyone any experience of using Claude Opus/Sonnet to generate code from a detailed description of a trade setup, through to execution? Is this really something that's possible?

    #259174
    Nicolas
    Keymaster
    Master

    Hi Emma, welcome to the community!

    Great question and yes, using Claude, ChatGPT or similar AI tools to generate ProBuilder code is absolutely something people are doing, and it can give you a decent starting point, especially for straightforward logic like basic indicator crossovers or simple entry and exit conditions.
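    To make "straightforward logic" concrete: a simple moving-average crossover is the kind of thing these tools usually get right. A minimal ProBuilder sketch (the 20/50 periods are arbitrary illustration values, and any AI-generated variant of this should still be checked in the ProRealTime editor):

    ```
    // Simple moving average crossover indicator
    // Periods 20 and 50 are arbitrary; adjust to your own setup
    fastMA = Average[20](close)
    slowMA = Average[50](close)

    // Signal flags: 1 on the bar where the cross occurs, 0 otherwise
    bullishCross = fastMA crosses over slowMA
    bearishCross = fastMA crosses under slowMA

    RETURN fastMA AS "MA 20", slowMA AS "MA 50"
    ```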

    That said, it is never fully reliable on its own, and here is why. General purpose AI models have a broad but surface level knowledge of ProRealTime and ProBuilder. They tend to struggle with the more subtle aspects of the platform, things like how specific order types behave, broker side constraints, the way certain functions interact with live data versus historical data, or the precise syntax required for some of the less common instructions. You can end up with code that looks right but throws errors, or worse, code that runs but does not do what you actually intended. In a backtesting context that can give you misleading results, which is the last thing you want when you are trying to validate a real strategy.

    So the best approach right now is to use AI as a rough drafting tool if you like, but always bring the result here to the forum to get it reviewed by people who actually know the platform inside out 😉

    Post your trade setup description, share what you are trying to achieve, and the community will help you get to something solid and tested.

    On a related note, we are currently building our own dedicated AI assistant specifically trained on ProRealTime programming, and it will be a completely different experience compared to generic models. The reason for that is simple: ProRealCode is the reference source that most of those general AI models have drawn their ProRealTime knowledge from in the first place. Our own tool will go much deeper and stay accurate on the specifics that matter. We will share more on that soon.

    In the meantime, feel free to post your strategy details and we will get you pointed in the right direction.

    #259187

    OK, this is a fabulous response, and it addresses my suspicions, so thank you very much. I've been a bit lazy when it comes to backtesting, as it's a full process in itself, and although I don't want to cut corners, it will be wise to at least have usable indicators in place to trigger setups. Yes, I also see how the syntax is dealing (or rather not dealing) with the Claude-generated code, which is why I have stripped it back to creating an indicator only rather than a full strategy setup... the AI simply couldn't cope with the executions and was firing up far too many errors. So thank you for this, and I shall certainly flick the indicator over once it's working, for it to be tested by the community. Regards.

    #259191
    Iván González
    Moderator
    Master

    Hi Emma!

    As Nicolas rightly says, language models have a superficial knowledge of ProBuilder and you have to be careful because they mix languages… That said, here's my opinion on these models.

    I’ve thoroughly tested ChatGPT, Gemini, and Claude, using the top versions of each. The best by far is Claude, specifically the Opus 4.6 model.

    I hope this helps. We’re here to help 🙂

    #259277

    Thank you, I've started working with Claude. I'm feeling I'm about to level up, after 7,000 hours already studying and participating! Thanks for the support, guys.

    #259299
    Quino
    Participant
    Senior

    Thank you for opening this discussion on the use of AI with ProRealTime.

    AI is a fantastic tool for answering our questions, and I personally use it from time to time for coding. Regarding previous discussions, I would like to contribute the following points:

    • Dedicated Section: Perhaps in the future, we should have a discussion category specifically dedicated to AI in ProRealCode. This would prevent interesting conversations from getting lost among other topics.

    • Code Mastery: I believe we must always remain masters of our own code. AI should not be used to code in place of the user. Mastering the code allows the user to modify and structure it so that it can be easily and properly integrated into other indicators and/or screeners.

    • Value Add: Using AI to manage code based on standard indicators (MACD, RSI, etc.) is of limited interest, as the ProRealCode library already contains many examples that meet these needs. However, AI is excellent for generating a coding draft for new, original algorithmic ideas that might exceed one's own implementation knowledge—such as using specific mathematical formulas or principles.

    • Prompt Precision: When making a request to an AI, I recommend describing the requirements as precisely as possible from the very start. It is best to avoid multiple iterations with new criteria or changes to the initial scope, as this risks generating complex and potentially inappropriate code.

    (Translated from French with Gemini)
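    As an illustration of that "original formula" use case, the kind of draft one might ask an AI for is something like a Kaufman Efficiency Ratio, which is not a standard built-in indicator. A hedged ProBuilder sketch (the 10-bar period is an arbitrary illustration value, and any such draft would still merit community review before use):

    ```
    // Kaufman Efficiency Ratio: net price change over the window,
    // divided by the sum of absolute bar-to-bar changes in that window
    periods = 10 // arbitrary illustration value
    netChange = ABS(close - close[periods])
    noise = summation[periods](ABS(close - close[1]))
    IF noise > 0 THEN
      efficiencyRatio = netChange / noise
    ELSE
      efficiencyRatio = 0
    ENDIF
    RETURN efficiencyRatio AS "Efficiency Ratio"
    ```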

    #259401
    Gianluca
    Participant
    Master

    Maybe the PRC staff should create a proper guide to instruct LLM models.

    I've tried giving them the only two guides available (ProOrder and ProBuilder) with very poor results, since neither guide covers 100% of the language.

    #259402
    Nicolas
    Keymaster
    Master

    Ha, a guide to instruct LLM models on ProBuilder… funny you mention that, because strangely enough, that idea had never once crossed our minds here. Never. Not even once. 😄


    Seriously though, you’re touching on something that is genuinely far more complex than it sounds, and we’ve been down that road more than once.


    The core problem is this: you cannot simply “fix” an existing model. These models were pre-trained on billions of tokens of data across the entire internet, and whatever they learned about ProBuilder and ProRealTime, they learned almost exclusively from this very website. Every indicator, every strategy, every forum thread where someone tried something, got it wrong, got corrected, tried again: that’s the raw material. Which means the model didn’t just learn the correct patterns. It learned all the trial and error too, all the half-working code, all the “this almost works but” moments that are part of any real community. And now it falls on us to untangle that.


    We’ve explored pretty much every angle at this point: fine-tuning, LoRA adapters, embedded system prompts, skills layers, RAG pipelines; each approach taught us something, and each one hit a wall in its own way. The current direction we’re investigating is MCP (Model Context Protocol), combined with real-time syntax validation and post-generation correction. The idea being that instead of trying to teach the model ProBuilder upfront, you let it generate, then intercept the output, validate it against actual ProBuilder rules, and correct it before it reaches the user. Sounds elegant in theory. In practice it is extremely token-heavy, and it requires exhaustive test coverage of every incorrect pattern current models produce, without even knowing what new mistakes future model versions will introduce. It’s a moving target by definition.


    And that’s before we even get to the prompt side of things. What happens when a user from Italy, or Brazil, or Japan describes their trade setup using their own trading vocabulary, their own terminology, their own mental model of what a strategy is? How do you validate that the model understood the intent correctly, not just the words? There’s no clean answer to that yet.


    Which brings me to a point that doesn’t get said enough: the companies behind these models have collectively raised hundreds of billions of dollars in funding. We are a small team. And yet here we are, spending our time doing the cleanup work they never bothered to do: correcting the mistakes their models make with our own content, content that was built by this community over years of free contributions, and that those models quietly absorbed to train on. The knowledge they have about ProRealTime came from here. And now it’s on us to fix what they got wrong with it, so that their product works better. You really can’t make this stuff up.


    Anyway. AI is changing the world, they said. And indeed it is, it’s just that for some of us, the change mostly looks like unpaid quality assurance for the tech industry. Progress has never felt so exciting.


    On a more positive note, I do hope to be able to offer a proper ProBuilder code generation feature to the community before too long. I won’t make any promises on timing, and I’ll be upfront now: it is unlikely to be entirely free to run given the infrastructure costs involved, and no matter how much work goes into it, it will never be 100% accurate. That’s not a disclaimer, that’s just the honest reality of where the technology is today. But it will be meaningfully better than what you get from a general purpose model and that, at least, I’m fairly confident about.

Summary

This topic contains 7 replies, has 5 voices, and was last updated by Nicolas 1 month, 2 weeks ago.

Topic Details
Forum: ProBuilder: Indicators & Custom Tools
Language: English
Started: 03/20/2026
Status: Active
Attachments: No files