Ha, a guide to instruct LLMs on ProBuilder… funny you mention that, because strangely enough, that idea had never once crossed our minds here. Never. Not even once. 😄
Seriously though, you’re touching on something that is genuinely far more complex than it sounds, and we’ve been down that road more than once.
The core problem is this: you cannot simply “fix” an existing model. These models were pre-trained on billions of tokens of data across the entire internet, and whatever they learned about ProBuilder and ProRealTime, they learned almost exclusively from this very website. Every indicator, every strategy, every forum thread where someone tried something, got it wrong, got corrected, tried again: that’s the raw material. Which means the model didn’t just learn the correct patterns. It learned all the trial and error too, all the half-working code, all the “this almost works but” moments that are part of any real community. And now it falls on us to untangle that.
We’ve explored pretty much every angle at this point. Fine-tuning, LoRA adapters, embedded system prompts, skills layers, RAG pipelines; each approach taught us something, and each one hit a wall in its own way. The current direction we’re investigating is MCP (Model Context Protocol), combined with real-time syntax validation and post-generation correction. The idea being that instead of trying to teach the model ProBuilder upfront, you let it generate, then intercept the output, validate it against actual ProBuilder rules, and correct it before it reaches the user. Sounds elegant in theory. In practice it is extremely token-heavy, and it requires exhaustive test coverage of every incorrect pattern current models produce, without even knowing what new mistakes future model versions will introduce. It’s a moving target by definition.
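To make the generate → intercept → validate → correct loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `validate_probuilder` stands in for a real ProBuilder rule engine (it only checks two illustrative rules), and `model_generate` stands in for whatever LLM call actually produces the code. The retry loop is also where the token cost comes from: every correction pass is another full model call.

```python
import re
from typing import Callable

def validate_probuilder(code: str) -> list[str]:
    """Toy stand-in for a real ProBuilder rule engine.

    Checks two illustrative rules only: IF/ENDIF blocks must be
    balanced, and the code must contain a RETURN statement.
    """
    up = code.upper()
    errors = []
    if len(re.findall(r"\bIF\b", up)) != len(re.findall(r"\bENDIF\b", up)):
        errors.append("unbalanced IF/ENDIF blocks")
    if not re.search(r"\bRETURN\b", up):
        errors.append("missing RETURN statement")
    return errors

def generate_with_validation(
    prompt: str,
    model_generate: Callable[[str], str],
    max_retries: int = 3,
) -> tuple[str, list[str]]:
    """Generate, then intercept and validate; re-prompt with the
    validator's findings until the code passes or retries run out."""
    code = model_generate(prompt)
    for _ in range(max_retries):
        errors = validate_probuilder(code)
        if not errors:
            return code, []
        # Feed the errors back to the model as a correction request.
        code = model_generate(
            prompt
            + "\n\nThe previous attempt had these errors, fix them: "
            + "; ".join(errors)
        )
    return code, validate_probuilder(code)

# Demo with a stubbed model: first attempt is invalid, second is fixed.
attempts = iter([
    "IF close > open THEN\nbuy = 1",                     # invalid
    "IF close > open THEN\nbuy = 1\nENDIF\nRETURN buy",  # valid
])
code, errors = generate_with_validation(
    "go long when close is above open", lambda p: next(attempts)
)
```

The hard part this sketch glosses over is exactly what the post describes: the real rule set has to cover every wrong pattern the models actually emit, and that list changes with every model release.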
And that’s before we even get to the prompt side of things. What happens when a user from Italy, or Brazil, or Japan describes their trade setup using their own trading vocabulary, their own terminology, their own mental model of what a strategy is? How do you validate that the model understood the intent correctly, not just the words? There’s no clean answer to that yet.
Which brings me to a point that doesn’t get said enough: the companies behind these models have collectively raised hundreds of billions of dollars in funding. We are a small team. And yet here we are, spending our time doing the cleanup work they never bothered to do: correcting the mistakes their models make with our own content, content that was built by this community over years of free contributions and that those models quietly absorbed as training data. The knowledge they have about ProRealTime came from here. And now it’s on us to fix what they got wrong with it, so that their product works better. You really can’t make this stuff up.
Anyway. AI is changing the world, they said. And indeed it is, it’s just that for some of us, the change mostly looks like unpaid quality assurance for the tech industry. Progress has never felt so exciting.
On a more positive note, I do hope to be able to offer a proper ProBuilder code generation feature to the community before too long. I won’t make any promises on timing, and I’ll be upfront now: it is unlikely to be entirely free to run given the infrastructure costs involved, and no matter how much work goes into it, it will never be 100% accurate. That’s not a disclaimer, that’s just the honest reality of where the technology is today. But it will be meaningfully better than what you get from a general-purpose model, and that, at least, I’m fairly confident about.