🛸 I’ve just had a breakthrough in prompt engineering using the o1 model. It might be the most transformative thing I’ve seen since the field began.

For the past few months, I’ve been struggling with a deeply complex analysis system for a pharmaceutical client, dealing with massive amounts of data. This wasn’t just about processing; it was about uncovering nuanced relationships and errors hidden in interconnected data. A true needle-in-a-haystack problem.

No matter how I approached it, traditional language-based prompts couldn’t handle the complexity, and conventional applied AI couldn’t find the right answer either. The AI struggled to manage the intricate relationships, dependencies, and conditions buried in the data. Results were inconsistent, and the cost per analysis was too high, around $15 per document. The system just couldn’t scale.

Then I shifted my approach entirely. Instead of describing the problem in natural language, I used a symbolic math framework, using mathematical symbols and structures to define the relationships, rules, and constraints within a series of complex problems. Basically, I broke the problem down algebraically, representing the data as sets and relationships. For example, instead of saying “identify roles for each person based on their linked documents,” I would define: “Each person belongs to a set, and their role is determined by specific rules tied to their associated documents.”

Prompt Example:
Traditional Prompt: "Identify all personnel roles and check their document statuses."
Symbolic Math Prompt: Defines sets (e.g., \( P \) for persons), parameters (e.g., \( IsPI(p) \)), and constraints (e.g., if \( IsPI(p)=1 \), then \( R_p = \text{PI} \)) to systematically determine roles and statuses.

By structuring the task this way, the AI could reason more like a mathematician than a linguist, eliminating ambiguity and delivering consistent, precise outputs. The impact was immediate.
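The prompt example above can be sketched in code. This is a minimal, hypothetical Python helper, not the author's actual tooling; the set and parameter names (\( P \), \( IsPI(p) \), \( R_p \)) simply mirror the example, and the role rule is an assumption for illustration:

```python
# Hypothetical sketch of a "symbolic math prompt" builder: instead of a
# natural-language instruction, the prompt defines sets, parameters, and
# constraints, and asks the model to reason over them.

def symbolic_role_prompt(persons, is_pi):
    """Build a prompt that states roles as set membership plus rules."""
    lines = [
        "Sets:",
        f"  P = {{{', '.join(persons)}}}  # all personnel",
        "Parameters:",
    ]
    for p in persons:
        lines.append(f"  IsPI({p}) = {is_pi[p]}")
    lines += [
        "Constraints:",
        "  For each p in P: if IsPI(p) = 1 then R_p = PI, else R_p = SubInvestigator",
        "Task:",
        "  Determine R_p for every p in P and flag any p with missing document statuses.",
    ]
    return "\n".join(lines)

# The resulting string would be sent to the model as the prompt.
prompt = symbolic_role_prompt(["alice", "bob"], {"alice": 1, "bob": 0})
print(prompt)
```

The point is only the shape of the prompt: explicit sets, parameters, and constraints rather than a prose request, so the model has no ambiguity about how a role is derived.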
Costs dropped to $1 per document (update: now $0.22), and the AI performed with a level of accuracy and consistency that felt almost effortless.

What’s more, this approach doesn’t just solve one problem; it opens the door to tackling any task with deeply interconnected data. Think financial modeling, health diagnostics, or systems involving graphs and hypergraphs. If your problem requires navigating complex relationships and finding granular insights, this could be a game-changer.

This breakthrough has completely reframed how I think about AI and complex reasoning. Combining symbolic math with the reflective power of modern models feels like the start of something transformative, not just for this project but for entire industries.

You can use my little app to create your own Symbolic Prompts: https://lnkd.in/gdyZ6v5R
Here's the source code... 'cause I'm a nice guy. https://github.com/ruvnet/symbolic-scribe
Makes sense. Programming languages can be seen as structured notations over underlying mathematical concepts, designed to be more accessible to human reasoning. An LLM, when asked to generate or reason about code, leverages patterns similar to those a human programmer applies, just at a massive scale thanks to its training data. By conceptualizing the LLM as another layer of abstraction—much like any other high-level programming language—you’re essentially treating it as a tool to translate your intended logic into functional output. This perspective removes perceived barriers, letting you and the LLM focus on problem-solving rather than getting bogged down in syntactic details.
Neuro-symbolic programming is taking shape rapidly; some of the ontologies emerging will redefine the next wave of understanding.
Reuven Cohen I love what you’ve stumbled upon - and thank you for sharing so promptly after the release! (Yes, I DID do that!) I had a client years ago - very ‘original’ thinkers in the hedge fund industry - who used symbolic math to explain a great many things in work, life, and people. Such a fascinating discovery, and I can’t wait to mess around with it as well. Question: did you do any riffing with the model initially to co-create this, or just dive in with your own thing? I could see instances where the model could help ‘say what it wants to see’…
Reuven Cohen you should check out my startup, we are doing that already with programming.
So to dumb it down, Reuven: turn complex problems into if-statements or decision trees? Or, for the developers among us, use structured notations?
Missing something... I agree with the other folks: this sounds less like an LLM breakthrough and more like a reversion to classical computing. Mathematica ca. 2000, with some convenient language abstractions and an insanely powerful data ingest/ETL layer.
Cool! Which o1? The poor or rich one ? :)
What spoke to me most here is that you found a way to make the implicit explicit: showing on a screen (and therefore in a database and program) a direct relationship that our minds would normally make, but that is extremely hard to capture through language.

I'm about to start working with a large influencer platform, and one of the problems I've had to think through is how they can improve the advice they give to brands and Creators on what to post, and when, to maximize campaign results. I realized that creating a synthetic audience that mirrors the Creator's real audience, and testing creative out on that audience, would be best. However, how do you build that audience efficiently and cost-effectively? How do you uncover hidden relationships so you avoid confusing correlation with causation?

I realized I'd have to algebraically represent the audience, its groupings, and its relationships, based on past behaviour. Then you could 'post' to that synthetic audience with variations on image, caption, and timing, see what you get, and make adjustments. That way the Creator, brand, and platform all win.

Your math is going to be much more elegant than mine. However, I love that people are thinking on the same wavelength.
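The "algebraic audience" idea in this comment can be sketched as sets plus simple rules. A hedged illustration only: the segment names, preferences, and member IDs below are invented for the example, not any real platform data:

```python
# Sketch: a synthetic audience as named segments (sets of member IDs)
# with explicit engagement rules, so post variants can be "tested"
# against it algebraically rather than described in prose.

audience = {
    "early_risers": {1, 2, 3},   # members active in the morning
    "video_fans": {2, 3, 4},     # members who prefer video posts
}

# Each segment engages only when the post matches all of its preferences.
preferences = {
    "early_risers": {"hour": 8},
    "video_fans": {"format": "video"},
}

def reached(post):
    """Return the union of members in every segment the post satisfies."""
    members = set()
    for segment, prefs in preferences.items():
        if all(post.get(key) == value for key, value in prefs.items()):
            members |= audience[segment]
    return members

# A video posted at 8am reaches the union of both segments.
print(reached({"format": "video", "hour": 8}))
```

Varying image, caption, or timing then becomes varying the `post` dict and comparing the reached sets, which is the "post to the synthetic audience and see what you get" loop described above.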
My follow-up post: The art of the possible is defined by how far we’re willing to push. In a coaching session, I explored building the most complex system in the shortest time. Leveraging OpenAI’s o1 Pro and Mini models alongside Google’s Willow quantum architecture whitepapers, we tackled a pressing challenge: creating a quantum-based cryptocurrency immune to quantum decryption. https://www.linkedin.com/posts/reuvencohen_the-art-of-the-possible-is-defined-by-how-activity-7272287754749665280-EmqU?utm_source=share&utm_medium=member_ios