
Without Graph Tech’s Help, Advances in GenAI Aren’t Enough for Real-world Projects

By: Dominik Tomicevic, CEO at Memgraph

Database expert Dominik Tomicevic on why the future of practical enterprise AI lies in the synergy between knowledge graphs and GraphRAG, despite what the headlines on the latest LLM advances claim

If 2024 was the year of ChatGPT's dominance, OpenAI must have been betting 2025 would be a repeat: it unveiled its next play for the market, the o-series, a major leap in reasoning for large language models (LLMs).

Initially, investment in LLM reasoning models seemed to pay off, with commentators praising Sam Altman’s team for using reinforcement learning to curb GenAI’s hallucinations and position LLMs as a reliable foundation for business AI.

But that initial optimism has unraveled. A Chinese firm, DeepSeek, stunned the AI world by releasing an LLM trained to the same level at a fraction of the price and able to run on a laptop. Then came Doubao, an even more cost-effective alternative, further intensifying the upheaval in the LLM and reasoning model landscape.


A reasoning roadblock 

The fallout has been swift: AI chipmakers' stocks have tumbled, and U.S. tech dominance has taken a hit. OpenAI isn't alone; Anthropic's Claude 3.5 Sonnet is also under fire. I don't build LLMs, so I have no stake in this fight. But from my work with customers and developers striving for safe, practical AI, I can say that o1's real issue isn't just training costs (though those are certainly a challenge); it's the illusion that LLMs' longstanding flaws have been fixed.

That matters because that illusion leads down a path to some painful dead ends. Despite all the progress, issues like hallucination remain unresolved. This is why I need to emphasize that, from what I've seen, the future of AI isn't AGI or endlessly scaling LLMs. Instead, it lies in the fusion of LLMs with knowledge graphs, particularly those enhanced by retrieval-augmented generation (GraphRAG).

Hidden code call-outs are not enough

Why? No matter how cheap or efficient, an LLM is fundamentally a fixed, pre-trained model, and retraining one is always costly and impractical. In contrast, knowledge graphs are dynamic, evolving networks of meaning that offer a more adaptable and reliable foundation for reasoning.

Enriching an LLM’s conceptual map with structured, interconnected data using graphs moves it from probabilistic guesswork to precision. This hybrid approach enables true practical reasoning, providing a dependable way to address complex enterprise challenges with clarity, which is something OpenAI-style “reasoning” often falls short of delivering.
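To make that concrete, here is a minimal sketch of the GraphRAG pattern in Python. The tiny in-memory triple store, the entity names, and the helper functions are illustrative assumptions of mine, not any particular product's API; a real deployment would retrieve facts from a graph database instead.

```python
# Minimal GraphRAG sketch: ground an LLM prompt in structured facts
# retrieved from a knowledge graph, instead of letting the model guess.
# The toy triples and helper names below are illustrative assumptions.

# A toy knowledge graph as (subject, relation, object) triples.
GRAPH = [
    ("pump-A7", "located_in", "plant-2"),
    ("pump-A7", "max_pressure_bar", "16"),
    ("plant-2", "operated_by", "night-shift"),
]

def retrieve_facts(entity: str) -> list[str]:
    """Return every triple mentioning the entity, as plain sentences."""
    return [f"{s} {r} {o}" for s, r, o in GRAPH if entity in (s, o)]

def build_prompt(question: str, entity: str) -> str:
    """Inject retrieved facts so the model answers from data, not guesswork."""
    facts = "\n".join(retrieve_facts(entity))
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

# The resulting prompt is what you would send to the LLM of your choice.
print(build_prompt("What is the max pressure of pump-A7?", "pump-A7"))
```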

Why flag this? I know the difference between true reasoning and the tricks LLMs use to emulate it. Model makers are loading their latest creations with shortcuts—Thinking Out Loud (Chain-of-Thought Prompting), Using Examples (Few-Shot Learning), Pretending to Think (Simulated Reasoning), Learning from Others (Synthetic Data), and Fancy Wrapping (Pseudo-Structure).
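To show what a couple of these shortcuts actually amount to, here is a hedged illustration in Python. Both are nothing more than prompt construction (the example prompts are invented); neither touches the model's weights or gives it any new understanding.

```python
# Two of the shortcuts named above, shown as plain prompt construction.
# Neither changes the model; both just reshape the input text.

# Few-shot learning: prepend worked examples so the model imitates the pattern.
few_shot_prompt = """Q: 12 + 7 = ?
A: 19
Q: 30 + 5 = ?
A: 35
Q: 41 + 8 = ?
A:"""

# Chain-of-thought prompting: ask the model to "think out loud" step by step.
cot_prompt = (
    "If 5 shirts take 4 hours to dry in the sun, how long do 20 shirts take? "
    "Think step by step before giving your final answer."
)
```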

These techniques make models appear smarter, and they're more effective than some of the other sleights of hand at play. Take OpenAI: it injects actual code execution when a model detects a calculation in the context window, creating the illusion of reasoning through stagecraft rather than intelligence.

But in the end, these tricks aren’t enough because they don’t solve the core problem: The model doesn’t understand what it’s doing. The major LLM players—OpenAI, DeepSeek, and others—are mistaken when they claim their latest models, like OpenAI’s o-series or DeepSeek’s R1, can now “reason.” This isn’t AGI. It’s just an advanced text predictor.

Does a one-size-fits-all model understand?


If we want AI to be transformative, we must move beyond the notion of reasoning as a one-size-fits-all model.

But isn’t that what the o-series is doing? Aren’t we, as knowledge graph advocates, just following the same playbook? I’d argue no.

While knowledge graphs have solved the classic ChatGPT logic fail—where an LLM struggles to tell you how long to dry five white shirts in the sun—there will always be countless other logical gaps. The difference is that graphs provide a structured foundation for reasoning, rather than masking limitations with clever tricks.

And anyway, what’s needed isn’t an AI that comprehends the world, but one that understands your world, your specific domain. Whether it’s chemical engineering, fertilizer production, blood pressure monitors, or pigment dispersion for paint, AI must function within your corporate information space, not just harvest insights from the public web.

We’ve seen what happens when you force ChatGPT into this role. It fabricates confident but unreliable answers or risks exposing proprietary data to train itself. That’s a fundamental flaw. Tasks like predicting financial trends, managing supply chains, or analyzing domain-specific data require more than surface-level reasoning. 

The reality is that business users need models that provide accurate, explainable answers while operating securely within the walled garden of their corporate infosphere.

Now, consider the training problem. Let’s say you sign a major contract with an LLM provider. Unless they build you a private, dedicated model, it won’t truly grasp your domain without extensive training on your data. But here’s the catch: the moment new data arrives, that training is outdated—forcing yet another costly retraining cycle.


More than one way to skin the AI cat

That's simply not practical, no matter how customized or secure your version of o1 (or o2, o3, o4) might be. But with a knowledge graph, especially one powered by high-performance dynamic algorithms, you don't need to keep retraining the model; you just update the context it operates within.

For example, o1 and its rivals can recognize when a question involves arithmetic; they see "How many x?" But you don't care about generic x; you want the model to understand your data, like "How many servers are in our AWS account?" A knowledge graph ensures it can reason over that specific information reliably, without needing constant retraining.

Using a graph-based approach, users can query their LLM against private data, something even the best standalone LLM can't do (nor would you want it to, given the security risks). A secure, continuously updated knowledge graph can supervise and ground the LLM, ensuring that when you update the record, the answers stay accurate.
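As a rough sketch of that pattern, the graph answers the specific question and the LLM only phrases the verified result. This assumes a Cypher-compatible graph database (Memgraph, for instance) reachable over the Bolt protocol through the standard neo4j Python driver; the URI, credentials, labels, and properties are all placeholders.

```python
# Sketch: answer "How many servers are in our AWS account?" from the
# graph, then hand the verified number to the LLM for phrasing only.
# Connection details, the :Server label, and properties are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("user", "pass"))

with driver.session() as session:
    record = session.run(
        "MATCH (s:Server {provider: 'aws'}) RETURN count(s) AS n"
    ).single()
    server_count = record["n"]
driver.close()

# When infrastructure changes, you update the graph, not the model, e.g.:
#   CREATE (:Server {provider: 'aws', name: 'web-42'});
prompt = f"Our AWS account currently has {server_count} servers. Summarize."
```

The point is the division of labor: the count comes from the graph, where it is always current, and the model never has to "know" your inventory at training time.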

Amid the noise around DeepSeek and Alibaba AI, the smart move is clear: practical AI needs knowledge graphs, RAG, and advanced retrieval like vector search and graph algorithms.
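What might that retrieval stack look like in miniature? Here is a hedged sketch, with toy vectors standing in for real embeddings and a hand-built adjacency map standing in for a graph database: vector search finds the entry point, and a one-hop graph expansion pulls in connected context.

```python
import numpy as np

# Hybrid retrieval sketch: vector similarity picks the best-matching node,
# then a graph hop adds its neighbors. All data below is invented toy data;
# real embeddings would come from an embedding model.
EMBEDDINGS = {
    "pump-A7":  np.array([0.9, 0.1, 0.0]),
    "valve-B2": np.array([0.2, 0.8, 0.1]),
    "plant-2":  np.array([0.5, 0.5, 0.3]),
}
NEIGHBORS = {"pump-A7": ["plant-2"], "valve-B2": ["plant-2"], "plant-2": []}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_retrieve(query_vec: np.ndarray, k: int = 1) -> set[str]:
    # 1. Vector search: top-k nodes by cosine similarity to the query.
    seeds = sorted(EMBEDDINGS, key=lambda n: cosine(query_vec, EMBEDDINGS[n]),
                   reverse=True)[:k]
    # 2. Graph expansion: include one-hop neighbors for connected context.
    context = set(seeds)
    for node in seeds:
        context.update(NEIGHBORS[node])
    return context

print(hybrid_retrieve(np.array([1.0, 0.0, 0.0])))  # {'pump-A7', 'plant-2'}
```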

Could an LLM with the right graph-based strategies be the answer? Absolutely.

