AiThority Interview with Waseem Alshikh, Co-founder and CTO at Writer: Breaking Down The Benefits of Writer’s Latest RAG Integration
Waseem Alshikh, Co-founder and CTO at Writer, talks about Writer's latest RAG capabilities and product innovations, while discussing today's biggest AI myths and future trends.
————–
Hi Waseem, tell us about yourself and your role at Writer
Hi, I’m the co-founder and CTO of Writer.
For my co-founder and Writer’s CEO, May Habib, and me, language has always been central to our mission. Growing up in non-English-speaking households, we both developed a deep belief that the language you speak should never limit your opportunities.
Over the past decade, we’ve dedicated ourselves to solving complex language challenges. In 2013, we launched a company that developed software for dynamic content localization, where we first began working with transformers in the machine translation space. We quickly realized that natural language would become the new programming language. By 2020, May and I took this expertise, combined it with our deep understanding of natural language processing and enterprise needs, and founded Writer.
At Writer, our research, applied engineering, AI, and go-to-market teams are pioneering how people work with AI to create greater intelligence, passion, and performance in every organization. Today, the Writer platform offers enterprises the fastest way to build and scale generative AI applications and workflows, and we’ve helped over a million users integrate AI into their daily work. Our Palmyra LLMs are some of the highest-benchmarked models in the world.
Our commitment to setting the enterprise standard for trust and accuracy is fueling rapid growth, and today Writer is the fastest growing company in enterprise AI.
We'd like to hear about some of your latest product enhancements, and more about your latest chat offering.
Yes! We recently released major updates to the Writer chat app offering, including the integration of graph-based RAG. What's particularly exciting is that this is the industry's first instance of built-in RAG chat apps available off the shelf. These apps also now have a 10M token upload capacity. More details on features and benefits:
- Upload files with up to 10 million words (the equivalent of 20,000 pages), then ask questions, summarize text, and generate content.
- Enhanced transparency – Writer shows its thought process, meaning the steps it took to formulate a response, including all sub-questions and the specific excerpts from contributing sources.
- Writer “modes” — new dedicated user experiences designed for specific types of tasks and better outputs, including “document mode” to help users go deep on research.
Zooming out for a second, these updates give Writer customers detailed insight into how the LLM and platform are processing their questions, and into the sources that contributed to the platform's response. This transparency is imperative for enterprises to understand and trust that AI systems will produce responses that are accurate and informed. Their businesses are counting on it, and Writer is delivering.
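To make the transparency idea concrete, here is a minimal sketch of the kind of answer trace described above: the original question, the sub-questions the system decomposed it into, and the source excerpts that contributed to the final answer. All class and field names here are invented for illustration; this is not Writer's actual API or schema.

```python
from dataclasses import dataclass, field

@dataclass
class Excerpt:
    source: str   # e.g. the name of an uploaded file
    text: str     # the passage that contributed to the answer

@dataclass
class AnswerTrace:
    question: str
    sub_questions: list = field(default_factory=list)
    excerpts: list = field(default_factory=list)
    answer: str = ""

def render(trace: AnswerTrace) -> str:
    """Format the trace so a user can audit how the answer was formed."""
    lines = [f"Q: {trace.question}"]
    lines += [f"  sub-question: {q}" for q in trace.sub_questions]
    lines += [f"  source [{e.source}]: {e.text}" for e in trace.excerpts]
    lines.append(f"A: {trace.answer}")
    return "\n".join(lines)

# Hypothetical example data, for illustration only.
trace = AnswerTrace(
    question="What were Q3 revenue drivers?",
    sub_questions=["What was Q3 revenue?", "Which segments grew?"],
    excerpts=[Excerpt("q3-report.pdf", "Revenue rose 12% on cloud demand.")],
    answer="Q3 revenue growth was driven primarily by cloud demand.",
)
```

Exposing a structure like this, rather than only the final answer string, is what lets an enterprise reviewer check each step against the cited sources.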
Writer is leading the market in enterprise-grade accuracy, efficiency, and transparency, and with these RAG upgrades, we are doubling down on that commitment.
Why should AI users pay more attention to RAG capabilities today?
RAG combines the creativity of LLMs with targeted information retrieval from enterprise data. The result is more accurate and relevant AI outputs. However, many teams are still stuck in “proof-of-concept purgatory,” making small changes with very little progress.
Graph-based RAG offers a way out by enhancing LLMs with targeted data retrieval. It’s not just a better solution — it’s the only scalable solution for generative AI in business contexts. Unlike standard vector-based RAG approaches, which often fail due to issues like high hallucination rates and crude chunking, graph-based RAG provides the precision and depth required for real-world applications. This makes it the emerging standard for enterprise-grade AI, ensuring scalability and consistent performance where other methods fail.
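The contrast drawn above can be sketched in a toy form: instead of ranking flat text chunks by embedding similarity, a graph-based retriever walks relationships outward from the entities mentioned in a question. The triples and hop logic below are a deliberately tiny stand-in, not Writer's Knowledge Graph implementation.

```python
from collections import defaultdict

# Toy knowledge graph: facts stored as (subject, relation, object) triples.
# The facts are drawn from this article; the schema is invented.
triples = [
    ("Palmyra", "developed_by", "Writer"),
    ("Writer", "founded_in", "2020"),
    ("Writer", "founded_by", "May Habib"),
    ("Writer", "founded_by", "Waseem Alshikh"),
]

# Index every triple under both of its entities for fast neighborhood lookup.
index = defaultdict(list)
for s, r, o in triples:
    index[s].append((s, r, o))
    index[o].append((s, r, o))

def retrieve(question_entities, hops=2):
    """Collect all triples within `hops` of the entities in a question."""
    seen, frontier, facts = set(question_entities), list(question_entities), []
    for _ in range(hops):
        next_frontier = []
        for entity in frontier:
            for s, r, o in index[entity]:
                if (s, r, o) not in facts:
                    facts.append((s, r, o))
                for neighbor in (s, o):
                    if neighbor not in seen:
                        seen.add(neighbor)
                        next_frontier.append(neighbor)
        frontier = next_frontier
    return facts

# One hop from "Palmyra" reaches Writer; the second hop pulls in Writer's
# founding facts -- connected context a flat chunk lookup could easily miss.
facts = retrieve(["Palmyra"])
```

The point of the sketch is the traversal: relationships between facts are first-class, so retrieval follows meaning rather than depending on how documents happened to be chunked.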
At Writer, our approach to RAG fundamentally reshapes how organizations leverage AI, helping them move from PoC to powerful, scalable implementations that drive business value. We’ve helped hundreds of enterprises scale AI using our graph-based RAG solution, Knowledge Graph. Benchmarking studies show our approach outperforms traditional methods in accuracy and speed, reducing decision-making time and minimizing errors, thus improving business operations and preventing costly mistakes.
This method isn’t just about handling data — it’s about weaving a network of meaningful connections that boost the system’s accuracy and adaptability — key traits for thriving in dynamic production landscapes. By building graph-based RAG into our chat apps, Writer is bringing an important layer of accuracy and transparency to our enterprise customers. Writer’s RAG approach is industry leading, and our Palmyra LLMs are consistently top-benchmarked by Stanford HELM.
When it comes to the state of AI as of today, what are some of the biggest myths you’d like to bust and the most exciting things to look forward to?
Let’s start with what I’m excited about, and perhaps that will also clear up some misconceptions. I’ve witnessed numerous paradigm shifts. However, none compare to the current revolution we’re experiencing with AI-infused enterprise software. This isn’t just an upgrade; it’s a fundamental reimagining of how businesses operate, decide, and grow.
We’re moving beyond AI as a mere add-on. Today’s cutting-edge enterprise applications have AI woven into their very fabric, creating systems that are intuitive, predictive, and adaptive.
Key features of this new breed of software include:
- Contextual Intelligence: These systems don’t just process data; they understand it. For instance, an AI-powered CRM doesn’t just log customer interactions; it interprets sentiment, predicts needs, and suggests personalized engagement strategies.
- Natural Language Interfaces: Imagine asking your ERP system, “What’s our inventory situation for Product X in Region Y?” and receiving an instant, comprehensive answer. This is the reality of AI-native applications.
- Predictive and Prescriptive Analytics: We’re moving from “What happened?” to “What will happen, and what should we do about it?” For example, AI-infused supply chain software can predict disruptions months in advance and suggest mitigation strategies.
- Autonomous Decision-Making: In a recent project, we implemented an AI system that autonomously adjusts manufacturing parameters in real-time, optimizing for quality and efficiency without human intervention.
- Continuous Learning: These systems evolve. An AI-powered marketing platform we deployed last year has increased its predictive accuracy by 27% through continuous learning from campaign data.
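The natural language interface item above can be illustrated with a deliberately tiny sketch: extract the product and region from a question and answer it from structured data. The inventory figures are invented, and a real system would use an LLM to produce a structured query where this regex stands in.

```python
import re

# Invented inventory data; a real ERP would query a database.
inventory = {("product-x", "region-y"): 1240, ("product-x", "region-z"): 310}

def answer(question: str) -> str:
    """Naive natural-language lookup over structured inventory data."""
    m = re.search(r"product (\w+).*region (\w+)", question, re.IGNORECASE)
    if not m:
        return "Sorry, I couldn't find a product and region in that question."
    key = (f"product-{m.group(1).lower()}", f"region-{m.group(2).lower()}")
    units = inventory.get(key)
    if units is None:
        return "No inventory record found."
    return f"{units} units on hand."

reply = answer("What's our inventory situation for Product X in Region Y?")
```

However primitive, the shape is the same as in the ERP example quoted above: a free-form question in, a grounded answer from enterprise data out.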
As enterprise AI platforms evolve to serve different business needs, what should end users be wary of or more stringent about?
AI-Native applications are touching every corner of the enterprise. Here’s just some of what’s happening in organizations today:
- Customer Service: Chatbots are old news. We’re now seeing AI systems that can handle complex, multi-step customer issues, even detecting and responding to emotional cues.
- Sales and Marketing: AI is enabling hyper-personalization at scale. One customer saw a 45% increase in conversion rates using an AI system that tailors product recommendations and pricing in real time.
- Human Resources: From predicting employee churn to personalizing development plans, AI is transforming HR into a strategic powerhouse.
- Finance: We’ve implemented AI systems that can audit 100% of reporting data in real-time, a task previously impossible for human teams.
The shift to AI-native enterprise software isn’t coming; it’s here, and it’s reshaping what’s possible in business.
To achieve outcomes like these, enterprises need more than just LLMs. Developers must assemble a complex stack of tools for inference, RAG, process management, feature extraction, and more. Configuring these can take teams of engineers months of effort, and even longer to improve and refine them to meet enterprise-level accuracy and quality requirements.
Fully integrated enterprise AI platforms like Writer give enterprise engineers everything they need to start quickly building and scaling powerful AI applications that solve complex business challenges, rather than spending their time piecing together tools.
But even with the best technology, it’s important to keep in mind that AI is as much an organizational transformation as a technological one. We spend a lot of time guiding our customers on how to drive AI adoption throughout their workforces.
Tell us about the most interesting AI platforms from around the world that have piqued your interest in the last few months before we wrap up.
In the rapidly evolving AI landscape, a significant paradigm shift is taking place: the rise of domain-specific large language models (LLMs). While general-purpose LLMs have garnered a lot of attention, it’s the domain-specific models that are setting the stage for the next wave of AI innovation.
These models offer unparalleled precision within their designated fields, thanks to their training on curated, industry-specific datasets. This specialization not only enhances accuracy but also optimizes resource utilization, allowing for more sustainable and efficient AI solutions. Additionally, domain-specific LLMs provide superior data governance, which is critical in today’s environment of stringent data protection regulations.
Our experience with implementing these models has shown impressive results—such as a 40% increase in task-specific accuracy and a 50% reduction in the time required to deploy new AI features. These focused models also lead to accelerated innovation cycles, providing a competitive edge by enabling organizations to develop proprietary AI capabilities tailored to their unique market challenges.
Waseem Alshikh is the CTO and co-founder of Writer, the full-stack generative AI platform built for enterprises.
Waseem has been recognized for his contributions to the tech industry, earning numerous awards and accolades. He holds degrees in Electronics from Beirut Arab University and Damascus Polytechnic University.
Writer is trusted by leading companies such as Vanguard, Intuit, L’Oréal, Accenture, Spotify, Uber, and more to deploy GenAI across their businesses. The platform allows enterprises to automate and enhance key operational activities, thereby boosting employee creativity and productivity.
Writer’s family of large language models (LLMs) are recognized for their state-of-the-art performance, topping leaderboards in natural language understanding and generation. The company’s commitment to security ensures that its LLMs and GenAI platform are deployed within an enterprise’s own computing infrastructure, safeguarding sensitive data.
Since its launch in 2020, Writer has achieved significant success, experiencing a 10x revenue growth over the last two years and maintaining over 150% net revenue retention. Waseem, alongside co-founder and CEO May Habib, has played a pivotal role in raising over $126M in funding from esteemed investors, including ICONIQ Growth, Balderton Capital, and Insight Partners.