ChatGPT has become a favorite conversation partner for many of us, serving as a go-to source for knowledge and insights. Enterprises are deploying Microsoft Copilot company-wide to bring these kinds of interactions into the workplace. AI assistants are proving their worth by summarizing messages, creating documents, and answering common questions.
Yet, according to Ira Palti, Executive Chairman of GigaSpaces Technologies, enterprise AI adoption remains fundamentally stuck. Despite the hype and investment, most organizations are struggling to prove ROI from generative AI beyond personal productivity gains. The technology is phenomenal, but concrete use cases that transform how businesses actually operate have been largely elusive.
The problem isn’t the AI itself; it’s what the AI can't see.
Palti frames the core challenge facing enterprise AI as a “context gap” – the disconnect between powerful large language models (LLMs) and the operational data that actually runs businesses. While employees can ask ChatGPT about general topics or have Copilot summarize documents, the systems that contain inventory levels, sales pipeline status, supply chain information, and real-time KPIs remain inaccessible to these AI tools.
“Without access to operational reality, even the smartest AI is flying blind,” Palti explains. Generic LLMs “will never understand the unique context of a specific enterprise and how they operate without access to its operational reality.”
This creates a paradox: The AI tools everyone is excited about can’t access the information that would make them genuinely transformative. Meanwhile, the valuable operational data sits trapped in structured systems, accessible only through traditional BI reports that take weeks to build and show what happened yesterday, not what is happening today.
GigaSpaces – Platinum sponsor of the recent Generative AI Expo in Fort Lauderdale, Florida (part of the ITEXPO #TECHSUPERSHOW) – has responded to this challenge with eRAG (Enterprise Retrieval Augmented Generation), a solution built on the company’s two decades of experience with real-time operational data systems. Its recently announced integration with Microsoft 365 Copilot represents a concrete implementation, enabling natural language interaction with structured enterprise data through platforms employees already use.
Beyond Personal Productivity: The ROI Challenge
The current state of enterprise AI adoption reveals a consistent pattern: research shows organizations struggling to demonstrate clear ROI from GenAI investments. Palti explains it clearly: “The broad use of GPT and GenAI remains at the personal productivity level and focuses primarily on text summaries and content creation.”
This isn’t because personal productivity gains aren’t valuable; of course they are. But they’re not transformative. An employee who can draft emails 20% faster is helpful. An organization that can achieve significant cost savings by querying operational data in natural language is transformative.
Palti shares a concrete example: “An international logistics company used eRAG to identify pricing and cost mismatches on delivery routes. The insights and analysis delivered by eRAG led the company to carry out business model changes in how they work with their delivery partners, resulting in cost savings of more than $150,000.”
Another use case involved eRAG’s detection of material delivery delays in supply chains. This insight enabled active mitigation of manufacturing workflows and ensured end products would meet delivery SLAs. These aren’t hypothetical benefits or efficiency estimates; they are measurable business outcomes tied directly to AI-enabled access to operational data.
The difference between incremental productivity and transformative ROI comes down to what the AI can access and understand.
Part of what makes the context gap Palti talks about so frustrating is that organizations already have systems designed to provide data access – their BI platforms. Yet, Palti argues these systems are increasingly inadequate for how knowledge workers actually need to interact with information.
Traditional BI suffers from fundamental limitations. “Data is modeled to reflect specific reports and data sources are static,” he notes. “Typical BI projects are very complex and lengthy, and typical users rely on analysts to get reports, creating delays and bottlenecks.”
This creates a workflow that’s fundamentally at odds with the pace needed for modern businesses. By the time a report is requested, scoped, built, and delivered, the business context may have shifted. The question you asked last week might not be the question you need answered today.
eRAG represents a more agile alternative: “It complements BI systems by allowing all users to engage with multiple data sources spontaneously in natural language through familiar GPT platforms including Copilot, Teams, ChatGPT and others,” explains Palti.
Rather than waiting for predetermined reports, users can explore operational data interactively, asking follow-up questions, testing hypotheses, and drilling into unexpected patterns as they emerge. Palti frames this as the ability to “brainstorm with your data” – a fundamentally different mode of interaction than consuming static reports.
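At a high level, this “brainstorm with your data” pattern resembles retrieval-augmented text-to-SQL: the user's natural-language question and the database schema are handed to an LLM, which produces a query against live operational data. The sketch below is an illustration of that general pattern, not GigaSpaces' actual implementation – the `generate_sql` stub stands in for a real LLM call, and the `routes` table and demo question are invented for the example.

```python
# Rough sketch of natural-language querying over structured operational data.
# In a real system, generate_sql would send the schema plus the user's
# question to an LLM and get SQL back; here it is a hardcoded placeholder.

import sqlite3

SCHEMA = """
CREATE TABLE routes (
    route TEXT,
    quoted_price REAL,
    actual_cost REAL
);
"""

def generate_sql(question: str, schema: str) -> str:
    """Placeholder for an LLM call (prompt = schema + question -> SQL).
    Returns a canned query that answers the demo question."""
    return (
        "SELECT route, quoted_price - actual_cost AS margin "
        "FROM routes WHERE quoted_price < actual_cost"
    )

def ask(question: str, conn: sqlite3.Connection) -> list:
    """Translate a question into SQL and run it against live data."""
    sql = generate_sql(question, SCHEMA)
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.executemany(
    "INSERT INTO routes VALUES (?, ?, ?)",
    [("NYC-BOS", 120.0, 95.0), ("LAX-SEA", 80.0, 110.0)],
)
losing = ask("Which delivery routes lose money?", conn)
print(losing)  # routes whose cost exceeds the quoted price
```

Because the query runs against the live database rather than a pre-built report, a follow-up question (“how big is the gap on that route?”) only requires generating a new query, which is what enables the interactive, exploratory mode of working Palti describes.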
The Skills Gap in AI Adoption
Beyond the technology and data access challenges, Palti identifies a more human barrier to AI adoption that often gets overlooked. While knowledge workers are successfully using AI for simple tasks, “When we want AI to execute more complex tasks, employees need to be able to break down tasks into a workflow that can be correctly executed by AI. Not all employees have the time or the ability to do this,” he says.
This is the flip side of the prompt engineering phenomenon. Certainly, some users become adept at crafting effective prompts and breaking down complex requests into manageable AI workflows. But expecting every employee to develop these skills is unrealistic, particularly when their primary job is something other than AI interaction.
The implication is that to be adopted successfully, enterprise AI needs to meet employees where they are, with interfaces and interaction patterns that don't require specialized training or AI expertise. Integration with platforms like Copilot and Teams isn't just about convenience, but about embedding AI capabilities into workflows employees already understand.
The GenAI Killer App
I asked Palti what he sees as the “killer app” for GenAI. His response cuts through the technology to a deeper, enterprise-level perspective: “Interpreting the meaning and context of operational data to LLMs so that they understand the reality and function of a specific organization.”
This is important because it shifts the conversation from model capabilities to data accessibility. The race to build larger, more capable models may be missing the point. “It’s not reasonable to assume that ordinary organizations will be able to train small or large models with the resources at their disposal,” Palti notes.
Instead, the breakthrough comes from solving the context problem: making operational reality visible and interpretable to existing language models. This doesn’t require custom model training or massive AI infrastructure. It requires solving the data integration, context interpretation, and semantic reasoning challenges that allow general-purpose LLMs to understand specific enterprise contexts.
Looking ahead, Palti’s bold prediction for AI over the next three years paints a vivid picture of where this trajectory leads: “Your most productive teammate might not be human. Imagine, you'll finish a remote meeting and have one question: ‘Wait, who on that call was actually a human?’”
This vision extends beyond AI assistants that process work to AI teammates who are just as integrated into workflows as their human “colleagues.” It’s a significant deviation from current AI tools, which remain clearly separate from human collaboration. They are utilities we leverage, not participants in the work itself.
As AI systems gain access to operational context, understand business goals, and can engage with the same information employees are working with, the boundary between tool and teammate blurs. An AI that can participate in a sales pipeline review because it has real-time access to CRM data and understands the business context isn't just automating tasks, it's collaborating.
“We are moving toward a world where AI doesn't just process our work, it understands the context, the goal, and the culture at least as well as we do,” Palti suggests.
GigaSpaces’ approach with eRAG focuses on operational data interpretation rather than chasing model sophistication, and on integrating with existing platforms rather than requiring new tools. By delivering ROI in weeks rather than through lengthy AI projects, eRAG represents a pragmatic path through the current enterprise AI paradox toward positive business impact.
The challenge for organizations isn't primarily about choosing the right AI model or building AI expertise. It’s about making their most valuable data, the operational information that drives daily business decisions, accessible to the AI tools they're already deploying. The question isn’t whether AI will transform enterprise operations. It’s whether organizations will solve the context problem that allows that transformation to actually happen.
The companies that figure this out won’t just have more productive employees. They’ll have AI that actually understands their business, and those organizations will arrive first at the AI future Palti sees on the horizon.
To learn more, book your free strategy session today.
Edited by Erik Linask