
Agentic AI and AI Agents: Shaping the Market in 2025

What a difference a year makes! As 2024 comes to a close, it’s been a blast for everyone in the world of AI. The year has marked a pivotal moment in the evolution of artificial intelligence, particularly generative AI, defined by a number of major milestones in deploying large language models (LLMs) into production. As we finish the year, we can’t help but feel we had our fingers right on the pulse when we launched our intelligent agent solutions, which now seems like years ago.

We start 2025 with a new term but the same premise: Agentic AI and AI agents are where the market is heading. Autonomous systems, or ‘AI agents’, perform all of those mundane (and increasingly not-so-mundane) tasks that we never get around to doing. If 2024 was about figuring out how to take models into production, 2025 will be the year agents emerge as first-class citizens in enterprise environments.

Where we started: The AI landscape at the beginning of 2024

At the outset of 2024, the AI world was still buzzing from the release of GPT-4, which solidified OpenAI’s leadership in the generative AI space. GPT-3.5 Turbo was being phased out as GPT-4 became the new standard. In the open-source arena, Meta’s LLaMA 2 was gaining traction, offering businesses an alternative to closed, proprietary models. Meanwhile, Anthropic continued to develop its Claude series, and Google’s Gemini 1.0 arrived, signalling serious competition in the race to dominate AI capabilities.

At the same time, Retrieval-Augmented Generation (RAG) started to gain serious traction in enterprise contexts. This approach—which combines large language models with external data retrieval—became the default architecture for deploying AI systems that required domain-specific knowledge. Tools like Azure OpenAI’s implementation of RAG and Google’s NotebookLM began offering practical solutions for integrating business data into AI systems. By the end of the year, deploying an AI solution without integrating it with your organisation’s data became unthinkable.
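To make the RAG pattern concrete, here is a deliberately minimal sketch: a naive keyword retriever over an in-memory document store supplies context for a grounded prompt, and a stubbed call_llm function stands in for whichever model API (Azure OpenAI, Gemini, Claude, and so on) you actually use. Everything here is illustrative; production systems swap the retriever for an embedding model and a vector store, but the flow is the same.

```python
# Minimal RAG flow (illustrative only): retrieve relevant snippets from an
# in-memory store, then build a grounded prompt for the model.

DOCUMENTS = [
    "Employees receive 25 days of annual leave plus bank holidays.",
    "Expense claims must be submitted within 30 days with a valid receipt.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Naive keyword overlap; real systems use embeddings and a vector store."""
    scored = [
        (sum(word in doc.lower() for word in query.lower().split()), doc)
        for doc in DOCUMENTS
    ]
    return [doc for score, doc in sorted(scored, reverse=True)[:top_k] if score > 0]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a chat completions endpoint)."""
    return f"[model answer grounded in]\n{prompt}"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

print(answer("How many days of annual leave do I get?"))
```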

Production: the key focus of 2024

While 2023 was the year of experimentation, 2024 was the year of making AI operational. Enterprises moved beyond proof-of-concepts (PoCs) to attempt production-grade implementations of LLMs. Yet, this process proved anything but smooth. The Gartner statistic that 85% of generative AI projects fail to make it into production rang true, illustrating the unique challenges that this technology presents.

The Production Problem

Deploying LLMs into production is fundamentally different from traditional software. In 2024, businesses grappled with challenges such as:

Cost considerations

Dedicated AI budgets are, in the main, few and far between, and data teams are being challenged to do more with less. Add to this the double whammy that most AI projects are far more expensive than initially projected: Gartner reported that many projects cost 300 to 500 times more than anticipated, due in large part to infrastructure, security, and integration costs.

Data integration

LLMs need access to enterprise data to add value. Whether via APIs, RAG pipelines, or custom integrations, aligning AI capabilities with business data became a priority.

Infrastructure limitations

AI requires massive compute resources, and many organisations found themselves constrained by global hardware shortages. In the UK, for example, organisations have had to deploy models in Sweden because the necessary GPUs are simply not available locally.

Security and governance

This is the biggest challenge we see, and one that the largest tech vendors in the world are wrestling with. Moving your enterprise data into AI workflows raises serious concerns around data sovereignty and governance. Even Microsoft’s Copilot could not guarantee that enterprise data would stay within a specific geographic region, an issue that created significant friction for regulated industries.

The SaaS boom

To address these production challenges, 2024 saw an explosion of Software-as-a-Service (SaaS) offerings targeting AI deployment. These tools aimed to simplify everything from RAG implementation to software development augmentation. For instance:

Copilot emerged as the dominant enterprise AI tool, thanks to Microsoft’s vertically integrated stack.

Third-party solutions like Cursor, Devin, and Windsurf tackled specific pain points for developers, automating workflows and improving coding efficiency.

Numerous RAG-focused SaaS platforms offered plug-and-play solutions for integrating enterprise data into LLMs, such as vector database providers like Pinecone and open-source alternatives like Weaviate (a simplified sketch of the lookup they perform follows below).

The result? Companies could adopt AI solutions faster, but they also became increasingly dependent on these third-party tools to abstract away the complexity of production deployment.
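For readers wondering what those vector databases actually do, the core operation is nearest-neighbour search over embeddings. The sketch below is purely illustrative, using toy three-dimensional vectors and brute-force cosine similarity; products like Pinecone and Weaviate wrap the same idea in managed, approximate indexes that scale to millions of documents.

```python
import math

# Illustrative nearest-neighbour lookup, the operation at the heart of a
# vector database. Toy 3-d vectors stand in for real embeddings.

INDEX = {
    "incident-report-17": [0.9, 0.1, 0.0],
    "maintenance-manual": [0.2, 0.8, 0.1],
    "hr-handbook": [0.0, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def query(vector: list[float], top_k: int = 2) -> list[str]:
    ranked = sorted(INDEX, key=lambda doc: cosine(vector, INDEX[doc]), reverse=True)
    return ranked[:top_k]

print(query([0.85, 0.15, 0.05]))  # ['incident-report-17', 'maintenance-manual']
```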

The Physical Infrastructure Bottleneck

A major theme of 2024 was the physical infrastructure constraints facing AI. As global demand for compute power skyrocketed, manufacturers like TSMC and NVIDIA struggled to produce enough GPUs to keep up. Cloud providers like Azure, AWS, and Google Cloud were unable to provision sufficient hardware in all regions, forcing businesses to look for creative solutions.

This scarcity brought back challenges reminiscent of the 1990s on-premise era: organisations had to think carefully about where their workloads would run and how physical infrastructure limitations would impact their deployments. A system that worked seamlessly in Sweden, for example, might be impossible to deploy in the UK. As AI models become increasingly powerful, ensuring access to hardware remains a critical factor for enterprise success.

Several trends emerged in the enterprise AI space this year:

1. Knowledge Management as the Leading Use Case

Our research found that knowledge management was the top enterprise use case for AI in 2024. Tools like RAG-based systems made it possible to extract insights from vast documentation repositories, improving workflows in industries like transportation, legal, and healthcare. For example, Oakland launched Network Rail’s first AI programme by deploying an AI-driven knowledge management system to surface lessons learned from past incidents, providing actionable insights for operations teams.

2. Custom Copilots

Enterprises increasingly embraced custom Copilots, integrating Microsoft’s Copilot platform with their own data to create tailored AI assistants. Rather than treating Copilot as a glorified spell-checker or summarisation tool, businesses began leveraging it for deeper productivity gains. Custom Copilots became the new standard for enhancing workflows in applications like SharePoint, Teams, and Outlook. If you’ve seen the keynote at Microsoft Ignite, you may be forgiven for thinking that launching an AI agent or custom Copilot is as simple as creating a PowerPoint presentation. Sadly, this is far from the reality.

3. The Imagination Gap

Despite these advancements, many organisations still struggled to identify where AI could deliver value. As one client insightfully put it: “I thought this was just a search bar.” Bridging the imagination gap remained a key challenge in 2024. Demonstrations of AI capabilities—particularly in workshops and proofs of concept—became essential for helping businesses visualise AI’s potential.

Preparing for 2025: The Rise of Agents

If 2024 was the year of LLM production, 2025 will be the year of AI agents. These systems will go beyond answering questions or summarising documents to autonomously take actions on behalf of users.

What is an AI Agent?

An AI agent combines a large language model with tools, workflows, and integrations that enable it to perform tasks independently. For example, a customer service agent might:

Retrieve information from a knowledge base (RAG).

Analyse a customer query.

Update a CRM system or initiate a workflow without human oversight.

These semi-autonomous systems will begin with routine tasks, such as scheduling, updating records, or generating reports. Over time, they will evolve to handle increasingly complex and non-routine business actions.
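As a rough illustration of that loop (and only an illustration, not any particular vendor’s framework), the sketch below shows the basic shape: a planning step, here a hard-coded stand-in for an LLM call, chooses a tool; the agent executes it; and the result feeds the next decision. The tool names and the CRM stub are hypothetical.

```python
# Toy agent loop (illustrative; the planner and tools are hypothetical
# stand-ins). A planning step chooses a tool, the agent executes it, and
# the result feeds the next step.

TOOLS = {
    "search_knowledge_base": lambda q: f"Knowledge base article matching '{q}'",
    "update_crm": lambda note: f"CRM record updated: {note}",
}

def plan_next_action(query: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM call that returns (tool_name, tool_input)."""
    if not history:
        return "search_knowledge_base", query
    return "update_crm", f"resolved '{query}' using {history[-1]!r}"

def run_agent(query: str, max_steps: int = 3) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        tool_name, tool_input = plan_next_action(query, history)
        history.append(TOOLS[tool_name](tool_input))
        if tool_name == "update_crm":  # terminal action in this toy flow
            break
    return history

for step in run_agent("My delivery hasn't arrived"):
    print(step)
```

Swap the stubbed planner for a real model call and the lambdas for real integrations, and this loop plus tool registry is broadly the skeleton that agent frameworks build on.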

Risks of Agentic Systems

While agents hold immense promise, they also introduce new risks:

Capability gaps: Are LLMs capable enough to handle critical business tasks? The limitations of current models will be tested as organisations push agents to take on more responsibility.

Oversight and control: How do we ensure that agents operate within predefined guardrails? Governance frameworks will need to evolve to account for AI systems acting autonomously (a simple illustration of a guardrail check follows this list).

Trust and accountability: Who is responsible when an AI agent makes a mistake? Ensuring transparency and accountability in agentic systems will be critical.
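One practical way to approach the oversight question is a policy layer that sits between an agent’s proposed action and its execution. The sketch below is a minimal, hypothetical guardrail check: an allowlist of tools, a spend limit, and an escalation path to a human reviewer. Real governance frameworks go much further, but the principle of gating actions before they run is the same.

```python
# Illustrative guardrail check: before an agent executes a proposed action,
# a policy layer verifies the tool is allowed and flags anything that needs
# human approval. The policy values below are examples, not a standard.

ALLOWED_TOOLS = {"search_knowledge_base", "draft_email", "update_crm"}
REQUIRES_APPROVAL = {"update_crm"}  # write actions gated behind a human
MAX_SPEND_GBP = 0.0                 # this agent may not spend money

def check_action(tool: str, estimated_spend_gbp: float = 0.0) -> str:
    if tool not in ALLOWED_TOOLS:
        return "block"
    if estimated_spend_gbp > MAX_SPEND_GBP:
        return "block"
    if tool in REQUIRES_APPROVAL:
        return "escalate"  # pause and ask a human reviewer
    return "allow"

print(check_action("search_knowledge_base"))  # allow
print(check_action("update_crm"))             # escalate
print(check_action("issue_refund", 50.0))     # block
```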

Lessons Learned from 2024

As we look ahead to 2025, there are several lessons that enterprises can take from the past year:

Stay close to vendor roadmaps: The pace of AI development is unprecedented. Whether you’re using OpenAI, Microsoft, or Google tools, keeping a close eye on their roadmaps is essential to avoid obsolescence.

Build for flexibility: Given how quickly AI models evolve, organisations must build solutions that are adaptable to new technologies and frameworks.

Prioritise data integration: AI without access to enterprise data is just a toy. Ensuring seamless, secure integration between LLMs and business data sources is non-negotiable.

Invest in security and governance: Understanding where data is stored, how it moves, and what security wrappers are in place is essential for any AI project.

Start small, scale thoughtfully: AI projects are expensive, complex, and prone to failure. Begin with focused, high-impact use cases and scale gradually as you learn what works.

Conclusion

2024 has been a foundational year for AI. Enterprises have begun to move beyond experiments to deploy large language models into production, but challenges around infrastructure, security, and cost remain significant. The rise of SaaS solutions has made AI more accessible, yet the complexity of integration persists.

Looking ahead, 2025 will bring the next evolution: Agentic AI and AI agents that autonomously perform tasks and drive measurable business outcomes. As organisations prepare for this shift, the lessons of 2024 will serve as a valuable guide.

Looking Ahead to 2025

As we screech into 2025, the role of AI and data within organisations will continue to accelerate and evolve. Several key trends and priorities are set to define the year ahead for data leaders, presenting both challenges and significant opportunities:

Operationalising AI at Scale

While 2024 saw a wave of AI experimentation and pilot projects, the focus in 2025 will shift toward scaling these initiatives to drive measurable ROI. Leaders will prioritise robust AI operationalisation frameworks, integrating models into production systems while ensuring governance, efficiency, and reliability. This will require alignment between data infrastructure, processes, and teams to ensure AI initiatives deliver tangible business value.

AI Governance and Ethical Frameworks

As regulatory environments mature and AI becomes increasingly embedded in decision-making, governance will move to the forefront of the AI agenda. Data and AI leaders must ensure their models adhere to ethical, transparent, and explainable practices, especially in sensitive industries such as finance, healthcare, and public services. Implementing strong governance policies that align with emerging regulations like the EU AI Act will be paramount.

Focus on Data Quality and Unified Platforms

Scaling AI is impossible without trusted, high-quality data. In 2025, data leaders will invest in modern data platforms that centralise, clean, and govern data across organisational silos. Unified platforms will become the backbone for scalable analytics and AI, enabling faster time-to-insight while reducing the friction between engineering and business teams.

Generative AI Maturity

The generative AI wave will mature further, with organisations moving from experimentation toward targeted, high-impact use cases. In particular, enterprise adoption of generative AI for content generation, code development, knowledge management, and customer experience will drive competitive differentiation. The demand for fine-tuning and customising foundational models to fit specific business needs will rise.

Investment in AI Talent and Upskilling

AI and data teams will continue to face talent gaps as demand outstrips supply. Organisations will double down on upskilling initiatives, developing AI fluency across the enterprise not just for technical teams but also for business leaders and frontline employees. Data and AI leaders will need to foster a culture of collaboration, pairing technical expertise with domain knowledge to drive innovation.

Measuring Value from AI and Data

In a challenging economic environment, data and AI leaders will face increasing pressure to prove ROI from their investments. Organisations will emphasise tracking clear KPIs, aligning AI and data initiatives with strategic business outcomes, and ensuring budgets are directed toward high-value priorities.

AI for Sustainability and Social Impact

2025 will also see AI playing a growing role in addressing sustainability and ESG (Environmental, Social, and Governance) goals. From optimising energy consumption and reducing waste to improving social equity, forward-thinking organisations will leverage AI to balance profit with purpose.

Our Closing Thoughts

As we move into 2025, the organisations that work hard to bridge the AI ‘imagination gap’ will thrive. Overcoming budget constraints and pressure to deliver a strong ROI will need data leaders with a strong stomach. Balancing innovation with governance, agility with resilience, and experimentation with measurable outcomes is no mean feat. But in the world of AI – who dares wins.

At Oakland, we provide pragmatic and actionable ways to get started with AI. In a rapidly evolving technology landscape, you need experts who can anticipate what’s next and align advanced AI—like Agentic AI—with your existing platforms, strategy, and governance frameworks. Oakland’s AI consultants can take the legwork out of putting AI to work across your organisation. Get in touch today or learn how we approach AI.