
Everything you need to know about Big Data & AI World

Introduction 

Last week, I had the opportunity to attend the highly anticipated Big Data and AI World conference, an all-encompassing event that showcased the breadth and depth of the data industry. From the infrastructure backbone of data centers to the cutting-edge applications in machine learning (ML) and large language models (LLMs), the conference offered a comprehensive look at the current state and future directions of data technologies. 

Highlights 

The conference prominently featured a range of topics within the data pipeline, emphasizing well-established processes including DevOps, DevSecOps, and Cloud Security. While the ML and LLM segment of the conference was relatively modest, it provided a valuable glimpse into the evolving landscape of AI applications. 

Key Topics and Speakers 

One of the opening sessions, aimed at a broad audience, focused on how to use ChatGPT, indicating a generally entry-level understanding of LLMs among attendees. Despite this, there were notable discussions and presentations that delved into more innovative applications of LLMs.

Microsoft also made an appearance with its Copilot solution, as did a few Copilot resellers, underscoring the growing interest in AI-assisted development tools.

Notable Exhibitors and Innovations 

Among the exhibitors, Compare the Market stood out for their advanced development work with LLMs. They shared a series of best practices that they’ve implemented, offering valuable insights into the practical challenges and solutions in deploying LLMs in a business context. 

Equally compelling was the announcement from Multiverse, a pioneering quantum computing company, which presented an algorithm capable of significantly condensing the size of LLMs such as Llama 2. Their technology has achieved an 85% reduction in the model’s size with a mere 5% loss in accuracy. This breakthrough paves the way for more sustainable and accessible LLM implementations, reducing computational demand and making it feasible to deploy sophisticated AI applications in more constrained environments.
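Multiverse’s actual algorithm is proprietary (and quantum-inspired), so I won’t speculate on its internals. But to give a feel for the kind of size-versus-accuracy trade-off involved, here is a minimal, illustrative sketch of a much simpler compression technique, naive per-row 8-bit weight quantization, applied to a random matrix standing in for a layer of model weights. All names and numbers here are my own illustration, not Multiverse’s method:

```python
import numpy as np

# Illustrative only: this naive int8 quantization is NOT Multiverse's
# algorithm; it just demonstrates the general trade-off of shrinking
# model weights at the cost of some reconstruction error.

def quantize_int8(weights: np.ndarray):
    """Quantize a float32 matrix to int8 with per-row scale factors."""
    scales = np.abs(weights).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0  # guard against all-zero rows
    q = np.round(weights / scales).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float32 weights from int8 values and scales."""
    return q.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512)).astype(np.float32)  # stand-in weight matrix

q, scales = quantize_int8(w)
w_hat = dequantize(q, scales)

# Compare storage footprint and reconstruction error.
size_ratio = (q.nbytes + scales.nbytes) / w.nbytes
rel_error = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"compressed to {size_ratio:.0%} of original size")
print(f"relative reconstruction error: {rel_error:.2%}")
```

Even this crude approach cuts storage to roughly a quarter of the float32 original with a small relative error, which hints at why more sophisticated compression schemes can reach far more aggressive ratios while keeping accuracy loss low.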

These exhibitors highlight the innovative spirit permeating the conference, showcasing not only the practical application of LLM technologies but also the forefront of AI and quantum computing research. 

Personal Takeaways 

The conference provided a fascinating overview of the data and AI industry’s current landscape, with a broad spectrum of technologies on display. However, one of my key observations was that the application of LLMs among exhibitors was primarily at a superficial level. Despite the potential of LLMs to act as powerful reasoning engines, their utilization seemed to be confined to more basic tasks such as content generation, serving as advanced chatbots, or simplifying interactions with data.

This suggests a significant gap between the capabilities of LLMs and their current usage in the industry. Many exhibitors have yet to climb the full maturity curve of these models, moving beyond the initial layers of application to truly tap into their transformative potential. The reliance on LLMs for relatively straightforward tasks underscores a broader industry trend of cautious engagement with AI’s deeper functionalities.

Reflecting on the conference, it’s clear that there’s a vast untapped potential for LLMs to revolutionize not just how we interact with data, but how we reason with it, use it to make decisions, and innovate. The journey of integrating LLMs more profoundly into our technological solutions is just beginning, and I’m excited to see how they will be pushed beyond their current boundaries to achieve their full transformative potential. 

Author: Mike Le Galloudec is an Innovation Lead at Oakland