For this edition, we only needed a quick trip to the MIT Media Lab in Cambridge, MA—where some of the foremost thought leaders in AI converged for the third annual MIT Sloan AI Conference. The theme of this year’s event was “The Age of Implementation”, with presentations and panel discussions dedicated to a range of ideas focusing on AI in action.
For those of you who couldn’t make it, you’re in luck—a few of our tech-savvy Scratch team members were in attendance to soak up the insights and report back on the most innovative ideas and key takeaways. Get the snippets from panelists and speakers spanning companies like Meta, Google, Microsoft, OpenAI, and Hugging Face, as well as cutting-edge start-ups, researchers, academics, and VCs:
LiquidAI Showcases a Leaner, More Efficient Approach to Language Models

One of the event’s most memorable presentations was the opening keynote from LiquidAI CEO Ramin Hasani (for a deeper dive, see our blog post from LiquidAI’s product launch in Oct ’24). In his presentation, Hasani showcased his company’s revolutionary new approach to generative AI model development—the Liquid Foundation Model (LFM). Unlike traditional large language models (LLMs), LFMs do not rely on the current industry-standard transformer architecture. Instead, they utilize a “liquid architecture” capable of cutting-edge performance at scale, all while maintaining a smaller memory footprint and more efficient inference.
Given this significant improvement in size and efficiency, Hasani says LFMs will soon be capable of operating locally on any device (such as our phones), thus driving hosting costs to zero and relying solely on the phone battery or another local power source. With this, Hasani claims, there will be “no need to access data centers,” introducing an age of “AI with Zero Cloud Calls”.
Obviously, these advances will translate to massive cost savings, enabling widespread democratization, as well as countless new use cases and applications that have thus far been blocked by cost, compute, connectivity, or some combination of the three.
Finally, Hasani says, these efficiency gains will go a long way to solving the increasingly worrying sustainability issues associated with today’s LLMs.
Future Opportunities and Competitive Considerations

Another highlight from the conference was an AI founders’ panel discussion on the topic of “Driving Behavioral Change” with AI. With the possibilities outlined above still fresh in the audience’s mind, Stack AI Co-Founder Antoni Rosinol kicked off the discussion by exploring the wealth of possibilities that will be afforded by introducing agentic AI to back-office business processes.
He highlighted the example of consumers’ phone interactions with businesses, and how many inefficient, error-prone processes still rely on this medium: “Why do I still have to press 3 for service?” With robotic process automation (RPA) still lagging behind operational needs, Antoni explained how agentic AI models could rapidly automate everything from taking orders, to navigating phone trees, to delivering effective customer service over a voice interface.
These ideas were then expounded upon by Beth Porter, Head of Studio Operations at C10 Labs, who discussed the realm of possibilities that could be unlocked by small, efficient models running on our personal cellular devices.
“Where things will get interesting is when consumers bring agents to their phone,” Porter said. She went on to cite possible applications such as digital renderings and even digital companions — both of which will be instrumental in breaking down the barriers of fear and anxiety that exist around using AI (by way of humanization).
For the rest of the discussion, the group—including Fireflies.ai Founder Sam Udotong and MIT Sloan Professor Don Sull—discussed the strategic, operational, and competitive considerations businesses must begin mulling over now if they hope to remain relevant in the space.
A few ideas that stood out:
- Data Ownership & Provenance are Key – Companies will soon begin moving to private clouds and managing downstream deliverables in each region to ensure control over data access and model training. This becomes especially critical once an entire organization relies on that data. In the same vein, companies will need to start taking data provenance seriously, tracing their data from origin onward for quality assurance.
- Switching Costs are Most Companies’ Biggest Barrier to Entry – The cost of adoption, and the continued need to adapt, reinvent, and remain agile, will put significant strain on businesses (especially smaller ones). As VCs and AI experts continue to mull over the drivers of AI moats, history has shown that no leading company has had a tech moat at inception that remained relevant 10 years down the road. The key to competitive durability is, and will increasingly be, continuous learning and agility.
- Industry-Specific AI Wrappers Offer the Solution – A major enabler of such agility, and a huge cost- and time-saving tool for new adopters, will be industry-specific AI wrappers. In short, an AI wrapper is a lightweight application that leverages existing AI services to solve specific user problems with minimal overhead and without needing to build complex AI systems from scratch.
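To make the wrapper idea concrete, here is a minimal sketch in Python. The restaurant phone-ordering scenario, the `RestaurantOrderAssistant` class, and the `complete` callable are all hypothetical illustrations—`complete` stands in for whatever hosted LLM completion API a real wrapper would call, and a stub is used here so the sketch runs offline:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RestaurantOrderAssistant:
    """A hypothetical industry-specific AI wrapper: no model of its own,
    just domain framing around an existing completion service."""

    # Any callable that maps a prompt string to a model response string,
    # e.g. a thin call to a hosted LLM API.
    complete: Callable[[str], str]

    # The wrapper's core asset is domain-specific prompting, not new AI.
    SYSTEM_PROMPT = (
        "You are a phone-order assistant for a restaurant. "
        "Answer briefly and confirm items, quantities, and pickup time."
    )

    def ask(self, customer_utterance: str) -> str:
        # Assemble the domain prompt, delegate to the underlying service,
        # and lightly clean up the response.
        prompt = (
            f"{self.SYSTEM_PROMPT}\n\n"
            f"Customer: {customer_utterance}\nAssistant:"
        )
        return self.complete(prompt).strip()

# Stub model so the example is self-contained (a real wrapper would pass
# in a function that calls a hosted LLM instead).
stub = lambda prompt: " Confirmed: 2 margherita pizzas for 6pm pickup. "
assistant = RestaurantOrderAssistant(complete=stub)
print(assistant.ask("Two margherita pizzas for six o'clock, please."))
```

The design choice worth noting is that the model client is injected as a plain callable: the wrapper stays provider-agnostic, which is exactly the agility the panel described—swapping the underlying AI service means changing one argument, not rebuilding the product.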
AI Embodied — The Rise of Intelligent Robots and Its Implications

In addition to the inspiring discussions around leaner, meaner, purpose-built agentic AI, it was Boston Dynamics’ turn to spark our imaginations. Speaking on the intersection of robotics and LLMs, the team shared videos demoing ChatGPT running on some of their latest, high-tech robots.
To many of us, it felt something like watching a science-fiction novel come to life. The walking, talking, highly agile and intelligent robot didn’t take long to convince us of the very powerful and imminent possibilities afforded by this combination of emerging technologies.
Amidst the discussions of the enormous and frankly jaw-dropping possibilities, it was Khosla Ventures founder Vinod Khosla’s talk that tempered our enthusiasm with a healthy dose of caution. Vinod spoke on many of the ideas covered in his recent article, “AI: Dystopia or Utopia?”—including the stark and unavoidable reality that these technologies will likely lead to severe labor market disruptions (citing a projection that some 1 billion robots will be operational by 2040).

While the warnings for policymakers and society at large were real, Vinod concluded that the path forward was not that of the Luddite; he pointed to the recent actors’ strike and the balance ultimately struck between actors and studios, which he argued is likely detrimental in the long run. Overall, Vinod stressed the importance of remaining on the cutting edge, making early moves, and embracing technologies while simultaneously pushing for economic and social reforms that will help to ensure these awesome technological advances ultimately serve the greater good—and not just the lucky few.
Make sure to check back soon for our next installment of “No Registration”, and plenty more lanyard-free lessons straight from the conference room floor!
Or, interested in leveraging some of Scratch’s AI expertise for your own marketing and media initiatives?