
AWS re:Invent Roadshow - Charlotte


AWS re:Invent is Amazon Web Services' flagship annual conference. It is highly sought-after, always packed, and filled with huge announcements. It's also quite unattainable for most of us boots-on-the-ground workers lower on the totem pole. This is why I was delighted when AWS announced a series of "Roadshows" hitting several major cities. I had the awesome opportunity to attend the re:Invent Roadshow in my home city of Charlotte, North Carolina.

I will skip over the well-known stuff and try to focus on what seems like “new information and insights.” So yeah, no mention of Modernize, Optimize, or Monetize, and I'll leave out all of the stuff about Amazon Q and CodeWhisperer because they are well-documented elsewhere.

Generative AI

As you probably could have guessed, Generative AI was a big theme at the event. Even the announcements surrounding new storage options for S3 were communicated through the lens of “to aid in the development of new Generative AI capabilities.” Suffice it to say, Amazon is trying to make it known that they are not only a major player in the Generative AI space, but that they do it better than anyone else in the industry.

Although this point may be hotly debated, there are a few facts that we need to consider. AWS's Generative AI offering, Bedrock, gives users access to many different foundation models from providers such as Anthropic, Cohere, and Meta, plus Amazon's own Titan family, and many others. This was a big focus: Amazon offers choice, whereas others do not. One of my favorite capabilities powered by this choice is the ability to send the same prompt to multiple models at once and pick the output that is right for your business use case.
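That fan-out-and-pick pattern can be sketched in a few lines. This is a minimal illustration, not real Bedrock code: the model callables and the scoring rule below are hypothetical stand-ins, and in practice each stub would be replaced with a `bedrock-runtime` InvokeModel call via boto3.

```python
# Sketch: send one prompt to several candidate models and keep the output
# that scores best for the business use case. Models and scoring rule are
# hypothetical stand-ins for real Bedrock invocations.
from typing import Callable, Dict, Tuple

def best_response(prompt: str,
                  models: Dict[str, Callable[[str], str]],
                  score: Callable[[str], float]) -> Tuple[str, str]:
    """Invoke every model with the same prompt; return (name, output)
    for the highest-scoring output."""
    outputs = {name: invoke(prompt) for name, invoke in models.items()}
    winner = max(outputs, key=lambda name: score(outputs[name]))
    return winner, outputs[winner]

# Stub "models" for illustration only.
models = {
    "model-a": lambda p: "short answer",
    "model-b": lambda p: "a much longer, more detailed answer",
}

# Example scoring rule: prefer the more detailed output.
name, text = best_response("Summarize our Q4 results.", models, score=len)
print(name)  # -> model-b
```

The interesting design choice is that the scoring function is yours: it could be output length, a relevance check, or even another model acting as a judge.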

Another fact is that AWS has been doing AI for a very long time: over a decade. How do you think all of their Prime delivery trucks get to the right place at the right time? It's all powered by AI, of course. While this may not seem like a big deal, since, you know, there are different kinds of AI out there (for instance, their healthcare AI service HealthScribe leans on NLP and not necessarily an LLM), it actually leads to a different point: AI chips.

Of course, AWS is building its own chips to run AI! They got really great at making chips with Graviton, and now they are combining those chip-making capabilities with their historical usage (and understanding) of AI systems. This allows them to design new, highly performant, and cost-effective chips, such as AWS Trainium2, a purpose-built chip for training AI with a freaking awesome name.

Putting Your Data House in Order

The next big focus was on data. It is well known that AWS's new "limitless" Aurora PostgreSQL offering (Aurora PostgreSQL Limitless Database) is in preview. What may not be as well known is how the underlying architecture of this "limitless" relational database can contribute to the speed at which data can be accessed by other workloads, such as model training and Retrieval-Augmented Generation (RAG).

Speaking of putting data to work with Generative AI, it is clear that 80% of the "lift" when preparing a business for Generative AI use cases is in "getting your data house in order." This means all the usual big data work: data catalogs, governance, lineage, cleansing, etc. AWS and the partners at the conference see this as the biggest roadblock for most organizations. If the data is "garbage," then the GenAI output will also be "garbage."
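A concrete way to picture that "garbage in, garbage out" gate is a small cleansing step in front of any GenAI ingestion pipeline. The field names and rules below are hypothetical; a real pipeline would hang cataloging, lineage, and governance checks off the same choke point.

```python
# Sketch: drop incomplete or unusable records before they ever reach a
# GenAI pipeline. Field names ("text", "source") are illustrative only.
def clean_records(records):
    """Keep only records that are non-empty and carry lineage."""
    cleaned = []
    for rec in records:
        text = (rec.get("text") or "").strip()
        if not text:                    # garbage: blank or missing text
            continue
        if rec.get("source") is None:   # no lineage -> can't govern it
            continue
        cleaned.append({"text": text, "source": rec["source"]})
    return cleaned

docs = [
    {"text": "Q4 revenue grew 12%.", "source": "finance-db"},
    {"text": "   ", "source": "crm"},                  # blank text
    {"text": "Churn fell in March.", "source": None},  # no lineage
]
survivors = clean_records(docs)
print(survivors)  # only the first record survives
```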

Another thing worth mentioning on the data front is Amazon FSx for Lustre. AWS now offers the ability to move data from S3 to FSx for Lustre and back again. This means you can leverage the high-performance file system for things such as training models, but then stop paying for it as soon as you are done. The data just moves back and forth between S3 and FSx for Lustre. I thought this was pretty clever.
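The mechanism behind this is a data repository association that links a Lustre path to an S3 prefix. The sketch below builds the shape of such a request as a plain dict, mirroring boto3's FSx `create_data_repository_association` parameters as I understand them; the file system ID, paths, and bucket are made-up examples, not a tested configuration.

```python
# Sketch: the request shape for linking FSx for Lustre to an S3 prefix.
# IDs and paths are illustrative; auto-import pulls new S3 objects into the
# file system, and auto-export writes changes back, so the high-performance
# tier only holds data while you actually need it.
def build_association(fs_id: str, fs_path: str, s3_uri: str) -> dict:
    return {
        "FileSystemId": fs_id,
        "FileSystemPath": fs_path,        # where S3 data appears in Lustre
        "DataRepositoryPath": s3_uri,     # the backing S3 prefix
        "S3": {
            "AutoImportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
            "AutoExportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
        },
    }

req = build_association("fs-0123456789abcdef0", "/training-data",
                        "s3://example-training-bucket/datasets/")
# A real call would then be roughly:
#   boto3.client("fsx").create_data_repository_association(**req)
```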

One last thing that deserves a quick mention: Amazon EBS now integrates with ECS and Fargate. And guess what? It’s great for running inference workloads via ECS tasks. Check back later for a blog on how to test this new capability out. 
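Until that follow-up post lands, here is a rough sketch of what the integration looks like from the API side: ECS accepts an EBS volume configuration at task launch. The shape below mirrors boto3's `ecs.run_task` `volumeConfigurations` parameter as I understand it; the names, sizes, and role ARN are illustrative assumptions, not a tested deployment.

```python
# Sketch: an EBS volume configuration for an ECS/Fargate task launch.
# Values are hypothetical; a scratch volume like this could hold model
# artifacts for an inference task.
def ebs_volume_config(volume_name: str, size_gib: int, infra_role_arn: str) -> dict:
    return {
        "name": volume_name,            # must match a volume in the task definition
        "managedEBSVolume": {
            "sizeInGiB": size_gib,      # scratch space for model artifacts
            "volumeType": "gp3",
            "filesystemType": "xfs",
            "roleArn": infra_role_arn,  # lets ECS manage the volume lifecycle
        },
    }

cfg = ebs_volume_config(
    "model-cache", 200,
    "arn:aws:iam::111122223333:role/ecsInfrastructureRole")
# A real launch would pass [cfg] as volumeConfigurations to ecs.run_task(...).
```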


As far as trying to decipher whether or not Generative AI will bring about a “new era” in computing, it’s still hard to tell. New capabilities, yes. But as transformative as the internet? I think the jury may still be out on that one. One thing is certain: Generative AI is here to stay, and if you have the data to support a good use case, you should 100% go build it. Start with the problem you are trying to solve and work backwards; don’t just build it to build it and expect money to start appearing out of nowhere. These systems are complex and expensive, and not everyone needs them (although that could change as usage patterns evolve). 

If you are looking for some help with that "80% of the lift is getting your data in order," then, as always, Ippon is here to help. If you already have your data in order and just want to start creating use cases, drop us a line. We have experts ready to make your dreams a reality.

Post by Lucas Ward
Feb 1, 2024 9:15:21 AM

