Startup Suno AI helps consumers generate their own music online through a very simple interface. Unlike many startups that focus on text-based generative AI, Suno takes on the very different problem of building, testing, and serving models for audio. The Cambridge, Mass., company uses Oracle Cloud Infrastructure (OCI) AI infrastructure and other OCI services to create and run those models.
Below, Leo Leung, Vice President of Oracle Tech and OCI, chats with Mikey Shulman, CEO of Suno AI, about what generative AI startups want and need from their providers. The interview was edited for length and clarity.
Leung: What should AI startup founders be thinking about when it comes to foundational technology and infrastructure?
Shulman: The first thing would be picking very carefully where you want to innovate, and that really means picking very carefully where you don’t want to innovate. Before Suno, we learned that things like system administration aren’t really places where you move the needle. So, we focus all day and all night on figuring out the right way to model audio and plugging that in. We are open about the fact that we borrow so much from the open-source community for things like building transformer models on text, and it’s lovely to not have to reinvent the wheel there. We don’t just think about models that map A to B, since that’s not how most humans think about interacting with these things. Ultimately, we are trying to build products that people love to use, and to figure out what foundational technology helps ensure a pleasurable experience for the user.
Leung: It would be interesting to hear more about the data of music and the different types of workloads music represents. Can you talk a bit more about that and how that maybe influenced your choice of infrastructure or technology underneath?
Shulman: Music, or audio in general, is very far behind images and text in terms of modeling. The key problem is how to represent audio in a way that is intelligible to transformers. There are hiccups, one being that transformers work on what are called tokens, which are discrete things, while audio is not a discrete signal; it’s a continuous wave. Furthermore, the problem for audio, especially high-quality audio, is that it’s sampled at either 44 kilohertz or 48 kilohertz, so one second of audio will have roughly 50,000 samples. That’s just way too many samples, and we need some way to take this very high-frequency signal and kind of smush it down into something more manageable. We spend a lot of time innovating on the right way to take this very quickly sampled continuous signal and represent it as a much more slowly sampled discrete signal.
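Suno hasn’t published its exact representation, but a toy sketch makes the arithmetic concrete. The hypothetical tokenize helper below collapses each 20-millisecond frame of a 48 kHz signal into a single discrete token, turning 48,000 samples per second into 50 tokens per second; real systems learn this compression with neural audio codecs rather than a hand-coded feature like RMS energy.

```python
import numpy as np

SAMPLE_RATE = 48_000   # raw samples per second of audio
FRAME_SIZE = 960       # 20 ms frames -> 50 frames per second
NUM_TOKENS = 256       # size of the discrete vocabulary

def tokenize(waveform: np.ndarray) -> np.ndarray:
    """Map a float waveform in [-1, 1] to a slow stream of discrete token IDs.

    Toy stand-in for a learned codec: each frame is reduced to its RMS
    energy, which is then uniformly quantized into one of NUM_TOKENS bins.
    """
    n_frames = len(waveform) // FRAME_SIZE
    frames = waveform[: n_frames * FRAME_SIZE].reshape(n_frames, FRAME_SIZE)
    rms = np.sqrt((frames ** 2).mean(axis=1))        # one scalar per frame
    return np.minimum((rms * NUM_TOKENS).astype(int), NUM_TOKENS - 1)

one_second = np.random.uniform(-1.0, 1.0, SAMPLE_RATE)  # 48,000 samples
tokens = tokenize(one_second)
print(f"{len(one_second)} samples -> {len(tokens)} tokens")  # 48000 -> 50
```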
Leung: Did that influence the kind of infrastructure you needed, or do you use the same infrastructure and just reduce the data to a form you can feed into those models?
Shulman: Definitely. Just like any other machine learning model, these things aren’t super cheap to run. You want to do things quickly in production, but also when you’re just experimenting. We are constantly trying to make things better, so having some elasticity of compute, having availability of compute, is important.
Leung: That is a good lead into my next question: what needs have changed for you and the company that you couldn’t have predicted as you scaled?
Shulman: When we started the company, the first thing we did was buy the biggest GPU box that you can safely plug into a home outlet and start training the initial models there. That box sits unplugged in the next room. We did not really anticipate just how much scale matters for your models, your experiment throughput, and the way you roll things out to people. This is a cliche, but humans are very bad at reasoning about exponential growth. And so, despite having a PhD in physics, I too am very bad at reasoning about exponential growth. That certainly caught us by surprise. We also did not realize the extent to which products can come to market that take care of some of these concerns. For example, when we first logged into our Oracle cluster, everything we needed was just there. It was kind of a weird moment, because it was not just a bare machine where you have to set everything up yourself. It is a cluster. It was like this was a product built for people like me. I get all the creature comforts that I need to do really good work.
Leung: When I talk about infrastructure, everyone gravitates toward the GPUs, but there’s more to it than just the processors. From your perspective, what other important components of AI infrastructure do you leverage?
Shulman: I think one concentric circle out from GPUs is all the fit and finish on our cluster: the ability to add users, launch jobs, have network-attached storage, have fast SSDs, all the things that let us actually utilize the GPUs. That’s amazing. Then there are storage buckets for larger bits of data, user-generated content, and so on. Beyond the GPUs on the training side, we need all kinds of things to make the products run smoothly, whether that’s a service to deliver content quickly, or user management and queue management, and lots of building blocks: some of them we build, some of them we buy.
Leung: What are those special problems and solutions that you feel are specific to generative AI?
Shulman: This is an area that is quickly evolving, and things that you can take for granted today, you’re not necessarily sure you can take for granted tomorrow. Can I fit my model on one card today? Maybe I can, and in a month I can’t, which would screw everything up. Something like Modal is amazing. It lets us launch workers on GPUs extremely easily.
Look, generative AI is very compute intensive, and GPUs are annoying for software developers. They break a very sacred hardware-software abstraction barrier, and that has a way of rearing its head everywhere. I think that’s why a lot of this stack can be a little difficult to navigate.
But it’s not all GPUs; there’s also a ton of CPU work that goes into these things. There’s audio processing in general. When I daydream, it’s like maybe my cloud provider has made me not really care what cards I’m using. That would be swell, the same way I don’t necessarily care if I get spun up on an Intel CPU or an AMD CPU in my cloud machine. Why should I care exactly what card it is?
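As a rough illustration of the ease Shulman attributes to Modal, here is a minimal sketch in the style of Modal’s documented Python decorator API; the app name, function, and body are hypothetical, and the exact surface may differ across Modal versions.

```python
import modal

app = modal.App("music-workers")  # hypothetical app name

@app.function(gpu="A100")  # each remote call runs on a GPU-backed worker
def generate(prompt: str) -> bytes:
    # Hypothetical body: load a model and synthesize audio for the prompt.
    # A real worker would cache the model between invocations.
    raise NotImplementedError("model loading and synthesis go here")

@app.local_entrypoint()
def main():
    # Dispatches the call to a remote GPU worker provisioned on demand.
    audio = generate.remote("an upbeat synth-pop chorus")
```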
Leung: Going beyond the tech, what other support should AI startups be looking for from their service providers?
Shulman: We’re always asking: “How much pain can my provider take away so I can focus on the stuff that is my comparative advantage?” Every company’s answer here is going to be different. In a more research-heavy company, there’s going to be a lot of research tooling and experiment management and job management, etc. In a less research-heavy company, maybe it’s the world’s fastest CDN because I need to deliver content to people. I’m always thinking about what we are doing that we shouldn’t be doing, and how we stop doing that. And very often there are solutions out there; you just have to know where to look.
Leung: My final question is: how should fast-growth AI companies think about costs?
Shulman: For AI companies, a big fraction of your spend is compute, so that’s something you have to think about judiciously. Sometimes you can find slightly cheaper solutions, but the cost savings can be far outweighed by the reliability and the flexibility of going with a real provider. There are a lot of things popping up and going away, and we want to be around in 10 years, so we should probably be doing business with companies that are also going to be around in 10 years. If you have a plan to start using somebody and then get off them in a year, that needs to be a very conscious decision and not one you make lightly. That’s part of why we picked OCI: trust.
Are you building your company and evaluating cloud provider options? Learn more about OCI’s broad selection of ISVs that offer AI services to help accelerate your development and deployment here.