Presentation: ESPN Next Generation APIs Powering Web, Mobile, TV



11:50am - 12:40pm




Key Takeaways

  • Learn the architecture ESPN uses to power its web, mobile, and TV apps.
  • Hear how ESPN developed their APIs to enable consumption from a variety of different sources.
  • Understand some of the design decisions ESPN went through as they evolved into their current architecture.


In this talk, Manny will discuss the tension of optimizing APIs for different experiences while supporting hundreds of endpoints and many web and mobile applications at extreme scale. In the last few years, ESPN has made many iterations on its API platform, eventually settling on a product-centric architecture along with a new platform for building APIs. This new platform supports a two-layer architecture that allows ESPN to optimize both for engineering productivity and for the Fan experience. In addition to the technical issues that the teams worked through, they also faced organizational challenges that this new approach helped mitigate. The new platform now powers the recently redesigned site and mobile app. ESPN is able to push 1 million events out to Fans in less than 100 milliseconds, complete with fully autonomous scaling. On a busy day, over 3 billion messages are pushed out to Fans!


What is your role today?
I’m Senior Director of Distribution Platforms at ESPN. My role is focused on how we get data to our fans, our partners, and many internal systems, with a concentration on APIs and real-time messaging. This is where I spend most of my time.
What are the problems that you are trying to solve? Is it integration with lots of distributed devices? Is it mobile? Is it TV? Is it the Fantasy Football app?
All of the above. One of the challenges with distributing data to different partners, both internal and external, is that use cases differ dramatically. Some folks want to consume basic scores. For these types of use cases, data normalization across sports is advantageous. Then there are other use cases where a client will need a much larger dataset, perhaps to analyze and visualize statistics. We also have very sophisticated web pages that go into great detail on the data we have. For these more complex use cases, clients need the full breadth of data.
The challenge is to efficiently transfer data around, whether inside our firewall or outside of it. However, a different API design and distribution paradigm goes along with each of the two use cases above. The difficulty is in balancing data needs across all of our products. How do you make your APIs reusable while serving these very disparate needs?
What is the basic approach that you are using with your new API to be able to shape the data for all these different devices?
At a high level, we started down our journey like most companies, with a ‘one-size-fits-all’ API model. We designed the APIs in a way that made sense when looking at the domain model, rather than thinking through every use case and client.
This approach let us power a number of different external partners, and it worked fairly well at small scale. However, it breaks down in those complex use cases where you are trying to share more than just basic, simple data. When you get into the more complex integration patterns, that one-size-fits-all API model does not work so well. So we evolved, and at a high level we are now promoting an architecture in which we segment our APIs into two distinct tiers of functionality.
The first is what we refer to as the ‘Core API’ tier, and the second is the ‘Product API’ tier that sits on top of the Core APIs. In a nutshell, the Core APIs contain the core business logic and directly interact with the database tier. This tier is completely product agnostic. The Core API tier leverages a REST architecture, complete with HTTP and JSON. One important principle with Core APIs is that they are lightweight: instead of modeling many attributes together, we liberally use references to other related Core APIs. Ultimately, it’s a large graph that you can traverse on demand.
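As a hypothetical sketch of that principle (the field names, URLs, and `$ref` convention here are illustrative assumptions, not ESPN's actual schema), a lightweight Core API resource might embed only its own attributes and carry references for everything else:

```json
{
  "id": "12345",
  "firstName": "Jane",
  "lastName": "Doe",
  "team": { "$ref": "https://core.example.com/v1/teams/10" },
  "statistics": { "$ref": "https://core.example.com/v1/athletes/12345/statistics" }
}
```

A consumer can then choose which references to follow, traversing the graph only as deep as its use case demands.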
We use both Java and Groovy. In this talk, I will get into more details on the other frameworks we are leveraging for the Core API tier. The Product API tier on top of Core is where things get more interesting, and this is the platform that we built internally which we call Binder. Binder’s mission in life is to bind Core APIs together for a given product use case. The Product API tier is all about aggregating and composing a number of different Core APIs to build a richer API model for a specific use case.
The first thing the platform gives us is composition: we are able to compose a number of different Core APIs with relative ease. The framework handles that for us, and we will probably go through a few code samples just to see how easy that is. The second is expanding related data structures into one another, using another technology built into the framework called Link Expansion.
For example, let’s take a player. We have a player Core API that has the basic information about a player: the first name, the last name, the basic attributes you would imagine. But deep statistics for a given season, a previous season, or a season 10 years ago would each be a different Core API call, referenced from that base player API, and you can use those references to traverse the graph from the Product API platform. Binder has a simple way of expanding those entities as you see fit for a given Product API.
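A minimal sketch of the idea behind link expansion (the `expand` method, the fetcher abstraction, and the `$ref` convention are illustrative assumptions, not Binder's actual API): walk a resource's fields and, wherever a field is a reference, replace it with the fetched document.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class LinkExpansion {
    // Hypothetical: the fetcher stands in for an HTTP GET against a Core API URL.
    @SuppressWarnings("unchecked")
    static Map<String, Object> expand(Map<String, Object> resource,
                                      Function<String, Map<String, Object>> fetch) {
        Map<String, Object> out = new HashMap<>();
        for (Map.Entry<String, Object> e : resource.entrySet()) {
            Object v = e.getValue();
            if (v instanceof Map && ((Map<String, Object>) v).containsKey("$ref")) {
                // Replace the reference with the referenced document.
                out.put(e.getKey(), fetch.apply((String) ((Map<String, Object>) v).get("$ref")));
            } else {
                out.put(e.getKey(), v);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> player = new HashMap<>();
        player.put("firstName", "Jane");
        player.put("stats", Map.of("$ref", "/athletes/12345/stats"));

        // Fake fetcher standing in for a Core API call.
        Function<String, Map<String, Object>> fetch = url -> Map.of("touchdowns", 30);

        Map<String, Object> expanded = expand(player, fetch);
        System.out.println(expanded.get("stats"));     // prints {touchdowns=30}
        System.out.println(expanded.get("firstName")); // prints Jane
    }
}
```

A real Product API would also control expansion depth and field selection per use case; this sketch only shows the single-level substitution.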
I was just going to say, top to bottom, it’s JSON?
Top to bottom, we standardized on JSON. We did have quite a few nerdy debates about alternatives but ultimately stuck with HTTP and JSON. What we really love about HTTP and JSON is that they are really easy to debug.
What is the primary focus for the talk?
There are two parts to the talk, and we just covered the first, which is both the architecture and the platform. A lot of the details we will get into include things like caching; caching is huge for this platform to work at scale. The other is automatic scaling in AWS; the platform was designed from the ground up to work in the cloud. We’ll also hit on how we support making all calls asynchronously using RxJava, with a dialect that looks like JavaScript Promises.
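To illustrate the promise-like asynchronous style, here is a small sketch. The talk's platform uses RxJava; this example uses the JDK's `CompletableFuture` purely to show the composition pattern, and the fetch methods are hypothetical stand-ins for non-blocking Core API calls.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCompose {
    // Stand-ins for Core API calls; in a real system these would be
    // non-blocking HTTP requests returning asynchronously.
    static CompletableFuture<String> fetchPlayer(String id) {
        return CompletableFuture.supplyAsync(() -> "player:" + id);
    }

    static CompletableFuture<String> fetchStats(String id) {
        return CompletableFuture.supplyAsync(() -> "stats:" + id);
    }

    public static void main(String[] args) {
        // Fan both calls out concurrently, then combine the results
        // into a single product-level response, promise-style.
        String response = fetchPlayer("12345")
                .thenCombine(fetchStats("12345"), (p, s) -> p + "+" + s)
                .join();
        System.out.println(response); // prints player:12345+stats:12345
    }
}
```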
The second half of the discussion will focus on how we then expanded the architecture to do real-time push to our fans. Instead of traditional API polling, we have a cutting-edge WebSocket platform that integrates with all our APIs and pushes messages over WebSockets to millions of concurrently connected fans.
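The core fan-out idea can be sketched in a few lines. This is a single-process illustration under stated assumptions: `Session` is a hypothetical stand-in for a WebSocket connection, and at real scale the fan-out is distributed across many servers rather than a single in-memory list.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class Broadcaster {
    // Hypothetical stand-in for a WebSocket session.
    interface Session {
        void send(String message);
    }

    private final List<Session> sessions = new CopyOnWriteArrayList<>();

    void register(Session s) { sessions.add(s); }

    // Fan one event out to every connected fan.
    void publish(String event) {
        for (Session s : sessions) {
            s.send(event);
        }
    }

    public static void main(String[] args) {
        Broadcaster b = new Broadcaster();
        StringBuilder received = new StringBuilder();
        b.register(m -> received.append("A:").append(m).append(" "));
        b.register(m -> received.append("B:").append(m).append(" "));
        b.publish("TOUCHDOWN");
        System.out.println(received.toString().trim()); // prints A:TOUCHDOWN B:TOUCHDOWN
    }
}
```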
Is this talk intermediate, advanced or somewhere in between?
It definitely isn’t beginner. I would say intermediate to advanced.
What are the key takeaways that you want someone coming to your talk to gain?
The first big one is that the old way of designing and building APIs is too monolithic. We were stubborn about changing this approach because we thought that having two tiers of APIs meant more complexity. However, we later discovered that once we embraced separation of concerns in the API tier, life was much easier for everyone.
The other topic that I am going to tease out is this new thing that everyone has been talking about: stream processing. One of the challenges with stream processing is that if you already have APIs built out, plugging them into a stream-processing model is very challenging. You have to rethink how you are processing data and how you are fanning it out to folks.
It’s a big challenge. We figured out an innovative way to plug into existing APIs based on REST and JSON, façade them to look like streams, and push that out at scale to millions and millions of fans. We were able to plug that part of the system in with very little effort on the back-end. That’s another piece of the message we want to get out there to get folks thinking about.
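One simple way to facade a request/response API as a stream is to poll the endpoint and emit an event only when the payload changes. The sketch below illustrates that pattern under assumptions: the `Supplier` stands in for a REST endpoint, and the talk does not describe ESPN's actual mechanism in this detail.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class RestToStream {
    // Poll an existing REST endpoint (represented by a Supplier) and push
    // downstream only when the payload differs from the last one seen.
    static void pollOnce(Supplier<String> endpoint,
                         String[] lastSeen,
                         Consumer<String> downstream) {
        String current = endpoint.get();
        if (!Objects.equals(current, lastSeen[0])) {
            lastSeen[0] = current;
            downstream.accept(current); // emit the change as a stream event
        }
    }

    public static void main(String[] args) {
        List<String> emitted = new ArrayList<>();
        String[] lastSeen = {null};
        String[] score = {"HOME 0 - AWAY 0"};

        Supplier<String> scoreboard = () -> score[0];

        pollOnce(scoreboard, lastSeen, emitted::add); // first poll: emits
        pollOnce(scoreboard, lastSeen, emitted::add); // unchanged: no emit
        score[0] = "HOME 7 - AWAY 0";
        pollOnce(scoreboard, lastSeen, emitted::add); // changed: emits

        System.out.println(emitted.size());  // prints 2
        System.out.println(emitted.get(1));  // prints HOME 7 - AWAY 0
    }
}
```

The events produced this way can then be handed to a fan-out layer (such as the WebSocket platform described above) with no change to the back-end APIs.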

Speaker: Manny Pelarinos

Senior Director of Distribution Platforms @ESPN

Manny is the Senior Director of APIs at ESPN. He is a highly experienced technology leader with 15 years of experience building progressive systems in the media, healthcare, financial, and retail sectors. He is a skilled problem-solver with expertise across the complete software development life-cycle, with repeated success in building new software engineering teams from the ground up to solve complex business problems and grow new businesses. He has expert knowledge of web, mobile, application architecture, and continuous integration, and has worked in a broad range of roles, including Director, Manager, Team Lead, Software Architect, and Developer.
