Emit is the conference on event-driven, serverless architectures.
Emit Conference closed out with a panel on the future of event-driven computing. On the panel, we had Chris Anderson from Azure Functions, Jason Polites from Google Cloud Functions, and Anne Thomas from Gartner. If anyone would have solid insight and future predictions, it'd be this crew.
Austen Collins facilitated. He asked them for all kinds of tidbits: whether pricing was really the main driver for serverless adoption, what were the most common use cases they saw (as well as the most interesting—serverless in academia, anyone?), and what concerns and problems they were dealing with to keep this stuff up and running.
Watch the panel discussion in the video below, or read on to see the transcript.
More videos:
The entire playlist of talks is available on our YouTube channel here: Emit Conf 2017
To stay in the loop about Emit Conf, follow us at @emitconf and/or sign up for the Serverless.com newsletter.
Transcript
Austen: We have a very interesting panel here. We've heard a lot from the Serverless community, the Serverless user base, a lot of the smaller Serverless vendors. But I don't think there's ever been a conversation between the actual service providers, the major ones. And we were fortunate enough to have a couple of them attend and agree to participate in a casual conversation about Serverless, serverless architectures, event-driven architectures, talking about use cases, problems, best practices, and potentially what the future looks like. So, we have one person who wasn't able to make it today and that was Michael from IBM, but to fill in his shoes at the very last minute, we have Anne Thomas over here, a distinguished analyst from Gartner. And this is kind of designed as a casual conversation and the questions were designed for vendors, but Anne's gonna do her best to fill in here, and provide the industry perspective from outside the vendor. Joining us also, we have Jason Polites, the PM of Google Cloud Functions. Here, everyone, this is Jason.
Jason: Hi, guys.
Austen: Also, we have Chris Anderson, the PM of Azure Functions as well.
Chris: Hey, hey.
Austen: All right, let's jump right into it. I'm personally super excited about this, and again, thank you for joining. I think everybody is so interested in hearing from you guys in particular. So, let's start off with something kind of lighthearted, maybe a little controversial, actually, and it's that Serverless name. So, what does serverless mean to you? Because we've heard from everybody, everybody on Twitter, of course, they've weighed in on this and pretty much everything else that's ever happened. What does serverless mean to you and what is your preferred name for this new cloud computing service in general? Is it event-driven computing, serverless computing, functions as a service or something else?
Jason: Yeah, so, you know, this is one of those terms that you might say is poorly defined, but actually, it's overly defined; it seems to have many definitions. I think what we see happening is...there are three categories of definitions that people tend to assign to this term. One is in the spirit of the word: "I don't have a server," or, "I don't manage a server," and so that component of administration is relieved of me. Then there's these other two things that sort of get interwoven. One is the event-driven side of it, and then the third is the economics associated with it. Am I paying for the machine when I'm not executing code on it? And those sort of emerged into the market as a bundle, all called serverless. And then you also have the fourth, I suppose, which is functions as a service, as distinct from a platform as a service or a container or any other kind of unit of compute. I'm not sure that the bundling of that is necessary. I think that, you know, we see a lot of customers who gravitate towards one or the other depending on the nature of the customer. The abstraction away of infrastructure is probably the most dominant characteristic that I see, and I think it's the most appropriate one. And I think that for two reasons. One, because the eventing side of it goes well with serverless, like peanut butter and jelly, but it's not necessarily required...you should be able to emit events to something that's not serverless, and that should work. And so, for me, the best definition I think is just really as the word describes: no servers.
Austen: That's great. Chris, any thoughts?
Chris: Yeah, I mean, I think the word is pretty good. I think it's like two-thirds there, right? I'm going to think less about my servers. That's kind of the promise of serverless. I think it's just that everyone likes the short form of it. I think the full version of what we should describe would be something like eventful serverless, and then you've got the full story there. That's the last third that's missing, where you have to think about, you know, what your events are, how you're gonna source your events, how you're gonna store your events. That whole piece is missing from just the name. So, whenever we're talking about serverless, everyone always indexes on the lack of infrastructure. But then I almost feel like it's kind of a violation of what we're trying to do. I want to think less about the servers, I want to not have that conversation, and I actually wanna talk about the events more. That's the whole point. We're unblocking the way to using the events in a cleaner fashion. Then, of course, you get into the confusion of everyone thinking of serverless as functions as a service, when it's clearly way more than that, both in terms of, you know, data services, workflow as a service. You know, now we have these new kinds of event gateways to think about. I think that the space is gonna widen up, and maybe widening the space will help to reduce the over-indexing of serverless on functions as a service. But I like that it's kind of a community-driven word. It wasn't like one of the vendors came up with it. So, from that point of view, it's really up to the crowd how that word will evolve.
Austen: Right, that's true. Anne, would you like to weigh in on this?
Anne: Yeah. Well, Gartner loves to define things, so obviously, we have...
Austen: Yes. You're very good at it.
Anne: ...we have plenty of definitions on this particular topic. So, as Jason was saying, I think that the economics is a really big part of what serverless is all about, because I don't want to have to pay for a bunch of infrastructure that I'm not currently using, right? And so, the real appeal of serverless is the fact that I am only paying for what I'm actually using, for the periods that I'm actually using it, in 100-millisecond increments and things like that. I think that is a core definitional aspect of serverless. But I don't think that what we call function PaaS, fPaaS, is, in fact, the only type of serverless environment we can have.
So, my colleague Lydia Leong will certainly identify the fact that we think that something like App Engine is a serverless environment, because you don't really need to be concerned with how many machines you're actually allocating underneath. I don't even have to pay for all that stuff; it's not like I have to pre-allocate a bunch of stuff, right? And in fact, if you look at the AWS serverless platform, Lambda is only one tiny piece of the full serverless platform.
There's a whole bunch of other technologies that go along with the whole platform, like the API Gateway and DynamoDB and, you know, a bunch of the other components out there, which are necessary to build applications, because you can't build an entire application and run it just in your little function environment, right? And I call it fPaaS as opposed to just FaaS because it's a platform on which I'm running functions. I'm not getting functions provided for me. I have to actually put the functions onto the platform. And besides, we don't like to create new as-a-service primitives and things like that. So, is event-driven or eventful essential? I like that, I like that a lot.
Are those essential components to it? I think that in the current version of what people are doing in serverless computing, we are doing it entirely through this event-driven model because how do you trigger functions? They're triggered by events. But what I found really interesting right after the AWS Lambda, they came out with the API Gateway because people were saying, "Well, how do I actually invoke my functions? I want a request response invocation model to invoke my functions." So, you know, I think that there are a lot of people out there who are currently using these function platforms and not necessarily really building an event-driven model. And I think event-driven models are really hard for the vast majority, certainly for mainstream organizations.
Jason: There's one characteristic that I've noticed that both agrees and disagrees with that point. So, the first thing, I think a lot of people...my intuition is a lot of people gravitated towards the functions as a service in the context of HTTP and the context of the synchronous request from the client. Not necessarily because, "Well, this is a better unit of compute for that model." But there was just a simplicity to it, you know, I could just get up and running with no frameworks, with no extra additional work.
And my actual need was to do an HTTP request, and this just seemed the simplest sort of within arm's reach kind of thing I could grab hold of. But the other point I was gonna make is that events from what I would refer to from the Google perspective as a third-party service, so this would be like a Stripe or a GitHub or some existing service out there. Events emitted from those things have been around for a long time and they tend to be delivered over HTTP via webhooks. So, I think there is also an argument to be made for the API gateway style deployment still being within the realm of eventing, although it arguably lacks some of the orchestration or frameworks around that to give guarantees and so on, but we do see a lot of those sorts of use cases as well.
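The webhook-as-event model Jason describes, a third-party service like Stripe or GitHub delivering events over HTTP, can be sketched in a few lines. This is a minimal illustration; the payload shape and event names are hypothetical, not any provider's actual schema, and real handlers would also verify the webhook's signature.

```python
import json

# Minimal sketch: treat an incoming webhook POST body as an event and
# dispatch on its type. The "type"/"data" envelope is a made-up example.
def handle_webhook(raw_body: str) -> dict:
    event = json.loads(raw_body)
    handlers = {
        "payment.succeeded": lambda d: {"status": "recorded", "amount": d["amount"]},
        "payment.failed": lambda d: {"status": "flagged", "reason": d.get("reason")},
    }
    handler = handlers.get(event["type"])
    if handler is None:
        # Unknown event types are acknowledged but ignored, so the sending
        # service does not keep retrying the delivery.
        return {"status": "ignored"}
    return handler(event["data"])
```

In a deployment, this function body would sit behind an HTTP trigger or an API gateway route, which is exactly the sense in which the API-gateway style still lives "within the realm of eventing."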
Anne: Well, I like that, from the examples that we saw today, almost all requests coming from outside tend to be request-driven, but then once you get in through the outer layer, everything else behind it winds up becoming [inaudible 00:10:24]. And I actually anticipate that that's going to be the more normal way of doing it, but obviously, you know, supply chains generate events all the time, and as you start building out an ecosystem, your environment has to be able to handle events that are coming in, because they're not necessarily going to be request-driven. And if your systems aren't event-driven, you're not gonna be able to play very effectively in the digital business market.
Austen: Yes, absolutely. Good answers. There's definitely a lot of definitions here being put under this big umbrella term. And it's been hard to nail down but there's no mystery from my perspective as to why it's caught on, because when you say that word to an engineer or a developer you can see them light up, right? It's this purely emotional response and Jason kind of touched on it too. It's just like the simplest solution to get something done. And so when I see that reaction in people, just saying that word it's clear that you know, that this is the right term even though it's not technically accurate. Still, it's been hard to pin down. So, from all your perspectives, you know, why should people be excited about serverless computing, and is there a specific value prop that you say gets people the most excited?
Chris: I think for me, the value prop of serverless is really delivering on the most concentrated value of the cloud, right? It's all about trying to get the most value out of someone who can solve the problems for me that are unrelated to solving business problems. Serverless gets me the closest to the point where everything that I do is unique to my business problem and I'm not doing nearly as many technical chores.
There's still cases where people are having to use serverless technologies to go and build technologies with which they then solve their business problems, but for a lot of cases, people can walk up to it and get an HTTP endpoint for their [inaudible 00:12:25] just like that. And before, you know, it took a couple of extra steps of deciding the app that I was gonna run on, the infrastructure I was gonna run on. You know, that to me is the promise: the agility that comes from shedding the chores that are unnecessary to my business.
Jason: Yeah, I definitely think I would agree absolutely with that. The other thing I'll say is that this may not be obvious and I'm not even convinced it's real yet, but there's somewhat of a forcing function to encourage people into a given architecture. So, by saying that while you have this small unit of code, it really has to be stateless because it might disappear at any moment. It's going to scale up and scale down.
You are sort of drawn into a service-oriented architecture or microservices architecture, and anyone that's trying to operate at scale needs to have certain primitives in place, and you know, some mentions have been made around databases, and sharding, and scaling those things, and so on. And so, the entry into that...you know, give a person a machine and a programming language and they might come up with a monolith, because that's the way that they think. But I wonder if there's an evolution happening here that people are just sort of being drawn into because, you know, getting started is very, very simple, getting started without having to worry about infrastructure, and they just end up in a services-based model, you know, assuming that's a better place to be.
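Jason's point about statelessness can be made concrete: because a function instance can disappear at any moment, anything that must survive between invocations has to live outside the function. Below is a minimal sketch, with a plain dict standing in for a durable external datastore; all names here are hypothetical, not any provider's API.

```python
# "Stateless" in practice: instance-local state (globals, in-memory caches)
# can vanish whenever the platform recycles an instance, so durable state
# must live in an external store that outlives any single instance.
def count_visit(user: str, store: dict) -> int:
    # The function keeps nothing between invocations; the injected store
    # (standing in for a real database) holds all durable state.
    store[user] = store.get(user, 0) + 1
    return store[user]

external_store = {}  # stands in for a durable datastore
count_visit("ada", external_store)
count_visit("ada", external_store)
```

Even if every call above ran on a freshly started instance, the count would still be correct, because nothing is kept in the function itself.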
Austen: Do you feel that you guys kind of touched on similar topics there? Do you feel that those value propositions are more compelling than the pricing model which is also fairly disruptive? And from our perspective, I'd say we see a lot of people come in through the front door of serverless because they're attracted to that pricing model and they stay because they realize that this is a lowest total cost of ownership situation that helps people get onboarded into microservice architectures and all of that. So, I'm just curious, you know, from your perspective, how compelling has that pricing model been for people?
Chris: I mean, to me, agility is like, "How many decisions do I have to make before I've solved the problem?" Right? And from that point of view, if I don't have to think as hard about pricing, because I know that I don't have to worry about that zero-to-one kind of debt that I've got to get through, that's, you know, agility; that's some burden I don't have to pay. We do know that there's cases where I can essentially save money after I reach a certain amount of scale by not running on serverless. You know, for us, we actually have the ability to run on dedicated instances or the serverless instances, but we always have the default be serverless, because we know that for getting started, you're just trying to get in there and not think; you're trying to solve the problem that you have, not solve problems that you don't know you have yet. You know, we can have tools beyond that, but we want that intro experience to be just buttery smooth.
Anne: So...
Jason: Yeah, I think that... Oh, I'm sorry. Go ahead.
Anne: I was just gonna say that most of the inquiries that I have with my clients regarding serverless are 100% about the cost. And I'll push back on that. I mean, it's like almost any cloud service you can find, there's a free tier. And so, I mean, I can go start and build applications for free and do, you know, a whole bunch of testing on it. So, that's not the cost concern. The cost concern is once I put this into production and my load suddenly starts to shoot up, how much is that gonna cost me? And is there a way for me to actually reduce that? So, almost all the questions I get related to serverless invariably are, "How much money can we save if we go this route versus going the more traditional approach?"
Jason: Yeah, I mean, there's certainly a point at a particular scale where if you have a persistent QPS, you know, 1,000 QPS relatively persistently, and maybe there's some sort of diurnal pattern to it, but you have some auto-scaling capability even on virtual machines or containers, then, you know, an argument might be made that, "I'm already optimized because my infrastructure is matching the shape of my curve." But what actually happens is, to the points being made, when somebody gets started, they are price-sensitive. Even if it's a small group in a large organization, they might start with a small virtual machine because it's the most cost effective. And that might be cheap, but then what do they do? Do they scale vertically? Do they now have to think about horizontal scaling? And the other important point is that if we do take this leap towards microservices or nanoservices or even smaller, then we're taking an application that was shaped like this and we're breaking it up and spreading it like this. Now, some of those services are not gonna be called very frequently; you'll have sort of a distribution. So, the ones at the tail that are not called very frequently are at zero most of the time, so you can't really go to that model and pay for all of this [inaudible 00:17:35] infrastructure. So, that's where the two relate very well together, I think.
Anne: But I've talked to a lot of organizations that are now going and rearchitecting their applications because they're trying to reduce that operational cost. You know, so for example, they built out web applications and they're running in ECS or something else like that. And they're saying, "Okay, we've got all of these VMs that are running all these containers that are running all of the time, and they're not being used most of the time." You know, and then I talked to them and it's like, "Well, okay, you can basically define your web code as a file which can be downloaded, and when they execute different individual methods, that'll trigger a function which is gonna go away as soon as it's done. And, okay, maybe your data store, that's still gonna be running in a more persistent type of environment." But they're saying, "Yeah, that's like maybe 8% of our system, and all the rest of it is where, you know, we're getting all the costs." So, yes, they're now talking about doing a complete rearchitecture of the application just so that they can reduce that cost.
Austen: Right. So, we chatted on about what the definition of this is, we've talked about why it's exciting, what the value props are. So, from each of your perspectives, what are the major use cases for this on your respective platforms? And maybe, Anne, if you could chime in on just from a broader industry perspective, what are people using this stuff for? And also, on a personal level, what are the use cases that you've seen that are most exciting to each of you?
Jason: Yeah, so the first thing I would say is the use cases we see today, it's not clear whether they're representative of the longer term, when some of these customers you're referring to start adopting in force. But what we do see today is largely, I would say, three main use cases. One is on the synchronous HTTP side, which is really just a replacement for, you know, platform as a service, an App Engine-style solution, where they have a web client or a mobile client and they're using it as a back-end for that and then they're [crosstalk 00:19:49] some data source somewhere.
On the event-based side, that tends to be largely around what we would call lightweight ETL, so data processing in some way. A mutation occurred in some data source, so I'm gonna pull the data from that source, transform it in some way, and then send it somewhere else. That pops up a lot in kind of IoT-style use cases where you have, you know, [inaudible 00:20:13] coming from the field, and they need to be processed in some way. And then there's the third use case, which also answers your second question, which is the one that I personally find the most interesting.
It's one we see emerging from academia, where they might need to do a huge amount of processing on data in a very short space of time and then not need anything for a week. And so, you know, spinning up a large number of virtual machines takes a long period of time; it requires a certain amount of sophistication in understanding infrastructure, as opposed to just pushing a button and letting the provider deal with the pain of starting that thing up, you know, instantly.
Austen: It's powerful.
Jason: And then disappearing an hour later or whatever it is. So, they're getting massively parallel processing by virtue of the fact that they don't need to concern themselves with how to scale it.
Austen: Can you give us any type of breakdown on those use cases, anything that's more popular than the other?
Jason: It depends on what metric you're talking about. On numbers of customers more on the HTTP side. On consumption of compute time, way more on the data processing side.
Austen: Right.
Jason: And the third is sort of more of a fringe use case at this point. You know, smaller numbers of customers, large amounts of compute, but not for extended periods of time.
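The "lightweight ETL" pattern Jason describes, where a mutation event names a record that the function pulls, transforms, and forwards, can be sketched in a few lines. This is a toy illustration with in-memory stand-ins for the real source and sink; the names and payload shapes are assumptions, not any provider's schema.

```python
# Lightweight ETL sketch: a change event arrives, the function extracts the
# named record, transforms it, and loads the result into a downstream sink.
# `source` and `sink` stand in for real storage and queue clients.
def etl_handler(event: dict, source: dict, sink: list) -> None:
    record = source[event["record_id"]]  # extract: read the mutated record
    transformed = {                      # transform: e.g. unit conversion
        "id": event["record_id"],
        "celsius": round((record["fahrenheit"] - 32) * 5 / 9, 2),
    }
    sink.append(transformed)             # load: forward downstream

source = {"sensor-1": {"fahrenheit": 212.0}}
sink = []
etl_handler({"record_id": "sensor-1"}, source, sink)
```

Because each invocation handles one event and exits, this shape scales out naturally as event volume grows, which is why it dominates compute-time consumption in the breakdown Jason gives.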
Austen: Interesting. Chris?
Chris: Yeah, I mean, I actually liked one of the slides which you put up there, and I made sure I got a picture of it, in terms of just... It's been amazing how many different types of scenarios people have come in there and tried to use it for. I haven't found it getting pigeonholed in any one particular scenario. Clearly, you know, we can see that HTTP is the most popular in terms of number of functions; we see that as well. And in terms of, you know, compute time, it's stream processing for the most part; very, very high-scale stream processing consumes the most, you know, amount of compute.
But how you can approach that can solve lots of different problems, from, you know, various CRUD-style applications and service-to-service webhook type things, to really, really high-scale, very important IoT scenarios. I haven't really seen anyone shy away from the space so far. You know, we got PCI compliance certified, so all the various, you know, folks that were trying to use this could finally, you know, stop knocking on our door. It really hasn't been one of those things that has excluded anyone thus far. The coolest scenario that I've seen has generally been the cases where customers have used us to build an extensibility platform for their business.
So, we had [inaudible 00:22:51], I guess, as one of the case studies we have for this, where they were using functions to build these various extensions to their service, and they got tired of that and just built functions into their own portals so that their customers could then write the code for them. And that saves them the time of having to have developers write these custom extensions; they can empower their customers to solve their problems without having to go through some kind of salesperson to do that kind of stuff. And so, that to me is a unique way where someone's able to use our platform to build a platform, which is always fun to see.
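The "platform on a platform" idea Chris describes, letting customers register their own extension code against the host's events, can be sketched with a tiny in-process registry. This is purely illustrative; a real system would sandbox, meter, and isolate customer-supplied code rather than run it in-process, and every name below is hypothetical.

```python
# Sketch of an extensibility platform: the host exposes a registry, and
# customer-supplied functions are attached to named event types. When the
# host emits an event, every registered extension runs against it.
extensions = {}

def register(event_type):
    """Decorator customers would use to attach code to a host event."""
    def wrap(fn):
        extensions.setdefault(event_type, []).append(fn)
        return fn
    return wrap

def dispatch(event_type, payload):
    """Host-side: run all extensions for an event, collecting their results."""
    return [fn(payload) for fn in extensions.get(event_type, [])]

@register("order.created")
def customer_discount(order):
    # hypothetical customer-supplied logic
    return {"order": order["id"], "discount": 0.1 if order["total"] > 100 else 0.0}
```

The appeal is exactly what Chris notes: the host's customers extend the product themselves, instead of filing requests for custom development.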
Austen: Yeah, that's an interesting trend. We see a lot of these SaaS companies now offering their own version of serverless compute for that same reason. And that's been fun to watch, but that's a whole other subject. So, Anne, when you hear clients talk about this, what are they using this for, what are the most popular use cases, and is there anything particularly interesting out there that you've seen?
Anne: Yeah, so as both Jason and Chris have said, you know, I think that the web back-end and the mobile back-end are pretty popular. Number one, that's probably the most common application people are building, right? But I also think analytics, and potentially, you know, some AI and machine learning type things, would actually be great to put into that kind of scenario. But I would say that the vast majority of applications that are being developed for the serverless environment are relatively simple applications that don't have a really complex set of events triggering other events, triggering other events, because we don't have tools yet, although we will very soon.
Austen: That's why we're here.
Anne: Which enable people to grok the massive event dependencies, you know. So how do I go build an account management system in serverless? You know, how do I build an ERP system in serverless? That's kind of a scary thought right now, but you know, with proper tooling, and as we start rethinking the way we think, and we start thinking in terms of events triggering things, working in a functional model [inaudible 00:25:01] imperative model, it might become simpler and easier, and we can, in fact, build really complex systems this way.
Austen: So, the cloud providers are providing this fantastic service, serverless compute, it's great. We offload all the maintenance, all the management, all that hard effort over to you, and occasionally get upset, you know, when it goes down. But it's fantastic for us because the rest of us can focus on solving business problems, getting out to market, you know, reducing overhead and all that stuff. Given that we have you here, and for the sake of empathy, just so we can get this out there, I'm curious: what concerns and problems are you guys worried about on your end to keep this stuff up and running so that we can, you know, drive benefit from it?
Jason: Yes, so I think that's a good question. I think the way I would describe it is pace. And so, you know, the meme is that large companies move slowly, and that's true, but it's less about a lot of bureaucracy and red tape and more about the fact that, certainly at Google, we're concurrently building many things as a platform. So it's not just one product that is being built as fast as it can; it's a platform approach, and that necessarily means that our expectation, I'd say I hope our expectation, is that customers will use more than one of our products on the platform, and that sort of implies that there's an amount of consistency between them if I go from one product to the next.
I sort of have some expectation that there's some consistent paradigms in play, and that means that although product X might say, "Well, really we wanna do things this way." And product Y says, "Well, we wanna do things the other way." There's a level of sort of consistency that needs to be brought to bear. And the challenge is that the cost of getting it wrong is high because it's the customer that pays for it. If we, you know, misjudge a trajectory of where we think, you know, something's going, then later we have to unwind that or we have to reset. Have we created a deprecation problem for the customer?
And so, that's a challenge on the one side generally, and then on the other side you have just the hyper-iterative market, so things moving very, very quickly, people's ideas materializing very, very quickly. And that tension is good, but I would say the better term is pace: the pace at which we can operate and the pace at which the market operates, and the impedance mismatch between those.
Austen: Chris, what keeps you up at night in order to provide this service?
Chris: Yeah, if you ask my engineering team they'll probably have a different answer than me, but for me, you know, my job is to kind of help figure out what we need to do next. And in a lot of ways I have those same kinds of problems. I think maybe more specifically, you know, my background before this was working on App Service and Azure Mobile Services, and as we move up the stack from IaaS and these other services, the problem becomes...we have to become opinionated in order to solve the problem, and so the question is, "What are the right opinions? Are we sure that the opinions we have are correct?
How can we get other people's opinions on whether or not our opinions are proper?" Those kinds of things. And I think that with serverless we're at the point now where the primitives of functions are getting pretty close to well-defined, and I wonder, is there direction to move even further up the stack with functions? Do we need, you know, opinionated frameworks to help us solve these more complex problems? You know, I can't build an ERP system by just throwing functions at the cloud and hoping it sticks. I might need a framework, like the Serverless Framework or something else, to solve those problems in a more opinionated way. That kind of keeps me up, in terms of, "What is the right direction for us to go? Should we just keep on offering functions the best way possible?"
That isn't necessarily going to guarantee us more successful customers. We might have to think about going out there and trying to be more opinionated, to make sure that we have more customers who are more successful with more complex scenarios. And how do we do that in a way where I don't feel like I'm stomping on other people's ideas of serverless either? Like, I want to do it in a way that is inclusive of what everyone wants to go and do.
So, it's a lot of things to balance as you're trying to approach the next set of problems. If you ask my devs, they'll probably say throwing more servers at the problem. How do we keep on doing that more and more efficiently? That's what keeps them up at night. But as far as the product moving forward, it's definitely how do we start to approach more complex scenarios for serverless?
Austen: And I have to give all the providers credit, I think you've all done a great job interacting with the community embracing open source especially to get that feedback and make sure you're coming out with the right opinions there. It seems like you are evolving in all the right ways as these large organizations, and it's really great to watch, so lots of credit to you. Anne, I think you wanted to get in a question or two, and we've got about 15 minutes left.
Anne: Well, so just in response to this. I just wanna point out that event-driven architecture has been around for a very long time. It has always been an edge case, you know. But, I mean, there are a lot of systems that were built on Tuxedo and TIBCO Rendezvous, which were both event-driven middleware systems. And, you know, but that's still just this tiny little slice of the vast majority of applications out there. And as I've pointed out before, I think the biggest challenge that we have from this perspective is getting people to learn their way around event-driven architecture, because it is a really [inaudible 00:31:05]. As Cornelia said, you know, "It's like you have to rethink thinking."
Chris: And that's exactly what I face, like, "How do I help people take these things to the next level?"
Anne: Frameworks, I think, are fundamental to making this work. Well, and also, there's a whole bunch of infrastructure which I think is still missing; the observability infrastructure especially is really primitive right now.
Austen: Right, right. I guess kind of this leads perfectly into the next question, and that is, what's the next step in the serverless movement? You know, what should we expect from the cloud providers? Do you predict any common patterns emerging here, standardization, what are your thoughts about standardization across the different cloud providers? Where are we going from here?
Jason: Yeah, so I think that there's two macro trends that I would suggest are plausible. One is that, you know, the simplicity of the serverless model on the compute side has meant...sort of a center of gravity has formed around this model of building systems and applications. And almost by definition, we don't have to think about some of the more complicated things, but those complicated things still exist at scale. And I think Cornelia, in the Pivotal presentation, really, really nailed it. And if you have a chance to watch the replay of that, I really encourage you to, because there are a whole class of challenges surrounding what an event means, what at-least-once and at-most-once guarantees are, what idempotency is. How does that all relate to how I build my application when all I wanna do is write a little bit of just a function? And so what I think...a plausible trend is that these problems will emerge, and solutions will emerge as they typically do in a fragmented way, and then there'll be some level of consolidation. And the real challenge here, I think, for the providers is how do we contribute to that in a meaningful way? You know, if you get a bunch of smart people in a room and imagine what the future world looks like, they might pop out with a great solution five years from now. Meanwhile, the market has taken off in some other direction. So, that's one thing.
I think these problems are [inaudible 00:33:31] right now. I think they will become...they'll come to the fore as people start to build more complicated systems. And the other, smaller trend I would say is that, you know, for younger people now, programming is becoming more ambient to their education, to their experience. And so, you know, the comment about customers using the platform to provide a code execution environment to their customers. I think it's reasonable to say that many services will have a programmability aspect to them, and I think serverless as a general sort of umbrella is a great fit for that sort of model. So whether you're in a word processor and you need to execute some script or, you know, whatever the tool is, having an augmentation of that that makes it programmable, and having that as a serverless thing, I think we'll see more of that emerge as well.
Austen: Chris, any thoughts?
Chris: Yeah, I mean, kind of related to my answer for the last question. As we've tried to go ahead and take approaches to solving these more complex problems in consistent ways, so that customers don't have to reinvent the wheel every time they walk up to the thing and figure out how to write to my ERP system, I think the problem we're gonna see, and we already see it today: we have, you know, Apex's Up, we've got Arc, we've got the Serverless Framework, and all these things produce different outputs that all end up, you know, touching some cloud service at the very end. I do wonder whether or not we need to think about how these various things can work together. Like, an opinionated framework which helps you get a web page standing up very quickly, like Up, is much different from, you know, being able to handle the more complex scenarios that serverless tries to approach. Can they work together? Can they have [inaudible 00:35:19].
They're trying to address different parts of the application problem, but they do it entirely independently, with their own sets of assumptions about how things are gonna work. I do wonder whether or not we need to spend time over the next, you know, however long our time frames actually are nowadays. They get quite short. We move very quickly nowadays. Thinking about how the tools and the frameworks that we have coordinate with the cloud vendors to make sure that we're not having to reinvent the wheel every time we go and approach one of these new, novel sets of problems.
We don't need a fourth way of, you know, deploying the code out there, potentially, but there's totally room for, you know, 4, 5, 6, 7, 8, 9, 10 different ways of building my application. There's lots of different ways to think about that. I wonder if that's where we kind of need to think about how we can coordinate better to, you know, get more productive value. We don't wanna solve the same problems twice.
Austen: Yeah, I think we see some effort around this right now, especially in the CNCF and their serverless working group. I think it's needed, but it'll be interesting to see how it shakes out. Outside of the vendors, where do you see this going, Anne? Where's the future of serverless and event-driven architectures?
Anne: Well, those are two separate, distinct things. Serverless, I think, is very much being driven by the economics, as I've said a couple of times. But I think that right now we have a dearth of tooling to help people build real systems that way. Right now it's all very primitive, so at what point does serverless kind of replace the traditional PaaS? I don't know. I also think that there are certain applications that probably are not well-suited to be deployed in serverless, so that means that regular PaaS may stick around in perpetuity.
But the next question is, you know, when I get Pivotal or OpenShift or IBM Cloud or something like that, should I expect that there's a serverless space inside that standard PaaS, or do I continue to look at this decomposition of the platform into, you know, 1,000 different types of services that I then have to pick and choose from and figure out? It becomes a really complex world. For example, if I'm using App Service, I've got basically four different types of services that I work with.
Austen: Just one.
Anne: But you know, it's a relatively simple thing for me to create an application in that environment. On the other hand, when I start building things based on Lambda and the 45 other serverless services that AWS supplies, and I have to kind of navigate my way through all these different... That's too difficult for the average mainstream application development organization in, you know, a big insurance company to really grok. You know? And meanwhile, you've got a bunch of laggards out there who still haven't quite figured out how to build services in the first place, right?
When are they gonna get into that space? So, you know, I think that the biggest challenge right now is that significant learning curve to move into the event-driven architecture. And I still have big debates with my colleague [inaudible 00:39:10] about whether or not event-driven architecture is going to become a mainstream approach or whether it's gonna remain a fringe one. I think it's gonna become bigger than it has been, but I don't know.
Jason: I would also say that the vendors are on the hook for some of this as well. You know, I feel like we've spent time creating primitives that are the building blocks, but what we lack now are the design patterns that make the most sense. And the reason I say it's incumbent upon the providers in some part is because as soon as you make something auto awesome, it's hard to be completely auto awesome.
There are going to be characteristics that sort of bleed through, where if you do something a certain way it's gonna behave differently in the infrastructure, but there's no way for the customer to know why, because deliberately it's a black box. And so, I'll give you a concrete example. In the serverless space, when these functions spin up out of thin air, there's this period of time referred to as cold start. Well, what affects the duration of a cold start? It's partially the infrastructure, it's partially the code that you write, it's partially what happens during that cold start. All the customer knows is, "Oh, this thing is slow," or, "It's taking a long time." Have they architected it wrong?
Have they written their code wrong? It's really not clear. And so, even before you get to "How do I build an ERP system? How do I do service discovery?" there are basics where I think there's certainly some assistance providers can offer to help people understand, to help people grok, not necessarily what it is, but where do I put it where it makes sense?
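As a rough illustration of why cold-start duration is partly in the customer's own code (a hypothetical Python sketch; the client object and variable names are invented): work done at module scope runs once, when a container first starts, and warm invocations then reuse it, whereas the same initialization placed inside the handler would run on every single request.

```python
import time

# Module scope executes once, during the cold start. A slow import or
# client construction here lengthens the cold start, but every warm
# invocation afterwards gets the result for free.
_start = time.time()
EXPENSIVE_CLIENT = {"connected": True}      # stand-in for a slow-to-build client
COLD_START_SECONDS = time.time() - _start   # paid once per container

def handler(request):
    # Warm path: reuse the client built during cold start rather than
    # reconstructing it on every request.
    return {"ok": EXPENSIVE_CLIENT["connected"], "request": request}

print(handler("r1"))
print(handler("r2"))  # a warm call; no initialization cost is paid again
```

This is exactly the kind of behavior that "bleeds through" the black box: the platform owns container scheduling, but what you put at module scope versus inside the handler is yours.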
Chris: I think I'm glass-half-full there. Already, you know, we can do a lot...at least talking about my platform, we have lots of data about what happened when your function ran. The nice thing about functions versus, like, App Service is the surface area with which you're programming against the platform. I know a lot about your function, I know what's gonna trigger it, I know how long it ran for, I can even see kind of the state of the memory at the time it was running.
We can do a lot of things to plug, you know, APMs deeply into the platform to get insights, in a way that on App Service I have to beg you to please include the APM on your service, and if you don't, I try my best to look at your logs and see if I can help you if there's a debugging event. I've actually enjoyed how much easier it is to debug functions problems, when it comes to the support loops that I have to do, compared to App Service problems in a lot of cases, because I have a lot more insight into what the customer may or may not have done correctly. So, I actually see a pretty bright future here, where not only will it be easier, as we make these tools and frameworks better, to tell people that you're doing something wrong, it'll be easier to guide them in a direction. So it's not just saying, "Oh, you had an OOM issue." I can tell you why you had an OOM issue, to an extent.
Anne: Yeah. You know, personally, I get really excited when I look at systems being built with event sourcing and CQRS, because I think I have so much more control over how updates are being executed and where that information has to get directed to. And you know, I look at this and I think, "This is like the best way to do master data management," because I have the ability to capture every single event, I have the ability to process every single event, and I have the ability to push that event to wherever it needs to go. But I also recognize that because it's a very complex environment and there are so many interdependencies within the system, lots and lots of people are going to build systems that are just fundamentally awful.
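The event-sourcing idea Anne describes can be sketched in a few lines. This is a hypothetical Python illustration with an invented account domain: state is never stored directly, it's derived by replaying an append-only log of events, and that same log is what lets you push every event wherever it needs to go.

```python
# Hypothetical event-sourcing sketch. The "account" domain is invented.
events = []  # the append-only log; the single source of truth

def record(event_type, amount):
    # Writes only ever append; nothing is updated in place.
    events.append({"type": event_type, "amount": amount})

def balance():
    # Current state is a fold (replay) over the entire event history.
    total = 0
    for e in events:
        total += e["amount"] if e["type"] == "deposit" else -e["amount"]
    return total

record("deposit", 100)
record("withdraw", 30)
assert balance() == 70
```

In a CQRS arrangement, `record` would sit on the write side and `balance` would be one of possibly many read models built from the same log, which is also where the interdependencies Anne warns about start to accumulate.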
Austen: Yeah. I'm sure that these gentlemen have seen that. We've certainly seen it from the vantage point of the Serverless Framework. Yeah, guidance is needed here, certainly. And actually, on that topic, if there's a team or organization that wants to get started doing serverless stuff, do you have any best practices or logical starting points for them that you'd recommend? How is this architecture similar to past architectures, and also, what do developers have to think about that's different with a serverless architecture?
Jason: I think there are a couple of things that come to mind. One is the stateless nature of the compute that you're running. And that's an easy word to say, but people still have a tendency to rely on state in some form, and things like...even things as trivial as, maybe, a database connection pool. You know, in a traditional model you would say, "Well, I have a connection pool because it's a performance optimization that I'm going to do."
But that pool is stateful in the container, and when that container disappears that pool disappears, and so what are you really pooling at that point? So, thinking about things in those terms, and then on the topic of data: do you just have a regular database, and then how is that going to scale? I think there's sort of a...not a hurdle so much as a step that needs to be taken to understand, again, coming back to this being abstracted in some way, how does this actually work?
What is the execution environment for this? What are the things that I need to understand? Under what conditions does it scale? Where does it scale? So, there are those sorts of questions that we get asked quite a lot. And then the other side of it, which is a much more gnarly set of problems, is about security and authorization. You know, comments have been made today about different types of events and what data is inside the event. You can, if you're not very careful, run into situations where the person viewing the payload of the event was not authorized to view the content that emitted the event. And who's making sure that that's all as it should be? Again, one of these sort of nascent challenges that will hit when things start to grow in complexity.
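One way to address the authorization problem Jason raises is to filter the event payload against the subscriber's permissions before delivery, so a consumer never sees fields it wasn't allowed to read at the source. A hypothetical Python sketch follows; the role names, field names, and `redact_for` function are all invented for illustration:

```python
# Hypothetical payload redaction by subscriber role. Roles, fields, and
# the mapping below are invented; a real system would look these up from
# a policy service rather than a hard-coded dict.
ALLOWED_FIELDS = {
    "public": {"order_id", "status"},
    "billing": {"order_id", "status", "card_last4"},
}

def redact_for(subscriber_roles, event):
    # Union of every field the subscriber's roles may see.
    visible = set()
    for role in subscriber_roles:
        visible |= ALLOWED_FIELDS.get(role, set())
    return {k: v for k, v in event.items() if k in visible}

event = {"order_id": "o-1", "status": "paid", "card_last4": "4242"}
assert redact_for(["public"], event) == {"order_id": "o-1", "status": "paid"}
assert "card_last4" in redact_for(["billing"], event)
```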
Austen: Chris, any thoughts for developers, teams, or organizations getting started with this architecture? What's similar and what's different about it?
Chris: Yeah, I mean again, it really depends on your provider. We've done a lot of stuff with Functions that can be a bit different from other providers, so we haven't even had to spend as much time in, like, the deployment madness world, because we do the multi-function deployment thing. So, that's generally the first advice in the rest of the world, but for us, we get to skip that particular nightmare. The thing to think about: you have to kind of think about the set of problems that you'd have with microservices, right?
What functions do I associate with each other? How do I actually approach these things as a team? If I've got, you know, one section of things that have to be modified, I have to manage the service versioning, you know, stuff that was talked about in the last talk. A lot of the problems that you have with functions, after you get over the fact that you don't have to worry about managing them, your architecture problems are pretty similar to, you know, traditional microservices problems.
So, you can generally start there. Go read some blogs, Martin Fowler's blog, and you know, you'll be in kind of a good state in terms of where your headspace needs to be. And a lot of times it's just, you know, really simple. Like, for the most part, just kind of take each problem step by step by step. There's a lot of good documentation out there. You know, a year ago if you'd asked me this question, I'd be like, "Oh, there's a ton of docs and samples and stuff like that that needs to happen."
But it's been a fast year, right? I mean, I'm starting to see a lot more good content out there for people to get started with this stuff. You know, there are a lot more answers out there than there used to be, and I think that we'll continue to kinda see that progress. You know, if someone is watching this video, you know, three months in the future, six months in the future, even by then, there'll be a new set of, you know, tools, frameworks, docs out there to go and address these problems. You know, I think that we're gonna keep on seeing great progress there. And, you know, just like any other sort of technology stuff that we've been doing, progress happens pretty quickly and it's always targeted towards solving these problems, so, you know, kind of have that faith that it's gonna move in that direction.
Austen: Sure. And you providers have been moving incredibly fast. It's been amazing to watch the rate of innovation. There was a time when I thought that startups had the advantage of moving faster, but these big organizations are moving at incredible speed, and I think they've adopted a lot of the startup methodologies to be able to do that: smaller teams, microservices. So, kudos to you in managing these teams and guiding them to success. It's been exciting to watch this whole space evolve. And again, three months from now, we might be having a totally different conversation, but it'll be a very interesting one, I'm sure. Anne, from your perspective, any advice for organizations and teams just getting started with serverless architectures?
Anne: Yeah, focus on data. You know, this is not just about your code; it's also about the data that's owned by the code and how you partition that data. You know, one of the presenters, I forget which one, was talking about doing aggregations and letting anybody do aggregations of your data. But are you sure that's actually gonna wind up being valid data? Is it quality data if everybody can just go create their own aggregations? So, yeah, I think that data partitioning issue is gonna bite a lot of people, you know, if you don't actually have a decent large-scale perspective on how this data is coming together. And then, of course, there are also the event schemas, which now become the tight binding between your components. So, putting in some type of mediation that enables a little bit of flexibility between consumers and providers of those events, that's also another really critical piece.
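The schema-mediation point Anne makes can be illustrated with a small upcasting shim. This is a hypothetical Python sketch; the v1/v2 event shapes and the `to_v2` function are invented. The idea is that a mediation layer upgrades old-schema events on the way in, so consumers written against v2 keep working while producers still emit v1:

```python
# Hypothetical mediation between event producers and consumers: upgrade
# v1 events to v2 so the schema change doesn't break every consumer.
# Both schema shapes are invented for illustration.
def to_v2(event):
    if event.get("version") == 2:
        return event  # already current; pass through untouched
    # Assumed v1 shape: a single "name" field that v2 splits in two.
    first, _, last = event["name"].partition(" ")
    return {"version": 2, "first_name": first, "last_name": last}

assert to_v2({"version": 1, "name": "Ada Lovelace"}) == {
    "version": 2, "first_name": "Ada", "last_name": "Lovelace"}
```

Without a layer like this, the event schema is exactly the tight binding Anne describes: every producer-side change ripples into every consumer simultaneously.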
Jason: One other thing I'll say, about what we said before about it being simple: I think the best way is just to actually roll your sleeves up and deploy a function. A lot of these things will become apparent really quickly. It's one of the reasons why there's such a wave of adoption of this: it's so easy to consume. And I think if there's uncertainty from customers around "I'm not sure what serverless is," then, you know, the simplest thing is just to put a person in front of the console and have them deploy "Hello, World!" in, you know, a matter of minutes, and a light bulb will go off.
Austen: That magic moment, yeah. Anne, did you have any questions you wanna ask? We're just coming up on the end of the session here, so...
Anne: So, let's see. I would say that my biggest question is how you're going to bring together the serverless capabilities with the rest of your cloud services?
Chris: Yeah, I mean, it's a really good question. Now that we've added the Event Grid service, a lot of, you know, our conversations have been about how we actually bring these things together in the same way. We've had to do the same thing with Logic Apps, where we kinda have this nice little visual UI for combining Logic Apps plus Functions, but then you start to jump down into, like, Visual Studio or something and the tooling is a little bit less connected.
I think the way that we approach it is the same way that we approach a lot of problems: we need tools and frameworks. We have the primitives now, and the services. I think, in order to build more complex systems, you want the tools and the frameworks to help address that complexity. As the primitives themselves get more complex, the tooling needs to become better to solve the complexities of combining those various primitives together.
Jason: Yeah, I think it's all about events in that space. You know, I agree with you in some sense that the event-driven model, for a lot of organizations, is a big scary monster. Within Google, it's all events, and the way that we tie, you know, the components of the platform to other resources is all through an event model. And I mentioned, or I think, Austen, you mentioned, the rate at which we're doing things. I said before that we're focused on building a platform. Well, one of the advantages of that is that there is internal consistency across services, and so it's not a huge challenge, implementation-wise. Really, it's just all about events.
Austen: I agree. Again, it's why we're here. We just hit the end of the session, so I'd like to thank Jason and Chris. Thank you so much for volunteering and agreeing to join this panel. Anne, you are a saint for jumping in at the last minute and giving a wider industry perspective. I really appreciate that. And I think that's the end of our conference. I'm looking forward to seeing all of you at the after party. And thank you so much for attending, it's been a lot of fun.