What's Covered in My 2019 Serverless Azure Workshops

Years ago, when I started playing around with serverless on Azure, all we had were Logic Apps, which can best be described as if-this-then-that "on steroids" for the enterprise.

After some time, Azure Functions arrived and started this "huge" wave of serverless on Azure. At the time, we didn't know what to do with them, how to use them properly, or where to fit them in existing systems. They were cool, but they represented something that was (and still is) an anti-pattern for microservices: nanoservices. Although everything has evolved since then, Functions aren't, and won't be, a silver bullet, so it's always up to you - the developer, the architect, the enthusiast - to make an informed decision about what suits your needs.

At that time, organizing a half-day or full-day workshop about Azure Functions didn't make much sense... at least to me.

Fast forward to today, and we have a bunch of new "serverless" offerings on Azure. To mention some:

  • Functions V2 can run on your own machines and in containers
  • Durable Functions allow us to develop stateful, long-running, and orchestrated workflows
  • Azure Container Instances leverage serverless infrastructure to easily run your container images on demand, giving you the resources you need exactly when you need them
  • Azure Kubernetes Service is a managed Kubernetes service offered free of charge (well... not really, since you need at least one VM, and AKS will "eat" a portion of its resources)
  • Azure Event Grid came out, and it allows us to transform our solutions into event-driven ones and to easily integrate a wide array of services, not only on Azure but also on other clouds and on your on-premises machines. The best part - it's serverless, and you get charged by the number of events
  • Azure API Management is something you want to put in front of your APIs to organize them, create subscriptions for your clients, throttle the throughput, expose different endpoints, add some security, etc. The problem was that it cost $150 per month to get started in production, but since late 2018 it's been available as a serverless service - you pay for what you use - and thus it became a "must have".

Now that we've covered all those serverless offerings, imagine having a workshop where you go through each one of them in some order and just deep-dive into the details, without mentioning how to incorporate them into a system that makes sense.

Unfortunately, I don't have to imagine, because I've held a workshop like that, and although the participants told me that they'd learned a lot and that it was cool, it was boring to me, and not at the level I wanted it to be.

So... I went back to the drawing board to see how I could improve it, and figured it would be much better if we had an imaginary problem that could be solved by building a serverless system leveraging all of the above. After a while, I designed one, and although it's a bit over-engineered for the simple imaginary problem, the concepts are valid and applicable to real-world use cases. Plus, it makes much more sense to discuss why containers are a better solution than Azure Functions on the Consumption plan for some things, and vice versa.

To get a better understanding of the workshop, you can browse through the slides, which cover the same topic in the form of a talk.

Btw, if you took a look at the list of my public talks, you've seen that there are two workshops. One is named "Going serverless on Azure", and the other "Building mostly serverless distributed cloud systems step by step". Don't worry... they are the same; the only difference is when I submitted the talk - before or after I refactored the description. :) Ah well... you learn till you die.

Enough beating around the bush... let's see what we'll cover!

The workshop revolves around an imaginary problem – a news agency receives many news reports that someone needs to categorize by hand, based on their content.

What we’ll be doing is designing and building a serverless system around a third-party solution that categorizes videos and images based on their content, to automate the task for the news agency.

As we go over all the services that we’ll be using, we’ll also dive deeper into their pros and cons.

We’ll start with containers and a regular ASP.NET Core API. Naturally, we’ll run the instances on Azure Kubernetes Service, try out the serverless scaling options, test the scaling and the latency, etc.
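
For illustration, here’s a minimal sketch of the kind of API we containerize – the route, controller, and model names are hypothetical, not the actual workshop code:

```csharp
using Microsoft.AspNetCore.Mvc;

// A minimal sketch of the API we put in a container. Names are hypothetical.
[ApiController]
[Route("api/[controller]")]
public class ReportsController : ControllerBase
{
    [HttpPost]
    public IActionResult Submit([FromBody] NewsReport report)
    {
        // In the workshop, this is where the report would be handed off
        // for categorization (e.g. pushed to a queue).
        return Accepted();
    }
}

public class NewsReport
{
    public string Title { get; set; }
    public string MediaUrl { get; set; }
}
```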

The next step is to start building Functions, where we’ll relatively quickly see how much manual labor it would take to save a Function’s state somewhere and send it to sleep, so that we don’t get charged while waiting for something else to finish.

That’s where Durable Functions come in to save the day. Naturally, it wouldn’t be fun to skip covering something opposite to serverless – hosting Functions on-premises – and the benefits of it, as well as options for hybrid deployments.
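
To make the “state saved for free” point concrete, here’s a minimal sketch of a Durable Functions orchestrator using the v1.x-era API; the function and activity names are hypothetical:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class CategorizationOrchestration
{
    // A minimal Durable Functions orchestrator sketch (v1.x-era API).
    [FunctionName("CategorizeReportOrchestrator")]
    public static async Task<string> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var reportUrl = context.GetInput<string>();

        // While we await, the orchestrator is unloaded and its state is
        // checkpointed for us - no compute charges while waiting.
        var category = await context.CallActivityAsync<string>(
            "CategorizeMedia", reportUrl);

        return category;
    }
}
```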

The second part involves connecting all those containers and Functions into a system that actually does something. We’ll first cover Service Bus and start using both Topics and Queues.
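
As a taste of what that looks like, here’s a minimal sketch of sending a message to a queue with the Microsoft.Azure.ServiceBus client (the one current in 2019); the queue name and connection string are placeholders:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class ReportPublisher
{
    // Sends a serialized report to a Service Bus queue. Placeholder names.
    public static async Task PublishAsync(string connectionString, string reportJson)
    {
        var queueClient = new QueueClient(connectionString, "incoming-reports");

        var message = new Message(Encoding.UTF8.GetBytes(reportJson))
        {
            ContentType = "application/json"
        };

        await queueClient.SendAsync(message);
        await queueClient.CloseAsync();
    }
}
```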

However, it wouldn’t be a proper serverless system without Event Grid, which we won’t use just for the "usual" purposes – we’ll also cover some "cool hacks" that can make your life easier.
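
To make the "usual" part concrete, here’s a minimal sketch of an Event Grid-triggered Function, modeled on the standard sample, that could react to, say, a new video landing in Blob storage; the function name and scenario are hypothetical:

```csharp
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class BlobUploadedHandler
{
    // Fires whenever an Event Grid subscription routes an event here.
    [FunctionName("OnMediaUploaded")]
    public static void Run(
        [EventGridTrigger] EventGridEvent eventGridEvent,
        ILogger log)
    {
        log.LogInformation(
            "Event {type} for {subject}",
            eventGridEvent.EventType,
            eventGridEvent.Subject);
    }
}
```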

The third part will start with API Management – a cool service that we want and need in front of our APIs. We’ll cover exposing API endpoints, creating subscriptions for different APIs, throttling the number of parallel connections based on the subscription type, mocking APIs, etc.
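
From the client’s point of view, APIM identifies the caller via a subscription key header, which is also what the throttling rules key on. A minimal sketch, with a placeholder URL and key:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class CategorizationApiClient
{
    // Calls an API exposed through API Management. The
    // "Ocp-Apim-Subscription-Key" header tells APIM which subscription
    // is calling; the URL and key here are placeholders.
    public static async Task<string> GetCategoriesAsync(string subscriptionKey)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add(
                "Ocp-Apim-Subscription-Key", subscriptionKey);

            return await client.GetStringAsync(
                "https://contoso.azure-api.net/reports/categories");
        }
    }
}
```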

At that point, we’ll have a pretty much functional system, but without any monitoring, so that will be our next step – adding monitoring to every segment of our system, so that we can not only see what’s wrong in it but also track how much the whole thing costs to run.
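
The post doesn’t pin down the tooling here, but Application Insights is the usual choice on Azure; a minimal sketch of emitting a custom event with properties and metrics:

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public static class Monitoring
{
    // 2019-era pattern: a shared TelemetryClient. Event and metric names
    // are hypothetical.
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    public static void ReportCategorized(string category, double durationMs)
    {
        Telemetry.TrackEvent(
            "ReportCategorized",
            new Dictionary<string, string> { { "category", category } },
            new Dictionary<string, double> { { "durationMs", durationMs } });
    }
}
```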

Once we’ve set up the whole thing, the only natural next step is to make the system more resilient, and what better way to do that than to deploy it to multiple data centers/regions?

The first step in this direction is to learn how to provision all those services programmatically instead of spending the whole day clicking through the Azure Portal. The next step is to deploy our code base to those newly provisioned services from Azure DevOps.
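
There are several ways to do this (ARM templates and the Azure CLI among them); as one illustration, here’s a minimal sketch using the Fluent management SDK, with a placeholder auth file and resource names:

```csharp
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

public static class Provisioning
{
    // Creates a resource group programmatically. "azureauth.json" is a
    // placeholder service principal auth file.
    public static void CreateResourceGroup()
    {
        var azure = Azure
            .Authenticate(SdkContext.AzureCredentialsFactory.FromFile("azureauth.json"))
            .WithDefaultSubscription();

        azure.ResourceGroups
            .Define("news-agency-westeurope")
            .WithRegion(Region.EuropeWest)
            .Create();
    }
}
```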

To make everything flawless, we’ll have to cover sharing and deploying secrets and connection strings, both in a more conventional way and by using Key Vault.
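
For the Key Vault route, the 2019-era pattern is a managed identity plus the Microsoft.Azure.Services.AppAuthentication package; a minimal sketch with placeholder vault and secret names:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public static class Secrets
{
    // Reads a secret from Key Vault using the app's managed identity,
    // so no credentials live in config. Vault URL and secret name are
    // placeholders.
    public static async Task<string> GetConnectionStringAsync()
    {
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        var secret = await keyVaultClient.GetSecretAsync(
            "https://news-agency-vault.vault.azure.net/secrets/ServiceBusConnection");

        return secret.Value;
    }
}
```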

Now that we’ve covered all of that, we’ll go through each deployed service and figure out how we can make it redundant, how to tackle failed requests in another region, and how to route traffic to various regions.

And now you’re ready to start building your own systems.

Preparing for the workshop

You can get all the instructions needed to prepare for this workshop in this blog post. If you need additional help, don't hesitate to contact me.

P.S. What's described above is covered in the full-day workshop. If you attend a half-day workshop, you won't get hands-on experience with provisioning and deployment, but you'll hear about it. There's also a 2-hour workshop delivered without any hands-on exercises, where we'll still cover the whole thing.