WebHooks and ASP.NET

By now you've probably heard that ASP.NET now supports WebHooks, and not only does it support them, it supports them quite well.

Disclaimer: if you've read my posts before, you probably know by now that I'm not the kind of guy to trumpet things which have already been decently promoted by team members, company blogs and other community leaders. More specifically, the announcement of WebHooks support was already made by Scott Guthrie, Scott Hanselman and others. If you missed any of these, please go ahead and check them out first.

The announcement regarding ASP.NET WebHooks support has been well covered for the last month or so. So rather than go through the announcement again, I want to detail the process of sending WebHooks. Before you read on, please make sure you read Henrik F Nielsen's blog post on 'Sending WebHooks with ASP.NET' – the article is very thorough and well written, but leaves a few things unexplained if you're new to WebHooks.

Basics

If you’re familiar with WebHooks, skip to Receiving WebHooks. Otherwise, happy reading.

The concept of WebHooks isn't new. It's a standardization wanna-be for autonomous requests going back and forth between autonomous web servers, made by calling specific REST endpoints. By standardization wanna-be, I mean that the request sent out to the target endpoint carries a specific body format – a JSON object shaped by the conventions of independent groups working to define guidelines which will eventually evolve into standards. The sender specifies a few things, such as the reason it sent the request in the first place (this is called an event). In other words, WebHooks is nothing more than a convention on what the request body of an automated HTTP request should look like in order for it to get along with other external web services. This link from pbworks.com explains it in more detail.
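Just for illustration, here is roughly what such a request body could look like; the field names below are hypothetical, since the exact shape depends on the sending service and its conventions:

{
    "event": "invoice_generated",
    "attempt": 1,
    "data": {
        "invoiceId": "INV-0042",
        "amount": 99.90
    }
}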

Taking a closer look at the various services which claim WebHooks support, such as SendGrid, PayPal, MailChimp, GitHub, Salesforce etc., you come to understand that whenever you, as a user, configure a particular service's WebHook implementation, you reach a part where you put in a URL and possibly select a list of events which should cause that URL to be hit by a new request. If you go over more services' WebHook configuration pages, you'll realize that this configuration step is common to all of them and thus becomes a pattern.

Receiving WebHooks

Until recently, the difficult part was developing your service in such a manner that it understands WebHooks. That was simply because developing the next GitHub or PayPal overnight, so that users could eventually rely on it to get WebHook-generated requests for their own web services, was... well, let's face it – unrealistic. Therefore, most articles on-line cover receiving WebHooks and never forget to praise the ASP.NET team in Redmond for the terrific work they did – they totally deserve it.

Sending WebHooks

However, what if you DO develop the next PayPal? Or maybe simply a number of independent services you want to have work together and sporadically communicate with each other, in an event-driven manner?

Well, on one hand, considering that you want WebHooks to be sent out, remember that a WebHook is in the end a fancy name for an HTTP request which carries a special body format. Therefore, it's a no-brainer that you could instantiate an object of type HttpClient or WebClient and have the request issued accordingly. But still, remember that if your services are going to be used by external customers, they'll eventually want these requests to go to their own services as well. In other words, your services should be able to issue requests in an event-driven manner to a multitude of HTTP endpoints, based on a series of configurations: which actions trigger which events, and which URLs get called as a result.
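Just as a minimal sketch of that do-it-yourself approach – the target URL and the payload are made up for illustration, and I'm using Newtonsoft.Json for the serialization:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

static class NaiveWebHookSender
{
    // Sends a single, hand-rolled WebHook-style request to one endpoint.
    // In reality both the event payload and the target URL would come from
    // your per-customer configuration.
    public static async Task SendAsync(string targetUrl)
    {
        using (var client = new HttpClient())
        {
            var payload = new { @event = "invoice_generated", data = new { invoiceId = 42 } };
            var content = new StringContent(
                JsonConvert.SerializeObject(payload), Encoding.UTF8, "application/json");

            var response = await client.PostAsync(targetUrl, content);
            response.EnsureSuccessStatusCode();
        }
    }
}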

More specifically, consider that you develop the next most popular on-line invoicing SaaS API. Since you're following the best practices for web services, you'll most likely not generate the invoice and e-mail it right there in the web request handler, would you? Instead, you'd probably architect some sort of n-tier application where your front-facing web application takes any invoice-generation request, responds with a 'promise' that the invoice will be generated and pushes the request to a queue of some type, so that a worker role which actually generates the invoices can do its job in a nicely load-balanced environment.

The question now is: how could the external clients get notified that a new invoice has been generated, and possibly even sent to the e-mail address specified in the request? Well, WebHooks solve this problem quite nicely (a code sketch follows the list):

  1. the worker role would first generate the invoice
  2. once it is generated, considering this is an event of its own type (e.g. invoice_generated), it would raise this event and call the URL the customer has configured, but only if the customer chose to receive requests for this event type
  3. next, the worker role would try to send the invoice as an attachment to the e-mail address specified by the client when it originally created the request
  4. if the e-mail was sent successfully, the client could again be pinged at the URL the customer configured, with another type of event (e.g. email_sent), considering that the customer chose to receive requests for this event type
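Here's a rough sketch of how those four steps could map onto the IWebHookManager type introduced further down in this post (the _whManager field comes from the sample below; the invoice and e-mail parts are stand-ins):

private static async Task ProcessInvoiceRequestAsync(string userId, string email)
{
    // 1. generate the invoice (stand-in for your real generation logic)
    var invoiceId = Guid.NewGuid().ToString();

    // 2. raise 'invoice_generated'; the manager only notifies subscribers
    //    whose filters include this event
    await _whManager.NotifyAsync(userId,
        new[] { new NotificationDictionary("invoice_generated", new { invoiceId }) });

    // 3. e-mail the invoice to the address from the original request
    //    (stand-in; assume it succeeded)
    var emailSent = true;

    // 4. raise 'email_sent' on success, again subject to the subscriber's filters
    if (emailSent)
    {
        await _whManager.NotifyAsync(userId,
            new[] { new NotificationDictionary("email_sent", new { invoiceId }) });
    }
}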

It's probably obvious by now that there's a tremendous amount of work left for the developer in order to send out a WebHook request, if that request is built by hand with an HttpClient object – or anything similar.

Don’t get me wrong – there’s nothing wrong with this approach. But there’s a better way of doing all this if-registered-get-URL kind of logic when it comes to WebHooks and .NET code.

Put The NuGet Packages To Work

At the time of this writing, there are exactly four NuGet packages carrying the Microsoft.AspNet.WebHooks.Custom name prefix, and the reason for this large number is explained throughout the remainder of this post.

  • Microsoft.AspNet.WebHooks.Custom is the core package you want to install when you're creating your own custom WebHook.
  • Microsoft.AspNet.WebHooks.Custom.AzureStorage works like a charm when you want to keep your WebHook registrations in persistent storage – and yes, by now I've spoiled the surprise: the NuGet packages not only send WebHooks, they also handle the entire registration-and-event-filtering story for you, which is not exactly obvious in my humble opinion.
  • Microsoft.AspNet.WebHooks.Custom.Mvc aids in the actual registration process, should your application run as an ASP.NET MVC application.
  • Microsoft.AspNet.WebHooks.Custom.Api adds an optional set of ASP.NET Web API controllers useful for managing WebHooks, in the form of a REST-like API.

I'll keep things simple in this post, so rather than focus on the magic which comes along with the .Mvc, .AzureStorage and .Api packages, I'll simply create a console application that acts both as its own registrar and as a sender of WebHooks. In order to intercept the WebHooks and check that the implementation actually works, I'll create a plain simple Web API application and add the required NuGet packages to it so that it can handle incoming WebHook requests.

The entire source code is available on GitHub here.

As you'll see, the majority of the code currently runs in Program.cs. The work done in the Main method is simply about getting things ready; more specifically, I first instantiate the objects called _whStore and _whManager – the latter requires the _whStore as a parameter. These objects are responsible for the following:

  • _whStore is the store which keeps each subscriber's registration. A registration specifies:
    1. What events the subscriber is interested in, in the form of Filters. These instruct the manager object, which does the actual sending, to only send WebHook requests when those specific events occur
    2. Its secret, which should ideally be unique – this secret is used to calculate a SHA256-based hash of the request body. The subscriber should afterwards only accept WebHooks which carry a properly calculated hash over their request bodies – otherwise, these might be forged WebHooks (see the verification sketch after the code sample below)
    3. A list of extra HTTP header attributes
    4. A list of properties which are sent out with each and every WebHook request with the exact same values, no matter what the event is
  • _whManager is the do-er, the object which actually sends the WebHook requests. Since it has to know to whom to send the requests in the first place, it requires the IWebHookStore-type object as a parameter in its constructor. In addition, it requires an ILogger-type object as a second constructor parameter, which is used for diagnostics logging
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;
using Microsoft.AspNet.WebHooks.Diagnostics;

class Program
{
    private static IWebHookManager _whManager;
    private static IWebHookStore _whStore;

    static void Main(string[] args)
    {
        // The store keeps the registrations; the manager needs it to know whom to notify.
        _whStore = new MemoryWebHookStore();
        _whManager = new WebHookManager(_whStore, new TraceLogger());

        SubscribeNewUser();
        SendWebhookAsync().Wait();
        Console.ReadLine();
    }

    private static void SubscribeNewUser()
    {
        var webhook = new WebHook();
        webhook.Filters.Add("event1");                 // only fire for 'event1'
        webhook.Properties.Add("StaticParamA", 10);    // static values sent with every request
        webhook.Properties.Add("StaticParamB", 20);
        webhook.Secret = "PSBuMnbzZqVir4OnN4DE10IqB7HXfQ9l"; // used to sign the request body
        webhook.WebHookUri = "http://www.alexmang.com";

        // Block on the async insert so the registration exists before we notify.
        _whStore.InsertWebHookAsync("user1", webhook).Wait();
    }

    private static async Task SendWebhookAsync()
    {
        // 'event1' matches the filter above, so this subscriber gets notified;
        // the anonymous object is serialized into the request body.
        var notifications = new List<NotificationDictionary>
        {
            new NotificationDictionary("event1", new { DynamicParamA = 100, DynamicParamB = 200 })
        };
        var result = await _whManager.NotifyAsync("user1", notifications);
    }
}
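As a side note on the secret from item 2 above: the hash is a keyed (HMAC) SHA256 computed over the raw request body, with the shared secret as the key. The official receiver packages verify it for you, so the sketch below only illustrates the idea; how the transmitted hash reaches your code (i.e. which header carries it) depends on your receiver wiring and is an assumption here:

using System.Security.Cryptography;
using System.Text;

static class WebHookSignature
{
    // Recomputes the HMAC-SHA256 of the raw body with the shared secret and
    // compares it to the hash the sender transmitted alongside the request.
    public static bool IsValid(string secret, string rawBody, byte[] receivedHash)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
        {
            var computed = hmac.ComputeHash(Encoding.UTF8.GetBytes(rawBody));
            if (computed.Length != receivedHash.Length) return false;

            // Constant-time comparison to avoid leaking information via timing.
            var diff = 0;
            for (var i = 0; i < computed.Length; i++) diff |= computed[i] ^ receivedHash[i];
            return diff == 0;
        }
    }
}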

 

The good thing about a simple ‘Hello, World’ sample application

The good thing about this sample is that WebHooks can, in my opinion, be self-taught once the proper explanations are added. More specifically, the reason the IWebHookStore interface exists is that you'll most likely NOT use a MemoryWebHookStore in production workloads, simply because stopping the application and running it again completely deletes all subscriber registrations – ouch.

Therefore, implementing the IWebHookStore interface yourself will help you a lot: you could implement your own database design for storing the subscriber registrations, along with all the properties and extra HTTP headers they require, keyed to the events (a.k.a. actions, a.k.a. filters) they chose in some registration form. However, please be aware that the .AzureStorage NuGet package I mentioned earlier eases development even further by auto-"magically" doing the persistent-storage part of the registration on your behalf – uber-cool! I'll detail the process of using Azure Storage as your backend for WebHook subscriptions in a future post.

Additionally, there's an interface for the manager as well, which currently does only two things – verify the WebHooks registered and create a new notification. There are a few things which are important to keep in mind here:

  1. Notification is done by passing the user name as a parameter. If it isn't obvious why you'd do that, since you've already specified the users' usernames upon registration, remember the flow: users register, an event occurs in the system on a per-user-action basis, and that particular user gets notified. The second parameter is an enumerable of notification dictionaries – a list of objects specifying the event which just occurred, which determines whether a WebHook request fires in the first place. Since a notification can also send extra data to the subscriber in the request body, this parameter cannot be a simple string; each dictionary therefore takes two parameters when instantiated: the event name (as a string) and an object which eventually gets serialized as a JSON object.
  2. I'd argue that the default implementation of IWebHookManager, namely WebHookManager, will meet most of your needs, and there's probably little to no reason to implement your own. If you're not convinced, take a look at its source code (yes, Microsoft DOES LOVE OPEN-SOURCE!) and check out the tremendous work done so far on the WebHookManager class. I do have to admit, though, that in terms of coding style I'm very unhappy with the fact that if the manager fails to send the WebHook request, no exception or error code is ever thrown from the .NotifyAsync() method. This decision might have been taken because the method will most likely be called from a worker-role-type application which shouldn't ever freeze due to an unhandled exception; if that is the case, too bad that you, as a developer, cannot take that decision on your own. On the other hand, remember the ILogger object (of type TraceLogger) used when the manager was originally instantiated – many methods eventually use this logger to emit diagnostics, and these help a lot when you're trying to figure out whether any WebHook requests were sent out.

And since I've mentioned ILogger, let me remind you that if you add a trace listener to your application and use the TraceLogger type already available in the NuGet package, the diagnostics data will flow to the trace listener you've added. Should that listener be of type TextWriterTraceListener, the traces the WebHookManager creates will be written to disk.


 <system.diagnostics>
   <trace autoflush="true" indentsize="4">
     <listeners>
      <add name="TextListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="trace.log" />
      <remove name="Default" />
     </listeners>
   </trace>
 </system.diagnostics>

Options, Options, Options…

I've mentioned earlier the usefulness of the interfaces the NuGet packages bring along, due to their flexibility of covering any scenario you'd need. There's however something even better than that, and that's Dependency Injection support. More specifically, the NuGet packages also carry a so-called CustomService static class which you can use to create instances of your WebHookManager, WebHookStore and so on and so forth.
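I haven't covered that class here, so just as a rough sketch of what resolving the defaults could look like – the class and method names below are my assumption based on the description above (I believe the class ships as CustomServices), so double-check them against the package:

// Hypothetical sketch only: resolving the default implementations through the
// static class mentioned above; member names are assumptions, not verified.
ILogger logger = CustomServices.GetLogger();
IWebHookStore store = CustomServices.GetStore();
IWebHookManager manager = CustomServices.GetManager();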

Conclusion

WebHooks are here to connect the disconnected nature of the web, and they're here to stay. They're certainly not a new technology, not even a new concept – but they could still revolutionize the way we trigger our REST-based endpoints to execute task-based operations. If you're new to WebHooks, get started today. If you're a hard-core ASP.NET MVC developer, integrate WebHooks in your projects today. And if you're an Azure Web App developer, why not develop WebJobs triggered by WebHooks? Oops, I spoiled my next post's surprise 🙂

 

Happy web-hooking-up the World!

-Alex

Hi guys,

It has been a while since my last post, and that's because I had quite a busy summer; more specifically, besides my day-to-day job, a few trips and conference preparations for the 2015/2016 season, I also got the chance to work with O'Reilly on one of their video trainings. In other words, I hereby kindly announce my first project as a trainer for O'Reilly Media.


From their website:

O’Reilly Media started out as a technical writing and consulting company named O’Reilly & Associates. In 1984, we started retaining rights to manuals we created for Unix vendors. Our books were grounded in our hands-on experience with the technology, and we wrote them in a straightforward, conversational voice. We weren’t afraid to say in print that a vendor’s technology didn’t work as advertised. While our publishing program has expanded to include everything from digital photography to desktop applications to software engineering, those early principles still guide our editorial approach.


Gigaom Research, one of the leading companies providing in-depth analysis of emerging technologies and their impact on both individual and corporate environments, with over 200 independent analysts, has recently published, after a thorough analysis of various PaaS cloud providers, a chart scoring them against several disruptive vectors.

The way this works is by averaging scores given by their experts for key cloud provider capabilities, such as multi-cloud deployments, DevOps, mobile app development and more.

Guess what the result was! Microsoft Azure fronted the ‘wolf pack’, ‘edging out competitors including Amazon and Google’, as they stated in a recent announcement.

Every now and then when I try to delete an Azure Active Directory directory it just so happens that I get this funny ‘Directory contains one or more applications that were added by a user or administrator’ error message.



What's so funny about it? Well, the simple fact that all the applications the message mentions seem, at least from the portal's perspective, to have been automatically created when the directory was set up. So what's the solution?

As it turns out, the Azure Management Portal doesn't actually list ALL the applications it creates when you set up a new directory; not only that, it also creates a few applications on your behalf (you, the administrator) when you create the directory service from within the Portal. In order to delete these AAD applications, you're required to get your hands dirty and do some PowerShell scripting.

First, because Azure Active Directory is an upgrade of the former Microsoft Online Services identity service, please be aware that you might need to install a few additional tools on your computer, namely the Microsoft Online Services Sign-In Assistant for IT Professionals RTW (that sounds so Microsoft :-)) and the Azure Active Directory Module for Windows PowerShell. It's preferable to install the 64-bit version of these tools, as the 32-bit version has been discontinued as of this writing.

Once installed, go back to the Azure Management Portal and create a new organizational user within that particular directory (yes, I know, you may have at most one identity within a directory in order to delete it, but you will still need this additional user IF your single AAD global admin is a Microsoft Account):


Make sure you mark the new user as a Global Admin and have an additional e-mail address handy, since Global Admins are required to provide a backup e-mail address in order to receive automated e-mails from the system.

Since the New User dialog created a temporary password for this user, quickly go to http://portal.microsoftonline.com and log in using the new user you've just created. You will be prompted to change the temporary password.

Once you've done this, open up a new PowerShell console or PowerShell ISE window. Within PowerShell, run the following cmdlet in order to connect to the directory. When prompted, use the credentials of the user account you just created from within the Azure Management Portal.

Connect-MsolService

Next, you can use the following cmdlet to retrieve the list of applications which reside in that AAD directory.

Get-MsolServicePrincipal | Select DisplayName

This will return the list of applications currently installed in that AAD directory, and you'll quickly realize that the list contains way more than just the two applications you see inside the Azure Management Portal:

  • Microsoft.Azure.ActiveDirectory
  • Microsoft.SMIT
  • Microsoft.Office365.Configure
  • Windows Azure Service Management API
  • Microsoft.SupportTicketSubmission
  • Microsoft.Azure.ActiveDirectoryUX
  • Microsoft.Azure.GraphExplorer
  • Microsoft.Azure.Portal
  • AzureApplicationInsights
  • Microsoft Policy Administration Service
  • Microsoft.VisualStudio.Online
  • SelfServicePasswordReset

In order to delete all these applications, you can go ahead and run the following cmdlet. Be aware, though, that not all applications can be deleted, and that some deletion attempts will end in an error in the PS console (different from the one shown in the portal earlier – nuts, right?); ignore these.

Get-MsolServicePrincipal | Remove-MsolServicePrincipal

Afterwards, go back into the Azure Management Portal, delete the organizational user account you created earlier and then delete the entire directory.

Voila, worked like a charm!

The ITCamp 2015 video recordings have recently been made publicly available on Vimeo, and I've taken the liberty of embedding the recording of my session, 'Application Insights for Any App: Must-Have Tool For Understanding Your Customers', here. Since all the time slots were an hour long, I suggest you get some coffee, sandwiches, sit down and relax :-).

Enjoy the video and reach out to me on Twitter, Facebook, via the contact form or comment section below if you have any questions on Application Insights, Azure or anything alike.

Alex

P.S.: you can skip directly to 01:00

During my session today at DevSum 2015 I got lots of super cool questions regarding Azure SQL Database Elastic Scale and the different sharding strategies/models. Since the presentation wasn't recorded, I'm happy to take any questions you might find yourself having, or any that couldn't be asked in time (due to the tough 50-minute constraint); you know where to find me :-).

Anyway, I've taken the liberty of attaching today's session slides below.

Have a nice one,

Alex

While the ITCamp 2015 orga team is still 'cooking' the video recordings of all these great sessions, I've decided to make the slides from my session today, entitled 'Application Insights for Any App: Must-Have Tool for Understanding Your Customers', available to you via OneDrive.

Whether you were among the 150+ attendees today or not, among all the other important things I mentioned, I cannot reiterate this enough: knowing both your users and your application's behavior while it's out in the wild is crucial if you want to be the developer of a successful app, not just another production app. Therefore, DO monitor your users, BUT don't do it like the NSA does. Instead, let your users know that you know what frustrates them the most, let them know you're aware of how the application performs and, most importantly, ACT on all the telemetry data you're gathering.


For the past 5 years, two great IT community volunteers, namely Tudor Damian and Mihai Tataran, along with a team of engaged volunteers, have put together what is in my opinion the greatest community-driven IT conference in Romania, namely ITCamp.

Last year’s edition gathered over 500 attendees, mostly mid- and high-level software developers, all keen to learn from and network with an impressive panel of speakers coming from all over the world, each of them an expert in the IT industry.

Given the public agenda available on http://itcamp.ro, this year's edition will easily surpass last year's in both content quality and quantity; let me explain:

  • on one hand, this year's ITCamp will also host a *NEW* track of business-oriented sessions, where you can get a lot of insight into how to manage IT risk, what the cloud business models are given the cloud-emerging market worldwide, how to become a productive product owner and, one of my very favorites, how to manage intellectual property upon application launch
  • on the other hand, ITCamp 2015 has an impressive list of speakers, such as Paula Januszkiewicz – Enterprise Security MVP, Andy Malone – Enterprise Security MVP, Daniel Petri – Directory Services MVP, Andy Cross – Azure MVP and Microsoft RD, Raffaele Rialdi – Developer Security MVP, Tobiasz Koprowski – SQL Server MVP, David Giard – Microsoft Technical Evangelist, Adam Granicz – F# MVP, to name a few (of course, myself included 🙂 )

To quickly conclude: if you haven't yet, now is your chance to register for ITCamp 2015 at http://itcamp.ro. The ticket costs around EUR 130.00, a bargain considering that this is a once-a-year opportunity for really valuable networking, along with great sessions and wonderful food from the caterer – Grand Hotel Italia.

Building on the exceptional success of last year's edition, Global Azure Bootcamp 2015 (#GlobalAzure) is a free one-day training event, taking place on the 25th of April 2015 in several venues worldwide, driven by local Microsoft Azure community enthusiasts and experts. It consists of a day of sessions and labs based on the Microsoft Azure Readiness Kit or custom content. The event was originally designed by 5 Microsoft Azure MVPs in order to benefit the local community members and teach essential Microsoft Azure skills and know-how. While supported by several sponsors, including Microsoft, the event is completely independent and community-driven.

Global Azure Bootcamp 2014 took place in March 2014, and ran at 136 locations in 54 countries on the same day, including countries like Nepal and Mauritius – possibly the largest community event ever. Approx. 480 organizers welcomed about 5,600 attendees. The event also featured a charity lab where attendees deployed virtual machines into Azure to help analyse data for diabetes research.


Just a few days ago the team in Redmond announced general availability for Azure Search, along with several other announcements.

For the past few months I've had the opportunity to talk, blog and answer questions about Azure Search while it was still in public preview. Today, however, the service is no longer in preview, which means the search-as-a-service solution managed by Microsoft is now fully baked, with an SLA and a stable, less-changing REST API schema and models. It can be summed up as: full-text search in a box.

The purpose of Azure Search is to help software developers implement a search system within their applications (whether web, mobile or desktop) without the friction and complexity of writing SQL, JavaScript (or other) queries, and with all the benefits of an administration-free system.

Not only did the team make the service generally available, they also added some more flavor to this release. It comes with great new features, such as an indexer mechanism which allows Azure Search to crawl for data in any modern data repository, such as Azure DocumentDB, Azure SQL Database or SQL Server running on Azure VMs. There's also the concept of suggesters (previously in preview in the 2014-10-20-Preview API version – I wrote about suggesters in the Azure Search Client Library update announcement here), which lets users specify a suggestion algorithm when running the suggest operation available in Azure Search.
