sherlock

Thou shalt not fail with an unhandled exception!

As a software developer, whether you have one or a million applications deployed in production, getting the answer to ‘How’s it going, app?’ is priceless.

In terms of web applications, when it comes to diagnostics there are two types of telemetry data you can collect and use in either forensic or maintenance operations. The first is the telemetry generated by the hosting infrastructure itself while the application runs – commonly called site diagnostic logs, since they are produced by the hosting infrastructure rather than by your code. Site diagnostic logs have input from the operating system as well as from the web server, so that is Windows and IIS if you're still using the common hosting model. In terms of Azure Web Apps, these are generated on your behalf by the hosting infrastructure and can be accessed in a number of ways – but there's some configuration required first. As for the second telemetry data type, this is the so-called application log, which is generated by the application itself as a result of explicit logging statements in code, such as Debug.WriteLine() or Trace.TraceError().
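To make the distinction concrete, here's a minimal sketch of what that explicit application logging looks like in an ASP.NET MVC action (the controller, action and messages are made-up placeholders); each call ends up in the application log at the level shown in the comment next to it, once application logging is switched on:

```csharp
using System.Diagnostics;
using System.Web.Mvc;

public class OrdersController : Controller   // placeholder controller, for illustration only
{
    public ActionResult Index()
    {
        Trace.TraceInformation("Listing orders");            // Information level
        Trace.TraceWarning("Order cache is getting stale");   // Warning level
        Trace.TraceError("Could not reach the orders DB");    // Error level
        Trace.WriteLine("Raw diagnostic detail");             // Verbose level

        Debug.WriteLine("Only emitted when built with the DEBUG symbol");

        return View();
    }
}
```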

This general rule, however, doesn't fully explain why the Azure portal exposes a larger number of settings for log files, nor what those settings represent. For quite a long time now, both the generally available Azure portal (manage.windowsazure.com) and the preview portal (a.k.a. Ibiza – portal.azure.com) have offered a configuration section for diagnostics. Within the portals there are (at the time of this writing) four different settings, each with an On/Off toggle switch, meaning that you can choose whether or not that set of telemetry data gets collected. If you're wondering why this is the case, please hear this: writing files to any storage technology – and especially over the wire – takes time and eventually increases the IO load.

Storing logs

Within the Preview Azure Portal (a.k.a. Ibiza) settings blade for Web Apps, the four settings for diagnostics are (pictured below):

Diagnostic Logs

  1. Application Logging (Filesystem) – these are the logs written explicitly by the application itself through traces and debug statements (Trace.* and Debug.* respectively). Of course, the methods of the Debug class are only going to produce output when the application has been compiled with the DEBUG symbol defined. This setting also requires you to specify which logging level should be stored, and you can choose between Error, Warning, Information and Verbose. Each level also includes the logs of the more severe levels before it – I've attached a representative pyramid below. So, for example, if you only want to capture the errors generated by your application, you set the level to Error and you will only get those logs – but if you configure the level to Warning, you'll get both warning and error logs. Pay attention though: Verbose doesn't bypass the DEBUG compilation symbol – debug output lines are still only stored if the application has been built with the DEBUG symbol.
error levels
  2. Web server logging – once enabled, this will make the environment store the IIS logs generated by the web server on which the web application runs. These are very useful, especially when you are trying to debug crashes or poor performance, as they contain information such as the HTTP headers sent by the client, the client's IP address and other useful data. Another priceless piece of information, especially when you don't know why your application runs slowly, is the request time, which specifies how long it took the web server to process a particular request. Properly visualized, these can dramatically change the decisions you take in terms of optimization.
  3. Detailed error messages – here's where things get a lot more interesting, as detailed error messages are HTML files generated by the web server for every request that results in an error, based on the HTTP status code. In other words, if a particular request results in an HTTP status code in the 4xx or 5xx range, the environment will store an HTML file containing both the request information (with lots of detail) and possible solutions.
  4. Failed request tracing – with failed request tracing, the environment will create XML files which contain a deeper level of information about failed requests. In terms of IIS, you might already know that each request goes through a number of HTTP modules that you either install via the GAC or register in the web.config file. In ASP.NET 5 things change a lot, as modules can be added programmatically in code and you can self-host the entire environment (see the sketch right after this list). Anyway, the generated XML files contain information about each HTTP module that was invoked while processing the request, along with how long each module took, the trace messages that module wrote and much more.
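To illustrate that last point about ASP.NET 5, here's a minimal sketch of a "module" written as middleware registered directly in code; the namespaces changed between the ASP.NET 5 betas and what later shipped, so treat the usings as an assumption. The inline middleware simply traces how long the rest of the pipeline took for each request:

```csharp
using System.Diagnostics;
using Microsoft.AspNet.Builder;   // later releases moved this to Microsoft.AspNetCore.Builder

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Plays the role an IIS HTTP module used to play, but is registered in code
        app.Use(async (context, next) =>
        {
            var watch = Stopwatch.StartNew();
            await next();                     // run the rest of the pipeline
            watch.Stop();

            Trace.TraceInformation(
                "{0} {1} returned {2} in {3} ms",
                context.Request.Method,
                context.Request.Path,
                context.Response.StatusCode,
                watch.ElapsedMilliseconds);
        });

        app.UseStaticFiles();                 // another 'module', also added in code
    }
}
```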

As cool as it is to get so much data out of Azure Web Apps simply for forensic purposes, there are at least two huge drawbacks which come by default:

  1. All logs are (currently) saved locally by default. This basically means that whenever the Fabric decides to swap your app to a different hosting environment, you will lose all your diagnostic data – the same happens if, for whatever reason, the machine reboots. In addition, remember the emphasis on statelessness that I (and everyone else) have insisted on in every presentation and workshop given so far? That's because in a clustered environment you never get the promise that each and every request will go to the same actual target. Therefore, you might find that clients continuously requesting your app generate logs on multiple machines, which makes forensic operations difficult.
  2. The previous point can, however, be solved by exporting the log data to Azure Storage. The bad news is that, as extensive as the Web App blade (and everything related to Web Apps) is, it lacks the option of configuring the Azure Storage account the logs should be exported to – therefore, you have to switch between the old (still generally available) portal – https://manage.windowsazure.com – and the new portal – https://portal.azure.com. This will most likely be solved by the Web App team in Redmond in the near future. As a side note, that is EXACTLY what the word Filesystem means in the Application Logging toggle switch mentioned earlier. In order to make the change, simply open up the website in the management portal, go to the CONFIGURE tab and scroll down to the site diagnostics section. There you'll find an extra configuration section which allows you to explicitly send application logs to the file system, Azure Storage tables and/or Azure Storage blobs and, even better, allows you to configure which log level should be stored in each of these destinations (see the sketch below for reading the exported entries back). Remember that this is also the place where you can change the default 35 MB file system storage limit, either up to 100 MB or as low as 25 MB. Keep in mind that when logging to Azure Storage, the limit is determined by the limits of Azure Storage itself, so you can easily go well beyond the 100 MB mark.
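Once application logs are exported to an Azure Storage table, you can also read them back programmatically. Below is a minimal sketch using the WindowsAzure.Storage SDK; the connection string, the table name ("appLogs" – whatever you picked when configuring the export) and the "Message" property name are assumptions, so verify them against your own storage account:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class ExportedLogReader
{
    static void Main()
    {
        // Placeholder connection string and table name – adjust them to your own account.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
        var table = account.CreateCloudTableClient().GetTableReference("appLogs");

        // Pull a batch of exported application log entries.
        var query = new TableQuery<DynamicTableEntity>().Take(100);
        foreach (var entity in table.ExecuteQuery(query))
        {
            var message = entity.Properties.ContainsKey("Message")
                ? entity.Properties["Message"].StringValue
                : "(no Message property)";
            Console.WriteLine("{0} {1}", entity.Timestamp, message);
        }
    }
}
```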

Reading logs

Using File Transfer Protocol (FTP)

Storing is just one part of the story – the real deal is consuming the data. Happily, accessing the log data is easy even from within the Preview Azure Portal – there's a set of two settings in the Essentials group which give you access to the file system via the File Transfer Protocol. As you can imagine, this access is protected by a username and password pair. The host name and the username are shown in clear text and are available right from the Essentials group on the Web App's main blade. The password however, which matches the deployment password, is only available from the .PublishSettings file, which in turn can be downloaded by clicking the Get PublishSettings icon on the blade's toolbar.

Once you connect to the hosting environment via FTP, drill down into the file system until you reach the LogFiles folder (located right in the root) – this is the place where application and site diagnostic logs are stored.
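If you'd rather script the download than use an FTP client, here's a minimal C# sketch that fetches a single file from the LogFiles folder; the host name, credentials and file name are placeholders for the values described above:

```csharp
using System;
using System.IO;
using System.Net;

class LogDownloader
{
    static void Main()
    {
        // Placeholder values – take the real ones from the Essentials group and the .PublishSettings file.
        var ftpUri = "ftp://waws-prod-xx-000.ftp.azurewebsites.windows.net/LogFiles/eventlog.xml";

        var request = (FtpWebRequest)WebRequest.Create(ftpUri);
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.Credentials = new NetworkCredential(@"appname\$appname", "deploymentPassword");

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            File.WriteAllText("eventlog.xml", reader.ReadToEnd());   // keep a local copy for analysis
            Console.WriteLine("Downloaded {0} ({1})", ftpUri, response.StatusDescription.Trim());
        }
    }
}
```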

Using Visual Studio

As a developer, I use Visual Studio more than any other tool on my PC, and I rarely use it for DevOps or IT-Pro related tasks. This task however, even though it might fall into the latter categories, can be done via Visual Studio too.

In either Visual Studio 2013 or Visual Studio 2015, there are two windows which relate to Azure: the legacy Server Explorer window and the newer Cloud Explorer window. Whilst Cloud Explorer is the new guy in town, it offers (in terms of accessing log files) the same functionality as Server Explorer, its more mature sibling; that is, the ability to drill through the file system of a web app's hosting environment and show the Log Files folder with all of its subfolders and files. These can also be read inside Visual Studio, so there's no Alt+Tab-ing between windows. Cool enough, VS also allows you to download the log files (one, several or all of them) locally for further analysis, machine learning, Power BI – whatever.

Third party tools

Going into too much detail about the third-party tools which let you access a web app's settings, file system and so on would be pointless – please just be reminded that they exist and let's move on :-).

Azure Web Site Logs Browser

Here's yet again a place where things get interesting, as there's a web app extension which, once installed, allows you to do exactly ONE thing – view logs. The cool thing about it though is that it creates an HTTP endpoint within Kudu (that is, http://[appname].scm.azurewebsites.net/websitelogs), which you can open up in your favorite web browser; from there, you'll get exactly the same Log Files folder listing you've seen earlier. This makes things a lot easier, as there's no need to juggle multiple tools when you're in a quick search for a specific log or log file.

Log Streaming

In this post, I've kept the sweets for last. Reading logs is an obvious task you have to perform if you want to diagnose performance issues or failures; in my opinion, however, it couldn't get any more passive than that. How do you deal with scenarios where you're told that things go wrong but you cannot reproduce them yourself? What if you could remotely watch how your customers' requests are causing the system to fail or the application to simply return unexpected error messages? Meet log streaming, a near-real-time streaming service provided by Azure.

The idea behind the streaming service is that, provided you have logging enabled, the system will start streaming the logs, which can then be retrieved either with Visual Studio, with the PowerShell cmdlets or directly from the Ibiza portal.
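If you want the same tail outside of those tools, the stream is also exposed over HTTP by the Kudu site of the web app. Here's a minimal sketch that reads it with HttpClient, assuming the usual Kudu endpoint (https://[appname].scm.azurewebsites.net/api/logstream) and the deployment credentials from the .PublishSettings file – all of them placeholders:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class LogStreamTail
{
    static void Main()
    {
        TailAsync().Wait();
    }

    static async Task TailAsync()
    {
        // Placeholder credentials – use the deployment user and password from the .PublishSettings file.
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("$appname:deploymentPassword"));

        using (var client = new HttpClient())
        {
            client.Timeout = Timeout.InfiniteTimeSpan;   // the stream stays open indefinitely
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // Assumed Kudu endpoint; '/application' or '/http' can be appended to filter the stream.
            var stream = await client.GetStreamAsync("https://appname.scm.azurewebsites.net/api/logstream");

            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = await reader.ReadLineAsync()) != null)
                {
                    Console.WriteLine(line);   // each line arrives in near-real time
                }
            }
        }
    }
}
```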

Conclusion

It's my opinion that the diagnostics services offered by Azure, especially for Web Apps, are incredibly thorough and mature enough for any production workload – it's just a matter of getting the configuration right without impacting performance, and afterwards making use of the data generated by the requests your application processes.

Happy coding!

-Alex
