Guys, not that long ago (OK, at the end of last year, but still…) an awesome video went online! And yes, I’m starring in it :-O.

No, it’s not that kind of a video, of course!!! It’s my presentation recording from last year’s awesome .NET DeveloperDays, where I had the great opportunity to do a deep dive on Azure SQL Database and an intro to Docker on Windows and Azure. Here’s the recording – let me know what you think!

Oh, and by the way: this year, in October, I will deliver a full-day training on Docker, Visual Studio, Windows and Azure during .NET DeveloperDays 2017. It’s called ‘Breaking Apps Apart Intentionally – Visual Studio + Docker + Sprinkles of Azure = Modern Microservices’ (fabulous name, isn’t it? :-)).

If you hurry, you can still get in for a super-modest super-early bird fee (offer ends at the end of March):

Hope to see you there,


Earlier this year I had the awesome opportunity to present my SQL Database From A Dev’s Perspective session to an overwhelming audience of Transylvanian software developers in Cluj-Napoca (Romania) during one of the very best IT conferences in Eastern Europe, namely ITCamp.

I do realize you might have had the chance to download the slides before (either from my post-event blog posts), or you might have attended the session at WinDays in Croatia, NT Konferenca in Slovenia or CloudBurst in Sweden. However, if that is not the case, or if you just need a quick reminder on DDM, RLS, Always Encrypted, In-Memory OLTP etc. (what are all these?!), here’s your chance to watch the session from the comfort of your own couch.

All the best and happy entertainment/learning 🙂


Azure SQL for Developers – Alex Mang from ITCamp on Vimeo.

The entire DevOps story with the Microsoft stack keeps expanding its reach to more and more services, with an ever-growing set of advanced features. In this article, I will cover the benefits of Service Endpoints and the ways to configure them in either Visual Studio Team Services or Team Foundation Server, in order to create a tightly integrated ALM story for your apps.

What are Service Endpoints?

Back in the days of Team Foundation Server (2013 and prior), everyone was asking for a way to make Release Management support project types other than .NET and VB/C++. Taking this feedback (along with many other requests), Microsoft rewrote Team Build. Personally, I believe the entire DevOps story using the Microsoft stack has become more mature than ever and is ready to solve the most complex requirements your application has. To achieve this kind of extensibility, Team Build allows you to add features in two ways: (1) by installing extensions, which you can either write and upload yourself or install from the Visual Studio Marketplace, or (2) by taking advantage of the TFX command-line interface, which lets you add custom-designed tasks. The latter is especially useful when you want to package single-task functionality as an atomic part of a build or release definition, rather than chaining several tasks individually. This ensures that build and release processes which have to run the same tasks over and over again remain easily configurable, and it thus reduces the error-prone nature of a highly configurable workflow system such as Team Build.

The beauty of these tasks is that they are not designed exclusively for Microsoft-specific products and services – in fact, most tasks which deal with external services specify the external service’s endpoint in the form of a connection setting that is team-project wide. Again, this helps prevent errors related to connection strings and the like.

These connection settings are known as Service Endpoints and can be configured from the Settings pane of any team project, both in Visual Studio Team Services and Team Foundation Server, under the Services tab.
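If you ever need to script their creation, Service Endpoints can also be managed through the VSTS REST API. Here’s a minimal Python sketch that only builds the JSON request body for an Azure Resource Manager endpoint – the field names, the `azurerm` type and the URL shape are my assumptions based on the distributedtask area of the API, so double-check them against the official REST reference before relying on them:

```python
# Hypothetical sketch: the request body for creating an Azure Resource Manager
# Service Endpoint via the VSTS REST API (distributedtask/serviceendpoints).
# Field names and values below are assumptions - verify against the REST docs.
def build_service_endpoint_payload(name, subscription_id, tenant_id,
                                   client_id, client_secret):
    """Return the JSON-serializable body for the service endpoint POST."""
    return {
        "name": name,
        "type": "azurerm",
        "url": "https://management.azure.com/",
        "data": {
            "subscriptionId": subscription_id,
            "subscriptionName": name,
        },
        "authorization": {
            "scheme": "ServicePrincipal",
            "parameters": {
                "tenantid": tenant_id,
                "serviceprincipalid": client_id,
                "serviceprincipalkey": client_secret,
            },
        },
    }

payload = build_service_endpoint_payload(
    "MySubscription", "00000000-0000-0000-0000-000000000000",
    "tenant-guid", "app-guid", "secret")
```

POSTing this body (authenticated with a personal access token) to the serviceendpoints resource of your team project should create the endpoint; the exact api-version to use depends on your TFS/VSTS version.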

Visual Studio Service Endpoints

Read More →

 Croatia, thanks for having me!

This year I got the chance to engage the Croatian developer communities in Porec, a beautiful Croatian city on the Adriatic coast, during WinDays – the largest Microsoft-driven event in Croatia.

As I said in my interview with the WinDays media crew, this was my second time in Croatia. Whilst my first experience in Croatia as a tourist (when I got to visit Dubrovnik) blew me away with the historical culture Croatia has to offer, my second time in Croatia blew me away with the quality of its software developers, their deep knowledge and their familiarity with the latest technologies and trends. Good job, Croatia!


WinDays is by far the largest Microsoft-driven event in Croatia, bringing together developers, IT professionals and IT business managers and decision makers, which makes it the best opportunity to learn new things and meet interesting people. Right before I got to the WinDays island (yep, you’ve read that right – the event was hosted in a private resort, on a car-free island! How cool is that?!), I was mind-blown by the logistics behind this many-thousands-of-attendees event. For example, a few kilometers before arriving in Porec, I started following road signs with the conference’s logo. Second, I had my car parked in one of the largest parking lots, completely reserved for the conference, where a minibus picked us up. Once we got to the seaside, a shuttle boat reserved for the conference carried us across to the island – and such a boat trip ran roughly every 15 minutes. Once I got to my room, the TV automatically turned on and started playing the conference’s teaser video; in addition, all the WiFi SSIDs, both in the hotel (room, restaurant, lobby etc.) and in the other conference venues, were named WinDays. That’s as much branding as you can get, honestly! Great job, WinDays team!!!

I got to deliver two sessions at WinDays. One was on Application Insights (my already-traditional ‘Know Your Customers, Know Your Apps!’), where I got to blow people’s minds with the power of the usage and performance analytics Application Insights offers almost completely out of the box.

My other session covered features specifically designed for SQL users, named ‘SQL Database from a Developer’s Perspective’ – during this session, I covered my top favorite security and performance features, which have either been around for a year or so or were recently added to Azure SQL Database.


One of the biggest surprises I had at WinDays was the party they put together for attendees and speakers alike. This was also the moment when they awarded a prize of 40,000 kuna to the winner of the Software Startup Academy competition, which had run for a few months. Anyway, during this party they got a Queen cover band to entertain us. At first, especially because of the ‘cover band’ part, I have to admit I didn’t really have high expectations. However, this turned out to be one of the BEST concerts I have ever attended – really! I mean the songs they chose, the impersonation of Freddie Mercury, the way they got the entire crowd cheered up and in the right mood… WOW!

I’ve also added some photos taken at WinDays, below.

This post describes the latest Team Build updates, with features available both in Team Foundation Server (TFS) 2015 Update 2 RC1 and Visual Studio Team Services (VSTS), and was also posted on the Azure Development Community blog.


‘Team Build is Dead! Long Live Team Build!’

This was one of the main titles at last year’s Ignite conference, when the latest version of Team Build was introduced, and there’s a simple reason behind it: the new Team Build system is a complete re-write of the former Team Build. One of the first results of this re-write is that there is no longer any reason to shrug when questions such as “I love TFS, but why can’t I use it to build my Android projects?” are asked. As it turns out, the latest version of Team Build allows for more extensibility than ever, easier management through the web portal and much easier build agent deployment – throughout this post I will try to cover as much as possible of the newly available features.

What’s new?

Ever opened a XAML build definition before? Yikes!

Even though the entire workflow-based schema of a build definition prior to TFS 2015 was cool, as it allowed a lot of complexity in the logic of an automated build, it turned out that due to the lack of extensibility and the difficulty of understanding the underlying XML schema, build definitions needed another approach. This is probably one of the main reasons behind the decision to ditch XAML altogether from the new Team Build system. Don’t get me wrong – XAML-based build definitions haven’t gone anywhere just yet: you can still create them in both TFS and VSTS, but as the team has put it, they will become obsolete at some point in time, so it’s best to put together a strategy for migrating from XAML-based build definitions to the new task-based Team Build system. And to be fair, the new system also comes with tons of benefits, extensibility being one of the greatest (at least in my opinion).

Read More →

Speaking at Microsoft Summit & CodeCamp Iasi

Last week was one full of traveling experiences and speaking engagements at the two largest IT conferences in Romania: Microsoft Summit and CodeCamp Iasi. I got a chance to talk on the same subject at both conferences, namely Microsoft Azure Visual Studio Online Application Insights (this name is so Microsoft :-)), and according to Andrei Ignat‘s (Microsoft Visual Studio and Development Technologies MVP) review here, I did a good job delivering this session.

Whilst this year’s Microsoft Summit focused a lot on networking, with lots of great opportunities to meet and chat with brave entrepreneurs, successful business all-stars, experienced technical fellows and gizmo masterminds, CodeCamp was a hardcore developer event, with not two, not three, but ten (10!) simultaneous developer tracks. Why such a big number, you might ask? Well, considering that there were at least 1,800 attendees at the event, you can imagine why :-). Don’t get me wrong, Microsoft Summit wasn’t any smaller, especially in terms of attendees. Rumor has it that over 2,100 attendees registered, but the exact number hasn’t been made public yet.

However, the absolutely amazing thing about my Application Insights sessions at these two conferences was that some developers who attended my session in Bucharest (at Microsoft Summit) decided to show up again (two days later) at the exact same session in Iasi (at CodeCamp), in order to get additional questions answered and take extra notes on Application Insights usage scenarios.

This is not only overwhelming, but also extremely flattering! For those of you who attended any of my sessions: you were a great audience: THANK YOU!

For those of you who didn’t make it to either of these sessions, I’ve posted the slides further down this blog post. Be aware though that more than half of the session time was spent on demos and how-tos rather than slides – the recordings are yet to be announced by the Microsoft Summit organizers; as soon as they’re public, I’ll make them available here as well.

Also Speaking At CloudBrew Later This Month

CloudBrew AZUG

In addition, if you happen to be in Belgium at the end of the month (November 28th), make sure you register for CloudBrew – there I’ll focus my Application Insights session on IoT monitoring techniques and some other goodies.

Happy coding!


Public Events! Register Today!

Public Speaking @ Microsoft Summit 2015

As part of my continuous community involvement, for the next 30 days or so I will be busy traveling yet again across Romania (South, North-East and then West again) and Belgium. As you might already expect, I’m engaged in a few public events. If you want to drop by and say ‘Hi’, I’d be more than happy:

Unlike the events taking place in Romania – which are considerably large (1500+ participants) – CloudBrew is a very intimate event, with nothing but excellent talks, valuable networking opportunities, beer sampling (that’s why it’s called CloudBrew), excellent food and wonderful prizes. Both CodeCamp and CloudBrew are community-driven events (organized by CodeCamp and AZUG (Azure User Group) Belgium respectively), but if you’re especially interested in cloud computing, then CloudBrew is without doubt the event for you!

At these events I’m yet again going to cover Azure-related content. This time, however, I’ll go deep into the service called Visual Studio Online Application Insights, show you tips and tricks on various patterns, show you how you can use Application Insights in any Internet of Things project, and show you how to customize dashboards so they fit your DevOps team’s requirements. Lastly, you’ll also get a chance to see a complete IoT application running on a Raspberry Pi 2 powered by Windows 10 IoT, monitored using Application Insights – #CoolStuffAlert.

To get a sneak preview of what I’m putting together for these events, you still have a chance to watch the recording of my ITCamp session in May here:

ITCamp 2015 – Application Insights for Any App Must-Have Tool for Understanding Your Customers (Alex Mang) from ITCamp on Vimeo.

…or the recording David Giard and I did before my session there:

In addition to what you get from the roughly 50-minute video, please be advised that I’ve updated the presentation so that it’s up to date with the features added in the meantime – yes, Visual Studio Online is packed with lots of cool features for monitoring usage and application performance.

See you there!


Thou shalt not fail with unhandled exceptions!

As a software developer, whether you have one or a million applications deployed in production, getting the answer to ‘How’s it going, app?’ is priceless.

In terms of web applications, when it comes to diagnostics there are two types of telemetry data you can collect and use in either forensic or maintenance operations. The first is telemetry generated by the hosting infrastructure itself while your application runs – commonly called site diagnostic logs, since they are produced by the hosting infrastructure; site diagnostic logs have input from the operating system as well as from the web server, so that is Windows and IIS if you’re still using the common hosting method. In Azure Web Apps, these are generated on your behalf by the hosting infrastructure and can be accessed in a number of ways – but some configuration is required first. The second type of telemetry data is the so-called application log, which is generated by the application itself as a result of explicit logging calls in code, such as Debug.WriteLine() or Trace.TraceError().

This general rule, however, doesn’t fully explain why the Azure portal exposes a larger number of settings for log files, or what these settings represent. For quite a long time now, both the generally available Azure portal and the preview portal (a.k.a. Ibiza) have had a configuration section for diagnostics. Within the portals, there are (at the time of this writing) four different settings with an On/Off toggle switch, meaning you can choose whether each set of telemetry data is collected or not. If you’re wondering why this is the case, consider this: writing files over any storage technology – and over the Ethernet wire especially – takes time and will eventually increase IO load.

Storing logs

Within the preview Azure portal (a.k.a. Ibiza) settings blade for Web Apps, the four diagnostics settings are (pictured below):

Diagnostic Logs

  1. Application Logging (Filesystem) – these are the logs written explicitly by the application itself through Trace and Debug calls (Trace.* and Debug.* respectively). Of course, the methods of the Debug class only produce output when the application has been compiled with the DEBUG symbol. This setting also requires you to specify which logging level should be stored, and you can choose between Error, Warning, Information and Verbose. Each level includes the logs of the levels above it – I’ve attached a representative pyramid below. So, for example, if you only want to export the error logs generated by your application, set the level to Error and you will get only those; but if you set the level to Warning, you’ll get both warning and error logs. Pay attention though: Verbose doesn’t bypass the DEBUG symbol requirement – it will still only store debug output lines if the application was built with the DEBUG symbol.
error levels
  2. Web server logging – once configured, the environment will store the IIS logs generated by the web server on which the web application runs. These are very useful especially when you’re trying to debug crashes or poor performance, as they contain information such as the HTTP headers sent by the client, the client’s IP address and other useful data. Another priceless piece of information, especially when you don’t know why your application runs slowly, is the request time, which specifies how long it took the web server to process a particular request. Properly visualized, these logs can dramatically change the decisions you take in terms of optimization.
  3. Detailed error messages – here’s where things get a lot more interesting, as detailed error messages are HTML files generated directly by the web server for all requests which resulted in an error, based on the HTTP status code. In other words, if a particular request results in an HTTP status code of the form 4xx or 5xx, the environment will store an HTML file containing both the request information (with lots of details) and possible solutions.
  4. Failed request tracing – with failed request tracing, the environment creates XML files which contain a deeper level of information for failed requests. In terms of IIS, you might already know that each request goes through a number of HTTP modules, which you either install via the GAC or specify in the system.webServer node of the web.config file. In ASP.NET 5, things change a lot, as modules can be added programmatically in code and you can self-host the entire environment. Anyway, the generated XML files contain information about each HTTP module invoked whilst processing the request, along with how long each module took to process the request, messages from the traces written by that module and much more.
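To make the logging-level pyramid above concrete, here’s a tiny Python illustration (not Azure code – just a sketch of the cumulative behavior of the four levels):

```python
# Azure Web App application-logging levels, from most to least severe.
# Selecting a level captures that level plus every stricter level above it.
LEVELS = ["Error", "Warning", "Information", "Verbose"]

def captured_levels(selected):
    """Return the log categories stored for a given level setting."""
    if selected not in LEVELS:
        raise ValueError("unknown level: %s" % selected)
    return LEVELS[:LEVELS.index(selected) + 1]

print(captured_levels("Warning"))   # ['Error', 'Warning']
print(captured_levels("Verbose"))   # all four categories
```

In other words, Verbose is a superset of Information, which is a superset of Warning, which is a superset of Error.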

As cool as it is to get so much data out of Azure Web Apps simply for forensic purposes, there are at least two huge drawbacks which come by default:

  1. All logs are (currently) saved by default… locally. This basically means that whenever the Fabric decides to move your app to a different hosting environment, you will lose all your diagnostic data – as will also happen if, for whatever reason, the machine reboots. In addition, remember the stateless emphasis I (and everyone else) insisted on during every presentation and workshop given so far? Well, that’s because in a clustered environment you never get the promise that each and every request will go to the same actual target. Therefore, you might find that clients continuously requesting your app generate logs on multiple machines, which makes forensic operations difficult.
  2. The previous point can, however, be solved by exporting the log data to Azure Storage. The bad news is that, as extensive as the Web App blade (and everything related to Web Apps) is, it lacks the option of configuring the Azure Storage account the logs should be exported to – therefore, you have to swap between the old (still generally available) portal and the new portal. This will most likely be solved by the Web App team in Redmond in the near future. Just as a side note, that is EXACTLY what the word Filesystem means in the Application Logging toggle switch mentioned earlier. To make the change, simply open up the website in the management portal, go to the CONFIGURE tab and scroll down to the site diagnostics section. In addition, there’s an extra configuration section which allows you to explicitly configure application logs to go to the file system, Azure Table Storage and/or Azure Blob Storage and, even better, lets you configure which log level should be stored in each of these containers. Remember that this is also the place where you can change the default 35 MB storage capacity limit, either up to 100 MB or as low as 25 MB. Also keep in mind that when exporting to Azure Storage, the limit is determined by Azure Storage’s own limitations, so you can easily go beyond the 100 MB cap.

Reading logs

Using File Transfer Protocol (FTP)

Storing is just one part of the story – the real deal is consuming the data. Happily enough, accessing the log data is easy even from within the preview Azure portal – there’s a set of two settings in the Essentials group which give you access to the file system via the File Transfer Protocol. As you can imagine, this access is protected by a username and password pair. The host name and the username are shown in clear text, available right within the Essentials group on the Web App’s main blade. The password, however, which matches the deployment password, is only available from the .PublishSettings file, which in turn can be downloaded by clicking the Get PublishSettings icon on the blade’s toolbar.

Once you connect to the hosting environment via FTP, drill down into the file system until you reach the LogFiles folder (located in the root, actually) – this is the place where application and site diagnostic logs are stored.
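Since plain FTP(S) is all you need, fetching the logs can easily be scripted. Below is a Python sketch using the standard library’s ftplib; the host name, user name and password placeholders are hypothetical – take the real values from the Essentials group and the .PublishSettings file, as described above:

```python
from ftplib import FTP_TLS
import os

def is_log_file(name):
    """Azure Web App diagnostics typically produce .log/.txt/.html/.xml files."""
    return name.lower().endswith((".log", ".txt", ".html", ".xml"))

def download_logs(host, user, password, local_dir="LogFiles"):
    """Mirror the top-level files of the LogFiles folder over FTPS."""
    os.makedirs(local_dir, exist_ok=True)
    ftp = FTP_TLS(host)
    ftp.login(user, password)
    ftp.prot_p()                      # switch the data channel to TLS
    ftp.cwd("/LogFiles")
    for name in ftp.nlst():
        if is_log_file(name):
            with open(os.path.join(local_dir, name), "wb") as f:
                ftp.retrbinary("RETR " + name, f.write)
    ftp.quit()

# Example call (placeholder credentials, not real ones):
# download_logs("waws-prod-xx.ftp.azurewebsites.windows.net",
#               "myapp\\$myapp", "deployment-password")
```

Note that the sketch only grabs the top-level files – subfolders (per-feature log folders) would need a short recursive walk on top of this.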

Using Visual Studio

As a developer, Visual Studio is the #1 most-used tool on my PC, and it’s rarely used for DevOps or IT-Pro related tasks. Reading logs, however, even if it might fall into the latter categories, can be done via Visual Studio too.

In either Visual Studio 2013 or Visual Studio 2015, there are two windows which relate to Azure: the legacy Server Explorer window and the Cloud Explorer window. Whilst Cloud Explorer is the new guy in town, it offers (in terms of accessing log files) the same functionality as Server Explorer, the mature sibling; that is, the ability to drill through the file system of a web app’s hosting environment and show the LogFiles folder, with all of its subfolders and files. These can also be read inside Visual Studio, so there’s no Alt+Tab-ing between windows. Cool enough, VS also allows you to download the log files (one, multiple or all) locally for further analysis, machine learning, PowerBI – whatever.
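Once the log files are on your machine, the IIS logs are plain W3C extended format text, so slicing them – say, to find slow requests by their time-taken value – takes just a few lines. Here’s a Python sketch; it assumes the usual #Fields directive at the top of the file, so check your files’ headers before reusing it:

```python
def parse_iis_log(lines):
    """Parse W3C extended log lines into dicts, driven by the #Fields directive.

    Assumes the default IIS layout, where a '#Fields:' comment names the
    space-separated columns (e.g. date, time, cs-method, cs-uri-stem,
    sc-status, time-taken).
    """
    fields, rows = [], []
    for line in lines:
        line = line.strip()
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line and not line.startswith("#") and fields:
            rows.append(dict(zip(fields, line.split())))
    return rows

# Hypothetical sample data in the default format:
sample = [
    "#Fields: date time cs-method cs-uri-stem sc-status time-taken",
    "2015-11-01 10:00:00 GET /index.html 200 187",
    "2015-11-01 10:00:01 GET /missing 404 12",
]
rows = parse_iis_log(sample)
slow = [r for r in rows if int(r["time-taken"]) > 100]
```

From here, feeding `rows` into pandas or PowerBI for the visualization part is straightforward.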

Third party tools

Going into too much detail on the third-party tools which let you access a web app’s settings, file system etc. is pointless – please be reminded that they exist and let’s move on :-).

Azure Web Site Logs Browser

Here’s yet again a place where things get interesting, as there’s a web app extension which, once installed, allows you to do exactly ONE thing – view logs. The cool thing about it is that it creates an HTTP endpoint within Kudu (that is, http://[appname]…), which you can open in your favorite web browser; from there, you’ll get exactly the same LogFiles folder listing you’ve seen earlier. This makes things a lot easier, as there’s no need to juggle too many tools when you’re in a quick search for a specific log file.

Log Streaming

In this post, I’ve kept the sweets for last. Reading logs is an obvious task if you want to diagnose performance issues or failures; in my opinion, however, it couldn’t get any more passive than that. How do you deal with scenarios where you’re told that things go wrong but you cannot reproduce them yourself? What if you could remotely see how your customers’ requests are causing the system to fail, or the application to simply return unexpected error messages? Meet log streaming, a near-real-time streaming service provided by Azure.

The idea behind the streaming service is that, provided you have logging enabled, the system will start streaming logs, which can be retrieved either via Visual Studio, PowerShell cmdlets or the Ibiza portal directly.
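For the curious: the same stream is exposed over HTTP by the Kudu (SCM) site of your web app, so you can tail it from any script. Here’s a hedged Python sketch – the /api/logstream endpoint and the .scm.azurewebsites.net host shape come from Kudu’s conventions (verify them for your environment), and the credentials are the same deployment credentials used for FTP:

```python
import base64
import urllib.request

def basic_auth_header(user, password):
    """Build the Basic auth header value from the deployment credentials."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return "Basic " + token

def stream_logs(app_name, user, password, max_lines=100):
    """Tail the application log stream exposed by the Kudu (SCM) site.

    Sketch only: assumes the https://<app>.scm.azurewebsites.net host shape
    and the /api/logstream endpoint provided by Kudu.
    """
    url = ("https://%s.scm.azurewebsites.net/api/logstream/application"
           % app_name)
    req = urllib.request.Request(
        url, headers={"Authorization": basic_auth_header(user, password)})
    with urllib.request.urlopen(req) as resp:
        for i, raw in enumerate(resp):           # the response never ends;
            print(raw.decode("utf-8", errors="replace"), end="")
            if i + 1 >= max_lines:               # stop after a bounded tail
                break

# Example call (placeholder credentials):
# stream_logs("myapp", "$myapp", "deployment-password")
```

The connection stays open for as long as you keep reading, which is exactly what makes the near-real-time experience possible.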


It’s my opinion that the diagnostics services offered by Azure, especially for Web Apps, are incredibly thorough and mature enough for any production workload – it’s just a matter of getting the right configuration without impacting performance, and afterwards making use of the data generated by the requests your application processes.

Happy coding!