Guys, not very long ago (OK, at the end of last year, but still…) an awesome video went online! And yes, I’m starring in it :-O.

No, it’s not that kind of a video, of course!!! It’s my presentation recording from last year’s awesome .NET DeveloperDays, where I had the great opportunity of doing a deep dive on Azure SQL Database and an intro on Docker on Windows and Azure. Here’s the recording – let me know what you think!

Oh, and by the way: this year, in October, I will deliver a full-day training on Docker, Visual Studio, Windows and Azure during .NET DeveloperDays 2017. It’s called ‘Breaking Apps Apart Intentionally – Visual Studio + Docker + Sprinkles of Azure = Modern Microservices’ (fabulous name, isn’t it? :-)).

If you hurry, you can still get in for a super-modest super-early bird fee (offer ends at the end of March): http://net.developerdays.pl/registration.

Hope to see you there,

Alex

The entire DevOps story with the Microsoft Stack is expanding its reach to more and more services, with an ever-growing set of advanced features. In this article, I will cover the benefits of Service Endpoints and the ways to configure them in either Visual Studio Team Services or Team Foundation Server, in order to create a tightly integrated ALM story for your apps.

What are Service Endpoints?

Back in the days of Team Foundation Server (2013 and prior), everyone was asking for a way to make Release Management expand to project types other than .NET and VB/C++. Taking this feedback (along with many other requests), Microsoft rewrote Team Build. Personally, I believe the entire DevOps story on the Microsoft Stack has become more mature than ever and is ready to solve the most complex requirements your application has. In order to achieve this kind of extensibility, Team Build allows you to add features in two ways: (1) by installing extensions, which can either be written and uploaded by yourself or installed from the Visual Studio Marketplace, or (2) by taking advantage of the TFX Command Line Interface, which allows you to add custom-designed tasks. The latter is especially useful when it comes to packaging a single piece of functionality as an atomic step of the build or release definition, rather than leveraging several tasks individually. This ensures that build and release processes which have to perform the same steps over and over again remain easily configurable, and it reduces the error-prone nature of a highly configurable workflow system such as Team Build.

The beauty of these tasks is that they are not exclusively designed for Microsoft-specific products and services – in fact, most of the tasks which deal with external services specify the external service’s endpoint in the form of a connection setting that is team-project wide. Again, this helps prevent errors related to connection strings and the like.

These connection settings are known as Service Endpoints and can be configured from the Settings pane of any team project, both in Visual Studio Team Services and Team Foundation Server, under the Services tab.

[Screenshot: Visual Studio Service Endpoints]


This post describes the latest Team Build updates, with features available both in Team Foundation Server (TFS) 2015 Update 2 RC1 and Visual Studio Team Services (VSTS), and was originally posted on the Azure Development Community blog at https://blogs.msdn.microsoft.com/azuredev/.


‘Team Build is Dead! Long Live Team Build!’

This was one of the main titles at last year’s Ignite conference, when the latest version of Team Build was introduced, and there’s a simple reason behind it – the new Team Build system is a complete rewrite of the former Team Build. One of the first results of this rewrite is that there is no longer any reason to shrug when questions such as “I love TFS, but why can’t I use it to build my Android projects?” are asked. As it turns out, the latest version of Team Build allows for more extensibility than ever, easier management through the web portal and much easier build agent deployment – throughout this post I will try to cover as much as possible of the newly available features.

What’s new?

Ever opened a XAML build definition before? Yikes!

Even though the workflow-based schema of a build definition prior to TFS 2015 was cool, as it allowed a lot of complexity in the logic of an automated build, it turned out that due to the lack of extensibility and the difficulty of understanding the underlying XML schema, build definitions needed another approach. This is probably one of the main reasons behind the decision to ditch XAML altogether from the new Team Build system. Don’t get me wrong – XAML-based build definitions didn’t go anywhere: you can still create them both in TFS and VSTS, but as the team has put it, they will become obsolete at some point in time and therefore it’s best to put together a strategy for migrating from XAML-based build definitions to the new task-based Team Build system. And to be fair, the new system also comes with tons of benefits, extensibility being one of the greatest (at least in my opinion).



Thou shalt not fail with unhandled exceptions!

As a software developer, whether you have one or a million applications deployed in production, getting the answer to ‘How’s it going, app?’ is priceless.

In terms of web applications, when it comes to diagnostics there are two types of telemetry data you can collect and use in either forensic or maintenance operations. More specifically, (1) the hosting infrastructure has its own set of telemetry data generated while the application runs – these are commonly called site diagnostic logs, as they are generated by the hosting infrastructure itself; site diagnostic logs have input from the operating system as well as from the web server, which means Windows and IIS if you’re still using the common hosting method. In terms of Azure Web Apps, these are generated on your behalf by the hosting infrastructure and can be accessed in a number of ways – but there’s some configuration required first. The second telemetry data type is the so-called application log, which is generated by the application as a result of explicit logging statements in code, such as Debug.WriteLine() or Trace.TraceError().
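To make the distinction concrete, here’s a minimal sketch of the kind of statements that end up in the application log, using the standard System.Diagnostics APIs mentioned above; the class, method and messages are just illustrative placeholders.

```csharp
using System;
using System.Diagnostics;

public class OrderService
{
    public void ProcessOrder(int orderId)
    {
        // Trace.* calls are compiled into release builds and map to the
        // Error / Warning / Information / Verbose application log levels.
        Trace.TraceInformation("Processing order {0}", orderId);

        try
        {
            // ... actual processing would happen here ...
        }
        catch (Exception ex)
        {
            Trace.TraceError("Order {0} failed: {1}", orderId, ex);
            throw;
        }

        // Debug.* calls only produce output when the app is built with the DEBUG symbol.
        Debug.WriteLine("Finished processing order " + orderId);
    }
}
```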

This general rule, however, doesn’t fully explain why the Azure portal exposes a larger number of settings for log files and what those settings represent. For quite a long time now, both the generally available Azure portal (manage.windowsazure.com) and the preview portal (a.k.a. Ibiza – portal.azure.com) have had a configuration section for diagnostics. Within the portals there are (at the time of this writing) four different settings, each with an On-Off toggle switch, meaning you can choose whether or not that set of telemetry data gets collected. If you’re wondering why this is the case, consider this: writing files to any storage technology, and especially over the Ethernet wire, takes time and will eventually increase I/O load.

Storing logs

Within the Preview Azure Portal (a.k.a. Ibiza), in the settings blade for Web Apps, the four settings for diagnostics are (picture below):

[Screenshot: Diagnostic Logs settings]

  1. Application Logging (Filesystem) – these are the logs written explicitly by the application itself through Trace or Debug calls (Trace.* and Debug.* respectively). Of course, the methods of the Debug class only produce output when the application has been compiled with the DEBUG symbol. This setting also requires you to specify which logging level should be stored, and you can choose between Error, Warning, Information and Verbose. Each of these levels includes the logs of the previous level – I’ve attached a representative pyramid below. So, for example, if you only want to export the error logs generated by your application, you set the level to Error and you will only get those logs – but if you set the level to Warning, you’ll get both warning and error logs. Pay attention though: Verbose doesn’t bypass the DEBUG symbol requirement – Debug output lines will still only be stored if the application has been built with the DEBUG symbol.
[Image: error log levels pyramid]
  2. Web server logging – once configured, this will make the environment store the IIS logs generated by the web server the web application runs on. These are very useful, especially when you try to debug crashes or poor performance, as they contain information such as the HTTP headers sent by the client (the requester), its IP address and other useful data. Another priceless piece of information, especially when you don’t know why your application runs slow, is the request time, which specifies how long it took the web server to process a particular request (there’s a small parsing sketch right after this list). Properly visualized, these can dramatically change the decisions you’re taking in terms of optimization.
  3. Detailed error messages – here’s where things get a lot more interesting, as detailed error messages are HTML files generated by the web server for all the requests which turned out to result in an error, based on the HTTP status code. In other words, if a particular request results in an HTTP status code in the 4xx or 5xx range, the environment will store an HTML file containing both the request information (with lots of details) and possible solutions.
  4. Failed request tracing – with failed request tracing, the environment will create XML files which contain a deeper level of information for failed requests. In terms of IIS, you might already know that each request goes through a number of HTTP modules which you either install via the GAC or specify in the web.config file. In terms of ASP.NET 5, things change a lot, as modules can be added programmatically in code, since you can self-host the entire environment. Anyway, the generated XML files will contain information about each HTTP module that was invoked while processing the request, along with details such as how long it took each module to process the request, messages out of the traces written by that module and much more.
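Since request time came up above, here’s a minimal sketch of pulling the time-taken value out of an IIS log in the W3C extended format (on Azure Web Apps these files usually sit under LogFiles/http/RawLogs). The file name is a placeholder, and the exact columns depend on the #Fields directive at the top of each log, so the sketch reads that first.

```csharp
using System;
using System.IO;

class IisLogParser
{
    static void Main()
    {
        string[] fields = null;

        // Placeholder file name – real IIS logs are named like u_ex<date>.log.
        foreach (var line in File.ReadLines("u_ex150101.log"))
        {
            if (line.StartsWith("#Fields:"))
            {
                // The #Fields directive declares the column order for the entries that follow.
                fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                continue;
            }

            if (fields == null || line.StartsWith("#"))
                continue;

            var values = line.Split(' ');
            int uri = Array.IndexOf(fields, "cs-uri-stem");
            int timeTaken = Array.IndexOf(fields, "time-taken");

            if (uri >= 0 && uri < values.Length && timeTaken >= 0 && timeTaken < values.Length)
            {
                Console.WriteLine("{0} took {1} ms", values[uri], values[timeTaken]);
            }
        }
    }
}
```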

As cool as it is to get so much data out of Azure Web Apps simply for forensic purposes, there are at least two huge drawbacks which come by default:

  1. All logs are (currently) saved by default… locally. This basically means that whenever the Fabric decides to swap your app to a different hosting environment, you will lose all your diagnostic data – and the same happens if, for whatever reason, the machine reboots. In addition, remember the emphasis on statelessness that I (and everyone else) have insisted on in every presentation and workshop so far? That’s because in a clustered environment you never get the promise that each and every request will go to the same actual target. Therefore, you might find that clients continuously hitting your app generate logs on multiple machines, which makes forensic operations difficult.
  2. The previous point can, however, be solved by exporting the log data to Azure Storage. The bad news is that, as extensive as the Web App blade (and everything related to Web Apps) is, it lacks the option of configuring the Azure Storage account the logs should be exported to – therefore, you have to swap between the old (still generally available) portal – https://manage.windowsazure.com – and the new portal – https://portal.azure.com. This will most likely be solved by the Web App team in Redmond in the near future. Just as a side note, that is EXACTLY what the word Filesystem means in the Application Logging toggle switch mentioned earlier. In order to make the change, simply open up the website in the management portal, go to the CONFIGURE tab and scroll down to the site diagnostics section. In addition, there’s an extra configuration section which allows you to explicitly configure application logs to go to the file system, Azure Storage Tables and/or Azure Storage Blobs and, even better, lets you configure which log level should be stored in each of these containers (a small sketch of reading the exported blobs follows this list). Remember that this is also the place where you can change the default 35 MB storage capacity limit, either up to 100 MB or as low as 25 MB. Also keep in mind that for Azure Storage the limit is determined by the limitations of Azure Storage itself, so you can easily go past the 100 MB limit.
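Once the logs are exported to Azure Storage, reading them back is just regular storage SDK work. Below is a minimal sketch that lists the exported blobs using the classic WindowsAzure.Storage client library; the connection string and container name are placeholders for whatever you configure in the portal.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ExportedLogListing
{
    static void Main()
    {
        // Placeholder connection string – use the storage account you configured for log export.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=mylogs;AccountKey=<key>");
        var client = account.CreateCloudBlobClient();

        // Placeholder container name – whatever you picked when pointing application logs at blob storage.
        var container = client.GetContainerReference("application-logs");

        foreach (var item in container.ListBlobs(useFlatBlobListing: true))
        {
            var blob = item as CloudBlockBlob;
            if (blob != null)
            {
                Console.WriteLine("{0} ({1} bytes)", blob.Name, blob.Properties.Length);
            }
        }
    }
}
```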

Reading logs

Using File Transfer Protocol (FTP)

Storing is just one part of the story – the real deal is consuming the data. Happily enough, accessing the log data is easy even from within the Preview Azure Portal – there’s a set of two settings in the Essentials group which give you access to the file system via the File Transfer Protocol. As you can imagine, this is protected by a username and password combination. The host name and the username are shown in clear text and are available right from within the Essentials group on the Web App’s main blade. The password, however, which matches the deployment password, is only available from the .PublishSettings file, which in turn can be downloaded by clicking the Get PublishSettings icon on the blade’s toolbar.

Once you connect to the hosting environment via FTP, drill down into the file system until you reach the LogFiles folder (located in the root, actually) – this is the place where application and site diagnostic logs are stored.
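If you’d rather script this than click through an FTP client, here’s a minimal sketch using FtpWebRequest to list the LogFiles folder; the host name and credentials are placeholders for the values you get from the Essentials group and the .PublishSettings file.

```csharp
using System;
using System.IO;
using System.Net;

class LogFolderListing
{
    static void Main()
    {
        // Placeholder values – take the real host and deployment credentials
        // from the Essentials group and the .PublishSettings file.
        var host = "ftp://waws-prod-xyz-001.ftp.azurewebsites.windows.net";
        var credentials = new NetworkCredential(@"myapp\$myapp", "deployment-password");

        // List the contents of the LogFiles folder located in the site root.
        var request = (FtpWebRequest)WebRequest.Create(host + "/LogFiles");
        request.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
        request.Credentials = credentials;

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```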

Using Visual Studio

As a developer, I’d say Visual Studio is the #1 most used tool on my PC, and it’s rarely used for DevOps or IT-Pro related tasks. This task, however, even if it might fall into the latter categories, can be done via Visual Studio too.

In either Visual Studio 2013 or Visual Studio 2015, there are two windows which relate to Azure: one is the legacy Server Explorer window and the other is the Cloud Explorer window. Whilst Cloud Explorer is the new guy in town, it offers (in terms of accessing log files) the same functionality as Server Explorer, its mature sibling; that is, the ability to drill through the file system of a web app’s hosting environment and show the Log Files folder with all of its subfolders and files. These can also be read inside Visual Studio, so there’s no Alt+Tab-ing between windows. Cool enough, VS also allows you to download the log files (one, multiple or all) locally for further analysis, machine learning, Power BI – whatever.

Third party tools

There’s no point in going into too much detail about the fact that third-party tools exist which let you access a web app’s settings, file system and so on – just be reminded that they exist and let’s move on :-).

Azure Web Site Logs Browser

Here’s yet again a place where things get interesting, as there’s a web app extension which allows you to do exactly ONE thing once installed – that is, to view logs. The cool thing about it, though, is that it creates an HTTP endpoint within Kudu (that is, http://[appname].scm.azurewebsites.net/websitelogs), which you can open up in your favorite web browser; from there, you’ll get exactly the same LogFiles folder listing you’ve seen earlier. This makes things a lot easier, as there’s no need to work with too many tools if you’re in a quick search for a specific log file.

Log Streaming

In this post, I’ve kept the sweets for last. Reading logs is an obvious task you have to do if you want to diagnose performance issues or failures; in my opinion, however, it couldn’t get any more passive than that. How do you deal with scenarios where you’re being told that things go wrong but you cannot reproduce them yourself? What if you could remotely see how your customers’ requests are causing the system to fail, or the application to simply return unexpected error messages? Meet log streaming, a near-real-time streaming service provided by Azure.

The idea behind the streaming service is that, provided you have logging enabled, the system will start streaming logs which can be retrieved either via Visual Studio, PowerShell cmdlets or the Ibiza portal directly.
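Besides those three options, the stream is also exposed by Kudu over plain HTTPS, which makes it easy to consume from your own code. Here’s a minimal sketch that keeps the connection open and prints log lines as they arrive; the app name and deployment credentials are placeholders, and the /api/logstream path is the Kudu endpoint as I know it, so double-check it against your own site’s Kudu console.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading;

class LogStreamReader
{
    static void Main()
    {
        // Placeholder deployment credentials – the same ones found in the .PublishSettings file.
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("$myapp:deployment-password"));

        using (var client = new HttpClient())
        {
            client.Timeout = Timeout.InfiniteTimeSpan; // the stream stays open indefinitely
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // Kudu keeps the connection open and pushes new log lines as they are written.
            var stream = client.GetStreamAsync("https://myapp.scm.azurewebsites.net/api/logstream").Result;

            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    Console.WriteLine(line);
                }
            }
        }
    }
}
```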

Conclusion

It’s my opinion that the diagnostics services offered by Azure, especially in terms of Web Apps, are incredibly thorough and mature enough for any production workload – it’s just a matter of getting the right configuration without impacting performance, and afterwards making use of the data generated by the requests your application processes.

Happy coding!

-Alex

No really, do YOU seriously believe your IT infrastructure is safe?

Renowned security expert Paula Januszkiewicz – specialized in penetration testing, Enterprise Security MVP, MCT, Microsoft Security Trusted Advisor and a #1 speaker at premium IT conferences such as Microsoft Ignite, TechEd, RSA and more – will be in Romania for a 5-day hands-on security class in Bucharest, put together by Avaelgo Training.

Between the 17th and 21st of August, Paula will host the Windows Infrastructure Masterclass, which aims to provide specialization in hacking and securing IT infrastructures. This course is especially designed for enterprise administrators, infrastructure architects, security professionals, system engineers, network administrators, IT professionals and security consultants. As an added benefit, attendees will become Certified Security Engineers (CSEN).

Therefore, if you have your feet on the ground and realize that you MUST know more about security, this course is a must-attend, and you’d better make sure you have your agenda free between August 17 and 21.

For the past 5 years, two great IT community volunteers, namely Tudor Damian and Mihai Tataran, along with a team of engaged volunteers, have put together what is, in my opinion, the greatest community-driven IT conference in Romania, namely ITCamp.

Last year’s edition gathered over 500 attendees, mostly mid- and high-level software developers, all keen to learn from and network with an impressive panel of speakers coming from all over the world, each of them an expert in the IT industry.

Given the public agenda available on http://itcamp.ro, this year’s edition will easily surpass last year’s in both content quality and quantity; let me explain:

  • on one hand, this year’s ITCamp will also host a *NEW* track of business-oriented sessions where you can get a lot of insights on how to manage IT risk, what the cloud business models are given the emerging cloud market worldwide, how to become a productive product owner and, one of my very favorites, how to manage intellectual property upon application launch
  • on the other hand, ITCamp 2015 has an impressive list of speakers, such as Paula Januszkiewicz – Enterprise Security MVP, Andy Malone – Enterprise Security MVP, Daniel Petri – Directory Services MVP, Andy Cross – Azure MVP and Microsoft RD, Raffaele Rialdi – Developer Security MVP, Tobiasz Koprowski – SQL Server MVP, David Giard – Microsoft Technical Evangelist, Adam Granicz – F# MVP, to name a few (of course, myself included 🙂 )

To quickly conclude: if you haven’t yet, now is your chance to register for ITCamp 2015 at http://itcamp.ro. The ticket costs around EUR 130.00, a bargain considering that this is a once-a-year opportunity for really valuable networking, along with great sessions and wonderful food from the caterer – Grand Hotel Italia.

So my previous configuration was this:

  • TFS 2010, running on the same machine with a WSS 3.0
  • SCVMM on the same machine
  • SQL Server 2008 R2 databases on the same machine

Almost everything was running smoothly, except for some people-picker issues that I had on the team project site permission page.

Anyway, I decided to upgrade the TFS server to TFS 2012. In order to do that, I first backed up everything, both using the backup utility that comes along with TFS and also using a mirroring RAID configuration. I did an in-place upgrade for TFS. After upgrading it, everything worked fine. However, since I didn’t like the look and feel of WSS 3.0 sites, I decided to do an in-place upgrade of SharePoint as well, and I upgraded to SharePoint Foundation 2010.

The in-place upgrade worked like a charm, except for the fact that I had to manually install a prerequisite because it wouldn’t download from the web for some peculiar reason. Once the installation was complete, I came across my first error with SharePoint Foundation 2010, namely a very generic “server error: <help link>”. Unfortunately, the help link only suggested I download the updates, so I downloaded and installed SharePoint Foundation 2010 SP1. After installing it, the SharePoint services worked, but TFS no longer did. I found out that the prerequisites had actually installed .NET Framework 4 as well, and that the applicationHost.config file was updated to use specific assemblies from .NET Framework 4. Unfortunately, one of the updated entries in the applicationHost.config file was not correctly updated: the runtime version was not set to v2.0, so it was running on runtime 4.0 instead. I had to manually correct the applicationHost.config file. Afterwards, everything worked like a charm.

This was just a short introduction to some of the problems I ran across when I upgraded my TFS. Today I came across another strange thing. When I create a new project collection, I apparently cannot create any SharePoint site for the project collection, and thus for any team project. Specifically, I get the following error when I create the project collection: “tf252005: Configuration of SharePoint Products failed with the following error: Server was unable to process request. —> Cannot retrieve the information for application credential key..”

Moreover, I realized that I cannot change any site collection administrators in SharePoint Central Administration either; it returns this error: “No Results matching your search were found.” (which apparently is quite common for SharePoint users).

The things you would want to check out are:

  • check if the TFSService account (whichever that is) is a farm administrator as well
  • check whether the service accounts are domain accounts, rather than local accounts
  • check whether the application pool credentials the TFS’s site collection and Central Administration run under are set to a domain account
  • check if SharePoint 2010 has set an app password (use the stsadm -o setapppassword -password [yourpasswordhere] command)
  • check if SharePoint 2010 Central Administration is configured to search the correct AD forest(s) (use the stsadm -o setproperty -pn peoplepicker-searchadforests -pv “[yourdomain],[yourusername],[yourpassword]” -url http://[yoursharepointserver] command)

I found that the solution for me was to configure the app password. Using the peoplepicker-searchadforests command without running the setapppassword command first returned this error: “Cannot retrieve the information for application credential key”. Moreover, keep in mind to run the setapppassword command on all front-end web servers before doing anything else, and also to use the same password on all of them.

Lab Management is a great piece of software that makes great use of virtual machines in order to create virtual labs where you, your team and your testers can test out an application in a clean environment. Lab Management integrates with Team Foundation Server 2010 and thus enables you to create the lab environments out of Visual Studio with ease.

So I started upgrading our TFS 2008 to the not-so-brand-new TFS 2010, on a completely different, new machine. Besides the hassle of upgrading the databases, preparing the user accounts, the shared folders, the services etc., I got to the point where I had everything working (except the SharePoint Services 3 integration; I will talk about that later) and was getting ready to install the Lab Management stuff.

Before starting off with the real subject of this post, let me describe, in short, the environment topology: Active Directory with several domain controllers running WS2003, plus one WS2008R2 machine running two instances of SQL Server 2008 R2 (one for TFS 2010, one for SCVMM 2008 R2 –> it is important not to use the first instance for SCVMM), System Center Virtual Machine Manager 2008 R2, Team Foundation Server 2010 and SharePoint Services 3. Also, in order to run SCVMM, that machine has the Hyper-V role enabled.

First things first, I installed the Hyper-V role on my Windows Server 2008 R2 machine and afterwards System Center Virtual Machine Manager 2008 R2, because Lab Management works with SCVMM. After setting everything up and creating the SCVMM configuration to work with Hyper-V, I got to the final (and, in the end, not so final after all) point where I would configure Lab Management in the Team Foundation Server Admin Console.

So I put in the machine’s fully qualified domain name and click Test, but then suddenly a dialog box pops up requesting a user account. So I enter the user account I created for the Lab Management stuff (TFSLAB), insert the password and click Test. The credentials are fine, so I click OK. Boom! I get this error:

TF260078: Team Foundation Server could not connect to the System Center Virtual Machine Manager Server: servername. More information for administrator: You cannot contact the Virtual Machine Manager server. The credentials provided have insufficient privileges on servername.

Ensure that your account has access to the Virtual Machine Manager server on servername, and then try the operation again.

Right. Now what? I double-check the password. Password’s fine. I double-check the username. Username’s fine. Obviously this doesn’t have anything to do with the credentials. I check the Configuring Lab Management for the First Time article on MSDN (here). Scroll down the site, and come across a Troubleshooting link. Click the link and come across a short text that basically tells me to check some blog or the forums. Check the blog. Nothing there about 260078. Check the forums. No similar error.

Obviously, I’m special! Search the Web some more. After about an hour or so, I decide to post in the MSDN forums: maybe a wise man does have an answer after all. Post the error in the forum. Wait for two hours. I’m notified on my phone that someone has replied to my post. Don’t really have the time to check the post at that moment, so I leave it for a couple of minutes, but then another notification alerts me! Surely I must have found gold! Two replies one after the other? The problem is as good as solved. Check the thread, only to find out that someone else is trying to figure out the same thing.

Ok, enough with the introductory chit-chat.

What I did:

1. (I don’t really know if it helps or not, but this was required for similar errors) Added TFSLAB (and eventually TFSSERVICE – the account the Team Foundation Server service runs under) to these AD groups: Pre-Windows 2000 Compatible Access and Windows Authorization Access Group.

I tried running the Lab Configuration again, but still no luck.

2. Changed the accounts the Virtual Machine Manager and SQL Server (this is absolutely required) and Virtual Machine Agent services run under to the TFSLAB account. I restart the services, but the Virtual Machine Agent doesn’t start. The Service Manager posts an extremely generic error message (The Virtual Machine Manager Agent service terminated with service-specific error %%-2147217405.), so I check the Event Viewer and find this: The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {9C38ED61-D565-4728-AEEE-C80952F0ECDE} and APPID {5364ED0E-493F-4B16-9DBF-AE486CF22660} to the user domain\tfslab SID (S-1-5-21-1004336348-790525478-1801674531-15332) from address LocalHost (Using LRPC). This security permission can be modified using the Component Services administrative tool.

As the message suggests, the VMM Agent cannot start because of a component. My suspicion is that TFSLAB doesn’t have privileges on that component, so I immediately open Component Services. However, the components are only listed by their friendly name, so I open the registry editor in order to find the component’s friendly name: Virtual Disk Service Loader. Sounds promising. I go back to Component Services, search for Virtual Disk Service Loader, right-click it in order to configure the security permissions and find that everything is grayed out, as if it were disabled. I check whether DCOM is enabled (right-click on the computer in the Component Services list) and find that it is.

I search online some more and find that, for Windows Server 2008 R2, Microsoft decided, for some “security” reason, that Administrators, no matter their level of godliness, are no longer permitted to configure anything in Component Services; instead, a user called TrustedInstaller has access (not even the godlike SYSTEM account is permitted access there –> WHY?!).

Some article on the Web stated that going back to the registry editor, to HKEY_CLASSES_ROOT\CLSID\{idHere} (replace {idHere} with that long ID you’re looking for, 9C38…), clicking the Permissions option in the CLSID’s context menu and granting FULL CONTROL to the Administrator should solve the problem. However, it didn’t. What I did instead (as a temporary workaround, because it was getting frustrating) was to add the TFSLAB account to the local Administrators group.

So, in conclusion:

  • add the TFSLAB account to the local admins
  • add TFSLAB to the AD built-in groups I mentioned earlier
  • run SQL Server instance service with the TFSLAB account
  • run the VMM service with the TFSLAB account

and you should have everything up and running (after finishing the configuration, of course).

Till next time,

A