Alex Speaking User Group - Community Event

During a Community Engagement in Oradea

Given my less frequent posts this year, you have probably realized by now that it has been quite a busy ride for me and the team. We’re just about to launch two new products (stay tuned!), we’ve signed a few sweet deals and we’ve kept the great flow going – I’ll keep you updated as I can disclose more; right now, everything is very hush-hush. And yet, community-wise, I have the busiest agenda ever for the upcoming months!

Here are some of the public conferences, meetings and user groups which I will attend as a speaker. If you happen to attend any of these, make sure you drop by and say ‘Hi!’. I’d love to chat with you over anything related to Azure, Microsoft, Xamarin, SQL or whatever your favorite topic might be.

Community Engagement Quasi-calendar

  • March 25th, DevExperience Iasi – I’ll talk about ‘High Scalability and Security for Web, Mobile and API Apps’ running in Azure and I was also invited to be part of a discussion panel about Microservices – guess which Fabric I’ll get to talk about (hint: it starts with Service…)
  • April 16th, Global Azure Bootcamp Oradea – yet again, Oradea is part of this fabulous global community event and this time it’s going to be bigger and better than ever before. I’m hosting the event in a new location with fellow MVPs who’ll attend as speakers and given this year’s topic, it will be a blast! Stay tuned, if you happen to be in Oradea on April 16th! (link coming up…)
  • April 23rd, CodeCamp Iasi – the gang in Iasi is doing it again: 1200+ developers will most likely show up to one of the largest community-driven events for software developers in Romania. My session at CodeCamp Iasi will be about Cloud Patterns and Best-Practices. And if you think I could talk the entire day about this, think again: I could keep on going for months, honestly. Instead, I only have 50 minutes available to cover the best of cloud architecture and thus this session promises to be a lot of fun!
  • April 26th-29th, WinDays – hosted by Microsoft Croatia in the beautiful country of… you guessed it: Croatia! I will deliver not one but two sessions at WinDays this year, both related to cloud topics: ‘SQL Database from a Developer’s Perspective’ and my favourite, ‘Know Your Customers, Know Your Apps!’
  • May 7th, CodeCamp Cluj-Napoca – basically more or less CodeCamp Iasi, but on the other side of the country, for an entirely different group of talented software developers
  • May 16th-18th, Microsoft NT Konferenca, in the wonderful country of Slovenia. Again, this is a conference I’m very excited about, especially since my last trip to Slovenia, which left me utterly surprised – Slovenia is a great country and exceeded my expectations in absolutely every aspect you could imagine. Here are some (huge) numbers about Microsoft NT Konferenca: 3 days long, 1700+ participants, 10+ one-day trainings, 150+ speakers and a tradition of 21 (yes, 21!) years. You could almost think of this as a mini-Build, European-style. Consider also that the event is hosted at the seaside in a luxurious hotel which offers a pool with warmed salt water – did you start searching for a plane ticket already? At Microsoft NT Konferenca, I’ll talk about ‘Using Azure for Dev-Test Environments’ and about ‘Knowing Your Customers, Knowing Your Apps’
  • I’ve saved the best for last: May 26th-27th, ITCamp Cluj-Napoca. This is the kind of conference which doesn’t need an introduction any more. This year however, given the list of speakers and their background, you’ll get the chance to learn a lot (and I mean A LOT) about security. The agenda is still being put together, so I don’t want to spoil any surprises the organizers are preparing…

So, there’s a lot of community stuff going on across Europe. And as I happen to be part of some of these events, either as an organizer or as a speaker, it’s my pleasure to declare the conferencing season OPEN!

See you there 😉

Speaking at Microsoft Summit & CodeCamp Iasi

Last week was one full of traveling experiences and speaking engagements at the two largest IT conferences in Romania: Microsoft Summit and CodeCamp Iasi. I got a chance to talk on the same subject at both conferences, namely Microsoft Azure Visual Studio Online Application Insights (this name is so Microsoft :-)) and, according to Andrei Ignat‘s (Microsoft Visual Studio and Development Technologies MVP) review here, I did a good job delivering this session.

Whilst this year’s Microsoft Summit focused a lot on networking, with lots of great opportunities to meet and chat with brave entrepreneurs, successful business all-stars, experienced technical fellows and gizmo masterminds, CodeCamp was a hardcore developer event, with not two, not three, but ten (10!) simultaneous developer tracks. Why such a big number, you might ask? Well, considering that there were at least 1,800 attendees at the event, you can imagine why :-). Don’t get me wrong, Microsoft Summit wasn’t any smaller either, especially in terms of attendees. Rumor has it that over 2,100 attendees registered, but the exact number hasn’t been made public yet.

However, the absolutely amazing thing about my Application Insights sessions this year at these two conferences was the fact that some developers who attended my session in Bucharest (at Microsoft Summit) decided to show up again (two days later) at the exact same session in Iasi (at CodeCamp), in order to get additional questions answered and take extra notes on the Application Insights usage scenarios.

This is not only overwhelming, but also extremely flattering! For those of you who attended any of my sessions: you were a great audience: THANK YOU!

For those of you who didn’t make it to either of these sessions, I’ve posted the slides further down this blog post. Be aware though that more than half of the session time was spent on demos and How-Tos rather than just slides – the recordings are yet to be announced by the Microsoft Summit organizers; as soon as they’re public, I’ll make them available on alexmang.com as well.

Also Speaking At CloudBrew Later This Month

CloudBrew AZUG

In addition, if you happen to be in Belgium at the end of the month (November 28th), make sure you register for CloudBrew – at CloudBrew I’ll focus my Application Insights session on IoT monitoring techniques and some other goodies.

Happy coding!

 

Public Events! Register Today!

Public Speaking @ Microsoft Summit 2015

As part of my continuous community involvement, for the next 30 days or so I will be busy traveling yet again across all of Romania (South, North-East and then West again) and Belgium. As you might already expect, I’m engaged in a few public events. If you want to drop by and say ‘Hi’, I’d be more than happy:

Unlike the events taking place in Romania – which are considerably large (1500+ participants) – CloudBrew is a very intimate event, with nothing but excellent talks, valuable networking opportunities, beer sampling (that’s why it’s called CloudBrew), excellent food and wonderful prizes. Both CodeCamp and CloudBrew are community-driven events (organized by CodeCamp and AZUG (the Azure User Group) Belgium respectively), but if you’re especially interested in cloud computing, then CloudBrew is without a doubt the event for you!

At these events I’m yet again going to cover Azure-related content. This time however, I’ll go deep into the service called Visual Studio Online Application Insights, show you tips and tricks on various patterns, show you how you could use Application Insights for any Internet of Things project and how to customize dashboards so they fit your DevOps team’s requirements. Lastly, you’ll also get a chance to see a complete IoT application running on a Raspberry Pi 2 powered by Windows 10 IoT, monitored using Application Insights – #CoolStuffAlert.

To get a sneak preview of what I’m putting together for these events, you still have a chance to watch the recording of my ITCamp session in May here:

ITCamp 2015 – Application Insights for Any App Must-Have Tool for Understanding Your Customers (Alex Mang) from ITCamp on Vimeo.

…or the recording David Giard and I did before my session there:

In addition to what you get from the roughly 50 minute video, please be advised that I’ve updated the presentation so that it’s up-to-date with the features which were added in the meantime – yes, Visual Studio Online is packed with lots of cool features for monitoring usage and application performance.

See you there!

By now you’ve probably heard that ASP.NET now supports WebHooks – and not only does it support them, but it gets along with them quite well.

Disclaimer: If you’ve read my posts before, you probably know by now that I’m not the trumpeting kind of guy who promotes things which were already decently promoted by team members, company blogs and other community leaders. More specifically, the announcement for WebHooks support was already made by Scott Guthrie, Scott Hanselman and others. If you missed any of these, please go ahead and check them out first.

The announcement regarding ASP.NET WebHooks support has been well covered for the last month or so. So rather than go through the announcement again, I wanted to detail the process of sending WebHooks. Before you read on, please make sure you read this blog post from Henrik F Nielsen on ‘Sending WebHooks with ASP.NET’ – the article is very thorough and well written, but it doesn’t explain a few things you’ll need if you’re new to WebHooks.

Basics

If you’re familiar with WebHooks, skip to Receiving WebHooks. Otherwise, happy reading.

The whole concept of WebHooks isn’t that new anyway: it’s a standardization wanna-be for autonomous requests going back and forth between autonomous web servers, by calling specific REST endpoints. When I say standardization wanna-be, I mean that the request which gets sent out to the target endpoint will have a specific request body format – a JSON object in a specific shape, as defined by the convention of independent groups working to define the guidelines which will eventually evolve into standards. So the sender is going to specify a few things, such as the reason why it ended up sending the request in the first place (this is called an event). In other words, WebHooks is nothing more than a convention on what the request body of an automated HTTP request should look like in order for it to get along with other external web services. This link from pbworks.com explains it in more detail.

Taking a closer look at the various services which claim support for WebHooks, such as SendGrid, PayPal, MailChimp, GitHub, Salesforce etc., you come to understand that whenever you, as a user, configure a particular service’s WebHook implementation, you get to a part where you usually put in a URL and possibly select a list of given events which may cause that URL to be hit by a new request. If you go over more services’ configuration pages for WebHooks, you’ll realize that this configuration part is fairly common to all of them and thus becomes a pattern.

Receiving WebHooks

Until recently, the difficult part was developing your service in such a manner that it understands WebHooks. That was simply because developing the next GitHub or PayPal overnight, so that users would eventually use it to get WebHook-generated requests sent to their own web services, was… well, let’s face it – unrealistic. Therefore, most articles online cover the part of receiving WebHooks and never forget to praise the ASP.NET teams in Redmond for the terrific work they did – they totally deserve it.

Sending WebHooks

However, what if you DO develop the next PayPal? Or maybe simply a number of independent services that you want to work and sporadically communicate with each other, in an event-driven manner?

Well, on one hand, considering that you want WebHooks to be sent out, you have to remember that a WebHook is in the end a fancy name for an HTTP request which contains a special request body format. Therefore, it’d be a no-brainer to instantiate an object of type HttpClient or WebClient and have the request issued accordingly. But still, remember that if your services are going to be used by external customers, they’ll eventually want these requests to go to their own services as well. In other words, your services should be able to issue requests, in an event-driven manner, to a multitude of HTTP endpoints, based on a series of configurations: which actions trigger which events, and which URLs should receive requests for them.
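
To make that naive approach concrete, here’s a minimal sketch of posting a single webhook-style request with HttpClient. The URL, event name and payload shape are made-up placeholders for illustration, not something the WebHooks packages prescribe:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class NaiveWebHookSender
{
    private static readonly HttpClient Client = new HttpClient();

    // Sends one webhook-style POST to a single subscriber URL. In a real
    // system you would repeat this for every registered subscriber that
    // opted in to the event being raised.
    public static async Task SendAsync(string subscriberUrl)
    {
        var payload = "{ \"event\": \"invoice_generated\", \"data\": { \"invoiceId\": 42 } }";
        var content = new StringContent(payload, Encoding.UTF8, "application/json");
        var response = await Client.PostAsync(subscriberUrl, content);
        Console.WriteLine("Subscriber responded with " + (int)response.StatusCode);
    }
}

As you’ll see below, the per-subscriber bookkeeping around this call is exactly the part the NuGet packages take off your shoulders.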

More specifically, consider that you develop the next most popular online invoicing SaaS API. Since you’re following the best practices for web services, you’ll most likely not generate the invoice and send it to an e-mail address directly in the web server’s code-behind, would you? Instead, you’d probably architect some sort of n-tier application where your front-facing web application receives the invoice generation requests, responds with a ‘promise’ that the invoice will be generated and pushes the request to a queue of some sort, so that a worker role which actually generates the invoices can do the work in a nicely load-balanced environment.

The question is now, how could the external clients get notified that a new invoice has been generated and possibly even sent through an e-mail at the e-mail address specified in the request? Well, WebHooks could solve this problem quite nicely:

  1. the worker role would first generate the invoice
  2. once it is generated, considering this is an event of its own type (e.g. invoice_generated), it would raise this event and call a URL the customer has configured, only if the customer chose to receive requests for this event type
  3. next, the worker role would try to send the invoice attached in an e-mail specified by the client when it originally created the request
  4. if the e-mail was sent successfully, the client could again be pinged at the URL the customer configured with another type of event (e.g. email_sent), considering that the customer chose to receive requests for this event type

It’s probably obvious by now that there’s a tremendous amount of work left to be done by the developer in order to send out a WebHook request, if that WebHook request is generated by an HttpClient object – or anything similar.

Don’t get me wrong – there’s nothing wrong with this approach. But there’s a better way of doing all this if-registered-get-URL kind of logic when it comes to WebHooks and .NET code.

Put The NuGet Packages To Work

At the time of this writing, there are exactly four NuGet packages containing the Microsoft.AspNet.WebHooks.Custom name prefix and the reason for this large number is going to be explained throughout the remainder of this post.

First, there’s a package called simply Microsoft.AspNet.WebHooks.Custom, which is the core package you want to install when you’re creating your own custom WebHook. Additionally, there’s a Microsoft.AspNet.WebHooks.Custom.AzureStorage package which will work like a charm when you want to store your WebHook registrations in persistent storage – and yes, by now I’ve spoiled the surprise: the NuGet packages not only send WebHooks, but they also handle the entire registration and event-based selection story for you, and this is not exactly obvious in my humble opinion. Third, there’s a Microsoft.AspNet.WebHooks.Custom.Mvc package which aids in the actual registration process, should your application run as an ASP.NET MVC application. Lastly, there’s a Microsoft.AspNet.WebHooks.Custom.Api package which does a great job by adding an optional set of ASP.NET Web API controllers useful for managing WebHooks, in the form of a REST-like API.

I’ll keep things simple in this post, so rather than focus on the magic which comes along with the .Mvc, .AzureStorage and .Api packages, I’ll simply create a console application that registers its own subscribers and sends WebHooks. In order to intercept the WebHooks and check that the implementation actually works, I’ll create a plain simple Web API application and add the required NuGet packages to it so that it can handle incoming WebHook requests.

The entire source code is available on GitHub here.

As you’ll see, the majority of the code currently runs in Program.cs. The work done in the Main method is simply about getting things ready; more specifically, I first instantiate the objects called _whStore and _whManager – the latter requires the _whStore as a parameter. These objects are responsible for the following:

  • _whStore is the store which holds the subscriber registrations; for each subscriber it keeps track of:
    1. What events he/she is interested in, in the form of Filters. This will instruct the manager object, which does the actual sending, to only send web hook requests when those specific events occur
    2. Its secret, which should ideally be unique – this secret is used to calculate a SHA256 hash of the body. The subscriber should afterwards only accept WebHooks which contain a properly calculated hash over their request bodies – otherwise, these might be scam WebHooks
    3. A list of HTTP header attributes
    4. A list of properties which are sent out with each and every web hook request with the exact same values, no matter what the event is
  • _whManager is the do-er, the object which will actually send the web hook request. Since it has to know to whom to send the web hook requests in the first place, it requires the WebHookStore-type object sent as a parameter in its constructor. In addition, it also requires an ILogger-type object as a second constructor parameter, which is going to be used as a diagnostics logger
class Program
{
    private static IWebHookManager _whManager;
    private static IWebHookStore _whStore;

    static void Main(string[] args)
    {
        // The store keeps the subscriber registrations (in memory, in this sample);
        // the manager uses the store to decide whom to notify.
        _whStore = new MemoryWebHookStore();
        _whManager = new WebHookManager(_whStore, new TraceLogger());

        SubscribeNewUser();
        SendWebhookAsync().Wait();
        Console.ReadLine();
    }

    private static void SubscribeNewUser()
    {
        // Register a subscriber ("user1") interested only in 'event1'.
        var webhook = new WebHook();
        webhook.Filters.Add("event1");
        webhook.Properties.Add("StaticParamA", 10);
        webhook.Properties.Add("StaticParamB", 20);
        webhook.Secret = "PSBuMnbzZqVir4OnN4DE10IqB7HXfQ9l";
        webhook.WebHookUri = "http://www.alexmang.com";
        _whStore.InsertWebHookAsync("user1", webhook);
    }

    private static async Task SendWebhookAsync()
    {
        // Raise 'event1' for "user1"; the manager only sends requests to
        // subscribers whose filters include this event.
        var notifications = new List<NotificationDictionary> { new NotificationDictionary("event1", new { DynamicParamA = 100, DynamicParamB = 200 }) };
        var x = await _whManager.NotifyAsync("user1", notifications);
    }
}
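
As a side note on the Secret set above: the receiving end is expected to validate the hash computed over the request body before trusting a WebHook. Here’s a generic sketch of such a check; it uses HMAC-SHA256, and the way the signature actually travels (header name, encoding) is an assumption for illustration, so check the receiver package’s documentation for the exact details:

using System;
using System.Security.Cryptography;
using System.Text;

static class WebHookSignatureValidator
{
    // Recomputes the HMAC-SHA256 of the raw request body with the shared
    // secret and compares it with the signature that came with the request.
    // 'receivedSignatureHex' is a placeholder for however the receiver
    // actually extracts the signature (e.g. from a request header).
    public static bool IsValid(string requestBody, string secret, string receivedSignatureHex)
    {
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
        {
            byte[] computed = hmac.ComputeHash(Encoding.UTF8.GetBytes(requestBody));
            string computedHex = BitConverter.ToString(computed).Replace("-", string.Empty);
            // A constant-time comparison would be preferable in production code.
            return string.Equals(computedHex, receivedSignatureHex, StringComparison.OrdinalIgnoreCase);
        }
    }
}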

 

The good thing about a simple ‘Hello, World’ sample application

The good thing in this sample is that WebHooks can be, in my opinion, self-taught if the proper explanations are added. More specifically, the reason for the existence of the IWebHookStore interface is due to the fact that you’ll most likely NOT use a MemoryWebHookStore in production workloads, simply because stopping the application and running it again will completely delete any subscriber registrations – ouch.

Therefore, implementing the IWebHookStore interface will help you a lot, meaning that you could implement your own database design for storing the subscriber registrations, along with all the properties and extra HTTP headers they require, based on the events (a.k.a. actions, a.k.a. filters) they chose in some sort of registration form. However, please be aware that the .AzureStorage NuGet package I mentioned earlier eases the development even further, by auto-“magically” doing the persistent storage part of the registration on your behalf – uber-cool! I’ll detail the process of using Azure Storage as your backend for web hook subscriptions in a future post.

Additionally, there’s an interface for the manager as well which only does two things (currently!) – verify the web hooks registered and create a new notification. There are a few things which are important for you to keep in mind here:

  1. Notification is done by passing in the user name as a parameter. If it isn’t obvious why you’d do that since you’ve already specified the users’ usernames upon registration, remember the flow: users register, an event occurs in the system on a per-user-action basis, and that particular user gets notified. The second parameter is an enumerable of notification dictionaries, which is actually a list of objects in which you specify the event which just occurred and which determines the WebHook request to be fired in the first place – since the notification can also send extra data to the subscriber in the request body, this parameter cannot be a simple string, and as such each notification requires two parameters when instantiated: the event name (as a string) and an object which will eventually get serialized as a JSON object (see the short sketch right after this list).
  2. I’d argue that the default implementation of IWebHookManager, namely WebHookManager, will meet most of your needs and there’s probably little to no reason to implement your own WebHookManager instead. If you’re not convinced, take a look at its source code (yes, Microsoft DOES LOVE OPEN-SOURCE!) and check out the tremendous work they have done so far on the WebHookManager class. I do have to admit though that, in terms of coding style, I’m very unhappy with the fact that if the manager fails to send the web hook request, no exception or error code will ever be thrown from the .NotifyAsync() method – this decision might have been taken since the method will most likely be called from a worker-role-type application which shouldn’t ever freeze due to an unhandled exception. If that is the case, it’s too bad that you, as a developer, cannot make that decision on your own. On the other hand though, remember the ILogger object (of type TraceLogger) you used when you originally instantiated the manager – many methods will eventually use the logger to send out diagnostics and these could help a lot when you’re trying to figure out whether any web hook requests were sent out.
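
To make point 1 concrete, here’s a small hypothetical follow-up to the console sample above: raising two different events for the same user in a single NotifyAsync call. The event names and payloads are made up; only events the subscriber actually registered a filter for end up producing outgoing requests.

// Hypothetical continuation of the Program class shown earlier.
private static async Task SendTwoEventsAsync()
{
    var notifications = new List<NotificationDictionary>
    {
        new NotificationDictionary("invoice_generated", new { InvoiceId = 42 }),
        new NotificationDictionary("email_sent", new { Recipient = "customer@example.com" })
    };

    // "user1" only registered a filter for 'event1' in SubscribeNewUser(),
    // so with that registration neither of these notifications would be
    // delivered – add the corresponding filters to see them go out.
    var result = await _whManager.NotifyAsync("user1", notifications);
}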

And since I’ve mentioned ILogger, let me remind you that if you add a trace listener to your application and use the already available TraceLogger type from the NuGet package, the diagnostics data will be routed to the listener you’ve added. Should that listener be of type TextWriterTraceListener, the traces the WebHookManager creates will be written to disk.


 <system.diagnostics>
   <trace autoflush="true" indentsize="4">
     <listeners>
      <add name="TextListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="trace.log" />
      <remove name="Default" />
     </listeners>
   </trace>
 </system.diagnostics>

Options, Options, Options…

I’ve mentioned earlier the usefulness of the interfaces the NuGet packages bring along, due to their flexibility in covering any scenario you’d need. There’s however something even better than that, and that’s Dependency Injection support. More specifically, the NuGet packages also have a so-called CustomService static class which you can use to create instances of your WebHookManager, WebHookStore and so on and so forth.

Conclusion

WebHooks are here to connect the disconnected nature of the web and are here to stay. They are certainly not a new technology, not even a new concept – but they could still revolutionize the way we trigger our REST-based endpoints to execute task-based operations. If you’re new to WebHooks, get started today. If you’re a hard-core ASP.NET MVC developer, integrate WebHooks in your projects today. And if you’re an Azure Web App developer, why not develop WebJobs triggered by WebHooks? Oops, I spoiled my next post’s surprise 🙂

 

Happy web-hooking-up the World!

-Alex

For the past 5 years, two great IT community volunteers, namely Tudor Damian and Mihai Tataran, along with a team of engaged volunteers, have put together what is in my opinion the greatest community-driven IT conference in Romania, namely ITCamp.

Last year’s edition gathered over 500 attendees, mostly mid- and high-level software developers, all keen to learn from and network with an impressive panel of speakers coming from all over the world, each of them an expert in the IT industry.

Given the public agenda available on http://itcamp.ro, this year’s edition will easily surpass last year’s in both content quality and quantity; let me explain:

  • on one hand, this year’s ITCamp will also host a *NEW* track of business-oriented sessions where you can get a lot of insights on how to manage IT risk, what the cloud business models are given the emerging cloud market worldwide, how to become a productive product owner and, one of my very favorites, how to manage intellectual property upon application launch
  • on the other hand, ITCamp 2015 has an impressive list of speakers, such as Paula Januszkiewicz – Enterprise Security MVP, Andy Malone – Enterprise Security MVP, Daniel Petri – Directory Services MVP, Andy Cross – Azure MVP and Microsoft RD, Raffaele Rialdi – Developer Security MVP, Tobiasz Koprowski – SQL Server MVP, David Giard – Microsoft Technical Evangelist, Adam Granicz – F# MVP, to name a few (of course, myself included 🙂 )

To quickly conclude, if you haven’t yet, now is your chance to register for ITCamp 2015 at http://itcamp.ro. The ticket costs around EUR 130.00, a bargain considering that this is a once-a-year opportunity to get really valuable networking, along with great sessions and wonderful food from the caterer – Grand Hotel Italia.

For the past month I had the opportunity to talk about Azure Search via the Azure DevCamp roadshow put together by the ITCamp community, with the sponsorship of Microsoft. Not only did they put together a great event series, but I also had the chance to meet wonderful people interested in cloud computing across the entire country: Bucharest on February 13th, Oradea on February 20th, Timisoara on the 21st and Cluj-Napoca on the 28th.

Below are my slides (in English) and further down this post is the video recording of my presentation in Cluj-Napoca. For whatever strange reason related to my Surface’s OS going to sleep just before my presentation and then not being able to find a particular .dll that is part of Newtonsoft’s JSON.NET, one of my demos didn’t run as expected in Cluj-Napoca – even though everything went smoothly during the other three events. I’ve also posted a few photos from some of these events in a photo gallery at the end of this post.

If you’re interested in Azure Search, feel free to download the slides and (if you’re OK with Romanian) watch my presentation’s recording.

VIDEO: Add Professional Search Features To Your Apps, Azure DevCamp 2015, Cluj-Napoca

Avaelgo

Here’s your chance to learn more about Web Development with Microsoft Azure, during an intensive 2-day course held in Cluj-Napoca. The course is organized by Avaelgo and is currently set to take place at the end of March. Even though still in draft form, the agenda is vast and covers most IaaS, PaaS and SaaS services available in Azure, and hence I totally recommend it.

Learn more about Avaelgo and the Web Development with Microsoft Azure course here.

Register today!

Alex

I have to start off with two things I want you to bear in mind while you read this post:

  • this is my absolute first production deployment (ok, during these last 4 days I did hundreds of back-and-forth steps using Microsoft Deployment Toolkit (MDT) along with WDS in order to find the most manageable deployment architecture, but still…) of Windows 8.1 using MDT 2013
  • any comments are very welcome!

In order to take advantage of an easily maintainable and upgradeable, yet controllable IT infrastructure within the company, I’ve decided to deploy a few VMs running Windows Server 2012 R2 with the WDS role installed. I’ve also installed MDT 2013 (you can download it from here) and the Windows Assessment and Deployment Kit for Windows 8.1 Update (ADK – you can download it from here). The ADK is required in order to get MDT 2013 to work. Also, make sure that you don’t have any older versions of the ADK installed (such as the ADK for Windows 8.0, which usually comes high up in the search results when you look for ‘ADK Windows 8.1’).

Installing both the ADK (which should come first) and MDT 2013 is child’s play, but only if you remember to sign out after you install the ADK – this will force the PATH environment variable to get updated with the %ProgramFiles%\Windows ADK values. Trust me, this is a requirement for a smooth runtime experience with MDT 2013.

As a newcomer, one of the best approaches to learning MDT 2013 is downloading the MDT Documentation archive from here, but bear in mind that there are a few best practices missing from the documentation kit which will be extremely helpful in the long run:

  1. When you create your first deployment share, bear in mind to use a single-word share name (UNC path), different than the default ‘DeploymentShare$’. The same goes for the deployment share name and folder name. The reason is that you will eventually boot using a customized version of Windows PE (Pre-installation Environment) which might eventually show you the list of task sequences you have defined within your deployment. If you’re like me and like to test things out, you probably don’t want your production images to be mixed with the staging ones. Therefore, I’ve created a deployment share called ‘MDT Staging’.
  2. The deployment share is nothing else than the name suggests: a share – a network share, to be specific. This basically means that whilst deploying the customized images of your OS, either you or your users will have to get access to the share. There are two options for this: you either manually send the share credentials out to your users, hoping that they won’t share these credentials with others and that they’ll get them right – why shouldn’t they? The second option is to configure the credentials within an initialization file called bootstrap.ini (which is actually configurable from within the Deployment Workbench directly – simply right-click on the deployment share itself, choose Properties in the context menu, go to the ‘Rules’ tab and click the ‘Edit bootstrap.ini’ button). Here you can simply put in the following value defaults: UserID, UserDomain and UserPassword. You might argue that this represents a security vulnerability because I’m saving a set of credentials which have access to one of my shares in clear-text format. I admit that, but as long as this specific account only has read access to my share (and write access to the ‘Logs’ folder within the deployment share), there’s no actual reason for concern anyway. Additionally, this user doesn’t even have to be a directory account; it can be a simple local account with read-only access to the share. And since we’re at the bootstrap.ini, it’s also worth sharing that the SkipBDDWelcome=YES default will help a lot as well: specifically, it will skip the welcome message on the deployment wizard.
  3. It might make more sense to go through the deployment as quickly and seamlessly as possible. Therefore, a few Skip defaults within the customsettings.ini (by the way, when you change anything within the ‘Rules’ tab in the main textbox, you’re actually updating the customsettings.ini, which is extremely convenient considering that you’d otherwise have to manually open and save a text file in an elevated Notepad) might help – a consolidated sketch of both rule files follows right after this list:
    • SkipAdminPassword=YES (if you also configure the AdminPassword default, this will force the Administrator password page to be skipped) – whether you’re creating a reference image or a target image, you’d probably be better off with a unique administrator password referenced within the Workbench, rather than in a bulky handwritten notepad somewhere in your office drawer
    • SkipProductKey=YES – whether you’re creating a reference image or a target image, the product key will probably be a MAK which you could safely put in the task sequence (you don’t want your curious users to write this MAK down and use it at home, right?) or you might even use a KMS to activate your OS. If you don’t have a key altogether, don’t bother going through this deployment wizard page anyway: the installer will ask for it and you can just skip this step until you activate the OS
    • SkipDomainMembership=YES – it’s best to have the domain configured directly within the customsettings.ini file using the JoinDomain, DomainAdmin and DomainAdminPassword values. Keep in mind that the Admin in DomainAdmin doesn’t mean that you need to put in your admin user’s password: instead, simply create a user within your Active Directory which is only allowed to Create Computer objects and Delete Computer objects, along with the option of configuring properties (read/write properties) on all your computers within the OU. This basically means that this will be a special user only allowed to join computers to the domain, which helps a lot in automating the deployment process
    • SkipLocaleSelection=YES
    • SkipTimeZone=YES – instead, simply configure the time zone using the TimeZoneName default (e.g. ‘E. Europe Standard Time’). Remember that within Windows, you can get your current timezone and the names of the rest of the time zones using the tzutil command. After all, you’ll most likely deploy the computers based on a deployment share only within a single time zone.
    • SkipApplications=YES – make this part of your task sequence instead; I’ll have more on this later on
    • SkipRoles=YES – same as before, make this part of your task sequence instead
    • SkipBitLocker=YES
    • SkipBDDWelcome=YES
  4. If you’re configuring a target deployment (which, as mentioned at #1, should be a different deployment share for the best deployment experience), make sure that you’re also configuring:
    • SkipCapture=YES – after all, you can configure the DoCapture default to whatever you’d like your task sequence to end with and, again, having a simple wizard will be way easier to manage in the long run
  5. You might test out different default values and different task sequence options before you actually deploy to your hardware devices, so having some of these defaults configured to NO or not at all (such as the domain defaults – you probably don’t want to add all your tests to your directory) might make sense. However, rather than deleting them from your file, you can comment them out using the ‘;’ symbol. This is also super helpful when you create a new deployment share, because you can simply either comment out or un-comment settings based on your deployment share’s target.
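
For reference, here’s a minimal sketch of how the two rule files discussed above might end up looking once the defaults from this list are filled in. The server name, share name, account names, passwords and time zone are all placeholders, so adapt them to your own environment:

 ; bootstrap.ini
 [Settings]
 Priority=Default

 [Default]
 DeployRoot=\\DEPLOYSRV\MDTStaging$
 UserID=mdt_share_reader
 UserDomain=CONTOSO
 UserPassword=SomeReadOnlyPassword
 SkipBDDWelcome=YES

 ; customsettings.ini (the 'Rules' tab)
 [Settings]
 Priority=Default

 [Default]
 SkipAdminPassword=YES
 AdminPassword=SomeLocalAdminPassword
 SkipProductKey=YES
 SkipDomainMembership=YES
 JoinDomain=CONTOSO
 DomainAdmin=mdt_domain_joiner
 DomainAdminPassword=SomeJoinPassword
 SkipLocaleSelection=YES
 SkipTimeZone=YES
 TimeZoneName=E. Europe Standard Time
 SkipApplications=YES
 SkipRoles=YES
 SkipBitLocker=YES
 SkipBDDWelcome=YES
 ;SkipCapture=YES (un-comment on target deployment shares, as per item 4)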

When it comes to the actual deployment shares, there are a few things worth sharing:

  1. First and foremost, make sure that you always test your deployments using a VM (Hyper-V is probably one of the best virtualization technologies you can use for free right now for this purpose, especially since Gen2 VMs can both PXE boot and are UEFI capable). This is a best practice because you can always create a checkpoint and revert the machine back and forth just to make sure that your deployment works fine. It doesn’t make sense to wait too long for your reference deployment to be created just to find out that a variable or some application is messing up the entire process. Additionally, using a VM will ensure that only the most generic hardware drivers will be used and no funny mouse-or-whatever-device drivers get injected, as they would if you used an old PC to test your deployments (actually, you shouldn’t use an old PC to deploy anything; you’d better get rid of it :-)).
  2. And since we’re talking about drivers, whatever you do, never ever add drivers to your reference image. Instead, add them to your target image only, because you might eventually need to buy a new PC which might have different specs than the original one: do you really want to create the entire reference image from scratch and install all the apps used within the company again?
  3. If you’re using PCs from known vendors (HP, Dell, Fujitsu, Lenovo etc.), make sure that you get the corresponding drivers from the enterprise support systems. In fact, there are some apps for that too, such as HP SoftPaq, ThinkVantage Update Retriever, but if you’re not able to use any of these, simply go through their enterprise support websites (here’s the one for Dell)
  4. Never ever download drivers from strange websites or aggregates (Softpedia and such). If the vendor has a website, use that website instead!

As a best practice, I’d also advise you to group all the drivers in an OS\Computer model hierarchy. Also, make sure that the model name is exactly the same as the model specified by the vendor. You can get the model specified by your vendor by using the Get-WmiObject PowerShell cmdlet (Get-WmiObject -Class Win32_ComputerSystem).

Another best practice is to create task sequences based on the PC models you have in the company, considering these are brand PCs from known vendors rather than custom-made PCs. The cool trick here is in regard to drivers: you can control the drivers which exist in the driver repository Windows looks into when it first installs by changing the following (see the sketch after this list):

  1. In the Preinstall step within a task sequence, go to Inject Drivers, change the default selection profile to ‘Nothing’ and also select the ‘Install all drivers from the selection profile’ radio button. This might not make any sense at first, because we’re apparently telling the deployment process to get all the drivers from nowhere (?!), but the fact is that
  2. you configure (before the Inject Drivers phase) a Task Sequence Variable (from Add > General) and name it DriverGroup001 and give it the value of Windows 8.1\%model% (considering that you’re using an OS\Computer model hierarchy as advised earlier).
  • this will basically instruct Windows to look only in a computer model’s specific folder for drivers, not in the entire repository of all the drivers for all the PC you’re using in your company
  • unfortunately, if you’re using a custom-made PC you’ll get generic computer model names instead, such as ‘All Series’ if you have an Asus motherboard.
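
To picture the trick, here’s a rough sketch of how the driver folder hierarchy and the task sequence variable line up; the computer model names below are made-up examples, and each folder name must match exactly what Win32_ComputerSystem reports as the Model:

 Out-of-Box Drivers
     Windows 8.1
         HP EliteBook 840 G1        <- drivers for this model only
         Dell Latitude E7440
         Lenovo ThinkPad T440s

 Task Sequence Variable: DriverGroup001 = Windows 8.1\%model%

During deployment, %model% resolves to the value reported by Win32_ComputerSystem, so each machine only gets the drivers from its own folder.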

Earlier in this post I mentioned that it’s fine to skip the applications selection page. The idea is actually to get better control of the applications you’re installing and also more insight into the applications which have quiet installers. Basically, rather than having the deployment process install the applications on your behalf as one bulky operation, you should create a new group right before the Windows Update (Pre-Application Installation) phase called ‘Custom Tasks (Pre-Windows Update)’ and have all your applications installed as Install Single Application phases. If you don’t like/need/want that kind of control, you could also create an application entry in the application group within the deployment share which depends on all the applications you want to install and have this application added as an Install Single Application phase in your new group. Of course, you might be wondering now why you’d do that: the reason is that if you’re installing Microsoft applications (which you probably will), you should get updates for these applications too. You might also be installing chipset drivers, and this application-driver type should be installed first.

Anyway, the idea of having applications installed as install single application phases is to gain better control of the application installation process and finally to automate the entire deployment process altogether.

Another cool trick available in MDT (and not available in SCCM, at least not to my knowledge) is that you can temporarily suspend the deployment process for cases in which, let’s say, you need to manually download an installer or a ClickOnce application or whatever. All you have to do is copy the Tattoo phase in the task sequence, paste it wherever you need the deployment process suspended and replace ZTITatoo with LTISuspend in the command line. This will automatically suspend the deployment process, allow you to run whatever tasks manually and, when you’re done (even if you need to restart), just double-click the resume shortcut which was created on the desktop (this automatically resumes the deployment process from where it left off). This trick helps install ClickOnce applications which require licensing (they normally exit with either the 0 or 3010 code too soon and thus don’t get installed properly) or install apps or SDKs using the Web Platform Installer (such as the Azure SDK).

Last but not least, make sure that you select the Windows Update options in the task sequence only for your deployments to the target computers. Downloading updates during the deployment process on the reference computers will force the deployment process to take considerably longer (for example, in my tests it took an extra 3 hours to create the reference image if the computer was updated during reference image deployment) and thus doesn’t make too much sense. Instead, you might be interested in updating the target computers only. Moreover, you could also add the update packages (though it is tremendous work to keep the Packages folder up-to-date in the deployment share) or you could install the Windows Server Update Services (WSUS) role on one of your servers and set the update server URL within the customsettings.ini file using the WSUSServer default.

Ok, that’s it for now.

Happy deploying,

Alex

Wow! No, seriously! WOW! Had the conference not been a good one, this post would have been written either during the event or long after the event.

During these two days I had the opportunity of joining great sessions given by Microsoft professionals who work in both IT departments and R&D departments. Just to name a few, among the sessions I had the opportunity to attend were: SQL Server 2014 In-Memory OLTP Query Processing, NoSQL on Azure, Intelligent System Service – the bridge and heart of IoT, Managing Technical Debt, SQL Server for Developers, Using WinJS for web and cross-platform applications, Build Your Custom Private Cloud Management Portal with Windows Azure Pack. Unfortunately though, I didn’t get a chance to attend the rest of the super-cool sessions the other speakers gave.

On the other hand, I also had the opportunity of giving an Elastic Scale session to a full room (people even decided to stand just to get all the intel). Should you have missed Microsoft Summit or the session I gave, scroll down a bit and you’ll get a chance to go through my slides. Also, I highly suggest being patient and waiting for the video recordings to be published on the conference’s website here: http://www.mssummit.ro/en/videos.

Here’s a photo I took a few minutes before the session started:

WP_20141113_10_57_48_Pro

After the event I was also invited to give a short talk about BizSpark and our partnership with Microsoft on Ziarul Financiar Live. The recording of this interview is available here: http://www.zf.ro/business-hi-tech/video-zf-live-special-romania-nevoie-start-up-uri-industria-it-c-sustina-dezvoltarea-urmariti-inregistrarea-emisiunii-alex-mang-ceo-keyticket-solutions-petru-jucovschi-microsoft-romania-13549279.

Microsoft Summit 2014

In just a few days (14, at the time of this writing), the second edition of Microsoft Summit will take place. For this year’s event, Microsoft put together an agenda for both IT specialists (network & infrastructure experts along with software developers) and business managers, and the list of speakers is simply, well… staggering!

If you didn’t get a chance to register for the event yet, consider this your opportunity to win one of the two tickets I’m offering for free to you, my loyal followers and readers. The net worth of these two tickets is 1398 RON (a little over $400.00) and it’s now your chance to get one of them completely free (meals included!).

Simply answer these three questions by either commenting to this post or by sending a private message via the Contact form:

  1. What options does one have for scaling his/her SQL database?
  2. What does database sharding mean?
  3. What is my Microsoft Summit session’s title?

Good luck!

Alex

 

P.S.: If you don’t know the answers, it’s your chance to learn more about DB sharding techniques via elastic scaling out during my session at Microsoft Summit, ‘Patterns for SQL Database Elasticity’ 🙂