I have to start off with two things I want you to bear in mind while you read this post:
- this is my absolute first production deployment of Windows 8.1 using MDT 2013 (OK, during these last 4 days I did hundreds of back-and-forth runs using the Microsoft Deployment Toolkit (MDT) along with WDS in order to find the most manageable deployment architecture, but still…)
- any comments are very welcome!
In order to get an easily maintainable and upgradeable, yet controllable IT infrastructure within the company, I’ve decided to deploy a few VMs running Windows Server 2012 R2 with the WDS role installed. I’ve also installed MDT 2013 (you can download it from here) and the Windows Assessment and Deployment Kit for Windows 8.1 Update (ADK – you can download it from here). The ADK is required in order to get MDT 2013 to work. Also, make sure that you don’t have any older versions of the ADK installed (such as the ADK for Windows 8.0, which usually comes high up in the search results when you look for ‘ADK Windows 8.1’).
Installing both the ADK (which should come first) and MDT 2013 is child’s play, but only if you remember to sign out after you install the ADK – this forces the PATH environment variable to get updated with the %ProgramFiles%\Windows ADK values. Trust me, this is a requirement for a smooth runtime experience with MDT 2013.
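If you want to double-check that the PATH actually picked up the ADK entries after signing back in, a quick sketch from PowerShell might look like this (the exact install folder name varies by ADK version and architecture, so the filter below is an assumption you may need to adjust):

```powershell
# List PATH entries that appear to point at the Windows ADK install folder.
# The folder naming differs between ADK versions, hence the loose pattern.
$env:Path -split ';' | Where-Object { $_ -match 'Windows (ADK|Kits)' }
```

If nothing comes back, sign out and back in (or reboot) before launching the Deployment Workbench.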
As a newcomer, one of the best approaches to learning MDT 2013 is to download the MDT documentation archive from here, but bear in mind that there are a few best practices missing from the documentation kit which will be extremely helpful in the long run:
- When you create your first deployment share, bear in mind to use a single-word share name (UNC path), different from the default ‘DeploymentShare$’. The same goes for the deployment share name and folder name. The reason is that you will eventually boot using a customized version of Windows PE (Preinstallation Environment) which might show you the list of task sequences you have defined within your deployment. If you’re like me and like to test things out, you probably don’t want your production images to be mixed with the staging ones. Therefore, I’ve created a deployment share called ‘MDT Staging’.
- The deployment share is nothing more than the name suggests: a share – a network share, to be specific. This basically means that while deploying the customized images of your OS, either you or your users will have to get access to the share. There are two options for this: you either manually send the share credentials out to your users, hoping that they won’t share these credentials with others and that they’ll type them in correctly – why shouldn’t they? The second option is to configure the credentials within an initialization file called bootstrap.ini (which is actually configurable from within the Deployment Workbench directly – simply right-click on the deployment share itself, choose Properties in the context menu, go to the ‘Rules’ tab and click the ‘Edit Bootstrap.ini’ button). Here you can simply set the following defaults: UserID, UserDomain and UserPassword. You might argue that this represents a security vulnerability because I’m saving a set of credentials which have access to one of my shares in clear text. I admit that, but as long as this specific account only has read access to my share (and write access to the ‘Logs’ folder within the deployment share), there’s no real reason for concern. Additionally, this user doesn’t even have to be a directory account; it can be a simple local account with read-only access to the share. And since we’re at the bootstrap.ini, it’s also worth sharing that the SkipBDDWelcome=YES default will help a lot as well: specifically, it will skip the welcome screen of the deployment wizard.
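As a sketch, a minimal bootstrap.ini covering the defaults above might look like this (the server, share, domain and account names are placeholders – substitute your own):

```ini
[Settings]
Priority=Default

[Default]
; UNC path of the deployment share (placeholder server/share names)
DeployRoot=\\MDT01\MDTStaging

; Account with read-only access to the share (plus write access to Logs)
UserID=mdt_share_user
UserDomain=CONTOSO
UserPassword=SomeLongPassword

; Skip the welcome screen of the deployment wizard
SkipBDDWelcome=YES
```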
- It might make more sense to go through the deployment as quickly and seamlessly as possible. Therefore, a few Skip defaults within the customsettings.ini (by the way, when you change anything within the ‘Rules’ tab in the main textbox, you’re actually updating the customsettings.ini, which is extremely convenient considering that you’d otherwise have to manually open and save a text file in an elevated Notepad) might help:
- SkipAdminPassword=YES (if you also configure the AdminPassword default, this will force the Administrator password page to be skipped) – whether you’re creating a reference image or a target image, you’d probably be better off with a unique administrator password referenced within the Workbench rather than in a bulky handwritten notepad somewhere in your office drawer
- SkipProductKey=YES – whether you’re creating a reference image or a target image, the product key will probably be a MAK which you can safely put in the task sequence (you don’t want your curious users to write this MAK down and use it back at home, right?), or you might even use a KMS to activate your OS. If you don’t have a key at all, don’t bother going through this deployment wizard page anyway: the installer will ask for it and you can just skip this step until you activate the OS
- SkipDomainMembership=YES – it’s best to have the domain configured directly within the customsettings.ini file using the JoinDomain, DomainAdmin and DomainAdminPassword values. Keep in mind that the Admin in DomainAdmin doesn’t mean that you need to put in your admin user’s password: instead, simply create a user within your Active Directory which is only allowed to create and delete Computer objects, along with read/write access to the properties of all the computers within the OU. This basically means that this will be a special user only allowed to join computers to the domain, which helps a lot in automating the deployment process
- SkipTimeZone=YES – instead, simply configure the time zone using the TimeZoneName default (e.g. ‘E. Europe Standard Time’). Remember that within Windows, you can get your current time zone and the names of all the other time zones using the tzutil command. After all, you’ll most likely deploy the computers served by a given deployment share within a single time zone.
- SkipApplications=YES – make this part of your task sequence instead; I’ll have more on this later on
- SkipRoles=YES – same as before, make this part of your task sequence instead
- If you’re configuring a target deployment (which, as mentioned at #1, should be a different deployment share for the best deployment experience), make sure that you’re also configuring:
- SkipCapture=YES – after all, you can configure the DoCapture default to whatever you’d like your task sequence to end with and, again, a simpler wizard will be much easier to manage in the long run
- You might test out different default values and different task sequence options before you actually deploy to your hardware devices, so having some of these defaults configured to NO or not at all (such as the domain defaults – you probably don’t want to add all your test machines to your directory) might make sense. However, rather than deleting them from your file, you can comment them out using the ‘;’ symbol. This is also super helpful when you create a new deployment share, because you can simply comment out or uncomment settings based on your deployment share’s target.
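Putting the defaults above together, a customsettings.ini sketch might look like this (all names, passwords and the time zone are placeholders; note the ‘;’ comments used to disable settings while testing):

```ini
[Settings]
Priority=Default

[Default]
SkipAdminPassword=YES
AdminPassword=SomeUniqueLocalAdminPassword

SkipProductKey=YES

SkipDomainMembership=YES
JoinDomain=contoso.local
DomainAdmin=mdt_join_user
DomainAdminPassword=SomeJoinAccountPassword

SkipTimeZone=YES
TimeZoneName=E. Europe Standard Time

SkipApplications=YES
SkipRoles=YES

; Target deployment shares only – comment out while building reference images
SkipCapture=YES
;DoCapture=YES
```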
When it comes to the actual deployment shares, there are a few things worth sharing:
- First and foremost, make sure that you always test your deployments using a VM (Hyper-V is probably one of the best virtualization technologies you can use for free right now for this purpose, especially because Gen2 VMs can both PXE boot and are UEFI capable). This is a best practice because you can always create a checkpoint and revert the machine back and forth just to make sure that your deployment works fine. It doesn’t make sense to wait a long time for your reference deployment to be created just to find out that a variable or some application is messing up the entire process. Additionally, using a VM ensures that only the most generic hardware drivers will be used and no funny mouse-or-whatever-device drivers get injected, as would happen if you used an old PC to test your deployments (actually, you shouldn’t use an old PC to deploy anything; you’d better get rid of it :-)).
- And since we’re talking about drivers, whatever you do, never ever add drivers to your reference image. Instead, add them to your target image only, because you might eventually need to buy a new PC which might have different specs than the original one: do you really want to create the entire reference image from scratch and install all the apps used within the company again?
- If you’re using PCs from known vendors (HP, Dell, Fujitsu, Lenovo etc.), make sure that you get the corresponding drivers from the enterprise support systems. In fact, there are some apps for that too, such as HP SoftPaq, ThinkVantage Update Retriever, but if you’re not able to use any of these, simply go through their enterprise support websites (here’s the one for Dell)
- Never ever download drivers from strange websites or aggregators (Softpedia and such). If the vendor has a website, use that website instead!
As a best practice, I’d also advise you to group all the drivers in an OS\Computer model hierarchy. Also, make sure that the folder name for the model is exactly the same as the model name reported by the vendor. You can get this model name by using the Get-WmiObject PowerShell cmdlet (Get-WmiObject -Class Win32_ComputerSystem).
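For example, to see the exact model string you should use as the folder name:

```powershell
# Returns the vendor-reported model string (e.g. 'Latitude E6540' –
# the actual value obviously varies from machine to machine)
(Get-WmiObject -Class Win32_ComputerSystem).Model
```

Copy this value verbatim into your driver folder name – even a trailing space will break the match later on.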
Another best practice is to create task sequences based on the PC models you have in the company, assuming these are brand PCs from known vendors rather than custom-made PCs. The cool trick here is in regard to drivers: you can control which part of the driver repository Windows looks into when it first installs by changing the following:
- In the Preinstall step within a task sequence, go to Inject Drivers, change the default selection profile to ‘Nothing’ and select the ‘Install all drivers from the selection profile’ radio button. At first this might not make any sense, because we’re apparently telling the deployment process to get all the drivers from nowhere (?!), but the fact is that:
- you configure (before the Inject Drivers phase) a Task Sequence Variable (from Add > General), name it DriverGroup001 and give it the value Windows 8.1\%model% (considering that you’re using an OS\Computer model hierarchy, as advised earlier)
- this will basically instruct Windows to look for drivers only in that computer model’s specific folder, not in the entire repository of all the drivers for all the PCs you’re using in your company
- unfortunately, if you’re using a custom-made PC you’ll get a generic computer model name instead, such as ‘All Series’ if you have an Asus motherboard.
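As a side note, DriverGroup001 can also be driven from customsettings.ini instead of a task sequence variable – a sketch, assuming the per-model section names match the vendor model strings exactly (the model below is a placeholder):

```ini
[Settings]
Priority=Model, Default

; Placeholder model name – must match Win32_ComputerSystem.Model exactly
[Latitude E6540]
DriverGroup001=Windows 8.1\%Model%

[Default]
; No DriverGroup001 here, so unknown models fall back to nothing
```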
Earlier in this post I mentioned that it’s fine to skip the applications selection page. The idea is actually to get better control over the applications you’re installing and also more insight into the applications which have quiet installers. Basically, rather than having the deployment process install the applications on your behalf as one bulky operation, you should create a new group right before the Windows Update (Pre-Application Installation) phase called ‘Custom tasks (Pre-Windows update)’ and have all your applications installed as Install Single Application steps. If you don’t like/need/want that kind of control, you could also create an application entry in the Applications group within the deployment share which depends on all the applications you want to install, and have this application added as an Install Single Application step in your new group. Of course, you might be wondering why you’d do that: the reason is that if you’re installing Microsoft applications (which you probably will), you should get updates for these applications too. You might also be installing chipset drivers, and this driver-type application should be installed first.
Anyway, the idea of having applications installed as install single application phases is to gain better control of the application installation process and finally to automate the entire deployment process altogether.
Another cool trick available in MDT (and not available in SCCM, at least not to my knowledge) is that you can temporarily suspend the deployment process for cases in which, let’s say, you need to manually download an installer or a ClickOnce application or whatever. All you have to do is copy the Tattoo step in the task sequence, paste it wherever you need the deployment process suspended and replace ZTITatoo with LTISuspend in the command line. This will automatically suspend the deployment process, allow you to run whatever tasks manually and, when you’re done (even if you need to restart), just double-click the resume shortcut which was created on the desktop (this automatically resumes the deployment process from where it left off). This trick helps when installing ClickOnce applications which require licensing (they normally exit with a 0 or 3010 code too soon and thus don’t get installed properly) or when installing apps or SDKs using the Web Platform Installer (such as the Azure SDK).
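Concretely, the copied Tattoo step is a Run Command Line step, so the edit amounts to swapping the script name (paths shown as they typically appear in an MDT task sequence):

```bat
rem Original Tattoo step command line:
cscript.exe "%SCRIPTROOT%\ZTITatoo.wsf"

rem Modified copy that suspends the deployment instead:
cscript.exe "%SCRIPTROOT%\LTISuspend.wsf"
```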
Last but not least, make sure that you enable the Windows Update steps in the task sequences of your deployments to the target computers only. Downloading updates during the deployment process on the reference computers makes the process take considerably longer (for example, in my tests it took an extra 3 hours to create the reference image if the computer was updated during reference image deployment) and thus doesn’t make much sense. Instead, you might be interested in updating the target computers only. Moreover, you could also add the update packages (though it is tremendous work to keep the Packages folder up to date in the deployment share) or you could install the Windows Server Update Services (WSUS) role on one of your servers and set the update server URL within the customsettings.ini file using the WSUSServer default.
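For the WSUS route, the relevant customsettings.ini entry is just the server URL (the hostname and port below are placeholders – 8530 is merely the usual default WSUS port):

```ini
[Default]
; Point the Windows Update task sequence steps at your internal WSUS server
WSUSServer=http://wsus01.contoso.local:8530
```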
Ok, that’s it for now.