OSD Driver Management the easy way…

Driver management in OSD has always been a hot topic for deployment folks. If you're using ConfigMgr for deployment you have the option of applying Driver Packages or auto-applying drivers. There have been many posts over the years discussing the merits of both, so I won't go over old ground…but if you want a refresher please refer to the many posts from @mniehaus and @jarwidmark.

Back when Windows 8 was released and everyone was preoccupied with the Start Screen, a new command was added to DISM for adding drivers to an offline image. Combined with a standard ConfigMgr package containing driver INF files, this command gives us a third driver management option in ConfigMgr. I had never thought of using it during OSD until I read a tweet from @agerlund mentioning it a few months back, and I thought I would give it a go.

This is how it works:

  1. Download drivers for the chosen model from the vendor website.
  2. Extract the drivers to a source folder and remove any unnecessary drivers (this is important…don't include everything, only what is needed. Weed out drivers for other operating systems and architectures.)
  3. Create a standard ConfigMgr Package (Not a driver package) pointing to the source folder containing the drivers. No program is needed.
    (screenshot: Drivers_01)
  4. Edit your Task Sequence and add a "Run Command Line" step in the "Post Install" phase of the Task Sequence…immediately after the default "Auto Apply Drivers" step.
  5. Rename the Step to something more appropriate.
  6. Add the following command line:

    DISM.exe /Image:%OSDisk%\ /Add-Driver /Driver:.\ /Recurse

  7. Tick "Package" and browse to the package you created in step 3:
    (screenshot: Drivers_02)
  8. Add a WMI query condition for the model you are deploying.
  9. Save changes.
  10. Deploy the Task Sequence
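A typical WMI query condition for step 8 looks like the following (the model string is only an example; check the exact value with "wmic computersystem get model" on the device):

```
SELECT * FROM Win32_ComputerSystem WHERE Model = "Surface Pro 3"
```

This goes on the Options tab of the step as a WMI Query condition against the default root\cimv2 namespace.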

This is what the logs look like:

Driver Package method:
(screenshot: Drivers_03)

DISM Add-Driver method:
(screenshot: Drivers_04)

As you can see from above, DISM does a pretty good job of logging to smsts.log. However, if you want to run a post-deployment verification check, you can run the following PowerShell command to ensure all the drivers were successfully added to the driver store:

Get-WindowsDriver -Online -All | Where-Object { $_.Inbox -eq $false } | Select-Object -Property OriginalFileName

Why I like this option:

  1. It’s simple. No need to import drivers into the ConfigMgr driver store, which has always been slow and clunky. You can pretty much do away with the driver store completely unless you want to add drivers to boot images through the console.
  2. It’s quick. In my limited testing I found this to be quicker than applying the same drivers with a Driver Package (via unattend.xml). Around 40% faster in my test on a Surface Pro 3.

The downside is that, as this is a legacy package, it will not benefit from single-instance storage…so it takes up more space on DPs if the content is duplicated.

Also note that DISM with the /Recurse parameter will import every driver contained in the package into the driver store, so keep the drivers clean.

- Surj


Windows 10 – Add Language Packs & Patches to Install.WIM for In-Place-Upgrade

Yes it is possible to add Language Packs and Patches to the Install.WIM used for In-Place-Upgrade.

Why:

  • If you have performed an offline Language Pack installation when deploying your computers, you must also offline-service the Install.WIM used for your In-Place-Upgrade…adding the same language pack to it.
  • If you want to speed up the deployment…removing the time associated with patching during the In-Place-Upgrade.
  • If you want to ensure the OS has the latest patches on first boot.

How:

1. Mount Install.wim from Windows 10 original source media.

Dism /Mount-Image /ImageFile:<Path to install.wim>\install.wim /Index:1 /MountDir:<Path to Offline folder>

2. Add your language Pack.

Dism /Image:<Path to Offline folder> /ScratchDir:%temp% /Add-Package /PackagePath:<Language pack Path>\<Language Pack cab file>

3. Set UI Language.

Dism /Image:<Path to Offline folder> /Set-UILang:<Culture> (culture names are listed at https://technet.microsoft.com/en-us/library/cc722435(v=ws.10).aspx)

4. Generate new Lang.ini

Dism /Image:<Path to Offline folder> /Gen-LangIni /Distribution:"<Root Path of Source Media>"

5. Add your patches

Dism /Image:<Path to Offline folder> /ScratchDir:%temp% /Add-Package /PackagePath:<Package Path>\<Security/Update cab file>

6. Unmount Install.wim, committing changes.

Dism /Unmount-Image /MountDir:<Path to Offline folder> /Commit
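Put together, the whole servicing pass looks something like this (all paths and file names below are illustrative placeholders; substitute your own):

```
REM Mount the image, add the language pack, set the UI language,
REM regenerate Lang.ini, add updates, then commit.
Dism /Mount-Image /ImageFile:D:\Source\sources\install.wim /Index:1 /MountDir:D:\Offline
Dism /Image:D:\Offline /ScratchDir:%temp% /Add-Package /PackagePath:D:\LP\lp.cab
Dism /Image:D:\Offline /Set-UILang:en-GB
Dism /Image:D:\Offline /Gen-LangIni /Distribution:D:\Source
Dism /Image:D:\Offline /ScratchDir:%temp% /Add-Package /PackagePath:D:\Updates\update.cab
Dism /Unmount-Image /MountDir:D:\Offline /Commit
```

Run this from an elevated command prompt using a DISM version that matches (or is newer than) the image you are servicing.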


History:

This blog post was born out of the issue seen when first attempting to perform an In-Place-Upgrade on a computer that had the en-GB language pack installed offline during OSD.

The custom WIM used during OSD was created using Windows 10 Enterprise en-US source media.  Therefore the assumption made was the source media for In-Place-Upgrade also needed to be Windows 10 Enterprise en-US.

Well, that is partly correct. However, when a Language Pack is installed offline, it fundamentally changes the language of the source at build time. To all intents and purposes, the computer is being built with en-GB media in this case.

Now when we come to do an In-Place-Upgrade, the languages no longer match and the following error is recorded in the last Compat Scan log "C:\$Windows.~BT\Sources\Panther\CompatData_…xml" (check the date modified):

<HardwareItem HardwareType="Setup_MismatchedLanguage"><CompatibilityInfo BlockingType="Hard"/><Action Name="Setup_MismatchedLanguage" ResolveState="Hard"/></HardwareItem>

To make sure they do match, we have to, in this case, add the en-GB language pack to the original en-US source media Install.wim used during In-Place-Upgrade.

The additional steps of setting the UI language and generating Lang.ini are also required. To take this full circle, one should be able to perform an In-Place-Upgrade on the aforementioned computer using original Windows 10 Enterprise en-GB media. I've not tested this though.

Dynamically set Computer name of Physical and Virtual Machines via CustomSettings.ini

I needed a clean way to set the hostname of physical and virtual machines in MDT via the CustomSettings.ini file. Sure, I could use the %SerialNumber% variable, but if I deployed onto a virtual machine I would hit an error, because the serial number of a VM tends to be too long for a hostname.

So rather than write a script and add extra steps to the Task Sequence, I got the desired outcome with a few extra lines in CustomSettings.ini.

Here’s how:

[Default]
Subsection=VM-%ISVM%

[VM-TRUE]
OSDComputerName=#REPLACE("VM-%MACADDRESS001%",":","")#

[VM-FALSE]
OSDComputerName=PH-%SerialNumber%

The first line, "Subsection=VM-%ISVM%", is added to the [Default] section of the CustomSettings.ini file. It essentially tells ZTIGather to process another section of the ini file: either VM-TRUE or VM-FALSE, depending on the value of the built-in %IsVM% Task Sequence variable.

The next part of the solution is to set the OSDComputerName variable.

For physical machines we simply use the %SerialNumber% variable. In the example I prefix the hostname with PH.

For virtual machines we use the MAC address instead of the serial number. This ensures we stay within the 15-character hostname limit. We cannot use the %MACADDRESS001% variable directly; we first need to remove the ":" separators. To do this we can get ZTIGather to process code directly from CustomSettings.ini. Anything inside "#" markers is treated as code; in our example we simply use the Replace function to remove the ":".
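The end result is just a string transform. As a quick illustration of the logic (in Python, purely to show the string handling; the MAC address and serial number below are made up):

```python
def vm_hostname(mac_address):
    # Equivalent of #REPLACE("VM-%MACADDRESS001%", ":", "")# in CustomSettings.ini:
    # prefix with VM- and strip the colon separators.
    return "VM-" + mac_address.replace(":", "")

def physical_hostname(serial_number):
    # Physical machines just get a PH- prefix on the BIOS serial number.
    return "PH-" + serial_number

# A 12-hex-digit MAC plus the VM- prefix gives a 15-character name,
# exactly at the NetBIOS hostname limit.
print(vm_hostname("00:15:5D:0A:1B:2C"))  # VM-00155D0A1B2C
print(physical_hostname("ABC1234XYZ"))   # PH-ABC1234XYZ
```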

This works with an SCCM OSD Task Sequence too (with MDT integration).

DISM Cmdlets fail to run in Win PE with MDT 2013 Update 1 – WORKAROUND

Whilst working with Windows 10 and MDT 2013 Update 1 my colleague Graham Hardie and I ran into an issue which was preventing us from running DISM Cmdlets in Win PE.

The issue became apparent whilst trying to implement Michael Niehaus’s Windows 10 Inbox App Removal script and specifically trying to run the script offline in Win PE.

The MDT boot image was successfully generated with the MDAC, .NET, PowerShell and DISM Cmdlets features, but it would fail to run the "Get-AppxProvisionedPackage" Cmdlet with the following error:

Get-AppxProvisionedPackage : The 'Get-AppxProvisionedPackage' command was found in the module 'Dism', but the module could not be loaded. For more information, run 'Import-Module Dism'.
At line:1 char:1
+ Get-AppxProvisionedPackage -Path e:\
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (Get-AppxProvisionedPackage:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CouldNotAutoloadMatchingModule

We found that the Microsoft.Dism.Powershell.dll file was becoming corrupt at boot-up…we've not had the chance to investigate what exactly is causing the corruption yet.

However, to workaround the issue you need to do the following:

  1. Delete the file from:
    x:\Windows\System32\WindowsPowerShell\v1.0\Modules\Dism
  2. Copy the file back down to the X: drive from the Deployment share:
    %DEPLOYROOT%\Servicing\x64\Microsoft.Dism.Powershell.dll 

You can obviously automate this as part of your Task Sequence. I simply added two "Run Command Line" steps to delete the file and copy the new file. The steps need to be added before you run any script that has a dependency on the DISM Cmdlets…in our case, before Michael's Inbox App Removal script:

The first step:
(screenshot: DISMCmdLetFix1)

Command Line:

cmd.exe /c Del "%SystemRoot%\System32\WindowsPowerShell\v1.0\Modules\Dism\Microsoft.Dism.Powershell.dll" /q /s

The second step:
(screenshot: DISMCmdLetFix2)

Command Line:

robocopy Z:\Servicing\x64 X:\Windows\System32\WindowsPowerShell\v1.0\Modules\Dism Microsoft.Dism.Powershell.dll /is

Note: add error code 1 to the Success Codes for this step…robocopy can report success with a non-zero return code.

A bug has also been opened on Connect.

Let us know if this works for you or better still if you identify the root cause of the issue.

Quickly Applying MBAM Policy

Implementing Microsoft Bitlocker Administration and Monitoring (MBAM) is a great way to manage Bitlocker on your devices and can be quickly included in the deployment task sequence so that devices are encrypted as part of the task sequence and policy is enforced right from the start… almost.

My preferred way to implement MBAM during an OS deployment is to configure Bitlocker to use the protector “TPM Only” and encrypt the disk as part of the task sequence.  Once the deployment is complete the computer reboots and Group Policy is then applied.  In this Group Policy are the MBAM policy settings that dictate that devices should be protected by both the “TPM Only” and “Startup PIN” protectors.  Therefore, when the MBAM agent refreshes policy, it realises the discrepancy between how Bitlocker is currently configured (TPM Only) and what Group Policy is stipulating and then goes ahead and prompts the user to configure a PIN via the MBAM GUI.

Doing it this way works great except for one simple fact: you need to wait for the MBAM agent policy refresh cycle to run and display the GUI to the user to resolve the configuration discrepancy.  This can take up to 90 minutes which means that during this time window a device is not configured with the Startup PIN as required.

If you don’t want to wait for this policy refresh you can shortcut it by using the below trick.


The MBAM agent actually detects straight away that the configuration set on the device does not match Group Policy, and logs an error 2 event in the MBAM event log, but it doesn't immediately display the MBAM GUI that obliges the user to add the PIN. You can use the Windows Task Scheduler to attach a task to this event so that, when it is logged, the GUI is loaded immediately.

The below screenshots show the configuration I use in the scheduled task.
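In outline, the task amounts to an event trigger plus an action. Exported as Task Scheduler XML it looks roughly like this (the event channel name and the MBAM install path are assumptions on my part; verify both against your environment):

```
<Task xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <Triggers>
    <EventTrigger>
      <!-- Fire when the MBAM agent logs its error 2 (policy mismatch) event.
           Channel name is an assumption; check it in Event Viewer. -->
      <Subscription>&lt;QueryList&gt;&lt;Query Id="0" Path="Microsoft-Windows-MBAM/Admin"&gt;&lt;Select Path="Microsoft-Windows-MBAM/Admin"&gt;*[System[(EventID=2)]]&lt;/Select&gt;&lt;/Query&gt;&lt;/QueryList&gt;</Subscription>
    </EventTrigger>
  </Triggers>
  <Actions>
    <Exec>
      <!-- Launch the MBAM GUI straight away so the user is prompted for a PIN. -->
      <Command>C:\Program Files\Microsoft\MDOP MBAM\MBAMClientUI.exe</Command>
    </Exec>
  </Actions>
</Task>
```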

I publish the scheduled task via a Group Policy Preference which means that it is automatically deployed to all in-scope computers as soon as they are built.  Doing it this way means it can be centrally managed and updated easily.

Fixing Office 2013 1920 Error during OSD on Windows 10 Technical Preview

I have been playing with the Windows 10 Technical Preview this week and ran into an obscure error when trying to install Office 2013 as part of my deployment Task Sequence in SCCM. The Office 2013 setup.exe returned a 1920 error when deploying the Task Sequence to virtual machines (physical machines were fine). The Office setup log reports the following error:

Error 1920. Service ‘Windows Font Cache Service’ (FontCache) failed to start. Verify that you have sufficient privileges to start system services.

Taking a closer look, I could see the service was in a stopped state and I could not start it manually…that was, until I restarted 🙂 Even with Office being the first application installation in the Task Sequence, it would fail. So if you are installing Office 2013 on the Windows 10 Technical Preview, make sure you add an additional restart immediately before the Office installation step.


Surface Pro 2 Firmware and OSD problems

I’m currently working on a Windows 8.1 project and recently needed to deploy a number of Surface Pro 2 devices. This should have been a pretty straightforward task, but I started to see random failures during the Task Sequence. I was using a Configuration Manager 2012 R2 Task Sequence (with MDT 2013 integration) and the latest Surface Pro 2 driver/firmware set (the March update). The issues occurred only on certain machines, where I would see application installation hangs and general instability during the build process.

After a bit of investigation I was pointed to the following KB Article:

KB2963062: Surface Pro 2 devices intermittently cannot boot past the UEFI screen

Although the symptom does not describe my issue, the following does make me wonder if there were other issues with the March update:

This problem occurs because of a negative interaction between new libraries in the March 2014 UEFI update, and hardware components on a very small percentage of Surface Pro 2 devices. This interaction results in a timing issue in which the hardware component is not ready for the instructions that it receives from the UEFI, and this results in the boot process being unable to continue. Subtle differences in hardware component tolerances cause this problem to occur more frequently on some devices.

The KB article links to an updated version of the firmware, and by injecting the new firmware into my driver package for the Surface Pro 2 I was able to build successfully again.

Moral of the story…if you are deploying Surface Pro 2s, make sure you update the firmware in your driver pack to this version:

http://download.microsoft.com/download/5/7/A/57A4F5B1-FB4B-478A-B780-7A834B24C835/May%202014%20Pro%202%20FW.zip