More easily manage DSC nodes with sample Configurations–Part 2

In Part 1, we went over some of the new modules and scripts that I put together to help track GUID and Thumbprint information on a DSC Pull Server, and then used the install-*.ps1 scripts to install a new pull server as well as "Register" a node with it.  For Part 2, I wanted to go over some of the basics of the layout, as well as the sample configuration and configuration data, so you can get started actually making declarative changes.  If you're familiar with DSC this will largely be a review, but if not, it may help reveal some of the method behind the madness.

Infrastructure as Code, Dev-Ops, and how it drives the Model


Yeah, that's the Dev-Ops drum you hear in the background, and we're all going to be marching to it or trampled by it.  The thing many IT teams don't understand is that Dev-Ops isn't a department anymore; it's an operations model, one that every department involved in a service needs to be part of.  Without getting too far into a tangent, we should understand that Infrastructure as Code is a model of system management that is exceptionally agile, with unparalleled change control when implemented correctly.  System changes can be tested, approved, reliably applied to production, and easily reverted as needed across thousands of systems.  This means it fits cleanly into that Dev-Ops model, which is why Microsoft is pushing DSC so hard and getting it adopted by IaC platforms such as Chef and Puppet.  However, that change control is not native to IaC itself, nor even to DSC; it comes from how configurations are stored: as code.  In our case: PowerShell.

See, at the end of the day, all my settings for EVERYTHING should be lines of code.  By doing this I can leverage Visual Studio, GitHub, or whatever other repository you're comfortable with, along with the change management engines native to those solutions.  Different people can fork, check out, merge, and maintain that final configuration file … which, when approved, can then be pushed to all the nodes in the company and reported on.  Need a test lab that matches production?  Just grab that configuration and go.  Need to apply the new test lab changes to prod?  Copy the tested config back.  IaC basically falls back on a well-established code submission process to give reliable change control that fits within a strategy and leverages tools that used to belong to … well … Dev-Ops.  Thus Dev-Ops is now a model.


So now if we look at the Pull Server a bit, the folder structure may begin to look a little clearer.  Everything in the $env:HOMEDRIVE\Program Files\WindowsPowerShell\DSCService folder is a "core component".  What I mean by that is it contains final configurations, runtime settings, and other pieces that keep IaC up and working.  In contrast, the folder structure under "$env:HOMEDRIVE\DSC-Manager" is code: things you'll want to copy up to GitHub or some other repository in order to utilize proper change management.  The _only_ reason I separated files like dscnodes.csv was that it made it easier to select entire folders to sync to your repository of choice, so the entire solution fits neatly within the Dev-Ops goals.

Make it so.  Configuring your Intent and Target

Because we don't have Puppet or Chef, we need to rely purely on PowerShell Configurations and hashtables to make up our code.  There are plenty of examples out there on this, many even within the various modules themselves, but I've always found them overly simple and thus not a good representation when it comes to organizing data.  What's more, I'm used to the idea of working with a file, like a config or .ini, as it's just an "easy" way to organize things.  Need to add settings for a node?  Open a file.  Need to change the way all DCs behave?  Open a file.  So, following this concept, I wrote functions that look for and import files found in certain directories (a minimal sketch of the idea follows).  This is basically how PowerShell works anyway … except you normally run a few cmdlets and manipulate the in-memory arrays and functions as needed … I just put some glue in there.
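Here's a minimal sketch of that import-by-directory idea; the folder path is illustrative and the real logic lives in the module's functions, so treat this as the concept rather than the actual code:

# Assume every .ps1 under the ConfigurationData folder defines part of the target data.
# Dot-sourcing each file pulls its contents into the current scope.
Get-ChildItem -Path 'C:\DSC-Manager\ConfigurationData' -Filter '*.ps1' |
    ForEach-Object { . $_.FullName }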

Configuration\MasterConfig.ps1 contains the intent, or "rules", for the environment.  Basically, it's just one big "configuration".  ConfigurationData\labhosts.ps1, on the other hand, contains the list of nodes and any node-specific rules; in PowerShell terms, it's a massive hashtable.  By separating the files this way it becomes easier to mix and match various configurations and check in/out the various sub-parts.  For example, keep a second "ProdHosts" file so that you can push a build to either environment with ease.  Use your code repository to roll back the "MasterConfig" if things go south.  Larger environments will need more complex solutions, but this framework should be a good start in contrast to all those big "flat files" you keep seeing as examples.
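To make that concrete, here's a hedged sketch of what a labhosts.ps1-style file might contain; the node names and the Service/Network keys are illustrative stand-ins for whatever your environment actually uses:

@{
    AllNodes = @(
        @{
            NodeName = '*'                        # settings shared by every node
            PSDscAllowPlainTextPassword = $false  # credentials get encrypted with certs instead
        }
        @{
            NodeName = 'LABDC01'
            Service  = 'DomainController'         # used to filter which composite configs load
            Network  = 'Private'
        }
        @{
            NodeName = 'LABWEB01'
            Service  = 'WebServer'
            Network  = 'Private'
        }
    )
}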

One last note on MasterConfig.ps1: if you open it, you'll quickly see it simply calls a bunch of resources and doesn't do much at all other than pass on variables:

[Screenshot: MasterConfig.ps1]
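For those reading along without the screenshot, a configuration of this shape would look roughly like the following; the module name, resource names, and parameters are my assumptions for illustration, not the file's exact contents:

Configuration MasterConfig
{
    # Composite configurations are packaged as resources inside a module
    Import-DscResource -ModuleName DSCMConfigs

    # Every node gets the baseline settings
    Node $AllNodes.NodeName
    {
        xDSCMBase Baseline
        {
            Network = $Node.Network
        }
    }

    # Only nodes tagged with the matching Service get the role configuration
    Node $AllNodes.Where{ $_.Service -eq 'DomainController' }.NodeName
    {
        # Role-specific composite configuration (parameters omitted for brevity)
        xDSCMDomainController DCRole { }
    }
}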

In this case most of the configuration is actually stored in what's called a "composite configuration".  Microsoft has a decent blog post on this, but the idea is that it allows you to split the configuration into multiple smaller configurations that are loaded into memory via a module (see the Import-DscResource command) and then applied as needed (notice I filter which configurations load depending on which "Service" I say the server handles).  This not only makes the configurations much more dynamic and usable, but it means you can check out parts of your configuration and update them independently from the main config.  Now, in contrast, we can go into the xDSCMBase resource and see actual settings:

[Screenshot: xDSCMBase composite configuration settings]

Now we see some settings; for example, every node on the network marked "private" gets a specific DNS server set (hurray for private clouds).  Also notice there are additional filters in case someone wrote specific DNS overrides for their nodes.  By doing this, we can now have strict, version-controlled standards for all roles and services in the infrastructure.  Tending cattle, not pets, as they say.  Every configuration module represents company standards that are known.  Any "unique" settings can be called out directly in the labhosts file.  It's up to you to build these files out … but hopefully a functional example is in place that is infinitely more scalable.
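A hedged sketch of the kind of logic that screenshot implies; the parameter names, the DNS address, and the use of the xNetworking resource are all assumptions on my part:

Configuration xDSCMBase
{
    param (
        [string]$Network,
        [string[]]$DNSServer
    )

    Import-DscResource -ModuleName xNetworking

    # "Private" nodes get the standard DNS server unless an override was supplied
    if ($Network -eq 'Private' -and -not $DNSServer)
    {
        $DNSServer = '10.0.0.10'
    }

    if ($DNSServer)
    {
        xDNSServerAddress PrivateDNS
        {
            Address        = $DNSServer
            InterfaceAlias = 'Ethernet'
            AddressFamily  = 'IPv4'
        }
    }
}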

Oh, where are these files?  Well, if we have an Import-DscResource command then we know it's a module … I'll let you take a wild guess which folders I stored the modules in.

How the functions pull this together

So by now you've seen the build-and-deploy script in the root of the folder.  It declares a ton of variables, then runs a mere three lines (technically the last one is commented out).  The variables are NOT strictly needed, as all the functions default to the declared values, but the variables double as a sort of configuration.ini as well as a reference, so I left them in place.  There is a custom xDSCManager module that contains the 3 functions (and then some; there are functions in there that only exist to serve other functions, or are experimental).

[Screenshot: xDSCManager module functions]

The first key function run is Update-DSCMTable:

Update-DSCMTable -ConfigurationData $ConfigurationData -ConfigurationDataFile $ConfigurationDataFile -FileName $PullServerNodeCSV -CertStore $PullServerCertStore

Again, all the variables are optional … the function only actually requires ConfigurationDataFile and ConfigurationData.  What this does is pretty simple, actually: it scans the ConfigurationData file (the labhosts/target file that contains all the nodes) and ensures every machine has a GUID assigned for management, generating one if it's missing.  It will then also crawl the CertificateStores directory for public key files that match the node machine name; if found, it records the thumbprint as well.  It basically keeps the dscnodes.csv file up to date and relevant in a pull configuration.
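A simplified sketch of what that might look like under the hood; the .cer naming convention and the CSV column names are assumptions based on the description above, not the module's actual code:

# Load any existing rows so GUIDs already assigned are preserved
$existing = @{}
if (Test-Path $PullServerNodeCSV) {
    Import-Csv $PullServerNodeCSV | ForEach-Object { $existing[$_.NodeName] = $_ }
}

$rows = foreach ($node in $ConfigurationData.AllNodes | Where-Object { $_.NodeName -ne '*' }) {
    # Reuse the recorded GUID, or generate one if it's missing
    $guid = if ($existing[$node.NodeName]) { $existing[$node.NodeName].NodeGUID }
            else { [guid]::NewGuid().Guid }

    # Crawl the cert store for a public key file matching the node name
    $thumb   = ''
    $cerFile = Join-Path $PullServerCertStore "$($node.NodeName).cer"
    if (Test-Path $cerFile) {
        $thumb = (New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $cerFile).Thumbprint
    }

    [pscustomobject]@{ NodeName = $node.NodeName; NodeGUID = $guid; NodeThumbprint = $thumb }
}
$rows | Export-Csv $PullServerNodeCSV -NoTypeInformation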

The next function is Update-DSCMConfigurationData:

$UpdatedConfigurationData = Update-DSCMConfigurationData -ConfigurationData $ConfigurationData -ConfigurationDataFile $ConfigurationDataFile -FileName $PullServerNodeCSV

This is the first "real trick".  See, most normal people (I like to pretend I'm still normal) will think in terms of server names, but DSC pull configurations require the use of GUIDs.  Plus, we need thumbprints and file paths in order to encrypt passwords.  The function will grab the configuration hashtable, "update it", and return a hashtable that contains all the missing info.  Awesome example time:

[Screenshot: the updated ConfigurationData hashtable]
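Roughly, the transformation looks like this; the GUID and thumbprint values are made up, while Thumbprint and CertificateFile are the standard ConfigurationData keys DSC uses for credential encryption:

# Before: a node entry as written in labhosts.ps1
@{
    NodeName = 'LABWEB01'
    Service  = 'WebServer'
}

# After: the same entry as returned by Update-DSCMConfigurationData
@{
    NodeName        = '4e68ffa6-1f28-4ec2-bbe0-1e3f0f30e0b7'     # the management GUID from dscnodes.csv
    Service         = 'WebServer'
    Thumbprint      = 'A1B2C3D4E5F60718293A4B5C6D7E8F9012345678' # used to encrypt credentials
    CertificateFile = 'C:\Program Files\WindowsPowerShell\DscService\CertificateStores\LABWEB01.cer'
}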

With an "updated" hashtable that pulled info from the dscnodes.csv file, we can now run the configuration.  The final script not only generates the MOF files, it also generates the checksums and copies the files to the appropriate folder on the IIS service.  In other words … it enables the entire configuration.
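That generate-and-publish step, sketched under the assumption of default pull server paths (your build path may differ):

# Compile one <GUID>.mof per node using the updated configuration data
MasterConfig -ConfigurationData $UpdatedConfigurationData -OutputPath 'C:\DSC-Manager\Build'

# Create matching <GUID>.mof.checksum files so pulling nodes can validate downloads
New-DscChecksum -Path 'C:\DSC-Manager\Build' -Force

# Publish everything to the pull server's configuration folder on IIS
Copy-Item -Path 'C:\DSC-Manager\Build\*' `
          -Destination "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration" -Force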

Our final major function is Update-DSCMModules:

Update-DSCMModules -SourceModules $SourceModules -PullServerModules $PullServerModules

This is not an everyday function, but its purpose is to package up all the modules in a directory (or a single module) and copy them into the modules repo for the various nodes to download.  The final zip file name contains the version info, and a checksum is generated as required.  This is purely a quality-of-life function, especially if you just used Install-Module to dynamically download a new module and need to make it available to the environment.
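A sketch of that packaging convention; the ModuleName_Version.zip naming plus a matching checksum file is what a pull server expects, while the loop itself and the paths are my own illustration:

$PullServerModules = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"

foreach ($module in Get-ChildItem -Path $SourceModules -Directory) {
    # Read the version from the module manifest
    $version = (Test-ModuleManifest -Path (Join-Path $module.FullName "$($module.Name).psd1")).Version
    $zip     = Join-Path $PullServerModules "$($module.Name)_$version.zip"

    # Pull server convention: ModuleName_Version.zip plus a .checksum file beside it
    Compress-Archive -Path "$($module.FullName)\*" -DestinationPath $zip -Force
    New-DscChecksum -Path $zip -Force
}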

BONUS – Request-NodeInformation

In the Dev-Branch I've also added a simple function to get node status.  Request-NodeInformation is a sort of "simple report" to see how machines are checking in:

[Screenshot: Request-NodeInformation output]

And yes, it ALSO uses dscnodes.csv to convert the node names "back to English".
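Usage is as simple as it sounds; I'm assuming here that the function falls back to its default parameter values the same way the others do:

# Query the pull server's status data; GUIDs come back translated to node names
Request-NodeInformation | Format-Table -AutoSize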

So what’s next?  In part 3 we will look at the last “complicated” portion of managing DSC with pure PowerShell:  Passwords.
