BPuhl’s Blog

A little bit of everything without actually being much of anything

Application behavior with AD

Posted by BPuhl on August 8, 2008

Recently I received an e-mail from a rock star co-worker who, for purposes of this blog, we’ll call “Randy” (largely because that’s his name).  His question stemmed from some questions he was getting from a large enterprise customer, and (slightly edited) went something like this:


Is the MSIT AD team burdened with expectations that a DC’s name and IP address will remain the same, requiring that a new DC maintain the old name and IP address when it is replaced?

<company’s> AD team is burdened by users who hardcode applications to a DC’s name or IP address, which makes replacing DCs with new hardware, or introducing new 2008 DCs to replace old 2003 DCs, much more difficult.

I have said that ideally the AD team should be free of such constraints.  (TCO increases, added time delays negatively impact service availability, and they lose flexibility I want them to have.)

I have said that an application hardcoded to the DC name or IP is “poorly written”; instead it should do a serverless bind, understand SRV DNS records, etc.  (Windows can use the Locator; non-Windows applications may account for the majority, which I think we can address, partly with a registration system where application owners can be notified about name and IP changes, if accepted by management.)

Their AD team does not have executive sponsorship, and they get blamed when users escalate issues and say the DCs “broke” their application.

Does Microsoft’s AD team avoid this burden, or does it have executive support in other ways you could share?

I have now told the AD team that, at a minimum, they need to explain that these additional constraints add to the overall cost of AD.  Executives can approve the costs, but they need to understand that they are not following best practices: AD is a service with published location mechanisms (DNS, the Locator).  Similar constraints can further limit flexibility in the future.

Any comments on any of these statements are eagerly encouraged and appreciated.
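Randy’s point about SRV records is worth a concrete illustration.  A well-behaved client locates a DC by resolving SRV records such as `_ldap._tcp.dc._msdcs.<domain>` rather than pinning a single host.  Here’s a minimal sketch of the RFC 2782 selection logic a client would apply to the resolved records; the hostnames and values below are made-up sample data, not anything from this post:

```python
import random

# A DC locator query resolves SRV records like
# _ldap._tcp.dc._msdcs.corp.example.com, each carrying a
# priority, weight, port, and target host (RFC 2782).
# These records are illustrative sample data.
SAMPLE_SRV_RECORDS = [
    # (priority, weight, port, target)
    (0, 100, 389, "dc01.corp.example.com"),
    (0, 100, 389, "dc02.corp.example.com"),
    (10, 0, 389, "dc-backup.corp.example.com"),
]

def pick_dc(records):
    """Pick one DC per RFC 2782: lowest priority wins,
    ties broken by weighted random selection."""
    best = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == best]
    total = sum(r[1] for r in candidates)
    if total == 0:
        return random.choice(candidates)
    pick = random.uniform(0, total)
    running = 0
    for rec in candidates:
        running += rec[1]
        if pick <= running:
            return rec
    return candidates[-1]

priority, weight, port, host = pick_dc(SAMPLE_SRV_RECORDS)
print(f"binding to {host}:{port}")
```

The point of the exercise: because selection happens at query time, the client follows whatever DCs are currently registered in DNS, and a renamed or re-IP’d DC simply stops mattering.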


Depending on where you work (politics, management support, etc.), as you read this you might think that this type of situation is absolutely ridiculous, or else it’s completely familiar and it’s comforting to know that someone else lives in the same hell that you find yourself in each day at the office.

Here was my response (again, slightly edited):

There are only 7 infrastructure servers (which are not DCs) that I’m aware of with a hard-coded IP address requirement – and those are the 7 stand-alone DNS servers which make up our internal root.

We are incredibly aggressive AGAINST anyone/anything hardcoding an application to either a domain controller name or an IP address.  We have a slight benefit because, with dogfooding, we often have DCs offline.  So our SLA is that we will “always have capacity to provide services, but will never guarantee that any given DC will be online at any given time,” which works because we typically have 2-3 DCs offline for troubleshooting.

The closest we’ve come to an “accommodation” is the case of legacy NAS devices which had a dependency on the PDC.  After working with that team, we decided to implement a notification script, which sends them an e-mail within 15 minutes of the PDC role being moved to a new server.  They are then responsible for updating their device configuration.  We don’t give prior notice or anything like that; it’s just a batch script in a scheduled task that checks the PDC and sends e-mail.
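That scheduled check can be sketched in a few lines.  To be clear, this is a hypothetical reconstruction, not the actual batch script: the state-file name, function names, and the notify hook are all invented for illustration.  In production, the current PDC would come from a real FSMO query and the notification would go out over SMTP instead of `print`:

```python
import os

# Hypothetical sketch of the PDC-move notification check:
# a scheduled task records the last-seen PDC emulator and
# notifies only when the role has moved. The caller supplies
# the current PDC name (in reality, from an FSMO role query)
# and a notify callback (in reality, an SMTP send).
STATE_FILE = "last_pdc.txt"

def check_pdc(current_pdc, state_file=STATE_FILE, notify=print):
    """Return True (and notify) if the PDC role has moved
    since the last run; otherwise return False."""
    last = None
    if os.path.exists(state_file):
        with open(state_file) as f:
            last = f.read().strip()
    if current_pdc != last:
        with open(state_file, "w") as f:
            f.write(current_pdc)
        if last is not None:
            notify(f"PDC role moved from {last} to {current_pdc}")
            return True
    return False
```

Run from a scheduled task every 15 minutes, the first run just seeds the state file; every run after that is a no-op until the role actually moves.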

What I’ve said above is our “official” position.  The reality is that, operationally, it’s much easier for us to keep the same IP addresses on servers because we have a dependency on them: our IPsec policies and firewall (router ACL) configs list the DCs by IP address, and getting those updated is an operational hassle.  In the past 7 years, though, we’ve renamed every DC at least 3 times that I’m aware of, so anyone taking a name dependency would have a hard time.  Our network team has also re-IP’d the network at least twice that I know of, plus an additional time recently when they re-IP’d a datacenter, so taking an IP dependency would be bad news as well.

I guess, in short, we have avoided digging ourselves into that hole, so we haven’t really required executive-level support to get out of it.  We have just established clear guidelines for consuming our service, and told the application builders that they build at their own risk.  It probably also helps that we are so dynamic that any app which was hardcoded would be down more than it was up.

Hope this helps,


In typical IT organization fashion, we learned a long time ago that we can’t “say no” when the business comes around with a requirement.  But what we’ve found is that by publishing and evangelizing clear (or at least murky) guidelines, we’ve been able to head off some of these types of problems.

And like I said in my reply, it probably also helps that we keep changing our names and IP’s all the time.  So if they don’t follow our guidance, they are surely going to break at some point.  🙂


3 Responses to “Application behavior with AD”

  1. Mike Kline said

    Another interesting and good entry Brian.

    So if you guys were doing a complete hardware refresh of all your DC’s would you normally stand up the new DC’s with new names and use the old IP’s?


  2. BPuhl said

Yes, in fact we’re in the middle of a complete hardware refresh, and we are standing up the new hardware with a new name, swapping the IP’s (so the new hardware has the old IP), and promoting the boxes. All in the name of not having to deal with network ACL’s.

  3. KenB said

    We do the same thing (new names, old IP addresses)…mostly ’cause of hard-coded things that reside in many hard-to-reach places (DNS/WINS/DHCP IP addresses).

    But we do have some folks who have (generally 3rd party) applications (that are supposedly “AD aware” – HA!) that need to query AD (via LDAP) that keep wanting to hard-code in a SINGLE DC name (I guess they _never_ expect a DC to go down or have a hardware failure or a WAN outage or something…oh, it doesn’t in their TEST environment).

    After I laugh a bit (generally at the vendor’s support/dev guys…ok, maybe not so much laugh as ask them leading questions), like: you do know we are not a small company (hundreds of DC’s)…and we have multiple forests, and some of those are multi-domain forests?

    When I ask them how their application handles that (multi forest or multi-domain forest)…they get that blank look in their eye trying to figure out how to handle that…

    OK, I throw them a bone…we do have a round-robin-like “ldap” name from a DNS appliance that allows them to connect to (generally) 5 DC’s (per name) by querying a single name…but they still have to figure out the multi-domain/multi-forest scenario. The appliance will ping the IP address of the DC’s and if there is a response, it can pass back that IP address (TTL of about 2 seconds).

    For our “NOS” forest (where most employees log in from, and where most of the servers/workstations reside) I tell them if they can query a GC for what they want (user/group info, generally) they can use those round-robin “ldap” names and it will give them the info they want (sigh of relief on their part) – but since DC’s _do_ LDAP referrals, they can actually get to what else they want, at least within that NOS forest.
