
Should large enterprises self-host their authoritative DNS? – IBM Blog


In a recent post, we outlined the pitfalls of self-hosted authoritative Domain Name System (DNS) from the perspective of a start-up or midsize company piecing together a DIY system using BIND DNS or other open source tools. The main idea was that every company gets to a point where they outgrow their self-hosted, home-grown authoritative DNS systems. For whatever reason—be it functionality, cost, reliability or resourcing—most companies naturally come around to the need for a managed DNS service delivered by a third party.

Nonetheless, there is a certain class of large enterprises where self-hosted authoritative DNS operates under a different kind of logic. With global footprints and enough scale to tackle even complex technical projects in-house, these companies often default to building solutions instead of buying another company’s product.

The pros of self-hosting for large enterprises

There are several reasons why a large enterprise would want to build and host an authoritative DNS service on its own:

Specific functional requirements: Large enterprises often want to deliver their applications, services and content in a customized way. This can be anything from hyper-specific routing of DNS queries to system-level support for distinctive application architectures to compliance requirements.

Using existing resources: When companies have servers and technical resources deployed at scale around the globe already, using that footprint to deliver authoritative DNS often seems like a logical next step.

Control: Some companies simply don’t want to be dependent on a vendor, particularly for something as business-critical as authoritative DNS. Other companies have a “build it” culture that sees value in developing in-house approaches that nurture technical skills.

Theory vs. reality

These are all valid reasons to self-host your DNS at scale—at least in theory. What we’ve found from talking to large enterprises in various industries is that the perceived advantages of self-hosted authoritative DNS often go unrealized. The logic behind self-hosting looks good on a PowerPoint slide, but it rarely delivers actual business value.

Here are some areas where the reality of self-hosted authoritative DNS doesn’t match up to the theory:

Resilience: For any large business, downtime has a devastating impact on the bottom line. That’s why most authoritative DNS administrators insist on a secondary or failover option in case disaster strikes. Self-hosted authoritative DNS rarely includes one; building and maintaining a secondary system as a form of insurance is too resource intensive.

Brittle architectures: Most self-hosted authoritative DNS infrastructures are built on BIND, which usually requires a Rube Goldberg machine of scripts to operate. Over time, those scripts become harder to maintain as you account for new capabilities and operating requirements. One false move, such as a single coding error, could bring down your entire authoritative DNS infrastructure and take your customer-facing sites offline. For a large, complex enterprise, brittle BIND architectures and scripts can be especially perilous.
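To make the fragility concrete, here is a minimal, hypothetical sketch of the kind of SOA-serial-bumping logic that tends to accumulate around BIND zone files. Real glue scripts are usually longer and less careful; the point is that an unguarded edit (a stale or non-monotonic serial) is enough to make secondaries silently ignore your updated zone.

```python
import re
from datetime import date

def bump_soa_serial(zone_text, today=None):
    """Bump the SOA serial in a zone file using the YYYYMMDDnn convention.

    Raises ValueError instead of silently writing a bad serial, which
    would cause secondary servers to ignore the updated zone. This is a
    sketch: real zone files vary, and this regex assumes the common
    "NNNNNNNNNN ; serial" comment style.
    """
    today = today or date.today()
    match = re.search(r"(\d{10})\s*;\s*serial", zone_text)
    if not match:
        raise ValueError("no serial found -- refusing to guess")
    old = match.group(1)
    base = today.strftime("%Y%m%d")
    if old[:8] == base:            # same day: bump the revision counter
        rev = int(old[8:]) + 1
        if rev > 99:
            raise ValueError("more than 99 edits in one day")
        new = f"{base}{rev:02d}"
    elif old[:8] < base:           # new day: reset the revision to 00
        new = f"{base}00"
    else:                          # clock skew or a manual typo
        raise ValueError(f"existing serial {old} is in the future")
    return zone_text.replace(old, new, 1)
```

Even this toy version needs three guard clauses; multiply that by notify hooks, zone distribution and config reloads, and the "machine of scripts" problem becomes clear.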

Technical debt: When you run your own authoritative DNS, it’s easy to rack up a significant backlog of feature requests. This is especially true if you have a DevOps, NetOps or CloudOps team working against a deadline. Let’s face it: most of those DNS features are going to be delivered on a much longer timeline than any application development team requires.

Cost: A large enterprise that self-hosts may have done the math and concluded that building, deploying and maintaining an authoritative DNS system is worth the investment. In reality, these decisions usually happen without a deliberate cost-benefit analysis. In the long term, the outlay and the hidden opportunity costs of self-hosted authoritative DNS tend to outweigh any perceived financial benefit.

Staff turnover: DIY architectures only work for as long as the person (or the team) who built them stays with the company. If that person leaves the company for whatever reason, their institutional knowledge about how DIY architectures were built leaves with them. Some companies get to the point where they’re afraid to change anything because it might easily result in a downtime incident that’s difficult to recover from.

Automation: BIND doesn’t expose a modern Application Programming Interface (API) and wasn’t designed with automation in mind. DIY architectures usually aren’t built to support standard automation platforms like Ansible or Terraform, which makes it nearly impossible to orchestrate them with third-party tools. If you’ve got a DIY authoritative DNS, you’re probably stuck with manual changes that slow application development to a crawl.
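By contrast, managed DNS providers expose REST APIs that pipelines can drive directly, so a record change becomes a repeatable HTTP call instead of a hand edit. The sketch below is hypothetical: the endpoint, header name and payload schema are illustrative placeholders, not any specific provider's actual API, though NS1, Route 53 and others follow this general pattern.

```python
import json
from urllib import request

API_BASE = "https://api.example-dns.net/v1"   # hypothetical endpoint

def record_payload(zone, domain, rtype, answers, ttl=300):
    """Build an illustrative JSON body for upserting one DNS record.

    The schema is a placeholder; each real provider defines its own,
    but all follow roughly this shape.
    """
    return {
        "zone": zone,
        "domain": domain,
        "type": rtype,
        "ttl": ttl,
        "answers": [{"answer": [a]} for a in answers],
    }

def upsert_record(api_key, zone, domain, rtype, answers, ttl=300):
    """PUT the record to the (hypothetical) managed DNS API."""
    body = json.dumps(record_payload(zone, domain, rtype, answers, ttl))
    req = request.Request(
        f"{API_BASE}/zones/{zone}/{domain}/{rtype}",
        data=body.encode(),
        method="PUT",
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:        # network call; not run here
        return json.load(resp)
```

Because the change is just data in a versionable payload, the same call slots naturally into Terraform, Ansible or a CI job, which is exactly what a scripted BIND deployment struggles to offer.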

Managed DNS just makes sense

As a provider of managed DNS solutions, we’re certainly biased. However, from our perspective, the cons of self-hosted authoritative DNS clearly outweigh the benefits, even (or especially) for large enterprises that usually default to building their own systems. When you weigh the long-term cost of maintaining an authoritative DNS system—both the CapEx hardware and the OpEx personnel—a managed DNS solution simply makes economic sense.

Managed DNS solutions also help IT teams do more with less. When you consider the admin hours required to operate an authoritative DNS network at scale, there’s far more value in directing those resources to other strategic priorities. Having operated authoritative DNS on behalf of a good portion of the internet for 10 years ourselves, we know just how costly and arduous a task it can be.

Dealing with DNS migration risk

We get it. It’s difficult to change. Even when large enterprises are ready to move on from their self-hosted authoritative DNS architectures, they often balk at the significant risks that come with migration to a managed DNS service. When existing DNS tools become ingrained in a company’s technical DNA, it can be hard to even think about the complex web of dependencies that would need to change.

This is where secondary DNS offers a lifeline. Any managed DNS service (like NS1) can operate alongside a self-hosted authoritative DNS system, either as an independent platform or as a failover option. With a secondary DNS layer in place, administrators can migrate application workloads over time, testing out the capabilities of the managed system and gradually unwinding complex connections to internal systems.
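Wiring a managed provider in as a secondary typically takes only a few lines on the existing primary. The BIND 9 fragment below is a sketch: the zone name, key name, secret and IP addresses are placeholders you would replace with your provider's transfer endpoints and TSIG credentials.

```
# named.conf on the self-hosted primary -- placeholder values throughout
key "managed-dns-xfr" {
    algorithm hmac-sha256;
    secret "REPLACE_WITH_BASE64_SECRET";
};

zone "example.com" {
    type primary;                  # "master" on older BIND versions
    file "zones/example.com.db";
    # Allow the managed provider's secondaries to pull the zone...
    allow-transfer { key "managed-dns-xfr"; };
    # ...and notify them immediately whenever the zone changes.
    also-notify { 198.51.100.10; 198.51.100.11; };
};
```

With this in place, the existing system keeps serving as before while the managed platform receives every zone change automatically, which is what makes the gradual, low-risk migration described above possible.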

Operating a secondary DNS as a test environment also builds up confidence in the advanced features that a managed DNS service offers—things like traffic steering, APIs, DNS data analysis and other elements that deliver clear value but aren’t available in most self-hosted services.

Ready to move on from self-hosted authoritative DNS?

Get DNS that does more: IBM NS1 Connect
