Far too often, SLED organizations view monitoring solutions as “nice to have” luxury items that make IT administrators’ lives easier, but at a cost. In this post, we will focus on the benefits of consolidating an organization’s cross-platform database monitoring under a single solution.
Many years ago, it was common for SLED IT departments to have a small number of strategic, standardized database platforms. We’d frequently hear “We are an Oracle shop” or “We’ve standardized on SQL Server, with a small handful of less critical systems on MySQL.”
Over the past several years, for a variety of reasons, local and state government agencies and educational institutions have come to manage a much more diverse set of databases, both in the database platforms they use (SQL Server, Oracle, SAP, DB2, MySQL, MongoDB, PostgreSQL, etc.) and in the technologies they run those databases on (on-premises and cloud-hosted VMs, platform-as-a-service, containers, serverless, etc.).
As each of these platforms grows in criticality within an organization, the need arises to provide proactive monitoring and diagnostics to the administrative staff. Some government agencies solve this problem by acquiring a point solution for each database technology in their environment.
For example, they might use one tool for Microsoft SQL Server, Oracle Enterprise Manager/Cloud Control for Oracle, native AWS tools for cloud-hosted systems, and another third-party tool for MySQL. Not only does this lead to inefficiency in how SLED organizations manage their environments, given the differences in functionality and workflows across separate tools, it can also lead to an unnecessarily expensive monitoring infrastructure.
When a government agency runs five separate monitoring solutions to monitor and manage five separate database platforms, it must negotiate five separate vendor contracts without the economies of scale that help secure a better price. Not only is the upfront cost often higher, the ongoing maintenance contracts can spiral out of control, given that there are five separate maintenance contracts, each with its own uplift and differing terms and conditions.
Lastly, five separate monitoring tools mean five separate monitoring infrastructures, each with its own virtual machines, repositories, storage, and interfaces. Solutions like Quest’s Foglight for Databases offer deep coverage of most leading database platforms, including (but not limited to):
- Microsoft SQL Server
- IBM DB2
- SAP ASE
- SAP Hana
- Amazon RedShift
- Amazon Aurora
Rather than bringing in separate solutions for each platform listed above, products like Foglight can cover them all in a single solution, requiring a single monitoring infrastructure and saving on VMs, repositories, and storage. Foglight for Databases also requires only a single vendor contract, giving an IT department the negotiating power of scale to secure not only a best-of-breed solution, but one that comes at an attractive price.
Organization Resource Management
Finding the “sweet spot” of resource utilization is tricky. If a SLED organization “under-specs” the resource allocation (“hardware”) of a VM, it runs the risk of major performance problems or, worse, system instability. For an organization’s critical applications, no one wants to hear that productivity or reliability has suffered because someone tried to “cheap out” on hardware.
Because of this, IT administrators often “over-provision” their systems, in some cases by a wide margin. “What’s a few extra CPU cores or gigs of RAM going to hurt?” At a small scale, it’s probably worth a few extra hundred to a few thousand dollars to give yourself some breathing room on resource allocation. But when you multiply that breathing room across dozens, hundreds, or in some cases thousands of database servers, you end up with a tremendous amount of “wasted” hardware sitting idle, never consumed.
When you look beyond the hardware cost, other costs add up as well: data center floor space, power, and, probably biggest of all, software license costs tied to over-provisioned hardware. Many software products are licensed per core, so every unneeded core can amount to thousands of dollars per year of wasted spend.
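To put rough numbers on that, here is a back-of-the-envelope sketch of per-core license waste across a fleet. The per-core price, fleet size, and core counts below are illustrative assumptions, not real vendor figures.

```python
# Hypothetical estimate of annual license spend sitting on unused cores.
# All figures are illustrative assumptions, not real pricing.

def annual_license_waste(servers, extra_cores_per_server, price_per_core_per_year):
    """Annual cost of license spend on over-provisioned cores."""
    return servers * extra_cores_per_server * price_per_core_per_year

# Example: 200 database servers, each over-provisioned by 4 cores,
# at an assumed $2,000 per core per year.
waste = annual_license_waste(200, 4, 2000)
print(f"${waste:,} per year")  # → $1,600,000 per year
```

Even with conservative assumptions, a few extra cores per server compounds quickly across a large environment.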
So now that we’ve concluded that “right-sizing” hardware resources is important for saving costs, how do we do it? It’s unrealistic to assume that you will get sizing right from the get-go. No matter how much you plan and test, real-world application usage rarely matches what you expect. Monitoring a production environment 24×7 will go a long way toward ensuring that systems are sized properly. If systems are “under-sized,” performance problems are likely, and your monitoring solution should alert you so that you can diagnose and address them. But how do you detect when systems are “over-sized”?
Foglight for Databases collects hundreds of useful performance metrics across all of your production database servers, which can then be analyzed and reported on to determine how systems are being used.
Using CPU as an example, database servers that never exceed 10% CPU utilization could be candidates for a reduced core count, or possibly for consolidation with other underutilized systems. Database servers that never consume more than a trivial amount of I/O resources may be candidates to run on less expensive disk. Database servers that aren’t even being connected to may be candidates to decommission altogether.
Because Foglight collects these metrics 24×7 and retains months’ worth of history, you can be assured that your analysis is complete. You aren’t missing peak periods of utilization that happen sporadically, and you have enough historical data to back up whatever conclusions you reach.
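The kind of right-sizing analysis described above can be sketched as a simple pass over exported utilization history. The input format here is a stand-in; Foglight’s actual repository schema and export formats will differ.

```python
# Minimal sketch of a right-sizing check over exported monitoring data.
# The data shape (server -> list of CPU samples) is an assumption for
# illustration, not Foglight's actual export format.

def flag_for_rightsizing(history, cpu_threshold=10.0):
    """Flag servers whose peak CPU utilization never reaches the threshold.

    history: dict mapping server name -> list of CPU utilization samples (%).
    Returns servers that are candidates for a reduced core count or
    consolidation with other underutilized systems.
    """
    candidates = []
    for server, samples in history.items():
        if samples and max(samples) < cpu_threshold:
            candidates.append(server)
    return candidates

# Example with a few made-up samples per server.
history = {
    "erp-db01": [45.0, 72.3, 88.1],  # busy: leave as-is
    "hr-db02": [3.2, 6.8, 4.1],      # peak under 10%: candidate
    "gis-db03": [1.0, 2.5, 9.9],     # peak under 10%: candidate
}
print(flag_for_rightsizing(history))  # → ['hr-db02', 'gis-db03']
```

The key point is the one the section makes: a check like this is only trustworthy when the history spans months of 24×7 samples, so that sporadic peaks are actually in the data.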