“Faster and faster, more and more” is the mantra for application development teams these days. As these teams move to shorter and more frequent release cycles, customers see new features sooner. Even better, feedback is provided to the application development teams more often and more rapidly. This leads to even more application improvements being delivered even faster. Everyone wins!!!

…Right?

DBAs do the donkey work

Well, not quite. Not the DBAs. See, the release of applications has been automated from soup to nuts. A source code check-in triggers a build, which triggers a push to test, which triggers automated testing in the house that Jack built (as the saying goes). However, tucked inside all of that convenient automation is a comment that says “#TODO: AUTOMATE DB CHANGES.” Until that comment turns into actual code, DBAs have to manually migrate database changes. Not winning.
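To make that gap concrete, here is a minimal, purely illustrative sketch of such a pipeline. The stage names and helper functions are invented for this example and are not any particular CI tool’s API:

```python
# Illustrative sketch only: an "automated" release flow with one manual hole.
def run_build(commit):
    print(f"building {commit}")             # triggered by the source code check-in

def migrate_database(commit):
    # TODO: AUTOMATE DB CHANGES
    # Until this becomes real code, a DBA applies the schema changes by hand,
    # and everything downstream waits on that ticket.
    raise NotImplementedError("database migration is still a manual step")

def push_to_test(commit):
    print(f"pushing {commit} to the test environment")

def run_automated_tests(commit):
    print(f"running automated tests for {commit}")

def release(commit):
    run_build(commit)
    migrate_database(commit)                 # the one step automation skipped
    push_to_test(commit)
    run_automated_tests(commit)
```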

Of course, the impact on the dev teams is minimal. Beyond a delayed push to test and other lower-tier environments, those teams hardly notice. If nothing else, they can start working on the next release while they wait for the current deployment.

Unfortunately, the impact on the DBAs is the opposite, and it is enormous. As the number of releases increases due to continuous delivery (CD), DBAs must find ways to make more and more changes to the database with the same resources available to them. At some point, they will reach resource exhaustion and begin either to expand their service level agreement (SLA) time limits or to cut corners on the work they perform for each release.

Here’s the kicker: either way, the eventual, hidden, second- or third-order impact on the application development team becomes just as apparent, and just as unenviable. Let’s uncover how.

SLA expansion

Each time a database change request ticket is created, there is some sort of SLA around that ticket. In return for allowing a queue of requests, the DBA team agrees to complete the task in a specific amount of time. Now, if the current SLA is 48 hours, you can pretty much expect the ticket to close in 47.5 hours. As tickets begin to violate the SLA, the first reaction is to expand the SLA to 72 hours.

Naturally, this is seen by all teams as “unacceptable.” And realistically, even if the SLA expansion is denied, it can happen anyway: the DBA team may simply violate the SLA and, when challenged, say, “Tough. Deal with it.”

The other byproduct that may occur is that easier tickets start taking more time. Tickets that would normally be turned around in an hour now take the full 48 hours. The evidence shows up as the average turnaround time begins to skew longer. It’s also important to track the “N” behind that average: if the number of change requests grows over time, that only further illuminates (and exacerbates) the scale of the challenge.
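One way to watch for that skew is to track both numbers together. The snippet below is a rough sketch, assuming you can export created/closed timestamps for change request tickets from your ticketing system; the sample data and layout are made up for illustration:

```python
# Illustrative sketch: report average turnaround and ticket count ("N") per
# month, so a lengthening average and a growing queue are visible together.
from collections import defaultdict
from datetime import datetime

tickets = [
    # (created, closed) timestamps for database change request tickets
    (datetime(2018, 1, 3, 9, 0),  datetime(2018, 1, 3, 10, 0)),
    (datetime(2018, 2, 5, 9, 0),  datetime(2018, 2, 7, 8, 30)),
    (datetime(2018, 2, 12, 9, 0), datetime(2018, 2, 14, 9, 0)),
]

by_month = defaultdict(list)
for created, closed in tickets:
    hours = (closed - created).total_seconds() / 3600
    by_month[created.strftime("%Y-%m")].append(hours)

for month, turnarounds in sorted(by_month.items()):
    avg = sum(turnarounds) / len(turnarounds)
    print(f"{month}: avg turnaround {avg:.1f}h over N={len(turnarounds)} tickets")
```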

Cutting corners

Another result of resource exhaustion is DBAs eliminating tasks they were previously responsible for. Those tasks simply go undone. One such task could be performing a code review on each database change.

To get more specific, DBAs might have previously reviewed each change request and looked for badly written SQL or clear violations of technical standards. Perhaps the DBA looked for TRUNCATE in stored procedures or made sure added columns did not use a default value. (Table DML lock alert!)
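That kind of review is exactly the sort of task that can be automated before it gets dropped. Below is a minimal sketch, assuming the change request arrives as a plain SQL script; the two regex rules mirror the examples above, and a production-grade check would want a real SQL parser rather than pattern matching:

```python
# Minimal, assumption-laden sketch of an automated database change review.
import re

RULES = [
    (re.compile(r"\bTRUNCATE\b", re.IGNORECASE),
     "TRUNCATE found; not allowed in stored procedures"),
    (re.compile(r"\bADD\s+(COLUMN\s+)?\w+\s+\w+[^;]*\bDEFAULT\b", re.IGNORECASE),
     "new column declares a DEFAULT; may take a long table lock"),
]

def review(sql_script: str) -> list[str]:
    """Return the rule violations found in the submitted SQL."""
    return [message for pattern, message in RULES if pattern.search(sql_script)]

change = "ALTER TABLE orders ADD COLUMN status VARCHAR(20) DEFAULT 'NEW';"
for violation in review(change):
    print("REJECTED:", violation)
```

Running a check like this in the build, rather than on the DBA’s desk, keeps the review from being the first corner cut.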

Now that the volume of incoming changes is too high and resources are too strapped to consistently offer that premier level of service, naughty SQL is almost certain to start appearing in the production database.

The immediately apparent impact on the application development team is minor. But in the long run, as more and more bad SQL sneaks its way into production, there comes a point where the DBAs will have to backtrack and fix it. When that happens, you can bet the time required to push changes will expand again as the DBAs fight fires.

What to do

Automation is key. Automation got us into this mess, and it will get us out.

First, all database changes should be vetted early in the development process, so issues are found well before a change ever reaches production.

Second, all database changes should be checked into the release branch, so they become part of the release artifacts that move to each subsequent environment.

Finally, database changes must be pushed by the same orchestration tool that pushes the applications today.
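Stripped to its essentials, that last step means the orchestration tool checks out the release artifacts, finds the versioned SQL scripts the developers committed, and applies only the ones a given environment has not seen yet. The sketch below is an assumption-laden illustration (the directory layout, the version-tracking table, and the sqlite3 stand-in driver are all invented here); in practice, a purpose-built database release tool handles this bookkeeping:

```python
# Hedged sketch: apply versioned SQL scripts from the release artifacts in
# order, recording what has already been applied to this environment.
import sqlite3                    # stand-in for the real database driver
from pathlib import Path

MIGRATIONS_DIR = Path("release_artifacts/db")   # e.g. V001__add_orders.sql

def apply_pending_migrations(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}

    for script in sorted(MIGRATIONS_DIR.glob("V*.sql")):
        version = script.name.split("__")[0]
        if version in applied:
            continue                              # this environment already has it
        conn.executescript(script.read_text())    # run the database change
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
        conn.commit()
        print(f"applied {script.name}")

if __name__ == "__main__":
    apply_pending_migrations(sqlite3.connect("app.db"))
```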

That sounds like a lot of work, and it is. But let automation take the hard work out of many of those steps, from integration with source code control to the build server to DevOps orchestration tools. Here’s the bottom line: Extending automation to the database can liberate and revive DBA resources, return CD to being a force for good, and reinvigorate innovation in the enterprise.

Robert Reeves is CTO and co-founder of Datical.