Like a few other developers at PMG, I had the pleasure of attending Amazon’s re:Invent conference in Las Vegas this year. It was not only my first time attending re:Invent, but also my first time at a major tech conference. It was an incredible experience with hours of interactive learning the likes of which will be difficult to replicate anywhere else.
One of the most interesting sessions I attended was ambitiously named ‘The Future of DevOps’. Presented by Alois Reitbauer and David Kennedy of the digital performance management company Dynatrace, the session covered how they believe we will interact with our systems in the future. They introduced the session with the video below.
At Dynatrace, Reitbauer and Kennedy took on the task of creating a virtual assistant, DAVIS (à la Tony Stark’s JARVIS), to track and pinpoint performance issues in systems and notify the relevant teams, completely removing the need to search through logs and stack traces. They demoed interacting with DAVIS through an Amazon Echo and Slack integrations, getting updates on system performance and drilling into issues that had caught DAVIS’s attention over the past 24 hours. To say the video above was several times more advanced than the live presentation would be an understatement, but it was impressive enough to make me wonder whether they were on to something. Was this really how we would be interacting with our systems in the future?
One of the swag items we received for attending the conference was an Amazon Echo Dot. I set mine up before we left the hotel and have become only more attached to it ever since. I regularly catch myself on the cusp of asking Alexa for the weather or to turn down the volume when she is miles out of earshot, saying “Thank you” after she shares the morning’s news, and personifying the device when I discuss it like I’ve done all through this sentence. It’s fun, it’s easy, and lately I’ve found myself identifying with Theodore Twombly much more than I’m comfortable with.
We all understand the benefits of being able to convey an idea using direct speech. It’s much easier to explain and troubleshoot an issue when you can talk to the involved parties in real time versus, say, going back and forth in an email thread. However, many of the nuances that facilitate this direct communication are the same nuances that devices like the Echo just can’t handle, or at least can’t handle yet. While DAVIS was able to arrange the infrastructure issues she rattled off to Kennedy and Reitbauer by order of impact, and provide in-depth information when requested, she delivered that information with the tempo and cadence of, well, a machine, much different from the way you would hear the same information from your development lead or supervisor. There is valuable information in nonverbal communication, and it was by and large missing from the demonstration.
Back to the question at hand: is this what we can expect for the future of DevOps? Personally, I wouldn’t mind being able to ask an assistant to clean up my inbox, let me know if there were any urgent issues with our systems overnight, or alert the team when our Elasticsearch clusters have given up the ghost. But we seem to still be a ways off from having this as a reliable reality. Even Kennedy and Reitbauer struggled at times during the live demo to get the Echo, or DAVIS, to understand what exactly they were asking for. And when it did work, the information came across flat, stripped of the cues that usually tell us the difference between the routine updates we hear all day and the vital information that demands extra attention.
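Of course, the unglamorous half of that wish list is scriptable today, no assistant required. Here’s a minimal sketch of what “alert the team when the cluster gives up the ghost” might look like, polling Elasticsearch’s cluster health API and posting to a Slack incoming webhook. The URLs are placeholders for illustration, not anything Dynatrace demonstrated:

```python
import json
from urllib import request

# Hypothetical endpoints -- replace with your own cluster and webhook.
ES_HEALTH_URL = "http://localhost:9200/_cluster/health"
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"


def needs_alert(health: dict) -> bool:
    """A 'red' cluster status means at least one primary shard is
    unassigned -- the cluster has, in effect, given up the ghost."""
    return health.get("status") == "red"


def format_alert(health: dict) -> str:
    """Build the Slack message text from the health response."""
    return (
        f"Elasticsearch cluster '{health.get('cluster_name', 'unknown')}' "
        f"is {health.get('status', 'unknown')}: "
        f"{health.get('unassigned_shards', 0)} unassigned shard(s)."
    )


def check_and_alert() -> None:
    """Fetch cluster health and post to Slack only when it's red."""
    health = json.load(request.urlopen(ES_HEALTH_URL))
    if needs_alert(health):
        payload = json.dumps({"text": format_alert(health)}).encode()
        request.urlopen(request.Request(
            SLACK_WEBHOOK_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        ))
```

Dropped into a cron job, that covers the plumbing; what DAVIS promises on top is the conversational layer, being able to ask follow-up questions about *why* the cluster went red.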