The 2018 Pivot for Dynamic Apps and DevOps: Live Deployment Monitoring Takes Center Stage as Container Orchestration Recedes

The yin-yang of dynamic apps and DevOps may come into a new balance in 2018. Container orchestration will become less important, while monitoring live deployments will become the crucial focus. This shift comes in large part due to big steps by Amazon Web Services (AWS), says Lee Atchison, senior director of strategic architecture at New Relic. IDN explores.

Tags: app architecture, AWS, cloud, containers, deployment, DevOps, dynamic apps, EKS, Fargate, monitoring, New Relic, orchestration, workloads

Lee Atchison
senior director,
strategic architecture
New Relic


"Because of EKS, container orchestration doesn't matter anymore. Now you can launch [and manage] containers as easily as you could virtual servers before."

The yin-yang of dynamic apps and DevOps may come into a new balance in 2018. Container orchestration will be less important, while monitoring live deployments will be more crucial.

 

This shift is coming thanks largely to technology innovations and decisions at AWS (Amazon Web Services), according to Lee Atchison, senior director of strategic architecture at New Relic. Specifically, Atchison predicts that in 2018:

  • Container orchestration will become less important, thanks to increasing automation. 
  • Just-in-time deployments will become more common. That will make deployment tooling and monitoring more important.

 

Atchison traces the root of his predictions to two announcements AWS made late in 2017: Amazon EKS (Elastic Container Service for Kubernetes) and AWS Fargate.

 

In an interview with IDN, and a thought-provoking blog post on 2018 predictions, Atchison put the AWS announcements in this context:

 

“Because of the advent of EKS, the battle of container orchestration is now over, and Kubernetes won,” he said. Further, with AWS Fargate, Amazon has basically said, ‘Orchestration doesn't matter anymore,’ Atchison added. “The implication here is: Now you can launch containers as easily as you could [launch] virtual servers before.”

 

A check of the AWS Fargate website underscores what Atchison sees as the profound nature of the technology. It reads, in part, that AWS Fargate allows users

“to run containers without having to manage servers or clusters. [Users] no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.”

Together, EKS and Fargate add up to a new vision – and a rejiggering of priorities for developers and IT working on agile app development and deployment.

 

“Much like [Amazon] EC2 instances removed the need to worry how server virtualization was implemented and how the underlying raw hardware servers were managed, Fargate will remove the need to worry about how the container infrastructure and container management is implemented,” he said.  
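To make that point concrete, the sketch below builds the parameters a Fargate launch needs: a task definition, a count, and networking, with no instance types, AMIs, or cluster capacity anywhere in sight. The parameter shape loosely follows the ECS RunTask API, but the function itself, the cluster name, and the subnet value are all illustrative, not an official AWS SDK call.

```python
def fargate_run_task_params(task_definition, count, subnets):
    """Build parameters for an ECS RunTask call using the Fargate
    launch type. Note what is absent: no server types, no AMIs, no
    cluster sizing. The caller supplies only the container spec,
    how many copies to run, and where to attach them on the network.
    Parameter shape loosely follows the ECS API; values are illustrative."""
    return {
        "cluster": "demo-cluster",          # hypothetical cluster name
        "taskDefinition": task_definition,  # e.g. "web-app:3"
        "launchType": "FARGATE",
        "count": count,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,         # hypothetical subnet IDs
                "assignPublicIp": "ENABLED",
            }
        },
    }

# Launch 20 copies of a container image, Fargate-style: the request
# says nothing about the machines that will actually run them.
params = fargate_run_task_params("web-app:3", 20, ["subnet-abc123"])
```

The design point is exactly the one Atchison makes about EC2: the request describes the workload, and the provider owns everything underneath it.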

 

Atchison further detailed how developers and IT professionals should view the AWS moves.


“To me, the bottom line is: Instead of launching a virtual server and figuring out what to do with it, you now just launch a container and let the underlying infrastructure deal with everything else. You’re going to get to the point very, very quickly where managing [containers] is done the exact same way managing server [instances] is done today.”

 

When you allocate a server instance on AWS, Azure, or anywhere else, you don’t care how it's created, you don't care about the underlying hardware (for the most part), you don't care how it's set up, how the networking works, or how the virtualization works. For containers, it will be the same, Atchison said: “You're going to say, ‘I want 20 instances of this container running, and I want auto-scaling, load balancing and all that good stuff automatically’ – and you don’t care where or how it's run.”
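That “say what you want, let the platform sort it out” model is, at its core, a reconciliation loop: the user declares a desired count, and the orchestrator computes whatever actions close the gap. The sketch below is a generic illustration of that pattern, not any cloud provider's API; the function and action names are hypothetical.

```python
def reconcile(desired_count, running):
    """Return the actions an orchestrator would take to move the set
    of currently running containers toward the declared desired count.
    The user only states the count; deciding what to start or stop
    happens here, out of the user's sight."""
    actions = []
    if len(running) < desired_count:
        # Too few copies running: start the difference.
        actions += ["start"] * (desired_count - len(running))
    elif len(running) > desired_count:
        # Too many copies running: stop the surplus.
        actions += ["stop"] * (len(running) - desired_count)
    return actions

# The user declares "20 instances"; the platform works out the rest.
print(reconcile(20, ["c%d" % i for i in range(17)]))  # three starts needed
```

Real orchestrators layer scheduling, health checks, and load balancing on top of this loop, but the division of labor is the same: enterprise IT states intent, the cloud provider decides how orchestration happens.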

 

This idea of ignoring orchestration altogether and letting somebody else deal with it is going to be highly attractive. Orchestration will still be done, of course, but it is the cloud companies (AWS, Azure, Google and others) that will decide how it is done. So no one [in enterprise IT] will really care about that.

Time to Shift Focus from Orchestration to Monitoring Workloads, Deployments

All this intelligent automation for container orchestration Atchison described above leads to his second, and equally impactful, prediction: “As [container] orchestration is no longer a debate to be focused on, I think people are going to focus much more on the workload,” he told IDN.

Atchison’s equation goes like this: Because container orchestration will require less focus, deployment tooling and monitoring will become more important.  His blog points out some of the details.

Just-in-time deployments have become standard operating procedure at most leading-edge technology companies; continuous integration and continuous delivery (CI/CD) pipelines are becoming standard in most companies.

 

The result? Companies are doing more and more deployments. More releases, not fewer releases, is seen as the path to higher reliability, higher availability, and higher scalability for modern applications.

 

While this increase in deployments is good for managing large-scale, healthy, modern applications, it is also clear that making sure deployments work, and understanding the impact of each deployment, is critical to keeping an application operational. As such, in 2018 monitoring deployment pipelines, deployment processes, and how deployments impact individual services and applications will be an important focus for most modern technology companies.
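A minimal form of the deployment monitoring described above is a health gate: compare a service's error rate before and after a release, and flag a rollback when it degrades beyond a tolerance. The sketch below is a generic illustration under assumed metrics and thresholds, not New Relic's or any vendor's API.

```python
def deploy_gate(baseline_error_rate, post_deploy_error_rate,
                max_relative_increase=0.5):
    """Decide whether a just-shipped deployment should stay or be
    rolled back, based on the observed error rate relative to the
    pre-deploy baseline. The 50% relative-increase threshold and the
    0.01 absolute floor are illustrative, not recommended values."""
    if baseline_error_rate == 0:
        # No baseline errors: fall back to a small absolute tolerance.
        return "rollback" if post_deploy_error_rate > 0.01 else "keep"
    increase = ((post_deploy_error_rate - baseline_error_rate)
                / baseline_error_rate)
    return "rollback" if increase > max_relative_increase else "keep"

print(deploy_gate(0.02, 0.021))  # small change -> keep
print(deploy_gate(0.02, 0.05))   # errors more than doubled -> rollback
```

Wired into a CI/CD pipeline, a gate like this is what makes frequent, just-in-time deployments safe: each release is judged by live metrics rather than by an extended pre-release test phase.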

To support this conclusion, Atchison pointed to a longer-term historical perspective on the business of building (and deploying) software.

 

“There's been a major, major shift in the industry in the last several years from the way that software has been written for the last 50 years,” Atchison told IDN. “It used to be that when you wrote software, you assumed the software was going to fail.” Perhaps this is a stark assessment, but Atchison asserts “that’s precisely why you spent just as much time testing and QAing your software as you spent writing the software in the first place. It was only after you went through a massive test phase that you would release software.”

That old model is on the way out, Atchison asserted.

 

“These days, we’re shifting to a new model that says, ‘Software doesn't fail. Software mostly works,’” he said. This perspective has led to a more pro-active stance favoring faster launches and deployment updates, he added – with an important proviso.

 

“As long as you know you can rapidly change the software when or if it does fail, the thinking has become, ‘We can handle a small number of fails,’” he said. Put another way: “Today, the whole idea is the faster you can make a deployment, the less you care about whether there's a bug in that deployment – so long as you have the visibility and the tooling,” he said.

 

As a consequence, pre-deployment testing and QA will get less attention (and fewer resources – money and staff time). The lion’s share of attention and investment should shift to delivering faster deployments, coupled with making sure IT has adequate tools and visibility into whether software is working properly (or not), Atchison added.

 

“It’s becoming more common now to find people who, the day or even hour they write the code, are ready to deploy it to a production system,” he told IDN. “They just assume [the software] will work. If it does work, great!  But if it doesn’t, they have visibility and [management tools] to help them fix it very very quickly.”

 

2018 Levels the ‘Playing Field’ for Live Monitoring of Workloads, Deployments

So, the question for many large and mid-sized companies looking to build disruptive apps in 2018 will be an engaging one, Atchison posits. “As companies recognize the reality that rapid deployment is better than slower deployment, they will also realize the need for automated deployment mechanisms,” he told IDN.

 

In 2018, achieving this will be easier than many may think, he added.

 

Not many years ago, only the largest companies could muster this level of monitoring power; their budgets and talent let them build their own monitoring, Atchison pointed out. “But in 2018, you’ll be seeing more and more companies have this ability as [monitoring tools] become standardized and productized,” he said.

 

So this year, companies will discover they have options for the tooling and cloud capabilities to automate deployments. “Choosing just which options, and how they will monitor their apps, will come to the forefront,” he added.

 



