Working toward projects that can deploy in any environment, RedViking sees Kubernetes as a successful strategy for both on-prem and cloud-hosted solutions.
A couple of years ago, our application developers began focusing on cloud-based software, building what we see as the future of manufacturing software. Most of our application development is focused on a set of core micro-services, augmented with customer-specific micro-services to support individual implementations.
Our applications are built and delivered as a set of Linux Docker containers to minimize the effect of different deployment environments, provide the ability to scale our applications, and run each micro-service in a declared, known state. For anyone unfamiliar with Docker and the idea of containerization: it allows a developer to define and control the environment in which an application runs and to package that environment in a known state for release. This solves the problem of making sure an application has access to all of its required dependencies at run time.
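As a minimal sketch of that idea, the hypothetical Dockerfile below packages a single Node.js micro-service and its dependencies into one versioned image; the base image, file names, and port are illustrative assumptions, not our actual build.

```dockerfile
# Hypothetical example: bundle one micro-service and everything it needs
# at run time into a single, versioned image.
FROM node:lts-alpine

WORKDIR /app

# Install dependencies in a repeatable way from the lock file
COPY package.json package-lock.json ./
RUN npm ci --production

# Add the application code and declare how the container starts
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Built once, an image like this runs unchanged on a developer laptop, an on-premise server, or a cloud host.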
Docker containers are a natural fit when deploying to the cloud. Major service providers have platforms targeted at running containerized applications in a controlled, repeatable fashion, with services for defining and deploying application stacks through declarative code. For example, Amazon Web Services (AWS) has Elastic Beanstalk and Elastic Container Service for Kubernetes (EKS), and Microsoft Azure has Azure Kubernetes Service (AKS). These services provide near-instant access to compute and storage resources, allow applications to scale to meet higher demand, and can be configured for high availability across multiple regions and data centers in the event of a node or data center outage.
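To make "declarative code" concrete, the hypothetical Kubernetes Deployment below describes a desired state that an EKS or AKS cluster (or any other Kubernetes cluster) works to maintain; the names, image, and replica count are assumptions for illustration only.

```yaml
# Hypothetical Deployment: the desired state (image, version, replica count)
# is declared in code, and the cluster continuously reconciles toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: widget-service                  # illustrative service name
spec:
  replicas: 3                           # three copies for load sharing and node-failure tolerance
  selector:
    matchLabels:
      app: widget-service
  template:
    metadata:
      labels:
        app: widget-service
    spec:
      containers:
        - name: widget-service
          image: registry.example.com/widget-service:1.0   # placeholder image reference
          ports:
            - containerPort: 8080
```

Raising the replica count, or pointing an autoscaler at the Deployment, is how the same declaration scales to meet higher demand.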
Often, our DevOps teams focus their efforts on the initial design for deploying highly available and scalable applications. However, many of our manufacturing customers (with very valid reasons) are resistant to moving toward a cloud-based infrastructure and often choose to rely on on-premise compute power to run their manufacturing applications. This decision can be driven by the sensitivity of their data, the quality and reliability of their external connection, or other factors. This blog post is an exploratory look at cloud vs. non-cloud infrastructure, the challenges of a non-cloud environment, and options for supporting non-cloud customers.
Challenges with variable environments
Non-cloud environments present a challenge at implementation because support is often required for both the application and the infrastructure. Our goal as an application provider and integrator is to design solutions that deploy in any environment, so we searched for an approach that provides cloud-environment advantages such as scalability and fault tolerance. We have started using Kubernetes as the primary deployment target for our application stack, and we believe this will be a successful strategy for both on-prem and cloud-hosted solutions.
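As one sketch of how those cloud-style properties carry over to any cluster, a HorizontalPodAutoscaler like the hypothetical one below scales a Deployment based on CPU load whether the nodes are on-premise or hosted; the name, replica bounds, and CPU target are assumptions.

```yaml
# Hypothetical autoscaling rule: keep average CPU near 70% by running
# between 2 and 10 replicas of the widget-service Deployment.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: widget-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: widget-service
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

The same manifest applies with kubectl apply -f on a locally hosted cluster or a managed cloud cluster, which is exactly the portability we are after.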
Future convergence?
Supported by the Cloud Native Computing Foundation, Kubernetes has backing and code contributions from many of the world's largest IT companies, and it is the clear front-runner for orchestrating cloud-based container deployments.
Most notable for this topic are the recent large efforts and contributions to the project from VMware and Microsoft. The involvement of these on-premise giants hints at the direction both companies are heading: supporting typical cloud-native architectures for customers that run their IT solutions in house.
This increased support suggests that moving to Kubernetes-based deployments is the right choice. Earlier this year, Microsoft introduced support for deploying and managing Windows nodes and workloads in a Kubernetes cluster, and it is also working toward support for Linux containers on Windows (LCOW), which will likely speed acceptance and adoption of the technology.
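In a mixed Linux/Windows cluster, workloads are steered to the right operating system with a node selector. The hypothetical Pod below pins a Windows workload to Windows nodes using the standard kubernetes.io/os label; the name, image, and command are placeholders for illustration.

```yaml
# Hypothetical Pod: the nodeSelector ensures this Windows container is
# only scheduled onto Windows nodes in a mixed Linux/Windows cluster.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-win-service                # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: legacy-win-service
      image: mcr.microsoft.com/windows/servercore:ltsc2019   # Windows base image standing in for a real workload
      command: ["powershell", "-Command", "Start-Sleep -Seconds 3600"]   # keep the placeholder container running
```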
VMware and Pivotal teamed up on Pivotal Container Service (PKS), which provides enterprises with a supported deployment strategy for Kubernetes. Other major Linux vendors have supported, and continue to support, either releases that include Kubernetes or variants of it such as Red Hat's OpenShift.
With these trends and activities in early 2019, we are looking forward to a future in which a common deployment definition requires minimal custom configuration to deploy a system for customers, whether that is on their locally hosted clusters, in their privately controlled cloud, or with RedViking supporting their full stack in our cloud services.
Source: Automation World