Cloud Native Architecture & Application Examples, Strategies & Tactics

Posted by OpenLegacy on May 20, 2020

Many organizations around the world have recognized the power of cloud computing and are transitioning to cloud-native architecture in order to keep up with innovations and deliver digital services in a fast and efficient manner. 

As cloud adoption increases, it’s expected that over 32% of new enterprise applications will be cloud-native in 2020. This shouldn’t come as a surprise: leveraging the power of cloud computing enables organizations to develop and deploy easily scalable, more resilient applications for a fraction of the cost. 

By relying on containers, microservices, and the computing power of the cloud, organizations can save a ton of time and money they would otherwise spend on creating on-premise solutions.

What is cloud-native architecture? 

Cloud-native architecture is an approach to designing systems that are built specifically to exist and run in the cloud. The major advantage of cloud-native architecture over legacy systems is its flexibility. Legacy systems rely on hardware and software that don’t integrate well with new systems and innovations, whereas cloud-native architectures aren’t limited by those constraints. 

Cloud-native architectures aren’t built on on-premise physical servers but are instead deployed on a cloud platform and leverage the cloud philosophy of distributed systems. This enables cloud-native architectures to take full advantage of the latest and best technologies around distributed systems. They are specifically designed to utilize the versatility and scalability benefits of the cloud.

Cloud-native applications

The best way to understand cloud-native architecture is to take a closer look at cloud-native applications. Cloud-native apps are built on a fundamentally different approach than monolithic applications. Rather than developing and deploying the application as a whole, cloud-native apps are based on microservices that are self-contained and independently deployable.

Microservices are the core of cloud-native application architecture. They are essentially small, self-sufficient mini-programs, each with its own data store and application logic, built to execute a single business function. 
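To make this concrete, here is a minimal sketch of what such a mini-program can look like, assuming a hypothetical “orders” service written in Python with Flask (an illustrative example, not code from this article). The service exposes a single business function and keeps its own private data store, here just an in-process dictionary standing in for a real database.

```python
# Minimal single-purpose microservice sketch (assumes: pip install flask).
# The service owns its data store and exposes one business function: orders.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Private data store; no other service reads or writes this directly.
_orders = {}


@app.route("/orders", methods=["POST"])
def create_order():
    order = request.get_json()
    order_id = str(len(_orders) + 1)
    _orders[order_id] = order
    return jsonify({"id": order_id, **order}), 201


@app.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = _orders.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": order_id, **order})


if __name__ == "__main__":
    app.run(port=8080)
```

Everything the service needs lives inside it, which is what makes it independently deployable.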

If you want to learn more about microservices, here’s a detailed article that explains what they are, how they work, and the main business benefits of embracing microservices and microservice-based APIs. 

Microservices are loosely coupled and designed to run inside containers under orchestration. Containers allow for quick and easy deployment onto any cloud platform (private, public, or hybrid) using orchestration toolsets such as Kubernetes or OpenShift. 

The main idea behind microservices is to enable flexibility and scalability, support DevOps, and drastically increase the speed of deployment while reducing costs. 

Flexibility and scalability

The independence of microservices enables organizations to adopt a rather flexible approach to creating applications in the cloud. Since microservices are self-contained, developers can always choose the best tool for the job, based on the business capabilities and the desired functionality of the specific microservice. 

In other words, they can select the most appropriate data store and use the programming language that makes the most sense for the particular microservice they’re working on. This means that developers aren’t committed to one technology and aren’t limited by its capabilities. 

As innovations and new technologies are introduced, developers can easily take advantage of them without having to worry about how changes in one microservice will impact the application as a whole. They can update and scale individual microservices, remove existing ones, and add new ones, without breaking the application or making it temporarily unavailable while they roll out a new version. 

DevOps and agile frameworks

Microservices are a perfect fit for agile organizations that embrace DevOps and CI/CD (continuous integration/continuous delivery). As we’ve mentioned, microservices are self-contained, meaning they are independently testable and deployable units. 

Microservices and cloud-native architecture enable multiple cross-functional developer teams to simultaneously work on developing different features of an application. They also enable organizations to embrace changing requirements all throughout the development lifecycle and respond to customer feedback quickly and efficiently. 

With microservices, the application is broken down into smaller, more manageable units that can be developed, tested, and deployed independently. This perfectly aligns with what agile frameworks and DevOps seek to accomplish. 
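Because each microservice is a self-contained unit, its tests can run in isolation inside a CI/CD pipeline, with no other services or shared databases present. Here is a rough sketch of such a test for the hypothetical orders service above, assuming pytest and a module name of orders_service (both are illustrative assumptions).

```python
# Isolated test for one microservice (assumes: pip install pytest flask).
from orders_service import app  # hypothetical module holding the Flask app


def test_create_and_fetch_order():
    client = app.test_client()

    created = client.post("/orders", json={"item": "book", "qty": 1})
    assert created.status_code == 201
    order_id = created.get_json()["id"]

    fetched = client.get(f"/orders/{order_id}")
    assert fetched.status_code == 200
    assert fetched.get_json()["item"] == "book"
```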

Click here to read more about how microservices enable APIs in DevOps

Speed of deployment

With monolithic applications, developers are forced to replicate the entire legacy system before applying any changes to it. Given how tightly the elements within a monolithic application depend on one another, developers have to think long and hard about how the changes they plan to implement will affect the application as a whole.

It’s practically impossible to account for every possible scenario, so developers have to test the entire system again before redeploying the application to ensure that everything works as intended. This testing process is tedious and can be quite costly and time-consuming. On top of that, because legacy systems are built on outdated technologies, developers are severely limited in the new functionality they can introduce to the existing system. 

With microservices, this process is quick and simple. Developers only need to worry about the single microservice they’re working on and only need to test its functionality before deploying it into the cloud. This allows organizations to deploy outstanding digital services based on new technologies, ten times faster than they could with monolithic architectures. 

What about all those instances when you need to develop new digital services or update your application, but your entire legacy application is based on on-premise services? Most organizations mistakenly believe that they only have three options:

  • Stick to their legacy systems
  • Migrate the entire legacy application to the cloud
  • Create an entirely new, cloud-native application

Modern microservices remove this false choice and bridge the gap between legacy IT and the cloud. OpenLegacy enables you to create microservices in the cloud that pull data directly from your legacy systems: mainframes, midrange systems, and databases. 

This way, you can still keep the functionality of your existing legacy system, while simultaneously delivering digital services based on new technologies through cloud-native microservices.
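As a rough illustration of this pattern (not OpenLegacy’s actual SDK or generated code), the sketch below shows a cloud-deployed microservice that exposes a modern REST endpoint while fetching its data from a legacy backend. The legacy_adapter module is hypothetical and stands in for whatever connector reaches the mainframe, midrange system, or database.

```python
# Sketch of a REST facade over a legacy system of record.
from flask import Flask, jsonify

import legacy_adapter  # hypothetical connector to the legacy backend

app = Flask(__name__)


@app.route("/customers/<customer_id>")
def get_customer(customer_id):
    # The legacy system remains the system of record; this service only
    # translates its records into a JSON API that cloud apps can consume.
    record = legacy_adapter.fetch_customer(customer_id)
    if record is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": customer_id, "name": record["name"]})
```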

Read more about how cloud-native architecture supports legacy systems.

Key characteristics of cloud-native apps

Cloud-native architecture is almost synonymous with microservice-based applications. To gain a better understanding of the impact of transitioning to a cloud-native architecture, let’s take a look at the key characteristics of cloud-native apps. 

The key characteristics of cloud-native apps are:

  1. Designed as loosely coupled microservices
  2. Centered around APIs
  3. Packaged as lightweight containers
  4. Free from server and operating system dependencies
  5. Deployed on a flexible infrastructure
  6. Automated capabilities
  7. Independent lifecycles
  8. Improved resource utilization

Designed as loosely coupled microservices

Microservices are designed and deployed independently and typically rely on asynchronous communication to share information with each other. This means they are self-sufficient and never directly access another service’s data store. 

In other words, when one service fails, for whatever reason, the other microservices can continue functioning.

Thanks to this decoupling, developers can treat each microservice as a completely different entity. This enables them to focus on the core functionality of each microservice and enables efficient lifecycle management of the application as a whole. Each service is maintained independently, and every development team has ownership of the services they developed. 

Centered around APIs

Cloud-native architectures are centered around APIs (Application Programming Interfaces). APIs are the interfaces through which functionality is exposed and consumed; for instance, the log-in form within an application calls an API behind the scenes. Individual microservices can expose different APIs, but not every microservice has to. Some work in the background, so to speak, and handle functionality that the end user doesn’t directly interact with.

Cloud-native microservices typically rely on protocols like representational state transfer (REST), Google’s open-source remote procedure call (gRPC), and NATS. REST is often used to expose APIs over hypertext transfer protocol (HTTP). For internal communication between microservices, gRPC is often the first choice. NATS enables event sourcing and has publish-subscribe features that facilitate asynchronous communication within the app.
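As a small illustration of the publish-subscribe style, here is a sketch using the nats-py client (assumes pip install nats-py and a NATS server on localhost; the orders.created subject name is made up for the example). One coroutine subscribes to an event subject while another publishes to it, so neither service ever calls the other directly.

```python
# Publish-subscribe sketch with NATS (assumes: pip install nats-py,
# and a NATS server listening on localhost:4222).
import asyncio

import nats


async def main():
    nc = await nats.connect("nats://127.0.0.1:4222")

    async def handle_order_created(msg):
        # A downstream service reacts to the event asynchronously.
        print(f"received {msg.subject}: {msg.data.decode()}")

    await nc.subscribe("orders.created", cb=handle_order_created)

    # The orders service announces an event; it doesn't know who listens.
    await nc.publish("orders.created", b'{"id": "1", "item": "book"}')
    await nc.flush()

    await asyncio.sleep(0.1)  # give the subscriber a moment to run
    await nc.close()


asyncio.run(main())
```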

Packaged as lightweight containers

Cloud-native applications are essentially collections of independent microservices that are packaged as lightweight containers and deployed into the cloud. Containers allow for easy deployment onto any cloud platform (AWS, Google, etc.) and can quickly scale in or out. This way, infrastructure utilization is fully optimized, since the unit of scaling shifts from the application as a whole (as in monolithic architectures) to individual containers.

Free from server and operating system dependencies

Cloud-native applications aren’t tied to a particular server or operating system. Microservices, by default, can be deployed across multiple servers and developed to run on any operating system. The one exception is when a particular microservice requires certain capabilities, such as solid-state drives (SSDs) or graphics processing units (GPUs), in which case it remains somewhat dependent on the underlying hardware. 

Deployed on a flexible infrastructure

The cloud represents a virtual, flexible, elastic infrastructure. This enables cloud-native applications to align with the underlying infrastructure to dynamically adjust themselves to varying workloads. 

Given that microservices can be deployed across multiple servers, this means you can freely choose which services to place in a private cloud and which in a public cloud. This flexibility of the infrastructure enables you to leverage the computing power of public clouds and freely migrate services between different environments to deal with workload spikes efficiently. 

Automated capabilities

The beauty of cloud-native applications is that they’re designed for automation. Since cloud-native apps are delivered as microservices, you may end up deploying hundreds of containers, and doing so manually would be a nightmare. 

That’s why developers pay close attention to orchestration requirements and automation right from the get-go. This involves clearly defining the processes necessary for implementing different components of the application and minimizing the level of configuration applied to each component.

When it comes to cloud-native architectures, automation involves: 

  • Automated IP address assignment — Cloud-native apps typically rely on the Dynamic Host Configuration Protocol (DHCP) to obtain IP addresses automatically. That said, IP addresses can be hard-assigned to a cloud-native network function (CNF) when necessary.
  • Automated discovery — Cloud-native applications typically rely on the Domain Name System (DNS) or service meshes to discover the peers they need to communicate with (see the sketch after this list).
  • Shared configuration stores — Components within cloud-native applications typically participate in a shared, distributed key-value store. This enables them to obtain most of their initial configuration automatically, without requiring the orchestrator’s direct input.
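To illustrate the discovery bullet above, here is a minimal sketch using only the Python standard library: instead of hard-coding an IP address, a service resolves a peer by name and lets the platform’s DNS (or service mesh) decide which instances answer. The orders.internal hostname is hypothetical; a real cloud platform would register it automatically.

```python
# DNS-based service discovery sketch (standard library only).
import socket


def resolve_service(hostname: str, port: int) -> list[str]:
    """Return the IP addresses currently registered for a service name."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})


if __name__ == "__main__":
    # Looks up whichever instances the platform has registered right now.
    print(resolve_service("orders.internal", 8080))
```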

Independent lifecycles

Within a cloud-native architecture, each microservice has an independent lifecycle. This enables different teams to simultaneously work on developing different sets of services and allows organizations to utilize multiple continuous integration/continuous delivery (CI/CD) pipelines to develop, deploy, and manage a cloud-native application.

Improved resource utilization

Organizations developing cloud-native applications typically create policies through which they define how resources are allocated to particular services. 

In an enterprise scenario, central IT can create policies to allocate resources to each department, depending on the scope of the projects and functionalities of services each team is working on. This means that each developer team has full access to and ownership of resources assigned to them. 

Pros and cons of cloud-native architecture

There are many reasons why organizations choose to adopt a cloud-native approach. At the same time, you should be aware of the potential challenges you might face if you decide to develop applications in the cloud.

Let’s take a look at the pros and cons of cloud-native architecture.

Pros

Thanks largely to the fact that they’re based on microservices, cloud-native architectures offer various benefits:

  • They’re easier to manage — Cloud-native apps can be a lot easier to manage than monolithic applications, given the fact that iterative improvements are performed using DevOps and agile processes. 
  • Incremental improvements — Microservices are deployed and scaled independently, allowing cloud-native applications to be improved incrementally, continuously expanding the app’s capabilities and introducing new features. Improvements are made at the microservice level, meaning there is no downtime and the end-user experience is uninterrupted. 
  • Microservices are platform-agnostic — As we’ve mentioned, developers can create each microservice using the most suitable programming language and data store. Microservices can also be deployed across different environments and work on multiple platforms, so you won’t be locked into platform-specific APIs such as Windows-only interfaces. 
  • Maximum efficiency — Cloud-native architecture enables the central orchestrator to quickly and efficiently manage and schedule microservices, based on demand. 
  • Keeping pace with the innovations — The development process of cloud-native applications enables organizations to quickly embrace innovations and new technologies and integrate them without having to restructure the entire application. 

Cons

Cloud-native architecture isn’t without its challenges. Here are a few things to consider before embracing cloud-native architecture:

  • OS and hardware or software dependencies — Some microservices can be reliant on specific operating systems or machines if they require specific hardware or software capabilities, such as SSDs or GPUs.
  • Updating security systems — Containerization in cloud-native architectures requires changes to your existing security practices, since containers and orchestration platforms introduce new components that must be secured. 
  • Adopting DevOps — Unless your organization is already agile, adopting DevOps can be a challenge if your development and operations teams aren’t used to working together.

Cloud-native application examples

Each microservice in a cloud-native application executes a specific function. Take an eCommerce store, for example. One microservice might handle the log-in form, another take customer orders, and a third manage special discounts and coupons. Each can be updated and scaled independently, even though customers experience the eCommerce store as a single application. 

To implement a new feature or change a specific microservice, you wouldn’t have to take the entire site down. A specific function might be temporarily unavailable, but since developers only work on one microservice at a time, updates and new features can be rolled out in a matter of hours.
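As a toy sketch of how those pieces fit together, the routing table below maps store paths to the hypothetical services that own them, the way an API gateway or ingress might. The service URLs are invented for the example; in a real deployment they would come from service discovery rather than a hard-coded table.

```python
# Toy API-gateway routing table for the eCommerce example (hypothetical URLs).
ROUTES = {
    "/login": "http://auth-service.internal:8080",
    "/orders": "http://orders-service.internal:8080",
    "/discounts": "http://discounts-service.internal:8080",
}


def route(path: str) -> str:
    """Pick the backend service that owns the requested path."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"no service owns {path}")


print(route("/orders/42"))  # http://orders-service.internal:8080/orders/42
```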

Here are a few examples of cloud-native applications:

UBank — UBank wanted to improve its home-loan application process and help customers apply for loans more easily. The bank created a cloud-native smart assistant app, RoboChat, using the IBM DevOps toolchain; RoboChat answers customer inquiries and helped boost the home-loan completion rate by 15 percent.

American Airlines — The company partnered with IBM to develop and deploy a Dynamic Rebooking app, which provides customers with detailed information and enables faster and easier flight rebooking. 

ThinkResearch — ThinkResearch leveraged cloud-native architecture to create and deploy an application that would deliver the latest research and relevant medical information to doctors at the point of care. The company relied on microservice-based APIs in the cloud to quickly deliver the solution, without having to make a huge upfront investment to build an on-premise IT infrastructure.

Conclusion

In today’s digital business world, change is the only constant. Innovations and new technologies are introduced at a rapid pace, and companies that heavily rely on their legacy systems will struggle to keep up and are likely to lose customers to their more tech-savvy competitors.

If you want to keep pace with the ever-changing customer demands and quickly and seamlessly integrate new technologies with your existing systems — cloud-native architectures and microservice-based APIs are the perfect solution! 

Partner up with OpenLegacy and discover how you can leverage cloud-native architecture and microservices to fast-track your digital transformation. 

Transform your business, embrace change, and deliver outstanding digital services ten times faster than with monolithic architecture! 
