Using Generative AI to Accelerate Cloud-Native Development

Cloud native is experiencing hypergrowth. Many organizations are adopting cloud-native architectures built on technologies like Kubernetes, containers and microservices to make their software release strategies more robust and efficient. Yet operationalizing a cloud-native strategy can be complex and time-consuming, and this is one area where generative AI could be incredibly helpful.

“LLMs [large language models] are going to change the way we build applications,” said Dattaraj Rao, chief data scientist at Persistent Systems. “While LLM-powered tools like GitHub Copilot and Amazon CodeWhisperer will improve developer productivity and overall code quality, LLMs can also enhance the development and DevOps cycle.”

Generative AI is quickly being adopted across many DevOps workflows, especially in cloud-native development. Below, I’ll review some of the ways it can help cloud-native DevOps and highlight some of the tools that have begun to incorporate generative AI, such as LLMs, into their offerings.

Generating Configuration Manifests

Many commentators have discussed how ChatGPT can be used for code generation and documentation. But another promising use case for an LLM is manifest generation. For example, an LLM could be tasked with producing the complex Kubernetes manifests, written in YAML or JSON, that describe the optimal resources for your cluster.

This is something the kubectl-ai plugin, which integrates OpenAI’s GPT, can already do. According to this walkthrough, generating and deploying a manifest is as simple as writing out a command like: kubectl ai "create a service for the nginx deployment with a load balancer that uses nginx selector."
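
To give a sense of the result, a prompt like the one above would typically translate into a standard Kubernetes Service manifest along these lines (the exact manifest the tool generates may vary):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer   # expose the service through a cloud load balancer
  selector:
    app: nginx         # route traffic to pods from the nginx deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```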

Streamlining Kubernetes Orchestration

Cloud-native DevOps involves a lot of complexity in managing containers, microservices and autoscaling capabilities. Generative AI could help troubleshoot and perform some of the operational tasks associated with platforms like Kubernetes. This could involve using natural language prompts to spin up, roll back or get visibility into clusters.

For example, at KubeCon + CloudNativeCon 2023, Kubiya debuted a generative AI workflow engine that can interpret such commands from within Slack. Extending natural language interfaces in this way could help platform teams create new workflows that abstract away the complexity of working with cloud-native platforms.

Automating Code Testing

LLMs could also streamline code reviews and test automation for cloud-native components. For example, Tabnine is building a custom LLM specifically trained to help automate testing. Similarly, Robin AI utilizes OpenAI’s GPT to provide an AI assistant that can review changes and offer feedback. By using AI tools to catch defects earlier, teams can reduce human error and build more stable cloud-native applications.

Continuous Integration and Deployment (CI/CD)

There is much potential for generative AI to enhance CI/CD. Generative AI models could assist in analyzing code changes, identifying potential issues and suggesting new automation. They could also propose rules and policies to ensure artifacts are properly released. The result should be code that moves from development into production faster and with less friction.

“One area which can highly benefit from this auto-generation is DataOps, MLOps and DevOps pipelines based on text specifications,” said Rao. “Depending on the IT infrastructure and application context, we could generate specific Ops pipelines for processing data and deployment of apps. We could even commission cloud infrastructure as code by building Terraform scripts based on the description of the cloud infrastructure. Over time we will see such solutions drastically change the DevOps pipeline.”
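
To illustrate the kind of auto-generation Rao describes, here is a rough sketch of the CI pipeline an LLM might produce from a text spec such as “test every push to main, then build and publish a container image.” This is a hypothetical GitHub Actions example; the workflow name, step commands and registry URL are all illustrative placeholders:

```yaml
name: build-test-publish
on:
  push:
    branches: [main]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test
      - name: Build container image
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Push image to registry
        run: docker push registry.example.com/app:${{ github.sha }}
```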

Driving Cloud-Native Observability

Generative AI could also be applied to analyze the vast amounts of real-time data that cloud-native environments emit, detecting anomalies, such as errors or security incidents, before they degrade reliability. For example, Virtana is adding generative AI to its AIOps platform to enhance its observability efforts.

After accumulating enough data in the form of metrics, logs and traces, you could spot trends and automate more IT processes. For example, AI-driven monitoring could also be used to find cost optimizations, such as right-sizing the GPU instances provisioned for training machine learning models.
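
As a concrete illustration, an AI-assisted monitoring setup might translate a plain-language description of “abnormal” into an alerting rule over those metrics. The Prometheus rule below is a hypothetical sketch with illustrative metric names and thresholds, showing the kind of suggestion such a system could surface:

```yaml
groups:
  - name: error-anomalies
    rules:
      - alert: HighErrorRate
        # fire when more than 5% of requests return a 5xx over five minutes
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "HTTP 5xx error rate above 5% for 10 minutes"
```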

Performing Cybersecurity Analysis

Generative AI can be applied to scan code for vulnerabilities and to increase awareness and knowledge sharing around cloud-native threats. At scale, this could meaningfully improve cloud-native security.

GitLab is already integrating some of these features. GitLab 16, for example, uses LLMs from Google to explain highlighted code and provide deeper explanations of specific vulnerabilities.

An AI Assistant for Cloud-Native DevOps

Generative AI is also set to accelerate adjacent areas, such as low-code application development. Microsoft has already integrated ChatGPT into its Power Platform and has even built a framework that enables developers to create their own custom Copilots with deeper knowledge of specific domains. It’s not hard to imagine chat-based interfaces and generative AI being applied to cloud-native development and operations in many similar ways.

“These tools are going to be used sooner or later by all developers,” said Rao. “Some may have a steeper learning curve and more inertia, but ultimately, all developers will end up saving time delegating repetitive tasks to these AI-powered pair programmers.”

Final Thoughts

Generative AI is set to have significant ripple effects across the entire software development life cycle (SDLC), and much of the toil in modern platform engineering could hopefully be lessened through its use. Even the simple act of recalling hundreds of kubectl commands and flags could be made easier with generative AI.

Generative AI is making its way into countless DevOps tools — just consider new releases by BMC, Harness, Ansible, New Relic, Google and Boomi, to name a few. Yet, the new wave of AI does bring some potential downsides.

It should be noted that organizations integrating LLMs like ChatGPT face a slew of potential risks around intellectual property infringement, data governance and operational security. It’s thus important to weigh the pros and cons of generative AI before embedding it in your cloud-native stack. One way to sidestep these issues is to avoid general-purpose LLMs trained on public data.

“I feel organizations should step in to standardize the tools, processes and DevOps cycle,” said Rao. “Only certain AI-powered tools should be approved for development where there are agreeable security, privacy and IP protection agreements in place.”

Bill Doerrfeld

Bill Doerrfeld is a tech journalist and analyst. His beat is cloud technologies, specifically the web API economy. He began researching APIs as an Associate Editor at ProgrammableWeb, and since 2015 has been the Editor at Nordic APIs, a high-impact blog on API strategy for providers. He loves discovering new trends, interviewing key contributors, and researching new technology. He also gets out into the world to speak occasionally.