Shoreline.io Announces Open Source Solutions Library to Deliver Self-Healing Infrastructure
Shoreline’s library of pre-built Op Packs offers open source solutions for common production operations incidents, eliminating operational toil and increasing availability
Shoreline.io, the Incident Automation company, announced Shoreline’s open source solutions library, a collection of Op Packs that make it easier to diagnose and repair the most common infrastructure incidents in production cloud environments. Launching with over 35 Op Packs freely available to the community, the solutions library addresses issues like JVM memory leaks, filling disks, rogue processes, and stuck Kubernetes pods, among others.
On-call teams understand that self-healing infrastructure drives higher availability, fewer tickets, and better customer satisfaction. Until now, the path to incident automation was long and hard. With Shoreline, developers can now create and share open source Op Packs built in hours, not months. These pre-built automations and diagnostic notebooks save time and accelerate the path to increased reliability.
Published and provisioned as open source Terraform modules, each Op Pack contains everything necessary to solve a specific issue, including pre-defined metrics, alarms, actions, bots, scripts, and tests. With Shoreline’s Op Pack library, the community identifies what to monitor, what alarms to set, and what scripts to run to complete the repair. All Op Packs are completely configurable and allow cloud operations teams to decide whether to use full automation or an interactive notebook for human-in-the-loop repair. Co-developed with Shoreline customers, the Op Packs available at launch are based on real-world on-call experience at large enterprises, rapidly growing unicorns, and the largest hyperscaler production environments.
“We’re all working in the same cloud environments, yet every company has to figure out on their own how to automate even commonplace issues, like filling disks or JVM memory leaks,” said Anurag Gupta, co-founder and CEO of Shoreline. “Companies can no longer afford to write their own runbooks or custom code automations from scratch. With Shoreline, every time someone in our community fixes a problem, everyone else benefits.”
“Shoreline makes our cloud infrastructure more reliable,” said Wojciech Krupa, CTO and co-founder at Knowde. “The platform automatically manages tickets, fixes issues, and resizes clusters for us, so we scale up and down at the right time. The Shoreline Op Pack library presents many more opportunities to make on-call shifts better for our team, and new op packs every month will only increase the value of this resource. My team is much happier now that they have Shoreline.”
The following Op Pack solutions are immediately available and free to Shoreline customers. The solutions library will continue to grow each month as new Op Packs are added by the Shoreline community. With each additional Op Pack in use by a customer, time is freed up for engineers to focus on innovation rather than repetitive, mundane tasks that are better handled through automation. Op Packs available at launch include:
Streamline Kubernetes Operations
- Kubernetes node retirement – Gracefully terminate nodes when marked for retirement by the cloud provider.
- Kubernetes pod out of memory (OOM) – Generate diagnostic information and restart pods that ran out of memory.
- Kubernetes pods stuck in terminating – Identify, safely drain, and restart stuck pods (illustrated in the sketch after this list).
- Kubernetes pods restarting too often – Detect pod restart loops and capture diagnostics to identify the root cause.
- IP exhaustion – Clear away failed jobs or pods that are consuming too many IP addresses.
- Stuck Argo workflows – Argo makes declarative workflow management easy, but it can leave behind stale pods after workflow execution that should be deleted.
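To make the automation concrete, here is a minimal Python sketch of the kind of check the pods-stuck-in-terminating Op Pack automates. It is an illustration only, not Shoreline’s implementation: the cluster-wide scan, the 10-minute threshold, and the commented-out force delete are all assumptions for the example.

```python
from datetime import datetime, timezone
from kubernetes import client, config

STUCK_AFTER_SECONDS = 600  # assumption: 10 minutes in Terminating counts as "stuck"

config.load_kube_config()  # use config.load_incluster_config() when running inside a cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    # A pod stuck in Terminating carries a deletion timestamp but never disappears.
    if pod.metadata.deletion_timestamp is None:
        continue
    age = (datetime.now(timezone.utc) - pod.metadata.deletion_timestamp).total_seconds()
    if age > STUCK_AFTER_SECONDS:
        print(f"stuck pod: {pod.metadata.namespace}/{pod.metadata.name}")
        # Repair action, equivalent to `kubectl delete pod --grace-period=0 --force`:
        # v1.delete_namespaced_pod(pod.metadata.name, pod.metadata.namespace,
        #                          grace_period_seconds=0)
```

In an Op Pack, the detection half of this logic would typically feed an alarm, while the repair half runs as an automated action or stays in a notebook for human-in-the-loop review.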
Reduce Toil (on both VMs and Kubernetes)
- Disk resize / disk clean – Disk full incidents can lead to widespread outages and data loss that damage customer experience and cost revenue (see the sketch following this list).
- Networking issues – Network-related issues are often hard to diagnose and can lead to a very poor customer experience.
- Intermittent JVM issues – Capture diagnostic information for intermittent issues that are hard to reproduce and debug.
- Server drift – Restore uniformity when configuration files, databases, and data sources on your VMs and containers differ.
- Config drift – Ensure observed state matches desired state on your system configuration, e.g. Kubernetes yaml, Cloud config, etc.
- Memory exhaustion – Running out of memory rapidly degrades customer experience and must be pre-empted.
- Disk failures in kern.log – Detect when a disk has errors or has entirely failed by inspecting the OS’s kern.log. Automatically capture these events and kick off fixes such as recycling the VM.
- Network failures in kern.log – Detect when a network interface has errors or has entirely failed by inspecting the OS’s kern.log. Automatically capture these events and initiate fixes such as recycling the VM.
- Endpoints unreachable – Determine when there are no endpoints behind your Kubernetes service, or when those endpoints have become unreachable.
- Elastic sharding replica management – Determine when your Elasticsearch clusters have too few replicas per shard, and automatically kick off healing.
- Log processing at the edge – Analyze log files on the box to identify issues that cause production incidents, and eliminate costs of centralized logging.
- Kafka data processing lag – Restart slow or broken consumers when systems are falling behind in processing messages through a queue.
- Kafka topic management – When a Kafka topic grows too long, applications may begin to break.
- Processes consuming too many resources – Determine if the system is using too much memory or CPU at the process level.
- Restart CoreDNS service – CoreDNS, the default Kubernetes DNS service, can degrade under heavy query load, causing massive latency.
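As a rough illustration of the disk resize / disk clean item above, the following Python sketch checks utilization on one mount point and lists stale files that a cleanup action might remove. The mount point, the 90% threshold, and the one-week age cutoff are assumptions, and the actual delete is left commented out.

```python
import os
import shutil
import time

MOUNT_POINT = "/var/log"        # assumption: the volume being watched
USAGE_THRESHOLD = 0.90          # assumption: alarm when the disk is 90% full
MAX_AGE_SECONDS = 7 * 86400     # assumption: files older than a week are safe to remove

usage = shutil.disk_usage(MOUNT_POINT)
if usage.used / usage.total >= USAGE_THRESHOLD:
    now = time.time()
    for root, _dirs, files in os.walk(MOUNT_POINT):
        for name in files:
            path = os.path.join(root, name)
            try:
                if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                    print(f"cleanup candidate: {path}")
                    # os.remove(path)  # the real repair action, once the policy is trusted
            except OSError:
                continue  # file vanished or is unreadable; skip it
```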
Optimize Cloud Spend
- Rightsize pod CPU and memory allocations – Automatically reduce pod CPU and/or memory limits that are set too high.
- Reclaim idle hosts – Flag low-utilization virtual machine instances as idle, then terminate them.
- Delete unused EBS volumes / snapshots – Eliminate costs from unused resources (see the sketch following this list).
- Manage data transfer costs – Detect increased data transfer volumes, and pinpoint the reasons.
- Excessive use of on-demand hosts – Determine if converting on-demand VMs to reserved instances would create substantial savings.
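For the cloud-spend items, a sketch along these lines shows how unattached EBS volumes could be found and, once reviewed, deleted. This illustrates the underlying AWS API calls via boto3, not Shoreline’s implementation; the region and the dry-run default are assumptions for the example.

```python
import boto3

REGION = "us-east-1"  # assumption: the region being cleaned up
DRY_RUN = True        # assumption: report only until the output has been reviewed

ec2 = boto3.client("ec2", region_name=REGION)

# Volumes in the "available" state are not attached to any instance.
unused = ec2.describe_volumes(Filters=[{"Name": "status", "Values": ["available"]}])

for vol in unused["Volumes"]:
    print(f"unattached volume {vol['VolumeId']} ({vol['Size']} GiB)")
    if not DRY_RUN:
        ec2.delete_volume(VolumeId=vol["VolumeId"])
```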
Increase Security
- Privileged container check – Flag any container or pod running in privileged mode.
- Users with root access check – Flag any VM or container which has server processes running as a user with root permissions.
- Open port check – Ports can easily be opened unintentionally in a development environment, especially port 22 for SSH and port 3389 for RDP (see the sketch following this list).
- Connections from unexpected ports – Detect network connections on ports that are not found on an approved list.
- Process list check – Ensure the correct server processes are running, since processes sometimes die silently or old versions are left running.
- Detect cryptocurrency mining operations – Unauthorized cryptocurrency miners must be stopped from abusing free tiers of cloud service providers.
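As a simple illustration of the open port check, the sketch below probes a short list of sensitive ports on a host. The host address and the port list are assumptions for the example; in an Op Pack, the result would typically feed an alarm.

```python
import socket

HOST = "127.0.0.1"                          # assumption: check the local node
SENSITIVE_PORTS = {22: "SSH", 3389: "RDP"}  # the ports called out above

for port, service in SENSITIVE_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        # connect_ex returns 0 when the connection succeeds, i.e. the port is open.
        if sock.connect_ex((HOST, port)) == 0:
            print(f"WARNING: port {port} ({service}) is open on {HOST}")
```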
Avoid Major Outages
- Certificate rotation – Sooner or later, every company gets bitten by an expired certificate, and when that happens, it can cause a catastrophic outage (see the sketch following this list).
- DNS lag – Trigger rolling restarts of the DNS servers when they are responding slowly and causing widespread system issues.
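To show the shape of the certificate check behind certificate rotation, here is a minimal Python sketch that reads a server’s TLS certificate and warns when it is close to expiry. The hostname and the 30-day warning window are assumptions; rotation itself would be a separate repair action.

```python
import socket
import ssl
from datetime import datetime, timezone

HOST = "example.com"  # assumption: the endpoint whose certificate is monitored
WARN_DAYS = 30        # assumption: alert when fewer than 30 days remain

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        cert = tls_sock.getpeercert()

# Parse the certificate's notAfter field and compute the remaining lifetime.
expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
days_left = (expires - datetime.now(timezone.utc)).days
if days_left < WARN_DAYS:
    print(f"certificate for {HOST} expires in {days_left} days")
```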
Companies around the world rely on Shoreline’s incident automation platform to resolve common incidents in production, broaden the team that can safely repair incidents, and perform live site debugging of new incidents. Pairing this Op Pack solutions content with the Shoreline platform accelerates time to value and increases ROI for Shoreline customers.