DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality.
DevOps is complementary with Agile software development; several DevOps aspects came from the Agile methodology.
There is no how-to guide when it comes to DevOps. To be a well-rounded DevOps engineer, you need hands-on experience in both development and operations.
Once a DevOps professional has this experience, they will never again say that performance is not their concern, or write code containing empty try/catch blocks.
Their operations experience will have shown them the major headaches such issues cause in a production environment.

At Sharks4IT we believe Kubernetes is here to stay, and that it makes the world of IT more manageable for developers and operations alike.
We have worked extensively with Kubernetes in its different varieties: managed distributions such as OpenShift, vanilla on-premise installations, and Kubernetes in the cloud. In all, we have worked with Kubernetes for 6+ years.
We have experience running Kubernetes clusters from an administrative perspective, and hands-on experience with CI/CD and automated deployment into Kubernetes clusters using tools such as Ansible and Terraform as well as a variety of shell scripts.
We write container images using Docker and create Deployments, ReplicaSets, and much more, using tools such as Helm or plain Kubernetes admin commands.
We have worked with cluster setups where high availability and fast recovery times were required, at different scales and with different approaches.
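As a minimal sketch of what this kind of work looks like (the namespace, image, and resource values below are placeholders, not a customer setup), a Deployment can be applied declaratively with plain admin commands:

```bash
# Create a throwaway namespace idempotently.
kubectl create namespace demo --dry-run=client -o yaml | kubectl apply -f -

# Apply a Deployment; Kubernetes creates the underlying ReplicaSet for us.
cat <<'EOF' | kubectl apply -n demo -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                  # three replicas for basic availability
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25    # placeholder image
          resources:
            requests: {cpu: 100m, memory: 128Mi}
            limits: {cpu: 250m, memory: 256Mi}
EOF

# Wait for the rollout to finish.
kubectl rollout status deployment/demo-app -n demo
```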
If you have Kubernetes-related tasks, please contact us.
Kubernetes

At Sharks4IT we see Ansible as the glue that holds infrastructure as code together, and we have been using it for 5+ years.
Ansible is very good at managing infrastructure as code, and it can also provide self-service solutions and provision Kubernetes infrastructure through its k8s modules.
We have experience using Ansible for infrastructure testing, such as VM server sizing validation and network configuration verification. This has not been limited to on-premise environments: we have also used Ansible for infrastructure testing of Confluent Cloud, where the solution used Ansible and Kubernetes resources to set up the initial testing mechanisms.
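As an illustration of what such self-service provisioning can look like (playbook and namespace names here are hypothetical, and we assume the kubernetes.core collection is installed), a playbook using the k8s module might be run like this:

```bash
# Write a minimal playbook that provisions a namespace declaratively.
cat <<'EOF' > provision-namespace.yml
- name: Provision a team namespace
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Ensure the namespace exists
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: team-alpha   # placeholder team name
EOF

ansible-playbook provision-namespace.yml
```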
If you have Ansible-related tasks, please contact us.
Ansible

We have worked with OpenShift 3.11 for 4 years as administrators managing the platform, including OpenShift patching using Red Hat Ansible playbooks.
We also handled capacity management, capacity calculations, and cost analysis. To support this, we made sure application teams were forced to consider their resource needs by setting the default values for CPU and memory so low that their solutions would not start without explicit settings. The reason is that Kubernetes can manage the cluster better when quotas and limits have been specified; otherwise you risk application pods starting up on already overloaded nodes.
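The mechanism is easiest to show with a sketch. Assuming a namespace-level LimitRange (the values here are illustrative), the deliberately low defaults only bite teams that have not declared their own requests and limits:

```bash
cat <<'EOF' | kubectl apply -n team-alpha -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: force-explicit-sizing
spec:
  limits:
    - type: Container
      defaultRequest:    # used when a container declares no request
        cpu: 10m
        memory: 16Mi
      default:           # used when a container declares no limit
        cpu: 20m
        memory: 32Mi
EOF
```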
The setup required high availability and security. We configured 3 masters and 4 infra nodes for the higher environments to keep outages to a minimum. The clusters ended up having 30-40 worker nodes.
We also introduced a 3-pod policy: if an application running on OpenShift wanted guaranteed HA, it had to run a minimum of 3 pods. We complemented this with affinity and anti-affinity rules ensuring that not all pods would run on the same nodes. This way, if operations were patching and OpenShift was moving pods around, at least 1 pod would still be running.
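A sketch of the anti-affinity part of this policy (names and labels are placeholders): a preferred rule that spreads replicas across nodes, so draining one node during patching never takes down all copies at once.

```bash
# Strategic-merge patch adding anti-affinity to an existing Deployment.
cat <<'EOF' > anti-affinity-patch.yaml
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname   # spread across nodes
                labelSelector:
                  matchLabels:
                    app: demo-app                     # placeholder label
EOF

kubectl patch deployment demo-app -n team-alpha --patch-file anti-affinity-patch.yaml
```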
The setup also required exporting logs and metrics to an external system; in this case ELK was chosen. Of the many possible solutions, we installed Filebeat and Metricbeat as DaemonSets, making sure we collected logs and metrics from every node in the cluster. We also used the Metricbeat Prometheus module to scrape and enrich metric data so we could present cost information in ELK based on each project's resource usage and consumption.
Migrating from 3.11 to 4.x was also part of the task: instead of running OpenShift on-premise, we would move to a managed solution. We analyzed different approaches to the upgrade, because going from 3.11 to 4 is not a standard upgrade but a lift-and-shift operation.
If you have OpenShift-related tasks, please contact us.
OpenShift

Doing DevOps work in Kubernetes sometimes requires debugging tools. However, container images do not always ship with the tools you need, and to debug connection issues, certificate issues, etc., it's a good idea to be able to do so from within a container.
That is why we wrote a devopshelper Docker image that we use to debug situations that arise when working with Kubernetes.
This image can be deployed into any namespace alongside the containers that are having the issues. This way our devopshelper can test a variety of situations using the tools in the image.
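In practice that usage looks roughly like this (the registry path is a placeholder for wherever the devopshelper image is hosted, and we assume tools like curl and nslookup are baked into the image):

```bash
# Start the helper pod in the namespace that has the problem.
kubectl run devopshelper -n team-alpha \
  --image=registry.example.com/devopshelper:latest \
  --restart=Never -- sleep infinity

# Debug DNS and TLS from inside the cluster network.
kubectl exec -n team-alpha devopshelper -- nslookup backend-svc
kubectl exec -n team-alpha devopshelper -- curl -sv https://backend-svc:8443/healthz

# Clean up when done.
kubectl delete pod devopshelper -n team-alpha
```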
Besides the devopshelper, we have also extended existing images that are mostly used on the operations side of things.
We have, however, also assisted in writing various Java-related Docker images.
If you have Docker-related tasks, please contact us.
Docker

Sharks4IT has been working with automated deployment for over 18 years.
What we’ve learned is that people are simply not built to make manual changes to a production environment at 2 AM by following a step-by-step guide. This approach is destined to fail — and it's one of the fastest paths to employee burnout.
We have a saying:
“If you have to do something once, that’s fine. If you have to do it twice — automate it.”
You might say, “But I can do it much faster manually than if I had to automate it.”
Yes, that may be true the first time. But you can’t guarantee consistent results when repeating the same task manually. And when something goes wrong, finding the root cause is often ten times harder.
That’s why investing time in automation always pays off in the long run.
We’ve built a wide range of automation solutions using various languages and tools, including Ant, Python/Jython, Ansible, Terraform, and a variety of shell and Bash scripts.
On this page, you’ll find a more detailed explanation of the work we’ve done and the technologies we’ve worked with.
If you have automated deployment related tasks, please contact us.
Automated Deployment

Sharks4IT has been working with Python for over 8 years.
We helped develop a Python-based deployment engine at a time when tools like Ansible and Terraform didn’t yet exist. This engine was capable of deploying a wide range of middleware application platforms, including WebSphere, WebSphere Clusters, WebLogic, Tomcat, JBoss, and many more.
The core principle behind the engine was simple: everything should be created from scratch to ensure that nothing was missing or out of place. What was in version control was considered the single source of truth.
This deployment engine was built before the era of Azure or AWS, meaning all deployments were performed on virtual machines running either Windows or Linux.
Quality Assurance (QA) and Quality Control (QC) were built into the process to ensure consistency and reliability. Each middleware server was redeployed every three hours to verify that everything worked as expected.
The idea for the engine originated from working with Ant, an XML-based build scripting language. However, due to Ant's slow deployment times, we replaced it with our own Python-based deployment engine.
Over time, the complexity and maintenance responsibility grew — but the engine stood the test of time.
If you have Python-related tasks, please contact us.
Python

Sharks4IT has been working with Terraform for about two years and has contributed to setting up self-service and infrastructure deployment for the Confluent Cloud Platform.
We’ve used Confluent Cloud providers and created reusable child module configurations, enabling the definition of ready-to-use Kafka clusters and environments tailored to the needs of individual departments.
When working in the cloud, Terraform is a powerful tool for maintaining infrastructure as code. Getting started is often quick, thanks to the wide range of community modules already available.
The real challenge, however, lies in configuring your Terraform repository to suit your specific needs — and in defining the level of code reuse that aligns with your infrastructure strategy.
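A minimal sketch of that reuse pattern (the module path and variables are hypothetical, not the actual Confluent Cloud setup): a root configuration that stamps out one cluster per department from a shared child module.

```bash
cat <<'EOF' > main.tf
module "kafka_team_alpha" {
  source       = "./modules/kafka-cluster"   # hypothetical child module
  environment  = "team-alpha"
  cluster_size = "small"
}

module "kafka_team_beta" {
  source       = "./modules/kafka-cluster"
  environment  = "team-beta"
  cluster_size = "large"
}
EOF

terraform init && terraform plan   # review before applying
```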
If you have Terraform-related tasks, please contact us.
Terraform

When working with Kubernetes and containers, it's essential to have a way to manage and maintain your Kubernetes configuration files.
For simple tests, running kubectl commands is usually sufficient. But when it comes to larger-scale deployments, you need greater control and a reliable method for managing the state of your Kubernetes resources.
This is where Helm comes in. Helm is a powerful tool designed to manage Kubernetes objects using templates and variables. One of its key strengths is that it tracks the state of your deployments, allowing you to install, upgrade, roll back, and delete applications in your Kubernetes cluster with ease.
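The release lifecycle looks like this in practice (chart, release, and namespace names are placeholders):

```bash
helm install my-app ./my-app-chart -n demo --create-namespace
helm upgrade my-app ./my-app-chart -n demo --set replicaCount=3
helm history my-app -n demo        # inspect the tracked revisions
helm rollback my-app 1 -n demo     # return to a previous revision
helm uninstall my-app -n demo
```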
At Sharks4IT, we highly recommend using Helm for Kubernetes management. We've been using it for over three years and have seen firsthand how it simplifies complex deployments and improves consistency across environments.
If you have Helm-related tasks, please contact us.
Helm

When it comes to CI/CD, our tool of choice has primarily been Jenkins.
We’ve spent considerable time developing Jenkins pipelines capable of running Ansible playbooks, Python scripts, and Kubernetes-related tasks.
However, as we’ve increasingly worked with Kubernetes, we’ve also explored alternatives such as ArgoCD.
The key difference between Jenkins and ArgoCD lies in how they interact with Kubernetes. ArgoCD can actively monitor and manage the state of everything deployed in a Kubernetes cluster. It enforces the principle that what’s in version control is the single source of truth. If something is changed manually in the cluster, ArgoCD can detect the drift and automatically redeploy the correct state from version control.
From a DevOps perspective, ArgoCD significantly simplifies the management of Kubernetes infrastructure and container deployments, making it a strong complement — or even alternative — to traditional CI/CD tools in cloud-native environments.
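As a sketch of how that principle is expressed (the repository URL and paths are placeholders), an Argo CD Application with automated sync both prunes resources removed from Git and reverts manual drift:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/demo/deploy.git   # placeholder repo
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes in the cluster
EOF
```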
If you have CI/CD-related tasks, please contact us.
CI/CD

When it comes to DevOps, one discipline that is often overlooked is observability.
However, the insights it provides are essential for understanding the root causes of failures — and for detecting potential issues before they impact the production environment.
At Sharks4IT, we see observability as a critical component of any DevOps strategy. Platforms like ELK (Elasticsearch, Logstash, Kibana) offer a well-rounded solution for managing and visualizing observability data. We've been involved in implementing various Beats modules to meet different observability requirements across platforms.
One project involved using Metricbeat to extract metrics from Prometheus running in OpenShift. The goal was to gather capacity usage data and design dashboards in Kibana, enabling each team to visualize the total cost of running their projects in OpenShift.
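A simplified sketch of the Metricbeat side of that setup (the hosts, the federate query, and the Elasticsearch address are illustrative, not the customer configuration):

```bash
cat <<'EOF' > metricbeat.yml
metricbeat.modules:
  - module: prometheus
    metricsets: ["collector"]
    period: 30s
    hosts: ["prometheus.openshift-monitoring.svc:9090"]
    metrics_path: /federate
    query:
      'match[]': '{__name__=~"container_.*"}'   # container usage metrics

output.elasticsearch:
  hosts: ["https://elk.example.com:9200"]
EOF
```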
We've also deployed Metricbeat and Filebeat on Kubernetes clusters to collect performance metrics and log data, sending it to ELK for centralized analysis.
Another example was implementing a REST Proxy solution for Confluent Cloud. As part of this, we needed to collect and process metrics from Confluent Kafka instances running both on-premises and in the cloud. For this solution, we used Logstash to collect and enrich the metrics, which were then visualized in project-specific dashboards to monitor performance and identify potential bottlenecks in message processing.
If you have ELK-related tasks, please contact us.
ELK

At Sharks4IT, we ensure that all work delivered to our customers is thoroughly documented and properly handed over to the relevant department teams.
We take pride in producing high-quality documentation and writing clear, actionable how-to guides. Our belief is simple: when a new team member joins a department, there should already be well-prepared documentation in place to help them get started quickly and independently.
Rather than relying solely on existing team members to find time for onboarding and knowledge transfer, we advocate for a structured approach using how-to guides and runbooks that walk through procedures and processes step by step.
Another highly effective method is recording knowledge-sharing sessions within the team, which can serve as valuable reference material over time.
These practices help secure and retain critical knowledge within the department — and at Sharks4IT, we are committed to ensuring that this foundation is in place for every project we support.
If you have tasks related to how-to guides, please contact us.
How-To Guides