DevOps 2023: With these 5 golden rules, your project will also be a success!

DevOps is a methodology for handling IT projects holistically – so far, so good. The concept is familiar, and if it isn't, our explainer article, linked right here, will refresh your knowledge. What actually makes for good DevOps, however, is less widely known. There are many providers, many projects, many clients – and in the end, someone wins the contract. But why? What do they do differently, beyond splitting the team into Development and Operations? What do they do better within this defined process chain, and why do their projects turn out better? That's what we'll look at together today. We have brought experts on board for this: Josip from Cognizant Mobility's DevOps team and Alex Smith from Cognizant. We wish you a lot of fun with our 5 golden DevOps rules, which will make your projects better and your contracts more lucrative – for all stakeholders.



DevOps – fundamental errors in existing processes

Of course – you immediately want to know what the 5 exciting tips are, the golden rules, the salt in the soup, the big secret. Understandable. But in order to understand what is right, we also need to know what can go wrong – even in companies and DevOps teams that otherwise do (almost) everything right.

Fundamental to understanding is the idea that DevOps consists on the one hand of a development team, the "Devs", and on the other of the team responsible for operations, the "Ops". The latter ensure, for example, that software is rolled out, support requests are resolved, and optimizations are developed and implemented. The developers, in turn, well, develop – and not always to uniform standards. Where developers write complex algorithms that work but are only moderately documented, friction arises between two otherwise closely cooperating parts of the DevOps team. If, for example, a developer builds part of the final software but has not defined a fixed process for dealing with it, the operations team will struggle to communicate a clearly defined, process-based solution when a support request comes in from, say, a user – costing time, efficiency, and ultimately resources, often at the expense of the end user, whom we ideally want to satisfy. Thus, even in experienced and successful DevOps teams, sources of error can arise that not only cause extra effort and dilute the planned ROI (more project days, more personnel, more resources), but can also generate consequential errors when legacy projects are migrated to modern infrastructure. This brings us to our first tip:

DevOps: Follow the pipeline!

A strong USP, therefore, is well-defined process pipelines: pipelines in which it is specified from start to finish how the developed software is to be used, in which errors are documented in detail (so that actions can be derived from them), and which take the user's point of view into account. This is an elementary tip, and one that is not always followed even in experienced and large DevOps teams. Design your processes to be clean and unambiguous, leaving little room for interpretation, so that the operations team can resolve emerging issues efficiently and repeatably. This is the only way to quickly identify errors and optimize processes. Put an end to the individual handling of error tickets; clear the way for clean process design from which conclusions can be drawn. Divide your errors into levels:

  • Level 1 for clear, simple queries (“Where can I find my login? How can I change it?”) that the operations team can easily resolve.
  • Level 2 for technical requests involving servers, publishing, or infrastructure, which the Operations team can sometimes resolve on its own or in conjunction with the Dev team as needed.
  • Level 3 for points that don’t have a clean process yet – this goes to the developers to solve the problem, document it cleanly, and derive a process from it, which in turn will guide future issues.
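The three-level triage above can be sketched as a simple routing function. This is a minimal illustration of the idea, not a real ticketing-tool integration; all names (`Ticket`, `route_ticket`, the team labels) are hypothetical.

```python
# Minimal sketch of the three-level ticket triage described above.
# All names are illustrative, not part of any specific ticketing tool.
from dataclasses import dataclass

@dataclass
class Ticket:
    summary: str
    has_known_process: bool       # is there a documented process for this issue?
    touches_infrastructure: bool  # servers, publishing, deployment?

def route_ticket(ticket: Ticket) -> str:
    """Return which team handles the ticket, per the level scheme above."""
    if not ticket.has_known_process:
        # Level 3: no clean process yet -> developers solve it, document it,
        # and derive a reusable process for future issues.
        return "level-3/dev-team"
    if ticket.touches_infrastructure:
        # Level 2: technical request -> operations, with devs pulled in as needed.
        return "level-2/ops-team"
    # Level 1: simple query -> operations resolves it directly.
    return "level-1/ops-team"
```

The point of the sketch: routing is decided by the presence of a documented process, not by whoever happens to pick up the ticket – so a "Where is my login?" query goes straight to operations, while anything without a process flows back to the developers to become one.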

The days when capable employees merely wait for incoming tickets, then meticulously comb through them and similar past tickets for solutions, are definitely over.

DevOps: Keep an eye on the future!

Who doesn't know this? A great customer project with a large scope, and the stakeholders enthusiastically show you the server room: huge physical machines with 16 cores and 128 GB of RAM each – impressive. All users connect to these machines; capacities are used, left idle, exhausted. Then more users than expected storm the system, and even 128 GB of RAM brings the server to its knees. The result: stuttering software, long loading times, dissatisfaction. One solution could be to add server capacity – buy machines or rent more space in the data center. And then the monitoring shows that the servers sit practically empty at 3 o'clock at night. The carbon footprint – and the corporate wallet – take the hit either way; on-premise servers draw power and keep working even when the rush has subsided. And the problem? It isn't even permanently solved, it has just become more expensive: the new hardware can now handle 15,000 users instead of the 10,000 that just crippled the server. But when 30,000 arrive, the team and the client face the same problem again.

Yes, it’s true: migrating to the cloud can save time and resources, i.e. cash. However, a DevOps team that is not familiar with the infrastructure can also quickly create a hole in the budget.

As part of the holistic approach of strong DevOps teams, it is therefore essential to look ahead, and this clearly shows that “the cloud” (i.e., generally the top dogs AWS and Azure) is an inevitable solution approach. However, it is important to proceed smartly rather than quickly, as Paul Hammond of Cognizant also explains in his article on smart cloud migration. After all, the infrastructure, the entire system architecture of the cloud changes enormously, and booking the wrong or excess power in the cloud happens quickly – and puts ROI on the sidelines. Many customers are surprised by the costs that a cloud can incur.

However, if the migration is based on a strategy that is tailored to the needs of the project, it can solve urgent problems. Computing capacity can be booked on demand – at night at 3 we don’t need servers, at noon at 12 we need twenty? No problem in an increasingly modular and customized cloud. Instead of buying 100 servers that are only needed in extreme cases, required computing power can be booked that always kicks in when it’s needed – and doesn’t consume resources the rest of the time, as well as often being able to operate in a climate-neutral manner (which can easily be calculated into a track record).
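The economics of "provision for the peak" versus "pay per hour of actual demand" can be made concrete with a back-of-the-envelope calculation. The demand profile and prices below are hypothetical, chosen only to mirror the "3 a.m. versus noon" example above.

```python
# Illustrative comparison: fixed on-premise capacity (sized for the peak,
# paid around the clock) vs. on-demand cloud capacity (paid per hour used).
# All numbers are made up for the sake of the example.

def servers_needed(hour: int) -> int:
    """Hypothetical demand profile: almost idle at night, peak around noon."""
    if 0 <= hour < 6:
        return 1      # 3 a.m.: practically no load
    if 11 <= hour < 14:
        return 20     # noon peak
    return 8          # normal daytime load

def fixed_cost(hours: range, price_per_server_hour: float) -> float:
    # On-premise: must provision for the peak and pay for it every hour.
    peak = max(servers_needed(h) for h in hours)
    return peak * len(hours) * price_per_server_hour

def on_demand_cost(hours: range, price_per_server_hour: float) -> float:
    # Cloud: pay only for what each hour actually needs.
    return sum(servers_needed(h) for h in hours) * price_per_server_hour

day = range(24)
print(fixed_cost(day, 1.0))      # 20 servers * 24 h * 1.0 = 480.0
print(on_demand_cost(day, 1.0))  # 6*1 + 3*20 + 15*8 = 186.0
```

Under these toy numbers, on-demand booking costs well under half of peak-sized provisioning – and, just as the text notes, the comparison gets even more dramatic once the peak grows and the fixed fleet has to be sized up again.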

Sure – DevOps teams also need to consider any obstacles. A lock-in effect must be avoided, in which companies become dependent on individual providers; phases and concepts are needed, even for the time after the migration, when everyday DevOps life has returned. Processes, baby, don’t forget – or take a step back to tip 1.

DevOps – There is no team without a team

Of course, it's no secret that teams need to be strengthened. Treating employees well, paying them fairly, communicating on an equal footing, asking for their expertise – all bread and butter. But the quality of the work itself also matters, especially when it comes to future-proofing it, and that is the employer's task: as the industry changes – one example being the migration from physical hardware to the cloud – new challenges arise. Those using only on-premise hardware don't need to worry about Docker or Kubernetes, or about deploying these technologies competently. Those who want to move to the cloud, however, should have their DevOps teams trained in them. If you want to work in a modern way, leave traditional system architectures behind, and create real long-term business value, there is no way around keeping the knowledge in your team up to date. A seemingly simple tip, yet many DevOps teams still work in purely physical environments and are sometimes overwhelmed by new tasks. While this, too, can almost always be traced back to a lack of processes and strategies, activating, educating and motivating employees is just as elementary to DevOps. Every part of the process chain must be highly functional; otherwise, the chain will always be only as strong as its weakest link.

Know your infrastructure – and make it secure with DevSecOps!

Before the project is after the project – and this is not just about economic aspects such as follow-up orders, or about lessons learned. To bring a project to a successful conclusion and to stand out in the DevOps landscape, knowing the infrastructure the team will be working with is essential. The entire development environment must not only be as productive as possible: it must also be as secure as possible. The term "DevSecOps" (combining "Development", "Security" and "Operations") is therefore on the rise and is intended to help secure testing, integration and similarly important sub-areas end to end, right in the pipeline. To be sure (no pun intended): testing directly in the build will continue to be practiced and may have its justification, but as stated above, integrating testing into the process pipeline is not only more effective, it is also safer. Problems can be identified earlier and solved within the DevOps landscape and the defined processes – saving time, nerves and costs.
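"Shifting security left" into the pipeline can be pictured as explicit security stages that every build must pass before deployment is even reachable. The sketch below is a toy pipeline runner; the stage names and the boolean results are illustrative and not tied to any specific CI/CD product.

```python
# Hedged sketch of a process pipeline with built-in security gates:
# a build must pass every stage, in order, before it may be deployed.
# Stage names are hypothetical examples, not a real CI/CD configuration.

PIPELINE = ["build", "unit-test", "dependency-scan", "static-analysis",
            "integration-test", "deploy"]

SECURITY_STAGES = {"dependency-scan", "static-analysis"}

def run_pipeline(stage_results: dict[str, bool]) -> str:
    """Walk the pipeline in order; stop at the first failing stage."""
    for stage in PIPELINE:
        if not stage_results.get(stage, False):
            kind = "security gate" if stage in SECURITY_STAGES else "stage"
            return f"blocked at {kind}: {stage}"
    return "released"
```

The design point is that security checks sit in the same ordered pipeline as functional tests: a build with a failing dependency scan never reaches the deploy stage, so the problem surfaces early, inside the defined process, rather than in production.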

Regarding infrastructure, the story reads similarly: even where the output is fundamentally different and deals with infrastructure patterns and libraries, or with server builds on which dedicated testing is sometimes done, procedural templates can still be created. So if additional developers are working on a dedicated (server) landscape outside the process pipeline, shared resources and processes can still be used and designed securely. Optimizing such procedures requires deep knowledge of infrastructure patterns: should the project migrate to a hyperscaler environment such as Azure or AWS (and risk lock-in), or would building it into an independent system landscape with a tool such as Terraform be the solution? Decisions like these can optimize efficiency, and thus cost, in the long term.

DevSecOps – more than a label, but not a new development branch: why DevSecOps demonstrates foresight in many areas, is often necessary, but is usually also a budget question

A word about the buzzword "DevSecOps": creating a secure development environment can be costly and can sometimes make free-form testing more difficult. If the tooling is integrated securely from the start, however, the providers' applications are geared to this from day one (providers such as IBM offer dedicated service packages for it). DevSecOps is therefore not a completely new approach to security that brands traditional DevOps projects as "insecure". It is more of a label, a standard that starts in the code, ends in complete pipeline protection, and tips the scales. If the entire toolchain and test landscape are fully secured, this contributes to a secure and fully tested end product – but it is also usually a question of budget and the client's specific vision. It should also be emphasized that our article refers to the automotive industry. In areas such as finance or healthcare, security standards are already so high even in classic DevOps projects that no separate DevSecOps is required. The automotive industry differs in that regulations such as UN ECE R155 (and R156) demand – and thus encourage – that security aspects be paramount from day 1, so development to DevSecOps standards is preferable.

The best tip of all: Think about the Core Ability of your solution from the beginning – and move it around.

There it is again: the best tips are often the very simple ones, and isn't it amazing how often exactly these simple basic rules are disregarded? Even before the first letter of code is typed, it must of course be clear what product, what solution, should stand at the end of the project. But what should it be able to do? What is its special strength, what should it handle particularly well, and how do we achieve that – every time, for every problem, in every application?

In this respect, across the many Cognizant Mobility projects reported by our contacts from the specialist departments, we have observed a lock-in effect of its own – one that does not only relate to the technical project landscape and cannot simply be answered with "open source is the solution". Rather, it is a kind of tunnel vision: a focus on solving the next small step, building the next pipeline, writing the next piece of code until the next sprint meeting. DevOps developers are often fully concentrated on the next piece of the application, so the big picture – the ability to move the end product – can slip out of sight. For any team, whether experienced or new to the project world, it is therefore essential to keep asking, together: What are we building, and for what? A new application, a new container? Can we reuse this, possibly even in other projects, keyword Low Code / No Code? And how can we make the best use of the finished app? Can we, for example, build and develop it completely ready to go, and will it work just as well when we move it? Building an application in a specific environment and later being able to shift it to AWS or GCP is a core competency. Can we program in Kubernetes and build our systemic eco-system around it? Answering these questions should be one of the core skills of dynamic development teams, in order to keep projects effective in the long term and thus generate unique selling points in a growing market. And this is not an uncommon "what if" scenario: applications move, frequently, between different vendors and system landscapes.
The most common move is certainly from on-premise to the cloud, but the reverse is also known, as are completely self-built open-source solutions and hybrid projects – which is why our final golden tip is that the team should know the platform-independent solution capability of its application or product.
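One common way to keep an application "movable" is to let it talk to an abstract interface while the concrete backend is chosen by configuration, so a move between on-premise and a cloud provider is a config change rather than a rewrite. The sketch below illustrates that idea with a toy storage abstraction; all class names, keys, and URL schemes are hypothetical, and no real I/O happens.

```python
# Illustrative sketch of platform-independent design: the application depends
# only on the abstract BlobStore interface; the concrete backend is selected
# by configuration. All names and URL schemes are made up for the example.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> str:
        """Store the data and return a location identifier."""

class OnPremStore(BlobStore):
    def put(self, key: str, data: bytes) -> str:
        return f"nfs://storage/{key}"       # placeholder, no real I/O

class S3Store(BlobStore):
    def put(self, key: str, data: bytes) -> str:
        return f"s3://app-bucket/{key}"     # placeholder, no real I/O

def make_store(provider: str) -> BlobStore:
    """Pick the backend from configuration - the only provider-aware spot."""
    backends = {"onprem": OnPremStore, "aws": S3Store}
    return backends[provider]()
```

Application code written against `BlobStore` does not change when the project moves; adding another provider means adding one class and one dictionary entry, which is exactly the kind of portability the tip above asks teams to plan for from day one.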

DevOps – The Bottom Line: Let’s Talk

With our 5 golden rules for successful DevOps projects, we open up a view of an extensive and complex development architecture and give you food for thought to apply to your own projects. The order seemed reasonable to us; your priorities, however, may differ. Maybe all the tips are useful to you, or maybe you just want to tweak a few things. Large project teams in particular are often somewhat entrenched in existing methods and processes, and here it is the cautious hand that achieves success little by little, rather than hastily changing many things at once. DevOps is an organism that must remain intrinsically healthy in order to function, and as in any efficient eco-system, meaningful evolution happens slowly.

Feel free to contact us via the contact form on the blog or our company website, write us a nice comment on LinkedIn or send us an email if you have questions, suggestions – or would like to talk to us about your next Dev(Sec)Ops project: We look forward to hearing from you!