Building a DevOps Culture


The Evolution

A decade or so after people first began asking "What is DevOps?", a common understanding has formed: DevOps is a cultural movement paired with a set of software development practices that enable rapid development.

As a technical process, a set of tools has emerged that lets teams work more rapidly and proficiently. But tools alone are not sufficient to create the collaborative environment that companies now refer to as DevOps. Creating such an environment is a major challenge; it involves evaluating and rethinking how people view their teams, the business, and the customers.

Many of us may have been "doing DevOps before DevOps was a thing." Putting a new set of tools in place is far simpler than changing organizational culture. DevOps represents a departure from traditional software development practices for just this reason.

Organizational Culture – “The Legacy”

The idea of organizational culture emerged in the middle of the 20th century. Defining an organization's culture goes beyond charismatic leadership and employee motivation. The definition includes the fundamental values, beliefs, and principles that govern the organization's management style, procedures, and practices. These aspects of culture in turn shape the kind of employees the organization attracts.

Groups of people build a culture by sharing values and behaviors. How an organization rewards those behaviors, and how well its members uphold those values, shapes the culture. It also affects how members of the culture react to attempts to change it.

Businesses have become progressively more aware of how workplace culture creates and maintains an organization's standards. Top organizations have cultures of consistent productivity, motivated employees, and high retention, and are positive about adjusting to changes in their market. But other organizations remain unaware of how their workplaces make, or break, their culture.

DevOps Culture – “Out of Comfort Zone”

DevOps is as much about culture as it is about tools, and culture is all about people. No two groups of people are guaranteed to create the same sort of culture under similar circumstances. So to talk about a cultural movement in absolute terms is disingenuous. Implementing a prescribed toolchain won’t magically turn an organization into a DevOps team. Using DevOps-friendly tools and workflows can help an organization work in a more DevOps manner, but creating a culture that is supportive of the ideals of the DevOps movement is crucial.

Implementing a DevOps culture in an organization leads to increased collaboration and teams that are more in sync. If an organization currently works in a siloed structure, changing its existing culture to embrace DevOps is often more difficult. Here are four practices to help build a DevOps culture within an organization:

1. Trust

After many years in which development and operations teams were kept separate from one another, there is naturally a lack of trust and poor communication between them. Before you can change an organization's culture, everyone needs to agree and understand that they're on the same team. Although trust isn't built overnight, it's important for both teams to break old habits and learn to work together.

Now that developers, QA, and sysadmins all take part in deploying new features as part of a continuous process, each group needs to trust the others to deliver quality code on time.

2. Understanding Motivations

Since developers, QA, and sysadmins are all on the same team now, take the time to understand the thought processes and motivations of your team members. Listen to them, acknowledge their ideas, and work together to come up with creative solutions.

When a team is under pressure to complete a software project, it can be difficult to understand why someone is reacting or thinking differently than you are. Conflict between teams often occurs when people don't understand each other. Practices like unit testing become critical in a DevOps shop: when engineers don't do their part in testing a solution, QA's frustrations become the whole team's problem.
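As a minimal, hypothetical illustration of engineers "doing their part in testing," plain-assert unit tests like these can run on every commit, so problems surface before the code ever reaches QA (the function and its rules are invented for this example):

```python
# Hypothetical function under test, invented for illustration.
def apply_discount(price, percent):
    """Apply a percentage discount; reject nonsensical percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Plain-assert unit tests a CI pipeline can run on every commit.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(50.0, 150)
except ValueError:
    pass  # invalid input is correctly rejected
else:
    raise AssertionError("invalid percent should be rejected")
```

Tests this small are cheap to write, and they shift the burden of catching regressions from QA back to the engineer who made the change.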

3. Avoid Blaming Each Other

When teams work independently and something goes wrong, each team often places blame on the other for why the project didn’t go as planned. However, there’s no place for blame in a DevOps culture.

While it’s vital to understand what went wrong so it can be avoided in future, placing blame on individuals never helps the team move forward. DevOps practices create a new level of visibility that can break down barriers, but they also expose bottlenecks, blockers, and problem sources (like breaking the build) more clearly. Teams have to be prepared and willing to support each other in correcting problems rather than looking for ways to assign blame.

4. Help Teams Understand ‘Why’

People are naturally resistant to changes that push them out of their comfort zone, so it’s extremely important to help teams understand why the organization is moving toward a DevOps culture. Sit down with your employees and show them how it leads to better, more stable software.

For example, a developer might prefer to manage source code the same old way, but DevOps only works when changes happen, such as converting to a Git-based workflow. If developers don’t know why they need to use that workflow, they might not see the impact until it creates bigger problems downstream.

Demonstrating that change is necessary requires some homework. It’s fair to change your organization because DevOps is emerging and can help you deliver better code more quickly, but you’re looking at months of work, new tools to learn and implement, and teams to restructure. These costs must be outweighed by the benefits, so you have to be able to put real value on your processes.

Articulating upfront what your goals are will help you with other phases of your DevOps roll out. Some common measurable goals are:

  • Reduce time-to-market for new features.
  • Increase the overall availability of the product.
  • Reduce the time to deploy.
  • Increase the percentage of defects detected before go-live.
  • Make more efficient use of hardware infrastructure.
  • Reduce the number of production tickets.
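These goals only help if they are measurable. As a rough sketch (the field names and sample data are invented), ordinary deployment records already contain enough information to track two of them, time to deploy and production tickets:

```python
from datetime import datetime

# Hypothetical deployment records: when the change was committed, when it
# reached production, and whether it caused a production ticket.
deployments = [
    {"committed": datetime(2017, 5, 1, 9, 0),
     "deployed": datetime(2017, 5, 3, 14, 0), "caused_ticket": False},
    {"committed": datetime(2017, 5, 4, 10, 0),
     "deployed": datetime(2017, 5, 5, 11, 0), "caused_ticket": True},
    {"committed": datetime(2017, 5, 8, 9, 30),
     "deployed": datetime(2017, 5, 9, 9, 30), "caused_ticket": False},
]

# Goal: "reduce the time to deploy" -- average lead time, commit to production.
lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600
              for d in deployments]
avg_lead_time_hours = sum(lead_times) / len(lead_times)

# Goal: "reduce production tickets" -- fraction of deploys that caused one.
ticket_rate = sum(d["caused_ticket"] for d in deployments) / len(deployments)

print(f"Average lead time: {avg_lead_time_hours:.1f} hours")
print(f"Deploys causing tickets: {ticket_rate:.0%}")
```

A baseline number captured before the rollout gives you something concrete to compare against months later.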

Building a DevOps culture within an organization increases effectiveness and collaboration between development and operations teams. Although implementing change is never easy, DevOps streamlines software development processes and gets products to market faster.

What It Takes to Select the Right Tools for DevOps

To achieve the speed and agility promised by DevOps, you need to pick the right tools to enable automation across all phases of development, production, and operations. Indeed, according to a 2013 study by Puppet Labs, more than 80% of high-performing software development companies depend on automated tools for infrastructure management and deployment. That means you probably already have many of the tools used in DevOps environments, but are they the right ones for your organization’s needs?

Everyone on the engineering team should take an active role in collaborating with their ops colleagues on tooling strategy, and should help admins see how these tools benefit both individuals and the organization’s overall goals. Equally, ops can explain system management to the other groups so they can take accountability for deployments and deliver higher-quality code that works in production the first time. QA and testers should similarly be pulled into the initial stages of tool adoption.

The success of DevOps comes from everyone on the team working with the same tools and processes, but since by nature we all tend to resist change, standardizing on a toolchain involves much more than simply announcing: “Here’s a new tool; now we’re using it!”

Make a Strategic Roadmap

The first step is to build a strategy around your company’s essentials and goals. Your business needs, engineering group, budget, legacy systems, and workflows are all unique to your company, so there is no one-size-fits-all method for selecting the right DevOps tools. For example, if collaboration is your goal, the ideal tools encourage your teams to do the right thing and answer basic questions: Does QA have to wait for the provisioning of test environments? Does committed code get bogged down in testing? You have to decide how important testing is to your overall goals.

Identify Blockages

The next step is to identify the process blockages that prevent your organization from developing code faster and deploying it more regularly. There are two main tracks for identifying process blockages.

First, ask your teams where things get stuck as code moves through your development and production pipeline. Have them rate the severity of each blockage and identify the critical, must-fix ones.

The subjective approach alone isn’t enough, though. You also need to study the system’s historical data and collect hard evidence of where your pipeline is working well, where it can be improved, and where it’s failing. Most importantly, you need baseline data you can use to measure performance improvements going forward and to determine which parts of your pipeline still need work. Combine the results of your subjective and data-driven research to rank your tool needs.
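One simple way to combine the two tracks is to weight each team-reported severity rating by the measured wait time for that pipeline stage. The sketch below is purely illustrative; the stage names, ratings, and scoring formula are all assumptions:

```python
# Hypothetical pipeline stages: team-reported severity (1-5, subjective)
# and average wait time in hours (from historical pipeline data).
stages = {
    "code review":        {"severity": 2, "avg_wait_hours": 4},
    "test env provision": {"severity": 5, "avg_wait_hours": 36},
    "qa regression":      {"severity": 4, "avg_wait_hours": 20},
    "deploy approval":    {"severity": 3, "avg_wait_hours": 12},
}

def blockage_score(stage):
    # Combined ranking: weight the subjective rating by the hard data.
    s = stages[stage]
    return s["severity"] * s["avg_wait_hours"]

# Worst blockage first -- this is where tooling investment should go.
ranked = sorted(stages, key=blockage_score, reverse=True)
print(ranked)
```

Here test-environment provisioning would come out on top, which suggests prioritizing configuration management or self-service provisioning tools first.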

Focus on Core Categories of Tools

Make sure you implement and standardize on tools that are appropriate for your organization in each of the core tool categories, and consider how each tool you select fits in with the others. It’s also very important to make sure someone owns overall tool compatibility and is authorized to make decisions around tooling.

Most successful DevOps organizations automate using tools in a few core categories, drawing on a range of specific tools:

  • Version control (GitHub, Mercurial, Perforce, Subversion, Team Foundation Server)
  • Configuration management (Ansible, CFEngine, Chef, Puppet, RANCID, SaltStack, Ubuntu Juju)
  • Continuous integration (Atlassian Bamboo, Go, Jenkins, TeamCity, Travis CI)
  • Deployment (Capistrano, MCollective [part of Puppet Enterprise])
  • Monitoring (New Relic, Nagios, Splunk, AppDynamics, Loggly, Elastic)

As you build your toolchain, it’s important to understand how each tool amplifies the benefits of the others. The right toolchain for DevOps will automate IT services, provide real-time visibility into system and application performance, and give you a single source of truth. We are big believers in managing infrastructure as code, and once your infrastructure code is in a version control system, you can apply the best practices of agile development to it.
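To illustrate the infrastructure-as-code idea, desired state can be kept as plain data in version control and compared against what is actually running. This is a hypothetical sketch, not any particular tool's API; the hosts, packages, and versions are invented:

```python
# Desired state: plain data, reviewable and versioned like any other code.
desired = {
    "web-01": {"nginx": "1.10.3", "openssl": "1.0.2g"},
    "web-02": {"nginx": "1.10.3", "openssl": "1.0.2g"},
}

# Actual state, as reported by the hosts (web-01 has drifted).
actual = {
    "web-01": {"nginx": "1.10.3", "openssl": "1.0.1f"},
    "web-02": {"nginx": "1.10.3", "openssl": "1.0.2g"},
}

def drift(desired, actual):
    """Return {host: {package: (wanted, found)}} for every mismatch."""
    out = {}
    for host, packages in desired.items():
        for pkg, want in packages.items():
            have = actual.get(host, {}).get(pkg)
            if have != want:
                out.setdefault(host, {})[pkg] = (want, have)
    return out

print(drift(desired, actual))
```

Real configuration management tools (Puppet, Ansible, and the like) do exactly this reconcile-and-report loop, then go one step further and converge the hosts to the desired state.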

More important than an individual tool’s capabilities, though, is how closely they all match your organization’s strategic goals. That’s the way to maximize your chances of achieving DevOps goodness. Of course, tools are only part of the DevOps equation. You also need to create a culture that gets dev and ops working together toward the same goals.

Ransomware, ITSM and Corporate Bureaucracy

Recently, almost all of the urban population heard or read about a new term: ransomware. There were news articles, non-stop TV coverage, and much hot air in general on the subject for a few days. We even saw bank ATMs remain closed for some time.

The golden question is: why was ransomware so effective, and what can be done to prevent or fight such attacks in the future?

Most of the discussion has been about cyber security, antivirus, firewalls, and so on, but if we delve a bit deeper, we find that the response of many corporations was tangled in internal bureaucracy. Top bosses shot emails to their next in line, and the message gradually percolated down to the lowest-level IT personnel. In the majority of cases it was an ad-hoc manual response, in which hundreds or even thousands of servers were patched by hand by engaging every available person for a day or two.

In practice, it is entirely possible to avoid such a situation. A proper ITSM implementation, with an orchestration layer integrating it with server-side automation, would have drastically improved the response.

Instead of so many people sending mails and personally tracking the progress of security patching, all it would take is raising a few service requests in ITSM, which would flow to server-side automation via the orchestrator. The automation component can pick up the necessary security patch from the software repository and apply it to thousands of servers without further manual intervention.
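The flow just described can be sketched roughly as follows. This is a hypothetical illustration, not a real ITSM or patch-management API; `apply_patch` stands in for whatever the server-side automation actually does, and the inventory and request fields are invented:

```python
def apply_patch(server, patch_id):
    # In a real setup this would call the server-side automation tool's
    # API; here we just record the action taken.
    return (server, patch_id, "patched")

def handle_service_request(request, inventory):
    """One ITSM service request fans out to every affected server."""
    patch_id = request["patch_id"]
    # The orchestrator resolves the request against the server inventory
    # and drives automation for each matching server -- no manual steps.
    return [apply_patch(server, patch_id)
            for server, os_name in inventory.items()
            if os_name == request["os"]]

inventory = {"srv-001": "windows", "srv-002": "linux", "srv-003": "windows"}
request = {"patch_id": "MS17-010", "os": "windows"}
results = handle_service_request(request, inventory)
print(results)
```

The point is the shape of the flow: one tracked request in ITSM, resolved against an inventory, executed by automation, with every action auditable.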

A very large number of bank ATMs ran very old, unsupported software such as Windows XP. Microsoft had released security patches even for these. Yet the impact was huge, first of all because higher management didn’t even have precise data on how many vulnerable ATMs existed, what their current patch level was, or how to apply new patches to them remotely rather than physically sending people in vans to thousands of ATMs (some of which are in hard-to-access areas).

No wonder a very large number of systems remained vulnerable to the ransomware, and people understood the problem only after it had already affected them.

If a proper discovery mechanism had been in place to keep the CMDB updated with current software and hardware configurations, and if proper triggers had been in place so that a software compliance check would automatically alert management, then the majority of the machines would always be on the latest secure software patches, making them much harder to hack.
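A minimal sketch of such a compliance check, assuming the CMDB exposes a patch level per asset (the records, field names, and numeric patch levels are all invented for illustration):

```python
# Assumed baseline patch level that every asset must meet.
REQUIRED_PATCH = 17010

# Hypothetical CMDB records kept current by the discovery mechanism.
cmdb = [
    {"asset": "atm-0001", "os": "windows-xp", "patch_level": 16998},
    {"asset": "atm-0002", "os": "windows-7",  "patch_level": 17010},
    {"asset": "atm-0003", "os": "windows-xp", "patch_level": 15000},
]

def compliance_alerts(records, required):
    """Return the assets below the required patch level, for alerting."""
    return [r["asset"] for r in records if r["patch_level"] < required]

alerts = compliance_alerts(cmdb, REQUIRED_PATCH)
print(alerts)  # assets that should trigger a management alert
```

Run on a schedule against a discovery-maintained CMDB, a check like this turns "how many vulnerable ATMs do we have?" from a week of email threads into a query.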

Unfortunately, even now there is reluctance in corporate culture to recognize how critical IT is to any business and to budget sufficiently for it. The result is the absence of a proper ITSM-discovery-server-automation suite in many such organizations, and that is nothing but sheer bureaucracy. Merely purchasing such software doesn’t magically solve the problem. The implementation needs to be done properly, and enhancements to suit a given organization have to be brought in from time to time through configuration and customization in a controlled and disciplined manner. This will certainly reduce the chances of such incidents and drastically improve the response to them.

No point in being penny-wise and pound-foolish!