Web applications are an extremely important part of our lives; we use them daily, and at this point, a huge portion of the world relies on them. That reliance means a rapidly growing customer base depends on the functionality these applications provide.

For those of us working to build and maintain these applications, we need to observe the application from multiple perspectives: there are our assumptions about our customers’ expectations, and there are our customers’ actual expectations. Meet both sets of expectations and you’re likely to have a successful application.

Included in those expectations are the following:

  • Functionality promises.
  • Continuous innovation.
  • Perceived (and actual) performance.

Companies define what functionality an application offers, and that becomes part of the marketing strategy for the application. This is the reason a customer begins to use the application: it can solve a problem they have. The other area that is in our control is staying ahead of customer needs by adopting market innovations and identifying new application improvements.

Customers, then, define how closely the functionality meets their needs, whether application innovation is going in a promising direction and whether the performance is acceptable. Most development and operational energy is spent on creating positive customer perception and innovating within the market segment.

If we assume that our intended application functionality and innovation strategy are on target with customer needs, then the focus of our development and operations teams can be narrowed to preventing two key application detractors: bugs in functionality and poor performance.

The Software Development Life Cycle (SDLC)

There are many forms of Software Development Life Cycle (SDLC) in use across the industry today. Figure 1 (above) is a common SDLC used in many companies. The model has a lot going for it, such as:

  • It clearly defines functionality goals in the design specifications.
  • Developers receive peer coaching during development to create better architecture.
  • The application is tested using automation before deploying to customers.
  • We learn from prior releases to create the next generation of design specifications.

The model also has some areas of concern that need to be improved, such as:

  • Lack of feedback into development from the application at runtime in production or test, which would allow errors and performance problems to be associated with the code that caused them.
  • Customers are impacted by bugs that slip through development and test environments undetected. In most cases, this leads to lost functionality or reduced performance.
  • The time it takes to fix bugs in production lowers the net promoter score.

This SDLC has a secondary process loop for fixing bugs, but the customer is regularly involved in that process. The loop is typically fast for high-priority issues and slow for medium- and low-priority issues. Lingering issues might not get addressed for a long time and continue to drag down the net promoter score.

We’re all looking for a silver bullet that elevates our web applications into the enviable status of having customers who are mostly net promoters of the application. The reality is that no single silver bullet can make this kind of promise, because there are many reasons that development and operations are not delivering at full potential.

The key contributor we will look at in this post is developer utilization of programming best practices. Best practices in programming typically highlight ways to avoid code structures that have been proven to increase risk and lower customer experience scores, but the traditional ways in which they’re communicated lead to inconsistent avoidance. Best practices are only useful if all teams apply the same practices uniformly.

How do best practices get communicated within a software development ecosphere?

Developers create, to the best of their ability, education and experience, highly functional, bug-free applications. This highlights the origin of many challenges we face as leaders: not every developer has the same ability, experience and education. To overcome common programming pitfalls, less experienced developers must be taught how to avoid them. Experienced developers also need to learn best practices when working with new technology stacks, such as moving from .NET to Java. There is often no clear mentoring path in this case; the learning is typically done by trial and error, which is very inefficient.

In our careers, we make choices constantly and must learn from our successes and failures; this accumulated learning is what we refer to as developer experience. Historically, the role of passing down the wisdom that comes from developer experience has been delegated to system architects, lead developers and engineering managers.

This works if leaders have all the required knowledge, the time to pass it along and the discipline to pass it along uniformly to each person they mentor. Most commonly, each mentor has a list of specific areas they are passionate about, and those sit highest on the syllabus when educating others. That is, in one sense, fantastic: combined across a team, it leads to a diverse source of innovative solutions.

There are, however, drawbacks to this model that should be accounted for, such as:

  • Senior developers and engineering managers are not able to teach every topic.
  • Frequent coaching is more effective, but it takes significant time investment.
  • Coaching time competes with feature development time.

What happens if insufficient coaching is happening in your teams? Each developer probably has a few code structures they learned in order to get a task done. If any of those code structures have unknown operational flaws, those flaws get introduced into the application in many locations. The effect magnifies with the number of undiscovered flaws and the number of developers adopting each variation. Mentoring can even increase the number of faulty code structures, because a flaw that goes unrecognized continues to be passed along as a good method.
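As a hypothetical illustration of such a structure (a sketch, not taken from any particular codebase), consider a Java idiom that swallows an exception so the happy path compiles and runs:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ConfigLoader {

    // A flawed structure that "gets the task done": the happy path works,
    // the compiler stops complaining, and the pattern is easy to copy into
    // many call sites. At runtime, a missing or unreadable file silently
    // becomes an empty configuration and a confusing failure downstream.
    static List<String> loadSettings(Path file) {
        try {
            return Files.readAllLines(file);
        } catch (IOException e) {
            // Swallowed: no log, no rethrow, no signal to operations.
            return List.of();
        }
    }
}
```

The task appears done, nothing in development or test flags it, and each developer who copies the pattern spreads the flaw to another location.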

What do developers produce in a month?

The answer differs greatly depending on company culture, mixture of experience, infrastructure deployed and much more. This model, based on 100% utilization of a developer’s time, simply helps illustrate what we invest into development and what the business realizes from our efforts.

Most work can be roughly placed into the categories from the table. Delivered Functionality refers to time spent creating and delivering customer-ready application elements. Coaching refers to time spent either leading or receiving mentoring activities. Bugs refers to time spent fixing (or trying to fix) unintended behavior that is visible or invisible to customers. The category of Other includes time spent in seminars, classes or meetings.

There are many factors that contribute to the percentage breakdowns in this model, but the main conclusion is that junior developers deliver limited functionality because they are still learning, and they likely generate bugs more frequently. To exacerbate the situation, the most experienced developers are not able to make up the difference because of the administrative and coaching demands placed on them.

Creating a culture of balanced team productivity, education and prevention as in Figure 3 is very important to company success.

How can we coach developers more effectively?

Every tool has the potential to be a positive addition to an organization as a learning tool, but it often has an even greater potential to generate false security and developer dissatisfaction. In the many times I’ve led software development quality improvements, the roadblock has consistently been the developers’ perception of the tool’s output and the relevance of the metrics it provides.

One main challenge we face when adapting tools for use as developer learning platforms is that they generate too much data to be beneficial.

Learning opportunities aside, developers tend to question the benefit of addressing issues detected by their tools. Although these issues may have a severity assigned to them, they are not connected to the value structure of the company and team. Many also lack a clear line of sight to customer impact or total cost of ownership. Setting a goal to fix most issues detected by any tool would be impractical and would create dissatisfaction among developers. Teams would invest so much time reaching the defined goals that the delivered functionality rate would drop and push the scale (Figure 3) off balance to the right.

Coaching is most effective immediately after the code is tested, and in a form that highlights successes and identifies areas of improvement. These tools create a huge amount of data that can be used in an SDLC, but that data is only effective when reduced to very focused measures. This focus can be accomplished if the measures are implemented thoughtfully, narrowing the immense data stream to key deliverable goals (a sketch of one possible build gate follows the list below).

  • Identify key high-value issues reported in the analysis tools running in your system.
  • Integrate the tool into your SDLC to create an immediate feedback loop.
  • Set achievable goals for these specific types of issues and report performance against goal.
  • Report application trends to highlight success and areas to improve. This could be a collective goal at a higher level.
  • Keep the number of goals to a manageable list that represents the most important issues.
  • Make the feedback consistent and part of everyday life.
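To make that feedback loop concrete, here is a minimal Java sketch of what a build-time gate could look like. It assumes the runtime analysis tool can export counts of newly introduced errors per error type; the error names, goal limits and the fetchNewErrorCounts stub are illustrative assumptions, not OverOps API calls.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical CI quality gate, sketched to show the shape of the
 * feedback loop described above. In practice the error counts would
 * come from your runtime analysis tool's export or API; here they
 * are stubbed.
 */
public class ErrorBudgetGate {

    // A small, focused goal set: the handful of error types the team
    // has chosen to eliminate, and the maximum tolerated count of new
    // occurrences per build. Names and limits are illustrative.
    private static final Map<String, Integer> GOALS = new LinkedHashMap<>();
    static {
        GOALS.put("NullPointerException", 0);
        GOALS.put("ClassCastException", 0);
        GOALS.put("ConcurrentModificationException", 0);
        GOALS.put("NumberFormatException", 1);
        GOALS.put("SQLException (swallowed)", 2);
    }

    public static void main(String[] args) {
        // In a real pipeline these counts would be pulled from the
        // monitoring tool for the code introduced in this build.
        Map<String, Integer> observed = fetchNewErrorCounts();

        boolean pass = true;
        for (Map.Entry<String, Integer> goal : GOALS.entrySet()) {
            int count = observed.getOrDefault(goal.getKey(), 0);
            String verdict = count <= goal.getValue() ? "OK  " : "FAIL";
            System.out.printf("%s %-40s new=%d goal<=%d%n",
                    verdict, goal.getKey(), count, goal.getValue());
            if (count > goal.getValue()) {
                pass = false;
            }
        }

        // A non-zero exit code fails the build step, giving the
        // developer feedback before the code reaches customers.
        System.exit(pass ? 0 : 1);
    }

    private static Map<String, Integer> fetchNewErrorCounts() {
        // Stub standing in for a call to the tool's API or report.
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("NullPointerException", 1);
        counts.put("NumberFormatException", 0);
        return counts;
    }
}
```

Wired into the pipeline as a build step, the failing exit code delivers the feedback while the change is still fresh in the developer’s mind.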

With a structure like this, developers get notified whether new code meets the goals or needs to be improved. Ideally, this all occurs before deploying code to customers.

OverOps can be adapted for use as a training tool by utilizing the runtime data from QA and production servers. Following the implementation guidance, select key high-volume errors that you want to eliminate from all applications. Be cautious not to select too many error types for this purpose; five errors is a reasonable starting point. Selecting too many errors to focus on will dilute the coaching and lower the functionality delivery rate. Once the initial error types are eliminated using tool-based coaching, others can be introduced as new measures.

What happens after implementing tool-driven coaching?

People enjoy getting solutions right the first time, and developers are no exception. When feedback is provided immediately after the code is written, it is easier to fix the identified issues. Regular reminders about code structures that should be avoided create a positive mental pattern. Developers will learn which code structures should be replaced with more viable ones. After a few iterations, we begin to see developers reaching for the viable code structures first, because they don’t want to go back and fix the code after the tool detects the flaw.
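Continuing the earlier swallowed-exception illustration, the viable structure a developer learns to reach for first might look like this (again a hypothetical sketch, not tool-prescribed code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.logging.Logger;

public class ConfigLoader {

    private static final Logger LOG =
            Logger.getLogger(ConfigLoader.class.getName());

    // The replacement structure: the failure is logged with context and
    // surfaced to the caller, so the error shows up in runtime data
    // instead of hiding behind a silent empty default.
    static List<String> loadSettings(Path file) {
        try {
            return Files.readAllLines(file);
        } catch (IOException e) {
            LOG.severe("Could not read settings from " + file + ": " + e.getMessage());
            throw new UncheckedIOException("Failed to load settings: " + file, e);
        }
    }
}
```

Once the tool has flagged the silent variant a few times, writing the visible variant first becomes the path of least resistance.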

We can only accomplish this with tool-based coaching, for two main reasons:

  • It is too time-consuming to coach peer to peer at this level of detail, and delivered functionality would slow as a result.
  • Every developer, mentors included, has the potential to make the same mistakes, so peer coaching might never catch them, and it wouldn’t be consistent across the organization.

Revisiting the example developer productivity matrix, the numbers should improve. I expect an increase in delivered functionality with fewer bugs. Peer-to-peer coaching drops, but not by a significant amount, because it remains a key personal growth mechanism; a substantial amount of coaching must still be peer to peer. The higher-level concepts that can’t be taught by a tool are communicated through traditional coaching, leaving the lower-level concepts to tool-based coaching.

Results

There are many ways to improve an SDLC and the tool-driven approach delivers consistent coaching across an organization. Consistency is even more important when there is a mixture of in-house and outsourced development staff or when only using outsourced development teams.

Developers will become more efficient and learn to avoid errors before deployment to production, enabling a shorter time to deliver higher quality functionality. Once certain errors have been eliminated from the application, new goals can be added to further reduce the application error profile.

OverOps’ core functionality is to collect and report runtime application quality for pre-production and production applications, and that data can be used in new ways. Integrating a tool like OverOps into your SDLC creates a unique and powerful solution by taking advantage of a rich data source. By including runtime data, the operational errors in the application are injected into the SDLC earlier and can be associated with new code near the point where it was written. As highlighted in Figure 5, adding OverOps as an SDLC step allows intercepting the most critical runtime issues before they impact customers.

Succeeding at both software development speed and quality requires using several different approaches simultaneously. These are a few examples:

  • Medium-frequency peer coaching of medium- and high-level concepts.
  • High-frequency tool-based analysis to cover low-level concepts, such as integrating OverOps into your SDLC.
  • Best practices for code management, build processes, automated tests and always-on deployment techniques.

Because there is no single answer for optimizing an SDLC, we get to experiment with different options and find what works best right now. It is an evolution!

Derek has been the catalyst for successful change in DevOps, Technology, Data Centers, Agile Methods and Software Development. He explores the intersection of Technology, Process and People to get the most out of every change. In his free time, Derek is mountain biking, skiing or doing large home projects.