Bug reproduction steps

To properly fix a bug, you must be able to reproduce it. If the developers can't reproduce it, they can only guess at its cause, and their corrections are merely potential fixes that can't be validated.

If someone writes a report for a bug they can’t reproduce, how can they validate that the developer assigned to the task has satisfactorily fixed the problem? They can’t. So why report and track something that can’t be closed except by a leap of faith?

Maybe because the developer will find the reproduction steps? They might, but who stands a better chance of doing so?

The best person to reproduce a bug is the person who first encountered it. To give an analogy, it's as if you found a secret glade in a forest and now can't find it again. So you ask someone who's never been there to find it for you, without giving them any directions. Even if they find a glade on their own, is it the same one you found?

Another situation is when QA or the reporter can reproduce the bug but the developer can't. This might be because the reproduction steps aren't clear or are simply missing. It could also be that there is a difference between the environments or actions of the reporter and the developer.

Sometimes the bug reporter has produced the issue but hasn't tried to reproduce it consistently. They write down what they did but haven't tested their steps further to see whether they reproduce the original results every time.

All of these situations can be greatly mitigated by writing good reproduction steps and testing them before submitting the bug report.

How to write reproduction steps:

  • Most bug reports should include reproduction steps. If reproduction can't be achieved, in most cases testing should continue.
  • Steps should be clear and concise.
  • Each step should be absolutely necessary to reproduce the bug. This removes clutter but also forces the reporter to distill the bug to its minimal conditions, which helps to define it correctly.
  • No assumptions should be made while writing the steps. This can be very hard as we all have implicit assumptions that are not apparent at first glance. For example, if a user was created for testing purposes, how was it created? What sort of user is it? Is the username included in the report?
  • Before submitting the report, the reproduction steps themselves should be tested. Did they reproduce the issue? Were some required operations omitted because they seemed evident? For example, if navigation to a screen occurred and there are several ways to navigate to this screen, which one was used?
  • During the testing of the reproduction steps, the scenario should be altered a little. Does the scenario still reproduce the bug with the alterations? If so, the altered steps are either unnecessary or could be generalized further. Examples of this could be testing with a different user or changing the sequence of some operations.

In addition to the reproduction steps, bug reports for mobile applications should include the OS and device name.

Bug reports for web applications should include the browser and, in the case of Internet Explorer or other non-evergreen browsers, its version.

All bug reports should include the version number of the application under test as well as the environment the bug was found on: dev, QA, or prod.
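Putting the points above together, a report might look something like this (the feature, user name, versions, and steps are all invented for illustration):

```text
Title: Saving a search filter with an empty name crashes the search page

Environment: qa
App version: 2.4.1
Browser: Internet Explorer 11

Steps to reproduce:
1. Log in as a standard (non-admin) user, e.g. "qa_user1",
   created through the regular sign-up form.
2. Navigate to the search page using the top menu link.
3. Click "Save filter" without entering a filter name.

Expected: a validation message is shown.
Actual: the page crashes with a script error.

Notes: also reproduced with user "qa_user2"; the admin role
is not required. Reproduced 3 out of 3 attempts.
```

Note how every implicit assumption (which user, how it was created, which navigation path) is spelled out, and how the notes record the variations that were tried while testing the steps.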

Taking these steps will minimize the back and forth between QA and development and will reduce the time required to close bugs.

 

SonarLint Visual Studio Extension / C# Linter

I’m currently working on a legacy code base that’s not a shining example of clean code and best practices.

I try to improve the code as I work, a careful balancing act where I do my best not to impact productivity by spending too much time on refactoring and reformatting.

To help me with this task I decided to look into linters. Note that I'm not using ReSharper, which would help with code quality, but that's another story.

I tried StyleCop, but with the default settings I just wasn't impressed. Many rules are, in my opinion, needlessly restrictive. It also seemed to simply dump more than a thousand warnings into the error window.

I tried SonarLint next and I must say I was thoroughly impressed. Though the issues are sent to the error window, they are also shown inline in the file. The parsing by SonarLint was fast, and running the Visual Studio extension had only a very small impact on performance, even while working with a solution containing thousands of files and hundreds of thousands of lines of code.

While I haven’t run ReSharper in years, performance was one of the two reasons I stopped using it. On large solutions like the one I’m currently working with, it would slow everything down to a crawl, even when disabling solution-wide analysis.

SonarLint isn’t exactly what I was expecting it to be. I thought it would be more for style guidelines but it has instead given me deeper insights. I’m specifically referring to my last two examples.

Things like needlessly nested if statements:

if (FirstProperty == false)
{
    if (NeedlesslyDeeplyScoped == true)
    { }
}

// which can be collapsed into a single condition:
if (!FirstProperty && NeedlesslyDeeplyScoped)
{ }

Using Count() instead of Any():

if (files.Count(f => f.IsDeleted) > 0)
// becomes
if (files.Any(f => f.IsDeleted))

Or foreach statements that can run on null variables:

List<Foo> myCollection = null;

if (IsWeatherRainy) 
{
    myCollection = new List<Foo>();
    // add stuff
}

// myCollection is null on at least one execution path,
// so this foreach can throw a NullReferenceException
foreach (var item in myCollection)

Or overlapping method signatures with optional parameters that can’t be used:

// This method's signature overlaps the second one
// and the default parameter value can't be used
public File GetFile(bool firstParam, 
                    bool secondParam = false)
{
    return new File();
}

public TestFile GetFile(bool firstParam)
{
    return new TestFile();
}

I found this last one three times in a single file.

SonarLint is available on the Visual Studio marketplace. It's free and open source, with a repository on GitHub. It's created by a company that also sells a similar product (though geared towards CI / build servers), but I haven't found any downsides to using the free Visual Studio extension.

By the way, I haven’t been asked to write this by the company that makes SonarLint.

TODO comments best practices

// TODO: filter employees by role

The best use for TODO comments is when you are actively working on a change set, adding TODOs that will be replaced with actual code as you work. These TODOs will be removed before you commit to source control.

In this case you add a TODO, signifying an intention to return later on to complete the code.

Before committing code to source control, it’s always a good practice to review all your changes with a diff tool. This is the perfect time to scan for leftover TODOs in the code and implement them.
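Besides eyeballing the diff, you can also scan your changed files for the marker mechanically. A minimal sketch (the file name and its contents are invented for illustration):

```shell
# Create a sample file standing in for a changed source file
# ("/tmp/Sample.cs" is a made-up path for this sketch).
printf '// TODO: filter employees by role\nvar employees = GetEmployees();\n' > /tmp/Sample.cs

# Scan it for leftover TODO markers; -n prints the line number.
grep -n "TODO" /tmp/Sample.cs
# prints: 1:// TODO: filter employees by role
```

In a git repository you can run the same scan directly over your uncommitted changes with `git diff | grep TODO`.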

Another case for TODOs is for something that will truly be implemented in a later commit. Before leaving those kinds of TODOs in the code, ask yourself if you shouldn't rather create an issue in your issue tracker. Otherwise, this second kind of TODO tends to accumulate and pollute the source code, experiencing the same kind of rot as old commented-out code.

TODOs that you leave after a commit should be for changes that are small, implementable in the foreseeable future, and related to source code rather than business processes.
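Such a TODO might look like this (the issue number and the context are invented for illustration):

```csharp
// TODO(#482): remove this date-parsing workaround once the
// library upgrade tracked in issue #482 is merged.
```

Pointing at a tracker issue keeps the why and the when out of the source, where they would otherwise go stale.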

On larger projects you tend to see TODO comments that are several years old. Just today I found one that source control indicated was three years old. The person responsible for it may not even work here anymore. Reading this specific comment gives me a general idea of what should be accomplished, but not clearly enough that I could go and do it. Nor, for that matter, why it needs to be done. You see these often enough.

This is similar to commented-out code, where nobody knows why the code was commented out in the first place, and so everyone leaves it as is, thinking it may be important. As time goes by there is less and less chance of anyone actually knowing why the code was commented out. So it stays there until someone is brave enough to just delete the useless comment (how can it be useful if no one knows why it's commented out?). Keeping track of old code is source control's job anyway.

Absolutes

As time goes on, I’m distancing myself more and more from rules that you must always follow or guidelines that make you lean towards absolutes.

  • Commenting every method and variable.
  • Documenting all classes.
  • Coding guidelines that need to be applied everywhere exactly the same way.
  • Over 90% test coverage.

The problem with these rules is that they don’t have any reasoning or intelligence behind them. You must comment everything even if it doesn’t make sense because some tool is checking for it. You must strive for 100% code coverage even if it means writing tests with little business value along the way.

Our strength as developers is our reasoning and our judgement.

I like where these guys are going: Why Inch doesn't use percentages. They have the right idea. Inch is a documentation coverage tool that gives you a letter grade rather than a percentage score. Their design takes into account the fact that not everything needs to be documented. A grade is less precise than a numerical score, but that's what makes it better. A percentage score, particularly with an enforced lower bound, will often lead to people gaming the system or fussing too much over the details. Code metrics should be about guiding you to write better code.

Another interesting link on the subject is The code documentation fallacy. The author makes the point that, given the choice between 10% and 90% comment coverage, it is better to have 10% because it means the comments that are there have a reason for being there. On the other hand, if comments are enforced, most of them will probably be noise.

I can relate to this. I have worked on a large project which required 100% comment coverage of all public classes, methods, method parameters, and return values. The team was using a refactoring tool that, amongst other things, could generate these comments automatically.

I would say about 9 in 10 comments were auto-generated and looked something like this:

/// <summary>
///  The GetName.
///  Returns The GetName.
/// </summary>
public string GetName()

It's surprising how fast the human brain learns to filter out these useless comments until it doesn't even notice them. When comments actually did contain useful information, they were glossed over along with the rest.

The reverse, absolutely no comments at all, is just as bad. The problem here isn’t the comments or the lack thereof but the inflexible rules.

Google understands this concept as well. If you look at their coding style guideline documents (C++, Python, JavaScript), you will see the following:

Use common sense and BE CONSISTENT.

If you are editing code, take a few minutes to look at the code around you and determine its style. If they use spaces around their if clauses, you should, too. If their comments have little boxes of stars around them, make your comments have little boxes of stars around them too.

The point of having style guidelines is to have a common vocabulary of coding so people can concentrate on what you are saying, rather than on how you are saying it. We present global style rules here so people know the vocabulary. But local style is also important.

Notice the mention about local style and the focus on the reasons for having guidelines.

If the person working on the web service layer, knowing the guidelines, truly believes he should use a different naming scheme for his methods, I'd be inclined to trust him. He knows his specific context.

Context and judgement trump blind adherence to rules. Aren't they called guidelines anyway?

The world isn't all black and white. We have to trust ourselves.