Bug reproduction steps

To properly fix a bug, you must be able to reproduce it. If the developers can’t reproduce it, they can only guess at its cause, and their corrections are just potential fixes that can’t be validated.

If someone writes a report for a bug they can’t reproduce, how can they validate that the developer assigned to the task has satisfactorily fixed the problem? They can’t. So why report and track something that can’t be closed except by a leap of faith?

Maybe the developer will find the reproduction steps themselves? They might, but then again, who stands a better chance of doing so?

The best person to reproduce a bug is the person who first encountered it. To give an analogy, it’s like finding a secret glade in a forest and then being unable to find your way back to it. If you ask someone who’s never been there to find it for you, without giving them any directions, even if they find a glade on their own, is it the same one you found?

Another situation is when QA or the reporter can reproduce the bug, but the developer can’t. This might be because the reproduction steps aren’t clear or are simply missing. It could also be that there is a difference between the environments or actions of the reporter and those of the developer.

Sometimes the bug reporter has produced the issue but hasn’t tried to reproduce it consistently. They write down what they did but haven’t tested their steps further to see whether they reproduce the original results every time.

All of these situations can be greatly mitigated by writing good reproduction steps and testing them before submitting the bug report.

How to write reproduction steps:

  • Most bug reports should include reproduction steps. If reproduction can’t be achieved, in most cases testing should continue.
  • Steps should be clear and concise.
  • Each step should be absolutely necessary to reproduce the bug. This removes clutter but also forces the reporter to distill the bug to its most minimal conditions, which helps to define it correctly.
  • No assumptions should be made while writing the steps. This can be very hard, as we all have implicit assumptions that are not apparent at first glance. For example, if a user was created for testing purposes, how was it created? What sort of user is it? Is the username included in the report?
  • Before submitting the report, the reproduction steps themselves should be tested. Did they reproduce the issue? Were some required operations omitted because they seemed evident? For example, if navigation to a screen occurred and there are several ways to navigate to this screen, which one was used?
  • While testing the reproduction steps, the scenario should be altered a little. Does it still reproduce the bug with the alterations? If so, the altered steps are either unnecessary or could be generalized further. Examples of this could be testing with a different user or changing the sequence of some operations.

In addition to the reproduction steps, bug reports for mobile applications should include the OS and device name.

Bug reports for web applications should include the browser and, in the case of Internet Explorer or other non-evergreen browsers, its version.

All bug reports should include the version number of the application under test, as well as the environment in which the bug was found: dev, QA or prod.

Taking these steps will minimize the back and forth between QA and development and will reduce the time required to close bugs.

 

Preventing multiple calls on a button in Angular

If you wish to prevent your users from clicking a button multiple times in a row, for example when the button triggers an animation that needs to complete before being triggered again, you can use debouncing.

Debouncing is a form of rate limiting in which a function isn’t invoked until a certain amount of time has passed without it being called again. That is to say, if the debounce time is 200ms, then as long as you keep calling the function and those calls are within 200ms of each other, the function won’t actually run.

You could use a button’s [disabled] attribute and setTimeout, but I have run into cases where this interfered with the ongoing animation of the first call.
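For reference, that approach might look something like the following minimal sketch. The 200ms delay and the startAnimation method are assumptions standing in for your actual animation logic:

<button [disabled]="buttonDisabled" (click)="doStuff()">Do stuff</button>

export class SomeComponent {
    buttonDisabled = false;

    doStuff() {
        if (this.buttonDisabled) {
            return;
        }

        // Disable the button, then re-enable it once the animation
        // should be done (the 200ms value is an assumption).
        this.buttonDisabled = true;
        setTimeout(() => this.buttonDisabled = false, 200);

        this.startAnimation();
    }

    private startAnimation() {
        // hypothetical placeholder for whatever the click actually triggers
    }
}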

Here is how to use RxJS with Angular to implement debouncing.

In your template just bind a function to an event like you would normally:

(click)="doStuff()"

In your component use the following code:

import { Observable } from 'rxjs/Observable';
import { Subject } from 'rxjs/Subject';
import 'rxjs/add/operator/debounceTime';

// ...
// ...
export class SomeComponent {
    // Emits every time the button is clicked.
    private buttonClicked = new Subject<void>();

    constructor() {
        // Only react once the clicks have stopped for 200ms.
        const btnClickedObs = this.buttonClicked
                                  .asObservable()
                                  .debounceTime(200);
        btnClickedObs.subscribe(() => this.functionToCall());
    }

    doStuff() {
        this.buttonClicked.next();
    }

    private functionToCall() {
        // The work you actually want to run once per debounce window.
    }
}

Debouncing is similar to throttling. With throttling you limit the number of calls to one per unit of time, so the first call happens immediately. If the user keeps mashing the button continuously, throttled calls will be made at a rate of one per throttle interval, whereas with debouncing only a single call will be made once the clicking stops. Which one you want depends on your particular scenario.

You can mix both, but you must apply the debouncing before the throttling, and the throttle time must be at least twice the debounce time. Otherwise one of them will have no effect.
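As a rough sketch of how the two could be combined, using the same RxJS 5 patch-operator style as the example above (the 200ms and 400ms values are only illustrative, and functionToCall again stands in for your actual work):

import { Subject } from 'rxjs/Subject';
import 'rxjs/add/operator/debounceTime';
import 'rxjs/add/operator/throttleTime';

export class SomeComponent {
    private buttonClicked = new Subject<void>();

    constructor() {
        this.buttonClicked
            .asObservable()
            .debounceTime(200)   // wait until the clicks have stopped for 200ms...
            .throttleTime(400)   // ...then let at most one call through per 400ms
            .subscribe(() => this.functionToCall());
    }

    doStuff() {
        this.buttonClicked.next();
    }

    private functionToCall() {
        // The work you actually want to run.
    }
}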

Tuples in C# 7

While I had originally written about my lack of appreciation for Tuples in .NET 4, the new Tuple feature introduced with C# 7 is really something else.

Having programmed in functional languages like F#, I’ve come to appreciate a good Tuple implementation.

Here are the reasons the new Tuple is better than the old one.

Field names

Tuple fields now have names, which makes reasoning about the code a whole lot easier.

Compare this:

var oldTuple = 
    Tuple.Create(1, "test", 1.0);

var b = oldTuple.Item1 == 2 ? oldTuple.Item2.ToLower() 
                            : "";            

With this:

(int rebate, string itemName, double price) newTuple
    = (1, "test", 1.0);

var a = newTuple.rebate == 2 ? newTuple.itemName.ToLower() 
                             : "";

Gone are the Item1 and Item2 fields.

Better syntax support

As you can see in the previous example, Tuples can be manipulated using the parentheses syntax. There is no need to use the Tuple type’s Create method.

Here’s an example of the new syntax being used to define a Tuple return type for a method.

public (string first, string second) ReturnTuple()
{
    return ("an", "example");
}
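
At the call site, the named fields are then available directly on the result. A small usage sketch:

var result = ReturnTuple();
Console.WriteLine(result.first);   // prints "an"
Console.WriteLine(result.second);  // prints "example"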

 

Deconstruction support

Deconstruction is another feature that F# and other functional programmers (or Rust programmers, for that matter) are probably familiar with, and that you can now use with Tuples.

Here is an example:

public class FromHellsHeart
{
    public bool GrapplingWithThee { get; set; }
    public bool StabbingAtThee { get; set; }
    public bool SpittingAtThee { get; set; }

    // Deconstruct lets instances of this class be unpacked
    // into individual variables, just like a Tuple.
    public void Deconstruct(out bool grappling, 
                            out bool stabbing,
                            out bool spitting)
    {
        grappling = GrapplingWithThee;
        stabbing = StabbingAtThee;
        spitting = SpittingAtThee;
    }
}

[TestMethod]
public void TupleTest()
{
    var mobyDick = new FromHellsHeart
    {
        GrapplingWithThee = true,
        StabbingAtThee = true,
        SpittingAtThee = true,
    };

    // Unpack the object into three local variables.
    (bool g, bool st, bool sp) = mobyDick;
    Assert.IsTrue(g && st && sp);
}

 

By deconstructing the mobyDick object we have essentially extracted its properties into three local variables.

Conclusion

All these new features bring C# Tuples in line with languages like F#, which is a good thing.

Tuples are a great way to avoid writing one-off classes, to easily refactor methods that would otherwise need to pass around anonymous types, and to replace the older, less efficient Tuple class.
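
As a quick illustration (the ParseVersion method below is hypothetical, not from the original post), a method that might otherwise require out parameters or a one-off result class can simply return a named Tuple and be deconstructed at the call site:

public (bool success, int major, int minor) ParseVersion(string input)
{
    var parts = input.Split('.');

    if (parts.Length == 2
        && int.TryParse(parts[0], out var major)
        && int.TryParse(parts[1], out var minor))
    {
        return (true, major, minor);
    }

    return (false, 0, 0);
}

// At the call site the result can be deconstructed directly.
var (ok, major, minor) = ParseVersion("7.0");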

 

Precompiling ASP.NET MVC views

In a traditional ASP.NET MVC application, Razor views are compiled by the server the first time they’re requested.

Subsequent requests are served using the same compilation results. These compiled views remain available until IIS is restarted, the application pool is recycled or the application is shut down.

When that happens and the application is restarted, the next client to request a given view will bear the compilation cost once more. In a high-volume application, the application stays online as requests keep arriving, so compilation happens infrequently.

In certain scenarios, for example when the application is accessed infrequently or when view compilation simply takes too much time, precompilation is a viable solution.

It allows us to compile the views ahead of time, which removes the need for IIS to compile them at runtime.

You can precompile the views when building the application in Visual Studio, on the build server during a build, or after the application is built but before it is deployed. This last option was possible with Web Forms, but I have not investigated that avenue with MVC.

Let’s look at the pros and cons of the first two methods.

Precompiling on the developer’s machine

Pros:

  • Find potential view errors at compile time rather than runtime (see example below).
  • Speed up the application for developers by preventing view compilation when debugging.
  • Allows us to share views between ASP.NET MVC projects in the same solution.

Cons:

  • Cannot modify views while debugging or while the application is running. The application needs to be stopped, recompiled and started again for changes to take effect.
  • Compiled files will appear in merge conflicts. Note that you can simply ignore the conflicts and rebuild the solution to generate the files again.
  • Generated files will be checked in to source control, polluting the repository. On the plus side this allows other developers to reuse the precompilation results.
  • Need to install a Visual Studio extension and a NuGet package.

Precompiling on the build server

Pros:

  • No change in workflow for developers. They can continue modifying views at runtime and do not need to install any tools.
  • No precompilation files in source control.

Cons:

  • Will not find view errors at compile time.
  • Will not speed up the application on the developer’s machines.
  • Requires a build server and build process (for those who aren’t using one).

In my last ASP.NET MVC project, I chose the first option for many reasons, but primarily because I needed to share views between projects. To do this you need to have the views compiled and then copy the compiled views into a folder in the destination project. This process would require its own blog post, so I will leave it for a possible future post.

Error example

Here’s an example of the kind of precompilation error I mentioned earlier:

@model TestModel

@Model.First
@Model.Seond

<h2>@ViewBag.Title.</h2>
<h3>@ViewBag.Message</h3>

In our model we have two properties, First and Second. In the previous example an error has been introduced: the second property is misspelled as Seond. While this error will show up as a red squiggly line when you inspect the file, the application will compile just fine but throw an exception when you try to access this view.

[Screenshot: mvcError]

When working on a large project with dozens of views, it’s easy to miss these errors.

Activating precompilation in Visual Studio

To activate precompilation you need to install the following NuGet package in each project that will contain precompiled views: RazorGenerator.Mvc.

[Screenshot: razorGeneratorMvc]

You also need to install the following extension in Visual Studio: Razor Generator.

[Screenshot: razorGeneratorExt]

Finally, you need to change the build tool for the views that need to be precompiled (in most cases this means all views) by setting their Custom Tool property to RazorGenerator.

[Screenshot: RazorGenerator]

You can visit the RazorGenerator GitHub page for more information.