Best Practices for Software Testing

I originally wrote the following as an internal corporate blog post to guide a pair of business analysts responsible for writing and unit testing business rules. The advice below applies pretty well to software testing in general.

80/20 Rule

80% of your test scenarios should cover failure cases, with the other 20% covering success cases.  Too much testing (unit testing or otherwise) seems to cover only the happy path.  A 4:1 ratio of failure-case tests to success-case tests will result in more durable software.

Boundary/Range Testing

Given a range of valid values for an input, the following tests are strongly recommended:

  • Test of behavior at minimum value in range
  • Test of behavior at maximum value in range
  • Tests outside of valid value range
    • Below minimum value
    • Above maximum value
  • Test of behavior within the range

These tests roughly conform to the 80/20 rule, and apply to numeric values, dates, and times.
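
As a concrete illustration (sketched in Python, since the idea is language-agnostic), here is what that list of tests might look like for a hypothetical validator that accepts ages in the inclusive range 0 to 130.  The function name and range are invented for the example:

```python
def validate_age(age):
    """Return True if age falls within the valid inclusive range [0, 130]."""
    return 0 <= age <= 130

# One test at each boundary, two failure cases outside the range,
# and one success case within the range:
assert validate_age(0)        # behavior at minimum value
assert validate_age(130)      # behavior at maximum value
assert not validate_age(-1)   # below minimum (failure case)
assert not validate_age(131)  # above maximum (failure case)
assert validate_age(65)       # within the range
```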

Date/Time Testing

Above and beyond the boundary/range testing described above, the testing of dates creates a need to test how code handles different orderings of those values relative to each other.  For example, if a method has a start and end date as inputs, you should test to make sure that the code responds with some sort of error if the start date is later than the end date.  If a method has start and end times as inputs for the same day, the code should respond with an error if the start time is later than the end time.  Testing of date or date/time-sensitive code must include an abstraction to represent current date and time as a value (or values) you choose, rather than the current system date and time.  Otherwise, you’ll have no way to test code that should only be executed years in the future.
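
A minimal sketch of that abstraction, in Python with invented names: the "current date" is an injectable parameter that defaults to the system clock, so a test can pin it to any date it chooses:

```python
from datetime import date

def is_contract_active(start, end, today=date.today):
    """Return True if the current date falls within [start, end].

    `today` is injected so tests can supply any "current" date
    instead of reading the real system clock.
    """
    if start > end:
        raise ValueError("start date is later than end date")
    return start <= today() <= end

# A test can pin "now" to any date it chooses, even one years away:
assert is_contract_active(date(2030, 1, 1), date(2030, 12, 31),
                          today=lambda: date(2030, 6, 15))
```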

Boolean Testing

Given that a boolean value is either true or false, testing code that takes a boolean as an input seems quite simple.  But if a method has multiple inputs that can be true or false, testing that the right behavior occurs for every possible combination of those values becomes less trivial.  Combine that with the possibility of a null value (or multiple null values) being provided, as described in the next section, and comprehensive testing of a method with boolean inputs becomes even harder.
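
To make the combinatorial point concrete, here is a Python sketch (with a made-up rule) that enumerates every combination of three boolean inputs rather than testing only one or two:

```python
from itertools import product

def can_approve(is_manager, is_owner, is_locked):
    """Made-up rule: managers or owners may approve, unless the record is locked."""
    return (is_manager or is_owner) and not is_locked

# Exhaustively exercise all 2**3 combinations of the boolean inputs:
for manager, owner, locked in product([True, False], repeat=3):
    expected = (manager or owner) and not locked
    assert can_approve(manager, owner, locked) == expected
```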

Null Testing

It is very important to test how a method behaves when it receives null values instead of valid data.  The method under test should fail in graceful way instead of crashing or displaying cryptic error messages to the user.
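
A short Python sketch of the idea, with invented names: the method validates its inputs up front and fails with a clear, deliberate error rather than letting a null propagate and crash somewhere downstream:

```python
def full_name(first, last):
    """Fail gracefully on missing input instead of crashing downstream."""
    if first is None or last is None:
        raise ValueError("first and last name are both required")
    return f"{first} {last}"

# Failure cases: null (None) inputs produce a clear, deliberate error.
for args in [(None, "Smith"), ("Ada", None), (None, None)]:
    try:
        full_name(*args)
        raise AssertionError("expected a ValueError")
    except ValueError:
        pass

# Success case:
assert full_name("Ada", "Lovelace") == "Ada Lovelace"
```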

Arrange-Act-Assert

Arrange-Act-Assert is the organizing principle to follow when developing unit tests.  Arrange refers to the work your test does first in order to set up any necessary data, supporting objects, etc.  Act refers to executing the scenario you wish to test.  Assert refers to verifying that the outcome you expect matches the actual outcome.  A test should have just one assert.  The rationale for this relates to the Single Responsibility Principle, which states that a class should have one, and only one, reason to change.  Applied to testing, a unit test should test only one thing so that the reason for failure is clear if and when it fails as a result of subsequent code changes.  This approach implies a large number of small, targeted tests, the majority of which should cover failure scenarios as indicated by the 80/20 Rule defined earlier.
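
Here is what the pattern looks like in practice, sketched in Python with a throwaway Account class; note the single assert:

```python
class Account:
    """Throwaway class for the example."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdraw_reduces_balance():
    # Arrange: set up the data the scenario needs
    account = Account(balance=100)
    # Act: execute the scenario under test
    account.withdraw(30)
    # Assert: verify the single expected outcome
    assert account.balance == 70

test_withdraw_reduces_balance()
```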

Test-First Development & Refactoring

This approach to development is best explained visually by this diagram.  The key thing to understand is that a failing test must be written before the code that makes the test pass.  This ensures the test is good enough to catch any failures introduced by subsequent code changes.  The approach applies not just to new development, but to refactoring as well.  This means that if you plan to make a change you know will result in broken tests, break the tests first.  That way, when your changes are complete, the tests will be green again and you’ll know your work is done.  You can find an excellent blog post on the subject of test-driven development by Bob Martin here.

Other Resources

I first learned about Arrange-Act-Assert for unit test organization from reading The Art of Unit Testing by Roy Osherove.  He’s on Twitter as @RoyOsherove.  While it’s not just about testing, Clean Code (by Bob Martin) is one of those books you should own and read regularly if you make your living writing software.

Software Development Roles: Lead versus Manager

I’ve held the titles of development lead and development manager at different points in my technology career. With the benefit of hindsight, one of the roles advertised and titled as the latter was actually the former. One key difference between the two roles boils down to how much of your time you spend writing code. If you spend half or more of your time writing code, you’re a lead, even if your business cards have “manager” somewhere in the title. If you spend significantly less than half your time writing code, then the “manager” in your title is true to your role. When I compare my experience between the two organizations, the one that treats development lead and development manager as distinct roles with different responsibilities has not only been a better work environment for me personally, but has been more successful at consistently delivering software that works as advertised.

A company can have any number of motivations for giving management responsibilities to lead developers. The organization may believe that a single person can be effective both in managing people and in delivering production code. They may have a corporate culture where only a minimal amount of management is needed and developers are self-directed. Perhaps their implementation of a flat organizational structure means that developers take on multiple tasks beyond development (not uncommon in startup environments). If a reasonably-sized and established company gives lead and management responsibilities to an individual developer or developers, however, it is also possible that there are budgetary motivations for that decision. The budgetary motivation doesn’t make a company bad (they’re in business to make money, after all). It is a factor worth considering when deciding whether or not a company is good for you and your career goals.

Being a good lead developer is hard. In addition to consistently delivering high-quality code, you need to be a good example and mentor to less-senior developers. A good lead developer is a skilled troubleshooter (and guide to other team members in the resolution of technical problems). Depending on the organization, they may hold significant responsibility for application architecture. Being a good development manager is also hard. Beyond the reporting tasks that are part of every management role, they’re often responsible for removing any obstacles that are slowing or preventing the development team from doing work. They also structure work and assign it in a way that contributes to timely delivery of functionality. The best development managers play an active role in the professional growth of the developers on their team, not just at annual review time. Placing the responsibility for these two challenging roles on a single person creates a role that is incredibly demanding and stressful. Unless you are superhuman, sooner or later your code quality, your effectiveness as a manager, or both will suffer. That outcome isn’t good for you, your direct reports, or the company you work for.

So, if you’re in the market for a new career opportunity, understand what you’re looking for. If a development lead position is what you want, scrutinize the job description. Ask the sort of questions that will make clear that a role being offered is truly a development lead position. If you desire a development management position, look at the job description. If hands-on development is half the role or more, it’s really a development lead position. If you’re indeed superhuman (or feel the experience is too valuable to pass up), go for it. Just be aware of the size of the challenge you’re taking on and the distinct possibility of burnout. If you’re already in a job that was advertised as a management position but is actually a lead position, learn to delegate. This will prove especially challenging if you’re a skilled enough developer to have landed a lead role, but allowing individual team members to take on larger roles in development will create the bandwidth you need to spend time on the management aspects of your job. Finally, if you’re an employer staffing up a new development team or re-organizing existing technology staff, ensure the job descriptions for development lead and development manager are separate. Whatever your software product, the end result will be better if you take this approach.

Getting (and Staying) Organized

During the past year-and-a-half as a software development manager for a local consulting firm, I’ve tried a number of different tools and techniques to keep myself organized.  As my role expanded to include business development, hiring, and project management tasks, there’s been a lot more to keep track of.  I also meet 1-on-1 with each team member on my current project, weekly or twice monthly.  “Move it out of e-mail” is my primary objective for getting organized.  The rest of this post will elaborate on the specific alternatives to e-mail that I tried on the way to my current “manager tools”.
Beyond e-mail and calendar functionality, Outlook offers a To-Do List and Tasks for keeping organized.  Both provide start dates and due dates.  The To-Do List is tied directly to individual e-mails, while Tasks are stand-alone items.  I abandoned the use of task functionality pretty quickly.  I used the To-Do List as recently as July of this year, but I see it as a bad habit now.  I rarely end up clearing the various options to flag an e-mail for follow-up (Today, Tomorrow, This Week, Next Week, No Date & Custom), so they become an ever-growing list where I only very occasionally mark items as complete.  In addition, the search functionality in Outlook never works as well as I need it to when I’m trying to find something.
Once I passed the six-month mark with my employer, I felt comfortable enough to introduce weekly 1-on-1s as a practice on my project.  After a couple of weeks of filling out a paper template for each team member on my project, the need for a better solution became readily apparent.  I found one in Lighthouse, the name of both the company and their product, a web-based application for managing your 1-on-1s with staff.  After the free trial, I chose not to renew.  While I liked Lighthouse, and the cost wasn’t prohibitive, my employer wasn’t going to pay for it.
I liked the ideas in Lighthouse enough that I tried to build a simpler, custom version of it myself.  Increasing work responsibilities (and the birth of my twins, Elliott and Emily) erased the free time I had for that side project.  Lighthouse maintains a leadership and management blog that I’ve found to be worthwhile reading.
I first started using Trello years ago for something not at all related to work–requesting bug fixes and enhancements to a custom website my fantasy football league uses.  I didn’t consider using it for work until I was re-introduced to it by a VP who uses it to keep himself organized.  Once I reviewed a few example boards and set up a board to moderate a weekly business development meeting, new possibilities for its use revealed themselves very quickly.  As of today, I’ve got 4 different boards active: 1 for “hiring funnel” activities, another board for business development tasks, a 3rd for project-specific tasks that don’t fall into a current Scrum sprint, and a 4th board as a general to-do list.  The last board turned out to be a great solution for capturing information from my 1-on-1 meetings.  It also tracks my progress toward annual goals, training reminders, and other “good corporate citizen” tasks.
The free tier of Trello service offers the functionality that matters most:
  • create multiple boards
  • define workflows as simple or complex as you need
  • create cards as simple or complex as you need
Combined with the key functionality above, Markdown formatting, attachment support, due dates, checklists, archiving, and the ability to subscribe to individual cards, lists, and boards and collaborate with other Trello users has helped me become much better organized and communicate more consistently with team members and executives in my organization.  The search capability also works much better for me than Outlook’s.
I’ve only gotten a handful of co-workers in my organization to adopt Trello so far, but I keep recommending it to other co-workers.  I’d like to see our entire business unit adopt it officially so we can take advantage of the capabilities available at the Business Class tier.

Security Breaches and Two-Factor Authentication

It seems the news has been rife with stories of security breaches lately.  As a past and present federal contractor, the OPM breach impacted me directly.  That and one other breach impacted my current client.  The lessons I took from these and earlier breaches were:

  1. Use a password manager
  2. Enable 2-factor authentication wherever it’s offered

To implement lesson 1, I use 1Password.  It runs on every platform I use (Mac OS X, iOS and Windows), and has browser plug-ins for the browsers I use most (Chrome, Safari, IE).  Using the passwords 1Password generates means I no longer commit the cardinal security sin of reusing passwords across multiple sites.  Another nice feature specific to 1Password is Watchtower.  If a site where you have a username and password is compromised, the software will indicate that site is vulnerable so you know to change your password.  1Password even has a feature to flag sites with the Heartbleed vulnerability.

The availability of two-factor authentication has been growing (somewhat unevenly, but any growth is good), but it wasn’t until I responded to a tweet from @felixsalmon asking about two-factor authentication that I discovered how loosely some people define two-factor authentication.  According to this New York Times interactive piece, most U.S. banks offer two-factor authentication.  That statement can only be true if “two-factor” is defined as “any item in addition to a password”.  By that loose standard, most banks do offer two-factor authentication because the majority of them will prompt you for an additional piece of “out of wallet” information if you attempt to log in from a device with an IP address they don’t recognize.  Such out-of-wallet information could be a parent’s middle name, your favorite food, the name of your first pet, or some other piece of information that only you know.  While it’s better than nothing, I don’t consider it true two-factor authentication because:

  1. Out-of-wallet information has to be stored
  2. The out-of-wallet information might be stored in plain-text
  3. Even if out-of-wallet information is stored hashed, hashed & salted, or encrypted with one bank, there’s no guarantee that’s true everywhere the information is stored (credit bureaus, health insurers, other financial institutions you have relationships with, etc)

One of the things that seems clear after the Get Transcript breach at the IRS is that the thieves had access to the out-of-wallet information of their victims, whether because they purchased the information, stole it, or found it on social media sites their victims used.

True two-factor authentication requires a time-limited, randomly-generated piece of additional information that must be provided along with a username and password to gain access to a system.  Authentication applications like the ones provided by Google or Authy provide a token (a 6-digit number) that is valid for 30-60 seconds.  Some systems provide this token via SMS so a specific application isn’t required.  By this measure, the number of banks and financial institutions that support true two-factor authentication is quite a bit smaller.  One of the other responses to the @felixsalmon tweet was a helpful URL with a list of sites that do.  The list covers a lot of ground, including domain registrars and cryptocurrencies, but might not cover the specific companies and financial institutions you work with.  In my case, the only financial institution I currently work with that offers true two-factor authentication is my credit union–Tower Federal Credit Union.  Hopefully every financial institution and company that holds our personal information will follow suit soon.
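
For the curious, here is roughly how those tokens are generated, a Python sketch of the TOTP algorithm from RFC 6238 (HMAC-SHA-1 with 30-second steps, the variant most authenticator apps use):

```python
import hmac
import struct
import time
from hashlib import sha1

def totp(secret, for_time=None, step=30, digits=6):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Test vector from RFC 6238: this secret at time 59 yields "94287082".
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

Because the token is derived from the current time and a shared secret, it expires on its own; there is no stored answer for a thief to buy or scrape from social media.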

Bulging Laptop Battery

Until yesterday, I’d been unaware that laptop batteries could fail in a way other than not holding a charge very well. According to the nice fellow at an Apple Genius Bar near my office, this happens occasionally.  I wish I’d been aware of it sooner, so I might have gotten the battery replaced before AppleCare expired.  When I did some googling, “occasionally” turned out to be a lot more often than I expected.  The battery had expanded to the point that it was pushing on the trackpad and making it difficult to click–in addition to preventing the laptop from sitting flush on flat surfaces.  Half an hour (and $129) later, a replacement battery made everything better.  Now that it has a fresh battery (and even though it’s only a late-2011 MacBook Pro), I’m sort of tempted to replace it with a shinier new one.  My new employer is of the “bring your own device” variety, and the MacBook Pro is quite a lot of weight to schlep to and from the office every day.

Which Programming Language(s) Should I Learn?

I had an interesting conversation with a friend of mine (a computer science professor) and one of his students last week.  Beyond the basic which language(s) question were a couple more intriguing ones:

  1. If you had to do it all over again, would you still have stuck with the Microsoft platform for your entire development career?
  2. Will Microsoft be relevant in another ten years?

The first question I hadn’t really contemplated in quite some time.  I distinctly recall a moment when there was a choice between two projects at the place where I was working–one project was a Microsoft project (probably ASP, VB6 and SQL Server) and the other one wasn’t (probably Java).  I chose the former because I’d had prior experience with all three of the technologies on the Microsoft platform and none with the others.  I probably wanted an early win at the company and picking familiar technology was the quickest way to accomplish that.  A couple of years later (in 2001), I was at another company and took them up on an opportunity to learn about .NET (which at the time was still in beta) from the people at DevelopMentor.  It only took one presentation by Don Box to convince me that .NET (and C#) were the way to go.  While it would be two more years before I wrote and deployed a working C# application to production, I’ve been writing production applications (console apps, web forms, ASP.NET MVC) in C# from then to now.  While it’s difficult to know for sure how that other project (or my career) would have turned out had I gone the Java route instead of the Microsoft route, I suspect the Java route would have been better.

One thing that seemed apparent even in 1999 was that Java developers (the good ones anyway) had a great grasp of object-oriented design (the principles Michael Feathers would apply the acronym SOLID to).  In addition, quite a number of open source and commercial software products were being built in Java.  The same could not be said of C# until much later.

To the question of whether Microsoft will still be relevant in another ten years, I believe the answer is yes.  With Satya Nadella at the helm, Microsoft seems to be doubling-down on their efforts to maintain and expand their foothold in the enterprise space.  There are still tons of businesses of various sizes (not to mention state governments and the federal government) that view Microsoft as a familiar and safe choice, both for COTS solutions and custom solutions.  So I expect it to remain possible to have a long and productive career writing software with the Microsoft platform and tools.

As more and more software is written for the web (and mobile browsers), whatever “primary” language a developer chooses (whether Java, C#, or something else altogether), they would be wise to learn JavaScript in significant depth.  One of the trends I noticed over the past couple of years of regularly attending .NET user groups is that fewer and fewer of the talks had much to do with the intricacies and syntactic tricks of Microsoft-specific technologies like C# or LINQ.  Instead, there would be talks about Bootstrap, Knockout.js, Node.js, Angular, and JavaScript.  Multiple presenters, including those who worked for Microsoft partners, advocated quite effectively for us to learn these technologies in addition to what Microsoft put on the market, in order to help us make the best, most flexible and responsive web applications we could.  Even if you’re writing applications in PHP or Python, JavaScript and JavaScript frameworks are becoming a more significant part of the web every day.

One other language worth knowing is SQL.  While NoSQL databases seem to have a lot of buzz these days, the reality is that there are tons of structured, relational data in companies and governments of every size.  There are tons of applications that still remain to be written (not to mention the ones in active use and maintenance) that expose and manipulate data stored in Microsoft (or Sybase) SQL Server, Oracle, MySQL, and PostgreSQL.  Many of the so-called business intelligence projects and products today have a SQL database as one of any number of data sources.

Perhaps the best advice about learning programming languages comes from The Pragmatic Programmer:

Learn at least one new language every year.

One of a number of useful things about a good computer science program is that after teaching you the fundamentals, it pushes you to apply those fundamentals in multiple programming languages over the course of a semester or a year.  Finishing a computer science degree should not mean the end of striving to learn new languages.  Different languages give us different tools for solving similar problems–and that ultimately helps make our code better, regardless of what language we’re writing it in.

Reflection and Unit Testing

This post is prompted by a couple of things: (1) a limitation in the Moq mocking framework, and (2) a look back at a unit test I wrote nearly 3 years ago when I first arrived at my current company. While you can use Moq to create an instance of a concrete class, you can’t set expectations on class members that aren’t virtual. In the case of one of our domain entities, this made it impossible to implement automated tests of one of our business rules–at least not without creating real versions of multiple dependencies (and thereby creating an integration test). Or so I (incorrectly) thought.

Our solution architect sent me a unit test example that used reflection to set the non-virtual properties in question so they could be used for testing. While the approach is a bit clunky when compared to the capabilities provided by Moq, it works. Here’s some pseudo-code of an xUnit test that follows his model by using reflection to set a non-virtual property:

[Fact]
public void RuleIsTriggered()
{
  var sde = new SomeDomainEntity(ClientId, null);
  SetWorkflowStatus(sde, SomeDomainEntityStatus.PendingFirstReview);

  var context = GetBusinessRuleContext(sde);

  // assert that the rule under test is triggered for this context
}

private void SetWorkflowStatus(SomeDomainEntity someDomainEntity, WorkflowStatus workflowStatus)
{
  var workflowStatusProperty = typeof(SomeDomainEntity).GetProperty("WorkflowStatus");
  workflowStatusProperty.SetValue(someDomainEntity, workflowStatus, null);
}

With the code above, if the business rule returned by RuleUnderTest looks at WorkflowStatus to determine whether or not the instance of SomeDomainEntity is valid, the value set via reflection will be what is returned. As an aside, the “context” returned from GetBusinessRuleContext is a mock configured to return sde if the business rule looks for it as part of its execution.

After seeing the previous unit test example (and a failing unit test on another branch of code), I was reminded of a unit test I wrote back in 2012 when I was getting up to speed with a new system. Based on the information I was given at the time, our value objects all needed to implement the IEquatable interface. Since we identified value objects with IValueObject (which was just a marker interface), using reflection and a bit of LINQ resulted in a test that would fail if any types implementing IValueObject did not also implement IEquatable. The test class is available here. If you need similar functionality for your own purposes, changing the types reflected on is quite a simple matter.
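
The same reflection-driven idea translates readily to other languages.  Here is a rough Python analog with invented classes: scan every type carrying a marker base class and flag any that don’t define their own equality, just as the C# test flagged IValueObject types that didn’t implement IEquatable:

```python
class ValueObject:
    """Marker base class, analogous to the IValueObject marker interface."""

class Money(ValueObject):
    def __init__(self, amount):
        self.amount = amount
    def __eq__(self, other):
        return isinstance(other, Money) and self.amount == other.amount
    def __hash__(self):
        return hash(self.amount)

class Coordinate(ValueObject):
    pass  # oops: relies on default identity-based equality

def types_missing_equality():
    """Reflect over marked types and flag any without their own __eq__."""
    return [cls.__name__ for cls in ValueObject.__subclasses__()
            if cls.__eq__ is object.__eq__]

assert types_missing_equality() == ["Coordinate"]
```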

Pseudo-random Sampling and .NET

One of the requirements I received for my current application was to select five percent of entities generated by another process for further review by an actual person. The requirement wasn’t quite a request for a simple random sample (since the process generates entities one at a time instead of in batches), so the code I had to write needed to give each entity generated a five percent chance of being selected for further review.  In .NET, anything involving percentage chances means using the Random class in some way.  Because the class doesn’t generate truly random numbers (it generates pseudo-random numbers), additional work is needed to make the outcomes more random.

The first part of my approach to making the outcomes more random was to simplify the five percent aspect of the requirement to a yes or no decision, where “yes” meant treat the entity normally and “no” meant select the entity for further review.  I modeled this as a collection of 100 boolean values, with 95 true and 5 false.  I ended up using a for-loop to populate the boolean list with the 95 true values.  Another option I considered was using Enumerable.Repeat (described in great detail in this post), but apparently that operation is quite a bit slower.  I could have used Enumerable.Range instead, and may investigate that possibility later to see what the advantages or disadvantages are in performance and code clarity.

Having created the list of decisions, I needed to randomize their order.  To accomplish this, I used LINQ to sort the list by the value of newly-generated GUIDs:

decisions.OrderBy(d => Guid.NewGuid()) //decisions is a list of bool

With a randomly-ordered list of decisions, the final step was to select a decision from a random location in the list.  For that, I turned to a Jon Skeet post that provided a helper class (see the end of that post) for retrieving a thread-safe instance of Random to use for generating a pseudo-random value within the range of possible decisions.  The resulting code is as follows:

return decisions.OrderBy(d => Guid.NewGuid()).ToArray()[RandomProvider.GetThreadRandom().Next(100)]; //decisions is a list of bool

I used LINQPad to test my code and over multiple executions, I got between 3 and 6 “no” results.
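
For comparison, here is the same approach sketched in Python (whose random module is also a pseudo-random generator); the names are mine, not from the original code:

```python
import random

def make_decisions():
    """95 'handle normally' (True) and 5 'select for review' (False) outcomes."""
    decisions = [True] * 95 + [False] * 5
    random.shuffle(decisions)  # randomize the order of the decisions
    return decisions

def selected_for_review():
    """Give an entity a five percent chance of being selected for review."""
    return not random.choice(make_decisions())

# The decision pool always holds exactly 100 entries, 95 of them True:
pool = make_decisions()
assert len(pool) == 100 and sum(pool) == 95
```

Since random.choice already picks a uniformly random index, the shuffle isn’t strictly necessary; a bare `random.random() < 0.05` gives the same five percent chance.  The sketch just mirrors the shuffled-list approach described above.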

RadioButtonListFor and jQuery

One requirement I received for a recent ASP.NET MVC form implementation was that particular radio buttons be checked on the basis of other radio buttons being checked. Because it’s a relatively simple form, I opted to fulfill the requirement with just jQuery instead of adding knockout.js as a dependency.

Our HTML helper for radio button lists is not much different than this one.  So the first task was to identify whether or not the radio button checked was the one that should trigger another action.  As has always been the case when grouping radio buttons in HTML, each radio button in the group shares the same name and differs by id and value.  The HTML looks kind of like this:

@Html.RadioButtonListFor(m => m.Choice.Id, Model.Choice.Id, Model.ChoiceListItems)

where ChoiceListItems is a list of System.Web.Mvc.SelectListItem and the ids are strings.  The jQuery to see if a radio button in the group has been checked looks like this:

if($("input[name='Choice.Id']").is(':checked')){

Having determined that a radio button in the group has been checked, we must be more specific and see if the checked radio button is the one that should trigger additional action. To accomplish this, the code snippet above is changed to the following:

if($("input[name='Choice.Id']:checked").val() == '@Model.SpecialChoiceId'){

The SpecialChoiceId value is retrieved from the database. It’s one of the values used when building the ChoiceListItems collection mentioned earlier (so we know a match is possible). Now the only task that remains is to check the appropriate radio button in the second grouping. I used jQuery’s multiple attribute selector for this task.  Here’s the code:

if($("input[name='Choice.Id']:checked").val() == '@Model.SpecialChoiceId'){
    $("input[name='Choice2.Id'][value='@Model.Choice2TriggerId']").prop('checked', true);
}

The first attribute filter selects the second radio button group, the second attribute filter selects the specific radio button, and prop('checked', true) adds the 'checked' attribute to the HTML. Like SpecialChoiceId, Choice2TriggerId is retrieved from the database (RavenDB in our specific case).

Complex Object Model Binding in ASP.NET MVC

In the weeks since my last post, I’ve been doing more client-side work and re-acquainting myself with ASP.NET MVC model binding.  The default model binder in ASP.NET MVC works extremely well.  In the applications I’ve worked on over the past 2 1/2 years, there have been maybe a couple of instances where the default model binder didn’t work the way I needed. The problems I’ve encountered with model binding lately have had more to do with read-only scenarios where certain data still needs to be posted back.  In the Razor template, I’ll have something like the following:

@Html.LabelFor(m => m.Entity.Person, "Person: ")
@Html.DisplayFor(m => m.Entity.Person.Name)
@Html.HiddenFor(m => m.Entity.Person.Name)

Nothing is wrong with the approach above if Name is a primitive (e.g. a string).  But because Name was actually a complex type (a fact I forgot on one occasion), the end result was that no name was persisted to our datastore (RavenDB), which meant there was no data to bring back when the entity was retrieved.  The correct approach for leveraging the default model binder in such cases is this:

@Html.LabelFor(m => m.Entity.Person, "Person: ")
@Html.DisplayFor(m => m.Entity.Person.Name)
@Html.HiddenFor(m => m.Entity.Person.Name.FirstName)
@Html.HiddenFor(m => m.Entity.Person.Name.LastName)
@Html.HiddenFor(m => m.Entity.Person.Name.Id)

Since FirstName, LastName and Id are all primitives, the default model binder handles them appropriately and the data is persisted on postback.
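
To see why posting each primitive works, here is a rough Python sketch of what a default model binder does with the dotted field names that HiddenFor emits.  This is a simplification of the real ASP.NET MVC binder, written only to illustrate the flattening:

```python
def bind(form):
    """Rebuild a nested object graph from flattened, dotted form field names."""
    model = {}
    for key, value in form.items():
        node = model
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating levels as needed
        node[parts[-1]] = value
    return model

# Each hidden primitive posts back as one dotted key:
form = {
    "Entity.Person.Name.FirstName": "Ada",
    "Entity.Person.Name.LastName": "Lovelace",
    "Entity.Person.Name.Id": "42",
}
assert bind(form)["Entity"]["Person"]["Name"]["FirstName"] == "Ada"
```

A hidden field for the complex Name type alone would post a single key with no usable value, which is why the binder has nothing to reconstruct in that case.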