
Wednesday, August 29, 2018

Emphasize behavior over structure

Behavior is the mirror in which everyone shows their image. - Johann Wolfgang von Goethe

RSpec is a Behaviour-Driven Development tool for Ruby programmers. Created shortly after JBehave in 2005, its raison d'ĂȘtre has been writing human-readable specifications in the BDD tradition - accessible to not just developers but also business users, analysts, and testers. Over the years, though, that idea of accessibility to an audience beyond just developers seems to have been lost on many. I suspect that's because many view RSpec as little more than a testing tool with a DSL rather than the BDD tool that it's meant to be.

This narrower developer-focused view leads to dubious guidelines like the ones documented on Better Specs. One of the very first ones there is about how to describe your methods.

How to describe your methods

 

The only guideline needed on that topic is: Don't do it. Describe behaviors, not methods.

Instead, you get guidelines like: Be clear about what method you are describing. For instance, use the Ruby documentation convention of . (or ::) when referring to a class method's name and # when referring to an instance method's name.

Below is an example of a supposedly good practice according to that guideline.

# (supposedly) good
describe Lemon do
  describe "#yellow?" do
    context "when ripe" do
      it "returns true"
    end
  end
end

# vs (supposedly) bad
describe Lemon do
  it "is yellow if ripe"
end
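For concreteness, here's a minimal plain-Ruby Lemon that would satisfy the behavior-focused description ("is yellow if ripe"). The ripeness flag is an assumption made up for illustration; the point is that the spec names the behavior, and the structure below is free to emerge afterwards.

```ruby
# A minimal Lemon whose one described behavior is
# "is yellow if ripe". Nothing in the spec's language
# forced us to name the method #yellow? up front.
class Lemon
  def initialize(ripe:)
    @ripe = ripe
  end

  # The behavior: a ripe lemon is yellow.
  def yellow?
    @ripe
  end
end
```

Usage: `Lemon.new(ripe: true).yellow?` gives `true`, and the spec's sentence still reads as domain language rather than code structure.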

When trying to reason about good practice, I find it always instructive to go back to the origins of an idea. Here too, Dan North's article introducing BDD offers great insight. Some quotes:
"[...] when they wrote the method name in the language of the business domain, the generated documents made sense to business users, analysts, and testers."
Notice how, in contrast to the above example, the emphasis is on the language of the business domain (e.g. Lemon is yellow if ripe), rather than the language of code (e.g. Lemon#yellow?, when ripe, returns true) that would make little sense to someone other than a developer.
"This sentence template – The class should do something – means you can only define a test for the current class."
Notice how the reference is to a class, not a method. The class, not its methods, should do something. Why should anyone care about whether Lemon#yellow? returns true rather than whether Lemon is yellow?
"If you find yourself writing a test whose name doesn’t fit this template, it suggests the behaviour may belong elsewhere."
Consider this Active Record test (assuming Lemon is an Active Record model).

describe Lemon do
  describe ".ripe" do
    it "returns only ripe lemons"
  end
  describe ".rotten" do
    it "returns only rotten lemons"
  end
end

This test just can't be written in the form 'Lemon should do something' following the suggested template, because, after all, we aren't really describing a Lemon here. What we're describing is the equivalent of a Lemon Repository (or a Lemon Basket, perhaps, in the UbiquitousLanguage of the domain) or a Data Access Object or whatever floats your boat when it comes to persistence. Active Record sadly conflates a model's class and its persistence mechanism, even though they're two clearly distinct concepts with distinct concerns. If the two weren't conflated, you'd write tests that no longer need to describe methods but can describe the behavior/responsibilities of a class, as below:

describe LemonBasket do
  it "should provide only ripe lemons"
  it "should provide only rotten lemons"
end
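If the basket concept were actually extracted from the model, a minimal plain-Ruby sketch might look like the following. The names (LemonBasket, the ripe flag) are assumptions invented for illustration, with a Struct standing in for the Active Record model:

```ruby
# A stand-in for the model: a lemon that knows whether it's ripe.
Lemon = Struct.new(:ripe) do
  def ripe?
    ripe
  end
end

# The LemonBasket owns the collection-level behavior that
# Active Record would otherwise fold into the Lemon class itself.
class LemonBasket
  def initialize(lemons)
    @lemons = lemons
  end

  # "provides only ripe lemons"
  def ripe
    @lemons.select(&:ripe?)
  end

  # "provides only rotten lemons"
  def rotten
    @lemons.reject(&:ripe?)
  end
end
```

With this separation, each spec describes one object's behavior: Lemon specs talk about ripeness and color, LemonBasket specs talk about what the basket provides.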

Another way to think about this is to remember that a class is an object, and class methods are, therefore, really instance methods of the (singleton) class object. So, if you include descriptions of .class_methods and #instance_methods in the description of a class, you're really mixing the descriptions of two different classes (pun intended) of objects. Consider this sample from RSpec's own documentation:

RSpec.describe Widget, :type => :model do
  it "has none to begin with" do
    expect(Widget.count).to eq 0
  end

  it "has one after adding one" do
    Widget.create
    expect(Widget.count).to eq 1
  end
end

That description just doesn't make any sense as soon as you try to read it as sentences that'd appear in RSpec output formatted as documentation.
Widget
 has none to begin with
 has one after adding one
That doesn't make sense because what you're really describing is conceptually something like a WidgetWarehouse.
WidgetWarehouse
 has no widgets to begin with
 has one widget after adding (storing?) one
With Active Record, the Widget class above doubles up as a singleton WidgetWarehouse (WidgetDAO/WidgetRepository/whatever). But instead of letting that incidental implementation detail drive the structure of your tests, you could recognize that they're essentially two distinct concepts/concerns, and describe them in independent tests rather than first mixing them up in the same class test and then trying to distinguish them through syntactic conventions.
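One way to make that separation concrete is sketched below, with an in-memory store standing in for Active Record persistence. WidgetWarehouse and its method names are assumptions for illustration, not an established API:

```ruby
class Widget; end

# The warehouse owns the collection-level behavior:
# "has no widgets to begin with", "has one widget after adding one".
# Widget instances stay ignorant of how they're stored.
class WidgetWarehouse
  def initialize
    @widgets = []
  end

  # The name .add (vs .store) is exactly the kind of structural
  # decision that can emerge from the tests rather than precede them.
  def add(widget)
    @widgets << widget
    self
  end

  def count
    @widgets.size
  end
end
```

Now a spec for WidgetWarehouse reads as sensible documentation sentences about the warehouse, and a spec for Widget can stay focused on what a widget does.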

Moreover, method descriptions aren't conducive to true BDD or TDD style in general. The behaviors of an object are known before the tests, but the structure (method names, signatures, etc.) quite often emerges. For instance, the behavior WidgetWarehouse has one widget after adding one is known at the time of writing the test, but the method that'd provide that behavior isn't (.add or .store? - I shouldn't have to make up my mind before writing the first test.).

Using cutesy syntactic conventions to describe classes based on their structure allows, even encourages, you to sidestep such considerations and suppresses the feedback generation mechanism that's at the heart of TDD and BDD. Focusing on and describing behavior forces you to think more deeply about the behavior and where it belongs.

Thursday, April 19, 2018

Dual track development - myths and anti-patterns

People just don’t read. It’s a miracle you’ve read this far yourself. - Jeff Patton

But people do look at pictures. (- Jeff Patton, again). And sometimes they misinterpret them.

That's why Jeff Patton felt the need to write "Dual Track Development is not Duel Track".

And because people sometimes misinterpret and then misrepresent pictures and ideas and concepts, someone wrote this mischaracterization of not one but at least three first-class concepts: dual-track development, product discovery and refactoring.

This is my attempt to counter the SemanticDiffusion of those concepts by busting some myths and listing some anti-patterns as mentioned in that article.

What not to think, aka myths


Myth 1: Dual-track scrum is an organizational structure separating the effort to deliver working software from the effort to understand user needs and discover the best solution.

It's not an organizational structure.

It's a way of working - a pattern that lots of people doing rigorous design and validation work in Agile development have arrived at. It may be an (intra-team) mechanism for organizing work, but an organizational structure is just the wrong way to think about it, because...

It's not about separating the effort to deliver from the effort to understand.

Jeff Patton has used two different visualizations to represent the dual-track model.


[Two visualizations: the "intertwined" model and the "conjoined" model]

Neither is perfect, but each one is really trying to convey how Agile thinking and product thinking work together in the same process.

The original paper about dual-track development (when it wasn't called that) says:
"... we work with developers very closely through design and development. Although the dual tracks depicted seem separate, in reality, designers need to communicate every day with developers."
Discovery work happens concurrently and continuously with development work.

It's not (really) about discovering the best solution.

It's primarily about discovering:
  • enough about the problems we’re solving to make good decisions about what we should build
  • if we're building the right thing (not the best solution) and if we'll get value in return
  • if people have the problem we’re solving and will happily try, use, and adopt our solution
  • risks and risky assumptions in our ideas and solution options
The primary objective of discovery is really validating whether the product is worth building. Once that validation is achieved, the best solution may have been either discovered in the process (as a happy byproduct), or it may be subsequently invented using any number of techniques. Coming up with the best solution may require a lot of tactical design decisions, but these are generally not the risky decisions that need validation through discovery.

Myth 2: Discovery is no more than purposeful iteration on solution options.

Discovery is a whole lot more about validation of problems to solve, opportunities and ideas than it is about iteration on solution options.

Myth 3: Knowledge comes from having thought things through beforehand. The discovery process is where that detailed analysis takes place.

The ability to think things through comes from knowledge, not the other way round. Without knowledge, thinking things through will only generate ideas, options and assumptions (which is all great, too - and you may feed those into discovery).

But no, discovery isn't where detailed analysis takes place - it's where exploration and validation take place. Yes, to actually start building software, you’ll still need to make a lot of tactical design decisions. Often, designers need to spend time refining and finishing design work in preparation for writing code. That's why Jeff Patton clarifies, "OK, it’s really 3 kinds of work".

Myth 4: Refactoring is natural when you’re pivoting.

Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. Pivoting, on the other hand, is a structured course correction designed to test a new fundamental hypothesis.

One requires preserving external behavior, while the other is hard to imagine without changing external behavior. So, what's natural when you're pivoting is some form of rework - maybe a redesign, maybe a rewrite - but refactoring doesn't seem to be it.

Myth 5: If you're experiencing frequent refactoring, you probably aren’t putting enough effort into product discovery. Too much refactoring? You may need to invest in product discovery.

On the contrary, if you're experiencing infrequent refactoring, you probably aren't putting enough effort into product upkeep and evolution. Too much rework? You may need to invest in evolutionary engineering skills.

Myth 6: Those weeks of refactoring are a direct drag on speed.
Development work focuses on predictability and quality. Discovery work focuses on fast learning and validation.

Refactoring is what enables you to maximize learning velocity in discovery, and delivery velocity and predictability in, well, delivery. The knowledge that you can come back and safely, systematically improve quality (read: refactor) allows you to avoid wasting time over-investing in quality when you're still uncertain about the solution. Reflecting your learning in the code and keeping its theory consistent and its structure healthy through MercilessRefactoring brings predictability and speed to development because you aren't constantly thrown off by surprises lurking in the code.

On the other hand, if you're spending weeks refactoring, you're likely not refactoring (but doing some other form of rework) or, ironically, not refactoring enough.

Myth 7: And “we didn’t think of that” is still a flaw even if you weren’t “supposed to.”

If you weren't supposed to think of something, and so you didn't, it's not a flaw - it's just called not being psychic.

What not to do, aka anti-patterns


Anti-pattern 1: Each track is given a general mandate: discover new needs, or deliver working software. Taking the baton first, the Discovery track is responsible for validating hypotheses about solving user problems, and preparing user stories for the Delivery track.

This sounds suspiciously like talk of two teams rather than two tracks within a single team and a single process. If the whole team is responsible for product success, not just getting things built, then the whole team understands and contributes to both kinds of work.

Anti-pattern 2: A sampling of "how it plays out in my current environment":
- The initial solution concepts are built from the needs brought to the team by Product Management. These often come from known functionality needs, but sometimes directly from client conversations.
- Product Manager provides a prioritized list of user needs & feature requests and helps to explain the need for each.
- The Discovery track generates an understanding of requirements, wireframes and architectural approach.
- The Delivery track takes the artifacts generated in the Discovery track and creates the detailed user stories required to deliver on those ideas.
- The handoff between Discovery and Delivery turned out to be a little hairy.

First, if you already have a prioritized list of requirements/known needs (that only sometimes come directly from client conversations) and can already explain the need for each, where's the element of discovery? You either don't need it, or aren't doing it (right).

Secondly, if, instead of building a shared understanding of user needs and potential solutions by working together as a whole team, one group of people generates an understanding of requirements and hands off artifacts to another group, don't be surprised if the handoff turns out to be a little hairy.

Anti-pattern 3: [A variety of specialists] whiteboard wireframes and talk through happy path and edge cases to see if a design holds water. When it doesn’t, changes are made and the talk-through happens again. Simple features may require only a few iterations; more complex ones may be iterated on for several weeks off and on, woven in with other responsibilities. When the team is confident in the design, [...] members of client facing teams are invited to a walk-through, and select clients are recruited to provide feedback.

The team iterating until it feels 'a design holds water' isn't learning, isn't validation, and isn't discovery. Sure, it's progressive refinement of ideas and solutions through deliberate analysis and thinking things through. Those are wonderful things and should be done. However, that's not discovery, so don't call it that.

What to do instead


I've tried to clarify what dual-track development is not, without properly explaining what it is. Others have done a much better job of it than I could at this point, so here're some starting points:
Agile engineering is equally crucial, because without that you'd end up with FlaccidDualTrack (similar to FlaccidScrum - mentally substitute Scrum with dual-track development in that article and it all remains just as true, if not more). For highly relevant engineering perspective, check out: