The Macro Expansion Heuristic – A Real World Example

I enjoy learning new things, and I especially enjoy making connections between things I've learned or read about, and applying them to real-world situations. I had the opportunity to do that today while I was brainstorming with a colleague for a presentation on model-based testing and its application in intelligent automation. We were talking about the benefits that model-based testing provided when my colleague said something that caught my attention: we are modeling requirements.

I tend to be overly careful in conversations about model-based testing because my experience comes from modeling the states of use of an application rather than the approach we’re taking with model-based automation, and I don’t want to apply the wrong context. So I stopped and thought about it for a few seconds before telling my colleague that I wasn't comfortable with that statement. Something about it didn't sit right with me, and I suspected it was that the statement either did not correctly say what was meant or did not mean what was said.

We talked it through and mapped it out in our mind map, but we were still unable to agree that the statement was correct. By this point I believe my colleague was also unsure as to whether he had said what he meant or meant what he said. It then occurred to me that I had seen a similar problem before, presented along with a heuristic we could apply to solve it – the concept of a macro expansion, which I had first seen applied to communication by Michael Bolton in his blog post “What Do You Mean By ‘Arguing Over Semantics’?”
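For readers who haven't encountered the term outside of that post: in programming, a macro expansion replaces a single token with the longer text it stands for before the compiler ever sees it. A quick, contrived sketch in C (the SQUARE macro here is purely an illustration, not anything from Bolton's post):

    #include <stdio.h>

    /* The preprocessor replaces every use of the single token SQUARE(x)
       with the longer expression it stands for before compilation. */
    #define SQUARE(x) ((x) * (x))

    int main(void) {
        /* The compiler never sees "SQUARE(4)"; it sees "((4) * (4))". */
        printf("%d\n", SQUARE(4));
        return 0;
    }

The single word SQUARE stands in for a longer expression, and expanding it reveals what was actually meant.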

When we generalize the concept of a macro expansion, we can use it to take a single word and expand it into a series of words that more accurately express what we mean. I asked my colleague if we could do a small exercise on the whiteboard to sort this out, and he agreed. Taking one of the markers present, I wrote on the board:

“We’re modeling the requirement.”

As I was writing, I told my colleague that if we were to expand that statement, we would get something that better said what was meant. So, under that I wrote:

“We’re modeling that the requirement was correctly implemented.”

After further discussion we agreed this could be shortened without a loss of clarity, so I wrote:

“We’re modeling correct implementation.”

We were both comfortable with that statement, and felt that it truly expressed what we were attempting to say. 

Feeling that we were on to something, we took this one step further. If we’re modeling correct implementation, then any discrepancy between the actual implementation and what we have modeled means that there is a problem. That means that the models we are creating are mechanisms by which we recognize a problem. That sounds rather familiar...

