Doing the Math for Assessing Communication: The Bijective Oracle

I’ve seen some conversations and blog posts recently about whether or not we should be arguing over semantics. Some I read and follow, some I don’t, but someone recently directed the people involved in one of these threads to a blog post by Michael Bolton. The post is entitled “What Do You Mean By ‘Arguing Over Semantics’?”, and I think it provides one of the best discussions I’ve seen on this subject. The thing that caught my attention was when Michael finishes the post by saying:
“There’s a common thread that runs through these stories: they’re about what we say, about what we mean, and about whether we say what we mean and mean what we say. That’s semantics: the relationships between words and meaning. Those relationships are central to testing work. 
If you feel yourself tempted to object to something by saying “We’re arguing about semantics,” try a macro expansion: “We’re arguing about what we mean by the words we’re choosing,” which can then be shortened to “We’re arguing about what we mean.” If we can’t settle on the premises of a conversation, we’re going to have an awfully hard time agreeing on conclusions.”
That really struck a chord with me because it highlights what I see as one of the larger obstacles to effective language-based communication: the one-to-many relationship that exists between a word (the one) and the meanings (the many) generally associated with that word. This is easily extended beyond single words to include groups of words structured to form larger constructs such as clauses, phrases, sentences, paragraphs, etc., in which case we now have a many-to-many relationship between words/constructs and meanings. There can be many disconnects between what we say and what we mean, and we often don’t know whether we actually say what we mean and mean what we say. Moreover, if we’re not sure, how can the people we’re trying to communicate with be sure?

One oracle I try to apply when assessing the effectiveness of language-based communication is the mathematical notion of a bijective function: a function that is both injective and surjective. Yeah, I’m a math geek, but bear with me while I describe this oracle.

In mathematics, a function is simply a relationship between a set of inputs, called the domain, and a set of outputs, called the co-domain (often loosely called the range), in which every member of the domain is mapped to exactly one member of the co-domain. If we apply this to the relationships between words and meanings by mapping each word/construct to exactly one meaning within the context of our current communication, namely the meaning we are trying to convey, then we have made some progress in alleviating the issues that arise from the many-to-many mapping between words (the inputs) and meanings (the outputs): we have restricted the mapping and thereby reduced it to a many-to-one mapping. But at this point we still don’t know whether we are saying what we mean and meaning what we say; we just know what we mean, and there may still be many ways to say it. Effective communication needs to use the same words to convey the same meanings.
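To make this concrete, here is a minimal sketch in Python. It is purely my own illustration, and the glossary entries are hypothetical: a dictionary maps each key to exactly one value, so it captures the “function” restriction directly, while still allowing the many-to-one situation where different words carry the same meaning.

```python
# A hypothetical glossary for a single conversation: each word/construct
# (the domain) maps to exactly one intended meaning (the co-domain).
# A dict enforces the "function" property: one meaning per word.
glossary = {
    "bug": "behaviour that threatens the value of the product",
    "defect": "behaviour that threatens the value of the product",  # a second word, same meaning
    "test": "an experiment designed to reveal information about the product",
}

# Still many-to-one: three words map onto only two distinct meanings.
print(len(glossary), "words convey", len(set(glossary.values())), "distinct meanings")
```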

The problem is that we haven’t yet addressed the fact that different words/constructs can be used to convey the same meaning. To do this we need to apply the oracle and determine whether every meaning is conveyed by at most one word/construct. In the language of mathematics, our relationship would then need to be injective, or one-to-one: not only is every input mapped to exactly one output, but distinct inputs map to distinct outputs; no two inputs produce the same output. So now, if we use our oracle to assess our communication, we can better see whether we have established a one-to-one mapping between what we say and what we mean. If not, then there is a possibility that we are not saying what we mean and meaning what we say.
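Continuing the toy glossary from the sketch above, the injectivity question can be asked mechanically: do any two words map to the same meaning?

```python
from collections import Counter

def is_injective(mapping):
    """True if no two words map to the same meaning (one-to-one)."""
    return all(count == 1 for count in Counter(mapping.values()).values())

# The glossary above is not injective: "bug" and "defect" share a meaning,
# so we are using two different words to say the same thing.
print(is_injective(glossary))  # False
```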

For our communication to be truly effective, we also need to make sure that every meaning has been given a word/construct in our conversation. Have we left anything unsaid? Have we used words/constructs to explicitly cover all the meanings we wanted to cover? Every meaning we want to convey needs to be the target of some word/construct we have used in our communication. In mathematical terms, our relationship needs to be not only injective but also surjective: every element of the output set (the meanings) is reached by at least one element of the input set (the words). If we do this, then we’ve worked to ensure that our communication is complete.
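In the same toy sketch, the surjectivity question becomes: is every meaning we intended to convey actually reached by at least one word we used? The intended meanings below are, again, made up purely for illustration.

```python
def is_surjective(mapping, intended_meanings):
    """True if every intended meaning is conveyed by at least one word (onto)."""
    return set(intended_meanings) <= set(mapping.values())

intended = {
    "behaviour that threatens the value of the product",
    "an experiment designed to reveal information about the product",
    "a condition that must hold before testing can begin",  # never given a word
}

# Not surjective: one intended meaning was left unsaid.
print(is_surjective(glossary, intended))  # False
```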


So, using our bijective oracle, we can look for potential problems in our communication. Is our communication injective? Have we reduced the many-to-many relationship between words and meaning down to a one-to-one relationship? Is our communication surjective? Have we said everything we need to say? If so, then there’s a pretty good chance that we are saying what we mean and mean what we say.
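Putting the two checks together gives the bijective oracle in miniature. This is still just my toy illustration built on the sketches above, not a formal tool, but it shows how the two questions combine:

```python
def is_bijective(mapping, intended_meanings):
    """The full oracle: one word per meaning, and no meaning left unsaid."""
    return is_injective(mapping) and is_surjective(mapping, intended_meanings)

if not is_bijective(glossary, intended):
    print("Potential problem: look for synonyms in use and for meanings left unsaid.")
```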
