
Buffers of risk

Do people and groups have an overall level of risk with which they are comfortable?  If risk is reduced in one area, does it get consumed by increased risk-taking in another area?  This is the (still debated) hypothesis of risk homeostasis as proposed by psychology professor Gerald J. S. Wilde.

I've been reading Malcolm Gladwell's "best of" compilation, What the Dog Saw: And Other Adventures.  Several articles are very familiar, either from reading them previously or from the discussion and topics they created elsewhere.  One that caught my attention is his 1996 piece about disasters and risk, Blowup.  The overall story is about the Challenger explosion in 1986 and the efforts that went into determining what happened and fixing the problem.  The issue, though, is that while the O-rings on the booster rockets were fixed, there were six binders full of other shuttle components with a similar level of risk.  How do you fix all of these things to reduce risk?  You probably don't.  You have to gauge the acceptable level of risk (explicitly or not) and move ahead.

And how does this relate to the idea of buffers from Theory of Constraints?  It was the words Gladwell used - and maybe these are the words used in the larger community of people who talk about risk.  He uses the word "consume," as in 'They consumed the risk reduction, they didn't save it.'  As I read those words and the final sections of the article, I couldn't help thinking about buffer management and what happens in real TOC implementations when buffer management is not followed.  What I see is that while we make every effort to help people understand the intentions behind the implementation, if they do not follow the new set of rules, they consume the buffers that are meant to absorb variability and emergencies.  The buffers get consumed because people have the false idea that the buffer gives them "extra time" to get their work done.
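For readers who haven't seen buffer management in practice, here is a minimal sketch of the idea in code.  The three-zone (green/yellow/red) scheme split into thirds is a common TOC convention, but the function name and the example numbers here are my own illustration, not from any particular implementation.  The point it makes is the same one above: the buffer is a signal to be watched, not spare time to be spent.

```python
def buffer_status(buffer_consumed: float, buffer_total: float) -> str:
    """Classify how much of a protective buffer has been consumed.

    In TOC buffer management, crossing into the yellow or red zone is a
    signal to plan or expedite recovery - not "extra time" to spend.
    """
    penetration = buffer_consumed / buffer_total
    if penetration < 1 / 3:
        return "green"   # normal variability, no action needed
    elif penetration < 2 / 3:
        return "yellow"  # plan a recovery action
    else:
        return "red"     # act now: the protection is nearly gone


if __name__ == "__main__":
    # A 9-day buffer with 7 days already consumed is deep in the red:
    # the buffer was treated as slack instead of being managed.
    print(buffer_status(buffer_consumed=7, buffer_total=9))  # -> "red"
```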

I wonder if there is a connection between the psychology behind risk homeostasis and the behaviors I have seen in some TOC implementations.  Is there an underlying belief that the buffers protect operations more than they actually do?  Or does it go beyond buffers?  Another thing that happens in TOC implementations is that the amount of work in process (WIP) goes down because things are supposed to flow through the system faster.  What if, as part of the initial WIP reduction, people slow down rather than speed up?  They've just consumed the time buffer that is supposed to be there to help them move faster. 

Interesting thoughts.  I hope this helps someone else too.

[Photo: "Puffer / Buffer" by pittigliani2005]
