Part three of a four-part series by SoftExpert; Roger Tregear, a manager at Leonardo International, and Ian Huntly, CEO of Rifle-Shot Performance Holdings, are the Sub-Saharan Africa representatives of SoftExpert, a market leader in software and services for enterprise-wide business.
In this third part of a four-part series on process, process improvement and process management, the authors discuss how to identify the process.
Which process?
It may seem trivial to say that the first step is to identify the process. Of course, users know what process they are analysing, don’t they? Sometimes it’s not that easy. In any case, users should make sure that everyone involved has a shared understanding of the boundaries of the process in question, as this will have a significant impact on the performance measures users ultimately assign.
Obviously, they also need to understand what the process is generally trying to accomplish. Any time users discover a process that is not directly or indirectly contributing to the achievement of the organisation’s strategic goals, they should delete it – or, at least, study it more carefully to understand what, if anything, it is doing to add value. Users must identify the process and its purpose.
These steps need not take long, especially for a process that is already well understood. Neglecting to assure this firm foundation, however, will make it impossible to determine the best set of performance measures.
Who cares?
Stakeholders are fundamental to process measurement and improvement. It is they who determine good performance and whether it is being delivered. Processes “perform” for stakeholders, so stakeholder performance aspirations must be the starting point. Users must clearly understand who the process stakeholders are, and what they want from it.
This is not just a simple, vague list of stakeholder types, but a detailed analysis of all stakeholders and their relationship to the process. Many of these will be deeply involved in the definition of process performance measures facilitated by this methodology.
Not all stakeholders are equal. It is often difficult to meet the performance aspirations of each stakeholder; there may even be diametrically opposed requirements. Some stakeholders are more engaged with the process than others; some have more power and influence over the design and operation of the process.
In discussion with the stakeholders, it is useful to take a top-down approach to determining the process performance measures. A discussion about what sort of performance matters most will identify key measurement themes, and an important outcome of such discussions is an early prioritisation. Users can’t, and don’t want to, measure everything.
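One illustrative way to record such a stakeholder analysis is as a simple register, ordered by influence and engagement. The following Python sketch is hypothetical; the stakeholder names, aspirations and 1–5 weightings are invented for illustration and are not part of the authors' methodology:

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    wants: list[str]   # performance aspirations for the process
    power: int         # influence over process design/operation (1-5, invented scale)
    engagement: int    # day-to-day involvement with the process (1-5, invented scale)

def prioritise(stakeholders: list[Stakeholder]) -> list[Stakeholder]:
    """Order stakeholders so the most powerful and engaged are consulted first."""
    return sorted(stakeholders, key=lambda s: (s.power, s.engagement), reverse=True)

# Invented example register
register = [
    Stakeholder("Customer", ["fast delivery", "accurate orders"], power=4, engagement=5),
    Stakeholder("Regulator", ["compliance reporting"], power=5, engagement=2),
    Stakeholder("Operations team", ["low rework"], power=3, engagement=5),
]

for s in prioritise(register):
    print(s.name, "-", ", ".join(s.wants))
```

Such a register makes explicit both what each stakeholder wants from the process and whose aspirations should carry the most weight when requirements conflict.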
Mind the gap
No process exists in isolation; no process performance measure exists in isolation. Every process can be decomposed into component processes, and every process is itself part of a higher-level process – at least up to the top of the process architecture. Although users may not actively measure them all, processes at every level can be measured.
Therefore, in parallel with the hierarchy of processes, users have a hierarchy of process measures.
The analysis issue at this point is to determine if there are any important misalignments between process measures at the same or higher levels. The key questions to be addressed are:
* Does the set of process measures align with the next highest process level?
* Are there gaps in the set of process measures compared to the next highest level?
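The two alignment questions above can be checked mechanically. A minimal Python sketch, with invented measurement themes (it assumes each process level can be summarised as a set of themes, which is a simplification):

```python
def measurement_gaps(parent_themes: set[str], child_theme_sets: list[set[str]]) -> set[str]:
    """Return parent-level measurement themes that no component process covers.

    parent_themes: themes measured at the higher process level
    child_theme_sets: one set of themes per component process at the next level down
    """
    covered = set().union(*child_theme_sets)
    return parent_themes - covered

# Invented example: "cost" is measured at the parent level
# but has no counterpart in any component process.
parent = {"cost", "cycle time", "quality"}
children = [{"cycle time"}, {"quality", "cycle time"}]

print(measurement_gaps(parent, children))
```

A non-empty result flags a gap: a performance theme that matters at the higher level but that no lower-level measure would detect drifting.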
Stop the leaks
Performance leakage points (PLPs) are places and circumstances in the process where problems are more likely to develop. There is no certainty about this for a particular process in its unique circumstance, but experience shows that any process may have a problem at:
* Handover points;
* Queuing points;
* Places that create rework;
* Customer touch points;
* Points of complexity; and
* Time-critical activities.
Having identified proven or potential PLPs such as these, it is useful to consider whether the candidate process measures would effectively alert users to performance issues at those points. If users imagine where performance problems might develop, do they have the measures that will provide an early warning?
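That early-warning check can also be made explicit: map each candidate measure to the leakage points it would detect, then list the PLPs left unmonitored. The Python sketch below is hypothetical; the measure names and mappings are invented for illustration:

```python
# The six PLP types listed above
PLP_TYPES = ["handover", "queuing", "rework", "customer touch", "complexity", "time-critical"]

# Invented candidate measures, each mapped to the PLPs it would detect
candidate_measures = {
    "queue length at dispatch": ["queuing"],
    "handover error rate": ["handover", "rework"],
    "on-time completion %": ["time-critical"],
}

def uncovered_plps(plps: list[str], measures: dict[str, list[str]]) -> list[str]:
    """Return the leakage points for which no candidate measure gives a warning."""
    monitored = {p for plp_list in measures.values() for p in plp_list}
    return [p for p in plps if p not in monitored]

print(uncovered_plps(PLP_TYPES, candidate_measures))
```

In this invented example, customer touch points and points of complexity have no measure watching them, prompting either new measures or a deliberate decision that those risks are acceptable.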