Measure for Measure
Maximize Value From Your Incentive Program
By Rick Dandes
There are several additional ways to show a program is effective.
Most experts agree that designers and managers should first gather input from top leaders as the program is being developed, to engage them in the planning process and gain agreement on key assumptions and desired outcomes or improvements.
Thus, the first key to measuring ROI, Smith, of O.C. Tanner, added, is building the program around specific, measurable goals and activities. Those criteria should be firmly established at the outset of the program and clearly communicated to participants, with an equally clear message of what they can do to contribute to those goals and how they will be rewarded for meeting and exceeding them.
"The most successful programs," Smith said, "track and review progress throughout the program so that adjustments can be made rather than waiting until the end to analyze the results. Put a process in place to track and measure the results."
Smith also recommended a follow-up survey to all participants after the completion of the program to gain additional feedback on the experience and to gather ideas to improve the program in the future.
If a program will be running for a year or more, it's often beneficial to survey participants midway through the program to garner helpful insights.
Then, there is what Stotz, of IRF, called an "ad hoc" or after-the-fact review. One way to accomplish this is to measure the results of those who participated in the incentive program versus a control group of employees who did not. This is particularly effective when analyzing the effectiveness of a sales incentive-award program.
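The control-group comparison Stotz describes can be reduced to a simple difference in averages between the two groups. The sketch below is only an illustration of that idea; all sales figures are invented placeholders, not data from the article.

```python
# Hedged sketch of the participant-vs-control comparison described above.
# The per-rep sales figures are hypothetical placeholders.

def average(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Quarterly sales (in dollars) per sales rep -- invented numbers.
participants = [112_000, 98_500, 121_300, 105_200]   # enrolled in the program
control_group = [95_400, 101_100, 93_800, 99_700]    # not enrolled

# The gap between the two averages is the lift attributed to the program.
lift = average(participants) - average(control_group)
lift_pct = lift / average(control_group) * 100
print(f"Average lift attributable to the program: ${lift:,.0f} ({lift_pct:.1f}%)")
```

In practice the comparison would also need the two groups to be similar in territory, tenure, and baseline performance for the difference to be credible.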
"Another way to show a program is effective is to provide both a pre- and post-simulation," explained Stotz. "In other words, looking at designing a program and then going through the 'What if?' scenarios: 'What if sales improve by 10 percent? What would that mean in terms of increased revenue, increased gross profit and increased cost, based on the fact that we have to fund the program?'"
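The "what if" exercise Stotz outlines amounts to projecting incremental revenue and gross profit from an assumed sales lift, then netting out the cost of funding the program. Here is a minimal sketch of that arithmetic; every figure (baseline sales, margin, program cost, the 10 percent lift) is a hypothetical placeholder, not data from the article.

```python
# Minimal sketch of the pre-program "what if" simulation described above.
# All inputs are hypothetical assumptions for illustration only.

def simulate_lift(baseline_revenue, gross_margin, program_cost, lift_pct):
    """Project the financial impact of an assumed sales lift.

    Returns (incremental revenue, incremental gross profit,
    net return after funding the program).
    """
    added_revenue = baseline_revenue * lift_pct
    added_gross_profit = added_revenue * gross_margin
    net_return = added_gross_profit - program_cost
    return added_revenue, added_gross_profit, net_return

revenue, profit, net = simulate_lift(
    baseline_revenue=5_000_000,  # hypothetical annual sales
    gross_margin=0.30,           # hypothetical gross margin
    program_cost=120_000,        # assumed cost of funding the program
    lift_pct=0.10,               # the "what if sales improve by 10 percent?" case
)
print(f"Incremental revenue: ${revenue:,.0f}")
print(f"Incremental gross profit: ${profit:,.0f}")
print(f"Net return after program cost: ${net:,.0f}")
```

Running the same function across a range of lift assumptions (5, 10, 15 percent) is exactly the kind of scenario sweep that can flag a negative-return program before it launches.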
Stotz gave an example of how this kind of analysis can prevent a program from becoming ineffective.
"The IRF conducted a study where company executives learned that a particular program would result in a negative return on investment," he said.
"The reason for the negative results," Stotz explained, "was that as they went through the analysis of simulating the different possible outcomes, this company realized they could perhaps sell the product, but they could not manufacture and deliver it without experiencing huge increases in cost. They asked: if the sales force went out and sold new customers, could the finance department do the credit checks in a timely fashion to make sure each customer was credit-worthy? Number two, they went to manufacturing and said, if we increased the requirements by 10 or 15 percent, what would you need to do to produce those units in a timely fashion to meet our commitments? And they found out they would have to add more people or pay overtime, which is an expense, and secondly, they would have to buy more material for which they did not have a current contract."
Those kinds of statistics provided designers and managers with enough data to transform the program into a successful one.