The hidden threat: combating cheating in digital health interventions
By Carmen E Lefevre, on 30 November 2016
By: Paolo Satta, project manager at Day One
Designing an effective incentive system is not an easy task. Besides the delicate choices that can decide the fate of the entire system (Which actions should be incentivised? How? How often?), a hidden threat can jeopardise even the most sophisticated concept: cheating by the very people who are meant to be incentivised.
Whatever the system, we know that someone is going to try to cheat it – to earn higher rewards, to outdo others, or simply for the satisfaction of fooling the system.
How can you build an incentive system with antibodies that protect it against cheating?
The first rule of thumb is to create incentives which are perceived to be fair by the beneficiaries.
In Credits4Health (C4H) we aim to incentivise the users of the C4H Internet platform to do more physical activity (PA), adopt healthier dietary behaviours, and get and provide social support. We defined a set of in-platform actions which generate points (e.g. planning the PA and dietary calendar, self-reporting the activities completed, etc.), and users can spend their points to get discounts on products provided by industrial partners.
We tried to make the system as fair as possible, by giving the right point value to each rewarded action (posting a message cannot be rewarded as much as running for an hour!) as well as to each discount (how many points is a 50% discount on a gym subscription worth compared to 30% off a restaurant dinner?).
Another suggestion is to choose carefully the actions to be rewarded. There is a well-known case of a company that rewarded programmers based on the number of lines of code they produced. The result was software with encyclopaedic code. In this case, the rewarded action was badly chosen, because productivity was mistaken for quantity. C4H users are rewarded based on their performance in physical activity and nutrition. They can plan exercise sessions during the week and report whether or not they did them. The same applies to specific dietary habits: they plan their consumption of meat during the week and self-report whether they met the recommendations. Social activity is also rewarded; points are assigned when users write posts, or when their posts receive a certain number of answers. Unfortunately, most of these actions are self-reported.
This leads us to the third suggestion: keep the overall system in mind when designing anti-cheating rules, and think about the consequences. In C4H we defined maximum point thresholds which could not be exceeded, bonuses included. For instance, a user can earn points based on the number of steps tracked by a device. We know that anyone can wear the device – even the user’s dog. So we minimised the effect of cheating by capping the points once the user reaches the goal of 10,000 steps. Similarly, writing a post is rewarded, but only up to 10 posts per week. After that, no more points are assigned, and the user must wait for the following week. The “threshold” rule is applied across the platform for two reasons: on one side, it prevents users from over-acting just to gain points; on the other, it gives users clear goals for measuring their performance and improving on it (3,000 more steps to reach your goal!).
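The threshold rule described above can be sketched in a few lines. This is a minimal illustration, not C4H’s actual code; the names and conversion rates (`STEP_GOAL`, `POINTS_PER_1000_STEPS`, the per-post reward) are hypothetical.

```python
STEP_GOAL = 10_000           # steps beyond this earn no extra points
POINTS_PER_1000_STEPS = 10   # hypothetical conversion rate
WEEKLY_POST_CAP = 10         # at most 10 posts rewarded per week

def step_points(steps_today: int) -> int:
    """Cap rewarded steps at the daily goal, so that racking up
    extra steps (or strapping the tracker to the dog) stops paying off."""
    counted = min(steps_today, STEP_GOAL)
    return counted * POINTS_PER_1000_STEPS // 1000

def post_points(posts_this_week: int, points_per_post: int = 5) -> int:
    """Reward at most WEEKLY_POST_CAP posts; further posts earn
    nothing until the counter resets the following week."""
    rewarded = min(posts_this_week, WEEKLY_POST_CAP)
    return rewarded * points_per_post
```

Note how the cap doubles as a goal: the difference between `STEP_GOAL` and today’s count is exactly the “3,000 more steps to reach your goal!” message.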
Finally: make the rules clear to users, and monitor the results. In C4H, no individual “punishing” action was programmed, partly because users were contributing to a randomised controlled trial and to the research. Still, we found some cases of cheating. Some users planned their activities as far ahead as 2018 just to earn more points from planning their calendars. We therefore blocked planning more than one month ahead.
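Closing that loophole amounts to a simple validation rule on calendar entries. Here is one way it could be sketched; the exact horizon and function name are assumptions for illustration.

```python
from datetime import date, timedelta

# "One month ahead" approximated as 31 days (assumed value)
MAX_PLANNING_HORIZON = timedelta(days=31)

def can_plan(activity_date: date, today: date) -> bool:
    """Accept a planned activity only if it falls within the
    planning horizon, so planning years ahead earns nothing."""
    return today <= activity_date <= today + MAX_PLANNING_HORIZON
```

Any entry beyond the horizon is simply rejected before points are awarded, so the planning reward cannot be farmed.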
We also used checkpoints as disincentives for cheating: periodic screenings of the users’ anthropometric measures (weight, BMI, etc.) gave us objective data on users’ status.
Auditing can also be useful, especially for challenges amongst users: the winners can be audited to verify their good faith. If there are too many rewarded users to audit them all, you can run random audits. In C4H, some users were gaining too many points from their step counts. We looked into the issue and found it was a bug in the platform, so we informed the users and reset their points. All in all, constant monitoring is essential: anomalies and inconsistencies in the system are the first warnings of possible cheating.
What measures have you taken in your digital health projects to combat cheating?
BIO: Paolo Satta is project manager at Day One, an Italian company that supports researchers and start-up founders across Europe in bringing their innovative technologies to market, by providing public and private fund-raising, business planning, and business development services. He has been PM of the EU-funded project Credits4Health, and he currently manages other projects in various industry sectors – health, lighting, energy, space, etc. The most important project he is working on is ReInventure, a European network of research centres, enterprises and investors. Its aim is to support researchers in creating products that enterprises are willing to integrate into their production, so as to facilitate both technology transfer and the formation of market-validated spin-offs able to attract funding from investors.