Understandably, when most of us think of perfection, we think of that “fast-paced fun” game from the ’80s or (perhaps) its evil step-sister, “Superfection.”
(Actually, it came out in 1975.)
Of course, kidding aside, perfection is often something we prize: the A+, 100%, the perfect 10!
In the realm of countering violent extremism, so, too, is perfection prized: zero successful major attacks on the homeland. Rehabilitation programs for violent offenders strive for 0% recidivism, and prevention-focused P/CVE programs hope that none of their program participants go on to commit ideologically-motivated violence.
No news is good news, when it comes to atrocities, right?
Certainly. But the problem–the paradox–is that a perfect program cannot, by itself, be shown to be effective. Why? Good question.
In this post:
- Darn those Taoists: The truth about paradoxes
- Three ways to compare
- Policy and research implications
Darn those Taoists: The truth about paradoxes
Like a Taoist saying, the truth here is paradoxical: effectiveness can only be assessed in comparison to ineffectiveness.
In the case of a “perfect” P/CVE program–e.g., one whose participants harbor no inter-group ill-will, or have never gone on to engage (or reengage) in violent extremism–we don’t know whether the program is really what “did the trick.”
If there was no “imperfection” to begin with (e.g., inter-group tensions, violent extremism) where, or with whom, the program operates, there is no way to show that the program helped remedy the problem.
So, to demonstrate effectiveness, we have to make comparisons, and there are generally three ways to do this. (Warning, we’re about to geek out for just a bit.)
Three ways to compare
1. Compare one program to another. The catch: the other program can’t also be “perfect”; it must fall short on some outcome of interest. A further challenge is comparing equivalent programs. (See our blog post, “No control group, no big deal, part 1 (of 2): Propensity score matching designs,” for one of the best ways, statistically, to create and compare equivalent groups; there’s also a first, minimal sketch of the idea after this list.)
2. Compare some people in the program with others in the program. However, some of those people need to be “imperfect”; call them what you will: haters, recalcitrants, violent extremists, recidivists, etc. There’s a fancy way to make comparisons within a program, when true experimentation is neither plausible nor desired, called a “regression discontinuity design”–a terribly opaque term for an otherwise brilliant technique. (See our blog post, “No control group, no big deal, part 2 (of 2): Regression discontinuity designs,” for coverage of this technique, and the second sketch after this list for the basic idea.)
3. Compare over time. Here, too, to demonstrate that the program is what’s “doing the trick”–and not something else (e.g., historical circumstances)–the program would need to be introduced, then taken away, several times (a so-called “interrupted time series” design; see the third sketch after this list). The idea is that you could show that when the program was in effect, life was good, and when you took it away, things went to heck. Obviously, this is–potentially–a politically disastrous and arguably unethical strategy, because it expects failure during certain phases of the design.
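If you’d like to see what option 1 looks like in practice, here’s a minimal propensity score matching sketch in Python. It assumes a pandas DataFrame named df with a binary in_program column, an outcome column, and a few hypothetical covariates (the column names are placeholders, not anything from a real evaluation); a real study would also need careful covariate selection and balance checks.

```python
# Minimal propensity score matching sketch (illustrative only).
# Assumes a pandas DataFrame `df` with a binary "in_program" column,
# an "outcome" column, and the (hypothetical) covariates listed below.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

covariates = ["age", "prior_offenses", "education"]  # hypothetical covariates

def psm_effect(df: pd.DataFrame) -> float:
    # 1. Estimate each person's propensity (probability) of being in the program.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["in_program"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["in_program"] == 1]
    control = df[df["in_program"] == 0]

    # 2. Match each participant to the comparison case with the
    #    nearest propensity score (1-nearest-neighbor matching).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_control = control.iloc[idx.ravel()]

    # 3. Average outcome difference across matched pairs
    #    (the estimated effect of the program on the treated).
    return float(treated["outcome"].mean() - matched_control["outcome"].mean())
```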
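For option 2, here’s a similarly minimal sketch of a sharp regression discontinuity design, assuming program entry was determined by a risk_score at or above a hypothetical cutoff and that outcome was measured afterward. The column names and the cutoff value are stand-ins for whatever a real program would use.

```python
# Minimal (sharp) regression discontinuity sketch (illustrative only).
# Assumes a DataFrame `df` where "risk_score" determined program entry
# (everyone at or above CUTOFF got the program) and "outcome" came later.
import pandas as pd
import statsmodels.formula.api as smf

CUTOFF = 50  # hypothetical eligibility threshold

def rdd_estimate(df: pd.DataFrame):
    df = df.assign(
        centered=df["risk_score"] - CUTOFF,              # distance from the cutoff
        treated=(df["risk_score"] >= CUTOFF).astype(int),
    )
    # Fit separate slopes on each side of the cutoff; the coefficient on
    # `treated` is the estimated jump (program effect) right at the cutoff.
    model = smf.ols("outcome ~ treated + centered + treated:centered", data=df).fit()
    return model.params["treated"], model.conf_int().loc["treated"]
```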
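And for option 3, a minimal segmented-regression sketch of an interrupted time series, assuming monthly incident counts and a single program start date (again, the column names are hypothetical). In practice you’d also want to deal with autocorrelation in the series, which this sketch ignores.

```python
# Minimal interrupted time series (segmented regression) sketch (illustrative only).
# Assumes a DataFrame `ts` with one row per month: "incidents" (the outcome),
# "month" (0, 1, 2, ...), a 0/1 "program" column for months the program was
# running, and "months_since_start" (0 before the program, then 1, 2, ...).
import pandas as pd
import statsmodels.formula.api as smf

def its_estimate(ts: pd.DataFrame):
    model = smf.ols(
        "incidents ~ month + program + months_since_start", data=ts
    ).fit()
    # `program`            = immediate level change when the program begins;
    # `months_since_start` = change in trend while the program is running.
    return model.params[["program", "months_since_start"]]
```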
Policy implications
From the perspective of learning how to fix things, shortcomings and failures are godsends. Failures shine a light on what not to do. Shame on us if we’re not prepared to learn our lessons when those shortcomings and failures occur.
To be prepared to learn what works–and what doesn’t–in countering violent extremism, we need systematic, ongoing program evaluation data from P/CVE programs. That way, when sh!t happens–when, for example, a former detainee goes on to engage in terrorism, or a youth from one of our communities plans to commit an ideologically-rationalized hate crime–we’ll have data on that program that might suggest what could have been done differently.
Research implications
Tell us about your failures! Show us your shortcomings! By the logic above, we don’t need any more conferences about “what works” in P/CVE, because discussing success without reference to a relatively unsuccessful comparison group tells us next to nothing.
As mentioned, “perfection” is one of the obstacles that some prevention-focused programs might have in demonstrating their effectiveness, but it isn’t the only obstacle.
The other main obstacle will be addressed in part 2 of this “Prediction Predicaments” mini-series.
Further resource
The following article addresses many of the challenges of evaluating P/CVE initiatives. Its second appendix includes several educational resources pertinent both to P/CVE subject matter and to evaluation skills development.
Williams, M. J., & Kleinman, S. M. (2013). A utilization-focused guide for conducting terrorism risk reduction program evaluations. Behavioral Sciences of Terrorism and Political Aggression. doi: 10.1080/19434472.2013.860183. (Full text available via the hyperlink in this reference.)
Have an idea for a future feature on The Science of P/CVE blog? Just let us know by contacting us through the form, or social media buttons, below.
We look forward to hearing from you!
All images used courtesy of creative commons licensing.