The Situation
Greenpeace USA was relying on expensive face-to-face vendors with misaligned incentives and declining donor quality. The organization had previously operated an in-house canvass program and closed it. In 2004, Greenpeace began rebuilding the canvass as an in-house monthly giving vehicle.
When Paul arrived in 2006, there were three offices—New York City, Los Angeles, and San Francisco—all run by international canvass experts. Paul was the first person outside the international team to be given responsibility for running the US canvass.
The Constraints
- The US canvass had only ever been run by the international team—no internal succession path existed.
- Every operational decision affected retention, which affected multi-year revenue modeling. Getting it wrong was expensive and slow to show up in the numbers.
- The organization needed to test changes systematically—not implement gut-feel improvements across all locations at once.
- Staff economics had to work: high turnover destroyed program ROI. Wages, bonus structures, and advancement paths all mattered.
What Changed
- Boston expansion, built from nothing. Paul started alone. He opened the Boston office independently, without the international support structure that had been in place for NYC, LA, and SF. He built it to 20 FTE / 25 staff by winter, then 35 FTE / 45 staff the following summer. Within six months, Boston ranked #1 nationally for weekly performance through the winter.
- Phone verification at point of signup. Paul had previously worked in a sales environment where donors completed the donation over the phone, reading their card number to a live agent. He brought that discipline to canvass. There was significant resistance: the extra step made conversion harder for canvassers. Despite the friction, the retention impact was massive: pre-debit attrition dropped to near zero. This single process change drove approximately $3M more in recurring revenue per 5-year revenue cycle.
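The mechanics of that revenue impact can be sketched in a few lines. Every figure below is an illustrative assumption, not Greenpeace data: the point is only that cutting pre-debit attrition compounds across a 5-year cycle.

```python
# Illustrative sketch only: gift size, signup volume, and churn rates
# are assumptions, not figures from the case study.
MONTHLY_GIFT = 20.0        # assumed average monthly gift
SIGNUPS_PER_YEAR = 10_000  # assumed annual face-to-face signups
MONTHS = 60                # 5-year revenue cycle

def five_year_revenue(pre_debit_attrition: float, monthly_churn: float) -> float:
    """Revenue from one year's signups over a 5-year cycle.

    pre_debit_attrition: share of signups that never make a first payment.
    monthly_churn: ongoing monthly cancellation rate after the first debit.
    """
    active = SIGNUPS_PER_YEAR * (1 - pre_debit_attrition)
    total = 0.0
    for _ in range(MONTHS):
        total += active * MONTHLY_GIFT
        active *= 1 - monthly_churn
    return total

# Phone verification modeled as pre-debit attrition falling to near zero.
before = five_year_revenue(pre_debit_attrition=0.15, monthly_churn=0.03)
after = five_year_revenue(pre_debit_attrition=0.01, monthly_churn=0.03)
print(f"Gain from verification: ${after - before:,.0f}")
```

Under these made-up inputs the gain is in the high six figures per signup cohort; with real volumes and gift sizes, the shape of the math is what produces the multi-million-dollar swing described above.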
- Tablets tested. Data said no. Organization rolled them out anyway. The test showed no retention improvement, yet the organization rolled tablets out nationally. Paul disagreed then and disagrees now: tablets serve no functional purpose in outdoor canvass, and the screens are often impossible to see outside. This is what happens when decisions override data.
- Gift ladder redesign. Donors defaulted to the lowest visible number on the gift ladder. Tested raising the effective minimum offer. Rolled out nationally after the test showed improvement. Result: better bonus attainment for staff and higher average gift, which improved program ROI.
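The gift ladder mechanism is simple enough to show in a toy calculation. The ladder amounts and the donor mix below are hypothetical, chosen only to illustrate how raising the lowest visible rung lifts the average gift when most donors default to the minimum.

```python
# Hypothetical ladders and donor mix: none of these numbers are from the case.
old_ladder_mix = {10.0: 0.60, 15.0: 0.25, 25.0: 0.15}  # most donors pick the bottom rung
new_ladder_mix = {15.0: 0.60, 20.0: 0.25, 30.0: 0.15}  # effective minimum raised

def average_gift(mix: dict[float, float]) -> float:
    """Weighted average gift given {amount: share of donors choosing it}."""
    return sum(amount * share for amount, share in mix.items())

print(average_gift(old_ladder_mix))  # 13.5
print(average_gift(new_ladder_mix))  # 18.5
```

Even with identical donor behavior (defaulting to the lowest option), the average gift rises, which is why the change improved both staff bonus attainment and program ROI.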
- National Canvass Director: largest wage increases at every field level. As National Canvass Director, Paul oversaw the largest wage increases at every level of field staff. This was not charity; it was economics. Staff retention drove program ROI, so wages, bonus structures, and advancement paths were treated as part of the operating model.
Outcomes
If your organization is facing this
In-house canvass programs fail when they are run like vendor programs. The entire model depends on retention, and retention depends on training, process discipline, staff economics, and getting the right donors in the door from the start. Every piece of the system interacts with every other piece.
If you are running a canvass program—in-house or vendor—and you are not modeling retention at 6, 12, and 24 months after acquisition, you do not know what your program actually costs. See Direct Response Advisory Retainer for how we work on this type of engagement. Or visit The Canvass for our dedicated face-to-face practice.
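What that modeling looks like can be sketched minimally. All inputs below (acquisition cost, gift size, retention rates, and the linear-decay simplification) are assumptions for illustration, not benchmarks:

```python
# Sketch of retention-adjusted acquisition cost. Every number here is an
# assumption for illustration only, not a benchmark.
COST_PER_SIGNUP = 150.0   # assumed fully loaded acquisition cost per donor
MONTHLY_GIFT = 20.0       # assumed average monthly gift
retention = {6: 0.70, 12: 0.55, 24: 0.40}  # assumed share still giving

for months, rate in retention.items():
    # True cost per donor you actually keep at this checkpoint.
    cost_per_retained = COST_PER_SIGNUP / rate
    # Rough revenue to date: assume attrition is linear from 100% to the
    # checkpoint rate, so the average active share is the midpoint.
    avg_active = (1 + rate) / 2
    revenue_to_date = avg_active * MONTHLY_GIFT * months
    print(f"{months:>2} mo: ${cost_per_retained:,.0f} per retained donor, "
          f"${revenue_to_date:,.0f} earned vs ${COST_PER_SIGNUP:,.0f} spent")
```

Under these assumed inputs, the cohort has not yet recovered its acquisition cost even at 24 months, which is exactly why judging a canvass program on signup counts alone misstates what it costs.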
Book a call if you want to talk through your canvass program economics.