Deep Dive Breakout 1

Thursday, August 3, 2017

11:30am – 12:30pm

Hannah Wesolowski (National Alliance on Mental Illness), Kathleen Gamble (American Trucking Associations), Jeanne Blankenship (The Academy of Nutrition and Dietetics), and Robb Friedlander (Feeding America)

Let’s face it, our industry is pretty unique – in more ways than one. So isn’t it time we created our own benchmarks? Learn how other practitioners measure success, perhaps share some of your own metrics, and help define a true industry standard.

Notes

Introduction

  • A lot of people say they need metrics to demonstrate success to bosses and the board.
  • Metrics should not be the end of the journey; they should inform what you are doing and also motivate your advocates.

Defining Key Metrics

  • What are some of the most effective metrics that you all are using?
    • For an organization with lots of affiliates, we start by asking ourselves: what are we trying to get affiliates to do, and if they did that, how would we measure their success? For us, success was engagement, not outcomes. It was not about how many Hill visits we had; it was about the quality of the engagement.
    • Scorecards for dispersed networks can help. We created scorecards for our affiliates to self-complete that were relevant and measurable, applied to that particular stakeholder, could be communicated positively, demonstrated opportunity, and drove further grassroots and advocacy initiatives.
    • One of the most important things was to communicate positively: scoring was an encouraging step, not a comparative one. We would have one-on-one conversations about the results and talk about where to go from here. For us, these measurements were much more about communicating the opportunity to our members.
  • Why was a benchmark tool useful?
    • Creating a tool was important because the volume of groups/people we were trying to reach was so large (200+). We are also still working to convince our members that advocacy is beneficial, and we needed a way to map out our capacity at the state and federal level.
    • We use an “Advocacy Index,” which is less about a science of metrics and more about getting down to basics. It helped our members see their capacity to carry out effective and larger programs over time. Each index scorecard was scored on a scale of 0 to 3. We gave members a survey and asked them to self-assess their performance and contributing factors.
    • We broke advocacy down into five buckets: 1) engaging with state and federal officials, 2) recruiting and mobilizing grassroots, 3) recruiting and mobilizing grasstops, 4) leveraging local media, and 5) building partnerships. Using those categories, we asked affiliates to send us a simple 0-to-3 score for each and to talk about their capacity to do more.
    • We got to see our advocacy program grow; as belief in the utility of advocacy rose, along with funding and skills, the index grew over time. Sometimes it’s just the basics: an overall understanding of where your members are is helpful in itself. It allows us to pinpoint gaps and then add incentives to increase scores.
  • How do you make scores NOT a negative thing? How have you worked to make scores more positive?
    • We didn’t rank states; there was no 1-to-50 list of best states. We would have personalized conversations with each state-level team, sharing their score and the national average, and keeping it personal. The score was never meant to be punitive, and we made sure to acknowledge when we knew other factors were getting in the way of a group’s growth.
    • It helped to have a lot of carrots and no sticks. Part of this was convincing our members that advocacy was cool, and then providing incentives: the higher your advocacy score, the further along you get in the grant-funding process, and a higher score could also earn you training. Incentivizing the right messengers makes a difference; the best messengers are the ones who have seen success from the program.
    • The belief that advocacy is appropriate for us is still very new.
  • Any tips for getting members to deliver the information, or for incentivizing them to fill out surveys short of making it a requirement of membership?
    • Incentivizing is really helpful. It also helps to have some data points come from your software and contribute to the score – for example, PAC contributions and action alert participation are all noted in our database. We also have one-on-one conversations that are designed to help members, so they are encouraged. It also helps that people are motivated NOT to get a 0 score.
    • It helps to show the ROI of advocacy. Members will be more dedicated to you if you can show them how this will directly benefit them. It’s really all about selling them on the culture of advocacy. There is also some concern when people see they weren’t listed, so there’s a bit of peer pressure as well.
  • What about for an organization that is just starting to establish deep advocacy metrics?
    • We have constant fly-ins rather than one big fly-in. Our focus is on our champions, so our metrics are really based on whether we are meeting with our champions.
  • What are some other metrics that are working with leadership?
    • Fly-in numbers, open rates, and the number of advocates in our base.
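
The five-bucket, 0-to-3 Advocacy Index described above can be tallied very simply. As a rough illustration (not something presented in the session), here is a sketch; rolling the five bucket scores into one average is an assumption, since the speakers did not say how they combined them:

```python
# Illustrative sketch of the five-bucket Advocacy Index from the notes.
# The 0-3 scale and bucket names come from the session; averaging the
# buckets into a single score is an assumption for illustration only.
BUCKETS = [
    "engaging state and federal officials",
    "recruiting and mobilizing grassroots",
    "recruiting and mobilizing grasstops",
    "leveraging local media",
    "building partnerships",
]

def index_score(self_assessment: dict) -> float:
    """Average an affiliate's 0-3 self-assessed scores across the buckets."""
    scores = [self_assessment[b] for b in BUCKETS]
    if any(not 0 <= s <= 3 for s in scores):
        raise ValueError("each bucket must be scored 0-3")
    return sum(scores) / len(scores)

# Example affiliate self-assessment.
example = dict(zip(BUCKETS, [2, 3, 1, 2, 2]))
print(index_score(example))  # 2.0
```

Kept deliberately basic, in the spirit of the “less science, more basics” point: a single number per affiliate that can be compared against the national average in a one-on-one conversation.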

Establishing Real World ROI

  • How do we make what we are doing tangible? How do we show the investment is worth it?
    • Showing growth towards big goals over time is good – but it really helps to be able to show small wins along the way.
    • Our advocates had a very big win by getting to meet with the President. We were able to generate an organic media moment with buttons, and a photo-op with the President in a truck.
    • We do a lot of work in the regulatory space, which has made it really easy to determine success. We created a PPP (Public Policy Panel) and have a specific metric for its success. It’s a leadership opportunity, and it incentivizes members to do work at the local level. Having these built-in leaders helps demonstrate the impact they can have.
    • Advocacy days can be a great demonstration, as can building and showcasing partnerships.
    • We use our success metrics very carefully; we don’t want to overdo it with notes about how many emails we’ve sent since April or how many advocates have taken action. But those big wins can be good incentivizers if used in the right way.
    • We were able to demonstrate how the investment in a single advocate led to a large budget item. We were able to measure how much time and effort went into training them (it was roughly $3000) and because they had a local issue, they were able to contact their representatives and create a truly grassroots action.

Reporting Metrics to Governing Bodies

  • What metrics do your boards actually care about?
    • We have an ongoing debate about quantity vs. quality. For us, getting 100k+ emails and 10k+ advocates can be a reality, but we do not want our board to think that is all that matters. We want to ask what the strategic priorities are and make sure we have the board’s buy-in.
    • For us, it’s all about demonstrating and articulating the return on investment, and we take investment to mean both dollars and other efforts. So we try to set out very clearly the kind of returns we expect from every investment.
    • We demonstrate success by really playing to our board’s interests and looking carefully at their strategic priorities as a group.
    • Mapping communications efforts and advocacy efforts together has been helpful. Making the numbers personal to the board makes a big difference.

Industry Benchmarks

  • In order to get more resources, we have to frame our own story. What benchmarks can we use to demonstrate our success?
  • How can you better track state efforts?
    • We conduct surveys and also use our CRM to provide us with good metrics. We make sure to weigh year-over-year growth more heavily than other factors.
    • We always just try to keep it simple. Remember that measurement may not always be your job, so it needs to be something someone else can pick up and interpret, too.
  • To what degree do you use very specific metrics?
    • We primarily use really concrete numbers in moments where we are talking about significant investments or large dollar amounts.
    • It’s imperative to give your constituents, or whoever you are representing, ownership over the numbers. That also means making sure they understand the impact they have so that they truly own their success.
    • Public Affairs Council stats are important to look at: 50% of programs have fewer than 50 key contacts, while 24% have 535 or more; 24% have more than 10,000 active advocates, while 57% have 500. The average fly-in has 260 participants, and the average corporate/executive fly-in has 84. A lot of this depends on your program.
    • Also look at the M+R Benchmarks, which cover email volume and open rates.