Written by: Susannah Brouwer

We are just finishing up the application process for our Addiction Treatment Starts Here: Primary Care program. We received 76 applications and accepted 40 teams into the program. Compiling, reading, scoring, and selecting a cohort from this large application process was quite a journey. Reflecting on this recent experience, I realize how complicated and carefully crafted our process is — and how much I enjoy being part of this important component of CCI’s work.

Picking the right teams for our programs is no easy feat. The process we use has evolved over the years, and we continually tweak it to improve it.

It starts with writing the application questions. Are we asking questions that will both inform us about the teams and challenge applicants to reflect on their intentions and capacity to fully participate in the program? Finding the right balance between perfunctory preparedness questions and those that will help us best determine applicants’ readiness is hard. We want the application process to be a good use of everyone’s time.

We then develop an application scoring form that aligns with the questions. We form a review committee that’s typically composed of CCI team members and select external reviewers with expertise in the particular program. We hold several prep meetings to make sure all readers are clear on what we’re looking for in the applications. This ensures we’re all starting with the same mindset.

Each application has two to four readers. Applications can take up to an hour to read, so imagine the challenge of our most recent set of 76 applications! We’ve designed an extensive two-part scoring system: each application receives a cumulative score based on how well its answers align with our program goals and expectations, plus a “gut-check” score, in which each reader rates from 1 to 4 their overall impression of the applicant and its readiness for the program.

We compile the scores into an elaborate pivot chart and sort the list using the average of the two scores for each application. We use the scores as guideposts to help lead the discussion, but that number is by no means the final judgment. No matter what scores an application receives, every application is discussed in depth by the review committee. These robust discussions help tease out our concerns and excitement about each application. Are the team members ready? Do they seem committed? Do they have the necessary prerequisites? Do they have a strong program lead? Are they prepared to be fully committed participants? Do they bring any unique qualities or perspectives to the cohort?
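For readers curious about the mechanics, the compile-average-sort step above could be sketched in a few lines of code. This is a minimal illustration, not our actual tooling (we use a pivot chart in a spreadsheet); the field names and data layout are hypothetical, and it assumes the alignment and gut-check scores are on comparable scales.

```python
from statistics import mean

def rank_applications(applications):
    """Rank applications by the average of each reader's two scores:
    the cumulative alignment score and the 1-4 gut-check score.
    Field names ("team", "readers", etc.) are hypothetical."""
    ranked = []
    for app in applications:
        # Average the two scores per reader, then average across readers.
        reader_avgs = [
            (r["alignment"] + r["gut_check"]) / 2 for r in app["readers"]
        ]
        ranked.append((app["team"], mean(reader_avgs)))
    # Highest average first -- a guidepost for discussion, not a verdict.
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked
```

The output is only a starting order for the committee conversation; as noted above, every application gets discussed regardless of where it lands.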

One of the hardest parts of deciding who participates in a program is figuring out our own goals for program participants. Are we looking for the shining stars who we feel confident will make great strides in doing the work, or are our efforts better spent supporting the groups that are struggling to make headway on their own? Are those groups that seem the newest to the content area or seem perhaps the most disorganized in their application the teams that need support the most? How do we know that a strong, well-written narrative means that a program team is ready to commit to the time it takes to be successful in a program? Alternatively, does a poorly written application actually mean that the team wouldn’t be successful? How do we look “under the hood” through a simple application to figure out what teams are most committed to participate?

And then things get even trickier: In addition to reflecting on the strength of the applications, we must consider other attributes to round out the cohort as a whole. We’ve learned from past programs that having colleagues from both similar and different settings helps everyone learn, so achieving a good mix of participants is essential. We work to build cohorts that represent teams from across different counties and regions in California. We try to incorporate teams from different clinical settings — from tiny FQHCs to ambulatory clinics in large hospital systems to Indian Health Service facilities. We want to make sure we have a good mix of urban and rural participants, and we also look for where the need is greatest. For the medications for addiction treatment (MAT) programs, we looked at where the opioid epidemic has affected communities the most and intentionally sought to incorporate teams from these geographies. We’ve taken into account health and economic disparities across different communities, as well as the history of state and federal funding (or lack thereof) in different areas.

We finish up the review meeting with a preliminary list of accepted and declined applicants, but the decision-making process is far from over. We often have a long list of follow-up questions and clarifications to complete with some soon-to-be-accepted applicants, as well as suggestions for alternative programs that may be good for declined applicants. Within a week following our review meeting, we’ve generally come to our final cohort list.

It’s a complicated, multi-faceted process that we take seriously. It’s by no means perfect, and when we discover an opportunity to improve upon it, we try to do just that.

For those of you out there who are also grantmakers, what about this process resonates with you? Do you have any tips and tricks we can try incorporating into our process to improve it? Let us know! We are continually searching for ways to make the process as efficient, effective, and objective as possible.
