Often, accountability requirements use language and concepts unfamiliar to skilled program staff. Pivot works with program staff to develop and manage accountability measures that are realistic and attainable. Multi-level analyses ensure the evaluation captures unanticipated consequences and maintains a big-picture perspective. Qualitative and quantitative methods provide valid information for your decision-making.
Our community approach recognizes that multiple organizations often work in the same area of need. Because the need is so great, these organizations rarely compete.
Social Justice Focus
Our evaluative approaches begin with considerations of social justice.
The universe of analysis is complex. We navigate both qualitative and quantitative realms, often using one to validate the other.
Pivot has developed team-based approaches to safeguard project completion and improve our product quality while maintaining client communication and trust.
Evaluators often work independently for many reasons, such as containing costs and accommodating the periodic nature of the work. That approach, however, leaves a project in jeopardy if the evaluator becomes overbooked or indisposed. Additionally, project insight is limited to a single evaluator's experience and knowledge.
The Pivot model includes a primary evaluator and one or more back-up evaluators to broaden insight and experience and to ensure project continuity in case of staff changes. While staff changes are rare at Pivot, this practice gives our clients peace of mind. Additionally, the primary and secondary evaluator model contains costs while providing quality control reviews.
What is a report?
Our clients engage us for many reasons, all of which involve answering some sort of question. Sometimes the client just needs an answer generated from a data set (qualitative or quantitative). Pivot has produced one-page answers with supporting documentation; these short reports save the client money when detailed reports are not necessary. Sometimes clients prefer a slideshow with more detail but are unwilling to spend money on a technical report. In those cases, Pivot produces a slideshow summarizing the research process and findings. Other clients require a full report following funder requirements and budget accordingly. When clients require a full report, they often consider additional communication products such as one-page infographics and summaries for various audiences. For example, governing boards have different interests than program participants, requiring separate communication products.
What is evaluative language?
Our reports pay careful attention to evaluative terms. Words like improve, reduce, and increase always imply measures at two points in time. Words that add emphasis, such as only, unexpected, and surprisingly, must have a referent in data or literature. Recommendations must be data-driven and have a chance of being implemented. To meet those criteria, recommendations are vetted with the client. Our clients have the opportunity to review draft reports prior to finalization with the following instructions: “Please look for errors or omissions of fact, and for nuances of language that don’t accurately reflect the program practices and conditions.” On occasions when our clients have requested changes in nuance, we have always agreed with them.
What We Do
Often, accountability requirements use language and concepts unfamiliar to skilled program staff. Pivot works with program staff to develop and manage accountability measures that are realistic and attainable while meeting reporting requirements. Multi-level analyses (we often distinguish the participant, program, and system levels) ensure the evaluation captures unanticipated consequences and maintains a big-picture perspective. Combining qualitative and quantitative methods provides valid information for your decision-making.
Where to Begin
We see two basic program models. For ongoing programs, cyclical program evaluation models often apply effectively (see Cyclical Program Model figure). The other model fits grant-funded, start-up programs that end when funding ends. For these programs we use linear evaluation models that focus early on process to help maximize outcomes. In both cases, Pivot can begin at any phase. In the linear model, requesting an evaluation at the end of the project (a post hoc evaluation) is more expensive because collecting data in arrears is difficult.
Our team of knowledgeable, experienced consultants is available to help organizations and companies, bringing expertise relevant to their particular questions, issues, and challenges. The program evaluation methods used by Pivot ensure that the voices and concerns of program staff and participants drive the evaluation. This is done through the use of the cyclical program model.
This image shows the green program cycle along with various points at which program staff can enter the orange evaluation process. Program staff can use evaluation results to report on the next program step or to improve the previous one. For example, starting with formative (process) evaluation, program staff can report project implementation targets. Alternatively, project staff may wish to examine whether their plans materialized and how to improve the execution of those plans.