How Can “Big Data” Be Connected to the Exercise of Orthodontics?

Perhaps you have heard the phrase “Big Data”? My guess is that for many orthodontists the term is a bit like “The Cloud”: they have a general idea of the concept, but are not entirely sure how it applies to them, or whether it does at all. In fact, there is a strong connection between the two terms, which I will discuss later in this post. First, though, let us look at “Big Data” on its own. According to Wikipedia.org, “Big data is an umbrella term for any collection of data sets so large and complex that it becomes difficult to process using on-hand data management tools or traditional data processing applications.”

One of the requirements of the HITECH Act is that full implementation of electronic health records (EHRs) for most patients be completed by 2016. The requirements apply specifically to health care providers who participate in the Medicare and Medicaid programs, which means that few dental practitioners are covered at present. That does not mean, however, that we are unaffected. Since 2009, hospitals and physicians across the nation have invested billions of dollars, with the help of government subsidies, converting paper-based systems to electronic records. Huge amounts of valuable patient-care data are now accumulating. Much of this information was collected before the legislation as well, but in nonstandard document formats that could not be easily retrieved and aggregated for substantive analysis. The value of all this rolled-up electronic data is only beginning to be fully understood. Big data from all health care providers can be aggregated, and software to analyze the information can be used to improve the efficiency, safety, and quality of patient care. Hospitals are examining treatment practices, and physicians are making better-informed treatment decisions based on the prior care of thousands of comparable patients.

As I said before, the HITECH Act does not specifically apply to most orthodontists, so why is this important to us? Many orthodontists have already converted their practices to electronic systems, or are in the process of doing so (without the help of government money). Several of the orthodontic-specific software vendors offer cloud-based systems, and here is where “Big Data” and “The Cloud” come together. Just consider how valuable that information could be to our practices and our patients. Most of the studies published in our journals today involve treatment samples of fewer than 100 cases. The biennial practice study run by one of our journals typically draws on input from a few hundred survey respondents (out of a potential pool of over 8,000). Wouldn’t it be useful to know the most effective type of Class II corrector based on actual measurements gathered from the prior treatment of thousands of patients treated in practices all across the nation or the world? Wouldn’t it be useful to know whether your treatment times differ significantly from local or national averages? There is little question that access to “Big Data” analytics can give our profession the ability to improve treatment quality, safety, and efficiency for our patients, just as it is beginning to do for other areas of health care.

For more information, contact Sky Ortho.

This is the first of a three part series on how internal evaluators can think about building their organization’s evaluation capacity and sustainability and is based on a talk at Eval13 by the same name.


Any evaluator, internal or external, working to incorporate evaluative practices into nonprofit organizations must engage a wide variety of staff and systems in the design, implementation, and management of those practices.  The success of those efforts will be decided to a large extent by how non-evaluators are brought into the evaluation tent, and how evaluation is integrated into administrative and service delivery systems.  But how do we even begin?

Starting from Scratch

There are three main steps to coming up with any kind of strategy, including a strategy to build evaluation capacity.

1)   Understand the context

Without knowing where you are starting, it is very hard to set realistic goals.  So before you even start on your journey to build evaluation capacity, you have to know what you are working with.  Get to know the people you will be working with, the constraints and requirements, and the values and priorities of the organization.  Conduct a SWOT analysis.  Determine who your allies will be, and where your largest barriers will arise.  What will the culture of the organization support, and what is anathema to it?  Much like a body will reject any transplant that is incompatible with it, an organization will respond poorly to an intervention that doesn’t resonate with its culture.

2)   Define your destination and your path

Saying you want to ‘build evaluation capacity’ is not a good enough goal.  What does that mean?  What does that even look like? And how are you going to get there? What are interim benchmarks you can use to determine progress?

I have found three general strategies that have worked well for me: (1) make sure leadership is setting clear expectations for staff participation in evaluation activities, and holding them accountable for it, (2) start working with the high performers and people who already ‘get’ evaluation to create easy wins and visible successes, and (3) focus on the priorities of the people with influence – by convincing them of the value of evaluation, they will begin to shift the center of gravity in the organization closer to evaluation values.

3)   Prepare the foundation

What is the bare minimum in resource needs for you to accomplish your goal?  (Hopefully you were clear about resource needs before you even took the job.)  This is going to be different for every situation, but we probably all know the feeling of not having enough resources to accomplish our goals.

For me, these things recently included: technology, training for evaluation staff, time commitment from people throughout the organization, and coworkers who would support me if I got backed into a corner.  Some of these things I had to get budgetary approval for, but most of them were more about building strong and trusting relationships.  I had to be transparent about my intentions and manage everyone’s expectations about what they were expected to give, and what they could expect to get from working with me.  The first couple of months were more about creating strong relationships than about doing any ‘real’ evaluation work.

Patrick Germain

What strategies have worked for you?  What have your pitfalls been when starting a new capacity building effort?

Next post, I’ll discuss how to create momentum around evaluation capacity building efforts.

Patrick Germain is the Director of Strategy and Evaluation at Project Renewal, a large homeless services organization in New York City and is the President of the New York Consortium of Evaluators, the local AEA affiliate.

At the American Evaluation Association’s annual conference in October 2013, I led a roundtable titled “The Conference is Over, Now What? Professional Development for Novice Evaluators.” We discussed ways that novices can deepen their knowledge, build their skills, socialize with other evaluators, and get involved in leadership positions. I compiled the notes here so more people can benefit from these resources.

Here are the best resources for novice evaluators:

aea365

This is the American Evaluation Association’s daily blog, located at aea365.org. You can read about everything from item response theory to slide design. Confession: I rarely read an entire post. Instead, I’m skimming the posts just to see the title, author, author’s organization, and the main gist of the content. This is a great way to stay up-to-date on the biggest trends in the field.

You should seriously write for aea365, probably 2-3 times a year, even if you’re new to the field. Just make sure you follow the contribution guidelines.

Affiliates and other organizations

AEA is the national-level mothership and there are more than 20 local and regional affiliates. You can find a full listing of affiliates here: http://www.eval.org/p/cm/ld/fid=12. Every affiliate is different. For example, the Washington Evaluators hold monthly brown bags, quarterly happy hours, and an annual holiday party. The Eastern Evaluation Research Society holds an annual 3-day conference. Other affiliates hold virtual book clubs, maintain blogs, or simply hold member meetings via teleconference.

You should join your affiliate. Seriously. The mailing lists are little nuggets of gold and worth every penny of that $25/year membership. The Washington Evaluators, for example, send job announcements almost every day, so you’ll always know which organizations are hiring and expanding. Don’t forget to attend the affiliate events too. (Sometimes people just pay dues but skip all the events, and then they don’t know why they’re not meeting anyone? This confuses me.) After a year, start planning small events yourself, like a brown bag. Then, join the Board.

Here are some additional reasons to join affiliates and ideas for getting involved:

There are tons of additional evaluation groups. For example, the Environmental Evaluators Network, led by Matt Keene, holds forums for evaluators interested in environmental issues. If you’re in Washington, DC, the Aspen Institute holds quarterly breakfast panels focused on advocacy evaluation. At Innovation Network, we hold Ask an Evaluator sessions for nonprofit leaders. Tony Fujs and I also attend Data Science DC, Data Visualization DC, and Data Community DC monthly meetups. No matter your city, there are probably lots of events that fit your interests.

Blogs

First, check out evalcentral.com, run by Chris Lysy. Chris pulls in feeds from 60+ evaluation blogs so you’ll get exposed to a diverse set of perspectives. Chris even developed a daily email digest, so you can subscribe once to all 60+ blogs rather than monitoring your subscriptions to all the individual blogs. I suggest setting EvalCentral as one of your homepage tabs (along with your other must-haves like Gmail and Pandora) so it’s there every time you log into your computer. And again, I rarely read an entire blog post but I skim everything for the title, author, and main gist of what they’re talking about.

Second, check out AEA’s listing of evaluators and evaluation organizations who blog: http://www.eval.org/p/cm/ld/fid=71

I started blogging after watching Chris Lysy’s Ignite presentation at the 2011 AEA conference. Here’s Chris’ Ignite, which outlines just a few of the infinite reasons why evaluators should blog:

Coffee Break webinars

Coffee Break webinars are just 20 minutes long, so they’re a perfect way to squeeze in some quick professional development in the middle of a busy work day. The best part? They’re free for AEA members. I like to sign up for topics that I know nothing about. After 20 minutes, I’m not an expert, but at least I’ve got a basic understanding of that flavor of evaluation.

Conferences

Evaluation conferences include:

  • the African Evaluation Association’s conference
  • the American Evaluation Association’s conference
  • the Aotearoa New Zealand Evaluation Association’s conference
  • the Australasian Evaluation Society’s conference
  • the Canadian Evaluation Society’s conference
  • the Eastern Evaluation Research Society’s conference
  • the Environmental Evaluators Network forum
  • the European Evaluation Society’s conference
  • the International Program Evaluation Network’s conference
  • the Monitoring and Evaluation Network of Latin America and the Caribbean’s conference
  • the Oregon Program Evaluators Network conference

Do you know of additional evaluation conferences? Please link to them in the comments section below.

I also like to attend non-evaluation conferences to hear how non-evaluators are describing our work (they have completely different lingo and tend to value qualitative data way more than evaluators do).

eStudies

An eStudy is a 3- to 6-hour webinar run by AEA. eStudies are like mini grad school courses because they go in-depth on a particular topic (as opposed to 20-minute Coffee Break webinars, which just provide an overview of a topic). eStudies are broken into 90-minute chunks and there’s typically a homework assignment between each segment to help you practice your new skills.

For example, I participated in an eStudy about nonparametric statistics in which the instructor covered about 20 different nonparametric statistics, when to use each one, and how to perform the calculations in SPSS. We even got to keep her slides, which were full of step-by-step SPSS screenshots. Almost two years later, I still pull out my eStudy notes whenever I need to use some nonparametric statistics.

Journals

AEA offers two journals, the American Journal of Evaluation and New Directions for Evaluation. Both of these journals are included with your AEA membership. What a steal!

LinkedIn

These days, I can’t imagine an employer not doing a full internet search on new applicants. Make sure your LinkedIn profile has, at the bare minimum, a professional photo, your full work history (including dates), and your education history. You can also use LinkedIn to build your online portfolio (e.g., embedded slideshows from recent conference presentations, links to publications and projects, and your list of certifications).

Want to connect with other evaluators? Some awesome evaluation groups on LinkedIn include:

  • American Evaluation Association
  • Baltimore Area Evaluators
  • Chicagoland Evaluation Association
  • Eastern Evaluation Research Society
  • Environmental Evaluators Network
  • European Evaluation Society
  • Evaluation Jobs (a giant jobs board – great for job seekers and employers wishing to advertise for evaluation positions)
  • Evaluators Group
  • Indiana Evaluation Association
  • Ohio Program Evaluators’ Group
  • Oregon Program Evaluators Network
  • Program Evaluators Northwest
  • Michigan Association for Evaluation
  • Monitoring and Evaluation Professionals
  • RTP Evaluators
  • New Mexico Evaluators
  • San Francisco Bay Area Evaluators
  • Southeast Evaluation Association
  • Washington Evaluators

Do you know of additional evaluation groups on LinkedIn? Share your suggestion in the comments below. Thanks!

Listservs, mailing lists, and newsletters

First, check out EvalTalk: https://listserv.ua.edu/archives/evaltalk.html. This is a traditional listserv that goes directly to your email inbox. Subscribing to EvalTalk is a must (if only to watch the bloodbath as evaluators battle each other online). Make sure you adjust your settings so that you get a daily or weekly digest – otherwise you’ll drown in the sheer volume of messages.

Second, subscribe to mailing lists and newsletters specific to your client projects. Whenever I begin a new project, I search the client’s website and subscribe to everything I can (like their Twitter feed, email newsletter, and blog). As a consultant, I only see one slice of their work. Subscribing to all of their updates helps me get a fuller picture of their work, so I can make sure the evaluation fits their organization’s culture and needs.

Thought Leaders Discussion Series

AEA’s Thought Leaders Discussion Series is like a big message board to debate bigger-picture, theoretical issues in the field. Each series is led by a different person and has a different flavor.

Topical Interest Groups (TIGs)

Topical Interest Groups (TIGs) are known as affinity groups in other professional associations. You get to select five TIGs when you join AEA, and you can change your selection at any time. Each TIG is different–different sizes, leadership and committee structure, and different business meetings. I suggest attending business meetings for multiple TIGs at each conference. See which culture fits you best. After a few years, get more involved by running for a leadership position.

Twitter

Just getting started on Twitter? Here’s my list of 275+ evaluators and 80+ evaluation organizations who are using Twitter. Use #eval13 to tweet about that year’s AEA conference (not #AEA13 – the poor folks at the American Equine Association will get confused). Use #eval for all your regular evaluation-related content.


Here’s Johanna Morariu’s social network map of the #eval13 hashtag. There’s a huge online evaluation community. What are you waiting for?!

White papers and other gray literature

There are approximately 8000 evaluators in the American Evaluation Association. I estimate that maybe… 5%?… aim to publish articles in academic journals. Most of us are practitioners and consultants (not academics, theorists, or professors). White papers and other gray literature are a great way to learn about our work, our insights, and our tips. For examples, check out innonet.org/research and evaluationinnovation.org/publications.

Additional resources

What are your favorite resources? Which resources were most valuable during your first few years in the field? And, most importantly–do you have different viewpoints on any of the resources I described? Share your perspectives! I’ve presented one opinion and there are many more to add to the mix.

Note from Ann: Today’s guest post is from Johanna Morariu, Director of Innovation Network, AEA DVRTIG Chair, and dataviz aficionado.

Basic social network analysis is something EVERYONE can do. So let’s try out one social network analysis tool, NodeXL, and take a peek at the Twitter hashtag #eval13.

Using NodeXL (a free Excel plug-in) I will demonstrate step-by-step how to do a basic social network analysis (SNA). SNA is a dataviz approach for data collection, analysis, and reporting. Networks are made up of nodes (often people or organizations) and edges (the relationships or exchanges between nodes). The set of nodes and edges that make up a network form the dataset for SNA. Like other types of data, there are quantitative metrics about networks, for example, the overall size and density of the network.
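To make those metrics concrete, here is a minimal sketch in plain Python (not NodeXL itself) that computes a network’s size and density from an edge list; the account names and edges are invented for the example:

```python
# Hypothetical Twitter-style network: nodes are accounts,
# edges are interactions (treated as undirected for simplicity).
edges = [
    ("@evaluator_a", "@evaluator_b"),
    ("@evaluator_a", "@evaluator_c"),
    ("@evaluator_b", "@evaluator_c"),
    ("@evaluator_c", "@evaluator_d"),
]

# Size: the number of distinct nodes appearing in any edge.
nodes = {account for edge in edges for account in edge}
size = len(nodes)

# Density: actual edges divided by the maximum possible number of
# edges among `size` nodes, which is n*(n-1)/2 for an undirected network.
max_edges = size * (size - 1) / 2
density = len(edges) / max_edges

print(size)               # 4
print(round(density, 3))  # 0.667
```

Tools like NodeXL report these and other metrics automatically; the point of the sketch is just that a network dataset is nothing more than nodes plus edges.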

There are four basic steps to creating a social network map in NodeXL: get NodeXL, open NodeXL, import data, and visualize.

Do you want to explore the #eval13 social network data? Download it here.

Here’s where SNA gets fun—there is a lot of value in visually analyzing the network. Yes, your brain can provide incredible insight to the analysis process. In my evaluation consulting experience, the partners I have worked with have consistently benefited more from the exploratory, visual analysis than from reviewing the quantitative metrics. Sure, it is important to know things like how many people are in the network, how dense the relationships are, and other key stats. But for real-world applications, it is often more important to examine how pivotal players relate to each other relative to the overall goals they are trying to achieve.

So here’s your challenge—what do you learn from analyzing the #eval13 social network data? Share your visualizations and your findings!

Last week, more than 3000 evaluators descended on my hometown of Washington, DC for the American Evaluation Association’s annual conference. I learned this much + slept this much = rockstar conference.

#omgMQP

I had the pleasure of spending Monday and Tuesday in Michael Quinn Patton’s Developmental Evaluation workshop. Due 10% to my bad vision and 90% to being starstruck, I sought out front-row seats:

Best seats in the house for the best speaker in the field. #omgMQP #eval13 pic.twitter.com/fQhXnPWBH3

— Ann K. Emery (@AnnKEmery) October 14, 2013

Along with many other nuggets of gold, MQP shared the Mountain of Accountability, a simple visualization demonstrating a Maslow’s hierarchy for organizations. (Start with the basics like auditing, personnel review, and outputs; then progress to typical program evaluation; then progress to developmental evaluation and strategic learning.) This visual was a fan favorite; the iPads and iPhones were flying around as everyone tried to snap a picture. Anyone else think that MQP would be a great addition to the dataviz TIG?

A must see! “World debut” of @MQuinnP‘s Mountain of Accountability at #eval13 – like a Maslow’s hierarchy for orgs pic.twitter.com/WSZfk7fOmE

— Ann K. Emery (@AnnKEmery) October 15, 2013

My biggest takeaway? Developmental evaluation is probably the future of evaluation, or at least the future of my evaluation career. Also, many evaluators wouldn’t call this approach “evaluation,” which means I’m not an evaluator, but an “evaluation facilitator.” Time to order new business cards!

#thumbsupviz

On Tuesday night I had Dataviz Drinks with Stephanie Evergreen, Tania Jarosewich, Andy Kirk, Johanna Morariu, Jon Schwabish, and Robert Simmon, along with a few more poor souls who had to listen to our endless enthusiasm about charts, fonts, and other things “worth staying up late for.” We’ve each been trying to reshape the dataviz community from one of frequent criticism to one of encouragement and peer learning (e.g., the Dataviz Hall of Fame). A few beers later, the #thumbsupviz hashtag was born.

Stay tuned for our growing gallery of superb visualizations at thumbsupviz.com.

@AnnKEmery plugs #thumbsupviz pic.twitter.com/qnLcdRmIbg

— Stephanie Evergreen (@evalu8r) October 17, 2013

omg Factor Analysis…

On Wednesday I attended a pre-conference workshop about factor analysis. I learned the approach in grad school a few years ago, have only used it twice, and wanted to brush up my skills. The instructor provided a wealth of resources:

#Eval13 workshop instructors have really gone above and beyond! Entire beautiful binder on factor analysis: pic.twitter.com/yJpuoB6wqn

— Ann K. Emery (@AnnKEmery) October 16, 2013

My biggest takeaway? Ouch. My brain was hurting. Leave the factor analysis to the experts because 99% of us are doing it wrong anyway. You don’t have to tell me twice!

Performance Management & Evaluation: Two Sides of the Same Coin

On Wednesday afternoon, I gave an Ignite presentation with my former supervisor and performance management expert, Isaac Castillo. Paired Ignites are rarely attempted, and I’m glad we took a risk. I had a lot of fun giving this talk. Stay tuned for future collaborations from Isaac and me!

Check out our slides and the recording of our presentation:

Excel Elbow Grease: How to Fool Excel into Making (Pretty Much) Any Chart You Want

On Thursday morning, I shared four strategies for making better evaluation charts: 1) adjusting default settings until your chart passes the Squint Test; 2) building two charts in one; 3) creating invisible bars; and 4) really really exploiting the default chart types, like using stacked bars to create a timeline or using a scatter plot to create a dot plot.

The section about dot plots was pretty popular, so I recorded it later:

I thought the presentation went okay, but afterwards, an audience member came up to me and asked, “So if I wanted to make a different type of chart in Excel, like anything besides a typical bar chart, how would I do it? What could I make?” “That’s what I just spent the last 45 minutes showing you.” “No I mean, if I wanted to make one of these in Excel, could I do it?” “Weren’t you in the audience for the presentation I just did?” “Yes, that would be a cool presentation, you should show us how to make those charts in Excel.” Thanks for the great idea buddy, I’ll submit that idea to next year’s conference. :)

East-coast happy hour

For the second year in a row, the east-coast AEA affiliates got together for a joint happy hour on Thursday night. Good vibes and familiar faces.


The Washington Evaluators, Baltimore Area Evaluators, New York City Consortium of Evaluators, and the Eastern Evaluation Research Society

The Conference is Over, Now What? Professional Development for Novice Evaluators

On Friday afternoon I led a roundtable with tips for novice evaluators. The discussion was awesome, especially the great chats I had with people afterwards. I’m going to write a full post recapping that session. Stay tuned!

How to Climb the R Learning Curve Without Falling Off the Cliff: Advice from Novice, Intermediate, and Advanced R Users

On Saturday morning I had the pleasure of presenting with a former teammate, Tony Fujs, and my new teammate, Will Fenn. Tony dazzled the audience with strategies for automating reports and charts with just a few lines of R code, and Will shared tips to help novices avoid falling off the learning curve cliff. Check out their resources and tips in this handout.


Tony Fujs (left) and Will Fenn (right)

I thought the presentation went okay, but afterwards, an audience member commented, “It would be really cool if you got some evaluators together to show us what kinds of things are possible in R.” “Umm yep, that’s what we just did, Will and Tony showed how to automate reports and create data visualizations in R.” “Yep exactly, that would be a great panel, you could get several evaluators together and show how to automate reports and make data visualizations in R.” “Did you see the panel we just did?” “Yeah you should put a panel together like that.” Okay thanks, I’ll consider it. :)

Evaluation Blogging: Improve Your Practice, Share Your Expertise, and Strengthen Your Network

Dozens of evaluators have influenced and guided my blogging journey, and I was fortunate to co-present with three of them on Saturday: Susan Kistler, Chris Lysy, and Sheila Robinson. I first started blogging after watching Chris’ Ignite presentation at Eval11, Susan’s initial encouragement kept me going, and Sheila provides a sounding board for my new ideas.


Left to right: Susan Kistler, Chris Lysy, and Sheila B. Robinson

Can you tell we presented on Saturday morning?! Chris and I arrived early to an empty ballroom. I almost panicked, but instead we started laughing hysterically, and then a second person arrived. Close call!


By the time we started, we drew a good crowd of 30-40 bloggers and soon-to-be bloggers. Same time next year??

Evaluation Practice in the Early 21st Century

Where have we come from, and where are we headed? Evaluators have accomplished some amazing things, and the future is bright. Patrick Germain and Michelle Portlock, evaluation directors at nonprofit organizations, shared strategies for making evaluation happen when you are not in the room:


For me, the mark of a good presentation is when the evaluator shows vs. tells us something new. Kim Sabo Flores, Chad Green, Robert Shumer, David White, Javier Valdes, and Manolya Tanyu talked about incorporating youth voices into policymaking decisions. The best part: the panelists invited a youth participant to speak alongside them on the panel so that she could share her experiences firsthand.


They taught us about youth presence vs. participation, and then they showed us about youth presence vs. participation. Well done!

A dataviz panel shared a brief history of dataviz; strategies for displaying qualitative data; and ideas for using graphic recording:

One of many, many graphic recording examples shared by Jara Dean-Coffey

The Innovation Network team is pretty fond of graphic recording too, and Kat Athanasiades even recorded an entire advocacy evaluation panel. Thanks to Cindy Banyai for capturing this awesome video!

And just in case you’re not familiar with my plans for our field…

Wave goodbye to the Dusty Shelf Report!

Lookin’ good, Eval! See you next year in Denver!

Lately I’ve been feeling let down by summary statistics: the min and max, mean and median, quartiles and standard deviation… They do their job well enough. Summary statistics tell a summary. An aggregate story, bringing all the messy scores together into some sort of cohesion. We grab the averages and stick them in bar charts.

But sometimes we don’t want to summarize, we want to highlight the variety in scores and remind readers that the chart is actually made up of individual people, not just the mean or median. Long live the messy data, the dispersion, the distribution, the spread!


I could tell you a few descriptive statistics: min = 26%, max = 100%, Q1 = 64%, Q3 = 83%, median = 74%, mean = 73%, standard deviation = 15%. Or, I could show you the spread in this unit-chart-turned-histogram.

Unit charts are not your new go-to chart. They do not replace bar charts. They are not appropriate for all datasets. They’re best for those few moments when you choose to emphasize individual units of data. A unit could be 1 person, or 10 people, or 1 school, and so on. Units can be represented in circles or squares or triangles. Units can be stacked on top of each other to form a histogram, or they can be plotted along a line.
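To illustrate the contrast between the two views, here is a rough Python sketch using invented scores (not the actual dataset behind the chart above). It prints the summary-statistics view first, then a text-mode unit chart in which each square is one person, stacked into 10-point bins:

```python
import statistics

# Hypothetical scores (percentages), one per person — invented for the
# example, not the dataset behind the chart in this post.
scores = [26, 48, 55, 61, 64, 68, 71, 74, 76, 79, 83, 88, 92, 100]

# The summary-statistics view.
q1, median, q3 = statistics.quantiles(scores, n=4, method="inclusive")
print(f"min={min(scores)} max={max(scores)} "
      f"Q1={q1:.0f} median={median:.0f} Q3={q3:.0f} "
      f"mean={statistics.mean(scores):.0f} sd={statistics.stdev(scores):.0f}")

# The unit-chart view: one square per person, stacked into 10-point bins.
bins = {}
for score in scores:
    low = min(score // 10 * 10, 90)   # put 100% into the 90-100 bin
    bins[low] = bins.get(low, 0) + 1

for low in sorted(bins):
    print(f"{low:3d}-{low + 10}%  {'■' * bins[low]}")
```

The first line collapses fourteen people into seven numbers; the rows of squares keep every person visible, which is the whole appeal of a unit chart.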

The dataviz challenge: Re-create the chart in Excel, R, or some other free software program. Then, tweet a screenshot to @annkemery. Bonus: Make a unit chart for your own data. Or, do you emphasize individual differences with other chart types? Share your ideas with the community!

The prize for playing: A professional development opportunity and bragging rights. I’ll post the how-to guide in a couple weeks.

Want to learn more? I’m presenting about charting techniques at the American Evaluation Association’s annual conference on Thursday, October 17, 2013 at 11am in Washington, DC. Hope to see you there!

Hi, I’m Corey Newhouse, the Founder and Principal of Public Profit. We help public service organizations measure and manage what matters. As the leader of an 8-person evaluation firm, I think often about staff training and common organizational practices.

Anyone who has been in the field knows that there are hundreds of tips and tricks that we pick up along the way, ranging from the global (“Don’t falsify your data”) to the very local (“Meg the attendance clerk always has the file you want”).

And, to complex-ify things, one person’s “must-do-every-time-without-fail” tip is another person’s “what-the-heck-are-you-talking-about?” non-tip. So what’s an evaluation team to do?

Our team recently developed a “Top Ten Tips for Evaluation at Public Profit” in order to codify the most important of these practices for our work. To develop the list, each member of the team drafted as many tips as they wanted, and we discussed the tips as a group. We were able to whittle our list down to a set of tips that we agreed were essential to our work.


The exercise was hugely helpful for three reasons:

  • We were relieved to find that many of our tips were similar, suggesting that our team was already pretty good at sharing good ideas with one another.
  • The tip nominations process stimulated important conversations about the ways in which we work together, such as whether it was OK to ask for uninterrupted time to complete a task. (And leading to the tip, “Ask for what you need, even if it is time to focus.”)
  • We used our tips list to create a professional development calendar, in which some of the more complex tips were covered in a 30-60 minute training.

Our tips are now part of our data operations manual, and a key part of our staff on-boarding process. We’ll update the list every year or so to make sure that our best thinking is reflected.

I’ve been in love with diverging stacked bar charts since I saw Joe Mako’s submission to Cole Nussbaumer’s dataviz challenge last December. Joe made this contest-winning chart. But in Tableau! The amazing but expensive software!

Could I ever create one in Excel?!

Yes! Luckily I’d learned about the Values in Reverse Order feature from Stephanie Evergreen. With Joe’s inspiration and Stephanie’s strategy, I started making these beauties for myself in Excel.

I wanted to share the chart secrets with all of you, so last month, I challenged readers to re-create a diverging stacked bar chart like this one:

diverging_before-after

It looks like I’m not the only one who loves diverging stacked bar charts. Congratulations to the 12 contestants! In order of submission, they are:

    • David Napoli
    • Anjie Raber
    • David Bonachea
    • Sheila Robinson
    • Amanda Drescher
    • Kristin Minichello
    • @luno1972
    • Hornyik Jozsef (in d3! with code!)
    • Stephanie Evergreen
    • Angelina Lopez
    • Kevin Gilds
    • Praveen Gowda

Most contestants seized the opportunity to use their own datasets and made adjustments as needed. For example, Sheila’s dataset fit a traditional stacked bar chart better than a diverging stacked bar chart, and Anjie needed to display cut-off scores.

So how do you make these diverging stacked bar charts, anyways?! There are at least two strategies: Either a) create two separate charts, a strategy demonstrated in previous posts like this one, or b) use floating bars, a strategy demonstrated in previous posts like this one. Stephanie Evergreen blogged about strategy B a few weeks ago and her explanation is pretty awesome, so I’m going to focus on strategy A today.
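
For readers working outside Excel, strategy A boils down to splitting each survey row into two datasets, one per chart. Here’s a minimal Python sketch of that split; the four-point scale and the percentages are made up for illustration:

```python
# Strategy (a) sketch: split each row of survey percentages into data for
# two side-by-side charts -- a left chart for the disagree half (plotted
# with its value axis reversed, so bars grow toward the center) and a
# right chart for the agree half. Numbers are hypothetical.

def split_for_two_charts(values):
    """values = [strongly disagree, disagree, agree, strongly agree]"""
    disagree_side = values[:2]  # goes on the reversed-axis left chart
    agree_side = values[2:]     # goes on the right chart
    return disagree_side, agree_side

print(split_for_two_charts([10, 20, 45, 25]))  # -> ([10, 20], [45, 25])
```

In Excel, the left chart’s value axis would use the “Values in Reverse Order” setting mentioned above so its bars grow toward the center; other tools reverse the axis in their own way.
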
Want to learn more? I’ll be sharing my top 5 must-have chart strategies at the American Evaluation Association’s annual conference on Thursday, October 17.

For discussion: Nearly all of the contestants requested friendly feedback on their graphs. In most cases, contestants were trying these charts for the first time and thinking about whether or not these charts could be adapted for their datasets. What do you think?

Last week I shared strategies for improving any chart’s colors. One of the examples was a diverging stacked bar chart:

diverging_before-after

I love stacked bar charts because they’re pretty versatile, and because they’re a great fit for lots of evaluation and survey data. In my example, I looked at the percentage of survey respondents who selected strongly agree, agree, disagree, and strongly disagree on a satisfaction survey. But stacked bar charts can be used in dozens of different ways.

So when can you use a stacked bar chart?

  • Stacked bar charts are for part-to-whole relationships. Use them when you want readers to both a) see one portion of the bar and b) compare that piece to the entire bar.
  • Stacked bar charts can be used for tallies or percentages. A tally is the number of actual people, dollars, etc. For example, a nonprofit could display their funding sources in a stacked bar chart – $100K from a foundation, $200K from a government grant, and so on. The reader can see the size of each grant as well as how the grants stack up as a whole.
  • Stacked bar charts can be used for nominal, ordinal, or diverging data. An example of nominal data is the racial/ethnic categories of your survey respondents. Ordinal data has a natural order – from best to worst, most to least, something to nothing – like my example. Diverging data is a subtype of ordinal data – when the categories are polar opposites and there’s a clear middle ground or neutral zone between the two ends.
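
The stacking arithmetic behind the tally example can be sketched in a few lines of Python. The first two amounts come from the example above; the third funding source and its amount are invented for illustration:

```python
# Each segment of a stacked bar starts where the previous one ended, so
# readers see both the size of each grant and how they add up to the total.
funding = [
    ("Foundation", 100_000),
    ("Government grant", 200_000),
    ("Individual donors", 50_000),  # hypothetical third source
]

segments = []
bottom = 0
for source, amount in funding:
    segments.append((source, bottom, amount))  # (label, segment start, segment size)
    bottom += amount

print(segments)
print("Total funding:", bottom)  # -> Total funding: 350000
```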

And when can you use a diverging stacked bar chart? Diverging stacked bar charts are just for comparing several sets of ordinal data at once. They work best when you’ve got an even number of categories (like the 4 survey choices). Then, you can easily line up the midpoints along an invisible y-axis.
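
Here’s a rough Python sketch of that midpoint alignment, using a hypothetical four-category survey: shifting each bar left by its entire “disagree” side puts the boundary between the middle two categories at zero, so the midpoints of every bar line up.

```python
# For a diverging stacked bar, shift each bar so the boundary between the
# middle two categories lands at x = 0; each segment then starts at a
# cumulative position measured from that shifted left edge.
# Percentages are made up: [strongly disagree, disagree, agree, strongly agree].

def segment_starts(values):
    left = -(values[0] + values[1])  # shift by the whole disagree side
    starts = []
    for v in values:
        starts.append(left)
        left += v
    return starts

print(segment_starts([10, 20, 45, 25]))  # -> [-30, -20, 0, 45]
```

The third segment always starts at zero, which is exactly the invisible axis the midpoints line up along.
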

The dataviz challenge: Re-create the “after” version in Excel, R, or some other free software program. When you’re finished, email me or tweet a screenshot to @annkemery.

Bonus! 1) Adapt this chart for your own data. Think outside the box! 2) There are at least two different ways to create diverging stacked bar charts in Excel. Can you find more than one solution? (And these charts are so awesome that you’ll even see one solution on Stephanie Evergreen’s blog next week!) 3) Don’t forget to use custom colors!

The prize for playing: Beer or coffee, my treat, the next time you’re in DC; a professional development opportunity; and bragging rights.

I’ll post the how-to guide in 3 weeks, on September 6. Happy charting!

Are you facilitating a focus group? If so, you’ve probably got a lot on your mind – greeting participants, making sure everyone feels comfortable, covering all the key questions… It’s a lot to remember!

As I was learning to facilitate focus groups, I started taking notes on all the good advice I received from experienced facilitators. I turned that advice into a checklist for myself. I review my checklist before every focus group (to keep the advice fresh in my mind) and after every focus group (as a self-evaluation). You could also use this checklist to assess colleagues who are learning to facilitate focus groups (i.e., use the checklist as a conversation starter when you’re debriefing together after the focus group).

Here’s my checklist for focus group facilitators. Did the facilitator:

  • Introduce himself or herself?
  • Explain the purpose of the focus group?
  • Explain the rules for the discussion?
  • Address issues of confidentiality?
  • Use verbal and non-verbal expressions to make participants feel comfortable during the focus group?
  • Facilitate real interaction among group members (not just a series of individual interviews)?
  • Draw everyone into the discussion?
  • Demonstrate genuine interest in the discussion?
  • Deflect participants’ requests for his or her personal opinion on a topic?
  • Maintain a good pace during the discussion (keeping things moving without rushing the questions)?
  • Ask questions in a neutral way (not in a leading way)?
  • Link comments to previous comments or themes?
  • Address comments or behaviors that could take the discussion off course?
  • Address disruptive and disrespectful behavior?
  • Ask participants for validation if rewording or summarizing comments?
  • Courteously but firmly stop a chatty participant?
  • Demonstrate sensitivity to emotional reactions of participants?
  • Display awareness and respect for cultural issues that emerge in the discussion?
  • Re-explain or re-phrase questions as needed?
  • Sincerely thank participants for contributing?

Do you have additional tips to add to the checklist? Which tips have been most valuable to you when facilitating a focus group?