As many of you know, a survey was conducted in August by AmericaSpeaks, the League of Women Voters, the National Coalition on Dialogue and Deliberation (NCDD), OMB Watch and OpentheGovernment.org, to assess the public experience of participating in the White House’s 3-phase online dialogue process feeding into the forthcoming Open Government Directive (OGD).
Yesterday, eight others from our group and I met with six White House officials to (1) discuss our findings, (2) get a sense of how the White House plans to evaluate future online consultations, and (3) explore how the open government community can help enhance the quality of future public consultations by the White House or federal agencies by playing an ongoing role in assessment. The meeting took place at 1:00 pm at the White House Conference Center in DC.
In attendance from the White House…
- Chelsea Kammerer, Office of Public Engagement
- Beth Noveck, Open Government Initiative
- Robynn Sturm, Office of Science and Technology Policy (OSTP)
- Beverly Godwin, GSA (U.S. General Services Administration) Office of Citizen Services and Communications
- Brian Behlendorf, Department of Health and Human Services
- Macon Phillips, Director of New Media for the White House (the man behind WhiteHouse.gov)
In attendance from our collaborative group…
- Me (Sandy Heierbacher) and Leanne Nurse (EPA Policy Analyst and NCDD Board member) from NCDD
- Joe Goldman and Carolyn Lukensmeyer from AmericaSpeaks
- Chery Graeve and Kelly McFarland from the League of Women Voters
- Sean Moulton and Chris George from OMB Watch
- Amy Fuller of OpenTheGovernment.org
I wanted to share some of my rough notes and impressions from the meeting with the NCDD network. No one had a laptop out, so I was just jotting down handwritten notes, mostly when the White House folks talked. So this is by no means a full account of the meeting, nor is anything a direct quote.
After quick introductions around the room, we began the meeting by talking about our findings. Generally, there was appreciation among respondents for the White House’s leadership and innovation in launching the online dialogue process. There was also considerable feedback offered to help improve the process for future use, in the hopes that initiatives such as this, done well, can advance good ideas and open government more fully to the public.
I don’t want to focus this post on the evaluation findings per se. Those who participated in the process saw first-hand both its strengths and limitations, and I will share the evaluation results in another post soon. Suffice it to say that people were thrilled to have been invited to contribute ideas for the Open Government Directive, as one of numerous streams of input, but participation declined over each of the three phases due to a variety of factors: increasingly complex technology platforms, a too-tight timeframe, and an overabundance of both relevant and not-so-relevant posts.
I believe it was Beth Noveck who said at the meeting that the White House is committed to showing incremental progress on the quality and effectiveness of its online dialogue programs for general citizens and stakeholders. They’re excited that we’re seeing a culture shift of sorts: a whole new way of thinking about the role citizens and stakeholders can play in informing government policy.
In fact, the whole concept of the administration and federal agencies involving citizens and stakeholders in online discussions about policymaking is so new that people often question whether it’s real, and whether it will actually have an influence on policy (as many of our survey respondents did).
There was widespread acknowledgment at the meeting that online dialogue and consultation is still a new endeavor, and that there is not yet a perfect platform or strategy for engaging people online. Beverly Godwin from GSA noted that many government agencies are experimenting with online engagement, but that they are all at different stages in their online participatory processes.
Beverly listed three questions agencies commonly struggle with:
- Should the process be moderated or not?
- How long should the process be? (Is a week long enough? Three days? A month? How long is too long for participants to stay engaged? How do you make this decision?)
- How much discussion/participation is adequate? How much is too much, overwhelming participants and generating more data than agencies can process or make sense of?
The Core Principles for Public Engagement our community created this spring were included in attendees’ folders, and Beverly noted that the principles are “extremely helpful.”
When asked what types of assessment agencies and the administration are conducting on their online consultations, Beverly noted that case studies are often written and posted about the consultations. Robynn Sturm from OSTP mentioned that there is a loosely-built community of practice developing among agency personnel who are working on online engagement. (Something I plan to ask her more about, since such a group could benefit from the knowledge and experience of NCDDers.)
Chelsea Kammerer from the Office of Public Engagement mentioned that they would like to develop a system to engage a wide variety of citizens – including those who have never participated before. She noted that they need to rely on technology because of the sheer number of people they would like to engage, but that the technology needs to be standardized somehow, and it needs to be better than what has been done in the past to engage people online.
Several people representing the White House talked about the need for a standard set of online tools agencies can choose from, and noted that agencies also need clear, simple guidance on deciding when to use each tool.
Macon Phillips, Director of New Media for the White House, noted that their engagement efforts at WhiteHouse.gov have primarily been online town halls or question-and-answer sessions, and online video chats with officials. Their biggest challenge is the incredible volume of input and information they receive when they pose a question or open a forum. The sheer volume of participation on WhiteHouse.gov is unlike anything the public engagement community has seen or experienced before. He mentioned one program that received over 100,000 questions from 100,000 people, with 3.4 million people voting on comments and ideas in a span of just four or five days.
Macon noted that one of the techniques they have found particularly useful is ranking (allowing participants to indicate that they “like” or “don’t like” someone’s post or comment). He also noted that one of the problems with a lot of ranking systems is that a consultation can quickly become a popularity contest, with interest groups rallying their constituents to increase the ranking on their own posts while ignoring others’ contributions.
Beth mentioned that there are important benefits from conducting experiments like the Open Government Dialogue process that are hard to perceive from the outside. She specifically noted that programs like these have a useful modeling effect that can reach people in higher decision-making positions.
Beth said the administration is interested in proliferating as many opportunities for participation and engagement as possible, though some issues are most appropriate for the mass public while others are best addressed by engaging stakeholders. She also noted that there is a continued need for information on tools and best practices, as well as feedback on online consultation programs.
She said there are four “buckets” that online engagement tends to fall into (though even with careful notes, I wasn’t clear where one bucket ended and another began). I jotted down:
- brainstorming and ideation / rating and ranking of ideas
- blogs and policy discussions
- generating lots of ideas, Q&A
- digging in deep into a specific question
- challenges and prize-backed solutions to problems (people can win prizes)
- grantmaking; new open processes for giving away grants
- drafting something together
Next, my notes have Beth saying that there is a need to identify which tools to use and best practices for using them, and to make sure tools can be easily adopted by different agencies and entities. She said that, increasingly, they have been helping people with process design, so the more streamlined those processes can be, the better.
She also mentioned that she loves the categorization features built into some online tools, and that they have found community self-moderation techniques indispensable, since they can’t be in the position of deleting off-topic posts (which have been a major problem in some online consultations).
Beth also talked about the Paperwork Reduction Act, and said that they could use help clarifying for people that citizen consultation is not the same as information-gathering for purposes of the Act.
Beverly spoke about how people keep asking them to look at various tools, but said that what matters more is which tool best meets your purpose for a specific project, and that none of the tools are perfect. She said a “wizard” would be helpful: something that helps you see which goals different tools serve and shows you clearly the pros and cons of each. There was a lot of head-nodding in the room when she said there is almost TOO much information out there about tools and process, and what they need is quick, simple guidance.
Macon talked about a shift in the venues in which government is trying to engage people: less focus on your own website, and more emphasis on finding out where people are already visiting, talking to each other, and exploring issues. At the popular site MotleyFool.com, for example, they had site visitors videotape questions for the White House, and they’re looking to do more of this type of collaboration. Budgets are tight at the White House, and they are looking to use existing venues rather than continually marketing new engagement programs.
Our group suggested that some sort of ongoing mechanism be established for evaluating public consultations, so evaluations needn’t be initiated separately for every program. Part of that suggestion was the implication that our organizations and networks could be involved in that process. The idea of anything formal being established did not seem to go over too well, though, and there wasn’t much discussion on that. The White House folks present seemed to embrace a more open, flexible approach to evaluation since much of what is happening right now is experimental.
One of the main themes coming out of the meeting seemed to be the White House’s and federal agencies’ need for a common set of online engagement tools. There is a need for greater technical collaboration on open government software, to develop a common code base that supports a shared set of tools.