System for Environmental and Agricultural Modelling;
Linking European Science and Society
Participatory methods, guidelines and good practice
guidance to be applied throughout the project to
enhance problem definition, co-learning, synthesis
and dissemination
J.-P. Bousset, C. Macombe, M. Taverne
Partner involved: Cemagref
Report no.: 10
Ref.: D7.3.1
December 2005
ISBN no.: 90-8585-038-X
Logos of the main partners involved in this publication
Sixth Framework Programme
SEAMLESS integrated project aims at developing an integrated framework that allows ex-ante assessment of agricultural and environmental policies and technological innovations. The framework will have multi-scale capabilities ranging from field and farm to the EU25 and the globe; it will be generic, modular and open, and will use state-of-the-art software. The
project is carried out by a consortium of 30 partners, led by Wageningen University (NL).
Email: [email protected]
Internet: www.seamless-ip.org
Authors of this report and contact details
Bousset Jean-Paul,
Partner acronym: Cemagref
Research Unit: Dynamics and Functions of Rural Areas, Cemagref
E-mail: [email protected]
Macombe Catherine,
Partner acronym: Cemagref
Research Unit: Dynamics and Functions of Rural Areas, Cemagref
E-mail: [email protected]
Taverne Marie,
Partner acronym: Cemagref
Research Unit: Dynamics and Functions of Rural Areas, Cemagref
E-mail: [email protected]
Disclaimer 1:
“This publication has been funded under the SEAMLESS integrated project, EU 6th
Framework Programme for Research, Technological Development and Demonstration,
Priority 1.1.6.3. Global Change and Ecosystems (European Commission, DG Research,
contract no. 010036-2). Its content does not represent the official position of the European
Commission and is entirely under the responsibility of the authors.”
"The information in this document is provided as is and no guarantee or warranty is given
that the information is fit for any particular purpose. The user thereof uses the information at
its sole risk and liability."
Disclaimer 2:
Within the SEAMLESS project many reports are published. Some of these reports are
intended for public use, others are confidential and intended for use within the SEAMLESS
consortium only. As a consequence references in the public reports may refer to internal
project deliverables that cannot be made public outside the consortium.
When citing this SEAMLESS report, please do so as:
Bousset, J.P., Macombe, C., Taverne, M., 2005. Participatory methods, guidelines and good practice guidance to be applied throughout the project to enhance problem definition, co-learning, synthesis and dissemination, SEAMLESS Report No.10, SEAMLESS integrated project, EU 6th Framework Programme, contract no. 010036-2, www.SEAMLESS-IP.org, 248 pp, ISBN no. 90-8585-038-X.
Table of contents
General information
Executive summary
Introduction
SECTION A. KEY PRINCIPLES FOR PROBLEM DEFINITION, CO-LEARNING, SYNTHESIS AND DISSEMINATION
1 Identifying the relevant participants and focusing on problem definition
1.1 Identifying participants
1.2 Focusing on problem definition
2 Facilitating the appropriation of the Seamless project by each participant
3 Understanding the framing process
3.1 The participants agree about coordination principles
3.2 Dealing with threat
3.3 Taking into account the weight of procedures
4 Favouring the dissemination process
4.1 Swirling dissemination model
4.2 How to proceed in practice?
SECTION B. participatory methods and dialogue tools
5 Defining participation
6 Relevant participatory methods for SEAMLESS
6.1 Advising methods aiming at mapping out diversity of views
6.2 Convergence methods aiming at decision-support
6.3 Methods for democratisation
6.4 Consulting Methods
6.5 Involving Methods
7 Dialogue tools and facilitation tips
7.1 Dialogue Tools
7.2 Facilitation Tips
SECTION C. Guidance for the construction and evaluation of participatory protocols
8 Principles of Good Practice for participatory protocols
9 Guidance for the construction of participatory protocols
9.1 Defining what you are consulting about
9.2 Deciding who to consult (stakeholder analysis)
9.3 Deciding when to consult
9.4 Defining how you will carry out your consultation (method)
9.5 Defining how you will analyse the results of your consultation
9.6 Defining how you will provide feedback
10 Guidance for the evaluation and improvement of participatory protocols
10.1 Rationale for evaluating participatory exercises
10.2 Framework for evaluating participatory exercises
10.3 Evaluating participatory exercises
SECTION D. USES OF PARTICIPATORY METHODS IN SEAMLESS
11 Recall: participation needs in Seamless
11.1 Participation status and approach in Seamless project
11.2 List of participatory methods used in Seamless project
12 How to implement participatory methods to answer these needs?
12.1 Explaining vocabulary
12.2 How to create protocols?
12.3 Main protocols which are of interest in SEAMLESS project
GENERAL CONCLUSION
References
General information
Task(s) and Activity code(s): T7.3, A7.3.1, A7.3.3
Output to (Task and Activity codes): T7.4, T7.5, T7.6
Executive summary
The objective of this deliverable is to provide SEAMLESS teams with participatory methods, guidelines and good practice guidance to be applied throughout the project to enhance problem definition, co-learning, synthesis and dissemination. Five methodological key issues that should be taken into account in the design of participatory methods for problem definition, co-learning, synthesis and dissemination have been pointed out and discussed. They are: definition of the subject matter for discussions, identification and mobilization of stakeholders, definition of the issues of the discussions, selection and adaptation of a relevant participatory method, and definition of the roles of scientists. Then a large set of pre-defined participatory methods is provided, with a detailed description of the issue of the discussion (mapping the diversity of views versus participants reaching consensus; advising policy makers versus empowerment of participants) and of the subject matter for the discussion (field and scope definition, procedure and realization, how to use, resource considerations, best practices and potential pitfalls). Finally, the deliverable provides guidance for defining what we are consulting about (the subject matter and issues of discussions in SEAMLESS), who to consult, when to consult, how to carry out the consultation, how to analyse the results of the consultation, how to provide feedback, and how to evaluate the exercise. Examples of application of this guidance for the construction of protocols for the pre- and post-modelling stages in SEAMLESS are provided.
Introduction
This document presents participatory methods, guidelines and good practice guidance to be applied throughout the SEAMLESS project to enhance problem definition, co-learning, synthesis and dissemination.
A new governance concept has recently emerged which assumes that policy programs are the product of complex interactions between government and non-government organizations, each seeking to influence the collectively binding decisions that have consequences for their interests. This is the concept of network management. It is based on the assumption of the model of “co-production of knowledge” (Callon, 1999)1: different kinds of knowledge negotiate their hybridization, which is necessary to move forward in the management of risk, complexity and uncertainty or in the implementation of knowledge-based policies. In this context, an important goal of WP 7 is participation to improve problem definition, co-learning, synthesis and dissemination.
To reach these purposes, Deliverable 7.3.1 has been organised into four sections. Section A presents key principles of problem definition, co-learning, synthesis and dissemination; for each question, we suggest practical outcomes for the activity of Seamless designers. Section B presents a library of relevant participatory methods and some dialogue tools and facilitation tips. Participatory methods are viewed here as overall contexts or settings in which information is elicited, without respect to their historical origin (participatory approaches or participatory research). Some methods which would not be of great interest for the Seamless project have been eliminated, either because they are too specific and complex, or because they cannot concern the project’s targets. Participatory methods have been clustered along the two major dimensions of the issue of any participatory process: the aspiration/motivation of the participatory process (from advising to democratisation) and the targeted output of the process (from mapping the diversity of the participants’ views towards reaching consensus among participants). Section C provides guidance for the construction and the evaluation of participatory protocols. Section D provides examples of protocols with their justification: the first part indicates where participatory methods are involved in the Seamless project; the second part suggests some concepts to help the designers’ task when choosing participatory methods to produce a protocol, and shows some protocols addressed by the project.
Who are the targets of this deliverable? We wanted to produce a document that would be of good help to three kinds of target public. First, colleagues of other WPs currently involved in the Seamless project, whom we call here “designers”. Second, we wanted to be useful for future end-users of Seamless-IF who, after the end of the Seamless project, would like to perform other interactions with stakeholders, implementing new participatory methods. Third, we care about ourselves (WP 7), in order to perform training for end-users and colleagues.
1 Callon M., The role of lay people in the production and dissemination of scientific knowledge, Science, Technology and Society, vol. 4, no. 1, 1999, p. 81-94.
SECTION A. KEY PRINCIPLES FOR PROBLEM DEFINITION, CO-LEARNING, SYNTHESIS AND DISSEMINATION
A new governance concept has recently emerged which assumes that policy programs are the product of complex interactions between government and non-government organizations, each seeking to influence the collectively binding decisions that have consequences for their interests. This is the concept of network management. It is based on the assumption of the model of “co-production of knowledge” (Callon, 1999)2: different kinds of knowledge negotiate their hybridization, which is necessary to move forward in the management of risk, complexity and uncertainty or in the implementation of knowledge-based policies. In this context, an important goal of WP 7 is participation to improve problem definition, co-learning, synthesis and dissemination. To reach these purposes, we suggest dealing first with four topics: (1) focusing on problem definition and identifying the relevant participants, whose knowledge will be involved in problem definition; (2) facilitating the appropriation of the Seamless project by each participant, which is the main sine qua non condition for co-learning; (3) understanding the framing process, which enhances problem definition, co-learning and synthesis; and (4) favouring the dissemination process. Consequently, the participatory methods presented below, and the chosen trajectories (pools of participatory methods to be applied to one case), will be drawn up in the spirit of these general guidelines.
1 Identifying the relevant participants and focusing on problem definition
To enhance problem definition and to allow hybridization of knowledge, we suggest two key points: having at one’s disposal a well-proven method, and identifying the participants whose knowledge is concerned. Successful implementation of a problem definition method depends on a good choice of the participants involved, so we discuss this issue first.
1.1 Identifying participants
The relevant method to reach this purpose is known as “stakeholder analysis”. It is “an approach and procedure for gaining an understanding of a system by means of identifying the key actors or stakeholders in the system, and assessing their respective interests in that system” (Grimble and Chan, 1995)3. The goal is to help find ways to turn situations of potential conflict into opportunities for collaboration. Stakeholder approaches and supporting methodologies in the field of management science were established by the beginning of the 1980s (Freeman, 1984)4.
2 Callon M., The role of lay people in the production and dissemination of scientific knowledge, Science, Technology and Society, vol. 4, no. 1, 1999, p. 81-94.
3 Grimble R. and Chan M.-K., Stakeholder analysis for natural resource management in developing countries: some practical guidelines for making management more participatory and effective, Natural Resources Forum, vol. 19, no. 2, 1995, 113-124.
4 Freeman R.E., Strategic Management: A Stakeholder Approach, Pitman, Boston, MA, 1984.
Conditions under which stakeholder analysis is particularly crucial are well known for natural resource management (Grimble and Chan, 1995). These conditions can also be applied to other projects, such as policy definition. Stakeholder analysis is likely to be particularly relevant where the following exist:
- important externalities (positive or negative) generated by the project, such as land use,
- unclear or open property rights to the resource in question (e.g. forests, irrigation systems),
- different levels of stakeholders with distinct interests and agendas,
- trade-offs needed at the policy level: for example, where European objectives encourage deregulation, but local people are primarily interested in the status quo.
So, stakeholder analysis seems suitable for the SEAMLESS project. The main issue is to clearly set the problem that the stakeholders will have to deal with. For the SEAMLESS project, if the problem setting differs from one region to another (because of varying stakes), we have to perform one stakeholder analysis per region. But a fraction of the stakeholders cannot be identified at the same time (nor by the same experts) as the regional stakeholders, namely the policy makers involved in policy management at the European level. For the European level, we have to set up a representative sample group of potential end-users. Practically, this means that a few experts will gather to do it, only once for the European level and only once for each region, throughout the project. The experts are chosen by scientists because they have an intuitive knowledge of the institutions (and sometimes of the persons) concerned by the issue at the relevant level. For the Seamless project, the implementation of stakeholder analysis does not go further than investigating stakeholders’ interests, characteristics and capacities, because the other steps of a classical stakeholder analysis will be carried out during other activities.
1.2 Focusing on problem definition
The question of problem definition arises very often when human beings are involved. We recall the methodological statements, before suggesting why this can be useful for the Seamless project and describing the generic method itself.
Methodological statements: The main source on this issue is the seminal paper by Mitroff and Emshoff5, based on the work of Ackoff and Pounds6. The assumption is that the usual tools of social science have been applied to well-structured issues, “while policy-making is a process of defining and treating ill-structured issues and problems” (page 1). We suggest that the issues we have to deal with in Seamless are ill-structured problems, so we can apply the less usual tools advised by Mitroff and Emshoff. According to these authors, an ill-structured problem is defined as one which possesses one or more of the following characteristics: (a) the problem is well-defined in the sense that it can be clearly stated, but those charged with dealing with it cannot agree upon an appropriate solution or strategy; (b) they cannot agree on a methodology for developing such a strategy; or (c) they cannot even agree on a clear formulation (definition) of the problem. We describe here only the first part of the method, which concerns problem definition, and we leave aside the treatment of ill-structured issues. Although invented for the field of large-scale organizations, we assume that this method fits the organizations involved in the Seamless project.
5 Mitroff I.I., Emshoff J.R., 1979, On strategic assumption-making: a dialectical approach to policy and planning, Academy of Management Review, vol. 4, no. 1, pp. 1-12.
6 Pounds W.F., 1969, The process of problem finding, Industrial Management Review, vol. 11, 1-19.
Within the Seamless project, dealing with problem definition concerns several topics and levels that we cannot list here. For instance, regarding topics, when prime users give their opinion about the scenarios they want to be simulated, they answer an implicit question which could be: “For the agricultural and environmental field, what are the policies that the EU wants?”, but perhaps this question is: “What are the policies that the EU thinks to be unavoidable?” or “What are the policies to be accepted by national stakeholders?”, with numerous implicit underlying assumptions. Regarding levels, the same fuzziness in defining what participants are doing occurs for national and for regional stakeholders. So, holding a well-proven method for defining the problem is not a luxury.
The generic method advised by Mitroff and Emshoff tries to correct the usual weaknesses of large organizations regarding problem definition. The first is the failure to consider, in a systematic and explicit way, policy alternatives that differ strongly from the current way of doing things; the second is that it is very difficult to be heard when challenging the organization’s preferred policies; the third is that internally addressed criticisms are directed towards the surface of a policy and not at its underlying assumptions. In fact, because of the complexity of reality, the holder of any particular point of view can almost always find significant empirical support for his policy “by consciously and unconsciously selecting the evidence most favourable to his case” (page 3). So, as Feyerabend7 explains, one cannot define any problem without developing maximally challenging alternatives, in order to test any theory (policy). Here are the stages of the method we suggest, following these authors.
The process begins at the point when an organization has a more or less vaguely formulated notion of a problem it faces, and has developed at least one idea about solving it. It must be emphasized here that a group has to be set up from various levels and points of view. For the Seamless implementation, this means choosing various stakeholders in order to perform a sound problem definition.
The first stage, specification, involves the following sequence of activities: starting from an already existing strategy for solving an issue, one collects the selective supporting data that legitimate the chosen strategy, and then works back to the underlying assumptions which, when coupled with the data, allow one to deduce the strategy as a consequence. By working backwards to the underlying assumptions, the proposed process requires that each strategy contain, in addition to the supporting data, a list of assumptions (i.e. given conditions, events or attributes that are, or must be, taken as true) which implicitly underlie the strategy. The assumptions list must include both plausible and implausible conditions to ensure that nothing important is left out. Defining the assumptions provides a “specification” of the problem to which the strategy is addressed.
The second stage of the method is the dialectic phase. Its purpose is to identify new strategies for consideration as potential solutions to the problem. One sets up a pool of plausible counter-assumptions opposite to those on which the initial strategy was based. Each assumption previously identified is reformulated as a counter-assumption; if implausible, it is dropped. This pool is then examined to see if it can be used as a basis both for defining and deducing an entirely new strategy. When the dialectic phase is achieved, one has reached the point where a maximum diversity of options has been obtained.
7 Feyerabend P., 1975, Against Method: Outline of an Anarchistic Theory of Knowledge, London, NLB.
In the third stage, the assumption integration phase, the process focuses on negotiating an acceptable set of assumptions, instead of trying to resolve differences in potential strategies. Mitroff and Emshoff affirm that decision makers are able, by dealing at the level of assumptions, to obtain agreements that are not obtainable at the level of strategy. There is no guarantee that a list of acceptable assumptions will always emerge from the process, but sometimes a composite set of acceptable assumptions can be created. If not, resolution is not possible by any means, short of one person imposing his or her view on all others. In any case, the method provides an occasion for realizing what assumptions lie behind the chosen strategies, and thus provides a real and critical problem definition.
The fourth stage is the creation of a composite strategy. From the pool of acceptable assumptions, one collects new data suitable for suggesting a new strategy. This foundation enables various strategy options to be assessed on a more deductive basis. The authors affirm that “when the issues of the definition of the problem have reached this point of specificity, much of the inherent conflict in the selection of alternatives has been removed” (page 4).
So, a clear problem definition contributes to the appropriation of the project by the participants. This is the main topic of the following chapter.
2 Facilitating the appropriation of the Seamless project by each participant
We need to discuss here the conditions which favour the success of participatory methods. Success occurs when the project is appropriated by stakeholders and when policy makers take the outputs of the participatory approaches into account.
Methodological statements: The conclusions of Ray Hilborn8 about failure and success factors are the basis of these suggestions. They have been tested for systems analysis applied to ecological systems and are confirmed by the suggestions of Shepherd and Bowler9 about how to improve public participation. These conclusions are completed by the work of Henry Mintzberg10 (who explains why managers do not use simulation models for decision making). The first reference requires the hypothesis that we compare systems analysis to SEAMFRAME (the analysis for reaching the creation of SEAMLESS-IF) and that we establish a parallel between the system which reacts to agricultural or environmental policy trends and an ecological system. The second reference can be suitable for SEAMLESS, on condition that we admit that the author’s conclusions about the management of organisations are relevant for public management.
To be successful (i.e. to bring outcomes and coordination of participants), the project has to attend to all of the following properties:
Power to influence the decision-making process: Systems analysis (here Seamframe) must be requested by the individuals who have the power to use its end results. More generally, public participation is improved when people have the possibility to influence the decisions which concern them (Shepherd and Bowler, 1997). This means that success needs substantive and early investments in public participation. A proactive effort (beyond the minimal public participation requirements) leads to a more effective process and outcome than a reactive, minimalist approach. Stakeholders’ investment must be seen as a means to create a project that is more suitable for them, and accepted by them.
Communication: Successful implementation of applied systems analysis in a management system requires a large amount of time devoted to communication (three-quarters of the time devoted to the project, according to Hilborn). Geurts and Joldersma11 (2001) report that inadequate communication between policy analysts and policy actors was one of the reasons for the limited impact of policy analysis on policy making.
Techniques: The qualities of the techniques used must be understandable and transparent. The systems analysis techniques (here Seamframe and Seamless-IF) used must be defensible against technical criticism (from an expert point of view). If the models are not defensible, the entire programme could suffer.
8 Hilborn R., 1979, Some failures and successes in applying systems analysis to ecological systems, Journal of Applied Systems Analysis, volume 6, p. 25-31.
9 Shepherd A. and Bowler C., Beyond the requirements: improving public participation in EIA, Journal of Environmental Planning and Management, 40 (6), 1997, 725-738.
10 Mintzberg H., Beyond implementation: an analysis of the resistance to policy analysis, INFOR, volume 18, no. 2, May 1980, p. 100-138.
11 Geurts J. and Joldersma C., Methodology for participatory policy analysis, European Journal of Operational Research, 128, 2001, 300-310.
Avoiding focusing on the tool, focusing on ends: Henry Mintzberg argues that the root of managers’ resistance to the use of policy analysis lies not in management itself, but in the formulation of the analysis. Here are the pitfalls generally met, which explain why policy makers do not use analysis:
- Policy making is not a relatively static, orderly process. In fact, the process is fundamentally dynamic and non-linear.
- Generally, analysis cannot handle soft data, even though they are critical for policy making.
- Very often, analysis drives organizations towards a narrow economic morality, whereas in policy making goals often emerge from the decision-making process rather than existing as given inputs to it. Goals themselves are created and altered during policy making.
- Analysis uses one mode of thinking, disregarding another, fundamentally different one that better suits many of the needs of policy making. For instance, managers prefer verbal media.
So, analysts had better focus on ends rather than on tools.
For the SEAMLESS project, the “power to influence decisions” condition is not self-evident, because the prime principal is the Commission’s DG Research. So, involving other actual or potential end-users (at the EU level as well as the regional level) means ensuring that they will influence further policy programs. This can be done only by consulting them about indicators or desired computer interfaces; asking them about the policy scenarios they would like to be simulated would allow better appropriation. The communication condition means maintaining close and permanent contact with end-users, as soon as they have been contacted for the first time, at each level, by regularly sending relevant news (about project development), even when stakeholders are not solicited by scientists. The technique-quality condition leads to spending a long time on the heuristic qualities of the media (computer interfaces and all presentation supports for mails, meetings, etc.) intended to explain the project to stakeholders. For instance, it means that the researchers who will present SEAMLESS-IF will have to take great care to make the whole system easy to understand for lay stakeholders. The “focusing on ends” condition is the reason why participatory methods are so highlighted.
3 Understanding the framing process
Everybody could say that co-learning occurs in every situation where actors are in contact. We are not concerned here with this kind of ordinary co-learning; rather, we believe that we can learn a lot from the characteristics of situations where co-learning is the main stream. Systems involving human participants often involve a complexity inherent to the relationships which the players maintain between themselves or to the interpretations that they make of the situation in which they find themselves (Girin, 2000)12. One of the forms of complexity identified by Girin, the “framing complexity”, calls for particular attention here.
Methodological statements : in management, the complexity of “framing” as seen by Girin is
linked to the difficulties encountered in attributing a meaning to an event (“What is
happening?”) or to an interaction between the participants (“What are we doing at the
moment?”). This is precisely the characteristic of a configuration which necessitates the
provision of a training scheme. This complexity of the frame is thus a central characteristic of
situations where the participants would like to take collective action but do not have available
a predefined frame. The existence of a management situation assumes then a prior definition,
if only minimal, of the expected result which will be the subject of external evaluation, and of
the collective action needed to achieve it. This definition, shared by the participants, may
result simply from pre-existing norms or rules, or conventions agreed between the parties, i.e.
a previously established framework. However when this definition does not already exist, it
has to be collectively created during the course of a particular phase of the process, known
precisely as the “framing phase”. In this sense, the term “framing” indicates, according to
(Raulet-Croset, 1995)13, an emergent collective cognitive form, arising from the confrontation of a group of players and of knowledge. The emergent character of the frame distinguishes it from the notion of a framework proposed by (Goffman, 1974)14: it is not a case of a pre-existing framework which can be used to interpret actions, but an emerging framework, resulting from the set of definitions made by those involved in the situation. The frame emerges from the moment when the different representations and definitions of the situation given by the participants begin to be mutually compatible. These definitions can be expressed in words but also as actions, related problems or objects. All these elements form the “salient features” of the definition. These salient features thus constitute opportunities for instigating the creation of links between the players. Thanks to them, the frame firms up with the help of various forms of support, either material, such as objects or tools, or symbolic, such as publications, charters, etc. The frame becomes anchored more deeply into the situation, which constitutes both a determining factor and a consequence: as a factor, the situation is a support to the process in operation, by giving rise to forms of stability; as a consequence, it allows the evolution of the cognitive form, through the intervention of new players and new objects.
12 Girin J., 2000, Management et complexité : comment importer en gestion un concept polysémique ?, in David Albert, Hatchuel Armand, Laufer Romain (dir.), Les nouvelles fondations des sciences de gestion, Vuibert, Paris, p. 125-139.
13 Raulet-Croset N., 1995, Du conflit à la coopération, un processus de structuration : le cas de la protection d'une nappe d'eau minérale vis-à-vis de pratiques agricoles, Université Paris-Dauphine, Sciences de Gestion, Paris, 422 p.
14 Goffman E., 1991, Les cadres de l'expérience, Les éditions de Minuit, Paris, 573 p.
The compatibility of definitions, representations and common interests should be revealed during the course of the process, and the frame should meanwhile become more and more stable. The key is not to try to reach consensus on all values and meanings but to create some common values and shared meanings through processes that promote the development of mutual recognition of the legitimacy of the interests of others (Mc Lain and Lee, 1996)15. In the policy field, the framing process can be represented by participatory policy analysis (Geurts and Joldersma, 2001), which is directed towards integrating the mental models of the different actors in a policy network. The debate enables communication between participants by creating a language, or another communication mode, which is understandable by the different actors.
Sound problem definition, co-learning and synthesis emerge from a successful framing process. Here are the conditions we can suggest for reaching this purpose: (3.1) the participants agree about coordination principles; (3.2) dealing with threat; (3.3) taking into account the weight of procedures.
3.1 The participants agree about coordination principles
All the participants in a framing process bring the common values which allow them to attempt coordination. If they do not, the framing process cannot go on. In fact, nearly every attempt at a framing process leads to excluding a few potential participants. In consensus building, for instance, a perceived need to reach consensus leads to decisions not to involve potential participants with known “difficult” views (Richardson and Connelly, 2002)16. The key issue is to be explicit about how the question of exclusion has been dealt with.
Organizations, even without “difficult” views, may decline to participate. Sometimes, negative previous experiences of partnership working can affect their willingness to get involved in new partnership opportunities. Elsewhere, the problem of consultation fatigue can be mentioned, especially if consultation is seen as coming too late and as having little if any visible impact.
But stakeholders’ scepticism regarding the potential success of a consensus building effort is not a reason to recommend against proceeding (Susskind et al., 1999)17.
15 Mc Lain R. and Lee R.G., Adaptive Management: promises and pitfalls, Environmental Management, vol. 20, no. 4, 1996, p. 437-448.
16 Richardson T. and Connelly S., Building consensus for rural development and planning in Scotland, Scottish Executive Central Research Unit, 2002, Edinburgh, 79 p.
17 Susskind L., McKearnan S. and Thomas-Larmer J., The Consensus Building Handbook, 1999, Sage Publications, 1147 p.
3.2 Dealing with threat
Nobody can ask participants to set up a framing process “out of the context”, as Rawls suggests. Very often, it is likely that at least one group of participants will feel that their interests are threatened by the project under debate. It would be all the more dangerous to ignore this fact since, in some cases, the scientists in contact with participants are seen as representatives of the menacing party. If the project is likely to be perceived as a menace, we can draw up two strategies:
- Highlight the threat and present participation as the best way to fight against it. For instance, a large part of the stakeholders involved in assessing SEAMLESS-IF at the regional level will feel threatened by potential CAP deregulation impacts. Then, discussing the simulation model has to be shown as the best way to react: participants could discover deregulation impacts they did not foresee before, they would get a means to accurately anticipate the main deregulation impacts, or they could simulate policy measures which could counterbalance deregulation.
- Weaken the menace within numerous scenarios. Another way to deal with stakeholders’ fears is to drown the sensitive issue among other ones. For instance, gaming or simulation offers participants possibilities for experimenting with policy in a safe environment. One scenario can be threatening, but it will not be rejected if it is proposed (by the participants or by the animator) as one alternative among equally probable others and not as the unavoidable one.
3.3 Taking into account the weight of procedures
There is no doubt about the weight of procedures. The kind of knowledge exchanged and hybridized by co-learning is affected by procedures:
Tacit knowledge versus explicit knowledge: Face-to-face participative situations (interviews, meetings) allow co-learning about tacit knowledge, parts of which become explicit. On the contrary, situations where participants cannot actually gather are useful only for co-learning about explicit knowledge. For instance, if participants belong to the same culture and socio-professional milieu, they are able to deal efficiently with trade-offs through an internet forum. When co-learning from tacit knowledge (what are the customs in such a case, who has to do something, what are the hidden stakes, etc.) is expected, every occasion to bring the participants together should be favoured.
Conventional knowledge versus less accessible knowledge: Standard meetings (everybody sitting round a table for the whole time, every speaker listened to by all participants) favour the expression of conventional or official opinions and knowledge. In countries where hierarchical position within the organization is decisive, mixing people from the same organization but with different levels will suppress any possibility of sharing knowledge from people at low hierarchical levels. At the opposite end, other arrangements favour the expression of less accessible knowledge: standing up during a meeting to discuss around maps, charts or graphs, or discussions with very few people (as in the World Café method), favour groups whose voices are rarely heard in standard meetings. Available knowledge thus depends on the place where meetings are held. Overly solemn function rooms or overly unusual places (theatre, café) prevent actual participation from happening, whereas some places (schools) favour participation (Blondiaux and Lévêque, 1999)18.
Co-learning by intentionally setting participants in learning situations: We know only a little about the contribution of participatory methods to the acquisition of new knowledge. But it seems that methods built around learning situations (simulation/gaming methods) favour co-learning, whereas others encourage information exchange and structuring (electronic meeting systems) or reinforce the influence of informed stakeholders (consensus conferences) (Geurts and Joldersma, 2001). Using system models can facilitate iterative learning about the components of systems and the relationships that are important at scales ranging from local sites to regions. Generally, the most useful function of the models was their ability to allow users an opportunity to explore different “what if” scenarios. The exercise can be highly instructive. Moreover, the development of competing models is a significant step towards collaborative learning, because the resulting debates over the merits of the different models will help to identify how the underlying assumptions may affect possible outcomes (Mc Lain and Lee, 1996).
18 Blondiaux L., Lévêque S., 1999, La politique locale à l'épreuve de la démocratie. Les formes paradoxales de la démocratie participative dans le XXème arrondissement de Paris, in Neveu Catherine (dir.), Espace public et engagement politique, L'Harmattan, Paris, p. 17-82.
4 Favouring the dissemination process
We want the organisations of potential Seamless-IF users to appropriate this model. To favour dissemination, we suggest referring to the swirling dissemination model (4.1), and then paying attention to the practices we can perform (4.2).
4.1 Swirling dissemination model
The usual literature about innovation success suggests some key factors. First of all, a promoter must exist: he or she is a mediator, a translator between two worlds with different logics and perspectives. For our purpose, this means that we need to be helped by someone who knows both the Seamless project and the internal logic of the target organisation. The literature also suggests going fast and, at the same time, introducing the innovation at the right time (e.g. when a competing project is weakened, on the occasion of a change, etc.).
But we can learn more about the way to disseminate the use of Seamless-IF from the work of the “Sociologie de l’Innovation” Centre on the success of innovations. Two famous papers (Akrich, Callon, Latour, 1988a; 1988b)19 suggest that the only relevant point of view is that of the users. “The assessment of the defects and advantages of an innovation is wholly in the users’ hands: it depends on their expectations, their interests, the problems they are addressing” (page 15). Thus, an innovation is perpetually seeking allies. To be successful, it must become integrated into a network of actors who take it up. These authors highlight that the features of an innovation come from technical decisions, and that these decisions define the social groups involved, setting up some as allies and others as opponents or sceptics. In other terms, an innovative object carries in itself the forces which will support it and those which will fight against it. They suggest drawing a socio-technical diagram, that is, the combination of two kinds of analysis that are ordinarily kept separate. First, the technological analysis provides a view of the innovative object per se and its own technical properties. Second, the sociological analysis assesses the social environment where this object will move and where it will have an impact. Such a socio-technical diagram of Seamless-IF could be drawn up before dissemination.
Above all, “innovation is the art of interesting a growing number of allies, who make you stronger and stronger” (page 17). The adoption movement is an adaptation movement: adaptation is generally the result of a collective working-out, which testifies to a growing interest. The innovation environment is created at the same time as the innovation it has to judge!
The diffusion model actually suggested is the famous swirling model (modèle tourbillonnaire). It takes into account the actual process of many trade-offs between actors, from which the final innovation will emerge. According to this design, the innovation evolves each time the promoter group tries to interest somebody. Thus, the project’s success lies in the choice of good representatives or spokespersons. Representatives are the people who will interact and negotiate with other actors, until the latest version of the innovation is appropriated. So, we need them to be truly legitimate.
19 Akrich M., Callon M. et Latour B., A quoi tient le succès des innovations. Premier épisode : l’art de l’intéressement, Annales des Mines, juin 1988, pp. 4-17 ; Akrich M., Callon M. et Latour B., A quoi tient le succès des innovations. Deuxième épisode : l’art de choisir les bons porte-parole, Annales des Mines, septembre 1988, pp. 14-29.
4.2 How to proceed in practice?
We suggest trying to understand how each target organisation learns, in order to locate the right allies, and then keeping in mind some lay ideas about implementing new learning in practice.
How does this organisation learn? The investigations of actual and potential user organisations will take this question into account: what kind of organisational learning system is at stake? Of course, “organisation” does not mean the whole formal institution, but rather the relevant service, directorate or workshop where Seamless-IF implementation could make sense.
Shrivastava (1981)20 suggests a typology of six kinds of organizational learning systems. Identifying which type an organization belongs to will help us choose a way to enhance dissemination within that organization.
a. One-man institution: Only one person is well informed about the whole activity. By tacit acceptance, he or she is considered by the other members of the organization as the only source of relevant information. For this type of organizational learning system, dissemination must be built on winning over this person.
b. Mythological learning systems: Organizational learning occurs through the exchange of stories about actors and activities. These stories, which are perpetuated over time, become organizational myths. Myths determine who the “heroes” and the “baddies” are (such and such a service…) and, above all, who is allowed to access knowledge. These are the targets for Seamless dissemination.
c. Information-seeking culture: Within some organizations, the culture and the nature of the employees favour exploring and challenging. The main communication mode is verbal and people obtain information through informal channels. Seamless dissemination will be favoured in that kind of organization only if the new tool is carried by a favourable rumour.
d. Participative learning systems: The acquisition, treatment and transfer of information are set up during joint working groups or committees. Members’ knowledge and expertise are shared within legitimized forums. This kind of organization would have to create such a process before appropriating Seamless-IF.
e. Formal management systems: Some organizations are structured by systematic processes (strategic planning, environmental watch, information management, budget checking and so on) which serve as guidelines for standardised and other non-standardised activities. The use of a new tool must be introduced as a new process.
f. Bureaucratic learning systems: The formal learning system within these organizations (most of them are public administrations) includes a sophisticated system of processes and regulations. Rules give a precise definition of the addressees and aims of information. The concept of knowledge is restricted to information which can be objectively produced and passed on. Introducing a new model requires respecting step-by-step procedures.
20 Shrivastava P., Strategic Decision-Making Process: The Influence of Organizational Learning and Experience, Doctoral Dissertation, University of Pittsburgh, Pittsburgh, 1981, quoted by Tebourbi N., L’apprentissage organisationnel, research note, Research Direction, Télé-université, Québec University, September 2000.
For introducing a new learning, we have to take the precaution of understanding how the problems which could be solved with the help of the new learning (Seamless-IF) are solved today, because, in order to avoid non-response, all organisations have set up “organizational defensive routines” (Argyris, 1995)21, based on common causal maps (Weick, 1979)22 and on hypotheses shared by their members (Mitroff and Emshoff, 1979)23.
Lay ideas about dissemination within an organisation: Once the type of organizational learning system is known, some lay ideas can be suggested to favour the dissemination of new learning (e.g. about a new tool) inside it. As March and Simon (1974)24 uphold, three main means can be used: through the organization chart, through spatial structures or through role systems.
- A new learning can be favoured by modifying the organization chart: creating a new part (e.g. a service in charge of modelling), specifying an existing part (e.g. giving an additional modelling role to a service in charge of planning), or changing the official arrangements between parts (e.g. putting the modelling service in direct link with the general direction).
- A new learning can be favoured by changing spatial structures: location, closeness of services, keeping innovative cells in the background, interior fitting-out, allocation of personal or collective offices.
- A new learning can be favoured by modifying the role system (a role suggests the right ways of solving a problem).
Consequently, the participatory methods presented below, and the chosen trajectories (pools of participatory methods to be applied to one case), will be drawn up in the spirit of these general guidelines.
21 Argyris C., Savoir pour agir. Surmonter les obstacles à l’apprentissage organisationnel, InterEditions, 1995.
22 Weick K., The Social Psychology of Organizing, Addison-Wesley, Reading, Mass., 1969.
23 Mitroff I. and Emshoff J.R., On strategic assumption-making: a dialectical approach to policy and planning, Academy of Management Review, vol. 4, no. 1, 1979, pp. 1-12.
24 March J.G. and Simon H., Les organisations, Dunod, Paris, 1974.
SECTION B. participatory methods and dialogue tools
This section specifies what is meant here by “participation” (chapter 5), then provides a detailed description of a set of participatory methods (chapter 6) and dialogue tools (chapter 7) that include relevant bricks for the construction of participatory protocols in SEAMLESS.
5 Defining participation
Derived from social and political theory, participatory processes are based on principles of
deliberative democracy and the assumption that public decision making should result, not
from the aggregation of separately measured individual preferences, but from open public
debate. Many terms are used to describe public participation in policies, programs and
decision-making processes. This involvement may relate to planning new developments,
designing new policies, or responding to government’s proposed laws or regulations. Some of
the terms that come up here are consultation, participation, involvement and engagement.
Consultation expresses the idea that an agency, group, community or individual is going out
to seek advice from someone else. It implies a purpose-driven process in which someone
takes the initiative to seek advice. It does not necessarily imply anything about what will be
done with that advice when and if it is received.
Participation is very similar to involvement, the act or process of being involved. In the social science literature, participation means the act of taking part in assessment or decision-making processes by those directly or indirectly involved in, affected by, knowledgeable of, or having relevant expertise on the issue at stake. Participation in
development projects and programmes is widely seen as both a means and an end. As a
means, participation is a process in which people and communities cooperate and collaborate
in development projects and programmes (Clayton et al 1998). In this view, participation is a
way to support the progress of a project or programme and a means to ensure the successful
outcome of activities. The term "participatory development" is commonly used to describe
this approach (Clayton, et al 1998). Participation is also viewed as a means to help ensure
sustainable development (Rudqvist and Woodford-Berger 1996, Uphoff 1992). As an end,
participation is seen as the empowerment of individuals and communities in terms of
acquiring skills, knowledge and experience, leading to greater self-reliance (IDB, Clayton et
al 1998). Participation is an instrument to break poor people's exclusion and lack of access to
and control over resources needed to sustain and improve their lives. It is intended to
empower them to take more control over their lives (Clayton et al 1998).
Engagement goes further than participation and involvement. It conveys the idea that
people’s attention is occupied and their efforts are focused on the matter at hand – the subject
means something personally to someone who is engaged and is sufficiently important to
demand their attention. Engagement also implies a commitment to deeds not only words. So
it is possible that people may be consulted, participate and even be involved, but not be
engaged.
Definition : The notion ‘participatory methods’, also referred to as interactive or deliberative
methods, is used as an umbrella term embracing a variety of methods and approaches
employed to enhance participation in assessment as means to different ends. In this paper we
use the following definition: Participatory methods are methods to structure group processes
in which participants play an active role and articulate their knowledge, values and
preferences for different goals.
The above definition implies that we focus on participation organised, and thus imposed, by analysts. Secondly, participatory methods are confined here to group methods: although interviews can be used as a means to articulate the knowledge, values and preferences of non-experts, so that interview data can serve as a way to involve stakeholders’ views in assessment processes, interviewing is not a participatory method per se. Participatory methods as discussed in the heart of this document refer to a specific type of methods for organising stakeholder involvement in assessment and decision-making processes, while interviewing is a standard social science technique that can also be used in the context of stakeholder involvement.
Participatory methods are overall contexts or settings in which information is elicited. An overview of the literature shows that existing participatory methods fall into four main categories, referring to the goal of their application (for a full review of participatory methods, see Slocum (2003) and van Asselt et al. (2001)). Participatory methods are sometimes justified on arguments inspired by the nature of democracy (see, for example, Kasemir et al. 1997). As Ravetz (1997) states, “policies for managing sustainability will be effective only if they have the moral support of a great mass of people”. It is therefore argued that assessments should comprise the opinions and attitudes of stakeholders. So the idea of participation is understood by some as a way to democratise science and to empower citizens. On the other hand, participation can also be understood as a way to improve the quality of an assessment process by enriching the knowledge base with contextual knowledge and stakeholder opinions. In such a context, participants are consulted and the output is used as advice. In the latter case, participatory processes are used to inform decision-making processes; the goal of such an application of participation can be described as ‘advising’. In such a case, participation is part of the decision-support process, while in the case of democratisation participation is a way of organising the decision-making process, as an alternative to traditional top-down modes of decision-making. In addition to ‘democratisation’ and ‘advising’, two other goals of participatory processes can be distinguished, i.e. ‘mapping out diversity’ and ‘reaching consensus’:
So, van Asselt et al. (2001) suggest structuring the objectives of methods along two axes: an Aspiration/Motivation axis and a Targeted Output axis. The poles of the Aspiration/Motivation axis are defined as ‘Democratisation’ versus ‘Advising’, while the Targeted Output axis is divided into ‘Mapping out Diversity’ versus ‘Reaching Consensus’.
Each of these four poles is defined as follows:
• Mapping out diversity – participatory methods that seek to uncover a spectrum of options
and information. They enable a group to disclose information (making tacit knowledge
explicit) or test alternative strategies in a permissive environment.
• Reaching consensus – participatory methods that seek to define or single out one option
or decision. They enable a group to reach an informed decision on an issue.
• Democratisation – participatory methods that enable participants to employ their own knowledge to create options for tackling (policy) issues that directly concern them. The output has weight in the decision-making process (it can be binding).
• Advising – participatory methods that are used to reveal stakeholders’ knowledge, values and ideas that are relevant to the process of decision-making. The output is used as input to the decision-support process.
Reaching consensus and mapping out diversity can be seen as opposite poles: mapping out diversity can be characterised as a process focusing on divergence, while, on the contrary, reaching consensus seeks convergence through compromise. Democratisation and advising can also be considered as the two ends of one axis. Both deal with a fundamental question about the context of the participatory process: what weight is attached to the output of the participatory process? In the first case, participation is meant to be part of the decision-making process, while in the second case participation is used as a tool in decision-support (policy analysis). The first axis, ranging from reaching consensus to mapping out diversity, can be characterised as the aim in terms of targeted output, while the second axis, from democratisation to advising, expresses the deeper ‘why’ in terms of aspiration and motivation.
6 Relevant participatory methods for SEAMLESS
This paragraph describes a set of participatory methods that provide relevant building blocks for the construction of consultation protocols in SEAMLESS. These methods are clustered in two
ways. The first way consists of clustering them according to the matrix of goals sketched by
the two axes discussed above (see Figure 1).
Figure 1. Categorization of participatory methods (the figure plots example methods such as World Café, PAME, Planning Cells, Delphi, Expert Panel and Charrette in the quadrants formed by the two axes).
Comments on figure 1:
Advising methods aiming at mapping out diversity (examples)
The upper right quadrant includes five methods that are used to reveal and map stakeholders’
knowledge, values and ideas that are relevant to the process of decision-making. The output
is used as input to the decision-support process. They are: Policy Exercises, Scenario
Analysis, Participatory Modelling, Focus Groups, World Café…
Convergence methods aiming at decision-support (examples)
The lower right quadrant includes six methods that aim at enabling a group to reach an
informed decision on an issue. They are: Consensus Conferences, Charrette, Conventional
Delphi, Expert Panels, Citizens’ Juries, Planning Cells…
Methods for democratisation (examples)
The lower left quadrant includes three methods that enable participants to employ their own
knowledge to create options for tackling (policy) issues that directly concern them. The
output has weight in the decision-making process (it can be binding). They are: Participatory
Planning, Participatory Assessment Monitoring and Evaluation (PAM&E)…
The upper left quadrant referring to methods that aim at democratisation through mapping out
diversity remained empty. This is because we focussed on participation imposed by
scientists/analysts. This empty quadrant may be associated with participatory processes
organised by stakeholders themselves, such as mass demonstrations that aim to map out the diverse arguments for protest.
The second way consists of clustering them according both to their goal (mapping diversity
or consensus building) and to the degree of involvement of stakeholders (consultation or
involvement).
This clustering crosses the objectives of the process (Mapping, Convergence, Democratisation) with the degree of involvement of participants (Consultation versus Involvement). Grouped by objective, the methods are:
• Mapping: POLICY EXERCISES (*), USERS PANELS, SCENARIO WORKSHOPS (*), USERS FORUMS, ENVISIONING WORKSHOP, PLANNING FOR REAL, POLICY DELPHI, COMMUNITY APPRAISAL, FOCUS GROUPS, WORLD CAFÉ, Policy Conference, Open/Public Meetings, MYSTERY SHOPPING, PROFILING, COMMUNITY VISIONING, OPEN SPACE EVENT, WEB FORUMS
• Convergence: CONSENSUS CONFERENCES (*), PARTICIPATORY MODELLING, CONVENTIONAL DELPHI, CHARRETTE, Expert Panels, FUTURE SEARCH CONFERENCE, CITIZENS’ JURIES (*), PLANNING CELLS
• Democratisation: PARTICIPATORY PLANNING (*), PRA, PAM&E (*)
(*) means that the method includes high co-learning capabilities.
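As a purely illustrative aid (not part of any of the toolkits cited above), the short Python sketch below shows one way to record these two clusterings in a simple data structure and to regroup the methods per cell. The method names, their cell assignments and the co-learning flags are examples inspired by the table above and should not be read as definitive classifications.

# Illustrative sketch: tag a few participatory methods with the two clustering
# dimensions used in this chapter (objective of the process and degree of
# involvement). Cell assignments are examples, not definitive classifications.
from collections import defaultdict

methods = [
    # (name, objective, involvement, high co-learning capability)
    ("Policy exercises",       "mapping",         "involvement",  True),
    ("Scenario workshops",     "mapping",         "involvement",  True),
    ("Focus groups",           "mapping",         "consultation", False),
    ("Conventional Delphi",    "convergence",     "consultation", False),
    ("Consensus conferences",  "convergence",     "involvement",  True),
    ("Citizens' juries",       "convergence",     "involvement",  True),
    ("Participatory planning", "democratisation", "involvement",  True),
    ("PAM&E",                  "democratisation", "involvement",  True),
]

# Group the methods by (objective, involvement) cell, as in the clustering above.
cells = defaultdict(list)
for name, objective, involvement, co_learning in methods:
    label = name + (" (*)" if co_learning else "")
    cells[(objective, involvement)].append(label)

for (objective, involvement), names in sorted(cells.items()):
    print(f"{objective:16s} / {involvement:12s}: {', '.join(names)}")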
In the next paragraphs, we will give a brief description and a detailed description of each
method.
6.1 Advising methods aiming at mapping out diversity of views
Policy exercises
Brief description
A policy exercise is a flexibly structured process designed as an interface between scientists,
policy-makers and sometimes stakeholders. Its function is to synthesise and assess knowledge
accumulated in various relevant fields of science for policy purposes in the light of complex
Page 28 of 248
/
SEAMLESS
No. 010036
Deliverable number: D7.3.1.
13 July 2005
practical management problems (Toth, 2001). In the prospective context, the exercise is a
creative process employing a variety of tools and techniques to explore alternative futures.
One of those techniques is the gaming exercise: the participants are displaced from their
usual and complex context into a hypothetical and simpler situation, so as to free them from
their normal frame of reference, thereby stimulating creative thinking and new insights.
Historical background: The policy exercise methodology has its roots in political-military
simulation games. Variations of the policy exercise have been applied by businesses in
developing competitive corporate strategy, and in other institutions in teaching and training.
They have also been used as research tools in policy sciences for studying foreign policy
crises and emergency management. In recent years, various forms of policy exercise have
been used in different research fields and thematic contexts. For example, civil servants, administrators and managers have been brought together in a gaming method to reorganise information systems. The method was also found useful in the development of future environmental policy and in exploring the global-change impact of existing environmental policies.
Objectives: According to Toth (2001), a basic goal of policy exercises is to integrate knowledge from various sources, explore alternative future developments and evaluate new policy ideas in order to obtain a better-structured view of complex problems. Policy exercises aim to identify poorly understood topics and questions and to make discoveries. The goal of policy exercises is to increase the range of solutions to a problem, but not to provide the solution itself.
Participants: Participants are generally scientists from disciplines of critical importance to the subject, stakeholders and policy-makers. Usually, a policy exercise involves a heterogeneous group of 10 to 15 individuals and takes place over one or more joint workshop sessions. In most cases,
the participants from the policy side are involved from the very beginning of the process.
Procedure: There are numerous forms of policy exercise, such as gaming-simulation
techniques. These various methods have three common steps:
Step 1 is dedicated to the preparation of a series of possible future development scenarios by
all the participants and with the necessary technical support materials (mainly reports).
Scenario writing provides a special framework in which issues from various fields affecting
the practical problem are integrated and bounded.
Step 2 concerns the scenario workshops. Specific policy options are tested during an
interactive session at the workshop. Throughout the test, models (often computer models) are used to support discussions by simulating the response of the system to decisions made by the participants. It should be noted that models are used here as a consulting device and not as a tool to guide the discussions.
Step 3 is the evaluation step. The output of a policy exercise is not necessarily new scientific
knowledge or a series of explicit policy recommendations, but rather a new, better-structured
view of the problem in the minds of the participants. The formal product of the exercise is a
briefing document summing up the most important policy insights.
Relevance: The policy exercise method is most appropriate when the issues at stake are
poorly understood, ambiguous or contested. It is particularly suited to exploring issues where
opinions have not yet been formed.
Detailed description
The following general goal of policy exercises is derived from (Geurts and Duke 1999): The
goal of policy exercises is to integrate knowledge from various sources, explore alternative
future developments and evaluate new policy ideas in order to obtain a better structured view
of complex problems. Policy exercises aim to identify poorly understood topics and questions
and to make discoveries. The goal of policy exercises is to increase problem solving but not
to provide the solution. The term ‘policy exercise’ refers to a general method providing a
context within which a heterogeneous group of 10-15 participants synthesises and assesses
knowledge from various sources and in which ideas (policy options) can be explored.
Participants (who have traditionally been policy-makers and scientists, sometimes
stakeholders) are selected on the basis that they can contribute skills, perspectives and
concerns about the general problem.
A policy exercise is designed as an interface between scientists and decision-makers (Toth
1988). In policy exercises, a complex policy issue or system is represented by a simpler one
with relevant behavioural similarity (Parson, 1996). By observing the simpler one it is
possible to learn about the complex reality. This representation resides partly in the
participants who often assume roles or play themselves and interact through negotiation, and
partly in the use of tools such as models designed to provide a framework for exploring the
particular issue. The participants are displaced from their usual context into a hypothetical
situation in order to free them from their normal frame of reference stimulating creative
thinking and new insights (Parson 1997). A policy exercise does not necessarily yield new
knowledge but rather a new, better-structured view of the problem (Geurts and Duke 1999).
The participants learn to concentrate on the main problem and not on irrelevant details. The
gaming environment stimulates learning as it allows those holding specific information to
share it within a permissive game setting (Brewer 1986).
Policy exercises differ from the other participatory approaches in that the participants do not
explicitly take part in the assessment process. For scientists a policy exercise is a way to get
information on human behaviour and human interactions in negotiation processes and policy
preferences necessary for the assessment. For the participating decision-makers a policy
exercise is a deliberate procedure in which goals and objectives are systematically clarified
and strategic alternatives are invented and evaluated in terms of the values at stake. The
exercise is for them a preparatory activity for effective participation in official decisions
(Toth and Hizsnyik 1998).
The policy exercise methodology has its roots in political-military simulation games. The
complexity, uncertainty and high stakes involved in military war-fare demanded techniques
that would account for non-quantifiable but consequential factors, such as political ones.
These exercises were developed to explore questions that were outside the capability of
analytical tools (i.e. simulation models) but that needed to be integrated into thinking and
analyses (Brewer 1986). Variations of the policy exercise have been applied by businesses in
developing competitive corporate strategy, and in other institutions in teaching and training.
They have also been used as research tools in the policy sciences for studying foreign policy
crises and emergency management. More recently, policy exercises in various forms have been used in different research fields and thematic contexts. For example, civil servants, administrators and managers have been brought together in a gaming method to reorganise information systems; top researchers and policy-makers from diverse political systems have participated in policy exercise sessions aided by models and scenarios to develop international environmental policy; and policy-makers, doctors and other stakeholders have explored fundamental changes in national health systems (Geurts and Duke 1999).
The policy exercise methodology is most appropriate when the issues or values at stake are ill-understood, ambiguous or contested (Brewer 1986). Policy exercises are particularly suited to issues
where some of the most salient features concern behavioural strategy or values (Parson
1997). This type of method is particularly useful to explore issues where opinion formation
has not taken place.
There are numerous forms of policy exercise such as gaming/simulation; combinations of
gaming/simulations and computer simulation models; combinations of computer simulation
models and structured workshops. These various methods have common elements. They
require intensive preparation including scenario construction. They often involve simulation
models as well. Models aim to imitate the behaviour of complex systems. They are used to
support discussions by simulating the response of the system to decisions made by the
participants. They offer insight in relevant trends and provide quantitative information. A
computer-model can be used as a consulting device or as a tool to convert the negotiated agreements into a new ‘state of the world’ (Van Asselt et al. 2001). It should be noted that models are used in policy exercises to support and not to guide the discussions. Overemphasis on models can become a stumbling block to creative, exploratory thinking. Rules for policy
exercises are developed as recognisable descriptions for action and negotiation (de Vries
1993). The challenge is to create a realistic situation, which can easily inspire participants to
play their roles. The policy exercise occurs within the bounds of the situation and the roles of
the participants.
There are a number of other common elements in policy exercises. Direct interaction between
policy-makers, stakeholders and researchers is required. Some methods are needed to clarify
and integrate different perspectives. This is important and requires good facilitation. The
process must be flexible with the participants orchestrating the process. The environment
must be such that participants are at ease and do not feel threatened. Summing up, these
methods try to establish a relationship between policy and science. They can be described as
a new vehicle for dialogue which can complement the traditional ‘scientific report’ in order to
improve the learning and assimilative process of the actors involved, and also of the decision
makers. Many variations of the policy exercise process are known. The general procedure is
one where the participants assume roles in a controlled game environment and engage in intense role-play gaming about the central issue. Policies are formed and the impacts of these
are traced either by supporting models or simply following the course of the simulation. The
policies are reconsidered in an evaluation performed by the whole group and the next round
of the game takes place. The emphasis is on making different renditions of a complex setting,
not on concentrating intellectual energies on just one (Geurts and Duke 1999).
A general feature of the process is that the participants are usually involved throughout the
process from the beginning through to evaluation (Geurts and Duke 1999). During this
process the activities of the group are a ‘closed shop’, that is, the outcomes are not for
external dissemination. Among the members of the exercise there must be a mutual
agreement of respect for the other members and for the collective endeavour (Brewer 1986).
The first stage is one of information and data collection about the technical and factual details
of the problem as well as the perspectives, values and opinions surrounding the issue. This
involves intensive scientific preparation to discover as much about given problems as
possible. This step involves direct interaction between participants to clarify as openly as
possible differences of opinion affecting perceptions of the problem. Next comes the phase of
tool development, for example models. It often happens that existing models, tailored to the policy exercise, are used (see, for example, Parson 1997; Parson and Fisher-Vanden 1997). In some cases the tasks are divided, e.g. experts write scenarios or develop model
tools based on relevant knowledge and the policy makers use them in the following step to
create strategies.
References
Brewer, G. 1986. Methods for synthesis: policy exercises. In: Sustainable Development of the
Biosphere, (Eds.) W.C. Clark and R.E. Munn, pp. 455-473.
Toth, F. 1988. Practising the future, Part 2: lessons from the first experiments with policy exercises. IIASA WP-88-12, 31 pp.
Stigliani, W.M., F.M. Brouwer, R.E. Munn, R.W. Shaw and M. Antonofsky. 1989. Future
environments for Europe: some implications of alternative development paths, Sci. Total
Environ., 80:1-102.
Scenario workshops
Brief description
Scenario analysis, also referred to as Scenario Planning or the Scenario Learning method, is a procedure engaging a group (mainly scientists and experts) in a process of identifying key
issues and creating and exploring scenarios, with a view to developing possibilities in the
future and to integrating them into the decision-making process. Participation consists in
involving a range of actors to discuss and criticise the scenarios.
Historical background: Scenario analysis grew out of a military planning tool after WWII. Since General Electric and Royal Dutch Shell adopted the practice in the late 1960s, the
approach has been used regularly in strategic decision support within commercial
organisations. Thus scenario analysis has been implemented in many sectors of business,
such as telecommunications and industry. Over the past two decades, government institutions
have also increasingly adopted scenario analysis in strategic policymaking processes. A
typical feature of contemporary scenario analysis is the involvement of decision-makers and
important stakeholders in the scenario development process. The involvement may be limited
to a single interview or entail a number of several-day-long scenario workshops.
Objectives: The scenario analysis method aims to build contrasting visions of possible future developments with the systematic participation of stakeholders. The scenario method is
mainly qualitative (storylines), but can be explorative or normative. The critical point is that
the scenarios themselves must be internally consistent pictures of future possibilities.
Storylines are tools for synthesis, structuring thinking, and presentational purposes.
Participants: The participants (20-25 individuals) are people with a thorough knowledge of
the issues to be addressed and affiliated with different institutions. In addition, participants
from outside the organisation, especially original “thinkers” with unusual views, are
frequently included. Scenario analysis processes within governments usually involve external
stakeholders as well. The diversity of participants’ experience is an asset for the success of
the scenario analysis.
Procedure: Many governmental and consulting organisations have developed their own
particular approaches to designing scenarios. The scenario analysis method described here is
based on the one developed by the US Environmental Protection Agency (U.S. EPA, 1995).
Its procedure can be roughly summarised by the following steps:
• The first step in the process is to identify the key driving forces of the system to be
addressed. The brainstorming technique is one possible way to elicit a set of ideas associated with the driving forces. The primary driving forces are listed and prioritised
(according to their importance, probability or uncertainty). Key driving forces are then
selected, and plausible trends within each key driving force are identified.
• The second step involves the development of narrative scenarios (storylines) from the
selected key driving forces. These are first grouped according to common theme (e.g., social,
economic, technological, and environmental). Two to five scenarios are then built from the
selected themes. Participants derive the plots of the scenarios from the key trends and
determine their outcomes. The storylines are usually written during a two- to three-day
workshop. A period of feedback and reflection is needed to write up the final scenarios and to
explore their implications. Final draft storylines are then distributed to the other actors.
Figure: Scenario analysis procedure
Relevance: The scenario analysis method is especially useful for comprehending a system that seems to
contain a large number of key factors (driving forces) that must be considered. The method
can be used whenever the degree of uncertainty about the future is high, in order to build
alternative images of the future. The main use of the method is more to understand the
complex system than to try to predict the future. Working through a small number of
scenarios helps to gain a simplified “image” of the system’s evolution. Once this future image
is established, some modelling exercise can be used to refine the picture.
Detailed description
The goal of scenario analysis is to: “…explore the range of available choices involved in
preparing for the future, test how well those choices would succeed in various possible
futures and prepare a rough timetable for future events.” (Fahey and Randall 1998). A scenario workshop is an interactive process engaging a group in identifying key
issues, creating and exploring scenarios in order to learn about the external environment
and/or integrating the insights into the decision-making of the organisation. The free-format
approach enables the exchange and synthesis of ideas and encourages creative thinking. The
use of scenario analysis has traditionally been for planning purposes. Nowadays its
application ranges from planning to team-building, vision development, consciousness-raising and communal learning.
A typical feature of contemporary scenario analysis is the involvement of decision-makers
and important stakeholders in the scenario development process. The involvement may be
limited to a single interview, but it can also involve participating in several workshops that
may run for several days at a time.
The scenario analysis methodology described here is based on the scenario analysis concept
(also known as Scenario Planning or Scenario Learning) developed by Royal Dutch/Shell,
and later by Global Business Network. Scenario analysis involves two elements: the
construction of alternative scenarios relevant to a particular organisation and the integration
of the content of these into the organisation’s decision-making. Scenarios are developed in
sets of usually three or four to study how an organisation or one of its strategic options would
fare in each future set.
Although many business, governmental and consulting organisations have developed their
own particular approaches to crafting scenarios, in general the scenario learning
methodologies incorporate each of the following elements into the scenarios:
- Driving forces are forces that shape and propel the story described in a particular plot.
- Logics provide the explanation of why specific forces or players behave as they do.
- The plot contains a story that connects the present to the end state.
- The end state is a description of what would happen at the end point of the time horizon.
When to use
The method was designed to challenge the mind-set of participants by developing plausible
alternative futures, and establishing a dialogue between members usually from within an
organisation. It facilitates the free-ranging exchange of ideas, perceptions and concerns.
Scenario analysis is an aid to understand how the world might unfold and how that
understanding can be used in strategic planning for an organisation.
The methodology is most appropriate for addressing complex issues whose futures are
shrouded in uncertainty, where decision-making is generally based on subjective, non-quantifiable factors and where the establishment of dialogue among key actors is necessary to
formulate strategic plans for the future. Scenarios help direct attention to driving forces,
possible avenues of evolution and the span of contingencies that may be confronted. Thus
they are particularly useful when many factors need to be considered and the degree of
uncertainty about the future is high.
The process of backcasting – analysing back from the preferred (or undesired) scenario to the
present day, tracing the sequence of critical events and changes – allows people and
organisations to develop a strategic plan that will inform their actions as these critical events
unfold. This, in turn, allows people to become agents of change rather than being driven by
change and to create trends rather than being the victims of trends.
Scenarios methods can provide planners with ‘compass points’ with which to orient thinking
about the innumerable possible futures. Policies can be examined in terms of their robustness
across a range of possible futures: instead of focusing on the supposedly ‘most likely future’,
a balanced range of strategies that may be required in different circumstances can be
developed.
The scenario-construction process can also be used to build a common vision among
participants. It can thus be used to generate consensus and direction. Especially when involved in workshops, participants will better understand the strategies and policy options
needed to build alternative futures. In addition, the processes of establishing images of these
futures and how to realise them can facilitate action. Participants will also come to better
understand the viewpoints and strategies of others.
Procedure
Overview
The preparation for a scenario workshop can vary extensively. Depending upon the topic(s)
being addressed, the amount of information gathering required for well-informed, realistic
scenarios can be significant. The length of the pre- and post-workshop phases will also be determined by the extent to which the scenario-building process is conducted in a larger group or by smaller teams (who collect the input of others). In any case, a scenario team is
recruited, which then goes through the following steps:
• Elicit views, insights and facts.
• Identify the focal issue or decision.
• List key factors in the local environment.
• List driving forces in the macro-environment.
• Rank key forces and drivers by importance and uncertainty.
• Select the scenario logics.
• Flesh out the scenarios.
• Explore implications.
• Select leading indicators and signposts.
• Present the scenarios to the relevant public.
• Generate and discuss the options.
The first element of scenario analysis is an interactive team process of creating building
blocks for the scenarios. This process is generally carried out in a two-day workshop away
from the usual working environment. The second element, the development of compelling
scenario stories from this initial material through background research exploring the
implications of the stories is less participatory by nature and usually performed by a small
team of scenario analysts. In view of the focus of the current report on participation, we will
concentrate on the scenario workshops themselves.
The first step in the process is to identify the key issues or questions relevant to the
organisation and the time frame associated with the focal issue(s). This is followed by a
brainstorming exercise to surface ideas associated with the issues under concern. From this
brainstorming, driving forces and key trends are identified by clustering the brainstorm ideas
into common themes. Often these are social, cultural, technological, economic, environmental
and political, featuring the most significant events in the external environment; they will
drive the plots of the scenarios and determine their outcome. A variety of procedures has been developed for arriving at scenario plots from the key trends and driving forces. In general this proceeds in
the following way: 1) the driving forces and key trends are prioritised to determine those that
are most important and uncertain; 2) these provide the themes for the plots; 3) a variety of
scenario plots is then created from this limited number of selected themes; 4) once the themes
are identified the group fleshes out the skeleton of each scenario – tracing the narrative line
from a beginning to an end.
The follow up to the workshop output involves a period of interim research and reflection,
writing up the scenarios and exploring their implications. The important driving forces, trends
and uncertainties are researched.
Brainstorming exercises are used to surface a vast array of ideas associated with the issues.
The exercise is meant to be creative. For this reason, rules are applied to ensure that no idea is immediately disparaged or discarded. Brainstorming is a commonly known technique for the
creative generation of ideas, approaches or solutions without taking into account constraints
such as cost, practicality or feasibility. The blue-sky nature of the technique requires an
atmosphere conducive to creative and free-format expression. Participatory brainstorming is
an essential step in scenario analysis and is also often employed in focus groups. In order to
create optimum conditions for creative thinking the members of the group are asked not to
criticise, discard or disparage any ideas generated by others (Fahey et al, 1998b). Instead they
are encouraged to build on the ideas of others by suggesting embellishments, improvements
and modifications (Stewart and Shamdasani, 1990). The emphasis of the exercise is on the
quantity of ideas produced, as the greater the number of ideas generated, the higher the
probability that at least some are valuable. Brainstorming can be done verbally but also in writing, for example in a ‘post-it session’. An important factor is that the generated ideas are visible and accessible to the group, so that participants can directly interact with them.
The Group Decision Room is an example of a software supporting tool to perform
brainstorming in a very interactive way, usually generating ideas quicker than in ‘traditional’
settings. It allows large numbers of people to participate in strategic brainstorm sessions
simultaneously. The software can be adapted to suit specific needs. For example, anonymity
of input can be arranged so that all ideas are treated equally. In this way the potentially
negative influence of organizational hierarchies on the process is avoided. The software is
often used on location in so called group decision rooms. Consulting firms typically use this
technology offering their software and the related accommodation as part of their services. It
is also possible to use the technology independently of a physical location, thereby allowing group processes to be run efficiently and free of geographical constraints.
Mental mapping means making explicit the mental models of persons. Mental mapping is
also referred to as phenomenography. Phenomenography is a research method for mapping
the qualitatively different ways in which people experience, conceptualise, perceive and
understand various aspects of, and phenomena in, the world around them. In mental mapping
modelling techniques are used in a conceptual way to elicit knowledge from participants and
groups. A great variety of hardware and software supports has been developed for eliciting
and structuring knowledge of individuals or groups. For example MAXTHINK or MORE
provide a set of flexible text processing and sorting utilities that can help both to elicit and
organize verbal concepts. When projected in front of a small group, these software programs
can be used to support group brainstorming, acting as a sort of infinitely flexible ‘electronic
flipchart’. DAVID and DESIGN are modelling tools that can help in the conceptualization or
problem definition phases of a modeling project where causal loops are being either
generated or discussed by a group. STELLA is a very powerful model-building tool that
allows modellers to create models at a conceptual level very different from what had been
possible previously using conventional simulation languages such as DYNAMO and
DYAMAP. Richmond and Peterson have developed gaming interfaces for STELLA. Using
these interfaces, users may interact directly with the simulation model, often without having
to come to grips with or understand the structure of the system under study. However the
potential of these software tools for model conceptualization in groups has not yet been tested
(Vennix, 1996).
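By way of a hedged illustration only (none of the packages named above is used here), the following minimal Python sketch shows how a mental map elicited from a group can be stored as a simple directed graph of signed causal links; the concepts and links are invented examples.

# Illustrative sketch: a group mental map stored as a signed, directed causal
# graph. Concepts and links are invented examples for demonstration only.
causal_links = {
    "fertiliser use":   [("crop yield", "+"), ("nitrate leaching", "+")],
    "crop yield":       [("farm income", "+")],
    "nitrate leaching": [("water quality", "-")],
    "water quality":    [("policy pressure", "-")],
    "policy pressure":  [("fertiliser use", "-")],
}

def print_map(links):
    # Print each causal link in a readable 'cause --(sign)--> effect' form.
    for cause, effects in links.items():
        for effect, sign in effects:
            print(f"{cause} --({sign})--> {effect}")

print_map(causal_links)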
Realisation
a. Compose the scenario team
The team is composed of people with a thorough knowledge of the organisation concerned
and/or the issues to be addressed and often including people from different levels of the
organisation’s hierarchy. In addition, participants from outside the organisation, for example original thinkers with unorthodox views, are frequently included. Scenario analysis processes within governments usually involve external stakeholders as well.
The team should be composed of:
• decision-makers (whose mandate or competency is relevant to the focal issue or question)
• persons with a broad range of functions, areas of expertise and (political) perspectives
• creative thinkers.
All members of the team should have open minds and be able to work well together as a
team.
b. Elicit views, insights and facts
The team must decide how it wishes to gather the opinions and intelligence needed to prepare
the base for the actual scenario-building workshop. Many tools are available for preparing
this base, including the multiplicity of techniques for analysis briefly described in this
publication. Analytical tools commonly used to prepare the base for scenarios include
Structural Analyses, Delphi, Expert Panels, Régnier’s abacus, MACTOR and SWOT
analyses.
Almost always a certain amount of desk research is necessary to gather relevant information
about internal (to the organisation, region, etc.) and external trends, for example from the
OECD, economic forecasts, government statistics on demographics, think-tank reports, etc.
Such information may be gathered, as required, throughout the entire scenario preparation,
construction and analysis process. Usually such information will be useful initially to
contribute to the definition of the ‘assumptions’ upon which the scenarios will be built, also
called the scenario logics. Later additional and more specific information can be gathered
once these logics have been decided. The main information required includes:
• critical trends, especially very long-term trends that are expected to continue
• factors of change or future-shaping events that could alter even the seemingly most
established trends
• the roles of the various categories of stakeholders
• events that can alter the environment in the future.
Individual interviews and/or issue workshops can be used to gather viewpoints and insights
that will be useful in identifying the various items in steps d to g below. There is no rule about the
amount of information that should be fixed prior to the workshop. Thus more or less
information may be decided and fixed on the basis of collected intelligence, interviews or
workshops or by a simple executive mandate. Inevitably, the outcome will be a result of all of
the above. In any case, the process should be made transparent and decisions should always
be checked with the commissioner.
For gathering information through interviews, Ringland (2002) provides some questions that
can be used to trigger people’s strategic thinking:
• Critical issues. Would you identify what you see as the critical issues for the future?
Suppose I had full foreknowledge of the outcome as a clairvoyant, what else would you
wish to know?
• Favourable outcome. If things went well, being optimistic but realistic, talk about what
you would see as a desirable outcome.
• Unfavourable outcome. As the converse, if things went wrong, what factors would you
worry about?
• Where culture will need to change. Looking at internal systems, how might these need to
be changed to help bring about the desired outcome?
• Lessons from past successes and failures. Looking back, what would you identify as the
significant events that have produced the current situation?
• Decisions that have to be faced. Looking forward, what would you see as the priority
actions that should be carried out soon?
• If you were responsible. If all constraints were removed and you could direct what is
done, what more would you wish to include?
Depending on the nature of the general problem to be addressed, later interviews and
workshops may address different aspects and may need additional preparation. Ringland
(2002) notes that three areas of uncertainty very commonly arise:
• globalisation versus regional/localisation
• community values versus individual values
• technology: rate of change or adaptation.
The interviews should be analysed by grouping the major issues, including the above three if
applicable. This will reveal different points of view regarding what the ‘real problems/issues’
are and these will flavour the various scenarios.
c. Identify focal issue or decision
While the general topic might have been pre-determined, it is almost always too broad.
(Conversely, questions that are too narrow will be inappropriate to address with this method.)
Narrow down the general topic to a specific decision, question or focal issue that is
confronting the society, policy makers and/or management. In addition, set a clear time
horizon, for example 10 or 20 years. Finally, decide on the scope of the issue, for example the
future of the European Union or the future of information technology.
d. List key factors in the local environment
List the key factors influencing the success or failure of the decision. Consider the main
relevant issues that the decision-makers will need to be informed about when making
choices. What are the main criteria of success/failure and what would influence the outcome?
These are often microeconomic forces, such as resource availability, patterns of consumption,
supply, transportation and other infrastructure aspects, etc.
These factors can be elicited in an extended scenario workshop or separately in individual
interviews, focus groups and/or issue workshops.
e. List driving forces in the macro-environment
List the drivers and barriers that will or could affect the key factors. Forces to consider
include the ‘STEEPV’: Social, Technological, Economic (macro), Environmental, Political
and Values. In addition, forces such as demographics and public opinion should be
considered. One is attempting to identify major trends and breaks in trends and research is
usually required to adequately define them.
Also identify ‘predetermined’ elements of society, aspects of life that are almost completely
certain to develop in a known way. Next, identify ‘critical uncertainties’. These can be found
by questioning one’s own assumptions about the predetermined elements.
These forces can be elicited in an extended scenario workshop or separately in individual interviews, focus groups and/or issue workshops.
f. Rank key forces & drivers by importance & uncertainty
For each of the Key Forces and Drivers, rank:
• its degree of importance for the success of the focal issue or the decision identified
• the degree of uncertainty as to how it will develop.
Here one is not rating how uncertain the effects are that the factor/driver will have on the
issue or decision. Rather one is rating how uncertain the future developments of the
factor/driver are. For example, if one is quite sure that a pattern of immigration will
emerge for an area, then the ‘uncertainty rating’ will be toward the lower end of the scale.
For convenience, use a scale of 1 to 10, where 1 = very certain and 10 = very uncertain. The
purpose of this exercise is to identify the factors that are most important and whose
development is most uncertain.
Ranking can be done in an extended scenario workshop or separately in individual
interviews, focus groups and/or issue workshops.
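To make this ranking step concrete, the following minimal Python sketch (an illustration using invented driver names and scores, not part of the method description itself) rates a few hypothetical drivers on importance and uncertainty using the 1-10 scale mentioned above and shortlists those that are both highly important and highly uncertain as candidate axes for the scenario logics.

# Illustrative sketch: rank invented key forces/drivers by importance and
# uncertainty (1-10) and shortlist candidates for the scenario logics.
drivers = {
    "globalisation vs. localisation":  {"importance": 9, "uncertainty": 8},
    "community vs. individual values": {"importance": 8, "uncertainty": 7},
    "rate of technological change":    {"importance": 7, "uncertainty": 9},
    "demographic ageing":              {"importance": 8, "uncertainty": 3},
}

def score(item):
    # Drivers that are both important and uncertain matter most for the logics.
    _name, ratings = item
    return ratings["importance"] * ratings["uncertainty"]

ranked = sorted(drivers.items(), key=score, reverse=True)
shortlist = [name for name, r in ranked
             if r["importance"] >= 7 and r["uncertainty"] >= 7]

for name, r in ranked:
    print(f"{name:33s} importance={r['importance']} uncertainty={r['uncertainty']}")
print("Candidate axes for the scenario logics:", shortlist[:2])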
g. Select scenario logics
Based upon the rating exercise in step f above, two or three factors must be chosen to provide the
‘logics’, also referred to as the ‘assumptions’, of the scenarios, or in other words, the ‘axes’
along which the scenarios will differ. In order for the scenarios to be useful learning tools, the
axes (or logics) must be based upon factors that are inherent to the success of the focal
decision or highly important to the development of the focal issue.
For each identified factor, two contrasting aspects are chosen to label the poles of the axis.
For example, the factor ‘social values’ might be one axis, whereby one pole is labelled
‘individually dominated’ and the other pole ‘community dominated’. Similarly, another axis
might be based upon the factor ‘globalisation’, whereby the poles are labelled ‘local/regional’
versus ‘global’. The result would be four quadrants that provide the rationales for four different scenarios, as in the example below.
Thus scenario I would depict a society based upon community values and the dominance of
global forces, exploring how these factors influence the focal issue or decision. The other
scenarios are designed in a similar fashion, such that the logics for each of them are as
follows:
Scenario I: Community/Global
Scenario II: Individual/Global
Scenario III: Individual/Regional
Scenario IV: Community/Regional
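The combination of the two chosen axes into four scenario logics can also be illustrated with the following small Python sketch; it simply forms the product of the two poles of each axis, exactly as in the example above (the quadrant numbering is arbitrary, and the axis and pole names are just those of the example).

# Illustrative sketch: derive four scenario logics from two contrasting axes,
# as in the Community/Individual x Global/Regional example above. The ordering
# of the resulting quadrants is arbitrary.
from itertools import product

axes = {
    "social values": ("Community", "Individual"),
    "globalisation": ("Global", "Regional"),
}

logics = [f"{values_pole}/{scope_pole}"
          for values_pole, scope_pole in product(axes["social values"],
                                                 axes["globalisation"])]

for number, logic in enumerate(logics, start=1):
    print(f"Scenario {number}: {logic}")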
Next, consider one or two ‘wild cards’ that can be added into the scenarios. Wild cards are
unexpected – yet plausible – events that have major consequences, such as natural disasters
(floods, tsunamis, earthquakes), political upheaval (terrorism, dramatic regime change),
demographic trends (population reduction due to disease, migration due to changes in natural
resources) and so forth. The purpose of wild cards is to see how adaptable the organisation or
society would be under each of the scenarios.
h. Flesh out the scenarios
Participants can choose the angle from which they approach fleshing out the scenarios.
Traditionally, an analytical distinction has been made between exploratory and normative
scenarios (defined below), whereby in practice both exploratory and normative processes are
involved in every exercise. Nevertheless, a given exercise may focus more upon exploratory
or normative scenarios or a combination of both. Perhaps particularly effective is to first build
a small number of exploratory scenarios to identify potential developments, obstacles and
opportunities, relationships between factors and choices and long-term consequences. Based
upon insights gained from the exploratory exercises, the group can endeavour to create a
normative scenario. Then, an action plan can be developed for the attainment (or avoidance)
of a particular scenario. This involves ‘working back’ from the future towards the present,
tracing potential sequences of critical events and changes, so that this step is commonly
referred to as ‘backcasting’.
Exploratory scenarios start from the current situation and from past and present trends.
Assumptions are made about uncertainties relating to the environment and factors of change,
leading to pictures of plausible, possible futures. Some authors refer to these as neutral
scenarios, implying that researchers do not make any value judgements about the futures they
are describing. However, certain kinds of value judgements are always present, at least
implicitly in one’s choices of factors, for example.
Normative scenarios are constructed on the basis of various images of the future, which may
include either feared or desired futures. Then, one or more paths are portrayed as to how one
could arrive at, or avoid, that/those future(s). Hence, this process is basically the equivalent of
‘backcasting’.
When constructing exploratory scenarios, it is important to do multiple scenarios in order to
highlight the different relationships between the factors under different logics. In contrast,
with normative scenarios, often only one ‘desired future’ is constructed, sometimes as a
consensus-building exercise. However, if consensus promises to be difficult, try starting with
an undesired future first – it is often easier for everyone to agree upon what they do not want.
While the logics that distinguish each of the scenarios are determined by the scenario’s place
in the matrix of the most important driving forces, all of the scenarios will describe the same
general factors to enhance comparability. Each of the driving forces and key factors listed in steps d and e should be given attention in each of the scenarios.
First, consider how each of these factors and forces might develop under the logics of each
scenario. Hence, one scenario might provide the description ‘Schools have metal detectors and armed guards and are locked up outside of school hours’, while in another scenario,
‘Schools are used by the entire community for 14 hours per day’.
Weave the pieces together in the form of a narrative. The scenarios need to be fleshed out
with a storyline that describes how the scenario state evolved from the present. Answer the
questions:
• How would we get from here to there?
• What events would need to happen for this scenario to come true?
• What sort of people would characterise the scenario?
Peter Schwartz (1998) identifies some common plots for scenarios:
• Winners and Losers
• Challenge and Response
• Evolution
• Revolution
• Cycles
• Infinite Possibility
• The Lone Ranger
• My Generation
For descriptions and examples of these plots, refer to Schwartz (1998).
Consider the ways in which different plots might handle the same forces, such as
environmental policy. The narrative should recount a sequence of events, expressed in
observable terms such as ‘The UK joins the European Monetary Union’ rather than ‘The UK
grows closer to Europe’.
The plots often change over time and interact with each other. Beware of assuming that any
given plot will continue ‘in an unbroken line’, without any human response to developments.
Good scenarios are both plausible and surprising. Consider adding in one or more of the ‘wild
cards’ and describe how the event affects the other factors in each scenario.
During the fleshing out stage begin to note and quantify (when applicable) early indicators
that distinguish the development of each scenario. These can be described in the scenarios
themselves and can also later be used to ‘monitor the future’.
Give each of the scenarios a name that is concise, vivid and memorable. The name should be
revealing of the scenario’s logics in that it distinguishes a given scenario from the logics of
the others. Beware of ending up with three scenarios, which may be perceived as ‘most
likely’, ‘middle’ and ‘most unlikely’ forecasts. In general, avoid assigning probabilities to the
scenarios; one risks neglecting an unlikely scenario that would have high impact if it were to
unfold.
i. Explore implications
Consider the implications of each scenario for the focal issue or decision. What
vulnerabilities have been revealed? Is the decision or strategy robust across all scenarios or
only in one or two? If a decision looks appealing in only one of the scenarios, then it is
considered a high-risk gamble, particularly if the organisation has little control over whether
or not the scenario will be realised. Explore how the strategy can be made more robust.
j. Select leading indicators & signposts
Identify events or characteristics that would be indicative that a particular scenario is coming
to pass. These indicators are early signals that should be scenario-specific, not common to all
or several, so that the various scenarios can be distinguished from each other. They should be
concrete rather than general or ambiguous, so that they can be monitored by the government,
organisation or company. For example, signs that the economy is changing from industrial to
more technology-based might be detected in help-wanted advertising, changes in union
memberships or the emergence of new periodicals.
The purpose is to be able to detect various actual developments as early as possible so that the
strategies can be adapted appropriately.
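As a hedged illustration of how such scenario-specific indicators might be tracked (the indicator phrases and scenario names are invented examples, not recommendations), the short Python sketch below records a few indicators per scenario and counts how many of them have been observed, pointing to the scenario that the observations currently fit best.

# Illustrative sketch: scenario-specific leading indicators and a simple check
# of which scenario recent observations point towards. All names are invented.
indicators = {
    "Community/Global":    {"growth of community land trusts", "rising fair-trade imports"},
    "Individual/Global":   {"rapid e-commerce growth", "declining union membership"},
    "Individual/Regional": {"increase in local private schooling", "new regional trade barriers"},
    "Community/Regional":  {"growth of farmers' markets", "municipal energy cooperatives"},
}

observed = {"declining union membership", "rapid e-commerce growth"}

# Count, for each scenario, how many of its indicators have been observed so far.
matches = {scenario: len(signs & observed) for scenario, signs in indicators.items()}
most_likely = max(matches, key=matches.get)

print(matches)
print("Observations currently point towards:", most_likely)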
k. Present scenarios to relevant public
Commonly the scenarios and analyses are presented to the relevant public in the form of
written reports. However, some scenarios have been presented using highly creative venues.
For example, one city created a ‘Villa 2015’, with a room for each scenario. All of the city’s
inhabitants were sent a postcard picturing the four scenarios, which invited them to visit Villa
2015. Visitors to Villa 2015 were asked to express their preferences in a questionnaire before
leaving and the city planners subsequently used the information gathered. Another innovative
idea came from a company that created an online interactive environment to feed the
scenarios back. A further possibility would be to present the scenarios in short theatrical skits.
l. Generate and discuss the options
Insights generated during the scenario-constructing process can be used to inform subsequent
decision-making. Ringland (2002, Section III.7) discusses one possible method to move from
the scenarios to plans. He suggests the following steps:
• Strategic analyses. Perform a strategic analysis of one’s own organisation as well as
existing and potential future competitors. The analysis can be conducted with well-known
tools such as SWOT analysis, PIMS, portfolio analysis, critical success factors, business
segmentation, etc. Refer to the list of analytical tools, provided in this manual, for further
possibilities.
• Scenario creation. In the scenario creation process, future developments are described
that could affect the organisation.
• Strategy finding. Scenarios can be used in at least two ways to help develop strategies.
First, they can be used to explore the environments in which the community will most likely have to operate in the longer term. Thus they can guard against the pitfall of designing a
strategy for the year 2050 that would have been suitable for the world as it was in the
year 2004 (but is no longer relevant). Review the opportunities, threats and their related
options for action that have been determined in the different scenarios. Managers must
decide whether to base the strategy on one or multiple scenarios. A strategy based upon one reference scenario is called ‘strongly focused’, while one based upon multiple scenarios is called a ‘future-robust plan’. In either case, the main question is ‘What shall we do if a certain scenario comes true?’ and not ‘What will happen?’
Create a matrix that lists the various options for the organisation. Rate the suitability of
the options for each of the scenarios. Group the options into strategies, depending on
whether they are part of a future-robust strategy, a partly robust strategy or a focused
contingent strategy. Refer to Ringland (2002, p. 188) for an example of a scenario options matrix and further explanation; an illustrative sketch of such a matrix is also given after this list.
• Strategy formulation. Once the strategic orientation has been decided upon, concrete
measures must be determined to bridge the organisation from the present to its objectives.
These objectives may be described in the organisation’s mission statement. However, as a
consequence of insights gained from the scenario construction process, members of the
organisation may wish to re-formulate the mission statement, in full or in part.
Alternatively or in addition, previous goals and strategies seen to be in accord with the
mission statement may be revised. In deciding how to build the bridge from the present to
the stated objectives, the organisation can have various approaches. Ringland (2002)
describes some typical types of scenario-supported strategic approaches including:
- Reacting to recognizable trends
- Managing future risks
- Energetically using future chances
- Staying flexible
- Developing and reaching own visions.
Most likely, a mix of these will be used.
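The scenario options matrix described under ‘Strategy finding’ above can be sketched as follows; this is a minimal Python illustration with invented options, scenario names and suitability ratings, not a reproduction of Ringland’s tool.

# Illustrative sketch: rate invented strategic options against invented
# scenarios and classify each option as future-robust, partly robust or
# focused, in the spirit of the scenario options matrix described above.
options = ["invest in irrigation", "diversify crops", "expand agri-tourism"]
scenarios = ["Community/Global", "Individual/Global",
             "Individual/Regional", "Community/Regional"]

# suitability[option][scenario] on a 1 (poor fit) to 5 (good fit) scale
suitability = {
    "invest in irrigation": dict(zip(scenarios, [4, 4, 3, 4])),
    "diversify crops":      dict(zip(scenarios, [5, 2, 2, 4])),
    "expand agri-tourism":  dict(zip(scenarios, [1, 1, 4, 2])),
}

def classify(ratings, threshold=3):
    # An option suitable in all scenarios is future-robust; in several, partly
    # robust; in only one, a focused (contingent) strategy.
    suitable = sum(1 for rating in ratings.values() if rating >= threshold)
    if suitable == len(ratings):
        return "future-robust"
    if suitable > 1:
        return "partly robust"
    return "focused (contingent on one scenario)"

for option in options:
    print(f"{option:22s} -> {classify(suitability[option])}")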
Resource considerations
An absolute minimum of two days is required to conduct a scenario workshop from steps e/f through to fleshing out the scenarios. This is only feasible if the focal question or issue is
already very well defined and all information required for deciding the key drivers and main
factors of uncertainty has been gathered and understood by the participants.
Once the scenarios have been fleshed out, additional time is needed for analysis and strategy
building. For good results, it is advisable to provide at least three days for the actual scenario-construction workshop and a total of six months for preparation (intelligence-gathering interviews, determining the focal question), analysis, strategy-building and dissemination.
At its optimum, and especially when this method is used for development purposes, the
procedure should be seen as a continuous, iterative process that involves:
• the continuous development, refinement and adaptation of the scenarios
• the use and interpretation of the scenarios in new plans and programmes
• the implementation of existing plans and programmes
• the maintenance and evolution of the knowledge and action networks.
Scenario methods are more laborious, costly and time-consuming than simple ‘planning’.
However, some authors emphasise that, given the propensity of traditional forecasting and
planning to fail in uncertain times, the additional delay and cost can be justified if they result
in a more durable plan.
The following are the main budgetary items in a scenario workshop:
o Personnel: project manager, facilitator(s)
o Honorarium to participants, if applicable
o Travel: facilitator(s), participants
o Accommodation: facilitator(s), participants
o Food: meals and refreshments for each day of workshop
o Recruitment and Promotion: mailings to recruit participants, promotion for public presentation of scenarios
o Communications: costs for eliciting opinions (depend upon methods used), costs for public presentation of scenarios (depend on format), printing of final report
o Facilities: location for workshop
o Materials and Supplies: paper and pens, laptop computer, software for calculating, plotting and word processing, (overhead) projector, large sheets of paper to post ideas, tape or tacks, bold markers
Additional best practices and potential pitfalls
Care must be taken not to generate the impression that the scenarios developed are the only
possible futures. In reality, the future is likely to be a mix of the various elements in the
scenarios, as well as ones not considered at all.
Sometimes the output is that one scenario is seen as the ‘most likely’ scenario and the others
describe minor variations on that theme. For this reason, some facilitators rule out ‘business
as usual’ scenarios.
Some users may find it challenging to grapple with multiple plausible futures, which is why
most practitioners recommend developing only three to five scenarios in a single workshop.
However, this risks limiting the range of dynamics and possibilities that are considered. For
this reason, it can be particularly useful to have some time devoted to examining ‘wild cards’.
When presenting the scenarios, it is essential to carefully consider one’s audience. Scenarios
that only describe broad generalities, lacking supporting analysis and quantification, are not
operational. Thus policymakers see them as not useful – though they may be appreciated by
the general public for giving a taste of the future. In contrast, scenarios presented in extreme
technical detail and with great formality may prove too difficult for ordinary readers to
assimilate.
Envisioning workshops
Brief description
The envisioning workshop method can be seen as a variant of the participatory scenario
analysis method (scenario workshops). The main difference is that in the envisioning
workshops method, the initial scenarios are predefined by experts and scientists and are
subsequently discussed by participants, in order to refine them. In the scenario-analysis
method, the participants develop the scenarios themselves.
Historical background: Envisioning workshops have their basis in technology assessment:
analysis and prediction of technological change to provide an input to technology policy
making. Traditionally, technology assessments were science-centred, performed by scientific
institutions and handed over to decision-makers in the form of desk studies. This proved to be
largely ineffective because it does not adequately address the interaction among actors, the
impacts of which cannot be the simple sum of individual actors’ perspectives and actions.
Objectives: Envisioning workshops aim to create an environment where all participants
(scientists and non-scientists) play an equal role in the generation and exchange of ideas on
the basis of predefined scenarios. Therefore, these scenarios serve as a tool for creating and
comparing different visions of the future and developing a set of actions.
Participants: The participants, 18 to 30 individuals, can be experts, managers, decision-makers or stakeholders who have key roles to play in technical or decision aspects of the problem.
Procedure: The description of the process is based on the UK envisioning workshop project
on Sustainable Urban Living in the Coming Decades (see the example in the next section)
and can be summarised in two steps. Prior to the first workshop, small sets of scenarios are
predefined by experts or derived from previous prospective studies. Scenarios are mostly
narrative descriptions of how everyday life might be in the future, indicating how problems
might be solved at that time. The participants are then invited to discuss the previous
scenarios describing possible futures for their local area.
Relevance: Envisioning workshops are best suited to outlining interdisciplinary topics that
entail an assessment of future scenarios associated with different types of technology. This is
the case where there is a large number of plausible futures and the exchange of professional
expertise and insight may create new knowledge.
Detailed description
The goal of envisioning workshops is to bring together a range of people and to stimulate
them to put forward their view of arrangements for future developments and to look at how
those arrangements can be brought about and by whom (Street 1997). Envisioning workshops
are used to create an environment where all participants (scientists and non scientists) play an
equal role in the generation and exchange of ideas. Envisioning workshops are most suited to
broad, interdisciplinary current and socially oriented subjects, which entail an assessment of a
choice between different types of technology where differences are important and where
exchange of professional expertise and insight may create new knowledge. Envisioning
workshops aim to encourage a critical evaluation of scientific institutions and try to
incorporate citizens’ views into technology policy by creating an environment where a group
of heterogeneous participants can discuss, explore and evaluate different policy options
related to technology.
Envisioning workshops are meetings that involve discussion among a range of actors, with
the aim of developing visions and proposals for technological needs and possibilities in the
future (Street 1997). The method is referred to as Scenario workshops in the literature. The
name has been changed for the purposes of this working paper to avoid confusion with other
methods of similar name. The basis for such workshops is a set of prepared scenarios, which
put forward possible future arrangements or conditions surrounding a particular issue. The
group of 18-22 participants discusses and criticises the scenarios, creates common visions,
identifies barriers to those visions and develops plans of action. The envisioning workshop
can be seen as a variant of the scenario learning methodology. The main difference is that in
envisioning workshops scenarios have been developed in advance and are used as input for
the discussion, while in scenario workshops the participants themselves develop the
scenarios.
Predefined scenarios – sets of scenarios formulated by experts or derived from previous scenario studies – are used to engage participants in discussion about the future. They provide a more concrete context for discussing future issues that would otherwise remain abstract. Such
scenarios are ‘snapshots’ of possible futures often taking the form of expressive descriptions
of everyday life in a future time paying particular attention to issues that the workshop aims
to address. The scenarios are not intended to be prescriptive but they provide a starting point
for discussion in ‘envisioning workshops’. They serve as a tool for creating and comparing
different visions of the future and developing a set of actions. Although these predefined
scenarios may be constructed in a similar way to those employed in role playing or gaming
exercises, they serve an entirely different purpose. The participants do not use them to
assume roles and act out the future but as a means to stimulate thinking and as a context to
engage in discussion about the future.
As this is a relatively new method there are not enough examples upon which to base a
generic description. For this reason the process will be described based on the experiences
with the envisioning workshop project, ‘Sustainable Urban Living in the Coming Decades’
which took place as part of the Value II programme of the Commission of the European
Communities (CEC), which aims to stimulate the dissemination and exploration of
knowledge resulting from specific Community R&D programmes. In this context initiatives
were developed to strengthen the interface between research and society. In particular the
above envisioning workshop project explored new ways for bringing technological
developments more in line with sustainable environment and future plans of society. There
was an international workshop involving participants from four European cities (in Corfu,
France, the UK and the Netherlands), which was followed by four local workshops held in
each of the cities (see Box) (for more detail see Street, 1997).
Some potential limitations are known. As a social process, scenario workshops (and all prospective methods) share the following limitations:
- The ‘Zeitgeist’ problem: The group dynamics can affect the outcome of the deliberative
process such that different exercises have similar results. This happens when different groups
focus on the same small range of currently dominant social and cultural themes.
- The ‘opacity of context’ problem: This is common when participants become too focused
on particular aspects of a certain sector, such as technology, but omit to fully evaluate the
social, economic and political implications of the associated sector changes.
- The ‘event evaluation’ problem: People tend to overestimate the likelihood of low-probability events and underestimate the probability of likely events. There is an equal
tendency to distort the representativeness of events, essentially by focusing on striking but
basically irrelevant details, which is liable to undermine the viability and usefulness of future
scenarios.
References and Resources
African Futures and Phylos IPE (2002) A Guide to Conducting Futures Studies in Africa. Ottawa, Canada: St. Joseph Print Group.
Futures Group, The (1994) Scenarios. In J. Glen (Ed.) Futures Research Methodology. AC/UNU Millennium Project.
ICIS. Building Blocks for Participation in Integrated Assessment: A review of participatory methods.
Ringland, G. (2002) Scenarios in Public Policy. West Sussex: John Wiley & Sons Ltd.
Schwartz, P. (1991) The Art of the Long View. Chichester: John Wiley & Sons.
Social Analysis: Selected Tools and Techniques. World Bank Social Development Paper Number 36, June 2001.
Van der Heijden, K. (1997) Scenarios: The Art of Strategic Conversation. Chichester: John Wiley & Sons.
Wehmeyer, W., Clayton, A. and Lum, K. (eds) (2002) Greener Management International, Issue 37: Foresighting for Development.
Street, P. (1997) Scenario workshops: A participatory approach to sustainable urban living. Futures, 29(2): 139-158.
Policy Delphi
Brief description
The conventional Delphi method deals with technical topics and seeks a consensus among a
homogeneous group of experts. In contrast, the Policy Delphi is employed to generate the
strongest possible opposing views on the potential resolutions of a major policy issue. A
policy issue can be seen as an issue for which there are no ‘experts’, only informed advocates
and referees. An expert or analyst may contribute a quantifiable or analytical estimation of
some effect resulting from a particular resolution of a policy issue, but it is unlikely that a
clear-cut (to all concerned) resolution of a policy issue will result from such an analysis. The
expert becomes an advocate for effectiveness or efficiency and must compete with the
advocates for concerned interest groups within the society. The Policy Delphi rests on the
premise that the decision maker is not interested in having a group generate his/her decision,
but rather in having an informed group present all the options and supporting evidence for
his/her consideration. Therefore, the Policy Delphi is a tool for the analysis of policy issues
and not a mechanism for making a decision. Generating a consensus is not the prime
objective. The structure of the communication process, as well as the choice of the
respondent group may make achieving consensus on a particular resolution very unlikely.
The procedure for the Policy Delphi is the same as for the traditional Delphi, but the survey
questions posed to the panellists will aim more at exploring all possibilities, opinions and
reasons rather than at achieving consensus.
Detailed description
The following questions should guide the planning and implementation phases of a Policy
Delphi:
o Formulation of the issues. What is the issue that really should be under consideration? How should it be stated?
o Exposing the options. Given the issue, what are the policy options available?
o Determining initial positions on the issues. Which are the ones everyone already agrees upon and which are the unimportant ones to be discarded? Which are the ones exhibiting disagreement among the respondents?
o Exploring and obtaining the reasons for disagreements. What underlying assumptions, views or facts are being used by the individuals to support their respective positions?
o Evaluating the underlying reasons. How does the group view the separate arguments used to defend various positions, and how do they compare to one another on a relative basis?
o Re-evaluating the options. Re-evaluation is based upon the views of the underlying ‘evidence’ and the assessment of its relevance to each position taken.
In principle, this process would require five rounds in a paper-and-pencil Delphi procedure. However, in practice most Delphis on policy try to maintain a three-round or four-round limit by doing the following: the monitor team devotes considerable time to carefully pre-formulating the issues; the questionnaires provide a list (or lists) of an initial range of options but allow the panellists to add to the list(s); and the panellists are asked for their positions on an item and their underlying assumptions in the first round.
It has been suggested that the best vehicle for a policy Delphi is a computerised version of the
process, in which the round structure disappears and each of these phases is carried through
in a continuous process.
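As a minimal sketch of what such a computerised process might do, the example below tallies hypothetical panellists' ratings of policy options and flags the most polarised ones for further exploration. The option names, rating scale and figures are invented for illustration; they are not part of any particular Delphi software.

```python
# Illustrative only: tally hypothetical panellists' positions in a computerised Policy Delphi.
# The options and the 1 (strongly oppose) to 5 (strongly support) scale are placeholders.
from statistics import mean, pstdev

responses = {
    "option A (subsidy reform)": [5, 4, 1, 2, 5, 1],
    "option B (status quo)": [3, 3, 2, 3, 3, 2],
    "option C (new regulation)": [4, 5, 5, 4, 4, 5],
}

# A large spread signals strong disagreement, which a Policy Delphi deliberately explores
# further (by feeding back the underlying arguments) rather than smoothing into consensus.
for option, scores in responses.items():
    print(f"{option}: mean = {mean(scores):.1f}, spread = {pstdev(scores):.2f}")
```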
In a Policy Delphi it is necessary that informed people, representative of the many sides of
the issues under examination, are chosen as participants. The initial design must ensure that
all of the ‘obvious’ questions and sub-issues have been included and that the participants are
being asked to supply the more subtle aspects of the problem. Thus, the monitors must
understand the subject well enough to recognise the implications of the participants’
abbreviated remarks.
In some cases the participants may over-concentrate their efforts on some issues to the
detriment of others. This may occur because the group is not as diversified as the total scope
of the exercise should be. With proper knowledge of the subject material, the design team can
stimulate consideration of the neglected issues by interjecting comments in the summaries for
consideration by the group. It is a matter of integrity to use this privilege sparingly to
stimulate dialogue on all sides of an issue and not to sway the participants toward one
particular perspective.
Focus groups
Brief description
A focus group can be defined as a discussion group with a limited number of participants
that focuses on a specific topic. A moderator asks questions of a general nature according to a
pre-defined discussion guide and facilitates the discussions.
Historical background: The focus group method was first used in the early 1970s in social
science research and applied marketing science. It is a combination of two social-research
techniques. The first is the “focused interview” in which an interviewer elicits information on
a topic without the use of a fixed questionnaire guide. The second is a “group discussion” in
which a possibly heterogeneous, but carefully selected, group of people discusses a series of
particular questions raised by a skilled moderator. The group is provided with a common
input and the group’s reaction to this input is explored.
Objective: The objective is to obtain information about the group’s perceptions, attitudes and
values regarding a given topic and to analyse the determinants of such perceptions, attitudes
and values.
Participants: The most frequently used format of the focus group technique involves six to twelve participants and is facilitated by one or two moderators. Depending on the issue to be discussed, it is necessary to bring together a fairly homogeneous public, although this may entail implementing focus groups with different kinds of public. The public may consist of ordinary citizens, consumers, stakeholders, experts, decision-makers, etc.
Procedure: Like many participatory methods, the focus group structure comes in many variants and needs to be adapted to circumstances. The implementation of focus groups is generally based on the following steps:
• Step 1 concerns designing a focus group: (i) selecting the topics, (ii) defining the objectives, (iii) determining the role of the organising team, (iv) identifying and selecting the participants, (v) setting the size of the groups and the number of sessions, (vi) drafting a guide to support and orient discussions and (vii) deciding on how discussions are to be analysed (writing notes, tape or video recording).
• Step 2 concerns the implementation of the group sessions. The discussion evolves among
the participants, triggering new questions as they respond to previous ones. The facilitator
keeps the discussion on track and establishes the range of reactions by prompting participants
to look at different sides of topics. He can use various techniques to encourage respondents to
express their views or to pursue the discussion more deeply.
• Step 3 is devoted to reporting on discussions and analyses. The analysis is conducted by the
research team and is based on the notes or/and records. It generally yields a report.
Relevance: The issue at stake in a focus group approach is often an unstructured problem
(Field definition and Scoping). Focus groups can be used to help identify public concerns
"coming over the horizon" and also those which are current but not registered or recognised
by existing institutions. This implies that there is uncertainty and disagreement on both the
values at stake and kinds of knowledge needed to address the issue. This favours a pluralism
of possible descriptions of causes, impacts and solutions, since reaching a consensus is not an
objective. They might also be used as fora to help assess evidence of appropriateness,
compliance and effectiveness (Monitoring). While the physical setting is more typical of a
private conversation (open discussion around a table), the topics introduced by the moderator
and the overall group situation (people do not know each other) belong more to a situation of
public debate.
Detailed description
A focus group is a planned discussion among a small group (4-12 persons) of stakeholders
facilitated by a skilled moderator. It is designed to obtain information about preferences and
opinions in a permissive, non-threatening environment. Group members influence each other
by responding to ideas and comments in the discussion, with the consequence that a more
natural articulation unfolds (Krueger 1988). In focus groups, scientists play the role of
facilitator or observer. They are usually not actively involved as full participants. Focus
groups can also be conducted online.
When to use
Focus groups are good for initial concept exploration, generating creative ideas. They are
often used to test, evaluate and/or do a programme review. They are most appropriate to get a
sense of regional, gender, age and ethnic differences in opinion. They are not effective for
providing information to the general public or responding to general questions, nor are they
used to build consensus or make decisions. Use focus groups for gaining an overview or
exploring the development of ideas, not for collecting detailed individual stories. Use groups
when experiences are not too sensitive to share. Do not use focus groups when addressing
sensitive, personal or particularly contentious issues. Do not use focus groups when exploring
individual attitudes, experiences or decision-making, collecting detailed or complex factual
material.
Focus groups are used for marketing research and political and sociological work. Some
purposes of focus groups include exploratory work, pre-test work, aiding event recall and
triangulation with other data collection methods. They are particularly useful when participants’ reasoning behind their views is of interest, as well as the process by which participants develop and influence each other’s ideas and opinions in the course of discussion. Focus groups are useful to:
• Gauge the nature and intensity of stakeholders’ concerns and values about the issues
• Obtain a snapshot of public opinion when time constraints or finances do not allow a full
review or survey
• Obtain input from individuals as well as interest groups
• Obtain detailed reaction and input from a stakeholder or client group to preliminary
proposals or options
• Collect information on the needs of stakeholders surrounding a particular issue or concept
• Determine what additional information or modification may be needed to develop consultation issues or proposals further.
Warning
Make sure that the composition of the groups encourages rather than hinders the exchange of ideas. Pick a venue where your respondents will feel comfortable, not over- or under-whelmed. Deal confidently and effectively with practical matters such as refreshments, incentives, tape recording and seating. Internalise the topic guide and use it only as an aide-mémoire to keep you on track. Concentrate and listen carefully; do not let your attention wander.
Advantages
Focus groups are relatively inexpensive and the format is flexible, allowing participants to
question each other and to elaborate upon their answers. Focus groups, in contrast to
individual interviews, allow for the participating individuals to develop and express their
opinions in a more ‘natural’ social context, which some claim is more akin to the ways in
which people form their opinions in everyday contexts. In addition, this discussion period
highlights people’s reasoning and thoughts underlying their expressed opinions. The method
is relatively simple, allowing participants to readily grasp the process and purpose.
When the power differential between the participants and the decision-makers is great enough
to discourage frank participation, the focus group provides the security of a peer group.
Furthermore, the method is particularly useful when one is interested in complex motivations
and actions, when one will benefit from a multiplicity of attitudes, when there is a desire to
learn more about consensus on a topic and when there is a knowledge gap regarding a target
audience.
Focus groups are easy to organise especially if you already have contacts of people you can
invite and they are willing to attend or have indicated an interest in advance. Complex issues
can be addressed targeting specific interest groups. People generally feel more confident in
groups. Discussion can stimulate thinking and spark ideas within the group. Focus groups
can be used to communicate with all sorts of groups of people.
Disadvantages
The multiple voices of the participants, as well as the flexibility of the process structure, result in
limited researcher control over the focus group process. Sometimes group expression can
interfere with individual expression and the results may reflect ‘groupthink’.
Due to the small numbers involved, the results cannot be extrapolated to the whole population and are therefore not statistically reliable. It is also recommended to use an experienced moderator, which can add to the cost.
As focus groups are small, it is difficult to make them representative of the population; you may need to run several focus groups that represent different groups in your population. As data from focus groups are not statistical (although they can be quantified), their analysis is time-consuming and complex. Some participants might be inhibited or afraid to say what they really feel. Often a group view is the general outcome. Dominant participants might shout other members of the group down.
Procedure
Overview
To prepare for the focus group event at least three staff members must first determine the
questions to be addressed by the focus group and the targeted participants. Next, the focus
group participants and a moderator are recruited. At the focus group event, which usually
lasts for a few hours, the moderator leads the group through a semi-structured discussion to
draw out the views of all of the participants and then summarizes all of the main issues and
perspectives that were expressed. After the event the research staff analyses all results of the
focus group(s) conducted and produces a report.
Preparation
a. Personnel
A minimum of three staff, one administrator and two (assistant) researchers, will be needed to
prepare for the focus group event.
• Administrative staff tasks include:
o Preparing and sending information materials for participants
o Organizing logistics (location, equipment, catering, accommodation, etc.)
o Setting up and cleaning up after the event
o Distributing honoraria.
• Research staff tasks include:
o Recruiting potential participants in the focus groups
o Recording proceedings
o Data analysis
o Preparing the report.
• Either two moderators or one moderator and one assistant will be required to facilitate the focus group(s).
b. Tasks
Define concepts to investigate:
o Assess the purpose of the focus group. What kind of information is needed? How will
the information be used? Who is interested in the information? Determine the ideal
end-result, including its probable use.
o Decide who the target participants are, for example, customers, employees, decision
groups, etc.
o Listen to the broad target audience to determine how to select participants,
appropriate incentives for various groups and ideal questions and moderator
characteristics to maximize participant engagement.
o Determine, generally, the number of sessions. Consider whether different subgroups
of the population have different levels of knowledge or different attitudes that may be
relevant to the research and reflect on the expected generalisability to the population
at large.
o Decide on the characteristics for the participants for (each of) the focus group(s). If
you hold more than one, you may want to divide the individual sessions into groups
of people sorted by gender, social class or interest group. Alternatively, you may
wish to have more heterogeneous groups. Some practitioners recommend recruiting
members of the same socio-economic status for each of the focus groups. In any case,
avoid putting people in a situation where they are unlikely to participate due to
intimidation.
o Develop your description of the problem.
o Formulate potential questions in terms of issues for discussion.
Generate questions for the focus groups:
Create a set of questions in a loose-running order, with specific prompts to facilitate
participant understanding and to encourage replies.
o An opening question should be used to acquaint and identify common characteristics
among the group members.
o An introductory question can be used to introduce the topic and foster conversation.
o Use 2 – 5 key questions or topics to drive the focus group discussion.
The question list and order should be prepared but should be flexible and adapted to the
group’s natural conversation process. They should be clear, relatively short and use
simple wording. Accompany the questions with sufficient background to minimize assumptions and place them in the appropriate context. The questions should be open-ended rather than dichotomous. Avoid broad ‘why’ questions and instead break them
down into specific sub-issues. The questions can include various formats, such as
sentence completion and conceptual mapping (situation – response: ‘Given a certain
situation, what would you do…?’)
One can begin with a general question to get a sense of the level of knowledge of the
participants as well as information about their perceptions/misperceptions. Alternatively,
one can begin with questions about sub-issues that the members who are least likely to
actively participate are likely to know the most about.
o Use a concluding question that helps to establish closure.
If consensus is the aim, one can ask, ‘All things considered, what would you
recommend…’. Alternatively, or in addition, the moderator can first briefly summarize
the discussion. Then, ask the group if the summary is adequate and end with, ‘Have we
missed anything important?’
• Logistics for focus group participation
o Select a location that is easy to find, minimizes distraction, provides a neutral
environment and that ideally facilitates sitting in a circle.
o Plan/schedule for the focus-group(s).
For very narrow topics focus groups usually last only an hour or two. However, if the
topic is more policy-oriented, a one-day workshop can be organized with multiple
sessions so that the group can focus on various sub-topics. When scheduling the event,
avoid major national events. Do not exceed two hours per session with adults (or 1 hour
for children). Schedule the focus groups at a convenient place and time. Avoid hosting
the event at locations that might be contentious.
o Prepare copies of any questionnaires or handouts, if there are any.
o Identify small talk topics for discussion with participants as they arrive. Avoid the
focus group topic.
o Secure audio or video-recording equipment, extra batteries, tapes, extension cords,
notepads and pens.
o Make nametags.
o Arrange furniture in the room.
o Ensure absence of disruptive background noise that might interfere with discussion
and recording.
o Set up and test recording equipment.
o Set out refreshments.
o Have honorariums and/or travel reimbursement money ready.
• Recruiting for focus group participation
o Determine the planned focus group size.
The ideal focus group size ranges from 4 – 12 persons, with recommendations ranging
between 4-8, 6-10, 7-10 and 8-12 persons. Larger groups can be used for more
exploratory purposes, although they tend to fragment into smaller groups beyond a
maximum group size of 12. Some researchers use mini focus groups of 4-5 persons to
gauge initial reactions, but these can fail to generate useful discussion.
o Recruit participants at least 1-2 weeks prior to the scheduled focus groups.
Participants are generally chosen to represent a cross-section of the public affected by the
issue and may be chosen to represent specific interests.
o Try to make the group representative of your target.
o Do not use regulars (focus-group addicts).
o The moderator should not know members.
o Members should not know each other.
o Choose people who can communicate effectively.
o When recruiting for focus groups, it may help to emphasize the need for participants’
insight to discuss the topic at hand rather than participation in a ‘focus group’. This
more casual formulation may prove less intimidating.
o Send personalized letters of invitation to each person who has been pre-selected and
who has confirmed their availability and interest in participation. Include the
information provided on the phone (and/or in person) with some elaboration, if
appropriate. Include directions to the location of the event, information about public
transportation and parking availability, etc.
o Call each of the focus group participants the day before the event to remind them.
o Recruit a focus group moderator(s).
The moderator should have a good knowledge of the topic in order to ask appropriate
follow-up questions. If the focus group participants make up a distinct culture group,
it is useful to have a moderator with cultural sensitivity to that group. The moderator
should dress as (s)he expects the participants will dress.
Realisation
a. Participant Arrival
As participants arrive, the moderator(s) greet(s) guests and make(s) small talk but avoid(s)
the topic of the focus group. At this time the moderators have a chance to quickly assess the
communication styles of the participants. Based upon their assessment, they can place
nametags around the table. It has been suggested that participants with dominant communication styles be seated near the moderator and more reticent participants be seated where eye contact can be
easily established. In case some participants happen to know each other, they can be
separated.
b. Introduction
• Begin taping the session.
• Once all participants are seated, the moderator welcomes the group, introduces him/herself and gives relevant background information and an overview of the topic. Emphasize that this is an opportunity for participants to give voice to their opinions and that the researchers are there to learn from the participants.
• The moderator explains what the results of the focus group will be used for and what form the data will take.
• The moderator outlines the ground rules. Emphasize that one person speaks at a time and that the session is being recorded to ensure that all comments are noted. Assure that no specific names will be used in the final report. Emphasize that all points of view are important to the discussion.
• The moderator asks a warm-up question that everyone is asked to answer.
• The moderator asks the introduction question (if any) and then moves to the other questions/topics, as pre-decided.
During the course of the discussion, the moderator or an assistant can use a flipchart to
illustrate the ideas expressed. The moderator should encourage all participants to express
their views, for example by asking, ‘Does anyone have a different view?’ Overly
dominant participants and those who ramble should be reined in to give others space.
The moderator may suggest that all participants initially write down a few thoughts in
response to a question before the group discusses it together.
c. Conclusion
• The moderator briefly summarizes the main points of view and then asks if the summary is accurate or if anything was missed. (S)he answers any final questions about the focus group work.
• The moderator thanks the group members for their participation and explains how the honorariums and reimbursements will be distributed.
Follow up
Send letters of appreciation to all participants (as well as honorariums or reimbursements, if
these were not distributed at the event).
a. Analysis
• Start while still in the group:
o Listen for inconsistent comments and probe for understanding.
o Listen for vague or cryptic comments and probe for understanding.
o Consider asking each participant a final preference question.
o Offer a summary of key questions and seek confirmation.
• Immediately after the focus group:
o Draw a diagram of the seating arrangement.
o Spot-check tape recording to ensure proper operation.
o Conduct moderator and assistant moderator debriefing.
o Note themes, hunches, interpretations and ideas.
o Compare and contrast this focus group to other groups.
o Label and file field notes, tapes and other materials.
• Soon after the focus group – within hours – analyze the individual focus group:
o Make back-up copy of tapes and send tape to transcriptionist for computer entry if
transcript is wanted.
o Analyst listens to tape, reviews field notes and reads transcript if available.
o Prepare report of the individual focus group in a question-by-question format with
amplifying quotes.
o Share report for verification with other researchers who were present at the focus
group.
• Later – within days – analyze the series of focus groups (if applicable):
o Compare and contrast results by categories of individual focus groups.
o Look for emerging themes by question and then overall.
o Construct typologies or diagram the analysis.
o Describe findings and use quotes to illustrate.
For additional guidance on focus group analysis and report-writing, refer to:
http://www.tc.umn.edu/~rkrueger/focus_analysis.html
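As a minimal illustration of the ‘compare and contrast’ and ‘look for emerging themes’ steps described above, the sketch below tallies coded themes across a series of focus groups. The group labels and theme codes are invented placeholders, not data from any actual study.

```python
# Illustrative only: tally hypothetical coded themes across a series of focus groups.
from collections import Counter

# Each list holds the theme codes assigned to comments in one group's transcript or notes.
coded_groups = {
    "farmers group": ["income risk", "paperwork", "income risk", "water quality"],
    "advisors group": ["paperwork", "training needs", "income risk"],
    "residents group": ["water quality", "landscape", "water quality"],
}

overall = Counter()
for group, codes in coded_groups.items():
    per_group = Counter(codes)
    overall.update(per_group)
    print(f"{group}: {dict(per_group)}")          # compare and contrast by group

print("Themes overall:", overall.most_common())   # emerging themes across all groups
```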
b. Prepare the Report
The nature and style of the report will depend upon the audience. It is recommended to use summaries as well as specific quotes (without mentioning individuals’ names) to illustrate the
various perspectives, ideas and concerns. Some additional suggestions include:
• Consider narrative style versus bulleted style.
• Sequence could be question by question or by theme.
• Share the report with others for verification and then revise.
For detailed guidance on focus group report writing, refer to:
http://www.tc.umn.edu/~rkrueger/focus_analysis.html
Resource considerations
Focus groups require at least one month of planning plus the time required for writing the
final report. This method is relatively low in cost for each individual event but the total cost
will depend upon how many focus groups are conducted on the subject. (Often multiple focus
groups are held on a given topic.) Naturally, the cost per focus group declines when the focus
group is part of a general research program or when several groups are conducted on the
same topic.
The main budgetary items for a Focus Group are listed below.
• Personnel: project manager, moderator, assistant, honoraria for participants (if applicable)
• Travel: for project team, for participants
• Accommodation (only necessary for all-day and non-local events): for participants, for
moderator
• Food: light refreshments, meals for participants and project team, if event is all-day
• Recruitment and promotion: recruitment of experts
• Communications: paper, printing & postage for 2 mailings to participants, translation
costs (if required)
• Facilities: location for the Focus Group to meet
• Materials and supplies: cost to rent recording equipment (if applicable), tapes, nametags,
paper, pens.
Additional best practices and potential pitfalls
A focus group needs to build synergy and secure cooperation from the members. Thus, it is
crucial that communication be open and trust is built quickly. This helps encourage new
ideas. It is necessary to choose the right focus group members, as well as facilitator, in order
to make the information flow positively.
Some additional guidelines for effectiveness include:
• Secure skilled personnel to identify and moderate the focus groups.
• Record the sessions.
• Ensure the atmosphere in the group is informal.
• Use an interviewer, guide or facilitator – do not use a questionnaire.
• It is not always appropriate to give participants advance notice of the material.
Tips for moderating focus groups:
Focus groups are cheaper and quicker than in-depth interviews and the discussion may
stimulate ideas within the group, with people ‘bouncing’ ideas off each other. Focus groups
are very good for clarifying ideas and testing them out with the benefit of exploring issues in
great detail.
Traditionally focus groups last 1½ to 2 hours and participants can be recruited from a variety
of sources including existing network groups, Citizens’ Panels or on the back of other
surveys.
Focus groups are a qualitative method of consultation. The data you collect are dynamic and interactive: people’s views develop and change during the discussion, people are influenced by other members of the group, and individual views can be identified but should not be aggregated.
Before you begin, introduce yourself and the purpose of the group. As an ice-breaker it is a good idea to ask participants to introduce themselves; name labels are also useful. Set the ground rules and manage housekeeping. Eye contact is very important: look up at respondents rather than down at the guide, and make sure the discussion runs at an appropriate pace so that you cover everything you need to.
As a moderator you cannot take accurate notes of the discussion; you will always miss information or possibly interpret it differently. You should either ask someone to take accurate notes or record the discussion (seek permission to do this first). Note that conventional tape recorders are not really suitable; think about the acoustics in the venue before you book it.
Start with easy questions as a warm-up and leave sensitive, more complex questions until the end, after group dynamics and rapport have been established within the group. As moderator you
should not give your opinion or say where you stand.
Ask questions that are simple, single, open-ended and neither directive nor leading. Give people time to answer; do not rush to fill the silence or finish people’s sentences. Make sure you probe fully and do not assume you know the context or motivation for why someone has said something. Do not allow side discussions to take place, invite contributions and avoid getting locked in with one person. It is essential that you try to get everyone to take part; neutralize the ‘discussion hog’ or disruptive participant, and ask them to leave if necessary.
Always end on a positive and constructive note; invite questions and re-affirm the uses of the findings and confidentiality.
References and Resources
Gearin, E. and Kahle, C. (2001) Focus Group Methodology Review and Implementation.
Krueger, R. Analysis: Systematic Analysis Process. www.tc.umn.edu/~rkrueger/focus_analysis.html
World Bank (2001) Social Analysis: Selected Tools and Techniques. World Bank Social Development Paper Number 36, June 2001.
Dürrenberger, G. Focus Groups in Integrated Assessment: A manual for a participatory tool. ULYSSES Working Paper WP-97-2. This can be downloaded at: http://www.zit.tu-darmstadt.de/ulysses/docmain.htm
Einsiedel, A., Brown, L. & Ross, F. (1996) How to Conduct Focus Groups: A Guide for Adult and Continuing Education Managers and Trainers. University of Saskatchewan: University Extension Press.
World Café
Brief description
The World Café is a creative process for facilitating collaborative dialogue and the sharing of
knowledge and ideas to create a living network of conversation and action. In this process a
café ambiance is created, in which participants discuss a question or issue in small groups
around the café tables. At regular intervals the participants move to a new table. One table
host remains and summarizes the previous conversation to the new table guests. Thus the succeeding conversations are cross-fertilized with the ideas generated in former conversations
with other participants. At the end of the process the main ideas are summarized in a plenary
session and follow-up possibilities are discussed.
Detailed description
When to use
The World Café process is particularly useful in the following situations:
• to engage large groups (larger than 12 persons) in an authentic dialogue process (groups of 1200 have been conducted!)
• when you want to generate input, share knowledge, stimulate innovative thinking and explore action possibilities around real-life issues and questions
• to engage people in authentic conversation – whether they are meeting for the first time or have established relationships with each other
• to conduct in-depth exploration of key strategic challenges or opportunities
• to deepen relationships and mutual ownership of outcomes in an existing group
• to create meaningful interaction between a speaker and the audience.
The Café is less useful when:
• you are driving toward an already determined solution or answer
• you want to convey only one-way information
• you are making detailed implementation plans
• you have fewer than 12 persons (in this case, it is better to use a more traditional dialogue circle or another approach for fostering authentic conversation).
Procedure
Overview
In the Café event the participants explore an issue by discussing and drawing in small groups
or ‘tables’ for multiple consecutive sessions of 20-30 minutes. Participants change tables after
each session in order to ‘cross-fertilize’ their discussions with the ideas generated at other
tables. The event is concluded with a plenary, where the key ideas and conclusions are
established.
Most of the information here is taken from Brown, J. (2002) The World Café: A Resource
Guide for Hosting Conversations That Matter. Mill Valley, CA: Whole Systems Associates.
Preparation
a. Choose Café facilitator
This flexible method is relatively easy to organise. It can be organised and facilitated by a
single person or by a team, as available. In any case, one person (or possibly two) will act as
the Café facilitator(s). The job of the Café facilitator(s) is to see that the guidelines for
dialogue and engagement are put into action. It is not the specific form, but living the spirit of
the guidelines that counts. Hosting a Café requires thoughtfulness, artistry and care. The Café
facilitator(s) can make the difference between an interesting conversation and breakthrough
thinking.
The responsibilities of the Café facilitator(s) include the following:
• work with the planning team to determine the purpose of the Café and decide who should
be invited to the gathering
• name your Café in a way appropriate to its purpose
• help frame the invitation
• work with others to create a comfortable café environment
• welcome the participants as they enter
• explain the purpose of the gathering
• pose the question or themes for rounds of conversation and make sure that the question is
visible to everyone on an overhead, flipchart or on cards at each table
• explain the Café guidelines and Café etiquette, and post them on an overhead, an easel
sheet or on cards at each table
• explain how the logistics of the Café will work, including the role of the ‘table host’ (the
person who will volunteer to remain at the end of the first round and welcome newcomers
to their table)
• move among the tables during the conversations
• encourage everyone to participate
• remind people to note key ideas, doodle and draw
• let people know in a gentle way when it is time to move and begin a new round of
conversation
• make sure key insights are recorded visually or are gathered and posted if possible
• be creative in adapting the Café guidelines to meet the unique needs of your situation.
b. Clarify the purpose
Decide on the purpose and focus of the Café conversation. Ask yourself the following
questions, discussing them among the members of the organizing team, if applicable.
• What is the topic or issue we want to address or explore?
• Who needs to be invited to participate in this conversation?
• Who can contribute conventional and unconventional wisdom?
• How much time do we have for the inquiry?
• What line(s) of inquiry do we want to pursue? What themes are most likely to be
meaningful and stimulate creativity?
• What is the best outcome we can envision? How might we design a path toward that
outcome?
Explore Questions That Matter!
The question(s) addressed in a Café conversation are critical to the success of the event. Your
Café may explore a single question or several questions may be developed to support a
logical progression of discovery throughout several rounds of dialogue.
It is important to establish an approach of ‘appreciative inquiry’. The major premise here is
that the questions we ask, and the way in which we ask them, will focus us in a particular
manner, which will greatly affect the outcome of our inquiry. For example, if we ask, ‘What
is wrong and who is to blame?’ we set up a certain dynamic of problem identification and
blame assigning. While there may be instances where such an approach is desirable, experienced Café hosts have found it much more effective to ask people questions that invite
the exploration of possibilities and to connect them with why they care.
Knowledge emerges and creativity thrives in response to compelling questions. Generate
questions that are relevant to the actual concerns of the participants. People engage deeply
when they feel they are contributing their ideas to questions that are important to them.
Powerful questions that ‘travel well’ help attract collective energy, insight and action as they
move throughout a system.
[A Powerful Question is simple and clear, is thought provoking, generates energy, focuses
inquiry, surfaces unconscious assumptions, opens new possibilities, seeks what is useful]
Well-crafted questions attract energy and focus our attention to what really counts.
• Experienced Café hosts recommend posing open-ended questions – the kind that do not have ‘yes’ or ‘no’ answers.
• Good questions need not imply immediate action steps or problem solving. They should invite inquiry and discovery, rather than advocacy and advantage.
• You will know you have a good question when it continues to surface new ideas and possibilities.
• Bounce possible questions off of key people who will be participating to see if they sustain interest and energy.
Give the Café a name. The name should be appropriate for its purpose, for example CAP
Café; Environment Café and so forth.
c. Invite participants
Decide who should be invited to the gathering.
Decide upon the location. (For tips, see the section on ‘creating a hospitable space’ – physical
environment.)
Decide upon a time. Allow at least three or four hours for the event. However, depending
upon the issue and ambitions of your project, consider a kind of Café Marathon….
Make and send out the invitations. Include in the invitations the theme or central question you
will be exploring in your Café.
State it as an open-ended exploration, not a problem-solving intervention.
d. Create a hospitable space
• The social atmosphere
First and foremost, a hospitable space means a ‘safe’ space, where everyone feels free to be
him/herself and to offer his/her most creative thinking, speaking and listening.
Encourage all participants to contribute to the conversation. Inform them that, in accordance
with the World Café philosophy, each participant in the Café is seen as representing an aspect
of the whole system's diversity. As each person has the chance to connect in conversation,
more of the intelligence inherent in the group becomes accessible. A popular phrase among
Café-goers is, ‘Intelligence emerges as a system connects to itself in new and diverse ways’.
Experienced Café facilitators have found that, on occasion, it is helpful to have a ‘talking
object’ on the tables. Originally used by numerous indigenous peoples, a talking object can
be a stick or stone, a marker or saltshaker – almost anything, as long as it can be passed
among the people at the table.
There are two aspects to the talking object:
o whoever holds the talking object is the only one empowered to speak, and
o whoever is not holding it is empowered to listen.
It is not necessary to use a talking object all the time, but it can be particularly useful in cases
where the topic being explored raises impassioned responses. It can be a very effective way
to ensure everyone has the opportunity to contribute, even if they simply choose to hold the
talking object and observe a few minutes of silence.
Whether or not a ‘talking object’ is used, encourage the participants to adhere to the
following guidelines:
o The speaker’s responsibility is to focus on the topic and express his or her thoughts
about it as clearly as possible.
o The listeners’ responsibility is to actively listen to what the speaker is saying with the
implicit assumption that (s)he has something wise and important to say.
o Listen with a willingness to be influenced.
o Listen to understand where the speaker is coming from.
o Appreciate that the speaker’s perspective, regardless of how divergent it may be from
your own, is equally valid and represents a part of the larger picture that none of us
can see by ourselves.
• The physical environment
Creating a warm and inviting physical environment can contribute significantly to designing
a hospitable space. When asked where they have had some of their most significant
conversations, nearly everyone recalls sitting around a kitchen or dining room table. There is
an easy intimacy when gathering at a small table that most of us immediately recognize.
When you walk into a room and see it filled with café tables, you know that you are not in for
your usual business meeting. Creating a Café ambiance is easy and need not be expensive.
Some suggestions follow.
How to Create a Café Ambiance:
Whether you are convening several dozen or several hundred people, it is essential to create
an environment that evokes a feeling of both informality and intimacy. When your guests arrive they should know immediately that this is no ordinary meeting.
o If possible, select a space with natural light, comfortable seating, a pleasant temperature and an outdoor view to create a more welcoming atmosphere.
o Make the space look like an actual café, with small round tables that seat four or five people. Four is the ideal number.
o Less than four at a table may not provide enough diversity of perspectives, more than five limits the amount of personal interaction.
o Arrange the tables in a staggered, random fashion rather than in neat rows. Tables in a sidewalk café after it has been open for a few hours look relaxed and inviting.
o Use colourful tablecloths and a small bud vase with flowers on each table. If the venue permits, add a candle to each table. Place plants or greenery around the room.
o Place at least two large sheets of paper over each tablecloth along with a mug or wineglass filled with colourful markers. Paper and pens encourage scribbling, drawing and connecting ideas. In this way people will jot down ideas as they emerge.
o Put one additional café table in the front of the room for the host’s and any presenter’s material.
o Consider displaying art or adding posters to the walls (as simple as flipchart sheets with quotes).
o Consider playing some soft background music. Music played too loudly will be disruptive to the conversation.
o To honour the tradition of community and hospitality, provide beverages and/or snacks, if it seems appropriate.
Café Supplies Checklist
o small round tables for four people are ideal
o enough chairs for all participants and presenters
o colourful tablecloths
o flipchart paper or paper placemats for covering the café tables
o coloured water-based, non-toxic markers. For legibility use dark colours such as
green, black, blue and purple.
o Add one or two bright colours to the cup (red, light green, light blue or orange) for
adding emphasis.
o a very small bud vase with cut flowers per table
o a mug or wineglass for markers per table
o a side table for refreshments and snacks
o mural or flipchart paper for making collective knowledge visible and tape for hanging
up the sheets
o flat wall space or two rolling white boards
o additional wall (or window) space for posting collective work and/or the work of the
tables
o refreshments, if appropriate
Optional (depending on size and purpose)
o overhead projector & screen
o sound system for playing music
o a selection of background music
o wireless lavalieres for Café facilitators and handheld wireless microphones for town
meeting-style sessions
o easels & flipcharts
o basic supplies including stapler, paper clips, rubber bands, markers, masking tape,
pens, push pins and pencils
o coloured 4x6 inch or 5x8 inch cards (for personal note taking)
o large and bright colourful papers for posting of ideas
Realisation
Welcome the participants as they arrive and seat four (or five) people at the tables or in
conversation clusters. Introduce the World Café process and the issue(s) or question(s) at
hand.
Explain the purpose of this particular Café event and pose the prepared questions, posting
them where they are visible to everyone.
Explain the Café guidelines and Café etiquette and post them on an overhead, an easel sheet
or on cards at each table.
• Focus on what matters.
• Contribute your thoughts.
• Speak your mind and heart.
• Listen to understand.
• Link and connect ideas.
• Listen together for insights and deeper questions.
• Play, doodle, draw – writing on the ‘tablecloth’ sheets is encouraged.
• Have fun!
A few tips for improving our listening:
• Help people notice their tendency to plan their response to what is being said and inquire
internally as to the ways this detracts from both the speaker and the listener.
• Listen as if each person were truly wise, sharing some truth that you may have heard
before but do not yet fully grasp.
• Listen with an openness to be influenced by the speaker.
• Listen to support the speaker in fully expressing him/herself.
• Listen for deeper questions, patterns, insights and emerging perspectives.
• Listen for what is not being spoken along with what is being shared.
Set up progressive (usually three) rounds of conversation of approximately 20-30 minutes
each. Once you know what you want to achieve and the amount of time you have to work
with, you can decide the appropriate number and length of conversation rounds, the most
effective use of questions and the most interesting ways to connect and cross-pollinate ideas.
The members of each table explore together the question(s) or issue(s) at hand.
Facilitators should ask the participants to share their individual perspectives and listen for
what is emerging ‘in the middle of the table’. Encourage them to use the markers and paper
on the table to create a ‘shared visual space’ by noting key ideas and drawing the emerging
ideas. Sometimes the co-created pictures can really be worth a thousand words in showing the
relationships between ideas.
Five Ways to Make Knowledge Visible:
• Use a graphic recorder, who draws the group’s ideas on flipcharts or a wall mural using text and graphics to illustrate the patterns of the conversation.
• Take a Gallery Tour. At times, people will place the paper from their tables on the wall
so members can take a tour of the group’s ideas during a break.
• Post Your Insights. Participants can place large notepapers on which a single key insight
is written, on a blackboard, wall, etc. so that everyone can review the ideas during a
break.
• Create Idea Clusters. Group insights from the Post-Its into ‘affinity clusters’ so that
related ideas are visible and available for planning the group’s next steps.
• Make a Story. Some Cafés create a newspaper or storybook to bring the results of their
work to larger audiences after the event. A visual recorder can create a picture book along
with text as documentation.
Upon completing the initial round of conversation, ask one person to remain at the table as
the ‘host’ while the others serve as travellers or ‘ambassadors of meaning’. The travellers
carry key ideas, themes and questions into their new conversations.
Make sure that members of each table during the first round each go to different tables as the
conversational rounds progress. This cross-pollination of ideas often produces surprising
results that could not have happened otherwise. Setting up your Café in conversational rounds
and asking people to change tables between rounds allows for a dense web of connections to
be woven in a short period of time. Each time you travel to a new table you are bringing with
you the threads of the last round and interweaving them with those brought by other
travellers. As the rounds progress the conversation moves to deeper levels. People who
arrived with fixed positions often find that they are more open to new and different ideas.
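For larger Cafés it can be convenient to sketch the table rotation in advance, so that people who shared a table in the first round are dispersed across different tables in later rounds. The short Python sketch below is purely illustrative and is not part of the World Café method itself; the table size of four, the greedy dispersal rule and the fact that it ignores the hosts who remain at their tables are all simplifying assumptions.

import itertools

def plan_rounds(participants, table_size=4, rounds=3):
    # Greedy seating plan: in each round, seat every participant at the open
    # table containing the fewest people they have already sat with.
    n_tables = -(-len(participants) // table_size)  # ceiling division
    met = {p: set() for p in participants}          # who has already sat with whom
    plans = []
    for _ in range(rounds):
        tables = [[] for _ in range(n_tables)]
        for p in participants:
            open_tables = [t for t in tables if len(t) < table_size]
            best = min(open_tables, key=lambda t: sum(q in met[p] for q in t))
            best.append(p)
        for table in tables:
            for a, b in itertools.combinations(table, 2):
                met[a].add(b)
                met[b].add(a)
        plans.append(tables)
    return plans

if __name__ == "__main__":
    people = ["P%02d" % i for i in range(1, 17)]    # 16 participants, 4 tables of 4
    for round_no, tables in enumerate(plan_rounds(people), start=1):
        print("Round", round_no, tables)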
At the beginning of the consecutive rounds, the table hosts welcome the new guests and
briefly share the main ideas, themes and questions of the initial conversation. Encourage
guests to link and connect ideas coming from their previous table conversations – listening
carefully and building on each other’s contributions.
By providing opportunities for people to move in several rounds of conversation, ideas,
questions and themes begin to link and connect. At the end of the second round, all of the
tables or conversation clusters in the room will be cross-pollinated with insights from prior
conversations.
In the third round of conversation, people can return to their original tables to synthesise their
discoveries or they may continue travelling to new tables, leaving the same or a new host at
the table. Sometimes a new question that helps deepen the exploration is posed for the third
round of conversation.
After several rounds of conversation, initiate a period of sharing discoveries and insights in
whole group (plenary) conversation. It is in these town meeting-style conversations that
patterns can be identified, collective knowledge grows and possibilities for action emerge.
Conversations held at one table reflect a pattern of wholeness that connects with the
conversations at the other tables. The last phase of the Café involves making this pattern of
wholeness visible to everyone. To do so, hold a conversation between the individual tables
and the whole group. Ask the table groups to spend a few minutes considering what
occurrences were most meaningful to them. Distil these down to the essence and then have
each table share with the whole group the nuggets that are being discovered at their table.
Make sure that you have a way to capture this, either on flipcharts or by having each table
record them on large notepapers or the sheets on their tables, which can then be taped to a
wall so that everyone can see them. After each table has had a chance to report out to the
whole group, take a few minutes of silent reflection and consider:
• What is emerging here?
• If there were a single voice in the room, what would it be saying?
• What deeper questions are emerging as a result of these conversations?
• Do we notice any patterns and what do these patterns point to or how do they inform us?
• What do we now see and know as a result of these conversations?
Resource considerations
The actual Café event lasts a few hours – a minimum of four hours and perhaps a maximum
of an entire day, depending upon the topic and ambitions of the project. Of course, one can
schedule multiple Café events on consecutive days. The amount of time required to prepare
for a given event depends upon the scale of the event and the intended participants. A small
Café of 20 participants can be organized very spontaneously if the participants are readily
available. If the targeted participants have complex schedules and/or the number of
participants is very large, then the event will require at least several weeks, if not months, of
planning.
The following items listed are the main budgetary items in a World Café process:
• Personnel: project host/team
• Travel: participants
• Food: light refreshments, meals for participants only if the event is all-day
• Recruitment and Promotion: invitations to participants
• Communications: printing and distribution of final report
• Facilities: location for Café
Additional best practices and potential pitfalls
The inventors of the World Café emphasize that the process is about helping people to
"remember what they already know how to do": to convene conversations that matter. In
other words, the facilitators help the participants to be more aware of the conditions
conducive to productive, powerful dialogue, and they attempt to help participants tap into
their own knowledge and wisdom in order to create these.
Experienced facilitators strongly recommend using round tables with four persons at each.
Three is too few, and while five can work, their experience shows that the number four is far
superior.
One potential pitfall is posing questions that ask about the nature of truth. Philosophers have
spent thousands of years arguing the nature of truth, and many of the wars in history have
been fought over such questions. We are after ‘shared meaning’, which does not mean that we
all share the same perspective on what is true, but rather that each participant has the
opportunity to share what is true and meaningful for them. This, in turn, will allow us all to
see our collective situation in a different light, hopefully enlarging our individual views of
truth along the way. The experience of seasoned hosts has been that questions focusing on
‘What is useful here?’ are more effective at generating engagement among participants and are
less likely to provoke defensive reactions than questions focusing on ‘What is true?’
References and Resources
Brown, J. (2002) The World Café: A Resource Guide for Hosting Conversations That Matter.
Mill Valley, CA: Whole Systems Associates.
The World Café website: http://www.theworldcafe.com
6.2 Convergence methods aiming at decision-support
Participatory modelling
Cf. 2.5. Involving methods
Consensus conferences
• Brief description
A consensus conference is a public enquiry centred on a group of citizens (10-16) who are
charged with the assessment of a socially controversial topic of science and technology.
These lay people put their questions and concerns to a panel of experts, assess the experts’
answers, and then negotiate among themselves. The result is a consensus statement which is
made public in the form of a written report directed at parliamentarians, policy makers and
the general public that expresses their expectations, concerns and recommendations at the end
of the conference (Joss and Durant 1995). The goal is to broaden the debate on a given issue
and include the viewpoints of non-experts in order to inform policy-making. Typical topics are
socially controversial science or technology issues at the national level that depend on expert
contributions for clarification (Joss and Durant 1995). Reaching a full consensus opinion is not
strictly necessary. Consensus conferences usually have a 3-day intensive programme that is
open to the public.
• Detailed description
When to use
The objectives of a consensus conference include providing a vehicle for citizens to
meaningfully influence policy decisions, conflict assessment, clarification of attitudes and
assessing relevance of an issue to society (Problem Framing). It has also been used for social
experiments, research projects and as a means for promoting social awareness and public
debate. The process generally gives the outcome a high level of credibility because laypeople
define the agenda of the conference as well as conduct the assessment. Some effects of the
method may include new regulations, generating new debate and understandings,
consolidating politics, building bridges between interest groups and perspectives and
removing fears.
It is necessary that the topic to be addressed can be defined and delimited.
This method is most useful for combining many forms of knowledge (e.g. local, traditional,
technical). It is a useful method for obtaining informed opinions from laypersons. It can also
allow for the inclusion of subjective knowledge in scientific, technological and other
technical developments. More generally, it is a viable alternative to use when all or most of
the following criteria are present:
• Citizen input is required for policies under review or development.
• Issues are controversial, complex and/or technical.
• Many diverse groups and individuals have concerns.
• Ensuing decisions significantly and directly affect select groups or individuals.
• There is a need for increased public awareness and debate.
• There is citizen desire for a more formal involvement.
• The process of communicating information about the conference topic provides a strong
educational component.
Procedure
Overview
The consensus conference begins with the selection of a panel of citizens drawn randomly
from the general population. During two weekends prior to the public conference, this panel
discovers the significant issues relevant to the topic of concern by drawing upon experts and
documentation. From their own perspective, they then formulate a set of key questions. These
are put to a panel of experts at a public consensus conference. After two days of expert
presentation and citizen cross-examination, the citizens’ panel composes a report based upon
their comprehensive learning and expert response to their key questions. The report is
presented on the final day of the conference and then disseminated to policy-makers, major
stakeholders and other interested groups and individuals, constituting public input into public
policy.
The main participants in the consensus conference are the 10-16 citizens that make up the lay
panel. They are selected to create a group of non-experts with no vested interests with regard
to the conference topic, but representative of several attitudes towards the issue. The group is
balanced on age, gender, education, occupation and area of residence. The lay panel
participants are selected from respondents to advertisements about the consensus conference
in regional and national newspapers. The respondents send in a brief written description of
themselves, their knowledge of the topic and their motivations for participating (Joss and
Durant 1995).
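The balancing of the lay panel can be supported by a simple selection routine. The following Python sketch merely illustrates one possible greedy approach, assuming each respondent is described by a handful of categorical attributes; it is not the selection procedure prescribed by Joss and Durant or by any national consensus conference office.

import random
from collections import Counter

ATTRIBUTES = ["gender", "age_band", "education", "occupation", "region"]

def select_panel(applicants, panel_size=14, seed=0):
    # Greedily pick the applicant whose attribute values are currently the
    # least represented in the panel, so the panel ends up roughly balanced.
    random.seed(seed)
    pool = list(applicants)
    random.shuffle(pool)                             # break ties fairly
    counts = {a: Counter() for a in ATTRIBUTES}
    panel = []
    while len(panel) < panel_size and pool:
        def redundancy(person):
            return sum(counts[a][person[a]] for a in ATTRIBUTES)
        pick = min(pool, key=redundancy)
        pool.remove(pick)
        panel.append(pick)
        for a in ATTRIBUTES:
            counts[a][pick[a]] += 1
    return panel

if __name__ == "__main__":
    respondents = [{"gender": random.choice(["female", "male"]),
                    "age_band": random.choice(["18-30", "31-50", "51-70"]),
                    "education": random.choice(["basic", "secondary", "tertiary"]),
                    "occupation": random.choice(["employed", "self-employed", "student", "retired"]),
                    "region": random.choice(["north", "south", "east", "west"])}
                   for _ in range(60)]
    for member in select_panel(respondents):
        print(member)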
A number of experts is also involved. They can be scientific experts or
representatives of interest organisations. They are abreast of the latest knowledge and have a
good overview of the topic. The organisers select the members of this expert panel on the
basis of the wishes voiced by the lay panel. The expert panel is composed so that essential
opposing points of view and professional conflicts are visible within the conference.
The main tools in the consensus conference are the series of 8-10 key questions and a number
of sub-questions. These are formulated by the lay panel and are based on the information
provided to the lay panel during the preparatory weekends, the lay panels’ own reading and
knowledge and the tracking of the present public debate. The experts receive the questions in
advance of the conference in order to prepare their answers carefully. During the consensus
conference the experts each give a presentation responding to the key questions (Joss and
Durant 1995).
The conference itself is an intensive 3-day programme, with expert presentations, questions
from the lay panel, and discussion sessions between the members of the lay panel. A
facilitator who is also non-expert in the issue chairs these sessions. The entire conference
(except the lay panel discussion sessions) is open to the public. It is preceded by two
preparatory weekends at four months and one month prior to the conference. The preparatory
weekends are for the lay panel to prepare for the conference. During the first preparatory
weekend the main objective for the lay panel is to identify key questions to be addressed at
the consensus conference and to indicate the type of experts that the lay panel would like to
address the questions to. A speaker is invited to give a basic presentation on the topic to
initiate discussion, while brainstorming sessions reveal the lay panel’s expectations, worries
and questions in relation to the topic. The aspects appearing in discussions and brainstorm
sessions form the point of departure for continued discussion on the key questions.
The main activities in the second preparatory weekend are further discussions, one or two
short presentations based on the wishes of the first weekend and finalisation of the key
questions. The sessions alternate between group and plenary sessions and are led by the
facilitator. The wording of the questions is finalised before the end of the weekend. There is
also opportunity to comment on the composition of the expert panel selected by the
organisers based on the wishes of the lay panel expressed in the first preparatory weekend.
The finalised questions are forwarded to the agreed panel of experts in preparation for their
presentations at the conference.
At the consensus conference itself, during the first day, the invited experts respond to each of
the lay panel’s key questions. They deliver their answers in short presentations of 20-30
minutes, highlighting key areas where knowledge is lacking, and possible solutions. This is
followed by an opportunity for the lay panel to ask a few additional questions for
clarification. If time permits, the experts add to their presentations important points, which
they believe the lay panel should consider. In the course of the day, the conference can hear
up to 15 presentations. During the evening of the first day, the lay panel meets on its own,
and decides which aspects of the key questions have been explained sufficiently and which
need further clarification. On this basis, they compile the questions that should be asked of
the experts on the second day.
On the second day, the lay panel poses supplementary questions to the experts for
clarification. In some cases, the audience may pose additional questions and react on the
experts’ answers. This second day, the facilitator acting as chairperson plays an important
role: s/he is charged with focusing the attention of the experts on the questions and repeating
them if no clear answer is given. The afternoon and evening of the second day are used by the
lay panel to prepare the final document. Using the key questions as a basis, the lay panel
writes down argumentative evaluations and recommendations concerning measures related to
the central topic. Writing the contents of the final document is carried out in subgroups,
alternating with plenary sessions. The preparation of the final document is a process in which,
through an open discussion, every effort is made to attain the largest consensus between the
lay-panel members on actions to be recommended. Minority opinions are only allowed when
the process reveals very wide differences of opinion.
The lay panel presents the final document at the conference on the third day. The experts are
then allowed to correct technical errors and misunderstandings, but they may not alter the
actual content. Finally the experts and the audience have an opportunity to address questions
to, and to discuss the conclusions with the lay panel.
Preparation
The main persons involved in the planning and execution of a consensus conference include
the following:
o project management (director, assistant and clerical staff)
o advisory/steering committee (‘planning group’): 5 – 6 persons
o citizens’ panel (panel of laypeople): 12 – 15 persons
o expert panel: approximately 20 persons
o facilitator
The main responsibilities of each of these individuals or groups are detailed below.
• Project management
The tasks of the project management include:
o preparing the project
o managing the different partners
o taking care of the lay panel
o contacting experts
o balancing the budget
o taking care of the press
o assisting the lay panel in writing the recommendations
o documenting the conference.
• Advisory/Steering Committee
The advisory committee is composed of topic stakeholders that may include – but are not
limited to – regulators, policy-makers, scientists, industry and non-governmental agencies.
They should be selected for their knowledge of and expertise in topic-related fields and for
the diversity of their viewpoints. This means that they speak for themselves as members of
the steering committee and not for their organisations. The committee aids the project
management in setting the aims, determining the conference scope, identifying potential
experts (and funding sources, if applicable) and compiling the initial information package.
The tasks of the advisory/steering committee include:
o ensuring that the project is objective
o monitoring the process
o discussing the content with the project manager
o ensuring that papers for the lay panel are relevant and neutral
o making a list of the best experts on the subject
o deciding – together with the panel – which experts to call upon
o giving approval of the conference programme.
• Citizens’ Panel
Ideally, the members of the citizens’ panel are supposed to be representative of the population
at large. Recruitment is usually done three to four months prior to the first study weekend.
In addition to meeting any other qualifying criteria, all applicants are asked to be available for
both study weekends and the consensus conference. Participants must volunteer their time.
The project must pay for their travel, accommodation and all other expenses (child care,
holidays, etc).
The tasks of the citizens’ panel include:
o developing a profound understanding of the subject
o deciding the agenda and preparing questions for the conference
o questioning the experts during the conference
o writing recommendations related to the questions
o presenting and discussing recommendations.
• Facilitator(s)
One or two professional facilitators who are experienced in participatory and consensus-based
processes should be recruited. The facilitator(s) must be non-directive and committed
to the citizen-driven aspect of the process. They will be required to facilitate the two study
weekends and the conference itself.
The tasks of the facilitator(s) include:
o steering all processes during the study weekends and conference
o managing the dialogue during the study weekends and conference
o chairing the conference
o assisting in writing the document.
• Reference Persons
The reference persons are 12 to 15 people with both traditional and non-traditional expertise
in the topic at hand. They are only required to attend the consensus conference weekend.
Both pro and con viewpoints should be represented in each of the issue areas, which may
include social/ethical, science, policy, environment, health and safety and economics.
Compilation of the expert pool begins very early in the planning process and requires that the
project management team anticipate the categories in which the citizens’ panel may pose
questions. Potential reference persons in these categories are contacted, informed of the
process and expectations and asked if they are both willing and available to participate if
selected. Actual selection of the reference persons is done by the citizens’ panel during the
second study weekend after they have finalised their key questions.
While the citizens’ panel will identify the types of reference persons it desires at the
conference, it may not know the names of specific individuals. The citizens’ panel may
assign this task to the project management team. Although the reference person pool
compiled by the project management team may contain dozens of names, it seldom satisfies
all the requirements of the citizens’ panel. Consequently, the team must locate, invite and
inform suitable reference persons as soon as possible after the second study weekend, so that
they can be confirmed early enough prior to the conference.
Once the reference persons are confirmed, they are sent the one or two questions related to
their particular expertise. They are asked to prepare a short presentation in response to the
question(s) provided. This is to be delivered at the consensus conference and will be followed
by cross-examination. Some additional reference persons may be required at the study
weekends to aid the citizens’ panel in their understanding of the issues. Their role during
these weekends is determined by either the management team or the citizens’ panel.
The experts may or may not be provided with honoraria, depending on the customs that apply
in the society. This must be determined at the budgeting stage. Members’ travel, food and
accommodation expenses are customarily covered in any case.
The Consensus Conference
a. Preparing and informing the citizens’ panel
Between the selection of the citizens’ panel and the first study weekend, it is customary to
provide the panellists with some preliminary information on the topic at hand. The nature and
extent of this information is a decision to be made by the project management and advisory
committee. Various components of the package may be requested from stakeholders,
provided by the advisory committee or other experts. Information packages need not be
limited to print. Between the first and second study weekends and between the second study
weekend and the conference, the citizens’ panel may itself request specific kinds of
information or expertise be provided – or may gather and share information amongst the
panellists themselves. In the interests of informed choice, a concerted effort should be made
to present all sides of the issues with the information package and all subsequent information.
However, it should not be expected that every panellist will have the time or interest to
read/view/listen to the entire package.
b. First weekend
The first study weekend represents the beginning of a steep learning curve for the members of
the citizens’ panel, who will become better informed on the topic over the coming months.
This is also the beginning of relationship-building between the project management team, the
facilitator and the panellists. One of the objectives of the process is to have the panellists take
a progressively more active role in the decision-making. Thus, while the study weekend
begins with an agenda planned by the project management team, it should be flexible and
adaptable to the readiness of the panellists to assume control. One of the roles of the
facilitator is to guide this gradual shift of control over the process. It is imperative to give the
group enough space to develop their own thoughts and attitudes, without external
disturbance. All tasks required of the panellists will be made easier if the project management
team makes every effort to provide for all their needs. This includes but is not limited to
pleasant accommodation, good food, scheduled breaks and opportunities for social interaction
away from the learning or conference venues.
Objectives:
• Competency development:
- provide basic information
- introduce the context and method
- interact with reference persons, the public and the media
• Team-building
• Identify areas of interest or concern
• Begin formulation of questions
• Set agenda for the second study weekend
The tasks of the participants for the first study weekend are as follows:
Project management: Provide an overview of the topic context and the expectations and
stages of the process. Provide a broad information base upon which the panellists can begin
to formulate their key questions.
Facilitator(s): Guide the citizens’ panel in reaching decisions by consensus. Aid the citizens’
panel in the assumption of control of both the direction and the process. Team-building.
Citizens’ panel: Become familiar with the full range of issues relevant to the topic. Identify
the areas of greatest concern or interest to them. In each of those areas, formulate a set of
questions to which they seek answers. Set an agenda for the second study weekend.
Consideration: If the project management team wishes to tape the proceedings, it is necessary
to obtain the permission of the citizens’ panel in advance of this weekend. Some conferences
are taped for the purposes of the organisers, others are taped by local or national news media
for promotion or documentary purposes. TIP: Due to time and/or complexity issues, it is
entirely possible that all the tasks set for this weekend will not be accomplished. Some
conferences have solved this by having the panellists continue to work on the formulation and
categorisation in the month between the two study weekends. This is made easier if all are
electronically connected. Others encourage strict enforcement of the agenda. Some
conference management teams have started the first study weekend with an in-depth
exploration of core values and assumptions held by the citizens’ panel. The rationale is that
by having these on the table, panellists would have a better understanding of why certain
positions are held by others. A listing of these values may also be used to help direct the
identification of areas of interest, the formulation of key questions, and the placing of
emphasis in the final report. However, such an exercise may be met with a degree of
resistance, as some panellists have noted that, at such an early stage in the process, they were
not yet comfortable enough with their fellow panellists to reveal their most personal beliefs.
c. Second weekend
The agenda for this weekend should have been planned by the citizens’ panel during the first
study weekend and will thus follow from whatever they set out. As much as possible, or to the
degree with which they are comfortable, the panellists should control both the facilitation of
the process and any decisions made. The management team and facilitator support this shift
of control by assuming whatever roles the citizens’ panel assigns them. This increases both
the panellists’ responsibility for, and ownership of, the process and its outcomes.
Objectives:
• Further develop competence for the final public weekend.
• Shift control of the process decisions and facilitation to the citizens’ panel.
• Formulate a set of key questions.
• Identify types of experts required for the conference.
• Plan the conference.
The tasks of the participants for the second study weekend are as follows.
Project management: Organise the second study weekend in accordance with the citizens’
panel’s wishes. Prepare a list of experts available for the consensus conference.
Facilitator(s): Support the citizens’ panel in assuming control of process and outcomes.
Citizens’ panel: Refine questions to one or two overarching questions in each issue area. Select experts
to address questions at the consensus conference. Plan the consensus conference.
d. Third public weekend
The consensus conference is a three-day public event in which a citizen-driven discussion
takes place between citizens and experts. By this point, the members of the citizens’ panel are
well-informed on the topic at hand. At the conference they have two roles: as citizens
representative of the general public and as well-informed citizens in discussion with reference
persons. They must keep both of these roles in mind in their interactions with other
conference participants so that the proceedings may be as meaningful to attendees as to the
two panels.
The tasks of the various participants during the conference are as follows.
Project management: Conduct registration and trouble-shooting. Coordinate media access to
citizen and expert panellists.
Facilitator: Assist the citizens’ panel with the conference proceedings and the writing of the
final report.
Moderator: Facilitate the timing and flow of the conference.
Expert panel: Make presentations based on key questions. Respond to cross-examination by
the citizens’ panel. Respond to questions from the audience. Be available for media
interviews.
Citizens’ panel: Cross-examine the expert panel. Write and present the final report. Be
available for media interviews.
Conference Day One
The first day is devoted to expert presentations, followed by cross-examination by the
citizens’ panel. The citizens’ panel will have determined the format during the second study
weekend. The audience plays only a passive role on this day, observing the proceedings. At
least one person from the project management team should focus on the needs and requests of
the two panels, while another is responsible for trouble-shooting. This is the longest of the
days, with six to eight hours of presentations and cross-examination not uncommon. At the
conclusion of the day, the citizens’ panel meets to review the day’s proceedings and to
determine which of its questions and concerns are still outstanding. Members formulate a set
of supplementary questions that are put to the expert panel on the next day. It is helpful to
have one member of the project management team solely responsible for media relations and
coordinating media interviews with both panels. The project director should also be available
for media interviews. Some of these duties, as well as the registration, may be delegated to
volunteers.
Key features:
expert presentations
cross-examination of experts by citizens’ panel
expert rebuttal.
Conference Day Two
The citizens’ panel first poses its supplementary questions to the expert panel and the panel
responds. When the citizens’ panel has concluded its questioning, the forum may be opened
to audience questions to the expert panel. Such audience participation is not a feature of all
conferences! At the conclusion of the formal proceedings, the citizens’ panel retires to write
its report behind closed doors, supported by the facilitator and a scribe. The report is
structured around the key questions and incorporates all that the panellists have learned and
heard throughout the study weekends and the conference itself. Traditionally, the report is
written between the second and third day of the conference and is presented on the final day.
The facilitator’s role here is most sensitive. (S)he must motivate and encourage without
appearing excessively directive.
Key features:
supplementary questioning of the expert panel by the citizens’ panel
audience questioning of the expert panel
citizens’ panel writes final report.
Conference Day Three
Prior to the day’s proceedings, the project management team makes copies of the report for
the expert panel and audience members. The consensus conference is concluded with the
presentation by the citizens’ panel of their report. The expert panel, followed by the audience,
are allowed to ask the citizens’ panel questions of clarification. The expert panel may only
make changes of factual error, as the report represents the perspective and conclusions of
citizens. Afterwards, the report is finalised, printed and disseminated. The report writing is a
very intense and time-constrained process that is mentally and emotionally draining for the
citizens’ panel members and the facilitator. It is not uncommon to have it concluded in the
early morning hours. It is thus important that their every need and comfort is anticipated or
promptly addressed. Alternatively, some people/cultures may prefer to enforce a strict time
limit in order to avoid working throughout the night.
Key feature:
Citizens’ panel presents their report and fields questions from the expert panel and audience.
Resource considerations
Schedule
From start to finish, this process requires, on average, twelve months. However, this can be
condensed to a more intensive process of approximately seven months. The 12-month
overview is presented as a countdown up to the beginning of the consensus conference. Then
the schedule of the conference and post-conference events are presented in normal
chronological order.
One Year Prior to the Consensus Conference:
The first step is to recruit the advisory/steering committee. Once selected, this committee and
the organisers set the context for the conference (see contextual considerations above).
Concurrently and continuing for several months is the identification and contact of potential
funding sources (if applicable).
Four to Six Months Prior:
Recruit and select the citizens’ panel. Organisers should begin to build a pool of potential
expert panellists, as well as recruit a facilitator(s) and conference moderator. With the help of
the advisory committee, a set of informative readings, tapes or videos can begin to be
assembled for the citizens’ panel. Design the conference promotion materials.
Three Months Prior:
Prepare an information package and send this to the members of the citizens’ panel prior to
the first study weekend.
Two Months Prior:
The first study weekend is held. This is the first meeting of the citizens’ panel. The purpose
of this first weekend is to introduce the topic, identify key issues and questions, begin to
identify the type of experts desired at the conference and to plan the second study weekend.
Expert recruitment continues.
One Month Prior:
The second study weekend is held. This is the final study weekend of the citizens’ panel.
Tasks include further education on the topic, finalisation of key questions and sub-questions,
finalising the selection of experts for the conference and planning the conference agenda.
Conduct conference promotion and registration.
The Consensus Conference Weekend:
This public event normally covers three days and does not necessarily have to take place over
a weekend. Day One is normally filled with expert presentations and cross-examination by
the citizens’ panel. On Day Two, there is supplementary questioning of the expert panel by
the citizens’ panel as well as the audience. When the questioning closes, the citizens’ panel
retires to write its report privately. The citizens’ panel presents its report to the experts and
audience on the morning of Day Three. The experts may correct errors of fact only, before
they and the audience are given the opportunity to question the citizens’ panel.
One Month Post:
The final report of the citizens’ panel is corrected for grammar, printed and disseminated to
policy-makers, industry, nongovernmental organisations and other interested groups and
individuals. It represents public input into public policy. The citizens’ panel debriefing may
also occur one to two months after the conference.
One To Twelve Months Post:
The evaluation is conducted.
One Day before the public conference
Lay panel arrives, sees the conference facilities and has dinner together.
Day 1 of the public conference
Experts give their presentations, answering the questions prepared by the lay panel during the
study weekends. In the afternoon, time is allotted for the experts to elaborate and to clarify any
questions of the panel.
Day 2 of the public conference
The lay panel questions and debates with the expert panel. After the citizen panellists’
questions have been answered, the forum may be opened to the public audience.
Traditionally, the next tasks have been conducted during the evening and throughout the
night. Alternatively, one can add another day to the process.
The panel discusses to decide its recommendations and prepares the report.
The document is printed for distribution and uploaded on the website.
A press release containing the principal recommendations of the lay panel is prepared and
sent to the press.
Day 3 or 4 (depending upon whether a day is added for debate and the writing of the report)
The lay panel presents the final document (reads it out loud) to the expert panel, politicians,
the press and the rest of the audience. Comments are made and factual errors corrected.
Conduct project evaluation.
After the conference
The final document of the lay panel is set out in a report together with the written
contributions of the experts.
Principal conclusions of the lay panel are communicated to members of the Parliament in a
newsletter (and to any other relevant persons).
Summarise project evaluations and post them on the website.
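Once the date of the public conference is fixed, the countdown above can be turned into concrete calendar dates. The following Python sketch is purely indicative; the lead times are rough translations of the schedule above into weeks, and the conference date used in the example is hypothetical.

from datetime import date, timedelta

# Approximate lead times in weeks before (positive) or after (negative) the
# first day of the public conference; figures are indicative only.
MILESTONES = [
    ("Recruit advisory/steering committee and set the context", 52),
    ("Recruit citizens' panel, build expert pool, recruit facilitator(s)", 20),
    ("Send information package to the citizens' panel", 13),
    ("First study weekend", 9),
    ("Second study weekend", 4),
    ("Public consensus conference (3 days)", 0),
    ("Print and disseminate the final report", -4),
    ("Conduct and summarise the project evaluation", -12),
]

def countdown(conference_start):
    # Print each milestone next to its indicative calendar date.
    for task, weeks_before in MILESTONES:
        when = conference_start - timedelta(weeks=weeks_before)
        print("%s  %s" % (when.isoformat(), task))

if __name__ == "__main__":
    countdown(date(2026, 6, 5))   # hypothetical conference date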
Budget
The process is elaborate and requires significant resources. Costs will vary depending on the
conference scope (i.e. regional versus national), selection method, transportation and
accommodation and the type and amount of advertising. The following items listed are the
main budgetary items in a consensus conference:
• Personnel: project manager, communications manager/assistant, facilitator(s), clerical staff, moderator
• Travel
• Accommodation
• Food: study weekends; consensus conference (panels and audience); media reception after the consensus conference
• Recruitment and Promotion: mailings to recruit citizens’ panel, conference promotion and advertising
• Communications: printing of conference papers, printing of draft and final reports
• Facilities: study weekends, consensus conference
Additional best practices and potential pitfalls
The recruitment method may not ensure representative participation. Multiple conferences
may be required to ensure that broad, representative opinions are sought.
References and Resources
Abelson, J., Forest, P-G., Eyles, J., Smith, P., Martin, E., & Gauvin, F-P. (2001) A Review of
Public Participation and Consultation Methods. Canadian Centre for Analysis of
Regionalization and Health http://www.regionalization.org/PPfirstpage.html.
Andersen, I-E. & Jæger, B. Danish participatory models. Scenario workshops and consensus
conferences: towards more democratic decision-making. A revised and updated version of an
article first published in Science and Public Policy, October 1999, Vol. 26, No. 5,
pp. 331-340. http://www.pantaneto.co.uk/issue6/andersonjaeger.htm.
Banthien, H., Jaspers, M., Renner, A. (2003). Governance of the European Research Area: The
role of civil society. Interim Report. European Commission Community Research.
Chevalier, J. Forum Options. The Stakeholder/Social Information System.
http://www.carleton.ca/~jchevali/STAKEH.html
Cointelligence Institute. 2002. A Toolbox of processes for community work. http://www.co-intelligence.org/CIPol_ComunityProcesses.html [accessed 3 Jan 2002].
COSLA. (1998). Focusing on Citizens: A Guide to Approaches and Methods. Available at:
http://www.communityplanning.org.uk/documents/Engagingcommunitiesmethods.pdf
[accessed 3 Jan 2002].
Danish Citizen Technology Panels. http://www.co-intelligence.org/P-DanishTechPanels.html
Einsiedel, E. and Eastlick, D. Unpublished paper. Convening Consensus Conferences: A
Practitioner’s Guide. University of Calgary. Calgary, Canada.
ICIS Building Blocks for Participation in Integrated Assessment: A Review of Participatory
Methods.
O’Connor, Desmond M. (1985-1994) Constructive Citizen Participation: A Resource Book.
Victoria, BC: Connor Development Services. Fifth Edition 1994.
Charrette
• Brief description
Charrette is an intensive face-to-face process designed to bring people from various subgroups of society into consensus within a short period of time. The pre-Charrette planning
breaks the main issue into component parts, to which sub-groups of people are assigned. The
subgroups periodically report back to the whole group and feedback from the whole is then
addressed in the next round of sub-group discussions. This sequence is repeated until
consensus is reached at the final deadline for a report. Charrettes vary in size, from 50 to over
1,000 people, and in time, from four days to two weeks.
• Detailed description
When to use
Charrettes have often been applied to development, design and planning projects at the local
community level, but can be adapted to address other topics and geographical areas. In
general, a Charrette will:
• assemble practical ideas and viewpoints at the beginning of a planning process
• encourage input and collaboration from a wide range of participants
• facilitate decisions on difficult issues when a process is mature
• resolve indecision or deadlocks between groups toward the end of a process
• develop feasible projects and action plans with specific practical steps, based upon citizen input
• identify potential funding sources for projects.
Procedure
Overview
The pre-Charrette phase focuses on developing and working with a steering committee that
will determine the primary focus of the Charrette and handle the logistics for the next two
phases. It is suggested that the steering committee work with the Charrette facilitator to
identify a preliminary set of issues to be addressed during the Charrette.
The Charrette workshop is an intensive planning and design workshop involving participants
in assessing needs, interviewing stakeholder groups, prioritising issues, developing
recommendations, identifying specific projects and generating implementation strategies.
The post-Charrette phase comprises the preparation of a final document that outlines
strengths, challenges, recommendations, specific projects, action steps and potential funding
sources. It also includes preparing and delivering a formal presentation that is open to the
public. It is during this phase that implementation begins.
This section is largely a reproduction of Segedy, J. and Johnson, B., The Neighborhood
Charrette Handbook: Visioning and Visualising Your Neighborhood’s Future (see references).
Realisation
a. Personnel and tasks
• Project Manager
The project manager can be one person or a team. The responsibilities of the project manager
are as follows:
o oversee the entire process
o identify citizens who will be in charge of establishing the steering committee
o print and disseminate the final report
o serve as a contact person for post-Charrette activities.
• Steering Committee
To begin the process and see it through to fruition, it is usually best to identify a diverse
group of citizens who can coordinate and facilitate the establishment of a strong steering
committee.
This is a community effort. Create a citizen action group that represents a broad base of
community interests (which will vary depending upon the issues addressed) according to the
following guidelines. The Committee should:
o comprise between 9 and 15 persons
o ensure diversity of opinions and ideologies
o include people actively interested in the issues and their solutions. For example:
- members of the business community
- neighbourhood/citizen/homeowner associations
- elected officials (local, regional, national, supranational)
- academic specialists
- technical experts
- church/religious organisations
- youth
- service groups
- public/private schools (faculty, staff, students, administration, etc.)
- senior citizens
- persons from adjoining cities, regions, etc. (as applicable)
The responsibilities of the steering committee include:
o coordinating Charrette activities
o establishing timeline and meeting schedule
o establishing a preliminary list of issues/the Charrette focus
o arranging for financial support and managing the Charrette budget
o assisting in workshop facilitation.
b. The pre-charrette
• Issue/problem identification
This is a very important step in the process. The stakeholders must first determine that they
want to get involved in this process and are willing to do something with the results. While
the Charrette workshop itself is a community-wide endeavour, it begins with the efforts of a
few dedicated leaders that will establish the foundation. They must:
o define the primary and secondary issues related to the project
o determine the scope of the project
o identify the geographic area of the project (if applicable).
• Identify and invite Charrette participants (team)
Arrange to have an appropriate facilitator.
The Charrette ‘team’ is usually a group of individuals with a broad range of skills and
backgrounds. The team will be primarily responsible for producing the tangible results of the
workshop. Sometimes, all interested persons are welcomed to participate in the Charrette,
either as members of the Charrette team or more casually as observers. There are advantages
and disadvantages to having local and outside team members. Local members bring unique
insights to the process while outside members can bring a fresh, and objective, viewpoint to
the activities. It is important that the team be assembled for its skills, not just for the interests
of the individual members.
• Develop community relations and public awareness
The key to making the Charrette an integral part of a successful community effort is an
informed public. Please refer to the suggestions provided in the section of ‘General guidelines
and tips’.
• Assemble support information
An effective process begins with good information. Much of the Charrette process builds on
public input, but a solid base of technical information is critical to having accurate
information. The type of information required will depend upon the topic. However, often
existing plans and historical profiles are especially useful. It will always be necessary to:
identify key players in the community and document existing conditions. Some possibilities
include:
- governmental regulations
- reference materials and examples of related projects
- photos
- maps
- previous planning documents
- studies or reports
- demographics and/or statistical information
- video/photographs/sketches
- surveys
- historical profiles (newspaper files, photos, archives, historical societies, books, etc.)
TIP:
It is strongly suggested that the Charrette has a strong visualisation component. This means
that the products of the Charrette will include an ample amount of pictures and drawings to
help illustrate the issues and ideas that arise from the process. To facilitate this, slides and/or
prints of the study areas (if applicable) should be taken prior to the actual Charrette. These
snapshots can then be used as the basis for before/after comparisons. Aerial photographs can
also be very helpful in illustrating large-scale and site associated issues.
• Logistics
The actual Charrette workshop is the most visible aspect of the process. If the planning is
well executed beforehand, the Charrette itself – while often an example of ‘organised chaos’
– will be a fun and productive opportunity for the community to build and visualise its future.
Several months prior to the Charrette:
Hold an organisational meeting with the steering committee and the Charrette facilitator to set
goals and arrange a basic schedule. The steering committee should hold regular meetings to
ensure that all necessary preparations are being made.
The following need to be arranged:
o Establish dates.
It is not possible to find a ‘perfect’ date, but every effort should be made to minimise
conflicts. It should also be noted that the ‘days’ do not have to be contiguous. In some
cases it is better to have several days between sessions to allow the team and community
to ‘catch its breath’. However, spreading the process out over too long a period of time
will lose momentum and public interest in the process.
o Establish the location for the Charrette workshop.
o Prepare the schedule for the Charrette workshop.
o The actual schedule must be flexible. Public meeting times should be firm and
closely adhered to, but you do not want to miss out on spontaneous opportunities or
stop creative energy just to keep on schedule.
o Make a list of participants to be invited (particularly experts and specialist interest
groups) and send out invitations. Charrettes require discipline and may become
difficult when particularly vocal individuals – who do not respect others – are invited
to attend.
o Meals should be arranged for the Charrette team/participants. Some food can be
catered to the location and some can be off-site. Local restaurants and/or service
groups can donate/prepare meals. It can be motivating to invite participants in the
morning sessions to stay for lunch.
o Arrange accommodation for out-of-town participants and transportation to and from the
Charrette facility location. Provide materials and supplies.
One month prior to the Charrette: All plans should be finalised. Send out first press
releases.
c. The Charrette workshop
The following is a sample schedule for the Charrette workshop; it can be varied.
Session #1: Steering Committee Meetings/Charrette Team Meetings
Goal: to develop a working relationship between the Charrette team and the steering
committee.
This can be held the night before the workshop, at or after dinner or at a breakfast meeting.
The steering committee and Charrette team should introduce themselves, providing a short
background and some interests. Then the steering committee can share and explain their
issues list with the Charrette team. An informal setting and casual conversation is more
effective at this stage.
Session #2: Context Development
First day, morning.
Goal: to give the Charrette team a first-hand look at the community, provide the team with an
overview of the background information and – if applicable – a first-hand look at the issue
being addressed.
If the issue being addressed has a physical component, a tour of the area can be arranged for
the Charrette team and the steering committee. The following should be done in this session:
o The steering committee summarises its interests. A list of these interests should be
attached to the wall so that it is easily visible in the room.
o View any videotapes or slides on the issue.
o Study maps, photos, etc. (if applicable).
o Review planning reports and other technical documents.
Session #3: Interview and Input Sessions
First day, morning.
Goal: to provide the opportunity for diverse citizens’ and public groups to discuss issues with
the Charrette team.
Divide the study team into small groups to facilitate interaction and effective communication.
Schedule interview times so that each group can be properly heard. Each interview session
should run approximately 45 minutes; multiple groups can be interviewed simultaneously,
each with its own facilitator and recorder.
Allow time for questions from the Charrette team (about goals, needs, liabilities, assets, etc.).
After the interview session, give each participant a strip of colour dots and ask them to ‘vote’
for the most important issues by placing the dot next to the issue(s) on the list that is on the
wall. They can put all their dots on 1 issue or distribute them as they see fit. This helps
prioritise the issues.
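The dot exercise amounts to a simple weighted vote. The Python sketch below merely illustrates how the dots could be tallied and the issues ranked; the issue names and dot counts are hypothetical examples, not data from any actual Charrette.

from collections import Counter

def tally_dots(votes):
    # votes is a list of (issue, number_of_dots) pairs, one per participant choice
    totals = Counter()
    for issue, dots in votes:
        totals[issue] += dots
    return totals.most_common()     # issues ranked from most to least dots

if __name__ == "__main__":
    ballots = [("traffic safety", 3), ("green space", 1), ("traffic safety", 2),
               ("local shops", 4), ("green space", 2), ("traffic safety", 1)]
    for issue, dots in tally_dots(ballots):
        print("%-15s %d dots" % (issue, dots))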
Session #4: Team Analysis and Issue Clarification
First day, afternoon.
Goal: to provide an opportunity for the Charrette team to assimilate and discuss observations
and prepare for the public meeting. This is a critical regrouping of the Charrette team to
brainstorm, share ideas, do initial analyses (such as SWOT), develop preliminary
observations and recommendations and prepare for the evening public session.
Session #5: Open the Doors -- Community Discussion and Feedback
First day, evening.
Goal: to summarise the Charrette team’s initial impressions in order to provide the
community with preliminary assessment and analysis and to obtain broader citizen input and
feedback.
Charrette team: summarise input and analysis. Present SWOT (Strengths, Weaknesses,
Opportunities and Threats) and/or any other analysis results. Present goals, objectives and
priorities (from dots exercise).
Community: Provide feedback on the Charrette team’s initial impressions. Confirm or
redirect the focus.
Sessions # 6 - ?: Development of Goals and Objectives/Recommendations
Goal: to develop proposals and solutions in response to the specific issues.
Create a smaller working group for each priority issue and divide participants into each
group.
Each sub-group should contain at least one specialist/expert on the specific issue.
The sub-groups meet to generate proposals and solutions for their specific issue.
The whole group comes together to present the ideas of the sub-groups, discuss, make
suggestions and coordinate their sub-projects, etc.
The sub-groups meet again to revise their proposals/plans, incorporating the input received
from the whole group. This process of pulsing between the sub-groups and the whole
continues as necessary (or as time permits). Ideally, the workshop should comprise at least
four days to allow time for enough feedback cycles.
At the end of this pendulum process, the whole Charrette team and the members of the
steering committee meet to finalise their ideas, coordinate their projects and
recommendations and prepare action plans for each project team.
The final session will be the presentation of the Charrette workshop results to the community.
Here, this is presented as part of the post-Charrette activities.
d. Post-Charrette workshop activities
The post-Charrette activities can be broken down into three steps, each of which will be
elaborated below:
o document and presentation preparation
o presentation and approval
o implementation and benchmarking.
• Document and presentation preparation
Following the completion of the Charrette workshop the Charrette team should first complete
the following items:
o a newspaper ‘tab’
o a reader/user friendly document
o formal presentation materials.
The newspaper ‘tab’ (a specially printed newspaper insert) should be printed and delivered
with relevant newspapers or other community media. This insert should include a summary
of the findings, ideas, projects and recommendations. The newspaper tab has several
purposes: first, to give the general public a chance to learn about what is happening in the
community; second, to further solicit input and information (a planning process is never
finished) and third, to interest and encourage people to attend the final presentation. Make
sure there is at least a week between the publishing date of the newspaper tab and the final
presentation.
The ‘final document’ should be completed using the information and ideas collected to date
and should be finalised after the final presentation (there will surely be some minor changes
following the presentation). It is critical that the final report be:
o action oriented
o user friendly
o positive
o free of jargon
o highly visual
o in ‘bulleted’ format
o explanatory (not just descriptive).
Formal presentation materials should include slides and a handout. Slides of drawings,
project concepts, character/design samples and existing conditions are most useful. The
handout should summarise the entire project for those who may not have been involved prior
to the presentation (the newspaper tab can be used for a handout).
• Presentation and Approval
Goal: to present Charrette findings to the community.
Hold a public meeting and conduct the graphic and verbal presentation. Present the
challenges of following through with the projects. Following the presentation, ask for
questions and comments. Assign someone to document all of the comments. The final
presentation must be thoroughly advertised and take place in a politically neutral facility
which is easy to find. A verbal presentation in conjunction with a slide show is generally the
best format. Following the presentation, the final document should be modified, if necessary,
according to comments at the final presentation. The document should then be approved and
adopted by the steering committee.
• Implementation and Benchmarking
Finalising the Charrette is only part of the overall process. For tips on implementing the
results of the Charrette, refer to the general section.
Resource considerations
Advance preparations are extensive. At least two to four months may be required to gather
background materials and expert participants. The process itself usually takes a day (but at
least four days are recommended). A shorter Charrette (two or three hours) may yield only a
limited number of ideas.
Cost factors include ample meeting space, background materials, an experienced facilitator,
resource people and on-site supplies. It may also be necessary to cover travel and
accommodation, hospitality and compensation for individuals who must take time away from
their regular jobs to participate.
Personnel: project manager, steering committee (9-15 persons)
Travel
Accommodation
Food: meals for Charrette team and participants
Recruitment and promotion: invitations to participants, Charrette promotion and advertising
Communications: printing of draft and final report
Facilities: location for Charrette event, location for public presentation of the final report
Additional best practices and potential pitfalls
Depending on the definition of ‘expertise’, an emphasis on specialist participation in a
Charrette may exclude community voices from the process. This could cast doubt on the
credibility of the overall public involvement plan of which the group is a part. The continuous
nature of a longer Charrette may exclude some participants who are hindered by a disability.
References and Resources
Corporate Consultation Secretariat, Health Policy and Communications Branch (2000).
Health Canada Policy Toolkit for Public Involvement in Decision Making. Minister of Public Works and Government Services Canada.
Glenn, J. (Ed.) Futures Research Methodology. Version 1.0. AC/UNU. The Millennium
Project.
Segedy, J. and Johnson, B. The Neighborhood Charrette Handbook: Visioning and
Visualising Your Neighborhood’s Future. Sustainable Urban Neighborhoods. University of
Louisville.
http://www.Charretteinstitute.org/Charrette.html
Future search conference
Brief description
Future search conferences are highly structured events, usually lasting 2.5 days, at which a
cross-section of community members or 'stakeholders' create a shared vision for the future.
The future search conference combines small- and full-group work sessions to arrive at an
agreement among a broad cross-section of people. The method is similar to a Charrette
process, but a little more structured.
Detailed description
When to use
A future search conference aims to precisely define a new direction and strategy to make the
necessary future changes. As with other participatory processes, the participants in a future
search conference should represent a cross-section of those most critical to the
implementation and impact of the new directions. The conference usually lasts two or three
days, with 30 to 65 participants, managed by two facilitators.
Procedure
The future search conference has five steps:
(1) Review the Past: the major global trends are identified and discussed through a
brainstorming exercise;
(2) Explore the Present: participants are divided into four or so groups of about eight
people to analyse the trends in terms of impacts, desirability and plausibility;
(3) Explore the Future: projections are made of how the system will evolve based on
these trends (impact evaluation), and strategies to achieve the new desirable design are
proposed;
(4) Create the Ideal Future: each group designs an overall scenario for the future, and the
scenarios developed are passed on to the other groups for selection. In this way, all
participants reconvene and integrate the selected strategies into an overall scenario
and future design;
(5) Make Action Plans: these strategies are grouped, and the conference as a whole
selects strategies to achieve the new design.
People carry out tasks individually, in small self-managed workshops and as a whole group.
The results are recorded openly on flipcharts.
Points to think about
• Time and resource intensive.
• Needs a large space to work well – see ideal layout below.
Useful links
http://www.futuresearch.net/
http://www.communitiesscotland.gov.uk/web/site/Engagement/community_engagement.asp
http://www.communityplanning.net/methods/method66.htm
References
Thanks to Community Planning Net
Yet other descriptions of participatory methods can be found in Slocum (2003).
Conventional / Public Delphi / Delphi Conference
General: Conventional Delphi (pen-and-paper)
Brief description
Conventional Delphi involves an iterative survey of experts. Each participant completes a
questionnaire and is then given feedback on the whole set of responses. With this information
in hand, (s)he then fills in the questionnaire again, this time providing explanations for any of his/her views that diverge significantly from those of the other participants. The explanations serve as useful intelligence for the others. In addition, (s)he may
change his/her opinion, based upon his/her evaluation of new information provided by other
participants. This process is repeated as many times as is useful. The idea is that the entire
group can weigh dissenting views that are based on privileged or rare information. Thus, in
most Delphi processes the amount of consensus increases from round to round.
While traditionally conducted via mail, other variations of Delphi can be conducted online or
face-to-face. In the original Delphi process, the key characteristics of this method were
structuring of information flow, feedback to the participants and anonymity for the
participants. In a face-to-face Delphi, the anonymity is eliminated.
Detailed description
When to use
A dialectical process, Delphi was designed to provide the benefits of a pooling and exchange
of opinions, so that respondents can learn from each others’ views, without the sort of undue
influence likely in conventional face-to-face settings (which are typically dominated by the
people who talk the loudest or have most prestige). The technique allows experts to deal
systematically with a complex problem. From round to round the relevant information is
shared, further educating the panel members. Recommendations can thus be made on the
basis of more complete information.
Usually one or more of the following properties of the application leads to the need or
usefulness of employing Delphi:
• The problem does not lend itself to precise analytical techniques but can benefit from
subjective judgments on a collective basis.
• The individuals needed to contribute to the examination of a broad or complex problem
have no history of adequate communication and may represent diverse backgrounds with
respect to experience or expertise.
• More individuals are needed than can effectively interact in a face-to-face exchange
(except through the face-to-face Delphi’s shuttle process between plenary and subgroups).
• Time and cost make frequent group meetings infeasible.
• The efficiency of face-to-face meetings can be increased by a supplemental group
communication process.
• Disagreements among individuals are so severe or politically unpalatable that the
communication process must be refereed and/or anonymity assured.
• Heterogeneity of the participants must be preserved to assure the validity of the results, i.e. to avoid domination by quantity or by strength of personality.
The Policy Delphi serves any one or a combination of the following objectives:
• to ensure that all possible options have been put on the table for consideration
• to estimate the impact and consequences of any particular option
• to examine and estimate the acceptability of any particular option.
In general, the Delphi method was invented in an attempt to overcome various social-psychological challenges associated with committee processes, including:
• the domineering personality or outspoken individual that takes over the committee
process
• the unwillingness of individuals to take a position on an issue before all the facts are in or
before it is known which way the majority is headed
• the difficulty of publicly contradicting individuals in higher positions
• the unwillingness to abandon a position once it is publicly taken
• the fear of bringing up an uncertain idea that might turn out to be undesirable and result
in a loss of face.
Procedure
Overview
Refer to the Delphi Method Flowchart below for a graphic overview of this method.
[Figure: Delphi Method Flowchart]
Delphis – whether conventional, real-time on computer or face-to-face – usually undergo four
phases. In the first phase the subject under discussion is explored and each individual
contributes the information (s)he feels is pertinent to the issue. In the second phase an
overview is reached on how the group views the issue, for example, where there is
dis/agreement over what is meant by relative terms such as ‘feasible’, ‘important’,
‘desirable’, etc. If there is significant disagreement, then this is explored in the third phase in
order to illuminate the reasons for the differences and evaluate them. The fourth phase entails
a final evaluation that occurs when all previously gathered information has been initially
analysed and the evaluations have been fed back for reconsideration.
In the following sections a step-by-step description of conventional Delphi is presented,
followed by a description of the variations that constitute the Policy Delphi. Finally, the steps
of the Delphi Conference are presented.
Realisation
a. Personnel and tasks
• Organisational Team
The tasks of the organisational team are as follows:
Develop the questionnaires.
Identify and recruit experts.
Distribute questionnaires.
Analyse the comments and give feedback to the experts after each round.
Write the final report.
• Experts
Complete the questionnaires.
If the Delphi is face-to-face, attend the scheduled events.
• Moderator(s)
If the Delphi is conducted face-to-face, one or two moderators will be required to facilitate
the process.
b. Procedure
The procedure of the original Delphi can be described in the following steps. (For a graphic
overview, please refer to the Delphi Method Flowchart.)
• Form a team to undertake and monitor a Delphi on a given subject.
• Select and recruit panel(s) to participate in the exercise.
Customarily the panellists are experts in the area to be investigated. Some literature suggests
that while the panellists should be well informed about the topic, a high degree of expertise is
not necessary. Of course, the required level of expertise will depend upon the specific topic
and questions being addressed. The number of panellists varies greatly between Delphis, but
should include an absolute minimum of four persons per panel. TIP: The panellists should be
assured that they are participating in an exercise that involves a peer group. Therefore, in the
letter of invitation indicate the types of backgrounds reflected in the participant group.
• Develop the first-round Delphi questionnaire.
A month or more is needed to develop the first-round questionnaire. Ideally, the questions
posed should be specific enough to eliminate most irrelevant information, but otherwise place
as few constraints on the information as possible. In addition to the questionnaire, a factual
summary of background material is usually supplied. In some cases single or multiple sets of
scenarios are provided that specify certain items that the respondents are to assume as given
for the purpose of evaluating the issues. (Typically these scenarios deal with aspects like
future economic conditions, such as the rate of inflation.)
Often, various alternatives are presented along with rating scales, which give the respondents
an opportunity to quantify their preferences. A commonly used example is a four-point desirability scale running from 'very desirable' to 'very undesirable'.
If the rating procedure is used, take care not to use compound statements (such as 'Do you think y, if x…?'); rather, break down such statements into two simple statements (e.g. 'Do you think x?' and 'Do you think y?').
If new to Delphi, the respondents will often respond with compound and lengthy comments.
It is useful to provide some examples of the form you would like their answers to take, in
terms of being short, specific and singular in nature. Allow the panellists to suggest changes
in the wording of items and introduce them as new items. Policy issues are often very
sensitive to precise wording.
Sometimes it is appropriate to introduce a set of alternative assumptions making up scenarios
and let the respondents form a group scenario by voting on the validity of each.
• Test the questionnaire for proper wording (e.g. ambiguities, vagueness).
Each questionnaire should be pre-tested with people who have not been involved in the
design. Identify any items that are phrased in a confusing manner and revise them.
• Transmit the first questionnaires to the panellists.
• Analyse the first-round replies.
• Prepare the second-round questionnaires (and possible testing).
In this round the discrepancies between the participants’ views are brought to the fore (but
still kept anonymous). Participants are asked to try to explain the differences between their
views and others’, providing their reasoning and any influential information to which the
others may not be privy. In each round such information and reasoning are shared with the
other participants (still maintaining anonymity).
• Transmit the second-round questionnaires to the panellists.
When asking for re-votes on an item, show the individuals their original votes and provide
them with two copies of the questionnaire so that they may retain one for later reference or do
draft work.
• Analyse the second-round replies.
Steps 7 – 9 (preparing, transmitting and analysing the next round of questionnaires) are reiterated as long as desired or necessary to achieve stability in the results; a simple way of checking such stability is sketched after this procedure.
• Prepare a report by the analysis team to present the conclusions of the exercise.
It is very important that all of the participants understand the aim of the Delphi exercise;
otherwise they may answer inappropriately or become frustrated and lose interest.
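To illustrate the reiterated analysis step (steps 7 – 9 above), the following sketch – written in Python, with invented item names, ratings and tolerance value – computes the kind of per-item feedback that is returned to panellists and applies one possible stopping rule: iteration ends once the group median and interquartile range of every item barely change between rounds. It is only an illustration of such a stability check, not a prescribed part of the Delphi method.

from statistics import median, quantiles

def summarise(round_ratings):
    # Per-item feedback for the panellists: median and interquartile range of the ratings.
    summary = {}
    for item, ratings in round_ratings.items():
        q1, _, q3 = quantiles(ratings, n=4)
        summary[item] = {"median": median(ratings), "iqr": q3 - q1}
    return summary

def is_stable(previous, current, tolerance=0.5):
    # Assumed stopping rule: medians and IQRs move by no more than the tolerance between rounds.
    return all(
        abs(previous[item]["median"] - current[item]["median"]) <= tolerance
        and abs(previous[item]["iqr"] - current[item]["iqr"]) <= tolerance
        for item in current
    )

# Hypothetical ratings (1 = very undesirable ... 5 = very desirable) for two items.
round_2 = {"item A": [4, 5, 4, 3, 4], "item B": [2, 3, 2, 4, 3]}
round_3 = {"item A": [4, 4, 4, 4, 4], "item B": [3, 3, 2, 3, 3]}

print(summarise(round_3))                                  # feedback circulated with the next questionnaire
print(is_stable(summarise(round_2), summarise(round_3)))   # False: ratings still shifting, so run another round

In practice the analysis team would combine such figures with a qualitative reading of the panellists' arguments before deciding whether a further round is needed.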
Resource considerations
a. Timing
The following table presents a general weekly schedule for an online version of Delphi.
However, this is just to provide a general guideline. It should be noted that schedules will
vary greatly and that face-to-face Delphis will require significantly more time than those
conducted online.
General Schedule for Delphi-online
1. Preparation of the Delphi project: Week 1&2
General preparation
Compose expert panel
Collect e-mail addresses of participants
Make the collaboration more concrete
Develop accompanying texts
2. Start up and configuration of Delphi online system: Week 1&2
Questions first round
Develop invitation e-mail
Call participants for formal consent
Create first round in Delphi online system (users, passwords, texts, …)
3. First question round Delphi: Week 3&4
Call the non-respondents: Week 5
4. Treatment of results of first round and start up second round: Week 6&7
Data-analysis: reduce the answers to the open questions to a more limited set without losing
content
Introduce the system
Invite participants by email
5. Second question round Delphi: Week 8&9
Call the non-respondents: Week 10
6. Treatment of results of second round and start up third round: Week 11&12
Data-analysis: response to closed questions and arguments
Invite participants by email
7. Third question round Delphi: Week 13&14
Call the non-respondents: Week 15
8. Treatment of results of third round
Intermediate report: Week 16&17
9. Project management: continuous
Elaborate project plan
Project meetings
10. Final report Delphi project: Week 18&19
b. Budget
The following items listed are the main budgetary items in a Delphi:
Personnel: organisational team, stipends for experts, moderator(s)
Travel: only for Face-to-Face Delphi: travel for experts and moderator(s)
Accommodation: only for Face-to-Face Delphi: accommodation for experts and moderator(s)
Food: only for Face-to-Face Delphi: meals for experts and moderator(s)
Recruitment and promotion: recruitment of experts
Communications: printing and postage costs for surveys (if done by traditional mail), printing
of draft and final report and dissemination
Facilities: only for Face-to-Face Delphi: location for event
Additional best practices and potential pitfalls
Some common reasons for the ‘failure’ of a Delphi are:
• the imposition of the monitor’s views and preconceptions of a problem upon the panellists by over-specifying the structure of the Delphi and thus not allowing for the contribution of other perspectives related to the problem
• the assumption that Delphi can be a surrogate for all other human communications in a given situation
• poor techniques of summarising and presenting the group response and thus failing to ensure common interpretations of the evaluation scales utilised in the exercise
• ignoring, rather than exploring, disagreements so that discouraged dissenters drop out and
an artificial consensus is generated
• underestimating the demanding nature of a Delphi; failing to recognise the respondents as
consultants and properly compensate them for their time if the Delphi is not an integral
part of their job function.
For a successful Delphi, it is important to:
• carefully select the group of respondents/panellists
• adapt the Delphi design to your particular application
• assure the honesty and lack of bias in the monitoring team
• assure a common language and logic, particularly if participants come from diverse
cultural backgrounds.
Variation 1: PUBLIC DELPHI
As its name indicates, the Public Delphi method is an extension of the well-known and
conventional Delphi method. The Delphi method is a repeating questionnaire that affords
participants the opportunity to react to each other’s judgements. Conventional Delphi,
however, is usually conducted among a pre-selected and carefully screened panel of experts
rather than on a representative sample of a given public.
Objectives
The Public Delphi method is a type of opinion poll used for scenario-building. The main
Delphi objective is to build a consensual view on a well-defined subject within a panel of
experts. The Public Delphi method aims at generating an extensive set of opinions and getting
them to converge. The results of the questionnaires (mail, on-line or face-to-face) are
analysed statistically. The Public Delphi process is accordingly an open meeting that
promotes the emergence of group ideas rather than a consensus of expert judgement.
Participants
Public Delphi involves 25 to 100 stakeholders selected for their independence and their
ability to provide orientation on the future. The number of participants depends on the issue
to be addressed. However, the Public Delphi philosophy suggests that a process design should
include as many stakeholders as possible, to ensure the validity of the results and reduce
questionnaire bias. To limit the bias, (i) the sample of stakeholders must at least allow the
expression of a wide range of viewpoints, (ii) the questionnaire is preferably conducted
anonymously (biases due to group influence are limited). and (iii) by mail (face-to-face and
on-line interviews are also possible, but generate other types of bias).
Procedure
Widespread use of the Delphi method has led to quite a number of variations around the
original technique and a family of Delphi-related processes. Based on the original Delphi, the Public Delphi method has three main steps (Figure 3).
The first step entails formulating the issues and preparing the questionnaire. The
questionnaire should be as quantitative and precise as possible and its validation must be
preceded by a pilot test to ensure the questions are properly understood. The second step
involves selecting the stakeholders. The third and most important step consists in a series of
questionnaires organised in several rounds of interviews. In practice, three or four rounds of
questions and answers suffice to reach an optimal consensus:
• The first questionnaire (Round R1) is designed as a structured brainstorming exercise to
identify all the issues that respondents consider relevant for scenario-building.
• The second questionnaire (Round R2) is based on the condensed responses from R1 and is designed to elicit a more detailed judgement on the issues identified, by ranking or scoring the R1 judgements.
• R2 responses are then analysed and used for the next round.
With each round, new questions are defined in the light of responses from the previous round,
and the statistical results of the previous questionnaire (group means, standard deviations,
minimum and maximum scores per question) are presented to the respondents in order to
facilitate their positioning within the sample. The process continues until a consensus is
reached for the scenario choice.
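As a small illustration of this feedback step, the sketch below (Python; the question labels and scores are invented) computes the group mean, standard deviation, minimum and maximum per question and shows a respondent where (s)he stands relative to the group. It only sketches the mechanics of the statistical feedback described above.

from statistics import mean, stdev

# Hypothetical 1-5 scores from the previous round, per question.
responses = {
    "Q1: desirability of option A": [4, 5, 3, 4, 4, 5],
    "Q2: feasibility of option A": [2, 3, 3, 5, 2, 3],
}
# One respondent's own answers, echoed back so that (s)he can see his/her position in the sample.
own_scores = {"Q1: desirability of option A": 3, "Q2: feasibility of option A": 5}

for question, scores in responses.items():
    print(
        f"{question}: group mean {mean(scores):.2f}, s.d. {stdev(scores):.2f}, "
        f"min {min(scores)}, max {max(scores)}, your score {own_scores[question]}"
    )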
Relevance
According to Linstone and Turoff, Public Delphi is particularly useful in cases where: (i)
the issues to be addressed do not require very specific knowledge; (ii) the available data are
not sufficient to support scenario construction (the alternative solution is then to use
subjective probability); (iii) the problem is extensive, complex and/or interdisciplinary
(experts have different training, and there are problems finding a common language); (iv) it is
difficult to bring a large group of experts together physically (Delphi does not require an actual meeting); and (v) the participants must remain anonymous, mainly to avoid bias due to powerful personalities and political pressures.
Variation 2: DELPHI CONFERENCE (FACE-TO-FACE OR GROUP DELPHI)
This face-to-face group version of Delphi allows for more discussion and debate and takes
less time than the traditional version, but the participants forego anonymity.
Procedure
Recruit a design-monitor team, group facilitator and an assistant to undertake and monitor the
group Delphi. The design-monitor team should consist of at least two professionals, so that
one can check the other. Ideally, one should be knowledgeable in the issue at hand and the
other should have editorial talents.
A management team must decide on (and usually narrow down) the topic(s), as well as the
number of Delphi panels that will be conducted on the topic(s). Decide on the date that the
panel will be held. One full day will allow for several rounds of the process, in addressing
one question. More time will be required to address very complex issues or more than one
major question. Reserve a location for the workshop. One large room to accommodate all
panellists is required. It would be ideal to have access to a few smaller rooms, in which the
sub-groups can do their work.
Select and recruit participants for each panel. Customarily the panellists are experts in the
area to be investigated.
Reserve accommodation for those who require it. Make catering arrangements.
Development of the Delphi questionnaire.
Individual question replies.
Working individually and without discussion, each participant responds to the question.
Small groups. Participants divide into sub-groups of ‘similar’ people and prepare a list of
information, arranged in order of importance. Here ‘similar’ refers to their views on the topic
being addressed. The purpose of having homogenous sub-groups is to help ensure that all
information that is important to a particular perspective or interest group will reach the
plenary list.
Plenary group. Gather the important items from each group and list them where everyone can
see them (newsprint, flipcharts, etc.). To do this, ask each group in turn to contribute the most
important item on their list that has not already been added to the plenary group’s list.
Plenary vote. A multiple-vote procedure is used to rank the items from most to least important. A natural cut-off point is chosen between items with high scores and those with low scores; somewhere between six and nine items is appropriate for most topics (a simple tallying sketch is given after this procedure).
Individual changes. Each individual considers what changes (s)he wishes to make to his/her
small-group list after having seen the plenary list.
Small groups. Members compare the list of top items on their small-group list to those on the
plenary list. Where the small group list differs from the plenary list, the small group has two
options. It can either change its list to conform more closely to the plenary list or it can
develop evidence for changing the plenary list more in the direction of its list. This is done as
follows: add to the small-group list the items from the plenary list that the small group
previously omitted but is ready to accept. Prepare a brief report supporting any of the top
items from the small-group list that the group believes should be added to the plenary list.
NOTE: A time limit for the report, of one minute for example, should be enforced. The
purpose of this report is not to persuade others to adapt their point of view, but to present
evidence that the group thinks others may have overlooked. Each small group then
documents its revised list on one sheet of newsprint and its evidence (in note form) on
another.
Small-group reports to the plenary: The revised lists of the small groups are displayed
without comment. Each group in turn displays its sheet of evidence and explains it briefly.
Each group report is followed by a brief session of questions for clarification only. Strict time
limits are enforced.
Plenary consensus development: return to step 9 and repeat the cycle until consensus
emerges. Time constraints may require a fixed number of cycles. Consensus can be increased
by having two rounds of
voting, instead of one, at step 9.
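As an illustration of the plenary multiple-vote step referred to above, the sketch below (Python, with invented items and vote totals) ranks the items by score and places the 'natural' cut-off at the largest drop between consecutive scores, preferring a shortlist of six to nine items. It is one plausible way to implement that step, not a fixed rule of the method.

# Hypothetical multiple-vote totals: each participant has distributed several votes across the items.
votes = {
    "item A": 18, "item B": 17, "item C": 15, "item D": 14,
    "item E": 13, "item F": 12, "item G": 6, "item H": 4, "item I": 2,
}

ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
scores = [score for _, score in ranked]

# Natural cut-off: the largest gap between consecutive scores, kept within six to nine items if possible.
gaps = [(scores[i] - scores[i + 1], i + 1) for i in range(len(scores) - 1)]
preferred = [(gap, size) for gap, size in gaps if 6 <= size <= 9]
cut = max(preferred or gaps)[1]

print(ranked[:cut])   # the six top-scoring items are retained for the next cycle of small-group work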
For additional in-depth information on the philosophy behind the Delphi method, as well as
various applications, refer to: http://www.is.njit.edu/pubs/delphibook/
References and Resources
Dick, B. (2000) Delphi face to face [Online]. Available at http://www.scu.edu.au/schools/gcm/ar/arp/delphi.html
Glenn, J. (Ed.) Futures Research Methodology. Version 1.0. AC/UNU The Millennium
Project.
Linstone, H. and Turoff, M. (2002) Introduction. In H. Linstone and M. Turoff (Eds.), The Delphi Method: Techniques and Applications, pp. 3–12. http://www.is.njit.edu/pubs/delphibook/
Practical Guide to Regional Foresight in the United Kingdom.
Turoff, M. The Policy Delphi. In The Delphi Method: Techniques and Applications, pp. 80–96. http://www.is.njit.edu/pubs/delphibook/ch3b1.html
Expert panel
Brief description
The main task of an expert panel is usually synthesising a variety of inputs – testimony,
research reports, outputs of forecasting methods, etc. – and producing a report that provides a
vision and/or recommendations for future possibilities and needs for the topics under
analysis. Specific tools may be employed to select and motivate the panel, assign tasks and
elicit sharing and further development of knowledge.
Detailed description
When to use
Expert panels are particularly appropriate for issues that require highly technical knowledge
and/or are highly complex and require the synthesis of expertise from many different
disciplines. This method is not designed to actively involve the broad public.
Procedure
Overview
The preparation for an expert panel includes specifying the task, determining the desired
composition of the panel and then recruiting panel members, a panel chair and support staff.
Once formed the expert panel is expected to investigate and study the topics assigned and set
forth their conclusions and recommendations in written reports. If a study is of special topical
interest, arrangements may be made to schedule a (public) session at which issues, findings,
conclusions and recommendations of the report are presented.
Preparation
a. Defining the project
A project must be formulated carefully to ensure a clear understanding of the nature of the
task, its aim and extent, any limitations or restrictions and the range of disciplinary expertise
required among the members of the committee that will undertake it. Agreement on these
elements should be sought with the requesting agencies or other originating sources; careful
consultation is important to avoid misunderstandings later. However, once agreement on
these essentials has been reached, it must be made clear that conduct of the work is the
responsibility of the panel. This responsibility includes the determination of the approach to
be taken and the substance of the report or other resultant product.
Note: Most of the information provided here is a condensed version of the Royal Society of
Canada’s Expert Panels: Manual of Procedural Guidelines. For more detailed information,
please refer to this manual (see references).
b. Recruiting panel members and support staff
This section addresses the process of forming a panel, including the resources for identifying
potential chairs and members.
• Composition and Balance in a Panel Profile
The first step in assembling the panel nomination slate is to develop a profile of the panel.
The two key dimensions of this profile are composition and balance. Composition concerns
the mix of expert knowledge and experience needed for the panel to understand, analyse and
draw sound conclusions about the issues before the panel. It can be represented in the
question, ‘What kinds of knowledge should the panel have?’ A well-composed panel will be
technically competent to deal with the task.
Balance concerns the even-handed representation of differing points of view that can be
expected to affect the conclusions on issues the panel will address. Because these differences
often involve value judgements held by a committed adherent to one side of an issue, the
question of balance can be represented as,‘What kinds of value judgements may be relevant
to the panel’s task? Sometimes balance can be achieved by having opposing views
represented in the panel membership. In other circumstances, particularly when the opposing
views are strongly held and not subject to a factual test, it can be better to seek members who
are not strong proponents of the contending perspectives. The panel profile in such cases
should aim more for balance in each member and rely on briefings, workshop presentations,
etc. to bring forward the best evidence and arguments from the strongly opposed sides.
However it is achieved, a balanced panel is one that has excellent prospects of achieving
impartiality in its final conclusions and recommendations.
The panel profile must explicitly address both composition and balance. To do so the project
profile must be taken into account:
o Project scope: Will the study be limited to technical problems or will it address broad
issues of public policy?
o Degree of controversy: Do the problems to be addressed have alternative resolutions
that are controversial, affecting parties who have strong emotional, political or
financial stakes in the outcome or are there no stakeholders with strong commitments
to a particular outcome?
o Technical support: Will the panel’s conclusions and recommendations be based more on data analysis or on the panel’s expert judgment?
o Will the panel’s conclusions adequately represent the uncertainties?
o Disciplines: Do the issues involve a single discipline or are they interdisciplinary?
• Roles of the Panel Chair
The chair of the panel guides the process of analysis and seeking solutions for technical,
scientific, policy, professional or social issues that are often complex and may be highly
controversial. The chair serves as facilitator and team builder for the panel and as lead
architect/integrator of the panel’s report. In addition, the chair aids in project management
and is the chief spokesperson in representing the study’s audiences during dissemination.
Facets of each of these roles, as panel facilitator, project manager, report architect/integrator
and spokesperson are discussed in greater detail in the Royal Society of Canada’s Guidelines.
• Guidelines for Interviewing
The following guidelines cover the key points in interviewing potential panel members and
panel chairs. Items that apply just to interviews of potential panel chairs are in [square
brackets]. It is sometimes advisable to communicate in writing first by sending a candidate a
copy of the statement of work and a note saying that you intend to call to explore his/her
interest in participating.
o Indicate that the context for the call concerns the expert panel nomination process.
o Discuss the origin of the project, its objectives and the statement of the task. Ask the
candidate to comment on the task and to offer suggestions about it and how the study
might be carried out. The responses will provide insight into what the candidate
knows about the subject, his or her thought processes, points of view, etc. Then ask
what kinds of expertise are required to make up an appropriate committee, including
soliciting suggestions of individuals who meet the requirements. Only then should
the interviewer ask about the candidate’s interest, availability and willingness to
participate.
o State that another purpose of the call is to explore the candidate’s interest and
availability to serve on the study panel, if nominated. [In interviewing a potential
chair, state that you are, in particular, interested in whether the candidate would be
interested in being considered for the panel chair.] Explain that you are putting
together a nomination slate, from which a committee will make the final panel
selection. This is not the final round in the panel selection process, since the
committee must take into account many composition and balance factors.
o Offer to elaborate on why the study is being undertaken. Describe the expected time
demands of the study. [In interviewing the potential chair, be especially clear on
these points, above all on the time demands and the chair responsibilities
anticipated.]
o Listen carefully to the candidate’s response and the level of interest (s)he conveys.
Ask questions, as appropriate, to better gauge the motivation to serve as a member
[or as panel chair].
o If the candidate appears interested in serving, it is necessary to discuss the subjects of
balance and conflicts of interest. Here is one possibility for addressing the subject:
‘We are trying to assemble a panel that is free of direct conflicts of interest and is
appropriately balanced with respect to different points of view on the study’s issues.
For this purpose each panel member will be asked to complete a confidential form,
the purpose of which is to disclose any points of view or conflicts of interest. At the
first meeting panel members will also be asked to discuss their backgrounds and
activities as indications of their perspective and any strongly held views or
commitments relevant to the study task. I would like to run quickly through the areas
of principal concern. At this time, you don’t need to give specific, detailed answers
but you may want to ask about any that you think might apply. A positive response to
any of these questions does not necessarily indicate a problem with serving on the
panel; more often it indicates areas we need to consider when balancing the panel.’
o Ask the following questions:
1. Organisational affiliations: Do you have any business affiliations or volunteer non-business affiliations, such as with professional societies, trade associations or civic
groups or with organisations that might benefit in a direct way from this study if the
issues came out a certain way? To your knowledge, have any of these organisations
taken a public stand on the issues related to the study?
2. Financial interests: Do you have financial interests, whether through employment,
consultancies or investments in companies or other entities whose value or business
would be directly affected by a particular resolution of the issues in this study?
3. Research support: Do you receive any research support from agencies,
organisations, etc. that might have an interest in the outcome of this study?
4. Government service: Have you provided services or been employed by an
international, national, regional or local government, including advisory boards, that
would be seen as relevant to the topics covered by this study?
5. Public positions: Have you published articles, given testimony or made speeches
that might be viewed as stating a commitment to a particular view on the issues in
this study? Do you hold office in or otherwise formally represent an organisation that
is closely identified with a particular point of view on the issues this study may
address?
o If an obvious conflict of interest has been identified, indicate that it could pose a problem for panel membership per se, but would not preclude other contributions to the study, perhaps through an oral or written briefing.
o Express appreciation for the candidate’s time. Emphasise the exploratory nature of the call and reiterate that a larger slate of nominees will be put forward than will actually serve. If it seems appropriate, you can explain the various aspects that are considered in balancing a panel and emphasise that selection is in no way a judgement on a nominee’s technical qualifications. Inquire as to whether the candidate has suggestions for other panel members.
• Developing the Nomination Package
Define the panel profile. Use the project profile and the statement of task to define a profile
of the panel. What areas of expertise are needed for composition? What points of view or
different perspectives on the issues are needed for the panel to be balanced?
Develop a ‘long list’ of candidates.
Cut down to a ‘short list’ and establish a slate of primary nominees and alternates. Unless
they have been contacted previously during the ‘long list’ step, exploratory calls are made to
the candidates selected as primary nominees and alternates. Each slate must include at least
one alternate for the chair and at least one alternate in each major expertise category. Where a
category requires several nominees, more than one alternate should be proposed. The
alternates must be serious candidates – not just ‘gap fillers’. Alternates for the chair can also
be proposed as primaries or alternates elsewhere on the slate.
• Technical Writer
It may prove very useful to include a technical writer in the staff complement. The
professional demands on the time of panel members and panel chair are such that the
inclusion of a technical writer in the staff will almost always prove to be a great advantage in
the drafting of the panel report.
Conducting the Expert Panel
• The Role of an Expert Panel
The expert panel is expected to investigate and study the topics assigned and set forth their
conclusions and recommendations in written reports. These reports are often the only lasting
products of the panel’s work and deliberations. Thus, reports must be given early and close
attention.
o Expert panel reports are scientific and technical inquiries; they require the same
standards of integrity and conduct as other scientific and technical studies.
o Panels should strive for a consensus report, but not at the expense of substantially
watering down analyses and results. It is much better to report serious disagreements
and explain why the disagreements exist than to paper over such problems. Lack of
consensus on all points is not a failure of the panel and will not be treated as such.
o Members of the panel serve as individuals, not as representatives of organisations or
interest groups. Members are expected to contribute their own expertise and good
judgement in the conduct of the study.
• Guidelines for the First Panel Meeting (Public Meeting)
General Meeting Objectives
o To complete panel formation through the discussion of panel composition and
balance.
o To ensure the panel understands the expert panel process and their roles.
o To introduce the panel to its task, by clearly conveying:
- the study’s origins and context
- study objectives (statement of task)
- expectations of other important audiences, e.g. governments
o To begin the immersion of the panel in the subject matter of the task.
o To produce an agreed-upon plan by which the study will be conducted:
- the general nature of the report to be written (e.g. through a topical outline)
- a strategy for conducting the study, including:
• research methods, data acquisition approaches, etc.
• panel structure, if any, and/or roles of panel members
• assignments to various panel members for undertaking specific study tasks
• topics for future meetings
• future meeting schedule
• an agreed-upon milestone chart for project tracking
Typical First Meeting Architecture
Session 1: Discussion of the origin, background, task statement and objectives of the initial
study plan, led by the chair or study director involved in preparing the prospectus.
Session 2: Discussion of the task statement, context, schedule imperatives, objectives and so
forth.
Session 3: Expectations of other important audiences, if any.
Session 4: Discussion of panel composition and balance. Full presentation by each panel
member and staff of her/his background as it relates to the study.
Session 5: Initial immersion in the subject matter of the study.
Session 6: Discussion among the panel and project staff of the study approach and plan,
resulting in an agreement. If required, additional open (public) panel meetings can be
scheduled but the working meetings are not normally open to the public.
• Preparing the Expert Panel Report
The reports that expert panels prepare should be given early and careful attention. Experience
with many panels shows that consensus building and report writing are the most difficult
parts of the study process. The following tips are important:
o Start early.
o Define early, no matter how tentatively, the ‘architecture’ of the report. Refine it and fill it in as the study unfolds.
o Give writing assignments to panel members as soon as it is practical to do so.
o Produce writing assignments on time, even if they are rough and incomplete.
o Empower and use the project staff (especially the technical writer) to assist the chair and other members of the panel in filling out draft sections, integrating them and smoothing the report by putting it into one consistent style.
It is essential that none of the members provide any kind of briefing until the final report is
completed. Everyone must agree to complete confidentiality!
Some elements that should be included in the report are the following:
o charge
o description of panel composition
o scientific uncertainty
o distinguishing evidence from assumptions
o distinguishing analysis from policy choice, especially in risk-related issues
o citation of other relevant reports
o managing study completion
o consensus and disagreement.
• Presentation of the Panel Report
If a study is of special topical interest, arrangements may be made to schedule a public
session after submission of the final report at which issues, findings, conclusions and
recommendations of the report are presented.
The following information should be prepared and, if appropriate, made available to the
public:
o project prospectus, the signed contract and related official correspondence
o names and principal affiliations of panel members.
Upon completion of the study, reports should be disseminated to appropriate persons and in general made available to the public. If desired, the report can be submitted for peer review prior to public dissemination.
Resource considerations
Realistic estimates of time and costs are especially difficult in the early stages;
underestimating is common. Estimates must include provision for assembling the panel and
staff, holding meetings, preparing the report and seeing it through a review process (if
applicable) and publishing and disseminating the final result.
The following items listed are the main budgetary items in an Expert Panel:
Personnel: professional, technical and support staff salaries, honoraria for experts, research
associates and assistants, subcontracts, especially for technical services (if applicable),
honoraria for peer reviewers (if applicable)
Travel: experts
Accommodation: for experts, if required but not included in honoraria
Food: meals for Experts, if required but not included in honoraria
Recruitment and Promotion: recruitment of experts
Communications: printing and dissemination of final report, translation costs (if required)
Facilities: location for the expert panel to meet, location for public presentation of the final
report, if applicable
Some inevitable uncertainties regarding the budget include:
estimating the number of occasions on which the panel will be convened
estimating the number of days on each such occasion, during which the panel will deliberate
forecasting the likelihood that the panel will have to re-convene after the peer review
comments have been
received (if applicable).
Additional best practices and potential pitfalls
The panel participants should be diverse and it is important that, in addition to technical
qualifications, the individuals concerned are creative thinkers who can bring diverse
viewpoints to bear, work well in groups and are prepared to speak freely without feeling that
they have to represent a particular interest group.
It can also be valuable to bring together different types of players who might not normally
meet in the course of a panel – such as innovators, financiers, policy makers, academic
researchers, users or consumers, etc.
Panels need to avoid too narrow representation, which is liable to result in little challenging
thinking, lobbying by interest groups or perceptions that vested interests are in charge.
Panels need to be chaired and facilitated effectively, to maintain motivation and morale, to
resolve conflicts, to monitor timetables and external constraints, to prevent over-dominance
of strong personalities, etc.
References and Resources
Practical Guide to Regional Foresight in the United Kingdom.
Royal Society of Canada (1998) Expert Panels: Manual of Procedural Guidelines. Version
1.1. Ottawa (Ontario), Canada. Source: www.rsc.ca/english/expert_manual.pdf
Citizens’ juries
Brief description
A Citizens Jury is a panel of non-specialists who meet over 3 – 5 days to examine carefully a
complex or contentious issue of public significance. The decisions or recommendations of a
Citizens Jury are not binding but it is important that there should be some form of contract or
agreement that the Authority will take into account the Jury’s recommendations and, if these
are not implemented, reasons should be given and officially published. Citizens Juries are
expensive due to the costs involved. Citizens' panels can usefully articulate relevant values
in a flexibly structured way, for instance by making deliberative use of the "characterisation"
approach to evaluating environmental capital (Valuation). Citizens' juries have also been used
to consider and recommend decisions on a wide range of issues (Comparison and Choice).
They offer an appropriate means of contextualising cost-effectiveness analyses of proposed
policy options or standards regimes, and of exploring the full range of opportunity costs.
Citizens' panels might also be used as fora to help assess evidence of appropriateness,
compliance and effectiveness (Monitoring).
Detailed description
The citizens’ jury method is a means for obtaining informed citizen input to policy decisions.
A citizens’ jury is a group of 12-24 randomly selected people, who are informed by several
perspectives, often by experts referred to as ‘witnesses’. The jurors then go through a process
of deliberation and subgroups are often formed to focus on different aspects of the issue
specific (policy) issue (Crosby 1995) (Smith and Wales 1999). Finally, the jurors produce a
decision or provide recommendations in the form of a citizens’ report.
The citizens’ jury (see also (Renn et al. 1995)) is intended to provide a response to the
growing democratic deficit in contemporary societies (Smith and Wales 1999). This deficit is
claimed to occur when policies are worked out for rather than with the “politically
marginalised”, systematically excluding their perspectives. Citizens’ juries are based on the
rationale that given adequate information and opportunity to discuss, such a jury can be
trusted to take decisions regarded as legitimate and fair on behalf of the community (Coote
and Mattinson 1997), even though in terms of training and experience many people are
professionally more competent than the jurors themselves (Crosby 1995).
When to use
The Citizens Jury method has been applied to a wide range of topics, including economic,
environmental, social and political issues. It is most applicable when one or more alternatives
to a problem need to be selected and the various competing interests arbitrated. The method is
most likely to lead to concrete action when it is directly linked to legislation or other
decision-making process.
Procedure
Overview
Preparation: The preparation of a Citizens Jury is expensive. First, it entails recruiting a
project director, staff, an advisory committee and a working group. Second, criteria for
selecting the jurors must be developed, a questionnaire created for this purpose and the jurors
recruited. Third, the charge of the jury must be established and an agenda developed. Next,
criteria for expert witnesses must be engendered and the experts recruited; moderators must
also be recruited. Finally, binders of information must be compiled, logistical matters
arranged and media contacted.
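As a purely mechanical illustration of the juror-selection step, the sketch below (Python) draws a stratified random sample of 18 jurors from a pool of questionnaire respondents so that the jury roughly mirrors an assumed age profile; the pool, the age bands and the quotas are all invented. Real selection normally combines a recruitment questionnaire with several demographic and attitudinal criteria, so this shows the mechanics only.

import random

random.seed(1)  # fixed seed so the example is reproducible

# Hypothetical pool of respondents to the selection questionnaire: (identifier, age band).
pool = [(f"respondent {i:03d}", random.choice(["18-34", "35-54", "55+"])) for i in range(200)]

# Assumed quotas so that the 18 jurors roughly match the community's age profile.
quotas = {"18-34": 6, "35-54": 7, "55+": 5}

jury = []
for band, needed in quotas.items():
    candidates = [person for person in pool if person[1] == band]
    jury.extend(random.sample(candidates, needed))  # random draw within each stratum

random.shuffle(jury)
print(jury)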
An introductory day is followed by several hearing days, in which the expert witnesses give
presentations and are questioned by the jury, and the jury deliberates to come to a consensus
on the charge, if possible. Two moderators facilitate the entire process. A draft report of the
decisions, reasons and description of the process is produced.
A final news conference is held to announce the result of the jury’s deliberations; evaluations
are conducted with participants and a final report is produced and disseminated.
Preparation
a. Personnel and tasks
• Project Director
The project director is responsible for the execution of the Citizens Jury project. (S)he
participates in the advisory committee and is the liaison between the project staff, the
advisory committee and any other involved entities. It is the director’s responsibility to
delegate tasks. Depending on the director’s skills and the complexity of the issue and project,
the role of the director can range from half-time to full-time for 3 – 6 months.
• Project Staff
Project staff needs to be hired. The staff is responsible for the execution of the following
elements of the Citizens Jury project:
o Advisory committee
o Jury selection, including survey
o Establishing the charge
o Developing the agenda
o Identifying, selecting, recruiting & preparing witnesses
o Logistical issues
o Moderator training
o Managing the hearings
o Wrap-up, follow-up and evaluation
o Media and publicity.
These steps are not necessarily chronological; many elements require attention simultaneously. Traditionally the project staff, including the director, is separate from the final policy decision-maker of the Citizens Jury project. This helps to ensure that no one entity can
exert undue influence on the project. The duty of the staff is to protect and preserve the
integrity of the process, not to influence the content of the outcomes.
• Advisory Committee
Before assembling the advisory committee, the general timeline and scope of the project
should be agreed upon by the staff (although revisions may occur). Furthermore, funding
should also be secured.
Although the make-up and function of the advisory committee will vary between projects, it
is composed of approximately 6 – 15 individuals who are knowledgeable about the issues at
hand. The committee’s role is to ensure that the project staff is aware of the different
perspectives and relevant issues so that an appropriate charge, agenda and witness list can be
developed.
There are two possible types of advisory committee: the first is made up of ‘wise and thoughtful’ individuals who understand the issues but are not stakeholders or advocates in the issue at hand; the second is made up of the stakeholders and advocates themselves. The ‘wise’ committee may be less conflict-ridden and more manageable, but the project may receive less support from the stakeholders, and hence have less impact, if they are not directly involved. In either case, it is important to ensure that all
significant perspectives are represented so that the charge, agenda and witness list are not
biased.
At the first advisory committee meeting, the Citizens Jury process should be explained and
the role of the advisory committee clearly defined. It is essential that members understand
that their role is to assist and advise the project staff so as to reflect the relevant issues and
varying perspectives in a balanced and objective manner. Advisory committee meetings are
facilitated by project staff and should be viewed as an opportunity:
o To gather input and ideas about the charge, agenda and witness list. They may also provide input on the telephone survey, jury demographics, media outreach, etc.
o To build and maintain support for the project from a range of perspectives.
Whether or not they agree with the outcomes, all stakeholders should feel that the process is
balanced and fair.
• Working Group
A working group allows for candid discussions of input from advisory committee meetings.
The working group should consist of fewer than five persons and should include the main
contact person from the policy-makers to ensure that the policy-makers remain ‘in the loop’
throughout the entire planning process. It will probably meet several times throughout the
planning process and can be consulted via email and telephone as questions and problems
arise.
• Moderators
A Citizens Jury project cannot be successful without qualified and skilled moderators.
Because of the strenuousness of the Citizens’ Jury process, it is usually advisable to have two
moderators.
The purpose of the moderators is to lead the jurors through what is usually a long and
complicated agenda in a way that enables them to understand what they are doing and why
and to facilitate the discussion sessions so that the jury arrives at conclusions and clear
recommendations.
In deliberations and discussions the moderators aim to ensure fairness, to maintain decorum
and to see that the designated topic is adhered to within broad limits. The goal of these
sessions is to seek consensus and common ground whenever possible. However, consensus is
not always possible, so a vote may be necessary. Several different types of voting, including weighted voting and silent ballots, may be employed as appropriate. The hearings are not conducted using rules of procedure from the legal system; instead, considerable latitude is given to the witnesses to make their statements.
At the conclusion of each day, there is a meeting with all project staff. Since the moderators
are most closely connected to the jurors and will have the best sense of how the jury is
feeling, it is imperative that the moderators participate fully in the staff meetings. It is the
moderators’ responsibility to represent the jurors’ best interests, while other project staff may
be responsible for representing the best interests of the policy-makers and the process. If
agreed to by all concerned, changes to the agenda may be made to accommodate the needs of
the jury. For example, additional discussion time can be added in or a witness may be called
back in for clarification.
Another key responsibility of the moderators is to ensure that the charge questions are
answered. The moderators must direct the discussion and deliberations in such a way as to
focus the jurors on the charge in the given timeframe. The jury may choose to go beyond the
charge, but the charge questions are the first priority. In addition, the jury may choose not to
answer a charge question or to answer it in a different way, but they must provide detailed
reasoning for altering the charge.
Due to the nature of the Citizens Jury project, a team of two moderators is necessary. While
the ‘primary’ moderator leads the jurors through discussion, the ‘secondary’ moderator
observes. Each moderator serves in each capacity throughout the hearings. It is important that
the secondary moderator listens carefully to the discussion, observes jurors and witnesses, is
on alert for negative jury dynamics, assists with group activities and helps with any necessary
recording on flipcharts. Having two moderators also helps in the process of summarising
results after each session.
The specific responsibilities of moderating a Citizens Jury include:
• Planning
Participate in the design of the agenda and charge when possible, bringing to the process the
perspective of the person who will lead the jurors through it.
• Facilitating
o Keep foremost in consideration the Citizens Jury principle that meeting the needs of
the jurors is their primary task, so long as this is consistent with fairness to witnesses.
o Be able to monitor the jurors’ level of satisfaction with what is happening.
o Help the jurors clarify and refine their statements without putting words in their
mouths or leading them in one direction or another.
o Ensure that all the jurors are given an opportunity to express their opinions and ask
questions, to make sure all their concerns are aired.
o Be responsible for ensuring that the jurors are treated in a respectful manner and that
their needs are met during the intense time they spend at the hearings.
o Create a climate within which the jurors feel good about their tasks, meld as a group
and operate with mutual respect.
o Suggest some kind of framework for the jurors to finish putting their ideas together in
a timely and organised fashion.
o Work with the jurors to pull out their ideas instead of leading the jurors in order to
bring them to a good set of recommendations.
o Keep close track of the timing of the hearings so that neither witnesses nor jurors are
shorted in the time they deserve, both for discussion and for breaks.
o Ensure that the rules of procedure are explained to the jurors and are followed
throughout the proceedings.
o Be aware of the format and goal of each session, so as to direct the flow of
conversation appropriately.
o Facilitate the interaction between the jurors and the persons brought in as expert
witnesses or advocates.
o Depending on the format, the persons appearing before the jury may give a formal
presentation first or simply be available to answer the jurors’ questions. If a formal
presentation is given, the moderator will need to listen carefully and be ready to
involve the jurors in the discussion.
o Ensure that there is no inappropriate lobbying going on amongst the jurors or
between stakeholders and jurors.
o Question the experts directly, if the jurors seem reluctant or unable to do so and a
clear majority of the jurors wishes this to be done. The goal is to let the jurors ask the
questions, but at the beginning of the process the jurors may be shy about this. Also,
if the topic is complicated, they may just not know how to begin.
o The moderator can help with questions or prompting of the jurors.
o Facilitate the interaction among the jurors themselves in the sessions in which they
frame questions, reach conclusions or develop recommendations. This work will
usually involve restraining the very vocal jurors and bringing out the ideas of the
quiet jurors. Sometimes the goal is a consensus conclusion and at others a vote is
taken. In either case, jurors will be asked to explain their conclusions and the
moderator needs to help them not only reach their decisions but be able to articulate
their reasons to the public.
• Meeting with Staff
Be advocates of the jurors when uncertainties arise within the project staff. Always represent
the jurors’ best interests. Work cooperatively with other project staff, before and throughout
the hearings.
• Evaluating
At the conclusion of the process suggest any improvements for future projects.
The Jefferson Center (www.jefferson-center.org) has developed a Moderator Training Guide
for Citizens Jury Projects, which can be referred to for more detailed information on
moderating Citizens Juries.
It is essential for the project staff to meet with the moderators and discuss the process and the
project with them and make sure that they understand the unique elements of moderating a
Citizens Jury project. It is also very important for the two moderators to get to know each
other and each other’s working style prior to the hearings. The moderators should play a role
in establishing the charge and developing the agenda.
b. Jury Selection
A Citizens Jury is designed to be a microcosm of the population covered by the project (in all
important and relevant ways), so jurors need to be chosen in a way that ensures this.
The first step is to clearly define the relevant population, which is determined by the scope
and purpose of the project.
The second step is to decide on which specific demographic variables to base the jury
selection. What characteristics of the population need to be reflected accurately in the jury in
order to make it a microcosm of the public? Some common demographic variables include
age, educational attainment, gender, geographic location within the community and race.
Often a sixth variable is added. This can be a demographic characteristic, such as tax paying
status for a given year or health insurance status, etc. Alternatively, it can be an attitudinal
question, such as one’s opinion regarding European monetary union. Other variables can be
incorporated as well, but the project staff and advisory committee should carefully weigh the
usefulness of each variable. The final constitution of the jury will reflect (or nearly reflect)
the actual percentage of the population that falls into the sub-categories.
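To make this concrete, translating population shares into seat targets for a small jury is simple arithmetic; the following minimal sketch (in Python) shows how quota targets for one variable might be derived and rounded so that they sum to the jury size. The age categories and percentages are invented for the example and are not project figures.

# Illustrative only: derive jury seat targets from population shares.
# Categories and percentages below are invented example values.
def quota_targets(shares, jury_size):
    """Allocate jury seats in proportion to population shares using
    largest-remainder rounding so that targets sum to jury_size."""
    raw = {cat: share * jury_size for cat, share in shares.items()}
    targets = {cat: int(v) for cat, v in raw.items()}          # floor first
    remainder = jury_size - sum(targets.values())
    # give leftover seats to the categories with the largest fractional parts
    for cat in sorted(raw, key=lambda c: raw[c] - targets[c], reverse=True)[:remainder]:
        targets[cat] += 1
    return targets

age_shares = {"16-29": 0.21, "30-44": 0.27, "45-64": 0.33, "65+": 0.19}
print(quota_targets(age_shares, jury_size=18))
# e.g. {'16-29': 4, '30-44': 5, '45-64': 6, '65+': 3}

The same calculation would be repeated for each selection variable (and, where necessary, for cross-tabulated sub-categories) to obtain the targets the recruitment has to fill.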
c. Juror Recruitment
• Survey
Use of a random survey to form the jury pool is an essential part of a Citizens Jury.
Identifying potential jurors on a random basis establishes credibility for the project. The
survey can be conducted by telephone (if legally permitted), in which case telephone numbers
can be purchased or a random selection from the public telephone book can be used (such as
calling every fourth listing with two even numbers in the last four digits of the phone number
or any such random procedure). Alternatively, recruitment can be done in person or by mail.
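As a minimal illustration of such a random draw (in Python, assuming the listings are available as a simple list; the selection rule shown is only one of many acceptable random procedures, and the numbers are invented):

import random

# Illustrative only: draw a simple random sample of telephone listings to call.
# 'listings' would come from a purchased number file or the public phone book.
def draw_contacts(listings, sample_size, seed=None):
    rng = random.Random(seed)        # seed only to make a draw reproducible
    return rng.sample(listings, min(sample_size, len(listings)))

listings = [f"+33 1 23 45 67 {i:02d}" for i in range(100)]   # invented numbers
for number in draw_contacts(listings, sample_size=5, seed=42):
    print(number)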
• Survey Questionnaire
Regardless of whether the initial survey is conducted via telephone, mail or in person, a
questionnaire will be required to grab the interest of the potential participants, provide a brief
description of the project, to establish its credibility and to inform about the kind of time
commitment required and any payment being offered. If the respondent says (s)he would be
interested in participating, additional questions should be asked to establish his/her
demographic details. Then the potential participant should be told that more information will
be sent out immediately.
In order to keep track of the survey calls and potential jurors as well as to generate letters, a
computer with a database and print merge capability will be needed. After completed
questionnaires have been entered into the database, a letter should be produced for each
potential juror from that day. A control number should be assigned to each survey
respondent. The number will be used as the identification of the potential juror until the final
selection is established. This helps to prevent bias in the selection process.
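As a minimal illustration of this bookkeeping (in Python, with hypothetical field names; any database or spreadsheet package with mail-merge capability would serve equally well):

import csv

# Illustrative only: record survey respondents under control numbers and
# produce a merge letter for each. Field names and the file are hypothetical;
# real letters would of course also draw the address from the same record.
LETTER = "Dear resident (ref. {control_no}),\nThank you for your interest in the project.\n"

def add_respondent(db_path, control_no, answers):
    with open(db_path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([control_no] + answers)

def merge_letters(db_path):
    with open(db_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            yield LETTER.format(control_no=row[0])

add_respondent("respondents.csv", "CJ-0001", ["age:30-44", "gender:F", "interested:yes"])
for letter in merge_letters("respondents.csv"):
    print(letter)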
• Sending initial materials to those considering participation
The day after the initial contact a packet of materials should be sent to the respondents who
said they would or ‘might’ be interested in participating. The packet for potential jurors
should include:
o a covering letter explaining the project
o a form to fill out and return
o a small stamped envelope for returning the form
o a fact sheet on the project.
When forms are returned, this should be indicated in the database and the corresponding
control number should be clearly indicated on the form, which should be saved.
• Selection of jurors and alternates
There will be a pool of people in each category who are willing to participate. The staff must
then choose the jurors and alternates needed for the right balance in each category and notify
them that they have been chosen as jurors or alternates.
It is advisable to first call to confirm the selected jurors and then arrange for the alternates (in
case a juror does not show up). Alternates should be asked to come the first morning.
If all the jurors arrive on time, the alternates will be dismissed. If a juror needs to be replaced,
the alternate that is demographically the most similar will be seated as a juror. Alternates are
typically paid an agreed-upon sum if they are dismissed. If they are seated, they receive the
same stipend as an original juror. It
is advisable to select three alternates for 18 – 24 person projects and two for 12 person
projects.
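'Demographically most similar' can be made operational by counting how many selection variables match, as in the following minimal sketch (in Python, with hypothetical variable names and invented profiles):

# Illustrative only: seat the alternate whose profile matches the absent
# juror on the largest number of selection variables (hypothetical fields).
VARIABLES = ["age_group", "education", "gender", "location", "attitude"]

def most_similar(absent_juror, alternates):
    def matches(alt):
        return sum(alt[v] == absent_juror[v] for v in VARIABLES)
    return max(alternates, key=matches)

absent = {"age_group": "45-64", "education": "secondary", "gender": "F",
          "location": "rural", "attitude": "undecided"}
alternates = [
    {"age_group": "30-44", "education": "secondary", "gender": "F",
     "location": "rural", "attitude": "undecided"},
    {"age_group": "45-64", "education": "tertiary", "gender": "M",
     "location": "urban", "attitude": "pro"},
]
print(most_similar(absent, alternates))   # the first alternate (4 of 5 variables match)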
• Notification of jurors and alternates
3 – 8 weeks before the jury hearings, a phone call should be made to the selected jurors and
alternates, confirming their participation. In addition, a formal letter acknowledging their
selection and providing detailed information should be sent. A sample information packet to a
selected and confirmed juror includes:
o letter
o juror expectations sheet
o maps, if necessary
o lodging/parking/special needs information
o stamped return envelope, if necessary.
It is highly advisable to make one or more follow-up calls, including one in the week before
the jury event.
A letter of thanks should be sent to those potential jurors who were not selected for the jury.
It should include a note about how to find information about the process.
d. Establishing the charge
The charge is one of the most important elements of the entire project. The charge will guide
the agenda, the witness selection, the deliberations and the form of the recommendations. It
must be clearly written and focused but without directing the jurors towards any bias. The
charge defines the scope of the project, so it must present a manageable task to the jury.
The project staff can draft a preliminary charge after consultation with the policy-makers.
This draft can then be revised and refined after further consultation with partners, moderators
and advisory committee members. It is the responsibility of the project director to ensure that
the charge is worded in such a way as to meet the needs of the policy-makers, to be unbiased,
focused and allow for useful recommendations to be developed in response to the charge.
The charge can consist of separate questions or of a series of linked questions that
build on one another. Ideally, there should be no more than three charge questions, including
sub-questions, although it can be difficult to satisfy all of the involved parties with so few.
e. Developing the agenda
Since key components of a Citizens Jury are the education of the jurors and the opportunity
for thoughtful deliberation, careful attention needs to be paid to the structure of the agenda
for the introductory, hearing and deliberation days. The agenda is based upon some
preliminary decisions that are made by the advisory committee and/or working group,
including:
o goals and objectives of the project
o scope of the project
o charge to the jury
o issues to be addressed
o timing of the hearing and number of days scheduled
o form of the final conclusions.
In addition, the following matters should be considered in setting the agenda.
In the education process of the jurors, there must be enough information presented to enable
them to have a good grasp of the issue at hand, but not an overdose of information. The
information must come from several points of view, balancing the perspectives of all relevant
stakeholders. Enough time should be provided for jurors to discuss what they are learning, for
them to have their questions answered and for them to deliberate and reach conclusions on
the final day.
The hearings are organised to utilise expert witnesses or presenters. One must decide when to
make use of ‘factual’ information and when to utilise advocates to present specific views or
arguments. Staff need to consult with advisors to recruit competent ‘witnesses’ who can
answer jurors’ questions about the issues.
Staff will have to decide how much information – if any – should be sent out to the jurors in
advance and then do so.
Some jurors say they would prefer to receive all materials in advance for their perusal before
arriving at the jury. However, there are risks involved. Jurors who are not good at reading
may find the information intimidating and may not show up on the first day. Furthermore,
since not everyone will read it, jurors will arrive with different levels of preparation for the
project. The staff and advisors need to design an agenda framework that divides the
information sessions into logical steps in acquiring the education needed. The information
should flow easily from an orientation to the Citizens Jury process to a general introduction to
the material and finally into the details of the issue. Enough time should be allowed along the
way for the jurors to understand how their own backgrounds or values may be influencing
their interpretations. Allowing time for the jurors to tell their own stories relating to the topic
is important for giving them ownership of the subject and process.
f. Selecting and recruiting expert witnesses
Expert witnesses include all persons who aid the jurors in understanding the issues central to
the charge to the jury. This goes from the neutral resource persons, who introduce the
vocabulary and history of the topic, to the experts, who either discuss all the options or
advocate one point of view.
• Definition of the role of expert witness
The role of the expert witnesses is to help jurors understand all aspects of the topics included
in the charge to the jury. Because the topics may be ones that the jurors have not thought
about before, witnesses need to be able to explain the complexities in a language that average
citizens can understand. In most projects, the witnesses will give brief presentations that
sketch out their perspective, but at least half of the scheduled time will be devoted to jurors’
questions.
• Neutral resource persons/presenters
The role of neutral resource persons is to familiarise the jurors with the vocabulary of a
complex topic, to explain the history behind a current problem and sometimes to lay out – in
a non-partisan or unbiased manner – possible options for solutions to the problem. These
persons may only participate at the beginning of a project, to set the stage for advocates who
will argue different points of view. Alternatively, they might be hired as experts who will
accompany the jurors through the entire project in order to assist them with questions they
may have as the other advocate witnesses present their opinions. However, this latter
approach risks introducing biases into the project and must be done with great care.
• Options for advocate witness selection
The advocate witnesses can be chosen in various ways:
Advocates can be selected to present each competing point of view in an adversarial context.
In this case, the advocates choose their own witnesses for the panels that consider different
aspects of the problem. In this scenario, a neutral resource person is usually chosen to orient
and advise the jurors.
In another method, project staff can choose a balanced group of experts, making sure to find
witnesses to represent both (or all) sides of the issue or to choose individuals who can discuss
all sides. There are two models within this method:
o Separate experts present specific positions that they favour or
o Panels of experts, both academics and practitioners, discuss all sides of the issues.
• Review of witness criteria
In the planning stages, the advisory committee may adopt criteria for selecting the panels of
experts. The staff and advisory committee should brainstorm to define the full range of existing
points of view on the topic at issue. An attempt should be made to include all points of view
within the scope of the jury’s charge.
When selecting witnesses, the staff needs to know whether witnesses are supporting a
particular position. It may also be necessary to consider criteria such as employment for a
particular organisation that stands to gain financially from a particular solution.
In the case of an advocacy method, it is important to choose advocates of equal status and
capability for all sides, so that the jurors are not swayed more by the advocate’s talent or
personality than by the facts presented. This is generally a challenge, because some people
present better than others.
The advisory committee should decide whether to make an explicit attempt to include
diversity as a criterion for witness panels. A diverse panel can improve credibility for the jury
panel (which is itself diverse), for the public and policy makers who follow the proceedings
and for the media who report the results.
Once key decisions are made about the method of witness presentation and the agenda, the
staff needs to create lists of possible witnesses for each witness ‘slot’, including neutral
resource persons, advocates and experts. Advice on possible witnesses can be sought from
many sources, in addition to the advisory committee members, such as: academics from a
variety of universities, professionals or policy makers in the field, legislators, private and
governmental agencies, think tanks or institutes, business leaders or chambers of commerce,
interest groups or lobbyists, reporters, special advocacy organisations.
• Recruitment of specific individuals
It is recommended to make initial contact with a possible witness by telephone. However, one
can also first send a letter or fax. Provide a concise description of the project and the role of
the witnesses. The witness selection criteria should also be mentioned. Determine whether the
person is interested and available on the hearing dates. If the person is interested, a covering
letter with follow-up materials should be sent immediately.
Sometimes it is necessary to contact more witnesses than will actually be needed in order to
have enough from which to draw a balanced panel and ensure that they can all come on the
day chosen.
• Materials to send to witnesses
The information packet for the selected witnesses can include some or all of the following:
o a covering letter
o information about the Citizens Jury and the role of the witnesses
o witness guidelines
o details about the current project, including the charge to the jury
o information about the topics the witness is being asked to cover in his/her
presentation
o specific date(s) and time(s) for the witness’s presentation(s), as well as the time limit
o inquiry about the audio/visual equipment required by the witness
o request for background information and/or a brief position statement
o request for witness to prepare copies of any presentation handouts
o request for a one-page summary of the witness’s position or a questionnaire
o information about the specific location of the hearings
o travel vouchers or reservation information
o information on any hotel accommodation that will be provided.
This information can be sent in two stages, if preferred. Once a final selection has been made,
any experts who are not needed or who are not available on the appropriate day should be
contacted.
• Confirmation
About a week prior to the hearings, the project director should call all witnesses to confirm
their participation, remind them about the details, answer any questions, nudge them to return
information and forms if they have not done so and to check on audio-visual equipment
requirements.
Logistical issues
• Site choice
The staff is responsible for finding and reserving a meeting location, handling all the site
details during the event and making hotel reservations for all persons who require
accommodation.
The meeting room should be large enough to accommodate a U-shaped table set up to seat
the jurors comfortably. It should be large enough to allow jurors to split up into four or five
groups or – even better – the site should have smaller rooms available for this purpose. The
moderator and witnesses will sit or stand at the open end of the U-shape, so allow space for a
podium, table and projector. The room must also have sufficient electrical power to
accommodate the electronic media.
The following items will (or might) be necessary:
o at least two flip charts on easels
o space to hang the flipchart sheets
o tape or tacks to hang sheets
o projector (for power point presentations)
o a projector screen
o a podium (for individual speakers)
o a table (for panels)
o a microphone
o bathroom facilities
o a photocopy machine
o telephone
o laptop computer
o printer
o video camera
o extra chairs
o pens, pencils, paper.
If an audience is expected, additional chairs should be set up in an unobtrusive location
behind the jurors.
• Food and Accommodation
Arrangements will need to be made to provide the following:
o meals during the hearing days
o hotel accommodation, as required
o parking
o travel arrangements
o travel reimbursements
o stipends.
• Information
Prepare the following materials:
o Juror Binders
Include background information, project overview, description of the Citizens Jury
process, list of participants, charge, current agenda, witness list, rules of procedure, blank
paper, copies of witness presentations, a set of dividers and space to insert handouts and
additional notes, etc.
o Staff Binders
These should include the same information as the jurors’ binders, as well as a telephone
list for all relevant persons (staff, jurors, witnesses, advisory committee members,
caterers, etc.) and a list of logistical details, such as break times.
o Public Information Materials
A table should be set in a convenient yet unobtrusive location to set out public
information materials. This may include additional press packets and information for the
general public. Include: Project overview, charge, agenda, witness list, juror list, extra
copies of witness handouts, sign-up sheet for final report (including name and address).
o Media Briefing Packet
Some possible contents include: agenda, project contact person and phone numbers,
philosophy of the Citizens' Jury, project overview, revised schedule, jury selection explanation,
juror list, reporting information, expert witness panel lists, advisory committee list, additional
information on the issues to be discussed, commentary.
The Citizens Jury Event
a. Introductory day
Various housekeeping details need to be addressed on the first day. The morning of the first
day is usually devoted primarily to orientation of the jurors to the process and to each other. It
is important to emphasise that the jurors are the central players in the project
and to build this notion in their minds. Time should be allowed for the following:
o Jurors introduce themselves to each other.
o Staff reviews the background of the Citizens Jury concept.
o Staff explains the details of the current project.
o Distribute guidelines and rules of the procedure.
After the jurors state their name, where they live and what they do for a living, the
moderators can pose a question for each juror to answer. The question should not be too
personal or controversial but something that will provide a bit of unique information about
each individual. It is also important to encourage the jurors to learn about each other through
discussion of their own experiences concerning the issues they will investigate further with
the expert witnesses. This helps to get biases on the table as well as to create a group feeling.
If the process will involve many decisions made by voting, jury members should be
introduced to the voting process early on so that they are comfortable with it.
b. Hearing days
The next days are dedicated to the education of the jurors through the presentations and
questioning of expert witnesses. The amount of time dedicated to this can vary.
In most juries, advocates are used to present opposing cases for particular points of view. The
advantage of this is that jurors hear consistent cases, pro and con, from start to finish. The
disadvantages are the adversarial nature that this approach builds into the process and the
reliance that the jurors must place on the advocate for the choice of witnesses and
presentations. If the jury concerns a highly contentious issue, the advocate system may be
necessary. If this approach is chosen, the hearings should begin with some factual
background information to provide the context of the different views that will be presented.
In cases where the issue is not highly contentious, it may be more productive to let the jurors
sort out the issue without being directed by advocates and rather have them assisted by
experts carefully chosen by the staff. This system allows the witnesses to express their
opinions freely because they are not limited to advocating only one point of view.
It can be useful to utilise panels when a number of different perspectives on the same issue
are presented. For example, each perspective can be given a set amount of time for a
presentation, followed by clarifying questions. After all perspectives have been presented,
construct a panel with each presenter as a member of the panel. The jury can then ask
questions to several members of the panel at the same time. The panel structure helps to
illustrate the areas of agreement and disagreement between the various perspectives.
Allow plenty of time for the jurors to discuss and deliberate throughout the week. Some of
the discussion in the early parts of the hearings will be to assist the jurors in processing the
information they hear. At other times, the discussion will take the form of deliberations. If the
agenda is divided into stages, the jurors may deliberate and reach certain conclusions after
each of these stages.
Staff should be ready to provide material to help the jurors organise the information they
hear, such as colour-coded sheets to take notes on different sections of the charge. The staff
can develop scoring sheets and voting forms and provide other materials, if requested by the
jurors.
During the hearing days, the staff needs to monitor the comfort level of the jurors with the
agenda. It may be necessary to adjust it somewhat. Jurors tend to become more talkative as
the process progresses, so a question period that is long enough on the first day may be too
short on the third day. Ideally, however, the agenda should be defined in advance and remain
unchanged.
It can be very useful to hold a staff meeting at the end of every day of the hearings. This can
include the project staff, moderators and sometimes representatives of the policy bodies or
partner organisations. These meetings should be run by the project director and can be used to
discuss the day, the next day and any issues that may have been raised during the hearings.
c. Deliberation
The final stage, which may last a day or longer, is for deliberations aimed at reaching
conclusions on the charge to the jurors. At this point, a clear charge to the jury will be a great
advantage because it frames the deliberation discussions and will lead to clear decisions,
either by vote or consensus. The moderators should have a clear understanding of the kind of
decisions the jurors must reach and a strategy on how they can best get there. Moderators
need to take a very active role in moving the discussion along to cover all the necessary
points.
Different kinds of deliberation strategies are needed for different juries. If the jury charge is a
fairly straightforward vote or decision, the deliberations should probably be done with the
whole group. If the goal is more complex, like designing a reform plan, it may be better to
divide the job into pieces and have the jurors split into small groups to work on different
sections. The groups can then report back to the whole and discuss the results in order to find
an agreement. Ideally, the work of the small groups should be typed up and brought back to
the whole group so that everyone has the proposed language in front of them as they discuss
it. Once the jurors agree on their conclusions, these are typed up and brought back to them
again so that they can review their final product and give their endorsement or request
changes. This final review is an essential part of the process.
It is useful to have a staff member and a laptop computer present at deliberations so that the
recommendations can be typed, printed and copied and then presented to the group as a
whole.
In answering a charge question, it is necessary to provide background information on how the
jury arrived at a specific recommendation, especially if the charge question is framed as a
‘yes’ or ‘no’ type question. This background information may consist of rankings of the
various options, votes on the various proposals presented, pro/con evaluations of the options,
etc. This information is often the most useful piece to the stakeholders because it provides the
justification for the recommendation.
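One simple way of recording such rankings is a points tally (Borda-style), sketched below with invented option names; in practice the jury and moderators would agree on their own scoring method.

from collections import Counter

# Illustrative only: tally jurors' rankings of options as points
# (best-ranked option gets the most points), so that the background
# to a recommendation can be reported alongside the final answer.
def borda_tally(rankings):
    points = Counter()
    for ranking in rankings:                     # ranking = list, best option first
        n = len(ranking)
        for place, option in enumerate(ranking):
            points[option] += n - place          # 1st gets n points, last gets 1
    return points.most_common()

juror_rankings = [
    ["option A", "option C", "option B"],
    ["option C", "option A", "option B"],
    ["option A", "option B", "option C"],
]
print(borda_tally(juror_rankings))
# [('option A', 8), ('option C', 6), ('option B', 4)]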
Follow-Up
a. Evaluations
After the jurors have concluded their work (and before the news conference), they should be
given an evaluation form and time to complete it. Evaluation forms can also be given to each
of the witnesses – as well as to the staff and other project participants. Include at least one
standardised question, asking their opinions about the fairness of the process.
b. Final news conference
The jurors should be prepared for the final news conference and they should elect two
spokespersons (usually one male and one female) to present their work to the media and
public. They should be briefed on the questions the reporters may ask and provided with a
copy of the initial project report before the conference.
c. De-briefing with project staff
Another part of evaluating the project involves meeting with the staff to share opinions about
the success of the project and suggestions for improvement. This should take place soon after
the completion of the project. A celebratory atmosphere can facilitate the ease of the
conversation.
d. Media
All newspaper articles about the project should be collected; selected ones will form part
of the final report. In addition, people can be recruited to record radio and television news
broadcasts. If the jury proceedings have been aired on the radio, staff should ask the station to
keep track of the comments on listener hotlines.
e. Dissemination of the final report
Once the final version of the Citizens’ Jury report has been prepared, it should be distributed
(with letters of appreciation) to the staff and all participants in the Citizens Jury as well as
made available to the public.
Resource considerations
a. Timelines
The complexity and contentiousness of the issue will have the greatest impact on the project’s
timeline. However, a Citizens Jury project should not take more than 4-5 months to plan,
once funding is secured. (Shorter times are possible with the simpler processes, such as those
sometimes used in the U.K. or Australia.) Two possible ways of structuring the Citizens Jury
are presented here. Both timelines begin AFTER a contract or agreement has been signed and
funding has been secured.
The first timeline divides the planning into two separate phases, with most of the planning
occurring in Phase One and implementation occurring in Phase Two. The primary advantage
of this structure is that the staff is given an opportunity to evaluate the progress of the project
prior to the initiation of the survey (which is a significant cost).
The staff can suggest changes regarding the charge, agenda, witness list, etc., allow more
time for revision of the plans, or even choose to terminate the project. However, Phase
One is very intensive, requiring considerable work from the staff and presenting potential
scheduling difficulties for the advisory committee members.
Phase one
Week 1: Establish working group
Weeks 1-2: Select advisory committee
Weeks 1-8: Consult with working group
Week 4: 1st advisory committee meeting: develop preliminary charge, agenda, witness list
Week 6: 2nd advisory committee meeting: charge, agenda, witness list development
Weeks 6-9: Design phone survey
Week 7: 3rd advisory committee meeting: further develop preliminary charge, agenda, witness list
Week 9: 4th advisory committee meeting: finalise preliminary charge, agenda, witness list
Week 9: Review progress

Phase two
Week 1: Purchase random phone numbers
Weeks 2-3: Conduct phone survey; Mailing to survey respondents
Weeks 2-4: Set jury targets; Meet with moderators
Week 5: Select jurors
Weeks 1-10: Finalise charge, agenda, witness list; Recruit and prepare witnesses; Finalise logistics; Consult with working group; Additional advisory committee meetings, if necessary
Weeks 7-9: Prepare juror and staff handbooks
Week 9: Confirm all jurors, witnesses and logistics
Week 10: Jury Hearings; Friday: issue initial report
Week 13: Issue final report
The second generic timeline has the planning elements occurring concurrently with jury
selection (including survey and mailings) and logistical arrangements rather than in two
distinct phases. The advantage is that the total time is reduced. However, there is less
flexibility to address any problems or disagreements that arise, since jurors will already have
been contacted with the dates. In addition, staff will have to juggle more tasks
simultaneously.
Week 0: Receive project approval
Weeks 1-18: Consult with policy-makers on design elements
Week 1: Select advisory committee
Week 2: Design telephone survey; Develop jury selection targets; Develop preliminary charge ideas
Week 3: Purchase random telephone numbers; Select site
Weeks 4-14: Consult with advisory committee
Week 5: Conduct telephone survey; Mail information packet to survey respondents
Week 6: Develop preliminary charge, agenda, witness list; Finalise site
Week 7: Charge, agenda development
Week 8: Select jury; Discuss: charge, agenda development
Weeks 9-11: Meet with moderators; Discuss: charge, agenda development
Week 12: Finalise charge, agenda, witness list; Recruit witnesses
Week 13: Confirm and prepare witnesses; Confirm jurors
Week 14: Confirm logistical details; Prepare juror & staff handbooks
Week 15: Jury hearings; Friday: issue initial report
Week 18: Issue final report
b. Budget
The following are the main budgetary items in a Citizens Jury project:
o Personnel: project staff (possibly including project director, manager, assistant(s),
clerical staff), stipends for members of the jury, moderators, honoraria for expert
witnesses
o Travel: jury members, experts, moderators
o Accommodation: jury members, experts, moderators
o Food: meals for jurors, experts and project staff during the event
o Recruitment and Promotion: recruitment of jurors, recruitment of experts, Citizens
Jury promotion and advertising
o Communications: printing of draft and final report and dissemination
o Facilities: location for Citizens Jury event
Staff time is the most significant cost of a Citizens Jury project. The amount of staff time
needed depends on many factors, including experience, competence, contentiousness of the
issue, length of the project, etc.
Additional best practices and potential pitfalls
In order to guarantee that the jury is representative of the population in question, a reliable
and open procedure should be established for obtaining consensus on the demographic or
attitudinal characteristics to be taken into account when setting up the jury (at the beginning
of the process), as well as for ratifying the recommendations of the jury with those they are
supposed to represent (at the end of the process).
Requiring policy makers to be active participants in the Citizens Jury process, to ask and be
asked questions and to put forward their points of view, would make the method more
powerful. It would enable citizens to dialogue directly with those who govern them, involving
them more directly in the policy arena.
References and Resources
Armour, A. (1995). The Citizens’ Jury Model of Public Participation: A Critical Evaluation.
In O. Renn, T. Webler and P. Wiedemann (Eds), Fairness and Competence in Citizen
Participation, pp. 175-187. London: Kluwer Academic Publishers.
Crosby, N. (1995). Citizens Juries: One Solution for Difficult Environmental Questions. In O.
Renn, T. Webler and P. Wiedemann (Eds), Fairness and Competence in Citizen Participation,
pp. 157-174. London: Kluwer Academic Publishers.
Crosby, N. (2003). Healthy Democracy: empowering a clear and informed voice of the
people. Edina, Minnesota: Beavers Pond Press. (May be ordered through
www.BookHouseFulfillment.com )
Glenn, J. (Ed.) Futures Research Methodology. Version 1.0. AC/UNU The Millennium
Project.
ICIS Building Blocks for Participation in Integrated Assessment: A review of participatory
methods.
Veasey, K. (2002). Citizens Jury Handbook, updated and revised version. Provided by the
Jefferson Center.
Consensus Conference
Planning Cells
• Brief description
The Planning Cell method engages approximately twenty-five randomly selected people, who
work as public consultants for a limited period of time (e.g. one week), in order to present
solutions for a given planning or policy problem. The cell is accompanied by two process escorts, who are responsible for the information schedule and the moderation of the plenary
sessions. A project may involve a larger or smaller number of planning cells. In each cell
participants acquire and exchange information about the problem, explore and discuss
possible solutions and evaluate these in terms of desirable and undesirable consequences.
Experts, stakeholders and interest groups have the opportunity to present their positions to the
cell members. The final results of the cells’ work are summarised as a ‘citizen report’, which
is delivered to the authorities as well as to the participants themselves.
• Detailed description
When to use
The Planning Cells work best in a situation in which an urgent problem has to be resolved in
a short period of time and when different options, each posing different benefits and risks, are
available. The process works optimally when the issue is not too controversial and has not
already polarised the attitudes of the affected population. However, Planning Cells can
address even highly controversial issues if the majority of participants are selected by random
process. The following criteria should be used to evaluate the suitability of the Planning Cells
procedure for a given application.
When all or most of these questions are answered positively, the Planning Cell method will be suitable.
• Variability of options: Do the participants have the choice of selecting one option out of a
variety of options that are all feasible in the specific situation?
• Equity of exposure: Are all groups of the community or the respective constituency
exposed in some way to the potential disadvantages of the proposed options (to avoid a
distinction between affected abutters and indifferent other citizens)?
• Personal experience: Do participants have some experiences with the problem and do
they feel competent about giving recommendations after they are further educated about
the problem and the remedial options?
• Personal relevance: Do participants judge the problem as serious enough to sacrifice
several days of their time to work on solutions?
• Seriousness and openness of policy-makers: Are the policy-makers willing to accept, or at least carefully
consider, the recommendations of the Planning Cell(s), or do they pursue hidden agendas?
Procedure
Overview
• Preparation Phase:
• Recruit personnel
• Design programme
• Recruit citizen advisors
• Selecting and Recruiting Experts and Advocates
• Logistics
• Conducting the Planning Cells:
General Programme for the Planning Cells of a Project
Planning Cell 1 (25 persons):
Day 1: Work units 1 – 4 (maximum 4 per day)
Day 2: Work units 5 – 8
Day 3: Work units 9 – 12
Day 4: Work units 13 – 16
Planning Cell 2 (25 persons):
Day 1: Work units 1 – 4
Day 2: Work units 5 – 8
Day 3: Work units 9 – 12
Day 4: Work units 13 – 16
Draft report written and sent to all participants for review. Representatives of each Planning
Cell (if multiple Cells were conducted on the same topic) meet to criticise and improve the
report. Final draft of report produced and disseminated.
Daily Programme for a Planning Cell
Planning Cell 1
Day 1
Work unit 1: Sub-theme A (a specified task)
Phase I: Plenary. Participants receive information on sub-theme A through reports, videos,
field tours, presentations by experts and/or interest-group representatives, etc.
Phase II: Small groups. The Planning Cell divides into 5 small groups of 5 persons each. The
subgroups work on an assigned task, first discussing the viewpoints and information and then
generating recommendations.
Phase III: Plenary. The results of the work of the small groups are presented to the plenary.
The moderators collect these results on flipcharts. All participants evaluate each of the
recommendations, using an agreed upon method.
Work unit 2: Sub-theme B (idem, for another specified task)
…
Work unit 4: Sub-theme D (idem, for another specified task)
Day 2: Work units 5 – 8
Final Report Production and Dissemination:
• preparation of draft report
• critique and improvement session
• production of the final citizens’ report
• dissemination of the citizens’ report.
Preparation
a. Recruit Personnel
It will be necessary to recruit an organisational committee and moderators for the Planning
Cells.
• Organisational Committee
Several people will be needed to take responsibility for the following tasks:
o assembling information on each of the sub-themes to be addressed in the work units
o designing the programme and schedule
o recruiting the citizen advisors
o finding a suitable location for the Planning Cells event
o recruiting experts and interest group representatives to present their opinions
o making travel, accommodation and catering arrangements
o publicising the event
o compiling the draft report and revising it according to the input of the advisors
o producing and disseminating the final report.
• Moderators
Recruit two moderators and a conference assistant (possibly someone from the organisational
committee) for the duration of the Planning Cells.
b. Programme design
The organisational committee must develop a work programme. In order to do this it is first
necessary to become familiar with the facts and the context of the problem(s) to be addressed.
Request all required documents, plans, previously issued assessments, etc. from the
appropriate authorities. Pursue discussions with the various interest groups and stakeholders
in order to define the problem itself. A website can be established at which all interested
persons are invited to inform themselves about the project's development and to express their
ideas and opinions at this early stage.
Once the problem is defined, the programme content and schedule needs to be established.
The facilitator subdivides the proposed problem into distinct, thematically specific ‘work
units’. These units fill a methodological function by helping the advisors to address specific
issues and questions before generating final recommendations. A maximum of four of these
units can be addressed each day (thus 16 units can be addressed in four days). Depending
upon the complexity of each work unit, it might be necessary to schedule fewer units and thus
allow more time for some unit(s). Schedule the units across the span of several days. The
number of days required will depend upon the number of work units and the time allotted to
each, which vary with the complexity of the issue being addressed. Planning cells usually
require three to five days, with four days most often being sufficient.
Essential to the validity of the results of the Planning Cell is that all camps and interests be
equally represented in the information package and that they be allowed to present their own
case. It is thus imperative that the organisational committee include in the programme as
many diverse and controversial points of view as is possible. It is their job to ensure that all of
the important topics are addressed and that the information is not partial to one perspective.
c. Recruitment of citizen advisors
An important characteristic of planning cells is the random selection of the participants. A
planning cell consists of 25 citizens. These are selected from the pool of all citizens over the
age of 16 in the relevant area, using a random chance procedure. This guarantees that every
citizen has a chance to become one of the advisors and that the final advisory group will be
heterogeneous and representative of the relevant population.
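As a minimal illustration of such a random draw from a population register (in Python; the register format and names are invented, and in practice the register would come from the relevant municipal or regional authority):

import random

# Illustrative only: draw citizen advisors at random from a register of
# eligible residents aged 16+ and split them into Planning Cells of 25.
def draw_planning_cells(register, n_cells, cell_size=25, seed=None):
    rng = random.Random(seed)
    drawn = rng.sample(register, n_cells * cell_size)
    return [drawn[i * cell_size:(i + 1) * cell_size] for i in range(n_cells)]

register = [f"resident_{i}" for i in range(10_000)]   # invented register
cells = draw_planning_cells(register, n_cells=2, seed=7)
print(len(cells), "cells of", len(cells[0]), "advisors each")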
Arrangements have to be made to release all participants from their daily duties, both
professional and personal (such as childcare). Those persons who do not have the opportunity
to take a paid sabbatical must be compensated for any lost income as well as travel expenses.
In addition, any costs to provide an alternative for the care of children, elderly or disabled
family members must also be covered for the duration of the planning cells.
Refer to the General Guidelines for tips on recruiting participants.
Sending initial materials to those considering participation. The day after the initial contact a
packet of materials should be sent to the respondents who said they would or ‘might’ be
interested in participating. The packet for potential participants should include:
o a cover letter explaining the project
o a form to fill out and return
o a small stamped envelope for returning the form
o a fact sheet on the project.
When forms are returned, this should be indicated in the database and the corresponding
control number should be clearly indicated on the form, which should be saved.
Selection of citizen advisors and alternates.
There will be a pool of people in each category who are willing to participate. The committee
must then choose the participants and alternates needed for the right balance in each category
and notify them that they have been chosen as citizen advisors or alternates. It is advisable to
first call to confirm the selected citizen advisors and then arrange for the alternates (in case
one does not show up). Alternates should be asked to come the first morning.
Notification of citizen advisors and alternates.
4 – 8 weeks before the Planning Cell begins, a phone call should be made to the selected
citizen advisors and alternates, confirming their participation. In addition, a formal letter
acknowledging their selection and providing detailed information should be sent. A sample
information packet to a selected and confirmed citizen advisor includes:
o letter
o information sheet on duties for citizen advisors
o maps, if necessary
o lodging/parking/special needs information
o stamped return envelope, if necessary.
TIP: It is highly advisable to make one or more follow-up calls, including one in the week
before the Planning Cell event.
A letter of thanks should be sent to those potential citizen advisors who were not selected.
It should include a note about how to find information about the process.
d. Selecting and recruiting experts and advocates
Experts are resource persons, who introduce the citizens to the vocabulary and history of the
topic and discuss all the options. Advocates represent interest groups that present their point
of view.
The organisational committee must choose a balanced group of advocates, making sure to
find experts and advocates to represent both (or all) sides of the issue. Two possible models
include:
• separate experts present specific positions that they favour or
• panels of experts, both academics and practitioners, discuss all sides of the issues.
Review of Criteria for Experts and Advocates.
In the planning stages the staff of the organisational committee may adopt criteria for
selecting the experts and advocates. They should first brainstorm to define the full range of
existing points of view on the topic at issue. An attempt should be made to include all points
of view within the scope of the issue.
When selecting experts and advocates, the staff members need to know whether witnesses
are supporting a particular position. It may also be necessary to consider criteria such as
employment for a particular organisation that stands to gain financially from a particular
solution.
Once key decisions are made about the method of experts’ and advocates’ presentation and
the agenda, the staff needs to create lists of possible experts and advocates for each ‘slot’ in
each work unit. Advice on possible experts and advocates can be sought from many sources,
in addition to the organisational committee members, such as: academics from a variety of
universities, professionals or policy makers in the field, legislators, private and governmental
agencies, think tanks or institutes, business leaders or chambers of commerce, interest groups
or lobbyists, reporters, special advocacy organisations.
Recruitment of specific individuals.
It is recommended to make initial contact with a possible expert or advocate by telephone.
However, one can also first send a letter or fax. Provide a concise description of the project
and the role of the experts and advocates. The selection criteria should also be mentioned.
Determine whether the person is interested and available on the date of the relevant work
unit. If the person is interested, a cover letter with follow-up materials should be sent
immediately. Sometimes it is necessary to contact more experts and advocates than will
actually be needed in order to have enough from which to draw a balanced panel and ensure
they can all come on the day chosen.
Materials to send to experts and advocates.
The information packet for the selected witnesses can include some or all of the following:
o a covering letter
o information about the Planning Cell and the role of the experts and advocates
o details about the current project, including the main issue and each work unit
o information about the information the expert or advocate is being asked to cover in
his/her presentation
o specific date(s) and time(s) for the expert’s or advocate’s presentation(s), as well as
the time limit
o inquiry about the audio/visual equipment required by the witness
o request for background information and/or a brief position statement
o request for witness to prepare 30 copies of any presentation handouts
o request for a one-page summary of the expert’s or advocate’s position or a
questionnaire
o information about the specific location of the hearings
o travel vouchers or reservation information
o information on any hotel accommodation that will be provided.
This information can be sent in two stages, if preferred.
Once a final selection has been made, any experts who are not needed or who are not
available on the appropriate day should be contacted.
Confirmation
About a week prior to the hearings, the project director should call all experts and advocates
to confirm their participation, remind them about the details, answer any questions, nudge
them to return information and forms if they have not done so and to check on audio-visual
equipment requirements.
e. Logistics
Site and equipment
The staff of the organisational committee is responsible for finding and reserving a meeting
location, handling all the site details during the event and making hotel reservations for all
persons who require accommodation.
The meeting room should be large enough to accommodate a U-shaped table set up to seat
the citizen advisors comfortably. It should be large enough to allow the advisors to split up
into five small groups or – even better – the site should have smaller rooms available for this
purpose. The moderator, advocates and experts will sit or stand at the open end of the U-shape, so allow space for a podium, table and projector.
Conducting the Planning Cells
Please refer to the table above for a summary of a typical sequence of a Planning Cell.
The schedule for the Planning Cells is organised into multiple 'work units', each of which
addresses a specified task that is part of the larger issue or problem. Each work unit
comprises three major components:
• Phase I: reception of information through lectures, field tours, videos, written material and
other media
• Phase II: processing of information through small group discussions, plenary sessions and
hearings; and
• Phase III: evaluation of the impacts of the options through small-group discussions,
personal judgements and consensus-building exercises in the plenary.
Each of these phases is described in greater detail below. After all of these work units have
been conducted, there is a final evaluation and then the summarising citizens’ report is
compiled.
These work units should not be seen as a sequence of separate decisions but rather as a
progressive opinion-building process that is completed during the last units of the final day.
The results of each unit can be seen as provisional results that can elucidate various parts of
the final result.
Phase I: Information presentation.
Informing the participants about the policy options and their likely consequences is the most
vital part of the whole procedure. Their common sense and lay understanding of the topics
being addressed are supplemented with factual information and the perspectives of all
interested parties.
At the beginning of each work unit, the citizen advisors are informed about various aspects of
the issue by experts, interest group representatives and so forth in the form of reports,
community visits or field tours, videos, lectures, written material, photographs, etc.
Afterwards, the advisors have the opportunity to ask specific questions. This phase is
facilitated by the moderators and assistant, who are responsible for steering the process in a
timely fashion.
Phase II: Small group discussions.
The second major component of the Planning Cell procedure is the elicitation of values,
criteria and attributes and the assignment of relative weights to the different value
dimensions. This is the aim of the discussions between the citizen advisors subsequent to
each information phase. The discussions take place in small groups of five persons, which
enables less talkative persons to express their ideas. The constitution of the small groups
should be changed at regular intervals and is determined by lottery, as this helps to prevent
the dominance of any individual opinions. Each small group should be given a clearly defined
task (pertaining to the current work unit) and time frame. The discussions serve to place all of
the received information in relation to the advisors’ personal experience and facilitate the
formation of their opinions. In contrast to the plenary sessions, the moderators do not play a
role in the small group discussions.
The small groups will produce a recommendation based upon their discussion and, if
necessary, some kind of voting procedure. The members can choose their own method. In
several cases, methods derived from Multiattribute Utility Theory have been used. In these
procedures, the citizen advisors are first asked to rate each decision option on each criterion
that they deem important. Each criterion is weighted against each other criterion, resulting in
a matrix of relative weights and utility measures for each option and each criterion. Both tasks, the transformation into utilities and the assignment of trade-offs, are performed individually and in the small groups.
Based upon their discussions and any voting procedures, the small groups produce a
recommendation on the specified task of the given work unit.
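To make the arithmetic behind such a weighted Multiattribute Utility exercise concrete, the following minimal sketch in Python (the option names, criteria, ratings and weights are invented for illustration and are not prescribed by the Planning Cell procedure) computes a weighted utility score for each option:

    # Minimal weighted-sum illustration of a Multiattribute Utility exercise.
    # Options, criteria, ratings (0-10) and weights are invented examples.
    criteria_weights = {"cost": 0.5, "environment": 0.3, "employment": 0.2}  # weights sum to 1

    ratings = {  # each option rated on each criterion by the small group
        "option A": {"cost": 7, "environment": 4, "employment": 6},
        "option B": {"cost": 6, "environment": 8, "employment": 5},
    }

    def weighted_utility(option_ratings, weights):
        """Return the weighted sum of the criterion ratings for one option."""
        return sum(weights[c] * option_ratings[c] for c in weights)

    for option, option_ratings in ratings.items():
        print(option, round(weighted_utility(option_ratings, criteria_weights), 2))
    # The option with the highest score would be carried forward as the
    # small group's provisional recommendation.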
Phase III: Small group presentations to the plenary and evaluations.
The discussions in the small groups lead to various proposals and recommendations regarding
the specified task for the work unit. The next step in each round is the presentation of the
results of the small work groups to the plenary. The moderators should collect all of the
recommendations on flipcharts. These recommendations are subsequently evaluated by all of
the citizen advisors.
The evaluations can take place in various ways. Some possibilities include assigning grades
or points, filling out personal evaluation forms or having a plenum vote on the various
proposed alternatives. The results of these evaluations are recorded by the moderators and
assistant and will later be compiled into the final report.
Final Report Production and Dissemination
a. Production of citizens’ report
The moderators have the task of summarising the initial results of the planning cell(s) in the
form of a citizens’ report. The report should include a description of the problem and task, a
description of the entire procedure (selection of the advisors, process of the planning cells,
voting process, etc.) and the results of each of the work units. The purpose is to make the
entire process transparent and comprehensible.
Approximately two months after the conclusion of the Planning Cell meetings, the report is
first presented to all of the advisors for their authorisation. All of the participants, or some
representatives of each Planning Cell if multiple Cells were conducted on the same topic,
meet once more to review, criticise and improve the report.
The organisational team incorporates the comments into the report and finalises it for
publication.
b. Dissemination
The final citizens’ report (the results of the planning cell) can be presented to the contracting
party and published. The results can serve as a decision aid to relevant political institutions.
Resource considerations
Any given project may incorporate multiple Planning Cells, whereby each Cell after the first one will cost less. Each Planning Cell requires approximately two months of
preparation, four or five days for the main event and two to three months afterward: a total of
approximately five months. This will vary according to the complexity of the issue being
addressed. The main budgetary items include:
o Personnel: organisational committee members, moderators, daily stipend for 25
participants, plus costs to free them from any duties
o Travel: for organisational committee members (if applicable), for participants, for
moderators
o Accommodation (only necessary for all-day and non-local events): for organisational
committee members, for participants, for moderators
o Food: meals and refreshments for each day of event
o Recruitment and Promotion: recruitment of personnel and citizen advisors
o Communications: printing costs to produce final report and any other information, as
required, publishing and dissemination costs for final report
o Facilities: location for the Planning Cell to meet
Additional best practices and potential pitfalls
The advantages of the planning cell include:
o The random selection of the citizens increases the acceptance of the results because
they are representative of the relevant population.
o The results of the planning cell are completely open. In contrast to some participatory
methods, there are no pre-defined solutions. Rather, the citizen advisors develop their
own solutions and recommendations based upon their experience in the planning cell
process.
o The recommendations of the citizen advisors tend to clearly promote action in, and
protect the interests of, the general community. Citizens do not try to push through
their own individual interests but seek the well-being of the community as they
understand it.
o Planning cells are processes of political education. As a side effect, the participants
learn about various institutions, processes, pressures and constraints involved in
political decision-making.
o Planning cells provide an opportunity to learn about the interests of others. By
bringing together people of diverse ages, socio-economic and educational
backgrounds, the process facilitates contact and understanding between people with
very different perspectives, who otherwise might never meet each other.
Drawbacks and Limitations of Planning Cells:
Planning Cells are not well suited for issues that pose major inequities between different
regions or social groups. In these cases, randomly selected citizens are not perceived as
legitimate negotiators for the groups that face these inequities. In addition, decisions
involving only a yes-no alternative are inappropriate for Planning Cells because participants
tend to select the ‘easy’ solution of objecting to any new development, especially if the
affected community does not equally share the benefits.
Another problem associated with Planning Cells is accountability and long-term planning.
Since citizens are not responsible for implementing the final decision, they may make choices
that are not financially or physically feasible in the long run. Although Planning Cells could
be reconvened several times or different panels could be organised for the same subject over
a longer period of time, this does not constitute the same public control as having elected
officials who face elections and may be legally accountable for their actions. The question of
how much authority these panels should be given was also a major point of criticism in a
review of participation models by Fiorino (1990).
References and Resources
Dienel, P. (1989) Contributing to Social Decision Methodology: Citizen Reports on Technological Projects. In C. Vlek and G. Cvetkovich (eds.), Social Decision Methodology for Technological Projects, pp. 133-151. Dordrecht: Kluwer Academic Publishers.
Dienel, P. and Renn, O. (1995) Planning Cells: A Gate to ‘Fractal’ Mediation. In O. Renn, T. Webler and P. Wiedemann (eds.), Fairness and Competence in Citizen Participation, pp. 117-140. Dordrecht: Kluwer Academic Publishers.
Fiorino, D. J. (1990) Citizen Participation and Environmental Risk: A Survey of Institutional Mechanisms. Science, Technology, and Human Values, 15(2), 226-243.
Seiler, H. (1995) Review of ‘Planning Cells’: Problems of Legitimation. In O. Renn, T. Webler and P. Wiedemann (eds.), Fairness and Competence in Citizen Participation, pp. 141-155. Dordrecht: Kluwer Academic Publishers.
6.3 Methods for democratisation
Participatory planning
Brief description
Participatory planning tools and techniques enable participants to influence and share control
over development initiatives and decisions affecting them. The tools promote sharing of
knowledge, build up commitment to the process and empower the group to develop more
effective strategies. The traditional approach in development work is an ‘external expert
stance’ where the assessors place themselves outside the local system that they are
investigating. The local non-experts (local citizens, decision-makers, or interest group
representatives) are considered to be sources of information from which the experts collect
information, assess it and convert it into a development strategy or project. This strategy or
project usually requires behavioural changes on the part of the local people. However, this
expert-based approach usually does not bring about the desired social change. This failure has two reasons: it is difficult for the intended users to learn the value and rationale of new social behaviours specified by an expert, as they have not been through the same learning process as the experts; and the strategies often fail to address the problem because experts external to the local system miss possibilities and opportunities that are obvious to those within the system. The failure of the ‘external expert stance’ approach led to experimentation with exercises in which the local stakeholders generate and internalise information in a social learning process. In such exercises stakeholders invent the social practices that they are willing to adopt, and social change is stimulated. These participatory approaches allow local stakeholders to make informed commitments to the project.
Detailed description
The participatory planning methods developed by the World Bank fall into two classes,
depending on the type of actors they attempt to reach:
• workshop based methods that engage powerful, high-level decision makers, experts and
interest group representatives, for example government officials, technical experts (i.e.
forestry, conservation, energy, water) and representatives from NGOs. These actors either
affect the outcome of the project or are directly affected by the project. It is critical to get
these relatively powerful people on board in order not to alienate them or provoke
opposition, which may result only in compounding the problem.
• community based methods that engage citizens (often poor and disadvantaged) in
dialogue to address issues at the community level. It is important to engage these people
in the development process as their social status usually leaves them without a voice in decision-making about issues directly affecting them.
The main goal of these participatory planning techniques is to “… level the playing field
between different levels of power, various interests and resources and to enable different
participants to interact in an equitable and genuinely collaborative basis. To achieve a
shared decision (or consensus), build up commitment to and ownership of this decision, and
empower individuals to address problems which affect them.”
Participatory planning is thus a process through which actors and stakeholders influence and
share control over development initiatives and the decisions and resources which affect them.
The issues addressed in participatory planning involve a wide variety of development
problems. For example, improving the national electricity supply, or selecting a type of well
that is best suited to the needs of the local community.
The tools, techniques and process outlines used in these two classes of methods are
described using selected examples. Workshop-based methods will be described using
‘Appreciation – Influence – Control’ (AIC) and ‘Objectives Oriented Planning’ as examples.
The Community-based methods will be described using Participatory Rural Appraisal (PRA)
as an example.
Workshop-based methods: ‘Appreciation – Influence – Control’ (AIC)
This method aims to formulate action plans by creating a learning-by-doing atmosphere,
enabling participants to collaboratively design projects to address specific problems. The
methods encourage social learning, promote ownership of the outcome and establish a
working relationship between the participants involved.
The participants are a relatively heterogeneous group, usually of high-level decision makers
with technical experts and sometimes stakeholder representatives from interest groups.
Symbolic representations such as drawings, collages or cartoons are non-verbal (visual)
techniques designed essentially to communicate experience and understanding of the issue.
Each participant produces his/her own symbolic representation and presents it to the group.
These representations overcome language differences (either national or ethnic or in terms of
technical language), and literacy differences and elicit creative thinking.
In the process, social, cultural and political factors, together with technical and economic factors that may influence the issue, are considered. The aim is to identify a common purpose,
recognise the range of stakeholders relevant to that purpose and provide a framework for
pursuing the problem collaboratively. As the name suggests, three phases are distinguished:
Appreciation, Influence and Control.
• Appreciation – this is the listening phase, often involving a brainstorming technique or
round-table discussion with the aim to appreciate the realities and possibilities of the
situation by sharing ideas from the diverse backgrounds present at the workshop. The
facilitator ensures that there is a non-critical atmosphere, in which all ideas are valued
equally. In this way all the participants regardless of their official status are treated as
equal. This phase is carried out in small heterogeneous groups to allow interaction and
learning among people who do not normally interact. At the end of the appreciation phase
the ideas are summarised into main overarching themes.
• Influence – this is the dialogue phase, where the participants explore the logical and
strategic options for action as well as the subjective feelings and values that influence the
selection of strategies. They discuss the themes in relation to the priorities for change needed to address the issue and the potential influences these changes could have.
• The final step is the Control phase where the planning takes place. This phase is carried
out in homogeneous groups. It enables participants to take responsibility for choosing a
course of action in the light of information learned during the process.
Every phase in the planning process includes one or more workshops.
Workshop-based methods: Objectives Oriented Planning (OPP)
The process is a project management method that encourages participatory planning and
analysis throughout the project cycle in a series of OPP workshops. The workshops engage
participants in setting priorities and planning for implementation and monitoring. The main
output of the process is a project planning matrix which the participants have built together.
The participants in this process are the members of the team involved in a specific project.
They are usually a collection of interest group representatives, local or national decision
makers, and sometimes technical experts.
The building of a project planning matrix is a phased process. The process begins by
identifying all the parties who may be affected in some way by the issue that the project is set
up to address. The next step is to evaluate the impact the project may have on them. Then the
issue is analysed by means of construction of a ‘problem tree’, which is developed through
brainstorming about problems related to the issue, clustering and prioritising these,
identifying the cause(s) and the consequences if the problem is not solved.
The next step is to make an ‘objective tree’, which is a mirror image of the ‘problem tree’ as it indicates what the future will look like if each problem is solved. Tree diagrams are visual tools that organise information in a treelike scheme. The scheme narrows down and prioritises problems, objectives or decisions by including patterns of influences and outcomes of certain factors. Articulating, clustering and prioritising desired solutions, and evaluating them in terms of whether they are attainable, creates a series of objectives.
The next step is to formulate a project strategy for achieving the objectives. The information
obtained in the exercises is arranged into a project planning matrix. The project planning
matrix is a framework that is completed during the process. It essentially summarises along
two axes each aspect (or task) of the project and the indicators that will signal completion of
each aspect.
Community-based methods: Participatory Rural Appraisal (PRA)
These methods are defined by their use of interactive tools to involve local stakeholders in the
assessment of their own needs, setting of priorities and drawing up plans of action. The
participants are usually local people; for this reason local materials and visual tools are used
to bridge literacy gaps. The participants experience empowerment through having their
contributions valued. So, PRA can also be part of local capacity building.
The participants in a PRA exercise can be a heterogeneous or homogeneous group composed
mainly of local citizens and some local governmental decision-makers.
Many different techniques can be combined in a PRA exercise. Storytelling is a verbal technique to
share information of a qualitative nature about historical events, changing patterns, and their
associated impacts (social, economic etc.). It essentially provides a historical context for
discussing the issue. The stories are often written or drawn as chronologies of events to refer
back to in later stages.
Mapping is a visual technique to give an overview of the current situation by providing a
spatial context for discussing the issue. It is often used as the foundation upon which more
focussed discussions can be built. The advantage of mapping is its ability to quickly foster
discussion and analysis. Furthermore it stimulates thinking in terms of site specific solutions.
Examples of maps are social maps (to discover where the participants live), health maps (map
of the body to indicate where people do not feel well), demographic maps, resource maps of
village lands and forests, maps of fields, farms and home gardens, and thematic or topic maps
for water, soils, and trees.
Diagramming is a visual technique that involves establishing sequences of events, changes
and trends representing causes and consequences (Chambers 1997). Diagrams or calendars of
seasonal patterns illustrate the major changes that affect a household, community or region
within a year, such as those associated with climate, crops, labour availability and demand.
Preference ranking is a tool to elicit preferences for various options or indicate desirable
outcomes. Using counters (made from local materials such as seeds or stones) the participants
can allocate votes to different options.
The aim of this process is to enable local participants to appraise, analyse and address a
particular issue through recognising and sharing their own knowledge. The iterative nature of
the exercise enables participants to continuously shift priorities, rethink strategies and invent
new options as the problem is viewed in new ways. The particular issue determines which
combination of PRA tools are used and in which order. In general it is advised to begin with
mapping techniques, because they involve all participants, stimulate discussion and
enthusiasm and generally deal with non-controversial information. They also provide an
overview of the current situation. Subsequent to this, diagramming can be used to provide
information about trends and flows. Building on this, preference ranking exercises can be used
to focus on the planning stage.
Participatory rural appraisal / rapid appraisal
Brief description
Participatory rural appraisal is an opportunity to learn from the target people about their
situation. Target groups can be involved at several points in the process: investigation and
identification and prioritisation of needs; planning and implementation; monitoring and
evaluation. It’s also a method that gives researchers and farmers an opportunity to exchange
knowledge. This will enable researchers to design programmes that will meet the needs of the
farmers.
Detailed description
PRA techniques include three main stages: problem identification, data gathering, and ranking and scoring.
Problem identification
Problem identification includes two steps:
1. Open discussion (non-structured)
Problems are identified through open non-structured discussion. The main features of the
discussion are: the clients lead the discussion (‘we can do it’ attitude) and the researcher acts
as a facilitator only; the researcher steps in only to seek clarification; otherwise he/she
watches, listens and learns.
2. Semi-structured interviews (topic investigation)
• specific answers are obtained for specific questions
• the researcher’s role is limited to introduction and organisation
• target group leaders take the lead to ensure vigour in the discussion
• the researcher’s role in discussion is catalytic.
Gathering data
There is always a wealth of information embedded in a variety of resources. PRA is a quick way to make this information available to the researchers before project planning. To simplify the collection of data, it is classified into several types depending on source, space, time and social considerations:
Primary data are spatial data collected jointly with the stakeholders. They comprise sketch maps (resources, activities, opportunities and problems) and transects (items the sketch map may not have included).
Secondary data: publications, maps, census reports, grey literature and aerial photographs.
Time-related data (temporal). A time line is developed showing important local, national and
international events observed by the community. A trend line is developed showing
important changes in the community, why there are changes and the community’s attitude to
change (since research aims at change, such information is essential). Seasonal calendars are
developed to show seasonal problems or opportunities in average years.
Social data (people-related information). Farm household interviews are conducted to gather
socio-economic information and characteristics of a particular community or farm.
Information is collected about institutions to provide knowledge about various groups and
organisations within the community (including churches, schools, women groups etc.). This
information gives an insight into the relationship among the institutions and determines how
the community views its institutions (e.g. ranking the contribution to development of
institutions).
Ranking and scoring
The impact of problems at the farming systems level can be of varying magnitude.
Ranking and scoring are ways of prioritising different interventions and technical
solutions that are relevant to and adoptable by farmers. This involves placing things
in their order of importance at a particular time. Ranking could be done for problems
and opportunities.
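As a purely illustrative sketch of how such a ranking-and-scoring exercise can be tallied (in Python; the problems, participants and counter allocations are invented), the counters allocated by each participant can simply be summed per problem:

    # Tally of a simple preference-ranking exercise: each participant allocates
    # a fixed number of counters (e.g. seeds or stones) across the listed problems.
    # Names and numbers are invented examples only.
    allocations = {
        "participant 1": {"soil erosion": 3, "water supply": 5, "market access": 2},
        "participant 2": {"soil erosion": 6, "water supply": 2, "market access": 2},
        "participant 3": {"soil erosion": 1, "water supply": 4, "market access": 5},
    }

    totals = {}
    for votes in allocations.values():
        for problem, counters in votes.items():
            totals[problem] = totals.get(problem, 0) + counters

    # Problems listed from most to least important according to the group.
    for problem, score in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        print(problem, score)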
PRA virtues
The virtues of PRA are that it is:
• participatory and hence client oriented
• enabling, as farmers feel the researcher is interested in them
• empowering because the clients say what needs to be done and the researcher listens
• semi-structured leading to local participation in a systematised fashion
• flexible, gender sensitive and applies to all classes
• iterative and thus exploits all alternatives and possibilities
• exploratory and hence looks at all possible situations
While in PRA information is generally owned and shared by local people, as part of a process
of their empowerment, in RA (or RRA), information is generally elicited and extracted by
outsiders as part of a process of data-gathering.
RRAs are particularly useful in gathering information that will help agencies to orient their programs and get a sense of the range of issues that need to be addressed. RRAs are essential in the design phase to ensure that the project is appropriate to the realities of the area where it will be working.
Purpose: Inform project design, gather baseline information, monitor and evaluate
Team: Multi-disciplinary team of CRS staff and specialists
Sites: Limited number of representative sites
Time Period: Discrete studies, usually lasting 5-7 days
Tools and Techniques: The range of tools and techniques presented below (and others as
appropriate)
Documentation: Comprehensive, well written report that captures the depth and complexity
of information obtained in the study
The technique rapid assessment or rapid appraisal was developed in the 1970s and 1980s as
an antidote or alternative to large scale survey studies which were perceived to give
insufficient attention to people's local knowledge. The method encouraged the active
involvement of local people with perspective and knowledge of the area's conditions,
traditions and social structure in data gathering activities, using a variety of informal
techniques that could be employed within a short timescale. Various forms of rapid appraisal
are still being used in rural and urban settings. Rapid assessment process, for example, is an
intensive, team-based ethnographic inquiry using triangulation and iterative data collection
and analysis to quickly develop a preliminary understanding of a situation from the insider's
perspective.
In contemporary evaluation practice, rapid appraisal techniques have evolved into newer
forms such as participatory learning and action. Instead of outsiders trying to understand the
knowledge of the local people, PLA tries to facilitate local people to develop their
capabilities. The emphasis is on participation as a systemic learning process linked to action
and change. A key purpose of participatory evaluation is to enhance community and
organisational capacity-building through fostering interactive participation and self-initiated
mobilisation and collective action.
Participatory learning is based on the principle of open expression where all sections of the
community and external stakeholders enjoy equal access to the information generated as a
result of a joint sharing process. The information generated in the process would not only be
of use to the secondary stakeholders but also to members of the community.
Participatory Assessment, Monitoring and Evaluation (PAM&E)
Brief description
A Participatory Evaluation is an opportunity for the stakeholders of a project to stop and
reflect on the past in order to make decisions about the future. Through the evaluation
process, participants share the control and responsibility for deciding what is to be evaluated, selecting the methods and data sources, carrying out the evaluation, and analysing information and presenting evaluation results. PAM&E can (ideally) be conducted as part of a broader
participatory process (see the section on best practices) or as a separate exercise. Participatory
monitoring and evaluation has been mainstreamed in international development agencies
such as the World Bank, and is increasingly being applied in the industrialized north in
sectors including environmental assessment, health, urban regeneration and social care.
Methods are being used not just to enable the voice of local people, especially those who are
marginalised, to be heard but also for people's own analysis of their own conditions.
Detailed description
Common principles in participatory monitoring and evaluation include the following:
• Participation - opening up the design of the process to include those most directly
affected and giving the intended beneficiaries the chance to speak out about local
impacts.
• Negotiation between the different stakeholders to reach agreement about what will be
monitored and evaluated, how and when data will be collected and analysed, what the
data actually means, and how findings will be shared, and action taken.
• Learning - a focus on cumulative learning by all the participants as the basis for
subsequent improvement and sustained action. This action includes local institution
building or strengthening, thus increasing the capacity of people to initiate action on their
own.
• Flexibility in adapting the evaluation to the wider external environment and to the set of
local conditions and actors, as these factors change over time.
A Comparison of Conventional and Participatory Evaluation
When to use
Participatory Evaluation may be conducted for the following reasons:
• Because it has been planned(!). Participatory Evaluation can be planned at various points throughout a project. These can be mid-way through a series of activities or after each activity, depending on when the community decides it needs to stop and examine past performance.
• Because a (potential) crisis is looming. Participatory Evaluation can help to avoid a potential crisis by bringing people together to discuss and mediate a solution to important issues.
• Because a problem has become apparent. Problems, such as a general lack of community interest in activities, may be apparent. Participatory Evaluation may provide more information that can help people determine why there is a problem and how to remedy it.
• To introduce and establish a participatory approach. A Participatory Evaluation may shed some light on why a project is not working very well. The results of a Participatory Evaluation may be the entry point for a more participatory approach to the project in general.
Procedure
Overview
The extensive planning phase of a participatory evaluation includes recruiting staff, who will
conduct the following steps:
• review objectives and activities
• review reasons for evaluation
• develop evaluation questions
• decide who will do the evaluation
• identify direct and indirect indicators
• identify the information sources
• determine the skills and labour that are required to obtain information
• determine when information gathering and analysis can be done
• determine who will gather information.
The information is then gathered in a database, partially analysed and then presented to the
appropriate public, who further analyse the information collectively. Finally, conclusions and
action plans are developed from insights learned.
Realisation
a. Personnel and tasks
The personnel required to conduct an evaluation varies widely, depending upon variables
such as the scope of the project being evaluated, its geographical range and the number and
type of methods used to collect and analyse data. However, the following requirements
should be taken into consideration:
o A director will be needed to supervise the overall evaluation and ensure that the various parts come together into a cohesive whole.
o Moderators will be needed to facilitate group data collection techniques.
o Researchers will be needed to conduct analyses and facilitate, perhaps with a
moderator, group analyses.
o Administrative staff will be required to organise logistical matters, such as meeting
locations, travel and accommodation, etc.
b. Planning the evaluation
The time that is taken to carefully prepare and plan a Participatory Evaluation is time well spent. The preparatory process helps participants understand what they are evaluating, why and how they are going to do it. The first meeting to prepare and plan the evaluation should be open to all interested groups, including beneficiaries and others in the community. If a great number of people are interested in the evaluation, some of the responsibilities of the evaluation can be delegated to a small group, a community evaluation team. However, at the first meeting, the whole group must first discuss why they are doing an evaluation and what they wish to know in order to provide guidance to the community evaluation team.
• Review objectives and activities.
Discuss:
What are the stakeholders’ long-term and immediate objectives?
What activities were chosen to meet these objectives?
Scenario-building can be a very useful tool to think about longer term goals in a holistic
manner. For additional tools that are useful for identifying objectives refer to the list of
analysis techniques provided in this publication.
• Review reasons for evaluation.
After objectives and activities are reviewed, discussion can focus on the questions:
Why are we conducting an evaluation?
What do we want to know?
• Develop evaluation questions.
In a brainstorming session participants should propose evaluation questions, which the
facilitator writes (or draws) on large sheets of paper, a blackboard, etc. The group should
discuss and agree on each question. If many questions are generated around each objective
and activity, they can be ranked in order of importance. If the project evaluation can be
divided into two or more sub-sections, one can also divide the group into sub-groups that
focus on one or more of these subsections.
• Decide who will do the evaluation.
In the plenary decide who will do the evaluation and who will want to know the results. It
may be decided to include all the stakeholders (especially if the group is small), only the beneficiaries
or to delegate the responsibility for the evaluation to an evaluation team. The composition of
the evaluation team should be decided by the larger group at this first meeting. If it is known
that some minority groups will not be represented, the facilitator may encourage the
participation of spokespersons from these groups on the evaluation team. The evaluation team
may include beneficiaries, those who may be disadvantaged by an activity, community
members and other affected groups. The larger group also decides who needs the results of
evaluation and when the results should be ready. This will depend on who needs the
information to make decisions and when decisions are to be made.
• Identify direct and indirect indicators.
Taking the evaluation questions that were generated in the first meeting, direct and indirect indicators are chosen for each evaluation question.
Direct indicators are pieces of information that expressly relate to what is being measured.
For example, if information on election attendance is required, then the number of ballots cast
is counted and perhaps set in proportion to the entire population.
Indirect indicators are pieces of information chosen to serve as substitutes to answer
questions that are difficult to measure.
In developing indicators some important questions to be answered are:
What do we want to know?
What are the pieces of information that could tell us this?
What are the best pieces of information (‘key indicators’) that will tell us this most
accurately?
Is the information accessible?
Indicators should be chosen that are accurate and illuminating as to the nature of the problem
or issues. In addition, it is important to verify that the necessary information can be gathered.
Establishing good indicators will reduce the amount of information that needs to be collected.
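A minimal arithmetic illustration of a direct indicator, following the election attendance example above (in Python; the figures are invented), is:

    # Direct indicator: election attendance expressed as a proportion of the population.
    # Figures are invented for illustration only.
    ballots_cast = 640
    eligible_population = 1000
    turnout = ballots_cast / eligible_population
    print(f"turnout: {turnout:.0%}")   # 64%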
• Identify the information sources.
For each evaluation question and indicator that is chosen, the evaluation team identifies what
information sources are available, which sources to choose and how to obtain the
information. Some ‘raw’ data (unanalysed information) may be available and require some
effort to analyse. Other information may not be readily available and will have to be gathered.
If information is not readily available, it must be decided which information gathering tool
will be used to obtain information. The choice of tools will depend on the kind of information
needed. Remember that it is possible to use one tool to gather information for a number of
indicators. If an information-gathering tool has been used before, it may be used again to
update the information and show change. For additional tools that are useful for gathering
information for Participatory Evaluations, refer to the list of analysis techniques provided in
this publication.
• Determine the skills and labour that are required to obtain information.
The assistance of people with specific skills, such as interviewing, mathematics, art and/or
drama, as well as a certain amount of labour (time), will be required. The evaluation team
must decide which skills and resources are available to them. They might ask the questions:
What resources do we need?
What resources do we have or can we develop?
What additional resources do we need to get?
• Determine when information gathering and analysis can be done.
It is important to assure that information will be gathered and analysed within the time frame
that is given to the evaluation team, so that the results can reach decision-makers on time.
The timing of the evaluations must take into account factors such as seasonal constraints
(planting and harvesting times), religious holidays, field staff availability and community
labour demands.
Make a schedule: For each tool that is used the evaluation team decides approximately how
long each task will take and when it will be done.
• Determine who will gather information.
When the specific dates, the required time and skills are known, the tasks can be delegated to
individuals or small working groups.
c. Data collection
• Collect the information.
Each of the delegated individuals should gather the information for which they are
responsible. All of the data should be collected centrally.
• Form database.
The information collected should be put into a manageable format to facilitate the analysis
process.
d. Data analysis
When all the tasks have been completed, it will be necessary to analyse and synthesise
information for presentation. Some of the information may already be analysed and will
simply have to be put in its place in the presentation. The evaluation team can decide what
will be the best way to present results, given the audience for whom the results are intended,
the resources and time available.
Analysis is examining information (sorting it out, adding it up, comparing it) in order to
understand the ‘parts’ in relationship to the ‘whole’. Some of the analysis may have already
been done, or partially done, depending on which information gathering tools have been used.
Some steps in information analysis for evaluations are provided below.
• Review the questions.
The questions generated before the information was gathered should be reviewed. Why was
this particular information necessary? What questions was it to answer? What kinds of
decisions are to be made based on this information? It is common for people to work very
hard planning for the information they need and then, once the information is collected, to not
look back and renew their understanding of the central issues and key questions. Important
results that were not anticipated should not be ignored. Sometimes putting information
together will raise important, unforeseen and relevant questions. These can be noted for
future reference and pointed out in the presentation of results.
• Organise the information.
Gather together all relevant information that has been collected. If necessary, sort information
into parts that belong together. The way in which the information is organised and
categorised will vary according to the thinking processes of different people. Some
information may have already been analysed while other information will require further analysis.
• Decide how to analyse information.
Analysis of parts may be simply adding up numbers and averaging them, or comparing information to examine the relationship of one thing to another or two things together. In the process of analysis, one can also: take note of similarities; make contrasts by setting two things in opposition in order to show the differences; and relate pieces of information to establish relationships between them.
• Analyse quantitative information.
Quantitative (numbers) information can be computed by hand or with the use of adding
machines. Refer to the list of analysis techniques, provided in this publication, for tools that
can be used to facilitate participatory analysis.
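As a minimal sketch of this kind of simple quantitative analysis (in Python; the attendance figures are invented), adding up, averaging and comparing two groups might look as follows:

    # Simple quantitative analysis: totals, averages and a comparison of two groups.
    # The values are invented for illustration only.
    attendance_village_a = [12, 15, 9, 14]   # e.g. attendance at four meetings
    attendance_village_b = [20, 18, 22, 19]

    def average(values):
        return sum(values) / len(values)

    print("total A:", sum(attendance_village_a), "average A:", average(attendance_village_a))
    print("total B:", sum(attendance_village_b), "average B:", average(attendance_village_b))
    print("difference of averages:", average(attendance_village_b) - average(attendance_village_a))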
• Analyse qualitative information.
Analysis of qualitative (descriptive) information is a creative and critical process. The way
the information has been gathered will probably determine how it can best be analysed. For
example, if drawings of a community have been done at the beginning, middle and end of the
project, these can be analysed by presenting a series of drawings to a number of individuals
and asking them to: validate the drawings (are they truly representative, and if not, why not?) and rate the difference (very good, good, not very good, etc.). Refer to the list of analysis
techniques, provided in this publication, for tools that can be used to facilitate participatory
analysis.
• Put the information together.
The team that has been assigned to gather and analyse information can put the analysed parts
together in a way that tells the complete story. Partial analysis can be presented to the larger
community group for completion.
e. Presentation & action plan
• Presentation of initial results.
Once the information has been collected and (partially) analysed, hold another meeting with
the larger group to present the initial results. It can be very effective to present the
information in partially analysed form.
The benefits of partial analysis are: The larger group has an opportunity to contribute to
further analysis. The results are validated by more people and will be more reliable. More
people can understand the process of analysis.
If the information is presented in partially analysed form, the group will need to do further
analysis to answer their initial questions.
Regardless of the form in which the information is presented, the group will have to discuss
the implications of the results for their initial questions. Have new questions arisen that
require additional collection of information? What conclusions can be drawn? How can we
learn from the results? What are the different options available to address the emerging
issues?
Encourage thorough discussion of these questions, allowing people to express their
perspectives regarding how the information should be interpreted.
The emphasis of the conclusions should not be upon success or failure but upon learning.
Insights gained from the evaluation process might also inspire the group to reconsider their
initial objectives. This is part of the iterative learning process that is inherent in participatory assessment, monitoring and evaluation.
Discourage the group from focusing on blaming or accusations for any poor results. Instead
orient the discussion around the future, exploring new and better paths toward the desired
future.
If additional information is required to answer pressing new questions, then devise a plan to
gather the needed data, following the steps above.
• Develop a future action plan.
Finally, the group should discuss and decide upon a plan of action, based on the results.
Based on what has been learned, what steps are to be taken now?
Who will do what?
Within what time period?
TIP:
In developing a plan of action and in reconsidering the initial goals, prospective methods, such as scenario workshops, can be very useful.
• Write up a final report.
The final report should include the questions, participants, method, analysis procedures,
conclusions and a summary of the new plan of action. For tips on writing an evaluation
report, particularly from a local perspective, refer to the ‘Presentation of Results’ section in
Case, D. (1990): http://www.fao.org/docrep/x5307e/x5307e00.htm#Contents
Resource considerations
The resources required for a participatory evaluation will vary widely, depending in part
upon:
the complexity of the issues being evaluated
the methods used for data collection
the availability and cost of persons skilled to collect and analyse the data (personnel costs)
the geographic scope of the issue being evaluated (travel and accommodation costs)
whether or not the evaluation is built into a general participatory project (saves time and
avoids duplication of many costs).
Additional best practices and potential pitfalls
It is strongly advisable to make participatory evaluations one aspect of a broader participatory
approach to project development. This will enhance stakeholder ownership from the
beginning of the project and will also be more cost effective. Ideally, the evaluation process
should be iterative and seen as part of a larger planning/development or decision-making
process. When evaluations can be planned regularly throughout a long-term project, they are
more likely to be seen as aimed toward learning and improvement than as a one-off
judgement. In addition, progress can be improved in the long run when lessons are learned
from evaluations early in the project. The process of developing indicators helps people to
define their goals more precisely and thus to generate more concrete action plans. Take extra
care to ensure that the data collected answer the real questions being asked. Avoid the pitfall
of choosing a particular method of data collection simply because it is easy, when it may not really yield information that is useful for learning how a project can be improved.
In conducting evaluations be careful to consider the long-term perspective. It is sometimes
natural in development processes for things to get, or superficially appear to get, worse before
they get better.
At the centre of the M&E plan is a series of indicators which are selected to reflect key
intermediate impacts. A minimal set of indicators is needed based on their usefulness
(especially in terms of their relevance to management choices), their ease and cost of implementation, and the number of different stakeholders benefiting from the information
they provide. This implies a need for careful and logical selection of cost-effective indicators,
not merely brainstorming to come up with an unedited wish-list. Attention needs also to be
given to the way in which various quantitative and qualitative M&E data are woven together
into coherent narratives or stories which describe and explain project impacts.
Participatory M&E draws eclectically on a range of methods and techniques, both to develop
and to implement the M&E plan. In this respect the distinction between ‘conventional’ and
‘participatory’ methods and techniques has been overdrawn. For example, questionnaire
surveys have been strongly criticised by advocates of participatory methods, but they can be
designed and implemented in a ‘participatory’ (inclusive and responsive) way and have an
important place in the repertoire of techniques available for M&E. Having said that, we have
found that working with focus groups and using a range of less conventional techniques
(mapping, diagramming, ranking, and scoring) can yield accurate and useful information
quickly and easily, with considerable benefits to all concerned. The success of these
techniques, however, depends crucially on skilful facilitation. This requires not just skill in
the particular techniques, but a clear understanding of the background to and purpose of the
activity and a sense of ‘ownership’ of the outcomes. The participatory nature of M&E is
enhanced when the techniques used are such that the elicitation, analysis, and utilisation of
information can be carried out locally and within a relatively short time-frame.
What is the basis for comparing project effects?
Whichever way we categorise the project effects, there is a fundamental issue regarding the
basis for comparison.
If we are measuring changes over time (e.g. in livestock productivity) and attributing these
changes to the project, we need to be able to answer two questions:
• What was the situation before the project started (i.e. the ‘before-after’ comparison)?
• What would the situation be now if the project had not intervened (i.e. the ‘with–without’ comparison)?
Without these comparisons we cannot be sure to what extent the changes we are monitoring
are actually effects of the project. For example, we might find that livestock productivity is
high. But was it already high before the project started? If not, would it have been higher
anyway in the current year because of other factors (e.g. good rainfall resulting in an
abundant supply of native grasses)? These questions are relevant whether we are talking
about a farmer group monitoring its own progress or a donor agency evaluating the
effectiveness of a large research program.
The conventional way of making these comparisons is to conduct a baseline study at the
beginning of a project (to permit the before–after comparison) and to monitor change in a
non-project or ‘control’ area (to permit the with–without comparison). However, this need
not require an elaborate and time-consuming questionnaire survey; more participatory
techniques can be used. For example, as part of project planning, focus groups can be
organised during which techniques such as community mapping, time lines, problem ranking,
semi-structured interviews, etc., are used to establish the current and recent status of key
variables, thus establishing a baseline. Even if this has not been done at the outset of a project
it is possible to construct a ‘retrospective baseline’ in which participants recall their situation
immediately before the project commenced.
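A worked sketch of these two comparisons, using invented livestock productivity figures (in Python), is given below; combining them, sometimes called a difference-in-differences calculation, is one common way to net out change that would have happened anyway:

    # Before-after and with-without comparison with invented figures.
    # 'Productivity' could be, for example, average liveweight gain per animal.
    project_before, project_after = 100, 130   # project area, baseline and current year
    control_before, control_after = 100, 115   # comparable non-project (or recalled) baseline

    before_after_change = project_after - project_before    # change over time in the project area
    with_without_change = project_after - control_after     # difference from the control area now
    # Combining both comparisons nets out change that would have happened anyway
    # (e.g. a good rainfall year benefiting all areas):
    net_project_effect = (project_after - project_before) - (control_after - control_before)
    print(before_after_change, with_without_change, net_project_effect)   # 30 15 15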
Moreover, it may not be necessary or desirable to include a ‘control’ area to obtain a with–
without comparison. It is always difficult to find an area which is sufficiently similar to the
project area yet unaffected by the changes the project is engaged in. In any case, it is
somewhat contrary to the participatory research approach to be monitoring a group of farmers
purely to evaluate impacts elsewhere. If the aim is to establish whether a change is due to the
project’s activities, it may be better to use participatory techniques which draw on the
detailed local knowledge and experience of farmers and field workers within the project area.
For example, farmer focus groups could identify and weight the factors (project and extra-project) which have led to changes in livestock productivity, using flow-charting and ranking-and-scoring techniques. Farmer case studies using semi-structured interviews might also be
used to give an in-depth understanding of the reasons for observed impacts.
Such approaches not only give answers to the question: “To what extent are the observed
changes attributable to the project?”, they also enhance the understanding and research
capability of the project participants.
How do we develop a monitoring and evaluation plan?
M&E is a complex process in its own right with several distinct aspects. Estrella and Gaventa
(1998) outline four major steps in applying participatory M&E:
• Planning or establishing the framework for a PM&E process, including identification of
objectives and indicators
• Gathering data
• Data analysis
• Documentation, reporting, and sharing of information.
The first of these steps is clearly critical — to be effective, M&E needs to be carefully
planned. Ideally, this planning should take place at the start of the project as part of the whole
process of problem diagnosis and development of project activities. In practice, the M&E
plan will need to be re-visited several times as the project evolves and as participants become
clearer about the key indicators to measure and the feasibility of measuring them.
The steps involved in developing a PM&E plan are indicated by the following list of
questions:
• What are the project objectives?
• What are the M&E questions that follow from these objectives?
• Who needs answers to these questions?
• What are the best indicators to help us answer these questions?
• What are the units in which these indicators are measured?
• What are the best methods/tools to obtain this information?
• What/who is the source of this information?
• When does this information need to be collected and at what scale?
• How will the information be analysed?
• How will the information be utilised?
• Who is responsible for collecting, analysing, and utilising the information?
These questions can form the column headings in an M&E matrix, which can be a convenient way to develop and record the plan. The table below shows a matrix based on these questions.
The two completed rows in the matrix give hypothetical (and fairly simple) examples of how
a M&E plan might proceed. In practice, as found in workshops to develop M&E plans for the
FSP and other projects, it becomes more difficult to develop measurable indicators for less
tangible impacts such as ‘group self-mobilisation’.
Hypothetical Example of a M&E Matrix
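As a rough illustration of how one completed row of such a matrix might be recorded (in Python; all entries are invented examples rather than the hypothetical rows referred to above), a simple keyed structure could look like this:

    # One illustrative row of a participatory M&E matrix, keyed by the planning questions.
    # All entries are invented examples, not taken from the FSP matrix.
    me_matrix_row = {
        "objective": "increase dry-season feed availability",
        "M&E question": "has the area of planted forages increased?",
        "who needs the answer": ["farmer group", "project staff"],
        "indicator": "area of planted forages",
        "unit": "hectares per household",
        "method/tool": "community mapping and farmer records",
        "source": "participating households",
        "when/scale": "end of each planting season, village level",
        "analysis": "compare with previous season and with baseline map",
        "use": "decide whether to expand or adapt forage plots",
        "responsible": "village M&E committee",
    }
    for column, entry in me_matrix_row.items():
        print(f"{column}: {entry}")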
Participatory M&E requires that the development of a M&E plan be itself conducted in a
participatory manner. Developing such a plan requires facilitation, using many of the
methods and tools described in later sections of this report. It is not simply a question of
putting up a blank matrix and asking participants to fill in the cells. For example, to
determine the important M&E questions, it may be necessary to form a focus group (or
groups) of the key stakeholders and use participatory appraisal techniques to elicit and rank
the questions. Then, for a given M&E question, the group could develop a list of potential
indicators using flow-charting, and rank these indicators according to agreed criteria, such as
those discussed below. The completed matrix is the end-product of these various activities.
The context for many of these M&E activities may be regular farmer, village and project
meetings, i.e. they need not be special exercises. As far as possible they should be woven into
the normal activities of farmers and project staff.
What makes a good indicator?
Central to the development of a M&E plan is the identification of appropriate indicators and
of procedures to measure them. A good indicator is determined by its usefulness, ease of
collection, and the number of stakeholders benefiting from the information it provides. In
Figure 3, good indicators are those which fall in the space enclosed by the triangle and the
three axes (note that the three dimensions are depicted as increasing towards the ‘origin’).
The figure implies that there are trade-offs between the three criteria. For example, an
indicator which is considered very useful by scientists in the project (such as manure
production and composition) might be difficult to measure and of no interest or value to other
participants. Compromises will have to be made to ensure appropriate indicators are selected.
Indicators (whether of farm productivity, sustainability, or research capacity) are useful to the
extent that they improve farmers’ and researchers’ state of knowledge (i.e. reduce their
uncertainty) and thus improve decision-making in such a way as to affect production and
resource management. Conversely, indicators wich have no bearing on management
decisions or outcomes, or which are excessively costly to monitor, are of little value (Pannell
and Glenn 2000). The managerial relevance of indicators is related to the question of scale
and planning horizon. Short-term indicators at the field or enterprise scale may show negative
trends, whereas the activity in question may be contributing to the productivity and
sustainability of the whole farm as a management unit (Cramb 1993). Where off-site effects
are important, the village or catchment scale may be of more managerial significance
(Pachico et al. 1998), assuming of course there is institutional capacity to manage at that
scale.
In a participatory process, many good ideas for indicators may emerge (e.g. Table 3), but not
all should be selected for the M&E plan. It is the role of project leaders and facilitators to
help stakeholders agree on a minimal set of SMART indicators. In particular, as Pachico et al.
(1998) remark, “indicators need to be theoretically and logically linked, preferably some
causal relationship, with the behaviour of the complex system of interest”. Simply positing a
list of indicators, whether or not the list is developed participatively, is unlikely to provide
any coherent guide to the desirability of the technological changes taking place.
One indicator (e.g. area of forages planted) may be causally related to others (e.g. livestock
growth, labour requirements) which in turn affect some larger management objectives (e.g.
net farm income, maintenance of resource base). Hence these indicators may be
‘intermediate’ in two related senses: (1) they reflect changes in intermediate products of the
system in question; (2) they give an early indication of outcomes which necessarily take time
to emerge. To be useful and credible, therefore, indicators need to be developed within an
integrated framework which reflects the structure and dynamics of the management system
for which the technology is being developed (e.g. the farm-household system).
Flow-charting is a useful technique for identifying these connections and zeroing in on
suitable intermediate indicators. Having developed a flow chart of impacts, a focus group can
be asked to rank the impacts in the flow chart in terms of their suitability as indicators. This
may require some skilful facilitation. For example, participants could be encouraged to look
for impacts which capture or encompass the effects of a sequence of prior impacts (e.g.
number and liveweight of cattle in a village might be considered to capture the effect of
increased forage area, increased forage production, and changed feeding practices). At the
same time, it may be necessary to include combinations of indicators which help to separate
out the multiple factors or causes giving rise to an impact. For example, an improvement in
the number and liveweight of cattle in a given year may be due to increased availability of
planted forages as well as increased productivity of natural forages, both of which might be
due to a better than average season. A decision would have to be made as to which
combination of these variables needs to be monitored in order to assess correctly the effect of
new forage technologies — area and yield of planted forages? area and yield of natural
forages? rainfall? Participatory techniques could be used to economise on data collection. For
example, rather than measuring rainfall directly farmers could develop a scale for rating
seasons; rather than measuring natural and planted forage production, farmers could estimate
their relative contribution to livestock feed intake using a matrix scoring technique (e.g. Table
4).
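As an illustration of the matrix scoring idea mentioned above (and since Table 4 is not reproduced here), the following sketch uses purely hypothetical counter allocations to convert farmers' scores into percentage shares of livestock feed intake by source and season.

# Hypothetical matrix scoring exercise: a farmer group allocates 20 counters per season
# between feed sources; the scores are converted into percentage shares of feed intake.
scores = {
    "wet season": {"planted forage": 12, "natural forage": 6, "crop residues": 2},
    "dry season": {"planted forage": 5, "natural forage": 3, "crop residues": 12},
}

for season, by_source in scores.items():
    total = sum(by_source.values())
    shares = {source: round(100 * count / total) for source, count in by_source.items()}
    print(season, shares)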
Many of the indicators used to measure productivity effects are simple ratios, e.g. forage
yield, livestock growth rate, gross margin per hectare or per head. Yet, taken in isolation,
such partial productivity measures may be misleading as indicators of the overall profitability
of an activity (Dillon and Hardaker 1993). For example, a high forage yield may be obtained
with expensive fertiliser or excessive use of family labour. There is a need to capture all the
benefits and costs of a new technology to assess its impact on economic productivity. Partial
budget analysis, if extended to include non-monetary benefits and costs, can do this for a
small change in the annual production cycle, such as augmenting feed supply with a small
forage plot. The productivity indicator in this case is the net benefit of the change in question.
Farm development budgeting extends the same principle to larger and longer term changes,
such as investment in an intensive forage management system involving expansion of
livestock activities. Here the standard indicator is net present value, derived from the
summation of discounted benefits and costs occurring over a specified planning period.
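As a worked illustration of the net present value calculation described above, the short sketch below discounts a stream of hypothetical annual net benefits from a forage investment over a five-year planning period; the figures and the discount rate are invented for illustration only.

# Net present value of a hypothetical forage investment. Year 0 carries the
# establishment cost; later years carry the net benefits (benefits minus costs).
net_benefits = [-400, 50, 150, 250, 300]   # currency units per year, illustrative only
discount_rate = 0.08                        # chosen planning discount rate

npv = sum(nb / (1 + discount_rate) ** t for t, nb in enumerate(net_benefits))
print(f"NPV over the planning period: {npv:.0f}")
# A positive NPV indicates that the discounted benefits outweigh the discounted costs.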
There are many different methods and tools which can be used in M&E, described in
numerous manuals and monographs (Bernard 1995; Casley and Kumar 1988; Dillon and
Hardaker 1993; Dixon, Hall, Hardaker and Vyas, 1994; Fowler 1993; Norman, Worman,
Siebert and Modiakgotla 1995; Poate and Daplyn 1993; Yin 1994; Mikkelsen 1995). These
can help the project’s stakeholders to:
• Establish and clarify project objectives
• Identify and rank M&E questions
• Develop measurable indicators
• Obtain and communicate the information needed.
It is not very helpful to label these methods and tools as either ‘participatory’ or
‘conventional’. They are merely techniques which may or may not be used in a participatory
way. For example, a community mapping exercise may be used to extract population or land-use information for a national planning agency, with no feedback or immediate benefit to the
community concerned. Alternatively, a map may be developed as a community resource,
retained in a community meeting room, to help local farmers plan and monitor their own
progress in forage and livestock development. Both these uses may have their justification. It
is useful to distinguish between methods, that is, the overall context or setting in which
information is elicited, and tools, that is, the specific means of eliciting information within
that setting (Figure 4). The main methods used in M&E of the FSP have been:
• Focus groups — small groups of farmers sharing a common experience (e.g. farmers in the
same location, women farmers, members of a forage work group) who meet together with a
facilitator to pool their knowledge and perceptions.
• Farmer case studies — detailed investigation and observation of an individual
farm-household system, including all livelihood activities, not only those relating to forages.
• Surveys — systematic elicitation of information from a sample of farmers in a specified
region, the sample being obtained by one of a number of methods (e.g. farmers may be
randomly selected from a list or those encountered along a transect).
As shown in Figure 4, these methods form a logical sequence — focus groups (or key
informants) can provide an overview of farming circumstances in a particular location, case
studies can provide an in-depth understanding of the processes underlying these
circumstances, and surveys can be used to verify these impressions and assess the range of
circumstances existing within and beyond a project area. This is not to say, however, that all
three methods are necessary in a M&E process — for many purposes routine reporting by
farm leaders and field staff and occasional focus group meetings may suffice.
The main tools used within these methods can be grouped as follows:
• Mapping and diagramming tools (e.g. community maps, time lines, seasonal calendars, flow
charts, crop histories)
• Ranking and scoring tools, including techniques for wealth ranking
• Interviews (structured and semi-structured).
These methods and tools can be combined in various ways, depending on the task at hand
(Figure 4). For example, mapping is a tool which can be used in a variety of settings:
• Mapping can be used in a focus group meeting (e.g. a forage farmers’ group) to elicit and
record information about the location, extent, and species composition of members’ forage
plots.
• Mapping can also be used in a case study to depict the layout of the case study farm and
record various attributes of the farm.
• Similarly, asking respondents in a survey to draw a simple diagram of their farm layout and
to record information about each plot (e.g. area, tenure status, crops grown, etc) can be a
more ‘user-friendly’ and reliable way to obtain this information than simply asking questions
and recording answers in a questionnaire table.
[Figure: Relationship between Methods and Techniques for M&E]
Mapping may also be combined with other tools in a given setting, say a focus group
meeting. For example, having constructed a community map, showing the location of
households, farms, and community facilities, a wealth-ranking exercise might be conducted
in which participants agree on wealth categories and collectively assign each household to a
category, the resultant rank then being recorded on the community map. This could help the
group and the project worker to monitor whether certain conservation technologies are only
being adopted by better-off farmers or by all farmers uniformly.
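A minimal sketch of this kind of monitoring is given below; the household identifiers, wealth categories and adoption data are entirely hypothetical and serve only to show how wealth-ranking results and mapped forage plots can be cross-tabulated.

# Cross-tabulate adoption of a technology (here, forage plots) by wealth category,
# combining the wealth-ranking exercise with the plots recorded on the community map.
from collections import Counter

wealth_rank = {"hh1": "better-off", "hh2": "middle", "hh3": "poor", "hh4": "middle"}
has_forage_plot = {"hh1": True, "hh2": True, "hh3": False, "hh4": False}

adopters = Counter(wealth_rank[hh] for hh, adopted in has_forage_plot.items() if adopted)
households = Counter(wealth_rank.values())

for category, n_households in households.items():
    n_adopters = adopters.get(category, 0)
    print(f"{category}: {n_adopters}/{n_households} households adopting")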
How is M&E information utilised?
The use of each of the methods and tools listed above involves three phases:
• An elicitation phase, in which information and opinions are expressed and recorded; for
example, farmers’ knowledge about their local landscape is expressed in the form of a
community resource map.
• An analysis phase, in which the information is summarised, aggregated, correlated, or
otherwise analysed to make it more useful for monitoring and evaluation; for example, the
forage plots recorded on the community map may be counted and the number in each sub-village written on the map or in a table or chart, to indicate the extent of forage adoption by
location.
• A utilisation phase, in which the information is communicated to those who need it to make
decisions; for example, a local project team may use the information about number of forage
plots by location to evaluate the suitability of the forage species being offered to farmers.
Methods vary according to whether these phases:
• Are conducted at one time (e.g. a single meeting of a farmer group) or at separate times (e.g.
analysis and utilisation of the information involves some delay).
• Are conducted in one place (e.g. a community meeting place) or several places (e.g. analysis
is conducted in the researcher’s office and the information communicated to headquarters).
• Involve the same people (e.g. farmers and project workers) or several groups (e.g. analysis
is conducted by specialist staff and the information is utilised by project managers).
The process of M&E will be more participatory the more the three phases come together.
Nevertheless, a given method may serve several purposes at once, e.g. a farmer planning
meeting may generate information upon which farmers are able to act but which can also be
communicated to project staff at various levels and (if the expertise is on hand) incorporated
in a database at the project headquarters. As far as possible, we should be aiming to develop
M&E procedures which simultaneously satisfy various stakeholders in this way (Figure 3).
Regardless of the methods used, or the degree to which they can be considered participatory,
the information generated is inevitably woven into a story of some sort (e.g. in a written
report or when reporting during a project meeting or review). It is the stories we tell which
place indicators and other data in context and communicate this information in order to make
some point, whether to urge fellow project participants to take corrective action or to
persuade donors to continue providing support. Indicators are the bare bones of M&E; it is
the stories which put flesh on these bones and bring them to life. More explicit and
systematic attention in M&E needs to be given to the processes by which stories emerge from
participants’ experiences and observations (e.g. Davies 1996, Dart 1999).
Hence in the FSP and similar projects it is important not only to report on the various
quantitative and qualitative indicators that have been developed and measured. There will be
much that occurs which is not captured by these indicators alone. In fact, it is likely that some
of the most important outcomes of the FSP will not have been anticipated when setting up the
M&E system, or will not be fully reflected in the data that system provides (Cramb 2000).
Annual meetings, mid-term reviews, and project workshops should be used to bring out the
stories behind the M&E data. To some extent this will happen naturally during the life of a
project, but it should be planned for explicitly so that the full richness of various local
experiences can be drawn out, shared, and reflected upon. It is in this way that participants
can get behind the questions about ‘what happened’ to an understanding of ‘why things
happened the way they did’. Our ability to address the larger questions regarding the
effectiveness or otherwise of participatory research will depend on this kind of systematic
‘story-telling’.
References and Resources
Booth, W., Ebrahim, R. and Morin, R. (2001) Participatory Monitoring, Evaluation and Reporting: An Organisational Development Perspective for South African NGOs. Braamfontein, South Africa: Pact/South Africa.
Case, D'Arcy Davis (1990) The community's toolbox: The idea, methods and tools for participatory assessment, monitoring and evaluation in community forestry. Bangkok, Thailand: FAO Regional Wood Energy Development Programme. http://www.fao.org/docrep/x5307e/x5307e00.htm
Pahl-Wostl, Claudia (2002) 'Participative and Stakeholder-Based Policy Design, Evaluation and Modeling Processes'. Integrated Assessment 3(1): 3–14.
UNDP (1996) 'Participatory Evaluation in Programmes Involving Governance Decentralisation: A Methodological Note'. Unpublished Paper.
USAID Center for Development Information and Evaluation (1996) 'Conducting A Participatory Evaluation'. Performance Monitoring and Evaluation TIPS, Number 1.
Zimmermann, A. and Engler, M. (Compilers) Process Monitoring (ProM). Work document for project staff. Eschborn, Germany: Deutsche Gesellschaft für Technische Zusammenarbeit (GTZ) GmbH.
6.4 Consulting Methods
Focus groups
Cf. 2.1. Advising methods aiming at mapping out diversity of views
Policy conference
Brief description
Policy conferences provide an opportunity to inform local people and receive feedback on the
plans, service developments or strategies for an area. Policy conferences are one-off events
and can be limited in terms of the depth of community engagement that can be obtained.
They can be advertised as public meetings that local people are invited to attend. This can
make it difficult to plan numbers, so the advert should be backed up by direct invitations to
representatives of community groups and other community networks. Generally, the
conferences involve presentations on the topic or proposed plans, followed by the opportunity
for attendees to have any questions answered. Opportunities for more detailed, in-depth
participation can be provided by breaking up participants into smaller workshop groups (you
could consider using the Nominal Groups Technique). At the end of the conference there
should be a review of the day and clear indications given to participants on what happens
next and how their feedback will be used.
Detailed description
Cost : This technique requires publicity to attract as broad a range of people as possible. It is
also more appropriate if the information given out at the conference is available in other
forms to people who do not attend. Apart from this the main resources are a venue, flipcharts,
and facilitators for any workshop sessions. Policy conference organisers should also ensure
that there are facilities for those with childcare requirements and for people with access and
mobility issues; special dietary requirements should also be catered for as far as possible.
When to use : When feedback is required on a policy or document.
Points to think about : There is a danger that only very motivated individuals or interest
groups will attend. Try to use other techniques at the conference (workshops, prioritising
exercises) to get a range of views.
Pros : Relatively inexpensive and provide an open forum for debate
Cons : Unlikely to get a representative sample of views and can be dominated by a small
number of individuals.
Useful links
http://www.communitiesscotland.gov.uk/web/site/Engagement/community_engagement.asp
Open/public meetings
Brief description
Public meetings provide an opportunity for people to hear about issues, plans or proposals.
Open meetings can be a good way of encouraging dialogue between a service and its users
and of keeping members of the public informed. Used carefully, they can complement other
forms of consultation. Meetings are usually held at a public place (school or church hall, local
sports centre etc) convenient for people to get to. The issues to be discussed are usually
publicised in advance through posters, leaflets, letters, invitations etc. You can also consider
using other forms of consultation within a public meeting, for example Opinion Polls and
Mad, Sad and Glad Boards. However, public meetings often have a very low attendance and
those people who do attend may have a particular concern or view which is not necessarily
representative of the population as a whole. You could also consider using – Policy
Conference, Open Space Event, Future Search Conference or Community Visioning.
Detailed description
Cost : Relatively cheap depending on how you do it. Consider cost of venues, interpreters,
refreshments, childcare and publicity. Also consider accessibility and choosing a venue
appropriate to the issue. See Checklist 5.D - Planning a Consultation Event.
Issue : The issue being discussed will clearly have an impact on attendance. More people will
come if they are directly affected by or concerned about the issue, or if their interest has been
captured. Try to make the material advertising the meeting as interesting as possible, but make
sure that people who do attend have not been misled about the content. Have clear objectives
for what you want to achieve from the meeting and how you are going to take forward what
comes out of it.
Target audience : Open meetings are unlikely to attract an audience that is representative of
the local population and may contain more retired and middle-aged people than young people
so don’t use them as your only method of consultation. Think about your target audience and
organise the meeting at an appropriate time and location.
Collecting Information : Think about why people might want to attend an open meeting. As
well as an interest in a particular issue people might be motivated to attend by a sense of
community spirit or support for the service. It is worth finding out: a short questionnaire for
people who attend could give you lots of information, as much about who doesn't attend the
meetings as about who does. As a general rule, try to collect more information than just
numbers of attendees.
Publicity : Publicise the meeting as widely as possible to reach your intended audience. As
well as posters, leaflets, etc., word of mouth is an effective means of advertising. Speak to
informal networks, community and interest groups etc.
Practicalities : Planning the practical side of a meeting can be difficult if you have no idea
how many people are going to attend so you might want to invite people to let you know if
they are going to come so that you have some indication of numbers. If you have only
planned for 30 people and 100 turn up you may have problems.
Meeting Structure : Think about how the meeting will be structured. Make sure that any
speakers know what is expected of them (e.g. how long they should speak) and that the Chair
is well briefed and able to control any more vocal members of the audience and limit
repetitive discussion. If appropriate, you might want to think about breaking the meeting up
into smaller workshop/discussion groups to give more people the chance to participate.
Reporting : Recording views and reporting back can be difficult in open meetings,
particularly if there are large numbers of attendees. Make sure that someone takes a note of
the points raised. You can ask people to vote on the main issue but be careful about placing
too much weight on these results; views recorded in this way should generally only be used to
give an indication of public views. You must make clear to participants how their opinions will
be taken forward.
See also Checklist 5. D – Planning a Consultation Event.
Pros
• Provides local opportunities for people to comment on matters that affect them directly or indirectly
• Offers a convenient and transparent way to demonstrate public consultation/build up good relationships
• Can be used to inform the public at the same time as getting views
Cons
• People who attend are unlikely to be representative of the local population
• Attendees' ability to contribute to a discussion about service-wide, strategic priorities can be limited by a lack of knowledge and possible lack of interest
• Contributions will mainly be about local, topical or personal concerns
Useful links
http://www.cabinet-office.gov.uk/regulation/consultationguidance/content/methods/index.asp
Reference
Adapted from Cabinet Office – Code of Practice on Consultation
Mystery shopping
Brief description
Mystery shopping can provide you with very specific and detailed feedback on areas of
SEAMLESS-IF. Someone commissioned by you or recruited by you tests the service, looking
at a number of predetermined areas, and then reports back. This should give you a picture of
the type of experience a real user would have. The process is relatively simple, although you
get more out of it if it is well structured.
Detailed description
Cost : The cost depends on whether you undertake this internally or employ a specialist
agency to do it for you. If done internally it can be relatively cheap to carry out although you
must ensure you recruit neutral, unbiased people to do it for you. Employing an external
agency will be more expensive and cost will reflect the complexity, number of visits or phone
calls made and how you want the results presented. Using an external agency ensures that the
study is completely independent and unbiased.
Who to use with / when to use : Use to test specific areas of service delivery. Mystery
shopping is more suited to some services and service areas than others. Front-line operations,
where it is important to check that customers are being treated quickly and courteously, and
being given the right information, are suitable.
Points to consider : The mystery shoppers should be typical of real users. They should not be
given too much background knowledge (which may restrict their ability to see the service as
real users do), but they should be given guidance on how to assess the service and how to
feed back the information.
• If you run the study in-house, you will need to consider how to ensure enough
turnover of your shoppers so that they don’t become too knowledgeable. You also
need information from your mystery shoppers in a consistent format; think about
questionnaire design, briefing for your shoppers and how they will feed back their information to you.
• The information collected will give snapshot details of individual incidents, and you will need to make sure that 'one offs' are not given too much weight. If it looks like there might be a problem in a particular area, send another mystery shopper in to test the same service – 'one offs' and more fundamental problems can be handled differently.
• Think about how you will present the idea of conducting mystery shopping to staff; it can be seen as an underhand way of checking up on them, and a distraction from serving 'real' customers.
• Incentives for both the mystery shoppers and staff can be offered. Encourage shoppers to highlight good as well as bad service, and then perhaps reward the staff who have performed particularly well.
Pros
• Precise and detailed feedback.
• Relatively simple to implement.
• Equivalent to asking other users for their experiences.
• Flexible and immediate. You should be able to highlight particular service areas and investigate possible problems quickly.
• Can be used to commend / motivate staff.
Cons
• More applicable to front-line, person-to-person services.
• Staff are often suspicious.
• Only gives isolated instances and small samples.
• Regular shoppers could get too experienced/stale.
Web forums or e forums
Brief description
Web forums utilise the internet to hold online discussions over a fixed period of time.
The discussion is usually broadcast over the internet, or it can be recorded or summarised for
publication on the web. Participants can view the discussion online and submit responses,
questions or points in response to other people's comments. Focus can be provided by
questions and themes for discussion, or participants can start their own discussions, and
moderators can be used to respond or intervene in discussions. Before people can take part
they have to register their personal details and email address; usually participants write
comments under a nickname. Sometimes it is worth recruiting people specifically to take part
in a forum; this ensures you actually have a sensible discussion, and it reinforces individuals'
responsibility to take part and responsibility for what they write. Web forums are often used
to allow members of the public the chance to put questions to celebrities or people in high
positions such as a chief executive or simply as employee discussion forums.
Detailed description
Cost : There are many market research firms now offering this service, which is expensive,
especially where you need them to recruit participants and moderate the discussion. Simple
web forums can be run in-house; contact the Web Team.
Points to consider : To begin with let participants talk about what they are interested in and
then direct them to talk about what you are interested in.
It is necessary to moderate the discussion, especially if it is high profile and live on the
internet.
It is essential that, when questions or queries are raised by participants, the relevant person
responds within a reasonable amount of time.
Holding a web forum as a real time discussion requires everyone to be logged on
simultaneously and effective moderation.
Careful planning is needed when promoting the web forum and inviting people.
Pros
• Development of a community feeling within members of the forum.
• Effective when used with young people.
• Gives people the opportunity to say what they really feel without being identified in a group.
Cons
• Only computer literate people with access to the internet can take part; young people therefore dominate.
• Unlikely to be representative; only the keenest people take part.
• Only really successful if you have mass engagement and momentum.
• Can get out of control; people talk about other things and quickly drop out if they do not feel engaged in the discussion.
Examples of web forums
The Scottish Parliament http://www.communitypeople.net/interactive/
6.5 Involving Methods
Participatory modelling
Brief description
Participatory modelling refers to the active involvement of model-users in the modelling
process. The participatory modelling method can help to build mutual understanding between
scientists, policymakers and stakeholders. It can solicit input from a broad range of
stakeholder groups and it can contribute to maintaining a substantive dialogue between
members of these groups. Consensus-building is an essential component in group model
building processes (Costanza and Ruth, 1998). Models can vary from simple conceptual
models to complex computer models. Also, participatory modelling can be used for different
goals, varying from facilitating problem structuring to interactive planning or decision
support by means of models. In view of the current report's concern with prospective (futures)
analysis, we will concentrate on the form of participatory modelling whose final stage focuses
on producing a scenario.
Detailed description
The term ‘participatory modelling’ or ‘group model building’ refers to the active
involvement of model-users in the modelling process. Approaches differ in the extent to which
the participants really do the modelling themselves, or merely provide input to the modelling
endeavour. In some cases, the aim is to build a conceptual model, sometimes facilitated by
visualisation software, while in other cases the goal is to develop a computer model. Costanza
and Ruth (1998) see a development away from an emphasis on model development itself
towards facilitating problem structuring and group decision support by means of models.
Historical background
Participatory modelling approaches usually stem from system dynamics. An approach that
fits under the heading of participatory modelling is adaptive ecological modelling.22 The aim
is that the crucial choices in the model should be co-designed by the user community in the
design phase.
Objectives
The objective is to provide a flexible, adaptive approach to environmental planning,
assessment, and management. The method draws on a variety of modelling techniques to
capture the essential physical and economic interactions and on analytic policy techniques to
generate alternative policies and/or to evaluate policy consequences.
Participants
The participants are the experts, managers and decision-makers, typically from a number of
institutions, who have key roles to play in technical or decision aspects of the anticipated
futures problem. The method would involve a core team of six to ten participants, who can
meet at regular intervals to improve models used.
Procedure
There are many forms of participatory modelling method, such as combinations of computer
simulation models and structured stakeholder workshops. Costanza and Ruth (1998) designed
a three-step participatory modelling process.
• The first step is devoted to developing a basic model structure to represent the system one intends to study. At this stage, stakeholders are involved in making decisions about the functional connections between the variables to use in the model.
• In the second step, more detailed and realistic attempts are made to replicate the relevant evolution of the system. Using the model results, scenarios are usually developed by trend extrapolation into the future. Here, it is still critical to maintain stakeholder involvement, so that stakeholders interact with modellers through the alternative scenarios.
• The last step focuses on producing the final future scenarios. The most important and uncertain scenarios are selected and discussed during a workshop.
Consensus building is an essential component in group model building processes.
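The sketch below is an illustrative and deliberately simplified example of the first two steps: a basic stock-flow structure that a group might agree on (step one) and trend-extrapolation scenarios explored with stakeholders (step two). The variables, rates and scenario names are hypothetical and are not part of the SEAMLESS framework or of any of the models cited above.

# Step 1: a simple agreed model structure - forage area changes through planting and loss.
# Step 2: trend extrapolation under alternative scenarios discussed with stakeholders.
def run_scenario(years, planting_rate, loss_rate, initial_area=10.0):
    """Return the forage area (ha) over time for given planting (ha/year) and loss rates."""
    area, trajectory = initial_area, []
    for _ in range(years):
        area = area + planting_rate - loss_rate * area
        trajectory.append(round(area, 1))
    return trajectory

scenarios = {
    "business as usual": run_scenario(10, planting_rate=1.0, loss_rate=0.05),
    "intensified extension": run_scenario(10, planting_rate=3.0, loss_rate=0.05),
}
for name, trajectory in scenarios.items():
    print(name, trajectory)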
Some pre-existing generalised models exist and can be used for different sorts of study. These
include: Interactive Planning, Hiring System Theory, Operations Research, Socio-Technical,
Soft Systems Methodology, System Dynamics, Total Quality Management and Viable
Systems Model. In addition to these situation models, another technique is decision
modelling. Decision modelling attempts to develop a model of the decision process applied
by decision makers to important decisions within the system. The approach assumes that
decision makers consider a number of different factors when comparing various alternatives,
and that some factors are more important than others and are implicit in the perceived value of
alternative decisions.
Relevance
The participatory modelling method is now widely used in integrated environmental
assessment. It is particularly suitable for addressing a specific interdisciplinary problem and
providing relevant information to policymakers. More generally, the participatory modelling
method is not specifically designed to analyse the future. Rather, it needs to be combined
with other prospective techniques, such as scenario analysis.
Examples
An application of the use of conceptual modelling techniques is provided by ICIS. In several
policy processes on sustainable development (for example, the plan of the surroundings of
Province of Limburg (Provincie Limburg 2000), (Rotmans et al. 1999a), city of Maastricht
(ICIS 1999), reconstruction of the rural areas in North Brabant (Telos 2000)), ICIS uses a
conceptual ‘triangle model’ to structure group thinking (see Box). The basic triangular
concept functions as a structuring tool to bring knowledge and information from several
domains together in a group process. The conceptual modelling work is meant to facilitate
effective and efficient communication between participants from a variety of backgrounds
and sectors. In the participatory modelling process, the participants structure all important
policy issues in relation to the central topic and they explore the integrated effects of different
policies.
Charrette
Cf. 2.2 Convergence methods aiming at decision-support
Future search conference
Cf. 2.2 Convergence methods aiming at decision-support
Citizens juries
Cf. 2.2 Convergence methods aiming at decision-support
Users panels
See also Users forums and e networks
Brief description
Users are invited to join committees, panels or boards as equal partners in decision-making.
This approach ensures that the users / public voice is represented when decisions are made.
Detailed description
Costs include venue, refreshments, and in appropriate circumstances expenses and transport
costs. It may also be necessary to consider providing childcare, or alternative care, and to
arrange sign language and community language interpretation. You also need to consider the
accessibility of documents / papers that you produce for the meeting.
Points to think about
• It is important to consider how meetings work from the participants' point of view. Whilst this is aimed at meetings involving social services users and carers, it could be adapted for other forums.
• Be aware that your representative may need training or support to participate fully in the meeting.
• Be clear about the capacity in which the user / member of the public is involved. Are they representing a personal opinion or a wider group or network?
• If the individual is representing a wider group or network, consider what the mechanisms are for the representative obtaining and disseminating information.
• Consider the frequency at which representatives are elected / replaced.
Pros
• Provides a regular opportunity to hear the opinion of the public / service user group
• Builds positive relationships between the service and its users
• Demonstrates that you are willing to share power and control.
Cons
• Representation can become dominated by particular groups
• The individual involved may not be typical of the views of users
• The individual can become “institutionalised” to see the service from a provider's point of view
• Lines of accountability and communication between the representative and those that they are representing may be unclear.
Users forums and networks
Brief description
User Forums and Networks provide the opportunity for a tool's users to come together on a
regular basis. The rationale for this is to improve the tool. This is achieved by giving users a
regular opportunity to comment on the tool, discuss and suggest proposed changes and
developments.
Detailed description
Costs include venue, refreshments, and in appropriate circumstances expenses and transport
costs. It may also be necessary to consider providing childcare or alternative care, and to
arrange sign language and community language interpretation. You
also need to consider the accessibility of documents / papers that you produce for the
meeting.
Points to think about
• It is important that there are clear terms of reference for the forums – and that the limits of responsibility and decision-making are clear.
• Be aware that your representative may need training or support to participate fully in the meeting.
• It is important to consider how meetings work from the participants' point of view.
• Try to encourage a broad cross section of users to become involved in your Forum / Network to avoid an individual / interest group dominating.
• Consider refreshing the membership of your user group periodically - you could do this by re-issuing publicity, open invitations, promotion or targeting specific users.
• Remember to consider the implications of the Data Protection Act when consulting your group – particularly if it is being used for consultation on an issue outside its usual purpose.
Pros
• Provides a regular dialogue with users.
• Builds positive relationships between the tool and its users.
• Provides positive opportunities for targeting the tool at what people want and need, improving the delivery and take-up of the tool, testing options for service change, testing public views on conflicting priorities and supporting bids for resources.
Cons
• Can become dominated by particular issues and groups.
• May not be typical of the views of your service users.
• Can become “institutionalised” to see the service from a provider's point of view.
Planning for real
Brief description
A 3-dimensional cardboard model of a neighbourhood is created for use at the consultation
session(s). Local people discuss relevant issues (e.g. community safety, traffic, housing,
vandalism, provision of play facilities). Participants make suggestions about what they want
to happen and where. Suggested actions can also be prioritised (e.g. Now, Soon or Later).
Detailed description
Main costs include: materials for model(s), hire of facilities (meeting room, refreshments etc), facilitator time and promotional materials.
When to use : Planning for Real can be used:
• To establish local people's priorities in development of a general community action plan.
• To establish views about a particular issue (e.g. what to do with a piece of derelict land, how to tackle traffic problems).
• To demonstrate the complexity of decision-making processes.
Who to use with
• Take to community events e.g. village fete, neighbourhood meetings.
• Use with specific groups e.g. school children, women's institute.
Points to think about
• How many sessions need to be held and where?
• How will sessions be promoted?
• How many models will be required?
• Time & materials to make model(s) (can involve local people e.g. school children).
• Presentation of background information / main issues to consider.
• Are participants given a list of possible options or will this be left open?
• How will suggestions be fed back? (e.g. cards to place on model, group presentation, feedback forms to fill in).
• How will suggestions be taken forward? (e.g. production of an action plan; feedback to relevant people).
Pros
• Highly visible and very hands-on.
• Easy and enjoyable for people of all ages, abilities and backgrounds.
• Promotes discussion of real issues and allows local people to suggest solutions.
Cons
• Making models can be time consuming if starting from scratch.
• Time & effort to ensure all relevant groups are represented.
• Danger of being taken over by certain individuals – need to ensure everyone has their say.
Useful links : Planning for Real was developed by the New Initiatives Foundation:
http://www.nifonline.org.uk
Community profiling / community appraisal
Brief description
Community profiling involves building up a picture of the nature, needs and resources of a
community with the active participation of that community. It is a useful first stage in any
community planning process to establish a context that is widely agreed. There are a wide
variety of methods available to develop community profiles. The methods combine group
working and group interaction techniques with data collection and presentation techniques.
The focus is on methods that are visual in order to generate interest and make the process
accessible. The results are in the public realm. Reports include as many of the words, writings
and pictures of local people as possible.
Detailed description
Cost : Cheaper than many conventional methods that involve the use of consultants. Costs
include facilitators and materials.
When to use : To build up a comprehensive picture of a local community. You can pick and
choose from the range of methods as appropriate. The method has been used in Strathclyde
and other areas.
Points to think about : This method requires a high level of community participation and
commitment. Effort will need to be made to ensure that a particular interest group does not
dominate and that everyone is engaged in the process. It is
essential that there is an outcome to the profiling – that something happens as a result at the
end of the process. Carrying out these sorts of exercises will raise expectations.
Pros
• Highly participative.
• Cheaper than using external consultancy.
• The wide range of methods available means that it should be possible to engage everyone.
Cons
• Could be dominated by a particular interest group – good facilitation will help.
• Could leave some people disappointed at the end of the process if expectations are not met.
• Lack of a clear objective can mean that the process lacks a clear focus.
Useful links
http://www.communityplanning.net/methods/method42.htm
References : Adapted from Community Planning Net.
Community visioning
Brief description
Community visioning involves a group of people coming together to develop ideas about
what they would like their community ideally to be like. After the vision is agreed the group
will then work on looking at what needs to be done to bring about that vision and put this
together in an action plan. Community visioning can involve conference or workshop events.
It is likely that drawing up the vision and the action plan will take place over a period of
months. Groups meet, and are assisted by a trained facilitator to agree on a vision for their
area and look at ways of achieving this goal. Alternatively, creating the vision can be tied into
other events.
Detailed description
There needs to be a process, framework and resources in place to translate community visions
into action. Other resources are meeting rooms and trained facilitators.
When to use
Can be used for large-scale community planning, or to look at the way forward for particular
issues such as health, environment and education. It has been used as a tool for Agenda 21
planning.
Points to think about
Visioning projects can cover any geographical area, from a street to the whole world. Factors
that influence that scale are:
• Go for somewhere small enough that people can identify with it;
• Go for somewhere large enough that decision-makers will feel it is worth putting effort into;
• Go for somewhere that feels like a natural unit.
Useful links
For a comprehensive guide to community visioning
http://www.neweconomics.org/gen/uploads/doc_18920003301_Howto.doc
For basic information
http://www.communitiesscotland.gov.uk/web/site/Engagement/community_engagement.asp
Reference
Thanks to Scottish Centre for Regeneration - Communities Scotland
Open space event
Brief description
Open Space events involve from 20 - 500 people in identifying important issues, discussing
these, prioritising them and deciding on action. It involves all stakeholders and is therefore a
‘whole system’ approach. Key steps are setting the theme and inviting the participants;
participants then start by sitting in a circle and decide themselves on the issues to discuss,
using a simple procedure usually guided by a facilitator; the participants create their own
agenda within the theme, and 1-2 hour workshop sessions identify the key issues. Participants
self-organise by signing up to those topics important to them. Groups move on to prioritising and
identifying action. Management has the option to respond on the day. All stakeholders should
be invited. You will have to think about how you ensure a good turn out on the day. Again,
remember those who cannot come to such events for whatever reason - such as people from
groups we find traditionally difficult to engage.
Detailed description
Cost : Depends on the size of the event, consider an accessible venue, carer and childcare
costs, catering. An external facilitator can cost up to £1000 per event. Events need to be well
publicised so budget for this.
When to use : When you want to bring together a broad range of people – and translate
detailed discussions into action plans.
Points to think about
• Make sure you have enough breakout areas.
• Think about the layout of the event: although workshops are self-managing, they operate in accordance with certain “principles and laws” which should be clearly displayed.
Pros
• Can generate action plans quickly
• Highly participative
• Relatively low cost
Cons
• Can appear a bit "chaotic".
• The official version of Open Space says that all participants should get written summaries on the day, which means organising a bank of typists and a large capacity photocopier.
• Some areas have found difficulty in getting large turnouts of the public.
Useful links
www.openfutures.com
http://www.communityplanning.net/methods/method90.htm
References
Community Planning Net
Building Strong Foundations – Involving People in the NHS
7 Dialogue tools and facilitation tips
This chapter describes some dialogue tools – i.e. specific means of eliciting information – and facilitation tips for the application of participatory methods.
7.1 Dialogue Tools
Face-to-face interviews
Brief description
Face-to-face interviews are the traditional way of asking large numbers of people their views
based upon a structured questionnaire. Trained interviewers either stop respondents in the
street, or visit them in their homes, to find out what people think about a topic and issues.
While semi-structured interviewing appears to be informal and conversational, in fact it is a
well-defined and systematic activity that has clearly defined goals and guidelines. The
advantage of this technique is its flexibility and responsiveness — the interview can be
matched to individuals and circumstances. At the same time, the use of an outline or guide
can make data collection reasonably systematic. The disadvantages are that it requires some
skill and is therefore difficult to delegate to an assistant; different information may be
gathered from different people, depending on which topics arise; and data organisation and
analysis can be quite difficult (Mikkelsen 1995). Semi-structured interviews can be carried
out with individuals or with groups. Individuals can be selected respondents who give
information about themselves (case studies), or key informants whose special knowledge can
give insights on a particular topic. Group interviews can be conducted with a community
group comprising diverse members with access to a broad range of information, or with a
small select group of like-minded individuals (a focus group) who are able to discuss a
particular topic in detail. Structured interviews are mainly used for comparative purposes and
to obtain quantitative data (GAO 1991). Typically structured interviews are combined with a
sampling scheme and are used to generate data for statistical inference. For example, sample
surveys (using a structured interview technique) can generate information which can be
generalised to the population from which the sample was drawn, whereas case-studies (using
a semistructured interview technique) are specific to the person being interviewed and the
information cannot be generalised to the population. However, inferential analysis is not
restricted to the use of structured interviews in a sample survey format.
Structured interviews allow a consistency between interviews so that every respondent is
asked the same question. This is what allows the comparison between respondents. It also
makes it possible to delegate the interviewing task to enumerators, provided they are
thoroughly trained and well supervised. However, unlike semi-structured interviews,
structured interviews limit the ability of the interviewer to ask questions outside the format of
the questionnaire and thus are prone to omission of information that may be of interest.
Structured interviews can be of an open-ended or closed-question type and can be conducted
face-to-face or by a written questionnaire filled in by the respondent. However, in situations
such as Malitbog and M’Drak, face-to-face interviewing is the only feasible technique.
Detailed description
When to use : Face-to-face interviews are typically used to gather specific information about
a subject, such as facts, figures and attitudes. They can also be used when issues are already
known and need to be quantified, or when open-ended information is required. Face to face
methods are most suited to surveys in which a wide range of topics must be covered using a
large number of questions.
In the context of quantitative surveys, face-to-face interviews are completely structured, and
take the form of a questionnaire administered by an interviewer. As with postal surveys, they
are useful for obtaining basic information from service users and people who have quite a
high degree of interest/knowledge in a subject.
Semi-structured or unstructured interviews are more useful where we want fairly detailed
information from a small number of people. Interviewees are not presented with fixed, preselected questions and responses, as is the case with structured interviews.
What to do : There are a number of stages to the interview process that have to be followed :
• draft questionnaire (structured/semi-structured interview) or a set of question topics
(unstructured interview) is produced;
• sample is chosen;
• pilot interviews to ensure all topics are covered and interviewees understand the questions;
• answer sheets and, where appropriate, visual aids are printed;
• responses are coded and the data stored/input;
• data is analysed;
• report on the findings and feedback to the people who participated in the exercise.
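The coding and analysis stages listed above can be very simple in practice; the sketch below, with hypothetical codes and responses, shows closed-question answers being stored as numeric codes and then tabulated as counts and percentages.

# Closed-question responses are recorded as numeric codes and then tabulated.
# Codes, labels and responses are hypothetical illustrations only.
response_codes = {1: "very satisfied", 2: "satisfied", 3: "dissatisfied", 4: "very dissatisfied"}
responses = [1, 2, 2, 3, 1, 2, 4, 2, 3, 2]   # coded answers from ten interviews

for code, label in response_codes.items():
    count = responses.count(code)
    print(f"{label}: {count} ({count / len(responses):.0%})")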
Detailed implementation advice
Designing face-to-face or phone interviews
• Give the interviewer(s) a detailed briefing about the purpose of the exercise and how you are going to use the results. Provide a laminated letter of authenticity for the interviewer to show the respondent.
• Everything the interviewer needs to say should be provided on the questionnaire for them to simply read out to the respondent.
• Split the questionnaire into sections that flow logically. Write brief introductions that the interviewer reads out before each section.
• Keep the wording consistent throughout the survey.
• Number questions and parts of questions logically and clearly for the interviewer to follow.
• Provide detailed instructions, verbatim and 'GO TO' rules for the interviewer to follow throughout the whole survey. Think about how changing the order of your questions can help this.
• The interviewer should indicate to the respondent questions where they can choose one option or more than one.
• For face-to-face surveys use laminated showcards that list and number the responses to each question. The respondent then simply reads out the number that matches their answer.
• Be consistent in your design and question framing, i.e. always from positive to negative or 1 to 5.
• Ask straightforward questions first, leaving any sensitive, more difficult or contentious questions until last. You can ask 'About You' questions either at the beginning or the end of the survey.
• Do ask 'About You' questions which will add value to your analysis. Analyse results by groups such as age group, sex, district etc and compare results to see how opinions vary. What you ask here depends on your survey topic.
• If you ask open-ended questions ensure you leave enough space for comments that the interviewer writes down. Provide blank boxes rather than lines which limit what people can fit in; everyone's handwriting is different. Also consider how you are going to analyse this information and take it into account.
Particularities of semi-structured interviews
Mikkelsen (1995:110–111) lists some general guidelines for semi-structured interviews:
• Begin with a greeting and state that the interview team is here to learn.
• Begin the questioning by referring to someone or something visible.
• Conduct the interview informally and mix questions with discussion.
• Be open-minded and objective but judge everything you hear — there are many reasons
why people give the information that they do, not necessarily because it is accurate or
truthful.
• Carefully lead up to sensitive questions — put these near the end of the interview so that if
the respondent decides not to answer these you do not lose their willingness to answer earlier
questions.
• Be aware of non-verbal signals.
• Avoid leading questions and value judgements — such questions can cause bias in the
answer.
• Avoid making assumptions — for example, asking people how many grades of school they
completed assumes that they went to school in the first place.
• Avoid questions that can be answered with ‘yes’ or ‘no’.
• Be aware of both direct and indirect questioning — for example asking a male farmer about
farming activities carried out by his wife may lead to different answers than if you asked the
wife directly.
• Individual interviews should be no longer than 45 minutes and group ones no longer than
two hours.
• The interviewer should have a list of topics and key questions written down in a notebook.
• The interviewer or a member of the interviewing team should make detailed and systematic
notes, as these are the primary output of the interview. When the collection of information is
delegated to someone who has a lack of ownership of the process or who will not benefit
from the outputs, the quality and reliability of the information declines. In such a situation,
what is intended to be a semi-structured interview with open-ended and probing questions
becomes more like a structured, closed-question survey without any desire on the part of the
interviewer to find out the reasons why people give the answers they do. Hence it is important
for semi-structured interviews to be conducted by experienced workers with a genuine
interest in the outcomes.
Particularities of structured interviews
There are many good references on structured interviews and survey design (e.g. Bernard
1995; Casley and Kumar 1988; Fowler 1993; GAO 1991, 1992; Poate and Daplyn 1993;
Pannell and Pannell 1999). It is not the purpose of this report to reproduce that material.
However, it is worth emphasising that structured interviews need to be carefully planned in
order to be successful. The planning of a structured interview needs to take into consideration
not only the design of the appropriate questions but also the selection of the sample to be
interviewed. There are many problems with structured interviews, in particular sample
surveys, which can be avoided by careful planning and pre-testing. However, one particular
pitfall that appears prevalent in most surveys is the lack of forethought for data analysis. This
falls into two categories — the collection of data without consideration of the statistical and
sampling context, and the inclusion of questions in a structured interview which are not going
to be analysed. In the second instance the collection and coding of that information is a waste
of valuable time and resources. In general, it is far easier to expand a questionnaire and
increase the number of respondents than it is to manage and utilise the data which results
from this activity. As far as possible the aim should be to minimise the number of questions
asked and the size of the survey sample, while maximising the reliability and utilisation of the
data generated.
Piloting the questionnaire/interview
Pilot the questionnaire before doing it for real. Test it either with colleagues or a sample of
people. Check the questionnaire and showcards thoroughly for spelling errors, ‘go to’ rules,
instructions, responses, question numbering and coding. Do take people’s feedback into
account: if they say they don’t understand something or find it difficult to complete or answer,
then you should do something about it! Time how long it takes someone to complete the
questionnaire or interview; it may take longer than you think.
Raising response rates for face-to-face interviews
• Send a letter in advance of the interviewer calling at the respondent’s house; explain what the survey is about and why it is important. Include your contact details so that residents can contact you should they not wish to take part.
• Interviewers should carry an ID badge and a signed letter of authentication.
• Brief the interviewer fully so they can answer any questions the respondent might have about the survey.
• Give interviewers a clear opening statement to read out when they call at a person’s house. Don’t be too forceful but be persuasive and truthful.
• Give an indication of how long the interview might take.
• If residents appear reluctant, offer to call back at a more convenient time.
• Interviews are best conducted in the evening or at weekends. If residents are out, interviewers should call back at least 4 times. Leave a calling card/letter asking them to contact you to arrange an interview.
• Offer an incentive such as entry into a prize draw.
Analysing and using
There is no strict framework for analysis of semi-structured interviews as there is for
structured interviews. The primary purpose of the interviews is not to collect quantitative data
from which to draw inferences — a purpose best left to structured interviews in a survey
framework — but to tell a story. The qualitative information gathered from semi-structured
interviews enables researchers to describe patterns among the data and to build explanations
of processes, such as farmers’ adoption decisions.
In fact, there is no clear demarcation between the elicitation and analysis phases in
semi-structured interviewing. The technique is essentially iterative; hence analysis occurs
concurrently with data collection. The interviewer follows a process of ‘observe, think, test,
and revise’ as the interview proceeds, in order to develop robust conclusions in a
participatory manner. Triangulation — the comparison of multiple, independent sources of
evidence — is also used to strengthen the validity of the findings. GAO (1990) suggests
developing alternative interpretations of findings and testing these through a search for
confirming and disconfirming evidence, until one hypothesis is confirmed and others are
ruled out. The reproducibility of findings is established through analysis of multiple sites and
data over time. These can be analysed by developing a matrix of categories, using graphic
data displays, tabulating the frequency of different events, developing complex tabulations to
check for relationships, and ordering information chronologically for time series analysis.
Data analysis ends when a plausible description or explanation has been developed, having
considered all the evidence (GAO 1990:59).
The analysis and utilisation of data collected from structured interviews depends on whether
the data are derived from open-ended or closed questions and whether the responses can be
quantified or not. Closed questions usually mean (a) that the responses are exhaustive and
mutually exclusive (all possible responses are covered and they do not overlap) and (b) that
the questions are asked of all respondents. For open-ended questions, however, responses
may range from no response, through a few words, to several sentences. Respondents usually
only detail factors which come to mind immediately, not necessarily the most important
factors. Quantifiable responses enable higher-order analysis to be carried out, whereas non-quantifiable data restrict the analysis to description of the situation. Analysis of structured
interview data can be carried out at several levels. At the first level of analysis a description
of the data collected needs to be given. This can be done in the form of frequency tables that
can show the number of respondents in each particular category. At the second level of
analysis a description and analysis of the data is carried out. Each question can be analysed
and associations between responses examined. This can be done in the form of correlation
and chi-squared analysis to check the statistical significance of differences between groups.
The third level of analysis takes into account the interaction of many different variables on
the responses for particular interview questions, and addresses more complex analytical
questions. Such analysis can be carried out using analysis of variance, multiple regression
analysis, and discriminant function analysis.
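
As an illustration of the first two levels described above, the short Python sketch below computes a frequency table and a chi-squared test of association for two hypothetical coded questions ("age_group" and "satisfaction"); it is only an assumed example using the pandas and scipy libraries, not a prescribed part of the SEAMLESS-IF toolkit.

import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical closed-question responses, already coded
data = pd.DataFrame({
    "age_group": ["18-34", "35-54", "55+", "18-34", "55+", "35-54"],
    "satisfaction": ["Satisfied", "Dissatisfied", "Satisfied",
                     "Satisfied", "Dissatisfied", "Satisfied"],
})

# First level: frequency table for a single question
print(data["satisfaction"].value_counts())

# Second level: cross-tabulation and chi-squared test of association
table = pd.crosstab(data["age_group"], data["satisfaction"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print("chi2 = %.2f, p = %.3f" % (chi2, p_value))
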
• If the interviews are to be selected using a random sample, interviewers should not attempt to increase the response rate by interviewing people not on the list. This will reduce the validity of the sample.
• Depending on the complexity, an interview comprising 60 questions would take about half an hour to complete, depending on the respondent (e.g. interviewing older people usually takes longer). An interview of 30-45 minutes is considered the maximum length.
• Professional interviewers should conduct the interviews. They will be required to carry an ID badge and a letter of authentication. It is also a good idea to notify the local police that market researchers are conducting fieldwork in the area. Sometimes you can also send an advance letter to notify householders that an interviewer is due to call and what it’s all about.
• Normally at least 4 call-backs are made in an attempt to interview a person who is not in the first time an interviewer calls, further adding to the cost.
• Response rates and questionnaire lengths will depend upon the location of the interview. People will have less time if interviewed on the street, but may be less accessible, and less inclined to agree to an interview in their own home.
• Provision needs to be made where necessary for interviewers who speak ethnic minority languages. The interviewer should also be appropriate to the people you are trying to interview, i.e. gender issues, sensitivity and ethnicity.
• Sample size. Should you tender for a project that involves face-to-face interviews, you should ask the companies you contact to identify what sample size they will use and what response rate they expect to achieve. Their response to this will give you a clear indication of quality, efficiency and value for money.
Resources: Although interviews may involve much smaller numbers of respondents than
with postal surveys, they still require significant resources. Because interviewers need to be
skilled in carrying out long and possibly complex interviews, it may be cost-effective to
employ a market research company to do the fieldwork. If the work is to be done in-house,
the cost of training staff to do the fieldwork must be assessed along with the other aspects of
the exercise – e.g. preparation, data analysis, report writing, information dissemination,
printing of materials, and travel.
Feedback: The results of the interviews must be presented in a simple, well-organised way.
The bulk of the final report should be composed of text, as this is the only effective way of
explaining results. Tables and graphs should also be used to aid clarity. Findings and
outcomes should be fed back to interviewees. Where their names and addresses are known,
this should be relatively straightforward. Where interviewees’ names and addresses are not
known (e.g. respondents to street interviews), the same information can be disseminated
through other means, such as press releases.
Pros
• The questionnaires used in face-to-face interviews can be longer and they are more flexible than postal surveys. Greater complexity and routing of questions can be incorporated so that particular questions can be bypassed and tailored to the respondents’ answers as they go along. Only simple routing can be incorporated in self-completion questionnaires.
• Response rates are generally high. You can also track the progress of the number of completed interviews throughout the fieldwork process, which is important when a particular response rate or number of responses is required.
• More probing, complex questions can be asked as the interviewer can guide the respondent through the questionnaire, display visual aids, or even offer personal experience and encouragement.
• Sensitive or difficult subjects can be explored and respondents can answer using numbers on laminated ‘showcards’. Electronic methods of data capture such as CAPI (Computer Assisted Personal Interviewing) can also be used; here the respondent answers questions via a laptop or handheld device. CAPI enables more accurate, faster data capture without the need for data inputting and therefore reduces costs.
• Collecting responses through face-to-face interviews can be as quick as, if not quicker than, postal surveys.
• The quality of the data you collect is higher, in terms of accuracy and completeness, than in a postal survey. Those being interviewed cannot skip questions or misinterpret the meaning of questions.
Cons
• Face-to-face interviews are more expensive than postal surveys and telephone interviews. They are more time-consuming, labour-intensive and require trained interviewers.
• In some cases there may be difficulties with interpretation of responses, especially when responses are sought from people for whom English is not a first language, or people with disabilities such as a hearing impairment. Interpreters may be required or you may need to consider the most appropriate way of communicating.
• Interviewer effect can occur. This is when the presence of the interviewer influences the respondent, e.g. the respondent may not answer truthfully or give an answer they think the interviewer wants to hear.
• Feedback from individual interviews may not be representative of the views of all the respondents.
• Not everyone is willing to take part, especially when interviews are conducted on the street and door to door.
• Older and younger people are more likely to refuse than other groups of people.
Postal questionnaire surveys
Brief description
Postal surveys involve sending out a paper based questionnaire to respondents who then
complete and return it by a specified date. Paper based self-completion questionnaires, either
postal or distributed in another way e.g. in-situ, are one of the most popular survey methods
available. They are flexible, easy to administer, relatively cheap and can often be successfully
carried out in house. Postal surveys are ideal when those you want to survey are widely
dispersed across the County. Wherever possible tick-box responses should be provided
making it easy for the respondent to complete, and data analysis straightforward. Open-ended
questions are not suited to self-completion surveys.
Detailed description
Postal questionnaire surveys are suitable for asking a range of straightforward questions
of a large number of people. Typically, these people will be either service users or they will
have quite a high degree of interest/knowledge in a subject. Self-completion and postal
surveys can cover a range of people and topics, however they are not always the best method
to use. This method is difficult to use for complex subjects, but ideal when you want to gauge
public opinion and satisfaction ratings on broad topics. They are also a good method to use
when covering sensitive or personal issues. Pre-coded tick box questions are the best question
types to use. This is where you best guess the full range of responses people are likely to say
and list them for people to tick. You can also ask questions based on a rating scale where you
provide a list of responses (usually 5) and force respondents to make a choice e.g. Very
satisfied, Fairly satisfied, Neither, Dissatisfied, Very dissatisfied.
What to do
Once it has been decided that a postal questionnaire survey is to be carried out, a number of
stages have to be followed :
• a draft questionnaire is produced and an appropriate sample chosen;
• the questionnaire is piloted to ensure that the questions are relevant and understandable to
respondents;
• questionnaires are printed and other materials (e.g. covering letter and pre-paid reply
envelope) are collected;
• questionnaires are mailed, returned and responses input;
• data analysis, followed by a report on the findings and feedback to the people who
participated in the exercise.
Detailed implementation advice
Designing a postal or self-completion questionnaire
• Split the questionnaire into sections that flow logically and begin with a brief paragraph introducing each section.
• Keep the wording and design consistent throughout the survey.
• Choose an appropriate font and size (11-13pt) and highlight questions and important text in bold. However, do not go over the top with formatting!
• Design the questionnaire so that it is appealing to the eye. Do not overload pages with questions or information; white space is a good thing. Maximise your margins to make full use of the space on the page.
• Number questions and parts of questions logically and clearly.
• Provide ‘GO TO’ rules instructing respondents where to go and what questions to answer next. Think about how changing the order of your questions can help this.
• Help the respondent by stating ‘Tick one only’ or ‘Tick all that apply’.
• Be consistent in your design and question framing, i.e. always go from positive to negative or 1 to 5.
• Ask straightforward questions first, leaving any sensitive, more difficult or contentious questions until last. You can ask ‘About You’ questions either at the beginning or the end of the survey.
• Do ask ‘About You’ questions which will add value to your analysis. Analyse results by groups such as age group, sex, district etc. and compare results to see how opinions vary. What you ask here depends on your survey topic.
• If you ask open-ended questions, ensure you leave enough space for comments. Provide blank boxes rather than lines, which limit what people can fit in; everyone’s handwriting is different. Also consider how you are going to analyse this information and take it into account.
Piloting the questionnaire/interview
Pilot the questionnaire before doing it for real. Test it either with colleagues or a sample of
people. Check the questionnaire and showcards thoroughly for spelling errors, ‘go to’ rules,
instructions, responses, question numbering and coding. Do take people’s feedback into
account: if they say they don’t understand something or find it difficult to complete or answer,
then you should do something about it! Time how long it takes someone to complete the
questionnaire or interview; it may take longer than you think.
Analysis
Analysing Quantitative data
Analysing Qualitative data
Examples of types of Questions
Pre-coded or closed questions – a list of pre-coded responses are provided for the respondent
to simply ring or tick. Each option is assigned a numeric code which is used for data input
and analysis. The option of ‘Other’ may also be offered with space for respondents to write in
their answer. Be sure to include a full range of options so that you don’t get lots of people
ticking ‘Other’.
Rating or attitude scales – can be used to measure agreement or satisfaction. Ensure that you
provide a fair range of responses that are balanced e.g.
How satisfied are you with the bus service overall?
Very satisfied, Fairly satisfied, Neither, Dissatisfied, Very dissatisfied
Or
To what extent do you agree or disagree with the following statement: “The quality of Council
services is good overall”? Strongly agree, Agree, Neither, Disagree, Strongly disagree.
Open-ended questions – for these question types no categories or answers are suggested; the
respondent is free to express themselves in their own words. E.g. Why are you dissatisfied or
very dissatisfied with the service provided?
Classification questions – ‘About You’ questions such as sex, age group, employment status,
tenure, ethnicity. These are very useful in breaking down the results into different groups for
analysis.
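
To show how pre-coded responses and numeric codes fit together for data input, the following minimal Python sketch decodes a batch of keyed-in answers and counts each category; the codes, labels and answers are invented for illustration and are not prescribed by the toolkit.

from collections import Counter

# Hypothetical numeric codes assigned to a five-point satisfaction scale
SATISFACTION_CODES = {
    1: "Very satisfied",
    2: "Fairly satisfied",
    3: "Neither",
    4: "Fairly dissatisfied",
    5: "Very dissatisfied",
}

# Coded answers as they might be keyed in from returned questionnaires
responses = [1, 2, 2, 5, 3, 1, 4, 2]

# Decode and count each category to give a simple frequency summary
counts = Counter(SATISFACTION_CODES[code] for code in responses)
for label, n in counts.items():
    print(label, n)
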
Raising response rates for postal questionnaires
• Including the Worcestershire County Council logo on the mailing envelope will aid recognition and make your letter stand out from the rest. Reprographics can arrange to have our logo printed on your envelopes; note this can take up to 4 weeks.
• Include a clear, concise covering letter signed by an appropriate senior officer signifying its importance. Underline what will happen with their answers and explain why it is important that they respond.
• Don’t make your questionnaire too long; people will be put off straight away. Consider stating how long it should take them to complete, or using another method.
• Include a pre-paid envelope so that people can easily return their completed survey.
• Send a reminder or reminders two to three weeks after you sent the survey. You should only send reminders to those people who have failed to respond; sending reminders to everyone is wasteful and will be perceived as inefficient. To do this you must assign ID numbers to the people you send surveys to. This ID number is then written onto the survey and recorded as completed surveys are returned (a simple tracking sketch follows this list).
• Offer an incentive, such as selecting one respondent at random who wins an appropriate prize. Ensure you do not over- or under-spend, and note that cash may not always be appropriate (it can affect people’s benefits); consider vouchers. See the Social Services guide to incentives.
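
The reminder procedure described above can be supported by a very simple ID log; the Python sketch below is a hypothetical illustration (the ID format and numbers are invented) of how the list of non-respondents for a reminder mailing might be derived.

# IDs written on the surveys that were mailed out
sent_ids = {"Q%04d" % n for n in range(1, 501)}

# IDs logged as completed surveys are returned
returned_ids = {"Q0003", "Q0017", "Q0042", "Q0105"}

# Reminders go only to those who have not yet responded
reminder_list = sorted(sent_ids - returned_ids)
print("Send reminders to %d of %d people" % (len(reminder_list), len(sent_ids)))
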
Resources
Once a survey timetable has been produced, it will be possible to identify what resources are
needed to carry out each stage of the process. Some of the survey costs will be hidden (e.g.
staff time) but should nevertheless be accounted for. Other costs are easier to identify. For
example, the cost of printing and posting the questionnaire and covering letter can be
estimated once the sample size is known. PC-based software packages that can be used for
storing and analysing survey data fit into three main categories: spreadsheets (Excel and
Lotus 123 are easy for entering data but generally have limited capacity for analysis),
databases (Access or dBase should be sufficient for most purposes), and statistical packages
(SPSS and SNAP are powerful analysis tools for very large surveys). There is also a package
called Pinpoint, which combines questionnaire design with data storage and analysis.
Analysis and Evaluation
The best place to start when analysing survey data is to calculate frequencies. These show
how respondents answered each question; e.g. how many were very satisfied with a particular
service, how many were dissatisfied, etc. Once the basic frequencies have been calculated,
the next step is to look at cross-tabulations. These are used to investigate the relationship
between two or more variables, e.g. the number of people in a certain age group and a
particular town who were dissatisfied with a service.
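
As a hedged illustration only, the following Python sketch shows how frequencies and a cross-tabulation could be produced with the pandas library, assuming the coded responses have been entered into a file named postal_survey.csv with hypothetical columns "age_group", "town" and "satisfaction"; none of these names is prescribed by the toolkit.

import pandas as pd

survey = pd.read_csv("postal_survey.csv")  # assumed file of coded responses

# Frequencies: how respondents answered each question
print(survey["satisfaction"].value_counts())

# Cross-tabulation: dissatisfied respondents by age group and town
dissatisfied = survey[survey["satisfaction"].isin(["Dissatisfied", "Very dissatisfied"])]
print(pd.crosstab(dissatisfied["age_group"], dissatisfied["town"]))
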
Feedback
The survey results must be presented in a simple, well-organised way; otherwise they may
easily be ignored. The bulk of the report should be composed of text, since this is the only
real way of explaining results. Tables/graphs should also be used to aid clarity. Ideally, the
survey results and outcomes should be fed back to respondents. Where the respondents’
names and addresses are known, this should be a relatively straightforward exercise. Where
respondents are anonymous, the same information can be disseminated through other means,
e.g. press releases.
Pros
• Easy to administer.
• A well designed questionnaire produces reliable statistical information.
• Repeating the same questions over a period of time allows you to track opinions.
• Pre-coded tick-box responses are quick and easy to analyse.
• Large numbers of people can be contacted in a short period of time.
• Relatively cheap, especially where supported by in-house expertise.
• The respondent is able to complete the questionnaire in their own time, which may lead to more considered responses.
Cons
• Only a small amount of information can be gathered in a self-completion survey. It is difficult to establish why someone has answered the way they have.
• A poorly planned or designed questionnaire will result in poor data, a low response and inaccurate results.
• Expect lower response rates, especially among younger age groups, people with literacy problems and people whose first language is not English (unless translations can be provided). It is easier for someone to ignore a postal survey than an interviewer.
• Limited length and complexity of questions; the questionnaire must be easy to complete.
• The reader may misinterpret questions in the absence of an interviewer.
• Lack of control over who answers the questions.
Phone interviews/surveys
Brief description
Telephone surveys are being increasingly used as a means of collecting data quickly and at
relatively low cost. They involve calling, usually without prior warning, and running through
a series of questions on the telephone, writing down the answers given on a pre-coded sheet.
A well designed telephone survey can actually combine data gathering and data entry as the
response given can be entered directly into a computer screen by the interviewer as the survey
takes place.
Detailed description
When to use? When a quick consultation process is needed.
Points to think about: Telephone surveys should last no more than 10-15 minutes.
Raising response rates for telephone interviews
Give interviewers a clear opening statement to read out when they call a respondent.
Don’t be too forceful but be persuasive and truthful. Give an indication of how long the
interview might take. If residents appear reluctant offer to call back at a more convenient
time. Interviews are best conducted in the evening or weekends. If residents are out
interviewers should call back at least 4 times. Offer an incentive such as entry into a prize
draw.
Pros
• Obtains relevant information.
• Allows freedom to explore general views and perceptions in detail.
• Can use an external organisation to do the work, as it will add independence.
• Can target groups which are often excluded.
• Relatively quick and easy to conduct in-house, and a high volume of data can be obtained.
• More complex issues can be tackled than in a postal survey, as an interviewer is involved.
• Easy to survey people who live in wide geographic areas.
• The data can be inputted electronically as you carry out the interview.
Cons
• Interviewing skill required.
• Need to sample enough people to generalise results.
• Expertise is needed in preparing questions so they are not prescriptive. Data analysis skills are also needed to analyse the responses.
• Tightly structured questionnaires can constrain consultees’ responses.
• Sample results may not be representative. You cannot verify that you are speaking to the intended person.
• Telephone surveys have high refusal rates, and cold calling can often annoy the prospective respondent; many people fear that these calls are actually thinly disguised sales pitches.
7.2 Facilitation Tips
Customer comment cards
Uses
Customer comment cards can be used as a management tool to refine service delivery by
seeking comments from service users. Ideally, they should focus on both positive and
negative aspects of service delivery, and provide the basis for introducing improvements. A
variation of the customer comment card is the freephone telephone line, which can also be
used for gathering comments on services. As the viewpoints of service users are not always
made on pre-printed comment cards, the process should be augmented with a formal
complaints system for dealing with unsolicited letters and telephone calls.
What to do
Self-completion comment cards can be printed – preferably in an eye-catching style – and left
at prominent positions in staffed access points, etc. As there is usually no personal interaction
with a member of staff, the act of completing a card is at the discretion of the respondent. A
post box should be provided next to the public supply of comment cards, or, alternatively, a
reply-paid envelope attached to each card. Consideration should be given to taking out an
advertisement in the local press. A copy of the comment card could accompany text on the
service(s) we are seeking feedback on. Where a free-phone telephone line has been
established, the local press and other media should be used to publicise the facility and
encourage people to use it.
Pros and Cons
The main advantages of customer comment cards and a complaints system are :
• they are inexpensive and require little staff time or expertise;
• they are easy to administer;
• they record the instant views of service users without the time lag associated with postal
customer satisfaction surveys.
The main disadvantages are:
• no control over who responds – therefore, the responses may not be representative of users
as a whole;
• the response rate can be low;
• responses may be very skewed – for example, those who have a negative experience may be
more likely to respond as a way of making their complaint known;
• the system is open to abuse by campaigning groups who may submit multiple comment
cards on one issue;
• because of that, it can be difficult to know what weighting to give responses.
Resources
Apart from designing and printing comment cards, the only resource needed is the ability to
collate and analyse the results. With free-phone telephone numbers, there will be an
additional telecommunication charge for the service.
Analysis and Evaluation
Analysis can be difficult, as comments will cover a wide range of services. They will also
vary greatly in style, ranging from general remarks to detailed textual comments on specific services.
Comment cards generally ask open-ended questions and the usual problems of analysing
qualitative data will apply. Unsolicited comments are more likely to be service-specific as
they will usually arise from dissatisfaction with a particular aspect of service delivery. As
with many other consultation techniques, it is unlikely that customer comment cards or a
complaints system will operate independently; they are more likely to be an integral part of a
wider review of service delivery.
Feedback
Comment cards can be anonymous and it is difficult, therefore, to advise a respondent about
the action taken on his or her suggestion. However, it is important that the appropriate service
manager is advised. Where the respondent is known, the suggestion should be responded to
promptly and, if appropriate, details given of the action taken.
Mad-sad-glad boards
These Boards are a simple and effective way of allowing people to give their opinion about a
particular issue, place or event. They are a great way of engaging people and getting people
talking. The Boards have a panel each for comments about what makes people “Mad, Sad or
Glad”. Comments are written on Post It notes and stuck to the Board under the appropriate
heading. Comments can be as general or detailed as people please. Comments are written up
and fed back to participants. The Boards can be used as part of a meeting or at a community
event.
Go-rounds
Uses
At most meetings and events some people never get heard. At big events (20 or more people)
this may be a significant minority/majority of those present. A Go-Round goes some way
towards remedying this by giving everyone a space to speak to the whole group. It is an
Equal Opportunities method and can be used more than once in a meeting. A Go-Round can
be used as the backbone of any Beginning or Ending, i.e. for people to introduce themselves
and to feedback on how they felt about an event.
Method
• Everyone gets to speak for a short, equal time, taking turns.
• The facilitator goes first in order to model the brevity required to make the Go-Round work.
• Times of 30 seconds to two minutes are common depending on the content of the Go-Round
and the number of people participating. Longer times of up to five or ten minutes work well
when small, skilled groups consider broader topics.
Points
Giving a set of headings for a Go-Round allows people to relax, as they know what they
should talk about. Timing the contributions avoids garrulous people taking more than their
fair share of group time. Even in groups of up to 40 people a Go-Round of half a minute each
works well. A minute each allows quite substantial issues to be addressed.
Think & Listen
Uses
Think and Listen is a method for putting aside time to think. This allows participants to try
out interesting ideas - abandoning those with little promise and settling on those with real
potential - safe in the knowledge that their wanderings are private and confidential. Think and
Listen is an important method that, with Go-Rounds, provides the basis for an inclusive and
productive meeting. Despite the simplicity of the method, it presents people with significant
challenges.
Method
Participants work in pairs. For half the time, one person is the thinker and the other the
listener. At half time the roles reverse. The facilitator manages the timing. During their
thinking turn, each person is encouraged to think out loud without necessarily making any
sense to the listener. The thinking turn is for the thinker’s benefit. It is a time for the thinker
to collect and develop their thoughts at their own pace and in their own way. The listener
makes no comments and asks no questions. During the Think and Listen time, participants
address a topic proposed by the facilitator. Common time periods for a Think and Listen are
two to five minutes each (known as 2 by 2 or 5 by 5). Skilled groups can work outside these
boundaries using one minute by one minute Think and Listens to good effect, or, for complex
matters, ten minutes each way. Keep the time fairly short for the first few goes to avoid
difficult empty spaces. What the thinker speaks about and how their thinking develops are
confidential, known only to themselves and the listener. The listener makes no reference to
what has been said by their thinker, either to the thinker or to anyone else, unless the thinker
clearly gives their permission for this to happen.
Points
Think and Listen is often the first new method introduced to a group. It is very different from
normal conversations and discussions, which generally have interruptions and digressions that
people can find off-putting.
becomes more practised. However, the facilitator needs to be ready to reassure people that
any discomfort is temporary and that empty spaces are valuable as opportunities to gather
thoughts (or rest).
Brainstorming
Uses
This technique has become the most commonly used (and misused) technique for promoting
participative, original and creative thinking. Essentially, the purpose of this technique is to
generate a large number of ideas from a group of people in a short time.
Method
The ground rules for conducting an effective brainstorming session are:
• suspend judgement – all ideas are equally important;
• promote freewheeling thought around an issue;
• generate as many ideas as possible rather than focusing on quality, but do not spend time discussing these ideas and do not reject any, even if they seem ridiculous;
• encourage linkages between ideas;
• record the ideas on a list (e.g. flip chart) that all participants can see, as this may stimulate further ideas;
• be aware of the emergence of common trends and potential solutions.
Points
A more specific, structured form of Brainstorming is outlined in the following section on the
Nominal Group Technique (Sec 4.4). This technique illustrates the value of brainstorming in
generating ideas and how it can be adapted to make decision-making and evaluation tasks
easier. Brainstorming can form part of, or be linked with, other facilitation techniques, e.g.
mind maps, agenda setting, and visioning.
Nominal group technique (NGT)
Uses
The Nominal Group Technique (NGT) is a structured procedure for collecting information
about a group’s views on a particular area of interest. It is also known as the ‘Snowball’, and
can be used for participation amongst quite large groups of people. (It is sometimes found to
be more successful than Brainstorming with small groups).
Method
The members of an NGT are asked to consider two questions, e.g. ‘How might [service X] be
strengthened?’ and ‘What are the current strengths of [service X]?’. The procedure then
follows six stages:
1. contemplation of the questions and listing of ideas by each participant during a silent
period.
2. division of the participants into subgroups of 6 to 8 people to collect and discuss their
ideas.
3. voting by each subgroup member on which items were of most relevance.
4. ranking and selection of the top 5 (or 10) responses to each question.
5. discussion of the reduced set of items within the larger group.
6. voting for the issues of most personal importance and ranking them according to the
highest score.
The session should be brought to a close with a summary of the proceedings and an agreement
as to what the next step of the process should be. For example, the findings could be used as
the basis for a questionnaire-based survey in which a large number of service users are asked
to agree or disagree with the items that had emerged most strongly in the NGT.
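
By way of illustration, the short Python sketch below tallies subgroup votes (stages 3 and 4) and lists the top five items; the items and ballots are invented, and this is only one possible way of recording the votes, not a prescribed part of the technique.

from collections import Counter

# Hypothetical items voted for by members of three subgroups
subgroup_ballots = [
    ["better opening hours", "clearer information", "lower fees"],
    ["clearer information", "better opening hours", "more staff"],
    ["clearer information", "lower fees", "more staff"],
]

votes = Counter()
for ballot in subgroup_ballots:
    votes.update(ballot)

# Rank and select the top 5 responses
for rank, (item, count) in enumerate(votes.most_common(5), start=1):
    print("%d. %s (%d votes)" % (rank, item, count))
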
Points
The NGT process results in a short-list of ranked priorities based on consensus. The method
favours those people who want to be involved, but do not like speaking out at large public
meetings. It also allows immediate feedback to the participants. However, because the NGT
does not address people’s experiences or attitudes, it is not appropriate for a detailed, in-depth
exploration of issues.
Pros
The technique is useful for identifying problems, exploring solutions and establishing
priorities - the Nominal Group Technique can be used as a total assessment tool in that it can
be used to identify the problem, generate solutions and implement them. It encourages
everyone to contribute and prevents people from dominating the discussion - this allows
everyone’s opinions to be heard and judged equally. It requires only one skilled facilitator.
The Nominal Group Technique produces an answer with few resources.
Cons
Group members have to make themselves available for the required time - this can prove
difficult but should be attempted. The ideas may be ill informed or impractical - it must be
explained that the process being carried out is not being done so in a hypothetical sense but is
a realistic problem requiring realistic solutions. The Nominal Group Technique is a good
stand-alone technique for simple issues but must be combined with other approaches where
the issue is more complicated or affects people outside the sphere of influence within the
group.
Mind Maps
Uses
Mind mapping is a type of brainstorming technique that is useful for getting thoughts on
paper without worrying about sentence formation. In the context of consultation, a mind map
can often illustrate a theme or assist in problem-solving much more effectively than normal
text.
Mind Maps give visual representations of entire subjects and allow the main points and
linkages to be easily identified. They provide a flexible way of presenting information that
allows for alteration more easily than linear text. Ideally, participants in a mind mapping
exercise will be able to see obstacles and potential solutions, and link them together.
Method
There are a number of points to consider when drawing a mind map:
• start from the centre of the page and work out;
• make the centre a clear and strong visual image that depicts the general theme of the map.
This should have the largest print, with other words correspondingly smaller;
• create sub-centres for sub-themes;
• put keywords on connecting lines;
• use colour to make things stand out and to depict themes and associations;
• put ideas down as they occur, wherever they fit;
• use arrows, icons or other visual aids to show links between different elements;
• continue the process until all of the concepts associated with the central theme have been
recorded.
The completed mind map can be used as the focus for discussion and should help to answer
some important questions. Can we see connections between apparently disparate ideas? Has
a clearer definition of the original theme or problem emerged?
Points
Mind mapping is a skill that requires the mapper to translate sentences into key words.
Participants should practise on a straightforward task whilst they build up their skills.
Creativity should be encouraged in mind mapping. This will make the task enjoyable for the
participants and increase their involvement in the wider consultation exercise.
Skills audit
Uses
A skills audit allows a group (or an individual) to find out what skills and knowledge it
currently has and what skills and knowledge it requires. Without this information, the group
will not be able to identify its weaknesses. With this information, the group will be able to
target its resources more effectively and develop training in areas where improvements are
needed. A skills audit is a useful participative exercise that allows individuals in a group to
analyse their own skills, expertise and talents.
Method
There are three key stages to a skills audit:
1. Note down all the tasks that will have to be carried out, setting these out in as much detail as possible. It may be useful, at this stage, to use a brainstorming session.
2. Select 10 random tasks and ask each group member to score each task from 1 (no good) to 5 (very good) on how effectively they themselves would perform it. They must then analyse what they think would allow them to become very good, e.g. training, resources etc.
3. Draw all the ideas together and for each item record who can do it and to what extent. Assess what would be involved in helping each group member to become more proficient.
The benefits to a group of carrying out a skills audit are improved skills and knowledge,
lower training and development costs (because efforts are more targeted), and greater
effectiveness.
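
A skills audit of this kind lends itself to a simple skills matrix; the Python sketch below is a hypothetical example (the members, tasks and scores are invented) of recording each member's 1-5 self-scores and summarising, for each task, who can do it well and the group average.

# Each member's self-scores (1 = no good, 5 = very good) per task
scores = {
    "Member A": {"chair meetings": 4, "data analysis": 2, "report writing": 5},
    "Member B": {"chair meetings": 2, "data analysis": 5, "report writing": 3},
    "Member C": {"chair meetings": 3, "data analysis": 3, "report writing": 2},
}

tasks = sorted({task for member in scores.values() for task in member})
for task in tasks:
    capable = [name for name, s in scores.items() if s[task] >= 4]
    average = sum(s[task] for s in scores.values()) / len(scores)
    print(task, "- capable:", capable or "nobody", "- group average:", round(average, 1))
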
Points
A skills audit is normally used at an early stage of a group’s work to allocate roles to the most
appropriate group members and to identify training needs. However, it can also be used later
in the process after a group has encountered difficulties and needs to look at restructuring its
operations.
Conflict Resolution
Uses
Conflict resolution techniques can be usefully employed in the following areas: problem-solving; arbitration and mediation; negotiation; consensus building.
People with opposed views on certain issue(s) are identified and brought together in a group
situation. They work at informing and educating each other about their respective concerns
and, ultimately, reach agreement about issues or working out solutions that all sides can
accept.
Method
Conflict resolution involves the following eight steps:
1. identify the problem or conflict;
2. gather information to find out what is already known about the problem;
3. analyse the data to establish the root causes of the conflict;
4. generate potential solutions;
5. select the best solutions;
6. plan for implementation and decide how to go about ensuring that the agreed action takes
place;
7. once implemented, test to see whether the problem/conflict has been resolved;
8. continue to improve on what has been achieved to date.
An independent facilitator, or mediator, is needed to assist in resolving the conflicts.
Points
Conflict resolution helps to build up trust, two-way understanding and team working among
participants with opposed viewpoints. Problems are brought out into the open. Also, the
structured approach helps to alleviate difficulties and tensions. However, conflicts may turn
out to be very complex and a failure of the conflict resolution process may actually worsen
the situation. Some participants may feel uncomfortable engaging those with opposed
viewpoints. Even where a resolution is reached, it may not be acceptable to all participants.
Consensus Building
Uses
Consensus building involves informal, face-to-face interaction between representatives who
have different, but not necessarily opposed, interests. It aims for ‘mutual gain’ solutions,
rather than win-lose or lowest common denominator outcomes. Consensus building can
generate solutions that are fairer, more efficient, better informed, and more stable than those
arrived at by conventional means. It can be applied to a wide range of issues and involve all
kinds of stakeholding interests.
Method
During the consensus building process, each person is given the opportunity to express his or
her views to the group. A skilled facilitator is required and the following stages should be
followed:
1. list ideas/suggestions. Collect ideas from each person in the group;
2. record ideas/suggestions. The facilitator records these on a flip chart. No judgements are
made at this stage;
3. check understanding. Each idea is clarified and discussed with the group;
4. vote on ideas/suggestions. There are two ways of scoring: (a) if there are twenty ideas, then
score twenty for the most valued point and one for the least important; or, (b) each person can
award a total of twenty points in all – thus twenty points could go to one idea and none at all
to the rest! (a simple scoring sketch follows this list);
5. reach a consensus. Each person’s scores are recorded on the flip chart. This will indicate
the majority views of the group and is a good starting point for discussion. The discussion
should be open and honest, as this will allow effective judgements to be made. It is important
to identify disagreements and assess what is important
to each interest. This is necessary in agreeing a decision that meets differing points of view.
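
The point-allocation scoring in stage 4, option (b), can be totalled very simply; the Python sketch below is an invented illustration (the participants, ideas and allocations are assumptions) of summing each person's twenty points and ranking the ideas as a starting point for discussion.

# Each participant distributes exactly 20 points across the recorded ideas
allocations = {
    "participant 1": {"cycle lanes": 12, "more buses": 8},
    "participant 2": {"more buses": 20},
    "participant 3": {"cycle lanes": 5, "car sharing": 15},
}

totals = {}
for points in allocations.values():
    assert sum(points.values()) == 20, "each person awards exactly 20 points"
    for idea, score in points.items():
        totals[idea] = totals.get(idea, 0) + score

# Rank ideas by total score, highest first
for idea, score in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(idea, score)
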
Points
Bringing stakeholders together to build consensus means that:
• possible solutions are identified by the stakeholders;
• disagreements are identified;
• the majority views are identified;
• the scoring system allows areas of concern to be prioritised for discussion;
• two-way understanding, trust and team working are promoted;
• the final decision is a group decision, arrived at in a structured way.
The potential drawbacks to the consensus building process are that (a) the problems may be
more complex than anticipated, (b) additional problems may be identified, and (c) not all
participants will be happy with the outcome.
Vision Support Group
Uses
Participation in a Vision Support Group gives people the opportunity to think about visions
and long term goals. Most people have little opportunity to think in such a constructive way
in their normal lives and this method gives them practice at these strategic activities. Vision
Support Group makes use of the Think and Listen and Go-Round facilitation methods.
Method
Each person in a small group of three to six people has the attention of the other group
members (using no interruption rules) whilst they take a turn to answer the following four
questions or adapted versions of these questions.
1. What is going well for me as a ……..?
2. What is difficult for me as a ……..?
3. What are my long term goals and visions as a ……..?
4. What are my next achievable steps towards these visions and goals?
The "as a …….." section could be, for example, ‘as a male resident of Keith who is interested
in my local environment.’ The same phrase would be inserted in questions 2 and 3. Once
these questions have been answered, the visions and long term goals can be addressed.
Talking about these can help people to prepare the ground for making changes.
When dealing with "What are my next achievable steps?", participants are seeking practical
steps they can take which will move them, as individuals, towards realising their visions and
goals.
Points
People may have some initial difficulties in expressing their visions and long term goals.
Also, it can be a challenge to allocate time for Vision Support Groups in a busy,
achievement-orientated culture. People may need some convincing that spending time on this
is valuable. Working with this method means that no one is required to justify their visions
nor are they required to know how they will bring about their vision beyond identifying the
next step.
SWOT analysis
Uses
The SWOT analysis is an effective method for identifying the Strengths and Weaknesses of an
entity (e.g. organisation, service or individual) and the Opportunities and Threats faced by
that entity. It is a well-known participatory tool that is often used in the preparatory stages of a
consultation, partly as a stocktaking exercise.
Method
A SWOT analysis involves categorising issues according to:
• strengths – positive aspects that are internal to the entity, which are seen as the key current
advantages;
• weaknesses – negative aspects that are internal to the entity, which are seen as the key
current weaknesses and areas for improvement;
• opportunities – positive aspects that are external to the entity, which are thought to provide
an advantage;
• threats – negative aspects that are external to the entity, which are thought to present a
danger.
These external factors may be economic, political, legislative or social.
A recorder notes the discussion of the group on a flip chart for all to see. The results will then
be discussed at the next stage of the consultation process.
Points
A SWOT analysis is a systematic process that ensures that participants take account of both
internal and external factors. It is quick, easy to administer and provides immediate feedback.
Once completed, the results of a SWOT can be used to form the basis of an Action Plan – the
overall objective being to maximise the Strengths, minimise the Weaknesses, exploit the
Opportunities and deflect the Threats.
SECTION C. Guidance for the construction and evaluation of
participatory protocols
This section is intended to provide: principles of good practice for participatory protocols
(3.1); guidance for the construction of participatory protocols in SEAMLESS (3.2); and
guidance for the evaluation of the protocols used (3.3).
8 Principles of Good Practice for participatory protocols
The principles that the SEAMLESS-IF Consultation Toolkit has adopted to ensure that our
consultation process is of the highest quality are set out below. For each Principle, the Toolkit
provides you with everything you need to know to put it into practice.
Consultation Should be Needed. Before any new consultation begins, a thorough search will
be made to find out whether relevant questions have already been asked of the public. We
will avoid unnecessary repetition and duplication. Consultation will aim to seek informed
public opinion and not just instant reaction.
Purpose should be clear. Any consultation will contain a clear statement describing why it is
being carried out and how the results will be used. It will be clear to consultees what can be
changed by responding to the consultation – and what cannot. Consultation will usually be
related to a decision that the SEAMLESS-IF user is intending to make, and that can be
influenced by the result of that consultation. This principle will be intelligently applied, as
there may be circumstances in which consultation not linked to a decision is appropriate.
Consultation should be inclusive. Consultation should aim to seek a representative cross-section of views. It is widely documented that some sections of the community are harder to
engage in consultation than others. Therefore, appropriate action should be taken to ensure
that the views of these individuals and groups are not excluded or overlooked. The toolkit
provides practical suggestions and contacts that enable this to be achieved.
Consultation should be well planned and timely. Consultees will be given adequate time to
prepare their response. It is recognised that the length of time will vary depending on the time
of year and the level of response that is being sought. Sufficient time will be allowed for the
results of consultation to be collated, analysed and considered, so that the results of
consultation feed directly into the decision making process.
Methods should be appropriate and well managed. The SEAMLESS-IF Consultation Toolkit
will provide a wide range of participatory methods, as well as guidance on their appropriate
use, reflecting the strengths and weaknesses of each method. Consultation will be managed
with a clear understanding of the particular skills, knowledge and resources it requires; the
toolkit will assist with this.
Results should be acknowledged and fully considered. The full range of views expressed
during consultation will be acknowledged and attention drawn to areas of agreement and
disagreement. The results of public consultation will be weighed carefully together with other
evidence and considerations before decisions are made.
Accessible feedback should be given. Accessible feedback will be provided both on the
results of consultation and on how they have been used, in order to encourage greater public
participation in the future.
Effectiveness should be evaluated. The effectiveness of major public consultation will be
evaluated and the results shared to encourage broader lessons to be learned. Evaluation will
consider not only the number of responses received but also the quality, cost and timeliness
of the consultation and the overall usefulness of the results in helping to inform decisions.
9 Guidance for the construction of participatory protocols
The following section provides support for the construction of a participatory protocol for a
particular problem situation, according to the questions Why? When? Who? and How?, to
ensure the appropriateness of the participatory process to the problem. It then gives guidance
on analysing the results of the process performed and on providing feedback. It includes six
chapters.
9.1 Defining what you are consulting about
Be clear about what you are trying to achieve
Be clear about what you want to achieve. Carefully think through the purpose of your
exercise. Is it a “one-off” consultation? Are you trying to start an ongoing dialogue? Are you
asking people to be involved in decision-making? What do you want to achieve?
SEAMLESS stresses that participatory processes will serve three specific purposes/goals,
with respect to the actors involved and the SEAMLESS-IF lifecycle:
• Specification of the inputs (policy scenarios), outputs (impact indicators) and features of
the man-machine interfaces of SEAMLESS-IF, which will be achieved by getting and
putting together the views and expectations of the multiple potential users of
SEAMLESS-IF (ref. WP1 and WP6);
• Assessment of the results provided by SEAMLESS-IF, which will be achieved by getting
and mapping the diversity of reactions of the users and stakeholders in the face of the
results of the simulation (ref. WP6);
• Information and Training of different categories of users of SEAMLESS-IF, which will
be achieved by developing specific material.
Do you really need to carry out this exercise?
The SEAMLESS-IF – Consultation Finder will be designed to help us all to share information about:
• What consultation you have planned over the next year and ways we may be able to join up with each other – avoiding duplication, cutting costs and preventing our residents suffering consultation fatigue!
• What we already know – you will find results from consultations that have already closed on the database. Search by Topic/Target Group to find out what we already know – so you don’t ask the same question again.
• What happened as a result of what we found out – consultation must make a difference to what we do. This tool allows us to demonstrate this by logging not just findings but the outcomes and decisions that followed.
Over time the SEAMLESS-IF – Consultation Finder will store consultation protocols and
their results. These may provide useful benchmarking data for comparison with your results,
as well as information that you need.
Ok – use the toolkit!
You should 'custom build' your consultation process so that it specifically suits what you are
trying to achieve. Working through the steps of this Toolkit will enable you to develop your
Consultation Project Plan.
You need to think NOW about the timeframe for this exercise – when will a decision be made
about the issue you are involved in? What budget do you have to carry out this exercise? How
much staff time will be needed? Are there staff training needs that you need to consider now?
The Consultation Project Plan will help you to work through the whole process. Stage Four –
When to Consult provides you with more information and help tools.
Manage expectations
Members, officers and the public will all have different expectations about the outcomes of
any exercise. To be successful you must think about these before you start.
Clearly describe why this consultation exercise is being carried out and how the results will
be used. Be explicit about what’s on offer, what can change and what the options are. Explain
any constraints on what can be done at the beginning of the process.
Use the Template 2.1 – Setting Out Your Objectives below to help you set this all out at the
start of your consultation exercise.
This template sets out the main points that you will need to cover in order to let people know
what you want to achieve. You will, of course, need to adapt the wording to suit the method
of consultation that you are using. For example if you are doing a survey, or a written
consultation exercise you could adapt the format below to be included on a front sheet or
covering letter. If you are running a focus group or face to face interview you will need to
cover the same points – but present the information differently according to the audience.
• Thank you for participating in (Title of Consultation Exercise). This consultation exercise will run between (state start and end dates). The final date for responses is (state when).
• Your responses are important to us. We would like to know (state what you hope to achieve).
• The reason for asking your views is (what is on offer – what is the decision to be influenced). What you tell us can influence (state what can be changed / what the options are).
• Some of this (policy, service, document – state) has already been decided (state what – if appropriate). We are asking for your opinions only on the areas that can still be influenced. (State here if you have set out specific questions to be answered.)
• We will let you know what we found out through this exercise by (state how you will provide feedback).
• We will take account of your views when the decisions about this (policy, service, document – state) are being made. This will be (state when). The final decision rests with (state who).
• It is important that you know that (state any further constraints).
• If you need any further information about this (state method) please contact (state who, supply address, telephone and email contact).
• For written responses from organisations or individuals to a consultation document see footnote below on confidentiality.
• This is a genuine exercise to find out your (opinions, views, concerns – state). Thank you for taking part.
Plan your exercise from a customer’s point of view
Once you are clear about your objectives in undertaking the consultation, try to think from the customer's point of view. What might they want to tell you about the issue?
9.2 Deciding who to consult (stakeholder analysis)
Identifying your stakeholders (stakeholder analysis)
Stakeholder Analysis is a vital tool for identifying those people, groups and organisations who have significant and legitimate interests in a specific issue. A clear understanding of the potential roles and contributions of the many different stakeholders is a fundamental prerequisite for a successful participatory governance process, and stakeholder analysis is a basic tool for achieving this understanding. To ensure a balanced representation, the analysis should examine and identify stakeholders across a number of different dimensions. For example, the analysis should separately identify relevant groups and interests within the public sector, within the private sector, and within social and community sectors. In addition,
the analysis can seek out potential stakeholders to ensure proper representation in relation to
gender, ethnicity, poverty, or other locally relevant criterion. Cutting across these categories,
the analysis can also look at stakeholders in terms of their information, expertise and
resources applicable to the issue. However, stakeholder analysis by itself only identifies
potentially relevant stakeholders - it does not ensure that they will become active and
meaningful participants; other measures to generate interest and sustain commitment will be
necessary as well.
Purpose: "Stakeholder analysis ensures the inclusion26 of all stakeholders and maximisation of their roles and contributions".
1. Ensure inclusion of all relevant stakeholders
Experience has shown that inclusion of the full range of stakeholders is not only an essential
pre-condition for successful participatory decision-making but also vital for promoting equity
and social justice in governance. For example, when decisions are made, priorities set, and
actions taken without involving those relevant stakeholders, the result is usually misguided
strategies and inappropriate action plans which are badly (if at all) implemented and which
have negative effects on the beneficiaries. These approaches, which fail to properly involve
stakeholders, have been widely proven to be unsustainable. This Stakeholder Analysis Tool
therefore encourages a far-reaching review of all potential stakeholder groups, including
special attention to marginalised and excluded social groups such as the poor, women,
elderly, youth, disabled, or others. This allows identification of representatives of these
groups, so that they may be included in the decision making framework.
2. Maximise the role and contribution of each stakeholder
It is well recognised that broad-based stakeholder involvement and commitment is crucial to successful strategy and action plan implementation and therefore to sustainable development. With a multi-stakeholder approach to implementation, a wider variety of implementation instruments can be utilised. The stakeholder analysis facilitates mapping of potential stakeholder roles and inputs and access to implementation instruments. This will indicate how best to maximise the constructive potential of each stakeholder whilst also revealing bottlenecks or obstacles that could obstruct realisation of their potential contributions. For example, an analysis could identify a particular stakeholder's lack of information and skills for dialogue and negotiation, factors which undermine the contribution or influence of an otherwise importantly affected group of stakeholders.
Principles
"Stakeholder Analysis ensures the inclusion of relevant27 groups while incorporating gender
sensitivity28"
3. Five sequential stages of a stakeholder analysis
- Specifying issue(s) to be addressed. Stakeholders are defined and identified in relation to a specific issue: people and groups only have a concrete "stake" in a specific issue or topic.
26 Inclusiveness. Ensure inclusion of the full range of different stakeholders, including marginalised and vulnerable groups.
27 Relevance. Includes only relevant stakeholders – those who have a significant stake in the process (i.e., not everyone is included).
28 Gender Sensitivity. Both women and men should have equal access within the participatory decision-making process.
Hence, the stakeholder identification process operates with respect to a particular specified issue.
- Stakeholder Identification. With respect to the specified issue, a "long list" of possible stakeholders, as comprehensive as feasible, should be prepared, guided by the general categories of stakeholder groups (e.g., public, private, and community/popular, with further sub-categories for each, gender, etc.), also identifying those which:
• are affected by, or significantly affect, the issue;
• have information, knowledge and expertise about the issue; and
• control or influence implementation instruments relevant to the issue.
In the case of firms' stakeholders, Mitchell, Agle and Wood29 have elaborated a theory that allows stakeholders to be reliably separated from non-stakeholders. They suggest that classes of stakeholders can be identified by their possession of one, two or three of the following attributes: 1) the power to influence the firm; 2) the legitimacy of the relationship with the firm; 3) the urgency of the claim on the firm. These features have been applied in a project to identify stakeholders (e.g. for deciding public subsidies for professional sport facilities by Friedman and Mason30). Here are the definitions of the features given by these authors: 1) A party to a relationship has power, to the extent it has or can gain access to coercive, utilitarian, or normative means, to impose its will in the relationship. 2) Legitimacy is a generalized perception or assumption that the actions of an entity are desirable, proper or appropriate within some socially constructed system of norms, values, beliefs and definitions. 3) Urgency is defined as calling for immediate attention and exists only when two conditions are met: when a relationship or claim is of a time-sensitive nature and when that relationship or claim is important or critical for the stakeholder.
Such suggestions may be of interest for helping stakeholder identification.
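As a purely illustrative sketch (not part of the SEAMLESS framework or of Mitchell, Agle and Wood's own formulation), the salience logic described above can be expressed in a few lines of Python; the stakeholder names and attribute values used here are hypothetical.

```python
# Illustrative sketch of the salience idea: a party is treated as a stakeholder
# if it holds at least one of the three attributes (power, legitimacy, urgency),
# and its salience grows with the number of attributes held (1 to 3).
# Stakeholder names and attribute values below are hypothetical examples.

def salience(power: bool, legitimacy: bool, urgency: bool) -> int:
    """Return the number of attributes held (0 = not a stakeholder)."""
    return sum([power, legitimacy, urgency])

candidates = {
    "regional farmers' association": (True, True, False),
    "local residents' group":        (False, True, True),
    "passing commentator":           (False, False, False),
}

for name, attributes in candidates.items():
    score = salience(*attributes)
    status = "stakeholder" if score > 0 else "non-stakeholder"
    print(f"{name}: {status} (salience = {score}/3)")
```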
- Stakeholder Mapping. The "long list" of stakeholders can then be analysed by different criteria or attributes. This will help determine clusters of stakeholders that may exhibit different relevance of their interests (overall aims) for the issues at stake, and various levels of capability to invest in addressing the issues, which depends on their available workforce and means (information, expertise, collective decision). Knowledge of such differences will allow systematic exploitation of positive attributes, identify areas where capacity building is necessary for effective stakeholder participation, and highlight possible "gaps" in the array of stakeholders. One of the several forms of stakeholder mapping is by degree of stake and degree of influence, as shown in the matrix below:
29 Mitchell R. K., Agle B. R., Wood D. J., 1997, Toward a theory of stakeholder identification and salience: defining the principle of who and what really counts, The Academy of Management Review, vol. 22, n° 4, pp. 853-886.
30 Friedman M.T., Mason D.S., 2004, A stakeholder approach to understanding economic development decision making: public subsidies for professional sport facilities, Economic Development Quarterly, vol. 18, n° 3, pp. 236-254.
Stakeholder Analysis for Participation (Influence-Interest Matrix)
• Low stake / low influence: least priority stakeholder group
• Low stake / high influence: useful for decision and opinion formulation, brokering
• High stake / low influence: important stakeholder group, perhaps in need of empowerment
• High stake / high influence: most critical stakeholder group
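Once stakeholders have been rated on stake and influence, the matrix can be applied quite mechanically. The short Python sketch below is only an illustration, using hypothetical stakeholders and ratings, of how each stakeholder could be assigned to one of the four cells.

```python
# Minimal sketch (hypothetical data) of the influence/stake matrix above:
# each stakeholder is rated on stake and influence and placed in one of
# the four cells of the matrix.

def quadrant(stake: str, influence: str) -> str:
    """Map (stake, influence) ratings ('low'/'high') to a matrix cell."""
    cells = {
        ("low", "low"):   "least priority stakeholder group",
        ("low", "high"):  "useful for decision and opinion formulation, brokering",
        ("high", "low"):  "important stakeholder group, perhaps in need of empowerment",
        ("high", "high"): "most critical stakeholder group",
    }
    return cells[(stake, influence)]

stakeholders = {
    "intended beneficiaries":  ("high", "low"),
    "government policymakers": ("high", "high"),
    "general public":          ("low", "low"),
}

for name, (stake, influence) in stakeholders.items():
    print(f"{name}: {quadrant(stake, influence)}")
```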
Participatory Stakeholder Mapping
To achieve a shared view of stakeholders, their relations to the issue and their relative importance, the following group technique can be applied:
1. The participants put the name of each stakeholder on white, circular cards of approx. 10 cm in diameter, and put them on a big table, or the floor or a wall (with removable adhesive).
2. When no more suggestions for stakeholders are presented, the main interests of each stakeholder are identified in relation to the focus questions.
3. The cards are organized in clusters of related interests. When agreement has been reached, the white cards are replaced with coloured cards, one colour for each cluster. The name of the stakeholder is transferred to the coloured card, and the main interests of the stakeholder are written on the card below the name.
4. The coloured cards are organized in starlike fashion along a line for each cluster, where the centre of the star is the project or the initial focus question. Using group judgments, the cards are placed at a distance from the centre corresponding to the importance of the stakeholder for the project. The cards must be fixed with removable adhesive, allowing later modifications of the visual presentation.
Stakeholders can best be compared by plotting their commitment to the status quo against the influence they wield (figure below). Stakeholders that have considerable influence and are
determined to prevent change (quadrant A in the figure) are the greatest challenge for many
projects. Groups that want change, whether or not they have much influence, are possible
counterbalances. The project needs to find ways to increase the influence of groups that favor
change but lack influence and to mediate between influential groups that favor change and
groups that oppose it.
- Verify analysis and assess stakeholders' availability and commitment. Review the initial analysis, perhaps utilising additional informants and information sources, to ensure that no key and relevant stakeholders are omitted. Also, assess the identified stakeholders' availability and degree of commitment to meaningful participation in the process.
- Devise strategies for mobilising and sustaining effective participation of stakeholders. Such
strategies should be tailored to the different groups of stakeholders as analysed and classified
above. For example, empowerment strategies could be applied to those stakeholders with
high stake but little power or influence.
The following questions help to understand the stakes within state institutions, community
organizations and interest groups:
- Which stakeholders are contracted to do the institution's work, either as employees,
laborers, managers, quality control agents, or contractors?
- Which stakeholders control and distribute the goods, services and works that the institution
provides?
- Which stakeholders are accountable for failure to deliver, and how are they penalized?
- Who disseminates information, measures performance, monitors compliance and defines
success?
The matrix below is useful to summarize and compare stakeholder categories. For each category of relevant stakeholders, record:
• Relation to issues at stake: a. affected by them; b. affect them
• Capabilities: a. have information; b. have expertise; c. influence implementation
• Interests: a. commitment to status quo vs. b. openness to change
• Influence: H = High, M = Medium, L = Low

Stakeholder categories:
• Government policymakers
• Implementing agency staffs
• Intended beneficiaries
• Adversely affected persons
• Organized interest groups (associations, unions)
• Civil society (NGOs, CBOs…)
• Other external stakeholders
Advantages / Main outputs
- Potential winners and losers (those who will be positively or negatively affected by the project) are identified.
- Participants' commitment to the goals of the project (their ownership of the project) is assessed. Ownership assessment determines stakeholders' willingness to stick with the
project's goals. A low level of ownership means that the stakeholder cannot be counted on,
and that the stakeholder's weak commitment may affect other stakeholders. There is no
simple test of ownership. Assessing it requires knowing the interests of the particular
stakeholders and the pressures under which they work. It is also important to recognize that
many people will readily agree to something if they see it as necessary to get the proposed
project benefits, but may be less enthusiastic about implementing the agreed-upon actions
once the project has been formally approved. A realistic assessment of ownership may have
to go beyond words and promises to assess actions and other concrete evidence.
- The likelihood of the stakeholders assisting or obstructing the project's development
objectives is evaluated.
- Monitoring stakeholder involvement during implementation can be straightforward if the
social analysis defines indicators that involve decisions and outcomes, such as agreement on
eligibility requirements, changes in policies and procedures, added institutional
responsibilities, new contracting mechanisms and decisions to target certain social groups.
Depending on the actors chosen, what sort of information are you trying to obtain?
Think about what range of views you want to hear. The table below will help you think about the sorts of responses you are likely to receive from different groups of stakeholders.
Who is being consulted? / What sort of views/comments can you expect?
• Individual users: Personal view of service as the individual has experienced it. Snapshot of service.
• User groups/panels/meetings: 'Non-expert' view from users of SEAMLESS-IF. Can help you see a different perspective.
• Representative groups: Considered thoughts and proposals based on good knowledge of the service you provide and what users of SEAMLESS-IF want. Sometimes views may be stronger than those of the average user.
• General public: General perception of service. Can be useful indicators of problems and preferences with service provision.
• Sounding boards (non-users): Relatively impartial views on proposals – useful for testing out proposals and plans.
• Staff: Experience of a range of customers' views, combined with knowledge about the practical aspects of providing the service.
Set targets for the level of response you want from your stakeholders
Think about what sort of response you need for your exercise. Do you want a representative
sample? Do you want an in-depth informed opinion from a smaller group of people? Do you
want personal experiences? Do you want to give people opportunities to change their views
through discussion and debate? The Table 3.1 What sort of Views Can You Expect From
Different Stakeholders? will help you think about the sorts of responses you are likely to get
from different groups of stakeholders. This will influence your choice of methods.
Set specific targets for the levels of response you want from your different stakeholders.
Information about which consultation methods worked for which groups will be very useful
for the future. At the end you want to be able to measure whether:
• you got views from those you wanted
• you were successful in consulting minority, disadvantaged or under-represented
groups
• different groups responded to different methods
• you gave feedback to those consulted
• people consulted felt that the consultation was worthwhile.
Points you should be able to measure at the end
At the end, you want to be able to measure whether:
• you got views from those you wanted
• you were successful in consulting minority, disadvantaged or under-represented groups
• you gave feedback to those consulted
• the people consulted felt that the consultation was worthwhile.
9.3 Deciding when to consult
At what stage in the process do I want to involve others?
There is always a judgement to be made about when you should discuss your initial thoughts, proposals and ideas with others to seek their views. We are all aware of the difficulties of getting this right – a 'blank sheet' approach may be unrealistic and may leave others unsure what you are asking for, while more fully worked-up proposals may suggest to consultees that there is little left to influence.
If you have worked through your objectives in Stage One – What Are You Consulting About,
then you should be in a good position to decide on your approach. You will have identified
what you want to consult about and identified any constraints. This is important, as it will
help you decide whether a 'blank sheet' is really being offered or whether some choices have
already been ruled out.
When you involve others is ultimately your judgement, but consider a number of factors.
Where am I on the SEAMLESS-IF toolkit of participation? Think carefully – are you giving people information about what will happen? If so, this will usually be at the end of a process, when a decision has been reached that is being communicated. Do you want to involve others early on (e.g. a service user group or network) to help you to formulate ideas and proposals? This will usually be at the beginning of a process. This initial involvement may help you frame proposals that you wish to consult a wider audience about. At this stage you will be setting out known constraints, and giving options and choices about what can still be influenced. This is likely to be in the middle of the process.
What are your objectives? What are you trying to achieve? What sort of views are you
looking for?
Are there existing Networks and Forums where you can discuss initial ideas and get
input from others before putting together options and proposals? Could you use these
forums after a wider consultation to refine and finalise your options in the light of the
feedback that you have received?
Consultation always takes longer than you think!
When planning your process you need to build in time for each stage of the process. Checklist
4.A – Preparing Your Timetable below will help you with this.
CHECKLIST 4.A – PREPARING YOUR TIMETABLE
Each task is listed with an indicative time in square brackets.

• When will the decision be taken? [Insert date]
• Draw up my report on this issue – including consultation outcomes and recommendations. [1 day]
• Draw up my report of the consultation outcomes – prepare different formats to enable feedback to stakeholders. [2 days]
• Collate, analyse and consider the consultation outcomes. [1-4 weeks]
• Run the consultation – allow sufficient time for all your key stakeholders to respond. Be aware of times of year when the response may be affected, e.g. religious festivals, school holidays. [Ideally 12 weeks for written consultation documents]
• If you are using postal surveys build in sufficient time for reminders to be sent out 1–2 weeks before closing. [Allow 4 weeks to respond to a postal survey – allow a further week for late returns]
• Build in time to reach "hard to reach" groups. [2 weeks]
• Consider if you need to "pilot" your consultation – if so build in time for this and any modifications that you may need to make. [2-4 weeks]
• Advertise and publicise the consultation – allow sufficient time for distribution. Consider time needed for printing, enveloping, post etc. [2-4 weeks]
• If you require your data to be processed either in-house or through a data processing company build this in to your timetable now.
• Produce consultation material – do you need input from the design/reprographics/communications teams? Do you need materials produced in community languages, converted to Braille, or produced in different formats? If so build this in.
• Are you involving other partners/Directorates/agencies in this exercise? Build in time for them to contribute. [Allow time for partners to participate in your exercise as appropriate]
• Are you using an external agency to run your consultation? Build in time to prepare a brief, tender, and interview and select your consultants. [Allow 6–8 weeks]
• Depending on your method of consultation (e.g. focus groups, public meetings, etc.) you may need to let people know the date and broad outline of your event or recruit people NOW. [Let people know the date in advance]
• Does management need to approve your consultation plan? [Build in time to get necessary permission]
• Identify resources – budget and staff time to carry out this exercise. Build this in to work programmes. Do staff need training to be involved in running this exercise? If so schedule this in. [Take a day to plan your consultation exercise thoroughly – complete the Project Plan]
  - Decide on methods of consultation.
  - Decide on who you will consult.
  - Decide on what you will consult about.
  - Build in time at the end of your exercise to feed back results after the decision has been taken – do you need to let the Communications team know?
  - Build in time to evaluate your exercise.
• Do you need a new exercise? Register your consultation on the SEAMLESS-IF – Consultation Finder database.
• Clarify why you are consulting – search the SEAMLESS-IF – Consultation Finder database: can you link up with another exercise with the same time frame?
9.4 Defining how you will carry out your consultation (method)
Introduction
By now you should have considered the timeframe for your exercise, clarified your
objectives, identified your key stakeholders and thought about what to ask them. You will
now be able to decide how to go about it – you need to think about the methods you will use,
the process you need to go through and the practicalities of running a consultation event
(protocol).
What methods will you use?
This chapter provides preliminary guidelines for choosing methods to conduct participatory
exercises. These guidelines are based on the types of objectives pursued, the types of issues to
be addressed, on resource considerations, the strengths/weaknesses of the methods and on
survey of past practical experience.
The objective pursued
The objectives are the reasons for which the participatory exercise is undertaken, i.e., to build
scenarios. However, two ideologies can guide this building process: (i) scenarios are the fruit
of a search for consensus between participants, or (ii) scenarios are built after the participatory process, which looks only at collating a wide diversity of opinions to obtain highly contrasted scenarios. For example, if the objective is to build scenarios based on consensus, the Policy Delphi method will be preferred to the Focus Group, which would allow only the expression of the diversity of opinions. Processes can then be more or less interactive and more or less
consensus-oriented. Participatory methods do not necessarily lead to consensus on the
scenario output. It is often more satisfying to work towards structuring a diversity of views
than towards unanimity with regard to goals and strategies.
Types of participants and their role according to the issue addressed
The choice of the participants largely depends on the issue to be addressed, on the role they
are supposed to have during the participatory scenario process and on the objective pursued.
If the objective is to reach a consensus, it is not desirable to select persons who are unwilling
to change their opinions. This is especially the case for some stakeholders already engaged in
a conflictual relationship. If the objective is to collect a broad diversity of information, the
presence of participants viewed as “leaders” can intimidate others, which is counter to the
objective, since other people are reluctant to express their opinions. Participant selection also
depends on the nature of the issue to be addressed. Slocum (2003) distinguishes four kinds of issues and four types of participants. The level of initial knowledge needed to participate in the scenario process determines the participants' profile:
The first level is (general) knowledge. Here, general knowledge of the system to be studied is
needed. It mainly concerns the issue of access to information about actors involved in the
system and of identifying key forces for scenario development.
The second level is labelled maturity. Participants have already developed opinions about the
problem at stake. These views can be helpful in defining the present situation (the reference scenario, for example) and pathways to future scenarios.
The third level of knowledge is complexity. In instances where the subject at stake is highly
complex, extensive technical knowledge is required. The participation of experts and
scientists is needed to design a simplified representation of the system (i.e., models) to deal
with the system’s future evolution.
The fourth and final level is controversy. Highly controversial issues can lead to a polarised
discussion between participants. Consensus scenarios become difficult to achieve. However,
controversial knowledge can turn out to improve scenario diversity.
Stakeholders
When a participatory process is designed, the identification and selection of stakeholders is a crucial and time-consuming activity. If the project aims at exploring the various aspects of a complex, unstructured issue, then participants should constitute a heterogeneous group
representing the whole range of (potentially) conflicting views (van de Kerkhof, 2001). The
stakeholders’ function is to build a scenario for the future by synthesis and to assess the
knowledge accumulated by participants in several relevant fields in the light of complex
practical management problems. In addition to enhancing the democratic process, the
involvement of stakeholders expedites action during the decision process following the
exercise. However, participation can lead to unfair influence by those better aware of how to
manipulate the process. Another social risk of participation is to create a potential “we/they”
polarity of those who did or did not participate in the exercise.
Experts, decision-makers and scientists
Participatory exercises are usually restricted to a consultation among experts and scientists.
However, participatory exercises may also bring together experts/scientists, decision-makers and
stakeholders. Experts and scientists may contribute by providing quantitative knowledge and
models. Stakeholders may contribute their practical knowledge in order to balance unrealistic
assumptions made in the conceptual models. Furthermore, the expert/stakeholder dialogue
may operate as an “extended peer review”, with critical analyses. The interaction between
experts, decision-makers and stakeholders is necessary to ensure the relevance of the exercise
to the decision-makers in strategy planning and to help the scientists in understanding
decision-making processes and how these affect future developments.
Facilitators
Good facilitation of a participatory process requires that the facilitator maintain a clear
distinction between the issues of the process and the issues of the content. These issues must
also be kept distinct in planning such a process. Confusing these makes the process less
efficient and the content less exhaustive.
Time resources
The participatory exercise is conducted over one or more periods, each consisting of three
main phases: (a) preparation, (b) implementation, and (c) evaluation and dissemination of the
outputs. The time frame depends not only on the method selected, but also on many other
factors such as the scale to be addressed, the objective (consensus or not), the number and
duration of workshops, etc. The comparative chart (Table 2) provides estimates of the time
required for preparation, implementation and the analysis of results (i.e., final scenarios), as
well as the total time. Roughly, the total time required for a participatory exercise may vary
from three months (e.g., a local exercise) to three years (global exercise). This can be cut
down by reducing the interval between feedback loops of the participants involved in the
planning process (i.e., reducing the interval between workshops). From the point of view of
policy issues, it is also important to address the exercise in a timely fashion. A participatory
pexercise output is not likely to have a strong impact upon policy decision if conducted just
after legislation has been passed on the issue. Conversely, an effective contribution can be
made when legislation concerning the issue is scheduled to be passed in the near future.
Financial resources
In designing any participatory method, cost considerations should be taken into account.
The costs of a participatory exercise can vary according to the scope of the exercise and the
method selected. Highly elaborate methods such as participatory modelling require a larger
budget (e.g., to cover the cost of data collection). Therefore the implementation of these
methods can vary greatly in cost and depend upon a number of other factors. These include
the number of participants, their geographic distribution (travel/accommodation costs), the
organisation of events and workshops (venue-related costs), whether or not the participants
will require financial compensation for their contribution, etc.
Advantages and Disadvantages
Each of the methods reviewed has strengths and weaknesses. Some of these are summarised in the tables below. Method selection is a matter of concern for the initiators of studies involving stakeholders. Policy Delphi, Charrette and SYNCON methods have major advantages if a consensus is sought. They are, however, time- and cost-consuming. The participatory modelling method is also expensive and may take a long time to implement because of the models required. Its main advantage is that it yields quantitative and more objective scenarios. Focus group and policy exercise methods are now widely used for various purposes, but they are primarily designed to "inform" about a complex situation and/or system. Scenario analysis and envisioning workshop methods are very similar in their process design. In the former, scenarios are defined and developed by the stakeholders themselves. In the latter, scenarios are predefined by experts and scientists prior to discussion by stakeholders. Both are qualitative analyses and have the notable advantage of being able to be combined with quantitative analysis.
There are a number of checklists / tools in this section that will help you to select a relevant
method to conduct your consultation process. To make it as easy for you as we can we have
organised this information in different ways.
I. METHODS ORGANISED ACCORDING TO THEIR GOAL AND PARTICIPANTS
INVOLVEMENT
Checklist 5.A below offers you a 'quick selector' guide for the selection of methods to
conduct a participatory process, based around our model - The SEAMLESS-IF Consultation
Toolkit of Participation.
CHECKLIST 5.A – QUICK SELECTOR GUIDE
The guide maps methods along two dimensions: the degree of involvement of participants (from consultation to involvement) and the objective of the process (mapping, convergence or democratisation).

Mapping: POLICY EXERCISES (*), USERS PANELS, SCENARIO WORKSHOPS (*), USERS FORUMS, ENVISIONING WORKSHOP, PLANNING FOR REAL, POLICY DELPHI, COMMUNITY APPRAISAL, FOCUS GROUPS (1; 5), WORLD CAFÉ, Policy Conference, Open/Public Meetings, MYSTERY SHOPPING, PROFILING, COMMUNITY VISIONING, OPEN SPACE EVENT, WEB FORUMS

Convergence: CONSENSUS CONFERENCES (*) (2; 3), PARTICIPATORY MODELLING, CHARRETTE, CONVENTIONAL DELPHI, FUTURE SEARCH CONFERENCE, Expert Panels, CITIZENS' JURIES (*) (3; 4), PLANNING CELLS

Democratisation: PARTICIPATORY PLANNING (*), PRA, PAM&E (*)

(*) means that the method includes high co-learning capabilities.
(x) means that the method includes high deliberative capabilities for: field definition and scoping (1), problem framing (2), valuation (3), choice (4), monitoring and review (5).
II. METHODS ORGANISED ACCORDING TO KEY QUESTIONS
Ask yourself the following key questions below to refine your selection:
CHECKLIST 5.B – METHODS ACCORDING TO KEY QUESTIONS
The answers to the following questions will also help guide you to choosing the right method:
Do you need deliberative processes to Capture Values and Local Knowledge of the
Public?
Focus groups
Participatory Appraisal
Rapid Rural Appraisal
The processes above are a means for the decision-making agency to capture the views and perceptions of citizens, and incorporate these into any future policy decisions. Deliberation occurs between citizens, with minimal exchange between 'experts', decision-makers and citizens. These processes are exploratory and relatively unbounded, allowing participants to explore and discuss issues on their own terms. Because the public articulate their values and ideas in their own terms, the outcomes may not be easily translated into policy decisions which have predefined values and boundaries (Davies, 1999). Outputs tend to be
used for scoping, getting a feel for public opinion or information provision. Deliberative
processes which involve participants recommending policy options also usually involve some
sort of process of exploring values and knowledges, and go on to use this in future stages. For
example Citizens’ Juries, Consensus Conferences and Citizens’ Panels produce a report to the
commissioning agency, giving them an insight into the values and concerns of the
participants. Processes under the family of Participatory Appraisal focus on empowering
participants to go on and develop their own plans and projects.
Do you need processes engaging an informed public in recommending policy?
Citizens Juries
Citizens panels
Consensus conferences
Involving the public in the development and prioritisation of policy options not only helps
those with responsibility for that decision to understand public values and priorities, but it
also can help the development of solutions that are locally relevant and publicly supported.
Issues such as whether the solution is seen as socially acceptable, and possible
implementation problems can be resolved at an early stage, thus increasing the effectiveness
and sustainability of the policy. A further benefit of involving public representatives in the policy process in this way is that they come to understand the issue and the complexity of the policy process better. Trial processes have shown that members of the public are well able to
understand and consider complex issues, make good judgements and competent decisions
(Smith & Wales, 1999).
Do you need processes to engage public and experts together in developing policy
options?
Policy Exercises
Scenario Workshops
Envisioning workshops
Community Visioning
Citizens’ Juries, Panels and Community Advisory Panels effectively allow deliberation
amongst the public participants but there is little true dialogue between participants and the
experts. The above processes and approaches adopt different means of integrating the ideas
and values of both experts and the public. This does raise challenges in that these different
groups have very different expectations, use different types of language and hold very
different knowledges about the issue. There is a risk that public representatives may be less
willing to voice their opinions in a situation where experts are present. By focusing on values rather than technical details, the processes listed above aim to resolve this issue.
Do you need your response to be representative?
Focus groups
Citizens Panel
Citizens Juries
Is the issue complex?
users panels
users forums and networks
planning for real
community profiling / community appraisal
community visioning
open space event
future search conference
policy conference
citizens panels
Citizens Juries
Focus Groups
Use tools like:
Face-to-face interviews
Nominal group technique
Do you need to have a dialogue with the same people?
Users panels
Focus groups
Citizens panel
Use tools like:
Face-to-face interviews
Telephone interviews/surveys
Web forums or e forums
Do you want to find out about a geographical area?
Planning for real
Community profiling / community appraisal
Community visioning
Open space event
Future search conference
Focus groups
Use tools like:
Nominal group technique
Face-to-face interviews
Mad-sad-glad boards
Mystery shopping
IV. FINDING OUT THE DETAIL
Once you have an idea about which methods you will use, you can look at the detail for each: the advantages and disadvantages, a guide to cost, a contact who can provide you with further information, and useful web links for each method.
Use Checklist 5.D below to develop your project plan.
This will help you to think through the practical arrangements for your event. Consider the
accessibility of your venue – is it near to your target group, will it be accessible for people
with disabilities? Think about whether you require interpreters (community or sign language – see Communications below). Do you need to provide childcare or do you need to arrange
substitute care for dependent relatives? Think also about the appropriateness of refreshments
or catering provided (e.g. using local produce, catering for specific groups, providing a
vegetarian option etc). Consider the timing of your meeting – should it be daytime, evenings,
in school holidays? Checklist 5.D will help you think through other considerations.
CHECKLIST NO 5.D – ORGANISING A CONSULTATION EVENT
Use this checklist to ensure you have thought about each aspect of your consultation
event.
Is the consultation method appropriate for your target audience?
Have you taken advice where appropriate from members of the target audience (e.g. voluntary organisations, cultural leaders, specialist workers or carers) before you start?
Have you ensured the consultation exercise is well publicised in a variety of media to
attract your target audiences?
Have you explained clearly from the outset what you are consulting on and what the
options are so that public expectations are not raised unrealistically? Template 2.1 will help
Have you explained who will take the decision and when?
Is the information you are providing available in the right format (e.g. other languages, large print, audiotapes etc.) for your audience?
Is the venue appropriate? People may feel more comfortable in community buildings such
as schools, tenants halls or day centres
Have you checked that your venues are fully accessible for the target audience (e.g.
wheelchair access, disabled toilets, lifts, induction loop, childcare, substitute care?)
Is the layout appropriate? People may feel more involved if the layout of chairs is in the
round, rather than with an audience and top table.
Is the timing appropriate for your audience? For people who work, or for those who look
after children, do you need to provide crèche facilities?
Have you considered holidays including school and religious holidays and festivals?
Have you provided transport for those who need it or offered to pay transport costs?
Have you provided refreshments?
Are these appropriate for your target audience?
Have you provided expenses for attending the meeting if appropriate? Have you provided
any other incentives for attendance?
Have you given people an opportunity to provide feedback on your exercise or fill in an
evaluation sheet?
Don’t forget to thank the participants and tell them how and when you will be feeding back
the results
Have you made sure that people have a contact point for further information?
Ensure the results of the consultation exercise are widely communicated in a variety of media
to help increase public confidence (Stage 6 will provide you with some guidance)
Provide sufficient information
Sufficient information must be supplied to consultees in order for them to be able to consider
and respond. For example consultation documents should set out the main information and
competing arguments relevant to whatever options are possible. This should be set out in an
accessible form.
Allow sufficient time for consultees to give you a considered response
Allow sufficient time for your consultees to give you a considered response. Be aware of
factors that may make a difference – such as school and public holidays, and major sporting
events.
Remember to check for regular days and times of worship of different faith groups that may
impact on your consultation.
Sometimes it can be difficult to decide which dates to avoid but it is important to demonstrate
your awareness of festivals. If festivals and worship times cannot be avoided remember you
may need to set aside a room for prayers.
For written consultation documents we recommend that a response time of 12 weeks should
be given.
Consider the implications of the Data Protection Act (DPA)
In particular, ensure you are complying with the Data Protection Act. The extensive guidance attached gives you examples of wording to use on surveys, or with user or focus groups, that ensure you are complying with the spirit as well as the letter of the Act. Remember, if you don't follow the principle and practice set out in the Toolkit your consultation process could be challenged.
In-house / external expertise?
This Toolkit provides you with stage-by-stage advice, and a range of 'in-house' contacts and networks that can help you to make your consultation effective and inclusive. You may decide, however, that the scale of the task is beyond in-house expertise.
Factors to consider when deciding whether to use external consultants include:
• Budget available
• Whether the size of the consultation can be met from in-house resources
• Whether the appropriate skills are available in-house
• Range of methods selected – e.g. you may want to use external, on-street interviewers to run face-to-face interviews but run a focus group 'in-house'
• Profile of the consultation exercise – are the outcomes likely to be very important to the Authority / very contentious?
Complete your Consultation Project Plan
If you have not already done so complete your Consultation Project Plan now.
Double check - things you should know by now!
If you have worked through Stages 1 – 4 of this Toolkit you should be about ready to start
your consultation.
As a quick reminder – check that you know the following:
Who to consult – have you thought of everyone?
Whose views will be most influential?
That you have thought of the right issues and questions to focus on?
That you have selected the most appropriate method /s?
How much it will cost and where the money is coming from?
What decisions will be affected and when?
That you have thought through the Data Protection Act and other legal frameworks and how
they might affect your consultation?
Double check - things the people you are consulting should know
Before you start be sure that your consultees know:
Who is being involved and why
What decisions will be influenced
Who will take these decisions
When the decisions will be taken
How the results will be fed back to people
That anonymity will be respected if requested
Who they can contact about the exercise
9.5 Defining how you will analyse the results of your consultation
Don’t underestimate the amount of time and effort required to process data or write reports.
Even if you get an agency to do your data input and processing, you may still have a lot of
work to do.
The precise method of analysis will vary depending on the consultation method(s) used.
Analysing quantitative data
For a definition of quantitative and qualitative data, see 'What methods will you use?' above.
To know which consultation methods are quantitative and which are qualitative, see CHECKLIST 5.C.
Quantitative data is the easiest type of data to analyse in terms of producing statistics and
graphs and then interpreting the results.
In order to effectively and efficiently analyse questionnaires the responses should be turned
into an electronic format. This involves inputting the responses into a suitable format for
analysis. There are many ways of doing this, for example, in MS Excel, Access or specialist
analysis packages such as SPSS. For longer surveys you may need to employ a company who
will input the data for you (charged by the keystroke).
Coding data and analysis
Excel is capable of satisfactory data input and analysis from fairly short and simple surveys.
The following guidance assumes the use of Excel:
Each row of data in Excel represents a completed survey known as a ‘case’, columns
represent questions or parts of questions.
Coding data is the procedure by which answers are converted into numeric codes for analysis
by computer. In most face to face surveys the interviewer is able to carry out most of the
coding immediately, usually by ringing the appropriate number on the questionnaire. Single
response tickbox questions can be pre-coded in the survey ready for straightforward data
input.
Pre-coding involves placing a small number next to the tick-box response indicating the numeric value associated with that option, e.g. YES (box) = 1, NO (box) = 2. Therefore if someone ticks 'No' a 2 is input into the data file. Pre-coding in this way works best for questions where 'tick one only' applies; you can have as many pre-codes as you like as long as the question does not become too long!
Where more than one option can be ticked a slightly different data input approach is needed.
If a question specifies ‘tick all that apply’ the data should be input as either 0’s or blanks if
it’s not ticked, and input a 1 if it is ticked. Essentially you enter a response for each option.
E.g. question 3 below is a multiple choice question with a combination of 5 possible
responses (the respondent can tick all that apply a-e). If the respondent ticks a, b and d then
you input 1, 1, 0, 1, 0. Each option, a-e, has its own column of data in the Excel file. A 1 is input where the respondent has ticked that response and a 0 where they haven't. For multiple response questions like this remember the percentages will not add up to 100.
Questions 1, 2, 4 and 5 are single response questions i.e. tick one only. In question 5 you can
see that the first person ticked the response coded as 5, the second person ticked 2 and the
third ticked option 3.
Below is an example of what your data file might look like in Excel.

ID   Q1   Q2   Q3a   Q3b   Q3c   Q3d   Q3e   Q4   Q5   Q6
01   2    1    1     1     0     1     1     0    5    WR6 4
02   1    2    1     1     0     0     1     3    2    DY13 2
03   2    2    1     1     1     1     1     0    2    B971
It is important to keep a record of how you are inputting and coding your data. Open ended
questions or ‘other please specify’ will require typing as text which can be printed off and
quantified at a later stage.
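If the coded data are handled outside Excel, the same tallying can be done in a few lines. The sketch below uses Python and pandas (an assumption, not a toolkit requirement) with the three example cases from the table above.

```python
# Minimal sketch (not part of the toolkit) showing how the coded data file
# above could be tallied, here using Python and pandas.
import pandas as pd

# The three example cases from the table above, coded as described:
# single-response questions hold the code of the ticked option, and each
# 'tick all that apply' option (Q3a-Q3e) holds 1 if ticked, 0 otherwise.
data = pd.DataFrame({
    "ID": ["01", "02", "03"],
    "Q1": [2, 1, 2],
    "Q2": [1, 2, 2],
    "Q3a": [1, 1, 1], "Q3b": [1, 1, 1], "Q3c": [0, 0, 1],
    "Q3d": [1, 0, 1], "Q3e": [1, 1, 1],
    "Q4": [0, 3, 0],
    "Q5": [5, 2, 2],
    "Q6": ["WR6 4", "DY13 2", "B971"],
})

# Frequency count and percentage for a single-response question.
print(data["Q5"].value_counts())
print(data["Q5"].value_counts(normalize=True) * 100)

# For the multi-response question, percentages are taken per option and
# will not add up to 100 across the options.
multi = data[["Q3a", "Q3b", "Q3c", "Q3d", "Q3e"]]
print((multi.mean() * 100).round(1))
```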
When examining a new data set, perform data verification and cleaning, which helps ensure
that your numeric results are accurate. For example, if you have gender data in which '1' is for
male and '2' is for female, your data shouldn’t have '3' as a response. Validate your numeric
codes known as ‘data cleansing’.
You may wish to recode some of your data. For example if respondents have written in their
age in years it is best to re-code it into suitable age groups such as 18-34, 35-49, 50-64 and 65+ to aid analysis of results.
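A possible recoding of age in years into the bands suggested above, again sketched in Python/pandas with hypothetical responses:

```python
# Sketch (hypothetical ages) of recoding an open 'age in years' answer into
# the bands 18-34, 35-49, 50-64 and 65+ to aid analysis.
import pandas as pd

ages = pd.Series([22, 37, 51, 68, 45])  # hypothetical responses
age_groups = pd.cut(
    ages,
    bins=[17, 34, 49, 64, 120],
    labels=["18-34", "35-49", "50-64", "65+"],
)
print(age_groups.value_counts())
```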
Once the data is finally input and cleaned you will end up with a screen full of numeric codes
and some text (where open ended questions have been used). Excel allows you to calculate
frequency counts and percentages of particular numeric responses and you can apply filters
and other statistical tests to the data. Proficient use of Excel is needed. Applying filters and
running pivot tables allows you to interrogate your data, create cross-tabulations and subdivide the sample into groups or categories which can be analysed and compared against each
other.
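The cross-tabulations produced by Excel pivot tables can equally be sketched in Python/pandas; the question codes below are hypothetical:

```python
# Sketch (hypothetical codes) of a cross-tabulation of the kind Excel pivot
# tables provide: responses to one question broken down by another.
import pandas as pd

survey = pd.DataFrame({
    "Q1": [2, 1, 2, 1, 2, 2],   # e.g. 1 = male, 2 = female
    "Q5": [5, 2, 2, 3, 5, 2],   # e.g. satisfaction rating code
})

# Counts of each Q5 response within each Q1 group, plus row percentages.
print(pd.crosstab(survey["Q1"], survey["Q5"]))
print(pd.crosstab(survey["Q1"], survey["Q5"], normalize="index") * 100)
```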
Analysing qualitative data
You must ensure that accurate and complete records are kept of all responses, whether
received through a formal written consultation or more interactive methods.
Where your consultation has been very wide ranging and involved different groups of
stakeholders try to sort out the responses into particular types, for example business groups,
employees, community/voluntary organisations, individual views. You may want to use other
criteria such as rural respondents. This will help you identify variations in perspectives on
particular issues.
Develop a 'framework grid' for analysis by identifying the key proposals/issues/themes that
you were consulting about, and then summarise the primary viewpoints from your key
stakeholders on each aspect.
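One possible, purely illustrative way to hold such a framework grid electronically is sketched below in Python/pandas; the themes, stakeholder groups and summaries are placeholders only:

```python
# Sketch of a 'framework grid': themes consulted on as rows, stakeholder
# groups as columns, each cell summarising that group's main viewpoint.
# All names and entries are hypothetical placeholders.
import pandas as pd

grid = pd.DataFrame(
    index=["theme 1: opening hours", "theme 2: charges"],
    columns=["business groups", "employees", "voluntary organisations"],
)
grid.loc["theme 1: opening hours", "employees"] = "concern about late shifts"
grid.loc["theme 2: charges", "business groups"] = "broadly supportive"
print(grid.fillna("-"))
```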
Identifying the key messages from your consultation
Responses should be analysed with an open mind. This will allow evaluation of the responses
before coming to a final decision. If responses are considered with an insufficiently open mind, and the consultation is merely an exercise to validate a previously held view, the process will be open to legal challenge.
It is important that key messages from your exercise are clearly identified and reported. You
should also identify areas where views diverge and opinion is divided.
Use Checklist 6.A - Identifying Key Messages below to help you work out what the results
mean and to identify your key messages.
This checklist will help you to identify what your results have told you:
Think about the following questions:

The overall picture
• What are the main findings?
• Are people satisfied / dissatisfied?
• What are the areas on which there is a majority consensus?
• Where do views and opinions differ?
• Are views consistent?
• What does the sub-group analysis show?

Strengths and Weaknesses
• Do we have any clear messages?
• What are the public's priorities?
• How are we doing on each of these?
• What can we do to meet these?
• What can we do little about?

User expectations
• How are we doing against these?
• How can we improve?
• What can we do little about?

Our expectations
• Which results did we expect?
• Which results are a surprise?

Benchmarking
• Can we benchmark these results against other Council services?
• Can we benchmark these results against other Authorities?

Can we identify any trends?
• Any upward trends?
• Any downward trends?
• Any results that have stayed the same?

Can we identify trends from elsewhere?
• Can we compare results with others who have asked the same questions / used the same methods?
• Are we moving in the same direction as national trends?

What is the current climate?
• Are ratings rising / falling in general?
Anonymising results
When providing feedback ensure that for quantitative methods data is anonymised in line
with the Data Protection Act.
For qualitative methods ensure that you respect any stated requests for confidentiality. When
giving examples of the types of responses you have received it is acceptable to change names or use titles such as "male, 23 yrs" or "girl, 10 yrs". If you are tape recording a focus group, be
sure to seek permission from participants in advance.
Identifying priorities and actions from your results
In analysing the results you will need to identify priorities and highlight these in your feedback and communications. Work through Checklist 6.B – Identifying Priorities and Actions From Your Results below to help you do this:
Which findings do not require action? - e.g. low priority or results that are very good.
Which things can we not change in the short term? – how do we tell people – popular recommendations that cannot be taken forward require an explanation as part of your feedback.
Which things can we change in the short term? – identifying “quick wins”, especially
those that can be done within existing budgets or timescales. This demonstrates that you can
and will act on the outcomes of consultation.
Which results highlight the need for action? – what are the next steps, who needs to
know, does funding need to be identified, is further consultation needed, when can decisions
be taken?
Which results highlight the need for more communications? – what is the issue, how will we communicate it, to whom and where?
Which results highlight the need for further consultation? - in some circumstances new
alternatives will come to light which may themselves call for further consultation.
Balancing conflicting results
Local communities are not homogeneous, so consultees will frequently express a range of
views. On a controversial issue views may be strongly polarised. This may happen, for
example, if a facility is deemed to be a 'good thing' by the population as a whole but no-one
wants it in their own back yard.
In resolving these conflicts take the nature of the different types of consultation into account. If, for example, the subject was complicated, or needed background information to understand it fully, the views of a small, well-informed sample (e.g. a citizens' jury) may be more relevant than those of a large uninformed sample.
When consultees’ views diverge it is particularly important to provide clear feedback. We
make this commitment in the consultation strategy. Consultees who do not feel that their
point of view has been fairly represented may have recourse to the Staff's representations procedure. Balanced feedback can assist individuals who do not like the decision that has been reached to feel that the process has given them a fair hearing.
9.6 Defining how you will provide feedback
Who needs to know your results?
You should ALWAYS provide feedback to your respondents and consultees. Refer back to your original list of stakeholders. Make sure that you let them know what was learnt and what was done with the information. In addition consider:
• The need to report the results of strategic/contentious consultation exercises
• Local Members – particularly if the results affect their ward or portfolio
• Chief Officers, Heads of Service, Team Managers – particularly where the results affect their service
• Front line staff and other Officers
• Partner organisations
• Users/residents and other members of the community
• Other identified stakeholders – to check that you have not missed anyone out of the feedback
What does your audience need to know?
You will need to take account of WHEN feedback should be provided to consultees. On
some occasions this should be AFTER decisions are taken so that you can report not only
what you found out, but also the outcome of the process.
Different audiences will want different levels of information. For example, residents may be
interested in the headline findings of a residents’ attitude survey - but may be more interested
in the detailed results of consultation on a controversial development in the area.
It is particularly important to give clear feedback when there has been controversy, or a
decision has been taken which goes against popular opinion. In these circumstances
respondents may want a detailed account of the findings and the outcomes, and to know how
their views were taken into account – even if they didn’t get their desired outcome.
Care should be taken to communicate appropriately to all those who took part to increase
public confidence in the process.
The levels of information you should consider providing include:
• What methodology was used and how it worked
• Headline findings or an executive summary
• Full results
• Invitation for feedback/suggestions
• An action plan
How will you tell them?
Different audiences will have different needs, so communicating the results of the
consultation could take different forms, and in many cases a mix of techniques is best. For
example, communication methods could include:
• Feedback documents/letters to respondents, which include headline findings and subsequent actions
• Presentations
• Seminars and workshops
• Summary reports
• Detailed reports
• Through the “SEAMLESS-IF – Consultation Planner & Finder” database
• Through the email system – remember to obtain the necessary permission
• Through SMS/Text Messaging
• Make a video or use drama or other interactive method
• Residents’ newspaper or magazine
• Via the local media
• Staff newsletters
Make sure that you consider the communication requirements of people who will be
receiving your feedback. The Communications Unit will be able to help.
Linking your results to decision making
The importance of linking consultation to a decision has been highlighted throughout this
Toolkit and in the WCC Consultation Strategy. It is important to record, evidence and feed
back how the results of the consultation have influenced the decision.
When a decision has been taken that differs from the opinions expressed through
consultation, it is important that this is clearly explained.
Enter your results on the SEAMLESS-IF – Consultation Database
Don’t forget to log your results on to the SEAMLESS-IF – Consultation Database once it is
in place. The database also allows you to record the outcomes of your process.
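Purely as an illustration, the kind of information worth logging for each exercise (title, method, stakeholders consulted, headline findings, outcomes) can be sketched as a simple record. The field names in the Python sketch below are hypothetical examples and do not describe the actual structure of the SEAMLESS-IF Consultation Database.

# Minimal sketch only: hypothetical fields for logging a consultation exercise.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConsultationRecord:
    title: str                    # name of the consultation exercise
    method: str                   # e.g. survey, focus group, citizens' jury
    stakeholders: list[str]       # groups consulted
    headline_findings: str        # summary of what was learnt
    outcomes: str = ""            # decisions or changes that followed
    logged_on: date = field(default_factory=date.today)

record = ConsultationRecord(
    title="Expectations about SEAMLESS-IF (region X)",
    method="face-to-face interviews",
    stakeholders=["policy makers", "technicians", "external experts"],
    headline_findings="Preferred policy scenarios and indicators collected",
)
print(record)

Keeping the findings and the outcomes in the same record makes it easier to share what you learnt and what changed as a result.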
10 Guidance for the evaluation and improvement of
participatory protocols
This chapter is intended to outline some of the key issues and constraints facing participatory
exercises in SEAMLESS, and to provide guidance for the evaluation and improvement of the
suggested set of participatory methods. The focus is on using evaluation as a tool for adaptive
learning and method improvement.
It is organised as follows. Sections 1 and 2 describe the rationale and present a framework for
the evaluation of participatory methods within the context of SEAMLESS, which has the dual
objectives of supporting high-quality, relevant applied methods while at the same time
strengthening users' research capacity. In this case, a balance must be struck between
“academically ideal” methods, available resources, researcher capacity and skills, and users'
needs. This influences evaluation criteria and expectations of participatory methods.
Section 3 describes key considerations for developing an appropriate, learning-based
approach for evaluating participatory methods. This draws from a number of different
evaluation strategies and recognises that different user groups (researchers, agencies,
community members) have different monitoring and evaluation needs, as well as different
perceptions of positive and negative research outcomes. Section 4 presents options for
integrating monitoring and evaluation into the different stages of the project cycle (pre-project, in-project and interim or post-project).
The final section presents the issues and questions to consider in monitoring and evaluating
the process and outcomes of participatory methods for SEAMLESS.
10.1 Rationale for evaluating participatory exercises
Two overall goals of participatory research can be considered in monitoring and evaluation:
participation as a product, for which the act of participation itself is an
objective and an indicator of success; and participation as a process to meet research
objectives and goals (Cummings 1997:26; Rocheleau and Slocum 1995:18-19). For
evaluation purposes, participatory methods generate products of the following kinds:
Participatory processes describe who was involved, how, and at what stage of the project;
this shapes the ultimate outcomes and reach of the research project. They can be considered a
functional means for meeting specific objectives of the SEAMLESS project.
Outputs of participatory processes describe the concrete and tangible consequences of the
selected participatory activities. These include information and product outputs (e.g.
information from participatory baseline analysis or community monitoring, new agricultural
practices or technologies developed with farmers, new community resource management
approaches, etc.). Outputs also include tangibles such as number of people trained, number of
farmers involved in on-farm experiments, number of reports or publications produced from
the research, etc. “Participation” itself can be considered an output.
Outcomes (short term impacts or effects) describe the intermediate impacts of the
participatory processes. Outcomes result both from meeting research objectives as well as
from the research process itself. They can be negative or positive, expected or unexpected,
and encompass both “functional” effects of participatory research (e.g. greater adoption and
diffusion of new farming practices) and intangible “empowering” effects (e.g. improved
community confidence or self-esteem, improved local ability to resolve conflict or solve
problems).
Impacts describe overall changes in the community (negative or positive) and may include
overall social and development goals. Desired impacts of participatory research for natural
resource management include sustainability of livelihoods and natural resources,
empowerment of communities, decreased poverty, improved equity, and so on. Development
impacts are influenced by many factors external to the project and are often observable only
in the long term. Consequently, assessing the impact of a participatory research project is
extremely difficult. For evaluation purposes, it is more realistic to consider outcomes as
“intermediate” signs of impact.
Reach. The concept of reach cross-cuts all of the products of participatory research. Reach
describes the scope of who is influenced by the research combined with who “responds” or
acts because of this influence. Participatory research is assumed to influence reach by
involving marginal groups and communities throughout the research process rather than
treating them as passive “beneficiaries” of the research results. Participatory methods are
anticipated to improve equity and appropriateness of results, the distribution of research costs
and benefits, and the persistence of behavioural change at the community level. For the
purposes of IDRC, which has a mandate of strengthening research capacity in the South, an
important consideration for reach is the spread of capacity and ideas at the level of
researchers and research institutions.
Indicators can be defined for the different products and stages of participatory research. In
practice, differentiating between process, output, outcome and reach of participatory research
can be fuzzy and artificial since these are often “sequential” and “time-dependent”.
Therefore, it does not always make sense to differentiate between these in evaluation.
10.2 Framework for evaluating participatory exercises
Evaluation of participatory research for natural resource management projects must be
situated within parameters that influence the appropriateness and feasibility of different
participatory approaches and that determine realistic expectations from different
participatory research projects. These parameters include the nature of the research question,
the initial “capacity” of the local people and researchers involved, the values and motivations
for using a participatory research approach, and external contextual factors that enable or
constrain participation.
Questions that can be considered when framing an evaluation include:
Goals
Is the participatory approach appropriate for the research question?
a. What are the goals and overall objectives of the research process? Functional, empowering
or transformative, improved farm production, improved decision-making for common
resources, etc.
Is participatory research the best approach for meeting the research goals and objectives?
Who will benefit from community participation in the research?
b. What is the sector of the research? Fisheries, forestry, farming
Does the research problem address resource decisions that require individual decision-making
and compliance, or collective decision-making and compliance?
c. What are the dimensions of the research? Economic, social, ecological, political, etc.
d. What is the appropriate scale and scope of participation? Local, regional, national.
Who needs to be involved (what stakeholders) and are they included in the process?
At what stage do these groups need to be involved?
External Context
a. What are the social, cultural, political, environmental, economic and institutional
variables that are likely to enable or constrain different approaches and methods of
participatory research?
b. What contextual variables will affect the research? Will these restrict the type of
participatory approach that is feasible? What are the risks and enabling factors?
Values and Motivation
What are the motivating factors and underlying values for engaging in a participatory
research approach?
Of researchers and research institutions: Commitment to a participatory research approach,
commitment to allowing the community to direct the process, attitudes and values regarding
local knowledge and local people, focus on empowering or functional goals of participatory
research, culture, etc.
Of the community and subgroups, and possibly other stakeholders: Motivation to
participate in process, awareness of problems and desire to address them, culture, past
experience with participatory research or other projects, expectations of benefit, values
towards collective action, values of hierarchy and respect, values of equity, conservation,
differing interests in negotiating access to resources or power, etc.
Of the donor institution: acceptance of fluid research processes, openness to alternative
forms of accountability, time-frame flexibility, etc.
Capacity
What are the existing skills and experience of the researchers and research organisations with
participatory research? What is the existing capacity of the community (institutional and
individual) to deal with local natural resource problems and to work collectively?
Of researchers and institutions: Past experience with participatory methods, training, skills
and experience with community facilitation, understanding of social and gender dimensions
of research, adaptability and flexibility, etc.
Capacity of the community: Existing level of education and skills, level of organisation,
traditional forms of natural resource management, approaches for managing conflict and
making collective decisions, history of collective action, etc.
The above parameters help establish realistic expectations for participatory research
processes and results. Aspects of the research process that can be considered for evaluation
within this context include:
Relevance and effectiveness of participatory tools and methods: Stage at which these are
used, adaptability and progress of the research process according to the context and according
to various emerging realities, adaptation of methods when necessary to enable representation
of different perspectives, etc.
Scope for social transformation: Community ownership of research process, learning and
capacity building from the process, community involvement in identifying research priorities,
in defining solutions, in action, etc.
“Quality” of participation: Identification and representation of important stakeholders,
“scale” of participation, etc.
Trustworthiness and validity of the research findings: Are the researchers taking measures
to ensure the validity of the research findings?
10.3 Evaluating participatory exercises
Why evaluate?
Evaluating consultation exercises can help you to:
• Find out what worked and what did not
• Uncover unexpected outcomes
• Apply learning to improve future practice
• Know whether involving the public actually contributes to improved decision making
• Assess whether the exercise was cost effective in terms of time and resources
How you will evaluate should be considered at the planning stages of a consultation exercise.
Emphasis should be placed on ensuring that evaluation is:
• Done in good time
• Cost effective
• Proportional to the scale of the project
• Adequately resourced
Evaluation is easier when good practice, as set out in this Toolkit, has been followed.
Participants’ evaluation of consultation exercises
You should always offer participants an opportunity to comment on your consultation
exercise/process. The way you do this will vary according to the method that you have used.
Different stages of the SEAMLESS-IF – Consultation Toolkit offer different levels of
participation and involvement to consultees. The questions you ask about your exercise will
need to reflect this.
Checklist 8.A - Participants Evaluation of Consultation Exercises below offers a range of
questions that you could consider asking participants.
CHECKLIST 8.A – PARTICIPANTS' EVALUATION OF CONSULTATION
EXERCISES
In every consultation that you undertake it is important to give participants an opportunity to
evaluate the exercise. Choose from the range of questions below those that are applicable to
your method.
Did you understand why you were asked to be involved in this exercise?
Did you know from the outset what difference your participation would make – i.e. did you
understand what this consultation could influence and what it could not?
Did you think that you were provided with adequate information about the issue?
Was the information easy to read and understand?
Was the information of sufficient detail to help you make up your mind?
If not, what information would have helped you to take part?
Were you told who to ask or where to go if you needed more information?
How easy was it for you to give your views?
Did you think the questions you were asked were fair and balanced?
Were you given the opportunity to express a range of opinions?
Did you feel that you needed additional support to participate?
What else could have been done to help you to participate?
What did you think of the practical arrangements for this exercise (e.g. venue,
refreshments, interpreters, facilitators)?
Did you feel that the consultation exercise was fair and balanced?
Did you feel that your contribution was listened to and respected?
Did you feel your contribution was taken seriously?
Did you feel that your contribution made a difference?
How would you suggest that this consultation exercise could have been improved?
What do you feel you gained from being involved in this exercise?
Were you given information about what we found out as a result of this exercise?
Did we tell you what, if anything, changed?
What do you think happened as a result of this exercise – do you think it made a
difference?
Has being involved in this exercise changed the way that you feel about the service/issue
(either for better/worse)?
The West Midlands Regional Network for User & Carer Involvement has developed a
“Consultation Matrix – A Measuring Tool for User and Carer Involvement within Social
Services“. Adapt this for your use.
How did you do when we involved you?
· When I am fully involved I feel happy, excited, interested and important
· When I am fully involved I feel informed and understand what is going on
· When I am fully involved I feel powerful enough to change things
· When I am fully involved I feel like a respected and equal citizen with rights
This form asks you to say what you think about being involved in (named event/activity) that
took place at …………………………………………………………………………………………… on (today’s date).
Please tick the faces that tell us what you think and write any other things you want to tell us.
BEING INFORMED
1. Were you told enough for you to be able to take part? Yes / No / Sometimes
2. Did we keep you informed and tell you what was going on? Yes / No / Sometimes
3. Did you understand what we said? Yes / No / Sometimes
4. Were you told who to ask to get more information? Yes / No / Sometimes
How could we have informed you more?
LISTENING TO YOU
1. Whilst you were taking part did we treat you with courtesy and respect? Yes / No / Sometimes
2. Did you feel your views and opinions were listened to? Yes / No / Sometimes
3. Did you feel your views and opinions were taken seriously? Yes / No / Sometimes
How could we have listened to you better?
TAKING PART
1. Were you clear about why you were taking part? Yes / No / Sometimes
2. Did we tell you what you could change? Yes / No / Sometimes
3. Did we tell you what you could not change? Yes / No / Sometimes
4. Did you feel able to take part? Yes / No / Sometimes
How could we have involved you more?
WHAT DIFFERENCE HAS YOUR INVOLVEMENT MADE?
1. Did you feel you were able to influence the decisions that were made? Yes / No / Not Sure
2. Did anything happen as a result of you taking part? Yes / No / Not Sure
3. Did we tell you what, if anything, has happened? Yes / No / Not Sure
4. Overall, did you feel it was worthwhile taking part? Yes / No / Not Sure
IS THERE ANYTHING ELSE YOU WANT TO TELL US?
Basic evaluation - questions to ask yourself every time
The Checklist 8.B - Basic Evaluation – Questions To Ask Yourself Every Time below is a
basic list of questions that you should answer after carrying out any consultation exercise.
Working through these will help you to evaluate what happened and learn for next time.
Remember to retain your evaluation on file as it may be needed for audit purposes.
CHECKLIST 8.B – BASIC EVALUATION – QUESTIONS TO ASK YOURSELF
EVERY TIME
Use the checklist attached to evaluate your consultation exercise:
Did everyone involved (staff, consultees, partners) understand the objectives of the
exercise?
Were the right stakeholders involved ?
Did you successfully reach all your stakeholders?
Were the numbers who took part as expected – did you reach your targets?
Were you successful in reaching ‘hard-to-reach’ groups?
Did the publicity material you used work (e.g. posters to advertise an event, putting
material on the internet, press releases)?
Did you get the level of information you provided right? (e.g. it was easy to access,
relevant to the consultation, produced in plain language, easy to understand and available in
other languages and in other formats, e.g. Braille and audio cassette, where necessary)
Was the consultation accessible (e.g. interpreters were provided if necessary, venues were
accessible, seating and set up encouraged participation)?
Did the methods used match the objectives?
Was there the right balance of qualitative and quantitative methods?
If you used more than one method, which worked better than others and why?
Did some methods work better with particular stakeholders than others? Note this for the
future.
Was the timescale and process transparent and kept to – if not, why not?
Did you get the information you wanted in sufficient time, depth, and quality?
Were the level of resources and support right?
Did you budget adequately – note areas of overspend/savings for next time
What were the costs (include staff time)?
Were there any unforeseen costs – what were they?
What was the evaluation of those who took part - what did they think of the information
provided, was it easy to give views, did they perceive the exercise as fair, useful?
Did it lead to a change of policy, service etc – be specific - how?
How many people will be affected by the changes?
Has the consultation changed the relationship between you and your users and others?
What would you do differently next time?
The acid test – the effect of the consultation
The key question: “Has anything changed as a result of the consultation?” At the end you
need to be able to measure whether:
• You got views that you could use
• You have actually used those views
• The consultation has led to some identifiable change
• The consultation has changed the relationship between you, your users and others
Strategic evaluation of consultation outcomes and process
The Checklist 8.C - Strategic Evaluation of Consultation Outcomes and Process below will
assist managers to take a more strategic approach to the evaluation of consultation. It offers
managers and those seeking an overview of activity an evaluation process that considers both
process and outcome. It is particularly useful when trying to balance the costs involved with
the outcomes achieved.
CHECKLIST 8.C – STRATEGIC EVALUATION OF CONSULTATION OUTCOMES AND PROCESS
This tool offers you an opportunity to evaluate your consultation in terms of the outcomes
achieved and the process that you undertook.
Considering Outcomes
Did consultation directly inform a decision or shape policy or service delivery
arrangements?
Were the consultation results used to set local performance standards and targets?
Has the exercise helped to improve the cost effectiveness of a service by making it match
users’ needs more closely?
Over time, has consultation resulted in an increase in the percentage of local people who
say that the authority listens to their views or who have expressed satisfaction with the
project?
Considering Process
Did the exercise(s) reach a representative sample of the population or, where this is
appropriate, all the target groups?
Are your response rates consistently high enough to give reliable results?
Are results regularly disseminated to consultees, the wider public, relevant staff (including
front line staff) and partner organisations?
If consultation exercises did not meet their objectives, why was this and what steps can be
taken to prevent similar problems in the future?
What did consultation cost, both directly and indirectly?
What proportion is this of the overall cost of the relevant service?
How does the cost compare with other similar exercises in the authority or other similar
authorities?
Has the cost been shared by designing the exercise to be valuable to more than one service
or organisation?
Has the programme been planned to cover both corporate and service area priorities?
Has the programme been planned jointly with partner or neighbouring organisations?
Is consultation being carried out to a consistently high standard across the organisation – is
the Toolkit being used in a consistent way and all steps followed?
Were 100% of your exercises - including your findings and outcomes - logged onto the
Consultation database and made available to other services or organisations that might find
them helpful?
Independent evaluation
In some circumstances, for example where there has been a lot of external criticism of a
particular project, you may want to consider commissioning an independent evaluation of
your exercise. External evaluation may increase the legitimacy of the findings.
Share what you learn
The - "SEAMLESS-IF – Consultation Planner & Finder” will have a check box for you to
complete when you have carried out your evaluation. You will be able to consider logging the
outcomes of your evaluation on to the 'Good Practice' area of the Consultation website. This
provides a valuable opportunity to share good practice and learn from our mistakes.
SECTION D. USES OF PARTICIPATORY METHODS IN
SEAMLESS
11 Recall: Participation needs in Seamless
We have referred earlier to several uses of participatory approaches and methods. First, we
want to show where the DOW calls for participation, and second, we will suggest a list
of situations where participatory methods can be used during the project.
11.1 Participation status and approach in the Seamless project
Within the Seamless project, the participation of users and other stakeholders has been
targeted as a main goal. Moreover, one of the five specific objectives of Seamless mentioned
in the DOW is to promote participatory development and use of Seamless-IF, including
dissemination, knowledge transfer and training (§ 2-). Further, the DOW specifies that
participation aims at establishing dialogue between Seamless-IF designers, users and
stakeholders and at preparing Seamless-IF dissemination (§ 6.0.2-).
This refers to the stated objectives of participatory approaches: first, users' involvement and
dissemination, and second, ensuring applicability for users (§ 1-). It is also coherent
with the proposal of identifying and using participatory methods (i) to distil
information from prime users and stakeholders in rural development, and (ii) to ensure the utility
and dissemination of Seamless-IF (§ 2.1-).
The DOW requires a wide range of different stakeholders to be taken into account (§ 6.0.3.7-).
Stakeholders are defined as all individuals and groups affecting and/or being affected by
agricultural policy decisions and agricultural land use. Users are defined as those using
Seamless-IF for making or evaluating policy decisions or innovations. Therefore,
stakeholders include users (§ 6.0.3.7-).
The means identified for success are interactions and communication with users and
stakeholders (§ 6.0.2-):
• First, the expressed will to design Seamless-IF through an iterative process, with
feedback between (i) developers and users, and (ii) developers and the scientists who develop and
implement the tools used in the framework (§ 2.1-). The prime users will be involved in
developing, testing and improving Seamless-IF (§ 6.0.2-). They will be involved from the
early stages of the project (§ 6.0.3.7-). Knowledge about how other users' organisations
function will be acquired (thematic organisation, structure of decision-making, and
culture), because they will have to be able to integrate the use of the tool into their ongoing work and
existing structure (§ 6.0.3.7-).
• Second, the specification that Seamless-IF will comprise training and implementation, in order
to transfer Seamless-IF to the main users (§ 2-). The demonstration activities mentioned (part
of the participatory methods) will involve the main users and other stakeholders (§ 6.0.2-).
Demonstration will concern the applicability and reliability of Seamless-IF, on the
occasion of tests of use (which also aim at validating Seamless-IF) (§ 2-). In addition to training,
users' needs for future support and consulting will be defined (§ 6.0.3.7-). Two kinds of
training activities are foreseen: internal, among the project partners, and external, with users
of Seamless-IF (§ 6.0.2-).
11.2 List of uses of participatory methods in the Seamless project
Before suggesting such a list, we recall the foreseen uses of Seamless-IF itself, once
it becomes a finished tool.
- Seamless-IF is intended to simulate the impacts of policy scenarios in the agricultural and
environmental fields. The aim is to support policy making by giving policy-makers
at any level a good idea of the impacts of simulated policy scenarios.
- Seamless-IF could sometimes be used (by prime and other users) as a training tool, to
organise debates with stakeholders involved in the agricultural or environmental field.
- Seamless-IF would be a research tool for scientists.
So, from the elements of the DOW highlighted above, we can suggest the following list.
This list is classified according to the human group(s) involved.
- Prime users: participatory methods will be used to collect their needs (scenarios to
simulate, relevant indicators, interface wishes) and to assess each prototype.
Participatory methods are intended to facilitate dissemination among this target public. Prime
users will probably also participate in assessing the implementation of the participatory methods themselves.
- National and regional stakeholders: participatory methods will be used to motivate them to
participate, to indicate relevant indicators and interface wishes, and to assess some
prototypes. Their involvement depends on the test case and zone concerned, as shown in the scheme below.
Participatory methods are intended to facilitate dissemination among part of this target
public (organisations that will be able to simulate scenarios on their own).
- Scientists: participatory methods are involved in the testing of the interface by scientists.
Comment about the users' forum: this forum is a group made up of prime users and
probably of national and regional stakeholders, intended to answer, via the internet,
questions addressed to them by the developers of Seamless-IF or other scientists involved in the project.
We do not consider every interaction within Seamless to call for participatory
methods. For instance, iterations between Seamless-IF developers and scientists are not
participation.
The previous list remains at a very general level. To work operationally, we need
to examine more detailed features. As the foreseen interactions with national and regional
stakeholders are quite complex, we suggest zooming in on the test cases. We therefore recall the
hypotheses taken into account to draw the scheme below.
[Scheme: situations where participatory methods are involved in the Seamless project, within test cases and with prime users, according to place and time]
Explanations about the preceding scheme
The scheme shows the different situations where participatory methods are involved in the
Seamless project, within test cases and with prime users (DG officers), according to place
and time. Our relevant unit is thus the group of stakeholders of a given region.
Definitions
DGs
The DGs of the European Commission involved are mainly DG Agriculture, DG Environment and
DG Research. Other DGs could be involved: DG Tren, DG Trade and DG EcFin (cf.
“Minutes of Seamless Information Meeting at DG Agri April 28th 2005”). The DGs of the
second group will perhaps not be involved as much as the main ones; a simplification of the
participation schedule and content proposed by WP7 would then be necessary. Other policy
organisations are also involved and will be interesting to include in the group of prime users:
the JRC (Joint Research Centre, in Seville and in Ispra), the EEA (European Environmental
Agency, in Copenhagen), COPA (Comité des Organisations Professionnelles Agricoles de
l'Union Européenne) and COGECA (Confédération Générale des Coopératives Agricoles de
l’Union Européenne), both in Brussels, the OECD in Paris and also possibly some national
ministries.
Stakeholders from some regions and countries
Stakeholders are defined as all individuals and groups affecting and/or being affected by
agricultural policy decisions and agricultural land use. Users are defined as those using
Seamless-IF for making or evaluating policy decisions or innovations. Therefore,
stakeholders include users (§ 6.0.3.7-). Test Cases will be located in a small number of
regions. Stakeholders of these regions will be involved in Seamless. In each country
concerned by the Test Cases, national stakeholders (concerned by the studied situation, but at
national level) will also be involved.
P1, P2, P3: prototypes 1, 2 and 3.
Choices
All regional stakeholders are involved as potential users
Within a region, we feel it is difficult to involve some actors as potential users (collecting both
their needs and their judgements about prototypes) and others as mere stakeholders (collecting only
their judgements about prototypes) without generating frustration, opposition to the
Seamless project and perhaps conflicts between actors. Some are potential users of Seamless-IF,
others are potential users of the results of Seamless-IF, and others may be neither.
But, to favour the actors' involvement in the project, we
propose the same involvement for all stakeholder categories of a region: no distinction
between groups of actors.
Systematic needs collection before evaluation
For actors to feel interested in prototype evaluation, we cannot avoid collecting their
needs beforehand (for instance: which policy would they like to see simulated? what are the
stakes in the region?). In this way, they will feel more involved in the design of Seamless-IF. We
therefore propose to collect the needs of all solicited actors.
Test cases 1 and 2 everywhere
The distinction between Test Cases 1 and 2 lies in the topics studied for the Seamless
evaluation: economic policies for the first, and environmental policies for the second.
On the participation dimension, we suggest not distinguishing between economic and environmental
scenarios. So, a single protocol will be proposed for all the real situations studied. If
necessary, adaptations will be made by the protocols' users. For a given region, we cannot
maintain two stakeholder groups, one for test case 1 and another for test case 2. So, there
is only one stakeholder group for each region.
Specifications
Crop regions and livestock regions
The 1st prototype will only take into account the arable sector. Livestock systems will be
represented from the 2nd prototype onwards. So, among the real situations studied, we distinguish three
categories: crop regions, livestock regions, and mixed crop and livestock regions. In livestock
regions, the actors' participation will begin later.
Participation topics and use of results
Needs collection
With prime users: motivation, appropriation, and needs collection (generation of policy
scenarios to test, relevant indicators to characterise a policy's effects, wishes about the interface).
In Test Cases 1 and 2, with regional potential users: motivation, appropriation, and needs
collection, particularly about crop systems (generation of economic and environmental
policy scenarios to test, relevant indicators to characterise a policy's effects, wishes about the
interface).
The collected actors' needs will be used by WP2 (indicators), WP3 and WP4 (policy scenarios to
test), and WP5 (wishes about the interface) to design the prototypes. WP3 will ensure the synthesis.
Prototype evaluation
With prime users: prototype evaluation during the simulation of some European policies, covering
the match between outputs and expectations, the relevance of indicators (world, Europe, test countries,
test regions, etc.) and recommendations on interface functionalities.
In Test Cases 1 and 2, with regional stakeholders: prototype evaluation during the simulation of
some policies, covering the match between outputs and expectations, the relevance of indicators (tested regions,
territories of tested regions, typical farms of tested regions, production data), pedagogical and
heuristic qualities, and recommendations on interface functionalities.
The collected actors' judgements will be used by WP2 (indicators), WP3 and WP4 (match between
outputs and expectations), and WP5 (pedagogical qualities and interface) to improve the prototypes.
WP3 will ensure the synthesis.
12 How to implement participatory methods to answer these
needs?
As we have seen in the previous paragraph, performing the SEAMLESS project means having at
the designers' disposal plans to achieve specific goals. These plans describe the process of
implementing a set of participatory methods with a precise target public and for
a precise purpose. Two papers give an account of each participatory plan: a general
schedule (see an example below) and what we call a protocol. Each protocol is designated by the
name of the SEAMLESS activity or task it covers. Our suggestion is first (12.1) to
explain the vocabulary. Then (12.2) we will give an idea of the way in which these
protocols have been set up, which could help in setting up new ones. Finally (12.3),
we will present some protocols which are of interest for performing SEAMLESS tasks.
12.1 Explaining vocabulary
A plan describes how to perform an activity that uses participatory methods within
SEAMLESS tasks. The specific characteristic of each plan is that it has been tailored for
a precise purpose, with a given target public, and within a given activity (e.g. a plan for
collecting expectations about SEAMLESS-IF, with regional-level stakeholders, within
activity 6.2.1). Each plan can be described by a general schedule and detailed by a protocol.
General schedule: the general schedule provides a summary of the plan. The horizontal dimension
represents time; the various kinds of participants within the target public are set on the
vertical dimension. It shows the successive steps of interaction with the target public, the kinds of
participatory methods chosen and the media solicited. The general schedule also suggests the
functions needed to perform the activity and thus gives an idea of the number of people
involved.
Protocol: what we call a protocol is much the same concept as the design of a participatory
policy analysis trajectory (Geurts and Joldersma, 2001). Each protocol gives a quick
presentation of the generic goals it has been designed for and of the relevant target public. It
assumes that the stakeholder analysis, which yields a list of relevant
participants for the participatory methods, has been performed beforehand. Because its goals are
generic (e.g. collection of expectations), a protocol can be reused in another activity, elsewhere,
with a broadly similar target public. The protocol sums up the human resources needed to implement
it and gives explanations for each step. Each step specifies: the step's goals, resource
considerations (who does what?), the step's development (what happens?) and the step's analysis
method. Documents (letter, questionnaire, drawing…) needed in each step (to help the
interviewer, to perform the analysis…) are named (with the activity name and a Greek character).
Of course, the specific content of these documents cannot be written before having a perfectly
clear view of the whole activity.
[General Schedule for the stage of expectations collection about Seamless-IF — a table with six steps over time (contact, appointment making, individual interviews, opinions mutualisation, personal return before mutualisation, global return) for each kind of solicited people (regional strategic managers / policy makers, technicians, external experts), indicating the staff involved at each step (experts' acquaintances, investigator, organizer, analyst, helper) and the media used (phone, face-to-face, mail, on-line forum 6.2.1a, mail and phone 6.2.1c, meetings 6.2.1d)]
So, the plans shown in part 12.3 are suitable for the activities they have been designed for, but
can be adapted – with a few modifications – for activities seeking to reach the same goals with
a broadly similar target public.
12.2 How to create protocols?
Creating protocols means choosing relevant participatory methods to deal with the foreseen
tasks. Above all, designers have to follow a small set of rules inspired by qualitative research
practice. The first question concerns the selection of participants, the second what they
actually know, and the third what they are or are not willing to say.
− Selection of participants, or recruiting? Throughout the Seamless project, the number of
participants involved in topics dealt with through participatory methods is always limited. Numbers
will never be a major obstacle to gathering participants. Designers can draw on
methods such as individual interviews, meetings, phone calls and mail, as well as web forums,
email, etc. The problem is more to find motivated partners who will follow the project
throughout, than to select a representative set of users or stakeholders. Recruiting partners
therefore requires a stakeholder analysis. The major challenge is to attract as many relevant
participants as possible, by leaning on the small set of rules indicated before (see
introduction). Similarly, participants' time is money, here more than anywhere. So, designers
will do their best to reduce contacts to as few occasions as possible. For instance, several
topics will be dealt with during the same meeting, or during the same phone call. In general, when
designers hesitate between two equivalent interaction methods, the one less expensive in
participants' time will be chosen.
− Do participants know precisely what they want or not? Here are examples of topics
on which participants have to give their opinion: what scenario(s) would they like to be
simulated? What are the relevant indicators to assess scenario impacts? What interface do
users prefer? Faced with such questions, some participants may be troubled, because they
may not hold a settled opinion. The issue here is not their willingness to give their own
opinion, but whether such an opinion actually exists. If they do hold a settled
opinion, the problem is how to reach it (how can designers make the participants want
to deliver their own opinion?). If they do not, the problem for
designers becomes: how to help participants build up a settled opinion? The obstacles that
prevent participants from answering are of two different natures, which dictate different strategies:
first, they can result from a lack of information (for instance, what are the potential
available interfaces?); second, they can result from a kind of complexity that a person alone
cannot overcome (for instance, choosing potential scenarios to simulate). In the first case, the
main task is to provide information to participants (for instance, to show the different
potential interfaces), and the participatory methods will be tailored for this purpose (sending a
descriptive mail, organising a demonstration, etc.). In the second case, the participatory methods must be
tailored to allow a framing process (see introduction) to occur. To reach a rich framing process,
designers will favour interactions and co-learning (iterative exchanges by every means), with the
objective of allowing each person to build up his or her own opinion gradually, and to share it with
others: participatory methods will be tailored with this specific purpose in mind. If tacit
knowledge is concerned, the participatory plan will favour face-to-face situations: interviews
between experts or stakeholders and designers, and face-to-face meetings to favour the
“socialisation” step 31.
− How to make participants talk? Even when participants have formed settled and
clear opinions, getting them to express these opinions is not a given. Particular attention has to be
paid, for each participatory plan, to the question: how can participants be put in the best possible
situation to reveal what they really think? Every reliable plan for
participatory methods plays on two levers: people will actually give their opinion if
designers succeed in mitigating the stakes or in moving the stakes. Mitigating the stakes means
making topics as unthreatening as possible for the participants concerned. For instance, instead of
speaking at regional level about “impacts of CAP deregulation”, designers can ask participants for their
opinion about “policy measures which could encourage the growth of new incomes for European
farmers”. Moving the stakes occurs when a participant is free to lead an
interview as he or she pleases. By speaking about any subject from his or her point of view, the
interviewed participant places the stakes back within his or her own mental map and speaks freely
about the full range of stakes. In the same way, Seamless-IF can be presented as a tool able to
demonstrate (if needed) that the EU cannot afford a total CAP deregulation, rather than merely as a
tool for testing the impacts of total deregulation scenarios. Of course, the material conditions of
contacts, within the set of participants and between participants and designers, affect the mitigation
and moving of stakes. For instance, if designers try to gather French administration managers and some
of their subordinate employees, the latter will never confide their personal opinion on a
political topic in front of the former. If need be, to improve the collection of free opinions, an anonymous
forum can be essential.
31 Nonaka I., Takeuchi H., 1995, The knowledge-creating company, Oxford University Press, New York, Oxford, 284 p.
All these items are taken into account when designing a participatory plan. The resulting plan will
be a compromise between competing wishes: recruiting participants who are as relevant as possible
and who will spend as little time as possible, providing them with only relevant information,
helping them to go through the framing process, while respecting potential tacit co-learning and
the material conditions that favour the collection of genuine opinions.
The following paragraph suggests some participatory plans, designed with respect for all
these items, and which explicitly mention each feature considered.
12.3 Main protocols of interest in the SEAMLESS project
Roles of the two chosen protocols
We have chosen to present two protocols because they are of interest for Seamless designers.
The first one is called “621”, because it will be implemented by WP6, within Task 6.2
(Specification of Seamless-IF application and scenario definitions) and Activity 6.2.1.
It is a protocol for collecting expectations about Seamless-IF from regional stakeholders
(the national level can be concerned too), so this protocol includes the first contact step with
regional stakeholders. We recall that stakeholders cannot be motivated if they have
no hope of being associated with decisions. So, at the beginning of each interaction with a
new group of stakeholders, designers must ask participants which policy scenarios they
would like to see simulated, and must give an idea of the likelihood that Seamless-IF will simulate
these choices.
The second one is called “64”, because it will be implemented by WP6, within Task 6.4
(Evaluation of tools, procedures and results). This protocol will contribute to producing various
inputs (from stakeholders' opinions) for the activities of Task 6.4. It is a protocol to assess
Seamless-IF prototypes. As we want stakeholders to be really motivated to take part in the
project, the second protocol cannot be implemented anywhere without the first. The first one
therefore aims above all to motivate participants, and this motivation phase then allows a
rich assessment of the state of the Seamless-IF prototypes to be collected.
Protocol 621: Justification and consistency
What are the justifications for the design of protocol 621? We feel at risk of not having
enough motivated participants. Our first challenge is to recruit, and to keep throughout the project (four
years), enough relevant participants. The two major obstacles to participation would be,
first, a lack of interest from stakeholders and, second, an outright refusal to participate. The contact
step therefore relies on personal contacts and acquaintances. Nor can we waste our stakeholders'
time. For instance, we foresee collecting expectations about economic policies as well
as about environmental measures in the same interview. Everything is done to save
stakeholders' time. Our second challenge concerns the existence of stakeholders' opinions. We make the
hypothesis that stakeholders hold their own settled opinion on some topics, but not on all.
The protocol therefore includes documents providing relevant information (a very short paper
introducing the project, the available interfaces…) and promotes participatory methods which
favour interactive learning among stakeholders, and between stakeholders and experts. The
third challenge is to make participants confide what they really think. Protocol 621 addresses this
problem by providing the most reassuring situations possible. For example, every
stakeholder may express himself or herself and reveal his or her mental categories during face-to-face
interviews with open themes. Technicians are never forced to give their opinion in the
presence of their hierarchical superior.
[General Schedule for the stage of expectations collection about Seamless-IF — see the schedule presented in section 12.1]
Protocol for Activity 6.2.1
Preliminary collection of expectations about SEAMLESS-IF for regional policy-makers
Why?
The application context for such a protocol can be defined as follows: you want to make a
preliminary collection of expectations about SEAMLESS-IF. For instance, you need
information about:
the policy impacts each stakeholder would like to see simulated, or the policy simulations
they would like to have at their disposal,
the indicators that each stakeholder considers to be relevant,
the interface the future users expect.
We recall that one cannot ask for an assessment of prototypes from a public that has not been
motivated. The main aim of this protocol is to enhance the appropriation of the project by stakeholders.
Who?
This protocol is designed for collecting the expectations of potential users of SEAMLESS-IF,
and the opinions of stakeholders, at the regional level:
Government policy-makers
Implementing agency staffs
Intended beneficiaries
Adversely affected persons
Organized interest groups
Civil society
Other external stakeholders
This protocol assumes that regional participants have been identified using the stakeholder
analysis. That is to say, you have a stakeholder matrix such as the one sketched below, which
describes the stakeholder categories in terms of their relevance to the objective of your
consultation, their characteristics, their interests and their influence.
[Stakeholder matrix — for each relevant stakeholder category: identity of stakeholders; relation to issues (a. affected by, b. affect them); capabilities (a. have information, b. have expertise); interests (a. commitment to status quo vs. b. openness to…); influence (H = High, M = Medium, …)]
This protocol also assumes that you have identified the strategic managers, called “policy-makers”
(e.g. conseiller général, directeur départemental de l’agriculture…), of each organisation.
They will mainly be asked about their expectations and suggestions on the policy options to be
simulated and the main issues at stake. During step 1, you will try to obtain, from policy
makers, the names of technicians, for nearly every organisation. The technicians will mainly be
asked about their expectations and suggestions on the indicators to be used to evaluate the impact of
the simulated policies and on the features of the SEAMLESS-IF interfaces.
Furthermore, one or two experts (such as a university teacher or philosopher…) who would uphold a
more original opinion will also be asked.
What?
The information you will have to collect is the following:
Policy options to be simulated
Indicators to evaluate the policy impacts
Features of the man/machine interface of SEAMLESS-IF
Explanations and rationales for refusals to address the above questions.
How?
The suggested protocol has been designed by referring to conventional Delphi methods. It
combines face-to-face in-depth interviews and an iterative survey of experts.
The Delphi method usually undergoes four phases (see Figure below). In the first phase the
subject under discussion is explored and each individual contributes the information (s)he
feels is pertinent to the issue. In the second phase an overview is reached of how the group
views the issue, for example, where there is dis/agreement over what is meant by relative
terms such as ‘feasible’, ‘important’, ‘desirable’, etc. If there is significant disagreement, then
this is explored in the third phase in order to illuminate the reasons for the differences and
evaluate them. The fourth phase entails a final evaluation, which occurs when all previously
gathered information has been initially analysed and the evaluations have been fed back for
reconsideration.
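Purely as an illustration of the iterative logic described above, and not as part of any SEAMLESS protocol document, the sketch below (Python) shows one possible way to summarise successive Delphi rounds and to check whether the panel is converging; the 1–9 rating scale, the convergence threshold and the example scores are all hypothetical assumptions.

# Minimal sketch (assumed 1-9 rating scale and IQR-based convergence rule).
from statistics import median, quantiles

def round_summary(ratings):
    # Summarise one Delphi round for a single item (list of 1-9 scores).
    q1, _, q3 = quantiles(ratings, n=4)   # quartiles of the panel's scores
    return {"median": median(ratings), "iqr": q3 - q1}

def has_converged(ratings, max_iqr=2.0):
    # Treat an item as agreed when the interquartile range is small enough.
    return round_summary(ratings)["iqr"] <= max_iqr

# Example: two successive rounds of expert scores for one candidate indicator;
# the round-1 summary would be fed back to the panel before round 2.
round_1 = [3, 8, 5, 9, 2, 7, 6]
round_2 = [6, 7, 6, 8, 5, 7, 6]
print(round_summary(round_1), has_converged(round_1))
print(round_summary(round_2), has_converged(round_2))

Such a tabulation would typically accompany the feedback sent between rounds, so that participants can reconsider their answers in the light of the group's overall view.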
Resource considerations
Two leaders are needed: an investigator to manage interviews and to facilitate possible
meetings, and an organizer to deal with mail, e-mail, phone calls and the setting up of possible
meetings. For a protocol without meetings, you may involve only one leader, but his
or her task will be very heavy. At the beginning of the protocol, other people are involved:
acquaintances of the policy makers identified by the stakeholder analysis. For steps 3 and 6
you may call on an analyst to help the investigator with the analysis and synthesis of the interviews.
Enquiry schedule
The schedule (621α) provides a view of the whole issue and of the interventions for each
kind of participant.
Detailed steps
[Figure: Delphi Method Flowchart]
Step 1: Contact
Goals of step 1:
− Getting the identity and characteristics of the people asked
− Getting the name of at least one technician for discussing the topics (indicators and
interface) in depth, because when the contact is a policy maker you need the name
of a technician who will be allowed by his or her chief to answer questions
− Explaining possible refusals to answer or to collaborate and the rationales for this
behaviour.
Resource considerations
This step requires the involvement of people/experts who are acquainted with the policy
makers selected by the stakeholder analysis, and an investigator or organizer to note down the
characteristics of the policy makers involved and of the chosen technicians.
Step 1 development:
During this step, you have to contact policy makers by all the means you can: phone call,
during a meeting, mail, email, through an acquaintance… Very often, you can contact a person only if
you have been introduced by somebody who is an acquaintance of that person. During this
contact:
− you have to present the SEAMLESS project very briefly,
− and to describe the work to be done for the requirements collection phase,
− without forgetting to show that phases of prototype assessment will follow the
requirements collection phase, with the same stakeholders if possible. Samples of
these three oral presentations are given in document (621β).
− You have to make an appointment to list the expected policy simulations and
indicators, and/or to get the name of a relevant technician, or to understand the rationales for
possible refusals.
A very short paper (in the native language) introducing the project and the items mentioned (work to be
done, assessment protocol…) is handed over or sent by mail or e-mail as soon as possible (621γ).
Step 1 analysis:
You have to record: the identity and characteristics of the respondents, the identity of the chosen
technicians, the answers obtained, and explanations for possible refusals to answer or to
collaborate.
Step 2: Appointment making
Goals of step 2:
− Making appointments with the technicians and regional stake bearers who have agreed to
be interviewed in depth, and with one or two experts (such as a university teacher or
philosopher…) likely to uphold a more original opinion. These experts have been
chosen during the stakeholder analysis.
Resource considerations
The investigator makes the appointments with the policy makers or technicians.
Step 2 development:
You have to make the appointments by phone, stating:
(possibly) on whose behalf the investigator is calling (who has authorised the inquiry);
what the investigator wants to do: to present the SEAMLESS programme and to ask questions
about the policies worth simulating, the relevant indicators, and the expected interfaces.
You set up a calendar in which you plan every appointment and place.
You send the paper (621γ) that presents the project as soon as an appointment is made, so that
your contact has a chance to read it before the meeting.
Step 2 analysis:
The analysis is very simple: you have at your disposal a list of appointments with dates and
places. To prepare the mutualisation of opinions, you have to record the answers in a tabular
form such as (621δ). The first column will be removed before the mutualisation itself.
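The actual columns of board (621δ) are defined in the annex documents; purely as an illustration, the sketch below shows one possible way to hold the board and to drop the identity column before circulating it, with hypothetical field names and example values.

```python
# Illustrative sketch only: one possible representation of the response board
# (621δ). The real column layout is defined in the annex document; the field
# names and example values used here are hypothetical.
import csv

FIELDS = ["respondent", "policy_simulations", "indicators", "interface_features"]

def write_board(rows, path, anonymous=False):
    """Write the board; drop the identity column before mutualisation."""
    fields = FIELDS[1:] if anonymous else FIELDS
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)

rows = [{"respondent": "Regional officer A",
         "policy_simulations": "set-aside reform",
         "indicators": "farm income; nitrate leaching",
         "interface_features": "map-based outputs"}]
write_board(rows, "board_full.csv")                        # internal version
write_board(rows, "board_mutualised.csv", anonymous=True)  # circulated version
```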
Step 3: Individual interviews
Goals of step 3:
− Identifying the names and characteristics of the respondents
− Listing the policy simulations each stakeholder would like to have at his (her) disposal
− Listing the indicators that each stakeholder considers relevant
− Identifying the interface features that future users expect
− Explaining possible refusals to answer or to collaborate, and the rationales behind such
behaviour.
Resource considerations
The investigator manages the interviews with policy makers, technicians and experts. An analyst
could help with the first analyses or the difficult ones.
Step 3 development:
You introduce the interview by explaining who you are and what the SEAMLESS project is. The
paper (621γ) that introduces the project and the general schedule (621α), on which the present
step can be located, can help you.
You then need to tackle three themes. You must ask a question about each theme if the
respondent does not broach it spontaneously. You try to obtain as many of the answers you are
interested in as possible through the opening question: “What is your own practice, your own
use (or in your department), of policy simulation tools?”
You want answers to the following main questions:
What policy simulations would you like to have at your disposal?
What are the relevant indicators for assessing the impacts of such a policy?
What interface features do you expect?
To help respondents imagine what you mean by “policy”, you have at your disposal document (621ε),
which shows a “baseline scenario” and an “impact scenario”. To help them
understand what you mean by indicators, you can produce document (621ζ).
For interface features, you can use document (621η).
Each interview is recorded and you have to take notes on the main answers.
Step 3 analysis:
To prepare the mutualisation of opinions, you have to record the answers in a tabular form
such as (621δ). The first column will be removed before the mutualisation itself. The investigator's notes
and partial transcriptions from the tape recorder will be useful.
Step 4: Personal return
Goals of step 4:
− Returning information to the respondents
− Making sure that the respondents' opinions have been well understood
− Preparing the following step of opinion mutualisation, by encouraging them to participate
and by introducing the process that follows (step 5).
Resource considerations
The organizer manages mail, e-mail and phone calls with policy makers, technicians and
experts.
Step 4 development:
For each person, you send the relevant extract of the board (621δ) with a covering
letter (no more than one sheet long), in which you ask them to point out any mistake before
mutualisation (see example letter 621ι). You ask for an acknowledgement of receipt. You
give the key to the following process (on-line address for protocol 621a, meeting dates for
protocol 621b, mail procedure for protocol 621c).
Step 4 analysis:
You have to record every reaction to your mail: complaints, modifications to be made, refusals to
take part in the mutualisation. You prepare the following step by completing the board (621δ).
Step 5: Mutualisation
Goals of step 5
Allowing each participant to give his (her) opinion, to revise his (her) opinion, and to get feedback
on the whole set of responses (anonymously for 621a and 621c).
Improving social learning.
Preparing a possible subsequent consensus-building step.
Resource considerations
The organizer manages mail and phone calls with policy makers, technicians and experts. At
least one more person is needed if the meetings option (621b) is chosen, to lead the debates.
Step 5 development
Each participant receives a completed board (621δ), which gives feedback on the whole set of
responses. With this information in hand, he (she) then fills in the board again, this time providing
explanations for any views he (she) holds that diverge significantly from the viewpoints of
the other participants. We suggest three different ways to carry out step 5, as
shown in the diagram below: by using web forums (protocol 621a); by using face-to-face
discussion (protocol 621b); by using postal or electronic surveys (protocol 621c).
(Diagram: the three possible ways to achieve step 5: 621a on-line, 621b meetings, 621c mail or e-mail)
Protocol 621a: web forums
- The main benefit is that it preserves anonymity, if there are enough participants. The
number of rounds depends on how long this step lasts (at least one month). If you want to
obtain a consensus, this protocol is favourable: at each round, the expressed opinions will
converge.
- The main drawback is the time and skills needed to design and maintain such an on-line
forum.
- If opinions vary very little, this on-line process is not worthwhile.
During step 5, the organizer needs to maintain the on-line site and to prompt
participants to look at the site and fill in the board with their reactions to
what the other participants have thought. Each participant knows the foreseen duration of this
step. The organizer closes the rounds at the foreseen date.
Protocol 621b: face-to-face discussions
If the Delphi is conducted face-to-face, one or two moderators will be required to facilitate
the process.
- The main benefit is that it produces social learning, if participants are not used to working
together, and gives clear results: at the end of the meetings, this step is complete. If you
need a consensus, consensus building will be sought during the same meetings.
- The main drawback is the difficulty of gathering participants physically. It may not
be possible for three months or more. Some technicians may refuse to meet, saying
that they cannot discuss policy topics because that is their superior's domain. You then have to show
that the technicians' meeting will not deal with policy choices but, above all, with
indicators and interfaces.
- If opinions vary very little, this protocol is worthwhile: a consensus will be
easy to reach. If only a small number of people have answered your questions (they do not feel
concerned or they do not want to answer), the meetings protocol is also worthwhile, since it
compensates for the missing participants.
During step 5, you organize two meetings: the first with the regional stake
bearer participants, the second with the technicians. These two kinds of respondents should never
be gathered together, except if there is only one policy to simulate.
The regional stake bearers' meeting contains:
Presentation of the board (621δ) results (after the first round of interviews)
Discussion of policy choices, indicators, data availability.
Possible consensus building: choosing the policies to simulate. We suggest beginning the trade-offs by
choosing the criteria. The meeting leader has prepared a list of potential criteria
(e.g. for a small set of scenarios: number of inhabitants concerned, data availability, balance between the
different directions…) to suggest.
Presentation of the following step in protocol 621 (global return by mail to each participant)
and of the future stakeholder involvement (evaluation of SEAMLESS-IF prototypes).
The technicians' meeting contains:
Presentation of the board (621δ) results (after the first round of interviews)
Discussion of the feasibility of the policy simulations, the choice of indicators, and the interfaces.
Possible consensus building: choosing the indicators to implement. We suggest beginning the trade-offs
by choosing the criteria. The meeting leader has prepared a list of potential criteria
(e.g. for a small set of scenarios: number of inhabitants concerned, data availability, balance between the
different directions…) to suggest.
Presentation of the following step in protocol 621 (global return by mail to each participant)
and of the future stakeholder involvement (evaluation of SEAMLESS-IF prototypes).
Protocol 621c: postal or e-mail surveys
- The main benefit is that it encourages participants to pay attention to the survey: a letter or
an e-mail appeals to somebody more than simply having a website address at one's disposal.
- The main drawback is that monitoring mail or e-mail is as time-consuming as on-line
monitoring, and more expensive. It is easy to reach a consensus in this way.
During step 5, the organizer has to monitor the whole process. The total duration of
the rounds is set in advance (no less than one or two months):
send a second mail gathering the whole set of opinions;
send a new mail as soon as a change occurs… and so on.
If you want to reach a consensus, you will include in a mail a question suggesting possible
criteria for choosing a policy to simulate. Criteria could be: number of inhabitants concerned,
data availability, balance between the different directions… You ask each participant to give his (her)
individual answer about the criteria he (or she) considers relevant.
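Purely as an illustration of such a criteria-based trade-off, the sketch below weights the criteria by the importance ratings given by participants and ranks the candidate policies accordingly; the criteria names echo the examples above, but the weights and scores are hypothetical.

```python
# Illustrative sketch only: a weighted scoring of candidate policies against the
# agreed criteria. Participants are assumed to rate the importance of each
# criterion and each policy against each criterion; all numbers are hypothetical.
criteria_weights = {                 # averaged importance ratings
    "inhabitants concerned": 4.2,
    "data availability": 3.6,
    "balance between directions": 2.9,
}
policy_scores = {                    # averaged ratings per policy and criterion
    "set-aside reform": {"inhabitants concerned": 4, "data availability": 3,
                         "balance between directions": 4},
    "nitrate directive": {"inhabitants concerned": 3, "data availability": 5,
                          "balance between directions": 3},
}

def rank_policies(weights, scores):
    """Return the policies sorted by their weighted total score."""
    totals = {p: sum(weights[c] * s[c] for c in weights) for p, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for policy, total in rank_policies(criteria_weights, policy_scores):
    print(f"{policy}: {total:.1f}")
```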
Comment: if preferred, you can combine two variants, e.g. the on-line protocol plus a final
meeting with regional stake bearers, if you feel that social learning would be useful for
preparing the following protocol.
Step 6: Ultimate return
Goals of step 6
Analysing the whole process.
Giving a global return to the participants on the opinions they expressed, and even to the
policy makers involved in the first step who did not answer the questions themselves but who
selected a technician.
Preparing the implementation of the other protocols (for the evaluation of Seamless-IF prototypes).
Resource considerations
An analyst for the analysis of the whole process.
The organizer to manage mail or e-mail with the policy makers involved, the chosen technicians and
the experts involved.
Step 6 development
This step begins when the foreseen round period is closed. You can then produce the
final analysis, which means answering the following questions:
Who are the respondents? (Identity and characteristics of the respondents, for each of the following
questions.)
What policy simulations would each respondent like to have at his (her) disposal?
What indicators does each respondent consider relevant?
What interface does each future user expect?
What are the rationales behind potential refusals to answer or to collaborate?
What policy simulations does the whole group consider to have priority?
A summary of this analysis will be sent to all participants to close protocol 6.2.1
and to prepare the following protocols, by planning the period in which participants will be
contacted to assess the first SEAMLESS-IF prototype.
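As an illustration only, the sketch below compiles such a final summary from the completed board, counting which policy simulations were requested most often; the field names and example records are hypothetical.

```python
# Illustrative sketch only: compiling the step 6 summary from the completed
# board. Each record holds one respondent's final answers; the field names and
# the example data are hypothetical.
from collections import Counter

board = [
    {"respondent": "Officer A", "policy_simulations": ["set-aside reform"],
     "indicators": ["farm income"], "interface": "map-based outputs"},
    {"respondent": "Technician B",
     "policy_simulations": ["set-aside reform", "nitrate directive"],
     "indicators": ["nitrate leaching"], "interface": "exportable tables"},
]

def ultimate_summary(board):
    """Answer the step 6 questions: who asked for what, and group priorities."""
    priorities = Counter(p for row in board for p in row["policy_simulations"])
    return {
        "respondents": [row["respondent"] for row in board],
        "requests_by_respondent": {row["respondent"]: row["policy_simulations"]
                                   for row in board},
        "group_priorities": priorities.most_common(),  # most requested first
    }

print(ultimate_summary(board))
```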
Step 6 analysis
To complete protocol 6.2.1, a short analysis of the participants' reactions to this final analysis
will be made.
Protocol 64: Justification and consistency
As for protocol 621, our first concern is to involve enough relevant participants. To facilitate
their involvement, we will try to keep the same people throughout the project. Also, to save
stakeholders' time, we address the several dimensions of a prototype evaluation at the same time.
To make participants confide what they really think (second concern), the different hierarchical levels
will never be mixed, and the methods for collecting individual opinions will be adapted accordingly. In
addition, the questions asked will be, as far as possible, open-ended questions. Participants will also be
helped to project themselves, by making them describe their present work and imagine the use of
Seamless-IF and the changes it would involve in their ongoing work.
To encourage the expression of opinions about the prototypes (third concern), information about the
prototype will be provided. The documents should be as realistic as possible, to help people project
themselves into the use of Seamless-IF. If possible, the software will be provided as well. The evaluation
aims at collecting a diversity of opinions about the prototypes, so the methods for collecting individual
opinions (questionnaire or individual interviews) will be adapted. In addition, closed questions
will be asked when stakeholders do not spontaneously have an opinion.
Protocol for task T.6.4
“Evaluation of SEAMLESS-IF prototypes”
Warning: three successive prototypes will be designed during the project. Stakeholders will
probably be asked to evaluate at least two of them. Depending on the novelties introduced between two
prototypes, the topics to be discussed will be adapted by the protocol users.
A- Why?
You want stakeholders to evaluate a Seamless-IF prototype.
So, in the Seamless project, you are probably in the following context:
1st: the category of Seamless-IF stakeholders you want to mobilize has already been
associated with the design of Seamless-IF (cf. protocol A.6.2.1).
2nd: you have at your disposal a prototype of Seamless-IF, a set of policy scenarios to be addressed by
the tool, and the results of simulations.
B- Who?
We advise you to solicit the people already consulted about their requirements for Seamless-IF
(cf. protocol A.6.2.1).
Nevertheless, if you must solicit people not yet consulted about their requirements, identify
the relevant actors to involve by carrying out a stakeholder analysis.
C- What?
The information you will have to collect is the identity and characteristics of the respondents and their
point of view about:
the policy hypotheses that can be addressed with Seamless-IF (relevance), and the additional input data
required (relevance and feasibility)
the simulation results: the indicators, the way they are organized, their pertinence, the justification of
points of view, and improvement proposals
the comprehensibility and pedagogical quality of the outputs: how easy it is to understand the meaning
and organization of the indicators, and their usefulness in debates about policy effects
the interface ergonomics: how easy it is to find the elements searched for and to understand the several
simulation steps, and improvement proposals
the computational implementation: how easy the step-by-step use of the software is for users,
and improvement proposals.
Comments: five main points are mentioned above. If no novelty has been introduced on one
point between two prototype evaluations by the same actors, that point will not be tackled. In
the same way, some points will not be tackled if the corresponding functionalities of Seamless-IF
are not yet available (for example: software implementation by users). These adaptations
are under the protocol users' responsibility.
D- How?
To do this, we propose some ways of proceeding, concerning:
1- Providing the needed material to the respondents
2- Formulating the questions and ordering them
3- Modalities of information collection and its preparation
4- Information analysis method
1- Providing the needed material to the respondents
We advise you to provide respondents with at least: the evaluation schedule (doc. 64α), the
aim of the evaluation, and a description of the prototype with the results of its implementation (doc.
64β).
2- Formulating the questions and ordering them
We advise beginning with a question about the overall impression before engaging in questions about
details. This question can be completed with two questions about the main positive and negative
points; this captures what impressed people most.
Then you can move on to questions about details, alternating open-ended questions with
requests for improvement proposals.
At the end, it is interesting to get people to really project themselves into using the tool, in order
to bring out possible obstacles. Once they feel more involved, potential users can be asked
whether they would use Seamless-IF simulations, what would be a problem, and what would
become easier in their job compared with their present work.
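Purely as an illustration of this ordering (overall impression first, then details, then projection into use), the sketch below lays out a possible questionnaire; the wording of the questions is hypothetical.

```python
# Illustrative sketch only: a possible ordering of the evaluation questionnaire,
# from overall impression to details to projection into use. The question
# wording is hypothetical, not taken from the SEAMLESS documents.
questionnaire = [
    ("overall", "What is your overall impression of the prototype?"),
    ("overall", "What are its main positive points?"),
    ("overall", "What are its main negative points?"),
    ("details", "Are the proposed indicators relevant? Which would you add or drop?"),
    ("details", "What improvements would you suggest for the interface?"),
    ("projection", "Would you use Seamless-IF simulations in your own work?"),
    ("projection", "What would be a problem, and what would become easier in your job?"),
]

def render(questions):
    """Print the questionnaire with one numbered question per line."""
    for number, (section, text) in enumerate(questions, start=1):
        print(f"{number}. [{section}] {text}")

render(questionnaire)
```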
3- Modalities of information collection
Four alternatives are proposed, each with its relevant context:
3.a- A questionnaire sent by postal mail (answers on paper) or by e-mail (answers on the
computer). Relevant context: the respondents are the same as those asked about their needs (cf.
protocol A.6.2.1), or at least they have heard about the project and the tool concept; in any case, they
feel involved in the project. If the Web alternative is chosen, they must have a computer with Web
access.
3.b- Individual phone interviews. Relevant context: if the respondents have already been interviewed
before, a phone interview will be less expensive than a face-to-face one. It enables a good
understanding of the respondents' arguments. Sometimes this modality may also be preferred by
respondents if they are very busy and afraid of spending time on understanding the “visual support
document”, or not autonomous enough.
3.c- Individual face-to-face interviews. Relevant context: it enables a good understanding of the
respondents' arguments. If the respondents are not the same as those asked about their needs (not
available), a face-to-face interview makes it possible to give more explanations about Seamless-IF. If
the respondents have already been interviewed before, this method will be needlessly expensive.
3.d- A group meeting (to be avoided as far as possible). Relevant context: it facilitates co-learning and
produces a dynamic between users in a territory. In Seamless it will also be a good option if the
respondents already consulted are not autonomous enough to answer a questionnaire on their own, but
are too many to all be interviewed.
Whatever the topics tackled (according to the prototype content), these four alternatives are relevant.
Nevertheless, the sent questionnaire and the meeting will be the least expensive.
E- Detailed steps
Step 1: New contact
Goals of step 1
The respondents have already been associated with the Seamless project, at least for the collection of
their expectations, and perhaps for a previous prototype evaluation.
So, we have to recall the Seamless project approach, present the progress of the project since
their last contribution, and explain their next expected contribution.
Resource considerations
The organizer (or the investigator, if opinions are collected by individual interviews)
manages contacts.
Step 1 development
Three cases and associated media:
b- and c-: if opinions are collected by individual interviews (by phone or face-to-face), actors
will be contacted by phone.
d-: if opinions are collected at a meeting, actors will be contacted by mail.
a-: if opinions are collected by a sent questionnaire, steps 1 and 2 are merged. Actors will
be contacted by mail, and the reception of the documents will be checked by phone.
Common content: three elements will be tackled in all situations, either by phone or by mail:
a very brief recall of the Seamless project approach: the actors' expectations and their opinions
about prototypes contribute to the design and improvement of Seamless-IF
the progress of the project since the actors' last contribution
the next expected contribution (aim of the evaluation and how it will unfold)
In addition, in each situation, specific elements will be tackled:
In the case of individual interviews, an appointment will be made. For phone interviews,
explanations will be given about how the interview will unfold (e.g. reception of the documents by mail
or e-mail beforehand, request to test the software before the appointment).
In the case of a meeting, some proposed dates with a reply coupon will be sent, the chosen
date will be confirmed to each actor by phone, and finally an invitation letter with practical
information will be sent by mail.
In the case of a sent questionnaire, a letter will specify how to access and/or answer the
questionnaire, the contact details of a help person, and the deadline for transmitting answers.
Step 2: Providing information
Goals of step 2
The goal is to provide actors with the needed information about the prototype, to enable them
to form their opinion about it.
Resource considerations
The organizer will provide these elements to the solicited actors.
Step 2 development
A visual support document, and also the software if possible, will be provided to the solicited
actors.
Common content among the different situations:
a description of the technical content of the prototype: possible simulations, necessary input
data, output data (the indicators)
a description of how the prototype runs
screen copies of simulation results or, if that is not possible, a list of the information produced for each
simulated policy
the software, if available
a- In the case of a sent questionnaire, steps 1 and 2 will be merged: the above-mentioned
documents and software will either be sent by mail with the “new contact” documents, or
placed on a website (the means of access having been sent beforehand). At the same time, the
questionnaire will be provided.
b- In the case of individual phone interviews, the documents and software will be sent by
mail or e-mail before the interview. If an evaluation of the computational implementation of the
software is expected, actors will be asked to test it before the appointment.
c- and d- In the case of individual face-to-face interviews or a meeting, the documents will be
provided on the day of the interview (steps 2 and 3 merged). The software can either be provided
the same day, or have been provided beforehand by mail or e-mail, to be tested in advance by the actors
(as for phone interviews).
Step 3: Collection of opinions about the prototype
Goals of step 3
The goal is to collect the actors' opinions about the prototype on the five points listed before (§ C-).
Depending on the prototype version, fewer points may be tackled.
Resource considerations
Resources will differ according to the chosen media.
Individual interviews will be carried out by the investigator. Questionnaires will be received directly
from the actors. Meetings will be run by the investigator, helped by another person.
An analyst could help the investigator with the analysis of the results.
Step 3 development
a- In the case of a sent questionnaire, once they have studied the support documents and the
software (if available), actors fill in the questionnaire themselves and return it, or validate it on
the website. The questionnaire will alternate open-ended questions with requests for improvement
proposals (see § C- and § D.2-).
b- and c- In the case of individual interviews, the investigator recalls the goal of the evaluation,
and presents the documents (descriptions of the prototype and simulation results) when
required by the interview. He tries to obtain as much information as possible with open-ended
questions (see § C- and § D.2-). Interviews are tape-recorded, and the investigator writes the
main information on a recording grid during the interview.
d- In the case of a meeting, actors are distributed into groups according to their hierarchical
status (technicians and their hierarchy are not mixed). Participants are welcomed with a
coffee. The animators recall the goal of the evaluation. The documents and the software (if
available) are presented when required for the discussion (see § C- and § D.2-). At each step,
the animators allow a moment for individual thinking, after which opinions are shared. Individually
written opinions can be collected at the end of the meeting. The animators will pay attention
to the equity of participation. Some topics could be more difficult to handle in a meeting than
others, such as the selection of relevant / non-relevant indicators: either participants will be asked to
point out the non-relevant indicators and to justify their view, or all indicators will be reviewed and
given scores according to their relevance. At the end of the meeting, the animators can propose an
evaluation of the meeting: each participant fills in an evaluation chart.
Step 3 analysis
For each topic tackled, the results could be analysed along three dimensions: global point of view,
detailed remarks about the prototype (an ordered list), and improvement ideas. The envisaged use
of Seamless-IF (see § D.2-) should also be reported. Opinions could also be categorised
according to the actors' functions, context and status.
b- and c- In the case of individual interviews, the individual analysis may optionally be sent to
each respondent, so that they can point out any mistakes.
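Purely as an illustration, the sketch below structures the collected opinions along these three dimensions and groups them by the actors' function; the field names and example records are hypothetical.

```python
# Illustrative sketch only: structuring the collected opinions along the three
# analysis dimensions (global point of view, detailed remarks, improvement
# ideas) and grouping them by actor function. Field names and data are hypothetical.
from collections import defaultdict

opinions = [
    {"actor": "Technician B", "function": "technician", "topic": "indicators",
     "global": "useful but too many indicators",
     "remarks": ["nitrate indicator unclear"], "improvements": ["group by theme"]},
    {"actor": "Officer A", "function": "policy maker", "topic": "interface",
     "global": "easy to navigate",
     "remarks": ["maps load slowly"], "improvements": ["export to spreadsheet"]},
]

def by_function(opinions):
    """Group the three-dimension records by the actors' function."""
    grouped = defaultdict(list)
    for op in opinions:
        grouped[op["function"]].append(
            {key: op[key] for key in ("topic", "global", "remarks", "improvements")})
    return dict(grouped)

print(by_function(opinions))
```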
Step 4: Global return of the evaluation
Goals of step 4
The goal is to inform participants about the evaluation results, to keep them interested and
motivated.
Resource considerations
Organizer and analyst will return the global evaluation.
Step 4 development
The final analysis will be sent to each respondent, without necessarily distinguishing
hierarchical levels.
Ideally, this return will also contain the scientists' decisions on improvements made in response to the
actors' remarks, with justifications, to show them their important role in the project.
Conclusion
Deliverable 731 has only one main purpose: to be operational for the Seamless project.
But the exact actions to be implemented within the Seamless project are not fixed once and for all. Part
of the choices is made along the way, according to the reactions of the people met, the available data and
so on. The strategy needed to lead every task and to design every detail is what Mintzberg32
would call an “incremental learning process” strategy. As a consequence, Deliverable 731
cannot state exactly what will be done at each step of the project. Instead, we both
suggest generic methods (in the wide sense of the term) and show examples of what the implementation
could be, if the stated hypotheses hold. Here are the main comments we want to make
about these two issues: generic methods and examples.
In this deliverable, the generic methods cover various kinds of outcomes. Section 1 recalls the main
principles of human interactions that are relevant to the Seamless case. These principles will provide
explanations for potential pitfalls in human interactions (should they occur). We stress here that no one
can go against them without risking a setback. Section 2 is a
detailed library of participatory methods, classified according to the designers' goals.
Of course, knowledge of existing and suitable participatory methods is only a part (and a
small one, in fact) of the challenge to be met when designers want to make stakeholders
participate. The invisible part of the iceberg is uncovered in section 3, with the “toolkit for the
construction and evaluation of participatory protocols”, because the operational tool for
interaction is not a participatory method but a protocol: a trajectory which can call on several
participatory methods and contains further specifications: goals, targets, duration and so on.
The toolkit recapitulates the steps and questions needed to prepare a protocol. But of course, the toolkit
cannot mechanically provide a finished protocol. For instance, the toolkit suggests a group of
participatory methods for each kind of goal, but cannot provide “the” right participatory
method. On the other hand, very often, successful interaction depends more on the way in which
one first gets in contact with a policy maker than on the type of participatory method one
employs. The principles, like the toolkit, are generic content. Generic means that this knowledge can
be applied to various contexts; it does not mean that it is without practical interest or without
impact on actual actions. Quite the opposite. So, we have tried to translate each principle into
actual consequences for implementation in the Seamless project, and to show a way of using the
toolkit for creating protocols tailored to precise steps of the test cases (in section 4).
Translating these principles leads us to formulate hypotheses about the implementation of participatory
methods in Seamless. The main hypotheses are:
− everything will be done to facilitate the appropriation of the project and of Seamless-IF by
the prime users (mainly DG officers) and by national and regional stakeholders (some of
whom will finally be end-users). The test cases are the occasion to test methods for
enhancing appropriation, and these practices will help to enhance dissemination after
the end of the project itself;
− a real stakeholder analysis is useful to gather relevant participants for problem
definition and co-learning. We assume that the stakeholder analysis will feed the “users
forum”.
The consequences of these two hypotheses are, for the first one, that each protocol will be
preceded by a motivation phase (the part of the protocol designed to enhance the motivation of
stakeholders), which means that each participant will be asked for his (her) opinion on the
scenarios; and, for the second one, that in a given region the same group of stakeholders is involved
from the beginning to the end of the project. These hypotheses allow us to provide
examples of protocols (section 4). Of course, the examples are valuable only if the hypotheses hold, and
we may perhaps have set hypotheses that are not realistic. In addition, some of the proposed
methods are not easy to implement (e.g. stakeholder analysis). The implementation
suggested on the basis of these hypotheses and of the generic principles and methods can therefore be
seen as an ideal type of what could be done. It provides the baseline against which the interactions
actually carried out can be assessed by contrast.
32 Mintzberg H., Ahlstrand B., Lampel J., 1999. Safari en pays stratégie : l'exploration des grands courants de la pensée stratégique. Village Mondial, Paris, 423 p.
GENERAL CONCLUSION
This document is designed to support the researchers and end-users involved in the SEAMLESS
project in the design, implementation and evaluation of participatory protocols. Section A
presents the main principles that govern human interactions, as well as the causes of potential
pitfalls. Section B provides a library of participatory methods, classified according
to their goals. Section C provides guidance for the construction and evaluation of
participatory protocols. Section D presents examples of the use of such guidance for the
construction of specific protocols for SEAMLESS.
The document argues that adopting principles in relation to human interactions and
employing appropriate strategies for consultation will enhance not only the quality of
deliberation among the end-users and other stakeholders of SEAMLESS-IF but also its
inclusiveness.
Participation is widely believed to be a good thing. Four key arguments are typically
advanced to support the notion of participation:
• Ethics – everyone has the right to command their own destiny.
• Expediency – people who are not involved in decision-making may revoke or subvert
decisions made by others.
• Expert knowledge – certain decisions require expert knowledge and to ensure that
knowledge is brought to bear on a decision, the experts themselves should be involved in the
decision-making process.
• Motivating force – participation in the decision-making process ensures that people are
aware of the rationale for the decision and are more likely to want to see it implemented
efficiently and effectively.
Whilst the arguments in favour of participatory approaches are persuasive, Dudley (1993)33 questions
their value in practice: ‘Community participation may have won the war of
words but, beyond rhetoric, its success is less evident’ (p. 7). This lack of evident success
results from two main causes: a failure to make the nature of participation, and what it is meant to
achieve, explicit; and an under-resourcing of the related tasks, leading to mechanical use of
handbooks and tools by people with insufficient experience or understanding of the context.
33 Dudley, E. (1993). The Critical Villager: Beyond Community Participation. London: Routledge.
Participatory methods cannot be seen as a cheap option. They must be treated as a serious and
integral part of the impact assessment process, which requires management by people with
the skill and experience to flexibly adapt the different techniques to the particular issues,
contexts and institutions being assessed.
The reliability and credibility of participatory methods can be increased if those
conducting the participatory protocols are asked to carefully plan the participatory stages in
advance, identifying which particular methods are to be used, their aims, and who is to be
involved. There should also be an account of the potential risks and of the ways in which the
participatory process may need to be adapted to accommodate them. Although participatory
methods will need to remain flexible to circumstances and to the contributions of participants, if
there is a clear plan at the start it is easier to identify why changes were necessary and how
they affect the assessment. It is also necessary to identify early on, and then at each
successive stage, the limitations of the methodologies and how these are to be addressed.
There is no “cookie-cutter” participation plan that will fit every decision or issue. There is no
participation technique that will work in all circumstances. When people talk about highly
successful participation programs they are talking about programs where the techniques
matched the purpose of the program, reached the interested stakeholders, and resulted in a
clear linkage between the public participation process and the decision-making process.
The general strengths of any participatory process include: speed of implementation because
a cross-section of decision makers and recipients of the decisions worked out the goals,
strategies, objectives, and tactics together; reduced time to make a long-range plan by
reducing the interval between feedback loops of those involved in the participatory process;
enhanced democratic processes that result in a more equitable product; and increased
probability of success by sharing commitments and values of the participants and addressing
potential conflicts during the process.
The general weaknesses of participatory processes include: superficial analysis; unfair
influence by those more aware of how to manipulate the process; threatening to established
power; and potential to create a new we/they polarity of those who participated and those
who did not.
Good facilitation of a participatory process requires that teams in charge of participatory
approaches keep a clear distinction between the issues of the process (Mapping out diversity
of views, Reaching consensus, Advising decision, Democratisation) and the issues of the
content (Problem definition and scoping, Problem-framing, Comparison and choice of
alternative policy options). Confusing these makes the process less efficient and the content
less profound.
The strengths and weaknesses of participatory methods should be understood in context,
including the purposes for which they would be used. Will it be a small group or a large
gathering? Will they meet face-to-face in one location or can the general public share their
views through other means?
In addition, the success of participatory protocols depends on the three following factors:
1. Stakeholders Identification. Potential End-Users and other Stakeholders of SEAMLESS-IF
are well defined and identified in relation to the issues at stake (stakeholders must have a
significant impact on the issues and/or must be impacted by the issues; must have
information, knowledge and expertise about the issues; must control or influence
implementation instruments relevant to the issues).
2. Stakeholders Analysis. The relevance of the end-users and stakeholders’ interests for the
issues at stake, and their investment capability to address the issues, are pointed out and
analysed.
3. Stakeholders Mobilisation. Relevant strategies are applied to reach, mobilise, and
sustain the effective participation of end-users and stakeholders throughout the project.