METHOD

Survey

Data

Face-to-Face, Online, or Both?
Both
General Type of Method
Research or experimental method
Typical Purpose
Research
Spectrum of Public Participation
Consult
Links
Citizens Handbook - A Review of Public Participation and Consultation Methods
Journal Article - Web Survey Design and Administration
Open to All or Limited to Some?
Open to All
Number of Participants
There is no limit to the number of people who can participate
Large groups
Types of Interaction Among Participants
Ask & Answer Questions
Express Opinions/Preferences Only
Facilitation
No
Decision Methods
Opinion Survey
If Voting
Preferential Voting
Scope of Implementation
No Geographical Limits
Level of Polarization This Method Can Handle
Low polarization
Level of Complexity This Method Can Handle
Low Complexity

A survey is a tool used to gather information from a given population. Generally, a survey comprises a collection of questions or statements requiring a response from participants. Surveys can be used to collect background information or gather opinions.

Problems and Purpose

In the context of public engagement and participation, surveys are used to help ‘determine community attitudes or target a particular group’ [1]. Surveys can be deployed by decision-makers to get a general idea about views on a proposed policy or issue. They are most often used in a consultative manner [2] in order to inform decision-makers of a population’s views [3].

Surveys can be used as a standalone method, but may also form one component of a broader deliberative or participatory process. For example, Deliberative Polling includes a survey before and after deliberation to gauge preference change. Similarly, the Scarborough Beach Deliberative Survey in Western Australia utilised surveys before and after a deliberative forum where participants discussed a local development issue. The Fremantle Bridge Community Engagement Process likewise used an initial community survey prior to face-to-face deliberation, which gave an overview of the policy options discussed later in the process, as well as pre- and post-deliberation surveys.

A survey is generally used to ask a specific population general questions, whilst a poll asks a general population specific questions [2]. More specifically, a survey can ask a range of questions in different forms, thus generating data of greater depth and range. By contrast, a poll may ask just one question, with multiple choice options [4], enabling a snapshot of public opinion on one specific topic. In practice, the terms survey and poll are sometimes used interchangeably, which can cause confusion [2].

There are two general instances in which a survey may be of use: when there is a need to gather broad, general information from a large number of people, or when one is interested in answers to more specific, issue-based questions among a smaller, targeted group of individuals.

Depending on the format and design, a survey can collate quantitative and/or qualitative data. This is determined by the type of question selected and the chosen answer options: closed answer (e.g. multiple choice, preference ranking) or open (e.g. dialogue box for respondents to write in their answer) [1].
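
To make the distinction concrete, the sketch below (in Python, with invented class and question names; nothing here comes from the cited sources) shows one way closed and open question types might be represented in a simple survey tool.

```python
from dataclasses import dataclass

@dataclass
class ClosedQuestion:
    """Closed-answer question: responses come from a fixed set of options,
    yielding easily quantifiable data."""
    text: str
    options: list[str]  # e.g. multiple choice or preference ranking

@dataclass
class OpenQuestion:
    """Open-answer question: respondents write their own answer, yielding
    qualitative data that needs manual coding or text analysis."""
    text: str

# A hypothetical two-item questionnaire mixing both types.
questionnaire = [
    ClosedQuestion("Do you support the proposed development?",
                   ["Yes", "No", "Unsure"]),
    OpenQuestion("What concerns, if any, do you have about the proposal?"),
]
```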

Surveys are also commonly used in academic research, in both qualitative and quantitative studies.

Origins and Development

Surveys were not a development of the deliberative turn in democratic politics; they are long established as tools for governments, business organizations and individuals. In the field of democratic innovation, perhaps the most well-known use of surveys is James Fishkin's Deliberative Polling technique, which uses opinion surveys to gauge the level of attitudinal change among participants before and after deliberation. Variations of this approach have been used in the Scarborough Beach Deliberative Survey and the Fremantle Bridge Community Engagement Process.

Surveys have been used in different and creative ways as part of participatory processes. In Melbourne, a digital survey about the future of the city was included as part of an art installation in a gallery. In Nepal, surveys were used to gauge views on local radio. Surveys could be completed using mobile devices and were followed up with face-to-face interviews to provide more in-depth information. The survey findings were then used to inform stakeholders in the media and were disseminated to the public through social media in order to encourage greater participation in local radio.

How it Works

The selection of survey participants depends on the intentions of the survey organizers, and on how they want to use the results. For example, if results are to be generalised to a wider population, then a representative sample of that population is needed. If the views of a particular group are sought, sampling can be more purposive, reaching out to that particular group.

Achieving a random, representative sample for any process can be time-consuming and costly [1]. Usually, this will require the services of a market research company or a consultancy specialised in this area [3]. A random, representative sample is needed if the organizers wish to claim any kind of generalisability for their results. It is possible to conduct a survey through more opportunistic sampling methods, but this can bias the sample and thus the results. For example, putting a survey out on Facebook may elicit a relatively large number of responses, but it obviously excludes those who are not Facebook users and will have implications for the demographics of respondents. Opportunistic sampling may be more appropriate when a generalisable sample is not needed and a specific group of respondents is sought.
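
As a rough illustration of why the sampling choice matters, the following Python sketch uses an invented population in which Facebook use correlates with age, and contrasts a simple random sample with an opportunistic sample drawn only from Facebook users. All figures are made up for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Invented sampling frame: one record per member of the target population.
# Facebook use is made age-dependent to mimic a real demographic skew.
population = []
for i in range(50_000):
    age = random.randint(18, 90)
    uses_facebook = random.random() < (0.9 if age < 40 else 0.3)
    population.append({"id": i, "age": age, "uses_facebook": uses_facebook})

# Simple random sample: every member has an equal chance of selection,
# which is what licenses generalising the results to the population.
random_sample = random.sample(population, k=1_000)

# Opportunistic sample drawn only from Facebook users: cheaper to reach,
# but it excludes non-users and skews the demographics of respondents.
facebook_users = [p for p in population if p["uses_facebook"]]
opportunistic_sample = random.sample(facebook_users, k=1_000)

def mean_age(sample):
    return sum(p["age"] for p in sample) / len(sample)

print(f"random sample mean age:        {mean_age(random_sample):.1f}")
print(f"opportunistic sample mean age: {mean_age(opportunistic_sample):.1f}")
```

Running the sketch shows the opportunistic sample skewing noticeably younger than the random one, which is exactly the kind of demographic bias described above.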

It is also important to consider not only the initial sample that will receive the survey, but also the response rate. With sufficient resources, it is possible to engage a very large demographic. Census surveys are used to collect demographic information from a population. The Canadian 2016 census survey - which was mailed to citizens but also allowed for electronic reporting - had a response rate of over 98% [5]. However, Canadians are legally required to complete the census [6]. For the most part, survey completion is not mandatory, and response rates are usually considerably lower than this [7].
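
The response-rate arithmetic itself is straightforward; the counts below are invented for illustration and are not the actual census figures.

```python
# Response rate = completed questionnaires / questionnaires delivered.
def response_rate(completed: int, delivered: int) -> float:
    return completed / delivered

# Invented counts, for illustration only (not Statistics Canada data).
print(f"mandatory census-style survey: {response_rate(11_790, 12_000):.1%}")  # 98.2%
print(f"typical voluntary survey:      {response_rate(3_600, 12_000):.1%}")   # 30.0%
```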

The response rate will also affect the survey outcomes, as different groups may be more likely to respond, creating the potential to skew responses [2]. In the Fremantle Bridge Community Engagement Process, participants who responded to the initial community survey were generally older than other participants in the deliberative forum. This effect was compounded by the information included in the community survey, which had a framing impact on deliberations.

Surveys can be administered online, face-to-face, over the telephone or through the mail. It may be desirable to use all of these methods if a very large, representative sample is needed. It is also worth considering that the collection method may affect response rate and participation, so additional methods could be needed to reach certain groups.

The main point of interaction between participants and initiative organizers is the set of survey questions itself - whether administered in person or on paper. It is therefore of utmost importance that questions be expertly formulated so as not to produce biased or skewed responses, as ‘poorly constructed surveys produce poor results’ [1].

Respondents complete surveys as individuals and there is no interaction between respondents. Results are usually collated or aggregated in some way so that organizers can make general statements about the viewpoints expressed. Survey results might be published by the organizers, which is generally considered good practice, especially for public consultations [3].
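
A minimal sketch of that collation step, assuming closed-answer responses stored as plain strings; the responses here are invented.

```python
from collections import Counter

# Hypothetical individual responses to one closed-answer question.
responses = ["Yes", "No", "Yes", "Unsure", "Yes", "No",
             "Yes", "Yes", "No", "Unsure"]

# Aggregate individual answers into counts and shares so that organizers
# can make general statements about the viewpoints expressed.
counts = Counter(responses)
for option, n in counts.most_common():
    print(f"{option}: {n} ({n / len(responses):.0%})")
```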

Closed-response questions - such as tick boxes, multiple choice, or yes/no options - are the easiest and fastest type of survey to conduct [1], and they provide easily quantifiable data. On the downside, they generate limited information and, importantly, are less likely to provide information on the reasons and values underlying people’s opinions. Rowe and Frewer (2005) consider a survey that asks respondents only to choose a yes/no response to a question: interpretation of the results will be difficult because there is no indication of the reasons why people responded with yes or no. It is impossible to tell whether everyone who chose ‘yes’ did so for the same reasons, or to evaluate the relative merit of the reasoning behind the options [2]. Furthermore, restricting respondents’ ability to fully express their viewpoint can result in attrition, or in respondents filling out the survey without much thought or consideration [2].

Open-response questions allow respondents to express their viewpoint more fully and yield more qualitative data, although they may generate additional irrelevant information [2] and are more labour-intensive to analyse. Overly long surveys and questions are also more of a burden to respondents and may limit responses [3].

The exact design and choice of questions will depend on the needs of the organizer, but all good survey questions should be clear and neutrally worded, avoiding leading questions [3]. Surveys should also be piloted beforehand to check that the questions are suitable and, most importantly, that they elicit the kind of information required [1].

Analysis and Lessons Learned

The outcome of a survey is essentially the collated responses to the questionnaire, and it depends entirely on the question design, the sampling and the response rate. Abelson et al. (2001) note that because questions need to be relatively straightforward and clear, responses can be somewhat superficial [8]. The outcomes of a survey are usually an aggregation of individual responses. As respondents complete a survey as individuals, the aggregated response may not reflect the same outcome that would have occurred in a group setting. However, Rowe and Frewer (2005) note that the reverse can also occur - a group response may not reflect the true values and views of the individuals in the group, due to the effect of group dynamics [2]. Whatever the outcome, it is usually made public if the survey is part of a public consultation or engagement [3].

The influence and effect a survey has will ultimately depend on the purpose of the survey in the first place, and on how those organizing it choose to use it. Survey results may be integrated into a broader community engagement process and influence it in a variety of ways. In California, surveys were used to gauge the opinions of people not participating in deliberative forums on community budgeting, and the results were integrated and presented to the local council. Used in this way, surveys can complement a deliberative process, where participant numbers tend to be limited, by gathering opinions from a larger proportion of the population. In Western Australia, a survey helped shape the agenda for a community forum by identifying key issues for discussion prior to deliberations.

It is also worth considering another kind of effect: the effect that questions, sampling and other design choices can have on the outcomes of a survey. In the Fremantle Bridge case, participants for a deliberative forum were recruited partly through a community survey. This resulted in forum participants falling into two groups - an older group recruited through the survey and younger participants recruited through other methods - creating a potential age effect on deliberation. This effect was compounded by the information provided in the community survey, which focused considerably on safety as an issue. This further biased the survey respondents, although the effect was partly mitigated by deliberation.

The Queensland Plan was a state-wide consultation on a 30-year strategic plan to develop goals for a range of key policy issues. A community survey was used to gather feedback from across the state, but the majority of responses came from South-East Queensland, the most densely populated area, leaving many areas of the state underrepresented in the responses. The survey questions were also criticised as unclear, and after the survey was deployed, prompts had to be added to assist people in filling it out. This could have affected the responses given, and could have created a divide between responses given before and after the prompts were inserted, although no evaluation was conducted to assess this possibility.

Another aspect affecting survey responses is the timing of the survey. For one, carrying out a survey at certain times of year (over vacation periods, for example) can affect response rates [3]. Secondly, the salience of the issue under consideration can affect the type of responses. In a consultation on the control of wild horses (considered a feral species in Australia) in part of New South Wales, two surveys were conducted as part of the engagement process. The first survey did not tell participants that the topic was wild horses, framing it instead as being about national parks in general; it was conducted by a survey organisation using a random sample. By the time a second survey was conducted, the consultation was underway and the topic had generated a significant amount of discussion and controversy. The second survey also relied on self-selection, and the combination of these two factors meant that its respondents were more likely to hold strong views than those in the first survey.

Whilst surveys are sometimes considered an inexpensive method of collecting information or consulting the public [7], a well-designed survey requiring a large, random sample can incur significant costs [1], particularly for sampling through a professional survey organisation. This needs to be taken into consideration before going ahead.

Relative to other methods of consultation such as focus groups or deliberative processes, surveys can be carried out in a short period of time [7], given that they do not require people to attend events in person or commit a significant portion of their time. However, a survey will still require at least six weeks to conduct [1], taking into account the time allowed for responses.

It is vital that care is taken in the construction and wording of survey questions, so that they are clear and unambiguous. However, as discussed earlier, such simplicity limits respondents’ ability to fully express their views, and may not provide much useful information about why people hold certain opinions [1]. Even a well-designed written survey requires that respondents are literate in both the language and the terminology used [7]. This can be mitigated to some extent by conducting surveys by telephone or face-to-face rather than in writing; these modes also tend to have higher response rates. However, there is no clear winner here - written surveys allow respondents more time to consider their answers than in-person or phone interviews do.

It is possible that when carefully integrated into a broader deliberative process, some of the weaknesses of a survey can be mitigated. For example, a survey can offer a snapshot of a large sample of the public that most deliberative forums cannot. At the same time, a deliberative forum offers the opportunity for in-depth discussion that cannot be done through a survey. However, using surveys as part of a broader process does not eliminate all the weaknesses of the approach, as shown by the Queensland Plan and Fremantle Bridge cases. Furthermore, a survey may be poorly integrated into a process or survey results may not align with the outcome of a deliberative process. In South Australia’s engagement process on Nuclear Fuel Storage, a Citizens’ Jury and Aboriginal consultation process both rejected proposals for a fuel storage facility, but the government instead emphasised the outcomes of a state-wide survey which suggested that the community was open to learning more about the proposal. This served to undermine trust in the deliberative process and the genuineness of the government’s community engagement program.

See Also

Online Consultation

References

[1] National Coalition for Dialogue and Deliberation (2008) Survey. Available at: http://ncdd.org/rc/item/1559

[2] Rowe, G. & Frewer, L. (2005) A Typology of Public Engagement Mechanisms. Science, Technology & Human Values, 30(2), pp. 251-290. DOI: 10.1177/0162243904271724

[3] Market Research Society & Local Authorities Research & Intelligence Association (2005) Using Surveys for Consultation. Available at: https://edkurtzbooks.com/market-research-opinion-surveys.html

[4] Obsurvey (2014) The Difference between Polls and Survey Questionnaires. Available at: http://obsurvey.com/blog/the-difference-between-polls-and-survey-questionnaires/

[5] Statistics Canada (2016a) Canadians' overwhelming response enables 'best ever' Census in 2016. Available at: https://www.statcan.gc.ca/eng/about/smr09/smr09_069

[6] Statistics Canada (2016b) Completing the census is mandatory. Available at: https://www.statcan.gc.ca/eng/about/smr09/smr09_068

[7] Health Canada (2000) The Health Canada Policy Toolkit for Public Involvement in Decision Making. Available at: https://www.canada.ca/en/health-canada/corporate/about-health-canada/reports-publications/health-canada-policy-toolkit-public-involvement-decision-making.html#a56

[8] Abelson, J., Forest, P-G., Eyles, J., Smith, P., Martin, E. & Gauvin, F-P. (2001) Deliberations about Deliberation: Issues in the Design and Evaluation of Public Consultation Processes. McMaster University Centre for Health Economics and Policy Analysis Research Working Paper 01-04. Available at: http://www.citizenshandbook.org/compareparticipation.pdf

External Links

Citizens Handbook - A Review of Public Participation and Consultation Methods: http://www.citizenshandbook.org/compareparticipation.pdf

Journal Article - Web Survey Design and Administration

Notes