Facilitating Collective Intelligence

Perhaps the greatest challenge we face in the modern world is the challenge of effective collaboration. In social, political, business and educational settings, working groups often fail to solve complex problems because their method of collaborative problem solving is ineffective. Decades of research in social psychology and cognitive science highlight the many limitations of group problem solving, including a tendency to focus on a limited set of ideas, a reliance on biased ‘rules of thumb’ when selecting ideas, and a failure to build trust, consensus and collective vision. We have developed a new software tool that helps groups to structure the many and varied ideas that are often generated when a group comes together to consider solutions to problems.

Our software tool allows groups to first identify important ideas and develop a model describing how ideas are related in a system. This systems thinking method is very useful in situations where a group wants to understand a complex situation and design a roadmap for action built upon consensus and a collective vision. At the same time, from a research perspective, we’re interested in how to maximize group performance in these collaborative design sessions.

In the field of education and learning sciences, collaborative learning is increasingly recognised as a powerful way to promote learning. However, research tells us that merely bringing groups of people together to work on a problem does not guarantee effective collaboration. Successful collaboration requires the careful design of learning environments for group interaction and the provision of good facilitation to promote meaning-making and problem solving (Pea, 2004; Strijbos, Kirschner, & Martens, 2004). While the importance of good facilitation in collaborative learning environments is often highlighted by expert facilitators (Hmelo-Silver, 2002), there has been limited research focused on the effects of facilitator behavior styles on collaborative learning outcomes. Our recent study sought to address this gap by experimentally manipulating the types of prompts facilitators delivered in a group learning context.

Prompts are commonly used in group learning contexts (e.g. Gamlem & Munthe, 2014) and come in many forms: for example, guiding questions, sentence openers, hints, suggestions, or reminders that help a group to complete a task. Prompts act as a form of scaffolding that supports and informs the learning process (Gan & Hattie, 2014). Prompts may also be considered as “strategy activators” which “induce productive learning processes” (Berthold, Nückles, & Renkl, 2007, p. 566) and can be used to promote quality dialogue and critical thinking in a group.

Two types of prompts are often distinguished: task-level and process-level prompts. Task-level prompts focus on, for example, distinguishing between correct and incorrect responses, acquiring more information, and building more knowledge of the task. While these prompts have value, they may also have limited impact. According to Hattie and Timperley (2007), task-level prompts that are too heavily focused on immediate task goals may not provide learners with much insight into the cognitive strategies they are using. Process-level prompts, on the other hand, address the key cognitive strategies and problem solving processes necessary to complete the task (Ketelaar, den Brok, Beijaard, & Boshuizen, 2012). Process-level prompts may support, for example, error detection, strategic planning, and steps for revision of work done (Gan & Hattie, 2014). In our study, collaborative working groups worked together to build systems models focused on the negative consequences of social media use. We’ve posted a blog highlighting the key findings of that study, which focused on Facebook use and the somewhat mysterious construct of FOMO (Fear of Missing Out). As part of that study we focused on the behavior of facilitators. In particular, we focused on the prompting of argumentation during the systems thinking collaborative learning situation: students in one condition received task-level prompts, whereas students in a second condition received both task-level prompts and additional process-level prompts.

It’s not easy to facilitate quality argumentation in a group. Instructional support designed to facilitate quality argumentation in collaborative learning groups can take many forms. One commonly used strategy is question asking, which has been found to have positive effects on argumentation in both university and high-school students (Graesser, Person, & Huber, 1993). Question asking can serve a number of functions, including prompting students to check each other’s information, prompting the provision of further explanation, and encouraging justification of assertions (Webb, 1995). In this way, question asking can be used as a form of process-level prompt, as questions can move beyond an assessment of the correctness of a student’s response (“Does everyone agree with John?”; “Does anyone disagree with John?”) to address the process, strategy or logic used by the student (“What type of evidence would support John’s claim?”).

In addition to evaluating the quality of arguments observed from participants in our two facilitation conditions, we were also interested in the levels of perceived consensus our collaborative groups reported after their facilitation experience. Perceived consensus is a potentially critical variable in collaborative learning settings. Reaching consensus on a solution to a problem is advantageous for many reasons, especially with regard to implementing an action plan designed to resolve a problematic situation. If there is a high level of consensus amongst group members as to key decisions and conclusions, progress toward a solution to a shared problem may be easier to achieve.

Finally, we were interested in the group’s judgment of the efficacy of the collaborative systems thinking methodology they were using, Interactive Management. You may have seen our earlier blogs showcasing this method as applied to studies on music, critical thinking skills, critical thinking dispositions, and the FOMO phenomenon linked to Facebook usage. While we might tend to assume our method is efficacious as a strategy to promote collective intelligence in a group, it’s important to gather user-level judgments in relation to this issue. Perceived efficacy is an important outcome. If computer-supported collaborative learning tools such as the one we are using here are to be adopted by groups for use in educational settings, it is imperative that they are perceived as efficacious by the group. A group will quickly reject a tool or method if they don’t believe it’s effective in some way in addressing their needs.

There were four groups in our study. Each group worked together for approximately 180 minutes, during which time they discussed the potential negative consequences of online social media. The design of the prompting conditions was informed by the work of Hattie and Timperley (2007) and Hattie and Gan (2011). The task-level condition consisted primarily of simple, task-level prompts, while the process-level condition included some task-level prompts, with the addition of process-level prompts. In each condition, an independent facilitator was given a specific set of prompts which could be used as part of their facilitation behavioral repertoire. During discussion in the task-level groups, the facilitator would gather opinions from the group, request additional ideas, and elicit further information via clarifications and elaborations. This type of prompting was also delivered by the facilitator in the process-level group, with the addition of some further, higher-level prompts directing students to consider, for example: how the presented ideas can be evaluated; the extent to which the ideas are generalizable beyond the context presented; and whether complex ideas might be broken down into simpler, concrete and specific ideas.

The results of our study indicated that, compared to those in the task-level prompt condition, those in the process-level prompt condition reported higher levels of perceived consensus in response to the group design problem. Those in the process-level prompt condition also reported higher levels of perceived efficacy of the collective intelligence methodology. Finally, analysis of the dialogue from the sessions revealed that those in the process-level prompt condition exhibited higher levels of sophistication in their arguments, as revealed by the Conversational Argument Coding Scheme (CACS).

Both the consensus and efficacy effects observed in the current study are important. Notably, if there is a strong level of consensus in relation to the understanding and conception of a problem, groups are more likely to be committed to, and satisfied with, any plan which comes from the newly-formed collaborative understanding (Mohammed & Ringseis, 2001). Similarly, while participants across both prompting conditions found the computer-facilitated group design methodology to be useful and valid, those in the process-level prompt group reported significantly higher levels of perceived efficacy in relation to the collective intelligence process. Therefore, the types of prompts provided by facilitators may be important for the overall perception of efficacy of the tool and method used here. This endorsement of the methodology itself may be important in the context of efforts to sustain the ongoing use of a collaborative methodology, as working groups often break down in their collaborative efforts when they dislike the methodology they are using.

A very significant finding from the current study comes from an analysis of the argumentation of the different groups. Notably, groups that received process-level facilitation collaboratively discussed ideas using more propositions, amplifications and challenges. While elaborations (i.e., statements that support other statements by providing evidence, reasons or other support) were similarly evident in both facilitation conditions, amplifications (i.e., statements that explain or expound upon other statements to establish the relevance of an argument through inference) were observed more often in the process-level prompt condition. In this way, those in the process-level prompt condition were moving beyond accumulation of evidence and support in their reasoning activity — they were working further to establish how this reasoning relates to the problem at hand, and more specifically the relevance of their reasoning to the problem at hand. Similarly, while the frequency of objections (i.e. statements that deny the truth or accuracy of an arguable, e.g. “No, I think it would be the other way around”) was almost identical across the two prompt conditions, challenges (i.e. statements that offer problems or questions that must be solved if agreement is to be secured on an arguable, e.g. “Well it kind of depends, on whether your self-consciousness affects your ability to socialise”) occurred more often in the process-level prompt condition. This suggests that those in the process-level prompt condition engaged more critically with the information at hand, and engaged in more productive argumentation.

The collective intelligence methodology we have used in the current study, Interactive Management (IM), is well established and has been used successfully in a wide variety of scenarios to accomplish many different goals, including assisting city councils in making budget cuts, promoting world peace, improving tribal governance processes in Native American communities, and training facilitators. However, because it is a facilitated process (Hogan, Harney, & Broome, 2014), the success of any collaborative group using the process is influenced by the support, guidance, and instruction provided by the facilitator. While the importance of good facilitation is often highlighted by expert facilitators, the current study provides one of the first experimental demonstrations of the effects of facilitator prompt style on outcomes in the application of IM.

Owen Harney (ResearchGate, Twitter, LinkedIn) & Michael Hogan.

Full Paper: 
Originally posted Jan 23, 2016 in ‘In One Lifespan’ @ PsychologyToday.com
Some links contained within this post are external



Berthold, K., Nückles, M., & Renkl, A. (2007). Do learning protocols support learning strategies and outcomes? The role of cognitive and metacognitive prompts. Learning and Instruction, 17, 564–577. doi: 10.1016/j.learninstruc.2007.09.007.

Gamlem, S. M., & Munthe, E. (2014). Mapping the quality of feedback to support students’ learning in lower secondary classrooms. Cambridge Journal of Education, 44(1), 75-92. doi: 10.1080/0305764X.2013.855171.

Gan, M. J., & Hattie, J. (2014). Prompting secondary students’ use of criteria, feedback specificity and feedback levels during an investigative task. Instructional Science, 42(6), 861-878. doi: 10.1007/s11251-014-9319-4.

Graesser, A. C., Person, N. K., & Huber, J. (1993). Question asking during tutoring and in the design of educational software. In M. Rabinowitz (Ed.), Cognitive foundations of instruction (pp. 149–172). Hillsdale, NJ: Lawrence Erlbaum.

Hattie, J. A. C., & Gan, M. (2011). Instruction based on feedback. In R. Mayer & P. Alexander (Eds.), Handbook of Research on Learning and Instruction (pp. 249–271). New York: Routledge.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. doi: 10.3102/003465430298487.

Hmelo-Silver, C. E. (2002, January). Collaborative ways of knowing: Issues in facilitation. Paper presented at the International Conference on Computer Supported Collaborative Learning (Computer Support for Collaborative Learning: Foundations for a CSCL Community), Boulder, Colorado.

Hogan, M.J., Harney, O. M., & Broome, B. (2014). Integrating Argument Mapping with Systems Thinking Tools – Advancing Applied Systems Science. In A. Okada, S. Buckingham Shum, & T. Sherborne (Eds), Knowledge Cartography: Software Tools and Mapping Techniques (pp. 401-421). London: Springer.

Ketelaar, E., den Brok, P., Beijaard, D., & Boshuizen, H. P. (2012). Teachers’ perceptions of the coaching role in secondary vocational education. Journal of Vocational Education & Training, 64(3), 295–315. doi: 10.1080/13636820.2012.691534.

Mohammed, S., & Ringseis, E. (2001). Cognitive diversity and consensus in group decision making: The role of inputs, processes, and outcomes. Organizational Behavior and Human Decision Processes, 85(2), 310–335. doi:10.1006/obhd.2000.2943.

Pea, R.D. (2004). The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity. The Journal of the Learning Sciences, 13(3), 423-451. doi:10.1207/s15327809jls1303_6.

Strijbos, J.-W., Kirschner, P. A. & Martens, R. L. (Eds.). (2004). What we know about CSCL: And implementing it in higher education. Boston, MA: Springer.

Webb, N. M. (1995). Group collaboration in assessment: Multiple objectives, processes and outcomes. Educational Evaluation and Policy Analysis, 17(2), 239–261. doi: 10.3102/01623737017002239.
