Thinking with Reason!Able


Paul Grainger & Yanna Rider

Widnes & Runcorn Sixth Form College




We understand critical thinking [CT] to be purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based. CT is essential as a tool of inquiry. As such, CT is a liberating force in education and a powerful resource in one’s personal and civic life. …CT is a pervasive and self-rectifying human phenomenon. The ideal critical thinker is habitually inquisitive, well-informed, trustful of reason, open-minded, flexible, fair-minded in evaluation, honest in facing personal biases, prudent in making judgments, willing to reconsider, clear about issues, orderly in complex matters, diligent in seeking relevant information, reasonable in the selection of criteria, focused in inquiry, and persistent in seeking results which are as precise as the subject and the circumstances of inquiry permit. Thus, educating good critical thinkers means working toward this ideal. It combines developing CT skills with nurturing those dispositions which consistently yield useful insights and which are the basis of a rational and democratic society. 

Expert consensus statement regarding critical thinking and the ideal critical thinker according to The Delphi Report (Facione 1990, p.2)


Success in post-secondary education and training is not merely a matter of knowing facts demanded by a curriculum. It is largely a matter of having, and exercising, certain abilities, such as the ability to learn independently, to conduct research and to manage one’s own learning under minimal supervision. In our view, it also means having the ability to think critically. 


Critical thinking is important for Widening Participation in post-secondary learning in two ways. First, it is a skill directly called for in education, and advantageous in the workplace and in one’s civic and personal life in general. A critical thinker is able to think clearly through complex issues, to seek information judiciously and make judgements about its quality, and to marshal that information in a reasoned response to a task. This ability to reason well - to deal with argumentation and evidence in order to take decisions and make evaluative judgements - is called for in countless contexts, whether taking financial or business decisions, determining policy or the best course of action, or in working out whom to vote for. And it is impossible to write good university-level essays or reports without it. 


So the skill itself is invaluable to success in many fields; but there is a second way in which it is important. It has a meta-cognitive, self-regulatory dimension that enables the critical thinker to stand back and ask him- or herself, ‘How am I doing? Have I missed anything important? Let me double-check before I go further’ (Facione 1998, p.6). In this way it contributes to independent learning and to the management of that learning.


CT is now taught at many universities and tertiary institutions worldwide, not only because those institutions recognise its importance, but also because they recognise that most students do not possess these abilities to an adequate standard. The evidence that this skill is lacking ranges from the anecdotal to the scientific. Ask any marker of university students’ essays and you will be told that students seldom get beyond the regurgitation of facts or other people’s authoritative opinions to thoughtful analysis, synthesis and considered evaluation of those facts and opinions. More rigorous evidence comes from Deanna Kuhn’s alarming findings regarding the ability of a wide section of the population to marshal reasoned argument.[1] For example, although all of the 160 subjects in her study had theories about what causes prisoners to return to crime after they are released, what causes children to fail at school and what causes unemployment, and despite the fact that they all held their theories to be true, only 26 (16%) were able to generate any genuine evidence to support their convictions (Kuhn 1991, p.266). Kuhn’s overall findings raise serious doubts about whether the majority of the population has any proper understanding of the notion of evidence or justification at all, let alone the sophistication to deal with – and evaluate – alternative theories or views, counterarguments or rebuttals. And although her study was conducted in the United States a little over a decade ago, there is no reason to suppose that a similar study conducted in Britain now would yield substantially different results.


In order to facilitate wider participation in Continuing Education and enable people to engage in Lifelong Learning, educators will have to target generic skills, like CT, that distinguish successful independent learners from unsuccessful ones. It may be argued that teachers have always done so; how else, after all, would generations of students have risen to the challenge of higher education in the past? The truth, however, might be that in the past we have not so much cultivated or instilled those abilities in students as fostered and rewarded them in that relatively small proportion of learners who – by dint of good luck – happened already to possess and exhibit a degree of independence. One of the real challenges of Widening Participation, then, is to instil or promote independence – of learning, thought and action – in that cohort of learners who have perhaps never demonstrated any inclination in those directions. 


One way to do this is to create a learning environment that requires and encourages students to use their initiative, and rewards them when they do: the kind of environment that university students have traditionally found themselves in. But in order to facilitate the transition of less traditional students to such learning contexts, we must ensure that they know how to function within them before they get there. In other words, secondary and sixth-form education must be brought more into line with the learning structures of the post-secondary and tertiary sector. Otherwise, learners will be ill-prepared for the transition and so face a very real risk of failure. 


There is another reason why this shift in pedagogical attitude towards the deliberate cultivation of CT skills must occur in primary and secondary schooling. Not everyone will go to university; but everyone will need these skills in the workforce, in life and in order to engage in lifelong learning. If early schooling doesn’t give people these skills, there is nowhere else they are likely to obtain them.


We do not advocate the introduction of Critical Thinking as a new, distinct subject into the pre-university curriculum, for three reasons. First, the curriculum is already strained, and to suggest the introduction of a new subject would be unrealistic. Second, CT, when taught as a stand-alone subject, can be boring for students, who may not see the value in it, and so not engage sufficiently to benefit. Third, there are independent reasons to doubt the effectiveness of such a stand-alone subject (one of which we discuss below). Instead, we believe it is possible to cultivate CT skills in the classroom in the context of subjects in the existing curriculum. Doing so is a matter of structuring lessons in ways that focus on those skills, challenging students to use them and scaffolding their ability to do so. 


Here, however, we need a word of caution, because research tells us that CT instruction – whether explicit or implicit in the way just outlined – is not always successful. Researchers in The Reason! Project at The University of Melbourne (Australia), in which one of the authors was involved, have recently conducted a meta-analysis of studies that sought to evaluate CT skill growth in one- and two-semester first year university courses.[2] They surveyed all studies that involved quantitative testing of students before and after a course using objective measures, and that provided enough data to calculate effect sizes. What they found was that different courses varied enormously in effectiveness (in some courses, CT skills were even found to have declined). What is more, the courses that fared the worst on average were not those that were dedicated solely to teaching CT explicitly (for example courses called by such names as ‘Critical Thinking’ or ‘Informal Reasoning’) but courses that sought to enhance CT by modifying the way their ordinary subject matter was taught. Not just any old strategy works.


The lesson here is that we have to be very careful about how we modify our teaching if students’ CT skills are to benefit. We must ensure that the modifications we make are ones that stand the best chance of succeeding, given what we know about what works best.


Based on the Reason! Project’s findings, Widnes & Runcorn Sixth Form College has recently introduced a strategy for enhancing the CT skills of students enrolled in its General Studies programme. The method we are using is based on the Reason! method developed by Tim van Gelder at The University of Melbourne to teach Critical Thinking to first-year undergraduates as a one-semester course. The Reason! method has been tested over a period of three years (1999-2002) in which time it has consistently yielded substantial gains in students’ CT ability that on average far exceeded gains achieved by traditional methods of teaching CT.[3]


The idea behind the Reason! method was that the ability to think critically is a skill, and so, as with any other skill, practice makes perfect. Most university education, however, has few contact hours and large student-to-staff ratios, and is therefore not geared towards providing sufficient supervision of individual students to enable them to become proficient in CT. The Reason! Project’s research indicates that, on average, traditional CT courses do not increase CT over and above what might be expected from general maturation in students involved in higher education.[4] The challenge was to find a way to enable students to practise their reasoning and argumentation skills and to scaffold their attempts to do so without requiring extra staff time. The Reason!Able software was therefore created as an IT aid to solving the problem of adding value to CT within these constraints.



Reason!Able uses a technique called argument mapping. This involves constructing tree-like diagrams that visually represent the structure of arguments. The user enters concise sentences, each expressing a clear and precise idea, in boxes that are then connected to each other with arrows. What makes argument mapping special (and different to techniques such as mind mapping) is that the connections between boxes are meant to be logical connections. A Reason!Able argument map is a way of representing reasons for and against a claim, including reasons for and against those reasons, rebuttals to objections, and so on. In other words, the user builds visual representations of justification, or evidence for and against some conclusion. This process makes visually explicit what is required of good argumentation, and it also gives the user an overview of the reasoning process – which is often quite complex – at a glance, reducing the cognitive load on concentration and memory. Argument maps are intuitively accessible and easier to digest than prose, so they allow the user to concentrate on the reasoning without getting lost in the verbiage.
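The tree structure just described can be illustrated with a short sketch. The following Python code is not Reason!Able’s internal model – it is a minimal, hypothetical representation of an argument map in which every node carries one concise claim, each child either supports or opposes its parent, and a rebuttal is simply an objection attached to an objection:

```python
# Hypothetical sketch of an argument map as a tree of claims.
# Names and roles here are illustrative, not Reason!Able's actual data model.

from dataclasses import dataclass, field


@dataclass
class Node:
    text: str                       # one concise claim per box
    role: str = "claim"             # "claim", "reason" (supports parent), or "objection" (opposes parent)
    children: list = field(default_factory=list)

    def add(self, text, role):
        """Attach a supporting reason or an objection beneath this node."""
        child = Node(text, role)
        self.children.append(child)
        return child

    def show(self, depth=0):
        """Render the map as indented lines: ? claim, + reason, - objection."""
        marks = {"claim": "?", "reason": "+", "objection": "-"}
        lines = ["  " * depth + f"[{marks[self.role]}] {self.text}"]
        for child in self.children:
            lines.extend(child.show(depth + 1))
        return lines


# Build a small map: a claim, a reason, an objection, and a rebuttal.
root = Node("The national press can be trusted to tell the truth.")
root.add("Journalists are bound by professional codes of conduct.", "reason")
objection = root.add("Papers routinely print sensationalised stories.", "objection")
# A rebuttal is an objection to the objection, one level deeper in the tree.
objection.add("Sensationalism concerns tone, not factual accuracy.", "objection")

print("\n".join(root.show()))
```

Because the logical role of every box is explicit, a traversal like `show` makes the whole justification structure visible at a glance, which is the point of the technique.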


The software provides context-sensitive guidance in constructing argument maps, as well as in evaluating them. It prompts the user to be systematic, not only in finding out the reasons for and against believing some claim, but also in making judgements about the strength or quality of those reasons. It scaffolds the user’s reasoning process by breaking it down into simpler, discrete steps that can be addressed sequentially and systematically. By repeating this process, the user eventually internalises it, learning to deal with complex issues more methodically, thoroughly and confidently.


All content is created by the user, so Reason!Able is a generic tool usable with any argument, regardless of subject domain. With it one can produce one’s own arguments, or analyse and evaluate those of others, whether they are about some scientific hypothesis, some moral dilemma, or some controversial issue in that morning’s newspapers. As a learning tool it is therefore extremely versatile. As software, it is also user-friendly. All that is difficult in using Reason!Able is the thought that goes into the reasoning itself.[5] 


The simplicity and versatility of the software, as well as its success with first-year undergraduates, make it an ideal instrument for the introduction of CT training into secondary (and perhaps even primary) education. Furthermore, if the software is embedded into normal teaching practices, such as classroom discussion and essay writing, students are repeatedly exposed to good reasoning and argumentation principles in a variety of domains, thus reinforcing learning and encouraging transfer to new contexts. And because it obviously helps students perform ordinary tasks, such as structuring essays, most can immediately see its usefulness and relevance, which makes CT training less of a chore than learning the skills abstracted from a practical context.


At Widnes and Runcorn we have chosen to focus initially on the General Studies programme, although it is also used to teach Religious Studies and teachers of subjects such as Business Studies, English and Geography have shown an interest in it. We picked General Studies for a number of reasons. First, it is a subject taken by many students, and the College would like to improve average scores in it. Second, it is a much maligned subject that has failed to enthuse either teachers or students, so this was a way of revamping it, adding value, and changing its perception as a ‘soft option’. Third, the General Studies curriculum lends itself readily to Reason!Able treatment: it has broad content; it is clearly intended to encourage students to think through complex issues; and a large proportion of its assessment tasks can be directly supported by the software. We expect, for example, that students will improve in tackling questions like: ‘Scientists have a moral responsibility for the impact of their findings.’ Discuss. Or ‘The national press can be trusted to tell the truth.’ How acceptable is this statement?


Reason!Able can be used in whole-class teaching and discussion (with projection facilities), in small groups (two to five per computer) or individually. In group discussion, it focuses and structures debate, and we hope it will raise the standard of oral communication as well as make classroom discussion more constructive. Used individually, it helps plan and structure essays, so we expect to see an improvement in students’ written communication skills. Around half the College teaching staff have received basic training with the software.


We have three ways of assessing the success of this strategy. One is by comparing General Studies examination results after its introduction to the results of previous years. Another is to track the level of improvement of individual students. The third is by pre-post testing of groups that use Reason!Able and a comparison group that does not. We have initiated a pilot research project to this end. At the beginning of the academic year, students were asked to complete a 30-minute essay task. We chose four similar General Studies-type questions, and the students were split alphabetically into two groups. Group One were given a choice of two of the questions, while Group Two were given a choice of the other two. Students’ responses were identifiable only by their tutorial group and student numbers. At the end of the year, the students will be asked to perform the same task, with the questions reversed between the two Groups. Two independent markers will mark all the papers blindly. We will then calculate the difference in each student’s scores and separate those who used Reason!Able from those who did not (we asked teachers to log their usage), to calculate an effect size for each cohort. We do not expect that Reason!Able users will become, over that period of time and with that level of exposure, expert critical thinkers. We do, however, expect them to perform significantly better than the comparison group in reasoning through complex issues and in argumentative essay writing.
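The effect-size calculation mentioned above can be sketched briefly. The snippet below computes a Cohen’s-d-style effect size – the mean gain divided by a pooled standard deviation – for a cohort’s pre- and post-test scores; the scores themselves are invented for illustration and are not the College’s data:

```python
# Hypothetical sketch: a Cohen's-d-style effect size for pre/post scores.
# The score lists below are made up for illustration only.

from statistics import mean, stdev


def effect_size(pre, post):
    """Mean gain divided by the pooled (sample) standard deviation."""
    pooled_sd = ((stdev(pre) ** 2 + stdev(post) ** 2) / 2) ** 0.5
    return (mean(post) - mean(pre)) / pooled_sd


# Illustrative essay scores out of 20 for one cohort.
pre_scores = [8, 10, 9, 11, 7, 10, 9, 8]
post_scores = [11, 12, 10, 13, 9, 12, 11, 10]

print(round(effect_size(pre_scores, post_scores), 2))
```

Computing one such figure per cohort (Reason!Able users and the comparison group) allows the two gains to be compared on a common scale, in the same way the Reason! Project compared its method against traditional CT courses.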


The greatest difficulty we have had in implementing this strategy has been in ensuring that teachers use the software, and use it well. A number of factors may affect this. One is that many teachers are less comfortable with computer-based learning than their students are. In this respect, teachers with better IT skills may embrace the approach more readily than others. In addition, teachers may see this as one more (faddish) imposition in an already hectic and over-dictated schedule. Any new technique takes a little time to learn, and time is precious. Several training sessions were on offer, but these tended to be under-subscribed. Consequently, teachers may not feel confident in their own argument mapping ability.


We are not sure of the solution to this conundrum. The best remedy would be to introduce argument mapping into teacher training (qualification) courses, so new teachers are familiar with the use of both the principles and the software before they enter the classroom. This, however, would require a sea change. In the meantime, perhaps the best we can do is to look for committed volunteers, because only commitment can make a difference.





Facione P (1990), Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction; executive summary – the Delphi report, California Academic Press. (The Complete American Philosophical Association Delphi Research Report is available as ERIC Doc. No. ED 315 423.)


Facione P (1998), Critical thinking: what it is and why it counts, California Academic Press.


Kuhn D (1991), The skills of argument, Cambridge University Press.

[1] Subjects included men and women aged 14-69, with varying levels of education (Kuhn 1991, pp. 18-20).

[2] These results are as yet unpublished and were communicated to us personally. For more information on the Reason! Project, led by Dr Tim van Gelder, see

[3] The Reason! method produced an average effect size of 0.76 of a standard deviation, compared to an average effect size of 0.31 found in traditional CT courses (personal communication). 

[4] The average effect size of 0.31 found in traditional CT courses is negligible when compared to the average effect size of 0.29 found in courses in which no deliberate or extraordinary attempt was made to enhance CT.  

[5] To download a free trial version of Reason!Able go to