1. The history of heuristics research
Since the 1970s a research agenda has emerged around brain processes that have been said to be 'heuristic'. Many different ideas and research streams have increasingly been swept into the orbit of so-called 'heuristic research'.
1.1. Automatic thinking: first described by Schneider and Shiffrin in 1977, who evidenced that the brain processes familiar, repetitive tasks and data faster and less consciously than novel ones.
1.2. Cognitive miserliness: described by Fiske and Taylor in 1978, who evidenced that the brain will choose the lowest-cost route to a solution rather than a more effortful, higher-cost route which may be more accurate.
1.3. Cognitive biases: described principally by Kahneman and Tversky in the 1970s and 1980s, who evidenced that the brain fails to detect logical fallacies presented to it, choosing instead solutions which are less complex and less effortful. This leads to errors of judgment and decision making. Kahneman applied these conclusions to the field of economics, showing that trading decisions which were thought to be rational were often in fact irrational and error-prone, and was awarded the Nobel Memorial Prize in Economic Sciences (2002) for this contribution.
1.4. Heuristic substitutions: Kahneman went on to describe various forms of bias which seemed to involve thinkers substituting complex, abstract computations with personal experiences of the same event. He concluded that a critical element of this quicker kind of thinking was the substitution of the abstract with a personal, imagined, first-person scenario.
1.5. Priming: first described by Bargh in the 1980s, who evidenced that the mind is unconsciously influenced by the environmental cues around it. These were called priming effects and were shown to create conditions like attentional bias and blindness which undermined the rationality of the mind.
1.6. Bounded rationality: a concept originally due to Herbert Simon, developed principally by Gigerenzer in the 1980s and 1990s, who evidenced that in real-world systems, in which rationality is bounded by incomplete access to the available data, the mind makes approximations and guesses to direct both thinking and acting. He argued that this gave evolutionary advantages of speed and the ability to cope with large volumes of novel data. Others referred to ‘bounded rationality’ as ‘heuristic thinking’.
1.7. Algorithmic cognition: described principally by Stanovich and West in the 1990s and De Neys in the 2000s, who investigated the difference between the fast and slow forms of cognition, principally from the slow side. They asserted a model of slow, effortful, accurate cognition as algorithmic: when the brain has to reach accurate conclusions, it works its way through a step-by-step procedure to the right answer. They argued that this accounted for the difference in speed between this and ‘heuristic thinking’, which uses more associative processing.
2. The dual-mind paradigm
Over time, these varied traditions of research organised their conclusions under a general theory of the mind called Dual-Mind Theory. This emerged in the 1990s and continues to be asserted to this day. Dual-mind theory asserts that the brain has two systems for processing data. System 1 is a fast, intuitive system which works by associative thinking and comes up with approximations that may contain error. It is generally, and very loosely, also described as ‘intuitive, unconscious and heuristic’. System 2 is a slow, effortful system which works by algorithmic processing and is used to reach conscious, accurate conclusions. The brain will choose system 1 before system 2 because it is less effortful. System 2 can override system 1, but only with conscious effort. Roughly speaking, system 2 approximates to what is measured by tests of fluid intelligence: algorithmic processing.
3. Problems with the dual-mind paradigm
3.1. A two-system theory of the mind has several evidential and theoretical problems.
3.2. There is no clear understanding of how the brain switches between the two. By what metacognitive mechanism does the brain know whether to funnel data down the fast system 1 route or the slow system 2 route? Various proposals have been made; none is entirely convincing. Theorists end up inserting into the allegedly ‘unconscious’ heuristic system 1 a component of conscious rationality to account for how system 1 can judge when to ‘switch’ routes to system 2.
3.3. Heuristic thinking is thought always to increase error through guesswork. However, through a series of experiments with school students, Walker evidenced that purely heuristic thinking could also increase the accuracy of learners’ thinking. Heuristic cognition contributed a separate component to academic outcomes over and above algorithmic cognition.
3.4. System 1 and system 2 assume parallel data-processing routes: two routes to solve a problem in different ways. However, Walker evidenced that heuristic cognition contributes to algorithmic cognition. He also evidenced that heuristic cognition was influenced by the environment in a way that algorithmic cognition was not, i.e. it is ecological and therefore must represent a means by which the brain adjusts its processes to the demands of the environment.
3.5. Research methods have tested data of single modalities and structures, typically visual or auditory data of the computational and linguistic kind (written or verbal instructions). As such, conclusions should be limited to how the brain processes novel vs repetitive data of that kind. The brain processes an enormous range of epistemic modalities and structures of data (social, affective, somatic); current studies therefore do not tell us much about how heuristic cognition might contribute to the recognition, processing and switching between epistemically varied tasks.
3.6. No test of real heuristic thinking has been designed. Given that heuristic cognition is proposed as the means by which we approach novel situations, any test of heuristic thinking would need to assess an individual via an epistemically unguided assessment. However, cognitive assessments set up narrow epistemic tasks (a verbal problem to solve, a spatial match to find, a calculation to make) and guide candidates as to the expected kind of answer: find the match, calculate the sum, etc. By doing so, they define the kind of thinking that is activated; hence they cannot truly be said to test heuristic thinking, which is said to control the cognitive strategy used in novel, unguided situations.
3.7. Critics have argued that heterogeneous terms that do not belong together have been conflated under the term ‘heuristic cognition’. For example, Shiffrin’s automatic processes relate to repetitive processing; this would seem to be a different concept from that described by ‘bounded rationality’, conjectured as part of how we process novel, unfamiliar situations. As such, data interpreted as system 1, ‘heuristic’ processing might not relate to that category of processing at all.
3.8. Finally, a dual-mind model is unparsimonious: a one-system explanation of the evidence would be preferable to a two-system one.
4. Research carried out by Walker, 2002–2015
Walker argued that the fundamental distinction in the research lay between algorithmic and non-algorithmic processing. His research programme, initiated in 2002, designed a research technology intended to be immune to the accusation of collecting false ‘non-heuristic’ data.
4.1. To avoid inadvertently collecting algorithmic cognitive data along with non-algorithmic data, Walker first designed an assessment which involved no computational calculation, deduction or other algorithmic process. Candidates opted for multiple-choice answers which would not be aided by prior knowledge or computational ability. In this way, the risk of algorithmic cognitive processing leaking into the assessment was removed.
4.2. Second, Walker exploited the correlation between heuristic cognition and the imagination. Using an imagination exercise, in which the candidate imagined performing a learning task, Walker activated and then assessed the candidate’s first-person cognitive response rather than an abstracted response. In this way, associative rather than algorithmic processing was engaged.
4.3. Third, candidates completed the exercise without formal guidance as to the shape, structure, kind or approach to take to the imaginative task. By cueing up an undefined ‘white world’ in the candidate’s imagination, Walker removed potential priming biases about what kind of answer was required, overcoming Stanovich’s criticism that current ‘heuristic measures’ were, in fact, closed and prescriptive.
4.4. Fourth, Walker standardised the candidate’s white-world imagination against a data model consisting of 7 validated factors. Standardised scoring consisted of a set of multiple-choice questions on a Likert scale. Walker measured the candidate’s response to a series of real-world, unpredicted scenarios. In this way, the capacity to adjust and regulate heuristic cognition in response to an epistemically varied set of scenarios could be measured against a baseline score. Walker referred to this 7-factor model of heuristic cognition as CAS state (cognitive-affective-social state).
4.5. Fifth, Walker conducted a programme of experiments with secondary school students in which he compared students’ heuristic CAS scores with academic outcomes and general intelligence (algorithmic cognition measured by CAT or MidYIS). In so doing, Walker was able to identify the statistical relationships between academic outcomes, CAT and CAS in large populations.
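The scoring scheme in 4.4 can be sketched as follows. This is a minimal illustration, not Walker's published instrument: the 1–5 Likert range, the grouping of items into factors, and the neutral baseline profile are all invented placeholders, since the source does not specify them.

```python
# Sketch of Likert-based scoring against a 7-factor baseline profile.
# Factor groupings, Likert range (1-5) and the baseline are hypothetical.

N_FACTORS = 7

def factor_scores(responses):
    """Average the Likert responses (1-5) belonging to each factor."""
    return [sum(items) / len(items) for items in responses]

def cas_profile_deviation(profile, baseline):
    """Mean absolute deviation of a candidate's 7-factor profile from a
    baseline profile: one simple way to quantify adjustment."""
    assert len(profile) == len(baseline) == N_FACTORS
    return sum(abs(p - b) for p, b in zip(profile, baseline)) / N_FACTORS

# Example: one candidate's Likert answers, grouped by factor.
responses = [[4, 5], [3, 3], [2, 4], [5, 5], [1, 2], [3, 4], [4, 4]]
profile = factor_scores(responses)    # one averaged score per factor
baseline = [3.0] * N_FACTORS          # hypothetical neutral baseline
deviation = cas_profile_deviation(profile, baseline)
```

Scoring each scenario this way would yield a per-scenario profile, so a candidate's capacity to adjust across epistemically varied scenarios could be compared against the baseline, as 4.4 describes.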
5. Walker’s findings to date:
5.1. CAS state adjustment explains an element of academic outcome not explained by CAT (a measure of general intelligence).
5.2. CAS state is correlated with CAT, but the relationship is asymmetrical: CAS explains a proportion of CAT, but CAT does not explain CAS.
5.3. CAS state variance is correlated with the heterogeneity of the school environment and can therefore be said to be subject to priming effects.
5.4. Less academically successful students can be primed to adopt a more optimal CAS state, though it appears this is effortful.
5.5. Students can be trained to exhibit better CAS states through feedback and coaching.
5.6. By identifying the factors contributing to a best-fit model explaining the most variance in academic outcome, optimal CAS state models for different curriculum subjects have been developed. This suggests that CAS state is an epistemic cognitive biasing system which contributes to the accurate adjustment of ecological cognition to the novel, in situ task in hand.
5.7. Students exhibit heuristic ability to adopt and regulate between associative, symbolic, analogical and conceptual data processing strategies. However, there is no evidence to date that student computational capacity can be improved by improving the regulation of heuristic cognition.
5.8. CAS state is a model of heuristic biasing. A bias is a specific state of CAS, and self-regulation of CAS indicates regulation of bias. Bias is required for different epistemic tasks (Maths, for example, requires a different heuristic bias than English or Science). Bias errors have been described as extreme bias (dysregulated), fixed bias (unchanging across different tasks) and inaccurate bias (adjusting to sub-optimal bias states for a given task).
5.9. Over-regulation of bias has been evidenced in a resistance to adopting any bias across a wide variety of situations. Over-regulation is measured by a lack of variance from the median. Over-regulation is conjectured to be effortful.
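The incremental-variance claim in 5.1 corresponds to a hierarchical regression: fit academic outcome on CAT alone, then on CAT plus CAS, and compare the variance explained. The sketch below illustrates the statistical logic only; the data are synthetic and the effect sizes invented, not Walker's results.

```python
import numpy as np

# Hierarchical-regression sketch of "incremental variance explained":
# does adding CAS to a model already containing CAT raise R-squared?
# All data below are synthetic, generated purely for illustration.

rng = np.random.default_rng(0)
n = 500
cat = rng.normal(size=n)                   # general-intelligence measure
cas = 0.4 * cat + rng.normal(size=n)       # correlated with CAT, yet distinct
outcome = 0.5 * cat + 0.3 * cas + rng.normal(size=n)

def r_squared(X, y):
    """R-squared of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_cat = r_squared(cat[:, None], outcome)                  # CAT alone
r2_both = r_squared(np.column_stack([cat, cas]), outcome)  # CAT + CAS
incremental = r2_both - r2_cat  # variance uniquely attributable to CAS
```

A positive `incremental` is the shape of evidence 5.1 appeals to; note that 5.2's asymmetry claim would need the complementary comparison (regressing CAT on CAS and vice versa) rather than this single model pair.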
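The over-regulation measure in 5.9, a lack of variance from the median, can be sketched as the mean absolute deviation of a student's task-by-task CAS scores from their median, with scores below a threshold flagged. The threshold value here is a hypothetical placeholder, not a calibrated cut-off from Walker's research.

```python
import statistics

# Sketch of the 5.9 measure: a student whose CAS scores barely vary
# from their median across many different tasks is flagged as
# over-regulated. The threshold is an invented placeholder.

OVER_REGULATION_THRESHOLD = 0.25  # hypothetical cut-off

def median_deviation(scores):
    """Mean absolute deviation of task-by-task CAS scores from their median."""
    med = statistics.median(scores)
    return sum(abs(s - med) for s in scores) / len(scores)

def is_over_regulated(scores, threshold=OVER_REGULATION_THRESHOLD):
    return median_deviation(scores) < threshold

flat = [3.0, 3.1, 2.9, 3.0, 3.0, 3.1]    # barely adapts across tasks
varied = [2.0, 4.5, 3.0, 1.5, 4.0, 3.5]  # adjusts bias task-by-task
```

On these toy profiles, `flat` is flagged as over-regulated while `varied` is not, matching the intuition in 5.9 that over-regulation shows up as a refusal to adopt any bias across situations.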