Grade Calculation Penalizes Learning

Over the past few blog posts I have been deconstructing the practice of grade calculation. Through this process, I hope it is becoming evident that grade calculation, in the guise of objectivity, still does not meet the standards of being transparent, fair and equitable (Growing Success, p. 6). What is even more important to recognize is that the practice of grade calculation in fact penalizes the learning process.

The goal of grading is to determine, by the end of a learning cycle, an accurate numeric representation of what a student has learned. It is not the average, mode or median of their learning process.

The assessment and evaluation process is all about taking snapshots on a journey of learning, to create a picture of what a student knows and can do. Each snapshot is a tiny segment of a picture, a piece of a puzzle. The task of the teacher is to use the evidence of learning (not just the summary data points) to come up with the grade. It is essential that the teacher has an understanding of the process to inform the judgment of what a student knows at the end of a course.

What ends up happening is that, through calculation, only the data points are considered in establishing what a student knows and can do, while the individualized context and process of learning are ignored.

If we take a look at what learning is, essentially taking new information and assimilating or accommodating it with prior knowledge, it is easy to see that there can be mistakes made along the way. That is only to be expected. Students need time to process, to practice and to figure things out. A good teacher will be using Vygotsky’s principle of the zone of proximal development to scaffold the learning to allow the student to maximize his or her potential for success. A good teacher will investigate where students start (diagnostic) and provide opportunities to refine (formative) before determining what a student knows or can do (summative). The learning needs to be structured to progress a student through the learning process.

Growing Success states that

the grade should reflect the student’s most consistent level of achievement throughout the course, although special consideration should be given to more recent evidence of achievement. (p. 41)

This is their level of achievement – it is explicitly about what a student knows and what a student can do. More recent evidence of achievement should be a student’s best work (as they have hopefully had multiple opportunities to practice), but there are mitigating circumstances. Emphasis should be placed on assessments OF learning, but at the same time informed by assessments FOR learning.

I recognize that the evidence of learning used in the following example is quite limited, but it should suffice for illustrating my purpose.

[Screenshot: sample mark data for five tasks]

In the example above, the first quiz could essentially be diagnostic. The teacher is attempting to determine prior learning and understanding. The second quiz could reflect the acquisition of knowledge (e.g. vocabulary terms, essential concepts, etc.) that will help a student throughout the rest of the unit. It is evident so far that, based on marks of 52% and 60%, this student is struggling.

As the student moves into a smaller hands-on task, the student is becoming increasingly familiar with the overarching ideas, concepts, processes, and skills needed. It has simply taken time for this student to learn. It appears that the context and task type may be essential in allowing this student to learn more effectively.

The fourth task is a larger scale project. It is evident that the student has finally “gotten it”. Connections have been made and the student has acquired the required knowledge and skills.

This is ultimately confirmed by the summative task in which the student continues to excel.

Looking back over the data from calculated grades in the previous posts, these are some of the possibilities that a calculated grade could have been (based on a consistent application of the KTCA weightings):

  • Blended Average: 80.4%
  • Blended Mode: 84.0%
  • Blended Median: 84.3%

Now, there is nothing “wrong” with a mark in the mid to low 80s. However, this person’s grade has been lowered as a result of their learning. Using professional judgment, which is at the heart of assessment, evaluation and reporting (Growing Success, p. 8), I believe the calculated mark to be a misrepresentation of student achievement. Looking at the evidence, the patterns of learning, mitigating circumstances, etc., I would determine that this student has actually achieved a 90%. So why is the calculated mark so much lower? It has penalized the student for not knowing, or not having the required skills, right from the outset.

After having discussions with other educators, trying to understand why teachers cling to calculations, the responses that I get are that the calculation is “accurate”, that it is what parents want, and that it provides a basis from which to “tweak” the grade. But as we have already determined, the whole process of calculating is flawed: it is a misrepresentation of student learning and achievement. Why would we ever want to use this calculation as the basis for determining a student’s grade?


Grade Calculation Disadvantages Students

In my last post I explored the process of grade calculation, and discovered that though the math may be “relatively easy”, it is by no means “transparent” as the first guiding principle of Growing Success requires. This principle of Growing Success also requires assessment, evaluation and reporting to be fair and equitable.

A common practice is to weight the performance categories. At this point, I wish to make it clear that although I now disagree with this practice, as I will explain here, I cannot fault teachers for adhering to it either, as it is the way I was taught in teachers’ college, as well as when I first entered the profession in 2004. Long-standing practices are exceedingly difficult to adjust. Nonetheless, we are now at a point where changes must be made to ensure best practice.

The challenge is that performance standard weighting (which provides us with the “blended” calculations discussed in my last post) also disadvantages some students.

When we last looked at the sample data that was generated, we discovered that the calculations reduced the variance from the highest to the lowest calculation to a range that was approximately ±6%. However, what happens when we start changing the weighting of each of the categories?

Example 1:

  • Knowledge: 15%
  • Thinking: 20%
  • Communication: 25%
  • Application: 40%
  • Blended Average: 80.4%
  • Blended Mode: 84.0%
  • Blended Median: 84.3%

Example 2:

  • Knowledge: 20%
  • Thinking: 35%
  • Communication: 25%
  • Application: 20%
  • Blended Average: 78.0%
  • Blended Mode: 82.0%
  • Blended Median: 82.4%

Example 3:

  • Knowledge: 35%
  • Thinking: 15%
  • Communication: 30%
  • Application: 20%
  • Blended Average: 76.3%
  • Blended Mode: 76.0%
  • Blended Median: 76.7%

Based upon the sample category weightings that were selected to check the variance of the data, there is now an 8.3% variance in what the final grade “could be”. Of course, as professionals who are intimately familiar with the curriculum in each of our subject areas, we typically weight the performance standards in relation to what is understood to be “the most important method of performance” for a subject area. And these category weightings should be communicated to students as early in the semester as possible, for example on a course syllabus, for the sake of transparency, fairness and equity.
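To see how sensitive the blended grade is to the chosen weightings, here is a minimal Python sketch that applies the three example weighting schemes above to one fixed set of category averages. The category averages are hypothetical stand-ins, not the values from my spreadsheet:

```python
# One fixed, hypothetical set of category averages (K, T, C, A)
averages = {"K": 70.0, "T": 80.0, "C": 85.0, "A": 90.0}

# The three example weighting schemes from this post
schemes = {
    "Example 1": {"K": 0.15, "T": 0.20, "C": 0.25, "A": 0.40},
    "Example 2": {"K": 0.20, "T": 0.35, "C": 0.25, "A": 0.20},
    "Example 3": {"K": 0.35, "T": 0.15, "C": 0.30, "A": 0.20},
}

# Blend: multiply each category average by its weighting and sum
blended = {name: sum(averages[c] * w[c] for c in w) for name, w in schemes.items()}
spread = max(blended.values()) - min(blended.values())

for name, grade in blended.items():
    print(name, round(grade, 1))
print("spread:", round(spread, 2))
```

Even though the underlying achievement is identical in all three cases, the weighting scheme alone shifts the blended average by nearly four percentage points.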

But it is here that the second principle of Growing Success kicks in to invalidate the process of calculating grades in this way:

support all students, including those with special education needs, those who are learning the language of instruction (English or French), and those who are First Nation, Métis, or Inuit (Growing Success, p. 6)

In example one, the achievement chart weighting clearly favours students who are more practical in their demonstration of content, as they are being asked to use it. In a variety of classes, students who may know what to do, but have difficulties with fine motor control, for example, may struggle with direct application.

In example two, the weighting is fairly normalized, but there is an emphasis placed on processes, planning skills and creative / critical thinking. Students who are new to the country and are learning a new language will struggle with thinking about and communicating abstract concepts that they have learned in English. This does not, however, mean that they do not understand.

Lastly, in example three, emphasis is placed on communication and knowledge. Here the calculation supports students who are efficient at “regurgitating” information, have a clear grasp of the language and are comfortable in their communication skills. But what about the child who is shy or has anxiety and may struggle with different forms of communication? What about students who have an IEP and, although they are accommodated, struggle with memory recall?

These are just some examples. As soon as the categories are weighted, we are advantaging one student over another. I have included the Google Sheet that I have been using so that you can play with the category weightings to check for variance for yourself.

The last calculation that I wish to review is that of an equally balanced calculation. This is where each of the achievement chart categories is weighted at 25%.

Example 4:

  • Knowledge: 25%
  • Thinking: 25%
  • Communication: 25%
  • Application: 25%
  • Blended Average: 77.8%
  • Blended Mode: 80.0%
  • Blended Median: 80.5%

Though this may be more mathematically balanced and creates the aura of “fairness” and “equity”, the question still remains: “Based on the data set, and the mathematical calculations working behind the scenes, do the percentage grades accurately represent what the student has learned by the end of the course?”

Grade Calculation is Opaque!

The Opaqueness of Calculations

I am no mathematician, but I consider myself numerate. I believe that I understand the “current practice” of grade calculation and have an understanding of the process. And yet, though I understand the process of calculating, the mystical quality of a relatively simple algorithm for calculating a grade is quite opaque. It is hidden behind a mask of relatively easy but interconnected calculations. When I calculated grades, I would often ask students if they understood how the grade on their report card was calculated. More often than not, they did not. This is further complicated by “professional judgement” – opportunities for teachers to include, emphasize, or omit certain evidence from the calculation.

To understand the calculations, and often the method used by such antiquated tools as Markbook, it is important to understand some key vocabulary.

  • category: the various performance standards by which students are evaluated
  • weighting (categorical): the percentage value associated with each category of the achievement chart
  • weighting (task): the numerical value associated with different assessments to represent the “importance” of one task relative to another
  • average: a value expressing the central or typical value in a set of data
  • mode: the value which appears most often in a set of numbers
  • median: the middle value in a data set
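As a quick illustration of these three measures, here is a minimal Python sketch using the standard library’s statistics module. The marks are hypothetical, not the actual data set from this post:

```python
import statistics

# Hypothetical marks (percentages) for one student
marks = [52, 60, 72, 90, 90]

average = statistics.mean(marks)    # sum of the values divided by the count
mode = statistics.mode(marks)       # the value that appears most often
median = statistics.median(marks)   # the middle value when the data is sorted

print(average, mode, median)
```

Note how the three measures can disagree sharply on the same data: a student who struggled early and finished strong produces a middling average, a high mode, and a median somewhere in between.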
[Screenshot: sample data set with straight average, mode and median]
The above data set represents a straight calculation based on the data acquired from different tasks. Looking at just the straight average, mode and median, there is a significant discrepancy between the highest calculated value (90%) and the lowest calculated value (66%). In this case, an accurate calculation of student learning, in relation to notions of consistency and recency, is not adequately represented by the calculated values.

Task weighting adds data points to the entire data set in order to increase the value / importance of one entry relative to another. For example, in the example we are using, the “Small Task” (identified in cell A5) has an assignment weight of 3 (cell C5). Reading across the achievement categories, each assigned mark would be added to the calculation three times based on the task weighting. This means that three 72% marks, three 60% marks, three 52% marks and three 76% marks would be added to the data set.
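The expansion described above can be sketched in a few lines of Python. The (mark, weight) pairs here are hypothetical stand-ins for the spreadsheet rows:

```python
import statistics

# Hypothetical (mark, task_weight) pairs; a weight of 3 means the mark
# is counted three times in the expanded data set
tasks = [(72, 1), (60, 1), (52, 3), (76, 2)]

# Expand the data set by repeating each mark according to its weight
expanded = [mark for mark, weight in tasks for _ in range(weight)]
print(expanded)  # [72, 60, 52, 52, 52, 76, 76]

weighted_average = statistics.mean(expanded)
weighted_median = statistics.median(expanded)
print(weighted_average, weighted_median)
```

Because the 52% mark now appears three times, it pulls the weighted average and median down relative to the unweighted versions – exactly the kind of hidden shift the expanded data set produces.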

  • weighted mode: the value that appears most often within a weighted data set
  • weighted average: the average of the marks in a data set, with each entry counted according to its task weighting
  • weighted median: the middle value in a data set, with each entry counted according to its task weighting
[Screenshot: expanded data set after task weighting]
This is the expanded data set based on the task weighting for each task. As you can see, by weighting, there is once again a significant shift in the numeric calculations. The weighted average has increased by approximately 4.5% and the median has jumped from 66 to 90.

Current practice, though, does not rely on a straight calculation of the weighted average, weighted mode and weighted median. Current practice has normalized what Markbook coins the “Blended Mode”. This takes the weighted calculations of each achievement category and multiplies them by the category weighting coefficient.

[Screenshot: weighted calculations for each achievement chart category]
The first step in calculating a blended grade is to take a weighted calculation for each achievement chart category. For example, the average for Knowledge is calculated by a simple formula, =average(K3:K20), resulting in 65.3%. This represents the student’s weighted average for one performance category from the achievement chart. Similar calculations are made for the mode and median for all achievement chart categories. As a result, the algorithm calculates the bolded values in the table above.

However, none of these numbers on its own is representative of the student’s holistic achievement. One approach, of course, is simply to average the achievement across the performance standards (Knowledge, Thinking, Communication, Application). Doing so, you can see the discrepancies in the calculated grades based on each method of calculation discussed so far.

[Screenshot: averages across the category calculations]
By simply calculating averages there is still a discrepancy of approximately 6% between the highest value and the lowest value of these calculations.

The promoted method of grade calculation, though, has been the “blended mode”. This takes into consideration the category weighting: you take the weighted average, weighted mode and weighted median and multiply each by the percentage associated with each achievement chart category. In the example above, we are using a weighting distribution of:

[Screenshot: category weighting distribution (Knowledge 15%, Thinking 20%, Communication 25%, Application 40%)]

The formula for the blended average then looks like this:

Grade = K*0.15 + T*0.20 + C*0.25 + A*0.40, where

K = weighted average of knowledge marks, T = weighted average of thinking marks, C = weighted average of communication marks, and A = weighted average of application marks

Similar calculations are made for determining the blended mode and median.
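As a sketch, the blended average formula above can be implemented directly. The category averages here are hypothetical stand-ins (not the values from my spreadsheet), and the weights are the 15/20/25/40 distribution from this example:

```python
# Hypothetical weighted averages for each achievement chart category
category_averages = {"K": 65.3, "T": 78.0, "C": 82.5, "A": 88.0}

# Category weightings from this example (must sum to 1.0)
weights = {"K": 0.15, "T": 0.20, "C": 0.25, "A": 0.40}

# Blended average: each category average scaled by its weighting, then summed
blended_average = sum(category_averages[c] * weights[c] for c in weights)
print(round(blended_average, 1))
```

The same multiply-and-sum step, applied to the per-category modes and medians instead of the averages, yields the blended mode and blended median.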

[Screenshot: blended average, mode and median results]
The end result is a blended average of 80.4%, a blended mode of 84.0% and a blended median of 84.3%. Through this process, the variance between the highest and lowest values has been reduced from a 24% difference to a 3.9% difference.

The question still remains: what calculated value best represents the student’s overall achievement?

Now in terms of Growing Success, we as educators are guided by 7 principles to ensure that our assessment, evaluation and reporting are “valid and reliable” (p. 6). The first of these principles is to ensure that assessment, evaluation and reporting “are fair, transparent, and equitable for all students” (p. 6).

In this post, I have investigated and deconstructed the process of calculation. Through this process, I have attempted to make the calculations “transparent”. “Transparency”, however, does not necessarily equate to the math being easily understood and accessible. Although the math is “fairly straightforward”, I conclude that the calculations are by no means “transparent”. On this account, the method of calculating a grade fails to meet the primary standard that we are called to reach as educators, as outlined in the assessment, evaluation and reporting policy of Ontario.


It’s Time to Acknowledge that Grade Calculation is NOT Objective!

There is something comforting about the notion that numbers “don’t lie”. Everyone can agree that 1+1=2 and that a 75% of 20 is 15. There is a familiarity that allows us to believe that when we use strict numbers that we are communicating a mathematically accurate representation of what students know and can do.

However, the numbers in and of themselves cannot be objective, because the processes that lie beneath the calculation are all determined by teachers – by people who make choices and decisions about what is taught, what counts as evidence of learning, and how much “important” evidence is worth.

To this end, calculating a grade is much like the evolution of photography.

Photojournalists have a code of ethics that they are bound to follow to maintain their credibility as photo-documenters of the news. In much the same way, educators in the province of Ontario are bound by professional standards that reflect the practices and ethical dimensions of teaching. The job of the photojournalist has become increasingly complex as more and more is understood about the psychological dimension of the photographic medium. Photography, once believed to be an “objective” medium because it emerged as a photo-chemical process that literally reproduced reality, has undergone significant change. Photographer-artists began to manipulate the photo-chemical process, creating hybrid images that were unreal (e.g. Hannah Maynard) but retained the illusion of authenticity, which of course required people to begin to question the reality being depicted. Over the course of many years, it has become better understood that the content of a photograph is actually a subjective interpretation and may or may not accurately represent what was photographed. There are various factors that go into the subjectivity of a photograph:

  • Who took the photograph?
  • What was chosen to be included in the photograph? What was cut out from the frame?
  • Where was the photograph taken? Which point of view or vantage point was the photograph taken from?
  • When was the photograph taken?
  • Why was the photograph taken?
  • How was the photograph created?

Though the photograph “looks real” and therefore appears truthful, the one element that many people forget is that the photograph was taken by a person who made certain decisions. Underlying the “objective image” are subjective choices. Therefore, the truthful photograph is no more than a perception.

What is interesting is that the same phenomenon exists around student grading. There is an implicit trust that if evidence of student learning can be quantified, it must be accurate and a more truthful representation of student learning. If we can count what students have learned, then we can calculate a truthful number to represent the cumulative learning of a student over a learning cycle. This is entirely and utterly false. Just as the photographer makes decisions that influence the final photograph, the teacher makes many choices about what, how and when students learn. Such choices include:

  • What lessons will be taught
  • When the lessons will be taught (e.g. in what order)
  • What counts as an assessment
  • When students will be assessed, given feedback or evaluated
  • What questions are asked
  • What counts as a suitable / appropriate response
  • How much each learning task is weighted
  • How each learning task is organized around the achievement chart

The teacher is making so many decisions about student learning that to effectively quantify learning and to say that it is entirely “objective” is ridiculous. The teacher, just as much as the student, makes the entire assessment and evaluation process very difficult. Assessment and evaluation is about people, and with people come filters that affect how evidence is perceived. Grades therefore are perceptions of student learning within a given framework that is created by the teacher, and as such can never be an objective representation of what a particular student knows or can do in his or her totality.

It’s Time to Change Our Practice: Determining a Grade is NOT an Option Any Longer

Teachers are creatures of habit. When we enter the profession as fledgling educators (many of whom are not prepared for the day-to-day reality of what it means to be in a classroom day in and day out), survival is paramount. To survive, we rely on the teaching strategies that our own teachers used – the ones that allowed us to learn best. The challenge for many teachers is to move past survival mode and to not become complacent and fall into the mindset of “this is how things are done”.

We live in an age of education in which we have a tremendous amount of material that can help us refine our practice, to become better learners and teachers. Admittedly, it is a challenge to keep up with the new pedagogies that seem to bubble to the surface each and every year. It is important to recognize which of these emerging strategies are simply fads that will sink back into the depths of social media conversation, and which are legitimately making a significant impact on student learning.

For the past seven years I have been investigating assessment and evaluation in great detail, particularly through the lens of learning goals. In 2010, Growing Success was released (the regulations that teachers in Ontario, Canada are to follow for assessment, evaluation and reporting), and I have been struggling with various aspects of this document: the foundational ideas are appealing, though not necessarily easy to adopt in a busy, hectic visual arts classroom. Now, in 2017, seven years later, digital tools such as Sesame are emerging to help fill the gap for gathering evidence of student learning from products, conversations and observations. No other tool that I have seen so seamlessly allows students and teachers to work together to gather evidence of learning, track it, and communicate about a student’s progress.

As I have refined the process of gathering evidence of student learning, the tools that I use to assess and evaluate students, and the workflow for maintaining digital files for students (currently through Google Classroom), my attention has gradually shifted to the reporting of student achievement.

And this is where things are getting very tricky systemically; teachers are desperate to hold onto practices that are familiar and comfortable.

Current Practice

Current practice dictates that communication of student achievement is based on mathematical calculation. In this case, practice refers to what is currently happening in reporting, not what ought to be happening. The practice varies immensely, depending upon the teacher, the subject, the department, and the school. I believe this variance has primarily been a result of the influx of evidence (or data) that we have been asked to gather as educators – assessment as learning, assessment for learning, assessment of learning, diagnostic, formative and summative assessments, observations, conversations, products, the contexts of knowledge / thinking / communication / application, accommodated tasks and modified tasks. The challenge has simply been: what do we do with all of this evidence of learning? Most teachers are not statisticians, and working with such large amounts of evidence is quite challenging, especially for those of us brought up in an era when it was simply quizzes, tests and projects that ultimately went into calculating your grade.

In 1999 the achievement charts were introduced. In 2010 they were revised. The achievement chart is described as the “performance standards” by which students demonstrate the “content standards”, or overall curriculum expectations (Growing Success, p. ##). Growing Success explains the relationship between these standards as the what and the how: what students need to know / do and how they demonstrate it. How students demonstrate their learning is determined by four categories:

  • knowledge & understanding,
  • thinking,
  • communication, and
  • application.

As teachers transitioned to the use of achievement charts, we were encouraged to “weight” each of the categories according to how they best relate to the subject / discipline curriculum. As a result, the teaching teams for each course in a school would determine how much each category is “worth”:

  • Knowledge & Understanding: 20%
  • Thinking: 25%
  • Communication: 15%
  • Application: 40%
  • Total: 100%

Teachers would then evaluate and document student achievement throughout the term and calculate a grade. Unfortunately, this practice is wholly incorrect and appallingly disadvantageous for many students.

(1) The shroud of mathematical calculation creates a facade of objectivity.

(2) This process does not meet the fundamental principle that assessment and evaluation and reporting be transparent – the math itself can become exceedingly convoluted.

(3) We are putting the emphasis of student learning in the wrong spot – we are calculating grades on how students learn, not on what they have learned.

(4) This practice is not founded in current policy.

I will expand on these four principles in my following posts.

Censorship of Student Art

Imagine. You are a young artist in a high school. You have worked for three and a half years building your technical skills and your conceptual understanding of what art is for. In your final year of high school, as you are preparing your portfolio for post-secondary school, you are told that your work is inappropriate.

This is exactly what happened to one of my students. A grade 12 black student painted this image.


Sabeehah B.L. “Kan’t Keep Killing” 2017

This artwork was created in order to shine a light on the systemic racism that takes place so often in our society. There have been many incidents of unnecessary police brutality and racial profiling against people of colour, which is how the well-known term “black lives matter” came about. Many people try to downplay these situations or make them seem like it’s the fault of the person being brutalized by the police. However, upon looking at cases like Sandra Bland and Trayvon Martin, cases that ended fatally for the victims, it is clear that they were singled out solely because of their race. Because there have been so many cases like this, I decided to create an artwork that clearly indicates that some police officers are racist and that that can be their motive to attack certain people. This artwork, however, is not meant to show that I believe that all police officers are racists, because I’m aware that that is not true. It is simply meant to say that there are police officers out there who are racist, and people must be aware that police officers carry out anti-black crimes all the time, and these choices have led to distrust from people of many communities towards police officers and towards the justice system.

       In the beginning stages of creating this artwork, I knew that I wanted to create something that would make people uncomfortable, but also make them think. I used my own experiences and experiences of other people of colour as inspiration for this piece. Due to my skin colour, I tend to get a bit anxious around police officers. I’m constantly wondering “what if this one’s racist?”, so I tried to depict this fear to the best of my ability.



I set up my grade 12 visual art course in an inquiry model, working explicitly to teach process. Students are responsible for content. They are required to choose a theme and develop a message that is of personal significance and interest to them as they create a body of work. She developed this piece out of her own anxiety and the fear that she and her peers feel around the local police enforcement, as there is a carding / profiling policy in place in our region.

After the work was complete, my colleague and I put this piece on display in the school. It was displayed for nearly two weeks until the morning of Wednesday, November 9, 2016 (notably the day after Donald Trump was elected as the 45th president of the United States), when the painting was turned around in the display case by one of the custodial staff of my school. The wake of the next several weeks will never be forgotten.

The painting was removed because it was deemed too controversial. (Admittedly, the artist statement did not originally appear in the display case, as my teaching partner and I got distracted from finishing the display – nonetheless, it was up for two weeks without incident.) After a conversation with my principal, who was very supportive of the work but required to follow direction on the matter, we started an ongoing conversation with Sabeehah. She demonstrated extreme poise and maturity as she defended her artwork. This experience also created an exceptional learning opportunity for the entire class, who came together and supported Sabeehah in a variety of ways.

The discussion quickly escalated from the principal’s office, to the superintendent, all the way to the office of the director of education for the PDSB. Once the painting was finally “cleared” and allowed to be put back up, there were conditions. The stakeholders (e.g. the community police officers for the school, as well as the custodial staff who first made the complaint) had to be okay with the decision to have it displayed. This is the equivalent of asking the oppressed to seek permission from the oppressor, which creates an inequitable power relationship in a modern and democratic society. We were also notified that the chief of police for our region was having discussions around this piece – and yet, at the same time, never initiated a conversation with Sabeehah or myself about the overall context, or about how to work on alleviating the broken relationship that exists between the police and the black community in Brampton.

In the end, Sabeehah became even more persistent in her quest to voice her experience, and developed the courage to share her feelings and opinions about a very controversial issue that is experienced, but unfortunately is rarely talked about as much as it should be.

Over Design

One of the most difficult tasks for me as a teacher is planning the curriculum for a given semester. I am conflicted by the vast quantity of “things” that my students can learn from the time that they spend with me versus what is reasonable and achievable for what can actually be accomplished.

When I sat down and planned using the backwards design model developed by Roland Case and Garfield Gini-Newman, also known as cascading curriculum, I became so excited with the possibilities that I quickly realized I was trying to cover too much. The learning narrative that I was planning allowed for a good balance of skill development as well as acquisition of key concepts. The problem, which of course I did not realize at the time of writing, is that there is just too much.

Granted, having more planned than you can get through has its benefits, but it also has its drawbacks. I put considerable pressure on myself to teach everything I plan, as opposed to using my plans as a scaffold: “This is the basic direction that I would like to go, and there may be things that pop up during a semester that take more or less time for the group of students I am currently working with.” I see this especially with the cascading curriculum around the theme of narrative that I developed for the ASM2O0 – Media Art course, which I have had the luxury of teaching again.

In the near future, I will be going through a review process for the cascading curriculum, determining how to structure and format the flow of learning to create a baseline (what all students NEED to learn in order to reach the overarching learning goals for the course) as well as areas for extension (for students who are genuinely interested in adding more tools and concepts to the repertoire of skills that they want to leave the course with).

In order to accomplish this differentiation, though, I need a tool that can provide the structure for students learning at different paces. I believe this is where gamification can possibly come in. I will be following up on this thought in a future post.

Digital Jigsaws

I have been fascinated as of late with the power of Google Docs, when implemented in an educational environment, to stimulate new ways of getting students thinking. A classic educational strategy is the jigsaw, which admittedly I have not used effectively up until this point. One of my biggest sticking points with this strategy was the difficulty of getting students to genuinely share and discuss the information they found in their content-specific research areas when they brought it back to a group that needed to assimilate different ideas and knowledge. Often, this ended up as little more than “pass the paper and mindlessly copy.”

With the use of Google Docs, the facilitation and sharing of information is much easier. Students co-construct a single document. This allows for focused conversation that can be directed with some guiding questions if students struggle to naturally investigate intersecting concepts. For example, in grade 10 visual art, one of my goals is for students to understand the expressive qualities of the elements of design (i.e. if you want to communicate humility, what type of line, shape / form, colour, texture, and value would you use?). I have organized students using a matrix such as the one below.

[Screenshot: a matrix with personality traits as rows and elements of design as columns, used to organize the student groups]

In this example, all students in a row work together to understand how the elements of design communicate a particular personality trait. I create a document and duplicate it by the number of groups (in this case, five). I then share each document with all members of that personality-trait group.

Once students have brainstormed and understood their personality trait, they move to their research groups (based on the columns). Here they research a common topic, working together to make sure they fully understand its nuances.

After all the research is complete, they go back to their original group (the row from the chart) and discuss and examine how the elements work to communicate the personality trait.
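For anyone who wants to automate the group assignments, the row/column logic above can be sketched in a few lines of Python. The trait and element names here are illustrative placeholders (any matrix of “home” topics and research topics works the same way), and the helper names are my own, not part of any particular tool:

```python
from itertools import cycle

# Rows are the "home" (personality-trait) groups; columns are the research groups.
traits = ["humility", "confidence", "anger", "joy", "serenity"]
elements = ["line", "shape/form", "colour", "texture", "value"]

def assign_groups(students, rows, cols):
    """Assign each student a (home, research) pair by filling the matrix cell by cell."""
    cells = cycle((r, c) for r in rows for c in cols)
    return {student: next(cells) for student in students}

def regroup(assignments, index):
    """Collect students sharing the same row (index=0) or column (index=1)."""
    groups = {}
    for student, pair in assignments.items():
        groups.setdefault(pair[index], []).append(student)
    return groups

students = [f"student_{i}" for i in range(25)]
assignments = assign_groups(students, traits, elements)

home_groups = regroup(assignments, 0)      # phases 1 and 3: personality-trait groups
research_groups = regroup(assignments, 1)  # phase 2: element-of-design groups
```

With 25 students and a 5 × 5 matrix, every student lands in exactly one home group and one research group, which is the property the jigsaw depends on; with uneven class sizes the `cycle` simply wraps around and some cells get two students.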

What I have recognized is that at this last stage, the best way for students to consolidate their learning is through a thought-provoking question. When I do this lesson again, I will have students come back and respond to the following prompts:

  • Write a short paragraph describing the most common qualities that are consistent between elements of design that communicate your personality trait.
  • Summarize the most essential visual qualities of the elements that are used to communicate your personality trait in one sentence.
  • Identify three words that most clearly articulate the qualities of the elements of design in communicating the personality trait.

 

The Importance of Criterion Thinking

Critical thinking is one of those educational buzzwords, along with inquiry-based learning, problem-based learning, challenge-based learning, gamification, personalized learning, flipped classrooms… Education, like anything else, is full of fads. Ideas come and ideas go. As I write this, I am not sure whether critical thinking will be one of these fads. What I do know, however, is that after exploring many of the approaches mentioned above, one thing they all claim to promote is improved student outcomes. What makes teaching through critical thinking different is that, from the outset, the primary objective is to assist students in developing criteria-based thinking.

What is Criteria Based Thinking?

In its simplest form, it is recognizing that for students to be able to make any particular judgement, the first thing they need is criteria. Criteria, in essence, form the foundation for any reasonable and important thinking endeavour. Once students recognize that they need a standard by which to gather and evaluate information, then they are genuinely thinking. Criteria make the thinking that we ask our students to do every day more meaningful and purposeful.

The goal of teaching critical thinking, though, is not to provide students with the criteria, but rather with the tools for identifying and establishing criteria. This includes simple statements such as “the painting needs to be expressive”, but also more detailed descriptions of the identified criteria (e.g. to be expressive means that there is evidence of brush stroke, choice of colour, composition, etc.).

Once students know how to identify and describe success criteria, they are then better able to gather, assess and evaluate information as it pertains to the criteria that they are using as the basis of the judgement.

Why do Students Need Criteria?

It is well known that when students know the criteria for their learning, their ability to successfully demonstrate that learning increases. The research supporting this basic understanding has been around for the past few decades and now forms the foundation of our assessment and evaluation policies in Ontario (see Growing Success, Ministry of Education, Ontario, 2010).

If we transfer this basic understanding to the domain of thinking, it becomes apparent that students need criteria to think effectively and accurately. Students need criteria to be successful thinkers. Teaching students what effective criteria looks like then becomes the foundation upon which our teaching should be built. From there, students can unlock their capacity for thinking, which will improve how they learn and what they learn in every domain.

 

Why are we under the impression students don’t know how to think?

Another reason we are under the impression that students don’t know how to think is simply that they are constantly looking for the easy answer. The easy answer typically does not require the most complex thinking; it provides a superficial exploration of an idea or topic. As educators, it is essential that we recognize that poor student engagement in thinking is not the sole responsibility of the student.

How we choose (or choose not) to design our curriculum is an essential factor in whether our students engage with what they are learning. A wide range of authors, such as Marc Prensky, Sir Ken Robinson, and Garfield Gini-Newman, understand that students don’t need our classrooms to learn. They are learning all the time. Education is no longer a means to knowledge; knowledge is readily available. Education is a means of learning how to use knowledge in creative and innovative ways.

The next time we see students in our classes who are disengaged, consider that it is not that the student is lazy, but rather that what they are learning has little to no real application to their daily lives. No one is going to think about something that does not genuinely matter to them. Students are no exception.

Engaging students

There are a variety of strategies to engage students. I have tried several (flipped classrooms, student choice, theme-based education) but the difficulty remains the same: students need to be motivated to learn, or need to know what they don’t know, in order to make meaningful decisions about what they are choosing to learn. In either case, some students do not readily engage with these formats because they lack the maturity and experience to be engaged by these forms of learning.

Of course, even a thinking-inspired curriculum design will not have 100% buy-in from students. There is no “magic bullet” for educators, only tools that we can identify and use for different students. However, after spending the past month really pushing students with thoughtful questions, I have seen very positive results. Students are willing to engage in significant conversations, reflect more deeply, and take risks because they recognize that they don’t always need the “right answer”, only a “well justified” answer. Through open-ended questioning, students find that their thinking is valued. And when they feel valued, they are also engaged.