Friday, March 21, 2014

The Art of Designing eTasks

There are at least two ways to help teachers who are designing iPad activities with students evaluate the tasks they create. The SAMR model helps a teacher/task designer become aware of what stage a task falls into in terms of its use of technology. Bloom's Taxonomy, applied to apps, helps teachers think about the kinds of questions we ask students and how we should vary the tasks we offer. While delivering the workshop From Image to Deep Learning, I began to understand that teachers can also look at the learning cycle as a whole, and at how the learning brain works, to promote deep learning. The ideas I share here were inspired by the book The Art of Changing the Brain, a must-read for any educator willing to take a look into the biology behind learning.




In the workshop, I asked the audience how to teach questions with "does" to teens and how to develop tasks with the learning cycle in mind. After a quick debriefing, I showed a simple iPad activity I had carried out in a class of 11-year-olds, talked about my take on the lesson, and expanded on why I think this task pleases the learning brain. Now I am posting my ideas here to help me reflect on my practice, with the learning cycle described in the aforementioned book in mind.




I showed students a quiz about a famous person I knew they would be interested in. Students took the quiz, and I inductively helped them notice how to form questions about a third person's likes and dislikes. Then, I asked them to gather information about a celebrity they follow to make a quiz of their own.
I was afraid I'd have no pictures to work with in the following class, but to my surprise, students had bought into the idea and had pictures and lots of information to work with. I was ready to go, so I set up the iPad activity and monitored students. Here is what two pairs produced using a wonderful app called Visualize.




In The Art of Changing the Brain, Zull talks about phase 1, concrete experience. In this phase, there is activity in the sensory cortex, where we receive, gather, and begin to process visual, auditory, tactile, olfactory, and gustatory information. Phase 2, reflective observation, seems to describe activity that takes place in the integrative cortex. This is the time to connect sensory images to prior experience in one's neural networks, or schemas. In class, passing from phase 1 to phase 2 might take time, as learners need to relate new information to what they already know. We cannot rush. We must allow time for thinking and recalling, as well as time to reflect upon the learning experience.

In the activity I proposed, my students were exposed to a visually appealing quiz about a person they were genuinely interested in, and they took the quiz themselves to find out how much they knew about that person. As I see it, students went through phases 1 and 2 of the learning cycle before we started the second part of the activity.

In phase 3, abstract hypothesizing, the front integrative cortex is at work. Students start to prepare to do something with the newly acquired knowledge. In the iPad activity, I asked students to gather the information about their favorite celebrities and start to put it into the format of a quiz for the other students in class. By asking students to make these quizzes to communicate their newly acquired knowledge, teachers allow students time to test their hypotheses and think. In phase 4, active testing, students shared their quizzes, and by doing so, provided their peers with concrete experiences, so the whole class was back to phase 1. Learning becomes cyclical and ongoing, and hopefully students will remember the language point long after the day of the test.

In conclusion, instead of asking students to pay attention, it is better to engage them in tasks in which they are supposed to reach outcomes, or to ask them to look at topics from different angles. Instead of sitting still, learners could be asked to move around to see the details. In other words, by making learning more concrete, we might reach concrete outcomes.

Monday, March 17, 2014

Five myths about formative assessment

As I am involved in the planning and execution of a formative assessment system for my institution’s adult course, this is a topic that has been on my radar lately. In fact, my previous post was exactly about a formative oral assessment activity. I was also recently invited to conduct a discussion with a group of Language Arts high school teachers implementing an innovative portfolio system for the assessment of their students’ writing.
This recent and extensive contact with teachers in my institution and our partner high school, both piloting formative assessment systems, has raised my awareness of some common myths about formative assessment:

Myth # 1: Formative assessment cannot result in a numerical grade

It is common for educators to associate summative assessment with numerical grades and formative assessment with qualitative performance descriptors. Actually, it is not the grade or the lack of grade that makes the assessment summative or formative. You can have summative assessment with qualitative descriptors and formative assessment with numerical grades.
What makes an assessment tool summative, be it a test or a performance assessment, is the fact that it is administered at the end of a learning cycle. Examples of summative assessment are final exams and proficiency exams. An oral test with qualitative can-do statements, administered at the end of a course, is summative. Conversely, formative assessment is used to “evaluate students in the process of ‘forming’ their competence and skills with the goal of helping them to continue that growth process” (Brown, 2004, p. 6). A graded test aimed at gauging students' retention of the course content, followed by re-teaching of the areas students had difficulty with and re-testing, is an example of formative assessment, even if it generates a numerical grade.

Myth # 2: Formative assessment can only be used as an informal assessment tool in more traditional settings

This second myth is the result of the first one. It is believed that because formative assessment cannot result in a numerical grade, systems that rely on numerical grades cannot use formative assessment or can only use it informally.
Some time ago I attended a talk in which the presenter showed various examples of how her institution used formative assessment in its courses. At one point during the presentation, I asked her what percentage of her final grading system comprised formative assessment. None! Despite the beautiful work done with formative assessment, such as projects, at the end of the day, what really counted were the tests! Thus, the formative assessment ended up being only informal assessment, the type that “elicits performance without recording results and making fixed judgments about students’ competence” (Brown, 2004, p. 5). With well-developed scoring rubrics, though, encompassing not only the product but also the learning process, these formative assessment tools can generate a grade that can compose the general grading system.

Myth # 3: Multiple-choice and selected response tests are always summative, while performance assessments such as portfolios and projects are always formative

How an assessment tool is used determines whether it is summative or formative, not whether it is a test or another type of assessment. Even a portfolio can be summative if students collect work over a period of time and only receive feedback on it at the end. The same applies to project-based learning. If grades on projects are based on the final product only, with no consideration of the process and no feedback during the execution of the project, then the assessment is only summative. Thus, the use of rubrics per se doesn’t qualify an assessment as formative. It is how the rubrics are used and what they consist of that makes the difference.
On the other hand, as mentioned above, a very traditional multiple-choice test can be formative if it is used to gauge student learning and there is opportunity to take the test again. I remember when I moved to the United States to get my Master’s Degree and had to take a driving test. I failed the theoretical test and was asked to go home, study the items I had gotten wrong, and go back the next day to re-take the test. To my surprise, it was the exact same test. What they wanted was for me to master the content, not to punish or trick me!

Myth # 4: Formative assessment isn’t rigorous enough, so it cannot compose a major part of students’ final grade

We tend to confuse rigor with punishment. Traditionally, rigorous tests and other types of assessments are those that are extremely difficult and that very few students do well on. According to traditional testing theory, a good test is one that discriminates the good and the bad students effectively.
Formative assessment is based on a different logic, or paradigm, one in which it is believed that every student can do well under the right conditions and with the right amount of practice. If a student needs to take a test again and again until he/she masters the content, why not? Formative assessment is for learning, not of learning. Thus, the rigor of formative assessment is of a different nature. Formative assessment is not a funnel that only a few get out of; rather, it is an inverted funnel, which few may get into at first but all or most will get out of eventually.
Putting together a writing portfolio with multiple drafts of compositions, based on the teacher’s and the peers’ feedback, and writing a reflective piece explaining what one has learned from the experience and how the portfolio portrays growth requires much more critical thinking and agency, and is thus much more rigorous, than merely writing a number of one-shot compositions and receiving a meaningless grade on each one.

Myth # 5: Formative assessment is not realistic because students will have to take summative tests all their lives

Students might have to take summative tests all their school lives, at least until schools adopt more formative types of assessment. Other than that, how many tests do we really take in life? A university entrance exam (in the case of Brazil)? A foreign language proficiency exam? Or perhaps a public service entrance exam? How many of our students will actually take these types of exams, and how frequently? Of course, we do have to prepare students to face high-stakes exams and must include summative assessment in our curriculum, but does it need to be the only type of assessment we use?
With the exceptions mentioned above, most of what we learn in life is assessed formatively. We make a mistake, receive feedback on it, and have the chance to correct our path the next time. I’m in the process of learning how to make risotto. I’m not a good cook at all, so I looked up a recipe that I thought was straightforward enough for me, tried it out with my family, received feedback on it, improved my risotto, and then felt ready to invite some close friends over to try it out. Now that it seems that they, too, liked my risotto, I might be ready to invite other people over, maybe even some friends that cook very well. This sounds more real-life to me!

Reference:
Brown, H. D. (2004). Language assessment: Principles and classroom practices. White Plains, NY: Longman.


This is a crossposting from my blog TEFLing