Yesterday I had a class of 20 MFL’ers (English, Spanish, French and German) for a session on testing. It’s rather late in the year to start talking about this, seeing as they started their teacher training back in September, but if we swapped it with another class, such as vocabulary teaching, then I’d only be complaining that it was too late to start teaching vocab (this girl’s hard to please!), so I always try to check prior knowledge in advance and make the class as practical as possible.
Testing is seen by many as a necessary evil; I find it fascinating. The more I read about the dos and don’ts, the cans and hows, the coulds and shoulds, the more interesting it becomes. The more I hear from my students about how testing is carried out at their schools, the more shocked I become!
I teach this class in Dutch (seeing as it’s for all the MFL students), whereas the February cohort only ever has English MFL students, so then I get to do it in English. I think I’ve taught this class four or five times already and each time I manage to change it. After each class I sit down and think about what I was or wasn’t happy with. Before each new class I check whether I have some fresh articles, new information I’ve discovered, new questions from students, etc. So here is what I did yesterday:
Aim: to give the students a global idea of the what, why and how of testing.
I started with a discussion around the following questions:
- Does your school use the Cito viewing and listening exams? (These are exams made by the national exam board; they’re not compulsory.) What are your experiences with these exams?
- Do you practise for exams? If so, how?
- What sort of tests have you used in/made for your classes?
- Have you marked tests?
- What do you find difficult with regards to marking tests?
- What problems have you encountered with regards to testing?
- What problems do your pupils have with regards to tests?
I generally find it interesting to start with such a discussion so that we all know what’s going on in each other’s schools and language departments, and where we all stand as individuals with regards to our vision on testing. Many schools have set tests, often taken from the course books, and our students are often not allowed to make their own tests.
After that I go on to an input section (pretty traditional lecture-style, though I do expect questions and input from the students). So here is a brief outline of the input section:
- Why do we test? (certification, examination, transparency, system monitoring, school quality, to keep parents informed of progress, etc).
- Why do we test? (utilitarian, political, sociological, technocratic and practical reasons).
Then comes another discussion:
- How much time does your school spend on testing?
- Does your school also perform tests for other purposes, i.e. not for a grade (e.g. assessment for learning, portfolios, CEFR-style)?
- Which skills are tested?
- Are the tests mainly aimed at short term or long term memory or both?
- Does the school have integrated tests (content and language)?
Because of time constraints, we don’t generally get round to discussing every point. If I notice students have a lot to say about one point then I tend to go with the flow. I do, however, always encourage them to sit down later and calculate the amount of time spent on testing compared with the amount of time spent on teaching and on individual learning. Their gut feeling tells them that many schools spend huge amounts of time on testing and learning for tests, and very little time on actual ‘teaching’ (politically incorrect, as teachers are ‘supposed’ to assist learning).
We then go on to look at the “physicality” of tests, after first examining some rejected questions from the national exam board (with thanks to a wonderful man who made this available to me!). The points we look at are as follows:
- aims (do pupils know what the aims are)
- organisation (where, when, surroundings, timing, etc – to lower stress)
- clarity of questions
- suitability for age
- points/value per question
- complete sentences necessary/gap fill?
- authentic language
What I find important to pass on to the students is that a test can be seen as a moment for reflection. It is an opportunity for pupils to demonstrate what they CAN DO, i.e. the cheese, not the holes! The other thing I emphasise is that it’s not always necessary to use the calculation system frequently seen here, whereby the agreement is that 1 point is deducted for every 2 mistakes, for example. I prefer students to look at the positive side and award points for every correct answer rather than deducting points for every incorrect one.
After this we move on to validity and reliability (also with regards to aural and oral testing) and then finally on to the tests the students have brought with them. Some are their own tests, some are standardised tests from course books and others are tests devised within their department. This is the most interesting part of the entire class because we see some really good and really bad examples. One (bad!) example I saw yesterday was a multiple-choice test in English in which first-year pupils were tested on their comprehension of a book they had read. The layout was abysmal (a typewriter – remember them?!) and the test asked for details such as what clothes one of the characters was wearing in a certain scene. Comprehension or memory? Not exactly motivational!
We then moved on to the final section and, in my opinion, the most important one: what happens afterwards? We looked at pedagogical grading, the official psychometrics of testing, how to mark tests and what to do after marking tests, i.e. feedback and washback. Many schools here do not let their pupils take tests home for fear of them being copied, because they reuse the same tests year in, year out. Pupils see the tests and their grades during class and then have to hand everything back in again. This creates a competitive atmosphere whereby the pupils focus purely on their grades. Some teachers do, of course, spend time on feedback, correction and evaluation, but unfortunately they appear to be in the minority.
Next time around, with the February cohort, I think I’ll swap the order around a bit so that we have fewer ‘blocks’ of me, then the students, then me again, and instead base the class more on the tests they bring with them. I think it would be better to talk about layout, for example, and instantly look at their tests; then look at the points per question and look back at their tests; then timing, and so on.