Who We Are



How to assess higher order thinking skills? Theory and practice for paper based and computer based item formats.


Caroline Jongkamp (1966) is a senior consultant for international projects at Cito, the Netherlands. She is experienced as a professional test developer in economics, financial accounting, and ICT. At Cito, she was responsible for the implementation of computer-based testing in the Dutch secondary leaving exams. She is an experienced change manager who has led change processes in item banking and computer-based testing.

She holds an MSc in Econometrics with specialization in Operations Research.

Nico Dieteren (1960) is a senior consultant for international projects at Cito, the Netherlands. He is experienced as a professional test developer in economics, geography and social sciences. At Cito he was one of the leading test experts in the first experiments with the use of computers in high-stakes final exams for secondary education. He has also served as department manager for final examinations in secondary education in the social sciences and arts.

Mr Dieteren holds an MA in Economics and an MA in Geography, with a specialization in Economic Geography, and holds first-degree teacher licences in both subjects. Since 2014 he has been accredited as a Practitioner by the Association for Educational Assessment – Europe (AEA-Europe).


Why IAEA members should attend this workshop:

The workshop will offer an introduction to item construction for higher-order thinking skills, with applications for paper-based and computer-based testing. Participants will gain insight into the do's and don'ts of developing good open and closed test items, and will work through practical exercises with checklists for item properties and for the selection of good contexts. The use of taxonomies for developing an item bank will be demonstrated, starting from the well-known (revised) Bloom classification. Participants will then be introduced to the Scalise taxonomy, which offers special features and opportunities for developing computer-based test items. This introduction will be followed by practical exercises. Participants will discover that this taxonomy is not just another classification, but also offers item writers useful 'templates' for innovative item development for computer-based testing. We will encourage participants to work in small groups, as exchanging experiences, views and opinions with other experts also stimulates learning.


Who this Workshop is for:

The workshop is aimed at those who want to learn more about developing good test items that assess higher-order thinking skills in high-stakes tests. Participants may be novice or more experienced practitioners. No prior knowledge is required to attend the workshop, although we specifically aim to attract practitioners who are actively involved in and/or responsible for test and item development projects in their own professional environment.

Participants are invited to bring their own laptops for the practical exercises (Windows, with the Chrome browser).



The workshop starts with an introduction to the general framework for item development and to the steps in the test development cycle, from the perspective of the test developer.

The first session of the workshop will cover practical do's and don'ts in constructing closed and open test items. Based on commonly accepted rules and guidelines, participants will screen example items and learn to distinguish between 'the good' and 'the bad'. We will make use of handouts and checklists. For some this will be a refresher of what they already (should) know; for others it may be quite new.

In the second session participants will learn about the main features of item development for higher-order thinking skills, together with the theoretical background of assessing productive skills in realistic contexts. We will first do this for standard paper-based assessments, using the revised Bloom taxonomy.

In the third session participants will be introduced to the special features of a specific taxonomy that can be used for assessing higher-order thinking skills in computer-based formats. This taxonomy will prove to be not only a useful classification tool, but also a practical guide for item developers looking for specific item types that can be offered in computer-based testing platforms.


Preparation for the workshop:

No special preparation is required; the workshop format will be interactive, allowing participants to discuss their own experiences and/or problems. If available, participants are encouraged to bring examples of their own items, in paper-based or computer-based format. The workshop leaders believe that sharing experience in item and test development will stimulate and enable participants to solve the educational measurement problems that they encounter in their practice, or anticipate encountering.


Workshop program




Coffee and registration


Welcome & introductions

General framework for test- and item development


Do’s and don’ts in constructing closed and open items

Practical exercise 1




Development of items for HOTS and taxonomies

Practical exercise 2


Item types for computer based testing and specific taxonomy

Practical exercise 3


Workshop close and evaluation





Issues around how best to provide evidence for assessment validity, reliability and fairness: the practice and challenge of validation


Stuart Shaw began his career as an engineer, and holds an honours degree in Physics, a diploma in Applied Physics and a research degree in Metallurgy. Following his time in industry, he entered the world of TEFL (Teaching English as a Foreign Language), gaining a certificate and diploma in TESOL and a Master's degree in Applied Linguistics. He has several years of experience as an EFL teacher and Director of Studies. Stuart also holds a postgraduate degree in Theology.

Stuart has worked for Cambridge Assessment since January 2001. He has experience in the areas of researching and managing second language writing assessment in an ESOL context. He is an experienced presenter and has lectured for the Department of Theoretical and Applied Linguistics (University of Cambridge). He is currently an affiliated lecturer with the Faculty of Education (University of Cambridge).

Stuart is a Fellow of the Association for Educational Assessment in Europe (AEA-Europe) and has been a member of the Professional Development Committee (AEA-Europe). He is also a Fellow of the Chartered Institute of Educational Assessors (CIEA).    


Why IAEA members should attend this workshop:

The workshop is intended to make the complexities of validation theory and practice more apparent and more understandable.  


Who this Workshop is for:

The workshop is envisaged as a resource for students of educational measurement and assessment, for key practitioners in assessment agencies who wish to gain a deeper understanding of validation, for those with an academic interest in assessment, and for validity novices, who will also be able to benefit from attending.



This workshop will highlight the challenges faced when validating the intended interpretation of test scores and their relevance to the proposed uses of those scores. It is hoped that the workshop will engender discussion focusing on the issues raised when developing, piloting and implementing (in an operational context) a test validation framework that attempts to structure validity evaluation via a number of questions representing components of validity for specific qualifications.

Specifically, it will address a number of outstanding validation challenges: where to start (identifying claims, purposes, interpretations and uses of test scores), how to proceed (determining the relevance and sufficiency of validation evidence), when to stop (evaluating validation arguments), and how to report (identifying appropriate audiences and tailoring content to their requirements).

Validating proposed interpretations and uses of test scores is a difficult task and it is hoped that by sharing experiences through a collaborative workshop environment, greater insights will be drawn leading to an increased understanding of the validation process. 


Workshop program

Coffee and registration

Introduction: why is validation so important?

Theoretical perspectives on validation

Developing a validation framework for general educational assessments


Validation challenges

Claims, purposes, interpretations, uses

Constructing validation arguments

Identifying and collecting validation evidence (adequacy and relevance)

Evaluating validation arguments

Reporting validation findings


Feedback on issues discussed in relation to particular contexts

Summary of issues


Useful preparatory reading for the workshop:

Shaw, S. & Crisp, V. (2012). An approach to validation: Developing and applying an approach for the validation of general qualifications. Research Matters, Special Issue 3, 1–44. (A copy will be provided for workshop participants.)


Newton, P. E. & Shaw, S. D. (2014). Validity in Educational and Psychological Assessment. London: SAGE. (Chapter 5: The deconstruction of validity: 2000–2012.)
