
Assessment Techniques


Example Assessment Techniques

The techniques below are adapted from Brown (2001).[1]

Cases and open problems

  • Have potential for measuring application of knowledge, analysis, problem-solving and evaluative skills. Short cases are relatively easy to design and mark. More complex cases and their marking schemes are more challenging to design and develop. Marking for grading and feedback is about as fast as essay marking. Works very well in a wiki.

Computer-based assessment

  • Can be used within learning management systems (e.g. Sakai) to format multiple-choice questions, mark them and analyse the results (a hypothetical scoring sketch follows below). A wider range of graphics and simulations can be used. Can be time-consuming to create, but marking is very fast. Reliability is high, but validity (match with outcomes) needs careful attention.
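
The sketch below (Python) is one hypothetical way automated marking of exported multiple-choice responses might look; it is not Sakai's own tooling, and the CSV layout, file name and answer key are assumptions made for illustration only.

  # Hypothetical sketch: score multiple-choice responses exported from an LMS as
  # CSV, one row per student (id followed by one answer per item). The file
  # layout and the answer key are assumptions, not an actual LMS export format.
  import csv

  ANSWER_KEY = ["B", "D", "A", "C", "B"]  # assumed key for a five-item quiz

  def score_responses(path):
      """Return each student's mark out of len(ANSWER_KEY)."""
      marks = {}
      with open(path, newline="") as f:
          for row in csv.reader(f):
              if not row:
                  continue  # skip blank lines in the export
              student_id, *answers = row
              marks[student_id] = sum(
                  given == correct for given, correct in zip(answers, ANSWER_KEY)
              )
      return marks

  if __name__ == "__main__":
      for student, mark in score_responses("quiz_responses.csv").items():
          print(f"{student}: {mark}/{len(ANSWER_KEY)}")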

Direct Observation

  • Useful for immediate feedback, for developmental purposes and for estimating performance, provided a simple, structured system is used. The presence of the observer can change the performance, so the method should be handled sensitively. Impressionistic observation can be useful if supported by constructive feedback. Can be used by a group of peers to provide feedback as well as assessment. Intensive, lengthy training is required for high reliability if detailed checklists are used. Reliability, validity and manageability are fairly high when structured observation is used.

Essays

  • A standard method. There are several types of essay that test different styles of writing and types of thinking. Measures understanding, synthesis and evaluation, provided you ask the right questions. Relatively easy to set. Marking for grading based on impressionistic marking is fast. Marking for feedback can be time-consuming. Keep the criteria simple. Variation between assessors can be high, and so can variation within the same assessor over time.

Learning logs/diaries

  • Wide variety of formats, ranging from an unstructured account of each day to a structured form based on tasks. Some training in reflection is recommended. Time-consuming for students. Requires a high level of trust between assessors and students. Measuring reliability is difficult. May have high validity if the structure matches the learning outcomes. Works very well in a wiki or blog.

Mini-practicals

  • A series of mini-practicals undertaken under timed conditions. Potential for sampling a wide range of practical, analytical and interpretative skills. Initial design is time-consuming. Some, if not all, of the marking can be done on the spot, so it is fast. Feedback to students is fast. Reliable, but training of assessors is necessary.

Modified Essay Questions (MEQs)

  • A sequence of questions based on a case study. After students have answered one question, further information and a further question are given. The procedure continues, usually for about one hour. Relatively easy to set. May be used in teaching or assessment, for developmental or judgmental purposes. Can be computer- or paper-based. Can encourage reflection and analysis. Potentially high reliability, validity and manageability.

Multiple Choice Questions (MCQs)

  • A standard method. Can sample a wide range of knowledge quickly. Has potential for measuring understanding, analysis, problem-solving skills and evaluative skills. Wide variety of formats, from true/false to assertion-reason. More complex formats are not recommended: they confuse students unnecessarily and are time-consuming to design. More demanding MCQs require more time to set. Better ones are based on case studies or research papers. Easy to mark and to analyse the results (see the item-analysis sketch below). Useful for self-assessment and screening. Potentially high reliability, validity and manageability. Feedback to students is fast. Danger of testing only trivial knowledge. A team of assessors, working to the same learning outcomes, can brainstorm and produce several questions in an afternoon.
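
As one hypothetical illustration of analysing MCQ results, the Python sketch below computes a facility (difficulty) index for each item from a 0/1 score matrix; the data layout and the flagging thresholds are assumptions for illustration, not part of the original guide.

  # Hypothetical sketch: simple item analysis on a 0/1 score matrix
  # (one row per student, one column per question). The facility index is the
  # proportion of students answering an item correctly; extreme values help
  # flag items that may be trivial or confusing. Thresholds are illustrative.
  def facility_indices(score_matrix):
      """Proportion of students answering each item correctly."""
      n_students = len(score_matrix)
      n_items = len(score_matrix[0])
      return [
          sum(row[item] for row in score_matrix) / n_students
          for item in range(n_items)
      ]

  if __name__ == "__main__":
      scores = [  # assumed toy data: four students, three items
          [1, 0, 1],
          [1, 1, 1],
          [0, 0, 1],
          [1, 0, 1],
      ]
      for i, facility in enumerate(facility_indices(scores), start=1):
          flag = "  <- check this item" if facility < 0.2 or facility > 0.9 else ""
          print(f"Item {i}: facility {facility:.2f}{flag}")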

Orals

  • Tests communication, understanding, capacity to think quickly under pressure and knowledge of procedures. Feedback potential. Marking for grading can be fast but some standardisation of interview procedure is needed to ensure reliability and validity.

Objective Structured Clinical Examinations (OSCEs)

  • Initially used in medicine, but can be used in business, legal practice, management, psychology, science courses and social work. Particularly useful for quickly assessing practical and communication skills. Fairly hard to design and organise; easy to score and to provide feedback on. Could be used in an induction phase to estimate key practical skills. Group OSCEs are useful for teaching, feedback and developmental purposes. OSCEs can be used towards the end of a course to provide feedback or to test performance against outcomes. Reliability, validity and manageability are potentially fairly high. Probably less labour-intensive than other forms of marking, but several assessors are required at one time. Initially time-consuming to design, but worth the effort.

Portfolios

  • Wide variety of types, from a collection of assignments to reflection upon critical incidents. The latter are probably the most useful for developmental purposes. May be the basis for orals. Rich potential for developing reflective learning if students are trained in these techniques. Require a high level of trust between assessors and students. Measuring reliability is difficult. May be high on validity if the structure matches the objectives of training. Electronic portfolios (e-portfolios) are gaining in popularity, and Sakai has e-portfolios built into its framework. Anyone interested in implementing e-portfolios at a department-wide level should contact Matt Clare.

Poster sessions

  • Tests capacity to present findings and interpretations succinctly and attractively. Danger of focusing unduly on presentation methods can be avoided by the use of simple criteria. Feedback potential: from tutor, self and peers. Marking for grading is fast. Use of criteria reduces variability.

Presentations

  • Tests preparation, understanding, knowledge, capacity to structure information, and oral communication skills. Feedback potential: from tutor, self and peers. Marking for grading based on simple criteria is fast and potentially reliable. Measures of ability to respond to questions and manage discussion could be included.

Problems

  • A standard method. Has potential for measuring application, analysis and problem-solving strategies. Complex problems and their marking schemes can be difficult to design. Marking for grading of easy problems is fast; marking of complex problems can be slow. Marking for feedback can be slow. Variation between markers is fairly low when based on model answers or marking schemes. Allow for creative, valid solutions from bright students. Works very well in a wiki.

Projects, Group Projects and Dissertations

  • Good all-round ability testing. Potential for sampling a wide range of practical, analytical and interpretative skills. Wider application of knowledge, understanding and skills to real or simulated situations. Provides a measure of project and time management. Group projects can provide a measure of teamwork skills and leadership. Motivation and teamwork can be high. Marking for grading can be time-consuming. Marking for feedback can be reduced through peer and self-assessment and presentations. Learning gains can be high, particularly if reflective learning is part of the criteria. Tests methods and processes as well as end results. Variation between markers is possible; use of criteria reduces variability, but variations in the challenge of a project or dissertation can affect reliability. Works very well in a wiki.

Questionnaires and report forms

  • A general method including a wide variety of types. Structured questionnaires get the information you want, but semi-structured or open-ended questionnaires may give you the information that you need. A mixture of structured and open-ended questions is recommended. Criterion-referenced grading is recommended for judgmental purposes. Broad criteria are more reliable and valid than highly detailed criteria. Detailed criteria tempt users to react negatively or disdainfully.

Reflective Practice Assignments

  • Measures capacity to analyse and evaluate experience in the light of theories and research evidence. Relatively easy to set. Feedback potential from peers, self and tutors. Marking for feedback can be slow. Marking for grading takes about the same time as for essays. Use of criteria reduces variability.

Reports on Practicals

  • A standard method. Have potential for measuring knowledge of experimental procedures, analysis and interpretation of results. Measure know-how of practical skills but not the skills themselves. Marking for grading using impressions or simple structured forms is relatively fast. Marking for feedback with simple structured forms is faster than without them. Variation between markers, without structured forms, can be high. The method is often over-used. To reduce student workload and the assessment load, different foci of assessment for different experiments are recommended.

Self-assessed questions based on open learning (distance learning materials and computer-based approaches)

  • Strictly speaking, a method of learning rather than of assessment, but it could be used more widely. Self-assessed questions could form an integral part of open learning. These could be based on checklists, MCQs, short answer questions, MEQs and other methods. Their primary purpose is to provide feedback and guidance to the users (see the sketch below). They can be used to integrate open learning and work-based learning when students are on placement. Reliability and validity are probably moderately high; manageability is high in the long term but low initially. Works very well in a wiki.
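
As a hypothetical illustration only, the short Python sketch below shows one form a self-assessed question with immediate feedback might take; the question, options and feedback wording are invented for this example, drawing on points made elsewhere on this page.

  # Hypothetical sketch: a single self-assessed question in the style of
  # open-learning materials, giving immediate formative feedback. The question
  # and feedback text are invented for illustration.
  QUESTION = "Which method samples a wide range of knowledge quickly?"
  OPTIONS = {
      "A": "Single essay examination",
      "B": "Multiple choice questions",
      "C": "Portfolio",
  }
  FEEDBACK = {
      "B": "Correct: MCQs sample broad knowledge quickly, though depth is limited.",
      "A": "Not quite: a single essay tests depth and synthesis rather than breadth.",
      "C": "Not quite: portfolios suit reflective, developmental purposes.",
  }

  def ask():
      print(QUESTION)
      for letter, text in OPTIONS.items():
          print(f"  {letter}. {text}")
      choice = input("Your answer: ").strip().upper()
      print(FEEDBACK.get(choice, "Please answer A, B or C."))

  if __name__ == "__main__":
      ask()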

Short answer questions

  • A standard method. Has potential for measuring analysis, application of knowledge, problem-solving and evaluative skills. Easier to design than complex MCQs but still relatively slow. Marking to model answers is relatively fast compared with marking problems but not compared with MCQs. Marking for feedback can be relatively fast.

Simulated interviews

  • Useful for assessing oral communication skills and for developing ways of giving and receiving feedback on performance. Video-recorded sessions take more time but are more useful for feedback and assessment. Peer and self-assessment can be used. Sensitive oral feedback on performance is advisable. Assessment by simple rating schedule or checklist is potentially reliable if assessors, including students, are trained.

Single Essay Examination

  • Three hours on a prepared topic. Relatively easy to set, but attention to criteria is needed. A wider range of ability is tested, including the capacity to draw on a wide range of knowledge, to synthesize and to identify recurrent themes. Marking for feedback is relatively slow. Marking for grading is relatively fast, provided the criteria are simple.

Work-based Assessment

  • A variety of methods is possible, including learning logs, portfolios, projects, and structured reports from supervisors or mentors. It is important to give supervisors and mentors training in the use of the criteria. Work experiences can be variable, so reliability can be low. Validity, as usual, depends on clear learning outcomes.

Adapted from

  1. George Brown (2001). Assessment: A Guide for Lecturers. The Learning and Teaching Support Network Generic Centre.