Principles


Last revised: 22 Jul 2025

Foundational principles 

The following imperatives direct the development and delivery of all RACGP assessment activities:

Alignment

Assessments and assessment programs align with the defined outcomes of the RACGP curricula and the Progressive capability profile of the specialist general practitioner. There is also alignment with the RACGP Educational Framework, the Aboriginal and Torres Strait Islander Culture and Health Training Framework and the training program. This ensures that what is being assessed accurately reflects the competencies needed for independent practice.3

Safe

Safe environments support optimal performance in assessments and allow participants to benefit from, and not be harmed by, assessment processes. Safe environments also support the identification and reporting of racism, discrimination, bullying and harassment in training and assessments,4 and in doing so seek to safeguard those being assessed, those involved in the delivery of assessment and the recipients of care. This includes ensuring culturally safe environments for Aboriginal and Torres Strait Islander people that allow everyone involved in assessments to participate fully. Considerations include patient safety, power dynamics and cultural safety,5 as well as taking a trauma-informed, strengths-based approach.

Ethical

Assessment delivery and design are ethical. This includes respecting privacy and confidentiality and following relevant guidelines, including for ‘in-practice’ assessments where patients are involved.

Quality

Assessments and assessment programs are contemporary and evidence-based. This includes leveraging technology to enhance efficiency, effectiveness and accessibility within a secure testing environment where applicable.

Continuous improvement is underpinned by ongoing, comprehensive evaluation and monitoring activities, ensuring assessments are relevant, effective and aligned to evolving competency requirements.3 This is supported by the RACGP Education and Training Monitoring and Evaluation Framework. Stakeholders include participants, assessors and regulatory bodies.


Guiding principles 

An assessment program needs to be adaptable to the context and the evolving needs of the general practice profession and community. The range of assessments and their specific purposes mean that the prioritisation of guiding principles will vary across different assessments. Decision-making around which principles are prioritised will be driven by the purpose of the assessment. RACGP assessments and assessment activities will be developed and delivered according to the following principles. RACGP assessments are:


Fair

Fairness is a multi-faceted construct that includes elements of assessment design and judgements about performance.

  • Assessment design minimises bias. This can be achieved through the design of questions, scenarios and evaluation criteria, assessor training, and the use of multiple assessors and sources of information.
  • Assessment design and delivery are equitable. As far as practicable, accommodations are made so that all doctors, regardless of any disability, have equitable access to assessments in order to demonstrate their learning, understanding and competency, and that they have reached the required standard. All doctors should have equitable access to exam support both prior to and after assessments.
  • Clarity
    The assessment criteria, expectations, format and processes are clear and transparent to doctors and stakeholders. This includes providing sufficient information on how performance will be evaluated.
  • Context
    Assessments often occur in the environment of clinical practice, with all its complexity. Fairness includes consideration of the effect of context on performance.6

 


Valid

Valid assessments measure the skills, knowledge and abilities intended to be measured. To demonstrate validity, assessments require the collection of evidence from multiple sources. This allows for meaningful interpretation of the assessment outcomes. Assessments are authentic when they reflect real-world tasks and scenarios that doctors may encounter in their professional practice.

 

Validity

All forms of validity evidence can be considered as contributing to construct validity, which is whether, ultimately, the test measures the constructs it claims to measure. Sources of validity evidence are multi-faceted and include:

  • Face validity, which considers if the test appears to measure what it purports to measure.
  • Content validity, which is whether the test covers all the key components of the constructs it is measuring.
  • Response and process validity, which consider how the cases/items are constructed and delivered, and whether there is internal consistency.7

Authenticity is a broad multi-faceted concept and includes:  

  • realism of context
  • meaningfulness for the learner
  • purposefulness for the world of practice.8
  • the meaningful engagement of the community in the co-design of assessments in areas of importance to the patient experience of care, including cultural safety.9

These elements help to ensure that performance on an assessment is indicative of actual competencies and future performance.


Reliable

Assessments consistently produce stable and dependable results across different assessors and settings. This requires standardised rubrics and assessor training to ensure consistency. Triangulation and proportionality contribute to reliability.

Triangulation

Reliability via triangulation can be achieved in multiple ways. For some assessments, an accumulation of multiple smaller observations or interactions over time by different assessors allows triangulation of judgement about performance. Triangulation can also occur via the use of multiple methods that assess the same competence, which can be helpful when measuring a wide range of knowledge, skills, and behaviour content.10

Proportionality

Proportionality takes into consideration that the stakes of the decisions made are proportional to the credibility of the underlying information.2 Aggregated quantitative and qualitative data from multiple low-stakes assessments can inform defensible high-stakes decisions. An advantage of multiple low-stakes assessments is the ability to provide feedback and coaching for improvement and to guide learning,11 and the results of those interactions are also relevant information.12


Impactful

Assessments and assessment programs are impactful in that they have a positive educational impact and catalytic effect on doctors, programs and patient health.

  • Educational impact refers to the educational effect that is gained when preparing for assessment motivates the doctor to do useful work.10 
  • Catalytic effect is the ability of the assessment process to create, enhance and support learning. This effect has been identified as a central pillar of assessment for learning. This can be facilitated, where appropriate, by providing meaningful, constructive, actionable feedback. This feedback enables and motivates doctors to incorporate this information into their future learning and performance.10


Inclusive

Assessments are designed proactively to minimise the likelihood of participants being excluded, overlooked or disadvantaged for any reason, including disability or cultural background, through the ways in which they are assessed. Inclusive assessment design should ensure equal opportunities for all to demonstrate their learning and achievements, and it reduces the need for individual adjustments to assessments.13


Rational

In an assessment framework, rationality refers to the logical, evidence-informed reasoning behind the design of the program of assessment. Assessments and their purpose are considered individually and as part of the system of assessment.6 This is augmented by balance and comprehensiveness. A blueprint can be developed to minimise gaps in assessment coverage through appropriate sampling on a whole-of-program approach.10

Balance

Both assessments for learning and assessments of learning are used to support continuous development. Assessments include those primarily for learning, which provide feedback to guide and enhance learning, and assessments of learning, which evaluate overall achievement and proficiency.

Comprehensive

The system of assessment is thorough and effective. It includes only components that are relevant (for learning, diagnostic and/or of learning) as appropriate to their purposes, and some components will be integrated into usual work.10


Credible

Assessment tools, processes and results are perceived as credible by participants, the healthcare system, regulators and the community.10 Assessment is more educationally effective when learners engage with assessment processes and perceive the feedback received as credible.14


Feasible

Assessments are considered for feasibility, including practicality, logistics and sustainability. Feasibility also needs to consider resourcing, which includes the costs and value of the assessment in contributing to measuring the level of attainment.



 
