
Ethical and effective AI in education: a policymaker’s roadmap


By Education Services Australia | 19 Oct, 2023


Despite its relative infancy, Generative AI – and its more familiar subset, the Large Language Model (LLM) – is rapidly reshaping the education sector.

From revolutionising the creation of teaching materials to increasing accessibility, AI’s potential for positive change in the classroom is seemingly endless. But so, too, are its challenges. 

While the full force of this technological tidal wave is yet to be seen, one thing is certain: robust frameworks need to be in place to ensure its secure, safe and constructive use. 

With the recent unanimous approval of the ‘Australian Framework for Generative Artificial Intelligence in Schools’ (the Framework) by state, territory and federal education ministers, Australia’s education policymakers are seeking to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society.

Anticipating the challenges of implementing such guidance, Education Services Australia (ESA) commissioned a report titled “AI in Australian Education Snapshot: Principles, Policy and Practice”. Published in August 2023, before the Framework was finalised, the report details the complex global landscape that policymakers and education technology providers must navigate when rolling out solutions to Australian schools, and identifies the principles that will ensure AI used in classrooms is safe, impactful and measurable.

The reality for policymakers

There are no two ways about it: artificial intelligence is transforming the education sector.

And while the possibilities of this transformation are exciting, they also pose a myriad of ethical, technical and pedagogical concerns that policymakers are working through in real-time. 

Designing policy in this rapidly evolving, multidimensional landscape is a mammoth task. And an equally critical one.  

Meaningful policy will protect schools' and students’ data privacy, ensure AI technologies are audited and held accountable where necessary, and ensure human capacity is maintained and developed. 

But the important question facing the sector is: how can we manage all this – quickly and effectively?

The principle paradox 

Policymakers are turning to principles-based guidelines developed by governments, industries and not-for-profit organisations for direction. Around the world, more than 300 frameworks of AI principles have been published, with considerable thematic overlap between them (although only a fraction relate directly to education).

The key themes common to these frameworks include: 

  • Privacy 

  • Accountability 

  • Safety and security 

  • Transparency and explainability 

  • Fairness and non-discrimination 

  • Human control of technology

  • Professional responsibility

  • Promotion of human values 

While these documents provide insight into the ethical priorities for integrating AI into the education sector, they don’t include practical instructions for enacting those principles. This leaves policymakers with a challenge: while they establish frameworks from the top down, the education sector may independently create solutions from the bottom up.

In the fast-moving world of AI, the challenges of implementing the Framework risk creating an ever-growing gap between policy and the reality within Australian classrooms.

Recognising this gap is the first step towards establishing AI’s safe, ethical and secure implementation in education – and towards crafting policies that look good on paper and work in practice.

A pragmatic pathway across time horizons 

The report’s authors, Daniel Ingvarson and Beth Havinga, advocate two primary approaches to closing this gap. Both are grounded in pragmatic steps that policymakers and other education stakeholders can take.

The first is to assess each principle (and prospective policy) against criteria identified by the EdSAFE AI Alliance:

  • Clarity 

  • Measurability 

  • Enforceability 

  • Urgency 

These criteria help discern which principles are achievable – at the school, sector and public levels.

The second approach is informed by the US response to AI regulation in education, which prioritises action over principles. The report suggests categorising AI principles into three different time horizons: 

1. Short-term actions

The immediate policy goal should be to achieve ‘base safety’ in educational settings without causing a significant burden on educational institutions. That means ensuring AI products are safe before introducing them to the public, building systems that put security first, and earning the public’s trust through transparency and explainability. 

Achieving base safety will address schools’ concerns around academic integrity, staff development and ‘humans in the loop’ policies. 

2. Medium-term actions

To ensure AI's ongoing efficacy and safety in education settings, targeted research is advised as an intermediate step. This should investigate AI’s impact on teaching, curriculum development, and assessment – and will help future-proof the education system. 

3. Long-term actions

Long-term strategies require policymakers and relevant stakeholders to grapple with more complex ethical and jurisdictional issues. This could mean adopting a more cautious, EU-style regulatory approach for high-priority areas, or a self-regulatory, US-style approach built on partnerships with technology companies.

Embracing a dynamic, multi-faceted approach 

Policymakers are uniquely positioned to weave the different perspectives of industry, not-for-profits and educational institutions into a cohesive tapestry.

This is no easy feat, and flexibility and adaptability are essential. 

But the goal is not to arrive at a perfect solution overnight. Instead, it’s about making steady progress and adapting and refining strategies as technology and societal needs evolve.  

For example, as new AI versions are released, tests and conformance measures must be rerun. Implementing principles must therefore factor in cost and complexity, with careful consideration of the impact on each part of the education ecosystem.

In the face of such change and complexity, it is natural to feel overwhelmed. But the pragmatic approach suggested by the report aims to ensure safety, efficacy and equity in using these AI models in education. 

Students, educators and the entire education ecosystem are primed to benefit enormously from this technology. And with the right protection and support, they will continue to thrive – now and into the future.

Download the full report



About the author


Education Services Australia (ESA) is a not-for-profit education technology company committed to making a positive difference in the lives and learning of Australian students. ESA works with all education systems and sectors to improve student outcomes, enhance teacher impact and strengthen school communities.