Shaping the Future: AI education projects in Oceania

Article #2 of AI in Education Article Series: January 2025 (Updates May 2025)

Curious about what AI agent projects exist across education in Oceania? See our log of projects, why each is relevant, and who to contact.


Written by

Phoebe Gill

Research manager

021 300 725 phoebe.gill@scarlatti.co.nz

Superpower: Romance languages

Fixations: Sunday drives


Phoebe works predominantly in social and market research, as well as monitoring and evaluation. Her projects often involve large-scale surveying and interviewing, and more recently, Artificial Intelligence in education.

She began her journey into research and evaluation in Brazil in 2020, supporting projects on social services, gender violence and education for NGOs, governments and intergovernmental agencies. Prior to this, she worked as an English language teacher for adults.

Outside of work, Phoebe loves history, languages, animals and the outdoors. Together with her partner, she offers support services for Latin American migrants in New Zealand.

Phoebe has a Conjoint Bachelor of Arts and Commerce in Marketing (Market Research), International Business and Spanish.

Overview

Everywhere you look, New Zealand organisations are exploring Artificial Intelligence (AI). Within the educational space, most organisations are still at the beginning of their journey. However, as 2025 rolls in, a few are piloting AI tools to solve problems whose solutions have until now seemed out of reach.

This article is the second in a series titled “AI in Education”, aimed at education providers interested in AI. The intention is for this series to act as a beginner’s guide to the use of AI in education, with a particular focus on AI agents. This series is being developed as part of our project to develop an AI agent for learner oral assessment, funded by the Food and Fibre Centre of Vocational Excellence (FFCoVE). We invite you to follow along as we (Scarlatti) document our learnings about this exciting space.

The article below provides an overview of the emerging AI projects within Oceania’s education sector. This list is not intended to be exhaustive, but it provides a glimpse of the landscape and of each project’s relevance to the AI agent Scarlatti is currently developing.

Projects in delivery and assessment

Cogniti, University of Sydney

Country: Australia

Phase: Mature

Cogniti is a generative AI platform (rather than an AI agent itself), designed to enable educators to build custom AI agents. Since its soft launch in October 2023, educators from 30 institutions in Australia, New Zealand, and Singapore have created more than 600 AI agents using Cogniti.

For example, the University of Sydney itself has built a peer-based learning tool for chemistry tutorials, a tool that provides tailored revision questions for their immunology courses, and an agent which allows students to role-play responding to racism in everyday conversations. There is also a Cogniti-made oral agent being used by students to prepare and practise for their tutor-run ‘Interactive Orals’ assessment in WRIT1001. Further examples can be found here. Cogniti has recently won Gold for Best Use of Generative AI in Oceania.

Relevance for us:

  • This highlights the diversity of uses for AI in education – for example, acting as a tutor, grader or test constructor.
  • It functions similarly to how our AI agent for assessment will need to function, by pulling from things such as assessment rubrics, course materials and ideal answers.
  • This demonstrates strong demand by educators in Oceania to trial AI for the benefit of themselves and their learners.
  • Nevertheless, we see a gap in that there is currently no Aotearoa-specific version, built upon a New Zealand ethical framework.

It seems that as of May 2025, Cogniti-based agents are starting to be used to help students prepare and practise for oral assessments, in a similar way to Scarlatti’s agent. However, the University of Sydney currently appears to be using them only for practice, unlike Scarlatti’s agent, which is being used to administer oral assessments.

Key person(s): Danny Liu

Sofia, University of Auckland

Country: New Zealand

Phase: Launched

Sofia is an anthropomorphic AI agent being trialled with marketing students at the University of Auckland. It has a wide range of functions, including answering common questions, providing course details, acting as a tutor, and producing quizzes. Built-in analytics give teachers insights into student usage and understanding, helping identify areas for additional support. The Sofia team have recently been awarded Silver for Best Use of Generative AI in Oceania, published an academic article on their work, and been mentioned in The Conversation.

Relevance to us:

  • An “assessment agent” can mean many things. For example, it might generate quizzes, assess answers, produce reports, or a combination of these.
  • Analytics can be built into AI agents to help educators understand how students are interacting with the course, enabling opportunities for personalisation or intervention.
  • AI agents can be a digital character, an interface you type into, or a human-like voice you speak with. Different learners may feel differently about these different options.

Key person(s): Shahper Richter, Patrick Dodd and Inna Piven

Projects in delivery (only)

Various AI agents, Toi Ohomai Institute of Technology

Country: New Zealand

Phase: Launched

Toi Ohomai Institute of Technology has used Cogniti to build a wide range of AI agents for different programmes. As part of this, Toi Ohomai have input law and case studies from Aotearoa to make content relevant for students. These agents include, but may not be limited to:

Relevance for us:

  • Despite the primary purpose of these AI agents being delivery, they demonstrate elements required for formal assessment (e.g., review of learner responses and provision of feedback).
  • Toi Ohomai is one of the few organisations to have documented well how they evaluated their agents, in both written and video form.

Key person(s): Josh Burrell, Jonathan Adams and Rochelle Flight

Research Proposal AI Agent, Auckland University of Technology

Country: New Zealand

Phase: Launched

Auckland University of Technology has also adopted Cogniti’s technology to create an agent that helps their postgraduate Nursing and Science students write research proposals for their Masters programme. The agent does so by drawing on a large set of example abstracts and providing students with feedback and possible edits.

Relevance for us:

  • While this is not called an assessment tool, it performs at least one of the functions our AI agent for learner oral assessment aims to perform – it reviews someone’s work, compares it to other content, and provides feedback.

Key person(s): Kiri Hunter and Lucy Macnaught

Whakawhiti Reo, Kaiawhina Reo and Koro, kahu.code

Country: New Zealand

Phase: Launched

kahu.code specialises in creating bilingual text and voice models as well as large-scale translations, with a focus on reo hangarau (tech terminology). Their mission is to “facilitate seamless kōrero (communication) between technology and humans, enabling effective interactions and understanding, supporting the revitalisation of indigenous languages.” Their products include:

They are also in the early stages of developing a transcription tool (not yet named).

Relevance to us:

  • Our agent speaks only in English and, moreover, uses a speech-to-speech model, while these te reo AI products appear to be built using speech-to-text, text-to-speech and text-to-text models.
  • Nevertheless, future AI agents in education could explore the option of agents that either use different voices (e.g., different accents, genders, ages), or can speak in te reo Māori or other languages to support all learners effectively.

Key person(s): Michael Puhara and Xaviere Murray-Puhara

Poipoia te kākano, kia puawai.

Kaituhi, Reo, Reo Ora and Rongo, Te Hiku Media

Country: New Zealand

Phase: Mature

Te Reo Irirangi o Te Hiku o Te Ika (Te Hiku Media) has created a suite of te reo Māori natural language processing (NLP) tools that enable the creation of new digital products and services leveraging te reo Māori speech recognition, including a speech-to-text system (audio transcription), a text-to-speech (TTS) system, and other pronunciation-focused language tools. For example, see the following products:

  • Kaituhi (transcribes spoken te reo Māori and New Zealand English from audio and video files)
  • Reo (synthesises te reo Māori words, utterances and large bodies of text)
  • Reo Ora (provides pronunciation feedback)
  • Rongo (supplies users with specialised API keys to ensure kaitiakitanga of Indigenous data).

Relevance to us:

  • Our agent speaks only in English and, moreover, uses a speech-to-speech model, while these te reo AI products appear to be built using speech-to-text, text-to-speech and text-to-text models.
  • Nevertheless, future AI agents in education could explore the option of agents that either use different voices (e.g., different accents, genders, ages), or can speak in te reo Māori or other languages in order to support all learners effectively.

Key person(s): Peter-Lucas Jones

Coach M, Lever Transfer of Learning

Country: Australia

Phase: Launched

Coach M is a text-based AI agent that workplaces can purchase to help employees learn critical skills and improve their on-the-job performance. Over eight weeks, employees engage in three 30-minute instant-messaging chats with Coach M, whose coaching draws on a database of over 20,000 real-life coaching conversations. These sessions enable employees to reflect on goals and hold themselves accountable. Coach M then tracks their progress and provides insights on how they are performing.

Relevance to us:

  • Given that we are interested in piloting our AI chatbot for learner oral assessment, it is useful to see how Coach M is incorporated into the workplace – three 30-minute chats over eight weeks. This may help us understand how to make the use of an AI chatbot in the workplace practical.

Key person(s): Unknown

Deakin Genie, Deakin University

Country: Australia

Phase: Launched

Deakin Genie is an AI agent developed in 2017 to help students throughout their academic journey, including answering questions about courses, keeping on top of their assignments and planning what to study. It employs advanced natural language processing and machine learning techniques to engage in more natural, context-aware conversations with students. This allows Deakin Genie to understand and respond to complex queries, maintain context throughout interactions, and provide personalised assistance based on individual student needs.

Four years later, Deakin University piloted a new AI automated feedback tool created by FeedbackFruits with students from the Faculty of Sciences and Built Environments. These students were able to upload draft assignments to the tool and get personalised feedback on how well their grammar, structure and referencing aligned with the assessment criteria. For example, if they used the correct tense or abbreviated scientific names accurately. By 2022, the tool had been adopted by 15 courses across multiple faculties and was available to 3800 students.

Relevance for us:

  • While Deakin Genie is quite different to the AI agent for learner oral assessment we plan to make, it is impressive in that it was created in 2017. This demonstrates how long educators have been exploring AI to improve the student experience.
  • The AI automated feedback tool highlights how these tools can be adapted and applied to a wide range of disciplines.

Key person(s): Unknown

Wesmigo, Wesley College

Country: Australia

Phase: Launched

Launched in 2022, Wesmigo is a custom-built generative AI chatbot powered by ChatGPT for International Baccalaureate (IB) students at Wesley College. The College wanted to create a safe environment for their students to use AI as a coach for brainstorming and help with assignments, monitored from their existing learning management system. By developing this custom agent, they were able to tailor the responses to be age-appropriate and safe for children, and to have knowledge of the school and the IB curriculum. They also created a companion guide to sit alongside the chatbot, to support students to critically evaluate the chatbot's outputs and understand the limitations of generative AI.

Relevance for us:

  • As part of our project, we are also creating resources to help users think critically about the outputs AI produces and the limitations of this technology, including our own ethical considerations table and a record of lessons learnt that will be published in a playbook at the end of the project.

Key person(s): Unknown

EdChat, South Australian Department of Education

Country: Australia

Phase: Launched

EdChat is a generative AI chatbot launched by the South Australian Department of Education and built on Microsoft’s Azure platform, providing learners with study support. Since its 2023 pilot with 1,500 students and 150 teachers across eight schools, it has been rolled out to more schools across Australia. During the pilot, the chatbot’s parameters (including what content was permitted and what was blocked) were determined by the Department, and each school’s principal could decide how, and to what extent, learners were exposed to the chatbot.

Relevance for us:

  • EdChat is similar to Wesmigo above in that it is a tailor-made AI chatbot for use in the classroom with restrictions on how the chatbot interacts with students, highlighting the possible value in these types of chatbot.

Key person(s): Unknown

Catholic CoPilot Chatbot, Brisbane Catholic Education

Country: Australia

Phase: Launched

The Catholic CoPilot Chatbot was developed by Brisbane Catholic Education, a community of more than 140 schools in the Archdiocese of Brisbane, using Microsoft’s 365 Copilot platform. It is set up to answer students’ common questions, provide them with learning support through a religious lens, and help educators with administrative tasks such as lesson planning. In 2024, the chatbot was piloted in select primary and secondary schools, with plans for broader implementation across the region. Teachers from St Francis College Crestmead, one of the schools involved in the pilot, reported saving approximately nine hours per week on administrative tasks using the chatbot.

Relevance for us:

  • The Catholic CoPilot Chatbot highlights how existing AI platforms such as Microsoft’s Copilot can be tailored to an organisation's needs, whether that is restricting specific content (as done with Wesmigo or EdChat) or layering on another lens (such as a religious perspective).
  • Their pilot also illustrates the time savings that AI products can provide for educators.

Key person(s): Unknown

Virtual Peer, Macquarie University

Country: Australia

Phase: In development

Virtual Peer is an AI chatbot developed by Macquarie University and built on Microsoft’s Azure OpenAI platform. It was designed to assist students by answering common academic questions based on verified training data the University provided. The chatbot uses text-based conversational AI to help first-year students navigate their courses and assignments. It is currently in development, with a large pilot study of 1,400 students run in October 2024. In this initial two-week pilot, over 20,800 messages were exchanged, with 80% of interactions occurring outside university operating hours. Further spikes in engagement were seen in the lead-up to exams.

Relevance for us:

  • Virtual Peer highlights the flexibility that conversational AI products can provide for students – giving them the ability to have questions promptly answered outside of traditional university operating hours.

Key person(s): Unknown.

NSWEduChat, New South Wales Department of Education

Country: Australia

Phase: Launched

In 2024, the department-owned NSWEduChat was piloted across 50 state schools in New South Wales. This custom-built chatbot was designed to act as a virtual tutor and replicate how teachers work, prompting students to work through where they are getting stuck on an assessment question. For example, when used for maths homework, it does not give students the answer outright but prompts them with the next step: “To convert an improper fraction, you need to multiply the whole number by the denominator and then add the numerator. Can you do that?”. This contrasts with generic AI models such as ChatGPT, which would answer outright.

Relevance for us:

  • NSWEduChat shows government interest in rolling out education-specific AI products with built-in safeguards.

Key person(s): Unknown

KuraPlan, Emory Fierlinger

Country: Australia and New Zealand

Phase: Launched

Originating out of Aotearoa, KuraPlan is an AI lesson-planning tool designed for primary and secondary school teachers (years 1-13). Launched in early 2024, KuraPlan is aligned with the New Zealand curriculum and includes subjects from business to te reo Māori. Teachers can browse a library of lesson plans created by peers, or generate new plans using the tool. KuraPlan also supports lesson plans for Australia, the United States, Ireland, the United Kingdom and South Africa.

Relevance for us:

  • Unlike Scarlatti’s assessment agent, KuraPlan is targeted at teachers (rather than learners), focuses on lesson delivery rather than assessment, and serves primary and secondary rather than vocational education.
  • Nevertheless, it demonstrates how AI can be used to reduce teacher workload while still aligning with local educational requirements.

Key person(s): Emory Fierlinger

Will, Soul Machines & Vector

Country: New Zealand

Phase: Launched

In 2018, AI company Soul Machines partnered with Auckland energy company Vector to create an anthropomorphic AI avatar named Will for their ‘Be Sustainable with Energy’ programme. Through this programme, Will helps teach school students about energy and how to use it sustainably. It does so by explaining an aspect of energy before asking questions that students can respond to orally in real time. Soul Machines also has a number of other AI avatars that help teach other subjects, from practising English to getting fit.

Relevance for us:

  • AI products can appear as digital characters, an interface you type into, or a human-like voice you speak with. Different learners may feel differently about these different options.
  • Will is unlike our AI agent in that it does not seem to transcribe its conversation with the learner, and it offers an anthropomorphic interface that learners respond to directly, rather than clicking a button to record their responses verbally.
  • Unlike the other AI products mentioned in this article, Will has not been rolled out by education providers. Instead, it is led by the private company that created it (and reaches the education providers that Vector supplies energy to).

Key person(s): Unknown.

Projects in assessment (only)

NIMO, Future Makers & SchoolJoy

Country: New Zealand

Phase: Launched

Future Makers are currently working with educators and schools in New Zealand to trial SchoolJoy’s AI simulation agent, ‘NIMO’. Teachers can use NIMO to practise engaging in challenging conversations – for example, talking with parents who are worried about their child’s progress, or with a student struggling with interpersonal relationships. The teacher’s communication is then assessed against several competencies. At the end of the simulation, NIMO generates a full transcript of the conversation, provides an analysis of the key moments, and gives recommendations on areas where the teacher could improve. NIMO also has applications outside of education, with industry-specific simulations for people working in tech (who want to practise communicating their products to non-technical stakeholders), healthcare workers (who want to practise interacting with patients, for example as a hospice nurse), and those in financial services (giving bankers the confidence to handle difficult conversations with clients).

Relevance to us:

  • NIMO is different to our AI agent for three reasons: it is used by teachers, not learners; it is focused on secondary rather than tertiary education; and although it assesses competency, it does so informally.
  • However, functionally, NIMO is the closest we have found to what we are developing - it is speech-based (unlike many other AI agents), and it assesses against set competencies.
  • NIMO’s human-like nature is particularly impressive and sets a goal that others developing oral agents may wish to aim for.

Key person(s): Derek Wenmoth and Ian Zhu

AI Generated Assessment (with sector customisation), Construction and Infrastructure Centre of Vocational Excellence

Country: New Zealand

Phase: In development

ConCOVE is currently developing a proof of concept with Epic Learning, using a large language model (LLM) to develop assessments from unit or skill standards, which can then in turn be contextualised to a particular industry or to learner needs. This is aimed at improving the resource development process in terms of time, quality, consistency, and relevance. A key part of this project for ConCOVE is exploring how the wider education system responds to this technology, including in terms of moderation.

Relevance to us:

  • This agent shows yet again that an “assessment agent” can mean many things. Here, it is used by the provider itself to generate sets of questions, rather than to converse with a learner.
  • This agent highlights how AI can be fed things like sector-specific course content, and generate a customised output such as a set of assessment questions specific to a sector.
  • ConCOVE are documenting how they are evaluating their AI agent.
  • Given the focus on moderation, it will be important to learn from this project.

Key person(s): Eve Price

AI Generated Assessment (with state customisation), Epic Learning

Country: Australia

Phase: Launched

Epic Learning used an LLM to overlay state-specific legislation and construction compliance requirements on assessments, ensuring that questions were legally relevant and contextualised to each state's laws and regulations. Note that the tool does not interact with the learner; the learner still completes the assessment in a traditional format (e.g., a written assessment).

Relevance for us:

  • This agent shows yet again that an “assessment agent” can mean many things. Here, it is used by the provider itself to generate sets of questions, rather than to converse with a learner.
  • This agent highlights how AI can be fed content such as state-specific industry standards and legislation, and generate a customised output such as a set of assessment questions specific to a state.
  • While generating a set of assessment questions is different to an agent that runs a dynamic oral assessment and provides a grade, this shows that there is an ability to generate appropriate questions.

Key person(s): Karl Hartley

Gradescope, University of Adelaide

Country: Australia

Phase: Launched

In 2021, the University of Adelaide piloted the AI-assisted grading tool Gradescope with computer science students undertaking coding exams. Gradescope uses AI to cluster similar student answers, enabling tutors to review grouped responses and apply consistent feedback or grades across multiple submissions. This speeds up the marking process while retaining human oversight. A second pilot a year later explored its use for grading hand-drawn and symbol-based assignments.

Relevance to us:

  • Gradescope, while different to our AI agent (which assesses oral responses), shows how AI can enhance grading efficiency through pattern recognition and feedback consistency.
  • It highlights the value of combining automated support with educator input, particularly for scaling formative assessment processes.

Key person(s): Unknown

Written Assessment Grading Tool, New Zealand Qualifications Authority (NZQA)

Country: New Zealand

Phase: In development

The NZQA have just finished piloting a generative AI tool that provides preliminary grades on students’ written NCEA exams. It was trialled on year 10 literacy and numeracy assessments.

Relevance to us:

  • The tool that NZQA is developing shows that large government organisations are open to exploring AI, when it is used for the benefit of learners and providers.

Key person(s): Unknown

AI agent for oral assessment, Food and Fibre Centre of Vocational Excellence

Country: New Zealand

Phase: In development

The Food and Fibre Centre of Vocational Excellence has recently funded Scarlatti (the authors of this article) to develop and pilot an AI agent for learner oral assessment. The intention is to offer students an alternative to traditional written assessments, and providers an alternative to costly one-on-one oral assessments, potentially enabling them to spend more of their time on delivery or pastoral care. What differentiates this agent from the others in this article is that it generates questions, converses with the learner by voice, and grades their responses.

Key person(s): Phoebe Gill, Sam Cormack, Adam Barker

Other projects

AI Contact Centre Agent, New Zealand Qualifications Authority (NZQA)

Country: New Zealand

Phase: In development

NZQA already have a chatbot that answers and manages incoming calls from students. Their next step is to explore rebuilding it with generative AI so that it can be more conversational.

Relevance to us:

  • NZQA’s project shows the openness of New Zealand government agencies to exploring AI (carefully and slowly) where it benefits learners.

Key person(s): Unknown

Academic Success Monitor, UNSW

Country: Australia

Phase: Mature

The Academic Success Monitor (ASM) is a digital companion for students, academics and support teams that aims to detect students at risk of failing a class early enough to act. It uses data to provide insights into a student’s academic progress, identify possible risk, and offer personalised AI recommendations to the student for targeted support at the right time. If the risk of failing continues to escalate, the system alerts the education provider to intervene. ASM has recently won Gold for AI in Education in Oceania.

Relevance to us:

  • While this is not formal educational assessment, it has similarities in that the system analyses a student’s progress (the equivalent of assessing a student’s knowledge), creates a risk level (the equivalent of a grade) and provides personalised recommendations (the equivalent of providing feedback).

Key person(s): Unknown

Scarlatti's take

Across Oceania, we are seeing rapid growth in AI projects, although most are still in development. Many are built using the University of Sydney’s Cogniti, while a smaller number are built from scratch. Some perform a single role (e.g., tutor), while others combine several (e.g., tutor, administrative support, quiz generator).

Despite all these advancements, very few projects are exploring AI for assessment. This gap is even more apparent in the VET space in Oceania, where the only other product we could find being developed for use in assessment (besides our own) is ConCOVE’s AI-generated assessment agent.

Questions that we are asking for our own AI agent:

  • How can we incorporate the lessons learnt from other projects into our own?
  • How can we build an AI chatbot for the Aotearoa context, when there is no pre-existing, easily available ethical framework (with most pilots instead using Cogniti, an Australian tool)?
  • What are the differences between an AI chatbot, agent, tool, and platform?

Interested in following our journey into AI?

  • Sign up here to receive our next article directly to your inbox.
  • Contact the Scarlatti team here to share your thoughts or questions.

Contact Phoebe Gill now