Our Approach to Responsible AI in Education

WE DEVELOP EDUCATION-FOCUSED LARGE LANGUAGE MODELS (LLMS) FOR GENERATIVE AI USE CASES

We develop our LLMs with an emphasis on education workflows and safety requirements, focusing on self-contained, small, single-purpose models. This approach differs from general-purpose LLMs, which are often trained on information collected from the Internet at large.

This allows Merlyn’s responses to minimize hallucinations and align more closely with school content and curricula. We work to proactively identify risks and develop mitigation strategies, including:

  1. Focusing on grade level, subject matter, and other curriculum-relevant context to ensure relevance and appropriateness. We custom-built an appropriateness model for this purpose.
  2. Offering schools the option to provide a content curriculum from which our AI solution can retrieve responses (see the retrieval sketch after this list).
  3. Training our LLMs to be hallucination-resistant.
  4. Seeking active feedback from educators and administrators via alpha/beta testing and research surveys. This feedback loop lets educators shape our AI solution’s evolution into a relevant and safe educational tool.
  5. Empowering teachers to guide student interactions in the classroom.
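
As an illustration of item 2, here is a minimal sketch of grounding answers in a school-provided curriculum. The sample passages, the TF-IDF retrieval step, and the prompt format are assumptions made for this example, not Merlyn’s actual implementation.

```python
# Illustrative sketch: retrieve curriculum passages relevant to a question,
# then build a prompt that constrains the answer to that retrieved content.
# The passages and retrieval method are assumptions, not Merlyn's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

curriculum_passages = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The water cycle describes evaporation, condensation, and precipitation.",
    "Cells are the basic structural units of all living organisms.",
]

def retrieve_context(question: str, passages: list[str], top_k: int = 2) -> list[str]:
    """Rank curriculum passages by TF-IDF similarity to the question."""
    vectorizer = TfidfVectorizer().fit(passages + [question])
    scores = cosine_similarity(
        vectorizer.transform([question]), vectorizer.transform(passages)
    )[0]
    ranked = sorted(zip(scores, passages), reverse=True)
    return [passage for _, passage in ranked[:top_k]]

question = "How do plants make their own food?"
context = retrieve_context(question, curriculum_passages)

# The grounded prompt asks the model to answer only from the retrieved excerpts.
prompt = (
    "Answer using only the curriculum excerpts below.\n\n"
    + "\n".join(f"- {p}" for p in context)
    + f"\n\nQuestion: {question}"
)
print(prompt)
```
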
WE ADAPT SAFETY AND RELIABILITY STANDARDS AND MAKE THEM DOMAIN-SPECIFIC

We are adapting principles from industry frameworks and have added domain-specific guardrails appropriate for use in education, including:

  1. Applying a safety model to both user prompts and model-generated responses (see the sketch after this list).
  2. Grounding responses in a school-specific corpus.
  3. Beta testing with schools to validate our solutions.
  4. Conducting internal and external third-party red teaming.
  5. Establishing a Red Teaming Network of Schools.
  6. Call to action: as an educator, you should have a say in designing the AI systems used in your school. Join our mission to create a Red Teaming Network of Schools that conducts adversarial testing and provides feedback on AI solutions.
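
Item 1 is sketched minimally below: the same safety check is applied to the incoming prompt and to the generated response. The keyword blocklist and function names are illustrative stand-ins for a trained safety model, not Merlyn’s actual guardrail.

```python
# Illustrative sketch: screen both the user prompt and the model's response.
# classify_safety() is a stand-in keyword check; a real deployment would call
# a trained safety model instead.
BLOCKED_TOPICS = {"violence", "self-harm", "weapons"}  # illustrative list only

def classify_safety(text: str) -> bool:
    """Return True when the text passes the (stand-in) safety check."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str, generate) -> str:
    """Run the safety check before and after calling the generation function."""
    if not classify_safety(prompt):
        return "I can't help with that request."
    response = generate(prompt)
    if not classify_safety(response):
        return "I can't share that response."
    return response

# Usage with any generation callable, e.g. a call into an LLM service.
demo = guarded_generate(
    "Explain the water cycle.",
    lambda p: "Water evaporates, condenses into clouds, and falls as rain.",
)
print(demo)
```
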
WE BELIEVE TRANSPARENCY IS KEY TO THE SUCCESSFUL ADOPTION OF TRUSTWORTHY AI SOLUTIONS

We are in the early days of bringing generative AI solutions to education. We enhance our safety features continually as the generative AI industry evolves, and we aim to be transparent with users about the risks and limitations of AI.

  1. We published our safety model on Hugging Face to help, and learn from, the community (a usage sketch follows this list).
  2. We will use the Merlyn Training Series to educate our users on AI literacy, enabling them to use our AI solutions and generative AI features with greater confidence and awareness of their benefits and limitations.
  3. We seek active feedback and collaboration on our AI products from school administrators, instructors, and end users.
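
As an illustration of item 1, the sketch below shows the general pattern of loading a published safety classifier from the Hugging Face Hub with the transformers library. The model ID used here is a public third-party toxicity classifier chosen only so the example runs; it is not Merlyn’s model, and readers should look up Merlyn’s Hugging Face organization for the model referenced above.

```python
# Illustrative sketch: load a text-classification model from the Hugging Face
# Hub and score a few inputs. The model below is a generic public toxicity
# classifier used as a placeholder, not Merlyn's published safety model.
from transformers import pipeline

safety_classifier = pipeline("text-classification", model="unitary/toxic-bert")

for text in ["Explain photosynthesis to a 5th grader.", "You are an idiot."]:
    result = safety_classifier(text)[0]
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")
```
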
WE PRIORITIZE PRIVACY AND SECURITY IN DEVELOPMENT FROM THE GROUND UP

Merlyn features have been developed with privacy and security in mind, from initial design and coding through final deployment.

  1. We have implemented practices in accordance with FERPA and COPPA.
  2. We have implemented practices and controls to minimize our retention of personal information.
  3. We have also trained our models to reject requests for personal information.
  4. We work hard to minimize the possibility of generating responses that contain personal information (see the redaction sketch after this list).
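
As a rough illustration of item 4, the sketch below shows a simple redaction pass over generated text for common personal-information patterns. The regexes and placeholder token are assumptions made for the example; a production system would combine model training, policy checks, and dedicated PII detection rather than relying on regexes alone.

```python
# Illustrative sketch: strip common PII patterns (emails, US-style phone
# numbers) from generated text before it is shown to the user.
import re

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),            # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone numbers
]

def redact_pii(text: str) -> str:
    """Replace matched PII patterns with a placeholder token."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact_pii("Contact Ms. Smith at smith@example.org or 555-123-4567."))
```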

For more information on our security policies, visit our Trust Center.