Responsible AI Approach

At a Glance

We believe AI will transform education and learning in a positive way. We also believe in bringing AI to education in a responsible and safe manner.
We develop purpose-built, domain-specific AI that creates value for end users, often co-created with them.
  1. Right model size for the right task
  2. Active feedback from end users
  3. Domain-specific workflows
  4. Configurability
We strive to develop systems that minimize harm and risk in education.

  1. Context-aware safety models
  2. Red teaming
  3. Third-party risk assessment
  4. Continuous improvement
We develop contextual and relevant AI systems designed to reduce inaccuracies and risks.

  1. Walled garden for grounded responses
  2. Training LLMs to be hallucination-resistant
  3. Rigorous testing and validation
We emphasize transparency about our models and work to make their known limitations and risks clear.

  1. LLMs released on Hugging Face
  2. Community-based partnerships for frameworks and consortia
Private & Secure
We are committed to protecting user data and safeguarding our AI systems against potential vulnerabilities, threats, and breaches.
  1. Implementing practices per COPPA and FERPA
  2. DPAs with schools
  3. Data minimization and stringent PII treatment
  4. Periodic penetration testing
We develop AI systems that strive to be impartial and unbiased.

  1. Seeking balanced datasets
  2. Partnering with communities of end users

In education, we strongly believe:

  1. Teachers are irreplaceable.
  2. Teachers should be empowered with the right tools to control classroom interactions.
  3. We should build solutions that make teachers' lives easier so they can do what they do best.
  4. We should co-create solutions with schools and teachers.

Our Approach
Responsible AI in Education

We develop education-focused large language models (LLMs) for generative AI use cases.

We developed our LLMs with an emphasis on education workflows and safety requirements. Our focus is on building self-contained, small, single-purpose models. This approach differs from that of general-purpose LLMs, which are often trained on information collected from the Internet at large.

This allows Merlyn's responses to minimize hallucinations and align more closely with school content and curricula. We are working to proactively identify risks and develop mitigation strategies. Some of our strategies include:

1. Focusing on grade level, subject matter, and other contexts relevant to a curriculum for relevance and appropriateness. We custom-built an appropriateness model.
2. Offering schools the option to provide a content curriculum from which our AI solution can retrieve responses.
3. Training our LLMs to be hallucination-resistant.
4. Seeking active feedback from educators and administrators via alpha/beta testing and research surveys. This feedback loop lets educators shape our AI solution's evolution into a relevant and safe educational tool.
5. Empowering the teacher as a "human-in-the-loop" to guide student interactions in the classroom.

We adapt safety and reliability standards and make them domain-specific

We are adapting principles from industry frameworks and have added domain-specific guardrails appropriate for use in education, including:

1. Applying a safety model to user prompts and model-generated responses.
2. Grounding responses in a school-specific corpus and school-specific policies for any sensitive topics.
3. Beta testing with schools to validate our solutions.
4. Conducting internal and external third-party red teaming.
5. Establishing a Red-Teaming Network of Schools.
6. Call to action: As an educator, you should have a say in designing AI systems for your school. Join our mission to create a Red-Teaming Network of Schools to conduct adversarial testing and provide feedback on AI solutions.

We believe transparency is key to the successful adoption of trustworthy AI solutions

We are in the early days of bringing generative AI solutions to education. We are enhancing our safety features every day as the generative AI industry evolves. We aim to be transparent with users about the risks and limitations of AI.

1. We published our safety model on Hugging Face (Merlyn Mind Page Link) to help and learn from the community.
2. We will use the Merlyn Training Series to educate our users on AI literacy, enabling them to use our AI solutions and generative AI features with greater confidence and awareness of their benefits and limitations.
3. We seek active feedback and collaboration from school administrators, instructors, and end users of our AI products.

We prioritize privacy and security in development from the ground up

Merlyn features have been developed with privacy and security in mind, from initial design and coding through final deployment.

1. We have implemented practices in accordance with FERPA and COPPA.
2. We have implemented practices and controls to minimize our retention of personal information.
3. We have also trained our models to reject requests for personal information.
4. We work hard to minimize the possibility of generating responses containing any personal information.

For more information on our security policies, visit our Trust portal.

Book a demo

Schedule a free personalized demo to see our purpose-built solutions in action, and hear how innovative schools are leveraging the power of Merlyn in their classrooms.
