DistillerSR Adopts NIST AI Risk Management Framework (RMF) to Ensure the Continued Development of Trustworthy AI Solutions

NIST AI RMF Sets Policies and Guidelines That Underpin the Trusted Development of New DistillerSR AI Capabilities for Literature Review Automation

DistillerSR® Inc., the market leader in AI-enabled literature review software and creator of DistillerSR™, today announced that it has adopted the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) for all future AI-related product releases.

Since 2022, DistillerSR has been using Drata™, a continuous compliance automation platform, to monitor its adherence to SOC 2, a widely recognized cybersecurity compliance framework. Drata’s NIST AI RMF module enables DistillerSR to govern the design, deployment, testing, verification, and validation of new AI capabilities for literature reviews through its evidence management platform. Customers and prospects can access the company’s AI policies on demand through DistillerSR’s Trust Center, which provides up-to-date information on the company’s policies for the safe and ethical application of AI in its product.

“We’ve been an industry pioneer in applying scientifically validated AI to literature reviews since 2016,” said Peter O’Blenis, CEO, DistillerSR Inc. “Our adoption of the NIST AI RMF reinforces our trusted leadership position in the safe application of AI in our platform. Now more than ever, it is critical that vendors, such as DistillerSR, provide transparent explanations about how their AI models are designed and ultimately governed in support of health research. This approach engenders trust in the adoption of these new AI models, which will help accelerate the conduct of research, reduce research fatigue, and hasten the availability of new innovative treatments.” 

Literature reviews are the cornerstone of evidence-based research; however, their production has traditionally been highly manual, time-consuming, and error-prone. In recent years, technology companies have attempted to reduce the research fatigue associated with manual literature reviews by applying large language models (LLMs) to extract and summarize scientific literature and clinical data. Most LLMs, however, generate significant inaccuracies, often called “hallucinations,” which can lead to serious data errors in regulatory submissions, scientific communications, and guideline development.

In the months ahead, DistillerSR will expand its AI capabilities, combining generative AI, LLM, and deterministic methods for faster data extraction and evidence summarization. The NIST AI RMF will allow DistillerSR’s customers and prospects to evaluate how the company governs the engineering, privacy, and ethical dimensions of newly developed AI features and how those features are applied to enhance health research.

Today, more than 280 of the world’s leading research organizations, including the largest pharmaceutical and medical device companies, trust DistillerSR to securely produce transparent, audit-ready, and regulatory-compliant literature reviews. The DistillerSR platform and modular ecosystem enable customers to securely automate the management and analysis of evidence-based research faster, more accurately, more transparently, and at scale.

Source: DistillerSR Inc.

Tags: Artificial Intelligence, Literature Review, Software


About DistillerSR Inc.

The global leader in AI-enabled literature review automation and enterprise evidence management software.

DistillerSR Inc.
505 March Road, Suite 450
Ottawa, Ontario K2K 3A4
Canada