CTX - Advancing the Impact of Clinical Research Informatics (CRI)

AUTOMATING THE COLLECTION AND CONVERSION OF CLINICAL RESEARCH DATA

Automatically Captures Clinical Trial EHR/EMR Data AND Converts It to CDISC Standards

  • Seamlessly transforms electronic healthcare data into clinical trial data

  • Eliminates duplicate data entry

  • Single-point capture of source data in the clinical research environment

  • Enables remote data collection

  • Reduces on-site source data verification to near zero

What is CTX?

CTX is a novel healthcare IT platform and service that will instantaneously collect patient EHR/EMR data AND convert it to CDISC standards.
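
To make the conversion concrete, here is a minimal sketch of what an EHR-to-CDISC transformation can look like, assuming a FHIR-style Observation as input and a CDISC SDTM Vital Signs (VS) record as output. This is not CTX's actual pipeline, which is not described here; the mapping table, function name, and sample data are all hypothetical.

```python
# Illustrative sketch only -- not CTX's actual pipeline. It assumes a
# FHIR-style Observation dict as input and emits one CDISC SDTM VS
# (Vital Signs) record. The LOINC-to-SDTM mapping below is a tiny,
# hypothetical subset.

LOINC_TO_VSTEST = {
    "8867-4": ("HR", "Heart Rate"),
    "8480-6": ("SYSBP", "Systolic Blood Pressure"),
    "8462-4": ("DIABP", "Diastolic Blood Pressure"),
}

def fhir_observation_to_sdtm_vs(obs: dict, study_id: str, subject_id: str) -> dict:
    """Map one FHIR-style Observation to one SDTM VS record."""
    loinc = obs["code"]["coding"][0]["code"]
    testcd, test = LOINC_TO_VSTEST[loinc]
    return {
        "STUDYID": study_id,
        "DOMAIN": "VS",
        "USUBJID": f"{study_id}-{subject_id}",          # unique subject identifier
        "VSTESTCD": testcd,                             # short test code
        "VSTEST": test,                                 # test name
        "VSORRES": str(obs["valueQuantity"]["value"]),  # result as collected
        "VSORRESU": obs["valueQuantity"]["unit"],       # original units
        "VSDTC": obs["effectiveDateTime"],              # ISO 8601 date/time
    }

if __name__ == "__main__":
    observation = {  # minimal FHIR-like heart-rate reading
        "code": {"coding": [{"code": "8867-4"}]},
        "valueQuantity": {"value": 72, "unit": "beats/min"},
        "effectiveDateTime": "2024-03-01T09:30:00",
    }
    print(fhir_observation_to_sdtm_vs(observation, "CTX001", "0042"))
```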

Current Need

Achieving and controlling data quality is extremely complex and expensive

The CTX Solution

CTX will dramatically change the way clinical research informatics is utilized

Features & Benefits

What CTX offers and how those features are likely to benefit potential users

Support & Resources

CTX aims to become a new standard in advancing how clinical research data is collected and converted

Improving Clinical Research Informatics:
A Problem 'Tailor-Made' for an Automated Intervention

Pharmaceutical companies spend millions of dollars annually preparing and integrating clinical data prior to analysis. The current process uses these resources inefficiently and commonly produces substantial delays in products’ time to market. In one FDA experiment, manually converting legacy data from about 100 new drug applications to a new standard format cost $7 million, roughly $70,000 per application. It was recently estimated that as much as 80% of collected clinical data is left in older or legacy standards, because companies typically convert only the new data required for regulatory compliance.

In Partnership With:

IBM PartnerWorld

CUSP Group LLC

A complete view of all available clinical data would be incredibly useful for improving clinical analytics, simplifying cross-study comparisons, speeding future trials, and mining data for new indications. Mapping new data as it is created provides some of these benefits and reduces the cost and delay of imposing order on the data after the fact. However, the tangled web of clinical data standards and vocabularies makes reorganizing existing data a daunting challenge, especially when added to the task of real-time integration of new datasets.

It’s a problem tailor-made for automated big data intervention. Why is the standardization process for clinical data still manual? Modern data techniques excel at replicating small functions across vast seas of data, but this challenge is quite different. Multiple versions of standard schemas, legacy data models, custom experimental domains, and locked or proprietary formats from previous contractor-created datasets are highly resistant to mass organization efforts. Organizing data at this scale, only to return to square one when a new data standard is released, is a serious risk for any data-conscious enterprise.

The very practical savings in capital and time to market, combined with opportunities to improve findings and explore new possibilities by using all available clinical data, give pharmaceutical companies a strong incentive to stop relying on error-prone, slow, and costly contractor-executed data transformation. Machine learning now offers a path to bring the best of human institutional knowledge to the automation of clinical data integration, improving accuracy and speed while lowering cost.
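
To illustrate the kind of learned assistance this points toward, the sketch below suggests a CDISC SDTM target variable for a legacy column by ranking candidates on textual similarity to the column's description. In a real system the similarity score would presumably be replaced by a model trained on an organization's mapping history; the variable subset, column names, and scoring used here are all hypothetical stand-ins.

```python
# Hypothetical sketch of ML-assisted mapping: suggest a CDISC SDTM
# variable for each legacy column. A trained model over institutional
# mapping history would replace the simple stdlib string-similarity
# score used here as a stand-in.

from difflib import SequenceMatcher

# Candidate SDTM targets with human-readable labels (abbreviated subset)
SDTM_VARIABLES = {
    "AETERM": "reported term for the adverse event",
    "AESTDTC": "start date/time of adverse event",
    "VSORRES": "vital signs result or finding in original units",
    "CMTRT": "reported name of concomitant medication",
}

def suggest_mapping(description: str) -> tuple[str, float]:
    """Rank SDTM variables by textual similarity to a column's description."""
    def score(label: str) -> float:
        return SequenceMatcher(None, description.lower(), label).ratio()
    best = max(SDTM_VARIABLES, key=lambda var: score(SDTM_VARIABLES[var]))
    return best, score(SDTM_VARIABLES[best])

if __name__ == "__main__":
    # Legacy columns as they might appear in an older study database
    legacy = {
        "ADV_EVT_DESC": "adverse event reported term",
        "CON_MED_NAME": "name of concomitant medication reported",
    }
    for col, desc in legacy.items():
        var, conf = suggest_mapping(desc)
        print(f"{col:>14} -> {var} (similarity {conf:.2f})")
```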

Contact Information: 781-281-9940