End-to-End Process, What Is It?

I have had a few conversations over the past couple of weeks about ‘End-to-End’ clinical solutions and standards that started me thinking. I and many others have talked about End-to-End in a rather casual way in the past, and I started to wonder: what do we mean by this term? What is it in reality? Can I touch this End-to-End thing?

I looked back at some old blogs on the e-Clinical Vision and the Round World, as well as a few old presentations. One of the earliest presentations I gave on the topic was way back in 2006, in the US, while a number of people contributed to a 1.5-day workshop that I organised at the CDISC Interchange in Montreux back in 2007. Back then, End-to-End was all about integrating the CDISC standards (ODM, LAB, SDTM, etc.) and getting tools to talk to each other. But is that End-to-End?

The problem is that I couldn’t, at this time, give you a simple, straightforward, one-sentence statement of what End-to-End is! Even after all this time, the idea is somewhat vague. So I thought I would write this blog to put out a few ideas that people can use to discuss, and maybe we can get to something tangible. I will try to make some sense of it using several different perspectives.

The End-to-End Information Flow

The first perspective is a high-level diagram I put together a couple of years ago and have used quite a lot in presentations since. It is the big picture, a desire. To my way of thinking, the End-to-End problem is about making the process of creating the desired business outputs in the picture as easy as possible, using data standards as an enabler. Note that the picture is an overview; it does not try to list everything we need for a submission, but it does try to present the big picture. Here we have the protocol feeding study setup, with a study ‘design’ providing define.xml and annotated CRFs early in the process rather than as afterthoughts. The collected data flows into the process for the creation of data tabulations and then onwards into the analysis process, the analysis datasets, study reports and patient profiles, all of which then form part of the submission. We want an easy, well understood flow of information from left to right.
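
To make the ‘design first’ part of that flow a little more tangible, here is a minimal sketch in Python. The names and structures are entirely hypothetical (this is not real ODM or define.xml); the point is simply that a single study-design record could drive both the annotated CRF and the define.xml metadata for a collected item, so neither has to be reverse-engineered at the end of the study.

```python
# A minimal sketch, not a real ODM/define.xml implementation: one study-design
# record drives both the CRF annotation and the define.xml-style metadata.
# All field and function names here are hypothetical.

from dataclasses import dataclass

@dataclass
class DesignItem:
    oid: str            # unique identifier in the study design
    crf_question: str   # text as it appears on the CRF
    sdtm_domain: str    # target SDTM domain, e.g. "VS"
    sdtm_variable: str  # target SDTM variable, e.g. "VSORRES"
    datatype: str       # "text", "integer", "float", ...

def crf_annotation(item: DesignItem) -> str:
    """Annotation printed next to the question on the annotated CRF."""
    return f"{item.crf_question}  [{item.sdtm_variable} ({item.sdtm_domain})]"

def define_metadata(item: DesignItem) -> dict:
    """The corresponding define.xml-style item metadata, as a plain dict."""
    return {
        "OID": item.oid,
        "Name": item.sdtm_variable,
        "Domain": item.sdtm_domain,
        "DataType": item.datatype,
        "Origin": "CRF",
    }

systolic = DesignItem("IT.VS.SYSBP", "Systolic blood pressure", "VS", "VSORRES", "integer")
print(crf_annotation(systolic))
print(define_metadata(systolic))
```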

Another way of looking at it is to consider what it is and what it isn’t. The list below is a little random, but it is intended to give the flavour; it is definitely not complete. Feel free to add your own wish list of desires using the comments below:

  • It’s not having to spend all your time mapping
  • It’s a clear picture of the study design
  • It’s being sure you are collecting the correct data for the desired end point(s)
  • It’s being able to trace data in SDTM & ADaM back to its source
  • It’s about process improvement
  • It’s about being able to build studies more easily
  • It’s about being able to create SDTM more easily
  • It’s not about wondering where everything goes in SDTM
  • It’s not about needing 15 years of expertise with the standards to get close to a good submission
  • It’s about easier tool integration, not having to write ‘adapters’ all the time
  • It’s about understanding your data and being able to answer any query, and to do so easily
  • It’s about being able to look back at a study after several years and still gain an understanding of that study

Changing tack, another way to look at it is to consider the issue from a use-case perspective. One use case I have thought about is mentioned in the e-Clinical Vision article linked to above:

“of an FDA reviewer being able to view data on screen, click on a data point and instantly see the entire history of that data point, its provenance, be it a data point that was captured or some summary statistic or calculation. If it was a captured data point, we would want to see the CRF page, the audit trail of changes and then its flow through the analysis process and the final submission.”

Here we are looking at traceability that permits automation, bringing power to the reviewer’s desktop. If you build such traceability then you’ve probably built a lot of what you need to produce all the other business outputs required.
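
To illustrate the sort of traceability that use case implies, here is a small sketch. The structures and names are purely illustrative (not an existing standard or reviewer tool): a provenance chain linking an analysis value back through the tabulation variable to the captured CRF value and its audit trail.

```python
# A sketch of one possible provenance chain for a single data point; the
# structures and names are illustrative only, not an existing standard or tool.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AuditEntry:
    timestamp: str
    user: str
    old_value: str
    new_value: str
    reason: str

@dataclass
class CapturedValue:
    crf_page: str                      # which (annotated) CRF page it came from
    value: str
    audit_trail: List[AuditEntry] = field(default_factory=list)

@dataclass
class TraceableDataPoint:
    adam_variable: str                 # e.g. "AVAL" in an ADaM dataset
    sdtm_variable: str                 # e.g. "VSSTRESN" in the VS domain
    source: CapturedValue              # the captured CRF value with its history

def review(point: TraceableDataPoint) -> None:
    """What a reviewer's click could expand into: the full history of the value."""
    print(f"{point.adam_variable} <- {point.sdtm_variable} <- CRF: {point.source.crf_page}")
    for entry in point.source.audit_trail:
        print(f"  {entry.timestamp} {entry.user}: {entry.old_value} -> {entry.new_value} ({entry.reason})")

captured = CapturedValue(
    crf_page="Vital Signs, Visit 2",
    value="128",
    audit_trail=[AuditEntry("2013-05-02T10:14", "site_101", "138", "128", "transcription error")],
)
review(TraceableDataPoint("AVAL", "VSSTRESN", captured))
```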

A second use case came up in a discussion I was having with someone about a sponsor company’s needs. It was couched in simple terms: “For a given end point, am I collecting the right data?” So: can I trace back from my end point(s) to all of the collected data? Does all the data collected have a purpose, or am I collecting unnecessary data? Care needs to be taken here, as there is a creeping trend to capture more of what I refer to as GCP data, data that demonstrates the trial was conducted correctly (the old adage: if it ain’t written down, it didn’t happen). One example is collecting data such as the first language of the subject and the language of the informed consent, so as to demonstrate that the subject should have been able to understand the contents of the consent form.

And then a third use case. I want to maximise automation to allow myself time to focus on those aspects that cannot be automated and the issues that arise on a study or clinical programme.

Another perspective is one that always has the power to upset. One way of looking at the clinical process is to view it as a conveyor belt, a production line. I await the howls of ‘science cannot be made into a production line’; I don’t ask that it is. I don’t want standards to drive the science; I want the recording of the science to be well structured using standards, and that is a big difference. But that is an aside. From the writing of the protocol through to submission, we want an individual to receive the inputs needed for a task (a protocol document, some data, a study set-up, or whatever), to know what they are expecting to receive and what they need to do to add value to it, and then to pass it on to the next group, with that additional value being part of an overall scheme in which only necessary work is done and unnecessary work is not. To do this we need an understanding of the big picture: the flow of data, and who does what, when and how. When we have that clear understanding, we can add the automation and tools necessary to implement it effectively.

Finally, we can look at the End-to-End ideal from the perspective of the main stakeholders. The FDA want to receive submissions that are easy to understand, that can be reviewed using a consistent process, and in which data can be manipulated easily and aggregated across companies with ease. A sponsor wants some of this too: a quick review, for instance, and to be able to deal with queries from the agency easily. But there are conflicting pressures within a sponsor: the study versus the corporate need, and the use of study data well beyond the end of a study versus the pressure to get the study closed, which allows no time for improvement in process and tools. Between the CRO and the sponsor, the need is to communicate the needs and requirements of a study clearly and to deliver study products that can be readily used by the sponsor. For a vendor, it is the ability to build that best-of-breed tool that integrates easily with the other tools a sponsor is using.

One thing that End-to-End is not is a single standard. It is not the ODM or define.xml, and it is not SDTM on its own. Stating that the End-to-End solution is one standard is analogous to saying a laptop is just a USB connector. A laptop is a blend of many things; USB serves the needs of the overall solution, but it is part of a package of hardware and software that results in a laptop running Windows 7 (or whatever other OS you like) upon which useful applications run. End-to-End is not just standards; it is process, tools, standards and many other things working together to meet the business need.

Until we scope or define the ‘End-to-End’ vision, we will continue to have difficulty building it. This post is just some thinking out loud (always dangerous) and an attempt to start defining what we need, with the aim of starting a discussion. Feel free to comment.

 

3 comments on “End-to-End Process, What Is It?”

  1. Dave’s blog contains a number of visionary ideas which I can only support.
    I do strongly support the idea of providing define.xml and annotated CRFs early in the process, in my opinion even before the study starts (see e.g. the discussion about “data oddities” on LinkedIn at https://www.linkedin.com/groups/Growing-alarming-level-data-oddities-56393.S.5960593168612237314).
    I especially like the vision of reviewers (whether at FDA or at sponsors) being able to instantly see the entire history of each data point, including the audit trail, and the flow from study design to submission.
    Some of Dave’s ideas can be implemented relatively quickly, as we already have some of the ingredients available. I will also implement some of these in the open-source “Dataset-XML Viewer”, a viewer application for inspecting SDTM, SEND and ADaM submissions.

    In order to realize end-to-end, we still need to jump over some hurdles, although some of these seem very high, as we have no influence over them (i.e. the FDA). In my opinion, the following is needed (just a few first random ideas):
    – get rid of XPT
    – further in the future, get rid of PDF (the annotated CRF can also be delivered in ODM; you need to do it anyway before study start); a machine-interpretable protocol also needs to be developed
    – strongly encourage adding the mapping information from the previous stage (operational ODM data, or SDTM in the case of ADaM) in a machine-readable, maybe even machine-executable form
    – submission of the archive of the study (preferably ODM)
    – real electronic signatures
    – submit zipped (XML) files through the FDA gateway
    – smart tools for reviewers using web services (almost 50% of SDTM is redundant and could be replaced by implementing smart software and web services)

    But there is also another major problem that remains unsolved: semantic interoperability. CDISC has developed a large amount of controlled terminology, and that is a good thing, but it also chose to neglect what is already available in healthcare (reinvention of the wheel). CDISC/FDA still do not allow a LOINC code to be submitted as the value of LBTESTCD, and refuse to implement UCUM, a notation system for units that is used by 99.5% of the healthcare industry and is mandatory in EHRs and under Meaningful Use (in the USA). As long as we refuse to use some of the major coding systems in healthcare (not-invented-here syndrome), end-to-end will remain a dream, and it will remain impossible to compare and aggregate data between sponsor companies.
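
    To make the terminology point concrete, here is a small illustration of the same laboratory result expressed with CDISC controlled terminology alone and with the healthcare codes (LOINC, UCUM) alongside it. The SDTM variable LBLOINC does exist for carrying a LOINC code, but the record layout here is otherwise simplified and illustrative.

    ```python
    # Illustration only: one laboratory result expressed with CDISC controlled
    # terminology and, alongside it, the healthcare codes (LOINC, UCUM) that the
    # comment argues should be usable in submissions.

    cdisc_only = {
        "LBTESTCD": "HGB",          # CDISC controlled terminology test code
        "LBTEST": "Hemoglobin",
        "LBORRES": "13.2",
        "LBORRESU": "g/dL",         # CDISC unit term
    }

    with_healthcare_codes = dict(
        cdisc_only,
        LBLOINC="718-7",            # LOINC: Hemoglobin [Mass/volume] in Blood
        UCUM_UNIT="g/dL",           # UCUM notation for the same unit (illustrative variable name)
    )

    print(with_healthcare_codes)
    ```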

    And yes, Dave is right that end-to-end is not a single format. Of course a single format may help (although the same can be achieved by well-defined and standardized transformations), but it is above all about “process” and “vision”. For example, as long as we (or the managers at the sponsors) cannot convince protocol writers that the end products of the study are SDTM and ADaM, and to act accordingly, there will remain major difficulties in building end-to-end.

    1. Jozef,

      Totally agree with getting the define and aCRF produced as early as possible; it is something I have implemented and it can work, but it needs discipline and a desire to do it. Technically it is totally achievable.

      As for the other points you raise, I cannot say I disagree with any of them. I am doing some work at the moment that might shed some light on the lab units and UCUM issue; hopefully we will see some output on that in a few months’ time.

      And the final point re the protocol. Agree again.

      1. Thank you for this meaningful post and discussion.

        Maybe it is already done, or not feasible, but can’t we draw a bigger picture of this clinical data flow by adding a multi-study view and crossing over into other domains and systems? Maybe through BRIDG, semantic technologies and linked data?
        I imagine all the CDISC XML models (ODM, define and dataset) carrying embedded semantic tags and linked data identifiers so that they interoperate easily (or at least less painfully) with EHRs, but also linking to CTMS, reporting tools, data analysis tools, etc.?
