I have had a few conversations over the past couple of weeks about ‘End-to-End’ clinical solutions and standards that started me thinking. I, and many others, have talked about End-to-End in a rather casual way in the past, and I started to wonder: what do we mean by this term, what is it in reality, can I touch this End-to-End thing?
I looked back at some old blogs on the e-Clinical Vision and the Round World, as well as a few old presentations. One of the earliest presentations I gave on the topic was way back in 2006, in the US, and a number of people contributed to a 1.5-day workshop that I organised at the CDISC Interchange in Montreux back in 2007. Then, End-to-End was all about integrating the CDISC standards (ODM, LAB, SDTM etc.) and getting tools to talk to each other. But is that End-to-End?
The problem is that I couldn’t, at this time, provide you with a simple, straightforward, one-sentence statement of what End-to-End is! Even after all this time, the idea is somewhat vague. So I thought I would write this blog to put out a few ideas that people can use to discuss, and maybe we can get to something tangible. I will try to make some sense of it using several different perspectives.
The first perspective is a high-level diagram I put together a couple of years ago and have used quite a lot in presentations since. It is the big picture, a desire. To my way of thinking, the End-to-End problem is about making the process of creating the desired business outputs in the picture as easy as possible, using data standards as an enabler. Note that the picture is an overview; it does not try to list everything we need for a submission, but it does try to present the big-picture view. Here we have the protocol feeding study set-up, with a study ‘design’ providing the define.xml and annotated CRFs early in the process rather than as afterthoughts. The collected data flows into the process for the creation of the data tabulations, then onwards into the analysis process, the analysis datasets, the study reports and the patient profiles, all of which then form part of the submission. We want an easy, well-understood flow of information from left to right.
Another way of looking at it is to consider what it is and what it isn’t. The list is a little random, but it is intended to give a flavour; it is definitely not complete. Feel free to add your own wish list of desires using the comments below:
- It’s not having to spend all your time mapping
- It’s a clear picture of the study design
- It’s being sure you are collecting the correct data for the desired end point(s)
- It’s being able to trace data in SDTM & ADaM back to its source
- It’s about process improvement
- It’s about being able to build studies more easily
- It’s about being able to create SDTM more easily
- It’s not about wondering where everything goes in SDTM
- It’s not about needing 15 years of expertise with the standards to get close to a good submission
- It’s about easier tool integration, not having to write ‘adapters’ all the time
- It’s about understanding your data and being able to answer any query, easily
- It is about being able to look back at a study after several years and gain an understanding about that study
The first use case is that “of an FDA reviewer being able to view data on screen, click on a data point and instantly see the entire history of that data point, its provenance, be it a data point that was captured or some summary statistic or calculation. If it was a captured data point, we would want to see the CRF page, the audit trail of changes and then its flow through the analysis process and into the final submission.”
Here we are looking at traceability that permits automation, bringing power to the reviewer’s desktop. If you build such traceability, then you have probably built much of what you need to produce all the other required business outputs.
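To make the idea a little more concrete, here is a minimal sketch of what point-level traceability could look like. All the names (`TraceNode`, the datasets, the example blood-pressure value) are illustrative assumptions, not any real system or standard API; the point is simply that each representation of a data point links back towards its capture, carrying its derivation and audit trail with it.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one node per representation of a data point,
# linked back towards its original capture on the CRF.
@dataclass
class TraceNode:
    dataset: str            # e.g. "CRF", "SDTM.VS", "ADaM.ADVS"
    variable: str           # variable name within that dataset
    value: str              # the value as it appears at this stage
    derivation: str = ""    # how this value was produced from its source
    source: "TraceNode | None" = None       # link back towards capture
    audit: list[str] = field(default_factory=list)  # change history at capture

def provenance(node: TraceNode) -> list[str]:
    """Walk from a reviewed value back to its original capture."""
    chain = []
    while node is not None:
        step = f"{node.dataset}.{node.variable} = {node.value}"
        if node.derivation:
            step += f"  [{node.derivation}]"
        chain.append(step)
        node = node.source
    return chain

# A captured systolic blood pressure flowing CRF -> SDTM -> ADaM
crf = TraceNode("CRF", "SYSBP", "142",
                audit=["entered 140", "corrected to 142"])
sdtm = TraceNode("SDTM.VS", "VSORRES", "142", "copied from CRF", crf)
adam = TraceNode("ADaM.ADVS", "AVAL", "142", "numeric of VSORRES", sdtm)

for step in provenance(adam):
    print(step)
```

A reviewer clicking on the ADaM value would, in effect, be calling something like `provenance()`: the full chain back to the CRF page and its audit trail falls out of the links, with no detective work required.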
A second use case came out of a discussion I was having with someone about a sponsor company’s needs. It was couched in simple terms: “For a given end point, am I collecting the right data?” So, can I trace back from my end point(s) to all of the collected data? Does all the data collected have a purpose, or am I collecting unnecessary data? Care needs to be taken here, as there is a creeping trend to capture more of what I refer to as GCP data: data that demonstrates the trial was conducted correctly (the old adage: if it ain’t written down, it didn’t happen). One example is the collection of data such as the first language of the subject and the language of the informed consent, so as to demonstrate that the subject should have been able to understand the contents of the consent form.
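That question can be framed as a simple coverage check: compare the fields the end points claim to need with the fields the CRFs actually collect. The sketch below is purely illustrative – the end points, field names and the idea that each end point declares its needs as a set are all assumptions for the sake of the example – but it shows how mechanical the check becomes once the trace exists.

```python
# Hypothetical sketch: which collected fields serve an end point,
# and which are collected with no stated purpose?

# Fields each end point claims to need (illustrative names only)
endpoint_needs = {
    "Change in systolic BP at week 12": {"SYSBP", "VISITNUM", "SUBJID"},
    "Treatment-emergent adverse events": {"AETERM", "AESTDTC", "SUBJID"},
}

# Fields the CRFs actually collect
collected = {"SYSBP", "DIABP", "VISITNUM", "SUBJID",
             "AETERM", "AESTDTC", "FIRSTLANG"}

required = set().union(*endpoint_needs.values())
unused = collected - required    # collected, but tied to no end point
missing = required - collected   # needed, but not collected

print("No stated end-point purpose:", sorted(unused))
print("Needed but not collected:", sorted(missing))
```

Note that a field flagged as “unused” here is not automatically unnecessary – the first-language example above is exactly such a field, collected for a GCP purpose rather than an end point – but the check at least forces the purpose to be stated.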
And then a third use case: I want to maximise automation, to allow myself time to focus on those aspects that cannot be automated and on the issues that arise on a study or clinical programme.
Another perspective is one that always has the power to upset. One way of looking at the clinical process is to view it as a conveyor belt, a production line. I await the howls of ‘science cannot be made into a production line’; I don’t ask that it is. I don’t want standards to drive the science; I want the recording of the science to be well structured using standards, and that is a big difference. But that is an aside. From the writing of the protocol through to a submission, we want each individual to receive the inputs needed for a task – a protocol document, some data, a study set-up or whatever – to know what they expect to receive, to know what they need to do to add value, and to pass the result on to the next group, with that added value forming part of an overall scheme in which only necessary work is done and unnecessary work is not. To do this we need an understanding of the big picture: the flow of data, and who does what, when and how. When we have that clear understanding, we can add the automation and tools necessary to implement it effectively.
Finally, we can look at the End-to-End ideal from the perspective of the main stakeholders. The FDA want to receive submissions that are easy to understand, can be reviewed using a consistent process, and whose data can be manipulated easily and aggregated across companies with ease. A sponsor wants some of this too: a quick review, for instance, and the ability to deal easily with queries from the agency. But there are conflicting pressures within a sponsor: the study versus the corporate need, and the use of study data well beyond the end of a study versus the pressure to get the study closed, which allows no time for improvement in process and tools. Between the CRO and the sponsor, the need is the ability to clearly communicate the requirements of a study, and to deliver study products that can be readily used by the sponsor. For a vendor, it is the ability to build that best-of-breed tool that integrates easily with the other tools a sponsor is using.
One thing that End-to-End is not is a single standard. It is not ODM or define.xml; it is not SDTM on its own. Stating that the End-to-End solution is one standard is analogous to saying a laptop is just a USB connector. A laptop is a blend of many things; USB serves the needs of the overall solution, but it is part of a package of hardware and software that results in a laptop running Windows 7 – or whatever other OS you like – upon which run useful applications. End-to-End is not just standards; it is process, tools, standards and many other things working together to meet the business need.
Until we scope and define the ‘End-to-End’ vision, we will continue to have difficulty building it. This post is just some thinking out loud – always dangerous – and an attempt to start defining what we need, with the aim of starting a discussion. Feel free to comment.