Saturday, November 25, 2023

The Need for Speed: Why CDISC Dataset-JSON is so important.

The CDISC community has been suffering for 20 years or more from the obligation of the FDA (and of other regulatory authorities following the FDA) to submit datasets in the SAS Transport 5 (XPT) format.
The disadvantages and limitations of XPT are well known: the limits of 8, 40 and 200 characters, US-ASCII encoding only, etc. But there is much more: the use of XPT has essentially been a roadblock for innovation at the regulatory authorities all these years.
Therefore, the CDISC Data Exchange Standards Team has developed a modern exchange format, Dataset-JSON, which, as the name states, is based on JSON, currently the most used exchange format worldwide, especially for use with APIs (Application Programming Interfaces) and RESTful Web Services.
The new exchange format is currently being piloted by the FDA, in cooperation with PHUSE and CDISC.

Unlike XPT, Dataset-JSON is truly vendor-neutral and much, much easier to implement in software. This has already resulted in a large number of applications being developed and showcased during the COSA Dataset-JSON Hackathon. The new format also creates many opportunities that are not yet well recognized by the regulatory authorities.
XPT is limited to the storage of "tables" in "files", i.e. it is two-dimensional. JSON, however, allows data (and metadata) to be represented with much more dimensionality and depth. This means that, even though Dataset-JSON will at first still be used to exchange "tables", these can be enhanced and extended to also carry audit trails (much wanted by the FDA), source data (e.g. from EHRs, lab transfers) and any type of additional information, at the level of the dataset, the record, as well as the individual data point.
Furthermore, Dataset-JSON will allow embedding images (e.g. X-rays, EMRs) and digital data such as ECGs into the submission data.
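To make this concrete, here is a minimal sketch of what a Dataset-JSON payload roughly looks like. The structure is simplified and the OIDs and values are invented for illustration; the published specification has additional required attributes.

```python
import json

# Minimal, simplified Dataset-JSON-style payload (illustrative only).
dataset = {
    "clinicalData": {
        "studyOID": "STUDY.EX1",
        "metaDataVersionOID": "MDV.1",
        "itemGroupData": {
            "IG.VS": {
                "records": 2,
                "name": "VS",
                "label": "Vital Signs",
                "items": [
                    {"OID": "IT.VS.USUBJID", "name": "USUBJID",
                     "label": "Unique Subject Identifier", "type": "string"},
                    {"OID": "IT.VS.VSORRES", "name": "VSORRES",
                     "label": "Result or Finding in Original Units", "type": "string"},
                ],
                # Row-oriented data: one array per record, in item order.
                "itemData": [
                    ["CDISC-001", "120"],
                    ["CDISC-002", "135"],
                ],
            }
        },
    }
}

# Unlike XPT, the payload is plain UTF-8 text with no 8/40/200-character limits.
print(json.dumps(dataset, indent=2)[:120])
```

Note how metadata (the `items` array) travels together with the data, and how nothing in the structure prevents adding further nesting, e.g. for audit-trail information per record or per data point.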

The major advantage of using this modern format is however on another level.

Traditionally, submissions to regulatory authorities are only done after database closure, mapping the data to SDTM, SEND and/or ADaM, etc. This essentially means a period of often several months after the clinical study has been finalized, and years after the clinical study was started. In the meantime, many patients may have died or been seriously harmed, as the treatment they need is not yet available. This is what we call "the need for speed".

Dataset-JSON can be a game changer here.

Essentially, partial submission datasets can be generated as soon as the first clinical data are received from the sites. The regulatory authorities, however, are not used to starting their review as soon as the first clinical data become available, among other reasons due to their technical infrastructure.
JSON is used worldwide especially with APIs and RESTful web services, meaning that even submission data can be exchanged in real time, as soon as they are created. Although JSON can of course be used with and for "files", its real strength lies in its use for "services". All other industries have moved from files to SOA, "Service-Oriented Architecture".

What does this mean for regulatory submissions? 

Imagine a "regulatory neutral zone" (one can discuss what "neutral" means) between sponsor and regulatory agency, where the sponsor can submit submission records (not necessarily as "files") as soon as they are created, using an API, e.g. with RESTful Web Services. Using the same API, records can also be updated (or deleted) when necessary, with audit trails. On the other side, reviewers can query the study information from the repository using the API, not necessarily by downloading "files" (although that remains possible), but by getting answers to questions or requests like "give me all subjects and records of subjects with a systolic blood pressure of more than 130 mmHg that have a BMI higher than 28".
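As a sketch of how such a reviewer query could look on the wire: the endpoint, path and parameter names below are purely hypothetical (no such service exists today), but the pattern is standard REST practice.

```python
from urllib.parse import urlencode

# Hypothetical "regulatory neutral zone" endpoint -- base URL, path and
# parameter names are invented for illustration.
BASE = "https://neutral-zone.example.org/api/v1"

def build_review_query(domain: str, **filters: str) -> str:
    """Build a record-level query, e.g. the blood-pressure/BMI example."""
    return f"{BASE}/studies/STUDY.EX1/records/{domain}?" + urlencode(filters)

# "All records of subjects with systolic BP > 130 mmHg and BMI > 28"
url = build_review_query("VS",
                         where="VSTESTCD eq 'SYSBP' and VSSTRESN gt 130",
                         subjectFilter="BMI gt 28")
print(url)
```

The point is not this particular syntax, but that the reviewer receives an answer to a question, not a file to download and open.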
This "regulatory neutral zone" is surely different from the current "Electronic Submissions Gateway" (which is completely file-based), but is more akin to the API-governed repositories used in many other (also regulated) industries such as aviation, finance, etc.

Essentially, when all this is in place, a regulatory submission could be started as soon as the first data points become available, and finalized much sooner (even months or years sooner) than is currently the case. This can then save the lives of thousands of patients.


 

Monday, January 9, 2023

CDISC SDTM codetables, Define-XML ValueLists and Biomedical Concepts

Yesterday, I started an attempt to implement the "CDISC CodeTables" in software to allow even more automation when doing SDTM mapping using our well-known SDTM-ETL software.
As the name says, CDISC has published these as tables, and so far only as Excel worksheets. Unfortunately, this information is not in the CDISC-Library yet; otherwise it would have cost me only a relatively simple script accessing the CDISC-Library API and a few hours to get all the information implemented as Define-XML "ValueLists".

Essentially, I do not really understand (others will probably say "he does not want to understand") why these codetables were not published as Define-XML ValueLists right from the start. Is it that the authors have limited or no Define-XML knowledge (there are CDISC trainings for that ...)? Or is it still the thinking that Define-XML is something that one produces after the SDTM datasets have been generated (often using some "black box" software of a specific vendor), rather than using Define-XML upfront (pre-SDTM-generation) as a "specification" for the SDTM datasets to be produced (the better practice)? Or is it just the attitude of using Excel for everything: "if all you have is Excel, everything is a table"?
Now, I do not have anything against tables. I have been teaching relational databases at the university for many years, and these are indeed based on ... tables. The difference, however, is that in a relational database the relations are explicit (using foreign keys), whereas in all the CDISC tables (including those for SDTM, SEND and ADaM), the relations are mostly implicit, described in some PDF files.

When I started looking into the Excel files, I immediately had to say "OMG" ...

Each of the Excel files seems to have a somewhat different format, some with and others without empty columns, and with completely different headers. So even when I wrote software to read out the content, I would still need to adapt the code (or use parameters) for each input file to have at least some chance of success. Although far from ideal, I then wrote such a little program, and could at least produce some raw XML CDISC CodeLists, although the results still require a lot of rework.
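The approach can be sketched as a reader parameterized by a per-file column map. The worksheet rows are simulated here as plain lists and the headers are illustrative, not the exact CDISC ones; a real implementation would read the rows with e.g. openpyxl.

```python
# Sketch: a per-worksheet column map compensates for the differing headers
# and stray empty columns. Header names here are illustrative.

def read_codetable(rows, colmap):
    """Extract (testcd, coded_value) pairs using a per-file column map."""
    header = rows[0]
    # Locate each logical column by its (file-specific) header text.
    idx = {name: header.index(src) for name, src in colmap.items()}
    out = []
    for row in rows[1:]:
        if not any(row):          # skip fully empty rows
            continue
        out.append((row[idx["testcd"]], row[idx["value"]]))
    return out

eg_rows = [
    ["EGTESTCD", "", "CDISC Submission Value"],   # note the empty column
    ["AVCOND", "", "ATRIAL FLUTTER"],
    ["JTAG", "", "msec"],
]
pairs = read_codetable(eg_rows, {"testcd": "EGTESTCD",
                                 "value": "CDISC Submission Value"})
print(pairs)
```

The `colmap` argument is exactly the "parameters per input file" mentioned above: one small dictionary per worksheet instead of one program per worksheet.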

So I started with the DS (Disposition) codetable, which went pretty smoothly.

Then I decided to tackle a more complicated one, the codetable for EG (ECG - Electrocardiogram).
I knew this would be a non-trivial one, as the EG domain itself is pretty weird. In contrast to normal CDISC practice, EGTESTCD and EGTEST have two codelists, as can be seen in the CDISC-Library Browser:

i.e. one for classic ECGs and one for Holter Monitoring tests.

Personally, I consider this very bad practice. The normal (good) practice is to have a single codelist, and then use Define-XML ValueLists with "subset" codelists for different use cases. This is a practice also followed by CDISC for other domains, e.g. by publishing a subset codelist for units specifically for Vital Signs tests.

Also, when creating SDTM datasets, we define subset codelists all the time in our define.xml, e.g. based on the category (--CAT variable), but we also generate a subset codelist with only the tests that appear in our CRFs or were defined in the protocol. For example, for LB (Laboratory) we will not submit all 2500+ terms for LBTESTCD and LBTEST, but only the ones we used or planned to use.
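The principle of deriving such a subset codelist is simple; a tiny sketch with an illustrative handful of LB terms (not the real 2500+-term codelist):

```python
# Start from the full published codelist and keep only the terms actually
# used (or planned) in the study. Terms below are a small illustrative sample.
full_lbtestcd = {"ALT": "Alanine Aminotransferase",
                 "AST": "Aspartate Aminotransferase",
                 "BILI": "Bilirubin",
                 "GLUC": "Glucose"}

used_in_crf = {"ALT", "AST"}        # tests planned in the protocol/CRF

subset = {code: name for code, name in full_lbtestcd.items()
          if code in used_in_crf}
print(subset)
```

The resulting dictionary is then what ends up as a subset CodeList in the define.xml.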

But maybe the authors of this part of the standard were unaware of define.xml, subset codelists, and especially Define-XML "ValueLists" and the nice possibility to work with "WhereClauses".

So, the codetable for EG, in Excel format, comes with two tabs: "EG_Codetable_Mapping" and "HE_Codetable_Mapping":

 

That the latter is for the "Holter Monitoring" case is not immediately obvious: there is not even a "README" tab explaining the use cases.

As usual (and unfortunately), there are different sets of columns for the different variables the subsets of codes apply to:


This makes it hard to automate anything in software: either one needs to revamp the columns, or do a huge amount of copy-and-paste (as in the pre-CDISC-Library days).

When comparing the contents of the tabs, things get even more complicated.
Some subset codelists appear in both tabs; others, such as the ones for units (for EGSTRESU, depending on the value of EGTESTCD), only in the first. Does this mean the unit subsets are not applicable to the Holter Monitoring use case?

When then comparing the subsets for the values of EGSTRESC (depending on EGTESTCD) in both tabs, some are equal (e.g. for the case of EGTESTCD=AVCOND), while others differ, ranging from a single differing term to a larger set of differing terms.

I tried to resolve all this by adapting my software - it didn't work well. So I started doing ... copy and paste ...

This results in subset codelists like:


with some codelists coming in two flavors, one for the normal case and one for the Holter Monitoring case - of course I gave these different OIDs.

For the units, the organization in the worksheet is pretty unfortunate, leading e.g. to:


stating that for each of EGTESTCD being JTAG, JTCBAG, JTCBSB and JTCFAG, the only allowed unit for EGSTRESU is "msec" (milliseconds).
This is valid for use in Define-XML "ValueLists". The "WhereClause" would then e.g. say:
"Use codelist CL.117762.JTAG.UNIT for EGSTRESU when EGTESTCD=JTAG".

The better way, however, is to define one codelist, e.g. "ECG_Interval", and define a WhereClause stating when it should be used for EGSTRESU. This leads e.g. to the following Define-XML ValueList and WhereClause:


with the subset item and codelist defined as:

 

and the ValueList is of course assigned to EGSTRESU:

 
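In code, this ValueList/WhereClause pattern can be sketched as follows, generated here with Python's ElementTree. The element and attribute names follow the Define-XML 2.0 schema; all OIDs are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Sketch of the Define-XML 2.0 ValueList/WhereClause pattern; OIDs invented.
DEF = "http://www.cdisc.org/ns/def/v2.0"
ET.register_namespace("def", DEF)

# ValueListDef: EGSTRESU gets a dedicated item when the WhereClause matches.
vld = ET.Element(f"{{{DEF}}}ValueListDef", OID="VL.EG.EGSTRESU")
item_ref = ET.SubElement(vld, "ItemRef",
                         ItemOID="IT.EG.EGSTRESU.ECG_INTERVAL", Mandatory="No")
ET.SubElement(item_ref, f"{{{DEF}}}WhereClauseRef",
              WhereClauseOID="WC.EG.EGTESTCD.JTAG")

# WhereClauseDef: "use this item when EGTESTCD equals JTAG".
wcd = ET.Element(f"{{{DEF}}}WhereClauseDef", OID="WC.EG.EGTESTCD.JTAG")
rc = ET.SubElement(wcd, "RangeCheck",
                   {f"{{{DEF}}}ItemOID": "IT.EG.EGTESTCD",
                    "Comparator": "EQ", "SoftHard": "Soft"})
ET.SubElement(rc, "CheckValue").text = "JTAG"

vld_xml = ET.tostring(vld, encoding="unicode")
wcd_xml = ET.tostring(wcd, encoding="unicode")
print(vld_xml)
print(wcd_xml)
```

The referenced item definition (with its CodeListRef to the "ECG_Interval" subset codelist) would be a separate ItemDef, omitted here for brevity.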

Essentially, this is all very related to Biomedical Concepts!
For example, the concept "JTAG" (with name "JT Interval, Aggregate") would then have the property that it is an ECG test (and thus related to EGTESTCD/EGTEST in SDTM), with the property that its unit can only be "msec", at least when using "CDISC notation" for the unit. It would however be better to use the UCUM notation, which is "ms" and which is used everywhere in health care except at CDISC ..., and which has the advantage of allowing automated unit conversion, something that is not possible with CDISC units.
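The difference is easy to demonstrate: with UCUM, conversion factors can be derived mechanically. A tiny hand-rolled sketch (a real implementation would use a proper UCUM library, and the CDISC-to-UCUM mapping table here is illustrative):

```python
# With UCUM units, conversion is mechanical: each unit has a factor relative
# to a base unit (here: seconds). Tiny illustrative table only.
UCUM_TO_SECONDS = {"ms": 1e-3, "s": 1.0, "min": 60.0}

# CDISC notation ("msec", "sec") has no such machinery, so it must first be
# mapped to UCUM before any automated conversion is possible.
CDISC_TO_UCUM = {"msec": "ms", "sec": "s"}

def convert(value, cdisc_unit, target_ucum):
    """Convert a value given in a CDISC unit to a target UCUM unit."""
    ucum = CDISC_TO_UCUM[cdisc_unit]
    return value * UCUM_TO_SECONDS[ucum] / UCUM_TO_SECONDS[target_ucum]

print(convert(420, "msec", "s"))   # a 420 msec JT interval, in seconds
```

This is exactly what is impossible with CDISC units alone: nothing in the string "msec" tells software how it relates to "sec".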

CDISC has now published its first Biomedical Concepts in the CDISC-Library which can be queried using the Library RESTful API:


For example, for the BC "Aspartate Aminotransferase Measurement", the API response (in JSON) is:

 
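Such a query can be sketched as follows. The path and header names reflect the CDISC Library API as I understand it (check the official API documentation before relying on them), and both the API key and the concept ID are placeholders.

```python
import urllib.request

# Sketch of a CDISC Library API request for a Biomedical Concept.
# Path and header names are assumptions; key and concept ID are placeholders.
BASE = "https://library.cdisc.org/api"

def bc_request(concept_id: str, api_key: str) -> urllib.request.Request:
    url = f"{BASE}/mdr/bc/biomedicalconcepts/{concept_id}"
    return urllib.request.Request(
        url,
        headers={"api-key": api_key, "Accept": "application/json"},
    )

req = bc_request("C12345", "YOUR-API-KEY")   # placeholder concept ID
print(req.full_url)
# urllib.request.urlopen(req) would then fetch the JSON response.
```

With a valid key and concept ID, the JSON response contains the concept's properties, such as the allowed unit(s) discussed above.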

As I understand it, CDISC is also working on generating BCs starting from codetables, especially for the oncology domains and codelists, where we have similar dependencies between standardized values (--STRESC) and possible units (--STRESU).

It would then be great if we could see all the codetables published by CDISC also made available as BCs through the CDISC-Library API. With the SDTM information then added, these would correspond to the ValueLists in the define.xml of our SDTM submission.

But I will start with converting these awful Excel codetables to Define-XML CodeLists and ValueLists (with the corresponding WhereClauses of course) first.

Essentially, CDISC should be forbidden from publishing standards (and even drafts of them) as Excel files; only a real, standardized, machine-readable form, such as one based on XML or JSON, should be allowed. This would finally allow much better QC of the draft standards (instead of visual inspection!) and make the standards immediately usable in systems and software.

I presume many of you will disagree, so your comments are always welcome!