Pharmacokinetic (PK) analysis and reporting are essential to the development of new drugs in the pharmaceutical industry, and incorporation of PK data into CDISC standard datasets is part of this work. Such projects come with an array of challenges and complexities, new terminology to learn, and analysis techniques and data that may not be familiar to a programmer just getting started in this field.  

In this guide, we will outline some of the common mistakes and how to avoid them, thus helping to ensure that your first PK project is set up for success.

1) You don’t prepare

Lack of preparation and background understanding can sabotage the success of any project, and for complex PK programming projects it is especially crucial to have as much knowledge in place as possible. Take the time to delve properly into the context of the study you are working on before getting started.

PK describes a drug’s exposure by characterising its absorption, distribution, bioavailability, metabolism, and excretion, and is assessed by measuring the concentration of one or more drugs in the body over time. This may be represented by blood, plasma, serum, or even urine/faecal levels, usually reported at pre-dose and post-dose time points. Studies range from simple single-dose studies with one analyte to more complex multiple-dose studies involving multiple analytes. It may be necessary to report only plasma data, or to report both plasma and urine data (or other combinations) in the same trial. These variations need to be taken into account when designing dataset and reporting specifications.

Some studies use only simple PK analysis techniques to calculate PK parameters, e.g. non-compartmental analysis (NCA), whereas others require more complex PK/pharmacodynamic (PD) modelling or population PK analysis. The dataset requirements for each of these will differ from project to project.
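
To give a flavour of the simpler derivations, here is a minimal Python sketch of three common NCA quantities: Cmax, Tmax, and AUC(0-tlast) via the linear trapezoidal rule. The single-profile input and the values are hypothetical, and real derivations must follow the SAP and be produced in validated software; treat this as an illustration only.

```python
import numpy as np

def nca_summary(times, concs):
    """Cmax, Tmax and AUC(0-tlast) for one concentration-time profile.

    times: sampling times in hours, ascending; concs: concentrations at
    those times. A sketch only -- real parameter rules (BLQ handling,
    partial AUCs, etc.) come from the SAP.
    """
    times = np.asarray(times, dtype=float)
    concs = np.asarray(concs, dtype=float)
    cmax = concs.max()                # peak concentration
    tmax = times[concs.argmax()]      # time of (first) peak
    # Linear trapezoidal rule: interval width times mean of adjacent concs
    auc_last = float(np.sum(np.diff(times) * (concs[:-1] + concs[1:]) / 2.0))
    return cmax, tmax, auc_last

# Hypothetical single-dose profile
cmax, tmax, auc = nca_summary([0, 0.5, 1, 2, 4, 8, 12],
                              [0.0, 12.1, 18.4, 15.2, 9.8, 4.1, 1.6])
print(f"Cmax={cmax} ng/mL, Tmax={tmax} h, AUC(0-tlast)={auc:.1f} ng*h/mL")
```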

Since no two studies are the same, bear in mind that the analysis requirements can vary widely. Make sure that you are fully briefed by your project lead before starting work.

2) You don’t thoroughly review the Statistical Analysis Plan (SAP)

Further to point one, the SAP contains crucial information about the analysis and reporting requirements, so the programmer must take sufficient time to review it thoroughly. There are several questions for programmers to be particularly mindful of:

  • Does the SAP contain details about how to handle values below the Lower Limit of Quantification (LLOQ) in the analysis and reporting of the data? (A minimal handling sketch follows this list.)
  • Is it clear from the SAP which PK parameters the programmer will be responsible for calculating, and how they will be calculated? Bear in mind that while some parameters require specialist software and are therefore usually calculated by a PK Scientist, others are simpler and may be derived by the programmer, so clarify which parameters you will need to derive directly.
  • Is the definition of the PK population clear in the SAP, and can it be programmed directly from the datasets, or is an external population assignment file required?
  • Does the SAP account for any baseline adjustment potentially needed for endogenous compounds?
  • Does the SAP detail potential exclusions of data from the PK analysis, e.g. as a result of protocol deviations? If so, it is also important to understand how such exclusions will be received and how they will be flagged within the datasets.
  • Does the SAP contain rules for the handling of missing or incomplete samples? Is it clear how imputed records should be used in subsequent analyses?
  • Does the SAP provide clear expectations about the units to be used for the PK concentrations and parameters, and the level of precision they should be reported to? Bear in mind that units and the number of significant figures or decimal places required often vary between studies.

The above is not a comprehensive list, but it covers some key aspects for consideration. It is always good practice to review the SAP with a critical eye, be proactive, and flag any areas that need more explanation; this will help ensure the project runs smoothly.
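
To make two of those questions concrete, the sketch below applies one commonly seen (but entirely study-specific, so assumed here) BLQ rule, setting pre-dose values below the LLOQ to zero and post-dose BLQ values to missing, and rounds results to three significant figures. The column names are illustrative; the actual rules must always come from your SAP.

```python
import math
import pandas as pd

def impute_blq(df, lloq):
    """Assumed BLQ rule: pre-dose values below the LLOQ become 0;
    post-dose BLQ values become missing. Real rules come from the SAP."""
    out = df.copy()
    blq = out["conc"] < lloq
    out.loc[blq & (out["timepoint"] == "pre-dose"), "conc"] = 0.0
    out.loc[blq & (out["timepoint"] != "pre-dose"), "conc"] = float("nan")
    return out

def sigfig(x, n=3):
    """Round a value to n significant figures for reporting."""
    if x is None or (isinstance(x, float) and math.isnan(x)) or x == 0:
        return x
    return round(x, n - 1 - int(math.floor(math.log10(abs(x)))))

pk = pd.DataFrame({
    "timepoint": ["pre-dose", "1 h post-dose", "2 h post-dose"],
    "conc":      [0.0213,     4.5678,          0.0121],  # LLOQ = 0.05 assumed
})
print(impute_blq(pk, lloq=0.05).assign(conc_3sf=lambda d: d["conc"].map(sigfig)))
```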

3) You don’t get to grips with the data requirements

You may be proficient with CDISC standards and creating compliant datasets in the form of SDTM and ADaM domains. However, when handling PK data, it is essential to plan for the two sequentially produced SDTM datasets required: SDTM.PC, containing the pharmacokinetic concentrations, and SDTM.PP, containing the calculated PK parameters.

Similarly, there will be two sequentially produced analysis datasets: ADPC is derived from SDTM.PC, and ADPP from SDTM.PP. The former is used both for onward derivation of PK parameters and for producing tables, figures and listings (TFLs) of concentration data; ADPP is used for the analysis and reporting of PK parameter data.
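
As a simplified illustration of how ADPC picks up from SDTM.PC, the fragment below maps a handful of core PC variables into BDS-style ADPC variables. The variable subset, values, and derivations are assumptions for illustration; real ADPC specifications carry many more variables (treatment, timing, flags) and are study-specific.

```python
import pandas as pd

# Hypothetical extract of SDTM.PC for one subject and one analyte
pc = pd.DataFrame({
    "USUBJID":  ["1001", "1001", "1001"],
    "PCTESTCD": ["DRUGX", "DRUGX", "DRUGX"],
    "PCTPT":    ["Pre-dose", "1 hour post-dose", "2 hours post-dose"],
    "PCTPTNUM": [0.0, 1.0, 2.0],
    "PCSTRESN": [None, 4.57, 3.21],    # standardised numeric result
    "PCSTRESU": ["ng/mL", "ng/mL", "ng/mL"],
})

# Minimal BDS-style ADPC: analysis value and timing carried over from PC
adpc = pc.rename(columns={"PCTPT": "ATPT", "PCTPTNUM": "ATPTN"}).assign(
    PARAMCD=lambda d: d["PCTESTCD"],
    PARAM=lambda d: d["PCTESTCD"] + " concentration (" + d["PCSTRESU"] + ")",
    AVAL=lambda d: d["PCSTRESN"],
)[["USUBJID", "PARAMCD", "PARAM", "ATPT", "ATPTN", "AVAL"]]
print(adpc)
```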

Some further points to take note of, to ensure that datasets are set up correctly in line with the individual study requirements, include:

  • Do any time points in the raw data serve as both a post-dose time point for one dose and a pre-dose time point for the next? If so, the time point in question may need to be duplicated in ADPC as two rows, one for each related dose (depending on whether you are producing outputs of pre-dose data to assess achievement of steady state); a minimal sketch follows this list.
  • Do clock changes as a result of daylight saving time adjustments need to be taken into account when working out actual sampling times relative to dose timing?
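
On the first of those points, here is one assumed way to duplicate a trough sample so that it appears both as the 24-hour post-dose record for one dose and the pre-dose record for the next; the column names and dose numbering are purely illustrative.

```python
import pandas as pd

adpc = pd.DataFrame({
    "USUBJID": ["1001"],
    "ATPT":    ["24 h post-dose"],
    "DOSENO":  [1],             # the dose this sample was drawn after
    "AVAL":    [0.82],
})

# Re-use the trough: copy the row and re-label it as pre-dose for the next dose
predose_copy = adpc.assign(ATPT="Pre-dose", DOSENO=adpc["DOSENO"] + 1)
adpc = pd.concat([adpc, predose_copy], ignore_index=True)
print(adpc)
```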

We should note that the PK parameters are not part of the raw data but are typically received as an external data file from the PK Scientist. Individual projects may have different requirements for the format of the file to be sent to the PK Scientist for the calculation of PK parameters, so programmers should get full agreement upfront on the specifications and format for this exchange. For example, on some projects the PK Scientist may only require the standard ADPC dataset (the analysis dataset for pharmacokinetic concentration data), with the pharmacokinetic analysis team performing any further manipulation of the data. In other cases, programmers may need to provide ‘ready to go’ WinNonlin® input files, and therefore need to derive these from ADPC before sending them across. WinNonlin® is the specialist software often used by pharmacokinetic scientists for non-compartmental analysis and PK/PD modelling.

Another point to note is that the PK Scientist will often send back the parameters as a .csv file. In these cases, do take the time to request a test transfer to determine the structure of this file so that you can program SDTM.PP upfront.
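
As an illustration of that last step, the sketch below reads a hypothetical parameter transfer file and reshapes it towards an SDTM.PP-like structure, one row per subject per parameter, using CDISC parameter codes such as CMAX and AUCLST. The transfer file layout is an assumption, which is exactly why a test transfer is worth requesting.

```python
import io
import pandas as pd

# Stand-in for the PK Scientist's .csv transfer (layout assumed; confirm the
# real structure with a test transfer before programming SDTM.PP)
csv = io.StringIO(
    "USUBJID,CMAX,TMAX,AUCLST\n"
    "1001,18.4,1.0,96.3\n"
    "1002,15.2,2.0,88.1\n"
)
params = pd.read_csv(csv)

# Pivot to one row per subject per parameter, as SDTM.PP requires
pp = params.melt(id_vars="USUBJID", var_name="PPTESTCD", value_name="PPSTRESN")
pp["PPSEQ"] = pp.groupby("USUBJID").cumcount() + 1
print(pp.sort_values(["USUBJID", "PPSEQ"]).reset_index(drop=True))
```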

4) You don’t manage your timelines effectively

Don’t let the complexities of working with PK data compromise any critical deadlines. Effective timeline management is crucial for every clinical study, and in PK programming projects it can be especially challenging because of the additional hand-offs involved. First of all, it is important to understand the timelines for analysing the samples and calculating the required PK concentrations. Secondly, the timelines and project plan must account for the subsequent calculation of the PK parameters by the PK Scientist.

It is also worth highlighting that PK data will unblind you to treatment allocation if received before database lock. For this reason, it is often not possible to perform a complete dry run using real PK data, and datasets and outputs may only be produced for the first time after database lock; this should, of course, be taken into account when planning your timelines. There is the option to perform a dry run using dummy PK data, which enables the programs for the datasets and TFLs to be written in advance. However, since dummy data is often of poor quality, you will inevitably need to adjust programs and outputs once the live data is available.

In summary, PK studies present many challenges to overcome, and therefore opportunities to learn and develop technically and professionally. Preparation and an understanding of the common pitfalls are key to contributing effectively to your project as a programmer and to ensuring that the datasets and outputs produced are of the highest quality.


Veramed