A plasma drug concentration-time profile is usually the net effect of two simultaneous processes: (1) absorption of the drug from the GI tract (since absorption is proportional to drug dissolution, the terms absorption and dissolution are often used interchangeably); and (2) elimination of the drug from the blood. These two processes, and their net effect, are represented by the three profiles shown in the figure.

In mathematical terminology, these three curves (profiles) are functions: dissolution or absorption is the input, blood concentration is the output, and elimination acts as the weighting factor or function. To simplify further, using the analogy of the linear regression employed for calibration curves, the output function may be considered "Y" (the dependent variable), the input function "X" (the independent variable), and the weighting function "M" (the slope or proportionality constant). In linear regression analysis, X, Y, and M are single values (numbers); in the case of drug concentration profiles, however, they are functions, so solving these function-based equations is somewhat more complicated.

The procedure is similar to that of linear regression, which is commonly used to establish a calibration curve and then to determine unknown concentrations or responses (e.g., peak height/area values) from it. If "Y" is known, "X" may be determined, and vice versa. Similarly, if the input function is known, one can determine the output function, and vice versa. Determining the output function (plasma drug concentrations) when the input function (dissolution results) is available is called convolution. The inverse, obtaining the input function (absorption/dissolution results) when the output function is provided, is called deconvolution.

Computer software is available that can solve for one function when the others are known. The convolution approach, however, can be simple enough that commonly available spreadsheet software may also be used. For further detail see: Qureshi, SA. In Vitro-In Vivo Correlation (IVIVC) and Determining Drug Concentrations in Blood from Dissolution Testing – A Simple and Practical Approach. The Open Drug Delivery Journal, 2010, 4, 38-47. (Link)
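As an illustration of the spreadsheet-style convolution step, here is a minimal sketch in Python. All numbers are hypothetical: the dissolution increments and the unit impulse response are illustrative series, not taken from any real product or IV study.

```python
import numpy as np

# Hypothetical hourly dissolution increments (fraction of dose dissolved per
# hour) and a unit impulse response (plasma concentration per unit dose,
# e.g., derived from IV data). Both series are illustrative only.
dissolution_increments = np.array([0.40, 0.30, 0.15, 0.10, 0.05])  # sums to 1.0
unit_impulse_response = np.array([1.00, 0.61, 0.37, 0.22, 0.14])   # exp.-type decline

# Discrete convolution: C(t) = sum over tau of input(tau) * UIR(t - tau)
predicted_conc = np.convolve(dissolution_increments, unit_impulse_response)

for t, c in enumerate(predicted_conc):
    print(f"t = {t} h: predicted concentration = {c:.3f}")
```

Each output point is the sum of the contributions from all earlier dissolution increments, weighted by the impulse response, which is exactly what a spreadsheet implementation computes column by column.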

Theoretical Consideration:  The most commonly used definition of IVIVC (In Vitro/In Vivo Correlation) is the one described in one of the FDA guidance documents (link). It defines IVIVC as a predictive mathematical model describing the relationship between an in vitro property of a dosage form (usually the rate or extent of drug dissolution or release) and a relevant in vivo response, e.g., plasma drug concentration or amount of drug absorbed.

In this regard, the most sought-after relationship is of “Level A.” It is defined as a predictive mathematical model for the relationship between the entire in vitro dissolution/release time course and the entire in vivo response time course, e.g., the time course of plasma drug concentration or amount of drug absorbed.

Practical Consideration: On the practical side, the purpose of IVIVC is to use drug dissolution results from two or more products to predict the similarity or dissimilarity of the expected plasma drug concentration profiles. Before one considers relating in vitro results to in vivo results, one must establish similarity or dissimilarity of the in vivo response, i.e., the plasma drug concentration profiles. The methodology for establishing similarity or dissimilarity of plasma drug concentration profiles is known as bioequivalence testing, for which well-established guidances and standards are available.

Ideally, therefore, one should focus on predicting or calculating plasma drug concentration profiles from drug dissolution results for an appropriate IVIVC. A common mathematical technique employed for this purpose is known as convolution, which convolutes (combines) dissolution results (profile) with plasma concentrations following intravenous (IV) drug administration to provide expected plasma concentrations for solid oral dosage forms. In mathematical terminology, dissolution results become an input function, and plasma concentrations (e.g., from IV) become a weighting factor or function, resulting in an output function representing plasma concentrations for the solid oral product.
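The inverse operation, deconvolution (recovering the absorption/input increments from an observed plasma profile), can also be computed numerically point by point. A minimal sketch, with all series hypothetical; the "observed" profile is simulated here by convolving a known input with the unit impulse response, so the forward substitution should recover that input:

```python
import numpy as np

# Illustrative unit impulse response (e.g., from IV data) and an observed
# oral plasma profile; both series are hypothetical.
h = np.array([1.00, 0.61, 0.37, 0.22, 0.14])
observed = np.convolve(np.array([0.40, 0.30, 0.15, 0.10, 0.05]), h)

# Point-by-point deconvolution by forward substitution:
# C[n] = sum_k x[k]*h[n-k]  =>  x[n] = (C[n] - sum_{k<n} x[k]*h[n-k]) / h[0]
n_in = len(observed) - len(h) + 1
x = np.zeros(n_in)
for n in range(n_in):
    tail = sum(x[k] * h[n - k] for k in range(max(0, n - len(h) + 1), n))
    x[n] = (observed[n] - tail) / h[0]

print(x)  # recovered absorption (input) increments
```

In practice, measurement noise in the observed profile propagates through this substitution, which is one reason deconvolution is considered more delicate than convolution.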

Further details about this methodology and its use will be described in future posts.

In continuation of the earlier post (link), this post describes some steps that are commonly described in the literature as method development but should not be considered method development steps. These steps are usually variations in experimental conditions intended to achieve certain desired characteristics of a dissolution test, such as discriminatory ability, improved reproducibility, and/or bio-relevance. The variations are numerous: choice of apparatus (paddle or basket), spindle rpm (50, 75, or 100), medium (water or buffer), pH (between 1 and 6), choice of de-aeration or its technique, use and choice of a sinker, etc.

Considering these steps or practices as method development is incorrect for two reasons:

1. These practices are commonly used during the product development stage. At this stage, an analyst is working with the variations in the product (formulation and manufacturing attributes), which requires a fixed dissolution method, not variations in the method itself, to evaluate the impact of product variations.
2. The products are developed for human use. The drug is expected to be released from the product in the GI tract environment, which remains constant from product to product. In vitro dissolution testing conditions simulate this GI tract environment; therefore, these conditions should also remain constant.

It is, therefore, critical that a dissolution test method be decided and fixed, reflecting the GI tract environment, prior to working on the development of a product.

The role of any analytical method validation (chromatographic/spectrophotometric) is to demonstrate that the method can measure an analyte accurately (accuracy, which includes specificity) and reliably (precision, which includes repeatability and reproducibility). In addition, if the analyte is expected to span a wide range, e.g., zero to 100%, which is usually the case in dissolution testing, then one must also establish that concentrations and responses have a linear relationship (linearity) by measuring responses at different concentrations.

All of the above-mentioned practices (tests) boil down to determining responses (UV absorbance, or peak height/area for chromatography) against concentrations, done in replicate (5-6 times) so that a standard deviation (variance) can be determined to establish confidence in the results. In short, if one prepares solutions at 100, 50, and 25% of the drug concentration (strength), measures their responses, and the responses come out in the same ratios as the concentrations (repeated 5-6 times), then the method has been validated.
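The replicate-and-ratio check described above reduces to a small calculation. A minimal sketch with hypothetical response values (the absorbance numbers are illustrative, not from any real assay):

```python
import statistics

# Hypothetical replicate responses (e.g., UV absorbance) at 25, 50, and 100%
# of nominal concentration; six replicates per level, illustrative numbers.
responses = {
    25:  [0.251, 0.249, 0.250, 0.252, 0.248, 0.250],
    50:  [0.502, 0.498, 0.501, 0.499, 0.500, 0.500],
    100: [1.003, 0.997, 1.001, 0.999, 1.000, 1.000],
}

for conc, reps in responses.items():
    mean = statistics.mean(reps)
    rsd = 100 * statistics.stdev(reps) / mean  # relative standard deviation
    print(f"{conc:>3}%: mean response = {mean:.4f}, RSD = {rsd:.2f}%")

# Linearity: mean responses should be in the same ratio as the concentrations.
ratio_50_25 = statistics.mean(responses[50]) / statistics.mean(responses[25])
print(f"50%/25% response ratio = {ratio_50_25:.3f} (expected ~2.0)")
```

If the response ratios track the concentration ratios and the replicate RSDs are small, the linearity and precision requirements described above are met.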

For drug dissolution testing, one has to demonstrate that if the drug is in solution, then the analytical method can measure it accurately and reliably. Therefore, to validate such methods, one needs to add the drug (“spiking”) in solution form to a dissolution testing apparatus, i.e., the vessel containing the required medium volume maintained at 37 ºC and spindle rotating. Samples are withdrawn and processed exactly as if these were from a product (filtration, dilution/concentration, extraction etc.), and responses are measured accordingly. If responses and concentrations are as one would expect (as explained above), then that dissolution method has been validated.

Then, one should be able to use this method (validated) to measure the dissolution of a drug from a product. Method validation steps are independent of drugs and products.
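The spiking-and-recovery check described above likewise reduces to a simple recovery calculation. A minimal sketch with hypothetical numbers (the 250 mg spike, the replicate values, and the ~98-102% acceptance range are assumptions for illustration, not from the source):

```python
# Hypothetical spiked-recovery check for a dissolution method: a known amount
# of drug is added ("spiked") to the vessel containing the required medium
# volume at 37 C with the spindle rotating, samples are processed exactly as
# if from a product, and percent recovery is computed for each replicate.
spiked_amount_mg = 250.0                                   # assumed spike
measured_mg = [248.9, 251.2, 249.5, 250.8, 249.9, 250.3]   # six replicates

recoveries = [100 * m / spiked_amount_mg for m in measured_mg]
mean_recovery = sum(recoveries) / len(recoveries)
print(f"mean recovery = {mean_recovery:.1f}%")  # ~98-102% is a common target
```

A mean recovery close to 100% with low replicate scatter indicates that the dissolution method measures the drug in solution accurately and reliably, independent of any particular product.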

On the other hand, a method development exercise is drug and product-dependent. In (analytical) method development, an analyst needs to select appropriate parameters such as wavelength, chromatographic column, dilution, extraction steps, filters etc. Once such parameters are established, one then moves to the method validation exercise as a second step.

Assay and content uniformity (CU) are two tests generally required to establish a pharmaceutical product’s quality. In one case (assay), the testing is done by pooling the content of multiple units (tablets/capsules) together, while in the second case (CU), the units are evaluated individually. The evaluation procedure usually remains the same, or almost the same: grinding the product, dissolving/extracting the drug using an appropriate solvent, and then assaying it using an analytical technique, e.g., chromatographic/spectroscopic.

Drug dissolution testing is based on the same extraction step, and is perhaps simpler, as it does not require grinding of the product. Another advantage of using dissolution tests for extraction purposes is that the choice of solvent is simple (water, with or without a solubilizer), which is physiologically relevant as well. Thus, one can evaluate CU and assay while conducting a dissolution test.

The question is, why has this simple approach not been adopted or commonly used? The reason is that currently used dissolution methods, based on the paddle and basket apparatuses, do not provide accurate and reproducible results. A more appropriate and reproducible apparatus could certainly provide such simplicity and efficiency. For further detail on this topic, please see The Open Drug Delivery Journal, 2008, 2, 33-37 (link).

Recently, a poster commented that “This whole new PVT Prednisone Lot thing is a Joke!”, expressing his/her frustration with the use of the tablets or the practice of PVT. Such comments, reflecting the erratic and unpredictable behavior of PVT (previously called apparatus calibration), are not new and have been reported extensively in the literature.

However, a more serious question and concern should be that if the performance of an apparatus cannot be verified appropriately, as recommended and required by the USP, then how reliable would the testing of the pharmaceutical products be using such an apparatus?

This seems to confirm literature reports that there are problems in using the apparatuses because of poor hydrodynamics within dissolution vessels. Therefore, caution should be observed in assessing the quality of products using such apparatuses.

It is a common query what the suggested dissolution test conditions should be for a drug, XYZ, particularly if drug XYZ happens to have low solubility in water. Unfortunately, this is not a valid question. Dissolution tests are primarily conducted for products, not for drugs. Furthermore, even a product does not dictate test conditions, because an analyst seeks to determine/establish a property (dissolution) of the product itself, which requires an unbiased method. Therefore, a dissolution method should not be product- or drug-specific.

Often it is suggested that if a product (tablet or capsule) floats or moves randomly during dissolution testing, it may produce variable results, and that one should therefore use a sinker to restrict this mobility and reduce the variability in dissolution results. However, it should be noted that the use of sinkers may invalidate dissolution testing/results and their relevance, because:

  1. Dissolution tests are conducted to determine potential drug release characteristics in vivo, where mobility of the products is natural and expected. Therefore, use of a sinker makes drug dissolution testing physiologically not relevant.
  2. When a sinker is used, it forces the product to settle at the bottom of the vessel, where interaction between product and dissolution medium is minimal, resulting in inefficient dissolution. In addition, the caging effect of sinkers may further reduce the dissolution rate. In fact, the use of a sinker may exaggerate the effect of poor hydrodynamics within a vessel. Thus, its use will provide inaccurate results even for in vitro dissolution characteristics of the test product. 
  3. It would not be possible to accurately compare the dissolution characteristics of two products in which one would float and the other not. In such cases, how would one subtract the effect of the sinker from the product which requires one and the other which does not so that a proper comparison of dissolution characteristics of the two products can be made?

Therefore, one should be careful when using a sinker, as the tests/results may provide an inaccurate reflection of product characteristics.

Generics are drug products that are considered identical to an innovator’s product in dose, strength, route of administration, safety, efficacy, and intended use. However, generics differ from innovators’ products with respect to formulation and manufacturing attributes. Because of these differences in formulation and manufacturing, generics are expected to demonstrate that drug release from their product is similar to that from the corresponding innovator’s product.

This similarity or equivalence in drug release between generic and innovator products is established by conducting bioavailability/bioequivalence studies. Such bioequivalence studies, in fact, establish that in vivo drug release (dissolution) from both products is the same. A critical point in understanding this principle is that generics strive to achieve, in vivo, the same drug release as the innovators’ products while having vastly different formulation and manufacturing attributes; otherwise, generic and innovator products would have different bioavailabilities and would not be bioequivalent. Therefore, differences in formulation or manufacturing attributes, or finding such differences by in vitro dissolution tests, are of no real consequence. Thus, the practice of finding such differences, or of developing dissolution tests under the terminology of a “discriminatory test,” is an erroneous and misguided exercise.

The purpose of any analytical procedure, including dissolution testing, is to determine an unknown property of a test substance/matrix. However, in current practices of drug dissolution testing, this is not the situation. Here, one seeks experimental conditions that yield the desired or expected release characteristics of the products. These are then described, incorrectly, as procedures for method development, for obtaining discriminatory tests, and/or for bio-relevant tests, etc. Furthermore, it is important to note that in all these practices the test product itself is used as its own reference. Therefore, one can never know the actual or real dissolution characteristics of any product.
 
For appropriate dissolution testing, the dissolution method (apparatus with associated experimental conditions) must remain constant, i.e., these should not change from product to product. For example, a method must remain the same if one would like to test or compare characteristics of an IR vs. ER product. For further discussion, the linked article may be useful.