It is often debated which approach is better or more appropriate for calibration or standardization of the apparatuses. It appears, however, that calibration may not be a critical step at present. The reason is that even if the apparatuses (paddle and basket) are adequately calibrated using either of the two approaches, they would still show a lack of relevance of results and provide very high variability in dissolution results.

It has been shown from experimental studies and computer simulation modeling that paddle and basket apparatuses provide poor hydrodynamics. Thus, results obtained from these apparatuses would be of limited value and use. As a result, the calibration aspect becomes secondary.

It is often claimed that drug dissolution testing is a useful technique during the product development stage. Does this claim have merit? Let us explore.

A formulator prepares two or more formulations/products having different dissolution rates using commonly described dissolution test conditions. How would the formulator decide which product can be tested in humans? For this purpose, the formulator needs to have some confidence in the ability of the dissolution test to predict the behaviour of a product in humans. It is well known that current practices of dissolution testing do not provide such predictability. Thus, the testing cannot be used for product development. There are examples where products showing different in vitro results provide overlapping in vivo results.

Then why do people suggest the use of dissolution testing for product development? Apparently, the suggestion is correct but not its interpretation. In principle, it is correct that dissolution tests should be reflective of in vivo results. However, success depends on how a dissolution test is conducted and what type of instrument/apparatus is used. Presently, people invariably assume that dissolution testing means conducting a test using paddle and basket apparatuses. The missing link here is that these apparatuses have never been validated to provide relevant in vivo conditions (environment) to predict in vivo results. Obviously, these apparatuses cannot provide relevant in vivo results. It is like asking whether a distance of 1,000 miles from point A to point B can be travelled by road in an hour. Of course it can, but we would need a car that runs at a speed of 1,000 miles/hour. The objective is fine, but the practicality of achieving the objective is not. This is exactly what is happening with the current practices of drug dissolution testing, i.e., the objective is fine, but the means (paddle and basket apparatuses) to achieve the objective are not.

The other day someone indicated that even products of drugs from BCS Class II (low solubility and high permeability) had not shown successful IVIVCs. These drugs, at least in theory, provide the best-case scenario for successful IVIVCs. The question was then asked: what may be the reason for such a general lack of success?

For any successful IVIVC, one needs to conduct dissolution tests by mimicking the in vivo environment as closely as possible. This is usually done by conducting a dissolution test using water or aqueous buffers having a pH in the range of 5 to 7, maintained at 37 °C. These conditions represent the GI tract (intestinal) environment.

On the other hand, the tests are conducted mostly using paddle and basket apparatuses to simulate the mixing and stirring environment. Unfortunately, the stirring and mixing environment of these apparatuses lacks simulation of the in vivo environment. In fact, these apparatuses provide almost no stirring and mixing. Therefore, because of this mismatch, one should not expect a successful IVIVC. For a successful IVIVC, one requires an efficient (gentle but thorough) stirring environment. One possibility for addressing this issue may be the use of a crescent-shaped spindle. For further discussion on the use of a crescent-shaped spindle, one may search this site or the literature in general.

In short, one should not expect success in developing an IVIVC using paddle and basket apparatuses.

The primary purpose of a dissolution test is to distinguish between acceptable and unacceptable batches of a product for human use. However, it is now widely recognized that current practices of dissolution testing may not be used for such purposes, i.e., they lack bio-relevance.

Therefore, rather than addressing the underlying deficiencies and improving upon these, the test is now commonly promoted as a measure/monitor of batch-to-batch consistency. It is not clear which element(s) of the manufacturing process(es) the test is linked to and how such a link has been established. In addition, there is a lack of validation of an appropriate link of dissolution to manufacturing. In the absence of such validation, it is not possible to describe this test as a performance or quality control/assurance test.

The current practice of dissolution testing as a QC test may be equated to installing a sophisticated digital camera to take a picture of every finished car coming off an assembly line. As long as the pictures are consistent from car to car, a car’s performance and quality may be assumed “assured.” However, as the picture and the performance are not linked, there is no guarantee that an acceptable picture will, in reality, reflect acceptable performance of the car, and vice versa.

Similarly, as a dissolution test is not linked to the performance of a product, acceptable dissolution results may not reflect the acceptable performance of the product and vice versa.

An IVIVC (in vitro-in vivo correlation) is a term commonly used in the area of drug dissolution testing, describing the desired relationship of in vitro (drug dissolution) results with in vivo characteristics such as drug levels in humans.

As the in vitro results are generally expressed as cumulative percent drug released, these profiles (results) are difficult to compare with in vivo results reported in concentration units. In addition, in vitro results only reflect a product’s release (dissolution) characteristics, while in vivo results reflect the combined effect of drug dissolution and absorption/elimination characteristics. To compare the two, either the in vitro dissolution results are manipulated (mathematically) using drug absorption/elimination characteristics to predict blood levels, which are then compared with the actual/observed drug levels, or in vivo dissolution results are extracted from the actual drug levels in humans and compared with the observed in vitro results. The first approach is known as convolution and the second as de-convolution. One of these techniques of data manipulation is required to make the results comparable.
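As an illustration of the convolution approach, the following is a minimal Python sketch that convolves an in vitro dissolution profile with an assumed unit impulse response (a simple one-compartment disposition model) to predict a concentration-time profile. The dissolution numbers, elimination rate constant, volume of distribution and dose are all hypothetical and for demonstration only.

```python
import numpy as np

# Hypothetical hourly sampling times and cumulative fraction dissolved (in vitro)
t = np.arange(0, 13)  # hours
cum_dissolved = np.array([0, 20, 45, 65, 78, 86, 91, 94, 96, 97, 98, 99, 99]) / 100.0

# Incremental fraction dissolved in each 1 h interval (the input function)
input_frac = np.diff(cum_dissolved, prepend=0.0)

# Assumed unit impulse response: one-compartment disposition of a unit dose
ke = 0.2      # elimination rate constant (1/h), illustrative only
vd = 50.0     # volume of distribution (L), illustrative only
dose = 100.0  # dose (mg), illustrative only
uir = np.exp(-ke * t) / vd  # concentration per unit dose

# Convolving the dissolution input with the unit impulse response gives the
# predicted concentration-time profile (mg/L), which can then be compared with
# the observed blood levels
predicted_conc = dose * np.convolve(input_frac, uir)[: len(t)]

for hour, conc in zip(t, predicted_conc):
    print(f"{hour:2d} h : {conc:6.2f} mg/L")
```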

Once the desired results are derived, i.e., in vivo dissolution from drug levels in blood (de-convolution) or in vivo drug levels from in vitro dissolution tests (convolution), these are compared with the corresponding actual in vitro or in vivo results, respectively. Achieving a Level A IVIVC means that these comparisons are made point by point, i.e., in vitro and in vivo results are compared individually for each sampling time. More precisely, the in vitro and in vivo results are plotted as a line and a correlation coefficient (r) value approaching 1 is obtained. This is where the difficulty lies, i.e., the assumption of an exact or matching time course of drug release both in vitro and in vivo. However, it is a well-known fact that predicting the accurate time course of a drug in humans is very difficult, if not impossible. For example, even when one would like to compare drug levels in humans alone, such as in bioequivalence studies, such point-by-point comparisons of drug levels are neither used nor required by regulatory agencies. The reason is that the variability in results (within or between human subjects) is expected to be very high. Therefore, how could it be possible to achieve point-by-point comparisons of in vitro and in vivo results?
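For illustration, a minimal sketch of such a point-by-point (Level A) comparison is given below, using hypothetical in vitro fractions dissolved and de-convolved in vivo fractions absorbed at matched sampling times; the data and the use of a simple linear regression are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical fractions at matched sampling times: in vitro fraction dissolved
# and in vivo fraction absorbed (obtained by de-convolution); illustrative only
frac_dissolved = np.array([0.10, 0.30, 0.55, 0.72, 0.85, 0.93, 0.97])
frac_absorbed  = np.array([0.08, 0.25, 0.48, 0.70, 0.83, 0.90, 0.96])

# Point-by-point (Level A) comparison: correlation coefficient and linear fit
r = np.corrcoef(frac_dissolved, frac_absorbed)[0, 1]
slope, intercept = np.polyfit(frac_dissolved, frac_absorbed, 1)

# An r (with slope) approaching 1 is taken as a Level A IVIVC; as discussed
# above, in vivo variability rarely permits such a point-by-point match.
print(f"r = {r:.3f}, slope = {slope:.2f}, intercept = {intercept:.2f}")
```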

To address this difficulty in comparing the in vivo results for bioequivalence studies, one is required to use parameters derived from the drug levels, namely the highest observed drug concentration (Cmax) and the area under the drug concentration-time profile (AUC). These parameters, in a sense, normalize or reduce the observed variability of drug levels or profiles, thus offering a more reasonable approach for evaluating or comparing in vivo results.
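As a simple illustration, the sketch below derives these two parameters (Cmax, and AUC by the linear trapezoidal rule) from a hypothetical concentration-time profile; the numbers are invented for demonstration only.

```python
import numpy as np

# Hypothetical plasma concentration-time data for one subject (illustrative only)
t = np.array([0, 0.5, 1, 2, 3, 4, 6, 8, 12, 24], dtype=float)      # hours
conc = np.array([0, 1.8, 3.2, 4.1, 3.9, 3.4, 2.5, 1.8, 0.9, 0.2])  # mg/L

cmax = conc.max()         # highest observed concentration (Cmax)
tmax = t[conc.argmax()]   # time at which Cmax is observed
auc = np.trapz(conc, t)   # area under the curve, linear trapezoidal rule (mg*h/L)

print(f"Cmax = {cmax:.1f} mg/L at {tmax:g} h, AUC(0-24 h) = {auc:.1f} mg*h/L")
```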

Thus, in short, one should keep in mind the limitations, in fact the impracticality, of the point-by-point (Level A) comparison approach for IVIVC purposes. For a more detailed discussion on this aspect, in particular regarding developing IVIVCs, one of my publications may be of interest (The Open Drug Delivery Journal, 2010, 4, 38-47, Link).

A common query concerning dissolution testing is how one should conduct the test for a drug. In response to such queries, different suggestions are made for choosing the apparatus, rpm, medium, etc. Unfortunately, such queries and responses lack scientific merit and logical thinking. The reason is that a dissolution test is conducted for a product, not for a drug (active ingredient). That is why pharmacopeial monographs, particularly in the USP, do not have dissolution tests under drugs (active ingredients) but under products.

Therefore, one can only suggest a testing procedure for products, not for drugs. Further, a testing procedure is not, or should not be, linked to a particular product, because testing is done to evaluate the product; the procedure must therefore be product-independent. The question then becomes how one should set up the experimental conditions. The answer is that the dissolution testing conditions should reflect the GI tract environment, particularly the intestinal one. As the GI tract environment does not change from product to product, the testing procedure should be fixed.

A typical testing environment may be water maintained at 37 °C, with some solubiliser to dissolve drugs of low aqueous solubility, and a simple stirring mechanism that provides efficient/thorough product-medium interaction. It is hoped that this suggestion will help in answering the common and frequent queries.

In a recent publication, USP describes the prednisone-based performance verification test (PVT) as follows:

“Lot P1 demonstrates sensitivity to test performance parameters (vessels and degassing).”

It appears that the role of the PVT has been reduced to establishing the appropriateness of vessels and degassing of the medium, rather than serving as a performance evaluation test for the apparatuses or procedure, as it is supposed to. Even the claims of sensitivity to the two suggested parameters may be of questionable merit, because:

1. In a recent study from the FDA laboratory, it has been demonstrated that the dissolution test does not appear to show sensitivity to vessel dimensions, stating: “Geometric characteristics varied within and among the sets of vessels, but the overall averages and standard deviations of dissolution results (six vessels) showed no statistical significant differences among the vessel sets”.

2. To be sensitive to a parameter (vessel geometry), there must be a link between dissolution results (response) and vessel geometry (action). For example, a mercury thermometer is based on heat-induced expansion: the higher the temperature, the greater the expansion. How does vessel geometry (or variation in its contour) relate/link to dissolution results?

3. Even if the contour of a vessel has an effect, should its control not be established using appropriate physical measurements? Otherwise, it is like suggesting that room temperature be monitored or controlled by measuring the humidity in the room. This appears to be an impractical and irrelevant approach.

4. Concerning sensitivity to de-aeration, the objective and practice also appear irrelevant for a number of reasons. For example: (1) conducting a dissolution test using a de-aerated medium makes the test physiologically irrelevant, as the physiological environment is not de-aerated; (2) for products that are not sensitive to de-aeration, performing the PVT with a de-aerated medium would be irrelevant; (3) dissolution tests are often conducted for longer than half an hour, and during the testing the dissolution medium usually becomes equilibrated with dissolved gases (air) and does not remain de-aerated, so how would the analyst maintain a consistent de-aeration level, which apparently would be impossible?

In short, the PVT in its current form does not appear to serve any useful purpose but imposes a financial burden on the pharmaceutical industry. Therefore, its use may easily be discontinued.

In a recent publication, USP describes the dissolution procedure as follows:

“The procedure can function both as a quality control tool and, under specified circumstances, as a predictor of the dosage form’s performance in vivo.”

One may interpret this as an indication of very weak support for the continuation of the procedure, at least in its current format.

Stating that the procedure could predict a product’s in vivo performance only under certain specified circumstances is a clear deviation from the commonly accepted understanding. Furthermore, the only reason the dissolution test was introduced was to replace the disintegration test, which was considered a poor predictor of the in vivo performance of products. Therefore, as per the publication, the current dissolution procedure appears to have the same limitation as the disintegration test, and thus similarly limited use.

Further, if the procedure is to work only in some cases, one would require some form of guidance on how these specified circumstances are to be determined or established.

On the other hand, describing the procedure as a quality control tool may also become redundant without in vivo relevance. In general, it is accepted that if a dissolution test, as a quality control tool, shows unexpected drug release, it reflects potentially unexpected in vivo drug release from the product, leading to concerns about the quality of the product. However, if the in vitro-in vivo link is severed or weak, what would be the rationale for using a dissolution procedure as a quality control tool?

The publication appears to have added serious confusion regarding the usefulness of the PVT procedure and the current practices of dissolution testing.

In a recent publication from USP, it is concluded that,

“Apparatus 1 results are stable over time. Those in Apparatus 2 show a decrease over time in the geometric mean but show no trend in variability.”

If all things are equal for testing except the apparatuses used, is it not obvious that instability in the results reflects the instability of Apparatus 2? The conclusion from USP supports the observations/results reported in the literature regarding the poor performance of Apparatus 2. This poor performance in testing reflects the poor hydrodynamic environment within dissolution vessels, as described extensively in the literature. Thus, the use of Apparatus 2 requires caution.

The USP calendar shows an entry for a planned webinar to address problems in the dissolution testing area. Accordingly, USP is requesting the submission of questions so that issues and concerns may be addressed in this regard. The following are some suggested questions/concerns which USP may consider addressing during the webinar or later.

  1. USP usually recommends the use of the paddle and basket apparatuses as the first choice for drug dissolution testing. Are there any documented reasons (evidence) for these choices showing their superiority over other apparatuses? How have these apparatuses been validated as appropriate dissolution apparatuses for evaluating pharmaceutical products for human use?
  2. As a general practice, USP suggests experimental conditions for individual products (monographs) to establish their drug dissolution characteristics. This aspect is also described in more detail in chapter <1092>. The practice requires that an analyst know the product and its expected dissolution behaviour and adjust the experimental conditions accordingly to achieve the expected characteristics. However, the objective of a dissolution test is to establish the drug dissolution characteristics of the test product. How should this conflict be addressed?
  3. Concerning the variability in dissolution results, how would an analyst determine (differentiate) whether the variability is related to the PVT (or a test product) or to the apparatuses? Is information on the contribution of the apparatuses to variability available?
  4. USP supplies prednisone performance verification tablets (PVT), whose dissolution results depend on the apparatus and experimental conditions employed. What would be the “true” (“actual”) dissolution characteristics of the PVT as a product, which may be used as a “reference”?
  5. Apparently, the prednisone performance verification tablets may be characterized as a fast-release product but show slower-release behaviour due to “cone” formation. The “cone” formation is the result of poor hydrodynamics and a lack of product-medium interaction within dissolution vessels. Would it not be obvious that products with characteristics similar to the PVT would also be inaccurately characterized as slow-release when, in fact, they are fast-release?
  6. Why does USP recommend that the dissolution medium be de-aerated? What is the rationale for this suggestion when, apparently, it would make the dissolution test irrelevant, as the physiological environment does not require de-aeration?
  7. Pharmacopeial dissolution tests are often presented as tests for establishing batch-to-batch consistency in products’ quality, and usually not as bio-relevant tests, because the tests often fail to provide IVIVCs. If bio-relevancy or IVIVC is not the objective, then any other test, such as the disintegration test, may be used for consistency checks. The dissolution test was introduced to replace the disintegration test because of the latter’s lack of bio-relevancy. Most pharmacopeial dissolution tests are not bio-relevant, so how should one justify a dissolution test over a disintegration test?