The most widely used and cited dissolution tolerances are based on the USP Acceptance Table. The results are evaluated in stages; that is, retesting is allowed, with relaxed tolerances and a higher allowed variability at each subsequent stage.

Stage 1: Test 6 tablets. Each unit is not less than Q+5% dissolved.
Stage 2: Test 12 tablets (including the 6 from Stage 1). The average is equal to or greater than Q, and no unit is less than Q-15%.
Stage 3: Test 24 tablets (including the 12 from Stages 1 and 2). The average is equal to or greater than Q, no more than two units are less than Q-15%, and no unit is less than Q-25%.

(The Q-values are provided in individual product monographs and represent the expected percent drug release (dissolution) at specified times, such as 30, 45, or 60 minutes.)

Considering the above criteria with a Q-value of 80, one can obtain the following sets of acceptable results.

|         | 1   | 2   | 3  | 4   | 5   | 6   | Mean | %RSD (CV) |
|---------|-----|-----|----|-----|-----|-----|------|-----------|
| Stage 1 | 95  | 100 | 91 | 90  | 98  | 102 | 96   | 5         |
| Stage 2 | 96  | 88  | 65 | 110 | 66  | 65  | 82   | 23        |
| Stage 3 | 55  | 61  | 92 | 98  | 105 | 102 | 85   | 26        |
|         | 103 | 87  | 77 | 97  | 93  | 89  | 91   | 10        |
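The staged criteria above are simple enough to express in code. The following sketch (Python; the function names are illustrative, not from any standard library, and Q = 80 is assumed as in the example) evaluates a set of per-unit results against the three stages and reproduces the Mean and %RSD columns of the table:

```python
# Sketch of the staged USP acceptance evaluation (Q = 80, as in the example
# above). Function names are illustrative, not from any standard library.
from statistics import mean, stdev

def rsd(values):
    """Percent relative standard deviation (%RSD, also called CV)."""
    return 100 * stdev(values) / mean(values)

def usp_stage(results, q=80):
    """Return the first stage (1, 2, or 3) at which `results` pass, else None.

    `results` is the full list of per-unit % dissolved, in testing order:
    units 1-6 belong to Stage 1, 7-12 to Stage 2, 13-24 to Stage 3."""
    s1 = results[:6]
    if len(s1) == 6 and all(x >= q + 5 for x in s1):
        return 1
    s2 = results[:12]
    if len(s2) == 12 and mean(s2) >= q and all(x >= q - 15 for x in s2):
        return 2
    s3 = results[:24]
    if (len(s3) == 24 and mean(s3) >= q
            and sum(1 for x in s3 if x < q - 15) <= 2
            and all(x >= q - 25 for x in s3)):
        return 3
    return None

stage1 = [95, 100, 91, 90, 98, 102]             # first row of the table above
print(usp_stage(stage1))                        # → 1 (passes at Stage 1)
print(round(mean(stage1)), round(rsd(stage1)))  # → 96 5
```

Feeding in the Stage 2 row first (minimum 65, so Stage 1 fails) followed by six passing units returns 2, mirroring the staged relaxation described above.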

Therefore, one may observe an RSD (CV) of 20% or more, and the results/product would still be of acceptable quality for regulatory purposes. This expected high variability (RSD/CV) in results is built into the tolerances.

In this particular case, the test would meet the criteria at Stage 1, so testing at the subsequent stages would not be required. However, this may be a random outcome, and one may obtain results like those shown for Stage 2 or 3 at the first stage as well. These tolerances apply to the last dissolution sampling time, where dissolution is highest and results are least variable. If dissolution results are to be reported for earlier times (as for extended-release products), the expected variability would be even higher.

Therefore, the above discussion clearly indicates that dissolution results, particularly those obtained using the paddle/basket apparatuses, are expected to be highly variable, often exceeding 20% RSD. Setting tighter tolerances, as is often desired or suggested, would therefore be neither scientifically valid nor achievable; one would simply face a higher number of failures without any apparent reason. This is the case with the testing of the USP PVT prednisone tablets, where tolerances are usually set tighter than one would observe in practice, hence the failures.

As explained in an earlier post, commonly used convolution/deconvolution techniques for IVIVC purposes link three functions together. The three functions are input (absorption/dissolution results), output (plasma drug concentrations), and weighting function (usually plasma drug concentrations following an intravenous dose).

Deconvolution will be the option one would use when plasma drug concentrations of the test products are available, and one would like to determine the in vivo dissolution results. These in vivo dissolution results are compared with the in vitro dissolution results.

Convolution will be the option one would use when in vitro dissolution results are available and one would like to determine the plasma drug concentrations of the test product. Furthermore, a convolution technique would be the only choice during the product-development stage, where a formulator would like to have an idea of the potential in vivo outcome. One may also use the convolution technique to compare release (dissolution) characteristics of products for generic development or product modifications. In this case, based on the dissolution results obtained from two or more products, one would derive the respective plasma drug concentration profiles, which can be compared using the standard and accepted parameters of Cmax and area under the plasma drug concentration curve (AUC).

Studies have shown that the convolution technique provides better accuracy of outcome (plasma drug concentrations) than the deconvolution technique. Moreover, computation-wise, convolution can be simplified so that the calculation may be performed using simple spreadsheet software rather than complex mathematical software. Therefore, it is suggested that one consider convolution as the first choice for developing an IVIVC.
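As a minimal illustration of the convolution approach, consider the sketch below. It assumes, purely for illustration, a one-compartment disposition with first-order elimination as the weighting function; the dose, volume of distribution, elimination rate, and dissolution profile are invented numbers, not from any product:

```python
# Minimal convolution sketch. Assumptions (not from the post): one-compartment
# disposition with first-order elimination as the weighting function, and
# purely illustrative dose, volume, and rate-constant values.
import math

def predict_plasma(diss_pct, dt, dose, vd, kel):
    """Discrete convolution of in vitro input with an IV-bolus weighting function.

    diss_pct: cumulative % dissolved at times 0, dt, 2*dt, ...
    dose: amount administered; vd: volume of distribution; kel: elimination rate.
    """
    n_pts = len(diss_pct)
    # incremental fraction of the dose entering the system in each interval
    increments = [(diss_pct[i] - (diss_pct[i - 1] if i else 0)) / 100
                  for i in range(n_pts)]
    # unit impulse response: concentration-time profile of a unit IV bolus
    uir = [math.exp(-kel * i * dt) / vd for i in range(n_pts)]
    # convolution sum: C(t_n) = dose * sum_k increments[k] * uir[n - k]
    return [dose * sum(increments[k] * uir[n - k] for k in range(n + 1))
            for n in range(n_pts)]

diss = [0, 30, 60, 85, 95, 100, 100, 100]   # cumulative % dissolved, hourly
conc = predict_plasma(diss, dt=1.0, dose=100, vd=50, kel=0.2)
cmax = max(conc)            # compare Cmax across candidate formulations
auc = sum(conc) * 1.0       # crude rectangle-rule AUC, adequate for ranking
```

This is exactly the kind of arithmetic a spreadsheet handles well: each predicted concentration is a sum of products of the dissolution increments and the shifted weighting function, which supports the point about simplicity above.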

In continuation of the earlier post (link), this post describes some of the steps commonly described in the literature as method development but that should not be considered method-development steps. These steps are usually variations in experimental conditions to achieve certain desired characteristics of a dissolution test, such as discriminatory ability, improved reproducibility, and/or bio-relevance. The variations are numerous, such as the choice of apparatus (paddle or basket), spindle rpm (50, 75, or 100), medium (water or buffer), pH (between 1 and 6), the choice of de-aeration or its technique, the use and choice of a sinker, etc.

Considering these steps or practices as method development is incorrect for two reasons:

1. These practices are commonly used during the product development stage. At this stage, an analyst is working with the variations in the product (formulation and manufacturing attributes), which requires a fixed dissolution method, not variations in the method itself, to evaluate the impact of product variations.
2. The products are developed for human use. The drug is expected to be released from the product in the GI tract environment, which remains constant from product to product. The in vitro dissolution testing conditions simulate this GI tract environment; therefore, these conditions should also remain constant.

It is, therefore, critical that a dissolution test method be decided and fixed, reflecting the GI tract environment, prior to working on the development of a product.

Often in the literature and in discussions, the terminology of a discriminatory test is used to indicate that a dissolution test can differentiate or discriminate between products based on formulation and/or manufacturing differences. However, the implied understanding of this terminology is that these differences may reflect the products’ in vivo differences, and thus their quality in humans. The underlying implied assumption of in vivo relevance is emphasized by suggestions that dissolution testing be conducted using in vivo relevant experimental conditions, e.g., an aqueous dissolution medium with a pH in the range of 1 to 7. Interestingly, it is also well documented in the literature that dissolution results reflecting formulation/manufacturing differences seldom correspond to in vivo behavior.

It is, therefore, safe to conclude that the terminology of “discriminatory test,” as commonly used, does not appear to be correct. To be accurate, the terminology should clearly identify a test as either an “in vitro discriminatory test” or an “in vivo discriminatory test,” the latter also known as a bio-relevant test.

An in vitro discriminatory test would be a test reflecting differences in the physical characteristics of the test products (formulation/manufacturing) with no direct or definite consequences in vivo. Such tests may be conducted using any experimental conditions deemed necessary with respect to apparatus (paddle/basket, Erlenmeyer flask with a magnetic stirrer, etc.) and medium (organic or aqueous solvents at any pH), etc. In this respect, the disintegration test may be considered a discriminating test if formulation/manufacturing differences are linked to disintegration time. It may be important to note that although in vitro discriminatory dissolution tests are often developed and used, their usefulness is limited, and they may be an unnecessary burden on the pharmaceutical industry and the regulatory agencies.

An in vivo discriminatory test, or bio-relevant test, on the other hand, would be a test that relates differences in the formulation/manufacturing of products to corresponding differences in vivo, such as bioavailability/bioequivalence characteristics. An essential requirement for an in vivo discriminatory test is that it be conducted using physiologically relevant experimental conditions. For example, the apparatus must provide a gentle but efficient stirring and mixing environment. The medium must be aqueous, with a pH in the range of 5-7, and maintained at 37°C. The medium must not be de-aerated but rather equilibrated with dissolved gases. In addition, as the testing environment is linked to the GI tract physiology, which does not change from product to product (e.g., IR to ER), the experimental conditions should not change from product to product.

Therefore, it is prudent to indicate the nature of the test described, whether it is an in vitro or an in vivo discriminatory type, so that proper evaluation and use of the test may be considered.

The quality control (QC) aspect of dissolution testing is linked to the release characteristics of the drug from its product, commonly a tablet or capsule. This release characteristic, measured in vitro, is supposed to reflect/simulate drug release in vivo. Therefore, the QC test reflects drug release in vivo in humans, thus establishing the quality of the product. Such tests are conducted using experimental conditions that simulate the human physiological conditions of the GI tract as closely as possible. However, recent studies (see the publication section) show that the experimental conditions used (e.g., the apparatuses) do not simulate an appropriate GI tract environment; they lack the needed mixing and stirring in the dissolution vessels. Therefore, current practices of dissolution testing may not reflect the quality of the products, and the test may not be considered a QC test.

On the other hand, considering this lack of a QC aspect, the dissolution test is commonly presented as a consistency check for batch-to-batch evaluations, yet it still appears to be implied as a QC test. This obviously creates significant confusion in properly describing and/or differentiating the test as a QC or a consistency-check test. As stated above, in its current form the dissolution test does not appear to be a QC test. Therefore, it should be considered a consistency check, without a link to in vivo release or the quality of the product.

A consistency-check test may be performed using experimental conditions that may or may not be physiologically relevant: for example, organic solvents vs. aqueous-based media, higher or lower temperatures vs. 37°C, or any other type of stirring device (magnetic bar, shaker, propeller with a high-speed motor, etc.) vs. the commonly used paddle and basket apparatuses. Further, one may report the results for whichever sampling time appears most stable and reproducible. It has never been the intent of the dissolution test to be conducted in this manner, particularly as a QC test.

Therefore, to conduct a dissolution test as a QC test, as was originally intended, the test must be conducted by creating or simulating a more appropriate physiological environment, i.e., improved stirring and mixing. This improved stirring and mixing aspect indeed appears to address the limitations of current practices and their artifacts. For further discussion on this topic, please see the recent literature under the publication section.

It is often stated that one of the purposes, or perhaps the only purpose, of drug dissolution testing is to monitor batch-to-batch consistency of manufacturing processes. I believe that this view is promoted to maintain the use of dissolution testing based on the paddle and basket apparatuses, and it appears to have grown out of frustration with the lack of success in relating dissolution testing to a product’s in vivo performance.

The question remains: can the testing be used for the consistency check? The answer appears to be no; the testing cannot be used for consistency checks, in particular using the paddle and basket apparatuses. The reason is that, to monitor the consistency of a product or process, the consistency (reproducibility) of the test itself must first be established and known. Unfortunately, the reproducibility of testing based on the paddle and basket apparatuses has never been established. Literature reports are available that provide a measure of the expected variability in dissolution testing: the reported variability, in terms of RSD, can be as high as 37% using these apparatuses, even with the apparatuses working as expected and meeting the USP specifications. Such high variability in a testing instrument is not usually acceptable, as the test would not be capable of supporting stringent quality control standards for pharmaceutical products, where a variability (RSD) of 10% or less is generally expected or desired.
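To put rough numbers on this argument, a back-of-the-envelope calculation (a sketch under stated assumptions, not a validated statistical analysis) shows how the smallest batch-mean shift detectable over test noise grows with the test’s own RSD:

```python
# Illustrative only: how test variability caps what a consistency check can
# detect. The n = 6 units, 95% confidence, and normal approximation are
# assumptions of this sketch; 37% and 10% RSD are the figures discussed above.
from statistics import NormalDist

def detectable_shift(rsd_pct, n, confidence=0.95):
    """Approximate smallest % shift in a batch mean that is distinguishable
    from test noise alone (known-sigma z-interval on the mean of n units)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return z * rsd_pct / n ** 0.5

print(round(detectable_shift(10, 6), 1))   # ≈ 8.0% with a 10%-RSD test
print(round(detectable_shift(37, 6), 1))   # ≈ 29.6% with a 37%-RSD test
```

Under these assumptions, a test with 37% RSD cannot distinguish batch differences of nearly 30% from its own noise, which is why the reproducibility of the test itself must be established first.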

Thus, dissolution testing based on the paddle and basket apparatuses may not be used for batch-to-batch consistency checks.

PVT (Performance Verification Test) is frequently described as necessary to assess the performance of the dissolution apparatuses (paddle and basket). Interestingly, the test quite often fails, i.e., the test results often fall outside the expected range without any known reason or cause.

Commonly described reasons/causes are: worn-out ball bearings; loose motor belts; misalignment of spindles or vessels; an inaccurate gap between the bottom of the spindle and the base of the vessel; lack of straightness of the spindle rods; wobbling; vibration in the instrument and/or its surroundings; high/low humidity affecting the tablets; inappropriate de-aeration of the medium; inaccuracy in the measured rpm; variations in vessel dimensions; mismatch of vessels from different suppliers; not using vessels from the instrument supplier; use of plastic vs. glass vessels; using scratched or unclean vessels; not withdrawing the sample from the appropriate position; not dropping the tablet or pouring the medium into the vessel appropriately; lack of analyst training; and any combination/permutation of these reasons.

Most interesting is the fact that no experimental evidence is available in support of these claims, i.e., there are no experimental data indicating that these aberrations produced results outside the expected range. To rationalize the continued use of the PVT, its supporters maintain the claim that failures indicate potential deficiencies or aberrations, but how?