In continuation of the earlier post (link), this post describes some of the steps that are commonly described in the literature as method development but that should not be considered method development steps. These steps are usually variations in experimental conditions intended to achieve certain desired characteristics of a dissolution test, such as discrimination, improved reproducibility, and/or bio-relevance. The variations are numerous: choice of apparatus (paddle or basket), spindle rpm (50, 75, or 100), medium (water or buffer), pH (between 1 and 6), choice of de-aeration or its technique, use and choice of a sinker, etc.

Considering these steps or practices as method development is incorrect for two reasons:

1. These practices are commonly used during the product development stage. At this stage, an analyst is working with variations in the product (formulation and manufacturing attributes), which requires a fixed dissolution method, not variations in the method itself, to evaluate the impact of the product variations.
2. The products are developed for human use. The drug is expected to be released from the product in the GI tract environment, which remains constant from product to product. The in vitro dissolution testing conditions simulate this GI tract environment. Therefore, these conditions should also remain constant.

It is, therefore, critical that a dissolution test method be decided and fixed, reflecting the GI tract environment, prior to working on the development of a product.

The role of any analytical method validation (chromatographic/spectrophotometric) is to demonstrate that the method is capable of measuring an analyte accurately (accuracy, which includes specificity) and reliably (precision, which includes repeatability and reproducibility). In addition, if the analyte is expected over a wide range, e.g., 0 to 100%, which is usually the case in dissolution testing, then one has to establish that concentrations and responses have a linear relationship (linearity) by measuring responses at different concentrations.

All of the above-mentioned practices (tests) boil down to determining responses (UV absorbances, or peak heights/areas for chromatography) against concentrations, measured in replicate (5 or 6 times) so that a standard deviation (variance) can be determined and confidence in the results established. In short, if one prepares solutions at 100, 50, and 25% of the nominal drug concentration (strength) and measures their responses, and the responses come out in the same ratios as the concentrations (repeatedly, 5 or 6 times), then the method has been validated.
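To make the linearity and precision check described above concrete, here is a minimal sketch in Python/NumPy. The absorbance values are hypothetical and serve only to illustrate the calculation; in practice they would be the measured responses from the 25, 50, and 100% solutions.

```python
# Minimal sketch of the linearity and precision check described above.
# The absorbance readings are hypothetical, for illustration only.
import numpy as np

conc = np.array([25.0, 50.0, 100.0])  # % of nominal concentration
# Six replicate responses (e.g., UV absorbances) per concentration (hypothetical)
responses = np.array([
    [0.126, 0.124, 0.125, 0.127, 0.125, 0.126],  # 25%
    [0.251, 0.249, 0.252, 0.250, 0.248, 0.251],  # 50%
    [0.502, 0.499, 0.503, 0.501, 0.498, 0.500],  # 100%
])

mean_resp = responses.mean(axis=1)

# Linearity: least-squares fit of mean response vs. concentration
slope, intercept = np.polyfit(conc, mean_resp, 1)
pred = slope * conc + intercept
r_squared = 1 - np.sum((mean_resp - pred) ** 2) / np.sum((mean_resp - mean_resp.mean()) ** 2)

# Precision: %RSD of the replicate responses at each concentration
rsd = responses.std(axis=1, ddof=1) / mean_resp * 100

print(f"slope = {slope:.5f}, intercept = {intercept:.5f}, R^2 = {r_squared:.4f}")
for c, r in zip(conc, rsd):
    print(f"{c:5.0f}%  RSD = {r:.2f}%")
```

If the fit is linear (high R², negligible intercept) and the %RSD at each level is small, the responses are indeed in the same ratios as the concentrations, which is the point made above.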

For drug dissolution testing, one has to demonstrate that if the drug is in solution, the analytical method can measure it accurately and reliably. Therefore, to validate such a method, one adds the drug ("spiking"), in solution form, to a dissolution testing apparatus, i.e., a vessel containing the required medium volume maintained at 37 ºC with the spindle rotating. Samples are withdrawn and processed exactly as if they were from a product (filtration, dilution/concentration, extraction, etc.), and responses are measured accordingly. If responses and concentrations are as one would expect (as explained above), then that dissolution method has been validated.
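As a simple illustration of this spiked-recovery check, the following short sketch compares the expected and measured concentrations. The spiked amount, medium volume, and measured value are hypothetical; the measured concentration would be back-calculated from the response of samples processed exactly as product samples are.

```python
# Minimal sketch of a spiked-recovery check for a dissolution method
# (hypothetical numbers, for illustration only).
spiked_amount_mg = 50.0    # drug added in solution form to the vessel
medium_volume_ml = 900.0   # medium volume maintained at 37 ºC

expected_conc = spiked_amount_mg / medium_volume_ml   # mg/mL if fully recovered
measured_conc = 0.0551                                # mg/mL, from the measured response

recovery_pct = measured_conc / expected_conc * 100
print(f"Expected {expected_conc:.4f} mg/mL, measured {measured_conc:.4f} mg/mL, "
      f"recovery = {recovery_pct:.1f}%")
```

Recovery close to 100%, with acceptable replicate variability, indicates that the dissolution method measures the drug in solution accurately and reliably.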

One should then be able to use this validated method to measure the dissolution of a drug from a product. Note that method validation steps are independent of drugs and products.

On the other hand, a method development exercise is drug- and product-dependent. In (analytical) method development, an analyst needs to select appropriate parameters such as wavelength, chromatographic column, dilution, extraction steps, filters, etc. Once such parameters are established, one then moves to the method validation exercise as a second step.

Assay and content uniformity (CU) are two tests generally required to establish a pharmaceutical product’s quality. In one case (assay), the testing is done by pooling the content of multiple units (tablets/capsules) together, while in the second case, CU, the units are evaluated individually. The evaluation procedure usually remains the same, or almost the same: grinding the product, dissolving/extracting the drug using an appropriate solvent, and then assaying it using an analytical technique, e.g., chromatographic/spectroscopic.

Drug dissolution testing is based on the same extraction step, and is perhaps simpler as it does not require grinding of the product. Another advantage of using dissolution tests for extraction purposes is that the choice of solvent is simple (water, with or without a solubilizer), which is physiologically relevant as well. Thus, one can evaluate CU and assay while conducting a dissolution test.

The question is, why has this simple approach not been adopted or come into common use? The reason is that currently used dissolution methods, based on the paddle and basket apparatuses, do not provide accurate and reproducible results. A more appropriate and reproducible apparatus can certainly provide such simplicity and efficiency. For further detail on this topic, please see The Open Drug Delivery Journal, 2008, 2, 33-37 (link).

Recently, a poster commented that “This whole new PVT Prednisone Lot thing is a Joke!”, expressing his/her frustration with the use of the tablets or the practice of PVT. Such comments, reflecting the erratic and unpredictable behavior of the PVT (previously called apparatus calibration), are not new and have been reported extensively in the literature.

A more serious question and concern, however, is this: if the performance of an apparatus cannot be verified appropriately, as recommended and required by the USP, how reliable can the testing of pharmaceutical products using such an apparatus be?

This seems to confirm literature reports that these apparatuses are problematic because of poor hydrodynamics within the dissolution vessels. Therefore, caution should be observed in assessing the quality of products using such apparatuses.

A common query is what dissolution test conditions should be suggested for a drug, XYZ, particularly if drug XYZ happens to have low solubility in water. Unfortunately, this is not a valid question. Dissolution tests are primarily conducted for products, not drugs. Furthermore, even a product does not dictate test conditions, because an analyst seeks to determine/establish a property (dissolution) of the product itself, which requires an unbiased method. Therefore, a dissolution method should not be product- or drug-specific.

Often it is suggested that if a product (tablet or capsule) floats or moves randomly during dissolution testing, it may produce variable results, and that one should therefore use a sinker to prevent this mobility and reduce the variability in dissolution results. However, it should be noted that the use of sinkers may invalidate dissolution testing/results and their relevance because:

  1. Dissolution tests are conducted to determine potential drug release characteristics in vivo, where mobility of the product is natural and expected. Therefore, use of a sinker makes drug dissolution testing physiologically irrelevant.
  2. When a sinker is used, it forces the product to settle at the bottom of the vessel, where interaction between product and dissolution medium is minimal, resulting in inefficient dissolution. In addition, the caging effect of a sinker may further reduce the dissolution rate. In fact, the use of a sinker may exaggerate the effect of poor hydrodynamics within a vessel. Thus, its use will provide inaccurate results even for the in vitro dissolution characteristics of the test product.
  3. It would not be possible to accurately compare the dissolution characteristics of two products of which one floats and the other does not. In such cases, how would one subtract the effect of the sinker from the product that requires one, so that a proper comparison of the dissolution characteristics of the two products can be made?

Therefore, one should be careful when using a sinker, as the tests/results may provide an inaccurate reflection of product characteristics.

Generics are drug products that are considered identical to an innovator’s product in dose, strength, route of administration, safety, efficacy, and intended use. However, generics differ from innovators’ products in formulation and manufacturing attributes. Because of these differences in formulation and manufacturing, generics are expected to demonstrate that drug release from their product is similar to that from the corresponding innovator’s product.

This similarity or equivalence in drug release between generic and innovator products is established by conducting bioavailability/bioequivalence studies. Such bioequivalence studies, in fact, establish that drug release (dissolution) in vivo from both products is the same. A critical point in understanding this principle is that generics strive to achieve similarity of in vivo drug release to that of innovators’ products while having vastly different formulation and manufacturing attributes; otherwise, generic and innovator products would have different bioavailabilities and would not be bioequivalent. Therefore, differences in formulation or manufacturing attributes, or finding these differences by in vitro dissolution tests, are of no real consequence. Thus, the practice of finding such differences, or of developing dissolution tests under the terminology of a “discriminatory test”, is an erroneous and misguided exercise.

The purpose of any analytical procedure, including dissolution testing, is to determine an unknown property of a test substance/matrix. However, with current practices of drug dissolution testing, this is not the situation. Here, one seeks experimental conditions to obtain desired or expected release characteristics of the products. These are then described, incorrectly, as procedures for method development, for obtaining discriminatory tests, and/or for bio-relevant tests, etc. Furthermore, it is important to note that the test product itself is used as its own reference in all these practices. Therefore, one can never know the actual or real dissolution characteristics of any product.
 
For appropriate dissolution testing, the dissolution method (apparatus with its associated experimental conditions) must remain constant, i.e., it should not change from product to product. For example, the method must remain the same if one would like to test or compare the characteristics of an IR vs. an ER product. For further discussion, the linked article may be useful.

Often it is suggested that one should use a de-gassed or de-aerated dissolution medium. Unfortunately, however, conducting a dissolution test under such a condition makes the test and the results obtained invalid. Why?

  1. Dissolution tests are conducted to evaluate drug dissolution characteristics in vivo, i.e., in the human GI tract. However, the GI tract contents are not de-aerated or de-gassed; therefore, in vitro tests conducted in a de-gassed medium will not reflect the in vivo environment, and the results will thus be invalid.
  2. The commonly suggested procedure for preparing a de-aerated medium is vacuum filtration of heated (41-45 ºC) dissolution medium. Dissolution tests are conducted at 37 ºC, so the medium temperature will change from ~45 to 37 ºC during testing. Therefore, de-aeration creates a non-physiological condition and introduces instability in the medium characteristics during testing.

For more appropriate drug dissolution testing, the medium should simply be equilibrated at 37 ºC, which provides a more appropriate physiological condition and stability in the testing environment.

It is generally accepted that the pH of the aqueous phase within the GI tract ranges from 1 to 7 (or 8). This range may further be divided into two sub-groups: one of pH 1 (perhaps up to 3) and a second of pH 5-7. The segment having pH 1 represents the stomach, and the other, in the range of 5-7, the (small) intestinal part.

It is commonly accepted that most of the drug (or food) ingested gets absorbed in the intestinal part. As absorption depends on dissolution, most of the drug should be available in solution form in this segment of the GI tract. Obviously, for dissolution purposes, this pH environment appears to be the relevant and critical one. Therefore, dissolution tests should be conducted in media having a pH in the range of 5-7.

Conducting dissolution tests in acidic (HCl) media (pH ~1) thus does not appear to be an appropriate choice.