In an earlier article (link), the mechanism of drug absorption was described considering the ionization characteristics of drugs and the differences between the surface areas of the stomach and intestine. The purpose of that article was to explain and highlight how ionization of both acidic and basic drugs provides undissociated drug molecules in both the stomach and the intestine. It is important to note that undissociated drug molecules in solution form are the species required for drug absorption.

The low (acidic) pH of the stomach favors high concentrations of undissociated acidic drugs, whereas the higher (neutral to basic) pH of the intestine favors higher concentrations of the dissociated (ionized) form. The opposite is true for basic drugs: the stomach's low (acidic) pH results in higher concentrations of ionized (protonated) basic drug, while the intestine's higher pH favors the undissociated form. Thus, the pH of the environment (stomach or intestine) explains only the ionization of drugs (acidic or basic), i.e., the comparative availability of undissociated drug molecules in solution form, but NOT the EXPECTED absorption of the drugs from these sites. The absorption of drugs can only be explained based on the available surface areas of the stomach and intestine. As the intestine provides a much larger and more permeable (efficient) surface than the stomach, it provides far superior and more efficient drug absorption, as explained in the previous article.

It is sometimes assumed that because acidic drugs are more readily available in the undissociated form in the stomach (low pH), they will preferentially be absorbed from the stomach, and that basic drugs, being more readily available in the undissociated form in the intestine (neutral or higher pH), will preferentially be absorbed from the intestine. This assumption is inaccurate, as such a simplistic approach ignores the differences in absorption capacity and surface area between the stomach and the intestine. Furthermore, such an assumption, i.e., absorption based on pH considerations alone, contradicts observations reported extensively in the literature. For example:

  1. “The stomach is primarily a processing organ and not an absorptive organ …”. Washington, N., Washington, C., and Wilson, C. Physiological pharmaceutics: Barriers to drug absorption, 2nd ed. CRC Press, 2001. (p. 82)
  2. “The experimental data available from classical work of Brodie (1964) and more recent studies (Prescott and Nimmo, 1981) are all consistent with the following conclusion: the nonionized form of a drug will be absorbed more rapidly than the ionized form at any particular site in the gastrointestinal tract. However, the rate of absorption of a drug from the intestine will be greater than that from the stomach even if the drug is predominantly ionized in the intestine and largely nonionized in the stomach.” Goodman, L., Gilman, A., and Gilman, A. Goodman and Gilman’s the Pharmacological Basis of Therapeutics. New York: Macmillan, 1985. p. 7.
  3. “Theoretically, weakly acidic drugs (eg, aspirin) are more readily absorbed from an acid medium (stomach) than are weakly basic drugs (eg, quinidine). However, whether a drug is acidic or basic, most absorption occurs in the small intestine because the surface area is larger and membranes are more permeable.” (Merck Manual)
  4. “The scintigraphy studies suggest that a sustained release ibuprofen formulation is absorbed throughout the entire GI tract and that the large bowel is the site that demonstrates the greatest proportion of ibuprofen absorption.” Davies N.M. “Clinical Pharmacokinetics of Ibuprofen: The First 30 Years.” Clinical Pharmacokinetics. 1998: 101–154.
  5. “Ketoprofen, like many other drugs, is mainly absorbed in the small intestine”. Shohin et al. “Biowaiver Monographs for Immediate-release Solid Oral Dosage Forms: Ketoprofen.” Journal of Pharmaceutical Sciences. 2012: 3593–3603.

An important implication of such a physiological process (i.e. drugs absorption from the intestine) is that a dissolution test should also be conducted simulating the intestinal environment, e.g., using a dissolution medium having a pH in the range of 5 to 7. Conducting the tests using acidic pH (simulating the stomach environment) may not be appropriate when the dissolution results are related to physiological outcomes such as plasma drug levels.

This article provides an overview of the mechanism of drug absorption from the GI tract based on the solubility/dissolution and dissociation/pH characteristics of a drug. It is argued that although the pH values of the environment (stomach and intestine) may play a role, it is the availability of the large surface area of the intestine which is predominantly responsible for the absorption of both acidic and basic drugs. Furthermore, in the GI tract, drugs exist in three forms, i.e., undissolved solid, dissolved (undissociated) molecules in solution, and ions in solution, which are in equilibrium with one another. However, it is only the drug in solution form that is relevant for absorption. The interactions between the solid drug, the drug and ions in solution, and the surface areas in providing efficient drug absorption are discussed. Considering the absorption mechanism, the role of in vitro drug dissolution testing is also highlighted. Please click here for the complete article.

The similarity factor, or F2, is a parameter commonly used to show the similarity or equivalence of two dissolution profiles. The F2 value is often calculated using the formula described here (link). The values of F2 range from 0 to 100; commonly, a value between 50 and 100 is considered to reflect the similarity of two dissolution profiles, which implies that the products will have similar in vivo drug release characteristics (please click here for the complete post).
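For readers who wish to compute F2 themselves, the commonly used formula is F2 = 50 x log10{100 / sqrt[1 + (1/n) * sum((R_t - T_t)^2)]}, where R_t and T_t are the percent dissolved of reference and test at matching time points. A minimal sketch (the profile values below are purely illustrative, not from any actual product):

```python
import math

def f2_similarity(reference, test):
    """Similarity factor F2 for two dissolution profiles (% dissolved
    at matching time points). F2 ranges from 0 to 100; 50-100 is
    commonly taken to indicate similar profiles."""
    if len(reference) != len(test):
        raise ValueError("profiles must share the same time points")
    n = len(reference)
    mean_sq_diff = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mean_sq_diff))

ref  = [20, 45, 70, 90, 98]   # % dissolved at successive time points
test = [18, 42, 68, 88, 97]
print(round(f2_similarity(ref, test), 1))
```

Note that identical profiles give exactly F2 = 100, and larger point-by-point differences pull the value down toward 0.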

I have received two or three queries on this topic in recent weeks. I am responding with a web post so that others may benefit from my response as well. The current query is as follows (the name has been deleted and data has been blacked out to keep it confidential):

Query:

I have read many of your excellent articles, which guide people who are new to drug delivery.
After reading a lot of literature on IVIVC, there is still a query in my mind, as asked below:

Plasma drug levels can be predicted from in vitro dissolution data in two ways:
1. Using convolution approach
2. Using IVIVC

I know how to predict plasma drug levels using the convolution approach but do not know how to calculate them from IVIVC.
In this context, I need your guidance.

Suppose I have established a Level A IVIVC for a tablet formulation with the following outcomes: Y = x.xxxX – x.xxx, R2 = 0.xxx.
Then I changed an excipient and performed dissolution testing on the new tablet. The new dissolution data is attached. Its outcomes are: Y = x.xxxX – x.xxx, R2 = 0.xxx.
Now, how can I predict the plasma drug level for these new tablets using the previously established IVIVC?

Thanking you in advance and regards

Response:

Please note that plasma drug levels can only be predicted/estimated using the convolution method; IVIVC cannot be used to predict plasma drug levels. I realize that there has been significant promotion to this effect, but unfortunately, it is not correct. Furthermore, IVIVC is of limited or no use during the product development stage, where prediction/estimation of plasma levels is required and the convolution method is the only option for obtaining the required results.

The following articles may be of help (link1, link2, link3).

The following comments are noted from one of my earlier posts, as reported in the FDA transcripts (link):

(1) “It is noted that literally 50 percent of the batches are thrown out every year because of dissolution failures, …”

(2) “There is no evidence that the products out there on the market are bad products. There is no evidence that the agency has done a bad job in serving as a surrogate for ensuring good quality products for the consumer. And, there is no evidence that industry is not focused on quality as an important attribute to manufacturing products.”

Putting these two together clearly shows that we are dealing with a problem of dissolution testing, not of the products or the industry. Please click here for the complete post.

People who are not familiar with the recent history of dissolution testing and QbD may find the following two links useful. These links are for the transcripts of two FDA meetings (held in 2005) on the topic. These are quite long documents, and worth reading every word of them. I have noted some of the quotes, which may be quite interesting (shocking!). I believe that the main, or one of the main, reasons for starting QbD was to determine and address the issues of drug dissolution testing in a systematic way based on valid statistical design and analysis (aka QbD). I wonder what happened to that objective and where we have been lost!

http://www.fda.gov/ohrms/dockets/ac/05/transcripts/2005-4137T1.pdf

http://www.fda.gov/ohrms/dockets/ac/05/transcripts/2005-4187T1.pdf

Some comments from the speakers:

Dr. Helen Winkle:

“There is no evidence that the products out there on the market are bad products. There is no evidence that the agency has done a bad job in serving as a surrogate for ensuring good quality products for the consumer. And, there is no evidence that industry is not focused on quality as an important attribute to manufacturing products.”

“I think this meeting brings us a step closer to understanding quality-by-design, especially as it relates to dissolution. I think it is really important. I think the whole topic today will really help open the door to us to move ahead in the area of dissolution, and I think we have learned a lot through our past meetings here.”

“The meeting topics that we have for this particular meeting are that we are going to talk about quality-by-design and control of drug dissolution.”

Dr. Moheb Nasr:

“ … that the rate of drug release from solid oral dosage forms is a critical quality attribute.”

“ … that you approve of our approach of implementing quality-by-design in setting dissolution specification.”

Dr. Ajaz Hussain:

 “It is noted that literally 50 percent of the batches are thrown out every year because of dissolution failures, …”

“I see our colleagues from Health Canada here who have been criticizing this [dissolution test] for a long time. Thank you for coming, sir.”

It appears that there is confusion that to develop IVIVC, one is required first to de-convolute a blood drug concentration-time (C-t) profile to obtain a so-called “input function,” and then this function should be used to predict C-t profiles. The confusion appears to come from the way the concept and practice of IVIVC have been presented in the literature.

As described in some earlier posts (link1, link2, link3, link4), and a publication (link), to develop or evaluate products, one does not require IVIVC. The IVIVC is a step to relate in vitro dissolution to in vivo dissolution/absorption. This is why one requires a de-convolution step to obtain in vivo dissolution from a C-t profile. However, it is very important to note that during the product development and evaluation stage one does not have C-t profiles, and the formulator is required to predict/estimate C-t profiles using experimentally observed in vitro dissolution results of test products. Therefore, at this stage, the formulator cannot use the de-convolution step.

On the other hand, as stated above, one needs to predict C-t profiles at the product development stage. For this purpose, the only option is to use the convolution method. Mathematically, to use the convolution method, one requires an “input function,” which in reality is the drug elimination rate equation following drug administration as an IV bolus. This input function, or elimination rate equation, can be obtained from the literature. For most drugs, the elimination rate equation can easily be derived from the elimination half-life. Thus, there is no reason to conduct a bio-study to obtain this input function or elimination rate equation, as the literature often suggests.

To conclude, for predicting C-t profiles, one only requires a one-step convolution method. The convolution method requires the use of an input function, which in reality is the elimination rate equation of the drug, obtainable from the literature. Combining the dissolution results with the elimination rate equation (input function), along with the volume of distribution and bioavailability values of the drug (also obtained from the literature), and using the suggested Excel spreadsheet software provides the required C-t profiles.
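A minimal numerical sketch of this one-step convolution (superposition) approach is given below. Each increment of drug dissolved between sampling times is treated as a small bolus that is then eliminated mono-exponentially. All parameter values (dose, volume of distribution, bioavailability, half-life, dissolution profile) are illustrative assumptions, not data from the post; a spreadsheet would implement the same arithmetic:

```python
import math

def predict_ct(diss_times_h, diss_fractions, dose_mg, vd_l, f_bio,
               ke_per_h, t_grid_h):
    """Predict a plasma concentration-time (C-t) profile from in vitro
    dissolution data by convolution (superposition).

    diss_fractions are cumulative fractions dissolved at diss_times_h
    (sorted, ascending). The input function is mono-exponential
    elimination with rate constant ke_per_h (= ln(2)/half-life)."""
    concentrations = []
    for t in t_grid_h:
        c = 0.0
        prev_fraction = 0.0
        for tau, fraction in zip(diss_times_h, diss_fractions):
            if tau > t:
                break
            # amount (mg) newly dissolved/absorbed at time tau
            increment = (fraction - prev_fraction) * dose_mg * f_bio
            prev_fraction = fraction
            c += (increment / vd_l) * math.exp(-ke_per_h * (t - tau))
        concentrations.append(c)
    return concentrations

# Illustrative inputs: 100 mg dose, Vd = 10 L, complete bioavailability,
# elimination half-life ~7 h, dissolution 0/50/100% at 0/1/2 h.
profile = predict_ct([0, 1, 2], [0.0, 0.5, 1.0],
                     dose_mg=100, vd_l=10, f_bio=1.0,
                     ke_per_h=math.log(2) / 7,
                     t_grid_h=[0, 1, 2, 4, 8, 12])
```

The resulting profile rises while dissolution is ongoing and then declines mono-exponentially, which is the behavior the one-step convolution method is intended to reproduce.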

QbD is often promoted as improving quality, enhancing efficiency, and reducing the cost of manufacturing pharmaceutical products such as tablets. This article provides a critical assessment of this view. It is argued that the promotion appears to be an attempt to market expertise in statistical analyses. This distorted view, in fact, appears to be causing confusion and hindering acceptance of the QbD approach. A discussion is provided highlighting the underlying issues in this regard. Please click here for the complete article.

It is commonly suggested that a dissolution medium should be de-aerated or de-gassed, which presumably helps reduce the variability in dissolution results. It is to be noted that it is not the presence of air or gas in the medium which causes the problem; it is the formation of bubbles from these gases. The question is why and how these bubbles are formed. Only if the source of bubble formation is established, and then removed, can this problem be addressed.

The source of the bubble formation may be explained as follows. Drug dissolution tests are conducted using media maintained at 37 °C. However, the media used are generally stored at room temperature, commonly around 20 °C. Therefore, when a medium is transferred to the dissolution vessels/baths and heated up to 37 °C, the solubility of the dissolved gases (which come from the air, hence the "de-aeration" terminology) decreases. This decrease in gas solubility at the higher temperature causes the dissolved gases to come out of the medium in the form of tiny bubbles, which tend to stick at random to the vessel and spindle surfaces, and perhaps to the product itself. However, once the medium has equilibrated at 37 °C, the formation of bubbles stops. Therefore, the answer to the question of why and how the bubbles are formed is: because of a transitory stage during the heating of the dissolution medium. A simple solution is to remove the temperature-gradient effect, i.e., avoid transferring low-temperature medium directly into the dissolution vessels. The analysts should either heat the medium to 37 °C outside the dissolution vessel or give the medium sufficient time to equilibrate in the dissolution vessel at 37 °C with moderate stirring.

However, unfortunately, the practice of de-aeration or de-gassing has been introduced to address this problem of bubble formation. It is a practice that does not appear to be well thought out; it has practical limitations and makes drug dissolution testing irrelevant and unpredictable. For example:

  1. The physiological environment does not present a de-aerated medium. Obviously, if the results are obtained using a de-aerated medium, they will not relate well to the physiological characteristics of the product.
  2. The commonly suggested de-aeration procedure, based on heating/vacuum steps, has no measurable endpoint. Therefore, the de-aeration step will always be variable and unpredictable, and will thus introduce variability into the testing.
  3. In addition, no matter how reproducible one tries to be with the de-aeration step, after de-aeration the medium will quickly start re-equilibrating with the atmospheric gases. Until this equilibrium is reached, the system will remain unstable and unreliable.
  4. Often, media containing detergents such as SLS are difficult, if not impossible, to de-aerate due to excessive foam formation. Therefore, one may not be able to work with a de-aerated medium containing SLS.

On the other hand, a medium equilibrated at 37 °C within the dissolution apparatus, or external to it (e.g., in a water bath), provides a simple, stable, reproducible, and physiologically relevant alternative.