A framework for personalized medicine: prediction of drug sensitivity in cancer by proteomic profiling
- Dong-Chul Kim^{1},
- Xiaoyu Wang^{2},
- Chin-Rang Yang^{2} and
- Jean X Gao^{1}
https://doi.org/10.1186/1477-5956-10-S1-S13
© Kim et al; licensee BioMed Central Ltd. 2012
- Published: 21 June 2012
Abstract
Background
The goal of personalized medicine is to provide patients with optimal drug screening and treatment based on individual genomic or proteomic profiles. Reverse-Phase Protein Array (RPPA) technology provides proteomic information from cancer patients that may be directly related to drug sensitivity. For cancer patients with different drug sensitivity, proteomic profiling reveals important pathophysiologic information that can be used to predict chemotherapy responses.
Results
The goal of this paper is to present a framework for personalized medicine using both RPPA and drug sensitivity (drug resistance or intolerance). In the proposed personalized medicine system, drug sensitivity is predicted by a proposed augmented naive Bayesian classifier (ANBC), which augments the network structure of the naive Bayesian classifier with edges between attributes. For discriminative structure learning of the ANBC, a local classification rate (LCR) is used to score candidate augmented edges, and a greedy search algorithm finds the discriminative structure that maximizes the classification rate (CR). Once the classifier is trained on RPPA and drug-sensitivity data from cancer patient samples, it can predict drug sensitivity from a new patient's RPPA profile.
Conclusion
In this paper we proposed a framework for personalized medicine in which a patient is profiled by RPPA and drug sensitivity is predicted by ANBC with LCR. Experimental results on lung cancer data demonstrate that RPPA profiles can be used to predict drug sensitivity with a Bayesian network classifier, and that the proposed ANBC for personalized cancer medicine achieves better average prediction accuracy than the naive Bayes classifier on small-sample data and outperforms other state-of-the-art classification methods in terms of classification accuracy.
Keywords
- Bayesian Network
- Random Forest
- Personalized Medicine
- Drug Sensitivity
- Class Variable
Background
In this paper, we present a framework for personalized cancer medicine with RPPA and drug sensitivity. The goal of personalized medicine is to provide optimal drug treatment based on an individual's drug sensitivity level, avoiding unnecessary cost and treatment. To achieve this, it is assumed that drug sensitivity can be predicted from quantitative patterns of protein expression that represent the molecular characteristics of individual patients [1, 2]. More precisely, since a drug's effect is closely related to cancer signal transduction pathways, proteomic profiling can provide important pathophysiologic cues about responses to chemotherapies [3, 4].
The prerequisite for the proposed personalized medicine is the proteomic profiling of patients who have different drug sensitivity levels. Profiling is implemented by measuring the expression levels of selected proteins that may be related to signaling pathways of the target cancer. To quantitatively measure the systemic responses of proteins in these pathways, RPPA is used in conjunction with quantum dot (Qdot) nanotechnology. RPPA, originally introduced in [5], is designed for quantitatively profiling protein expression levels across a large number of biological samples [6]. Whereas other protein arrays immobilize antibodies, RPPA immobilizes sample lysates in a series of dilutions, generating dilution curves for quantitative measurement while using only a small amount (nanoliters) of sample. After primary and secondary antibodies are probed, the signal is detected by Qdot assays. A Qdot is a nano-metal fluorophore with a brighter and more linear signal, and it avoids the photobleaching that often occurs with organic fluorophores [7, 8]. In addition, RPPA offers accurate pathophysiologic information about a signaling pathway, including posttranslational modifications (e.g. phosphorylation), that is not obtainable from gene microarrays or protein-protein interaction data.
The paper is organized as follows. In the methods section, the basic concepts of Bayesian networks and Bayesian network classifiers are reviewed, and we give a detailed account of the proposed ANBC. In the results section, we present experimental results comparing the proposed method to other classification algorithms. Finally, we conclude with a summary and future work in the conclusion section.
Methods
Bayesian networks
Bayesian network classifier
A Bayesian Network Classifier (BNC) is a probabilistic classifier based on Bayes' theorem. A set of random variables is defined as $X = \{X_1, \dots, X_{n-1}, C\}$, where the $n$th variable is the class variable. A Bayesian network classifier predicts the label $c$ that maximizes the posterior probability $P_B(C = c | X_1 = x_1, \dots, X_{n-1} = x_{n-1})$ given a Bayesian network structure (Figure 2) and an instance $\{x_1, \dots, x_{n-1}\}$ of the attributes.
Naive Bayes classifier
The naive Bayes classifier (NBC) computes the posterior as $p\left(C|{X}_{1},\dots ,{X}_{n-1}\right)=\frac{1}{Z}p\left(C\right){\prod}_{i=1}^{n-1}p\left({X}_{i}|C\right)$, where $p\left(C\right){\prod}_{i=1}^{n-1}p\left({X}_{i}|C\right)$ (prior × likelihood) equals the joint probability in (2), since each variable $X_i$ is assumed conditionally independent of every other variable $X_j$ ($i \ne j$) given the class variable $C$ as the parent of $X_i$ (Figure 2(A)); the constant evidence $Z = p(X_1, \dots, X_{n-1})$ can be canceled, since it is independent of $C$ when maximizing the posterior. Hence, the classifier is defined as $argma{x}_{c\in C}p\left(C=c\right){\prod}_{i=1}^{n-1}p\left({X}_{i}={x}_{i}|C=c\right)$ given a test instance $\{x_1, \dots, x_{n-1}\}$. In our application, the discrete class variable $C = \{High, Low\}$ indicates a drug sensitivity level, and an attribute $X_i$ refers to a discretized protein expression level from RPPA. Thus, in NBC, each protein is assumed conditionally independent of every other protein and dependent only on the drug sensitivity. However, this assumption is unrealistic, since the selected RPPA proteins may have biological interactions in the signaling pathway that affect the efficacy of the drug.
The maximum-likelihood parameters are estimated as ${\widehat{\theta}}_{ijk}=\frac{{N}_{ijk}}{{N}_{ij}}$, where $N_{ijk}$ denotes the number of training instances in which $X_i = x_{ik}$ and ${\Pi}_{{X}_{i}}={\pi}_{ij}$, and ${N}_{ij}={\sum}_{k=1}^{{r}_{i}}{N}_{ijk}$. After estimation, the parameters $\Theta ={\left\{{\theta}_{ijk}\right\}}_{i\in \left\{1,\dots ,n\right\},j\in \left\{1,\dots ,{q}_{i}\right\},k\in \left\{1,\dots ,{r}_{i}\right\}}$ are used to compute the likelihood $p(X_i|C)$ of the classifier given a test instance and a class label. In practice, the logarithm of the likelihood, $\sum \log p(X_i|C)$, is used instead of the product $\prod p(X_i|C)$ to avoid numerical underflow in the implementation.
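As a concrete illustration, the counting-based parameter estimation and log-space prediction above can be sketched as a minimal discrete naive Bayes classifier. This is not the authors' implementation; the function names are ours, and the small pseudo-count `eps` stands in for the smoothing described in the experimental setup.

```python
from collections import defaultdict
from math import log

def train_nbc(samples, labels):
    """Collect the counts N(C = c) and N(X_i = x, C = c) from training data."""
    prior = defaultdict(int)    # N(C = c)
    counts = defaultdict(int)   # N(X_i = x, C = c)
    for x, c in zip(samples, labels):
        prior[c] += 1
        for i, xi in enumerate(x):
            counts[(i, xi, c)] += 1
    return prior, counts, len(labels)

def predict_nbc(x, prior, counts, n, eps=0.5):
    """argmax_c [log p(c) + sum_i log p(x_i | c)]; logs avoid numerical underflow."""
    best, best_score = None, float("-inf")
    for c, nc in prior.items():
        score = log(nc / n)
        for i, xi in enumerate(x):
            # small pseudo-count so an unseen (x_i, c) pair does not zero out class c
            score += log((counts[(i, xi, c)] + eps) / (nc + 1.0))
        if score > best_score:
            best, best_score = c, score
    return best
```

For instance, training on a few discretized samples labeled "High" or "Low" and calling `predict_nbc` on a held-out instance returns the label with the highest log-posterior.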
Augmented naive Bayes classifier
where ${\Pi}_{{X}_{i}}^{\backslash C}$ denotes the parent set of variable $X_i$ excluding the class variable $C$.
Discriminative structure learning
where $ANBC_{ij}$ is an ANBC in which the single directed edge from $X_j$ to $X_i$ ($E_{ij}$) is added to the structure of NBC. More precisely, $ANB{C}_{ij}\left({x}_{1}^{m},\dots ,{x}_{n-1}^{m}\right)$ is defined as $argma{x}_{c\in C}\ p(X_i = x_i | X_j = x_j, C = c){\prod}_{h, h \ne i} p(X_h = x_h | C = c)$. Since the second term is the CR of NBC, it is constant with respect to $i$ and $j$. $LCR_{ij} > 0$ indicates that augmenting the edge $E_{ij}$ in the structure of NBC could increase the classification rate of the ANBC. For ANBC, the number of possible augmented edges is $(n - 1)(n - 2)$. After computing LCR for all possible augmented edges, edges with negative LCR are excluded from the structure search space. To further reduce the number of candidate edges, we select the edge $E_{ij}$ only if $LCR_{ij}$ equals the maximum $LCR_{ih}$ over $h \in X^{\backslash i}$: because a variable $X_i$ can have only a single parent $X_j$ besides the class variable, only the variable that maximizes $LC{R}_{{X}_{i}{\Pi}_{{X}_{i}}^{\backslash C}}$ is selected as the parent of $X_i$. In the search step, the structure is iteratively updated by randomly adding or deleting an augmented edge while maintaining acyclicity and the limit on parents per attribute (each attribute can have at most two parents, including the class variable).
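The edge-scoring and pruning steps above can be sketched as follows. The helpers `cr_nbc` and `cr_anbc_with_edge` are hypothetical names standing for routines that return the classification rate of plain NBC and of NBC with one augmented edge $j \to i$; everything else mirrors the LCR definition and the keep-the-best-parent rule.

```python
def score_augmented_edges(data, labels, n_attrs, cr_nbc, cr_anbc_with_edge):
    """Score every candidate augmented edge E_ij (j -> i, i != j) by its LCR,
    drop edges with non-positive LCR, and keep only the best parent per child."""
    base_cr = cr_nbc(data, labels)  # CR of the plain NBC, constant over (i, j)
    lcr = {}
    for i in range(n_attrs):
        for j in range(n_attrs):
            if i != j:
                lcr[(i, j)] = cr_anbc_with_edge(data, labels, i, j) - base_cr
    # Prune: only positive-LCR edges survive, and each child i keeps the
    # single parent j that maximizes LCR_ij (one extra parent besides C).
    best = {}
    for (i, j), score in lcr.items():
        if score > 0 and (i not in best or score > best[i][1]):
            best[i] = (j, score)
    return {(i, j): score for i, (j, score) in best.items()}
```

The surviving edges form the reduced search space over which the random add/delete hill-climbing then operates.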
Experiments
Lung cancer data
55 antibodies used in RPPA
p Src(Y527) | p53 | ERK | p ERK | GSK3 | p GSK3 | CyclinB1 | p Rb |
---|---|---|---|---|---|---|---|
p IRS1(Y1179) | p38 | p p38 | PTEN | NQO1 | Stat3 | p NF-kBp65 | p Stat3 |
p IRS1(Y896) | p16 | p JNK | p PTEN | CDK4 | p AKT | CyclinD3 | EGFR |
p IGF1R(Y1158-1162) | Src | RAF1 | p RAF1 | Bcl2 | JNK | b-Catenin | b-Actin |
p IGF1R(Y1162-1163) | p27 | p p53 | Hsp27 | IKBa | pIKBa | Vimentin | p MDM2 |
p EGFR(Y1173) | p21 | sClu | IGF1R | MDM2 | IRS1 | pSrc(Y416) | gH2AX |
E-Cadherin | Rb | AKT | p Bcl2 | mTOR | p mTOR | NF-kBp65 |
Experimental setup
We conducted comparative evaluations with the following classification algorithms: Support Vector Machine with three different kernels, linear (SVML), polynomial (SVMP), and radial basis function (SVMR); Logistic Regression (LR); Random Forest (RF); Tree-Augmented Naive Bayes (TAN) [10]; NBC; and the proposed ANBC. To evaluate the performance of the different methods, we measure the average prediction accuracy using leave-one-out estimation. Since the structure is updated randomly during search, leave-one-out is performed 5 times for ANBC. The original continuous RPPA values are used for SVM, LR, and RF. For parameter estimation, only maximum-likelihood parameters are used for NBC, TAN, and ANBC, since we compare structure learning methods rather than discriminative parameter learning methods. To avoid a zero conditional probability in the log-likelihood when computing the joint probability, we set ${\widehat{\theta}}_{ijk}=\frac{{N}_{ijk}+{N}_{ijk}^{\prime}}{{N}_{ij}+{N}_{ij}^{\prime}}$ with ${N}_{ijk}^{\prime}=0.5$ and ${N}_{ij}^{\prime}=1$ whenever $N_{ijk} = 0$ or $N_{ij} = 0$. Accuracy is the ratio of correct predictions to the total number of samples in leave-one-out estimation. In addition, for a fair comparison, feature selection is applied to all classification methods, because some methods may not perform well on high-dimensional data and not all 55 proteins may be directly related to drug sensitivity. For SVM, LR, and RF, attributes are selected using Information Gain [14] and Ranker as implemented in Weka [15]. To select proteins (features) for NBC, TAN, and ANBC, we use the mutual information between each attribute and the class variable. The number of selected features is predefined as 10, 20, and 30.
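The zero-count smoothing rule above can be written as a small helper (a sketch; the function name and keyword arguments are ours, with defaults matching $N'_{ijk} = 0.5$ and $N'_{ij} = 1$):

```python
def smoothed_theta(n_ijk, n_ij, prior_count=0.5, prior_total=1.0):
    """theta_ijk = (N_ijk + N'_ijk) / (N_ij + N'_ij) when a raw count is zero,
    so that log-likelihood terms stay finite; plain MLE N_ijk / N_ij otherwise."""
    if n_ijk == 0 or n_ij == 0:
        return (n_ijk + prior_count) / (n_ij + prior_total)
    return n_ijk / n_ij
```

For example, an unseen configuration with $N_{ijk} = 0$ and $N_{ij} = 10$ yields $0.5 / 11$ rather than an undefined $\log 0$.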
Experimental results
Accuracy of sensitivity prediction for 24 drugs with 20 selected features
Drug Name | SVML | SVMP | SVMR | LR | RF | NBC | TAN | ANBC |
---|---|---|---|---|---|---|---|---|
8-aminoadenosine | 68.89 | 68.89 | 68.89 | 71.11 | 55.56 | 91.11 | 93.33 | 93.33 |
8-Cl-adenosine | 51.11 | 55.56 | 55.56 | 55.56 | 64.44 | 93.33 | 86.67 | 92.89 |
Carboplatin | 71.11 | 73.33 | 73.33 | 62.22 | 71.11 | 86.67 | 80.00 | 88.00 |
Chloroquine | 70.45 | 65.91 | 65.91 | 54.55 | 70.45 | 97.73 | 88.64 | 95.91 |
Cisplatin | 79.07 | 65.11 | 65.11 | 58.14 | 81.40 | 90.70 | 93.02 | 91.63 |
Cyclopamine | 28.89 | 40.00 | 17.78 | 51.11 | 42.22 | 84.44 | 80.00 | 86.67 |
Diazonamide | 80.49 | 80.49 | 80.49 | 60.98 | 70.74 | 92.68 | 90.24 | 90.73 |
Docetaxel | 90.24 | 90.24 | 90.24 | 78.05 | 90.24 | 100 | 100 | 100 |
Doxorubicin | 41.30 | 56.52 | 56.52 | 43.48 | 58.70 | 89.13 | 76.09 | 88.70 |
Erlotinib | 86.05 | 86.05 | 86.05 | 88.37 | 90.70 | 88.37 | 97.67 | 88.37 |
Etoposide | 55.81 | 62.79 | 62.79 | 53.49 | 65.12 | 95.35 | 90.70 | 94.88 |
Gefitinib | 90.00 | 90.00 | 90.00 | 90.00 | 90.00 | 95.00 | 65.00 | 95.00 |
Gemcitabine | 81.81 | 81.81 | 81.81 | 61.36 | 77.27 | 100 | 100 | 100 |
Gemcitabine/Cisplatin | 73.81 | 71.43 | 71.43 | 61.90 | 66.67 | 95.24 | 65.24 | 91.43 |
Irinotecan | 47.50 | 55.00 | 55.00 | 50.00 | 40.00 | 92.50 | 90.00 | 92.50 |
Orexin | 83.33 | 83.33 | 83.33 | 77.78 | 83.33 | 100 | 100 | 100 |
Paclitaxel | 85.11 | 85.11 | 85.11 | 61.70 | 85.11 | 100 | 93.62 | 100 |
Paclitaxel/Carboplatin | 90.20 | 90.20 | 90.20 | 82.35 | 90.20 | 98.04 | 98.04 | 98.04 |
Peloruside A | 80.95 | 80.95 | 80.95 | 66.67 | 80.95 | 92.86 | 92.86 | 95.24 |
Pemetrexed | 59.09 | 52.27 | 52.27 | 68.18 | 65.91 | 93.18 | 81.82 | 93.18 |
Pemetrexed/Cisplatin | 61.90 | 61.90 | 61.90 | 57.14 | 47.62 | 83.33 | 85.71 | 90.00 |
Smac Mimetic | 84.62 | 84.62 | 84.62 | 66.67 | 82.05 | 97.44 | 92.31 | 97.44 |
Sorafenib | 87.23 | 87.23 | 87.23 | 78.72 | 85.11 | 97.87 | 91.49 | 97.87 |
Vinorelbine | 79.07 | 79.07 | 79.07 | 51.16 | 76.74 | 90.70 | 93.02 | 90.70 |
Average | 72.00 | 72.83 | 71.90 | 64.61 | 72.15 | 93.57 | 91.06 | 93.85 |
Conclusion
In this paper, we introduced a personalized medicine framework based on RPPA and drug sensitivity. The goal of personalized medicine is to provide the optimal therapy to patients who have different biological profiles with respect to the target cancer. To this end, a Bayesian network classifier is applied to predict drug sensitivity from a patient's RPPA profile. We propose a new score function, LCR, for learning the discriminative structure of the Bayesian network classifier. All augmented edges are scored by LCR, which is based on the difference in CR before and after a single edge is augmented; in other words, the score represents how likely an edge augmented in NBC is to increase the classification rate of the ANBC. Based on the scored edges, the discriminative structure is discovered by hill-climbing search. Since NBC is known to outperform discriminative learning algorithms on small-sample data (in our data the average number of samples is 43), we focus on augmenting only a minimal number of edges to improve performance while largely retaining the advantage of the NBC structure, whereas TAN augments many more edges. In the experiments, ANBC with the proposed score function is compared to well-known classification algorithms such as support vector machines, logistic regression, and random forests, as well as to the Bayesian network classifiers TAN and NBC with generative parameters. The results show that ANBC outperforms the other classification algorithms and achieves slightly better accuracy than NBC on small-sample data, supporting the claim that dependencies among proteins can be used to improve sensitivity prediction for personalized medicine. To overcome the limitation of sample size, we plan to investigate discriminative parameter learning and effective feature selection for Bayesian network classifiers as future work.
Declarations
Acknowledgements
This article has been published as part of Proteome Science Volume 10 Supplement 1, 2012: Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2011: Proteome Science. The full contents of the supplement are available online at http://www.proteomesci.com/supplements/10/S1.
References
- Wistuba II, Gelovani JG, Jacoby JJ, Davis SE, Herbst RS: Methodological and practical challenges for personalized cancer therapies. Nat Rev Clin Oncol 2011, 8(3):135–141.
- Mueller C, Liotta L, Espina V: Reverse phase protein microarrays advance to use in clinical trials. Molecular Oncology 2010, 4(6):461–481. doi:10.1016/j.molonc.2010.09.003
- Kornblau SM, Tibes R, Qiu YH, Chen W, Kantarjian HM, Andreeff M, Coombes KR, Mills GB: Functional proteomic profiling of AML predicts response and survival. Blood 2009, 113:154–164. doi:10.1182/blood-2007-10-119438
- Cain JW, Hauptschein RS, Stewart JK, Bagci T, Sahagian GG, Jay DG: Identification of CD44 as a surface biomarker for drug resistance by surface proteome signature technology. Molecular Cancer Research 2011, 9(5):637–647. doi:10.1158/1541-7786.MCR-09-0237
- Liotta LA, Espina V, Mehta AI, Calvert V, Rosenblatt K, Geho D, Munson PJ, Young L, Wulfkuhle J, Petricoin EF III: Protein microarrays: meeting analytical challenges for clinical applications. Cancer Cell 2003, 3(4):317–325. doi:10.1016/S1535-6108(03)00086-2
- Spurrier B, Ramalingam S, Nishizuka S: Reverse-phase protein lysate microarrays for cell signaling analysis. Nature Protocols 2008, 3(11):1796–1808. doi:10.1038/nprot.2008.179
- Kim YB, Yang CR, Gao J: Functional proteomic pattern identification under low dose ionizing radiation. Artificial Intelligence in Medicine 2010, 49(3):177–185. doi:10.1016/j.artmed.2010.04.001
- Wang X, Dong Y, Jiwani A, Zou Y, Pastor J, Kuro-O M, Habib A, Ruan M, Boothman D, Yang C: Improved protein arrays for quantitative systems analysis of the dynamics of signaling pathway interactions. Proteome Science 2011, 9:53. doi:10.1186/1477-5956-9-53
- Duda RO, Hart PE: Pattern Classification and Scene Analysis. John Wiley & Sons; 1973.
- Friedman N, Geiger D, Goldszmidt M: Bayesian network classifiers. Machine Learning 1997, 29:131–163.
- Pernkopf F, Bilmes J: Discriminative versus generative parameter and structure learning of Bayesian network classifiers. International Conference on Machine Learning 2005, 657–664.
- Pernkopf F: Bayesian network classifiers versus selective k-NN classifier. Pattern Recognition 2005, 38:1–10. doi:10.1016/j.patcog.2004.05.012
- Fayyad UM, Irani KB: Multi-interval discretization of continuous-valued attributes for classification learning. Proceedings of the 13th International Joint Conference on Artificial Intelligence 1993, 1022–1027.
- Cover TM, Thomas JA: Elements of Information Theory. Wiley-Interscience; 1991.
- Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH: The WEKA Data Mining Software: An Update. SIGKDD Explorations 2009, 11(1).
- Ng A, Jordan M: On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems 2002, 14.
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.