Predicting beta-turns in proteins using support vector machines with fractional polynomials
© Elbashir et al.; licensee BioMed Central Ltd. 2013
Published: 7 November 2013
β-turns are a type of protein secondary structure that plays an essential role in molecular recognition, protein folding, and stability. They are the most common type of non-repetitive structure: roughly 25% of the amino acids in protein structures are situated on them. Predicting β-turns is therefore considered one of the crucial problems in bioinformatics and molecular biology, and can provide valuable insights and inputs for fold recognition and drug design.
We propose an approach that combines support vector machines (SVMs) and logistic regression (LR) in a hybrid prediction method, which we call H-SVM-LR, to predict β-turns in proteins. Fractional polynomials are used for LR modeling. We utilize position specific scoring matrices (PSSMs) and predicted secondary structure (PSS) as features. Our simulation studies show that H-SVM-LR achieves a Qtotal of 82.87%, 82.84%, and 82.32% on the BT426, BT547, and BT823 datasets respectively. These values are the highest among β-turn prediction methods based on PSSMs and secondary structure information. H-SVM-LR also achieves favorable performance in predicting β-turns as measured by the Matthews correlation coefficient (MCC) on these datasets. Furthermore, H-SVM-LR shows good performance when shape strings are considered as additional features.
In this paper, we present a comprehensive approach for β-turns prediction. Experiments show that our proposed approach achieves better performance compared to other competing prediction methods.
The secondary structure of proteins consists of basic elements: α-helices, β-sheets, random coils, and turns. α-helices and β-sheets are considered regular secondary structure elements, while the residues that form turn structures do not constitute regular secondary structure elements. In a turn, the Cα-atoms of two residues are separated by one to five peptide bonds and the distance between these Cα-atoms is less than 7 Å. The number of peptide bonds separating the two end residues determines the specific turn type: in α-turns and β-turns, the two end residues are separated by four and three peptide bonds respectively, while in γ-turns, δ-turns, and π-turns they are separated by two, one, and five peptide bonds respectively. β-turns are the most common type of turn structure in proteins, representing approximately 25% of the residues in protein sequences. Because β-turns can reverse the direction of a protein chain, they are considered orienting structures. They also have significant effects on protein folding, because they bring together and allow interactions between the regular secondary structure elements. β-turns are not only important in protein folding but are also implicated in the biological activities of peptides, as the bioactive structures that interact with other molecules such as receptors, enzymes, and antibodies. They are also important in the design of various peptidomimetics for many diseases. Therefore, the prediction of β-turns is one of the important problems in molecular biology, and can provide valuable insights and inputs for fold recognition and drug design.
There are different methods designed for β-turn prediction, which can be divided into statistical methods and machine learning methods. The statistical methods used in β-turn prediction include the Chou-Fasman method, Thornton's algorithm, GORBTURN, the 1-4 & 2-3 correlation model, the sequence couple model, and the COUDES method. All of these statistical methods use the sequence as input except for COUDES, which is based on propensities and multiple alignments; COUDES also utilizes secondary structure predicted by PSIPRED, SSPRO2, and PROF. The machine learning methods include BTPRED, BetaTpred2, MOLEBRNN, and NetTurnP, which are based on artificial neural networks (ANNs); Kim's method, based on k-nearest neighbors (KNN); and support vector machine (SVM) based methods, which have recently become popular in the field of β-turn prediction. These SVM-based methods include BTSVM, Zhang and colleagues' method, Zheng and Kurgan's method, Hu and Li's method, the method of Liu et al., DEBT, and the method of Tang et al. In BTPRED, secondary structure predictions are utilized with a two-layered network architecture. BetaTpred2 enhances the performance of β-turn prediction by using secondary structure prediction and evolutionary information in the form of position specific scoring matrices (PSSMs) as input to the neural networks. MOLEBRNN uses PSSMs as input to a bidirectional Elman-type recurrent neural network. NetTurnP uses evolutionary information and predicted protein sequence features as input to two ANN layers, where the first layer is trained to predict whether or not an amino acid is located in a β-turn. Kim's method encodes the protein sequence using a window of up to 9 residues as input to a KNN-based method, combined with a filter that uses secondary structure predicted with PSIPRED for the central residue.
In BTSVM, position specific frequency matrices (PSFMs) and PSSMs, both calculated with PSI-BLAST, are used to encode the input for an SVM classifier. Zhang and colleagues' method is another SVM method, using PSSMs over a 7-residue window and the secondary structure of the central residue predicted by PSIPRED as input. In Zheng and Kurgan's method, an SVM is used to predict β-turns from window-based information extracted from four predicted secondary structures (PSSs), with a selected set of PSSM features as input to the SVM. The SVM-based method developed by Hu and Li combines the increment of diversity, a position conservation scoring function, and secondary structure predicted with PSIPRED to compute the inputs for the prediction of β-turns and γ-turns. Liu et al. combine an SVM with PSS information obtained using E-SSpred, a secondary protein structure prediction method. DEBT predicts β-turns and their types using information from multiple sequence alignments, PSSs, and predicted dihedral angles. Tang et al. considered another type of one-dimensional string of symbols, representing the clustered regions of ϕ, ψ torsion pairs and called shape strings, as new features. In our previous work, we utilized the idea of under-sampling to create several balanced datasets; these balanced sets were used to train several SVM classifiers independently, and the SVMs were aggregated using a linear logistic regression model.
In this paper, we propose a new approach, H-SVM-LR (a hybrid approach of SVMs and logistic regression (LR)), for predicting β-turns. Our approach incorporates the idea of clustering by partitioning the non-β-turn class into three subsets using the k-means clustering algorithm. Each subset is merged with the positive class (β-turn) to form a sub-training set, and these sub-training sets are used to train localized SVM classifiers independently. An LR model, built using fractional polynomials, is used to aggregate the localized SVMs to make a collective decision. The merit of using LR to aggregate the localized SVMs is that it enables us to take advantage of statistical modeling theory to find the optimal weight for each local SVM. LR also has the advantage of being widely studied, and in recent years many algorithms have been designed to improve its performance, including the iteratively re-weighted least squares (IRLS) algorithm, which is a special case of Fisher's scoring method [26, 27].
Support vector machine (SVM)
where w is the normal vector to the hyper-plane, b is the offset from the origin, and C is the error penalty parameter. A kernel function, which maps the input space into a higher-dimensional space, can be applied to create an SVM classifier for non-linear problems. Kernel functions commonly used with SVMs include the polynomial kernel, the radial basis function kernel (also known as the Gaussian kernel), and the sigmoid kernel.
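As an illustration, a soft-margin SVM with the RBF (Gaussian) kernel can be sketched with scikit-learn; the data, dimensions, and parameter values (C, gamma) below are purely illustrative, not those used in H-SVM-LR.

```python
# Minimal sketch of a soft-margin SVM with an RBF kernel (scikit-learn).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # 200 residues, 10 toy features
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)   # toy binary labels

# C is the error penalty; gamma controls the width of the Gaussian kernel.
clf = SVC(C=1.0, kernel="rbf", gamma="scale", probability=True)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```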
Logistic regression (LR)
Minimizing the deviance given in the above equation is equivalent to maximizing the log-likelihood.
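Written out in the standard notation (the equation itself is not reproduced above), for binary responses $y_i \in \{0,1\}$ with fitted probabilities $\hat{\pi}_i$, the deviance is

```latex
D = -2\sum_{i=1}^{n}\Big[\, y_i \log \hat{\pi}_i + (1-y_i)\log\big(1-\hat{\pi}_i\big) \Big] = -2\,\ell(\hat{\boldsymbol{\beta}}),
```

so the deviance is $-2$ times the log-likelihood $\ell$, and minimizing one is equivalent to maximizing the other.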
The dataset BT426, which contains 426 non-homologous protein chains, is used to evaluate our H-SVM-LR prediction method. This dataset was developed by Guruprasad and Rajkumar. We obtained it from the Raghava Group's website http://www.imtech.res.in/raghava/bteval/dataset.html. The structure of each protein chain in BT426 was determined by X-ray crystallography at a resolution of 2.0 Å or better, and each chain contains at least one β-turn. 24.9% of all amino acids in BT426 have been assigned a β-turn structure. Several recent β-turn prediction methods use BT426 as a gold-standard set of amino acid sequences to evaluate their performance; we therefore used it to evaluate our method and to make direct comparisons with the other prediction methods. Besides BT426, we used a dataset of 547 protein sequences (BT547) and a dataset of 823 protein sequences (BT823) to evaluate our approach. These datasets were constructed for training and testing COUDES.
where x is the PSSM's element that stands for the likelihood of the particular residue substitution at that position.
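The scaling transform itself is not reproduced in the text above; a logistic squashing of each raw PSSM element into (0, 1) is the convention used by several of the cited methods, so the sketch below assumes that transform.

```python
# Hedged sketch: scale raw PSSM log-odds scores into (0, 1) with the logistic
# function f(x) = 1 / (1 + e^(-x)). The exact transform used in the paper is
# not shown in the text, so this particular choice is an assumption.
import math

def scale_pssm(value):
    """Map a raw PSSM log-odds score to the (0, 1) interval."""
    return 1.0 / (1.0 + math.exp(-value))

row = [-3, 0, 2, 7]  # one illustrative PSSM row fragment
print([round(scale_pssm(v), 3) for v in row])  # → [0.047, 0.5, 0.881, 0.999]
```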
Predicted secondary structure (PSS)
PROTEUS is used to predict the secondary structure features. The motivation to use PROTEUS comes from the work of Tang et al., which concluded that predictions using PROTEUS and PSSMs were better than those using PHD, JPRED, PROTEUS, and PSSMs together. The secondary structure features are predicted as three structure states: helix (H), strand (E), and coil (C). These three states are encoded as 1 0 0 for helix, 0 1 0 for strand, and 0 0 1 for coil.
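The three-state one-hot encoding just described can be sketched directly:

```python
# One-hot encoding of the three predicted secondary-structure states.
PSS_CODE = {"H": (1, 0, 0), "E": (0, 1, 0), "C": (0, 0, 1)}

def encode_pss(states):
    """Encode a predicted secondary-structure string, e.g. 'HHECC'."""
    return [PSS_CODE[s] for s in states]

print(encode_pss("HEC"))  # → [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
```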
Predicted shape strings
Tang et al. predicted shape strings using a predictor constructed with a structural alignment approach. Shape strings are represented by eight states, i.e. S, R, U, V, K, A, T, and G. They used a sliding window of 8 amino acids over the PSSM, PSS, and shape string features. We also added shape strings to our PSSM and PSS features. The shape strings were predicted using the protein shape string and its profile prediction server (DSP). Besides the eight states, DSP defines a ninth shape, N, for positions where the ϕ and ψ angles are undefined or where no structure has been determined for part of the sequence. The shape string features are encoded as (1 0 0 0 0 0 0 0 0) for S, (0 1 0 0 0 0 0 0 0) for R, ..., and (0 0 0 0 0 0 0 0 1) for N.
The proposed approach
A sliding window of size seven residues is used over the matrix of features, and the prediction is made for the central residue. This window size was selected in accordance with Shepherd et al., who found that the optimal prediction of β-turns is achieved with a window size of seven or nine.
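The windowing step can be sketched as follows; zero-padding at the chain termini is one common convention and is an assumption here, since the paper does not state its padding scheme.

```python
# Extract a 7-residue sliding window over a per-residue feature matrix;
# the prediction target is the central (4th) residue of each window.
import numpy as np

def sliding_windows(features, window=7):
    """features: (n_residues, n_features) array; returns flattened windows."""
    half = window // 2
    padded = np.pad(features, ((half, half), (0, 0)))  # zero-pad the termini
    return np.stack([padded[i:i + window].ravel()
                     for i in range(features.shape[0])])

X = np.arange(20, dtype=float).reshape(10, 2)  # 10 residues, 2 toy features
W = sliding_windows(X)
print(W.shape)  # → (10, 14): one flattened 7×2 window per residue
```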
Since β-turns account for approximately 25% of the residues in globular proteins, the ratio of β-turns to non-β-turns is about 1:3, so the training sets used for β-turn prediction are imbalanced. In our trial experiments, we found that if the non-β-turn set is divided into three subsets by a suitable clustering algorithm, each non-β-turn subset together with the whole β-turn set forms an approximately balanced training set. Such a balanced training set is more likely to be separable in the feature space, because the distribution of the non-β-turn samples within a subset is concentrated and compact. In other words, the β-turn set can be separated from each non-β-turn cluster by a different hyper-plane, so good performance can be expected when constructing localized SVMs using each non-β-turn cluster against the β-turns. Each of these SVMs alone, however, is certainly not a good global classifier. This suggests that it is possible to construct a classifier better than an SVM trained on the whole data by combining these localized SVMs effectively. In particular, a localized SVM classifier can be constructed for each sub-training set, so that the localized SVMs are not affected by the heterogeneity of the whole training set. To outperform the SVM trained on the whole data, we need to combine these localized SVMs into a global classifier without neglecting their local advantages. Majority voting is one method for combining several classifiers, but its main problem is that it does not weight each classifier. An LR model can integrate the localized SVM classifiers, and it allows us to take advantage of statistical modeling theory to find the optimal weights for each local classifier. The motivation to use this clustered model comes from the work of Yi Chang, who used localized linear SVM classifiers for data in the feature space defined by a chosen kernel.
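The training scheme above can be sketched end to end; synthetic data stands in for the real PSSM/PSS features, and a plain linear LR stands in for the fractional-polynomial LR model used in the paper.

```python
# Hedged sketch of the clustered scheme: k-means splits the non-β-turn class
# into three subsets; each subset plus the full β-turn class trains one
# localized SVM; a logistic regression over the three SVM scores decides.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_pos = rng.normal(1.0, 1.0, size=(100, 5))    # β-turn residues (toy)
X_neg = rng.normal(-1.0, 1.0, size=(300, 5))   # non-β-turn residues (≈3×)

# 1. Partition the majority (non-β-turn) class into three clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_neg)

# 2. Train one localized SVM per (cluster, full positive class) pair.
svms = []
for k in range(3):
    Xk = np.vstack([X_pos, X_neg[labels == k]])
    yk = np.r_[np.ones(len(X_pos)), np.zeros(int(np.sum(labels == k)))]
    svms.append(SVC(kernel="rbf", probability=True).fit(Xk, yk))

# 3. Aggregate the three localized SVM scores with logistic regression.
X_all = np.vstack([X_pos, X_neg])
y_all = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
scores = np.column_stack([m.predict_proba(X_all)[:, 1] for m in svms])
lr = LogisticRegression().fit(scores, y_all)
print(lr.score(scores, y_all))  # training accuracy of the aggregated model
```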
LR model selection
We verified the importance of each variable in the LR model using Wald statistics.
We compared the coefficient of each variable in the full model with the coefficient from the model containing only that variable.
Any variable that did not appear to be important was eliminated, and a new model was fitted. We then checked whether the new model differed significantly from the old model; if it did, the deleted variable was considered important.
The process of deleting, refitting, and verifying was repeated until all the important variables appeared to be included in the model.
We first tried to fit a linear LR model to the data, but the prediction error was found to be very large, so we considered power transformations using fractional polynomials.
A list of possible interactions between each pair of variables was created. These interaction terms were added one at a time to the model containing all the main effects, and the significance of each was assessed using the likelihood ratio test. The significant interactions were then added to the main-effects model, the fit was evaluated using Wald tests and the likelihood ratio test for the interaction terms, and any non-significant interaction was dropped.
Fractional polynomials for the SVM models using the BT426 dataset.
Training and testing
We used the LIBSVM package to train and build the SVM prediction models. The radial basis kernel function was used for all SVMs to map the data nonlinearly from the input space into a higher-dimensional space. The default grid search approach was used to find the optimal values for LIBSVM's parameters C and gamma. The leave-one-out cross-validation test, in which different datasets for training and testing are used to evaluate a prediction method, is an accurate test method compared with the independent dataset test and the sub-dataset test. In this test, one protein out of N proteins is removed to serve as the testing set and the remaining N-1 proteins are combined to form the training set used to train the prediction method. This process is then repeated N times, removing a different protein each time. In β-turn prediction, applying this procedure exactly is time consuming. Thus, most state-of-the-art β-turn prediction methods use seven-fold cross-validation to assess their prediction performance, and we did the same for H-SVM-LR. We first divided the dataset into seven subsets containing equal numbers of proteins. In each subset, β-turns account for approximately 25% of the protein residues; in other words, each subset contains the naturally-occurring proportion of β-turns. We removed one subset to serve as the testing set and merged the other subsets into one training set used to train H-SVM-LR. This process was repeated seven times so that a different subset was used for testing each time. The average of the results over the seven testing sets represents the final prediction result.
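The protein-level splitting described above can be sketched as follows; the chain identifiers are illustrative, and the point is that whole proteins, not individual residues, are assigned to folds so that residues from one chain never appear on both sides of a split.

```python
# Seven-fold cross-validation at the protein level: proteins are shuffled
# and split into seven (nearly) equal groups; each group serves as the test
# set once while the rest form the training set.
import numpy as np

def seven_fold_splits(protein_ids, n_folds=7, seed=0):
    """Yield (train_ids, test_ids) tuples over whole proteins."""
    rng = np.random.default_rng(seed)
    ids = np.array(protein_ids)
    rng.shuffle(ids)
    folds = np.array_split(ids, n_folds)
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, test

proteins = [f"chain_{i:03d}" for i in range(426)]  # e.g. the BT426 chains
splits = list(seven_fold_splits(proteins))
print(len(splits), len(splits[0][1]))  # → 7 61
```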
The quality of prediction is evaluated using four measures, the prediction accuracy, Qpredicted, Qobserved, and MCC. These measures are the most frequently used measures to evaluate the β-turns prediction methods. They are calculated using the four values (i) true positive (TP), which is the number of the residues that are correctly classified as β-turns, (ii) true negative (TN), which is the number of the residues that are correctly classified as non-β-turns, (iii) false positive (FP), which is the number of residues that have non-β-turns structure and incorrectly classified as having β-turns structure, and (iv) false negative (FN), which is the number of residues that have β-turns structure and incorrectly classified as having non-β-turns structure.
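The four measures can be written out directly from TP, TN, FP, and FN, following the standard definitions used in β-turn prediction work; the counts below are illustrative only, not results from the paper.

```python
# The four evaluation measures computed from the confusion-matrix counts.
import math

def qtotal(tp, tn, fp, fn):        # overall prediction accuracy (%)
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

def qpredicted(tp, fp):            # % of predicted β-turns that are real
    return 100.0 * tp / (tp + fp)

def qobserved(tp, fn):             # % of real β-turns that are found
    return 100.0 * tp / (tp + fn)

def mcc(tp, tn, fp, fn):           # Matthews correlation coefficient
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom

# Illustrative counts only:
tp, tn, fp, fn = 800, 2600, 400, 200
print(round(qtotal(tp, tn, fp, fn), 2), round(mcc(tp, tn, fp, fn), 3))
# → 85.0 0.63
```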
The value of MCC lies between -1 and 1. A value close to 1 indicates a perfect positive correlation, a value close to -1 a perfect negative correlation, and a value close to 0 no correlation.
The receiver operating characteristic (ROC) curve is adopted in this paper as a threshold-independent measure; it characterizes the effectiveness of a β-turn prediction method across all decision thresholds. The area under the ROC curve (AUC) is an important index that reflects prediction reliability: a good classifier has an area close to 1, while a random classifier has an area of 0.5.
Results and discussion
Performance comparison between different feature organizations on the BT426 dataset (rows: a sliding window on PSSMs only; a sliding window on both PSSMs and PSS).
Comparison of H-SVM-LR with other β-turns prediction methods on the BT426 dataset (methods compared: Zheng and Kurgan; Liu et al.; Hu and Li; Zhang et al. (multiple alignment)).
H-SVM-LR shows a high MCC of 0.56 compared to NetTurnP (0.50), Zheng and Kurgan's method (0.47), and the method of Liu et al. (0.44). Thus, H-SVM-LR has the highest MCC and Qtotal among the compared β-turn prediction methods. The MCC value achieved is noteworthy since MCC accounts for both over-predictions and under-predictions. The Qobserved of H-SVM-LR is 15.06% higher than that of Zheng and Kurgan's method, 1.76% higher than that of Hu and Li's method, and 21.46% higher than that of the method of Liu et al. Higher Qobserved values mean that a large percentage of the observed β-turns is correctly predicted. At the same time, the Qpredicted of our method shows that more than 64% of the predicted β-turns are actual β-turns. We note that the Qpredicted of H-SVM-LR is 2.13% higher than that of Zheng and Kurgan's method, 9.23% higher than that of Hu and Li's method, and 1.23% higher than that of the method of Liu et al.
Comparison of H-SVM-LR with other β-turns prediction methods on the BT547 and BT823 datasets (methods compared on each dataset: Zheng and Kurgan; Liu et al.; Hu and Li).
Including shape strings features
Comparison of H-SVM-LR with the method of Tang et al. (rows: the method of Tang et al. on each dataset).
In this paper, we proposed an approach that combines SVMs and LR to create a hybrid method for β-turn prediction, which we call H-SVM-LR. In H-SVM-LR, we utilized the protein profile in the form of PSSMs, together with PSS, as features; we also considered shape strings as additional features. We divided the non-β-turn class into three partitions using the k-means clustering algorithm, and each partition was combined with the β-turn class to form an approximately balanced sub-training set. An SVM classifier is trained for each sub-training set. Using this procedure, the class imbalance problem can be overcome and the SVM computational time can be reduced. An LR model selected based on fractional polynomials is used to aggregate the decisions of the SVMs into a final β-turn or non-β-turn decision. Using LR to aggregate the decisions of the SVMs enables us to take advantage of statistical modeling theory to find the optimal weights for each SVM. H-SVM-LR achieved an MCC of 0.56 and a Qtotal of 82.87% on the BT426 dataset when using PSSMs and PSS as features. The MCC and Qtotal achieved are significantly higher than those of the best existing methods that predict β-turns using PSSMs and PSS. H-SVM-LR also obtained the highest MCC and Qtotal on the BT547 and BT823 datasets. Furthermore, H-SVM-LR shows good performance when shape string features are included.
This work is supported in part by the National Natural Science Foundation of China under Grant No.61232001, No.61003124, No.61128006, the grant from CityU (Project No. 7002728), the Ph.D. Programs Foundation of Ministry of Education of China No.20090162120073, and the Freedom Explore Program of Central South University No.201012200124.
The publication costs for this article were funded by the corresponding author.
This article has been published as part of Proteome Science Volume 11 Supplement 1, 2013: Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2012: Proteome Science. The full contents of the supplement are available online at http://www.proteomesci.com/supplements/11/S1.
- Petersen B, Lundegaard C, Petersen TN: NetTurnP - Neural network prediction of beta-turns by use of evolutionary information and predicted protein sequence features. PLoS ONE 2010, 5: e15079. doi:10.1371/journal.pone.0015079
- Zheng C, Kurgan L: Prediction of beta-turns at over 80% accuracy based on an ensemble of predicted secondary structures and multiple alignments. BMC Bioinformatics 2008, 9: 430. doi:10.1186/1471-2105-9-430
- Kee KS, Jois SD: Design of beta-turn based therapeutic agents. Curr Pharm Des 2003, 9: 1209–1224. doi:10.2174/1381612033454900
- Chou PY, Fasman G: Conformational parameters for amino acids in helical, β-sheet and random coil regions calculated from proteins. Biochemistry 1974, 13: 211–222. doi:10.1021/bi00699a001
- Wilmot CM, Thornton JM: Analysis and prediction of the different types of β-turns in proteins. J Mol Biol 1988, 203: 221–232. doi:10.1016/0022-2836(88)90103-9
- Wilmot CM: β-turns and their distortions: a proposed new nomenclature. Protein Eng 1990, 3: 479–493. doi:10.1093/protein/3.6.479
- Zhang CT, Chou KC: Prediction of beta-turns in proteins by 1-4 & 2-3 correlation model. Biopolymers 1997, 41: 673–702.
- Chou KC: Prediction of beta-turns. J Peptide Res 1997, 49: 120–144.
- Fuchs PF, Alix AJ: High accuracy prediction of β-turns and their types using propensities and multiple alignments. Proteins: Structure, Function and Bioinformatics 2005, 59: 828–839. doi:10.1002/prot.20461
- Jones DT: Protein secondary structure prediction based on position-specific scoring matrices. J Mol Biol 1999, 292: 195–202. doi:10.1006/jmbi.1999.3091
- Pollastri G, Przybylski D, Rost B, Baldi P: Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles. Proteins 2002, 47: 228–235. doi:10.1002/prot.10082
- Ouali M, King RD: Cascaded multiple classifiers for secondary structure prediction. Protein Sci 2000, 9: 1162–1176. doi:10.1110/ps.9.6.1162
- Shepherd AJ, Gorse D, Thornton JM: Prediction of the location and type of beta-turns in proteins using neural networks. Protein Sci 1999, 8: 1045–1055. doi:10.1110/ps.8.5.1045
- Kaur H, Raghava GP: Prediction of beta-turns in proteins from multiple alignment using neural network. Protein Sci 2003, 12: 627–634. doi:10.1110/ps.0228903
- Kirschner A, Frishman D: Prediction of beta-turns and beta-turn types by a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN). Gene 2008, 422(1-2): 22–29. doi:10.1016/j.gene.2008.06.008
- Kim S: Protein β-turn prediction using nearest-neighbor method. Bioinformatics 2004, 20: 40–44. doi:10.1093/bioinformatics/btg368
- Pham TH, Satou K, Ho TB: Prediction and analysis of beta-turns in proteins by support vector machine. Genome Informatics 2003, 14: 196–205.
- Zhang Q, Yoon S, Welsh WJ: Improved method for predicting β-turn using support vector machine. Bioinformatics 2005, 21: 2370–2374. doi:10.1093/bioinformatics/bti358
- Hu X, Li Q: Using support vector machine to predict beta-turns and gamma-turns in proteins. J Comput Chem 2008, 29: 1867–1875. doi:10.1002/jcc.20929
- Liu L, Fang Y, Li M, Wang C: Prediction of beta-turn in protein using E-SSpred and support vector machine. Protein J 2009, 28: 175–181. doi:10.1007/s10930-009-9181-4
- Kountouris P, Hirst J: Predicting beta-turns and their types using predicted backbone dihedral angles and secondary structures. BMC Bioinformatics 2010, 11: 407. doi:10.1186/1471-2105-11-407
- Tang Z, Li T, Liu R, Xiong W, Sun J, Zhu Y, Chen G: Improving the performance of beta-turn prediction using predicted shape strings and a two-layer support vector machine model. BMC Bioinformatics 2011, 12: 283. doi:10.1186/1471-2105-12-283
- Altschul SF, Madden TL, Schaffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ: Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res 1997, 25: 3389–3402. doi:10.1093/nar/25.17.3389
- Elbashir MK, Wang J, Wu FX: A hybrid approach of support vector machines with logistic regression for β-turn prediction. IEEE International Conference on Bioinformatics and Biomedicine Workshops (BIBMW) 2012, 587–593.
- Hosmer D, Lemeshow S: Applied Logistic Regression. Wiley; 2000.
- Maalouf M, Trafalis TB: Robust weighted kernel logistic regression in imbalanced and rare events data. Computational Statistics and Data Analysis 2011, 55: 168–183. doi:10.1016/j.csda.2010.06.014
- Komarek P, Moore A: Making logistic regression a core data mining tool: a practical investigation of accuracy, speed, and simplicity. Technical report, Carnegie Mellon University; 2005.
- Guruprasad K, Rajkumar S: Beta- and gamma-turns in proteins revisited: a new set of amino acid dependent positional preferences and potential. J Biosci 2000, 25(2): 143–156.
- Montgomerie S, Sundararaj S, Gallin WJ, Wishart DS: Improving the accuracy of protein secondary structure prediction using structural alignment. BMC Bioinformatics 2006, 7: 301.
- Rost B, Sander C: Prediction of protein secondary structure at better than 70% accuracy. J Mol Biol 1993, 232(2): 584–599. doi:10.1006/jmbi.1993.1413
- Cole C, Barber JD, Barton GJ: The Jpred 3 secondary structure prediction server. Nucleic Acids Res 2008, 36(Web Server issue): W197-W201.
- Sun J, Tang S, Xiong W, Cong P, Li T: DSP: a protein shape string and its profile prediction server. Nucleic Acids Res 2012, 40(Web Server issue): W298-302.
- Chang YI: Boosting SVM classifiers with logistic regression. Technical report, Academia Sinica 2003. [http://www3.stat.sinica.edu.tw/library/c_tec_rep/2003–03.pdf]
- Royston P, Ambler G, Sauerbrei W: The use of fractional polynomials to model continuous risk variables in epidemiology. International Journal of Epidemiology 1999, 28: 964–974. doi:10.1093/ije/28.5.964
- Royston P, Altman DG: Regression using fractional polynomials of continuous covariates: parsimonious parametric modelling (with discussion). Appl Stat 1994, 43: 429–467. doi:10.2307/2986270
- R Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria; 2008.
- Chang CC, Lin CJ: LIBSVM: a library for support vector machines. [http://www.csie.ntu.edu.tw/~cjlin/libsvm]
- Chou K, Zhang C: Prediction of protein structural classes. Critical Reviews in Biochemistry and Molecular Biology 1995, 30: 275–349. doi:10.3109/10409239509083488
- Elbashir MK, Sheng Y, Wang J, Wu FX, Li M: Predicting β-turns in protein using kernel logistic regression. BioMed Research International 2013, 2013.
- Brunak S, Chauvin Y, Andersen C, Nielsen H: Assessing the accuracy of prediction algorithms: an overview. Bioinformatics 2000, 16: 412–424. doi:10.1093/bioinformatics/16.5.412
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.