 Research
 Open Access
Predicting β-turns in proteins using support vector machines with fractional polynomials
 Murtada Khalafallah Elbashir^{1, 4},
 Jianxin Wang^{1} (corresponding author),
 Fang-Xiang Wu^{2} and
 Lusheng Wang^{3}
https://doi.org/10.1186/1477-5956-11-S1-S5
© Elbashir et al.; licensee BioMed Central Ltd. 2013
 Published: 7 November 2013
Abstract
Background
β-turns are a secondary structure type that plays an essential role in molecular recognition, protein folding, and stability. They are the most common type of non-repetitive structure, since 25% of amino acids in protein structures are situated on them. Their prediction is considered one of the crucial problems in bioinformatics and molecular biology, as it can provide valuable insights and inputs for fold recognition and drug design.
Results
We propose an approach that combines support vector machines (SVMs) and logistic regression (LR) in a hybrid prediction method, which we call HSVMLR, to predict β-turns in proteins. Fractional polynomials are used for LR modeling. We utilize position-specific scoring matrices (PSSMs) and predicted secondary structure (PSS) as features. Our simulation studies show that HSVMLR achieves a Qtotal of 82.87%, 82.84%, and 82.32% on the BT426, BT547, and BT823 datasets respectively. These values are the highest among β-turn prediction methods based on PSSMs and secondary structure information. HSVMLR also achieves favorable performance in predicting β-turns as measured by the Matthews correlation coefficient (MCC) on these datasets. Furthermore, HSVMLR shows good performance when shape strings are considered as additional features.
Conclusions
In this paper, we present a comprehensive approach for β-turn prediction. Experiments show that our proposed approach achieves better performance than competing prediction methods.
Keywords
 Support Vector Machine
 Support Vector Machine Classifier
 Fractional Polynomial
 Torsion Pair
 Shape String
Background
The secondary structure of proteins consists of basic elements: α-helices, β-sheets, random coils, and turns. α-helices and β-sheets are considered regular secondary structure elements, while the residues that correspond to turn structures do not form regular secondary structure elements. In turn structures, the Cα atoms of two residues are separated by one to five peptide bonds and the distance between these Cα atoms is less than 7 Å. The number of peptide bonds separating the two end residues determines the specific turn type. In α-turns and β-turns, the two end residues are separated by four and three peptide bonds respectively. In γ-turns, δ-turns, and π-turns, the two end residues are separated by two, one, and five peptide bonds respectively. The most common type of turn structure in proteins is the β-turn, which represents approximately 25% of the secondary structure of protein sequences. β-turns can reverse the direction of a protein chain, and are therefore considered orienting structures [1]. They also have significant effects on protein folding, because they can bring together, and allow interactions between, the regular secondary structure elements. β-turns are not only important in protein folding but are also implicated in the biological activities of peptides, as the bioactive structures that interact with other molecules such as receptors, enzymes, and antibodies [2]. They are also important in the design of various peptidomimetics for many diseases [3]. Therefore, the prediction of β-turns is one of the important problems in molecular biology, as it can provide valuable insights and inputs for fold recognition and drug design.
Various methods have been designed for β-turn prediction; they can be divided into statistical methods and machine learning methods. The statistical methods used in β-turn prediction include the Chou-Fasman method [4], Thornton's algorithm [5], GORBTURN [6], the 1-4 & 2-3 correlation model [7], the sequence-coupled model [8], and the COUDES method [9]. All of these statistical methods use the sequence as input except for COUDES, which is based on propensities and multiple alignments. COUDES also utilizes secondary structure predicted by PSIPRED [10], SSPRO2 [11], and PROF [12]. The machine learning methods include BTPRED [13], BetaTPred2 [14], MOLEBRNN [15], and NetTurnP [1], which are based on artificial neural networks (ANNs); Kim's method based on k-nearest neighbors (KNN) [16]; and methods based on support vector machines (SVMs), which have recently become popular in the field of β-turn prediction. These SVM-based methods include BTSVM [17], Zhang and colleagues' method [18], Zheng and Kurgan's method [2], Hu and Li's method [19], the method of Liu et al. [20], DEBT [21], and the method of Tang et al. [22]. In BTPRED, secondary structure predictions are utilized with a two-layer network architecture. BetaTPred2 enhances the performance of β-turn prediction by using secondary structure prediction and evolutionary information in the form of position-specific scoring matrices (PSSMs) as input to the neural networks. MOLEBRNN uses PSSMs as input to a bidirectional Elman-type recurrent neural network. NetTurnP uses evolutionary information and predicted protein sequence features as input to two ANN layers, where the first layer is trained to predict whether or not an amino acid is located in a β-turn. Kim's method encodes the protein sequence using a window of up to 9 residues as input to a KNN-based method, which is combined with a filter that uses secondary structure predicted with PSIPRED for the central residue.
In BTSVM, position-specific frequency matrices (PSFMs) and PSSMs, both calculated with PSI-BLAST [23], are applied to encode the input for an SVM classifier. Zhang and colleagues' method is another SVM method, which uses PSSMs over a 7-residue window and the secondary structure of the central residue predicted by PSIPRED as input. In Zheng and Kurgan's method, an SVM is utilized to predict β-turns using window-based information extracted from four predicted secondary structures (PSSs), with a selected set of PSSMs as input to the SVM. The SVM-based method developed by Hu and Li combines the increment of diversity, a position conservation scoring function, and secondary structure predicted with PSIPRED to compute the inputs for prediction of β-turns and γ-turns. Liu et al. combine an SVM with PSS information obtained using E-SSpred, a secondary protein structure prediction method. DEBT predicts β-turns and their types using information from multiple sequence alignments, PSSs, and predicted dihedral angles. Tang et al. considered, as new features, another type of one-dimensional string of symbols called shape strings, which represent the clustered regions of ϕ, ψ torsion pairs. In [24] we utilized the idea of undersampling to create several balanced datasets. These balanced sets were used to train several SVM classifiers independently, and the SVMs were aggregated using a linear logistic regression model.
In this paper, we propose a new approach called HSVMLR (a Hybrid approach of SVMs and Logistic Regression (LR)) for predicting β-turns. Our proposed approach incorporates the idea of clustering by partitioning the non-β-turn class into three subsets using the k-means clustering algorithm. Each subset is merged with the positive class (β-turn) to form a sub-training set. These sub-training sets are used to train localized SVM classifiers independently. An LR model, built using fractional polynomials, is used to aggregate the localized SVMs to make a collective decision. The merit of using LR to aggregate the localized SVMs is that it enables us to take advantage of statistical modeling theory to find the optimal weights for each local SVM [24]. LR also has the advantage of being widely studied [25], and in recent years many algorithms have been designed to improve its performance. These include the iteratively reweighted least squares (IRLS) algorithm, which is a special case of Fisher's scoring method [26, 27].
Methods
Support vector machine (SVM)
An SVM classifier is obtained by solving the soft-margin optimization problem

min_{w, b, ξ} (1/2)‖w‖² + C Σ_i ξ_i, subject to y_i(wᵀx_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0,

where w is the normal vector to the hyperplane, b is the offset from the origin, and C is the error penalty parameter. A kernel function, which maps the input space into a higher-dimensional space, can be applied to create an SVM classifier for nonlinear problems. Kernel functions used for SVMs include the polynomial kernel function, the radial basis function (also known as the Gaussian kernel function), and the sigmoid kernel function.
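As a minimal sketch of the radial basis kernel mentioned above (the function name and toy vectors are our own, for illustration only):

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Radial basis (Gaussian) kernel: K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

# A vector has kernel value 1 with itself (zero distance); the value
# decays toward 0 as the two vectors move apart in input space.
assert rbf_kernel([1.0, 0.0], [1.0, 0.0]) == 1.0
assert rbf_kernel([1.0, 0.0], [0.0, 1.0]) < 1.0
```

Implicitly, the kernel computes an inner product in a higher-dimensional feature space without ever constructing that space explicitly.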
Logistic regression (LR)
LR models the probability that an observation belongs to the positive class as π(x) = 1/(1 + e^{−(β₀ + βᵀx)}), and the model is fitted by minimizing the deviance D = −2 Σ_i [y_i ln π(x_i) + (1 − y_i) ln(1 − π(x_i))]. Minimizing the deviance is equivalent to maximizing the log-likelihood.
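To make the equivalence concrete, a small sketch (function and variable names are ours) computing the Bernoulli log-likelihood and the corresponding deviance:

```python
import math

def log_likelihood(y, p):
    """Bernoulli log-likelihood of labels y in {0,1} under fitted probabilities p."""
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

def deviance(y, p):
    """Deviance is -2 times the log-likelihood, so minimizing the deviance
    maximizes the log-likelihood."""
    return -2.0 * log_likelihood(y, p)

y = [1, 0, 1, 1]          # toy labels
p = [0.9, 0.2, 0.7, 0.6]  # toy fitted probabilities
assert abs(deviance(y, p) + 2.0 * log_likelihood(y, p)) < 1e-12
assert deviance(y, p) > 0.0
```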
Datasets
The dataset BT426, which contains 426 non-homologous protein chains, is used to evaluate our HSVMLR prediction method. This dataset was developed by Guruprasad and Rajkumar [28]. We obtained it from the Raghava Group's website http://www.imtech.res.in/raghava/bteval/dataset.html. The structure of every protein chain in BT426 was determined by X-ray crystallography at 2.0 Å resolution or better, and each chain contains at least one β-turn structure. 24.9% of all amino acids in BT426 are assigned to β-turn structures. Several recent β-turn prediction methods use BT426 as a golden set of amino acid sequences to evaluate their performance; we therefore used it to evaluate our method and to make direct comparisons with other prediction methods. Besides BT426, we used a dataset of 547 protein sequences (BT547) and a dataset of 823 protein sequences (BT823) to evaluate our approach. These datasets were constructed for training and testing COUDES [9].
Features
PSSMs
Each PSSM element is scaled to the range (0, 1) with the logistic function f(x) = 1/(1 + e^{−x}), where x is the PSSM element that stands for the likelihood of the particular residue substitution at that position.
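A sketch of this scaling, assuming the commonly used logistic mapping (the function name and the example row values are made up for illustration):

```python
import math

def scale_pssm(value):
    """Map a raw PSSM log-odds score onto the interval (0, 1)
    with the logistic function 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-value))

row = [-3, 0, 2, 7]                      # one hypothetical PSSM row
scaled = [scale_pssm(v) for v in row]
assert scaled[1] == 0.5                  # a raw score of 0 maps to exactly 0.5
assert all(0.0 < s < 1.0 for s in scaled)
```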
Predicted secondary structure (PSS)
PROTEUS [29] is used to predict the secondary structure features. The motivation to use PROTEUS comes from the work of Tang et al. [22], which concludes that predictions using PROTEUS and PSSMs were better than those obtained using PHD [30], JPRED [31], PROTEUS, and PSSMs together. The secondary structure features are predicted as three structure states: helix (H), strand (E), and coil (C). These three states are encoded as 1 0 0 for helix, 0 1 0 for strand, and 0 0 1 for coil.
Predicted shape strings
Tang et al. [22] predicted shape strings with a predictor constructed using a structural alignment approach. Shape strings are represented by eight states, i.e. S, R, U, V, K, A, T, and G. They used a sliding window of 8 amino acids on PSSM, PSS, and shape-string features. We also added shape strings to our PSSM and PSS features. The shape strings were predicted using the protein shape string and its profile prediction server (DSP) [32]. Besides the eight states, DSP defines the shape N, used where the ϕ and ψ angles are undefined or where no structure was determined for parts of the sequence. The shape-string features are encoded as (1 0 0 0 0 0 0 0 0) for S, (0 1 0 0 0 0 0 0 0) for R, ..., and (0 0 0 0 0 0 0 0 1) for N.
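Both the PSS and shape-string encodings above are instances of one-hot encoding; a minimal sketch (the helper name is ours):

```python
PSS_STATES = "HEC"          # helix, strand, coil
SHAPE_STATES = "SRUVKATGN"  # the eight DSP shapes plus the undefined state N

def one_hot(symbol, states):
    """Encode one state symbol as a 0/1 vector of length len(states)."""
    return [1 if s == symbol else 0 for s in states]

assert one_hot("H", PSS_STATES) == [1, 0, 0]
assert one_hot("C", PSS_STATES) == [0, 0, 1]
# The ninth shape state N gets a 1 only in the last position.
assert one_hot("N", SHAPE_STATES)[-1] == 1
assert sum(one_hot("N", SHAPE_STATES)) == 1
```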
The proposed approach
A sliding window of seven residues is used over the matrix that consists of the features, and the prediction is made for the central residue. This window size is selected in accordance with Shepherd et al. [13], who found that the optimal prediction of β-turns is achieved with a window size of seven or nine.
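The windowing step can be sketched as follows (the function name and the padding value are ours; positions beyond either chain terminus must be padded so that every residue gets a full window):

```python
def windows(features, size=7, pad=None):
    """Yield, for each residue, the feature rows in a centered window of `size`;
    positions past either end of the chain are padded with `pad`."""
    half = size // 2
    n = len(features)
    for i in range(n):
        yield [features[j] if 0 <= j < n else pad
               for j in range(i - half, i + half + 1)]

feats = list("ABCDE")                 # stand-in for per-residue feature rows
win = list(windows(feats, size=7))
assert len(win) == 5                  # one window (one prediction) per residue
assert win[0] == [None, None, None, "A", "B", "C", "D"]  # padded N-terminus
```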
Clustered model
Since β-turns account for approximately 25% of globular protein residues, the ratio of β-turns to non-β-turns is 1:3; thus, the training sets used for β-turn prediction are imbalanced. In our trial experiments, we found that if the non-β-turn set is divided into three subsets by a suitable clustering algorithm, each non-β-turn subset together with the whole β-turn set forms an approximately balanced training set. Such a balanced training set is more likely to be separable in the feature space, because the distribution of the non-β-turn samples within a subset is centralized and compact. In other words, the β-turn set can be separated from each non-β-turn cluster by a different hyperplane. Good performance would therefore be expected when constructing localized SVMs using each non-β-turn cluster against the β-turns, but each of these SVMs alone is certainly not a good global classifier. This suggests that it is possible to construct a better classifier than an SVM trained on the whole data by combining these SVMs effectively. In particular, a localized SVM classifier can be constructed for each sub-training set; this way the localized SVMs are not affected by the heterogeneity of the whole training set. To outperform the SVM trained on the whole data, we need to combine these localized SVMs effectively into a global one without neglecting their local advantages. Majority voting is one method of combining several classifiers, but its main problem is that it gives no weight to each classifier. An LR model can integrate the localized SVM classifiers, and it allows us to take advantage of statistical modeling theory to find the optimal weights for each local classifier. The motivation to use this clustered model comes from the work of Yi Chang [33], who used localized linear SVM classifiers for data in the feature space defined by a chosen kernel.
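A sketch of the partitioning step, using a plain Lloyd's k-means on toy two-dimensional points (all names and data here are illustrative, not the actual feature vectors or clustering implementation):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: return a cluster index for every point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # Move each center to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Toy data: a positive (beta-turn) set and a 3x larger negative set.
pos = [[0.0, 0.0]] * 10
neg = [[5.0, 5.0]] * 10 + [[-5.0, 5.0]] * 10 + [[5.0, -5.0]] * 10
labels = kmeans(neg, k=3)
subsets = [[p for p, l in zip(neg, labels) if l == c] for c in range(3)]
# Each sub-training set = the whole positive class + one negative cluster,
# giving roughly balanced sets for the localized SVMs.
sub_training = [pos + s for s in subsets]
assert sum(len(s) for s in subsets) == len(neg)
```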
LR model selection

We verified the importance of each variable in the LR model using Wald statistics.

We compared the coefficient of each variable with the coefficient from the model containing only that variable.

Any variable that did not appear to be important was eliminated, and a new model was fitted. The new model was checked to see whether it differed significantly from the old model; if it did, the deleted variable was important.

The process of deleting, refitting, and verifying was repeated until it appeared that all the important variables were included in the model.

We tried to fit a linear LR model to the data, but the prediction error was found to be very large, so we considered power transformations using fractional polynomials.

A list of possible interactions between each pair of variables was created. These interaction terms were added one at a time to the model containing all the main effects, and the significance of each was assessed using the likelihood ratio test. The significant interactions were added to the main-effects model, whose fit was evaluated using Wald tests and the LR test for the interaction terms; any non-significant interaction was dropped.
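The likelihood ratio test used in this procedure compares the deviances of nested models; a sketch for a single-term (1-df) comparison follows, where the function name and the second deviance value are illustrative (for 1 df, the chi-square tail probability can be written with the complementary error function):

```python
import math

def lr_test_df1(dev_reduced, dev_full):
    """Likelihood ratio test between nested models differing by one term.
    G = D_reduced - D_full is chi-square with 1 df;
    P(chi2_1 > g) = erfc(sqrt(g / 2))."""
    g = dev_reduced - dev_full
    p_value = math.erfc(math.sqrt(g / 2.0))
    return g, p_value

# A deviance drop of about 6.63 is the 1-df chi-square critical value for
# p = 0.01, so a drop of that size keeps the candidate term in the model.
g, p = lr_test_df1(256272.1, 256265.47)
assert g > 0 and p < 0.05
```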
Fractional polynomials
Fractional polynomials for the SVM models using the BT426 dataset.
Variable  Cycle 1 Deviance  p  q  Cycle 2 Deviance  p  q

SVM model1  256272.1  -  -  256255.1  -  -
  256235.6  1  -  256209.8  1  -
  256180.1  0.5  -  256146.1  0.5  -
  256080.4  1  2  256035.3  1  2
SVM model2  257266.9  -  -  257050.1  -  -
  256512.8  1  -  256314.3  1  -
  256284.1  0  -  256086.0  0  -
  256235.6  0.5  1  256035.3  0.5  1
SVM model3  258586.7  -  -  258511.7  -  -
  256669.1  1  -  256247.5  1  -
  256626.6  0.5  -  256148.6  0.5  -
  256512.8  2  3  256035.3  2  2
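A sketch of how degree-1 and degree-2 fractional-polynomial terms are generated from the conventional candidate power set (function names are ours, following the Royston-Altman conventions cited in the references: the power 0 denotes the natural logarithm, and a repeated power p contributes x^p and x^p·ln x):

```python
import math

FP_POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional candidate set

def fp_term(x, p):
    """One fractional-polynomial term of x > 0: x**p, with x**0 := ln(x)."""
    return math.log(x) if p == 0 else x ** p

def fp2(x, p, q):
    """Degree-2 fractional-polynomial basis for x > 0; a repeated power
    uses the terms x**p and x**p * ln(x)."""
    if p == q:
        return [fp_term(x, p), fp_term(x, p) * math.log(x)]
    return [fp_term(x, p), fp_term(x, q)]

assert fp_term(math.e, 0) == 1.0          # power 0 means natural log
assert fp2(2.0, 1, 2) == [2.0, 4.0]       # ordinary powers
assert fp2(2.0, 2, 2)[1] == 4.0 * math.log(2.0)  # repeated-power convention
```

Model selection then amounts to fitting each candidate (p, q) pair and keeping the one with the lowest deviance, as in the cycles shown in the table above.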
Training and testing
We used the LIBSVM package [37] to train and build the SVM prediction models. The radial basis kernel function was used to map the data nonlinearly from a low-dimensional space to a higher-dimensional space for all the SVMs. The default grid search approach was used to find the optimal values for LIBSVM's parameters C and gamma. The leave-one-out cross-validation test, in which different datasets are used for training and testing to evaluate a prediction method, is an accurate test method compared with the independent dataset test and the sub-dataset test [38]. In this test, one protein out of N proteins is removed to form the testing set and the remaining N−1 proteins are combined to form the training set used for training the prediction method. The process is then repeated N times, removing one protein each time. In β-turn prediction, applying this process exactly is time consuming; thus, most state-of-the-art β-turn prediction methods use seven-fold cross-validation to assess their prediction performance [39]. Therefore, we used seven-fold cross-validation to assess the performance of our HSVMLR method. We first divided the dataset into seven subsets containing equal numbers of proteins. In each set, β-turns account for approximately 25% of the protein residues; in other words, each set contains the naturally occurring proportion of β-turns. We removed one set to form the testing set and merged the other sets into one training set, which was used to train HSVMLR. This process was repeated seven times so that a different set was used for testing each time. We take the average of the results over the seven testing sets as the final prediction result.
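The fold construction can be sketched as follows (the function name is ours; the real folds additionally preserve the natural β-turn proportion, which this toy split does not check):

```python
def seven_fold_splits(proteins, k=7):
    """Partition proteins into k folds of (nearly) equal size; each round one
    fold is held out for testing and the remaining folds are merged for
    training."""
    folds = [proteins[i::k] for i in range(k)]
    for t in range(k):
        train = [p for f, fold in enumerate(folds) if f != t for p in fold]
        yield train, folds[t]

proteins = [f"chain{i}" for i in range(14)]
rounds = list(seven_fold_splits(proteins))
assert len(rounds) == 7                       # seven train/test rounds
for train, test in rounds:
    assert len(test) == 2 and len(train) == 12
    assert set(train).isdisjoint(test)        # no protein in both sets
```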
Performance measures
The quality of prediction is evaluated using four measures: Qtotal (the prediction accuracy), Qpredicted, Qobserved, and MCC. These are the measures most frequently used to evaluate β-turn prediction methods. They are calculated from four values: (i) true positives (TP), the number of residues correctly classified as β-turns; (ii) true negatives (TN), the number of residues correctly classified as non-β-turns; (iii) false positives (FP), the number of residues that have non-β-turn structure but are incorrectly classified as β-turns; and (iv) false negatives (FN), the number of residues that have β-turn structure but are incorrectly classified as non-β-turns.
The value of MCC lies between −1 and 1. A value close to 1 indicates a perfect positive correlation, a value close to −1 indicates a perfect negative correlation, and a value close to 0 indicates no correlation.
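A sketch of the four measures computed from these counts (the function name and the example counts are ours):

```python
import math

def q_and_mcc(tp, tn, fp, fn):
    """Qtotal (accuracy), Qpredicted (precision), Qobserved (recall),
    all in percent, plus the Matthews correlation coefficient."""
    q_total = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    q_pred = 100.0 * tp / (tp + fp)
    q_obs = 100.0 * tp / (tp + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return q_total, q_pred, q_obs, mcc

# A perfect classifier on made-up counts: every measure is at its maximum.
qt, qp, qo, mcc = q_and_mcc(tp=50, tn=150, fp=0, fn=0)
assert qt == 100.0 and qp == 100.0 and qo == 100.0 and mcc == 1.0
```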
The receiver operating characteristic (ROC) curve is adopted in this paper as a threshold-independent measure of the effectiveness of a β-turn prediction method. The area under the ROC curve (AUC) is an important index that reflects prediction reliability: a good classifier has an area close to 1, while a random classifier has an area of 0.5.
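The AUC can be computed without fixing any threshold via its Mann-Whitney formulation, i.e. the probability that a randomly chosen positive is scored above a randomly chosen negative; a small sketch (function name ours):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs where the positive is
    scored higher, counting ties as one half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            wins += 1.0 if sp > sn else (0.5 if sp == sn else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect ranking of positives above negatives gives AUC = 1;
# indistinguishable scores give the random-classifier value 0.5.
assert auc([0.9, 0.8], [0.2, 0.1]) == 1.0
assert auc([0.5], [0.5]) == 0.5
```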
Results and discussion
Performance comparison between different features organization on the BT426 dataset.
Features organization  Qtotal  Qpredicted  Qobserved  MCC 

A sliding window on PSSMs only  81.03  63.98  57.40  0.48 
A sliding window on both PSSMs and PSS  82.87  64.83  70.66  0.56 
Comparison of HSVMLR with other βturns prediction methods on the BT426 dataset.
Prediction method  Qtotal  Qpredicted  Qobserved  MCC 

HSVMLR  82.87  64.83  70.66  0.56 
Zheng and Kurgan [2]  80.9  62.7  55.6  0.47 
Liu et al. [20]  80.9  63.6  49.2  0.44 
Hu and Li [19]  79.8  55.6  68.9  0.47 
DEBT [21]  79.2  54.8  70.1  0.48 
BTSVM [17]  78.7  56.0  62.0  0.45 
NetTurnP [1]  78.2  54.4  75.6  0.50 
MOLEBRNN [15]  77.9  53.9  66.0  0.45 
Zhang et al. (multiple alignment) [18]  77.3  53.1  67.0  0.45 
BetaTPred2 [14]  75.5  49.8  72.3  0.43 
Kim [16]  75.0  46.5  66.7  0.40 
COUDES [9]  74.8  48.8  69.9  0.42 
BTPRED [13]  74.4  48.3  57.3  0.35 
HSVMLR achieves a high MCC of 0.56, compared to 0.50 for NetTurnP, 0.47 for Zheng and Kurgan's method, and 0.44 for the method of Liu et al. Thus, HSVMLR has the highest MCC and Qtotal among the β-turn prediction methods. The MCC value achieved is noteworthy since MCC accounts for both over-prediction and under-prediction. The Qobserved of HSVMLR is higher than that of Zheng and Kurgan's method by 15.06%, than that of Hu and Li's method by 1.76%, and than that of the method of Liu et al. by 21.46%. Higher Qobserved values mean that a large percentage of the observed β-turns is correctly predicted. At the same time, the Qpredicted of our method shows that more than 64% of the predicted β-turns are actual β-turns. We note that the Qpredicted of HSVMLR is higher than that of Zheng and Kurgan's method by 2.13%, than that of Hu and Li's method by 9.23%, and than that of the method of Liu et al. by 1.23%.
Comparison of HSVMLR with other βturns prediction methods on BT547 and BT823 datasets.
Prediction method  Dataset  Qtotal  Qpredicted  Qobserved  MCC

HSVMLR  BT547  82.84  63.60  68.5  0.55
Zheng and Kurgan [2]  BT547  80.5  61.6  54.2  0.45
Liu et al. [20]  BT547  80.6  64.3  44.5  0.44
Hu and Li [19]  BT547  76.6  47.6  70.2  0.43
DEBT [21]  BT547  80.0  55.9  68.7  0.49
COUDES [9]  BT547  74.6  48.7  70.4  0.42
HSVMLR  BT823  82.32  64.48  72.72  0.56
Zheng and Kurgan [2]  BT823  80.6  60.8  54.6  0.45
Liu et al. [20]  BT823  80.5  62.3  44.6  0.44
Hu and Li [19]  BT823  76.8  53.0  72.3  0.45
DEBT [21]  BT823  80.9  55.9  66.1  0.48
COUDES [9]  BT823  74.2  47.5  69.6  0.41
Including shape strings features
Comparison of HSVMLR with the method of Tang et al. [22].
Prediction method  Dataset  Qtotal  Qpredicted  Qobserved  MCC

HSVMLR  BT426  87.37  74.99  75.20  0.67
Tang et al.  BT426  87.2  73.8  75.9  0.66
HSVMLR  BT547  88.64  77.79  76.31  0.70
Tang et al.  BT547  87.3  69.8  86.5  0.69
HSVMLR  BT823  89.55  79.53  77.73  0.72
Tang et al.  BT823  88.7  72.6  88.1  0.73
Conclusions
In this paper, we proposed an approach that combines SVMs and LR to create a hybrid method for β-turn prediction, which we call HSVMLR. In HSVMLR, we utilized the protein profile in the form of PSSMs, together with PSS, as features, and also considered shape strings as additional features. We divided the non-β-turn class into three partitions using the k-means clustering algorithm, and each partition was combined with the β-turn class to form approximately balanced sub-training sets. An SVM classifier is trained for each sub-training set. With this procedure, the class imbalance problem can be overcome and the SVM computation time can be reduced. An LR model, selected based on fractional polynomials, is used to aggregate the decisions of the SVMs into a final β-turn or non-β-turn decision. Using LR to aggregate the decisions of the SVMs enables us to take advantage of statistical modeling theory to find the optimal weights for each SVM. HSVMLR achieved an MCC of 0.56 and a Qtotal of 82.87% on the BT426 dataset when using PSSMs and PSS as features; both are significantly higher than those of the best existing methods that predict β-turns using PSSMs and PSS. HSVMLR also obtained the highest MCC and Qtotal on the BT547 and BT823 datasets. Furthermore, HSVMLR shows good performance when shape-string features are included.
Declarations
Acknowledgements
This work is supported in part by the National Natural Science Foundation of China under Grant No.61232001, No.61003124, No.61128006, the grant from CityU (Project No. 7002728), the Ph.D. Programs Foundation of Ministry of Education of China No.20090162120073, and the Freedom Explore Program of Central South University No.201012200124.
Declarations
The publication costs for this article were funded by the corresponding author.
This article has been published as part of Proteome Science Volume 11 Supplement 1, 2013: Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2012: Proteome Science. The full contents of the supplement are available online at http://www.proteomesci.com/supplements/11/S1.
References
 Petersen B, Lundegaard C, Petersen TN: NetTurnP - Neural network prediction of beta-turns by use of evolutionary information and predicted protein sequence features. PLoS ONE 2010, 5: e15079. 10.1371/journal.pone.0015079
 Zheng C, Kurgan L: Prediction of beta-turns at over 80% accuracy based on an ensemble of predicted secondary structures and multiple alignments. BMC Bioinformatics 2008, 9: 430. 10.1186/1471-2105-9-430
 Kee KS, Jois SD: Design of beta-turn based therapeutic agents. Curr Pharm Des 2003, 9: 1209-24. 10.2174/1381612033454900
 Chou PY, Fasman G: Conformational parameters for amino acids in helical, β-sheet and random coil regions calculated from proteins. Biochemistry 1974, 13: 211-222. 10.1021/bi00699a001
 Wilmot CM, Thornton JM: Analysis and prediction of the different types of β-turn in proteins. J Mol Biol 1988, 203: 221-232. 10.1016/0022-2836(88)90103-9
 Wilmot CM: β-turns and their distortions: a proposed new nomenclature. Protein Eng 1990, 3: 479-493. 10.1093/protein/3.6.479
 Zhang CT, Chou KC: Prediction of beta-turns in proteins by the 1-4 & 2-3 correlation model. Biopolymers 1997, 41: 673-702. 10.1002/(SICI)1097-0282(199705)41:6<673::AID-BIP7>3.0.CO;2-N
 Chou KC: Prediction of beta-turns. J Peptide Res 1997, 49: 120-144.
 Fuchs PF, Alix AJ: High accuracy prediction of β-turns and their types using propensities and multiple alignments. Proteins: Structure, Function, and Bioinformatics 2005, 59: 828-839. 10.1002/prot.20461
 Jones DT: Protein secondary structure prediction based on position-specific scoring matrices. J Mol Biol 1999, 292: 195-202. 10.1006/jmbi.1999.3091
 Pollastri G, Przybylski D, Rost B, Baldi P: Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles. Proteins 2002, 47: 228-35. 10.1002/prot.10082
 Ouali M, King RD: Cascaded multiple classifiers for secondary structure prediction. Protein Sci 2000, 9: 1162-76. 10.1110/ps.9.6.1162
 Shepherd AJ, Gorse D, Thornton JM: Prediction of the location and type of beta-turns in proteins using neural networks. Protein Sci 1999, 8: 1045-1055. 10.1110/ps.8.5.1045
 Kaur H, Raghava GP: Prediction of beta-turns in proteins from multiple alignment using neural network. Protein Sci 2003, 12: 627-634. 10.1110/ps.0228903
 Kirschner A, Frishman D: Prediction of beta-turns and beta-turn types by a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN). Gene 2008, 422(1-2): 22-9. 10.1016/j.gene.2008.06.008
 Kim S: Protein β-turn prediction using nearest-neighbor method. Bioinformatics 2004, 20: 40-4. 10.1093/bioinformatics/btg368
 Pham TH, Satou K, Ho TB: Prediction and analysis of beta-turns in proteins by support vector machine. Genome Informatics 2003, 14: 196-205.
 Zhang Q, Yoon S, Welsh WJ: Improved method for predicting β-turn using support vector machine. Bioinformatics 2005, 21: 2370-4. 10.1093/bioinformatics/bti358
 Hu X, Li Q: Using support vector machine to predict beta- and gamma-turns in proteins. J Comput Chem 2008, 29: 1867-1875. 10.1002/jcc.20929
 Liu L, Fang Y, Li M, Wang C: Prediction of beta-turn in protein using E-SSpred and support vector machine. Protein J 2009, 28: 175-181. 10.1007/s10930-009-9181-4
 Kountouris P, Hirst J: Predicting beta-turns and their types using predicted backbone dihedral angles and secondary structures. BMC Bioinformatics 2010, 11: 407. 10.1186/1471-2105-11-407
 Tang Z, Li T, Liu R, Xiong W, Sun J, Zhu Y, Chen G: Improving the performance of beta-turn prediction using predicted shape strings and a two-layer support vector machine model. BMC Bioinformatics 2011, 12: 283. 10.1186/1471-2105-12-283
 Altschul SF, Madden TL, Schaffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ: Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res 1997, 25: 3389-3402. 10.1093/nar/25.17.3389
 Elbashir MK, Wang J, Wu FX: A hybrid approach of support vector machines with logistic regression for β-turn prediction. BIBMW, IEEE International Conference on Bioinformatics and Biomedicine Workshops 2012, 587-593.
 Hosmer D, Lemeshow S: Applied Logistic Regression. Wiley; 2000.
 Maalouf M, Trafalis TB: Robust weighted kernel logistic regression in imbalanced and rare events data. Computational Statistics and Data Analysis 2011, 55: 168-183. 10.1016/j.csda.2010.06.014
 Komarek P, Moore A: Making logistic regression a core data mining tool: a practical investigation of accuracy, speed, and simplicity. Technical report, Carnegie Mellon University; 2005.
 Guruprasad K, Rajkumar S: Beta- and gamma-turns in proteins revisited: A new set of amino acid dependent positional preferences and potentials. J Biosci 2000, 25(2): 143-156.
 Montgomerie S, Sundararaj S, Gallin WJ, Wishart DS: Improving the accuracy of protein secondary structure prediction using structural alignment. BMC Bioinformatics 2006, 7: 301.
 Rost B, Sander C: Prediction of protein secondary structure at better than 70% accuracy. J Mol Biol 1993, 232(2): 584-599. 10.1006/jmbi.1993.1413
 Cole C, Barber JD, Barton GJ: The Jpred 3 secondary structure prediction server. Nucleic Acids Res 2008, 36(Web Server issue): W197-W201.
 Sun J, Tang S, Xiong W, Cong P, Li T: DSP: a protein shape string and its profile prediction server. Nucleic Acids Res 2012, 40(Web Server issue): W298-302.
 Chang YI: Boosting SVM classifiers with logistic regression. Technical report, Academia Sinica 2003. [http://www3.stat.sinica.edu.tw/library/c_tec_rep/2003-03.pdf]
 Royston P, Ambler G, Sauerbrei W: The use of fractional polynomials to model continuous risk variables in epidemiology. International Journal of Epidemiology 1999, 28: 964-974. 10.1093/ije/28.5.964
 Royston P, Altman DG: Regression using fractional polynomials of continuous covariates: parsimonious parametric modelling (with discussion). Appl Stat 1994, 43: 429-467. 10.2307/2986270
 R Development Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2008.
 Chang CC, Lin CJ: LIBSVM: A library for support vector machines. [http://www.csie.ntu.edu.tw/~cjlin/libsvm]
 Chou K, Zhang C: Prediction of protein structural classes. Critical Reviews in Biochemistry and Molecular Biology 1995, 30: 275-349. 10.3109/10409239509083488
 Elbashir MK, Sheng Y, Wang J, Wu FX, Li M: Predicting β-turns in protein using kernel logistic regression. BioMed Research International 2013, 2013.
 Brunak S, Chauvin Y, Andersen C, Nielsen H: Assessing the accuracy of prediction algorithms: an overview. Bioinformatics 2000, 16: 412-424. 10.1093/bioinformatics/16.5.412
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.