Variance decomposition of protein profiles from antibody arrays using a longitudinal twin model

Background
The advent of affinity-based proteomics technologies for global protein profiling provides the prospect of finding new molecular biomarkers for common, multifactorial disorders. The molecular phenotypes obtained from such platforms are driven by multiple sources, including genetic, environmental, and experimental components. By characterizing the contribution of different sources of variation to the measured phenotypes, we aim to facilitate the design and interpretation of future biomedical studies employing exploratory and multiplexed technologies. Biometrical genetic modelling of twin or other family data can thus be used to decompose the variation underlying a phenotype into biological and experimental components.
Results
Using antibody suspension bead arrays and antibodies from the Human Protein Atlas, we study unfractionated serum from a longitudinal study on 154 twins. We provide a detailed description of how the variation in a molecular phenotype (protein profile) can be decomposed into familial (i.e. genetic and common environmental), individual environmental, short-term biological, and experimental components. Across the 69 antibodies analyzed in the study, the median proportion of the total variation explained by familial sources is 12% (IQR 1-22%), and the median proportion attributable to experimental sources is 63% (IQR 53-72%).
Conclusion
The variability analysis of antibody arrays highlights the importance of considering variability components and their relative contributions when designing and evaluating studies for biomarker discovery with exploratory, high-throughput and multiplexed methods.


Introduction
There is an enormous unmet need for biomarkers to characterize disease type, status, progression, and response to therapy. A biomarker, defined as a physical sign or laboratory measurement that serves as an indicator of biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention [1,2], can potentially be used for screening, prognosis, monitoring response to treatment, and detection of recurrent disease. Technological advances in the different fields of life science have enabled the generation of data from high-throughput screening experiments, which can then be used to identify novel biomarker molecules.
Proteomics, the large-scale study of the protein content of cells, organs or organisms, offers the potential to evaluate global changes in protein expression, protein interaction patterns and post-translational modifications that take place in response to normal or pathological stimuli. DNA microarray analysis permits the expression of thousands of genes to be monitored simultaneously, but RNA expression has been found to correlate only modestly (about one third) with the corresponding actual protein content [3]. Proteomic research is therefore essential, as proteins within the cell provide a structural and functional framework by producing energy and enabling communication, movement, and reproduction [4].
Recent technological advances in the fields of genomics and proteomics have also brought about a new statistical research area, the analysis of data from high-throughput screening experiments. Using protein microarray technology, the profiles of thousands of proteins can be monitored in a high-throughput manner with the aim of selecting a subset of proteins that could be characterised as potential biomarkers, creating possibilities for diagnosis, prognosis and disease progression monitoring [5]. When proteomics emerged, scientists hoped that available technologies would help them sift through the proteome of blood and tissues to identify profiles of proteins that indicated the presence of a disease. Researchers could then use these molecular signatures as biomarkers to diagnose diseases in their formative stages and therefore be able to tailor appropriate intervention and treatment. However, one problem is that the most preferred sample types, serum and plasma, are highly complex, with a protein composition in which 99% of the total protein mass is covered by only about 20 proteins. Even though all proteins are likely to appear in blood at a given point in time, due to function, secretion or leakage, their levels can vary by more than 10 orders of magnitude, with abundant proteins masking the presence of rare ones. Among the proteomic approaches applied in the quest for novel biomarkers today, mass spectrometry, often in combination with 2D-gel electrophoresis or chromatographic separation, is still the most commonly used technique in the field [6], even though the technique is limited with regard to new diagnostics [7]. Affinity-based alternatives are on the rise to enable proteome-wide analysis, and initiatives have been set up to produce, validate and offer a larger number of validated affinity reagents [8].
Platforms such as microarrays provide solutions for the implementation of larger sets of affinity reagents and the analysis of multiple parameters on a minute amount of sample within a single experiment [9]. Among the immunoassay platforms that employ directly labelled samples [10][11][12], antibody suspension bead arrays can be used to perform highly multiplexed protein profiling in a large number of serum samples [13].

Utility of twin studies
The phenotype of an individual might change as a result of complex, dynamic interactions between an individual's genome and environmental factors. To characterize and quantify the contributions that genes, the shared and the individual-specific environment, and the interactions of these make to human complex traits and phenotypes, we use data from relatives who grow up in similar environments but are of differing genetic relatedness (the so-called 'twin design'). For a long time, twin studies [14] have been a valuable source of information about the genetic basis of complex traits. Twins provide a powerful design for inferring genetic effects, as they are blocked for in utero, dietary and socio-economic effects due to common upbringing. Identical/monozygotic (MZ) twins derive from a single fertilized egg and therefore inherit identical genetic material, while non-identical/dizygotic (DZ) twins share on average 50% of their genetic material. In a classical twin study the assumption is that MZ and DZ twins share the same degree of similarity in their environments, and therefore one compares phenotypic concordance within MZ twin pairs to the concordance within DZ twin pairs. Comparing the resemblance of MZ twins for a trait or disease with the resemblance of DZ twins offers a first estimate of the extent to which genetic variation determines phenotypic variation of the trait or disease. The sources of variation commonly considered in genetics are additive genetic influences (the sum of the individual effects of all loci that influence the phenotype), dominant genetic influences (interactions between alleles at loci that influence the phenotype), common environment (influences shared by family members, e.g. socioeconomic status) and individual environment (influences that cause differences among members of one family, e.g. dietary and lifestyle choices that are not shared by other family members). Two important population parameters are heritability and familiality.
The narrow-sense heritability is the proportion of phenotypic variation within a population attributable to additive genetic effects. Familiality is the proportion of phenotypic variation within a population attributable to genetic (additive and dominant) and common environmental effects.
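These two definitions can be written compactly. Writing Var(A), Var(D) and Var(C) for the additive genetic, dominant genetic and common environmental variance components and Var(P) for the total phenotypic variance, a minimal sketch is (the symbol f for familiality is our notation, not a standard in the text):

```latex
% Narrow-sense heritability: additive genetic share of phenotypic variance
h^2 = \frac{\mathrm{Var}(A)}{\mathrm{Var}(P)}

% Familiality: genetic (additive and dominant) plus common environmental share
f = \frac{\mathrm{Var}(A) + \mathrm{Var}(D) + \mathrm{Var}(C)}{\mathrm{Var}(P)}
```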
In the following, we investigate the variance of exploratory affinity arrays using a longitudinal twin study.

Results
Important attributes of discovered biomarkers, in addition to being associated with a disease, are that they can be measured with precision (low variance) and exhibit relatively low amounts of short-term variability relative to the effect size of the difference between healthy and diseased individuals [15]. For antibody-based proteomics, antibodies that have high proportions of variability attributable to familiality and individual environment and have relatively small technical variability are therefore potentially more valuable for use in biomarker discovery.

Pre-processing
For this study the measured relative quantity of interest is the intensity of fluorescence emitted and measured when antibodies immobilized onto beads capture their target protein from a sample (see Figure 1). The intensity levels are influenced by a number of underlying factors besides the antigen's abundance. These are antibody-antigen kinetics and affinities, the number of accessible binding sites on both the immobilized antibodies and the labelled antigens, the number of biotin molecules incorporated per antigen during the labelling reaction, as well as complex formation introducing additional biotin molecules carried by proteins attached to the antigen of interest. Therefore the variation in the measured intensity levels will be influenced by both biological and experimental factors, including sample preparation, incubation and read-out.
Molecular phenotypes can be noisy, context dependent, and sensitive to laboratory, equipment and experimental conditions. On the current platform, the fluorescence intensities obtained on each sample can in general not be compared directly due to variations in sample treatment, labelling and detection; thus appropriate pre-processing is needed. Plots of the signal intensities of each antibody across the randomized samples (Figure 2A and Additional File 1) show that for some of the antibodies, such as apoh-HPA001654, renbp-HPA000428, apoh-HPA003732 and icam1-HPA004877, signal intensities tended to decrease from plate well 1 through plate well 96, suggesting an 'intensity-drift effect'. This could be introduced during sample preparation, by modification of the potential binding site during sample labelling, by slight bleaching of the fluorophore of the reporter protein, or by the dissociation of antibody-antigen complexes over the time during which the samples are being measured.
The data were normalized using probabilistic quotient normalization (PQN) [16]. To assess the efficacy of the normalization we looked at the within-sample concordance before and after normalization, by computing the Spearman correlation between the 48 pairs of aliquot replicates for each antibody in the data set. See the appendix for details on the correlation model and an example in Figure 3. Across the 69 antibodies, the median Spearman correlation before normalization was 0.26 (IQR 0.19-0.36), and after normalization it was 0.36 (IQR 0.24-0.44). These results show that normalization improves the within-sample concordance.
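As an illustration, probabilistic quotient normalization can be sketched in a few lines of NumPy. Using the median spectrum as the reference is a common choice and an assumption here, not a prescription from [16]:

```python
import numpy as np

def pqn(X, ref=None):
    """Probabilistic quotient normalization (PQN).

    X: samples x features matrix of positive intensities.
    Each sample is divided by the median of its feature-wise
    quotients against a reference spectrum (default: the
    median across samples), the most probable dilution factor.
    """
    X = np.asarray(X, dtype=float)
    if ref is None:
        ref = np.median(X, axis=0)           # reference spectrum
    quotients = X / ref                      # feature-wise quotients per sample
    dilution = np.median(quotients, axis=1)  # estimated dilution per sample
    return X / dilution[:, None]
```

Applied to two aliquots of the same sample that differ only by a dilution factor, PQN maps both onto the same profile, which is exactly the within-sample concordance the normalization is meant to improve.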

Outliers
Using singular value decomposition (SVD) [17] we projected the 69-dimensional antibody space into a two-dimensional space (two principal components). Prior to performing SVD on the data matrix, we scaled the data of each variable (antibody) to have a mean of 0 and a standard deviation of 1. Plots of the two principal components are shown in Figure 4. As can be seen in the figure, most of the samples cluster together; however, there are some outlying samples. For each of the sample points in the figures, the Mahalanobis [18] distance from the origin was calculated. Assuming that the principal components have a bivariate normal distribution, the Mahalanobis distance of each sample was compared to a chi-squared distribution with 2 degrees of freedom. A sample was defined as an outlier if it had a Mahalanobis distance greater than 18.42 (the critical value at the 0.0001 level of significance). Based on the above criteria, the data set contains 6 out of 270 samples classified as outliers, highlighted with numbers in Figure 4.
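The outlier rule above can be sketched as follows; this is a minimal NumPy/SciPy illustration, not the exact code used in the study:

```python
import numpy as np
from scipy.stats import chi2

def pca_outliers(X, alpha=1e-4):
    """Flag outliers in the first two principal components.

    X: samples x antibodies matrix. Each column is scaled to
    mean 0 and SD 1; SVD yields the principal-component scores;
    samples whose squared Mahalanobis distance from the origin
    exceeds the chi-squared critical value (2 df) are flagged.
    """
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, s, _ = np.linalg.svd(Xs, full_matrices=False)
    scores = U[:, :2] * s[:2]                       # first two PCs
    inv_cov = np.linalg.inv(np.cov(scores, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', scores, inv_cov, scores)
    cutoff = chi2.ppf(1 - alpha, df=2)              # ~18.42 for alpha = 1e-4
    return d2 > cutoff
```

Note that `chi2.ppf(1 - 1e-4, df=2)` reproduces the 18.42 cutoff used in the text.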

Variance decomposition
Plots of the data after normalization, removal of outliers and Box-Cox transformations are presented in Figure 2B and Additional File 2. Further, Table 1 shows for each antibody the AIC obtained from fitting models 1 and 2 (see Modelling and Estimation in the Materials and Methods section) on these data. As can be seen in the table, for some antibodies model 1 had a better fit (smaller AIC), whilst for others model 2 had a better fit. This shows that for some antibodies, including a linear plate-specific intensity-drift effect modelled the data better. Subsequently, for each antibody, model 1 or 2 (whichever had the smaller AIC) was used to obtain maximum likelihood estimates of the fixed effects and the variance components.

[Figure 3 caption fragment: to check the within-sample concordance we look at correlations between the 6 pairs of 3-dimensional vectors; that is, for each of the antibodies B1, B2 and B3 we determine the correlation between the black and light-grey vectors, and similarly for antibodies B2, B4 and B6 we look at the correlation between the white and dark-grey vectors.]
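The AIC-based model choice can be illustrated with a toy calculation; the log-likelihoods and parameter counts below are purely hypothetical:

```python
def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2*log-likelihood (smaller is better)."""
    return 2 * n_params - 2 * loglik

# hypothetical fits for one antibody: model 1 (plate means only)
# versus model 2 (plate means plus a linear intensity-drift term)
ll1, k1 = -152.3, 10
ll2, k2 = -148.9, 13
chosen = 1 if aic(ll1, k1) < aic(ll2, k2) else 2
```

Here the drift term costs three extra parameters but improves the log-likelihood enough that model 2 is selected; for other antibodies the penalty outweighs the gain and model 1 wins.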
For each antibody, the percentage of total variance attributable to familiality (fam), individual environment (env), common visit (cv), individual visit (iv) and residual experimental effects (exp) is presented in Table 2 (rows are ordered by decreasing familiality). In Table 2, we observe that for many proteins the variances of the factors other than experimental variance are estimated to be zero. In general, the maximum likelihood (ML) estimators of the variance components often lie on the boundary of the parameter space, i.e. close to zero. This can result from (a) the underlying "true" values of the variances being relatively close to zero and/or (b) there being less information about these variance components, implying that the ML estimators have relatively high variance. Figure 5A presents bar plots of the decomposition of the total phenotypic variance for each antibody after normalization, removal of outliers and Box-Cox transformations. The graph in Figure 5B summarizes the decomposition of total phenotypic variance across all antibodies. As can be seen from these figures, most of the antibody-derived intensities have a high residual (experimental) variance, suggesting that much of the variance is attributable to experimental sources and sample complexity. Across the 69 antibodies, the median proportion of total phenotypic variance attributable to familiality is 12% (IQR 1-22%); the median proportion attributable to experimental sources is 63% (IQR 53-72%). Combining familiality and individual environment gives the proportion of variance that is stable over the time period between the two visits during which data were collected. Across the 69 antibodies, the median proportion of total variance attributable to familiality and individual environment is 25% (IQR 14-33%).
The median proportions of total phenotypic variance attributable to individual visit and common visit are 6% (IQR 0-18%) and 0% (IQR 0-3%), respectively.
In what follows, we present summaries of the proportion of non-experimental variance attributable to familiality, individual environment, common visit and individual visit. The boxplots in Figure 5C summarize the decomposition of non-experimental variance across all antibodies. Across the 69 antibodies, the median proportion of non-experimental variance attributable to familial sources is 34% (IQR 3-60%). Across all 69 antibodies, the median proportion of non-experimental variance attributable to familiality and individual environment is 71% (IQR 51-93%). The median proportions of non-experimental variance attributable to common visit and individual visit are 0% (IQR 0-10%) and 16% (IQR 0-40%), respectively, indicating that most of the unstable (short-term dynamic) biological variation is attributable to individual visit effects.

Antibodies for targeting potential biomarkers
Highlighting those HPA antibodies with the most preferred characteristics, we identified five interesting antibodies for which more than 50% of the total phenotypic variance is attributable to stable biological (familial and individual environmental) variability. Among these target proteins were interleukin 12 (il12a), a heterodimeric 70 kDa glycoprotein with implications in cell-mediated immunity [19], targeted by HPA001886 (62%), and angiotensin-converting enzyme 2 (ace2), targeted by HPA000288 (59%). Ace2 is an exopeptidase that catalyses the conversion of angiotensin I or II, and its pharmacological inhibition is associated with protective effects against cardiovascular diseases. Further, ADAMTS-like protein 4 (adamtsl4), targeted by HPA006279 (57%), is a secreted glycoprotein suggested to be involved in cell adhesion and protease regulation [20]. The tissue-type plasminogen activator (plat), targeted by HPA003412 (54%), is a secreted serine protease which converts the proenzyme plasminogen to plasmin, and its up-regulated gene expression has been associated with heart failure [21]. The transforming growth factor beta receptor III (tgfbr3), targeted by HPA008257 (53%), is a glycosylated protein found in both membrane-bound and soluble forms; it is among the most widely distributed TGF-beta family members and is suggested to function as an accessory molecule in the TGF-beta and FGF systems [22]. Interestingly, a strong negative correlation between the profiles of interleukin 12 and tissue-type plasminogen activator was observed (Figure 6). A linkage between these two proteins has been reported before [23], with il12a inducing the activation of fibrinolysis and thrombin generation. It was also shown that plasma levels of plat decrease shortly after il12 administration but increase over time as il12a levels lower.
The results suggest that some antibodies are more suitable and promising than others and that these can be candidates for further use in biomarker discovery studies. Among these, HPA006279 has been shown to reveal interesting differences in plasma protein profiles between metabolic syndrome cases and controls [24]. The identified antibodies should now be investigated further (i) separately, in combination, or supplemented by other candidates in high-throughput screenings across diseases, and (ii) to verify their potential implications in cardiovascular-related disorders by meeting the criteria of clinical test systems (sandwich assays), which will most likely improve precision (reduce variance).

Discussion
In this paper we use a linear mixed-effects model in a unique twin design with duplicated repeated measurements to apportion the total variation of molecular phenotypes (protein profiles) into biological and experimental (technical) variation. Understanding the sources of variation (e.g. familial, individual environmental, and experimental) inherent in the measurement of a molecular phenotype is a key step in assessing the potential for stable, informative biomarkers. We observed that across the 69 antibodies the median proportion of total variation attributable to familial sources was 12%. This familial component is consistent with protein profiles having the potential to reflect the polygenic basis of complex disease. Further, across the 66 HPA antibodies, the median proportion of total variation explained by stable sources (familiality and individual environment) was 25%. Stable variation in the current setting means that a protein profile remains constant within an individual over the course of the sampling period. A small proportion of variation originated from the short-term biological component, individual visit (median proportion 6%). Common visit represented an inconsiderable amount of variation. Most of the variation originated from experimental sources (median proportion 63%). To our current knowledge, the present study is the first to address the key issue of investigating sources of variation in data generated by exploratory antibody microarrays. It should provide important information for designing and utilizing such assays and be valuable for the multiplexed and quality-controlled assays [25] that will become more widely used and accepted for clinical testing. Ultimately, diagnostic tools built on markers discovered via these screening approaches could become valuable for predicting disease state and progression.
As an example, antibodies that allow the comparison of individuals who are discordant for diabetes and diabetes-related clinical traits would be useful for identifying individuals likely to suffer from diabetes in the future, long before conventional diagnostic techniques can prove effective. In this way these affinity-based proteomics discoveries would become useful in clinical settings.
However, most antibodies used in this screening method have a large residual variance, suggesting that a large proportion of variation in the data is experimentally derived. Potential (and non-separable) sources of this experimental variation, which exclude sample collection, preparation and storage, are: (i) the complexity and composition of the serum samples, which has an effect on the assays; (ii) biotin modification of samples, with regard to the number and variability of modifications introduced per molecule and sample; (iii) sample treatment in terms of liquid transfers, heating, and assay buffer dilution; (iv) the assay procedure, with immobilized antibodies selectively capturing aggregated or free molecules from the surrounding solution; (v) the fluorescence-based read-out being influenced by bleaching and dye incorporation onto the target molecules; and (vi) the specificity of antibody binding events. Addressing these issues would reduce the proportion of phenotypic variation arising from experimental sources. This would in turn reduce the sample sizes (or degree of technical replication) required to detect epidemiological effects of interest. A reduction of the experimental variability, possibly achievable by using two antibodies to detect a target protein, would ensure that experimental noise does not swamp biological signals of interest. The technical precision of such measurements can be improved either by addressing the issues outlined above or, in the short term, by assaying samples in technical replicate.
Research in the field of proteomics is advancing, with affinity-based approaches emerging alongside classical mass spectrometric approaches. With array-based proteomics becoming a promising area of biomedical research, decomposition of the underlying variation in protein profiles into biological (both stable and longitudinally fluctuating) and experimental components is an important and useful step in exploring the applicability of antibody arrays for exploration of the proteome. Ultimately, such proteomic strategies may lead to the identification of new disease markers and drug targets, benefitting from the versatility of both the employed affinity reagents and multiplexed techniques.

Materials and methods
In this paper we longitudinally collect serum samples from a cohort of 77 twin pairs (56 MZ and 21 DZ pairs) to explore the sources of variation underlying antibody array-based protein phenotypes, with the aim of aiding the design and interpretation of future proteomic research. We use a mixed-effects modelling approach to obtain estimates of the variance components.
This is a longitudinal family study whose design involves duplicate measurements over time in a sample that includes related individuals (twins). Consequently it combines the features of a longitudinal study (the power to study the pattern of changes in a trait over time) with the features of a cross-sectional twin study (which allows one to estimate the variation in a trait attributable to familial sources). The study design allows us to break down the observed molecular phenotypic variance into four biological components. These are familiality (the combined effects of genetics and common environment) and individual environment, both of which are longitudinally stable biological components. The two short-term biological components are common visit and individual visit, which respectively measure the amount of shared (by twins within a pair) and non-shared longitudinal variation observed over the sampling period of the present study (i.e. several months).

Subject recruitment
A total of 154 healthy, postmenopausal, female twins, comprising 56 MZ and 21 DZ pairs, were ascertained from the TwinsUK database at St. Thomas' Hospital (http://www.twinsUK.ac.uk) and invited to participate in this study. Recruitment of the twins was as follows: eligible twins were sent an information sheet containing details of the study as well as two consent forms. If the twins agreed to participate, they signed and sent one copy of their consent forms to the administration team at the Department of Twin Research and Genetic Epidemiology (DTR), King's College London. Once a consent form was received, a member of the administration team would contact the twins by letter and phone to book their appointment. In addition, 68 twins (34 MZ pairs) were asked to re-attend once they had successfully completed the first visit, with the time between the two visits ranging from 63 to 99 days (26 twins), 104 to 140 days (34 twins) and 150 to 216 days (8 twins). At each visit to the clinic, the twins attended in pairs.
Fasting blood samples were collected from all selected twins at one or both visits, rendering a total of 222 samples, with samples from individuals of a twin pair taken on the same day. These were to be used for the study of biological and technical variability across various molecular phenotyping platforms, focusing on the unique opportunities afforded by studies in a large set of clinically well-characterised monozygotic and dizygotic twins. These studies were all part of the Molecular Phenotyping to Accelerate Genomic Epidemiology (MolPAGE) project. For the MolPAGE project, eligible volunteers were healthy Caucasian females of North European descent, aged 45-76 years. The study was approved by St. Thomas' Hospital Research Ethics Committee (EC04/015 Twins UK). The experiments and subsequent analyses presented in this paper are based on serum samples extracted from the fasting blood samples.

Antibodies, Bead Coupling, Sample preparation and Assay procedure

Antibodies
Protocols for antigen selection, cloning, protein expression, immunization of rabbits, and affinity purification to yield polyclonal antibodies were performed as described previously [26,27]. All protein fragments used for immunization were produced with a tag and a target protein part of, on average, 120 amino acids; in general these fragments had a size of approximately 30 kDa. Antibodies from the Human Protein Atlas (HPA) were selected as described elsewhere [24] (see also Table 1), and 66 were involved in the study, all tested on protein arrays for specificity [28]. In addition, rabbit IgG (Jackson ImmunoResearch), a human serum albumin binding protein (Affibody AB) and an anti-IgA antibody (Dako Cytomation) were also included but are not highlighted in further detail.

Bead coupling
Antibodies were coupled to magnetic carboxylated beads (MagPlex Microspheres, Luminex Corp.) in accordance with the manufacturer's protocol with minor modifications. For each antibody, 3.2 μg were coupled to 10^6 beads in a 96-well half-area microtiter plate (Greiner BioOne), and beads were allowed to sediment for 30 s on a magnetic bead separator (LifeSep™, Dexter Magnetic Technologies Inc.) prior to the removal of supernatants. For an albumin binding Affibody® molecule, 4 μg were coupled. Beads were stored in a protein-containing buffer (Blocking Reagent for ELISA, Roche) with NaN3. All coupled beads were resuspended by sonication in an ultrasonic cleaner (Branson Ultrasonic Corporation) for 5 min prior to storage at 4°C. All antibody-coupled beads were counted using the Luminex LX200 instrument, and the coupling efficiency for each antibody was determined via an R-Phycoerythrin-labelled anti-rabbit IgG antibody (Jackson ImmunoResearch). A bead mixture was created in solution and optimized as previously described [29].

Sample labelling
Protocols for sample labelling and assay procedure were based on earlier work [13]. First, samples were thawed at room temperature, centrifuged for 10 min at 10,000 rpm, transferred into a microtiter plate (Abgene) and heat treated at 56°C for 30 min in a thermocycler (Biorad). The plates were centrifuged (1 min at 3,000 rpm) and 3 μl of each sample was added to 24.5 μl 1 × PBS with a liquid handler (Plate mate 2 × 2, Matrix). The N-hydroxysuccinimidyl ester of biotinoyl-tetraoxapentadecanoic acid (NHS-PEO4-Biotin, Pierce) was then added at 10-fold molar excess, yielding an overall 1/10 sample dilution, followed by incubation for 2 hours at 4°C in a microtiter plate shaker (Thermomixer, Eppendorf). The reaction was stopped by the addition of Tris-HCl, pH 8.0, at a 250-fold molar excess over biotin and incubated for another 20 min at 4°C prior to final storage at -80°C.

Assay procedure
All samples were used without removing unincorporated biotin and diluted 1/50 with a liquid handler in an assay buffer composed of 0.5% (w/v) polyvinylalcohol and 0.8% (w/v) polyvinylpyrrolidone (Sigma) in 0.1% casein in PBS (PVXC), supplemented with 0.5 mg/ml non-specific rabbit IgG (Bethyl). A second heat treatment was performed as above, and 45 μl of sample were added to 5 μl of bead mixture in a microtiter plate (Greiner BioOne). Incubation took place overnight on a shaker at ambient temperature. Beads were washed in wells with 3 × 50 μl PBST (1 × PBS pH 7.4, 0.1% Tween20) on a magnetic bead separator with a liquid handling system, followed by a 10 min incubation with 50 μl of a stop solution containing 0.1% paraformaldehyde in PBS. Beads were washed with 1 × 50 μl PBST, and 30 μl of 0.5 μg/ml R-Phycoerythrin-labelled streptavidin (Invitrogen) in PBST were added and incubated for 20 min. Finally, beads were washed with 3 × 50 μl PBST and measured in 100 μl PBST. Measurements were performed using a Luminex LX200 instrument with Luminex xPONENT software, counting at least 50 events per bead ID and displaying binding events as median fluorescence intensity. A summary of the experimental workflow is presented in Figure 1. In what follows we will refer to each measured proteomic phenotype in the study as an "antibody".
Of the 222 serum samples obtained from fasting blood, 48 samples from 24 MZ twin pairs were split into two aliquots (which usefully allows assessment of technical replicability) rendering an additional 48 sample aliquots.
The resulting 270 sample aliquots were then randomized and placed onto three 96-well plates, where each plate contained an extra 6 reference samples. Subsequently a total of 69 antibodies were utilized to render a 270 by 69 matrix of protein (antibody) array data.
Statistical analysis - Data pre-processing, Modelling and Estimation

Data pre-processing
Probabilistic quotient normalization (PQN) [16] was used to pre-process the data. In metabonomics, PQN has been used as a robust method to account for dilution of complex biological mixtures, and it performs well in compensating for different dilutions across samples. We found this normalization method to increase a correlation-based measure of technical reproducibility across two aliquots of the same sample (see Results).

Outliers
Intensity distributions of the antibodies after normalization using PQN were investigated to identify samples that were potential outliers. To detect potential outliers we used singular value decomposition [17] to project the data into a two-dimensional space. Projecting the data onto the space spanned by the first two principal-component loading vectors is a commonly used technique for detecting anomalous observations whose extreme behaviour explains a considerable proportion of the variance in the data.

Modelling and Estimation
All subsequent analyses were done on data that had been normalized using PQN and had undergone removal of outlying samples. Further, a Box-Cox transformation [17] was applied to the data from each antibody. This transformation brought the distribution of the data closer to a Gaussian, improving the fit of the Gaussian-based mixed-effects model to the data.
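The Box-Cox power transform, and one common way of choosing its power λ by profile likelihood, can be sketched as below. This is an illustration only; the study does not specify how λ was selected, and the grid search here is an assumption:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform for strictly positive y:
    (y**lam - 1)/lam for lam != 0, and log(y) for lam == 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y**lam - 1.0) / lam

def boxcox_loglik(y, lam):
    """Profile log-likelihood of lam under a Gaussian model,
    including the Jacobian term (lam - 1) * sum(log y)."""
    y = np.asarray(y, dtype=float)
    z = boxcox(y, lam)
    n = len(y)
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

rng = np.random.default_rng(1)
y = np.exp(rng.normal(size=500))   # lognormal data: log is the ideal transform
grid = np.linspace(-2, 2, 81)
best = grid[np.argmax([boxcox_loglik(y, l) for l in grid])]
```

For lognormal data the selected power lands near λ = 0, i.e. the log transform, which is the intended behaviour: the transform pulls a skewed intensity distribution toward a Gaussian.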

Variability analysis
For each molecular phenotype (antibody) we used a statistical model to quantify the biological and experimental variation inherent in the phenotype. As mentioned before, the aim was to partition the total variability in a molecular phenotype into familial (additive genetic, dominant genetic and common environmental), individual environmental, common visit, individual visit and experimental components. Note that familial and individual environmental variability together make up stable biological variability.
As mentioned previously, the measured quantity of interest was the intensity of fluorescence emitted when an immobilised antibody captured an antigen in a sample. For each antibody, let Y_ijkl denote the normalized and transformed intensity of the l-th aliquot replicate (1, 2) at the k-th visit (1, 2) of the j-th twin (1, 2) from the i-th twin pair (1, 2, ..., 77). The data for each antibody were modelled using a linear mixed-effects model [30-32]:

Y_ijkl = μ_p(i,j,k,l) + A_ij + D_ij + C_i + E_ij + W_ik + V_ijk + ε_ijkl    (1)

where μ_p(i,j,k,l) is the mean intensity for samples on plate p (the function p(i,j,k,l) maps sample aliquots to plates), A is the additive genetic effect, D is the dominant genetic effect, C is the common environment effect, E is the individual or unique environment effect, W is the common visit effect, V is the individual visit effect, and ε represents the residual experimental effects. The plate mean μ_p(i,j,k,l) is a fixed effect, while A, D, C, E, W, and V are random effects. It is assumed that the random effects are additive and independent. This implies that the total phenotypic variance can be decomposed as

Var(Y) = Var(A) + Var(D) + Var(C) + Var(E) + Var(W) + Var(V) + Var(ε).

For twin data, the variances Var(A), Var(C) and Var(D) are not all identifiable. However, familiality is an estimable function of these variance parameters.
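Because the random effects are assumed independent, the component variances add up to the total phenotypic variance. A quick numerical check of this additivity, using hypothetical component standard deviations (not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Illustrative component SDs for A, D, C, E, W, V and the residual;
# these values are invented for the demonstration only.
sd = {"A": 0.3, "D": 0.1, "C": 0.2, "E": 0.4, "W": 0.2, "V": 0.3, "eps": 0.8}

# Draw each component independently and sum them, as in model (1)
components = {name: rng.normal(0.0, s, n) for name, s in sd.items()}
Y = sum(components.values())

expected = sum(s**2 for s in sd.values())   # sum of component variances
empirical = Y.var()                         # variance of the summed phenotype
```

With independent components the empirical variance of the sum matches the sum of the component variances up to Monte Carlo error, which is exactly the decomposition the variability analysis exploits.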
For each antibody, maximum likelihood (ML) estimates of the variance components were obtained by fitting model (1) with the lmer function from the lme4 package in R [33], after normalization, removal of outliers and Box-Cox transformation. The model was reduced prior to fitting: because the variances of A, C and D are not jointly identifiable, we re-parameterized the three non-identifiable variances Var(A), Var(C) and Var(D) by defining two new random effects H and M (whose variances are identifiable) such that

Var(H) = (1/2) Var(A) + (1/4) Var(D) + Var(C)
Var(M) = (1/2) Var(A) + (3/4) Var(D).

Note that if x_1 and x_2 are the phenotype measurements of two monozygotic (MZ) twins, then the covariance matrix of the measurements is such that Var(x_1) = Var(x_2) = Var(H) + Var(M) + Var(E) and Cov(x_1, x_2) = Var(H) + Var(M). Similarly, if x_1 and x_2 are the phenotype measurements of two dizygotic (DZ) twins, then the covariance matrix of the measurements is such that Var(x_1) = Var(x_2) = Var(H) + Var(M) + Var(E) and Cov(x_1, x_2) = Var(H). A similar approach has previously been used, and elaborated on, for metabolomics data [34].
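The twin covariances implied by this re-parameterization can be verified arithmetically. The ACDE variance values below are hypothetical, chosen only to exercise the identities:

```python
# Hypothetical ACDE variances, for illustration only
var_A, var_D, var_C = 0.30, 0.08, 0.12

# Re-parameterized identifiable variances
var_H = 0.5 * var_A + 0.25 * var_D + var_C   # shared by all co-twins
var_M = 0.5 * var_A + 0.75 * var_D           # extra sharing within MZ pairs

# Implied within-pair covariances
cov_MZ = var_H + var_M                       # MZ twins share A, D and C fully
cov_DZ = var_H                               # DZ twins share A/2, D/4 and C
```

The MZ covariance recovers Var(A) + Var(D) + Var(C), i.e. familiality, while the DZ covariance recovers the classical (1/2)Var(A) + (1/4)Var(D) + Var(C), so no information is lost by fitting H and M in place of A, C and D.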
The PQN pre-processing procedure that we applied should mitigate any systematic differences between measurements in different wells introduced by instrumental time drift from plate well 1 through to plate well 96. However, to investigate whether any substantial intensity-drift effects remained after PQN, we also fit a second model (hereafter referred to as model 2) by introducing a linear, plate-specific well-location term into model 1, where well location is a number between 1 and 96 representing the position of a sample in the plate wells. That is, in model 2, for each antibody we fit a linear mixed-effects model with linear, plate-specific intensity-drift fixed effects and the random effects A, D, C, E, W, and V as before. Akaike's information criterion (AIC) [35,36] was then used to compare the fits of models 1 and 2. The AIC is a measure of the goodness of fit of an estimated statistical model, defined as

AIC = 2k - 2 ln(L),

where k is the number of parameters in the model and L is the maximized value of the likelihood function for the estimated model. For a given data set, several competing models may be ranked according to their AICs, and the model with the smallest AIC is chosen as the one that fits the data best. The pre-processed data and R script used to perform the variability analysis can be found as Additional Files 3 and 4.
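A minimal illustration of ranking the two models by AIC; the parameter counts and maximized log-likelihoods below are hypothetical, chosen so that model 2's three extra drift slopes (one per plate) do not pay for themselves:

```python
def aic(k, log_likelihood):
    """Akaike's information criterion: AIC = 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical fits: model 2 adds 3 plate-specific drift slopes but
# barely improves the maximized log-likelihood.
aic1 = aic(10, -152.0)   # model 1: no drift terms
aic2 = aic(13, -151.2)   # model 2: with plate-specific drift terms
better = "model 1" if aic1 < aic2 else "model 2"
```

Here the small likelihood gain is outweighed by the 2k penalty, so model 1 has the smaller AIC and would be retained, consistent with the conclusion that PQN had already absorbed most of the intensity drift.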

Correlation model
Consider a proteomics platform which measures the abundance of p proteins, indexed by k ∈ {1, ..., p}. Suppose that there are n samples, indexed by i ∈ {1, ..., n}, each measured in technical duplicate, with duplicates indexed by j ∈ {1, 2}. A model for the observed data is

y^(k)_ij = x^(k)_i + ε^(k)_ij,

where: • y^(k)_ij is the abundance of the kth protein as measured in the jth replicate of the ith biological sample