Assessing gene expression quality in Affymetrix microarrays

Outline
• The Affymetrix platform for gene expression analysis
• Affymetrix recommended QA procedures
• The RMA model for probe intensity data
• Application of the fitted RMA model to quality assessment

The Affymetrix platform for gene expression analysis

Probe selection
Probes are 25-mers selected from a target mRNA sequence. 5-50K target fragments are interrogated by probe sets of 11-20 probes. Affymetrix uses PM and MM probes.

Oligonucleotide Arrays
[Figure: GeneChip probe array schematic – hybridized probe cell with single-stranded, labeled RNA target and oligonucleotide probes; 1.28 cm array, 18 µm features, 10^6-10^7 copies of a specific oligonucleotide probe per feature, >450,000 different probes; image of hybridized probe array. Compliments of D. Gerhold]

Obtaining the data
• RNA samples are prepared, labeled, and hybridized with arrays; arrays are scanned and the resulting image analyzed to produce an intensity value for each probe cell (>100 processing steps)
• Probe cells come in (PM, MM) pairs, 11-20 per probe set, representing each target fragment (5-50K)
• Of interest is to analyze probe cell intensities to answer questions about the sources of RNA – detection of mRNA, differential expression assessment, gene expression measurement

Affymetrix recommended QA procedures

Pre-hybe RNA quality assessment
Look at gel patterns and RNA quantification to determine hybe mix quality. QA at this stage is typically meant to preempt putting poor quality RNA on a chip, but loss of valuable samples may also be an issue.

Post-hybe QA: Visual inspection of image
• Biotinylated B2 oligonucleotide hybridization: check that checkerboard, edge and array name cells are all OK
• Quality of features: discrete squares with pixels of slightly varying intensity
• Grid alignment
• General inspection: scratches (ignored), bright SAPE residue (masked out)

Checkerboard pattern

Quality of features

Grid alignment

General inspection

MAS 5 algorithms
• Present calls: from the results of a Wilcoxon's signed rank test based on (PMi - MMi)/(PMi + MMi) - τ, for small τ (~0.015), i.e. is PM - MM > τ(PM + MM)?
• Signal: a robust (one-step Tukey biweight) average of log2(PMi - MMi*) values, where MMi* is an idealized mismatch adjusted to always be less than PMi
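A minimal sketch of how such a detection call might be computed, assuming per-probe-set PM/MM vectors. The function name is illustrative; the 0.04/0.06 p-value cutoffs follow commonly quoted MAS 5 defaults, and SciPy's signed rank test stands in for Affymetrix's exact implementation:

```python
# Sketch of a MAS5-style detection call for one probe set (illustrative,
# not the reference implementation).
import numpy as np
from scipy.stats import wilcoxon

def detection_call(pm, mm, tau=0.015, alpha1=0.04, alpha2=0.06):
    """Return 'P', 'M' or 'A' for one probe set."""
    r = (pm - mm) / (pm + mm)           # discrimination scores
    # One-sided Wilcoxon signed rank test of H0: median(r) <= tau
    stat, p = wilcoxon(r - tau, alternative="greater")
    if p < alpha1:
        return "P"                      # present
    elif p < alpha2:
        return "M"                      # marginal
    return "A"                          # absent

pm = np.array([1200., 900., 1500., 800., 1100., 950., 1300., 1000., 870., 1250., 990.])
mm = np.array([ 400., 500.,  600., 450.,  380., 520.,  410., 470., 430.,  390., 480.])
print(detection_call(pm, mm))           # -> 'P' for this toy probe set
```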

Post-hybe QA: Examination of quality report
• Percent present calls: Typical range is 20-50%. Key is consistency.
• Scaling factor: Target/(2% trimmed mean of Signal values). No range. Key is consistency.
• Background: average of cell intensities in lowest 2%. No range. Key is consistency.
• Raw Q (Noise): Pixel-to-pixel variation among the probe cells used to calculate the background. Between 1.5 and 3.0 is OK.
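A sketch of how two of these report quantities could be computed from a chip's values; the function names are illustrative, and the default Target of 500 is an assumption (a commonly used MAS 5 target intensity):

```python
# Illustrative computation of the scaling factor and background from the
# slide's definitions; names and the Target default are assumptions.
import numpy as np
from scipy.stats import trim_mean

def scaling_factor(signal, target=500.0):
    # Target divided by the 2% trimmed mean of the Signal values
    return target / trim_mean(signal, proportiontocut=0.02)

def background(intensities):
    # Average of the lowest 2% of probe cell intensities
    cutoff = np.quantile(intensities, 0.02)
    return intensities[intensities <= cutoff].mean()

rng = np.random.default_rng(0)
cells = rng.exponential(200.0, size=100_000) + rng.normal(90.0, 10.0, size=100_000)
print(scaling_factor(cells), background(cells))
```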

Examination of spikes and controls
• Hybridization controls: bioB, bioC, bioD and cre, from E. coli and P1 phage respectively
• Unlabelled poly-A controls: dap, lys, phe, thr, trp from B. subtilis. Used to monitor wet lab work.
• Housekeeping/control genes: GAPDH, Beta-Actin, ISGF-3 (STAT1): 3' to 5' signal intensity ratios of control probe sets

How do we use these indicators for identifying bad chips?
We illustrate with 17 chips from a large publicly available data set from St. Jude Children's Research Hospital in Memphis, TN.

Hyperdip_chip A - MAS 5 Quality Report
#12 bad in Noise, Background and Scale Factor. #14? #8? C11? C13-15? C16-C4? C8? R4? Only C6 passes all tests. Conclusion?

Limitations of Affymetrix QA/QC procedures
• Assessments are based on features of the arrays which are only indirectly related to the numbers we care about – the gene expression measures.
• The quality of data gauged from spike-ins requiring special processing may not represent the quality of the rest of the data on the chip. We risk QCing the chip QC process itself, but not the gene expression data.

New quality measures
Aim: To use QA/QC measures directly based on expression summaries and that can be used routinely.
To answer the question "are chips different in a way that affects expression summaries?" we focus on residuals from fits of probe intensity models.

The RMA model for probe intensity data

Summary of Robust Multi-chip Analysis
• Uses only PM values
• Chips analysed in sets (e.g. an entire experiment)
• Background adjustment of PM made
• These values are normalized
• Normalized bg-adjusted PM values are log2-transformed
• A linear model including probe and chip effects is fitted robustly to the probe × chip array of log2 N(PM - bg) values, where N denotes normalization
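The normalization step in RMA is quantile normalization (Irizarry et al. 2003, reference 4 below); a minimal sketch on a probes × chips matrix, ignoring tie handling, with illustrative names:

```python
# Minimal sketch of quantile normalization: force every chip (column) to
# share the same intensity distribution by averaging across sorted columns.
import numpy as np

def quantile_normalize(x):
    """x: probes x chips matrix; returns the quantile-normalized matrix."""
    order = np.argsort(x, axis=0)                     # per-chip sort order
    ranks = np.argsort(order, axis=0)                 # rank of each probe within its chip
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)  # reference distribution
    return mean_quantiles[ranks]

rng = np.random.default_rng(1)
pm = rng.lognormal(mean=6.0, sigma=1.0, size=(1000, 4)) * np.array([1.0, 1.3, 0.8, 1.1])
normed = quantile_normalize(pm)
print(normed.mean(axis=0))                            # columns now share one distribution
```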

The ideal probe set (Spikeins.Mar S5B)

The probe intensity model
On a probe set by probe set basis (fixed k), the log2 of the normalized bg-adjusted probe intensities, denoted by Ykij, are modelled as the sum of a probe effect pki, a chip effect ckj, and an error εkij:

Ykij = pki + ckj + εkij

To make this model identifiable, we constrain the sum of the probe effects to be zero. The pki can be interpreted as probe relative non-specific binding effects. The parameters ckj provide an index of gene expression for each chip.

Least squares vs robust fit
Robust procedures perform well under a range of possible models and greatly facilitate the detection of anomalous data points.
Why robust?
• Image artifacts
• Bad probes
• Bad chips
• Quality assessment

M-estimators (a one slide caption)
One can estimate the parameters of the model as solutions to

min Σij ρ(Ykij - pki - ckj)

where ρ is a symmetric, positive-definite function that increases less rapidly than x^2. One can show that solutions to this minimization problem can be obtained by an IRLS procedure with weights w(u) = ψ(u)/u, where ψ = ρ'.

Robust fit by IRLS
At each iteration:
rij = Yij - current est(pi) - current est(cj)
S = MAD(rij), a robust estimate of the scale parameter
uij = rij/S, standardized residuals
wij = w(|uij|), weights to reduce the effect of discrepant points on the next fit
Next step estimates are:
est(pi) = weighted row i mean - overall weighted mean
est(cj) = weighted column j mean
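A compact sketch of this IRLS fit for one probe set, using the Huber weights shown on the next slide; names are illustrative, and this is an illustration of the procedure rather than the reference RMA/affyPLM implementation:

```python
# IRLS fit of Y = p + c + error for one probe set, with Huber weights.
import numpy as np

def huber_weight(u, k=1.345):
    au = np.abs(u)
    return np.where(au <= k, 1.0, k / np.maximum(au, 1e-12))

def irls_fit(y, n_iter=20):
    """y: probes x chips matrix of log2 normalized bg-adjusted PM values.
    Returns (probe effects p, chip effects c, weights w)."""
    p = np.zeros(y.shape[0])
    c = y.mean(axis=0)
    w = np.ones_like(y)
    for _ in range(n_iter):
        r = y - p[:, None] - c[None, :]                   # residuals
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale estimate
        w = huber_weight(r / s)                           # downweight discrepant points
        # weighted row/column means, as on the slide
        p = (w * (y - c[None, :])).sum(axis=1) / w.sum(axis=1)
        p -= p.mean()                                     # sum-to-zero constraint
        c = (w * (y - p[:, None])).sum(axis=0) / w.sum(axis=0)
    return p, c, w
```

The returned weights and residuals are the kinds of quantities displayed as chip images in the QA slides that follow, and the chip effects est(cj) are the expression indexes.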

Example – Huber function
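The plot on this slide is not reproduced here; for reference, the Huber ρ-function and the IRLS weight function it induces (tuning constant k, commonly 1.345) are:

```latex
% Huber rho-function and its induced IRLS weight (requires amsmath)
\rho(u) =
\begin{cases}
  \tfrac{1}{2}u^{2}, & |u| \le k, \\
  k|u| - \tfrac{1}{2}k^{2}, & |u| > k,
\end{cases}
\qquad
w(u) = \frac{\psi(u)}{u} = \min\left(1, \frac{k}{|u|}\right),
\qquad \psi = \rho'.
```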

Application of the model to data quality assessment

Picture of the data – k = 1, …, K
• Robust vs LS fit: whether ckj is a weighted average or not.
• Single chip vs multi chip: whether probe effects are removed from residuals or not – has huge impact on weighting and assessment of precision.

Model components – role in QA
• Residuals & weights – now >200K per array
  - summarize to produce a chip index of quality
  - view as chip image, analyse spatial patterns
  - scale of residuals for probe set models can be compared between experiments
• Chip effects – >20K per array
  - can examine distribution of relative expressions across arrays
• Probe effects – >200K per model for hg_u133
  - can be compared across fitting sets

Chip index of relative quality
We assess gene expression index variability by its unscaled SE, SE(ckj). We then normalize by dividing by the median unscaled SE over the chip set (j):

NUSEkj = SE(ckj) / medianj SE(ckj)
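A sketch of the NUSE computation, assuming a genes × chips matrix of unscaled standard errors for the fitted chip effects (e.g. derived from the IRLS weights above); function names are illustrative:

```python
# NUSE: Normalized Unscaled Standard Errors, per the formula above.
import numpy as np

def nuse(se):
    """Divide each gene's SEs by that gene's median SE across the chip set."""
    return se / np.median(se, axis=1, keepdims=True)

def chip_quality(se):
    """Per-chip quality index: median NUSE of each chip; values well above 1
    flag chips whose expression indexes are less precise than their peers."""
    return np.median(nuse(se), axis=0)

rng = np.random.default_rng(2)
se = rng.gamma(5.0, 0.05, size=(5000, 8))
se[:, 3] *= 1.4              # simulate one noisy chip
print(chip_quality(se))      # chip 3 stands out with median NUSE ~1.4
```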

Example – NUSE + residual images
• Affymetrix hg-u95A spike-in, 1532 series – next slide.
• St. Jude Children's Research Hospital, several groups – slides after next.
Note – the special challenge here is to detect differences in perfectly good chips!!!

L1532 – NUSE + weights

L1532 – NUSE + positive residuals

St. Jude Hospital NUSE + weights images
• St. Jude Children's Research Hospital – two groups selected from the overall fit assessment which follows.

hyperdip - weights

hyperdip – positive residuals

E2A_PBX1 - weights
Patterns of weights help characterize the problem.

E2A_PBX1 – positive residuals
Residual patterns may give leads to potential problems.

MLL - weights

MLL – positive residuals

Another quality measure: variability of relative log expression
How much are robust summaries affected? We can gauge the reproducibility of expression measures by summarizing the distribution of relative log expressions:

LRkj = ckj - mk

where mk is the reference expression for gene k. For the reference, in the absence of technical replicates, we use the median expression value for that gene in a set of chips.

Relative expression summaries
• IQR(LRkj) measures variability which includes noise + differential expression in biological replicates.
• When biological replicates are similar (e.g. RNA from the same tissue type), we can typically detect processing effects with IQR(LR).
• Median(LRkj) should be close to zero if the numbers of up- and down-regulated genes are roughly equal.
• IQR(LRkj) + |Median(LRkj)| gives a combined measure of chip expression measurement error.
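A sketch of these summaries, given a genes × chips matrix of log2 expression indexes (the fitted chip effects), implementing the LR definition from the previous slide; names are illustrative:

```python
# Relative log expression (LR) summaries for per-chip quality assessment.
import numpy as np

def rle_summaries(expr):
    """Return per-chip median(LR) and IQR(LR) for a genes x chips matrix."""
    lr = expr - np.median(expr, axis=1, keepdims=True)  # LRkj = ckj - medianj(ckj)
    med = np.median(lr, axis=0)
    q75, q25 = np.percentile(lr, [75, 25], axis=0)
    return med, q75 - q25

def chip_error(expr):
    """Combined per-chip error measure suggested on the slide."""
    med, iqr = rle_summaries(expr)
    return iqr + np.abs(med)
```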

Other chip features: Signal + Noise
We consider the Noise + Signal model: PM = N + S, where N ~ N(μ, σ²) and S ~ Exp(α).
We can use this model to obtain "background corrected" PM values – won't discuss here. Our interest here is to see how measures of the level of signal (1/α) and noise (σ) relate to the other indicators.
* In the example data sets used here, %P, SF and RMA S/N measures correlate similarly with median NUSE *
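For illustration, a sketch of the background correction this model implies, replacing PM by E[S | PM] under the widely used closed-form approximation that drops the upper truncation terms; the parameter values below are made up, and in practice μ, σ and α would be estimated from the chip's PM distribution:

```python
# Background correction implied by PM = N + S with N ~ N(mu, sigma^2) and
# S ~ Exp(alpha): replace PM by the (approximate) posterior mean E[S | PM].
import numpy as np
from scipy.stats import norm

def bg_correct(pm, mu, sigma, alpha):
    a = pm - mu - sigma**2 * alpha
    # Approximate posterior mean; always positive even for dim probes
    return a + sigma * norm.pdf(a / sigma) / norm.cdf(a / sigma)

pm = np.array([80.0, 150.0, 400.0, 2000.0])
print(bg_correct(pm, mu=100.0, sigma=20.0, alpha=1/300))
```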

Comparison of quality indicators

Affy hg_u95 spike-in - pairs plots – scratch that!
Affymetrix HG_U95 Spike-in Experiment - not much variability to explain!

St. Jude U133A
St. Jude Hospital, all U133A experiments – YMMV

St. Jude U133B
St. Jude Hospital, all U133B experiments – YMMV

Correlation among measures for U133A chips
Your Mileage May Vary – i.e. depending on chip selection, relationships may differ in your chip set

Correlation among measures for U133B chips

All A vs All B

Comparing experiments
• NUSE: has no units – only gives relative quality within a chip set (could use a reference QC set)
• IQR(LR): includes some biological variability which might vary between experiments
We can use the model residual scales (Sk) to compare experiments (assuming the intensity scale was standardized).
Next: St. Jude chips analyzed by treatment group (14-28 chips per group); compare scale estimates.

U133A boxplots: relative scales vs absolute scale

Next contrast the good and the less good

hyperdip - weights

hyperdip – positive residuals

E2A_PBX1 - weights

E2A_PBX1 – positive residuals

More model comparisons
• Recommended amount of cRNA to hybe to a chip is 10 µg.
• In the GLGC dilution series we have chips with 1.25, 2.5, 5, 7.5, 10 and 20 µg of the same cRNA, in replicates of 5.
Questions:
- can we use less cRNA?
- can we combine chips with different amounts of cRNA in an experiment?

Relative scales + LR within and between groups

MVA

Where are we?
• We have measures that are good at detecting differences
• Need more actionable information:
  - What is the impact on analysis?
  - What are the causes?
  - Gather more data to move away from relative quality and toward absolute quality.
  - Other levels of quality to investigate – individual probes and probe sets, individual summaries.

Acknowledgements
• Terry Speed and Julia Brettschneider
• Gene Logic, Inc.
• Affymetrix, Inc.
• St. Jude Children's Research Hospital
• The Bioconductor Project
• The R Project

References
1. Mei, R., et al. (2003), Probe selection for high-density oligonucleotide arrays, PNAS, 100(20): 11237-11242.
2. Dai, Hongyue, et al. (2002), Use of hybridization kinetics for differentiating specific from non-specific binding to oligonucleotide microarrays, NAR, Vol. 30, No. 16, e86.
3. Irizarry, R., et al. (2003), Summaries of Affymetrix GeneChip probe level data, Nucleic Acids Research, Vol. 31, No. 4, e15.
4. Irizarry, R., et al. (2003), Exploration, normalization, and summaries of high density oligonucleotide array probe level data, Biostatistics, in press.
5. http://www.stjuderesearch.org

Additional slides

Example – comparing experiments: probe effects
• Affy hg-u95A
• We compare probe effects from models fitted to data from chips from different lots (3 lots)
• For pairs of lots, image est(p1) - est(p2), properly scaled and transformed into a weight
• Also look at the sign of the difference

Affy – compare probe effects