
Institute of Diagnostic Virology (IVD)

Laboratory for NGS-based pathogen characterization and animal disease diagnostics

The genomes of all organisms and viruses are composed of nucleic acids (deoxyribonucleic acid [DNA] or ribonucleic acid [RNA]) and constitute the blueprint of life. RNA is additionally required to translate this blueprint into the proteins needed for metabolism and cell structure. Analysis of an organism's nucleic acids delivers detailed insights into its properties and physiological condition. Moreover, sequencing the nucleic acids extracted from any type of sample can reveal which organisms were originally present in that sample. For these tasks, the laboratory for next generation sequencing and microarray diagnostics has two different techniques at hand: we operate a fully equipped Genome Sequencer FLX facility and, in addition, have the complete infrastructure for microarray analyses.

Next Generation Sequencing

The term next generation sequencing (NGS) subsumes the various novel techniques available for DNA sequencing (Bonetta, L. 2006, Genome sequencing in the fast lane, Nature Methods, 2:141-7) that do not rely on the classical chain-termination method published in 1977 by Sanger and co-workers (Sanger, F. et al. 1977, DNA sequencing with chain-terminating inhibitors, Proc. Natl. Acad. Sci. USA, 74:5463–7). With these NGS methods, DNA sequencing has become faster and cheaper per base. Moreover, all of these methods are designed for high-throughput sequencing. The large amount of raw sequence data obtained in a single instrument run makes full bacterial or viral genome sequencing possible within a single overnight run.

The main task of the laboratory for NGS and microarray diagnostics is sequencing full-length DNA or RNA virus genomes. To this end, we prepare DNA from diverse sources and carry out the sequencing. Besides full-length genome sequencing, we perform metagenomic analyses, which aim at identifying the members of a microbial community in a given environment. Moreover, we use amplicon sequencing of specific genomic regions for in-depth analysis of the SNP (single nucleotide polymorphism) content within these regions. Beyond the sequencing activities, establishing new technical equipment and molecular biological methods and implementing new approaches to data analysis are a further focus. With regard to molecular biological methods, an important issue addressed in close collaboration with colleagues from the specialized laboratories of the FLI is sample preparation for sequencing.
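
The idea behind the amplicon-based SNP analysis mentioned above can be illustrated with a minimal Python sketch. It assumes reads are already aligned and trimmed to the amplicon so that read positions map directly onto reference positions; the function name, threshold and toy data are hypothetical and do not reflect the laboratory's actual pipeline.

```python
from collections import Counter

def snp_positions(reference, reads, min_minor_freq=0.05):
    """Report positions where a non-reference base exceeds a frequency threshold.
    Reads are assumed to be aligned and trimmed so that read position i
    corresponds to reference position i (a simplification for illustration)."""
    variants = []
    for pos, ref_base in enumerate(reference):
        # Count observed bases at this position, ignoring gaps.
        bases = Counter(read[pos] for read in reads
                        if pos < len(read) and read[pos] != "-")
        depth = sum(bases.values())
        if depth == 0:
            continue
        for base, count in bases.items():
            freq = count / depth
            if base != ref_base and freq >= min_minor_freq:
                variants.append((pos + 1, ref_base, base, round(freq, 3), depth))
    return variants

# Hypothetical toy data: a short amplicon and three aligned reads.
reference = "ACGTACGTAC"
reads = ["ACGTACGTAC", "ACGAACGTAC", "ACGAACGTAC"]
for pos, ref, alt, freq, depth in snp_positions(reference, reads):
    print(f"pos {pos}: {ref}->{alt} frequency {freq} (depth {depth})")
```

In practice such tallies would of course be derived from full read alignments and much deeper coverage; the sketch only shows how per-position base frequencies translate into SNP calls.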


Microarrays

The first description of microarrays dates back to 1995, when Schena and co-workers used this technology for transcriptome analyses, i.e. the comprehensive determination of gene expression within a given cell population (Schena, M. et al. 1995, Quantitative Monitoring of Gene Expression Patterns with a Complementary DNA Microarray, Science, 270:467-70). Soon after, first attempts were made to adapt microarray technology to diagnostics (the development of the technology and its diagnostic use are reviewed in Heller, M.J. 2002, DNA Microarray Technology: Devices, Systems, and Applications, Annu. Rev. Biomed. Eng., 4:129–53). By now, microarrays of different types and sizes are well established in diagnostics. In contrast to metagenomic analyses, which impose no constraints on nucleic acid sequence composition, microarrays can only detect nucleic acids with a predefined sequence, determined by the probes applied to the array. Major advantages of microarrays are the lower cost and the reduced complexity of data analysis.
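
The constraint that a microarray only detects sequences matching its predefined probes can be made concrete with a simple in-silico "hybridisation" check. The probe names, sequences and the exact-match criterion below are hypothetical simplifications (real hybridisation tolerates mismatches) and only serve to show the principle of probe-based detection.

```python
def reverse_complement(seq):
    complement = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(complement[base] for base in reversed(seq))

def detect(sample_sequence, probes):
    """Return the names of all probes whose sequence (or its reverse
    complement) occurs perfectly in the sample sequence."""
    hits = []
    for name, probe in probes.items():
        if probe in sample_sequence or reverse_complement(probe) in sample_sequence:
            hits.append(name)
    return hits

# Hypothetical probe set and sample sequence.
probes = {
    "probe_virus_A": "ACGTTGCA",
    "probe_virus_B": "GGATCCAA",
}
sample = "TTTACGTTGCATTT"
print(detect(sample, probes))   # -> ['probe_virus_A']
```

A sequence not represented by any probe would simply yield no hit, which is exactly the limitation compared with open metagenomic sequencing described above.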


Bioinformatics

Thorough analysis of the large data sets generated by both sequencing and microarrays is a key task of the laboratory for NGS and microarray diagnostics. In particular, the need to turn the huge amount of raw data into actual knowledge drives our efforts to optimize data analysis. Special attention is paid to automating repeated tasks and, more importantly, to establishing methods that the operator can adapt while still robustly yielding reliable information. Moreover, the full methodology must remain comprehensible to the user, so that the user can judge the validity of the results.
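
As a rough illustration of what an automated yet operator-adjustable and traceable analysis step might look like, the following Python sketch wraps a simple read-length filter in a function whose parameters and summary report are explicit. The function name, threshold and toy data are hypothetical and do not describe the laboratory's actual workflows.

```python
def filter_reads_by_length(reads, min_length=100):
    """Keep reads at or above a minimum length and return a summary so the
    operator can see exactly how the chosen parameter affected the data set."""
    kept = [read for read in reads if len(read) >= min_length]
    summary = {
        "parameter_min_length": min_length,
        "reads_in": len(reads),
        "reads_kept": len(kept),
        "reads_discarded": len(reads) - len(kept),
    }
    return kept, summary

# Hypothetical toy input.
reads = ["A" * 250, "A" * 80, "A" * 120]
kept, summary = filter_reads_by_length(reads, min_length=100)
print(summary)
# {'parameter_min_length': 100, 'reads_in': 3, 'reads_kept': 2, 'reads_discarded': 1}
```

Reporting the parameters alongside the result is one way of keeping each step comprehensible, so that the validity of downstream conclusions can be assessed.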
