Description: There is an urgent need to translate genome-era discoveries into clinical utility, but the difficulties in making bench-to-bedside translations have not been well described. The nascent field of translational bioinformatics may help. Dr. Butte's lab at Stanford University builds and applies tools that convert more than 300 billion points of molecular, clinical, and epidemiological data (measured by researchers and clinicians over the past decade) into diagnostics, therapeutics, and new insights into disease. Dr. Butte, a bioinformatician and pediatric endocrinologist, will highlight his lab's work on using publicly available molecular measurements to find new uses for drugs, discovering new treatable mechanisms of disease in type 2 diabetes, and evaluating patients who present with their whole genomes sequenced.

The NIH Wednesday Afternoon Lecture Series includes weekly scientific talks by some of the top researchers in the biomedical sciences worldwide. 

For more information, visit: 
The NIH Director's Wednesday Afternoon Lecture Series 
Author: Atul Butte, M.D., Ph.D., Stanford University 
Runtime: 01:07:42 

It’s Alive!
Earth’s upper atmosphere—below freezing, nearly without oxygen, flooded by UV radiation—is no place to live. But last winter, scientists from the Georgia Institute of Technology discovered that billions of bacteria actually thrive up there. Expecting only a smattering of microorganisms, the researchers flew six miles above Earth’s surface in a NASA jet plane. There, they pumped outside air through a filter to collect particles. Back on the ground, they tallied the organisms, and the count was staggering: 20 percent of what they had assumed to be just dust or other particles was alive. Earth, it seems, is surrounded by a bubble of bacteria.
It’s Alive! & Airborne: In the midst of airborne sea salt and dust, researchers from Georgia Tech unexpectedly found thousands of living fungal cells and bacteria, including E. coli and Streptococcus. Courtesy Georgia Tech; photo by Gary Meek


Scientists don’t yet know what the bacteria are doing up there, but they may be essential to how the atmosphere functions, says Kostas Konstantinidis, an environmental microbiologist on the Georgia Tech team. For example, they could be responsible for recycling nutrients in the atmosphere, as they do on the ground. And, like other particles, they could influence weather patterns by helping clouds form. They may also, however, be transmitting diseases from one side of the globe to the other. The researchers found E. coli in their samples (which they think hurricanes lifted from cities), and they plan to investigate whether plagues are raining down on us. If we can learn more about the role of bacteria in the atmosphere, says Ann Womack, a microbial ecologist at the University of Oregon, scientists could even fight climate change by engineering the bacteria to break down greenhouse gases into other, less harmful compounds.
Researchers from Georgia Tech. Courtesy Georgia Tech; photo by Gary Meek
This article originally appeared in the July 2013 issue of Popular Science.

The extent of agreement between the NCBI taxonomy and the molecular data.

We report a daily-updated sequenced/species Tree Of Life (sTOL) as a reference for the increasing number of cellular organisms with sequenced genomes. The sTOL builds on a likelihood-based weight-calibration algorithm to consolidate NCBI taxonomy information in concert with unbiased sampling of molecular characters from the whole genomes of all sequenced organisms. By quantifying the extent of agreement between taxonomic and molecular data, we observe that many improvements could be made to the status quo classification, particularly in the Fungi kingdom; we also see that the current state of many animal genomes is rather poor. To augment the use of sTOL in providing evolutionary contexts, we integrate an ontology infrastructure and demonstrate its utility for evolutionary understanding of nuclear receptors, stem cells and eukaryotic genomes. The sTOL provides a binary tree of (sequenced) life and contributes to an analytical platform linking genome evolution, function and phenotype.

Scientific Reports


InSilico Genomics has found seed money to become a player in the growing market for genomic research software. The company has raised €1.2 million ($1.5 million) to advance its genomics platform, attracting investments from the online payment group Ogone and Foundation Life Sciences Partners, BioInform reported.
A spinoff of the Free University of Brussels, InSilico Genomics offers academics and industry the ability to compare internal RNA sequencing data with public datasets, and the startup's platform supports a host of open-source software tools that enable researchers to slice and dice their data. As BioInform reports, the company aims to beef up the security and credentials of the system as it seeks new business from clinical researchers.
Investors have warmed up to companies in the bioinformatics business, which is expected to explode as companies seek to wrest valuable information from Big Data sources such as next-generation sequencing datasets and vast amounts of electronic patient records. In the genomics arena, the startup Bina Technologies recently announced a $6.25 million round for a software-hardware hybrid offering, and Spiral Genetics snapped up $3 million for a cloud-based bioinformatics tool.

Subscribe at FierceBiotechIT

Author: Ryan McBride

Ryan McBride is an award-winning journalist who writes about the life sciences industry. Prior to joining FierceMarkets, he was a freelance journalist and served as a correspondent at Xconomy for more than two years. His stories have appeared in The Boston Globe, the Boston Business Journal, The Motley Fool, and many other publications.

Parallel block selection from genome

Scaling for whole genome sequencing
Moving from exome to whole genome sequencing introduces a myriad of scaling and informatics challenges. In addition to correctly identifying biological variation, it is equally important to be able to handle the informatics complexities that come with scaling up to whole genomes.
At Harvard School of Public Health, we are processing an increasing number of whole genome samples and the goal of this post is to share experiences scaling the bcbio-nextgen pipeline to handle the associated increase in file sizes and computational requirements. We’ll provide an overview of the pipeline architecture in bcbio-nextgen and detail the four areas we found most useful to overcome processing bottlenecks:
  • Support heterogeneous cluster creation to maximize resource usage.
  • Increase parallelism by developing flexible methods to split and process by genomic regions.
  • Avoid file IO and prefer streaming piped processing pipelines.
  • Explore distributed file systems to better handle file IO.
This overview isn’t meant as a prescription, but rather as a description of experiences so far. The work is a collaboration between the HSPH Bioinformatics Core, the research computing team at Harvard FAS and Dell Research. We welcome suggestions and thoughts from others working on these problems.
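To make the second bullet concrete, splitting by genomic regions can be sketched as below. This is a minimal illustration of the idea, not bcbio-nextgen's actual implementation; the function name and block size are hypothetical.

```python
# Sketch: divide a genome into fixed-size blocks so each block can be
# processed (e.g. variant called) by an independent worker in parallel.
# Hypothetical illustration, not bcbio-nextgen's actual code.

def split_into_blocks(chrom_sizes, block_size):
    """Yield (chrom, start, end) regions covering each chromosome in
    blocks of at most block_size bases (0-based, half-open)."""
    for chrom, size in chrom_sizes.items():
        for start in range(0, size, block_size):
            yield chrom, start, min(start + block_size, size)

# Toy chromosome sizes; real human chromosomes are ~50-250Mb each.
regions = list(split_into_blocks({"chr1": 250, "chr2": 100}, 100))
```

Each region can then be dispatched to a separate engine, with the per-region outputs concatenated and merged at the end; choosing split points away from variant-dense regions is the hard part in practice.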

Pipeline architecture

The bcbio-nextgen pipeline runs in parallel on single multicore machines or distributed on job-scheduler-managed clusters like LSF, SGE, and TORQUE. The IPython parallel framework manages the setup of parallel engines and handles communication between them. These abstractions allow the same pipeline to scale from a single processor to hundreds of nodes on a cluster.
The high level diagram of the analysis pipeline shows the major steps in the process. For whole genome samples we start with large 100Gb+ files of reads in FASTQ or BAM format and perform alignment, post-alignment processing, variant calling and variant post processing. These steps involve numerous externally developed software tools with different processing and memory requirements.
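The "streaming piped processing" idea from the bullets above can be sketched by connecting external tools with OS pipes so intermediate data never touches disk. In the real pipeline the connected tools would be, say, an aligner feeding samtools; here a gzip round-trip stands in so the example is self-contained, and the function name is hypothetical.

```python
# Sketch: chain two external processes with a pipe instead of writing
# an intermediate file between them. Illustrative only.
import subprocess

def pipe_through(data: bytes) -> bytes:
    # First stage: compress the input stream.
    compress = subprocess.Popen(["gzip", "-c"], stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE)
    # Second stage reads directly from the first stage's stdout.
    decompress = subprocess.Popen(["gzip", "-dc"], stdin=compress.stdout,
                                  stdout=subprocess.PIPE)
    compress.stdout.close()  # parent drops its copy of the read end
    compress.stdin.write(data)
    compress.stdin.close()          # EOF propagates down the chain
    out = decompress.stdout.read()  # streamed; no temp files on disk
    compress.wait()
    decompress.wait()
    return out

result = pipe_through(b"@read1\nACGT\n")
```

For genome-scale data the payoff is large: a 100Gb+ alignment step never materializes its intermediate output, trading disk IO for memory-buffered pipes.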


Authors: Weijun Luo and Cory Brouwer

Author affiliations: Department of Bioinformatics and Genomics, UNC Charlotte, Charlotte, NC 28223, USA, and UNC Charlotte Department of Bioinformatics and Genomics, North Carolina Research Campus, Kannapolis, NC 28081, USA


Summary: Pathview is a novel tool set for pathway-based data integration and visualization. It maps and renders user data on relevant pathway graphs. Users only need to supply their data and specify the target pathway. Pathview automatically downloads the pathway graph data, parses the data file, maps and integrates user data onto the pathway and renders pathway graphs with the mapped data. Although built as a stand-alone program, Pathview may seamlessly integrate with pathway and functional analysis tools for large-scale and fully automated analysis pipelines.
Availability: The package is freely available under the GPLv3 license through Bioconductor and R-Forge.

The journal and article are being superseded by algorithms that filter, rate and disseminate scholarship as it happens, argues Jason Priem.

Author: Jason Priem

Henry Oldenburg created the first scientific journal in 1665 with a simple goal: apply an emerging communication technology — the printing press — to improve the dissemination of scholarly knowledge. The journal was a vast improvement over the letter-writing system that it eventually replaced. But it had a cost: no longer could scientists read everything someone sent them; existing information filters became swamped.
To solve this, peer and editorial review emerged as a filter, becoming increasingly standardized in the science boom after the Second World War. This peer-review system applies community evaluation of scholarly products by proxy: editorial boards, editors and peer reviewers are nominated to enact representative judgements on behalf of their communities.
Now we are witnessing the transition to yet another scholarly communication system — one that will harness the technology of the Web to vastly improve dissemination. What the journal did for a single, formal product (the article), the Web is doing for the entire breadth of scholarly output. The article was an attempt to freeze and mount some part of the scholarly process for display. The Web opens the workshop windows to disseminate scholarship as it happens, erasing the artificial distinction between process and product.
Over the next ten years, the view through these open windows will inform powerful, online filters; these will distil communities' impact judgements algorithmically, replacing the peer-review and journal systems.


Author: Luke Timmerman

Luke is an award-winning journalist specializing in life sciences. Before joining Xconomy, he was the U.S. biotechnology reporter for Bloomberg News, based in San Francisco. There, he led coverage of major medical meetings and broke news about the industry’s top companies. His stories appeared in The New York Times, Los Angeles Times, Boston Globe, and International Herald Tribune. Before that, his coverage of biotechnology won many awards at The Seattle Times.

Ingenuity Systems, a 15-year veteran of the biological software business, showed today that you can make money not just by generating DNA data, but by helping scientists figure out what it means.
Redwood City, CA-based Ingenuity said today it has agreed to be acquired by Netherlands-based Qiagen for $105 million in cash. Ingenuity, a private company, was able to fetch that price after it closed last year with about $20 million in net sales, the companies said in a statement. The deal is expected to start adding to Qiagen’s profits in 2015, the companies said.
Ingenuity was founded in 1998 by Stanford University graduate students, so it has been through a couple of booms and busts in the bioinformatics world. It started gaining momentum in the market over the last few years with its Ingenuity Knowledge Base, in which its people manually sort through the scientific literature to build models and computational structures that help scientists interpret the massive amounts of data they can now get from fast and cheap biological research instruments. Essentially, Ingenuity goes through published research to try to connect the dots between certain gene variations and disease.
Few companies have had success selling software to biologists, who often have very specific needs to analyze their own particular experiments and frequently go with open source software, old Excel spreadsheets, or get by with having a postdoc write custom software in spare time.  But the new automated instruments of biology now spit out so much data that many biologists are drowning in information they don’t know what to do with. Ingenuity has sought to take advantage of the trend, pitching to customers that they can get “actionable insights” from their experiments by using its software. Back in November, the company said it saw 250 percent month-over-month growth in the preceding six months, among customers using its Ingenuity Variant Analysis program, which helps researchers find tiny abnormalities in a whole genome.
Qiagen isn’t known for software, but for selling lab supplies that help scientists prepare, isolate, and process DNA, RNA and proteins they get from biological samples. It sells 500 different products, and has 4,000 employees around the world. Qiagen said it intends to keep Ingenuity’s office in Redwood City, along with many members of its senior management team, including CEO Jake Leschly. Ingenuity currently has about 120 employees.
Ingenuity’s investors include Accel Partners, Industry Ventures, QuestMark Partners, Rho Ventures, and Three Arch Partners, according to its website. The company’s most recent financing appears to have been a $15.4 million Series E equity deal in July 2010, according to a filing with the Securities and Exchange Commission.
Luke Timmerman is the National Biotech Editor of Xconomy. E-mail him at [email protected]

Nature Publishing Group (NPG) and its publishing partners have introduced the CC-BY licence option on 22 further journals, including Nature Communications. Of the 61 NPG journals that are open access or have open-access options, 42 now offer the choice of CC-BY as one of the options, says the publisher.
Nature Communications will offer CC-BY licences at an APC of $5200, for authors submitting manuscripts on or after 1 April. Two non-commercial CC licence options remain available, and the APC for these has been reduced to $4800, from the current APC of $5000.
The other 21 journals introducing a CC-BY option are published by NPG on behalf of publishing partners and either offer open-access options, or are open-access journals. They join Scientific Reports and the 19 NPG-owned academic journals that introduced CC-BY in 2012.
The partner journals that have introduced the CC-BY licence are: BoneKEy Reports, The ISME Journal, Emerging Microbes & Infections, European Journal of Human Genetics, Eye, Journal of Cerebral Blood Flow and Metabolism, Cell Death and Differentiation, Cell Death and Disease, Cell Research, Genetics in Medicine, Heredity, Laboratory Investigation, Modern Pathology, Molecular Therapy, Molecular Therapy — Nucleic Acids, Light: Science & Applications, Spinal Cord, The EMBO Journal, EMBO reports, Molecular Systems Biology and the British Dental Journal.
The CC-BY licence will be available to authors choosing open-access publication options in these journals, in addition to the two non-commercial Creative Commons (CC) licences currently on offer. In addition, The EMBO Journal, EMBO reports and Molecular Systems Biology have adopted the CC0 waiver for the release of published datasets and figure source data. CC0 allows unrestricted re-use of research data. Data available on NPG's linked data platform is also available under CC0. Other NPG publishing partners are also said to be considering introducing CC-BY in due course.

Foodomics: a new comprehensive approach to food and nutrition

Francesco Capozzi and Alessandra Bordoni

In the past 20 years, the scientific community has seen great developments across many fields thanks to high-throughput omics technologies. Starting from the four major types of omics measurements (genomics, transcriptomics, proteomics, and metabolomics), a variety of omics subdisciplines (epigenomics, lipidomics, interactomics, metallomics, diseasomics, etc.) has emerged.
Thanks to the omics approach, researchers can now connect food components, foods, the diet, the individual, health, and disease, but this broad vision requires not only the application of advanced technologies but, above all, the ability to look at the problem from a different angle: a “foodomics approach”.
Foodomics is a comprehensive, high-throughput approach to food science aimed at improving human nutrition. It is a new approach to food and nutrition that studies the food domain together with the nutrition domain to reach the main objective, the optimization of human health and well-being […].


ALESSANDRA BORDONI - University of Bologna -IT
PATRIZIA BRIGIDI - University of Bologna -IT
ALEJANDRO CIFUENTES - Institute of Food Science Research (CIAL) -ES
MARK HULL - University of Leeds -UK
SUSANNE MANDRUP - University of Southern Denmark -DK
ETTORE NOVELLINO - University of Naples “Federico II” -IT
LUIGI RICCIARDIELLO - University of Bologna -IT

FRANCESCO CAPOZZI - University of Bologna -IT
CLAUDIO CAVANI - University of Bologna -IT
MARCO DALLA ROSA - University of Bologna -IT
 ACHILLE FRANCHINI - University of Bologna -IT
LUISA MANNINA - University of Rome “La Sapienza” -IT
ANDREA SEGRE’ - University of Bologna -IT
