Archive for January 2015
Gold Nanoparticles
Gold is widely applied in research fields such as catalysis, drug delivery, and optical and electrical biosensing. But why can this chemically inert metal take on these roles? This video from Nature, together with a collection of Nature articles, may give you answers.
Tiny treasure: The future of nano-gold @ YouTube
Below is a collection of Nature articles (they appear to be free to access):
The promoting effect of adsorbed carbon monoxide on the oxidation of alcohols on a gold catalyst
Paramaconi Rodriguez, Youngkook Kwon & Marc T. M. Koper
Nature Chemistry 4, 177–182 (11 December 2011)
Nanoparticles that communicate in vivo to amplify tumour targeting
Geoffrey von Maltzahn, Ji-Ho Park, Kevin Y. Lin, Neetu Singh, Christian Schwöppe et al.
Nature Materials 10, 545–552 (19 June 2011)
Optical detection of single non-absorbing molecules using the surface plasmon resonance of a gold nanorod
Peter Zijlstra, Pedro M. R. Paulo & Michel Orrit
Nature Nanotechnology 7, 379–382 (15 April 2012)
A high-throughput drug screen for Entamoeba histolytica identifies a new lead and target
Anjan Debnath, Derek Parsonage, Rosa M Andrade, Chen He, Eduardo R Cobo et al.
Nature Medicine 18, 956–960 (20 May 2012)
An invisible metal–semiconductor photodetector
Pengyu Fan, Uday K. Chettiar, Linyou Cao, Farzaneh Afshinmanesh, Nader Engheta et al.
Nature Photonics 6, 380–385 (20 May 2012)
image source: Gold nanoparticles help detect Listeria cheaply
The science of taste
Correspondence: Ole G Mouritsen ogm@memphys.sdu.dk
MEMPHYS, Center for Biomembrane Physics and TASTEforLIFE, Department of Physics, Chemistry, and Pharmacy, University of Southern Denmark, Campusvej 55, Odense M, DK-5230, Denmark
Flavour 2015, 4:18 doi:10.1186/s13411-014-0028-3
More about author:
Homepage at University of Southern Denmark
Editorial
In contrast to smell and the olfactory system, for which the 2004 Nobel Prize in Physiology or Medicine was awarded to Richard Axel and Linda Buck for their discovery of odorant receptors and the organization of the olfactory system [1], our knowledge of the physiological basis for the taste system is considerably less developed [2]. Some progress has been made over the last decade with the identification of receptors or receptor candidates for all five basic tastes: bitter, sweet, umami, sour, and salty. The receptors for bitter, sweet, and umami appear to belong to the same superfamily of G-protein-coupled receptors, whereas the receptor for salty is an ion channel. The receptor function for sour is the least understood but may involve some kind of proton sensing.
Notwithstanding the prominent status of the physiology of taste and its molecular underpinnings, the multisensory processing and integration of taste with other sensory inputs (sight, smell, sound, mouthfeel, etc.) in the brain and neural system have also received increasing attention, and an understanding is emerging of how taste relates to learning, perception, emotion, and memory [3]. Similarly, the psychology of taste and how taste dictates food choice, acceptance, and hedonic behavior are in the process of being uncovered [4]. The development of taste preferences in children and gustatory impairment in the sick and the elderly are now studied extensively to understand the nature of taste and to use this insight to improve quality of life.
Finally, a new direction has manifested itself in recent years where scientists and creative chefs apply scientific methods to gastronomy in order to explore taste in traditional and novel dishes and use physical sciences to characterize foodstuff, cooking, and flavor [5]-[8].
Noting that in general our understanding of taste is inferior to our knowledge of the other human senses, an interdisciplinary symposium, The Science of Taste, took place in August 2014 and brought together an international group of scientists and practitioners from a range of different disciplines (biophysics, physiology, sensory sciences, neuroscience, nutrition, psychology, epidemiology, food science, gastronomy, gastroscience, and anthropology) to discuss progress in the science of taste. As a special feature, the symposium organized two tasting events arranged by leading chefs, demonstrating the interaction between creative chefs and scientists.
The symposium led to the following special collection of papers accounting for our current knowledge about the science of taste. The collection includes a selection of opinion articles, short reports, and reviews, in addition to three research papers.
The papers deal with the following topics: the comparative biology of taste [9]; fat as a basic taste [10]; umami taste in relation to gastronomy [11]; the mechanism of kokumi taste [12]; geography as a starting point for deliciousness [13]; temporal design of taste and flavor [14]; the pleasure principle of flavors [15]; taste as a cultural activity [16]; taste preferences in primary school children [17]; taste and appetite [18]; umami taste in relation to health [19]; taste receptors in the gastrointestinal tract [20]; neuroenology and the taste of wine [21]; the brain mechanisms behind pleasure [22]; the importance of sound for taste [23]; as well as the effect of kokumi substances on the flavor of particular food items [24],[25].
The electronic version of this article is the complete one and can be found online at: http://www.flavourjournal.com/content/4/1/18
The whole collection of articles can be found here:
http://www.flavourjournal.com/series/the_science_of_taste
image source: Tip of the Tongue: Humans May Taste at Least 6 Flavors
Have you built your own techno-business? Try IEEE Game Changer online NOW!
How do you become an innovator and promote your tech product in the market? IEEE Game Changer gives you some hints and a chance to win a $500 prize.
Here is the game: https://www.secured-app.com/ieee/ces/
Does anyone know how to promote their product here?
image source: https://www.secured-app.com/ieee/ces/
Tag: IEEE
Extending reference assembly models
Corresponding authors:
Deanna M Church deanna.church@personalis.com
Valerie A Schneider schneiva@ncbi.nlm.nih.gov
Richard Durbin rd@sanger.ac.uk
Paul Flicek flicek@ebi.ac.uk
Genome Biology 2015, 16:13 doi:10.1186/s13059-015-0587-3
Background
One of the flagship products of the Human Genome Project (HGP) was a high-quality human reference assembly [1]. This assembly, coupled with advances in low-cost, high-throughput sequencing, has allowed us to address previously inaccessible questions about population diversity, genome structure, gene expression and regulation [2]-[5]. It has become clear, however, that the original models used to represent the reference assembly inadequately represent our current understanding of genome architecture.
The first assembly models were designed for simple ‘linear’ genome sequences, with little sequence variation and even less structural diversity. The design fit the understanding of human variation at the time the HGP began [6]. The HGP constructed the reference assembly by collapsing sequences from over 50 individuals into a single consensus haplotype representation of each chromosome. Employing a clone-based approach, the sequence of each clone represented a single haplotype from a given donor. At clone boundaries, however, haplotypes could switch abruptly, creating a mosaic structure. This design introduced errors within regions of complex structural variation, when sequences unique to one haplotype prevented construction of clone overlaps. The assembly therefore inadvertently included multiple haplotypes in series in some regions [7]-[9].
The Genome Reference Consortium (GRC) began stewardship of the reference assembly in 2007. The GRC proposed a new assembly model that formalized the inclusion of ‘alternative sequence paths’ in regions with complex structural variation, and then released GRCh37 using this new model [10]. The release of GRCh37 also marked the deposition of the human reference assembly to an International Nucleotide Sequence Database Collaboration (INSDC) database, providing stable, trackable sequence identifiers, in the form of accession and version numbers, for all sequences in the assembly. The GRC developed an assembly model that was incorporated into the National Centre for Biotechnology Information (NCBI) and European Nucleotide Archive (ENA) assembly database that provides a stable identifier for the collection of sequences and the relationship between these sequences that comprise an assembly [11]. Subsequent minor assembly releases added a number of ‘fix patches’ that could be used to resolve mistakes in the reference sequence, as well as ‘novel patches’ that are new alternative sequence representations [10].
The new assembly model presents significant advances to the genomics community, but, to realize those advances, we must address many technical challenges. The new assembly model is neither haploid nor diploid - instead, it includes additional scaffold sequences, aligned to the chromosome assembly, that provide alternative sequence representations for regions of excess diversity. Widely used alignment programs, variant discovery and analysis tools, as well as most reporting formats, expect reads and features to have a single location in the reference assembly as they were developed using a haploid assembly model. Many alignment and analysis tools penalize reads that align to more than one location under the assumption that the location of these reads cannot be resolved owing to paralogous sequences in the genome. These tools do not distinguish allelic duplication, added by the alternative loci, from paralogous duplication found in the genome, thus confounding repeat and mappability calculations, paired-end placements and downstream interpretation of alignments in regions with alternative loci.
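As a concrete illustration of this mapping-quality problem, the following is a minimal sketch, assuming pysam, an indexed BAM aligned to the full GRCh38 (alternative loci included), and approximate MHC coordinates; the input file name is hypothetical. An aligner that treats allelic duplication like paralogy will assign mapping quality 0 to reads that fit a chromosome and its alternative locus equally well, so a routine MAPQ filter silently discards them.

```python
# Count how many aligned reads in the MHC region would be discarded by a
# typical MAPQ >= 30 filter; reads placed equally well on chr6 and an
# MHC alternative locus usually carry MAPQ 0.
import pysam

bam = pysam.AlignmentFile("sample_grch38.bam", "rb")   # hypothetical input

total = kept = 0
for read in bam.fetch("chr6", 28510120, 33480577):     # approximate MHC span
    if read.is_unmapped:
        continue
    total += 1
    if read.mapping_quality >= 30:
        kept += 1

print(f"{total - kept} of {total} aligned reads fail a MAPQ >= 30 filter")
```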
To determine the efforts needed to facilitate use of the full assembly, the GRC organized a workshop in conjunction with the 2014 Genome Informatics meeting in Cambridge, UK (http://www.slideshare.net/GenomeRef). Participants identified challenges presented by the new assembly model and discussed ways forward that we describe here.
Towards the graph of human variation
A graph structure is a natural way to represent a population-based genome assembly, with branches in the graph representing all variation found within the source sequences. Most assembly programs internally use a graph representation to build the assembly, but ultimately produce a flattened structure for use by downstream tools [12]-[14]. Recently, formal proposals for representing a population-based reference graph have been described [15]-[17]. The newly formed Global Alliance for Genomics and Health (GA4GH) is leading an effort to formalize data structures for graph-based reference assemblies, but it will likely take years to develop the infrastructure and analysis tools needed to support these new structures and see their widespread adoption across the biological and clinical research communities [18].
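As a simple picture of what such a graph looks like, here is a minimal sketch in plain Python (a toy illustration, not any of the proposed GA4GH data structures): nodes hold sequence, edges record the allowed successions, and each path from source to sink spells out one haplotype through a variant "bubble".

```python
# Toy sequence graph: a shared left flank, a biallelic bubble, a shared
# right flank. Enumerating paths recovers every represented haplotype.
nodes = {
    "n1": "ACGT",   # left flank, shared
    "n2": "TT",     # allele A at the variant site
    "n3": "GGC",    # allele B
    "n4": "CCAT",   # right flank, shared
}
edges = {"n1": ["n2", "n3"], "n2": ["n4"], "n3": ["n4"], "n4": []}

def paths(node, prefix=""):
    """Yield the sequence spelled by every path from `node` to a sink."""
    seq = prefix + nodes[node]
    if not edges[node]:
        yield seq
    for successor in edges[node]:
        yield from paths(successor, seq)

print(list(paths("n1")))   # ['ACGTTTCCAT', 'ACGTGGCCCAT']
```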
The introduction of alternative loci into the assembly model provides a stepping-stone towards a full graph-based representation of a population-based reference genome. The alternative loci provided by the GRC are based on high-quality, finished sequence. Although it is not feasible to represent all known variation using the alternative locus scheme, this model does allow us to better represent regions with extreme levels of diversity. Alternative loci are not meant to represent all variation within a population, but rather provide an immediate solution for adding sequences missing from the chromosome assembly. In practice, alternative locus addition is limited by the availability of high-quality genomic sequence, and the GRC has focused on representing sequence at the most diverse regions, such as the major histocompatibility complex (MHC). The representation of all population variation is better suited to a graph-based representation. The high quality of the sequence at these locations provides robust data to test graph implementations. Additionally, because both NCBI and Ensembl have annotated these sequences, we can also begin to address how to annotate graph structures at these complex loci.
While GRCh37 had only three regions containing nine alternative locus sequences, GRCh38 has 178 regions containing 261 alternative locus sequences, collectively representing 3.6 Mbp of novel sequence and over 150 genes not represented in the primary assembly (Table 1). The increased level of alternative sequence representation intensifies the urgency to develop new analysis methods to support inclusion of these sequences. Inclusion of all sequences in the reference assembly allows us to better analyze these regions with potentially modest updates to currently used tools and reporting structures. Although the addition of the alternative loci to current analysis pipelines might lead to only modest gains in analysis power on a genome-wide scale, some loci will see considerable improvement owing to the addition of significant amounts of sequence that cannot be represented accurately in the chromosome assembly (Figure 1).
Omission of the novel sequence contained in the alternative loci can lead to off-target sequence alignments, and thus incorrect variant calls or other errors, when a sample containing the alternative allele is sequenced and aligned to only the primary assembly. Using reads simulated from the unique portion of the alternative loci, we found that approximately 75% of the reads had an off-target alignment when aligned to the primary assembly alone. This finding was consistent using different alignment methods [10]. The 1000 Genomes Project also observed the detrimental effect of missing sequences and developed a ‘decoy’ sequence dataset in an effort to minimize off-target alignments [19],[20]. Much of this decoy has now been incorporated into GRCh38, and analysis of reads taken from 1000 Genomes samples that previously mapped only to the decoy shows that approximately 70% of these now align to the full GRCh38, with approximately 1% of these reads aligning only to the alternative loci (Figure 2).
We foresee many computational approaches that allow the inclusion of all assembly sequences in analysis pipelines. To better support exploration in this area, we propose some improvements to standard practices and data structures that will facilitate future development.
Enhancement of standard reporting formats (such as BAM/CRAM, VCF/BCF, GFF3) so that they can accommodate features with multiple locations. Doing so while maintaining the allelic relationship between these features is crucial [21]-[24].
Adoption of standard sequence identifiers for sequence analysis and reporting. Using shorthand identifiers (for example, ‘chr1’ or ‘1’) to indicate the sequence is imprecise and also ignores the presence of other sequences in the assembly. In many cases, other top-level sequences, such as unlocalized scaffolds, patches and alternative loci, have a chromosome assignment but not chromosome coordinates. These sequences are independent of the chromosome assembly coordinate system and have their own coordinate space. Alternative loci are related to the chromosome coordinates through alignment to the chromosome assembly. Developing a structure that treats all top-level sequences as first-class citizens during analysis is an important step towards adopting use of the full assembly in analysis pipelines. A sketch of such an identifier mapping appears after this list.
Curation of multiple sequence alignments of the alternative loci to each other and the primary path. Currently, pairwise alignments of the alternative loci to the chromosome assembly are available to provide the allelic relationship between the alternative locus and the chromosome. However, these pairwise alignments do not allow for the comparison of alternative loci in a given region to each other. These alignments can also be used to develop graph structures. The relationship of the allelic sequences within a region helps define the assembly structure, and the community should work from a single set of alignments. These should be distributed with the GRC assembly releases.
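On the identifier point above, the following is a minimal sketch of how shorthand names can be resolved to stable accession.version identifiers, assuming a local copy of the NCBI assembly report for GRCh38 (GCA_000001405.15_GRCh38_assembly_report.txt, a tab-delimited file whose fifth column holds the GenBank accession); the parsing here is illustrative rather than a supported API.

```python
# Map shorthand sequence names ('1', 'chr1') to INSDC accession.versions
# using an NCBI assembly report file.
def load_accession_map(report_path):
    acc = {}
    with open(report_path) as fh:
        for line in fh:
            if line.startswith("#"):      # skip the commented header block
                continue
            fields = line.rstrip("\n").split("\t")
            seq_name, genbank_accn = fields[0], fields[4]
            acc[seq_name] = genbank_accn
            acc["chr" + seq_name] = genbank_accn   # tolerate UCSC-style names
    return acc

acc = load_accession_map("GCA_000001405.15_GRCh38_assembly_report.txt")
print(acc["1"], acc["chr1"])   # both resolve to CM000663.2 for GRCh38
```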
Recently, the GRC has released a track hub [25] that allows for the distribution of GRC data using standard track names and content (http://ngs.sanger.ac.uk/production/grit/track_hub/hub.txt). Additionally, the GRC has created a GitHub page to track development of tools and resources that facilitate use of the full assembly (https://github.com/GenomeRef/SoftwareDevTracking).
Concluding remarks
As we gain understanding of biological systems, we must update the models we use to represent these data. This can be difficult when the model supports common infrastructure and analysis tools used by a large swath of the scientific community. However, this growth is crucial in order to move the scientific community forward. While adoption of this new model will take substantial effort, doing so is an important step for the human genetics and broader genomics communities. We now have an opportunity and imperative to revisit old assumptions and conventions to develop a more robust analysis framework. The use of all sequences included in the reference will allow for improved genomic analyses and understanding of genomic architecture. Additionally, this new assembly model allows us to take a small step towards the realization of a graph-based assembly representation. The evolution of the assembly model allows us to improve our understanding of genomic architecture and provides a framework for boosting our understanding of how this architecture impacts human development and disease.
The electronic version of this article is the complete one and can be found online at: http://genomebiology.com/2015/16/1/13
image source: http://www.ncbi.nlm.nih.gov/projects/genome/assembly/grc/human/
What is photonics?
What is photonics? Check this explanation from IYL 2015:
"Photonics is the science of light. It is the technology of generating, controlling, and detecting light waves and photons, which are particles of light. The characteristics of the waves and photons can be used to explore the universe, cure diseases, and even to solve crimes. Scientists have been studying light for hundreds of years. The colors of the rainbow are only a small part of the entire light wave range, called the electromagnetic spectrum. Photonics explores a wider variety of wavelengths, from gamma rays to radio, including X-rays, UV and infrared light.
It was only in the 17th century that Sir Isaac Newton showed that white light is made of different colors of light. At the beginning of the 20th century, Max Planck and later Albert Einstein proposed that light was a wave as well as a particle, which was a very controversial theory at the time. How can light be two completely different things at the same time? Experimentation later confirmed this duality in the nature of light. The word Photonics appeared around 1960, when the laser was invented by Theodore Maiman.
Even if we cannot see the entire electromagnetic spectrum, visible and invisible light waves are a part of our everyday life. Photonics is everywhere; in consumer electronics (barcode scanners, DVD players, remote TV control), telecommunications (internet), health (eye surgery, medical instruments), manufacturing industry (laser cutting and machining), defense and security (infrared camera, remote sensing), entertainment (holography, laser shows), etc.
All around the world, scientists, engineers and technicians perform cutting edge research surrounding the field of Photonics. The science of light is also actively taught in classrooms and museums where teachers and educators share their passion for this field to young people and the general public. Photonics opens a world of unknown and far-reaching possibilities limited only by lack of imagination."
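As a small worked example of the wave-particle ideas quoted above: the energy of a single photon follows the Planck relation E = hc/λ, so the bands named in the passage span roughly twelve orders of magnitude in photon energy. Here is a quick back-of-the-envelope calculation (the representative wavelengths are our own choices, not taken from the IYL text):

```python
# Photon energy E = h*c/lambda across the electromagnetic spectrum.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

bands = {               # representative wavelengths in metres
    "gamma ray": 1e-12,
    "X-ray":     1e-10,
    "UV":        3e-7,
    "visible":   5.5e-7,
    "infrared":  1e-5,
    "radio":     1.0,
}
for name, wavelength in bands.items():
    print(f"{name:>9}: lambda = {wavelength:.0e} m, "
          f"E = {h * c / wavelength / eV:.3g} eV")
```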
Photonics technologies are everywhere!
There are more videos about IYL 2015 here:
http://www.light2015.org/Home/About/Resources/Videos.html
And you can learn more about light here:
http://www.light2015.org/Home/LearnAboutLight.html
About IYL 2015:
"The International Year of Light and Light-Based Technologies (IYL 2015) is a global initiative adopted by the United Nations to raise awareness of how optical technologies promote sustainable development and provide solutions to worldwide challenges in energy, education, agriculture, communications and health."
"IYL 2015 is endorsed by a number of international scientific unions and the International Council of Science, and has more than 100 partners from more than 85 countries. Founding Scientific Sponsors of IYL 2015 are the American Physical Society (APS); The American Institute of Physics (AIP); the European Physical Society (EPS); the IEEE Photonics Society (IPS); SPIE, the international society for optics and photonics; the Lightsources.org International Network; the Institute of Physics (IOP); and The Optical Society (OSA)."
image source: http://www.lighting-inspiration.com/iyl2015/
"Photonics is the science of light. It is the technology of generating, controlling, and detecting light waves and photons, which are particles of light. The characteristics of the waves and photons can be used to explore the universe, cure diseases, and even to solve crimes. Scientists have been studying light for hundreds of years. The colors of the rainbow are only a small part of the entire light wave range, called the electromagnetic spectrum. Photonics explores a wider variety of wavelengths, from gamma rays to radio, including X-rays, UV and infrared light.
It was only in the 17th century that Sir Isaac Newton showed that white light is made of different colors of light. At the beginning of the 20th century, Max Planck and later Albert Einstein proposed that light was a wave as well as a particle, which was a very controversial theory at the time. How can light be two completely different things at the same time? Experimentation later confirmed this duality in the nature of light. The word Photonicsappeared around 1960, when the laser was invented by Theodore Maiman.
Even if we cannot see the entire electromagnetic spectrum, visible and invisible light waves are a part of our everyday life. Photonics is everywhere; in consumer electronics (barcode scanners, DVD players, remote TV control), telecommunications (internet), health (eye surgery, medical instruments), manufacturing industry (laser cutting and machining), defense and security (infrared camera, remote sensing), entertainment (holography, laser shows), etc.
All around the world, scientists, engineers and technicians perform cutting edge research surrounding the field of Photonics. The science of light is also actively taught in classrooms and museums where teachers and educators share their passion for this field to young people and the general public. Photonics opens a world of unknown and far-reaching possibilities limited only by lack of imagination."
Photonics technologies are everywhere!
there are more videos about IYL 2015 here:
http://www.light2015.org/Home/About/Resources/Videos.html
and you can learn more about light here:
http://www.light2015.org/Home/LearnAboutLight.html
About IYL 2015:
"The International Year of Light and Light-Based Technologies (IYL 2015) is a global initiative adopted by the United Nations to raise awareness of how optical technologies promote sustainable development and provide solutions to worldwide challenges in energy, education, agriculture, communications and health."
"IYL 2015 is endorsed by a number of international scientific unions and the International Council of Science, and has more than 100 partners from more than 85 countries. Founding Scientific Sponsors of IYL 2015 are the American Physical Society (APS); The American Institute of Physics (AIP); the European Physical Society (EPS); the IEEE Photonics Society (IPS); SPIE, the international society for optics and photonics; the Lightsources.org International Network; the Institute of Physics (IOP); and The Optical Society (OSA)."
image source: http://www.lighting-inspiration.com/iyl2015/
Tag: YouTube
Why are young people angry?
From the blog of Professor Joseph J.Y. Sung.
Before reading, it is worth asking yourself: are you angry about your current situation, and why?
image source: Income Inequality and the Wealth Gap
Tag: general
Need help studying chemistry? Try an interactive periodic table
How many elements did you have to learn in high school? Let's review them for your PhD with the following periodic tables:
http://www.rsc.org/periodic-table
This is free from CRC:
http://www.chemnetbase.com/PeriodicTable/index.jsf
I love this one: you can get a video lesson via YouTube by clicking each element's box. http://www.periodicvideos.com
Thank you, Prof. Martyn Poliakoff and his team at the University of Nottingham.
More about Prof. Poliakoff:
http://www.bradyharanblog.com/blog/2014/12/30/sir-martyn-poliakoff
image source: www.chemnetbase.com, www.rsc.org/periodic-table, www.periodicvideos.com,
How can you recharge your mobile while jogging?
Recent research could help you recharge your battery as you walk, using a device embedded in your sports shoes.
Abstract
Modern compact and low power sensors and systems are leading towards increasingly integrated wearable systems. One key bottleneck of this technology is the power supply. The use of energy harvesting techniques offers a way of supplying sensor systems without the need for batteries and maintenance. In this work we present the development and characterization of two inductive energy harvesters which exploit different characteristics of the human gait. A multi-coil topology harvester is presented which uses the swing motion of the foot. The second device is a shock-type harvester which is excited into resonance upon heel strike.
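To get a feel for the shock-type idea in the abstract, here is a minimal, illustrative model, not the authors' actual device: a spring-mass resonator rings down after each heel strike, and the fraction of the damping attributed to the coil's electrical load sets the harvested energy. All parameter values are invented for illustration.

```python
# Damped resonator excited by a heel strike (modelled as an initial
# velocity); energy into the electrical load is the integral of c_elec*v^2.
import math

m, f0, Q = 5e-3, 50.0, 20.0          # mass (kg), resonance (Hz), quality factor
k = m * (2 * math.pi * f0) ** 2      # spring constant, N/m
c_total = m * 2 * math.pi * f0 / Q   # total damping coefficient, N*s/m
c_elec = 0.5 * c_total               # assume half the damping is electrical

dt, steps = 1e-5, 100_000            # simulate 1 s of ring-down
x, v, energy = 0.0, 0.5, 0.0         # heel strike gives 0.5 m/s to the mass
for _ in range(steps):
    a = (-k * x - c_total * v) / m   # Newton's second law
    v += a * dt
    x += v * dt
    energy += c_elec * v * v * dt    # instantaneous electrical power * dt

print(f"harvested energy per heel strike ~ {energy * 1e3:.2f} mJ")
```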
The electronic version of this article is the complete one and can be found online at: http://iopscience.iop.org/0964-1726/24/2/025029
related article on IOP:
Energy harvesting from human motion: exploiting swing and shock excitations
related article on BBC:
Smart shoe devices generate power from walking
Chemistry Central Journal themed issue: Current Topics in Chemical Crystallography
Chemistry, Faculty of Natural and Life Sciences, University of Southampton, Southampton SO17 1BJ, UK
Chemistry Central Journal 2014, 8:69 doi:10.1186/s13065-014-0069-9
More about author:
Homepage @ University of Southampton
Intermolecular bonding and supramolecular chemistry
One of the first themes which emerged from our increasing interest in intermolecular bonding was a strong focus on hydrogen bonding strengths and patterns and, before very long, on variations in such patterns. This was not new science, with many of the ideas and much of the groundwork having been laid for some time. The early work of Pauling [8], Powell [9], Wells [10] and others prompted the realisation that hydrogen bonding was not just a way of holding protein chains together [11], but a force to be recognised in chemical crystallography in general. The informative writings of Hamilton and Ibers [12] and Jeffrey [13] and the papers of Etter [14],[15] were fundamental in promoting this subject, which made most significant contributions to key areas such as the study of polymorphism [16] and, of course, the whole area of organic solid forms. These were the starting points for the conception of supramolecular chemistry [17] and the development of crystal engineering [18]. Advances in this theme have been spectacular, with many reviews, textbooks and meetings. A flavour of what is currently in vogue is nicely summarised in the program of a recent Gordon Conference on the subject [19].
In a not unrelated way, studies on the synthesis and structural characterisation of metal co-ordination compounds, especially those using bi- or multi-coordinating ligands, led to the equally significant and popular area of metal-organic frameworks, or MOFs [20]. In many ways, these types of compound have provided remarkable analogues of the structural characteristics and properties of zeolite and related phases, which had already been utilised as scaffolds for separation science, synthesis and catalysis [21]. Ian Williams and co-workers from the Hong Kong University of Science and Technology (HKUST) will be contributing to this Issue, with a paper on the reduced symmetry of sodalite (SOD) MOFs and the concept of conformational isomers for these frameworks.
Integrating computational chemistry and chemical crystallography
Knowing that the electrons in the molecule are responsible for the scattering of X-rays, the 1980s saw experiments devised and performed to see whether we could determine and model not only the atomic positions and thermal motion characteristics, but the actual distribution of electrons, on the atoms and in the bonds. Initial work in this area, which became described as “Charge Density Studies”, and its development are nicely presented in a review co-authored by Philip Coppens, one of the early proponents of the subject [22], and taken up by many other researchers. The early work was based on data collected at low temperature on serial diffractometers, sometimes with supporting data from neutron scattering experiments. As in all other areas of X-ray scattering, major developments were made as the technology and data-processing software for area detectors provided efficient collection of data of excellent quality, and yet another major theme grew. Integrating the experimental methods with new refinement procedures, which provided descriptions of the electron density in bonds, including intermolecular connections, again led to a major involvement with computational chemistry methods. A number of recent publications highlight the impressive level to which this area has developed, including the combined use of X-ray and neutron scattering to measure both charge and spin densities [23]-[25]. It is very pleasing to have an update on the application of charge density studies in crystal engineering, from Piero Macchi and Anna Krawczuk, for this issue.
The rapidly increasing knowledge base on the properties and descriptions of interatomic and intermolecular bonding, coupled with the hugely increasing amount of structural data, fuelled another successful Chemical Crystallography/Computational Chemistry integration, in the form of Crystal Structure Prediction. New computational platforms were designed and assembled, which generated large numbers of possible structures and then classified them in terms of computed lattice energies. The remarkable success of this development is nicely charted in the reports of a series of competitive “Blind Tests” [26]-[31], which will continue through 2014/5. A very useful advantage of the computational procedures associated with this topic is the possibility to compute lattice energies for polymorphs in particular, and we have such an example in one of the “home” contributions to this issue, through a collaboration with Frank Leusen and John Kendrick from Bradford. Another contributed paper, from Thomas Gelbrich and Ulrich Griesser from the University of Innsbruck, will highlight the use of a further energy calculation program, PIXEL, written by Angelo Gavezzotti [32],[33], and will describe how the complete set of pairwise intermolecular interactions in a structure, from van der Waals to hydrogen bonds, can be computed, and how these interact in producing the final, overall energy.
Crystallography under extreme conditions and other specialised experiments
Whilst variable-, low- and high-temperature crystallography has become a very general technique, allowing detailed studies of phase transformations, for example, more specialised experiments involving studies under very high pressures have also yielded some very interesting results, particularly, again, in the area of phase transitions. The origins and development of the technique, the use of which is still not as widespread as low-temperature crystallography, are nicely summarised by Andrzej Katrusiak [34]. Simon Parsons from Edinburgh and co-workers will be contributing a paper on a new study to this issue.
A second type of specialised technique, which has blossomed with the availability of pulsed synchrotron sources, is that of time-resolved crystallography. Here we irradiate a sample with pulsed radiation of a relevant wavelength and capture the diffraction pattern in synchronisation with the pulsing. Much of the work in this area is devoted to the study of macromolecules, but reports of studies on short-lived species in small-molecule systems are increasing significantly. A review, summarising the history and the state-of-the-art in this technique, has recently been published [35].
Crystallographic computing, databases and descriptors
Here we come to the crux of how we are really driving Chemical Crystallography, how we are storing our results and how we are using them. The first components of our toolkit are, of course, the software tools used to drive the diffractometers, process the captured X-rays, solve and refine the structures and then display and interpret the results. I have experienced this development from the very beginning! My first structure was determined using eye-measured film intensities and calculation of 2D electron density maps using Beevers-Lipson strips! Fortunately, this “good for the experience” procedure was then superseded by use of an electron density synthesis program, written by Owen Mills from the University of Manchester, and a least-squares refinement program written by John Rollett at the University of Oxford. Diagrams were prepared using rulers, compasses and Indian ink. In contrast to this, the software we now have, for calculations and graphics, is state-of-the-art, and we have all been very fortunate to have some superbly skilled crystallographic software experts to provide us with such facilities [36]. However, the problems we are now tackling continue to present new demands, so the development of the software continues apace. Richard Cooper, Amber Thomson and Pascal Parois have kindly agreed to contribute a paper to this Issue, describing strategies for handling bigger and bigger structures, and the way in which these are being implemented in the Oxford CRYSTALS package.
As a result of these highly successful developments, we are, of course, living with a continuing data explosion. It goes without saying that these data can be really valuable, if we learn how to use them well, and we must have a way of preserving them, not just for posterity, but for persistent use and re-use. Of course, we are very fortunate to have groups of experts in crystallography and database protocols, who are taking great care of this task also. The resulting databases cover all fields, and are all listed on the IUCr website [37]. The entries in the CSD currently total well over 700,000, and in all databases, well over one million. We visit the databases according to our areas of interest and activity, firstly, perhaps, to check that a proposed structure determination is not a duplication (although this is not necessarily a bad thing), and then to use any significant information which we find as accompanying data when we write up the results of our study. Many publications use sets of data to present comparative studies. The value of these data is truly immense, but it would be even greater if the data from all the structures which we know lie in filing cabinets or on local computer archives could be made available for database capture. I see from interrogation of the Cambridge Structural Database (CSD) that more of us are now submitting single-structure data as “Private Communications”, and this is truly helping to increase content and thus value. I believe that with just a small amount of collective effort, we can add structures which may not be written up in journal form as Private Communications, and aim for a CSD total breaking the 1 M mark within 3 or 4 years. Let us try!
In many cases, we can mine data from a database to study multi-structure relationships, make valuable comparisons and learn more about general or specific trends. For this purpose, we have to recognise the real situation this presents us with: “here are more than one million answers; what are the questions?” By this, I mean that we must prepare our database-searching questions in a most careful way. For example, a simple statistical probe, requiring an answer equivalent to “yes” or “no”, may tell us very little. Accordingly, we must develop our expertise in generating descriptors with which we can encode our questions so that the answers are suitably partitioned. For instance, suppose we wish to study the tendency of an organic compound containing a particular functional group to crystallise as a hydrate. A search which gives the simple answer x% yes, (100−x)% no has very little use. We would need to encode other factors into our question: what other functional groups are present and, for example, what positional relationship do they have to the target group?
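To make the point concrete, here is a schematic sketch of the difference between the flat question and the descriptor-encoded one, using an invented in-memory list of records in place of a real CSD query (the CCDC's actual search API is not shown): the bare hydrate percentage says little, while partitioning by a co-occurring functional group starts to reveal structure in the answer.

```python
# Flat vs descriptor-partitioned questions over hypothetical records of
# the form (has_target_group, co_occurring_groups, is_hydrate).
from collections import defaultdict

records = [
    (True, {"pyridine"}, True),
    (True, {"pyridine"}, True),
    (True, set(),        False),
    (True, {"amide"},    True),
    (True, set(),        False),
    (True, {"pyridine"}, False),
]

# The flat question: one nearly useless percentage.
flat = sum(is_hyd for _, _, is_hyd in records) / len(records)
print(f"hydrate rate, unpartitioned: {flat:.0%}")

# The descriptor-encoded question: partition by co-occurring group.
by_group = defaultdict(lambda: [0, 0])
for _, groups, is_hyd in records:
    key = ",".join(sorted(groups)) or "none"
    by_group[key][0] += is_hyd
    by_group[key][1] += 1
for key, (hits, n) in sorted(by_group.items()):
    print(f"  with {key}: {hits}/{n} hydrated")
```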
Careful definition of descriptors is also vital in database mining to explore such questions as - “is my molecule/structure similar to any other” or “how similar is my molecule/structure to another”. This has been a critical component in the structural systematics studies which have been the focus of research in my laboratory for several years [38], which has involved development of the XPac concept and software [39]. As a second “home” contribution, my co-workers David Hughes, Thomas Gelbrich, Terence Threlfall and I will be contributing a paper in which we propose some new thoughts on the way in which one can describe, and thus compare, hydrogen-bonded networks. We hope this will be of interest to many readers.
The electronic version of this article is the complete one and can be found online at: http://journal.chemistrycentral.com/content/8/1/69
image source: The Young Crystallographers Group
Cell culture models for study of differentiated adipose cells
Correspondence: Martin Clynes martin.clynes@dcu.ie
National Institute for Cellular Biotechnology, Dublin City University, Glasnevin, Dublin 9, Ireland
Stem Cell Research & Therapy 2014, 5:137 doi:10.1186/scrt527
More about author:
Prof. Martin Clynes @ NICB
Commentary
There is increasing interest in the use of adipose cells, both brown and white types, not least because the obesity epidemic dictates a need for increased research on adipose tissue and lipid metabolism. The paper by Balducci and colleagues – a collaboration between five Italian groups – reports the immortalisation, by lentiviral transduction, of human adipose-derived stromal cells [1].
Adipose cells are an important resource for biomedical research, partly because of the ready availability of human surgical material and the fact that they can be used to generate mesenchymal stem cells [2] – which themselves have interesting differentiation potential, for example towards a hepatocyte phenotype [3] – and even pluripotent stem cells [4]. Adipose-derived stem cells have been reported to differentiate into osteoblasts, chondrocytes, myocytes and neurons, as well as back to adipocytes, depending on the culture conditions [5]. Pluripotent stem cells [6,7], including induced pluripotent stem cells [8], can be differentiated in vitro into cells with multiple phenotypic characteristics of adipose cells, including specifically brown adipose cells [7].
It is interesting to note that telomerase expression in bone marrow stromal cells resulted in enhanced bone formation [9,10]. Another approach to generating large populations of adipose cells in vitro is to use viral immortalisation, as has been also achieved in other systems such as bone marrow progenitor cells [11].
Balducci and colleagues report diversity in differentiation potential between cell lines immortalised with different gene combinations, which is in itself an interesting observation, but it is not entirely clear what the cellular or molecular basis for this may be or whether it is a purely random observation [1]. Whatever the mechanism, human adipose-derived stromal cells co-transduced with human telomerase reverse transcriptase and human papilloma virus E6/E7 generated immortalised cells that retained the capacity to differentiate down osteogenic and adipogenic lineages and to produce angiogenesis-related proteins. Cells transduced with human telomerase reverse transcriptase alone or with human telomerase reverse transcriptase and Simian virus 40 did not retain these capacities to the same extent. The availability of immortal cell lines that closely resemble adipose-derived stromal cells contributes a useful new resource for those working on this fascinating cell type.
Nevertheless, it is important to bear in mind that, as in all such cases, these adipose cells are not normal cells identical to their parental finite-lifespan progenitors – transduced cells are unlikely to be acceptable for therapeutic use except perhaps in the terminal stages of life-threatening diseases. Indeed, Balducci and colleagues acknowledge this limitation and report on chromosomal aberrations and unbalanced translocations in the transduced cells. However, there is no doubt that the availability of these immortalised human adipose-derived stromal cell lines will significantly facilitate research on this interesting and relatively neglected cell type, and availability of these cell lines will help to answer more rapidly questions about their biology and expedite their application in cell therapy/tissue engineering, even if the cells eventually used for therapy will most probably be of primary origin rather than cell lines. The availability of large numbers of these cells that can be easily grown also offers the potential for discovery of additional autocrine, paracrine and endocrine factors which these cells may produce, but at levels too low to be detected from small-scale, limited-lifespan primary cultures, and this, in the end, could be the most valuable legacy from this interesting paper.
The availability of these different sources of adipose cells provides a much-expanded toolkit for research on adipose cells in vitro, and should make a significant impact on the progress of obesity research and on our understanding of adipose cell differentiation.
The electronic version of this article is the complete one and can be found online at: http://stemcellres.com/content/5/6/137
See related research by Balducci et al.,
Immortalization of human adipose-derived stromal cells: production of cell lines with high growth rate, mesenchymal marker expression and capability to secrete high levels of angiogenic factors
image source: The surprising science of fat: you can get fatter and become healthier. PLOS Blogs
National Institute for Cellular Biotechnology, Dublin City University, Glasnevin, Dublin 9, Ireland
Stem Cell Research & Therapy 2014, 5:137 doi:10.1186/scrt527
More about author:
Prof. Martin Clynes @ NICB
Commentary
There is increasing interest in the use of adipose cells, both brown and white types, not least because the obesity epidemic dictates a need for increased research on adipose tissue and lipid metabolism. The paper by Balducci and colleagues – a collaboration between five Italian groups – reports the immortalisation, by lentiviral transduction, of human adipose-derived stromal cells [1].
Adipose cells are an important resource for biomedical research, partly because of the ready availability of human surgical material and the fact that they can be used to generate mesenchymal stem cells [2] – which themselves have interesting differentiation potential, for example towards a hepatocyte phenotype [3] – and even pluripotent stem cells [4]. Adipose-derived stem cells have been reported to differentiate into osteoblasts, chondrocytes, myocytes and neurons, as well as back to adipocytes, depending on the culture conditions [5]. Pluripotent stem cells [6,7], including induced pluripotent stem cells [8], can be differentiated in vitro into cells with multiple phenotypic characteristics of adipose cells, including specifically brown adipose cells [7].
It is interesting to note that telomerase expression in bone marrow stromal cells resulted in enhanced bone formation [9,10]. Another approach to generating large populations of adipose cells in vitro is viral immortalisation, as has also been achieved in other systems such as bone marrow progenitor cells [11].
Balducci and colleagues report diversity in differentiation potential between cell lines immortalised with different gene combinations, which is in itself an interesting observation, although it is not entirely clear what its cellular or molecular basis may be, or whether it is simply a chance finding [1]. Whatever the mechanism, human adipose-derived stromal cells co-transduced with human telomerase reverse transcriptase and human papillomavirus E6/E7 generated immortalised cells that retained the capacity to differentiate down osteogenic and adipogenic lineages and to produce angiogenesis-related proteins. Cells transduced with human telomerase reverse transcriptase alone, or with human telomerase reverse transcriptase and simian virus 40, did not retain these capacities to the same extent. Immortal cell lines that closely resemble adipose-derived stromal cells provide a useful new resource for those working on this fascinating cell type.
Nevertheless, it is important to bear in mind that, as in all such cases, these adipose cells are not normal cells identical to their parental finite-lifespan progenitors, and transduced cells are unlikely to be acceptable for therapeutic use except perhaps in the terminal stages of life-threatening diseases. Indeed, Balducci and colleagues acknowledge this limitation and report chromosomal aberrations and unbalanced translocations in the transduced cells. There is no doubt, however, that these immortalised human adipose-derived stromal cell lines will significantly facilitate research on this interesting and relatively neglected cell type: they will help to answer questions about their biology more rapidly and to expedite their application in cell therapy/tissue engineering, even if the cells eventually used for therapy will most probably be of primary origin rather than cell lines. Large numbers of easily grown cells also offer the potential for discovery of additional autocrine, paracrine and endocrine factors that these cells may produce at levels too low to be detected in small-scale, limited-lifespan primary cultures; in the end, this could be the most valuable legacy of this interesting paper.
The availability of these different sources of adipose cells provides a much-expanded toolkit for in vitro research, and should make a significant impact on the progress of obesity research and on our understanding of adipose cell differentiation.
The electronic version of this article is the complete one and can be found online at: http://stemcellres.com/content/5/6/137
See related research by Balducci et al.:
Immortalization of human adipose-derived stromal cells: production of cell lines with high growth rate, mesenchymal marker expression and capability to secrete high levels of angiogenic factors
image source: The surprising science of fat: you can get fatter and become healthier. PLOS Blogs
Integrating big data and actionable health coaching to optimize wellness
Corresponding author: Leroy Hood lee.hood@systemsbiology.org
Institute for Systems Biology, 401 Terry Avenue North, Seattle, WA 98109, USA
BMC Medicine 2015, 13:4 doi:10.1186/s12916-014-0238-7
More about author:
Lee Hood Group, Institute for Systems Biology
Abstract
The Hundred Person Wellness Project (HPWP) is a 10-month pilot study of 100 ‘well’ individuals in which integrated data from whole-genome sequencing, the gut microbiome, clinical laboratory tests and quantified-self measures from each individual are used to provide actionable results for health coaching, with the goal of optimizing wellness and minimizing disease. In a commentary in BMC Medicine, Diamandis argues that the HPWP and similar projects will likely result in ‘unnecessary and potentially harmful over-testing’. We argue that this new approach will ultimately lead to lower costs, better healthcare, innovation and economic growth. The central points of the HPWP are: 1) it is focused on optimizing wellness through longitudinal data collection, integration and mining of individual data clouds, enabling the development of predictive models of wellness and disease that will reveal actionable possibilities; and 2) by extending this study to 100,000 well people, we will establish multiparameter, quantifiable wellness metrics and identify markers for wellness-to-early-disease transitions for most common diseases, which will ultimately allow earlier intervention, eventually returning the individual from a disease trajectory to a wellness trajectory.
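To make the idea of an ‘individual data cloud’ concrete, below is a minimal sketch, not the HPWP’s actual pipeline: the marker names, reference ranges and values are all hypothetical assumptions. It shows longitudinal measures from several sources being pooled into one per-participant record, with the latest value of each marker checked against a reference range to produce ‘actionable’ items of the kind a health coach might raise.

```python
# Minimal sketch (not the HPWP pipeline): merge longitudinal measures from
# several hypothetical data sources into one per-participant record, then
# flag out-of-range markers as "actionable" items for a health coach.
# All field names, reference ranges and data are illustrative assumptions.

from dataclasses import dataclass, field

# Illustrative reference ranges (marker -> (low, high)); a real system would
# use clinically validated, personalised ranges.
REFERENCE_RANGES = {
    "ldl_cholesterol_mg_dl": (0, 130),
    "fasting_glucose_mg_dl": (70, 100),
    "daily_steps": (7000, float("inf")),
}

@dataclass
class DataCloud:
    """Longitudinal measurements for one participant, keyed by marker name."""
    participant_id: str
    measurements: dict = field(default_factory=dict)  # marker -> [(month, value)]

    def add(self, marker, month, value):
        self.measurements.setdefault(marker, []).append((month, value))

    def actionable_items(self):
        """Return markers whose latest value falls outside its reference range."""
        items = []
        for marker, series in self.measurements.items():
            if marker not in REFERENCE_RANGES:
                continue
            _, latest = max(series)  # tuple comparison picks the latest month
            low, high = REFERENCE_RANGES[marker]
            if not (low <= latest <= high):
                items.append((marker, latest, (low, high)))
        return items

# Usage: one participant's cloud built from lab tests and a wearable device.
cloud = DataCloud("participant-001")
cloud.add("ldl_cholesterol_mg_dl", month=0, value=145)
cloud.add("ldl_cholesterol_mg_dl", month=3, value=138)
cloud.add("fasting_glucose_mg_dl", month=3, value=92)
cloud.add("daily_steps", month=3, value=4200)

for marker, value, (low, high) in cloud.actionable_items():
    print(f"{marker}: {value} outside [{low}, {high}] -> discuss with coach")
```

The sketch only illustrates the data-structure idea of pooling heterogeneous longitudinal measures per individual; the project itself would draw on far richer data types (whole-genome, microbiome) and on predictive models rather than fixed thresholds.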
The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1741-7015/13/4
image source: The Hundred Person Wellness Project will include around-the-clock monitoring of subjects, news.com.au