
Monday, January 30, 2012

Photo from NASA Mars Orbiter Shows Wind's Handiwork



This enhanced-color image shows sand dunes trapped in an impact crater in Noachis Terra, Mars. Image credit: NASA/JPL-Caltech/Univ. of Arizona

January 25, 2012
Some images of stark Martian landscapes provide visual appeal beyond their science value, including a recent scene of wind-sculpted features from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.

The scene shows dunes and sand ripples of various shapes and sizes inside an impact crater in the Noachis Terra region of southern Mars. Patterns of dune erosion and deposition provide insight into the sedimentary history of the area.

The Mars Reconnaissance Orbiter has been examining Mars with six science instruments since 2006. Now in an extended mission, the orbiter continues to provide insights about the planet's ancient environments and about how processes such as wind, meteorite impacts and seasonal frosts are continuing to affect the Martian surface today. This mission has returned more data about Mars than all other orbital and surface missions combined.

More than 20,600 images taken by HiRISE are available for viewing on the instrument team's website: http://hirise.lpl.arizona.edu. Each observation by this telescopic camera covers an area of several square miles (several square kilometers) and can reveal features as small as a desk.
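As a rough check on those coverage and resolution figures, here is a back-of-envelope sketch; the pixel scale and swath width used are commonly quoted approximations for HiRISE and are assumptions here, not values taken from this article.

```python
# Back-of-envelope estimate of HiRISE ground coverage.
# The figures below are commonly quoted approximations, not values
# from this article: ~0.3 m/pixel ground sampling and a swath about
# 20,000 pixels wide from the spacecraft's orbit.

PIXEL_SCALE_M = 0.3    # approximate metres per pixel (assumed)
SWATH_PIXELS = 20_000  # approximate cross-track pixel count (assumed)

swath_width_km = PIXEL_SCALE_M * SWATH_PIXELS / 1000  # ~6 km
area_km2 = swath_width_km ** 2                        # square scene, ~36 km^2

print(f"Swath width: {swath_width_km:.1f} km")
print(f"Scene area (square scene): {area_km2:.0f} km^2 "
      f"({area_km2 * 0.386:.0f} sq mi)")
print(f"Smallest resolvable feature: ~{PIXEL_SCALE_M:.1f} m, about desk-sized")
```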

HiRISE is operated by the University of Arizona, Tucson. The instrument was built by Ball Aerospace & Technologies Corp., Boulder, Colo. The Mars Reconnaissance Orbiter project is managed by the Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Science Mission Directorate, Washington. JPL is a division of the California Institute of Technology, also in Pasadena. Lockheed Martin Space Systems, Denver, built the spacecraft. 

Saturday, January 28, 2012

Cambridge scientist 'debunks' flying myth


A scientist at Cambridge University has debunked the long-held myth about how aircraft stay aloft.
Aeroplanes fly because their wings cause the air pressure underneath to be greater than that above, lifting them into the air. But engineers have for years been frustrated by a popular theory that wrongly explains what causes the pressure change, a myth commonly found in school textbooks and flight manuals.
Now Prof Holger Babinsky of Cambridge University's engineering department has created a minute-long video, posted on YouTube, to lay the myth to rest once and for all, The Daily Telegraph reported.
According to conventional wisdom, the pressure change happens because the air passing over the curved upper surface of the wing has farther to travel than the air passing under the flat lower surface, meaning it must move faster to arrive at the other side of the wing at the same time.
Prof Babinsky says the myth goes against the laws of physics and the real explanation has nothing to do with the distance the air has to travel.
According to him, the curvature of the wing causes the change in air pressure because it pulls some of the air upwards, reducing pressure, and forces the rest beneath it, creating higher pressure.
A law known as the Bernoulli equation relates the two: where the pressure is lower, the air moves faster. So the air stream above the wing does move more quickly than the one below, but the speed difference is a consequence of the pressure difference rather than its cause.
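For readers who want to see the numbers, here is a minimal sketch of the Bernoulli relation described above; the air density is the standard sea-level value, and the two airspeeds are illustrative placeholders, not figures from Prof Babinsky's video.

```python
# Bernoulli's relation along a streamline (incompressible, inviscid flow):
#   p + 0.5 * rho * v**2 = constant
# Given the airspeeds above and below a wing, the pressure difference
# between the two surfaces follows directly. The speeds below are
# illustrative placeholders, not figures from the article.

RHO_AIR = 1.225  # air density at sea level, kg/m^3

def pressure_difference(v_lower: float, v_upper: float) -> float:
    """Return p_lower - p_upper (pascals) for the two streamline speeds."""
    return 0.5 * RHO_AIR * (v_upper**2 - v_lower**2)

# Example: 100 m/s under the wing, 110 m/s over it.
dp = pressure_difference(100.0, 110.0)
print(f"Pressure difference: {dp:.0f} Pa")  # ~1286 Pa, pushing the wing up

# Babinsky's point: the faster upper flow accompanies the lower pressure,
# but it is the wing's curvature deflecting the air that sets up the
# pressure difference, not a need for the flows to "arrive together".
```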

World's first magnetic soap 'produced'



In pioneering research, scientists claim to have produced the world's first magnetic soap, composed of iron-rich salts dissolved in water.
A team at Bristol University says that its soap, which responds to a magnetic field when placed in solution, could calm concerns over the use of surfactants in oil-spill clean-ups and revolutionise industrial cleaning products.
Researchers have long been searching for a way to control soaps (or surfactants, as they are known in industry) once they are in solution, to increase their ability to dissolve oils in water and then remove them from a system.
The Bristol University team produced the magnetic soap by dissolving iron in a range of inert surfactant materials composed of chloride and bromide ions, very similar to those found in everyday mouthwash or fabric conditioner.
The addition of the iron creates metallic centres within the soap particles, say the scientists led by Julian Eastoe.
To test its properties, the team brought a magnet close to a test tube containing the new soap lying beneath a less dense organic solution, the journal Angewandte Chemie reported.
When the magnet was introduced, the iron-rich soap overcame both gravity and the surface tension between the water and oil, levitating through the organic solvent to reach the magnet and so demonstrating its magnetic properties.
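To see what "overcoming gravity and surface tension" means in practice, here is a toy force-balance sketch; every number in it is an invented placeholder for illustration, not a measurement from the Bristol experiment.

```python
# Toy force balance for a magnetically responsive soap aggregate: it
# levitates when the magnetic pull exceeds its weight plus the
# restraining interfacial force. All values below are invented
# placeholders, not measurements from the Bristol study.

G = 9.81  # gravitational acceleration, m/s^2

def levitates(f_magnetic: float, mass_kg: float,
              interfacial_force: float) -> bool:
    """True if the magnetic pull beats weight plus interfacial restraint."""
    return f_magnetic > mass_kg * G + interfacial_force

# Placeholder values for a millimetre-scale iron-rich soap aggregate:
mass = 5e-6          # kg (5 mg), assumed
f_interface = 2e-5   # N, assumed interfacial restraint at the oil/water boundary
f_mag = 1e-4         # N, assumed pull from the magnet

print(levitates(f_mag, mass, f_interface))  # True under these assumptions
```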

Friday, January 27, 2012

A non-hydrolytic sol–gel route to highly active MoO3–SiO2–Al2O3 metathesis catalysts


Mesoporous mixed oxides prepared via non-hydrolytic sol–gel markedly outperform other MoO3-based catalysts in the metathesis of propene.

Mind the Science Gap – Helping science students connect with a non-science audience


Studying for a Masters degree in Public Health prepares you for many things.  But it doesn't necessarily give you hands-on experience of taking complex information and translating it into something others can understand and use.  Yet as an increasing array of public health issues hits the headlines, from fungicide residues in orange juice to the safe development of new technologies, this is exactly where public health professionals need to be developing their skills.  And it's not only in the public domain: the ability to translate complex science into actionable intelligence is more important now than ever in helping policy makers and business leaders make decisions that are grounded in evidence rather than speculation.
These were just some of the drivers behind a new course I have just started teaching at the University of Michigan School of Public Health that is built around science blogging.  OK, so maybe I wanted to have a little fun with the students as well.  But my experience with the blog 2020 Science has taught me that the discipline of writing a science-based blog for a broad audience is invaluable for developing highly transferable communication skills.  And it's not just me.  Emailing with the scientist, author and blogger Sheril Kirshenbaum about the course, she admitted that "blogging taught me how to effectively communicate with broad audiences".  (Sheril added that she has also learned a great deal from many wonderful editors, to which I can only add "me too!")
The new course throws ten Masters of Public Health students in at the deep end by challenging each of them to publish ten posts over ten weeks on the blog Mind The Science Gap – and to respond to the comments they receive.  As this is a science blog, each post will be based around published health-related research.  The challenge for the writers will be to translate this into a science-grounded piece that is relevant and accessible to a broad audience.
The key objective here is to develop new skills through experience.  And for this, I am encouraging as many people as possible to comment on the posts.  As any science blogger will tell you, even simple comments like “I liked this” or “this was confusing” are extremely helpful in understanding what works and what doesn’t.  But I am also hoping readers will look beyond the educational aspects of the exercise, and engage with the students on the subjects they are writing about.  This is where I suspect the experience will become most empowering.
There’s another aspect of the course that intrigues me.  Rather naively, I started this exercise imagining a series of impersonal posts that focused on intellectually interesting but emotionally ambivalent scientific studies.  What I forgot is that public health matters to people.  And so it’s going to be tough for our bloggers to separate what they write about from their passions, and those of their readers.  In fact I’m not even sure that such a separation would be appropriate: for communication to be relevant, it needs to go beyond the numbers.  But how do you effectively combine science with a desire to make the world a better place in a blog?  I try to achieve this on my own blog, but I must admit I don’t have any easy answers here.  So as the Mind The Science Gap students develop their skills, I’m going to be doing some learning of my own as I watch how they respond to this particular challenge.
At the end of the day, Mind The Science Gap is about teaching the next generation of public health professionals how to connect more effectively with non-specialist and non-technical audiences – whether they are managers, clients, policy makers or members of the public.  It isn’t about creating a new wave of science bloggers.  But in the process, I like to think that some of the participants will get the blogging bug. Whether they do or not, I’m looking forward to ten weeks of engaging, entertaining and hopefully challenging posts from ten talented students.


Read more: http://2020science.org/#ixzz1keRk1Ove

National Academy publishes new nanomaterials risk research strategy


The US National Academy of Sciences today published its long-awaited Research Strategy for Environmental, Health, and Safety Aspects of Engineered Nanomaterials. I won’t comment extensively on the report, as I was a member of the committee that wrote it.  But I did want to highlight a number of aspects that I think are particularly noteworthy:
Great progress so far, but it’s time to change gears. Something we grappled with as a committee was what the value of yet another research strategy was going to be.  After all, it wasn’t so long ago that the US federal government published a well-received strategy of its own.  A key driver behind our strategy was a sense that the past decade has been one of defining the challenges we face as the field of nanotechnology develops, while the next decade will require more focus as an ever greater number of nanotechnology-enabled products hit the market.  In other words, from a research perspective it’s time to change gears, building on past work but focusing on rapidly emerging challenges.
Combining life cycle and value chain in a single framework for approaching nanomaterial risk research.  As a committee, we spent considerable time developing a conceptual framework for approaching research addressing the health and environmental impacts of engineered nanomaterials.  What we ended up using was a combination of value chain – ranging from raw materials to intermediate products to final products – and material/product life cycle at each stage of the value chain.  This effectively allows risk hot spots to be identified at each point of a material and product’s development, use and disposal cycle.
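As a rough illustration of that two-axis framework, the sketch below crosses value-chain stages with life-cycle stages so that each cell can be screened for risk hot spots; the stage names paraphrase the post, and the flagged concerns are invented placeholders.

```python
# Sketch of the committee's two-axis framework: value-chain stages
# crossed with life-cycle stages, giving one screening cell per
# combination. Stage names paraphrase the post; the flagged concerns
# below are invented placeholders for illustration only.

VALUE_CHAIN = ["raw material", "intermediate product", "final product"]
LIFE_CYCLE = ["production", "use", "disposal"]

# Every (value-chain, life-cycle) pair is a point where risk hot spots
# can be identified.
grid = {(vc, lc): [] for vc in VALUE_CHAIN for lc in LIFE_CYCLE}

# Hypothetical hot-spot annotations:
grid[("raw material", "production")].append("occupational inhalation exposure")
grid[("final product", "disposal")].append("release to waste streams")

for (vc, lc), concerns in grid.items():
    if concerns:
        print(f"{vc} / {lc}: {', '.join(concerns)}")
```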
Principles, not definitions.  Rather than rely on a single definition of engineered nanomaterial to guide risk-related research, we incorporated a set of principles into our conceptual framework to help identify materials of concern from an environment, health and safety impact perspective.  These build on the principles proposed by myself, Martin Philbert and David Warheit in a toxicology review published last year.  From the National Academies report:
…the present committee focuses on a set of principles in lieu of definitions to help identify nanomaterials and associated processes on which research is needed to ensure the responsible development and use of the materials. The principles were adopted in part because of concern about the use of rigid definitions of ENMs that drive EHS research and risk-based decisions … The principles are technology-independent and can therefore be used as a long-term driver of nanomaterial risk research. They help in identifying materials that require closer scrutiny regarding risk irrespective of whether they are established, emerging, or experimental ENMs. The principles are built on three concepts: emergent risk, plausibility, and severity; …
Emergent risk, as described here, refers to the likelihood that a new material will cause harm in ways that are not apparent, assessable, or manageable with current risk-assessment and risk-management approaches. Examples of emergent risk include the ability of some nanoscale particles to penetrate to biologically relevant areas that are inaccessible to larger particles, the failure of some established toxicity assays to indicate accurately the hazard posed by some nanomaterials, scalable behavior that is not captured by conventional hazard assessments (such as behavior that scales with surface area, not mass), and the possibility of abrupt changes in the nature of material-biologic interactions associated with specific length scales. Identifying emergent risk depends on new research that assesses a novel material’s behavior and potential to cause harm.
Emergent risk is defined in terms of the potential of a material to cause harm in unanticipated or poorly understood ways rather than being based solely on its physical structure or physicochemical properties. Thus, it is not bound by rigid definitions of nanotechnology or nanomaterials. Instead, the principle of emergence enables ENMs that present unanticipated risks to human health and the environment to be distinguished from materials that probably do not. It also removes considerable confusion over how nanoscale atoms, molecules, and internal material structures should be considered from a risk perspective, by focusing on behavior rather than size.
Many of the ENMs of concern in recent years have shown a potential to lead to emergent risks and would be tagged under this principle and thus require further investigation. But the concept also allows more complex nanomaterials to be considered—those in the early stages of development or yet to be developed. These include active and self-assembling nanomaterials. The principle does raise the question of how “emergence” is identified, being by definition something that did not exist previously. However the committee recognized that in many cases it is possible to combine and to interpret existing data in ways that indicate the possible emergence of new risks. For example, some research has suggested that surface area is an important factor that affects the toxic potency of some ENMs; ENMs that have high specific surface area and are poorly soluble might pose an emergent risk.
Plausibility refers in qualitative terms to the science-based likelihood that a new material, product, or process will present a risk to humans or the environment. It combines the possible hazard associated with a material and the potential for exposure or release to occur. Plausibility also refers to the likelihood that a particular technology will be developed and commercialized and thus lead to emergent risks. For example, the self-replicating nanobots envisaged by some writers in the field of nanotechnology might legitimately be considered an emergent risk; if it occurs, the risk would lie outside the bounds of conventional risk assessment. But this scenario is not plausible, clearly lying more appropriately in the realm of science fiction than in science. The principle of plausibility can act as a crude but important filter to distinguish between speculative risks and credible risks.
The principle of severity refers to the extent and magnitude of harm that might result from a poorly managed nanomaterial. It also helps to capture the reduction in harm that may result from research on the identification, assessment, and management of emergent risk. The principle offers a qualitative reality check that helps to guard against extensive research efforts that are unlikely to have a substantial effect on human health or environmental protection. It also helps to ensure that research that has the potential to make an important difference is identified and supported.
Together, those three broad principles provide a basis for developing an informed strategy for selecting materials that have the greatest potential to present risks. They can be used to separate new materials that raise safety concerns from materials that, although they may be novel from an application perspective, do not present undetected, unexpected, or enhanced risks. They contribute to providing a framework for guiding a prioritized risk-research agenda. In this respect, the principles were used by the committee as it considered the pressing risk challenges presented by ENMs.
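To make the three principles concrete, here is a toy screening sketch; the boolean logic is my own illustration of how the principles might combine, not a method prescribed by the committee.

```python
# Toy screen based on the report's three principles (emergent risk,
# plausibility, severity). The combination logic is an illustration
# only, not a procedure prescribed by the committee.

from dataclasses import dataclass

@dataclass
class MaterialAssessment:
    name: str
    emergent_risk: bool  # may cause harm current frameworks would miss
    plausible: bool      # science-based likelihood of hazard and exposure
    severe: bool         # potential extent/magnitude of harm is substantial

def needs_priority_research(m: MaterialAssessment) -> bool:
    # All three principles must hold: the risk is emergent, the scenario
    # is plausible (filtering out science-fiction scenarios such as
    # self-replicating nanobots), and the potential harm is severe.
    return m.emergent_risk and m.plausible and m.severe

nanobots = MaterialAssessment("self-replicating nanobots",
                              emergent_risk=True, plausible=False, severe=True)
high_sa_enm = MaterialAssessment("high surface-area, poorly soluble ENM",
                                 emergent_risk=True, plausible=True, severe=True)

print(needs_priority_research(nanobots))     # False: fails plausibility
print(needs_priority_research(high_sa_enm))  # True under these assumptions
```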
Maintaining current research and development funding levels.  As a committee, we felt that the current US federal government investment of ~$120 million per year in environment, health and safety-specific nanotechnology research was reasonable, especially given the current economic climate.  However, we did recommend that, as knowledge develops and commercialization of products using nanomaterials increases, funded research be aligned with the areas and priorities identified within the strategy.
Developing cross-cutting activities.  There were five areas where the committee felt that further funding was needed to ensure the value of nano-risk research was fully realized.  Each of these cuts across areas of research, and provides the means to maximize the benefit of the science being supported.  From the report:
Informatics: $5 million per year in new funding for the next 5 years should be used to support the development of robust informatics systems and tools for managing and using information on the EHS effects of ENMs. The committee concluded that developing robust and responsive informatics systems for ENM EHS information was critical to guiding future strategic research and translating research into actionable intelligence. This includes maximizing the value of research that is EHS-relevant but not necessarily EHS-specific, such as studies conducted during the development of new therapeutics. Based on experience in other areas of research, investment in informatics on the order of $15 million would be needed to make substantial progress in a complex and data-rich field. However, within the constraints of nanotechnology R&D, the committee concluded that the modest investment proposed would at least allow initial informatics systems to be developed and facilitate planning for the long term.
Instrumentation: $10 million per year in new funding for the next 5 years should be invested in translating existing measurement and characterization techniques into platforms that are accessible and relevant to EHS research, and in developing new EHS-specific measurement and characterization techniques for assessing ENMs under a variety of conditions. The committee recognized that the proposed budget is insufficient for substantial research into developing new nanoscale characterization techniques, especially considering the cost of high-end instruments such as analytic electron microscopes, which can run in excess of $2 million per instrument. However, the proposed budget was considered adequate to support the translation of techniques developed or deployed in other fields for the EHS characterization of ENMs.
Materials: Investment is needed in developing benchmark ENMs over the next 5 years, a long-standing need that has attracted little funding to date. The scope of funding needed depends in part on the development of public-private partnerships. However, to assure that funding is available to address this critical gap, the committee recommends that $3-5 million per year be invested initially in developing and distributing benchmark ENMs. While more funds could be expended on developing a library of materials, this amount will assure that the most critically needed materials are developed. These materials will enable systematic investigation of their behavior and mechanisms of action in environmental and biologic systems. The availability of such materials will allow benchmarking of studies among research groups and research activities. The committee further recommends that activities around materials development be supported by public-private partnerships. Such partnerships would also help to assure that relevant materials are being assessed.
Sources: $2 million per year in new funding for the next 5 years should be invested in characterizing sources of ENM release and exposure throughout the value chain and life cycle of products. The committee considered that this was both an adequate and reasonable budget to support a comprehensive inventory of ENM sources.
Networks: $2 million per year in new funding for the next 5 years should be invested in developing integrated researcher and stakeholder networks that facilitate the sharing of information and the translation of knowledge to effective use. The networks should allow participation of representatives of industry and international research programs and are a needed complement to the informatics infrastructure. They would also facilitate dialogue around the development of a dynamic library of materials. The committee concluded that research and stakeholder networks are critical to realizing the value of federally funded ENM EHS research and considered this to be an area where a relatively small amount of additional funding would have a high impact—both in the development of research strategies and in the translation and use of research findings. Given the current absence of such networks, the proposed budget was considered adequate.
Authority and accountability.  In our report, we talk quite a bit about the need for an entity within the federal government to take the lead in implementing a risk research strategy.  While the US National Nanotechnology Initiative has done a great job coordinating interagency activities, we felt that there is only so far that coordination without authority can go if socially and economically important research is to be conducted in a timely and relevant manner.  What this “entity” might look like, we left to the federal government to chew over.
There’s a lot more to the report – including (as you would expect) a broad assessment of research areas that need attention if the science of nanomaterial human health and environmental impacts is to continue to develop effectively.
This is the first of two reports; the second is due in around 18 months and will look at progress toward implementing a relevant and effective research strategy.


Tuesday, January 24, 2012

Google Search Kills Human Memory


Bangalore: With technology penetrating every part of our lives, we have learned to search for information on the internet. But could this practice be altering the way we store and process information? A new study has revealed findings that suggest Google could indeed be eroding our memories.


Psychology professors from Columbia University, the University of Wisconsin-Madison and Harvard University together published a study which found that people "remember less by knowing information than by knowing where the information can be found", essentially meaning that we tend to forget information we have searched for and instead remember where on the internet it can be found. "The Internet, with its search engines such as Google and databases such as IMDB and the information stored there, has become an external memory source that we can access at any time," said the study. "It has become so commonplace to look up the answer to any question the moment it occurs, it can feel like going through withdrawal when we can't find out something immediately."


In short, the four tests that the professors conducted on individuals confirmed that searching for information on the internet could be weakening our "internal memory", which stores data. "When people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it," said the study.


So instead of remembering the information itself, say what happened during an event, we tend to remember which sites carried that information, or how we found those sites through Google.


According to the study, "These results suggest that processes of human memory are adapting to the advent of new computing and communication technology. We are becoming symbiotic with our computer tools, growing into interconnected systems that remember less by knowing information than by knowing where the information can be found." "This gives us the advantage of access to a vast range of information—although the disadvantages of being constantly 'wired' are still being debated," the professors concluded.