Pages

Monday, January 30, 2012

Cold Plasma Above Earth

 Cold plasma has been well-hidden. Space physicists have long lacked clues to how much of this electrically charged gas exists tens of thousands of miles above Earth and how the stuff may impact our planet’s interaction with the Sun. Now, a new method developed by Swedish researchers makes cold plasma measurable and reveals significantly more cold, charged ions in Earth’s upper altitudes than previously imagined. 

At these lofty elevations, storms of high-energy charged particles -- space weather -- roil the atmosphere, creating auroras, buffeting satellites, and sometimes wreaking havoc with electronic devices and electric grids on Earth. The new evidence of abundant cold (i.e. low-energy) ions may change our understanding of this tumultuous space weather and lead to more accurate forecasting of it, scientists say. The finding might also shed light on what’s happening around other planets and moons -- for instance, helping explain why the once robust atmosphere of Mars is so wispy today.

“The more you look for low-energy ions, the more you find,” said Mats Andre, a professor of space physics at the Swedish Institute of Space Physics in Uppsala, Sweden, and leader of the research team. “We didn’t know how much was out there. It’s more than even I thought.” 

The low-energy ions are created in the ionosphere, a region of the upper atmosphere where solar energy can sweep electrons away from molecules, leaving atoms of elements like hydrogen and oxygen with positive charges. Actually detecting these ions at high altitudes has been extremely difficult. 

Now that has changed, making it apparent that low-energy ions abound in the distant reaches where Earth’s atmosphere gives way to outer space. Researchers knew the ions were present at altitudes of about 100 kilometers (60 miles), but Andre and his colleague Chris Cully looked much higher, between 20,000 and 100,000 km (12,400 to 60,000 mi). While the concentration of the previously hidden cold ions varies, about 50 to 70 percent of the time the particles make up most of the mass of great swaths of space, according to the researchers’ satellite measurements and calculations. And, in some high-altitude zones, low-energy ions dominate nearly all of the time. Even at altitudes around 100,000 km -- about a third of the distance to the Moon -- the team detected these previously elusive low-energy ions. 

A scientist examines one of the European Space Agency's four Cluster satellites, used in a recent Geophysical Research Letters study to measure low-energy ions. Credit: European Space Agency
Finding so many relatively cool ions in those regions is surprising, Andre said, because there’s so much energy blasting into Earth’s high altitudes from the solar wind -- a rushing flow of hot plasma streaming from the Sun, which stirs up space-weather storms. 

This hot plasma is about 1,000 times hotter than what Andre considers cold plasma -- but even cold is a relative term. The low-energy ions have an energy that would correspond to about 500,000 degrees Celsius (about one million degrees Fahrenheit) at typical gas densities found on Earth. But because the density of the ions in space is so low, satellites and spacecraft can orbit without bursting into flames. 
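
To put “cold” and “hot” on a common scale, the quoted temperatures can be converted to characteristic particle energies via E ≈ k_B·T. The short sketch below does this back-of-envelope conversion; the numbers are illustrative and not taken from the study itself.

```python
# Rough temperature-to-energy conversion for plasma particles (illustrative only).
# E ~ k_B * T gives the characteristic thermal energy at temperature T.

K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in electron-volts per kelvin

def thermal_energy_ev(temperature_k: float) -> float:
    """Characteristic particle energy (eV) for a given temperature (K)."""
    return K_B_EV_PER_K * temperature_k

cold_ions = thermal_energy_ev(5.0e5)   # ~500,000 K "cold" plasma -> roughly 40 eV
hot_plasma = thermal_energy_ev(5.0e8)  # ~1,000 times hotter solar-wind-driven plasma

print(f"Cold ions:  ~{cold_ions:.0f} eV per particle")
print(f"Hot plasma: ~{hot_plasma:.0f} eV per particle")
```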

The researchers’ new findings have been accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union.

For decades, space physicists have struggled to accurately detect low-energy ions and determine how much of the material is leaving our atmosphere. The satellite Andre works on, one of the European Space Agency's four Cluster spacecraft, is equipped with a detector whose thin wire arms measure the electric field between them as the satellite rotates. But when the scientists gathered data from their detectors, two mysterious trends appeared. Strong electric fields turned up in unexpected regions of space. And as the spacecraft rotated, measurements of the electric field didn’t fluctuate in the smoothly changing manner that Andre expected.
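
For intuition about what a “smoothly changing” signal means here: a spinning double-probe instrument would ordinarily record a probe-to-probe voltage that varies as a clean sinusoid at the spin rate. The minimal sketch below illustrates that expectation alongside a crudely distorted version; the boom length, spin period, field strength and the form of the distortion are all assumed, illustrative values rather than Cluster parameters or the team’s actual model.

```python
import numpy as np

# Idealized spinning double-probe measurement (all values illustrative).
BOOM_LENGTH_M = 88.0      # assumed tip-to-tip probe separation
SPIN_PERIOD_S = 4.0       # assumed spacecraft spin period
E_FIELD_V_PER_M = 1.0e-3  # assumed ambient electric field magnitude

t = np.linspace(0.0, 2 * SPIN_PERIOD_S, 400)
phase = 2 * np.pi * t / SPIN_PERIOD_S

# Expected signal: the projection of a uniform field onto the rotating boom,
# i.e. a clean sinusoid at the spin frequency.
expected_dv = E_FIELD_V_PER_M * BOOM_LENGTH_M * np.cos(phase)

# A signal distorted by an altered potential structure around the spacecraft
# might carry an offset and extra harmonic content (purely illustrative form).
distorted_dv = expected_dv + 0.02 * np.abs(np.cos(phase)) + 0.01

print("Peak expected signal (V): ", expected_dv.max())
print("Peak distorted signal (V):", distorted_dv.max())
```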

“To a scientist, it looked pretty ugly,” Andre said. “We tried to figure out what was wrong with the instrument. Then we realized there’s nothing wrong with the instrument.” Unexpectedly, they found that cold plasma was altering the structure of electrical fields around the satellite. Once they understood that, they could use their field measurements to reveal the presence of the once-hidden ions. 

It’s a clever way of turning the limitations of a spacecraft-based detector into assets, said Thomas Moore, senior project scientist for NASA’s Magnetospheric Multiscale mission at the Goddard Space Flight Center in Greenbelt, Maryland. He was not involved in the new research. 

As scientists use the new measurement method to map cold plasma around Earth, they could discover more about how hot and cold plasmas interact during space storms and other events, deepening researchers’ understanding of space weather, Andre said. 

The new measurements indicate that about a kilogram (two pounds) of cold plasma escapes from Earth’s atmosphere every second, Andre said. Knowing that rate of loss for Earth may help scientists better reconstruct what became of the atmosphere of Mars, which is thought to once have been denser and more similar to Earth’s. The new cold plasma results might also help researchers explain atmospheric traits of other planets and moons, Andre suggested. 
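
For a sense of scale, the back-of-envelope sum below shows what a steady loss of about a kilogram per second amounts to over a billion years, compared with a commonly quoted figure for the mass of today’s atmosphere. The constant-rate assumption and the comparison are deliberate oversimplifications for illustration, not results from the study.

```python
# Back-of-envelope: cumulative plasma loss at ~1 kg/s over a billion years,
# assuming a constant rate (a major simplification, for illustration only).

LOSS_RATE_KG_PER_S = 1.0
SECONDS_PER_YEAR = 3.156e7
ATMOSPHERE_MASS_KG = 5.1e18  # commonly quoted mass of Earth's atmosphere

total_lost_kg = LOSS_RATE_KG_PER_S * SECONDS_PER_YEAR * 1.0e9  # over 1 Gyr
fraction = total_lost_kg / ATMOSPHERE_MASS_KG

print(f"Mass lost in 1 Gyr: ~{total_lost_kg:.1e} kg")
print(f"Fraction of today's atmosphere: ~{fraction:.1%}")
```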

An artist's rendition of the Magnetospheric Multiscale mission as it sweeps through a magnetic reconnection event caused when the solar wind meets Earth's magnetic fields. Credit: SWRI
And closer to home, if scientists could develop more accurate space weather forecasts, they could save satellites from being blinded or destroyed, and better warn space station astronauts and airlines of danger from high-energy radiation. While low-energy ions are not responsible for the damage caused by space weather, they do influence that weather. Andre compared the swaths of ions to, say, a low-pressure area in our familiar, down-to-Earth weather -- as opposed to a harmful storm. It is a key player, even if it doesn’t cause the damage itself. “You may want to know where the low-pressure area is, to predict a storm,” Andre noted. 

Improving space weather forecasts to the point where they are comparable to ordinary weather forecasts was “not even remotely possible if you’re missing most of your plasma,” Moore, with NASA, said. Now, with a way to measure cold plasma, the goal of high-quality forecasts is one step closer.

“It is stuff we couldn’t see and couldn’t detect, and then suddenly we could measure it,” Moore said of the low-energy ions. “Now you can actually study it and see if it agrees with the theories.”

Photo from NASA Mars Orbiter Shows Wind's Handiwork

This enhanced-color image shows sand dunes trapped in an impact crater in Noachis Terra, Mars. Image credit: NASA/JPL-Caltech/Univ. of Arizona

January 25, 2012
Some images of stark Martian landscapes provide visual appeal beyond their science value, including a recent scene of wind-sculpted features from the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.

The scene shows dunes and sand ripples of various shapes and sizes inside an impact crater in the Noachis Terra region of southern Mars. Patterns of dune erosion and deposition provide insight into the sedimentary history of the area.

The Mars Reconnaissance Orbiter has been examining Mars with six science instruments since 2006. Now in an extended mission, the orbiter continues to provide insights about the planet's ancient environments and about how processes such as wind, meteorite impacts and seasonal frosts are continuing to affect the Martian surface today. This mission has returned more data about Mars than all other orbital and surface missions combined.

More than 20,600 images taken by HiRISE are available for viewing on the instrument team's website: http://hirise.lpl.arizona.edu. Each observation by this telescopic camera covers several square miles, or square kilometers, and can reveal features as small as a desk.

HiRISE is operated by the University of Arizona, Tucson. The instrument was built by Ball Aerospace & Technologies Corp., Boulder, Colo. The Mars Reconnaissance Orbiter project is managed by the Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Science Mission Directorate, Washington. JPL is a division of the California Institute of Technology, also in Pasadena. Lockheed Martin Space Systems, Denver, built the spacecraft. 

Saturday, January 28, 2012

Cambridge scientist 'debunks' flying myth


A scientist at Cambridge University has debunked the long-held myth about how aircraft stay aloft.
Aeroplanes fly because their wings make the air pressure underneath greater than the pressure above, lifting them into the air. But engineers have long been frustrated by a widely repeated theory that wrongly explains what causes this pressure change, a myth commonly found in school textbooks and flight manuals.
Now Prof Holger Babinsky of Cambridge University's engineering department has created a minute-long video, posted on YouTube, to lay the myth to rest once and for all, 'The Daily Telegraph' reported.
According to conventional wisdom, the pressure change happens as the air on the curved upper surface of the wing has further to travel than that below the flat underneath surface, meaning it must travel faster to arrive at the other side of the wing at the same time.
Prof Babinsky says the myth goes against the laws of physics and the real explanation has nothing to do with the distance the air has to travel.
According to him, the curvature of the wing causes the change in air pressure because it pulls some of the air upwards, which reduces pressure, and forces the rest beneath it, creating higher pressure.
A law known as the Bernoulli equation means that when pressure is lower, air moves faster -- so the air stream above the wing does move more quickly than the one below, but this is not what causes the difference in pressure.
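
As a rough illustration of the Bernoulli relation mentioned above (p + ½ρv² ≈ constant along a streamline, for steady incompressible flow), the sketch below computes the pressure difference implied by a faster stream over the wing. The airspeeds are invented for illustration and are not taken from Prof Babinsky’s demonstration.

```python
# Illustrative Bernoulli calculation: pressure difference between a slower
# stream under a wing and a faster stream over it (invented example speeds).
# Along a streamline in steady, incompressible flow: p + 0.5 * rho * v**2 = constant.

RHO_AIR = 1.225   # kg/m^3, sea-level air density
v_below = 60.0    # m/s, assumed airspeed beneath the wing
v_above = 70.0    # m/s, assumed (faster) airspeed over the curved upper surface

delta_p = 0.5 * RHO_AIR * (v_above**2 - v_below**2)  # pressure difference in Pa

print(f"Pressure difference: ~{delta_p:.0f} Pa, i.e. ~{delta_p:.0f} N of lift per m^2 of wing")
```

As the article stresses, the faster flow accompanies the lower pressure rather than causing it; the calculation above only relates the two quantities.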

World's first magnetic soap 'produced'



In pioneering research, scientists claim to have produced the world's first magnetic soap, composed of iron-rich salts dissolved in water.
A team at Bristol University says that its soap, which responds to a magnetic field when placed in solution, could allay concerns over the use of surfactants in oil-spill clean-ups and revolutionise industrial cleaning products.
Researchers have long been searching for a way to control soaps (or surfactants, as they are known in industry) once they are in solution, so as to increase their ability to dissolve oils in water and then remove them from a system.
The Bristol University team produced the magnetic soap by dissolving iron in a range of inert surfactant materials composed of chloride and bromide ions, very similar to those found in everyday mouthwash or fabric conditioner.
The addition of the iron creates metallic centres within the soap particles, say the scientists led by Julian Eastoe.
To test its properties, the team introduced a magnet to a test tube containing their new soap lying beneath a less dense organic solution, the 'Angewandte Chemie' journal reported.
When the magnet was introduced, the iron-rich soap overcame both gravity and the surface tension between the water and oil, levitating through the organic solvent to reach the source of the magnetic field and thereby demonstrating its magnetic properties.

Friday, January 27, 2012

A non-hydrolytic sol–gel route to highly active MoO3–SiO2–Al2O3 metathesis catalysts


Mesoporous mixed oxides prepared via non-hydrolytic sol–gel markedly outperform other MoO3-based catalysts in the metathesis of propene.

Mind the Science Gap – Helping science students connect with a non-science audience


Studying for a Master’s degree in Public Health prepares you for many things.  But it doesn’t necessarily give you hands-on experience of how to take complex information and translate it into something others can understand and use.  Yet as an increasing array of public health issues hit the headlines, from fungicide residues in orange juice to the safe development of new technologies, this is exactly where public health professionals need to be developing their skills.  And it’s not only in the public domain: the ability to translate complex science into actionable intelligence is more important now than ever in helping policy makers and business leaders make decisions that are grounded in evidence rather than speculation.
These were just some of the drivers behind a new course I have just started teaching at the University of Michigan School of Public Health that is built around science blogging.  OK, so maybe I wanted to have a little fun with the students as well.  But my experiences with the blog 2020 Science have taught me that the discipline of writing a science-based blog for a broad audience is invaluable for developing highly transferable communication skills.  And it’s not just me.  When I emailed the scientist, author and blogger Sheryl Kirshenbaum about the course, she admitted, “blogging taught me how to effectively communicate with broad audiences”.  (Sheryl added that she has also learned a great deal from many wonderful editors, to which I can only add “me too!”).
The new course throws ten Masters of Public Health students in at the deep end by challenging each of them to publish ten posts over ten weeks on the blog Mind The Science Gap – and to respond to the comments they receive.  As this is a science blog, each post will be based around published health-related research.  The challenge for the writers will be to translate this into a science-grounded piece that is relevant and accessible to a broad audience.
The key objective here is to develop new skills through experience.  And for this, I am encouraging as many people as possible to comment on the posts.  As any science blogger will tell you, even simple comments like “I liked this” or “this was confusing” are extremely helpful in understanding what works and what doesn’t.  But I am also hoping readers will look beyond the educational aspects of the exercise, and engage with the students on the subjects they are writing about.  This is where I suspect the experience will become most empowering.
There’s another aspect of the course that intrigues me.  Rather naively, I started this exercise imagining a series of impersonal posts that focused on intellectually interesting but emotionally ambivalent scientific studies.  What I forgot is that public health matters to people.  And so it’s going to be tough for our bloggers to separate what they write about from their passions – and those of their readers.  In fact I’m not even sure that such a separation would be appropriate – for communication to be relevant, it needs to go beyond the numbers.  But how do you effectively combine science with a desire to make the world a better place in a blog?  I try to achieve this on my own blog, but I must admit I don’t have any easy answers here.  So as the Mind The Science Gap students develop their skills, I’m going to be doing some learning of my own as I watch how they respond to this particular challenge.
At the end of the day, Mind The Science Gap is about teaching the next generation of public health professionals how to connect more effectively with non-specialist and non-technical audiences – whether they are managers, clients, policy makers or members of the public.  It isn’t about creating a new wave of science bloggers.  But in the process, I like to think that some of the participants will get the blogging bug. Whether they do or not, I’m looking forward to ten weeks of engaging, entertaining and hopefully challenging posts from ten talented students.


Read more: http://2020science.org/

National Academy publishes new nanomaterials risk research strategy


The US National Academy of Sciences today published its long-awaited Research Strategy for Environmental, Health, and Safety Aspects of Engineered Nanomaterials. I won’t comment extensively on the report as I was a member of the committee that wrote it.  But I did want to highlight a number of aspects of it that I think are particularly noteworthy:
Great progress so far, but it’s time to change gears. Something we grappled with as a committee was what the value of yet another research strategy was going to be.  After all, it wasn’t so long ago that the US federal government published a well-received strategy of its own.  A key driver behind our strategy was a sense that the past decade has been one of defining the challenges we face as the field of nanotechnology develops, while the next decade will require more focus as an ever greater number of nanotechnology-enabled products hit the market.  In other words, from a research perspective it’s time to change gears, building on past work but focusing on rapidly emerging challenges.
Combining life cycle and value chain in a single framework for approaching nanomaterial risk research.  As a committee, we spent considerable time developing a conceptual framework for approaching research addressing the health and environmental impacts of engineered nanomaterials.  What we ended up using was a combination of value chain – ranging from raw materials to intermediate products to final products – and material/product life cycle at each stage of the value chain.  This effectively allows risk hot spots to be identified at each point of a material and product’s development, use and disposal cycle.
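
A minimal sketch of how such a combined framework could be represented is shown below; the stage and life-cycle labels are generic placeholders inferred from the description above, not the committee’s exact terminology.

```python
# Illustrative value-chain x life-cycle grid for flagging nanomaterial risk
# "hot spots" (labels are generic placeholders, not the committee's terms).

value_chain_stages = ["raw materials", "intermediate products", "final products"]
life_cycle_stages = ["manufacture", "use", "disposal"]

# Each cell collects notes on potential release or exposure at that point.
risk_grid = {(vc, lc): [] for vc in value_chain_stages for lc in life_cycle_stages}

# Example: flag a hypothetical hot spot during disposal of a final product.
risk_grid[("final products", "disposal")].append(
    "possible release of free nanoparticles from a degraded composite"
)

for (vc, lc), notes in risk_grid.items():
    if notes:
        print(f"Hot spot at ({vc}, {lc}): {notes}")
```
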
Principles, not definitions.  Rather than rely on a single definition of engineered nanomaterial to guide risk-related research, we incorporated a set of principles into our conceptual framework to help identify materials of concern from an environment, health and safety impact perspective.  These build on the principles proposed by myself, Martin Philbert and David Warheit in a toxicology review published last year.  From the National Academies report:
…the present committee focuses on a set of principles in lieu of definitions to help identify nanomaterials and associated processes on which research is needed to ensure the responsible development and use of the materials. The principles were adopted in part because of concern about the use of rigid definitions of ENMs that drive EHS research and risk-based decisions … The principles are technology-independent and can therefore be used as a long-term driver of nanomaterial risk research. They help in identifying materials that require closer scrutiny regarding risk irrespective of whether they are established, emerging, or experimental ENMs. The principles are built on three concepts: emergent risk, plausibility, and severity; …
Emergent risk, as described here, refers to the likelihood that a new material will cause harm in ways that are not apparent, assessable, or manageable with current risk-assessment and risk-management approaches. Examples of emergent risk include the ability of some nanoscale particles to penetrate to biologically relevant areas that are inaccessible to larger particles, the failure of some established toxicity assays to indicate accurately the hazard posed by some nanomaterials, scalable behavior that is not captured by conventional hazard assessments (such as behavior that scales with surface area, not mass), and the possibility of abrupt changes in the nature of material-biologic interactions associated with specific length scales. Identifying emergent risk depends on new research that assesses a novel material’s behavior and potential to cause harm.
Emergent risk is defined in terms of the potential of a material to cause harm in unanticipated or poorly understood ways rather than being based solely on its physical structure or physicochemical properties. Thus, it is not bound by rigid definitions of nanotechnology or nanomaterials. Instead, the principle of emergence enables ENMs that present unanticipated risks to human health and the environment to be distinguished from materials that probably do not. It also removes considerable confusion over how nanoscale atoms, molecules, and internal material structures should be considered from a risk perspective, by focusing on behavior rather than size.
Many of the ENMs of concern in recent years have shown a potential to lead to emergent risks and would be tagged under this principle and thus require further investigation. But the concept also allows more complex nanomaterials to be considered—those in the early stages of development or yet to be developed. These include active and self-assembling nanomaterials. The principle does raise the question of how “emergence” is identified, being by definition something that did not exist previously. However the committee recognized that in many cases it is possible to combine and to interpret existing data in ways that indicate the possible emergence of new risks. For example, some research has suggested that surface area is an important factor that affects the toxic potency of some ENMs; ENMs that have high specific surface area and are poorly soluble might pose an emergent risk.
Plausibility refers in qualitative terms to the science-based likelihood that a new material, product, or process will present a risk to humans or the environment. It combines the possible hazard associated with a material and the potential for exposure or release to occur. Plausibility also refers to the likelihood that a particular technology will be developed and commercialized and thus lead to emergent risks. For example, the self-replicating nanobots envisaged by some writers in the field of nanotechnology might legitimately be considered an emergent risk; if it occurs, the risk would lie outside the bounds of conventional risk assessment. But this scenario is not plausible, clearly lying more appropriately in the realm of science fiction than in science. The principle of plausibility can act as a crude but important filter to distinguish between speculative risks and credible risks.
The principle of severity refers to the extent and magnitude of harm that might result from a poorly managed nanomaterial. It also helps to capture the reduction in harm that may result from research on the identification, assessment, and management of emergent risk. The principle offers a qualitative reality check that helps to guard against extensive research efforts that are unlikely to have a substantial effect on human health or environmental protection. It also helps to ensure that research that has the potential to make an important difference is identified and supported.
Together, those three broad principles provide a basis for developing an informed strategy for selecting materials that have the greatest potential to present risks. They can be used to separate new materials that raise safety concerns from materials that, although they may be novel from an application perspective, do not present undetected, unexpected, or enhanced risks. They contribute to providing a framework for guiding a prioritized risk-research agenda. In this respect, the principles were used by the committee as it considered the pressing risk challenges presented by ENMs.
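
As a purely illustrative reading of how the three principles might be combined in practice, here is a small screening sketch; the scoring scheme, scales and threshold logic are my own hypothetical choices and are not specified anywhere in the report.

```python
from dataclasses import dataclass

# Hypothetical qualitative screen built on the three principles quoted above:
# emergent risk, plausibility and severity. All scores and rules are invented.

@dataclass
class MaterialAssessment:
    name: str
    emergent_risk: int  # 0 (none apparent) to 3 (clearly outside current methods)
    plausibility: int   # 0 (science fiction) to 3 (credible near-term exposure)
    severity: int       # 0 (negligible harm) to 3 (widespread or serious harm)

    def priority(self) -> int:
        # A product is used so that a zero on any principle screens the material out,
        # mirroring how implausible scenarios are filtered despite high severity.
        return self.emergent_risk * self.plausibility * self.severity

candidates = [
    MaterialAssessment("high-surface-area, poorly soluble ENM", 3, 2, 2),
    MaterialAssessment("self-replicating nanobots", 3, 0, 3),  # fails plausibility
]

for m in sorted(candidates, key=MaterialAssessment.priority, reverse=True):
    print(f"{m.name}: research priority score {m.priority()}")
```
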
Maintaining current research and development funding levels.  As a committee, we felt that the current US federal government investment of ~$120 million in environment, health and safety-specific nanotechnology research was reasonable, especially given the current economic climate.  However, we did recommend that, as knowledge develops and commercialization of products using nanomaterials increases, funded research be aligned with the areas and priorities identified within the strategy.
Developing cross-cutting activities.  There were five areas where the committee felt that further funding was needed to ensure the value of nano-risk research was fully realized.  Each of these cuts across areas of research, and provides the means to maximize the benefit of the science being supported.  From the report:
Informatics: $5 million per year in new funding for the next 5 years should be used to support the development of robust informatics systems and tools for managing and using information on the EHS effects of ENMs. The committee concluded that developing robust and responsive informatics systems for ENM EHS information was critical to guiding future strategic research, and translating research into actionable intelligence. This includes maximizing the value of research that is EHS-relevant but not necessarily EHS-specific, such as studies conducted during the development of new therapeutics. Based on experiences from other areas of research, investment in informatics of the order of $15 million is needed to make substantial progress in a complex and data rich field. However, within the constraints of nanotechnology R&D, the committee concluded that the modest investment proposed would at least allow initial informatics systems to be developed and facilitate planning for the long-term.
Instrumentation: $10 million per year in new funding for the next 5 years should be invested in translating existing measurement and characterization techniques into platforms that are accessible and relevant to EHS research and in developing new EHS-specific measurement and characterization techniques for assessing ENMs under a variety of conditions. The committee recognized that the proposed budget is insufficient for substantial research into developing new nanoscale characterization techniques—especially considering the cost of high-end instruments such as analytic electron microscopes—in excess of $2 million per instrument. However, the proposed budget was considered adequate to support the translation of techniques developed or deployed in other fields for the EHS characterization of ENMs.
Materials: Investment is needed in developing benchmark ENMs over the next 5 years, a long-standing need that has attracted little funding to date. The scope of funding needed depends in part on the development of public-private partnerships. However, to assure that funding is available to address this critical gap, the committee recommends that $3-5 million per year be invested initially in developing and distributing benchmark ENMs. While more funds could be expended on developing a library of materials, this amount will assure that the most critically needed materials are developed. These materials will enable systematic investigation of their behavior and mechanisms of action in environmental and biologic systems. The availability of such materials will allow benchmarking of studies among research groups and research activities. The committee further recommends that activities around materials development be supported by public-private partnerships. Such partnerships would also help to assure that relevant materials are being assessed.
Sources: $2 million per year in new funding for the next 5 years should be invested in characterizing sources of ENM release and exposure throughout the value chain and life cycle of products. The committee considered that this was both an adequate and reasonable budget to support a comprehensive inventory of ENM sources.
Networks: $2 million per year in new funding for the next 5 years should be invested in developing integrated researcher and stakeholder networks that facilitate the sharing of information and the translation of knowledge to effective use. The networks should allow participation of representatives of industry and international research programs and are a needed complement to the informatics infrastructure. They would also facilitate dialogue around the development of a dynamic library of materials. The committee concluded that research and stakeholder networks are critical to realizing the value of federally funded ENM EHS research and considered this to be an area where a relatively small amount of additional funding would have a high impact—both in the development of research strategies and in the translation and use of research findings. Given the current absence of such networks, the proposed budget was considered adequate.
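
Tallying the five cross-cutting recommendations above, and taking the materials line at its stated $3-5 million range, gives roughly $22-24 million per year in proposed new funding. The quick sum below makes that explicit; the aggregation is mine, not a figure quoted from the report.

```python
# Quick tally of the proposed new cross-cutting funding (per year, USD millions).
# Per-item figures come from the recommendations above; the total is my own sum.

proposed_new_funding = {
    "informatics": (5, 5),
    "instrumentation": (10, 10),
    "materials": (3, 5),   # stated as a $3-5 million range
    "sources": (2, 2),
    "networks": (2, 2),
}

low = sum(lo for lo, _ in proposed_new_funding.values())
high = sum(hi for _, hi in proposed_new_funding.values())
print(f"Proposed new cross-cutting funding: ${low}-{high} million per year")
```
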
Authority and accountability.  In our report, we talk quite a bit about the need for an entity within the federal government to take the lead in implementing a risk research strategy.  While the US National Nanotechnology Initiative has done a great job coordinating interagency activities, we felt that coordination without authority can only go so far if socially and economically important research is to be conducted in a timely and relevant manner.  What this “entity” might look like, we left to the federal government to chew over.
There’s a lot more to the report – including (as you would expect) a broad assessment of research areas that need attention if the science of nanomaterial human health and environmental impacts is to continue to develop effectively.
This is the first of two reports; the second is due in around 18 months and will look at progress toward implementing a relevant and effective research strategy.