Friday, January 27, 2012
Mind the Science Gap – Helping science students connect with a non-science audience
Studying for a Master’s degree in Public Health prepares you for many things. But it doesn’t necessarily give you hands-on experience of how to take complex information and translate it into something others can understand and use. Yet as an increasing array of public health issues hit the headlines, from fungicide residues in orange juice to the safe development of new technologies, this is exactly where public health professionals need to be developing their skills. And it’s not only in the public domain: the ability to translate complex science into actionable intelligence is more important now than ever in helping policy makers and business leaders make decisions that are grounded in evidence rather than speculation.
These were just some of the drivers behind a new course I have just started teaching at the University of Michigan School of Public Health that is built around science blogging. OK, so maybe I wanted to have a little fun with the students as well. But my experiences with the blog 2020 Science have taught me that the discipline of writing a science-based blog for a broad audience is invaluable for developing highly transferable communication skills. And it’s not just me. Emailing with the scientist, author and blogger Sheryl Kirshenbaum about the course, she admitted “blogging taught me how to effectively communicate with broad audiences”. (Sheryl also added that she’s learned a great deal from many wonderful editors – to which I can only add “me too!”).
The new course throws ten Master of Public Health students in at the deep end by challenging each of them to publish ten posts over ten weeks on the blog Mind The Science Gap – and to respond to the comments they receive. As this is a science blog, each post will be based on published health-related research. The challenge for the writers will be to translate this into a science-grounded piece that is relevant and accessible to a broad audience.
The key objective here is to develop new skills through experience. And for this, I am encouraging as many people as possible to comment on the posts. As any science blogger will tell you, even simple comments like “I liked this” or “this was confusing” are extremely helpful in understanding what works and what doesn’t. But I am also hoping readers will look beyond the educational aspects of the exercise, and engage with the students on the subjects they are writing about. This is where I suspect the experience will become most empowering.
There’s another aspect of the course that intrigues me. Rather naively, I started this exercise imagining a series of impersonal posts that focused on intellectually interesting but emotionally ambivalent scientific studies. What I forgot is that public health matters to people. And so it’s going to be tough for our bloggers to separate what they write about from their passions – and those of their readers. In fact I’m not even sure that such a separation would be appropriate – for communication to be relevant, it needs to go beyond the numbers. But how do you effectively combine science with a desire to make the world a better place in a blog? I try to achieve this on my own blog, but I must admit I don’t have any easy answers here. So as the Mind The Science Gap students develop their skills, I’m going to be doing some learning of my own as I watch how they respond to this particular challenge.
At the end of the day, Mind The Science Gap is about teaching the next generation of public health professionals how to connect more effectively with non-specialist and non-technical audiences – whether they are managers, clients, policy makers or members of the public. It isn’t about creating a new wave of science bloggers. But in the process, I like to think that some of the participants will get the blogging bug. Whether they do or not, I’m looking forward to ten weeks of engaging, entertaining and hopefully challenging posts from ten talented students.
Read more: http://2020science.org/#ixzz1keRk1Ove
National Academy publishes new nanomaterials risk research strategy
The US National Academy of Sciences today published its long-awaited Research Strategy for Environmental, Health, and Safety Aspects of Engineered Nanomaterials. I won’t comment extensively on the report as I was a member of the committee that wrote it. But I did want to highlight a number of aspects of it that I think are particularly noteworthy:
Great progress so far, but it’s time to change gears. Something we grappled with as a committee was what the value of yet another research strategy was going to be. After all, it wasn’t so long ago that the US federal government published a well-received strategy of its own. A key driver behind our strategy was a sense that the past decade has been one of defining the challenges we face as the field of nanotechnology develops, while the next decade will require more focus as an ever greater number of nanotechnology-enabled products hit the market. In other words, from a research perspective it’s time to change gears, building on past work but focusing on rapidly emerging challenges.
Combining life cycle and value chain in a single framework for approaching nanomaterial risk research. As a committee, we spent considerable time developing a conceptual framework for approaching research addressing the health and environmental impacts of engineered nanomaterials. What we ended up using was a combination of value chain – ranging from raw materials to intermediate products to final products – and material/product life cycle at each stage of the value chain. This effectively allows risk hot spots to be identified at each point of a material and product’s development, use and disposal cycle.
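As a rough illustration (my own sketch, not a tool from the report), the framework can be thought of as a grid: each value-chain stage is crossed with each phase of the material or product life cycle, and potential risk hot spots are flagged at individual grid points.

```python
# Illustrative sketch of the value-chain x life-cycle grid described above.
# The stages, phases and hot spots below are placeholders, not data from the report.

VALUE_CHAIN = ["raw materials", "intermediate products", "final products"]
LIFE_CYCLE = ["production", "use", "disposal"]  # assumed life-cycle phases

# Hypothetical hot spots for some engineered nanomaterial (illustrative only)
hot_spots = {("raw materials", "production"), ("final products", "disposal")}

for stage in VALUE_CHAIN:
    for phase in LIFE_CYCLE:
        flag = "HOT SPOT" if (stage, phase) in hot_spots else "-"
        print(f"{stage:22s} | {phase:10s} | {flag}")
```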
Principles, not definitions. Rather than rely on a single definition of engineered nanomaterial to guide risk-related research, we incorporated a set of principles into our conceptual framework to help identify materials of concern from an environment, health and safety impact perspective. These build on the principles proposed by myself, Martin Philbert and David Warheit in a toxicology review published last year. From the National Academies report:
…the present committee focuses on a set of principles in lieu of definitions to help identify nanomaterials and associated processes on which research is needed to ensure the responsible development and use of the materials. The principles were adopted in part because of concern about the use of rigid definitions of ENMs that drive EHS research and risk-based decisions … The principles are technology-independent and can therefore be used as a long-term driver of nanomaterial risk research. They help in identifying materials that require closer scrutiny regarding risk irrespective of whether they are established, emerging, or experimental ENMs. The principles are built on three concepts: emergent risk, plausibility, and severity; …

Emergent risk, as described here, refers to the likelihood that a new material will cause harm in ways that are not apparent, assessable, or manageable with current risk-assessment and risk-management approaches. Examples of emergent risk include the ability of some nanoscale particles to penetrate to biologically relevant areas that are inaccessible to larger particles, the failure of some established toxicity assays to indicate accurately the hazard posed by some nanomaterials, scalable behavior that is not captured by conventional hazard assessments (such as behavior that scales with surface area, not mass), and the possibility of abrupt changes in the nature of material-biologic interactions associated with specific length scales. Identifying emergent risk depends on new research that assesses a novel material’s behavior and potential to cause harm.

Emergent risk is defined in terms of the potential of a material to cause harm in unanticipated or poorly understood ways rather than being based solely on its physical structure or physicochemical properties. Thus, it is not bound by rigid definitions of nanotechnology or nanomaterials. Instead, the principle of emergence enables ENMs that present unanticipated risks to human health and the environment to be distinguished from materials that probably do not. It also removes considerable confusion over how nanoscale atoms, molecules, and internal material structures should be considered from a risk perspective, by focusing on behavior rather than size.

Many of the ENMs of concern in recent years have shown a potential to lead to emergent risks and would be tagged under this principle and thus require further investigation. But the concept also allows more complex nanomaterials to be considered—those in the early stages of development or yet to be developed. These include active and self-assembling nanomaterials. The principle does raise the question of how “emergence” is identified, being by definition something that did not exist previously. However the committee recognized that in many cases it is possible to combine and to interpret existing data in ways that indicate the possible emergence of new risks. For example, some research has suggested that surface area is an important factor that affects the toxic potency of some ENMs; ENMs that have high specific surface area and are poorly soluble might pose an emergent risk.

Plausibility refers in qualitative terms to the science-based likelihood that a new material, product, or process will present a risk to humans or the environment. It combines the possible hazard associated with a material and the potential for exposure or release to occur.
Plausibility also refers to the likelihood that a particular technology will be developed and commercialized and thus lead to emergent risks. For example, the self-replicating nanobots envisaged by some writers in the field of nanotechnology might legitimately be considered an emergent risk; if it occurs, the risk would lie outside the bounds of conventional risk assessment. But this scenario is not plausible, clearly lying more appropriately in the realm of science fiction than in science. The principle of plausibility can act as a crude but important filter to distinguish between speculative risks and credible risks.

The principle of severity refers to the extent and magnitude of harm that might result from a poorly managed nanomaterial. It also helps to capture the reduction in harm that may result from research on the identification, assessment, and management of emergent risk. The principle offers a qualitative reality check that helps to guard against extensive research efforts that are unlikely to have a substantial effect on human health or environmental protection. It also helps to ensure that research that has the potential to make an important difference is identified and supported.

Together, those three broad principles provide a basis for developing an informed strategy for selecting materials that have the greatest potential to present risks. They can be used to separate new materials that raise safety concerns from materials that, although they may be novel from an application perspective, do not present undetected, unexpected, or enhanced risks. They contribute to providing a framework for guiding a prioritized risk-research agenda. In this respect, the principles were used by the committee as it considered the pressing risk challenges presented by ENMs.
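To make the three principles a little more concrete, here is a minimal sketch (my own illustration, not the committee’s methodology) of how they might be applied as a qualitative screen when deciding which materials deserve priority in risk research.

```python
# A rough, hypothetical screen based on the three principles quoted above:
# emergence, plausibility and severity. Names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class CandidateMaterial:
    name: str
    emergent_risk: bool   # could it cause harm in ways current methods miss?
    plausible: bool       # is the exposure/commercialization scenario credible?
    severity: str         # qualitative: "low", "medium" or "high"

def research_priority(m: CandidateMaterial) -> str:
    """Qualitative priority under the emergence / plausibility / severity screen."""
    if not m.emergent_risk or not m.plausible:
        return "low priority"   # speculative, or already covered by existing methods
    return "high priority" if m.severity == "high" else "medium priority"

examples = [
    CandidateMaterial("high-surface-area, poorly soluble ENM", True, True, "high"),
    CandidateMaterial("self-replicating nanobots", True, False, "high"),  # fails plausibility
]
for m in examples:
    print(f"{m.name}: {research_priority(m)}")
```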
Maintaining current research and development funding levels. As a committee, we felt that the current US federal government investment of ~$120 million in environment, health and safety-specific nanotechnology research was reasonable, especially given the current economic climate. However, we did recommend that, as knowledge develops and commercialization of products using nanomaterials increases, funded research be aligned with areas and priorities identified within the strategy.
Developing cross-cutting activities. There were five areas where the committee felt that further funding was needed to ensure the value of nano-risk research was fully realized. Each of these cuts across areas of research, and provides the means to maximize the benefit of the science being supported. From the report:
Informatics: $5 million per year in new funding for the next 5 years should be used to support the development of robust informatics systems and tools for managing and using information on the EHS effects of ENMs. The committee concluded that developing robust and responsive informatics systems for ENM EHS information was critical to guiding future strategic research, and translating research into actionable intelligence. This includes maximizing the value of research that is EHS-relevant but not necessarily EHS-specific, such as studies conducted during the development of new therapeutics. Based on experiences from other areas of research, investment in informatics of the order of $15 million is needed to make substantial progress in a complex and data rich field. However, within the constraints of nanotechnology R&D, the committee concluded that the modest investment proposed would at least allow initial informatics systems to be developed and facilitate planning for the long-term.
Instrumentation: $10 million per year in new funding for the next 5 years should be invested in translating existing measurement and characterization techniques into platforms that are accessible and relevant to EHS research and in developing new EHS-specific measurement and characterization techniques for assessing ENMs under a variety of conditions. The committee recognized that the proposed budget is insufficient for substantial research into developing new nanoscale characterization techniques—especially considering the cost of high-end instruments such as analytic electron microscopes—in excess of $2 million per instrument. However, the proposed budget was considered adequate to support the translation of techniques developed or deployed in other fields for the EHS characterization of ENMs.

Materials: Investment is needed in developing benchmark ENMs over the next 5 years, a long-standing need that has attracted little funding to date. The scope of funding needed depends in part on the development of public-private partnerships. However, to assure that funding is available to address this critical gap, the committee recommends that $3-5 million per year be invested initially in developing and distributing benchmark ENMs. While more funds could be expended on developing a library of materials, this amount will assure that the most critically needed materials are developed. These materials will enable systematic investigation of their behavior and mechanisms of action in environmental and biologic systems. The availability of such materials will allow benchmarking of studies among research groups and research activities. The committee further recommends that activities around materials development be supported by public-private partnerships. Such partnerships would also help to assure that relevant materials are being assessed.

Sources: $2 million per year in new funding for the next 5 years should be invested in characterizing sources of ENM release and exposure throughout the value chain and life cycle of products. The committee considered that this was both an adequate and reasonable budget to support a comprehensive inventory of ENM sources.

Networks: $2 million per year in new funding for the next 5 years should be invested in developing integrated researcher and stakeholder networks that facilitate the sharing of information and the translation of knowledge to effective use. The networks should allow participation of representatives of industry and international research programs and are a needed complement to the informatics infrastructure. They would also facilitate dialogue around the development of a dynamic library of materials. The committee concluded that research and stakeholder networks are critical to realizing the value of federally funded ENM EHS research and considered this to be an area where a relatively small amount of additional funding would have a high impact—both in the development of research strategies and in the translation and use of research findings. Given the current absence of such networks, the proposed budget was considered adequate.
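For a quick sense of scale, the five cross-cutting recommendations quoted above add up as follows (a simple back-of-the-envelope sum using the figures in the excerpts, not an additional figure from the report).

```python
# Sum of the recommended annual new funding for the five cross-cutting areas
# quoted above (figures in millions of dollars; "materials" is a $3-5M range).

recommended = {
    "informatics": (5, 5),
    "instrumentation": (10, 10),
    "materials": (3, 5),
    "sources": (2, 2),
    "networks": (2, 2),
}

low = sum(lo for lo, hi in recommended.values())
high = sum(hi for lo, hi in recommended.values())
print(f"Total new cross-cutting funding: ${low}-{high} million per year")
# roughly $22-24 million per year, on top of the ~$120 million EHS research investment
```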
Authority and accountability. In our report, we talk quite a bit about the need for an entity within the federal government to take the lead in implementing a risk research strategy. While the US National Nanotechnology Initiative has done a great job coordinating interagency activities, we felt that coordination without authority can only go so far if socially and economically important research is to be conducted in a timely and relevant manner. What this “entity” might look like, we left to the federal government to chew over.
There’s a lot more to the report – including (as you would expect) a broad assessment of research areas that need attention if the science of nanomaterial human health and environmental impacts is to continue to develop effectively.
This is the first of two reports; the second is due in around 18 months, and will look at progress toward implementing a relevant and effective research strategy.
Tuesday, January 24, 2012
Google Search Kills Human Memory
Bangalore: With technology penetrating every part of our lives, we have learned to search for information on the internet. But could this practice be altering the way we store and process information? A new study has revealed findings suggesting that Google could be destroying our memories.

Psychology professors from Columbia University, the University of Wisconsin-Madison and Harvard University came together and published a study which found that people “remember less by knowing information than by knowing where the information can be found”, essentially meaning that we tend to forget information we search for, and instead remember where on the internet it can be found. “The Internet, with its search engines such as Google and databases such as IMDB and the information stored there, has become an external memory source that we can access at any time,” said the study. “It has become so commonplace to look up the answer to any question the moment it occurs, it can feel like going through withdrawal when we can’t find out something immediately.”
In short, the four tests that the professors conducted on individuals suggested that searching for information on the internet could be wiping out our “internal memory” – the information we store in our own heads. “When people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it”, said the study.
So instead of remembering the information itself – say, what happened during an event – we tend to remember which sites we saw that information on, or how we found those sites through Google.
According to the study, “These results suggest that processes of human memory are adapting to the advent of new computing and communication technology. We are becoming symbiotic with our computer tools, growing into interconnected systems that remember less by knowing information than by knowing where the information can be found.” “This gives us the advantage of access to a vast range of information—although the disadvantages of being constantly ‘wired’ are still being debated”, the professors concluded in the study.
Monday, January 23, 2012
The Northernmost Dish In The World: Tracking Satellites and Dodging Polar Bears

Cold Dish (photo: Greg White)
Sten-Christian Pedersen oversees the northernmost antenna array on Earth: 25 dishes tracking about 100 satellites from the small archipelago of Svalbard, 500 miles south of the North Pole. Even when the wind chill is –76°F and visibility is 10 feet, Pedersen drives to the satellite station. When there is a risk of avalanche, he takes a helicopter. When there are polar bears, he carries a firearm.
Pedersen says the antennas are protected by “radomes: close to a drum skin, but made out of plastic.” Under the radome is the dish, a lattice of supports with four motors in its base; when the dishes move, it is barely discernible. Svalbard Satellite Station tracks satellites in polar orbit: each makes a complete lengthwise circuit of the globe every 100 minutes, 14 times a day. In 25 days, a polar-orbiting satellite will have seen the whole surface of the Earth.
As a satellite passes overhead, SvalSat’s technicians have 12 to 15 minutes to download its data at 300 megabytes a second. The data (images, mostly) is then sent through fiber-optic cables in concrete tubes aboveground (permafrost prevents burial) before it reaches an undersea cable connecting to northern Norway. A new weather satellite, NPP, has just launched, and Pedersen says that the station is in “freeze, to minimize any risk of a problem.” His week has been an easy one. The sun recently set for the last time until mid-February. “Then,” he says, “you will start to see small light from the south. The blue season, we call it.”
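Taking the figures quoted above at face value, a little back-of-the-envelope arithmetic shows what a single pass is worth (an illustrative calculation, not data supplied by SvalSat).

```python
# Back-of-the-envelope numbers based on the figures quoted in the article.

ORBIT_MINUTES = 100        # one polar orbit, as quoted
DOWNLINK_MB_PER_S = 300    # quoted downlink rate in megabytes per second

orbits_per_day = 24 * 60 / ORBIT_MINUTES
print(f"Orbits per day: {orbits_per_day:.1f}")  # ~14, matching the article

for pass_minutes in (12, 15):  # quoted visibility window per pass
    gigabytes = pass_minutes * 60 * DOWNLINK_MB_PER_S / 1000
    print(f"A {pass_minutes}-minute pass moves roughly {gigabytes:.0f} GB")
```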
Cloud-Based Quantum Computing Will Allow Secure Calculation on Encrypted Bits

Entangled Qubits: Clusters of entangled qubits allow quantum computations to be carried out on a remote server while keeping both the program and its results hidden. EQUINOX GRAPHICS
When quantum computers eventually reach larger scales, they’ll probably remain pretty precious resources, locked away in research institutions just like our classical supercomputers. So anyone who wants to perform quantum calculations will likely have to do it in the cloud, remotely accessing a quantum server somewhere else. A new double-blind cryptography method would ensure that these calculations remain secret. It uses the uncertain, unusual nature of quantum mechanics as a double advantage.
Imagine you’re a developer and you have some code you’d like to run on a quantum computer. And imagine there’s a quantum computer maker who says you can run your code. But you can’t trust each other — you, the developer, don’t want the computer maker to rip off your great code, and the computer builder doesn’t want you to peep its breakthrough machine. This new system can satisfy both of you.
Stefanie Barz and colleagues at the University of Vienna’s Center for Quantum Science and Technology prepared an experimental demonstration of a blind computing technique, and tested it with two well-known quantum computing algorithms.
Here’s how it would work: You, the developer, prepare some quantum bits, in this case photons with a polarization (vertical or horizontal) known only to you. Then you would send these to the remote quantum server. The computer would entangle the qubits with even more qubits, using a quantum entangling gate, but it wouldn’t know the nature of the entangled states, just that they are in fact entangled. The server is “blind” to the entanglement state, and anyone tapping into the server would be blind, too.
Imagine the computer tries to snoop on the qubits to learn their entangled state, which could then be used to extract the information they carry. You’d be able to tell, because of the laws of quantum mechanics: like Schrödinger’s cat, which is both dead and alive until you check, a qubit only settles into a definite state when it is measured. If your photon turns up in a definite state it shouldn’t be in, you’d be able to tell that it was spied upon.
Back to the entangled bits. The actual information processing takes place via a sequence of measurements on your qubits. These measurements would be directed by you, based on the particular states of each qubit (which, again, only you know). The quantum server would run the measurements and report the results to you. This is called measurement-based quantum computation. Then you’d be able to interpret the results, based on your knowledge of the qubits’ initial states. To the computer — or any interceptor — the whole thing would look utterly random.
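To see why the server’s view looks random, here is a toy sketch of the general blind measurement-based trick (an illustration of the idea, not the Vienna group’s actual experimental protocol): the client hides the measurement angle the algorithm really needs behind a secret preparation angle and a random bit, so the instruction the server receives carries no usable information.

```python
import math
import random

# Toy illustration of blind measurement-based computing: the client wants the
# server to measure at angle phi, but only ever reveals delta, which is phi
# masked by a secret preparation angle theta and a random bit r.

def masked_instruction(phi: float, theta: float, r: int) -> float:
    """Angle the client actually sends to the server."""
    return (phi + theta + r * math.pi) % (2 * math.pi)

phi = math.pi / 2                                           # angle the algorithm needs (secret)
theta = random.choice([k * math.pi / 4 for k in range(8)])  # secret preparation angle
r = random.randint(0, 1)                                    # secret bit flip

delta = masked_instruction(phi, theta, r)
print(f"Server sees only delta = {delta:.3f} rad")  # looks random from the server's side

# The server reports an outcome s in {0, 1}; the client undoes the random flip
# to recover the true result. (Here s is just a stand-in value.)
s = random.randint(0, 1)
true_outcome = s ^ r
print(f"Client's corrected outcome: {true_outcome}")
```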
Since you know the entangled state on which the measurements were made, you can be certain whether the server really was a quantum computer. And you wouldn’t have to disclose your algorithm, the input or even the output — it’s perfectly secure, the researchers write in their paper, published online today in Science.
Blind quantum computation is more secure than classical blind computation, which relies on computational assumptions such as the difficulty of factoring large numbers into primes, said Vlatko Vedral, a researcher at the University of Oxford who wrote a Perspective piece explaining the finding.
“The double blindness is guaranteed by the laws of quantum physics, instead of the assumed difficulty of computational tasks as in classical physics,” Vedral writes.
The Vienna team argues that their demonstration is a potentially useful technique for future cloud-based quantum computing networks.
“Our experiment is a step toward unconditionally secure quantum computing in a client-server environment where the client’s entire computation remains hidden, a functionality not known to be achievable in the classical world,” they write.
For the First Time, Predator Drones Participate in Civilian Arrests on U.S. Soil

A Customs and Border Protection Predator B (or MQ-9 Reaper)
A somewhat strange story emerged yesterday involving an extremist antigovernment group, a North Dakota sheriff’s office, and six missing cows, but there’s a much larger story behind this brief legal tangle between local law enforcement and the Brossart family of Nelson County. When Alex, Thomas and Jacob Brossart were arrested on their farm back in June after allegedly chasing the local sheriff off their property with rifles, they became the first known U.S. citizens to be arrested on American soil with the help of a Predator drone, Stars and Stripes reports.
They will not, however, be the last. Most U.S. citizens are aware that U.S. Customs and Border Protection owns and operates a handful of aerial drones along the nation’s northern and southern borders (eight Predators to be exact), but when Congress authorized the use of drones along the borders in 2005 it was thought that they would be used strictly to curb illegal immigration and to detect smuggling routes.
But a provision allowing for “interior law enforcement support” is being given fairly liberal interpretation by both the Customs and Border Protection crews that operate the drones and local law enforcement that sometimes wants to borrow CBP’s aerial assets. Local police in North Dakota say they’ve called upon the two Predators operating out of Grand Forks Air Force Base at least two dozen times since June.
These drones are unarmed Predator B drones (known as MQ-9 Reapers elsewhere in the operational lingo), the same “hunter/killer” model employed across the globe in the War on Terror (but without the Hellfire missiles). They are being used for surveillance and situational awareness only, law enforcement officials say. But the fact that they’re being used at all--and especially without anyone higher up the chain of command acknowledging that local police have access to and are using Predator drones routinely--stirs up all kinds of privacy issues. As Stripes notes, it also skirts the Posse Comitatus Act, which prohibits the U.S. military from taking on a police role within the United States.
In the case of the Brossart boys, apparently the sheriff showed up on their place with a search warrant in hand, seeking access to the family’s land to search for six missing cows thought to be on the premises. The Brossarts--who reportedly are not huge fans of the federal government in general and belong to an antigovernment group that the FBI considers extremist--brandished rifles and allegedly ordered the sheriff off the property. The sheriff complied, but then asked for support from the nearby drone unit, which happened to have a Predator in the air returning from a routine recon of the U.S.-Canada border.
Local law enforcement used the drone to keep an eye on the Brossart place overnight, and the next day officers were able to determine from the drone footage that the three Brossarts in question were out on the property and unarmed (there’s a more thorough account of this if you click through to the Stripes piece). All told, the local police were able to sweep in and arrest the Brossarts without firing a shot or ending up in some kind of armed standoff.
To local law enforcement, it’s a good story about technology working to avoid violent confrontations and assist cops in their day-to-day serving and protecting. But it’s also troubling. From a privacy standpoint, the use of military surveillance drones over American cities is fraught with issues. Then there’s the fact that--up until now--very few people seem to have any idea this is going on. The government peering into your backyard, Big Brother is watching, etc. etc.--it’s the kind of thing that’s going to have to be talked about as technologies like drone aircraft become more ubiquitous, both abroad and at home.
Oh, and the six cows were located by police. No word on whether the Predators were scrambled for that part of the operation.