
Tuesday, January 31, 2012

Device Could Drive Down Solar's Cost

Technology Review
Jan 31, 2012
 
Power play: Inverters mounted to the bottom of each panel provide grid-ready power at a test site in Sunnyvale, California. Credit: ArrayPower

As solar panel manufacturers try to harvest more of the sun's energy for less, they face increasingly diminishing returns. At roughly $1 per watt, the cost of solar modules now represents less than a third of the total cost of commercial solar installations. To cut the total cost of solar power—currently $3.00 to $3.50 per watt—bigger gains will have to come from improvements in the power electronics, wiring, and mounting systems required for solar installations.

ArrayPower, a startup based in Sunnyvale, California, has developed a new type of solar inverter—the device that converts direct current (DC) power produced by solar panels to grid-ready, alternating current (AC) electricity—that it claims could significantly reduce the cost of solar power. The company says its "sequenced inverter" will reduce the cost of commercial solar by 35 cents per watt, or more than 10 percent, by lowering capital costs, simplifying installation, and increasing output.

Large-scale solar installations currently use either a single "central" inverter or a number of "string" inverters to convert power from groups of panels strung together in series. Both approaches, however, suffer from low efficiencies because of the way the panels are connected. In either scenario, if one panel is damaged or shaded from the sun, the system's entire output is diminished to the level of its lowest-producing panel.

ArrayPower seeks to maximize power output through a new type of inverter mounted to each panel. The device is similar to microinverters now used in residential solar installations. By converting DC to AC power at each module, microinverters maximize the power output of each module, thereby increasing system output by roughly 3 percent to 10 percent.
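
To make that difference concrete, here is a minimal numeric sketch. It is an illustrative model with invented panel wattages, not ArrayPower's actual electronics:

```python
# Illustrative model of the shading problem: in a series string the same
# current flows through every module, so output is roughly pinned to the
# weakest panel (real systems add bypass diodes; this mirrors the article's
# simplified description). Per-panel conversion harvests each module
# independently.
panel_watts = [230, 230, 230, 230, 230]  # five healthy 230 W modules (invented)
panel_watts[2] = 115                     # one module half-shaded

# Central/string inverter on a series string:
string_output = len(panel_watts) * min(panel_watts)

# One inverter (or microinverter) per panel:
per_panel_output = sum(panel_watts)

print(f"series string : {string_output} W")    # 575 W
print(f"per-panel     : {per_panel_output} W") # 1035 W
```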

Microinverters are typically more expensive because they require sophisticated electronics to filter and smooth the alternating current coming out of each inverter. A major cost is an electrolytic capacitor, essentially a chemical battery that stores energy for short bursts, allowing the inverter to send out pulses of electricity that create an alternating current. Further, microinverters typically only yield single-phase AC electricity, an electric current that is suited for residential use but not commercial or utility use.
To read more click here...
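
The role of that capacitor can be quantified with a standard back-of-the-envelope sizing rule. Single-phase AC power pulsates at twice the line frequency, so the DC link must buffer an energy swing of roughly P/(2·pi·f), giving C ≈ P/(2·pi·f·V·ΔV). The sketch below uses assumed, illustrative values rather than figures from ArrayPower or any particular microinverter:

```python
import math

# Back-of-the-envelope sizing of the DC-link electrolytic capacitor in a
# single-phase microinverter. The AC output power pulsates at twice the
# line frequency, so the capacitor must absorb a peak-to-peak energy swing
# of about dE = P / (2*pi*f_line), which for an allowed voltage ripple dV
# around V_dc gives C ≈ P / (2*pi*f_line * V_dc * dV).
P = 230.0        # module power, W (assumed)
f_line = 60.0    # grid frequency, Hz
V_dc = 400.0     # DC-link voltage, V (assumed)
dV = 20.0        # allowed peak-to-peak voltage ripple, V (assumed)

C = P / (2 * math.pi * f_line * V_dc * dV)
print(f"required capacitance ≈ {C * 1e6:.0f} µF")  # ≈ 76 µF
```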

Monday, January 30, 2012

New Capabilities of Today’s Automotive Glass Equipment

Engineerblogger
Jan 30, 2012



Today’s know-how, together with new developments in control technology and machine production technologies, allows automotive glass grinding and cutting machines to be used in new ways.

With its more flexible use, this equipment can achieve higher quality and/or much shorter cycle times, and it is open to new applications.

New equipment with direct drive technology is able to preprocess glass at higher quality and, at the same time, faster than in the past. Thanks to the direct drives, gearboxes can be eliminated, higher torque can be achieved and higher-resolution encoders can be used. Eliminating the gearboxes means eliminating mechanical play entirely. The accuracy is determined by the measurement system and the performance of the drive regulator. The measurement systems can resolve down to micrometers, or micro-arcs for polar systems.

Using the new technology, customers no longer have to choose between productivity and quality at a given price. The new controls allow the production quality to be adapted to the desired level. For example, a customer can start out in the less demanding replacement segment, where high output is critical. If new opportunities arise later, the machine can be reprogrammed to the highest quality levels, allowing the customer to compete at the top end and ask a higher price for the manufactured product. Today’s new electronic developments make adapting parameters easy and risk-free. The technology, accuracy and flexibility can be applied in other glass production fields, such as solar or architectural glass, as well.


Technical basis of champ’speed
Working with two cutting bridges allows relief cuts and form cuts to be separated. Furthermore, the customer can separate cutting and breaking if needed. Distributing the processes in the correct combination moves the bottleneck from the cutting/breaking operation into the grinding operation. The best combination depends on the design of the end product. In most cases, the best solution is to do the relief cuts on the first station and the cutting and breaking on the second station. Having two independent cutting heads with one breaking ball head gives room for the best possible combination, maximising quality and accuracy at the lowest possible cycle time.


Figure 1: Example windscreen

Producing a form with 100 percent accuracy is possible, but physical parameters limit the cycle time. If needed, the machine can follow the contour exactly. This results in a very accurate glass, but takes time. In theory, the relation between grinding speed and forward movement should be constant. To achieve a constant grinding surface, the speed has to be reduced at each point where the grinding spindle makes a turn or goes around an edge; the smaller the radius, the lower the speed. Because the grinding wheel has a diameter, the speed of its centre point has to increase on an outer arc, since it travels a much longer path than the grinding point on the glass. On an inner arc, the travelling speed has to be reduced, because otherwise the wheel would take too much glass and get choked.
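
That geometric relation between contour radius and centre-point speed can be written down directly. The following sketch, with invented numbers, shows how the required centre feed rate scales on convex (outer) and concave (inner) arcs when the speed at the glass contact point is held constant:

```python
# Geometric sketch of the effect described above: for a constant grinding
# speed v at the glass contact point, the wheel-centre speed depends on the
# contour radius R and the wheel radius r. All numbers are illustrative.
def centre_speed(v_contact, R, r_wheel, convex=True):
    """Wheel-centre feed rate for a circular arc of contour radius R.

    On a convex (outer) arc the centre travels on radius R + r_wheel and
    must move faster than the contact point; on a concave (inner) arc it
    travels on radius R - r_wheel and must slow down.
    """
    if convex:
        return v_contact * (R + r_wheel) / R
    return v_contact * (R - r_wheel) / R

v = 100.0   # mm/s at the glass edge (assumed)
r = 75.0    # grinding wheel radius, mm (assumed)
for R in (300.0, 150.0, 100.0):
    v_out = centre_speed(v, R, r, convex=True)
    v_in = centre_speed(v, R, r, convex=False)
    print(f"R = {R:5.0f} mm: outer arc {v_out:6.1f} mm/s, inner arc {v_in:5.1f} mm/s")
```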

Today there are two ways to achieve high speed and accuracy. Firstly, the form can be designed so that the end result is still within specification. The design is then not placed in the middle of the tolerance field but touches its limits, working in lower tolerance bands. The achieved result is the same (a faster production cycle within the given tolerance), but it no longer depends on factors like speed or grinding wheel diameter. Secondly, one can do the same as in the past, i.e. open up the contour error and allow bigger deviations. With this method, the resulting form again depends on process parameters like speed. It is not recommended, but it is very easy to do and requires no additional know-how or skill, and the cycle time improvement can still be considerable.


Figure 2: Cutting path with new equipment can increase corner speeds, improve quality and is reproducible

In addition to the cycle time improvement due to the higher torque of the motor, a well designed grinding path can add considerable cycle time advantages. Together with two-bridge cutting and direct drive grinders, the cycle time for a windshield can come down from 27 seconds to 16 seconds for the same design.

Higher torque and higher accuracy allow the diameter range of polar machines to be increased. Not only windshields for trucks and buses but also solar glass and other high-end glass with diagonals of up to 3.6 m can be ground with accuracies below 0.1 mm around the whole circumference. This accuracy can be achieved at low process cost thanks to short cycle times, automotive-approved equipment and low-cost consumables.
 
Figure 3: Trajectory speed of the grinding wheel center

Commercial applications
The investment is not much different than in the past, but cycle times improve considerably on the same production space. This alone can justify replacing machines. Higher torque motors allow the grinders to run faster and more accurately. Adding the possibility of using the tolerance band to improve cycle time gives more flexibility. The decision between accurate and fast equipment no longer has to be made: the equipment can be bought and, over its lifetime, switched by parameters between a high-output mode with wider tolerances and a very high-precision mode.

A company can start off producing replacement glass with very high output and low cost, for example, and switch for other projects to OEM manufacturing parameters with high machine and process capability, ensuring six sigma tolerances or better. The equipment can thus cater for different markets and customers simply by pushing a button or through intelligent design. In this way, even small companies can invest and be sure that the equipment keeps its value for future ventures and supports future expansion. For solar glass, new levels of accuracy can be achieved at low cost. The flexibility is there to adjust to all future needs of form or accuracy, and process capabilities have already been proven in the automotive industry.

With this equipment, changing models is possible without making test glasses. It is possible to change from one model to the next without wasting a single glass and without tweaking parameters. Changeover time is dramatically reduced, and lower-skilled personnel can handle the machines.

Figure 4: Example of a new automotive glass preprocessing equipment – champ’speed-line of Bystronic glass

Conclusion – Limitations and things to consider
Because the machine does exactly what the program defines, a form has to be defined 100% correctly. The machine follows the drawing exactly. The CAD drawing must include all details and the transitions from one drawing element to the next. Transitions between elements have to be correct, and tangents have to be handled with care and accuracy. This higher demand on design capability may, in turn, reduce the skill required of the machine operator.

About Bystronic Glass
Bystronic glass is the most competent and reliable partner for services, machinery, plants and systems in the glass processing sector. It also supplies its well-proven machine technologies to important areas of the photovoltaic industry, including preprocessing, front-end and back-end solutions. Bystronic glass is an international brand with globally operating companies that support their customers on site and through its own sales and service companies. Since 1994, Bystronic glass has been part of Conzzeta AG, a renowned Swiss industrial holding company.

Source: Glass on Web


Keeping high-performance electronics cool

Engineerblogger
Jan 30, 2012



The development of sophisticated electronics using high-performance computer chips that generate much more heat than conventional chips is challenging scientists to come up with a new type of compact cooling system to keep temperatures under control.

For the past few years, a collaborative team of engineers and other scientists from academia and industry has been investigating an advanced cooling system for electric and hybrid cars as well as computers and telecommunications systems, particularly for military use in radar, lasers, and electronics in aircraft.

The technology, which is capable of handling roughly 10 times the heat generated by conventional chips, is a device called a vapor chamber that uses tiny copper spheres and carbon nanotubes to passively wick a coolant toward hot electronics, according to Suresh V. Garimella, the R. Eugene and Susie E. Goodson Distinguished Professor in the School of Mechanical Engineering at Purdue University, West Lafayette, IN.

The current thermal solution it would replace is typically a solid heat spreader made of aluminum and copper that conducts heat away, an approach inadequate for removing large amounts of heat from powerful electronic components while maintaining low operating temperatures.
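
A rough conduction estimate illustrates the limitation. The classical constriction resistance of a circular heat source of radius a on a large spreader of conductivity k is R ≈ 1/(4ka), so the temperature rise of a conduction-only copper spreader grows linearly with power; the chip size and heat loads below are assumptions chosen only for illustration:

```python
# Why solid conduction struggles at high power: the classical constriction
# resistance of a circular heat source of radius a on a semi-infinite
# spreader of conductivity k is R = 1/(4*k*a).
k_copper = 400.0   # W/(m*K)
a = 0.005          # 5 mm source radius, roughly a 1 cm die (assumed)
R_constriction = 1.0 / (4.0 * k_copper * a)   # = 0.125 K/W

for Q in (50.0, 200.0, 500.0):  # heat loads, W (assumed)
    print(f"{Q:5.0f} W -> temperature rise ≈ {Q * R_constriction:5.1f} K")
# At ~10x conventional heat loads, the conduction-only temperature rise
# becomes prohibitive, which is where evaporative vapor chambers help.
```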

A Passive System

The vapor chamber comes in the same form factor as a solid heat spreader, says Dr. Garimella, but "The working fluid contained inside continuously undergoes [evaporation] at the heat source to more efficiently remove heat than is possible by devices that rely on conduction alone."

An advantage of vapor chambers compared to other high-performance cooling technology alternatives is that a vapor chamber is a completely passive system. According to Dr. Garimella, "It can…operate continuously without any additional pumps or valves. Such passive systems are associated with high reliability. Active cooling options which allow for high heat dissipation, such as forced liquid cooling, require an external fluid flow system including a separate pump and condenser, adding to the solution cost and size."

Much of the work is being conducted at the Industry/University Cooperative Research Center's Cooling Technologies Research Center, established by Dr. Garimella at Purdue.

Integrating Nanostructures

After publishing its findings last year about the effects of conventional sintered powder copper structures on the performance of a vapor-chamber cooling technology, the team is preparing to report on the feasibility of integrating nanostructures, specifically carbon nanotubes, into the devices to further improve performance. These results and proposed techniques for integrating carbon nanotubes into vapor chambers are expected to be published in the near future, says Dr. Garimella.

"The next step is to experimentally investigate the performance enhancement provided by integration of carbon nanotubes into vapor chambers," he says. "Another critical step in converting performance enhancements observed in the lab to actual devices is to develop engineering models and methods that allow accurate prediction of device performance for specific applications."

When the program, which is being funded by the U.S. Department of Defense's Defense Advanced Research Projects Agency, is completed at the end of this year, the hope is that there will be a transition to actual applications, especially for the Department of Defense, where there is significant need, says Dr. Garimella.

Source: American Society of Mechanical Engineers (ASME)

Wednesday, January 25, 2012

Fold-up car of the future unveiled at EU

Engineerblogger
Jan 25, 2012


European Commission Chairman Jose Manuel Barroso unveils at EU headquarters in Brussels the first prototype of a revolutionary electric fold-up car designed in Spain's Basque country, the "Hiriko", the Basque word for "urban"

A tiny revolutionary fold-up car designed in Spain's Basque country as the answer to urban stress and pollution was unveiled Tuesday before hitting European cities in 2013.

The "Hiriko", the Basque word for "urban", is an electric two-seater with no doors whose motor is located in the wheels and which folds up like a child's collapsible buggy, or stroller, for easy parking.

Dreamt up by the MIT Media Lab in Boston, the concept was developed by a consortium of seven small Basque firms under the name Hiriko Driving Mobility, with a prototype unveiled by European Commission president Jose Manuel Barroso.

Demonstrating for journalists, Barroso clambered in through the fold-up front windscreen of the 1.5-metre-long car.

"European ideas usually are developed in the United States. This time an American idea is being made in Europe," consortium spokesman Gorka Espiau told AFP.

Its makers are in talks with a number of European cities to assemble the tiny cars that can run 120 kilometres (75 miles) without a recharge and whose speed is electronically set to respect city limits.

They envisage it as a city-owned vehicle, up for hire like the fleets of bicycles available in many European cities, or put up for sale privately at around 12,500 euros.

Several cities have shown interest, including Berlin, Barcelona, San Francisco and Hong Kong. Talks are underway with Paris, London, Boston, Dubai and Brussels.

The vehicle's four wheels turn at right angles to facilitate sideways parking in tight spaces.

The backers describe the "Hiriko" project as a "European social innovation initiative offering a systematic solution to major societal challenges: urban transportation, pollution and job creation."

Source: AFP

Tuesday, January 24, 2012

Europe's Driverless Car : semi-autonomous BMW car being demonstrated on a German autobahn

Technology Review
Jan 24, 2012
Easy ride: A semi-autonomous BMW car being demonstrated on a German autobahn. It can accelerate, brake, and overtake slower vehicles on its own. Credit: BMW

Tucked away in the basement of an iconic office tower shaped like four engine cylinders, engineer Werner Huber is telling me about the joy of driving. We're here at BMW headquarters, in Munich, Germany—capital of Bavaria, and arguably of driving itself. But Huber oversees strategic planning for advanced driver assistance systems, so in a way, his job is to put an end to driving—at least as we know it.

"I think that in 10 to 15 years, it could be another world," Huber says. He's not willing to predict exactly what driving will look like then, but he's certain humans will be doing a lot less of it.

For many people, automated cars call to mind those high-tech vehicles with a rotating periscope on top that Google has been driving around California. But Huber and executives at other European automakers say the automated driving revolution is already here: new safety and convenience technologies are beginning to act as "copilots," automating tedious or difficult driving tasks such as parallel parking.

"Driverless" technology will initially require a driver. And it will creep into everyday use much as airbags did: first as an expensive option in luxury cars, but eventually as a safety feature required by governments. "The evolutionary approach is from comfort systems to safety systems to automatic driving," says Jürgen Leohold, executive director for research at Volkswagen Group in Wolfsburg, Germany.

Both BMW and Volkswagen are among the companies already demonstrating cars that drive themselves. In 2010, Volkswagen sent a driverless Audi TTS up Pike's Peak at close to race speeds. Like similar vehicles from Google, these automated vehicles use some combination of GPS, radar, lasers, ultrasonic sensors, and optical cameras to create a constantly updated, 360-degree model of the surrounding environment, which an in-car computer can use to navigate.

But European automakers say their strategy is to move toward greater levels of autonomy incrementally, depending on what does well in showrooms.

Buyers of European luxury cars are already choosing from a menu of advanced options. For example, for $1,350, people who purchase BMW's 535i xDrive sedan in the United States can opt for a "driver assistance package" that includes radar to detect vehicles in the car's blind spot. For another $2,600, BMW will install "night vision with pedestrian detection," which uses a forward-facing infrared camera to spot people in the road.

Lasers, cameras, and other sensors are the most expensive part of autonomous driving systems. Some experimental self-driving cars are estimated to carry more than $200,000 worth of cameras and other gear. Those costs are also leading automakers toward a gradual approach that starts with sensor technologies and then extends capabilities to control driving tasks as well. In the high-end Mercedes-Benz CL, for instance, cameras not only tell a driver when he or she is leaving the lane but actually help the vehicle steer itself back. Several automakers already sell cars with so-called adaptive cruise control that automatically applies the brakes during highway driving if traffic slows. Next, BMW plans to extend that idea in its upcoming i3 series of electric cars, whose traffic-jam feature will let the car accelerate, decelerate, and steer by itself at speeds of up to 25 miles per hour—as long as the driver leaves a hand on the wheel.
To read more click here...
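
The control idea behind such adaptive-cruise and traffic-jam features can be sketched in a few lines. The following is a deliberately simplified proportional controller with invented gains, not BMW's actual algorithm:

```python
# A minimal sketch of gap-keeping cruise control: slow down when the
# radar-measured gap to the car ahead falls below a speed-dependent target,
# speed up again when the road clears, capped at the driver's set speed.
def acc_command(own_speed, gap, set_speed, time_headway=1.8, k_gap=0.5):
    """Return a commanded speed in m/s. All gains are illustrative."""
    target_gap = time_headway * own_speed       # desired following distance, m
    correction = k_gap * (gap - target_gap)     # proportional term
    return max(0.0, min(set_speed, own_speed + correction))

# Traffic ahead slows: the gap shrinks from 60 m to 30 m at ~30 m/s cruise.
for gap in (60.0, 45.0, 30.0):
    cmd = acc_command(own_speed=30.0, gap=gap, set_speed=33.0)
    print(f"gap {gap:4.0f} m -> commanded speed {cmd:4.1f} m/s")
```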



Cooling semiconductor by laser light

Engineerblogger
Jan 24, 2012


Koji Usami is working in the Quantop laboratories at the Niels Bohr Institute. Photo: Ola J. Joensen

Researchers at the Niels Bohr Institute have combined two worlds – quantum physics and nano physics, and this has led to the discovery of a new method for laser cooling semiconductor membranes. Semiconductors are vital components in solar cells, LEDs and many other electronics, and the efficient cooling of components is important for future quantum computers and ultrasensitive sensors. The new cooling method works quite paradoxically by heating the material! Using lasers, researchers cooled membrane fluctuations to minus 269 degrees C. The results are published in the scientific journal, Nature Physics.

“In experiments, we have succeeded in achieving a new and efficient cooling of a solid material by using lasers. We have produced a semiconductor membrane with a thickness of 160 nanometers and an unprecedented surface area of 1 by 1 millimeter. In the experiments, we let the membrane interact with the laser light in such a way that its mechanical movements affected the light that hit it. We carefully examined the physics and discovered that a certain oscillation mode of the membrane cooled from room temperature down to minus 269 degrees C, which was a result of the complex and fascinating interplay between the movement of the membrane, the properties of the semiconductor and the optical resonances,” explains Koji Usami, associate professor at Quantop at the Niels Bohr Institute.

From gas to solid

Laser cooling of atoms has been practiced for several years in experiments in the quantum optical laboratories of the Quantop research group at the Niels Bohr Institute. Here researchers have cooled gas clouds of cesium atoms down to near absolute zero, minus 273 degrees C, using focused lasers and have created entanglement between two atomic systems. The atomic spin becomes entangled and the two gas clouds have a kind of link, which is due to quantum mechanics. Using quantum optical techniques, they have measured the quantum fluctuations of the atomic spin.

“For some time we have wanted to examine how far you can extend the limits of quantum mechanics – does it also apply to macroscopic materials? It would mean entirely new possibilities for what is called optomechanics, which is the interaction between optical radiation, i.e. light, and a mechanical motion,” explains Professor Eugene Polzik, head of the Center of Excellence Quantop at the Niels Bohr Institute at the University of Copenhagen.

But they had to find the right material to work with.

The experiments are carried out in the Quantop laboratories at the Niels Bohr Institute. The laser light that hits the semiconducting nanomembrane is controlled with a forest of mirrors. Photo: Ola J. Joensen

Lucky coincidence

In 2009, Peter Lodahl (who is today a professor and head of the Quantum Photonic research group at the Niels Bohr Institute) gave a lecture at the Niels Bohr Institute, where he showed a special photonic crystal membrane that was made of the semiconducting material gallium arsenide (GaAs). Eugene Polzik immediately thought that this nanomembrane had many advantageous electronic and optical properties and he suggested to Peter Lodahl’s group that they use this kind of membrane for experiments with optomechanics. But this required quite specific dimensions and after a year of trying they managed to make a suitable one.

“We managed to produce a nanomembrane that is only 160 nanometers thick and with an area of more than 1 square millimetre. The size is enormous; no one thought it was possible to produce,” explains Assistant Professor Søren Stobbe, who also works at the Niels Bohr Institute.

Koji Usami shows the holder with the semiconductor nanomembrane. The holder measures about one by one centimeter, while the nanomembrane itself has a surface area of 1 by 1 millimeter and a thickness of 160 nanometers. Photo: Ola J. Joensen

Basis for new research

Now a foundation had been created for being able to reconcile quantum mechanics with macroscopic materials to explore the optomechanical effects.

Koji Usami explains that in the experiment they shine the laser light onto the nanomembrane in a vacuum chamber. When the laser light hits the semiconductor membrane, some of the light is reflected and the light is reflected back again via a mirror in the experiment so that the light flies back and forth in this space and forms an optical resonator. Some of the light is absorbed by the membrane and releases free electrons. The electrons decay and thereby heat the membrane and this gives a thermal expansion. In this way the distance between the membrane and the mirror is constantly changed in the form of a fluctuation.

"Changing the distance between the membrane and the mirror leads to a complex and fascinating interplay between the movement of the membrane, the properties of the semiconductor and the optical resonances and you can control the system so as to cool the temperature of the membrane fluctuations. This is a new optomechanical mechanism, which is central to the new discovery. The paradox is that even though the membrane as a whole is getting a little bit warmer, the membrane is cooled at a certain oscillation and the cooling can be controlled with laser light. So it is cooling by warming! We managed to cool the membrane fluctuations to minus 269 degrees C", Koji Usami explains.

“The potential of optomechanics could, for example, pave the way for cooling components in quantum computers. Efficient cooling of mechanical fluctuations of semiconducting nanomembranes by means of light could also lead to the development of new sensors for electric current and mechanical forces. Such cooling in some cases could replace expensive cryogenic cooling, which is used today and could result in extremely sensitive sensors that are only limited by quantum fluctuations,” says Professor Eugene Polzik.

Source: University of Copenhagen

Water sees right through graphene: graphene enhances many materials, but leaves them wettable

Engineerblogger
Jan 24, 2012
Drops of water on a piece of silicon and on silicon covered by a layer of graphene show a minimal change in the contact angle between the water and the base material. Researchers at Rice University and Rensselaer Polytechnic Institute determined that when applied to most metals and silicon, a single layer of graphene is transparent to water. (Credit: Rahul Rao/Rensselaer Polytechnic Institute)

Graphene is largely transparent to the eye and, as it turns out, largely transparent to water.

A new study by scientists at Rice University and Rensselaer Polytechnic Institute (RPI) has determined that gold, copper and silicon get just as wet when clad by a single continuous layer of graphene as they would without it.

The research, reported this week in the online edition of Nature Materials, is significant for scientists learning to fine-tune surface coatings for a variety of applications.

"The extreme thinness of graphene makes it a totally non-invasive coating," said Pulickel Ajayan, Rice's Benjamin M. and Mary Greenwood Anderson Professor in Mechanical Engineering and Materials Science and of chemistry. "A drop of water sitting on a surface 'sees through' the graphene layers and conforms to the wetting forces dictated by the surface beneath. It's quite an interesting phenomenon unseen in any other coatings and once again proves that graphene is really unique in many different ways." Ajayan is co-principal investigator of the study with Nikhil Koratkar, a professor of mechanical, aerospace and nuclear engineering at RPI.

A typical surface of graphite, the form of carbon most commonly known as pencil lead, should be hydrophobic, Ajayan said. But in the present study, the researchers found to their surprise that a single-atom-thick layer of the carbon lattice presents a negligible barrier between water and a hydrophilic – water-loving – surface. Piling on more layers reduces wetting; at about six layers, graphene essentially becomes graphite.

An interesting aspect of the study, Ajayan said, may be the ability to change such surface properties as conductivity while retaining wetting characteristics. Because pure graphene is highly conductive, the discovery could lead to a new class of conductive, yet impermeable, surface coatings, he said.

The caveat is that wetting transparency was observed only on surfaces (most metals and silicon) where interaction with water is dominated by weak van der Waals forces, and not for materials like glass, where wettability is dominated by strong chemical bonding, the team reported.

But such applications as condensation heat transfer -- integral to heating, cooling, dehumidifying, water harvesting and many industrial processes -- may benefit greatly from the discovery, according to the paper. Copper is commonly used for its high thermal conductivity, but it corrodes easily. The team coated a copper sample with a single layer of graphene and found the subnanometer barrier protected the copper from oxidation with no impact on its interaction with water; in fact, it enhanced the copper's thermal effectiveness by 30 to 40 percent.

"The finding is interesting from a fundamental point of view as well as for practical uses," Ajayan said. "Graphene could be one of a kind as a coating, allowing the intrinsic physical nature of surfaces, such as wetting and optical properties, to be retained while altering other specific functionalities like conductivity."

The paper's co-authors are Rice graduate student Hemtej Gullapalli, RPI graduate students Javad Rafiee, Xi Mi, Abhay Thomas and Fazel Yavari, and Yunfeng Shi, an assistant professor of materials science and engineering at RPI.

The Advanced Energy Consortium, National Science Foundation and the Office of Naval Research graphene MURI program funded the research.

Source: Rice University

Researchers provide new insight into how metals fail

Engineerblogger
Jan 24, 2012



Derek Warner

The eventual failure of metals, such as the aluminum in ships and airplanes, can often be blamed on breaks, or voids, in the material's atomic lattice. They're at first invisible, only microns in size, but once enough of them link up, the metal eventually splits apart.

Cornell engineers, trying to better understand this process, have discovered that nanoscale voids behave differently than the larger ones that are hundreds of thousands of atoms in scale, studied through traditional physics. This insight could lead to improved ability to predict how cracks grow in metals, and how to engineer better materials.

Graduate student Linh Nguyen and Derek Warner, assistant professor of civil and environmental engineering, reported their findings in the journal Physical Review Letters, Jan. 20. Using new atomistic simulation techniques, they concluded that the smallest voids in these materials, those having nanometer dimensions, don't contribute in the same way as microscale voids do in material failure at ordinary room temperatures and pressures.

When metals fail, a physical phenomenon known as plasticity often occurs, permanently deforming, or changing the shape of the material. Previously, it was theorized that both nanometer and microscale voids grow via plasticity as the material fails, but the new research says otherwise.

"While this was something amenable to study with traditional atomistic modeling approaches, the interpretation of previous results was difficult due to a longstanding challenge of time scaling," Warner said. "We've come up with a technique to better address that."

Nguyen and Warner's work is supported by the Office of Naval Research, which has particular interest in the use of aluminum and other lightweight, durable metals in high-performance ship structures.

Source: Cornell University

Monday, January 23, 2012

DNV Develops X-Stream, New Deep-Water Pipeline Concept

Engineerblogger
Jan 23, 2012


DNV has developed a new pipeline concept, called X-Stream, that can significantly reduce the cost of a deep- and ultra-deepwater gas pipeline while still complying with the strictest safety and integrity regime. X-Stream is based on established and field-proven technologies which have been innovatively arranged.

X-Stream can reduce both the pipeline wall thickness and time spent on welding and installation compared to deep-water gas pipelines currently in operation. The exact reduction in the wall thickness depends on the water depth, pipe diameter and actual pipeline profile. Typically, for a gas pipeline in water depths of 2,500 m, the wall thickness reduction can be 25 to 30 % compared to traditional designs.

“It’s essential for DNV that the new concept meets the strict requirements of the existing safety and integrity regime, and I’m pleased to confirm that this concept does,” says Dr. Henrik O. Madsen, DNV’s CEO, who announced the news at a press briefing in London today.

“DNV has been instrumental in developing and upgrading the safety and integrity regime and standards for offshore pipelines over the past decades. Today, more than 65 % of the world’s offshore pipelines are designed and installed to DNV’s offshore pipeline standard. As the deep-water gas transportation market will experience massive investments and considerable growth over the coming years, new safe and cost-efficient solutions are needed,” Dr. Madsen adds.

Current deep-water gas pipelines have thick walls and, due to quality and safety requirements, the number of pipe mills capable of producing the pipe is limited. When installing pipelines, the heavy weights are difficult to handle and the thick walls are challenging to weld. And finally, the number of pipe-laying vessels for deep-water pipelines is limited too.




New offshore oil and gas fields are being developed in ever deeper waters, and export solutions for the gas are critical. New exploration activities are also heading for ultra-deep waters, and the distance to shore is increasing too. For such fields, the X-Stream concept can represent an alternative to, for example, floating LNG plants combined with LNG shuttle tankers.

By controlling the pressure differential between the pipeline’s external and internal pressures at all times, the amount of steel and thickness of the pipe wall can be reduced by as much as 25-30 % – or even more compared to today’s practice and depending on the actual project and its parameters. This will of course make it easier and cheaper to manufacture and install the pipeline.

“By utilising an inverted High Integrity Pressure Protection System – i-HIPPS – and inverted Double Block and Bleed valves – i-DBB – the system immediately and effectively isolates the deep-water pipe if the pressure starts to fall. In this way, the internal pipeline pressure is maintained above a critical level at all times,” explains Asle Venås, DNV’s Global Pipeline Director.

The new concept is simple and reliable. During installation, it is necessary to fully or partially flood the pipeline to control its differential pressure. During operation, the i-HIPPS and i-DBB systems ensure that the pipeline’s internal pressure can never drop below the collapse pressure – plus a safety margin. In sum – a certain minimum pressure will be maintained in the pipeline at all times.
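
The quoted 25 to 30 % wall-thickness saving can be sanity-checked with the classical elastic collapse pressure of a long tube, P_c = 2E/(1 - nu^2) * (t/D)^3. Real pipeline design (e.g. to DNV's offshore pipeline standard) adds plastic collapse and ovality terms, so the sketch below is only an illustration with assumed pressures:

```python
# Rough check of the quoted wall-thickness saving using the classical
# elastic collapse pressure of a long thin tube (Timoshenko):
# P_c = 2E/(1 - nu^2) * (t/D)^3. Pressures below are assumptions.
E, nu = 207e9, 0.3                 # steel modulus (Pa) and Poisson ratio
coeff = 2 * E / (1 - nu**2)

def wall_ratio(net_external_pa):
    """t/D at which the elastic collapse pressure equals the net external pressure."""
    return (net_external_pa / coeff) ** (1.0 / 3.0)

depth_pa = 25e6                         # ~2,500 m of seawater, Pa
t_empty = wall_ratio(depth_pa)          # conventional: pipe may see full depth pressure
t_press = wall_ratio(depth_pa - 15e6)   # X-Stream-style: >=15 MPa always kept inside (assumed)
print(f"t/D, depressurisable pipe : {t_empty:.4f}")
print(f"t/D, pressure-held pipe   : {t_press:.4f}")
print(f"wall-thickness saving ≈ {100 * (1 - t_press / t_empty):.0f}%")  # ≈ 26%
```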

 

“It will also be important to maintain the minimum pressure in the pipeline during pre-commissioning. This can be done using produced gas separated from the water in the pipe by a set of separation pigs and gel. This technology is not new to the industry. This method has already been initiated as standard practice by several oil companies,” says Mr Venås.

A team of mainly young highly skilled engineers, headed by DNV in Rio de Janeiro, Brazil, is behind the X-Stream concept. As with the other DNV concepts launched in 2010 and 2011, the X-Stream team was asked to think outside the box.

The DNV study is a concept study, and a basic and detailed design will need to be carried out before the X-Stream concept is realised on a real project. DNV intends to work further with the industry to refine and test the concept.

“I’m pleased to announce the outcome of this innovation project. At DNV, we feel confident that, by further qualifying the X-Stream concept, huge financial savings can be made for long distance, deep-water gas pipelines without compromising pipeline safety and integrity,” concludes Dr. Madsen.

Source: Subsea World News

The Nano-economy: Time to Reap the Rewards

Engineerblogger
Jan 23, 2012


In a recent speech I made to business leaders in Boston, I explained that perched atop 26 years of experiences I've stacked up in nanobusiness, I have a pretty good view to the horizon. You know what I see? Decades of investment by government and the private sector have grown into a field of economic opportunity, now ripe with good jobs.

Better yet, I see the harvesting equipment has just been delivered: the Advanced Manufacturing Partnership. It's a new public-private consortium charged with investing more than $500 million in nanotechnology and other emerging technologies. The goal? Convert scientific knowledge to factory floor output -- and high quality jobs -- faster.

Business builders like Dow, Ford, and Procter & Gamble have come to the table with MIT, Stanford and other universities, to join with the National Economic Council, Office of Science and Technology Policy and the President's Council of Advisors on Science and Technology.

The group's scope is wide, but three goals apply directly to nano-commercialization:
  • Reducing the time to make advanced materials for manufacturing.
  • Developing new technologies to get manufactured goods designed, built, tested and to market faster.
  • Creating an infrastructure and shared facilities that open up opportunities for small and mid-sized innovators.

Best of all, the talk is being backed up with serious investments, including:
  • $300 million in domestic manufacturing in critical national security industries. That includes high-efficiency batteries and advanced composites -- where nanotech leads.
  • $100 million for the research, training and infrastructure to develop and commercialize advanced materials at twice the speed and a greatly reduced price.
  • $12 million from the Commerce Department for an advanced manufacturing technology consortium charged with streamlining new product commercialization.
  • $24 million from the Defense Department for advances in weaponry and programs to reduce development timetables that enable entrepreneurs to get into the game.
  • $12 million for consortia to tackle common technological barriers to new product development - the way earlier partnerships approached nanoelectronics.
A group of the nation's top engineering schools will collaborate to accelerate the lab-to-factory timetable with AMP connecting them to manufacturers.

The result? The brightest scientific minds and hardest working entrepreneurs on the planet have brought us fresh jobs, ripe for the picking. Already, the U.S. accounts for around 35% of the global nanotechnology market, estimated at $1.6 trillion during 2009-2013, according to a report by Research and Markets. With AMP, that growth can continue.

As the Partnership takes shape, I'd like to add my two cents in advice for organizers: keep AMP a partnership, not a handout. In the toughest economy in 80 years, organizations are appreciative of government stimulus. But I have a deep concern that we can easily become addicted to it and start building business models to earn grants, not profits.

The truest judges of business are people with their own resources at risk -- private sector investors and businesses. I know it can be ruthless, but keeping the focus on the private sector is the best way to weed out the bad ideas and fertilize the strong. If our officials set their sights on simply providing a little more sunlight to all small and medium-size enterprises, the best nanotechnology companies will rise on their own.

Source: Industry Week

Modified Toyota 2000GT Solar EV

Engineerblogger
Jan 23, 2012


Modified Toyota 2000GT Solar EV


Toyota has developed a solar EV based on the 2000GT, its classic limited-production grand tourer.

"There is a solar panel on the hood and a translucent solar panel on the rear window. Solar panels still have low charging efficiency, so they need about two weeks to charge fully from zero. But we've been particular about utilizing solar panels, to power this car without using any electricity from thermal plants, or emitting any CO2."

This car was converted by members of the Crazy Car Project, which includes engineers from car dealers, parts suppliers, and car makers. It incorporates both traditional Japanese craftsmanship and cutting-edge technology.

"We created the interior together with a company called Hayashi Telempu, which supplied parts for the original 2000GT. The idea was to revive the original features using today's technology. By using artificial leather instead of real leather, we've given the interior an even smoother finish. For the wooden finish on the instrument panel, rather than the original brown, we've used Japanese black lacquer, with gold and silver accents. This was commissioned from artisans in Kaga, to create a traditional Japanese atmosphere. Another highlight is the seven-dial meter, a characteristic feature of the original 2000GT. We've kept the original arrangement unchanged, but now it shows EV readings, like the motor rate, battery charge, and battery temperature."

The concept behind this vehicle is a solar car that can carry two people at a top speed of 200 km/h. It has a 35 kWh battery from Panasonic, and uses the motor and inverter from the Lexus LS Hybrid.

"The sound of a gasoline engine in a race is exciting, and with a quiet EV, you can add the kind of sound you like. By making the pitch and frequency vary linearly when the accelerator is pressed, we've created a sound that simulates a race car very well."

"Imagine a parking lot in summer. The parking lot here in midsummer is full of cars, and they're not doing anything useful, just getting hot in the sun. If all cars had solar panels like this, they'd make a great mega-solar plant. If that could be achieved, automobiles, which are said to be unfriendly to the environment, could become good for it. They'd be useful even when they were parked. We've built this car in the hope that, one day, the world will be like that."




Source: DigInfo TV

Wednesday, January 18, 2012

China's drive for 'green' cars hits roadblocks

Engineerblogger
Jan 18, 2012


A man looks at a Volvo V60 electric car displayed at the Shanghai Auto Show in Shanghai last April. Car makers are struggling to sell environmentally friendly vehicles in China, even as Beijing pumps billions into clean energy.


Foreign and domestic car makers are struggling to sell environmentally friendly vehicles in China, the world's largest auto market, even as Beijing pumps billions into clean energy.


China wants five million "new energy" vehicles on the streets by 2020 to ease chronic pollution and reduce reliance on oil imports, but high prices, lack of infrastructure and consumer reluctance are creating major roadblocks.

The number of electric and hybrid vehicles currently in the country is tiny at about 100,000, mostly in government fleets, according to an industry estimate.

A salesman at the main Shanghai showroom of Chinese car maker BYD said the dealer sold only one electric car and two hybrid cars -- which combine a conventional internal combustion engine and an electric motor -- last year.

BYD, which is backed by US investment titan Warren Buffett, launched a fully electric vehicle for private buyers in October priced at 370,000 yuan ($60,000), though subsidies cut the cost by at least 16 percent.

"People hesitate to choose cars with a high price," said BYD sales manager Zhang Jiankun. "Although the government can provide subsidies for alternative-energy cars, the lack of charging stations is a main concern."

China had an estimated 243 charging stations at the end of 2011, but Beijing plans to invest 100 billion yuan over the next 10 years to build up the new-energy vehicle sector as a whole, focusing on electric models.

Foreign auto makers are also promoting the new technology in China.

US giant General Motors imported its first Chevrolet Volts into China in December and will begin selling the hybrids in early 2012 at 13 dealerships in eight cities.

But the Volt could suffer an image problem even as sales get under way in China: the vehicle faces a US government probe after damaged lithium batteries caught fire following crash tests.

GM says it has addressed the safety issue by reinforcing the battery.

The company is also developing a separate electric vehicle with its Chinese partner, domestic auto giant SAIC Motor, which itself launched five new energy vehicles in November.

"It seems every major company has its own electric-vehicle programme," Ray Bierzynski, executive director for electrification strategy of GM China, told reporters last year.

China overtook the US to become the world's top auto market in 2009 and is increasingly important for global players as economic turmoil hits demand in developed markets.

But the push for clean-energy cars comes as China's overall sales slow. Auto sales rose just 2.5 percent to 18.51 million units last year, compared with an increase of more than 32 percent in 2010.

China had hoped to vault its car companies into the top ranks of electric-vehicle producers but in recent months has reconsidered that strategy given the technological lead of foreign firms, and is now focusing more on hybrids.

The government is keen to build up its domestic auto industry so it has slapped import tariffs on some US passenger cars and sports utility vehicles, and said it would "withdraw support" for foreign investment in the sector.

"At the beginning the objective was, literally, to leapfrog. They have realised this is far too over ambitious," said Klaus Paur, director for automotive analysis at market research agency Ipsos in China.

"Currently, the government is re-visiting the strategy on (fully) electric vehicles. This is why they push more into the hybrids," he said.
However, one industry executive said the move did not indicate a "dramatic shift" in China's commitment to electrification.

"As we move down that path, there's a more realistic view of how quickly people can move and how some of the challenges can be addressed," Kevin Wale, president and managing director of GM China Group, told reporters.

The challenge includes building the infrastructure for charging batteries and convincing consumers to trust the technology. To this end, China has set up 15 pilot zones for electric vehicles across the country.

But in a country where car culture is only two decades old and fuel prices are controlled by the government, flashy luxury brands carry more appeal.

"To me, the performance of a car is the top priority, including how powerful it is," said marketing manager Gu Jiahuan, who is shopping for a car.

"Alternative-energy cars are not mature enough. And pure electric cars cannot go very far."

Source: The Associated Press

Five Ways Nanomanufacturing Improves Manufacturing Today

Engineerblogger
Jan 18, 2012



SME’s NanoManufacturing Conference and Exhibits, March 27-28, Boston, will highlight the current and near-term applications of nanotechnology and how they are transforming manufacturing.

Nanomanufacturing is no longer the next frontier. It’s in action today, and it is improving products and processes and saving manufacturers money along the way.

This March, nanomanufacturing experts will be gathering in Boston to share their knowledge with other manufacturing professionals at the NanoManufacturing Conference and Exhibits organized by the Society of Manufacturing Engineers. In addition to discussions on mid- to long-term applications of this smallest of technologies, there will be information on how it is already impacting the industry as evidenced in these five ways.

1) Materials – Nanotechnology is creating exceptionally light, yet extremely tough, materials, such as graphene. Nanocomposites are uniquely customizable and can be tailored to adhere to other materials. They are currently used in golf clubs and tennis rackets, with expectations that they will transform the aerospace, defense and transportation industries in the not-too-distant future.

2) Coatings – Nanotechnology is enabling coatings to have numerous beneficial properties that are proving very marketable. Nanocoatings are known to be thermal barriers, flame retardant, ultraviolet resistant, self- and easy-cleaning, wear resistant, friction reducing, corrosion resistant, scratch resistant, antibacterial and anti-fingerprint. They can even be self-healing. Nanocoatings are used in myriad industries including automotive, defense, household cleaners, construction and exterior protection, with a very promising future in many more fields.

3) Energy Collection and Storage – Surface-to-volume ratios give nanoparticles most of their power. For example, a golf ball's volume has roughly the surface area of a playing card, but the same volume in nanoparticles has a surface area equivalent to four football fields (a back-of-the-envelope check follows this list). These particles use light more efficiently, which is improving the efficiency and lowering the cost of solar panels.

4) Lighting – Quantum dots are nanoparticles of a semiconductor material with unique optical and electrical properties. A manufacturer can precisely control the size of a quantum dot to determine the color of light emitted. In addition to enabling the manufacturing of LED lights, quantum dots are used in electroluminescent displays and solid-state lighting.

5) Manufacturing Processes – Self-assembly is a branch of nanotechnology in which objects, devices and systems form structures without external prodding. Biological systems use self-assembly to construct various molecules and structures. Think of it as LEGOS® that assemble themselves. This process is currently being used in computer chips, and has potential benefits for water purification, sanitation, agriculture, alternative energy and medicine.
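
As a back-of-the-envelope check of the golf-ball comparison in item 3: dividing a fixed volume V into spheres of radius r gives a total surface area of 3V/r, so area grows as the particles shrink. The particle sizes and the football-field area below are illustrative assumptions:

```python
import math

# Surface area of a golf ball's worth of material, subdivided into spheres
# of radius r: N spheres of total volume V have total area A = 3V/r.
r_ball = 0.0213                        # golf ball radius, m
V = (4.0 / 3.0) * math.pi * r_ball**3  # ≈ 4.0e-5 m^3

for r_particle in (1e-6, 1e-7, 5e-9):      # 1 µm, 100 nm, 5 nm (assumed)
    area = 3.0 * V / r_particle            # total surface area, m^2
    fields = area / 5350.0                 # one US football field ≈ 5,350 m^2
    print(f"r = {r_particle:7.0e} m -> {area:10.1f} m^2 ≈ {fields:6.2f} fields")
# At r ≈ 5 nm the total area is roughly four football fields, matching
# the figure quoted in the article.
```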

“More than 1,300 products have already made it to market using nanotechnology, with many more in the pipeline,” said Lauralyn McDaniel, the conference manager. “This conference provides an opportunity for manufacturers from almost any industry to either be introduced to nanomanufacturing or discover the latest advances in the technology.”

The NanoManufacturing conference is co-located with the MicroManufacturing Conference and Exhibits. Both events have been designed based on feedback attendees have given that the most valuable part of attending an SME conference is the people they meet and the resources they gain. To encourage the synchronistic collaboration, the sessions are shorter and breaks are longer, the exhibits have been arranged “in the round” to promote discussion, and the Conversation Connection areas are ideal for having in-depth conversations with colleagues. Attendees of either conference can go back and forth between the two and tailor this event to their own interests and needs.

The NanoManufacturing Conference sessions cover a broad range of topics, including metrology (because “if you can’t measure it, you can’t make it”), nanostructure manufacturing techniques, the future applications of graphene in everyday products, occupational and environmental health and safety concerns, and antimicrobial technologies for medical devices.

Additionally, a panel of nanomanufacturing leaders will address moving from the research and development phase and prototyping to the commercialization phase and volume production. The event concludes with the annual peek into the nanocrystal ball in an attempt to predict what the next five years will bring to nanotechnology.

For those who are new to the technology, need a refresher or just want to explore the topic in more depth, pre-conference workshops and tours to iRobot® and the Center for High-rate Nanomanufacturing/Kostas Nanomanufacturing Research Center are also available.

Source: PR Web

Tuesday, January 17, 2012

Want Cheap Biofuel? A Startup Makes It with Natural Gas

Technology Review
Jan 17, 2012


Fast fuel: Virent’s 100-liter-per-day pilot plant, shown here, produces fuel for Formula 1 race cars. Credit: Virent


Virent, a biofuels company based in Madison, Wisconsin, has developed a potentially inexpensive way to make gasoline and other valuable chemicals out of grass and wood chips. Its approach reduces costs by simplifying or eliminating expensive processing steps, and by using natural gas to increase the amount of fuel that can be made from a given amount of biomass.

In some ways, the process is similar to the one used to refine oil. Virent has demonstrated that it can use it to make gasoline, diesel, and jet fuel, and its 100-liter-per-day gasoline pilot plant makes fuel that's used in Formula 1 racing.

As with many other biofuels companies, Virent's first large-scale product may not be fuel at all. It recently announced a development agreement with Coca-Cola to produce a chemical that can be used to make plastic soda bottles, and is hoping to build a plant for this purpose in 2015.

The company's technology addresses one of the big challenges in making advanced biofuels: producing hydrocarbon fuels from grass requires breaking down the long cellulose molecules that make up the bulk of the raw material. Breaking the biomass down is expensive, and is normally done either with enzymes that produce sugar or by using high temperatures and pressures to turn it into carbon monoxide and hydrogen gas. Virent's process instead produces intermediate-sized molecules known as oligomers that require less processing. Its core technology is a way to transform those oligomers into fuel.

Making hydrocarbons from biomass requires first removing the oxygen. Virent has also developed inorganic catalysts that remove most of the oxygen from the molecules it produces. It then uses a series of chemical reactions to remove the remaining oxygen and reconfigure the molecules to take on the properties needed in fuels like gasoline or in chemicals for making plastic bottles.
To read more click here...

Comparing Energy Conversion of Plants and Solar Cells

Engineerblogger
Jan 17, 2012


In studies at Urbana, Illinois, ARS scientists (left to right) Carl Bernacchi, Don Ort, and Lisa Ainsworth work in a facility where photosynthesis efficiency and yield can be measured in response to a simulated variable. Improving photosynthesis could lead to increased food production from soybeans, shown here. Photo courtesy of Institute for Genomic Biology/University of Illinois.

Scientists now have a way to more accurately compare how efficiently plants and photovoltaic, or solar, cells convert sunlight into energy, thanks to findings by a research consortium that included a U.S. Department of Agriculture (USDA) scientist.

The study, published in Science, could help researchers improve plant photosynthesis, a critical first link in the global supply chain for food, feed, fiber and bioenergy production.

Comparing plant and photovoltaic systems is a challenge. Although both processes harvest energy from sunlight, they use that energy in different ways. Plants convert the sun's energy into chemical energy, whereas solar cells produce electricity. The scientists, including Agricultural Research Service (ARS) research leader Donald Ort in the agency's Global Change and Photosynthesis Research Unit in Urbana, Ill., identified specific designs that hold excellent promise for improving efficiency.

ARS is the USDA's chief intramural scientific research agency.

The first step was to facilitate a direct comparison of the two systems. The researchers set a uniform basis for the comparison and examined the major factors that define the efficiencies of both processes, first considering current technology, then looking forward to possible strategies for improvements.

In all cases, the research team considered the efficiency of harvesting the entire solar spectrum as a basis for comparison. Additionally, the researchers compared plants to solar cell arrays that also store energy in chemical bonds. Calculations were applied to a solar cell array that was coupled to an electrolyzer that used electricity from the array to split water into hydrogen and oxygen. The free energy needed to split water is essentially the same as that needed for photosynthesis or a solar cell, so the comparison provided a level playing field.

Using this type of calculation, the annual averaged efficiency of solar-cell-driven electrolysis is about 10 percent, while the solar energy conversion efficiency of crop plants is only about 1 percent, which illustrates the significant potential for improving the efficiency of the natural system, according to Ort. Although solar cells hold a clear advantage in the team's analysis, both systems will be needed for sustainable energy conversion in the future. The efficiency comparison between plant photosynthesis and solar cells lays the groundwork for improving photosynthetic efficiency in agriculture, and with it crop yields.
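
To make those percentages concrete, here is a minimal sketch of how much chemical energy each route would bank per square metre per year. The 10 percent and 1 percent efficiencies come from the article; the annual insolation figure is an illustrative mid-latitude assumption, not a value from the study.

# Stored chemical energy per square metre per year for each route.
ANNUAL_INSOLATION = 1700  # kWh/m^2/yr, assumed mid-latitude total (illustrative)

routes = {
    "solar cells + electrolysis": 0.10,  # efficiency quoted in the article
    "crop photosynthesis": 0.01,         # efficiency quoted in the article
}

for name, efficiency in routes.items():
    stored = ANNUAL_INSOLATION * efficiency
    print(f"{name}: ~{stored:.0f} kWh of chemical energy per m^2 per year")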

Source: Agricultural Research Service

Nanotube bundles could make good solar cells

Engineerblogger
Jan 17, 2012


Photon hitting a bundle of carbon nanotubes

Bundles of carbon nanotubes could increase the efficiency of thin-film solar cells. So say researchers at the Los Alamos National Laboratory in the US who have used high-speed spectroscopy to show that the bundles can not only generate electron-hole pairs when exposed to sunlight but can separate these pairs of charge carriers too. This is the first time that these two crucial functions have been demonstrated in a single thin-film photovoltaic material.

Thin-film photovoltaic materials have advantages over conventional solar-cell materials such as silicon: they are cheaper to make, lighter, and more flexible. They work by absorbing photons from sunlight and converting them into electron-hole pairs, or excitons. To generate an electric current, the electron and hole must then be separated in the brief interval before the two particles recombine and their energy is reabsorbed into the material. In a solar cell, the exciton must quickly travel to another layer of the device, where charge separation occurs, but it normally recombines too fast, something that ultimately leads to low overall efficiencies.

Semiconducting carbon nanotube bundles could come into their own here, say Jared Crochet and colleagues. Individual semiconducting nanotubes (which are tubules of the semi-metal graphene) suffer from the low efficiency mentioned above, but this problem can be overcome when the tubes are aggregated into bundles that share the same chirality, the angle at which the graphene sheet is rolled up to form a tube.

Light absorption and charge separation
Such nanotube bundles respond to absorbed light in the same way as the parent material graphene, and charge separation can thus be very efficient. "This effect is promising for incorporating carbon nanotubes into photovoltaic devices as active layers where both light absorption and charge separation can occur," Crochet told nanotechweb.org.

The materials used in these experiments were produced by centrifuging individual carbon nanotubes so that tubes of the same twist direction and diameter aggregated together. The researchers chose bundles with a diameter and twist that strongly absorb light at a wavelength of about 570 nm – ideal for exposing to sunlight.
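
As a rough check, not taken from the paper, a 570 nm photon carries about 2.2 eV, in the green-yellow band where the solar spectrum is most intense; the sketch below does the arithmetic via E = hc/lambda.

# Photon energy at the bundles' ~570 nm absorption peak, E = h*c/lambda.
H = 6.626e-34         # Planck constant, J*s
C = 2.998e8           # speed of light, m/s
EV = 1.602e-19        # joules per electronvolt
WAVELENGTH = 570e-9   # absorption peak reported in the article, m

energy_ev = H * C / (WAVELENGTH * EV)
print(f"Photon energy at 570 nm: ~{energy_ev:.2f} eV")  # ~2.18 eV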

High-speed spectroscopy
By exposing the samples to a brief flash of laser light and recording spectra every few tens of femtoseconds, Crochet's team was able to observe signals characteristic of exciton formation, plus additional peaks that indicated the production of free electrons and holes. In samples made of non-bundled, individual carbon nanotubes, only the peak corresponding to exciton creation was seen.

The team now plans to incorporate single chirality semiconducting carbon nanotube networks into real-world photovoltaic devices as active layers. "We would ideally like to see an all-carbon solar cell made of graphene, graphene oxide and carbon nanotubes," said Crochet.

The researchers are also busy trying to better understand exciton dissociation and charge transport in the nanotube bundles using the high-speed spectroscopy technique. "The advantage of having the material in a device is that we can investigate every step, from photon absorption to charge collection," concluded Crochet.

The work was reported in Physical Review Letters.


Source: Nanotechweb.org

Nano-engineered carbons promise better gas storage materials for advanced transportation

Engineerblogger
Jan 17, 2012


Tunable sub-nm and supra-nm pores in KOH-activated carbon

Activated carbons provide large surface areas (as high as 3,000 m²/g) generated by a maze of nanoscale pores, making them well suited to high-performance adsorption. Applications include the storage of hydrogen and natural gas for advanced transportation (pressure-controlled adsorption and desorption of gas at supercritical temperature). Two distinct types of pores have attracted researchers' interest: sub-nm (<1 nm) pores host deep potential wells, surface sites with high binding energies, which adsorb gas molecules as a high-density fluid; supra-nm (>1 nm) pores host lower binding energies but offer space for multilayer adsorption. Sub-nm pores therefore favour high volumetric storage capacity, while supra-nm pores favour high gravimetric storage capacity. Experts are looking for ways to optimize gas storage capacities by controlling the number of pores in each of the two classes.

In a recent study, published in the journal Nanotechnology, scientists at the University of Missouri, US, have demonstrated that such control is possible. Carbons chemically activated with potassium hydroxide (which oxidizes the carbon and intercalates metallic potassium into the carbon lattice) gave bimodal pore-size distributions with a large, approximately constant number of sub-nm pores and a variable number of supra-nm pores (1–5 nm, peaked around 1.5 nm). The control variables were the KOH:C mass ratio and the activation temperature.

Tunable pore space

The team showed that supra-nm pores are absent when the KOH:C ratio and activation temperature are low, and that they increase rapidly in number as the ratio and temperature rise. By appropriate choice of these variables, the volume in supra-nm pores can be varied anywhere from 0 to 1.0 cm³/g, while the volume in sub-nm pores remains approximately constant at 0.6 cm³/g.

This tunable pore space will allow researchers to selectively optimize carbons for high volumetric or high gravimetric storage capacity, depending on the requirements in different vehicles for on-board storage of hydrogen and natural gas.

High volumetric capacity is important for designs where space is limited (passenger and light-duty vehicles), while high gravimetric capacity is key when vehicle weight must be kept to a minimum.
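
To see how the two pore classes trade off in practice, here is a minimal sketch. The pore volumes are the ones reported in the study; the adsorbed-gas densities and the carbon packing density are illustrative assumptions made for this estimate, not measured values.

# Rough methane storage estimate from the two pore-volume classes.
SUB_NM_VOLUME = 0.6    # cm^3/g, roughly constant sub-nm pore volume (study)
SUPRA_NM_VOLUME = 1.0  # cm^3/g, supra-nm volume at the top of its range (study)
RHO_SUB = 0.17         # g/cm^3, assumed liquid-like density in sub-nm pores
RHO_SUPRA = 0.10       # g/cm^3, assumed lower density in wider pores
CARBON_DENSITY = 0.5   # g/cm^3, assumed packing density of the carbon monolith

gas_per_gram = SUB_NM_VOLUME * RHO_SUB + SUPRA_NM_VOLUME * RHO_SUPRA
print(f"Gravimetric: ~{100 * gas_per_gram:.0f} g CH4 per 100 g carbon")
print(f"Volumetric:  ~{gas_per_gram * CARBON_DENSITY:.2f} g CH4 per cm^3 carbon")

Under these assumptions the supra-nm pores dominate the gravimetric figure while diluting the volumetric one, which is exactly the trade-off the tunable pore space lets designers navigate.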

The research was conducted by members of the Alliance for Collaborative Research in Alternative Fuel Technology (ALL-CRAFT), a partnership of the University of Missouri and several other institutions, funded by the National Science Foundation, US Department of Energy, US Department of Defense, California Energy Commission, and Southern California Gas Company, to develop novel storage materials for natural gas and hydrogen for advanced next-generation clean vehicles.

 Source: IOP Publishing


Monday, January 16, 2012

Longer-lasting chemical catalysts

Engineerblogger
Jan 16, 2012


A graphical representation of the retrievable and reusable polymer–metal catalyst, showing the palladium (blue) that links two imidazole polymer units (red) through their nitrogen atoms. Copyright: 2011 Yoichi Yamada


Metal-based chemical catalysts have excellent green chemistry credentials—in principle at least. In theory, catalysts are reusable because they drive chemical reactions without being consumed. In reality, however, recovering all of a catalyst at the end of a reaction is difficult, so it is gradually lost. Now, thanks to recent work by Yoichi Yamada at the RIKEN Advanced Science Institute in Wako; Yasuhiro Uozumi at RIKEN and Japan's Institute for Molecular Science; and Shaheen Sarkar, also at RIKEN, chemists can retain, retrieve, and reuse metal catalysts by trapping them in a polymer matrix.

Attaching metal catalysts to an insoluble polymer support, which is recoverable at the end of a reaction by simple filtration, is far from a new idea. Traditionally, chemists attached their metal catalyst to an insoluble polymer resin. However, the metal invariably leached out of the polymer over time so the catalysts were still slowly lost.

Yamada and his colleagues' approach, in contrast, integrates the metal into the polymer matrix, which traps it far more effectively. The researchers achieved this level of integration by starting with a soluble polymer precursor instead of an insoluble resin. The precursor contains imidazole units, a chemical structure known to bind strongly to metals such as palladium. An insoluble composite material formed only after the researchers added palladium to the mixture, because the metal causes the imidazole units to self-assemble around its atoms, a process the team calls 'molecular convolution'.

Scanning electron microscopy revealed that the resulting polymer–palladium globules, which ranged from 100 to 1,000 nm in diameter, aggregated into a highly porous structure reminiscent of a tiny bathroom sponge. "This sponge-like insoluble material can easily capture substrates and reactants from the solution, which readily react with metal species embedded in the sponge," says Yamada.

The researchers showed that the catalyst is highly active as well as reusable; it is the most active catalyst yet reported for a carbon–carbon bond-forming reaction known as an allylic arylation. They also reused the catalyst multiple times with no apparent loss of activity, and detected no leaching of palladium from the polymer into the reaction mixture.

Yamada and colleagues are now developing a range of composite catalysts incorporating different metals that can catalyze many other kinds of reactions. "These extremely highly active and reusable catalysts will provide a safe and highly efficient chemical process, which we hope will be adopted for industrial chemical processes," Yamada says.

Source: RIKEN


Project to pour water into volcano to make power

Engineerblogger
Jan 16, 2012

In this May 16, 2008, file photo, Newberry Crater project drilling manager Fred Wilson stands near a drilling rig at the Newberry Crater geothermal project as he describes the work near La Pine, Ore. Geothermal energy developers plan to pump 24 million gallons of water into the side of the dormant Central Oregon volcano this summer to demonstrate new technology they hope will give a boost to a green energy sector that has yet to live up to its promise. (AP Photo/Don Ryan, File)

Geothermal energy developers plan to pump 24 million gallons of water into the side of a dormant volcano in Central Oregon this summer to demonstrate new technology they hope will give a boost to a green energy sector that has yet to live up to its promise.

They hope the water comes back to the surface fast enough and hot enough to create cheap, clean electricity that isn't dependent on sunny skies or stiff breezes—without shaking the earth and rattling the nerves of nearby residents.

Renewable energy has been held back by cheap natural gas, weak demand for power and waning political concern over global warming. Efforts to use the earth's heat to generate power, known as geothermal energy, have been further hampered by technical problems and worries that tapping it can cause earthquakes.

Even so, the federal government, Google and other investors are interested enough to bet $43 million on the Oregon project. They are helping AltaRock Energy, Inc. of Seattle and Davenport Newberry Holdings LLC of Stamford, Conn., demonstrate whether the next level in geothermal power development can work on the flanks of Newberry Volcano, located about 20 miles south of Bend, Ore.

"We know the heat is there," said Susan Petty, president of AltaRock. "The big issue is can we circulate enough water through the system to make it economic."

The heat in the earth's crust has been used to generate power for more than a century. Engineers gather hot water or steam that bubbles near the surface and use it to spin a turbine that creates electricity. Most of those areas have been exploited. The new frontier is places with hot rocks, but no cracks in the rocks or water to deliver the steam.

To tap that heat—and grow geothermal energy from a tiny niche into an important source of green energy—engineers are working on a new technology called Enhanced Geothermal Systems.

"To build geothermal in a big way beyond where it is now requires new technology, and that is where EGS comes in," said Steve Hickman, a research geophysicist with the U.S. Geological Survey in Menlo Park, Calif.

Wells are drilled deep into the rock and water is pumped in under pressure, creating tiny fractures in the rock, a process known as hydroshearing. Cold water can then be circulated through the newly fractured reservoir, where the hot rock heats it before it is drawn back to the surface as hot water or steam.

Hydroshearing is similar to hydraulic fracturing, the process used to free natural gas from shale formations. But fracking uses chemical-laden fluids and creates much larger fractures. Pumping fracking wastewater deep underground for disposal likely led to recent earthquakes in Arkansas and Ohio.

Fears persist that cracking rock deep underground through hydroshearing can also lead to damaging quakes. EGS has other problems. It is hard to create a reservoir big enough to run a commercial power plant.

Progress has been slow. Two small plants are online in France and Germany. A third in downtown Basel, Switzerland, was shut down over earthquake complaints. A project in Australia has had drilling problems.

A new international protocol is coming out at the end of this month that urges EGS developers to keep projects out of urban areas, the so-called "sanity test," said Ernie Majer, a seismologist with the Lawrence Berkeley National Laboratory. It also urges developers to be upfront with local residents so they know exactly what is going on.

AltaRock hopes to demonstrate a new technology for creating bigger reservoirs that is based on the plastic polymers used to make biodegradable cups.

It worked in existing geothermal fields. Newberry will show whether it works in a brand-new EGS field, and in a different kind of geology, volcanic rock, said Colin Williams, a USGS geophysicist also in Menlo Park.

The U.S. Department of Energy has given the project $21.5 million in stimulus funds. That has been matched by private investors, among them Google with $6.3 million.

Majer said the danger of a major quake at Newberry is very low. The area is a kind of seismic dead zone, with no significant faults. It is far enough from population centers to make property damage unlikely. And the layers of volcanic ash built up over millennia dampen any shaking.

But the Department of Energy will be keeping a close eye on the project, and any significant quakes would shut it down at least temporarily, he said. The agency is also monitoring EGS projects at existing geothermal fields in California, Nevada and Idaho.

"That's the $64,000 question," Majer said. "What's the biggest earthquake we can have from induced seismicity that the public can worry about."

Geologists believe Newberry Volcano was once one of the tallest peaks in the Cascades, reaching an elevation of 10,000 feet and a diameter of 20 miles. It blew its top before the last Ice Age, leaving a caldera studded with towering lava flows, two lakes, and 400 cinder cones, some 400 feet tall.

Although the volcano has not erupted in 1,300 years, hot rocks close to the surface drew exploratory wells in the 1980s.

Over 21 days, AltaRock will pour 800 gallons of water per minute into the 10,600-foot test well, already drilled, for a total of 24 million gallons. According to plan, the cold water cracks the rock. The tiny plastic particles pumped down the well seal off the cracks. Then more cold water goes in, bypassing the first tier, and cracking the rock deeper in the well. That tier is sealed off, and cold water cracks a third section. Later, the plastic melts away.

Seismic sensors produce detailed maps of the fracturing, expected to produce a reservoir of cracks starting about 6,000 feet below the surface, and extending to 11,000 feet. It would be about 3,300 feet in diameter.
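
A quick sanity check on those figures, with the fracture zone idealized as a simple cylinder (a crude simplification, not the project's actual reservoir model):

# Check the injection total and size up the idealized reservoir.
import math

FLOW_GPM = 800   # gallons per minute (article)
DAYS = 21        # injection period (article)
total_gallons = FLOW_GPM * 60 * 24 * DAYS
print(f"Total injected: {total_gallons:,} gallons")  # ~24.2 million, as reported

# Idealize the fracture zone as a cylinder 3,300 ft across,
# spanning 6,000 to 11,000 ft below the surface.
radius_ft = 3300 / 2
height_ft = 11000 - 6000
volume_ft3 = math.pi * radius_ft**2 * height_ft
print(f"Idealized reservoir volume: ~{volume_ft3:.1e} cubic feet")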

The U.S. Bureau of Land Management released an environmental assessment of the Newberry project last month that does not foresee any problems that would stop it. The agency is taking public comments before making a final decision in coming months.

No power plant is proposed, but one could be operating in about 10 years, said Doug Perry, president and CEO of Davenport Newberry.

EGS is attractive because it vastly expands the potential for geothermal power, which, unlike wind and solar, produces power around the clock in any weather.

Natural geothermal resources account for about 0.3% of U.S. electricity production, but a 2007 Massachusetts Institute of Technology report projected that EGS could bump that to 10% within 50 years, at prices competitive with fossil fuels.

Few people expect that kind of timetable now. Electricity prices have fallen sharply because of low natural gas prices and weak demand brought about by the Great Recession and state efficiency programs.

But the resource is vast. A 2008 USGS assessment found EGS throughout the West, where hot rocks are closer to the surface than in the East, has the potential to produce half the country's electricity.

"The important question we need to answer now," said Williams, the USGS geophysicist who compiled the assessment, "is how geothermal fits into the renewable energy picture, and how EGS fits. How much it is going to cost, and how much is available."

Source: The Associated Press

Magnetic Memory Miniaturized to Just 12 Atoms

Technology Review
Jan 16, 2012


This scanning tunneling microscope image shows a group of 12 iron atoms, the smallest magnetic memory bit ever made. Credit: IBM

The smallest magnetic-memory bit ever made—an aggregation of just 12 iron atoms created by researchers at IBM—shows the ultimate limits of future data-storage systems.

The magnetic memory elements don't work in the same way that today's hard drives do, and, in theory, they can be made much smaller without becoming unstable. Data-storage arrays made from these atomic bits would be about 100 times denser than anything that can be built today. But the 12 atoms making up each bit must be painstakingly assembled using an expensive and complex microscope, and the bits can hold data for only a few hours, and only at temperatures approaching absolute zero, so the minuscule memory elements won't be found in consumer devices anytime soon.

As the semiconductor industry bumps up against the limits of scaling by making memory and computation devices ever smaller, the IBM Almaden research group, led by Andreas Heinrich, is working from the other end, building computing elements atom-by-atom in the lab.

The necessary technology for large-scale manufacturing at the single-atom scale doesn't exist yet. Today, says Heinrich, the question is, "What is it you would want to build on the scale of atoms for data storage and computation, in the distant future?"

As engineers miniaturize conventional devices, they're finding that quantum physics, which previously never had to be accounted for, makes the devices less stable. As conventional magnetic memory bits are miniaturized, for example, each bit's magnetic field begins to affect its neighbors, weakening every bit's ability to hold on to a 1 or a 0.

The IBM researchers found that it was possible to sidestep this problem by using groups of atoms that display a different kind of magnetism. The key, says Heinrich, is the magnetic spin of each individual atom.
To read more click here...