‘Talking’ nanoparticles could spark new wave of smart devices

(Credit: iStock)

Scientists have developed ‘talking’ nanoparticles inspired by biology that could revolutionise computing and pave the way for intelligent nanodevices.

Although silicon chips are getting smaller and faster every year, in terms of processing power they still pale in comparison to nature. Living things can communicate masses of information using molecules such as pheromones and neurotransmitters, and scientists have been trying to build a similar system into the new wave of nanomaterials.

Previous efforts have managed one-way communication, but a new study published in the journal Nature Communications describes nanoparticles that can share information in both directions.

Ramon Martinez-Manez and colleagues at the University of Valencia in Spain designed a pair of nanoparticles that can trade messages in a chemical ‘language’. One particle has an enzyme that recognises and transforms a molecule into a signal that can be read by an enzyme on the second particle. “The ‘message’ that is transmitted depends on each specific situation,” Martinez-Manez told Professional Engineering.

“In our case, the first chemical messenger ‘asks’ the second nanoparticle if a loaded cargo can be delivered, and the second nanoparticle sends a chemical messenger that ‘says’ yes, the cargo can be delivered.”

Particles like these could potentially be used for building tiny, intelligent ‘nanodevices’ in future, or for information processing in biological ‘computers’. “If we learn how nanoparticles communicate and behave co-operatively this can help to mimic a number of complex biological behaviours,” said Martinez-Manez.

“This is still very preliminary research,” he continued. “Nevertheless, we believe that the concept of establishing communication between nanodevices has enormous potential for the design of complex nanoscale systems.”

A 3D-printed rocket engine just launched a new era of space exploration

The rocket that blasted into space from New Zealand on May 25 was special. Not only was it the first to launch from a private site, it was also the first to be powered by an engine made almost entirely using 3D printing.

(Credit: Rocket Lab)

This might not make it the “first 3D-printed rocket in space” that some headlines described, but it does highlight how seriously the space industry is taking this manufacturing technique.

The team behind the Electron rocket at US company Rocket Lab say the engine was printed in 24 hours and provides efficiency and performance benefits over other systems. Few exact details of the 3D-printed components have been released, but it is likely that many were designed to minimise weight while maintaining structural performance, and that others were optimised for efficient fluid flow. These advantages – lower weight and the potential for complex new designs – are a large part of why 3D printing is set to find some of its most significant applications in space exploration.

One thing the set of technologies known as additive manufacturing or 3D printing does really well is produce highly complicated shapes – for example, lattice structures engineered so that they weigh less but are just as strong as comparable solid components. This creates the opportunity to produce optimised, lightweight parts that were previously impossible to manufacture economically or efficiently with more traditional techniques.

Boeing’s microlattice is an example of taking this to the extreme, supposedly producing mechanically sound structures that are 99.9% air. Not all 3D printing processes can achieve this, but even weight savings of a few percent in aircraft and spacecraft can lead to major benefits through the use of less fuel.

3D printing tends to work best for the production of relatively small, intricate parts rather than large, simple structures, where the higher material and processing costs would outweigh any advantage. For example, a redesigned nozzle can enhance fuel mixing within an engine, leading to better efficiency. Increasing the surface area of a heat shield by using a patterned rather than a flat surface can mean heat is transferred away more efficiently, reducing the chances of overheating.

The techniques can also reduce the amount of material wasted in manufacturing, important because space components tend to be made from highly expensive and often rare materials. 3D printing can also produce whole systems in one go rather than from lots of assembled parts. For example, NASA used it to reduce the components in one of its rocket injectors from 115 to just two. Plus, 3D printers can easily make small numbers of a part – as the space industry often needs – without first creating expensive manufacturing tools.

3D printers in space

3D printers are also likely to find a use in space itself, where it’s difficult to keep large numbers of spare parts and hard to send out for replacements when you’re thousands of kilometres from Earth. There’s now a 3D printer on the International Space Station so, if something breaks, engineers can send up a design for a replacement and the astronauts can print it out.

The current printer only deals with plastic so it’s more likely to be used for making tools or one-off replacements for low-performance parts such as door handles. But once 3D printers can more easily use other materials, we’re likely to see an increase in their uses. One day, people in space could produce their own food items and even biological materials. Recycling facilities could also enable broken parts to be reused to make the replacements.

Astro printing (Credit: Barry Wilmore/NASA)

Looking even further ahead, 3D printers could prove useful in building colonies. Places like the moon don’t have much in the way of traditional building materials, but the European Space Agency has shown that solar energy can power the production of “bricks” of lunar dust, which would be a good start. Researchers are now looking at how to use 3D printing to take this idea further and develop complete printed buildings on the moon.

To make many of these applications a reality, we’ll need to research more advanced materials and processes that can manufacture components to withstand the extremely harsh conditions of space. Engineers also need to work on developing optimised designs and find ways of testing 3D printed parts to prove they’re safe. And then there’s the irritating issue of gravity, or rather the lack of it. Many current processes use powders or liquids as their raw materials so we’re likely to need some clever tricks in order to make these function safely in a low or microgravity environment.

Some of these barriers may even require entirely new materials and techniques. But as research goes on, 3D printing is likely to be used more and more in space, even if a fully printed space vehicle isn’t going to launch any time soon. The sky is no longer the limit.

Candice Majewski, Lecturer, Department of Mechanical Engineering, University of Sheffield

This article was originally published on The Conversation. Read the original article.

Controlling light with electric fields could create invisibility cloaks

(Credit: iStock)

A new technique for controlling light with electric fields could make you disappear, or plunge you into a virtual world with no need for goggles.

Researchers from North Carolina State University used electric fields to tune the refractive index of a semiconductor, allowing them to change the behaviour of light passing through it. “Our method is similar to the technique used to provide the computing capabilities of computers,” says Linyou Cao, an assistant professor of materials science and a corresponding author of the work, which was published in the journal Nano Letters.

In computers, an electric field can turn a current either on or off – corresponding to the 1s and 0s that make up binary code. According to the researchers, this new discovery will allow them to do something similar with light. “A light may be controlled to be strong or weak, spread or focused, pointing one direction or others by an electric field,” says Cao.

“We think that just as computers have changed our way of thinking, this new technique will likely change our way of watching.”

The ability to shape light into arbitrary patterns opens up a world of potential applications, according to Cao. These could include virtual reality that works without goggles, movies that float in front of your eyes, or even an invisibility cloak that can bend light away from you to make you disappear.

Previous attempts to change the refractive index of materials with electric fields managed only tiny changes – between 0.1% and 1% at most. But the group from North Carolina was able to change the refractive index (how much light bends as it passes through an object) by 60%, using thin films of molybdenum sulphide, tungsten sulphide and tungsten selenide.
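To put that 60% figure in context, how sharply light bends at a surface is governed by Snell’s law:

$$n_1 \sin\theta_1 = n_2 \sin\theta_2$$

As a rough worked sketch with assumed numbers (illustrative values, not measurements from the paper): light arriving from air ($n_1 = 1$) at $\theta_1 = 30°$ into a film of index $n_2 = 4$ is bent to $\theta_2 = \arcsin(0.5/4) \approx 7.2°$; raising the index by 60% to $n_2 = 6.4$ tightens that to $\arcsin(0.5/6.4) \approx 4.5°$, whereas a 1% change would move the ray by less than a tenth of a degree.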

“This is only a first step,” said Cao. “We think we can optimise the technique to achieve even larger changes in the refractive index. And we also plan to explore whether this could work at other wavelengths in the visual spectrum.”

Professor Dan Hewak at the University of Southampton’s Optoelectronics Research Centre, who was not connected with the research, told Professional Engineering it represented a “significant step forward” in the field.

“The material offers an opportunity to link optical devices and electronic devices with one fundamental material family,” he said. “Right now we have silicon chips in our computers, but these silicon chips can’t really generate light, so they’re used only for electronics and not for displays and optical applications. These 2D materials are very good electronic materials, but they can also emit light. They provide a link between those two domains.”

This simplifies the electronics, said Hewak, and could lead to lighter, smaller and more efficient devices in future.

Could boardgame-playing AlphaGo program point to bright future for AI?

A human player moves a piece during a game of Go (Credit: iStock)

An ancient Chinese board game might seem a strange, even lowly test for one of the world’s foremost artificial intelligence (AI) systems.

But Go is no ordinary game – more complex than chess, it reportedly has more possible combinations of moves than there are atoms in the visible universe.

Despite the game’s complexity, a Go-playing AI has just reached a huge milestone. AlphaGo, created by London-based Google subsidiary DeepMind, has won a three-game match against the world’s number one Go player, 19-year-old Ke Jie. The Chinese player hailed the AI as “like a God” compared to its previous iteration a year before, and its designers say it could give a glimpse of how powerful and useful AI might become in future. What is the scope for its application, and could it transform the future of engineering?

***

“The way in which AlphaGo has unearthed new creative moves and strategies gives us a glimpse of the possibility of AI-aided scientific discovery,” a DeepMind spokesman told Professional Engineering. “The techniques underpinning AlphaGo and much of our other work are general purpose and could potentially be applied to a wide range of other domains.

“We believe that in the next few years scientists and researchers using similar approaches will generate insights in a multitude of areas, from superconductor material design to drug discovery.”

Other uses for the rapidly developing technology could include monitoring workers as manufacturers embrace more automated robots, Oxford University computer science expert Stephen Cameron told PE. “In a factory where you have robots near people, or some other sort of mechanical device near people, an AI which is able to spot when a person is not paying attention and could be hurt, could say ‘I don’t like this, I’m going to stop the motors,’ or something like that.”

Some companies have already embraced the monitoring ability that AI offers. Google has used DeepMind AI to cut cooling electricity bills by 40% at its data centres.

Novel approaches in engineering

The technology could also lead to more “transparent and dynamic” manufacturing processes and business, University College London expert Peter J. Bentley told PE. “The best placed industries are those with good automation and monitoring already in place so that the AI technologies can use the data in order to optimise and improve the processes.

“Ultimately AI is about distilling large amounts of data into useful knowledge, so those industries with the best quality data will be the ones to benefit in the future.”

AI’s ability to simultaneously process vast amounts of data from disparate sources will allow it to take novel approaches in engineering, Oxford University AI expert Stuart Armstrong told PE.

Engineering today tends to be broken down into separate concepts that humans understand, he said. Giving the example of building a bridge, he said engineers start with the concept before considering basic structural issues, then tension, compression, resonance and other factors.

However, an AI could consider all data simultaneously and produce a solution unlike anything created by humans, Armstrong says. “Maybe an AI of comparable abilities, but with the capacity of integrating all sorts of different knowledge, could construct something that humans don’t really understand, because it’s not in separate human-sized chunks.”

Difficult to make predictions

Despite agreeing on some definite uses for AI, such as the burgeoning autonomous vehicle sector – which could optimise manufacturing supply chains and introduce other technologies to the industry – many of the experts who spoke to PE said one thing: it is almost impossible to even comprehend most of AI’s potential uses.

“It is really difficult to make predictions for a whole bunch of reasons,” Lancaster University’s Ben Wohl told PE. “We have advancements in a whole bunch of areas so the combinations are sometimes hard to forecast.

“But then the other thing is the nature of Moore’s Law [which describes the exponential growth of computing power] and technological advancements. It has really sped up to an incredible pace at the moment, so it is hard to use past experience.”

There will be applications of AI that we haven’t even considered, he said. “I think we have to see AI within a broader kind of advancing digital economy which includes intelligent algorithms alongside digital manufacturing and localised manufacturing.”

Algorithm and satellite network could give early warning of tsunamis

Devastation left after the 2011 Japan tsunami (Credit: iStock)

Scientists are using a new algorithm to analyse data from more than 100 satellites around the world, hoping to detect devastating tsunamis and potentially save thousands of lives.

The technique, known as VARION (Variometric Approach for Real-time Ionosphere Observation), detects the disturbances that tsunamis create in the ionosphere as they cross the ocean. The most destructive, ocean-wide tsunamis happen every 15 years on average.

The VARION method could help warn people living further away from the epicentre, earthquake engineering expert Tiziana Rossetto of University College London told Professional Engineering. “It’s not going to help coastlines which are very close to tsunami sources, but it will help coastlines further away,” she said. “They don’t feel the ground shaking, so they don’t have the natural warning you would have if you’re closer.”

There is a lot of uncertainty around whether the technique can be used practically in a warning system, said Rossetto, but she added that “it would be a fantastic addition to more traditional methods” if it works. “The more information that can be developed to get a real-time idea of how big the tsunami is and therefore what extent of land could be affected… then great, fantastic. If it’s possible to get that information in a timely way, really quickly, then that might be fantastic.”

Bombardier signs $700m deal with IBM to cut costs and embrace the cloud

(Credit: iStock)

Canadian engineering giant Bombardier has hired IBM to improve its information technology operations in a $700m deal, which will also look to cut costs for the company’s aerospace business.

The six-year agreement spans Bombardier’s work in 47 countries, and will include cloud management of the rail and plane manufacturer’s worldwide IT infrastructure.

“As part of our turnaround plan, Bombardier is working to improve productivity, reduce costs and grow earnings,” said Sean Terriah, a chief information officer at the company. “With IBM, we will transform our service delivery model to focus on our core competencies, and leverage the best practices of our strategic partner across our infrastructure and operations.”

The move is part of a series of measures introduced by Bombardier CEO Alain Bellemare, who is trying to turn around the company after its C Series jetliner entered service $2bn over budget and two years late. Last year he announced plans to cut more than 14,000 jobs.

Virtual drugs, cars and changing rooms: what could VR and AR bring in 5 years?

Virtual reality has many possible applications (Credit: iStock)

If you have tried a virtual or augmented reality program, the chances are it has been a videogame. From the burgeoning console VR market to the breakaway hit that was Pokémon Go, many people think of the developing technology as a novelty which could soon become a major entertainment source – but entertainment nonetheless.

However, at VR World in London this week, everyone from space scientists and football clubs to drug designers and motion capture experts was out to show how much more the technology can do. Our reporter Joseph Flaig visited to find out what we can expect to start seeing – as well as hearing and maybe even feeling – in five years’ time.

  • Medicines designed in VR. Scientists could use VR programs to design and test drug molecules in 3D, flipping and testing them alongside models of molecules inside human cells, said Jonas Bostrom, a drug designer from pharmaceutical company AstraZeneca. Despite admitting the technology is “not good enough yet,” he told Professional Engineering that it could start being used in three or four years. Programs will offer unique new perspectives for scientists and bring them closer to their finished work, he said, adding that they will mainly be updating existing drugs for cancer and diabetes.
  • Virtual changing rooms at home. In five years’ time, awkwardly waddling to the changing rooms with an armful of clothes only to find that none of them fit could be a thing of the past. Robin Coles from HSO said retailers could embrace the technology, allowing customers to “try on” clothes in their own homes using VR headsets or seeing the clothes on a 3D-scanned personal avatar. The clothes could either then be delivered or held in a store, he said.
  • Companies feeding customers information directly to AR headsets. Simon Kendrew from new energy company Engie said AR programs on future headsets or glasses with in-built displays could help engage customers with their energy use, offering up-to-date information on their usage and how it could be adapted in line with conditions such as weather and power demand. It could become “part of their experience within their home,” he said, “giving them the right information at the right time in an easy-to-digest and seamless fashion. I think that will open up a lot of possibilities, probably not in the next couple of years, but certainly as we start to look, three, four or five years and beyond.”
  • Feeling as well as seeing. More haptic feedback – recreating the sense of touch through vibrations or other movement – is needed in the mainly sight-and-sound based programs, said technologist Alby Miller from Transport Systems Catapult. Speaking at the Five Years From Now: The Enterprise of the Future panel talk, he said many aspects need to be combined for a program to become truly immersive.

Sansui Horizon 2 with 5-inch HD display, Android 7.0, 4G VoLTE launched for Rs. 4999

Sansui Horizon 2

After the Horizon 1, Sansui has launched another budget 4G smartphone with VoLTE support, dubbed the Horizon 2. It has a 5-inch HD display, is powered by a quad-core MediaTek processor, runs Android 7.0 (Nougat), and has an 8-megapixel rear camera with dual-tone LED flash plus a 5-megapixel front-facing camera that also has an LED flash.

Sansui Horizon 2 specifications

  • 5-inch (1280 x 720 pixels) HD IPS display
  • 1.25 GHz Quad-core MediaTek MT6737VW processor with Mali-T720 GPU
  • 2GB RAM, 16GB internal storage, expandable memory up to 64GB with microSD
  • Android 7.0 (Nougat)
  • Dual SIM
  • 8MP rear camera with dual-tone LED Flash
  • 5MP front-facing camera with LED flash
  • 3.5mm audio jack, FM Radio
  • Dimensions: 74.7 × 146.8 × 9.15mm
  • 4G VoLTE, WiFi 802.11 b/g/n, Bluetooth 4.0, GPS
  • 2450mAh battery

The Sansui Horizon 2 comes in Silver Grey and Rose Gold colors, is priced at Rs. 4,999, and is available exclusively from Flipkart.

Here’s a recap of what’s coming in Android O, as the OS reaches 2 billion active devices

During its I/O keynote today, Google reiterated the main new features coming in the Android O release later this year, but also unveiled a couple of new things. Do note that most of what follows was initially announced back in March when the first Developer Preview of Android O became available. Speaking of pre-release software, the Android O Beta Program is now live if you’re interested in giving the new version a spin (and happen to own one of six supported Nexus and Pixel devices).

Android O comes with the recently detailed Project Treble, which means some of its base has been modularized so that updates will hopefully come faster for devices that aren’t supported directly by Google.

Battery life is a focus in this release, and improvements in this regard will come through new automatic limits imposed on what apps can do in the background – if those apps target Android O.
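For developers, the practical upshot is a continued push away from long-running background services and towards scheduled, system-managed work. A minimal Kotlin sketch of that pattern using the standard JobScheduler API (the SyncJobService class and job ID here are made up for illustration):

```kotlin
import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobScheduler
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context

// Stub JobService; returning false from onStartJob means the work finished inline.
class SyncJobService : JobService() {
    override fun onStartJob(params: JobParameters?): Boolean = false
    override fun onStopJob(params: JobParameters?): Boolean = false
}

// Hand deferrable work to the system instead of holding a background service open.
fun scheduleSync(context: Context) {
    val job = JobInfo.Builder(42, ComponentName(context, SyncJobService::class.java))
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // wait for Wi-Fi
        .setPersisted(true) // survive reboots (needs RECEIVE_BOOT_COMPLETED)
        .build()
    val scheduler = context.getSystemService(Context.JOB_SCHEDULER_SERVICE) as JobScheduler
    scheduler.schedule(job)
}
```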

The copy and paste workflow has been improved when dealing with names of places, addresses, and phone numbers, thanks to the addition of Smart Text Selection. Previously you had to manually select an address if you wanted to look it up, but in O, double-tapping any word in the address results in the entire thing being automagically selected without any other input needed from you. From there it’s easy to jump to Google Maps to see directions to that place. The same process works for names of businesses too. And when you double-tap any part of a phone number, it instantly becomes callable.

For security, a new feature called Google Play Protect will be surfaced in the Play Store. This is basically the same Google engine that has been automatically scanning your apps for malicious behavior, but now it will be much more visible to end users. You’ll see information about when the last scan was performed, and will get an option to start a manual scan at any time. The Find My Device feature will also live under the Play Protect umbrella starting in Android O.

With the new Autofill API, interesting use cases will be made possible. Say you have your Twitter credentials stored in Chrome. Then you install the Twitter app, and when you want to sign in, the username and password will be autofilled by the OS based on the existing data. Of course, this will also be used by third parties, most notably password managers, to make signing into stuff a breeze.
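On the app side, developers opt their views in by tagging fields with autofill hints so the framework knows what data each one holds. A minimal Kotlin sketch, assuming a hypothetical login layout (R.layout.activity_login and the view IDs are invented for illustration):

```kotlin
import android.app.Activity
import android.os.Build
import android.os.Bundle
import android.view.View
import android.widget.EditText

class LoginActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_login) // hypothetical layout

        // The autofill framework is only available from Android O (API 26) onwards.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            // Tag each field so autofill services know what they contain.
            findViewById<EditText>(R.id.username)
                .setAutofillHints(View.AUTOFILL_HINT_USERNAME)
            findViewById<EditText>(R.id.password)
                .setAutofillHints(View.AUTOFILL_HINT_PASSWORD)
        }
    }
}
```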

Notification Channels mean developers will define categories for notification content, and you’ll choose to “subscribe” only to the channels you’re interested in. Additionally, notifications will be snoozable in Android O. Picture-in-Picture Mode takes what happens in the YouTube app when you start playing a video and then hit Back, and brings it to the OS level: the video stays with you while you do other things on your device.
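The exact API surface may shift before the final release, but based on the developer previews, registering a channel looks roughly like this in Kotlin (the channel ID and labels are invented):

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

// Register a channel once; the user can then silence or re-prioritise it in settings.
fun registerDigestChannel(context: Context) {
    val channel = NotificationChannel(
        "daily_digest",                     // stable id, reused when posting notifications
        "Daily digest",                     // name shown to the user
        NotificationManager.IMPORTANCE_LOW  // starting importance; the user can override it
    )
    channel.description = "One summary notification per day"
    val manager =
        context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
    manager.createNotificationChannel(channel)
}
```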

If you like those app badges in iOS that let you know you have, say, unread emails – well then you’re going to love Android O. Yes, such Notification Dots will be automatically applied to your apps’ icons, but there’s an unexplained twist: there will be no number in the badge. The badge itself (a dot, basically) will show up, and that’s it. The hue of the badge will be automatically generated from the colors used in each app’s icon. To get a preview of what the dot is about, you long-press the app’s icon.
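The dots hook into the same channels system: whether a channel’s notifications produce a badge is a per-channel setting. A one-function Kotlin sketch, continuing the hypothetical channel above:

```kotlin
import android.app.NotificationChannel

// The system derives the dot's colour from the app icon; the app only controls visibility.
// Call this before createNotificationChannel() - most settings lock in once registered.
fun enableLauncherDot(channel: NotificationChannel) {
    channel.setShowBadge(true)
}
```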

Under-the-hood enhancements are everywhere in Android O, starting with a claimed twice-as-fast boot time and improved app performance – all with no changes required from developers.

Android has reached 2 billion monthly active devices, less than three years after hitting the 1 billion milestone. And with Android O, Google is making its mobile operating system more secure, modular, battery-friendly, and feature-filled than it’s ever been.

First look HTC U11 hands-on

Introduction

Just when we thought HTC was through with this generation’s flagship announcements in the form of the U Ultra, the Taiwanese manufacturer decided it would squeeze one more release in. But before we get to explain the terrible pun we just made, let us introduce the HTC U11 – the company’s true 2017 flagship offering.

HTC U11 hands-on review

Yes, it seems HTC is far from done with its new “liquid design” concept. In fact, it is now clear that the U Play and U Ultra were merely international ambassadors for the company’s new style. According to HTC, we have at least three or four evolutions of it to look forward to, all presumably under the “U” moniker.

HTC U11 at a glance

  • Body: Glass back with a “liquid surface”, IP67 waterproofing.
  • Screen: 5.5″ Super LCD5 with 1,440 x 2,560px resolution and Gorilla Glass 5.
  • Chipset: 2.45GHz Snapdragon 835, running Android 7.1 Nougat with HTC Sense, Google Assistant and Amazon Alexa (where supported).
  • Camera: 12MP UltraPixel 3, UltraSpeed Autofocus, f/1.7, OIS, dual-LED flash, RAW capture.
  • Video: 4K video recording with 3D Audio, Hi-Res audio, Acoustic Focus; 1080p @ 120fps slow-mo.
  • Selfie: 16MP with Live Make-up; 1080p video recording.
  • Memory: 4GB RAM (64GB version) and 6GB RAM (128GB version), microSD slot.
  • Connectivity: 1Gbps LTE, VoLTE, Wi-Fi calling, optional dual-SIM model, USB-C port with USB 3.1 and DisplayPort support, no 3.5mm audio jack, Bluetooth 4.2, Wi-Fi 802.11ac, NFC, GPS, GLONASS, Beidou support.
  • Battery: 3,000mAh, QuickCharge 3.0.
  • Audio: Active Noise Cancellation with supplied headphones, BoomSound stereo speakers, 3D Audio recording with 4 mics, High-res audio support.
  • Sensors: Edge squeeze sensor, fingerprint sensor.

But, without getting ahead of ourselves, let us first meet the new U11 flagship offering – the one intended to pick up where the HTC 10 left off, albeit with a few important twists on style and functionality.