Industry defends diesel cars’ record on air quality

Britain’s automotive industry has defended diesel cars, as the government prepares to announce proposals for improving air quality.


The government is expected to follow in London’s footsteps and make it more expensive to use the most-polluting vehicles.

However, the Society of Motor Manufacturers and Traders (SMMT) said some recent reports have been dismissive of the sector’s progress, failing to differentiate between newer, cleaner diesel cars and older vehicles.

“Euro 6 diesel cars on sale today are the cleanest in history,” said SMMT chief executive Mike Hawes. “Not only have they drastically reduced or banished particulates, sulphur and carbon monoxide but they also emit vastly lower NOx than their older counterparts – a fact recognised by London in their exemption from the Ultra Low Emission Zone that will come into force in 2019.”

Hawes said diesel cars are also a key part of action to tackle climate change, while also enabling millions of people to travel as affordably as possible.

The SMMT said a record 1.3 million new diesel cars were registered in the UK in 2016, an increase of 0.6% on 2015, a trend it expects to see continue this year.

London’s Mayor Sadiq Khan has promised to crack down on polluting vehicles to make the capital the greenest in the world. He plans to ban new diesel taxis from 2018, while drivers of diesel cars that are more than four years old in 2019 and petrol cars that are more than 13 years old will pay £12.50 a day on top of the existing congestion charge.

Under the proposals, pre-Euro VI trucks will be fined £100 a day for entering the capital, equating to £2 billion in fines per year.

“Of course we all want a cleaner London,” said the Road Haulage Association’s chief executive Richard Burnett. “But don’t let the mayor’s quest for clean air turn the nation’s capital into a ghost town.

“The thousands of restaurants, shops and tourist attractions that make London one of the world’s major tourist centres are massively reliant on an efficient delivery network. That must not be jeopardised.”

The government is due to announce its plans to comply with European Union legislation to improve air quality and meet nitrogen dioxide limits by 24 April, following a ruling by the High Court late last year.

A study in 2015 by King’s College London found that almost 9,500 Londoners die prematurely every year as a result of long-term exposure to air pollution.

Authorities in cities including Paris, Stuttgart, Athens, Brussels and Madrid are also trying to reduce pollution by proposing bans, fines and restrictions on diesel vehicles.


Robotic exoskeleton to improve keyhole surgery in €4m project

A wearable robotic exoskeleton might improve a surgeon’s natural hand movements during keyhole surgery.

The technology is still under development and part of the €4 million SMARTsurg project, funded by the European Union under the Horizon 2020 scheme.

Ten partners from four countries are working together on the project, led by the Bristol Robotics Laboratory and the University of the West of England Bristol.

The robotic exoskeleton could increase a surgeon’s dexterity and give them the ability to ‘sense’, ‘see’, control and safely navigate the human body during operations, says the team developing it.

Keyhole surgery, also known as minimally invasive surgery (MIS), is an attractive option for doctors, as patients lose less blood during operations and recover faster. It also lowers costs for hospitals, because patients need less after-care and are less likely to catch infections.

Sanja Dogramadzi, the project’s research lead and a robotics specialist, told PE that robot-assisted MIS is growing in some surgical applications such as urology, which focuses on the kidneys, ureters, bladder, prostate and male reproductive organs.

However, robotic systems are costly, so their use in other areas of surgery is generally limited. “Success of the surgery depends on the available surgical tools and it is even more the case with MIS,” Dogramadzi said. “Enabling minimally invasive access in other surgical fields would be extremely beneficial for patients.”

A sense of touch

The main drawback for surgeons carrying out MIS is that when they insert instruments through small incisions, they typically lose dexterity and the sense of touch.

The project aims to overcome these drawbacks with three key biomedical tools that mimic complex human dexterity and senses. Worn by the surgeon, they transmit the surgeon’s own movements to instruments inserted through the very small incisions made in MIS.

It is hoped that this will also lessen the mental and physical stress surgeons experience when carrying out such precise movements, as well as reduce the highly demanding training needed.

The first will be an exoskeleton that fits over the surgeon’s hands and controls the instruments inside the body – a surgical ‘gripper’ able to mimic the thumb and two fingers of the hand.

The instrument, which goes inside the body, will have haptic abilities, delivering feedback (typically in the form of small vibrations). This will allow the surgeon to ‘feel’ the tissues and organs inside the body, just like they do during conventional surgery. A prototype has been developed by researchers at Bristol Robotics Laboratory.

Exoskeleton prototype. Credit: UWE Bristol

The wearable exoskeleton on the surgeon’s hand will enable movement that is more intuitive as well as giving the surgeon the sense of touch.

The sense of touch will improve upon current haptic systems, which mainly focus on delivering feedback to the arm or forearm of the user. Instead, the system will also focus haptic feedback on the fingers of the surgeon.

Dogramadzi says that the SMARTsurg exoskeleton and robotic surgical tools will offer much better dexterity because of this ability to sense “fine finger motion”, which is translated into fine motion of the surgical instruments.
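
To make the idea concrete, here is a minimal sketch of that kind of motion scaling. The scaling factor, safety clamp and interfaces are illustrative assumptions, not details published by the SMARTsurg team:

```python
import numpy as np

# Hypothetical motion-scaling step: sensed fingertip motion is mapped
# to much smaller instrument-tip displacements, so coarse hand motion
# becomes fine tool motion inside the body.
SCALE = 0.1          # assumed: 10 mm of finger travel -> 1 mm of tool travel
MAX_STEP_MM = 0.5    # assumed safety clamp on any single commanded step

def instrument_step(finger_pos_mm, prev_finger_pos_mm):
    """Convert a change in sensed fingertip position (mm) into a
    scaled, clamped displacement command for the gripper tip."""
    delta = np.asarray(finger_pos_mm) - np.asarray(prev_finger_pos_mm)
    step = delta * SCALE
    norm = np.linalg.norm(step)
    if norm > MAX_STEP_MM:
        step *= MAX_STEP_MM / norm   # limit the step to the safety envelope
    return step

# Example: the fingertip moves 8 mm; the tool tip is commanded 0.5 mm
print(instrument_step([8.0, 0.0, 0.0], [0.0, 0.0, 0.0]))
```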

Researchers will also develop smart glasses to be worn during surgery that would give the doctor a real-time, 3D view of what is happening inside the body during the operation.

This could mean that the surgeon operates remotely from the other side of the room using the exoskeleton gloves, while still seeing exactly what is going on throughout the operation.

Dogramadzi told PE that the glasses could also provide extra information, such as biopsy results, during surgery.

Tests will first be carried out on medical models, called laboratory phantoms, along with non-invasive tests on animals, before moving on to more realistic trials. Dogramadzi said it could be up to six years before the technology is used for real medical procedures in the operating theatre.

The project’s collaborators include the Centre for Research and Technology Hellas (Greece), Politecnico di Milano (Italy), Bristol Urological Institute (UK), North Bristol National Health Service Trust, the University of Bristol, the European Institute of Oncology (Italy), TheMIS Orthopaedic Center (Greece), CYBERNETIX (France), Optinvent SA (France) and Hypertech Innovations (UK).

India’s robotics revolution

The Sorter robots busy sorting packages for delivery


From scavenging for parts in the markets of Mumbai to multimillion-dollar investment rounds, India’s robotics companies have come a long way.

Many people dream of a job that lets them travel the world but, after 450 flights within four years, young engineers Samay Kohli, 30, and Akash Gupta, 27, had had their fill of globe-trotting and wanted to put down roots. So six years ago, they co-founded a firm.

And not just any firm. They decided to take a stab at warehouse automation – arguably the most promising space for autonomous robots. The hunch worked. Their company, Grey Orange, is now one of India’s hottest ventures. Kohli and Gupta started small but now enjoy investments from Tiger Global Management, one of the “tiger cubs” or offshoots of Tiger Management, the hedge fund made famous by investment guru Julian Robertson.

Their babies: the Butlers, swarms of cute but smart cube-shaped bots that roam among the shelves to store and retrieve the goods, and the Sorters, which scan and sort all the packages coming in and out of a facility.

The duo bonded as students at the prestigious engineering college BITS Pilani in Rajasthan, over their shared love of robotics. In 2007, they built ACYUT – one of the first humanoid robots in India – and proceeded to tour the world, holding educational workshops and entering – and winning – robotics competitions.

Despite their success, including a gold medal at the 2009 ROBOlympics (now called the RoboGames), they realised that their technical know-how was needed more back home – and went into business in their native Delhi. “We had to do something that we liked,” says Kohli. “We liked robots – so we did robots!”

In recent years, India has developed a healthy start-up scene, fed by falling barriers to entry in software development, a flood of venture capital and a healthy supply of talent from the country’s well-developed IT industry. The focus is almost exclusively on software, and the blueprints for success are in fields such as e-commerce and financial technology.

In 2011, though, there weren’t too many start-ups yet, especially not hardware-oriented ones. The pair noticed that the warehousing industry was especially inefficient – or “ripe for disruption,” as Kohli puts it – and decided to improve it. But launching a hardware start-up back then, he adds, was a nightmare. “The ecosystem right from printed-circuit boards (PCBs) making to machine shops, all those concepts didn’t exist. Most of the components you needed, for the first two or three years we were shipping across the world,” he says.

Despite the initial difficulties, in little over five years Grey Orange has evolved into a leading warehouse automation company that serves Indian e-commerce and logistics giants such as Flipkart, Jabong, DTDC and Delhivery. With customers in Japan, Hong Kong, Singapore and Chile, and offices across the globe, it has gone multinational.

Its Butler bot is superficially similar to the one made by Kiva Systems, which was brought in-house by Amazon in 2012. Relying on machine learning, swarms of Butlers work together to determine the most efficient way to fetch products and bring them to workers at picking stations. The system simultaneously optimises storage locations based on current and past order data.
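
As a rough illustration of what such a system has to solve, here is a minimal sketch of one naive scheduling approach – a greedy nearest-robot assignment. The function names and the greedy heuristic are assumptions for illustration only; Grey Orange’s actual learning-based scheduler is proprietary:

```python
import math

# Toy swarm task allocation, loosely in the spirit of Kiva/Butler-style
# systems: each pending pick is assigned to the idle robot with the
# shortest travel distance.
def assign_picks(robots, picks):
    """robots: {robot_id: (x, y)}; picks: {shelf_id: (x, y)}.
    Returns {robot_id: shelf_id} pairings."""
    free = dict(robots)
    assignment = {}
    for shelf, (sx, sy) in picks.items():
        if not free:
            break  # more picks than idle robots; the rest must wait
        best = min(free, key=lambda r: math.hypot(free[r][0] - sx,
                                                  free[r][1] - sy))
        assignment[best] = shelf
        del free[best]
    return assignment

print(assign_picks({"B1": (0, 0), "B2": (9, 9)},
                   {"shelf_A": (8, 8), "shelf_B": (1, 2)}))
```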

On a typical day, warehouse workers pick about 100 items per hour; using Butlers, it’s possible to boost this to 500-600 items an hour.

Then there are the Sorters. These machines use conveyors and scanners to read package barcodes, measure their dimensions and weight, automatically space them, and finally sort them for distribution. Warehouse automation is not new, but Kohli says what singles them out is their approach – uniting picking and sorting under a single overarching intelligent system.

Kohli and Gupta, taking a break on their smart Butler bots

Building from scratch

Kohli and Gupta weren’t the very first ones on India’s burgeoning hardware start-up scene, though. Go back another few years, and the situation was even worse, according to two other engineers who studied together at the Indian Institute of Technology (IIT) in Bombay. When Gagan Goyal started his educational robotics firm ThinkLABS in 2005 (see ‘Robo-teachers’, below), even cobbling together a prototype was a challenge.

“We used to go to local markets to get second-hand material. We’d have to scrap things like motors from toy cars,” he says. “We’d speak to the people who made TVs in India because they do work on PCBs and we had to convince them to make just 1,000 units for us.”

His classmate Ankit Mehta founded ideaForge, having developed India’s first drone in 2004 while at IIT Bombay. Mehta claims he and his team stumbled upon the now ubiquitous quad-rotor design independently. He says drones were barely on the radar in the West, let alone in the technological backwater India was at the time. They only realised others were working on similar designs when they attempted to file patents.

“It’s been a very arduous journey for us,” says Mehta. “In the early days, we literally had to build our own hardware ecosystem. We had to go and scavenge in the deepest, darkest lanes of the city to find components.”

Luckily, though, their work caught the eye of India’s Defence Research and Development Organisation, and in 2009 they delivered what was at the time the world’s smallest and lightest UAV autopilot. This was followed swiftly in 2010 by the country’s first autonomous UAV – the NETRA quad-copter – and the company has now sold more than 400 drones to the country’s armed forces and internal security agencies.

Pushing through

The struggles these founders went through highlight the chicken-and-egg situation that India’s high-tech hardware companies have faced, says Jagannath Raju. A lack of high-profile success stories makes investors and component manufacturers unwilling to take a gamble, he says, which makes starting a hardware company all the more difficult.

Raju would know better than anyone, having founded his robotics firm Systemantics in 1995 after returning from the US. After the company started out consulting for various government agencies, designing everything from inspection robots to underwater vehicles, the Department of Science and Technology recognised the technical depth of Systemantics’ work and provided some soft financing to develop a robotic arm in 2000.

There was muted interest from industry, though, and the next decade saw the company surviving hand to mouth on often tardy government funding. Then in the late 2000s venture capitalists started paying attention to the Indian market, and in 2010 Systemantics received its first significant funding from Accel.

This allowed the firm to create a series of robotic arms designed for smaller businesses in emerging markets that are unable to shell out for top-of-the-line industrial robots. “We thought the most appropriate way to disrupt things was on price point and not worrying about catering for someone looking for very precise and very high-speed solutions. That’s maybe 10% of the applications, but another 90% is looking for a much lower-cost and lower-spec kind of product,” says Raju.

While component prices have fallen dramatically in recent years, thanks mainly to cheap imports from China, most are general purpose. “You pay for 100% of the features but you only end up using 25%,” says Raju. To meet its ambitious price point, for the past five years the firm has been working with local suppliers to develop everything from motors to computer boards tailored to its needs.

These efforts may have a positive knock-on effect for start-ups following in Systemantics’ footsteps, because Raju has recognised that securing the firm’s supply chain relies on vendors having enough demand. “We very specifically say that you will supply to us and you will sell it to as big a market as possible,” he says. “No licensing, even if we help you with some money, some design ideas, you are open to do whatever you want as long as you supply our needs.” Raju has even pointed other robotics start-ups towards Systemantics’ vendors.

Moonshot

Ultimately, though, Raju thinks it will take some high-profile successes for Indian industry to get fully on board. Karthik Reddy, co-founder and managing partner of Blume Ventures, which has invested in both Systemantics and Grey Orange, agrees. He is hopeful that Grey Orange could be the breakthrough success that catalyses India’s robotics ecosystem.

“That’s the dream we have for that company,” he says. “It’s literally one of the best-performing companies we have, if not the best. There’s a long way to go, but in two or three years it can become an inspiration for a lot of youngsters to say we can do this.”

Another project that could inspire a generation of Indian engineering graduates is the country’s first privately funded space mission. Team Indus is one of five finalists in the Google-funded Lunar XPRIZE competition, which has challenged privately funded teams to land a robotic spacecraft on the Moon, travel 500m across its surface and send HD video to Earth before the end of 2017 – for a grand prize of $20 million.

Team Indus is booked onto an Indian Space Research Organisation (ISRO) rocket due to launch on 28 December and is entering the final stages of testing. Its workforce is conspicuously polarised between senior ex-ISRO scientists and engineering graduates fresh out of university. Karan Vaish is only 24, but he is already heading the development of the team’s lunar rover.

“It’s great, it creates an image that the guy next door can do this,” he says, adding that opportunities like this, and the high-profile success of start-ups in the software space, are leading to a shift in attitudes in his generation.

“All my friends are breaking out of these hard-core clichéd jobs,” adds Vaish. “They’ve seen organisations grow from very small to a massive scale and they can just see it is actually possible, so they want to take the leap of faith and do it themselves.”

Confidence will only get you so far, though, as many of India’s first wave of start-ups are beginning to find out. The heady optimism of a few years ago has faded as heavily-backed companies struggle to achieve profitability and investors begin to exercise more caution. In hardware, the challenges are even greater.

“One of the difficulties is that you don’t have indigenous talent pools around in large numbers to say we’re suddenly going to build an industry. Developing that is a very slow process,” says Blume’s Reddy. While there is plenty of raw engineering ability among graduates, robotics is a complicated synthesis of multiple fields. And unlike the software industry, the hardware ecosystem hasn’t always got the resources to turn raw talent into productive employees.

Hardware and IoT

Reddy says that most of the Indian robotics start-ups are focused on areas such as retail, services and consumer tech, with little domestic demand. He doesn’t see any Indian robotics company becoming successful without first cracking its home market. “I don’t think there’s enough understanding of where the industry solutions are required,” he says. The real opportunities are in industrial applications, he adds, and will require a combination of hardware, sensors and smart analytics.

Avinash Kaushik, the founder of India’s first hardware accelerator Revvx, agrees. He says India’s IT industry has excelled in building end-to-end solutions for enterprise customers, and the hardware industry needs to follow suit.

That will mean focusing on the problem at hand, not on whether you’re a hardware or a software start-up, and being willing to combine all kinds of technology, from robotics to the Internet of Things and machine learning.

“It’s a convergence of different technologies,” says Kaushik. “Just the hardware is not of much value in the long run. You need to build smart solutions, which can adapt, can monitor the surroundings, take action and be contextually aware.”

Robo-teachers

Gagan Goyal admits he was an academically average student at the Indian Institute of Technology in Bombay. But in 2001 he got the chance to take part in his first robotics competition. He went on to win many such contests across India, and eventually even represented Asia in a US competition.

His personal journey made him realise what a powerful learning tool robotics can be. Not only does it teach a synthesis of technical skills from mechanics to computer science, it also teaches life skills such as problem solving, teamwork and perseverance. “It’s like real life. You don’t know the solution, you don’t know where to start,” he says.

This inspired him to start ThinkLABS, India’s first educational robotics company, in 2005. By the time he sold his stake in the firm last year, ThinkLABS had set up robotics labs at more than 400 schools, and 240 teams had participated in the national robotics competition he started.

These days hands-on STEM education in India has become commonplace, he says. Educational robotics companies are cropping up across the country and the government is even offering one-million-rupee (£12,000) grants to set up so-called tinkering laboratories.

Predicting earthquakes: AI to the rescue


When the ground shakes, there are usually consequences – often deadly ones. About 10,000 people die every year during and in the immediate aftermath of earthquakes. But so far, scientists have not had much luck finding ways to accurately predict them.

“Earthquake prediction is currently impossible,” says John Vidale, a seismologist at the University of Washington. “At best, during times of high aftershock danger, low probabilities of danger can be accurately assessed – an effort termed ‘operational earthquake forecasting’ in the US.”

Prediction is not working, he says, because the factors that determine the time and impact of an earthquake – the distribution and strength of stress deep in the ground – require measurement at a level of detail that is now, and, he adds, “probably forever,” unobtainable.

It’s not for lack of trying. For decades, scientists have studied foreshocks, changes in groundwater chemistry, odd animal behaviour, electromagnetic disturbances and so on, all in the hope of finding a reliable way to predict an imminent quake. But nothing so far has worked.

At the moment, the best that geologists can do is to determine the particular faults where quakes are likely to take place, and how they have moved in the past. When a fault moves, a quake usually ensues. The San Andreas Fault in California, for instance, is closely monitored. Earthquakes there happened in 1857, 1881, 1901, 1922, 1934 and 1966 – roughly every 22 years. But when geologists assumed a quake would therefore occur between 1988 and 1993, they were way off – it happened only in 2004.

Tremor in the lab

To accurately predict a quake, one has to predict not one but three factors simultaneously: its location, its size, and the time when it’ll hit, says Peggy Hellweg, a seismologist at University of California Berkeley. “I can tell you that there will be an earthquake in California tomorrow – there will be many, most of them too small to be felt. The prediction must be for all three characteristics.”

She thinks that we cannot predict quakes, because we don’t have a broad enough view into the system with enough detail. “If you compare the amount of data we have about quakes with [what we know about] weather, we seismologists are currently collecting data at a level comparable with weather [forecasting] about 100 years ago. We weren’t doing a very good job of predicting weather then. Not until we got the much more encompassing view from satellite data.”

However, a team at the Los Alamos National Laboratory in New Mexico, led by geophysicist Paul Johnson, seems to have made a step forward – albeit, for now, only in the lab. They decided to feed a machine-learning algorithm raw data – lots and lots of measurements recorded before, during and after lab-simulated earthquakes. The idea was for the algorithm to recognise the signs that a lab-created tremor is about to happen by analysing the sounds emitted by the strained material that simulates a fault line.

Hellweg, who is not involved in this research, says that Johnson and his group are “currently only working on [predicting] time, and maybe location, but not size [of a quake].”

A new signal

To create lab earthquakes, the scientists first insert a block between two other blocks, and put in-between a mixture of rocky material, so-called gouge material, to mimic the properties of real faults. Then they begin to pull out the middle block – and just as in a real quake, right before the rupture the gouge material begins to fail, generating specific cracks and sounds in the process. The block then slips quasi-periodically. Geologists believe that this system simulates the behaviours of real quakes.

The team wanted to study whether the sound emitted by the fault could be an indication of when the next slip would occur. To do so, they recorded all the sound from the experiment and fed the data into a machine-learning algorithm – in the hope that the machine would be able to identify specific patterns.

“There is a ‘training’ phase where the machine-learning algorithm is trained based on knowledge of when a fault may slip and produce an earthquake,” explains Johnson. “Then, the machine-learning is applied to data it has never seen before from the same system.”

As it turns out, the machine can predict very accurately, just by listening to the acoustic signals emitted by a laboratory fault, when a ‘quake’ is likely to take place. The researchers think the algorithm has identified a new signal amid all the acoustic data – certain “creaking and grinding” sounds that scientists used to believe were just noise.
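
The general recipe – statistics computed over sliding windows of the acoustic signal, fed to a learned regressor that predicts the time remaining until the next slip – can be sketched as follows. This is a toy reconstruction on synthetic data; the features, the choice of a random forest and all parameters are assumptions for illustration, not the team’s published pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy version of the approach described above: window the acoustic
# signal, compute simple statistics per window, and regress time to
# the next slip. The data here is synthetic noise with a fake label.
rng = np.random.default_rng(0)
n_windows, window_len = 500, 256
signal = rng.normal(size=n_windows * window_len)
windows = signal.reshape(n_windows, window_len)

# Per-window features: variance, a kurtosis-like moment, peak amplitude
features = np.column_stack([
    windows.var(axis=1),
    (windows**4).mean(axis=1),
    np.abs(windows).max(axis=1),
])
time_to_failure = np.linspace(10.0, 0.0, n_windows)  # toy label

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features[:400], time_to_failure[:400])     # 'training' phase
print(model.predict(features[400:405]))              # unseen windows
```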

Of course, a lab experiment is very different from real quakes. But in some ways, they are also similar. “We can’t be sure we’ll ever predict real earthquakes, but we think if progress is to be made, this may be the best possible approach in existence,” says Johnson.

The team’s next step is to study small ruptures on real faults, such as the San Andreas, where repeating earthquakes occur over relatively short periods. Some seismologists, such as Hellweg, are willing to keep an open mind about the results. She says that “the level of noise in [Johnson’s laboratory] data from other sources is relatively low. So, might there be something there? Yes. How close would we have to be to a source to measure the signals in the real world? Who knows. One challenging question for earthquake prediction systems is whether it will always work, or at least 90% of the time.”

Not everyone is convinced it can work, though. “Paul has great ideas, but I’d be surprised if an unfocused search turns up heretofore unrecognised silver bullets,” says Vidale. “Many people have pored over the seconds to days before large earthquakes – and large catalogues of small earthquakes as well – the effort has already been lengthy and intense.”

Ghost in the Shell thrills but ducks the philosophical questions posed by a cyborg future


How closely will we live with the technology we use in the future? How will it change us? And how close is “close”? Ghost in the Shell imagines a futuristic, hi-tech but grimy and ghetto-ridden Japanese metropolis populated by people, robots, and technologically-enhanced human cyborgs.

Beyond the superhuman strength, resilience and X-ray vision provided by bodily enhancements, one of the most transformative aspects of this world is the idea of brain augmentation – that, as cyborgs, we might have two brains rather than one. Our biological brain – the “ghost” in the “shell” – would interface via neural implants to powerful embedded computers that would give us lightning-fast reactions and heightened powers of reasoning, learning and memory.

Ghost in the Shell was first written as a manga comic series in 1989, during the early days of the internet. Its creator, Japanese artist Masamune Shirow, foresaw that this brain-computer interface would overcome the fundamental limitation of the human condition: that our minds are trapped inside our heads. In Shirow’s transhuman future our minds would be free to roam, relaying thoughts and imaginings to other networked brains, entering via the cloud into distant devices and sensors, even “deep diving” the mind of another in order to understand and share their experiences.

Shirow’s stories also pinpointed some of the dangers of this giant technological leap. In a world where knowledge is power, these brain-computer interfaces would create new tools for government surveillance and control, and new kinds of crime such as “mind-jacking” – the remote control of another’s thoughts and actions. Nevertheless, there was also a spiritual side to Shirow’s narrative: that the cyborg condition might be the next step in our evolution, and that the widening of perspective and merging of individuality from a networking of minds could be a path to enlightenment.

Lost in translation

Borrowing heavily from Ghost in the Shell’s re-telling by director Mamoru Oshii in his classic 1995 animated film version, the newly arrived Hollywood cinematic interpretation stars Scarlett Johansson as Major, a cyborg working for Section 9, a government-run security organisation charged with fighting corruption and terrorism. Directed by Rupert Sanders, the new film is visually stunning and the storyline lovingly recreates some of the best scenes from the original anime.


Sadly though, Sanders’ movie pulls its punches around the core question of how this technology could change the human condition. Indeed, if casting Western actors in most key roles wasn’t enough, the new film also engages in a form of cultural appropriation by superimposing the myth of the American all-action hero – who you are is defined by what you do – on a character who is almost the complete antithesis of that notion.

Major fights the battles of her masters with increasing reluctance, questioning the actions asked of her, drawn to escape and contemplation. This is no action hero, but someone trying to piece together fragments of meaning from within her cyborg existence with which to assemble a worthwhile life.

A scene midway through the film shows, even more bluntly, the central role of memory in creating the self. We see the complete breakdown of a man who, having been mind-jacked, faces the realisation that his identity is built on false memories of a life never lived, and a family who never existed. The 1995 anime insists that we are individuals only because of our memories. While the new film retains much of the same story line, it refuses to follow the inference. Rather than being defined by our memories, Major’s voice tells us that “we cling to memories as if they define us, but what we do defines us”. Perhaps this is meant to be reassuring, but to me it is both confusing and unfaithful to the spirit of the original tale.

The new film also backs away from another key idea of Shirow’s work: that the human mind – even the human species – is, in essence, information. Where the 1995 anime talked of the possibility of leaving the physical body – the shell – elevating consciousness to a higher plane and “becoming part of all things”, the remake offers only veiled hints that such a merging of minds, or a melding of the human mind with the internet, could be either positive or transformational.

Open lives

In the real world, the notion of networked minds is already upon us. Touchscreens, keypads, cameras, mobile, the cloud: we are more and more directly and instantly linked to a widening circle of people, while opening up our personal lives to surveillance and potential manipulation by governments, advertisers, or worse.

Brain-computer interfaces are also on their way. There are already brain implants that can mitigate some of the symptoms of brain conditions, from Parkinson’s disease to depression. Others are being developed to overcome sensory impairments such as blindness, or to control a paralysed limb. The remote control of behaviour using implanted brain stimulators has also been demonstrated in several animal species – a frightening technology that could be applied to humans if someone chose to misuse it that way.

The possibility of voluntarily networking our minds is also here. Devices like the Emotiv are simple wearable electroencephalograph (EEG) headsets that can detect some of the signature electrical signals emitted by our brains, and are sufficiently intelligent to interpret those signals and turn them into useful output. For example, an Emotiv connected to a computer can control a videogame by the power of the wearer’s thoughts alone.
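
The underlying idea – turning a band-power feature of the EEG signal into a discrete command – can be sketched in a few lines. The sampling rate, frequency band and threshold below are illustrative assumptions on synthetic data, not Emotiv’s actual SDK or classifiers:

```python
import numpy as np

# Toy EEG-to-command pipeline: estimate power in one frequency band
# and map it to a discrete game action.
FS = 128  # assumed sample rate in Hz, typical of consumer EEG headsets

def band_power(eeg, lo, hi, fs=FS):
    """Power of `eeg` between lo and hi Hz, estimated via the FFT."""
    spectrum = np.abs(np.fft.rfft(eeg))**2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def to_command(eeg, threshold=1e4):
    # High beta-band (13-30 Hz) power -> 'push'; otherwise 'rest'
    return "push" if band_power(eeg, 13, 30) > threshold else "rest"

sample = np.random.default_rng(1).normal(size=FS)  # one second of 'EEG'
print(to_command(sample))
```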

In terms of artificial intelligence, the work in my lab at Sheffield Robotics explores the possibility of building robot analogues of human memory for events and experiences. The fusion of such systems with the human brain is not possible with today’s technology – but it is imaginable in the decades to come. Were an electronic implant developed that could vastly improve your memory and intelligence, would you be tempted? Such technologies may be on the horizon, and science fiction imaginings such as Ghost in the Shell suggest that their power to fundamentally change the human condition should not be underestimated.

Tony Prescott, Professor of Cognitive Neuroscience and Director of the Sheffield Robotics Institute, University of Sheffield

This article was originally published on The Conversation. Read the original article.

New tiny Dutch supercomputer simulates colliding galaxies

To test the new, small supercomputer, the researchers simulated the collision of the Milky Way with the Andromeda Galaxy. This clash will take place in about four billion years. Credit: Jeroen Bédorf (Leiden University)

A team of Dutch scientists has built a tiny supercomputer with the computing power of 10,000 PCs – powerful enough to simulate colliding galaxies.

The Little Green Machine II (LGM-2) is the size of four pizza boxes but can perform 200 trillion calculations per second (200 teraflops). It is ten times faster than its predecessor, LGM-1, which was built in 2010 and retires today.

To test the new supercomputer, the researchers simulated the collision between the Milky Way and the Andromeda Galaxy that will occur in about four billion years from now.

Just a few years ago the researchers performed the same simulation on the huge Titan supercomputer (17.6 petaflops) at Oak Ridge National Laboratory in the US. “Now we can do this calculation at home,” said astronomer Jeroen Bédorf of Leiden University. “That’s so convenient.”
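
At its core, such a galaxy-collision run is a gravitational N-body calculation. Here is a toy, direct-sum version of the arithmetic involved – real runs use millions of particles, tree algorithms and GPU kernels, so this sketch only shows the shape of the problem, with all parameters chosen for illustration:

```python
import numpy as np

# Toy direct-sum N-body step: every particle attracts every other.
# This O(N^2) kernel is the work that GPU supercomputers accelerate.
G, SOFTENING, DT = 1.0, 0.05, 0.01

def accelerations(pos, mass):
    """Pairwise gravitational acceleration on each particle."""
    d = pos[None, :, :] - pos[:, None, :]     # displacement vectors
    r2 = (d**2).sum(-1) + SOFTENING**2        # softened squared distances
    inv_r3 = r2**-1.5
    np.fill_diagonal(inv_r3, 0.0)             # no self-attraction
    return G * (d * inv_r3[:, :, None] * mass[None, :, None]).sum(1)

rng = np.random.default_rng(2)
pos = rng.normal(size=(1000, 3))   # one random clump; two offset clumps
vel = np.zeros((1000, 3))          # would stand in for two 'galaxies'
mass = np.full(1000, 1.0 / 1000)

for _ in range(10):                # simple kick-drift time integration
    vel += accelerations(pos, mass) * DT
    pos += vel * DT
print(pos[:2])
```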

LGM-2 will be used by researchers in oceanography, computer science, artificial intelligence, financial modelling and astronomy. The computer is based at Leiden University in the Netherlands and has been developed with help from IBM.

Photo of the Little Green Machine II. Credit: Leiden University

The researchers constructed the machine from four servers, each with four special graphics cards, connected via a high-speed network.

Project leader Simon Portegies Zwart from Leiden University said that the design is so compact you could transport it on a bicycle. “Besides that, we only use about 1% of the electricity of a similar large supercomputer,” he added.

Unlike its predecessor, the new supercomputer uses professional graphics cards made for big scientific calculations, rather than the standard video cards from gaming computers used in Little Green Machine I.

Also, the previous supercomputer was based on the x86 architecture from Intel, while LGM-2 uses the faster OpenPower architecture developed by IBM.

The name Little Green Machine was chosen because the supercomputer is small and consumes little power. It is also a nod to Jocelyn Bell Burnell, who in 1967 discovered the first radio pulsar, nicknamed LGM-1 – where LGM stands for Little Green Men.

The construction of the small supercomputer cost about €200,000 and was funded by the Netherlands Organisation for Scientific Research. The machine was developed in collaboration with researchers from Centrum Wiskunde & Informatica in Amsterdam, Utrecht University, TU Eindhoven and TU Delft in the Netherlands.


Chinese researchers use seaweed to power devices

Seaweed might no longer just be found on your sushi platter: Chinese researchers have discovered a way for the algae to give a power boost to our devices.

A team of scientists from Qingdao University have developed a seaweed-derived material that can be used to enhance the performance of semiconductors, lithium-ion batteries and fuel cells.

Carbon-based materials such as graphite are currently used for energy storage and conversion. However, the researchers wanted a more sustainable way to keep up with the rapidly growing demand for bigger storage devices, and turned for help to seaweed, which is abundant and grows readily in salt water.

“We wanted to produce carbon-based materials via a really ‘green’ pathway,” said Dongjiang Yang, a nanotechnology expert at the university and lead author of the study. “Given the renewability of seaweed, we chose seaweed extract to synthesise porous carbon materials.”

The team created the porous carbon nanofibres by binding metal ions such as cobalt to molecules in the seaweed extract – a process called chelation. The nanofibres formed an “egg-box” structure, in which the seaweed extract engulfs the metal ions, creating a stable material.

Testing the seaweed-derived material showed that it is significantly better than conventional graphite anodes for lithium-ion batteries, said the scientists.

The research could lead to longer ranges on a single charge for electric cars, and even to higher capacitance if the material is used in supercapacitors and zinc-air batteries.

The nanofibres also showed improved stability as catalysts for fuel cells, compared to traditional platinum-based catalysts, say the researchers.

The team is now developing seaweed-derived cathodes for lithium-ion batteries, and working on suppressing the defects that materialise in these cathodes, which reduce the mobility of lithium ions and degrade the batteries. The scientists are also branching out into other natural materials, developing a high-surface-area material from red algae and iron for lithium-sulphur batteries and supercapacitors.

However, the material might not be commercialised for use in batteries any time soon. At present, only 18,000 tonnes of seaweed extract can be obtained annually for industrial use, and far more would be required to produce the material on a commercial scale.