Top tech innovations to come out in 2018

Monika Roy

Published: January 5, 2018

Here are five top innovations in the technology sector to come out in 2018.

Scalp cooling for reducing chemotherapy hair loss


Newly diagnosed cancer patients have a lot to process. For women, the inevitable loss of hair is often one of the hardest parts. A new technology making its way to the US aims to eliminate this problem from some patients’ lists of worries.

The practice of “scalp cooling” – which works by reducing the temperature of the scalp by a few degrees immediately before, during and after chemotherapy – has been shown to be highly effective at preserving hair in women receiving chemotherapy for early-stage breast cancer. The United States Food and Drug Administration (FDA) approved the system in May last year.

In February last year, JAMA, the medical journal of the American Medical Association, published two studies -- one from the University of California, San Francisco, and one from Baylor College of Medicine in Houston -- confirming that women with early-stage breast cancer who underwent scalp-cooling treatments were significantly more likely to keep at least some of their hair throughout chemotherapy.

Dr Hope S Rugo, the director of breast oncology and clinical trials education at the UCSF Helen Diller Family Comprehensive Cancer Center and lead investigator of one of the studies, told the New York Times, “We have this huge growing population of breast cancer survivors, and many of them are very traumatized by their treatment. We’re working on all sorts of areas to try to limit that impact, and one is scalp cooling”.


Speech-to-Speech translation (S2S)


Speech-to-speech translation is the process by which conversational spoken phrases are instantly translated and spoken aloud in a second language. It will allow natural language processing features to be built into apps. Imagine discussing important matters across the globe with just a tap, in multiple languages, without the aid of a translator or a mediator.

In 2012, researchers at Microsoft made software that can learn the sound of a person’s voice and then use it to speak a language that the person doesn’t. The system could be used to make language tutoring software more personal, or to make tools for travelers.

Microsoft research scientist Frank Soong in a demonstration at Microsoft’s Redmond, Washington campus in 2012 showed how his software could read out text in Spanish using the voice of his boss, Rick Rashid, who leads Microsoft’s research efforts. In a second demonstration, Soong used his software to grant Craig Mundie, Microsoft’s chief research and strategy officer, the ability to speak Mandarin.

According to Chris Wendt of Microsoft/Skype, speech recognition (SR), machine translation (MT) and text-to-speech (TTS) by themselves are not enough to make a translated conversation work. Because translation needs clean input, elements of spontaneous speech -- hesitations, repetitions, corrections and so on -- must be cleaned up between automatic speech recognition and machine translation.

For this purpose, Microsoft has built a function called TrueText to turn what you said into what you wanted to say. Because it’s trained on real-world data, it works best on the most common mistakes, Wendt says.
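To make the pipeline concrete, here is a minimal sketch in Python of how the four stages could be chained together. Every function name and the filler-word cleanup below are illustrative placeholders invented for this example; they are not Microsoft’s TrueText or Translator APIs.

```python
# Illustrative speech-to-speech pipeline: SR -> cleanup -> MT -> TTS.
# Every stage below is a hypothetical placeholder, not Microsoft's API;
# a real system would call speech-recognition, translation and
# speech-synthesis services at these points.

FILLERS = {"um", "uh", "er"}

def recognise_speech(audio: bytes, lang: str) -> str:
    """Placeholder SR stage: returns a raw, disfluent transcript."""
    return "um I I would like uh to book a a table"

def clean_disfluencies(transcript: str) -> str:
    """Toy stand-in for a TrueText-style step: drop filler words and
    collapse immediate word repetitions ('I I want' -> 'I want')."""
    words = [w for w in transcript.split() if w.lower() not in FILLERS]
    cleaned = []
    for word in words:
        if not cleaned or cleaned[-1].lower() != word.lower():
            cleaned.append(word)
    return " ".join(cleaned)

def translate_text(text: str, src: str, dst: str) -> str:
    """Placeholder MT stage: tags the text instead of translating it."""
    return f"[{src}->{dst}] {text}"

def synthesise_speech(text: str, lang: str) -> bytes:
    """Placeholder TTS stage: returns bytes instead of audio."""
    return text.encode("utf-8")

def speech_to_speech(audio: bytes, src: str, dst: str) -> bytes:
    text = recognise_speech(audio, src)          # SR
    text = clean_disfluencies(text)              # cleanup between SR and MT
    translated = translate_text(text, src, dst)  # MT
    return synthesise_speech(translated, dst)    # TTS

if __name__ == "__main__":
    print(speech_to_speech(b"...", "en", "es").decode("utf-8"))
```

The key design point, per Wendt’s description, is that the cleanup pass sits between recognition and translation, so the translator only ever sees fluent text.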

In a report published earlier in 2017, the Translation Automation User Society (TAUS) examined the paradox in which speech-to-speech (S2S) translation technology currently finds itself.


Leap Motion


Leap Motion, Inc is an American company that manufactures and markets a computer hardware sensor device that supports hand and finger motions as input, analogous to a mouse but requiring no hand contact or touching.

The Leap Motion controller is a small USB peripheral device which is designed to be placed on a physical desktop, facing upward. It can also be mounted onto a virtual reality headset.

Leap Motion presents an entirely new way to interact with your computer. For the first time, you can control a computer in three dimensions with your natural hand and finger movements.
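As a rough illustration of what that input looks like to a program, the Python sketch below polls the controller for hand positions. It assumes the classic Leap Motion Python SDK (v2); the Leap module and the attribute names used here are taken from that SDK and are assumptions that may not match newer releases.

```python
# Minimal polling sketch, assuming the classic Leap Motion Python SDK (v2).
# The Leap module ships with the SDK (it is not on PyPI), and the attribute
# names below are assumptions based on that SDK's documentation.
import time

import Leap

def track_hands(seconds: float = 5.0) -> None:
    controller = Leap.Controller()
    end = time.time() + seconds
    while time.time() < end:
        frame = controller.frame()      # latest tracking frame
        for hand in frame.hands:        # zero, one or two tracked hands
            pos = hand.palm_position    # millimetres relative to the device
            side = "left" if hand.is_left else "right"
            print("%s hand at x=%.0f y=%.0f z=%.0f, pinch=%.2f"
                  % (side, pos.x, pos.y, pos.z, hand.pinch_strength))
        time.sleep(0.1)

if __name__ == "__main__":
    track_hands()
```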

The technology for Leap Motion was first developed in 2008. Following an initial angel investment, David Holz and his childhood friend Michael Buckwald founded the company in 2010 while Holz was studying for a PhD in mathematics.

The company in June 2011 raised a $1.3M seed financing round with investments from venture capital firms Andreessen Horowitz, Founders Fund, and SOSV, as well as several angel investors.

Later in February 2016, Leap Motion released new software, called Orion, designed for hand tracking in VR (virtual reality).


Electrovibration Technology


Electrovibration technology is not a touch screen. It's a feel screen! Little motors embedded in smartphones and tablets vibrate to provide the haptic feedback that adds sensation to typing on a virtual keyboard. More than a year ago, a company called Senseg demonstrated an alternative that uses an electrostatic field, rather than a vibrating motor, to create tactile feedback.

Electrovibration technology will change the mobile touchscreen experience dramatically and you will be able to feel different kinds of texture in the coming years.

In 1950, Edward Mallinckrodt, a researcher at Washington University in St Louis, accidentally discovered the phenomenon of electrovibration (also known as electrostatic vibration). He noticed that a brass electric light socket had a different texture when a light was burning than it did when the light was turned off. Along with a team of researchers, he began exploring the phenomenon in more detail by running experiments using an aluminum plate with insulating varnish.

Over a half-century after Mallinckrodt’s discovery, a collaborative team of researchers from Carnegie Mellon University and Disney Research developed an algorithm for rendering 3D textures onto a touch screen using electrovibration. Nicknamed TeslaTouch, the system modifies the frequency and amplitude of an alternating voltage applied to an electrode beneath a touchscreen.

By changing this voltage, TeslaTouch allows a software interface on a tablet computer to provide real-time haptic feedback by modifying the perceived friction of different parts of the screen. As the user swipes, taps, pinches, and manipulates objects on a touchscreen, the software can generate tactile effects that mimic the bumps, ridges, and textures of the surfaces of different objects.
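As a rough sketch of the idea, and not Disney Research’s actual implementation, the Python snippet below maps a named screen texture to the amplitude and frequency of a sinusoidal drive voltage and samples that waveform. The texture table, voltage values and function names are invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Texture:
    """Hypothetical texture description: a larger amplitude raises the
    perceived friction, while frequency changes the character of the feel."""
    amplitude_v: float   # peak drive voltage, in volts
    frequency_hz: float  # drive frequency, in hertz

# Illustrative texture table; the values are invented for this example.
TEXTURES = {
    "glass": Texture(amplitude_v=0.0, frequency_hz=0.0),
    "paper": Texture(amplitude_v=40.0, frequency_hz=120.0),
    "rubber": Texture(amplitude_v=80.0, frequency_hz=60.0),
}

def drive_voltage(texture: Texture, t: float) -> float:
    """Instantaneous voltage on the electrode at time t (seconds): a simple
    sinusoid whose amplitude and frequency set the perceived friction."""
    return texture.amplitude_v * math.sin(2 * math.pi * texture.frequency_hz * t)

def sample_waveform(region: str, duration_s: float = 0.01, rate_hz: int = 10_000):
    """Waveform the controller would output while the finger rests on a
    screen region rendered with the given texture."""
    tex = TEXTURES[region]
    return [drive_voltage(tex, n / rate_hz) for n in range(int(duration_s * rate_hz))]

if __name__ == "__main__":
    samples = sample_waveform("paper")
    print("peak drive voltage over 'paper': %.1f V" % max(samples))
```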


Driverless vehicles

A visitor looks at a self-driving car by Google at the Viva Technology event in Paris, France, June 30, 2016. Photo: Reuters

A driverless car is a robotic vehicle designed to travel between destinations without a human operator. It is also sometimes called a self-driving car, an automated car or an autonomous vehicle.

The dream of a self-driving vehicle goes as far back as the Middle Ages, centuries before the invention of the car. The evidence for this comes from a sketch by Leonardo da Vinci that was meant to be a rough blueprint for a self-propelled cart.

It was around the early part of the 20th century that a real concerted effort to develop a driverless car that actually worked started to take shape, beginning with the Houdina Radio Control Company’s first public demonstration of a driverless car in 1925.

The first self-sufficient and truly autonomous cars appeared in the 1980s, with Carnegie Mellon University's Navlab and ALV projects in 1984 and Mercedes-Benz and Bundeswehr University Munich's Eureka Prometheus Project in 1987.

Since then, numerous major companies and research organisations have developed working prototype autonomous vehicles including Mercedes-Benz, General Motors, Continental Automotive Systems, Autoliv Inc, Bosch, Nissan, Toyota, Audi, Volvo, Vislab from University of Parma, Oxford University and Google.
