Tag Archives: Innovation

Fighter Jets May Launch Small Satellites to Space

by Elizabeth Howell | February 27, 2015

Small satellites could hitch rides to space on an F-15 fighter jet by next year, according to the Defense Advanced Research Projects Agency (DARPA), the agency responsible for developing new technologies for the U.S. military.

DARPA’s so-called Airborne Launch Assist Space Access (ALASA) program is an ambitious project that aims to launch small satellites more quickly, and reduce the cost of lofting them into orbit. Traditional launches using rockets cost roughly $30,000 per pound ($66,000 per kilogram), DARPA officials have said.
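
Those two figures are the same price expressed in different units; here is a quick back-of-the-envelope check (a sketch in Python, not anything from DARPA):

```python
# Cost-per-mass conversion behind the figures quoted above.
LB_PER_KG = 2.20462            # pounds in one kilogram

cost_per_lb = 30_000           # dollars per pound, per DARPA officials
cost_per_kg = cost_per_lb * LB_PER_KG

print(f"${cost_per_kg:,.0f} per kilogram")  # ~$66,000, matching the article
```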

The F-15 jet would take off on a nearly vertical trajectory, with the expendable launch vehicle mounted underneath it. Essentially, the fighter jet acts as the first stage of a rocket, according to DARPA. After the aircraft flies to a high altitude, it releases the satellite and can then return to land on a conventional runway.

What Happens When Drones Start Thinking on Their Own?

By Andy Miah, University of Salford   |   February 27, 2015

This article was originally published on The Conversation. The publication contributed this article to Live Science’s Expert Voices: Op-Ed & Insights.

You will be forgiven if you missed the Drones for Good competition held recently in Dubai. Despite drone technology really taking off commercially in the past year or so (the potential puns are endless), drones remain a relatively niche interest.

Drones – or unmanned aerial vehicles (UAVs) as they are increasingly known – have reached a mass-market tipping point. You can buy them on the high street for the price of a smartphone and, despite a large DIY drone community, the out-of-the-box versions are pretty extraordinary, fitted with built-in cameras and “follow me” technology, where your drone will follow you as you walk, run, surf or hang-glide. Their usefulness to professional filmmakers has led to the first New York Drone Film Festival, to be held in March 2015.

Technologically speaking, drones’ abilities have all manner of real-world applications. Some of the highlights from the Drones for Good competition, with its US$1m prize, include a drone that delivers a life-ring to swimmers in distress. Swiss company Flyability took the international prize for Gimball, a drone whose innovative design allows it to collide with objects without becoming destabilised or hard to control, making it useful in rescue missions in hard-to-reach areas.

The winner of the national prize was a drone that demonstrates the many emerging uses for drones in conservation. In this case, the Wadi drone can help record and document the diversity of flora and fauna, providing a rapid way to assess changes to the environment.

More civilian uses than military

What does this all mean for how we think about drones in society? It wasn’t long ago that the word “drones” was synonymous with death, destruction and surveillance. Could we all soon have our own personal, wearable drone, as the mini-drone Nixie promises? Of course, the technology continues to advance within a military context, where drones – not the kind you can pick up, but large, full-scale aircraft – are serious business. There’s even a space drone, Boeing’s X-37, which spent several years in automated orbit, while others are in development to help explore other planets.

There’s no escaping the fact that drones, like a lot of technology now in the mainstream, have trickled down from their military origins. There are graffiti drones, drone bands, Star Wars-style drone racing competitions using virtual reality interfaces, and even theatrical drone choreography, or beautiful drone sculptures in the sky.

There are a few things about drones that are extremely exciting – and controversial. Their autonomous capabilities can be breathtaking: witnessing one fly off at speed on its own feels extremely futuristic. But this is not strictly legal at present, due to the associated risks.

A pilot must always have “line of sight” of the drone and the capacity to take control. Technically, even the latest drones still require a flight path to be pre-programmed, so the drone isn’t really making autonomous decisions yet, although the new DJI Inspire is pretty close. Drone learning has to be the next step in their evolution.

Yet this prospect of artificial intelligence raises further concerns about control. If a drone could become intelligent enough to take off, fly, get up to all kinds of mischief and locate a power source to recharge, all without human intervention or oversight, where would that leave humanity?

There are also concerns about personal privacy. If Google Glass raised privacy hackles, drones will cause far worse problems. There have already been a few incidents, such as the drone that crashed onto the White House lawn, or the one that strayed onto a runway at London Heathrow. The point at which a drone is involved in something very serious may be the point at which its status as a mainstream toy ends.

This article was originally published on The Conversation. Read the original article.

Future Computers Could Communicate Like Humans

By Elizabeth Palermo, Staff Writer   |   February 27, 2015

In the future, you might be able to talk to computers and robots the same way you talk to your friends.

Researchers are trying to break down the language barrier between humans and computers as part of a new program from the Defense Advanced Research Projects Agency (DARPA), which is responsible for developing new technologies for the U.S. military. The program — dubbed Communicating with Computers (CwC) — aims to get computers to express themselves more like humans by enabling them to use spoken language, facial expressions and gestures to communicate.

“Today we view computers as tools to be activated by a few clicks or keywords, in large part because we are separated by a language barrier,” Paul Cohen, DARPA’s CwC program manager, said in a statement. “The goal of CwC is to bridge that barrier, and in the process encourage the development of new problem-solving technologies.”

One of the problem-solving technologies that CwC could help further is the computer-based modeling used in cancer research. Computers previously developed by DARPA are already tasked with creating models of the complicated molecular processes that cause cells to become cancerous. But while these computers can churn out models quickly, they’re not so adept at judging if the models are actually plausible and worthy of further research. If the computers could somehow seek the opinions of flesh-and-blood biologists, the work they do would likely be more useful for cancer researchers.

“Because humans and machines have different abilities, collaborations between them might be very productive,” Cohen said.

Of course, getting a computer to collaborate with a person is easier said than done. Putting ideas into words is something that humans do naturally, but communicating is actually more complicated than it may seem, according to DARPA.

“Human communication feels so natural that we don’t notice how much mental work it requires,” Cohen said. “But try to communicate while you’re doing something else — the high accident rate among people who text while driving says it all — and you’ll quickly realize how demanding it is.”

To get computers up to the task of communicating with people, CwC researchers have devised several tasks that require computers and humans to work together toward a common goal. One of the tasks, known as “collaborative composition,” involves storytelling. In this exercise, humans and computers take turns contributing sentences until they’ve composed a short story.
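
DARPA hasn’t published code for this exercise, so the sketch below is only a guess at its structure: a human and a machine alternate turns appending sentences to a shared story, with a placeholder function standing in for the hard part (the machine’s language model).

```python
# A toy skeleton of the "collaborative composition" exercise described above.
# The machine side is a hypothetical stand-in: a real system would need a
# language model that tracks the ideas in the story, as Cohen describes.

def machine_sentence(story: list[str]) -> str:
    # Placeholder for the genuinely hard part: generating a sentence
    # that coherently extends the ideas already present in `story`.
    return f"(machine continues the idea from: {story[-1][:30]}...)"

def collaborative_composition(turns: int = 4) -> str:
    story = []
    for turn in range(turns):
        if turn % 2 == 0:                      # human's turn
            story.append(input("Your sentence: "))
        else:                                  # computer's turn
            sentence = machine_sentence(story)
            print("Computer:", sentence)
            story.append(sentence)
    return " ".join(story)

if __name__ == "__main__":
    print("\nFinished story:\n", collaborative_composition())
```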

“This is a parlor game for humans, but a tremendous challenge for computers,” Cohen said. “To do it well, the machine must keep track of the ideas in the story, then generate an idea about how to extend the story and express this idea in language.”

Another task that CwC is planning, known as “blocks world,” would require humans and computers to communicate to build structures out of toy blocks. There’s a tricky part, though: neither the humans nor the computers will be told what to build. Instead, they’ll have to work together to make a structure that can stand up on its own.

In the future, DARPA researchers hope that computers will be able to do more than play with blocks, of course. If it’s successful, CwC could help advance the fields of robotics and semi-autonomous systems. The programming and preconfigured interfaces currently used in these fields don’t allow for easy communication between machines and humans. Better communications technologies could help robot operators use natural language to describe missions and give directions to the machines they operate, both before and during operations. And in addition to making life easier for human operators, CwC could make it possible for robots to request advice or information from humans when they get into sticky situations.

This article was originally published on Live Science.

Artificial Intelligence Advances into ‘Deep Learning’

Ahmed Banafa, Kaplan University School of Information Technology | February 26, 2015 [Innovation Begins Here]

Ahmed Banafa is a Kaplan University faculty member in the School of Information Technology, with experience in IT operations and management and a research background in related techniques and analysis. He is a certified Microsoft Office Specialist, and he has served as a reviewer and technical contributor for several business and technical books.

Deep learning, an emerging topic in artificial intelligence (AI), is quickly becoming one of the most sought-after fields in computer science. A subcategory of machine learning, deep learning deals with the use of neural networks to improve things like speech recognition, computer vision and natural language processing. In the last few years, deep learning has helped forge advances in areas as diverse as object perception, machine translation and voice recognition — all research topics that have long been difficult for AI researchers to crack.

Neural networks

In information technology, a neural network is a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory.

Typically, a neural network is initially “trained” or fed large amounts of data and rules about data relationships (for example, “A grandfather is older than a person’s father”). A program can then tell the network how to behave in response to an external stimulus (for example, to input from a computer user who is interacting with the network) or can initiate activity on its own (within the limits of its access to the external world).
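
As a concrete miniature of that training process, here is an illustrative sketch (not drawn from any particular system) of a single artificial neuron nudging its connection weights whenever its answer disagrees with a labeled example; real networks stack thousands of such units running in parallel.

```python
# Toy task: learn the rule "output 1 only when both inputs are 1" (logical AND)
# from labeled examples, by adjusting connection weights after each mistake.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 0.1        # learning rate: how far to nudge after each error

for epoch in range(20):
    for (x1, x2), target in examples:
        prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - prediction
        # Strengthen or weaken connections in proportion to the error.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print(w, b)  # the weights now encode the learned rule
for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```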

Deep learning vs. machine learning

To understand what deep learning is, it’s first important to distinguish it from other disciplines within the field of AI.

One outgrowth of AI was machine learning, in which the computer extracts knowledge through supervised experience. This typically involved a human operator helping the machine learn by giving it hundreds or thousands of training examples, and manually correcting its mistakes.
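
As a toy illustration of that workflow, here is a hedged sketch using scikit-learn (our choice of library, not the article’s): a human supplies both the labels and the hand-picked features, and the algorithm only fits the mapping between them.

```python
# Supervised machine learning as described above: a person hand-labels the
# examples AND hand-crafts the features; the algorithm just fits the mapping.
from sklearn.tree import DecisionTreeClassifier

# Toy task: is a message spam? A human decided which features matter.
def features(msg):
    return [msg.count("!"), int("free" in msg.lower()), len(msg.split())]

train_msgs = ["FREE money now!!!", "Lunch at noon?", "You won a FREE prize!",
              "Meeting moved to 3pm", "free free FREE!!!", "See you tomorrow"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam, hand-labeled by a person

clf = DecisionTreeClassifier().fit([features(m) for m in train_msgs], labels)
print(clf.predict([features("Claim your FREE gift!!!")]))  # likely [1]
```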

While machine learning has become dominant within the field of AI, it does have its problems. For one thing, it’s massively time-consuming. For another, it’s still not a true measure of machine intelligence, since it relies on human ingenuity to come up with the abstractions that allow a computer to learn.

Unlike machine learning, deep learning is mostly unsupervised. It involves, for example, creating large-scale neural nets that allow the computer to learn and “think” by itself — without the need for direct human intervention.

Deep learning “really doesn’t look like a computer program,” said Gary Marcus, a psychologist and AI expert at New York University, in a recent interview on NPR. Ordinary computer code is written in very strict logical steps, he said. “But what you’ll see in deep learning is something different; you don’t have a lot of instructions that say: ‘If one thing is true, do this other thing.’” [Humanity Must ‘Jail’ Dangerous AI to Avoid Doom, Expert Says]

Instead of linear logic, deep learning is based on theories of how the human brain works. The program is made of tangled layers of interconnected nodes. It learns by rearranging connections between nodes after each new experience.
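
A miniature version of that architecture, using NumPy (an illustrative sketch, not any production system): data flows through stacked layers of nodes, and learning would consist of adjusting the connection weights W1 and W2 after each experience. Only the forward pass is shown here.

```python
# The "tangled layers of interconnected nodes" described above, in miniature.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input layer  -> hidden layer (8 nodes)
W2 = rng.normal(size=(8, 3))   # hidden layer -> output layer (3 nodes)

def forward(x):
    hidden = np.maximum(0, x @ W1)  # each node fires on its weighted input
    return hidden @ W2              # the output layer combines hidden nodes

x = rng.normal(size=4)             # a toy 4-feature input
print(forward(x))                  # 3 output activations
```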

Deep learning has shown potential as the basis for software that could work out the emotions or events described in text (even if they aren’t explicitly referenced), recognize objects in photos, and make sophisticated predictions about people’s likely future behavior.

The Deep Learning Game

In 2011, Google started the Google Brain project, which created a neural network trained with deep-learning algorithms that famously proved capable of recognizing high-level concepts.

Last year, Facebook established its AI Research Unit, using deep-learning expertise to help create solutions that will better identify faces and objects in the 350 million photos and videos uploaded to Facebook each day.

Another example of deep learning in action is voice recognition, as in Google Now and Apple’s Siri.

The future

Deep learning is showing a great deal of promise — and it will help make self-driving cars and robotic butlers a real possibility. Such systems will still be limited, but what they can do was unthinkable just a few years ago, and the field is advancing at an unprecedented pace. The ability to analyze massive data sets and use deep learning in computer systems that can adapt to experience, rather than depending on a human programmer, will lead to breakthroughs ranging from drug discovery to the development of new materials to robots with a greater awareness of the world around them.

Source: Live Science

Google’s Artificial Intelligence Can Probably Beat You at Video Games

by Tanya Lewis, Staff Writer |   February 26, 2015 [Innovation Begins Here]

Computers have already beaten humans at chess and “Jeopardy!,” and now they can add one more feather to their caps: the ability to best humans in several classic arcade games.

A team of scientists at Google created an artificially intelligent computer program that can teach itself to play Atari 2600 video games, using only minimal background information to learn how to play.

By mimicking some principles of the human brain, the program is able to play at the same level as a professional human gamer, or better, on most of the games, researchers reported today (Feb. 25) in the journal Nature. [Super-Intelligent Machines: 7 Robotic Futures]

This is the first time anyone has built an artificial intelligence (AI) system that can learn to excel at a wide range of tasks, study co-author Demis Hassabis, an AI researcher at Google DeepMind in London, said at a news conference yesterday.

Future versions of this AI program could be used in more general decision-making applications, from driverless cars to weather prediction, Hassabis said.

Learning by reinforcement

Humans and other animals learn by reinforcement — engaging in behaviors that maximize some reward. For example, pleasurable experiences cause the brain to release the chemical neurotransmitter dopamine. But in order to learn in a complex world, the brain has to interpret input from the senses and use these signals to generalize past experiences and apply them to new situations.
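
The textbook formalization of this idea is Q-learning. The toy sketch below (illustrative only, and far simpler than anything DeepMind built) shows an agent on a five-state track learning purely from reward that walking right pays off; the reward signal plays the role the article assigns to dopamine.

```python
# Reinforcement learning in its smallest form: a tabular Q-learning agent.
import random

N_STATES, GOAL = 5, 4          # states 0..4; reaching state 4 earns reward 1
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value of (left, right) per state
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Mostly pick the better-valued action, but sometimes explore.
        a = random.randrange(2) if random.random() < eps else int(Q[s][1] >= Q[s][0])
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Nudge the estimate toward reward plus the value of what follows.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])  # learned values rise toward the goal
```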

When IBM’s Deep Blue computer defeated chess grandmaster Garry Kasparov in 1997, and the artificially intelligent Watson computer won the quiz show “Jeopardy!” in 2011, these were considered impressive technical feats, but they were mostly preprogrammed abilities, Hassabis said. In contrast, the new DeepMind AI is capable of learning on its own, using reinforcement.

To develop the new AI program, Hassabis and his colleagues created an artificial neural network based on “deep learning,” a machine-learning algorithm that builds progressively more abstract representations of raw data. (Google famously used deep learning to train a network of computers to recognize cats based on millions of YouTube videos, but this type of algorithm is actually involved in many Google products, from search to translation.)

The new AI program is called the “deep Q-network,” or DQN, and it runs on a regular desktop computer.

Playing games

The researchers tested DQN on 49 classic Atari 2600 games, such as “Pong” and “Space Invaders.” The only pieces of information about the game that the program received were the pixels on the screen and the game score. [See video of Google AI playing video games]

“The system learns to play by essentially pressing keys randomly” in order to achieve a high score, study co-author Volodymyr Mnih, also a research scientist at Google DeepMind, said at the news conference.
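
In reinforcement-learning terms, this is epsilon-greedy exploration: act randomly at first, then increasingly favor the actions that have scored well. A sketch with invented numbers (the published system annealed its randomness over far more steps than shown here):

```python
# "Pressing keys randomly" at first, then exploiting what scored well.
import random

def choose_action(q_values, epsilon):
    if random.random() < epsilon:
        return random.randrange(len(q_values))   # explore: press a random key
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

epsilon = 1.0                        # start fully random
for step in range(5):
    q = [0.1, 0.7, 0.2]              # toy action values for one game state
    a = choose_action(q, epsilon)
    print(f"step {step}: epsilon={epsilon:.2f}, pressed action {a}")
    epsilon = max(0.1, epsilon - 0.2)  # gradually rely on learned values
```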

After a couple of weeks of training, DQN performed as well as professional human gamers on many of the games, which ranged from side-scrolling shooters to 3D car-racing games, the researchers said. The AI program scored 75 percent of the human score on more than half of the games, they added.

Sometimes, DQN discovered game strategies that the researchers hadn’t even thought of — for example, in the game “Seaquest,” the player controls a submarine and must avoid, collect or destroy objects at different depths. The AI program discovered it could stay alive by simply keeping the submarine just below the surface, the researchers said.

More complex tasks

DQN also made use of another feature of human brains: the ability to remember past experiences and replay them in order to guide actions (a process that occurs in a seahorse-shaped brain region called the hippocampus). Similarly, DQN stored “memories” from its experiences, and fed these back into its decision-making process during gameplay.
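
The standard name for this mechanism is experience replay. A minimal sketch of such a buffer (illustrative; DeepMind’s implementation details differ):

```python
# Store past transitions and train on random minibatches of them, rather
# than only on the most recent experience.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # the oldest memories fall away

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random sampling breaks up correlations between consecutive
        # frames, which stabilizes learning.
        return random.sample(self.buffer, batch_size)

memory = ReplayBuffer()
for t in range(100):                           # fake gameplay transitions
    memory.store(t, t % 4, float(t % 7 == 0), t + 1)
print(memory.sample(4))                        # a minibatch for training
```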

But human brains don’t remember all experiences the same way. They’re biased to remember more emotionally charged events, which are likely to be more important. Future versions of DQN should incorporate this kind of biased memory, the researchers said.

Now that their program has mastered Atari games, the scientists are starting to test it on more complex games from the ’90s, such as 3D racing games. “Ultimately, if this algorithm can race a car in racing games, with a few extra tweaks, it should be able to drive a real car,” Hassabis said.

In addition, future versions of the AI program might be able to do things such as plan a trip to Europe, booking all the flights and hotels. But “we’re most excited about using AI to help us do science,” Hassabis said.

Bluetooth Pacifiers and Smart Armchairs

The largest display of consumer electronics on the planet, CES, kicked off in Las Vegas on Monday (Jan. 6). Among the nearly 20,000 gizmos on display is a huge assortment of technologies designed with health and wellness in mind.

As expected, visitors to this year’s CES will see an abundance of fitness trackers for athletes of many different sports, from marathon runners to snowboarders. But attendees will also see gadgets and devices that monitor your health when you aren’t wearing workout clothes, such as an armchair said to help you get fit while you watch TV, and a Bluetooth-enabled pacifier that lets parents know when baby is running a fever.

Live Science scoured CES in search of the most novel technology for the health-minded set. Here are our favorite finds.

Being mood tracker

Lots of wristbands at this year’s CES track your steps, calories burned or time spent working out. But one device aims to monitor your emotional health as well. Called Being and made by Zensorium, the device is touted as a way for people to track some of their moods throughout the day.

Built like a smartwatch, the device features sensors that collect heart rate and blood pressure data. This information is then used to assign the wearer a mood — it doesn’t register all moods, but tells you whether you are excited, stressed, normal or calm. If you’re feeling stressed, Being provides tips on how to unwind; for example, it may encourage you to take deep breaths.
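
Zensorium hasn’t published how Being maps vital signs to those four moods, so the following is a purely hypothetical sketch with invented thresholds:

```python
# Hypothetical mood classification from two vital signs. The thresholds
# and the mapping are invented for illustration, not Zensorium's algorithm.
def classify_mood(heart_rate, systolic_bp):
    high_arousal = heart_rate > 90       # invented threshold
    high_pressure = systolic_bp > 130    # invented threshold
    if high_arousal and high_pressure:
        return "stressed"
    if high_arousal:
        return "excited"
    if high_pressure:
        return "normal"
    return "calm"

print(classify_mood(72, 118))   # -> "calm"
print(classify_mood(105, 142))  # -> "stressed"
```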

Being also serves as a more conventional activity tracker, monitoring your steps taken and calories burned, as well as mapping out your sleep cycles. The device, due out in April, will retail for $169.15, according to the company. (Photo credit: Zensorium).

Smart yoga mat

You don’t need much equipment to practice yoga, but for yoga enthusiasts who want to go high-tech, there’s SmartMat, a yoga mat with sensors that can detect your pose and provide feedback on how to improve your form.

Users first calibrate the device by providing their heights and weights, and then performing a series of poses so the mat can determine the length of the user’s limbs and torso. This helps the device provide customized feedback, such as whether you need to adjust your position to get the perfect pose, the company said.

“The feedback you get is very specific for your body,” Leanne Beesley, a representative for SmartMat, told Live Science. And the more you use the mat, the more it learns about your body, Beesley said. [Best Fitness Tracker Bands 2015]

The SmartMat can detect 62 different poses, and can hold a charge for six hours. The mat also has different modes, specialized for use at home or during yoga classes. The device is available now for pre-order at $297, and will begin shipping in July. (Photo Credit: SmartMat)

Smart pacifier

Any parent who has ever tried to take a sick baby’s temperature will appreciate Pacif-i, a new pacifier that doubles as a pediatric thermometer. This smart device connects via Bluetooth to your tablet or smartphone, allowing you to record your kid’s temperature consistently and without any struggle.

The Pacif-i app graphs baby’s temperature throughout the day, which lets parents monitor a fever and check how well a child is responding to medication. Of course, the pacifier can also be used when a child is well. Pacif-i features a built-in proximity sensor that monitors the device’s location, so a smartphone alarm will warn parents if their pacifier-toting kid wanders away.
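
Blue Maestro hasn’t detailed the proximity logic, but a common approach with Bluetooth LE is to watch the received signal strength (RSSI) and sound the alarm when it stays weak. A hypothetical sketch, with invented numbers:

```python
# Hypothetical out-of-range detection from Bluetooth LE signal strength.
RSSI_THRESHOLD = -80   # dBm; weaker than this suggests "out of range"
CONSECUTIVE_WEAK = 3   # require several weak readings to avoid false alarms

def should_alarm(rssi_readings):
    weak_streak = 0
    for rssi in rssi_readings:
        weak_streak = weak_streak + 1 if rssi < RSSI_THRESHOLD else 0
        if weak_streak >= CONSECUTIVE_WEAK:
            return True
    return False

print(should_alarm([-60, -65, -70, -72]))        # child nearby -> False
print(should_alarm([-62, -85, -88, -90, -91]))   # wandering off -> True
```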

Blue Maestro, the company behind the smart pacifier, says the device is due to ship early this year. The expected retail price for Pacif-i is $40.00. (Photo Credit: Blue Maestro).

BMW’s new tech lets your phone check in on your car

The folks at BMW showed up at CES in a big way this year, and that has a lot to do with all the extra tech the company has been shoving under the hood. Demos during the weeklong convention included dares from BMW to try to crash its cars while the new 360° crash-detection system was active, as well as future-focused discussions of self-driving systems that will be able to drop you off at the curb of your favorite mall and then park the car themselves. The future of car tech is a big deal, and you can bet your smartphone is going to be a big part of it.

Check out the demo of BMW’s i3 control panel app, which keeps an eye on your remaining battery life and travel schedule to make sure you can get through the day without issue.

Video: https://www.youtube.com/watch?v=uZqWP_xmtI4

Source: Live Science
