Fighter Jets May Launch Small Satellites to Space

by Elizabeth Howell | February 27, 2015

Small satellites could hitch rides to space on an F-15 fighter jet by next year, according to the Defense Advanced Research Projects Agency (DARPA), the agency responsible for developing new technologies for the U.S. military.

DARPA’s so-called Airborne Launch Assist Space Access (ALASA) program is an ambitious project that aims to launch small satellites more quickly, and reduce the cost of lofting them into orbit. Traditional launches using rockets cost roughly $30,000 per pound ($66,000 per kilogram), DARPA officials have said.

The F-15 jet would take off on a nearly vertical trajectory, with the expendable launch vehicle mounted underneath it. Essentially, the fighter jet acts as the first stage of a rocket, according to DARPA. After the aircraft flies to a high altitude, it releases the launch vehicle, which carries the satellite the rest of the way to orbit, and the jet can then return to land on a conventional runway.

What Happens When Drones Start Thinking on Their Own?

By Andy Miah, University of Salford   |   February 27, 2015

This article was originally published on The Conversation. The publication contributed this article to Live Science’s Expert Voices: Op-Ed & Insights.

You will be forgiven if you missed the Drones for Good competition held recently in Dubai. Despite drone technology really taking off commercially in the last year or so (the potential puns are endless), they remain a relatively niche interest.

Drones – or unmanned aerial vehicles (UAVs) as they are increasingly known – have reached a mass-market tipping point. You can buy them on the high street for the price of a smartphone and, despite a large DIY Drone community, the out-of-the-box versions are pretty extraordinary, fitted with built-in cameras and “follow me” technology, where your drone will follow you as you walk, run, surf, or hang-glide. Their usefulness to professional filmmakers has led to the first New York Drone Film Festival to be held in March 2015.

Technologically speaking, drones’ abilities have all manner of real-world applications. Some of the highlights from the US$1m Drones for Good competition include a drone that delivers a life-ring to those in distress in the water. Swiss company Flyability took the international prize for Gimball, a drone whose innovative design allows it to collide with objects without becoming destabilised or hard to control, making it useful in rescue missions in difficult areas.

The winner of the national prize was a drone that demonstrates the many emerging uses for drones in conservation. In this case, the Wadi drone can help record and document the diversity of flora and fauna, providing a rapid way to assess changes to the environment.

More civilian uses than military

What does this all mean for how we think about drones in society? It wasn’t long ago that the word “drones” was synonymous with death, destruction, and surveillance. Can we expect that we will all soon have our own personal, wearable drone, as the mini-drone Nixie promises? Of course the technology continues to advance within a military context, where drones – not the kind you can pick up, but large, full-scale aircraft – are serious business. There’s even a space drone, NASA’s Boeing X-37, which spent several years in automated orbit, while others are in development to help explore other planets.

There’s no escaping the fact that drones, like a lot of technology now in the mainstream, have trickled down from their military origins. There are graffiti drones, drone bands, Star Wars-style drone racing competitions using virtual reality interfaces, and even theatrical drone choreography, or beautiful drone sculptures in the sky.

There are a few things about drones that are extremely exciting – and controversial. The autonomous capabilities of drones can be breathtaking – watching one fly off at speed on its own feels extremely futuristic. But this is not strictly legal at present, due to the associated risks.

A pilot must always have “line of sight” of the drone and have the capacity to take control. Technically even the latest drones still require a flight path to be pre-programmed, so the drone isn’t really making autonomous decisions yet, although the new DJI Inspire is pretty close. Drone learning has to be the next step in their evolution.

Yet this prospect of artificial intelligence raises further concerns about control: if a drone could become intelligent enough to take off, fly, get up to all kinds of mischief and locate a power source to recharge, all without human intervention or oversight, then where does that leave humanity?

There are also concerns about personal privacy. If Google Glass raised privacy hackles, drones will cause far worse problems. There have already been a few occasions where drones have caused some trouble, such as the one that crashed onto the White House lawn, or the one that strayed onto a runway at London Heathrow. The point at which a drone is involved in something very serious may be the point at which its status as a mainstream toy ends.

This article was originally published on The Conversation. Read the original article.

Future Computers Could Communicate Like Humans

By Elizabeth Palermo, Staff Writer   |   February 27, 2015

In the future, you might be able to talk to computers and robots the same way you talk to your friends.

Researchers are trying to break down the language barrier between humans and computers, as part of a new program from the Defense Advanced Research Projects Agency (DARPA), which is responsible for developing new technologies for the U.S. military. The program — dubbed Communicating with Computers (CwC) — aims to get computers to express themselves more like humans by enabling them to use spoken language, facial expressions and gestures to communicate.

“Today we view computers as tools to be activated by a few clicks or keywords, in large part because we are separated by a language barrier,” Paul Cohen, DARPA’s CwC program manager, said in a statement. “The goal of CwC is to bridge that barrier, and in the process encourage the development of new problem-solving technologies.”

One of the problem-solving technologies that CwC could help further is the computer-based modeling used in cancer research. Computers previously developed by DARPA are already tasked with creating models of the complicated molecular processes that cause cells to become cancerous. But while these computers can churn out models quickly, they’re not so adept at judging if the models are actually plausible and worthy of further research. If the computers could somehow seek the opinions of flesh-and-blood biologists, the work they do would likely be more useful for cancer researchers.

“Because humans and machines have different abilities, collaborations between them might be very productive,” Cohen said.

Of course, getting a computer to collaborate with a person is easier said than done. Putting ideas into words is something that humans do naturally, but communicating is actually more complicated than it may seem, according to DARPA.

“Human communication feels so natural that we don’t notice how much mental work it requires,” Cohen said. “But try to communicate while you’re doing something else — the high accident rate among people who text while driving says it all — and you’ll quickly realize how demanding it is.”

To get computers up to the task of communicating with people, CwC researchers have devised several tasks that require computers and humans to work together toward a common goal. One of the tasks, known as “collaborative composition,” involves storytelling. In this exercise, humans and computers take turns contributing sentences until they’ve composed a short story.

“This is a parlor game for humans, but a tremendous challenge for computers,” Cohen said. “To do it well, the machine must keep track of the ideas in the story, then generate an idea about how to extend the story and express this idea in language.”

Another assignment that the CwC is planning is known as “block world,” which would require humans and computers to communicate to build structures out of toy blocks. There’s a tricky part, though: neither humans nor computers will be told what to build. Instead, they’ll have to work together to make a structure that can stand up of its own accord.

In the future, DARPA researchers hope that computers will be able to do more than play with blocks, of course. If it’s successful, CwC could help advance the fields of robotics and semi-autonomous systems. The programming and preconfigured interfaces currently used in these fields don’t allow for easy communication between machines and humans. Better communications technologies could help robot operators use natural language to describe missions and give directions to the machines they operate, both before and during operations. And in addition to making life easier for human operators, CwC could make it possible for robots to request advice or information from humans when they get into sticky situations.

This article was originally published on Live Science.

C Complete Interview Material – Fundamental Concept

By Jegathesan |   February 27, 2015 [Innovation Begins Here]

Why is C such a powerful language?
Why should C be the first language you learn?
These were the questions I asked myself when I started writing my first program. I tried many languages, but finally I came to C, the most beautiful and charming language of all. I was blown away by the simplicity and elegance of C.

Anyone who wants to become a software developer should start with C. We can say that C is a language that has won the hearts of programmers all over the world.

Though C is simple, it is one of the most powerful languages ever created. In this dynamic IT world, new languages appear every day and become obsolete, so there must be something about C that has kept it relevant for more than four decades; even today there is hardly any language that can match its strength.

We can divide computer languages into three types.
1. High-level language:
This type of language provides good programming productivity by using English-like words.

2. Low-level language:
This type of language is capable of utilizing the computer’s resources to the maximum extent. Such languages can interact with the hardware of the machine directly and can control the memory and the processor.

3. Middle-level language:
A middle-level language is suitable for developing software that interacts with the hardware while still offering the productivity of a high-level language. Both sets of features are available in C, and hence C is called a MIDDLE LEVEL LANGUAGE.

History:
The C language was developed by Dennis Ritchie at Bell Labs in 1972. Don’t read too much history; this much is enough from an interview point of view.

C language properties:
1. Middle-level language.
2. Procedure-oriented language.
3. Portability.
4. General-purpose programming language.
5. Case sensitivity.

Structure:
#include <stdio.h>      ----> Preprocessor directive
int a = 10;             ----> Global variable
int main()              ----> Main function
{
    int b = 10;         ----> Local variable
    statements;
}

#include             -> An instruction from the user to the preprocessor to include a standard library header (containing the declarations of printf, scanf and so on) into the source code.
<stdio.h>, <conio.h> -> These are header files; a header file contains the declarations of predefined library functions.
Global variable      -> A variable declared above main() is called a ‘global variable’. It is available throughout the whole program.
Local variable       -> A variable declared inside a block or function is called a ‘local variable’. It is available only within that block or function.
main()               -> The main() function is the starting point of execution of a program. Every program must have a main() function.
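
As a quick check of the skeleton above, here is a minimal compilable sketch (the variable values and the printf text are illustrative, not part of the original material) showing one global and one local variable:

#include <stdio.h>

int a = 10;                /* global variable: visible to the whole program */

int main(void)
{
    int b = 20;            /* local variable: visible only inside main() */
    printf("global a = %d, local b = %d\n", a, b);
    return 0;              /* expected output: global a = 10, local b = 20 */
}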

Compilation: the internal process
Step 1: [file.c] Your program ends with .c; this is the source file, written in a high-level (or middle-level) language.
Internal process: the preprocessor replaces every preprocessor directive. For example, when you declare macros or #include a header, the replacement happens here.
Step 2: [file.i] The intermediate (preprocessed) file.
Internal process: the compiler is a system utility; first it checks for errors, and then it generates the .o (object) file.
Step 3: [file.o] The object file.
Internal process: the linker is a system utility; it links the object file with the definitions of the standard library functions. A header file contains only declarations; the corresponding definitions are linked in at this stage. The linker then generates the executable file.
Step 4: [a.out] The executable file, stored in secondary memory as an image.
Internal process: the loader loads the image file from secondary memory into main memory, and the program runs.
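
As a rough sketch of the same pipeline on the command line (assuming the GCC toolchain; the file name is illustrative), each stage can be run one at a time:

gcc -E file.c -o file.i     # Step 1: preprocess only (expand #include and macros)
gcc -c file.i -o file.o     # Step 2: compile the preprocessed source into an object file
gcc file.o -o a.out         # Step 3: link against the standard library to produce the executable
./a.out                     # Step 4: the loader brings a.out into memory and runs it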

C: Interesting Q & A

By Jegathesan | February 26, 2015 [Innovation Begins Here]

Run the following code

/* Innovation Begins Here */

#include <stdio.h>

int main(void)
{
    char name[10];
    printf("Enter your name\n");
    gets(name);        /* deliberately kept: this call is what triggers the warning below */
    puts(name);
    return 0;
}

When you compile and link this code, you will surely get a warning message:

warning: the gets function is dangerous and should not be used.

Do you know why?

The gets() function is part of the C standard I/O library. It takes a char pointer as its only argument and tries to fill the buffer that pointer presumably points to with a line of text. It is widely considered to be a bad idea, since it will gladly overflow any buffer it is passed (no bounds checking is performed, in other words, and this can lead to memory corruption). Most C programmers regard use of gets() as a sign of general cluelessness on the part of whoever wrote the code. fgets() is widely advocated as a drop-in replacement for gets(), as fgets() also takes the size of the buffer as a parameter. But this is not entirely satisfactory either, since if the input exceeds the size of the buffer, fgets() will simply null-terminate the string right where the buffer ends and return, leaving the unread characters on standard input.
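
As a hedged sketch of the usual fix (the buffer size and prompt are carried over from the example above; the newline-stripping idiom is a common convention, not something the warning itself prescribes), the same program rewritten with fgets() looks like this:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[10];
    printf("Enter your name\n");
    if (fgets(name, sizeof name, stdin) != NULL)   /* reads at most 9 characters plus '\0' */
    {
        name[strcspn(name, "\n")] = '\0';          /* drop the trailing newline, if present */
        puts(name);
    }
    return 0;
}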

Artificial Intelligence Advances into ‘Deep Learning’

Ahmed Banafa, Kaplan University School of Information Technology | February 26, 2015 [Innovation Begins Here]

Ahmed Banafa is a Kaplan University faculty member in the School of Information Technology, with experience in IT operations and management and a research background in related techniques and analysis. He is a certified Microsoft Office Specialist, and he has served as a reviewer and technical contributor for the publication of several business and technical books.

Deep learning, an emerging topic in artificial intelligence (AI), is quickly becoming one of the most sought-after fields in computer science. A subcategory of machine learning, deep learning deals with the use of neural networks to improve things like speech recognition, computer vision and natural language processing. In the last few years, deep learning has helped forge advances in areas as diverse as object perception, machine translation and voice recognition — all research topics that have long been difficult for AI researchers to crack.

Neural networks

In information technology, a neural network is a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory.

Typically, a neural network is initially “trained” or fed large amounts of data and rules about data relationships (for example, “A grandfather is older than a person’s father”). A program can then tell the network how to behave in response to an external stimulus (for example, to input from a computer user who is interacting with the network) or can initiate activity on its own (within the limits of its access to the external world).
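
To make that picture concrete, here is a minimal, hedged sketch in C (the inputs, weights, bias and choice of a sigmoid activation are purely illustrative, not taken from the article) of the weighted-sum-plus-activation step that a single node in such a network computes; training amounts to nudging the weights so these outputs better match the desired responses:

#include <stdio.h>
#include <math.h>    /* for exp(); link with -lm */

/* One artificial "node": a weighted sum of its inputs passed through an activation function. */
static double node_output(const double *inputs, const double *weights, double bias, int n)
{
    double sum = bias;
    for (int i = 0; i < n; i++)
        sum += inputs[i] * weights[i];
    return 1.0 / (1.0 + exp(-sum));    /* sigmoid activation squashes the result into (0, 1) */
}

int main(void)
{
    double inputs[3]  = { 0.5, -1.2, 3.0 };    /* illustrative input signal */
    double weights[3] = { 0.8,  0.4, -0.2 };   /* illustrative learned weights */
    printf("node output = %f\n", node_output(inputs, weights, 0.1, 3));
    return 0;
}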

Deep learning vs. machine learning

To understand what deep learning is, it’s first important to distinguish it from other disciplines within the field of AI.

One outgrowth of AI was machine learning, in which the computer extracts knowledge through supervised experience. This typically involved a human operator helping the machine learn by giving it hundreds or thousands of training examples, and manually correcting its mistakes.

While machine learning has become dominant within the field of AI, it does have its problems. For one thing, it’s massively time consuming. For another, it’s still not a true measure of machine intelligence since it relies on human ingenuity to come up with the abstractions that allow a computer to learn.

Unlike machine learning, deep learning is mostly unsupervised. It involves, for example, creating large-scale neural nets that allow the computer to learn and “think” by itself — without the need for direct human intervention.

Deep learning “really doesn’t look like a computer program,” said Gary Marcus, a psychologist and AI expert at New York University, in a recent interview on NPR. Ordinary computer code is written in very strict logical steps, he said, “But what you’ll see in deep learning is something different; you don’t have a lot of instructions that say: ‘If one thing is true do this other thing.’” [Humanity Must ‘Jail’ Dangerous AI to Avoid Doom, Expert Says]

Instead of linear logic, deep learning is based on theories of how the human brain works. The program is made of tangled layers of interconnected nodes. It learns by rearranging connections between nodes after each new experience.

Deep learning has shown potential as the basis for software that could work out the emotions or events described in text (even if they aren’t explicitly referenced), recognize objects in photos, and make sophisticated predictions about people’s likely future behavior.

The Deep Learning Game

In 2011, Google started the Google Brain project, which created a neural network trained with deep-learning algorithms and which famously proved capable of recognizing high-level concepts.

Last year, Facebook established its AI Research Unit, using deep-learning expertise to help create solutions that will better identify faces and objects in the 350 million photos and videos uploaded to Facebook each day.

Another example of deep learning in action is voice recognition, such as Google Now and Apple’s Siri.

The future

Deep learning is showing a great deal of promise — and it will make self-driving cars and robotic butlers a real possibility. Such systems will still be limited, but what they can do was unthinkable just a few years ago, and it’s advancing at an unprecedented pace. The ability to analyze massive data sets and use deep learning in computer systems that can adapt to experience, rather than depending on a human programmer, will lead to breakthroughs. These range from drug discovery to the development of new materials to robots with a greater awareness of the world around them.

Source: Live Science

Google’s Artificial Intelligence Can Probably Beat You at Video Games

by Tanya Lewis, Staff Writer |   February 26, 2015 [Innovation Begins Here]

Computers have already beaten humans at chess and “Jeopardy!,” and now they can add one more feather to their caps: the ability to best humans in several classic arcade games.

A team of scientists at Google created an artificially intelligent computer program that can teach itself to play Atari 2600 video games, using only minimal background information to learn how to play.

By mimicking some principles of the human brain, the program is able to play at the same level as a professional human gamer, or better, on most of the games, researchers reported today (Feb. 25) in the journal Nature. [Super-Intelligent Machines: 7 Robotic Futures]

This is the first time anyone has built an artificial intelligence (AI) system that can learn to excel at a wide range of tasks, study co-author Demis Hassabis, an AI researcher at Google DeepMind in London, said at a news conference yesterday.

Future versions of this AI program could be used in more general decision-making applications, from driverless cars to weather prediction, Hassabis said.

Learning by reinforcement

Humans and other animals learn by reinforcement — engaging in behaviors that maximize some reward. For example, pleasurable experiences cause the brain to release the chemical neurotransmitter dopamine. But in order to learn in a complex world, the brain has to interpret input from the senses and use these signals to generalize past experiences and apply them to new situations.

When IBM’s Deep Blue computer defeated chess grandmaster Garry Kasparov in 1997, and the artificially intelligent Watson computer won the quiz show “Jeopardy!” in 2011, these were considered impressive technical feats, but they were mostly preprogrammed abilities, Hassabis said. In contrast, the new DeepMind AI is capable of learning on its own, using reinforcement.

To develop the new AI program, Hassabis and his colleagues created an artificial neural network based on “deep learning,” a machine-learning algorithm that builds progressively more abstract representations of raw data. (Google famously used deep learning to train a network of computers to recognize cats based on millions of YouTube videos, but this type of algorithm is actually involved in many Google products, from search to translation.)

The new AI program is called the “deep Q-network,” or DQN, and it runs on a regular desktop computer.
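
For readers curious where the “Q” comes from, here is a minimal, hedged sketch of the classic Q-learning update that underlies the approach (the states, actions, reward and learning parameters are invented for illustration; in DQN the small table below is replaced by a deep neural network that estimates the same action values from raw screen pixels):

#include <stdio.h>

#define N_STATES  4
#define N_ACTIONS 2

static double Q[N_STATES][N_ACTIONS];   /* estimated value of taking each action in each state */

/* One Q-learning step: Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a)) */
static void q_update(int s, int a, double reward, int s_next, double alpha, double gamma)
{
    double best_next = Q[s_next][0];
    for (int i = 1; i < N_ACTIONS; i++)
        if (Q[s_next][i] > best_next)
            best_next = Q[s_next][i];
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a]);
}

int main(void)
{
    /* Illustrative single experience: in state 0, action 1 earned a reward of 1.0 and led to state 2. */
    q_update(0, 1, 1.0, 2, 0.1, 0.99);
    printf("Q[0][1] = %f\n", Q[0][1]);
    return 0;
}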

Playing games

The researchers tested DQN on 49 classic Atari 2600 games, such as “Pong” and “Space Invaders.” The only pieces of information about the game that the program received were the pixels on the screen and the game score. [See video of Google AI playing video games]

“The system learns to play by essentially pressing keys randomly” in order to achieve a high score, study co-author Volodymyr Mnih, also a research scientist at Google DeepMind, said at the news conference.

After a couple of weeks of training, DQN performed as well as professional human gamers on many of the games, which ranged from side-scrolling shooters to 3D car-racing games, the researchers said. The AI program scored at least 75 percent of the human score on more than half of the games, they added.

Sometimes, DQN discovered game strategies that the researchers hadn’t even thought of — for example, in the game “Seaquest,” the player controls a submarine and must avoid, collect or destroy objects at different depths. The AI program discovered it could stay alive by simply keeping the submarine just below the surface, the researchers said.

More complex tasks

DQN also made use of another feature of human brains: the ability to remember past experiences and replay them in order to guide actions (a process that occurs in a seahorse-shaped brain region called the hippocampus). Similarly, DQN stored “memories” from its experiences, and fed these back into its decision-making process during gameplay.
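
A rough sketch of what such a stored “memory” can look like in code (the fields, the buffer size and the uniform random sampling are assumptions for illustration, not details taken from the paper) is a fixed-size buffer of past transitions that the learner draws from during training:

#include <stdio.h>
#include <stdlib.h>

#define CAPACITY 1000    /* illustrative size of the replay memory */

/* One remembered experience: the state seen, the action taken, the reward received, the state that followed. */
struct transition {
    int    state;
    int    action;
    double reward;
    int    next_state;
};

static struct transition memory[CAPACITY];
static int stored = 0, next_slot = 0;

/* Store a new experience, overwriting the oldest one once the buffer is full. */
static void remember(struct transition t)
{
    memory[next_slot] = t;
    next_slot = (next_slot + 1) % CAPACITY;
    if (stored < CAPACITY)
        stored++;
}

/* Replay a randomly chosen past experience so it can be fed back into learning. */
static struct transition replay(void)
{
    return memory[rand() % stored];
}

int main(void)
{
    struct transition t = { 0, 1, 1.0, 2 };   /* illustrative experience */
    remember(t);
    struct transition again = replay();
    printf("replayed reward = %f\n", again.reward);
    return 0;
}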

But human brains don’t remember all experiences the same way. They’re biased to remember more emotionally charged events, which are likely to be more important. Future versions of DQN should incorporate this kind of biased memory, the researchers said.

Now that their program has mastered Atari games, the scientists are starting to test it on more complex games from the ’90s, such as 3D racing games. “Ultimately, if this algorithm can race a car in racing games, with a few extra tweaks, it should be able to drive a real car,” Hassabis said.

In addition, future versions of the AI program might be able to do things such as plan a trip to Europe, booking all the flights and hotels. But “we’re most excited about using AI to help us do science,” Hassabis said.
