Syber Group

Google Continues A.I. Expansion

November 4, 2014
Filed under Computing

Google Inc is expanding its artificial intelligence operation, hiring more than half a dozen leading academics and experts in the field and announcing a partnership with Oxford University to “accelerate” its efforts.

Google will make a “substantial contribution” to establish a research partnership with Oxford’s computer science and engineering departments, the company said on Thursday. The field aims to develop intelligent machines and software, often by emulating human-like intelligence.

Google did not provide any financial details about the partnership, saying only in a post on its blog that it will include a program of student internships and a series of joint lectures and workshops “to share knowledge and expertise.”

Google, which is based in Mountain View, California, is building up its artificial intelligence capabilities as it strives to maintain its dominance in the Internet search market and to develop new products such as robotics and self-driving cars. In January, Google acquired artificial intelligence company DeepMind for $400 million, according to media reports.

The new hires will be joining Google’s DeepMind team, and include three artificial intelligence experts whose work has focused on improving computer visual recognition systems. Among them is Oxford Professor Andrew Zisserman, a three-time winner of the Marr Prize for computer vision.

The four founders of Dark Blue Labs will also be joining Google, where they will lead efforts to help machines “better understand what users are saying to them.”

Google said that three of the professors will hold joint appointments at Oxford, continuing to work part time at the university.

Source

Will Computers Obtain Common Sense?

December 10, 2013
Filed under Computing

Though it may appear that computers are being dumbed down by the constant stream of images of cats playing the piano or dogs playing in the snow, one computer is viewing those same images and getting smarter and smarter.

A computer cluster at Carnegie Mellon University running the so-called Never Ending Image Learner (NEIL) operates 24 hours a day, seven days a week, searching the Internet for images, studying them on its own and building a visual database. The process, scientists say, is giving the computer an increasing amount of common sense.

“Images are the best way to learn visual properties,” said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute. “Images also include a lot of common sense information about the world. People learn this by themselves and, with [this program], we hope that computers will do so as well.”

The computers have been running the program since late July, analyzing some three million images. The system has identified 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images, according to the university.

The program has connected the dots to learn 2,500 associations from thousands of instances.

Thanks to advances in computer vision that enable software to identify and label objects found in images and recognize colors, materials and positioning, the Carnegie Mellon cluster is better understanding the visual world with each image it analyzes.

The program also is set up to enable a computer to make common sense associations, like buildings are vertical instead of lying on their sides, people eat food, and cars are found on roads. All the things that people take for granted, the computers now are learning without being told.
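A minimal sketch of how associations like these might fall out of co-occurrence statistics alone. The image labels and the 50% threshold below are invented for illustration; NEIL’s actual pipeline is far more elaborate.

```python
from collections import Counter
from itertools import combinations

# Hypothetical labeled data: each image is the set of labels a vision
# system assigned to it (these examples are made up).
images = [
    {"car", "road", "building"},
    {"car", "road", "person"},
    {"person", "food", "table"},
    {"car", "road"},
    {"person", "food"},
]

pair_counts = Counter()   # how often each label pair appears together
label_counts = Counter()  # how often each label appears at all
for labels in images:
    label_counts.update(labels)
    pair_counts.update(combinations(sorted(labels), 2))

# Keep pairs that co-occur in at least half of each member's images:
# a crude stand-in for "learning an association" such as cars <-> roads.
associations = [
    (a, b) for (a, b), n in pair_counts.items()
    if n / label_counts[a] >= 0.5 and n / label_counts[b] >= 0.5
]
print(sorted(associations))  # [('car', 'road'), ('food', 'person'), ('food', 'table')]
```

With enough images, frequent pairings survive the threshold while incidental ones (a building that happened to appear beside a car once) are filtered out, which is the intuition behind learning associations from sheer data volume.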

“People don’t always know how or what to teach computers,” said Abhinav Shrivastava, a robotics Ph.D. student at CMU and a lead researcher on the program. “But humans are good at telling computers when they are wrong.”

He noted, for instance, that a human might need to tell the computer that pink isn’t just the name of a singer but also is the name of a color.

While computer scientists have previously tried to “teach” computers about different real-world associations by compiling structured data for them, the job has always been far too vast to tackle successfully. CMU noted that Facebook alone hosts more than 200 billion images.

The only way for computers to scan enough images to understand the visual world is to let them do it on their own.

“What we have learned in the last five to 10 years of computer vision research is that the more data you have, the better computer vision becomes,” Gupta said.

CMU’s computer learning program is supported by Google and the Office of Naval Research.

Source

IBM Still Talking Up SyNAPSE

August 19, 2013
Filed under Computing

IBM has unveiled the latest stage in its plan to build a computer system that mimics the human brain, handling tasks that are relatively easy for humans but difficult for computers.

As part of the firm’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, IBM researchers have been working with Cornell University and Inilabs to create the programming language with $53m in funding from the Defense Advanced Research Projects Agency (DARPA).

First unveiled two years ago this month, the technology – which mimics both the size and power of humanity’s most complex organ – looks to solve the problems created by traditional computing models when handling vast amounts of high speed data.

IBM explained the new programming language, perhaps not in layman’s terms, by saying it “breaks the mould of sequential operation underlying today’s von Neumann architectures and computers” and instead “is tailored for a new class of distributed, highly interconnected, asynchronous, parallel, large-scale cognitive computing architectures”.

That, in English, basically means that it could be used to create next generation intelligent sensor networks that are capable of perception, action and cognition, the sorts of mental processes that humans take for granted and perform with ease.

Dr Dharmendra Modha, who heads the programme at IBM Research, expanded on what this might mean for the future, saying that the time has come to move forward into the next stage of information technology.

“Today, we’re at another turning point in the history of information technology. The era that Backus and his contemporaries helped create, the programmable computing era, is being superseded by the era of cognitive computing.

“Increasingly, computers will gather huge quantities of data, reason over the data, and learn from their interactions with information and people. These new capabilities will help us penetrate complexity and make better decisions about everything from how to manage cities to how to solve confounding business problems.”

The hardware for IBM’s cognitive computers mimics the brain, as it is built around small “neurosynaptic cores”. The cores are modeled on the brain and feature 256 “neurons” (processors), 256 “axons” (memory) and 64,000 “synapses” (communications between neurons and axons).
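A toy model of the core layout just described: a 256×256 crossbar of axon-to-neuron connections holds 65,536 potential synapses (consistent with the roughly 64,000 figure above). The integrate-and-fire dynamics and the threshold here are simplified stand-ins, not IBM’s actual neuron model.

```python
import numpy as np

# One "neurosynaptic core": 256 input axons, 256 neurons, and a binary
# crossbar marking which axon-neuron connections (synapses) exist.
N_AXONS, N_NEURONS = 256, 256
rng = np.random.default_rng(0)
synapses = rng.integers(0, 2, size=(N_AXONS, N_NEURONS))  # 0/1 connections
THRESHOLD = 64  # illustrative firing threshold

def step(axon_spikes, potentials):
    """Integrate incoming spikes; fire neurons whose potential crosses threshold."""
    potentials = potentials + axon_spikes @ synapses  # accumulate weighted input
    fired = potentials >= THRESHOLD
    potentials = potentials.copy()
    potentials[fired] = 0  # reset neurons that fired
    return fired, potentials

potentials = np.zeros(N_NEURONS)
spikes = rng.integers(0, 2, size=N_AXONS)  # a random input spike pattern
fired, potentials = step(spikes, potentials)
print(f"{synapses.size} synapses, {int(fired.sum())} neurons fired this step")
```

The event-driven, massively parallel character of this loop, rather than a sequential fetch-execute cycle, is what the new programming language is meant to express.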

IBM suggested that potential uses for this technology could include a pair of glasses which assist the visually impaired when navigating through potentially hazardous environments. Taking in vast amounts of visual and sound data, the augmented reality glasses would highlight obstacles such as kerbs and cars, and steer the user clear of danger.

Other uses could include intelligent microphones that keep track of who is speaking to create an accurate transcript of any conversation.

In the long term, IBM hopes to build a cognitive computer scaled to 100 trillion synapses. This would fit inside a space with a volume of no more than two litres while consuming less than one kilowatt of power.
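A back-of-envelope check of that target, assuming cores like the 256-neuron, 64,000-synapse unit described earlier remain the building block (an assumption; IBM may well scale the core itself):

```python
# Hypothetical scale check: how many cores of the kind described above
# would a 100-trillion-synapse system require?
SYNAPSES_PER_CORE = 64_000
TARGET_SYNAPSES = 100 * 10**12  # 100 trillion

cores_needed = TARGET_SYNAPSES // SYNAPSES_PER_CORE
print(f"{cores_needed:,} cores")  # roughly 1.6 billion cores
```

Fitting on the order of a billion such cores into two litres at under a kilowatt is what makes the long-term goal so ambitious.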

Source

Future PCs Will Be Constant Learners

February 24, 2012
Filed under Computing

Tomorrow’s computers will constantly improve their understanding of the data they work with, which in turn will aid them in providing users with more appropriate information, predicted the software mastermind behind IBM’s Watson system.

Computers in the future “will learn through interacting with us. They will not necessarily require us to sit down and explicitly program them, but through continuous interaction with humans they will start to understand the kind of data and the kind of computation we need,” said IBM Fellow David Ferrucci, who was IBM’s principal investigator for Watson technologies. Ferrucci spoke at the IBM Smarter Computing Executive Forum, held Wednesday in New York.

“This notion of learning through collaboration and interaction is where we think computing is going,” he said.

IBM’s Watson project was an exercise for the company in how to build machines that can better anticipate user needs.

IBM researchers spent four years developing Watson, a supercomputer designed specifically to compete in the TV quiz show “Jeopardy,” a contest that took place last year. On “Jeopardy,” contestants are asked a range of questions across a wide variety of topic areas.

Watson won its “Jeopardy” match. Now IBM thinks the Watson computing model can have a wide range of uses.

Source