
Interest Grows In Collaborative Robots

July 5, 2016 by  
Filed under Around The Net

Robots that work alongside people as assistants are set to upend the world of industrial robotics by putting automation within reach of many small and medium-sized companies for the first time, according to industry experts.

Collaborative robots, or “cobots”, tend to be inexpensive, easy to use and safe to be around. They can easily be adapted to new tasks, making them well-suited to small-batch manufacturing and ever-shortening product cycles.

Cobots can typically lift loads of up to 10 kilograms (22 lb) and can be small enough to sit on top of a workbench. They can help with repetitive tasks such as picking and placing, packaging, or gluing and welding.

Some can repeat a task after being guided once through the process by a worker and recording it. The price of a cobot can be as little as $10,000, although typically they cost two to three times that.
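
To make the “guided once, then repeat” idea concrete, here is a minimal record-and-replay sketch in Python. It assumes a hypothetical `robot` object exposing `read_joint_angles()` and `move_to_joints()`; real cobots expose vendor-specific interfaces for this kind of kinesthetic teaching, so treat this purely as an illustration.

```python
import time

class WaypointTeacher:
    """Minimal sketch of teach-by-demonstration: record, then replay.

    `robot` is a hypothetical object with read_joint_angles() and
    move_to_joints(); substitute a real vendor API in practice.
    """

    def __init__(self, robot, sample_hz=10):
        self.robot = robot
        self.period = 1.0 / sample_hz
        self.waypoints = []

    def record(self, duration_s):
        # Sample joint positions while a worker guides the arm by hand.
        end = time.time() + duration_s
        while time.time() < end:
            self.waypoints.append(self.robot.read_joint_angles())
            time.sleep(self.period)

    def replay(self):
        # Drive the arm back through the recorded trajectory.
        for joints in self.waypoints:
            self.robot.move_to_joints(joints)
            time.sleep(self.period)
```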

The global cobot market is set to grow from $116 million last year to $11.5 billion by 2025, capital goods analysts at Barclays estimate. That would be roughly equal to the size of the entire industrial robotics market today.
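
A quick sanity check on that forecast: the implied compound annual growth rate works out to roughly 58 percent per year, as the short calculation below shows (figures taken from the Barclays estimate cited above).

```python
# Implied CAGR for the cobot market forecast: $116 million (2015)
# growing to $11.5 billion by 2025, i.e. over ten years.
start, end, years = 116e6, 11.5e9, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.0%} per year")  # about 58%
```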

“By 2020 it will be a game-changer,” said Stefan Lampa, head of robotics of Germany’s Kuka, during a panel discussion organized by the International Federation of Robotics (IFR) at the Automatica trade fair in Munich.

Growth in industrial robot unit sales slowed to 12 percent last year from 29 percent in 2014, the IFR said on Wednesday, weighed down by a sharp fall in sales to top buyer China.

The world’s top industrial robot makers – Japan’s Fanuc and Yaskawa, Swiss ABB and Kuka – all have collaborative robots on the market, although sales are not yet significant for them.

But the market leader and pioneer is Denmark’s Universal Robots, a start-up that sold its first cobot in 2009 and was acquired by U.S. automatic test equipment maker Teradyne for $285 million last year.

Source: http://www.thegurureview.net/aroundnet-category/interest-grows-in-collaborative-robots.html

Google Says A.I. Is The Next Big Thing

May 3, 2016 by  
Filed under Computing

Every decade or so, a new era of computing comes along that influences everything we do. Much of the 90s was about client-server and Windows PCs. By the aughts, the Web had taken over and every advertisement carried a URL. Then came the iPhone, and we’re in the midst of a decade defined by people tapping myopically into tiny screens.

So what comes next, when mobile gives way to something else? Mark Zuckerberg thinks it’s VR. There’s likely to be a lot of that, but there’s a more foundational technology that makes VR possible and permeates other areas besides.

“I do think in the long run we will evolve in computing from a mobile-first to an A.I.-first world,” said Sundar Pichai, Google’s CEO, answering an analyst’s question during parent company Alphabet’s quarterly earnings call Thursday.

He’s not predicting that mobile will go away, of course, but that the breakthroughs of tomorrow will come via smarter uses of data rather than clever uses of mobile devices like those that brought us Uber and Instagram.

Forms of artificial intelligence are already being used to sort photographs, fight spam and steer self-driving cars. The latest trend is in bots, which use A.I. services on the back end to complete tasks automatically, like ordering flowers or booking a hotel.

Google believes it has a lead in A.I. and the related field of machine learning, which Alphabet’s Eric Schmidt has already pegged as key to Google’s future.

Machine learning is one of the ways Google hopes to distinguish its emerging cloud computing business from those of rivals like Amazon and Microsoft, Pichai said.

Source: http://www.thegurureview.net/aroundnet-category/google-says-a-i-is-the-next-big-thing-in-computing.html

Google Moves To Drop CAPTCHA

December 16, 2014 by  
Filed under Around The Net

Google announced that it is trying to get rid of those annoying CAPTCHAs required by websites. CAPTCHA is short for Completely Automated Public Turing test to tell Computers and Humans Apart.

Instead of requiring that users fill in the letters and numbers shown in a distorted image, sites that use Google’s reCAPTCHA service will be able to use just one click, answering a simple question: Are you a robot?

“reCAPTCHA protects the websites you love from spam and abuse,” wrote Vinay Shet, product manager for Google’s reCAPTCHA service, in a blog post. “For years, we’ve prompted users to confirm they aren’t robots by asking them to read distorted text and type it into a box… But, we figured it would be easier to just directly ask our users whether or not they are robots. So, we did!”

Google on Wednesday began rolling out a new API that rethinks the reCAPTCHA experience.
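
For a sense of how websites typically consume the service, a minimal server-side check posts the token returned by the reCAPTCHA widget to Google’s siteverify endpoint. This is a sketch only; the secret key and token below are placeholders supplied by the site owner, and error handling is omitted.

```python
import requests  # third-party HTTP library

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def is_human(recaptcha_token, secret_key, client_ip=None):
    """Ask the reCAPTCHA service whether the token submitted by a visitor is valid."""
    payload = {"secret": secret_key, "response": recaptcha_token}
    if client_ip:
        payload["remoteip"] = client_ip  # optional, per the reCAPTCHA docs
    result = requests.post(VERIFY_URL, data=payload, timeout=5).json()
    return result.get("success", False)
```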

CAPTCHA “can be hard to read and frustrating for people, particularly on mobile devices,” said Zeus Kerravala, an analyst with ZK Research. “People often have to put in the text several times. On the surface, this seems a good way to improve the user experience. It still requires human intervention, just something simpler.”

CAPTCHAs were created to foil computer programs that hackers or spammers use to troll for access to websites or to collect email addresses.

Google said CAPTCHAs have become less effective than they once were, yet they remain frustrating for everyday users.

“CAPTCHAs have long relied on the inability of robots to solve distorted text,” wrote Shet. “However, our research recently showed that today’s artificial intelligence technology can solve even the most difficult variant of distorted text at 99.8% accuracy. Thus distorted text, on its own, is no longer a dependable test.”

The new API, along with Google’s ability to analyze a user’s actions — before, during, and after clicking on the reCAPTCHA box — lets the new technology figure out whether the user is human or not.

“The new API is the next step in this steady evolution,” Shet stated. “Now humans can just check the box and in most cases, they’re through the challenge.”

Source

Google Continues A.I. Expansion

November 4, 2014 by  
Filed under Computing

Google Inc is expanding its artificial intelligence operations, hiring more than half a dozen leading academics and experts in the field and announcing a partnership with Oxford University to “accelerate” its efforts.

Google will make a “substantial contribution” to establish a research partnership with Oxford’s computer science and engineering departments, the company said on Thursday. The partnership supports Google’s work to develop the intelligence of machines and software, often with the aim of emulating human-like intelligence.

Google did not provide any financial details about the partnership, saying only in a post on its blog that it will include a program of student internships and a series of joint lectures and workshops “to share knowledge and expertise.”

Google, which is based in Mountain View, California, is building up its artificial intelligence capabilities as it strives to maintain its dominance in the Internet search market and to develop new products such as robotics and self-driving cars. In January Google acquired artificial intelligence company DeepMind for $400 million, according to media reports.

The new hires will be joining Google’s DeepMind team, including three artificial intelligence experts whose work has focused on improving computer visual recognition systems. Among them is Oxford Professor Andrew Zisserman, a three-time winner of the Marr Prize for computer vision.

The four founders of Dark Blue Labs will also be joining Google, where they will be leading efforts to help machines “better understand what users are saying to them.”

Google said that three of the professors will hold joint appointments at Oxford, continuing to work part time at the university.

Source

Scientists Develop Anti-Faking PC

April 3, 2014 by  
Filed under Computing

Scientists have developed a computer system with sophisticated pattern recognition abilities that performed more impressively than humans in differentiating between people experiencing genuine pain and people who were just pretending.

In a study published in the journal Current Biology, human subjects did no better than chance – about 50 percent – in correctly judging if a person was feigning pain after seeing videos in which some people were and some were not.

The computer was right 85 percent of the time. Why? The researchers say its pattern-recognition abilities successfully spotted distinctive aspects of facial expressions, particularly involving mouth movements, that people generally missed.

“We all know that computers are good at logic processes and they’ve long out-performed humans on things like playing chess,” said Marian Bartlett of the Institute for Neural Computation at the University of California-San Diego, one of the researchers.

“But in perceptual processes, computers lag far behind humans and have a lot of trouble with perceptual processes that humans tend to find easy, including speech recognition and visual recognition. Here’s an example of a perceptual process that the computer is able to do better than human observers,” Bartlett said in a telephone interview.

For the experiment, 25 volunteers each recorded two videos.

In the first, the volunteers each immersed an arm in lukewarm water for a minute and were told to try to fool an expert into thinking they were in pain. In the second, they immersed an arm in a bucket of frigid ice water for a minute, a genuinely painful experience, and were given no instructions on what to do with their facial expressions.

The researchers asked 170 other volunteers to assess which people were in real discomfort and which were faking it.

After they registered a 50 percent accuracy rate, which is no better than a coin flip, the researchers gave the volunteers training in recognizing when someone was faking pain. Even after this, the volunteers managed an accuracy rate of only 55 percent.

The computer’s vision system included a video camera that captured images of a person’s facial expressions and decoded them. The computer had been programmed to recognize that certain combinations of facial movements suggested true pain while others suggested faked pain.
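
The article does not say which model the researchers used, but the general approach it describes — reducing each video to a vector of facial-movement features and training a classifier to separate genuine from faked pain — can be sketched with a standard machine-learning pipeline. The data below is synthetic and the feature-extraction step is assumed to have happened elsewhere.

```python
# Classification step only: assumes each video has already been reduced
# to a fixed-length vector of facial-movement features (e.g. intensities
# of mouth-related movements). Data and labels here are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))        # 50 videos x 20 facial features (placeholder)
y = rng.integers(0, 2, size=50)      # 1 = genuine pain, 0 = faked (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```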

Source

Google Buys A.I. Firm

February 7, 2014 by  
Filed under Computing

Google has purchased DeepMind Technologies, an artificial intelligence company in London, reportedly for $400 million.

A Google representative confirmed the acquisition via email, but said the company isn’t providing any additional information at this time.

News website Re/code said in a report this past Sunday that Google was paying $400 million for the company, founded by games prodigy and neuroscientist Demis Hassabis, Shane Legg and Mustafa Suleyman.

The company claims on its website that it combines “the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.” It said its first commercial applications are in simulations, e-commerce and games.

Google announced this month it was paying $3.2 billion in cash to acquire Nest, a maker of smart smoke alarms and thermostats, in what is seen as a bid to expand into the connected home market. In January it also acquired a security firm called Impermium to boost its expertise in countering spam and abuse.

The Internet giant said on a research site that much of its work on language, speech, translation, and visual processing relies on machine learning and artificial intelligence. “In all of those tasks and many others, we gather large volumes of direct or indirect evidence of relationships of interest, and we apply learning algorithms to generalize from that evidence to new cases of interest,” it said.

In May, Google launched a Quantum Artificial Intelligence Lab, hosted by NASA’s Ames Research Center. The Universities Space Research Association was to invite researchers around the world to share time on the quantum computer from D-Wave Systems, to study how quantum computing can advance machine learning.

Source

Can Robots Run On (NH2)2CO?

November 19, 2013 by  
Filed under Around The Net

Scientists have discovered a way to power future robots using an unusual source — urine.

Researchers at the University of the West of England, Bristol and the University of Bristol collaborated to build a system that will enable robots to function without batteries or being plugged into an electrical outlet.

Based on the functioning of the human heart, the system is designed to pump urine into the robot’s “engine room,” converting the waste into electricity and enabling the robot to function completely on its own.

Scientists are hoping the system, which can hold 24.5 ml of urine, could be used to power future generations of robots, or what they’re calling EcoBots.

“In the city environment, they could re-charge using urine from urinals in public lavatories,” said Peter Walters, a researcher with the University of the West of England. “In rural environments, liquid waste effluent could be collected from farms.”

In the past 10 years, researchers have built four generations of EcoBots, each able to use microorganisms to digest the waste material and generate electricity from it, the university said.

Along with using human and animal urine, the robotic system also can create power by using rotten fruit and vegetables, dead flies, waste water and sludge.

Ioannis Ieropoulos, a scientist with the Bristol Robotics Laboratory, explained that the microorganisms work inside microbial fuel cells where they metabolize the organics, converting them into carbon dioxide and electricity.

Like the human heart, the robotic system works by using artificial muscles that compress a soft area in the center of the device, forcing fluid to be expelled through an outlet and delivered to the fuel cells. The artificial muscles then relax and go through the process again for the next cycle.

“The artificial heartbeat is mechanically simpler than a conventional electric motor-driven pump by virtue of the fact that it employs artificial muscle fibers to create the pumping action, rather than an electric motor, which is by comparison a more complex mechanical assembly,” Walters said.

Source

Inventor Predicts Future Of 3D

October 1, 2013 by  
Filed under Around The Net

Pablos Holman predicts that in the not too distant future our diets will be tailored to our metabolisms, adding a few bits of broccoli, a smattering of beets and some meat — all extruded from a 3D printer in an appetizing form to please our palates.

Holman is a futurist and inventor at the Intellectual Ventures Laboratory in Bellevue, Wash., where he and others work on futuristic projects like printable food. He was not alone in speaking on the topic at the Inside 3D Printing Conference last week.

Avi Reichentall, CEO of 3D Systems, one of the largest consumer 3D printer companies, has already been able to configure his machines to create a variety of sugary goods, including cakes and candy. The sweets were on display with ornate designs.

Reichentall said consumers can expect his company to build a machine that will take a place next to the coffee maker on a kitchen counter, but instead of a caffeine shot, it will offer a sugar rush.

“We are working on a chocolate printer. I want a chocolate printer in my kitchen. I want it to be as cool as a Keurig coffee maker,” Reichentall said. “We now have 3D printed sugar. We’re going to bring to pastry chefs and confectionaries and bakers a whole range of new sugar printing capabilities.

“This is coming to a marketplace near you very soon,” he said.

While Reichentall focuses on desserts, Holman is busy with main courses, creating machines that can take freeze-dried food and hydrate it as it is being extruded through nozzles to create an eye-pleasing meal.

Source

IBM Still Talking Up SyNAPSE

August 19, 2013 by  
Filed under Computing

IBM has unveiled the latest stage in its plan to build a computer system modeled on the human brain, one that handles tasks that are relatively easy for humans but difficult for computers.

As part of the firm’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, IBM researchers have been working with Cornell University and Inilabs to create the programming language with $53m in funding from the Defense Advanced Research Projects Agency (DARPA).

First unveiled two years ago this month, the technology – which mimics both the size and power of humanity’s most complex organ – looks to solve the problems created by traditional computing models when handling vast amounts of high speed data.

IBM explained the new programming language, perhaps not in layman’s terms, by saying it “breaks the mould of sequential operation underlying today’s von Neumann architectures and computers” and instead “is tailored for a new class of distributed, highly interconnected, asynchronous, parallel, large-scale cognitive computing architectures”.

That, in English, basically means that it could be used to create next generation intelligent sensor networks that are capable of perception, action and cognition, the sorts of mental processes that humans take for granted and perform with ease.

Dr Dharmendra Modha, who heads the programme at IBM Research, expanded on what this might mean for the future, saying that the time has come to move forward into the next stage of information technology.

“Today, we’re at another turning point in the history of information technology. The era that Backus and his contemporaries helped create, the programmable computing era, is being superseded by the era of cognitive computing.

“Increasingly, computers will gather huge quantities of data, reason over the data, and learn from their interactions with information and people. These new capabilities will help us penetrate complexity and make better decisions about everything from how to manage cities to how to solve confounding business problems.”

The hardware for IBM’s cognitive computers mimics the brain: it is built around small “neurosynaptic cores”, each featuring 256 “neurons” (processors), 256 “axons” (memory) and 64,000 “synapses” (communications between neurons and axons).
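
As a rough illustration of that scale — 256 axons feeding 256 neurons through on the order of 64,000 synapses — here is a toy spiking-core simulation. The leaky integrate-and-fire update is a generic stand-in, not IBM’s actual core design.

```python
# Toy model of one "neurosynaptic core" at roughly the scale described:
# 256 axons (inputs) x 256 neurons (outputs), sparsely connected.
# The update rule below is a generic illustration, not IBM's design.
import numpy as np

rng = np.random.default_rng(1)
N_AXONS, N_NEURONS = 256, 256
connected = rng.random((N_AXONS, N_NEURONS)) < 0.1        # sparse connectivity
weights = rng.normal(0.5, 0.1, (N_AXONS, N_NEURONS)) * connected

potential = np.zeros(N_NEURONS)
THRESHOLD, LEAK = 1.0, 0.9

def step(axon_spikes):
    """Advance the core one tick given a length-256 binary vector of input spikes."""
    global potential
    potential = potential * LEAK + axon_spikes @ weights
    fired = potential >= THRESHOLD
    potential[fired] = 0.0            # reset neurons that spiked
    return fired

fired = step(rng.integers(0, 2, N_AXONS))
print(int(fired.sum()), "of 256 neurons fired this tick")
```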

IBM suggested that potential uses for this technology could include a pair of glasses which assist the visually impaired when navigating through potentially hazardous environments. Taking in vast amounts of visual and sound data, the augmented reality glasses would highlight obstacles such as kerbs and cars, and steer the user clear of danger.

Other uses could include intelligent microphones that keep track of who is speaking to create an accurate transcript of any conversation.

In the long term, IBM hopes to build a cognitive computer scaled to 100 trillion synapses. This would fit inside a space with a volume of no more than two litres while consuming less than one kilowatt of power.

Source

IBM’s Next-gen Transistors Mimic Human Brain

April 17, 2013 by  
Filed under Computing

IBM has discovered a way to make transistors that could be turned into virtual circuitry that mimics how the human brain operates.

The new transistors would be made from strongly correlated materials, such as metal oxides, which researchers say can be used to build more powerful — but less power-hungry — computation circuitry.

“The scaling of conventional-based transistors is nearing an end, after a fantastic run of 50 years,” said Stuart Parkin, an IBM fellow at IBM Research. “We need to consider alternative devices and materials that operate entirely differently.”

Researchers have been trying to find ways of changing conductivity states in strongly correlated materials for years. Parkin’s team is the first to convert metal oxides from an insulating to a conductive state by applying oxygen ions to the material. The team recently published details of the work in the journal Science.

In theory, such transistors could mimic how the human brain operates in that “liquids and currents of ions [would be used] to change materials,” Parkin said, noting that “brains can carry out computing operations a million times more efficiently than silicon-based computers.”

Source
