Syber Group

Intel To Acquire Deep Learning Company Nervana

August 19, 2016
Filed under Computing

Intel is acquiring deep-learning startup Nervana Systems in a deal that could help it make up for lost ground in the increasingly hot area of artificial intelligence.

Founded in 2014, California-based Nervana offers a hosted platform for deep learning that’s optimized “from algorithms down to silicon” to solve machine-learning problems, the startup says.

Businesses can use its Nervana cloud service to build and deploy applications that make use of deep learning, a branch of AI used for tasks like image recognition and uncovering patterns in large amounts of data.

Also of interest to Intel, Nervana is developing a specialty processor, an application-specific integrated circuit (ASIC) custom-built for deep learning.

Financial terms of the deal were not disclosed, but one estimate put the value above $350 million.

“We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks,” Diane Bryant, head of Intel’s Data Center Group, said in a blog post. Nervana’s expertise “will advance Intel’s AI portfolio and enhance the deep-learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors.”

Though Intel also acquired AI firm Saffron late last year, the Nervana acquisition “clearly defines the start of Intel’s AI portfolio,” said Paul Teich, principal analyst with Tirias Research.

“Intel has been chasing high-performance computing very effectively, but their hardware-design teams missed the convolutional neural network transition a few years ago,” Teich said. CNNs are what’s fueling the current surge in artificial intelligence, deep learning and machine learning.

As part of Intel, Nervana will continue to operate out of its San Diego headquarters, cofounder and CEO Naveen Rao said in a blog post.

The startup’s 48-person team will join Intel’s Data Center Group after the deal’s close, which is expected “very soon,” Intel said.

Source: http://www.thegurureview.net/aroundnet-category/intel-to-acquire-deep-learning-company-nervana.html

IBM’s Watson Goes Cybersecurity

May 23, 2016
Filed under Computing

IBM Security has announced a new year-long research project through which it will partner with eight universities to help train its Watson artificial intelligence system to tackle cybercrime.

Knowledge about threats is often hidden in unstructured sources such as blogs, research reports and documentation, said Kevin Skapinetz, director of strategy for IBM Security.

“Let’s say tomorrow there’s an article about a new type of malware, then a bunch of follow-up blogs,” Skapinetz explained. “Essentially what we’re doing is training Watson not just to understand that those documents exist, but to add context and make connections between them.”

Over the past year, IBM Security’s own experts have been working to teach Watson the “language of cybersecurity,” he said. That’s been accomplished largely by feeding it thousands of documents annotated to help the system understand what a threat is, what it does and what indicators are related, for example.

“You go through the process of annotating documents not just for nouns and verbs, but also what it all means together,” Skapinetz said. “Then Watson can start making associations.”
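IBM hasn't published its annotation format, but the idea Skapinetz describes can be illustrated with a toy sketch: documents are tagged with typed entities (malware names, indicators), and associations emerge from entities that documents share. Every document, entity, and name below is invented for illustration.

```python
from collections import defaultdict

# Toy annotated corpus: each document is labeled with typed entities,
# the way human annotators mark up threat reports for training.
# All names here are hypothetical.
docs = {
    "report_1":   {"malware": {"Locky"},  "indicator": {"evil.example.com"}},
    "blog_2":     {"malware": {"Locky"},  "indicator": {"198.51.100.7"}},
    "advisory_3": {"malware": {"Dridex"}, "indicator": {"198.51.100.7"}},
}

def associate(docs):
    """Link documents that share any annotated entity."""
    by_entity = defaultdict(set)
    for doc_id, annotations in docs.items():
        for entities in annotations.values():
            for entity in entities:
                by_entity[entity].add(doc_id)
    links = defaultdict(set)
    for doc_ids in by_entity.values():
        for a in doc_ids:
            links[a] |= doc_ids - {a}
    return links

links = associate(docs)
# report_1 links to blog_2 (shared malware name), and blog_2 links to
# advisory_3 (shared indicator IP) -- context a keyword search would miss.
```

The point of the sketch is the transitive context: a new blog post mentioning a known indicator is immediately connected to earlier reports about it, even when the documents share no other vocabulary.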

Now IBM aims to accelerate the training process. This fall, it will begin working with students at universities including California State Polytechnic University at Pomona, Penn State, MIT, New York University and the University of Maryland at Baltimore County along with Canada’s universities of New Brunswick, Ottawa and Waterloo.

Over the course of a year, the program aims to feed up to 15,000 new documents into Watson every month, including threat intelligence reports, cybercrime strategies, threat databases and materials from IBM’s own X-Force research library. X-Force represents 20 years of security research, including details on 8 million spam and phishing attacks and more than 100,000 documented vulnerabilities.

Watson’s natural language processing capabilities will help it make sense of those reams of unstructured data. Its data-mining techniques will help detect outliers, and its graphical presentation tools will help find connections among related data points in different documents, IBM said.

Ultimately, the result will be a cloud service called Watson for Cyber Security that’s designed to provide insights into emerging threats as well as recommendations on how to stop them.

Source: http://www.thegurureview.net/computing-category/ibms-watson-to-get-schooled-on-cybersecurity.html

Elon Musk Opens Gym For AI Programmers 

May 10, 2016
Filed under Computing

Techie entrepreneur Elon Musk has rolled out an open-source training “gym” for artificial-intelligence programmers.

It’s an interesting move for a man who said in 2014 that artificial intelligence, or A.I., could pose an existential threat to the human race.

“I think we should be very careful about artificial intelligence,” Musk said about a year and a half ago during an MIT symposium. “If I were to guess at what our biggest existential threat is, it’s probably that… with artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he’s sure he can control the demon. It doesn’t work out.”

Today, Musk is moving to help programmers use A.I. and machine learning to build smart robots and smart devices.

“We’re releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms,” wrote Greg Brockman, OpenAI’s CTO, and John Schulman, a scientist working with OpenAI, in a blog post. “We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community.”

The OpenAI Gym is meant as a tool programmers can use to teach their intelligent systems better ways to learn and to develop more complex reasoning. In short, it’s meant to make smart systems smarter.
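Gym’s core abstraction is an environment exposing `reset()` and `step(action)`: the agent loops over them and receives an observation, a reward, and a done flag at each step. The sketch below mirrors that interface with an invented stand-in environment (a number-guessing game) so it runs without the `gym` package; with Gym installed, `env = gym.make("CartPole-v0")` would slot into the same loop.

```python
import random

class GuessEnv:
    """Stand-in environment mirroring Gym's reset()/step() interface.

    The task (guess a hidden digit 0-9) is invented for illustration.
    """

    def reset(self):
        self.target = random.randrange(10)
        return 0  # initial observation

    def step(self, action):
        done = (action == self.target)
        reward = 1.0 if done else 0.0
        # Observation hints whether the guess was low (-1) or high (+1).
        obs = 0 if done else (-1 if action < self.target else 1)
        return obs, reward, done, {}  # observation, reward, done, info

env = GuessEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(50):                  # fixed budget of steps
    action = random.randrange(10)    # random policy; no learning yet
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:                         # episode finished: start a new one
        obs = env.reset()
```

An RL algorithm replaces the random `action = ...` line with a policy that improves from the observed rewards; because every Gym environment shares this interface, the same agent code can be benchmarked across tasks.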

Musk is a co-chair of OpenAI, a $1 billion organization unveiled last December as an effort to advance artificial intelligence in ways that benefit humanity.

While Musk has warned of what he sees as the perils of A.I., it’s also a technology that he needs for his businesses.

The OpenAI Gym is made up of a suite of environments, including simulated robots and Atari games, as well as a site for comparing and reproducing results.

It’s focused on reinforcement learning, a field of machine learning that involves decision-making and motor control.

According to OpenAI, reinforcement learning is an important aspect of building intelligent systems because it encompasses any problem that involves making a sequence of decisions. For instance, it could focus on controlling a robot’s motors so it’s able to run and jump, or enabling a system to make business decisions regarding pricing and inventory management.
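That sequential-decision framing can be made concrete with a few lines of tabular Q-learning, one of the basic reinforcement-learning algorithms a toolkit like Gym is designed to benchmark. The toy task below (walk right along a five-cell corridor to reach a reward) is invented for illustration.

```python
import random

random.seed(0)

N_STATES = 5           # corridor cells 0..4; reward waits at the right end
ACTIONS = [-1, +1]     # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated return for each (state, action-index) pair
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(200):                       # training episodes
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: usually exploit the best-known action,
        # occasionally explore a random one
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward
        # reward + discounted best future value
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy action in every non-terminal state is "right"
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)]
```

The same loop structure scales from this corridor to motor control or inventory decisions; only the environment's states, actions, and rewards change, which is exactly why a standard environment interface matters.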

Two major challenges for developers working with reinforcement learning are the lack of standard environments and the need for better benchmarks.

Musk’s group is hoping that the OpenAI Gym addresses both of those issues.

Source: http://www.thegurureview.net/aroundnet-category/elon-musk-opens-training-gym-for-ai-programmers.html

Google Says A.I. Is The Next Big Thing

May 3, 2016
Filed under Computing

Every decade or so, a new era of computing comes along that influences everything we do. Much of the 90s was about client-server and Windows PCs. By the aughts, the Web had taken over and every advertisement carried a URL. Then came the iPhone, and we’re in the midst of a decade defined by people tapping myopically into tiny screens.

So what comes next, when mobile gives way to something else? Mark Zuckerberg thinks it’s VR. There’s likely to be a lot of that, but there’s a more foundational technology that makes VR possible and permeates other areas besides.

“I do think in the long run we will evolve in computing from a mobile-first to an A.I.-first world,” said Sundar Pichai, Google’s CEO, answering an analyst’s question during parent company Alphabet’s quarterly earnings call Thursday.

He’s not predicting that mobile will go away, of course, but that the breakthroughs of tomorrow will come via smarter uses of data rather than clever uses of mobile devices like those that brought us Uber and Instagram.

Forms of artificial intelligence are already being used to sort photographs, fight spam and steer self-driving cars. The latest trend is in bots, which use A.I. services on the back end to complete tasks automatically, like ordering flowers or booking a hotel.

Google believes it has a lead in A.I. and the related field of machine learning, which Alphabet’s Eric Schmidt has already pegged as key to Google’s future.

Machine learning is one of the ways Google hopes to distinguish its emerging cloud computing business from those of rivals like Amazon and Microsoft, Pichai said.

Source: http://www.thegurureview.net/aroundnet-category/google-says-a-i-is-the-next-big-thing-in-computing.html

Will Facebook Go Open-Source?

December 29, 2015
Filed under Around The Net

Facebook has unveiled its next-generation GPU-based systems for training neural networks: Open Rack-compatible hardware code-named “Big Sur,” which it plans to open-source.

The social media giant’s latest machine learning system has been designed for artificial intelligence (AI) computing at large scale and has, for the most part, been built with Nvidia hardware.

Big Sur comprises eight high-performance GPUs of up to 300 watts each, with the flexibility to configure multiple PCI-e topologies. It uses Nvidia’s Tesla Accelerated Computing Platform and, as a result, is twice as fast as Facebook’s previous-generation rack.

“This means we can train twice as fast and explore networks twice as large,” said the firm in its engineering blog. “And distributing training across eight GPUs allows us to scale the size and speed of our networks by another factor of two.”

Facebook claims that as well as better performance, Big Sur is also far more versatile and efficient than the off-the-shelf solutions in its previous generation.

“While many high-performance computing systems require special cooling and other unique infrastructure to operate, we have optimised these new servers for thermal and power efficiency, allowing us to operate them even in our own free-air cooled, Open Compute standard data centres,” explained the company.

We spoke to Nvidia’s senior product manager for GPU Computing, Will Ramey, ahead of the launch, who has been working on the Big Sur project alongside Facebook for some time.

“The project is the first time that a complete computing system that is designed for machine learning and AI will be released as an open source solution,” said Ramey. “By taking the purpose-built design spec that Facebook has designed for their own machine learning apps and open sourcing them, people will benefit from and contribute to the project so it can move the entire industry forward.”

While Big Sur was built with Nvidia’s new Tesla M40 hyperscale accelerator in mind, it can support a wide range of PCI-e cards, which Facebook believes could improve production and manufacturing efficiency and deliver more computational power for every penny it invests.

“Servers can also require maintenance and hefty operational resources, so, like the other hardware in our data centres, Big Sur was designed around operational efficiency and serviceability,” Facebook said. “We’ve removed the components that don’t get used very much, and components that fail relatively frequently – such as hard drives and DIMMs – can now be removed and replaced in a few seconds.”

Perhaps the most interesting aspect of the Big Sur announcement is Facebook’s plans to open-source it and submit the design materials to the Open Compute Project. This is a bid to make it easier for AI researchers to share techniques and technologies.

“As with all hardware systems that are released into the open, it’s our hope that others will be able to work with us to improve it,” Facebook said, adding that it believes open collaboration will help foster innovation for future designs, and put us closer to building complex AI systems that will probably take over the world and kill us all.

Nvidia released its end-to-end hyperscale data centre platform last month claiming that it will let web services companies accelerate their machine learning workloads and power advanced artificial intelligence applications.

Consisting of two accelerators, Nvidia’s latest hyperscale line aims to let researchers design new deep neural networks more quickly for the increasing number of applications they want to power with AI. It is also designed to deploy these networks across the data centre. The line also includes a suite of GPU-accelerated libraries.

Courtesy: TheInq