Syber Group

nVidia NVLINK 2.0 Going In IBM Servers

August 31, 2016
Filed under Computing

On Monday, PCWorld reported that the first servers expected to use Nvidia’s second-generation NVLINK 2.0 technology will be arriving sometime next year using IBM’s upcoming Power9 chip family.

IBM launched its Power8 lineup of superscalar symmetric multiprocessors back in August 2013 at the Hot Chips conference, and the first systems became available in August 2014. The announcement was significant because it signaled the beginning of a continuing partnership between IBM and Nvidia to develop GPU-accelerated IBM server systems, beginning with the Tesla K40 GPU.

The result was an HPC “tag-team” in which IBM’s Power8 architecture, a 12-core chip with 96MB of embedded memory, would eventually be paired with Nvidia’s next-generation Pascal architecture, which debuted in April 2016 at the company’s GPU Technology Conference.

NVLINK, first announced in March 2014, uses a proprietary High-Speed Signaling interconnect (NVHS) developed by Nvidia. The company says NVHS transmits data over a differential pair running at up to 20Gbps. Eight of these 20Gbps differential connections form a 160Gbps “Sub-Link” that sends data in one direction, and two sub-links, one for each direction, form a 320Gbps, or 40GB/s, bi-directional “Link” that connects processors together in a mesh framework (GPU-to-GPU or GPU-to-CPU).
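As a quick sanity check, the link arithmetic quoted above works out as follows (the lane rate and lane count are the article’s own figures):

```python
# NVLINK 1.0 link bandwidth from the figures quoted above.
LANE_GBPS = 20          # each differential pair signals at up to 20 Gbps
LANES_PER_SUBLINK = 8   # eight pairs form one sub-link

sublink_gbps = LANE_GBPS * LANES_PER_SUBLINK   # one direction
link_gbps = 2 * sublink_gbps                   # two sub-links, bi-directional
link_gbytes = link_gbps // 8                   # 8 bits per byte

print(sublink_gbps, link_gbps, link_gbytes)    # 160 320 40
```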

NVLINK lanes upgrade from 20Gbps to 25Gbps

IBM is projecting its Power9 servers to be available beginning in the middle of 2017, with PCWorld reporting that the new processor lineup will include support for NVLINK 2.0 technology. Each NVLINK lane will communicate at 25Gbps, up from 20Gbps in the first iteration. With eight differential lanes, this translates to a 400Gbps (50GB/s) bi-directional link between CPUs and GPUs, or about 25 percent more performance if the information is correct.
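The roughly 25 percent figure follows directly from the lane-rate bump; a minimal check, reusing the eight-lane, two-direction link layout described earlier:

```python
# NVLINK link bandwidth in GB/s as a function of lane rate, using the
# article's eight-lane, two-direction link layout.
def link_gb_per_s(lane_gbps, lanes=8):
    return 2 * lane_gbps * lanes / 8   # two directions; 8 bits per byte

nvlink1 = link_gb_per_s(20)   # first-generation lanes
nvlink2 = link_gb_per_s(25)   # NVLINK 2.0 lanes
uplift = (nvlink2 - nvlink1) / nvlink1

print(nvlink1, nvlink2, uplift)   # 40.0 50.0 0.25
```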

NVLINK 2.0 capable servers arriving next year

Meanwhile, Nvidia has yet to release any NVLINK 2.0-capable GPUs, but a company presentation slide in Korean suggests that the technology will first appear in Volta GPUs, which are also scheduled for release sometime next year. We were originally under the impression that the new GPU architecture would arrive in 2018, as per Nvidia’s roadmap. But a source hinted last month that Volta would get the 16nm FinFET treatment and may show up in roughly the same timeframe as AMD’s HBM 2.0-powered Vega, sometime in 2017. After all, it is easier for Nvidia to launch sooner if the new architecture is built on the same node as the Pascal lineup.

Still ahead of PCI-Express 4.0

Nvidia claims that PCI-Express 3.0 (32GB/s on an x16 slot) significantly limits a GPU’s ability to access a CPU’s memory system and is about “four to five times slower” than its proprietary standard. Even PCI-Express 4.0, arriving later in 2017, is limited to 64GB/s on an x16 slot.

To put this in perspective, Nvidia’s Tesla P100 Accelerator uses four 40GB/s NVLINK ports to connect clusters of GPUs and CPUs, for a total of 160GB/s of bandwidth.

With a generational NVLINK upgrade from 40GB/s to 50GB/s bi-directional links, the company could release a future Volta-based GPU with four 50GB/s NVLINK ports totaling 200GB/s of bandwidth, well above and beyond the specifications of the new PCI-Express standard.
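Putting the article’s numbers side by side makes the comparison concrete (the per-link and per-slot figures are those quoted above; the four-port Volta configuration is the article’s speculation, not a confirmed product):

```python
# Aggregate NVLINK bandwidth vs PCI-Express x16, per the figures above.
NVLINK1_LINK_GBS = 40   # Tesla P100, per link
NVLINK2_LINK_GBS = 50   # projected Volta-era, per link
PORTS = 4

PCIE3_X16_GBS = 32
PCIE4_X16_GBS = 64

p100_total = PORTS * NVLINK1_LINK_GBS    # 160 GB/s
volta_total = PORTS * NVLINK2_LINK_GBS   # 200 GB/s

print(p100_total / PCIE3_X16_GBS)    # 5.0   -> the "four to five times" claim
print(volta_total / PCIE4_X16_GBS)   # 3.125 -> still well ahead of PCIe 4.0
```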

Courtesy-Fud

Intel To Acquire Deep Learning Company Nervana

August 19, 2016
Filed under Computing

Intel is acquiring deep-learning startup Nervana Systems in a deal that could help it make up for lost ground in the increasingly hot area of artificial intelligence.

Founded in 2014, California-based Nervana offers a hosted platform for deep learning that’s optimized “from algorithms down to silicon” to solve machine-learning problems, the startup says.

Businesses can use its Nervana cloud service to build and deploy applications that make use of deep learning, a branch of AI used for tasks like image recognition and uncovering patterns in large amounts of data.

Also of interest to Intel, Nervana is developing a specialty processor, an ASIC (application-specific integrated circuit), custom built for deep learning.

Financial terms of the deal were not disclosed, but one estimate put the value above $350 million.

“We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks,” Diane Bryant, head of Intel’s Data Center Group, said in a blog post. Nervana’s expertise “will advance Intel’s AI portfolio and enhance the deep-learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors.”

Though Intel also acquired AI firm Saffron late last year, the Nervana acquisition “clearly defines the start of Intel’s AI portfolio,” said Paul Teich, principal analyst with Tirias Research.

“Intel has been chasing high-performance computing very effectively, but their hardware-design teams missed the convolutional neural network transition a few years ago,” Teich said. CNNs are what’s fueling the current surge in artificial intelligence, deep learning and machine learning.

As part of Intel, Nervana will continue to operate out of its San Diego headquarters, cofounder and CEO Naveen Rao said in a blog post.

The startup’s 48-person team will join Intel’s Data Center Group after the deal’s close, which is expected “very soon,” Intel said.

Source- http://www.thegurureview.net/aroundnet-category/intel-to-acquire-deep-learning-company-nervana.html

Is Qualcomm Back in The Black?

July 25, 2016
Filed under Computing

Qualcomm had better than expected results in its Q3 earnings, beating street estimates and even its own guidance.

Qualcomm had guided for revenue of $5.2 billion to $6 billion and ended up reporting $6 billion. Non-GAAP diluted EPS was projected at $0.90 to $1.00, and Qualcomm actually delivered $1.16.

MSM chip shipments were guided at 175 million to 195 million, while the company actually sold 201 million of these chips.

Total reported device sales were expected to be between $52 billion and $60 billion, and in reality Qualcomm scored $62.6 billion. Qualcomm shipped between 321 million and 325 million 3G/4G devices, and the estimated reported 3G/4G device average selling price was $191 to $197.

There are a few reasons for such good results, the first being Samsung, which chose the Snapdragon 820 for its flagship phones in some markets. The Snapdragon 820 ended up in 115 devices and looks like one of the strongest high-end phone chips in a while.

The introduction of the Snapdragon 821 will rekindle the fire and drive some additional sales with the Samsung Galaxy Note 7 and a few other high-end phones, including some from LG. The 4G modem business is in good shape, but one has to be careful, as Qualcomm might lose some of the iPhone business to Intel. Everyone wants carrier-aggregation-capable modems these days, meaning Cat 6 and up, and Qualcomm offers this from the Snapdragon 430 up to the Snapdragon 820.

It is interesting to note that while Apple iPhone sales were down, Qualcomm did better, mainly because when Apple declines at the high end, Qualcomm can make money from its high-end Snapdragon chips.

We expect to see the announcement of the Snapdragon 830 before the end of the year, with devices shipping with the new chip in late Q1 2017 or early Q2 2017. As far as we know, this might be a 10nm SoC, but we will have to wait and see.

Qualcomm is investing heavily in improvements to 4G, both current and future generations, as well as keeping a concentrated focus on 5G. From where we stand, Qualcomm still has the best chance to dominate the 5G market, especially because 5G is an evolution of 4G with some new wavelengths and concepts added to it.

Last year’s loss of the Samsung Galaxy S6 design win hurt a lot. Now that the big customer is back, it seems that investing in a custom ARM Kryo core and dominating in Adreno graphics has paid off.

Courtesy-Fud

Oracle Goes Deeper Into The Cloud

May 13, 2016
Filed under Computing

Right on the heels of a similar acquisition last week, Oracle has announced it will pay $532 million to buy Opower, a provider of cloud services to the utilities industry.

Once a die-hard cloud holdout, Oracle has been making up for lost time by buying a foothold in specific industries through acquisitions such as this one. Last week’s Textura buy gave it a leg up in engineering and construction.

“It’s a good move on Oracle’s part, and it definitely strengthens Oracle’s cloud story,” said Frank Scavo, president of Computer Economics.

Opower’s big-data platform helps utilities improve customer service, reduce costs and meet regulatory requirements. It currently stores and analyzes more than 600 billion meter readings from 60 million end customers. Opower claims more than 100 global utilities among its clients, including PG&E, Exelon and National Grid.

Opower will continue to operate independently until the transaction closes, which is expected later this year. The union will create the largest provider of mission-critical cloud services to an industry that’s worth $2.3 trillion, Oracle said.

Oracle’s Utilities business delivers applications and cloud services that automate core operational processes and enable compliance for global electric, gas and water utilities.

“Oracle’s industry organizations maintain unique domain knowledge, specialized expertise and focused product investments,” said Rodger Smith, a senior vice president who leads the Utilities global business unit, in a letter to customers and partners. “This model has proven highly successful across several industries, and we look forward to bringing these same benefits to the customers of Opower.”

Source- http://www.thegurureview.net/aroundnet-category/oracle-pushes-deeper-into-cloud-computing-with-another-acquisition.html

Do Carriers Want To Abandon Google?

April 14, 2016
Filed under Consumer Electronics

Carrier dissatisfaction with Android maker Google is growing as more carriers look to alternatives to curb what they perceive as the search engine outfit’s inflexibility.

AT&T has publicly mentioned it is looking at flogging a smartphone powered by an alternative version of Android. If true, the move is a deliberate slap in the face to Google.

US carriers are a little perturbed about the amount of control Google has over its products and are looking to rivals such as Cyanogen, which distributes a version of Android that’s only partially controlled by Google.

ZTE had reportedly been in discussions to make the device. But mysteriously its involvement was put in jeopardy when the US government suddenly imposed trade sanctions on the company; of course, this has nothing to do with Google.

The big idea is to do something like Amazon did and create a new flavor of Android based on Google’s source code but controlled entirely by AT&T. It would also give AT&T sole responsibility for maintaining the OS going forward.

It would bugger up Google’s plans, because changes to the Android system might be difficult to incorporate into AT&T’s new version, and some might not make it over at all. However, AT&T would be able to integrate phones more deeply into its existing infrastructure and issue updates when it wants.

One likely possibility would be an OS-level integration with AT&T’s DirecTV service, which is tricky under Google’s rules. It is not clear if AT&T is serious, or if it is just a move to force Google to pull its finger out.

Courtesy-Fud

Cisco Fixes Major Flaw

March 23, 2016
Filed under Computing

Cisco has patched high-impact vulnerabilities in several of its cable modem and residential gateway devices, models that are popular among those distributed by ISPs to their customers.

The embedded Web server in the Cisco Cable Modem with Digital Voice models DPC2203 and EPC2203 contains a buffer overflow vulnerability that can be exploited remotely without authentication. Apparently all an attacker needs to do is send a crafted HTTP request to the Web server to achieve arbitrary code execution.

Cisco said that its customers should contact their service providers to ensure that the software version installed on their devices includes the patch for this issue.

The Web-based administration interfaces of the Cisco DPC3941 Wireless Residential Gateway with Digital Voice and Cisco DPC3939B Wireless Residential Voice Gateway are affected by a vulnerability that could lead to information disclosure. An unauthenticated, remote attacker could exploit the flaw by sending a specially crafted HTTP request to an affected device in order to obtain sensitive information from it.

The Cisco Model DPQ3925 8×4 DOCSIS 3.0 Wireless Residential Gateway with EDVA is affected by a separate vulnerability, also triggered by malicious HTTP requests, that could lead to a denial-of-service attack.

Hackers have been hitting modems, routers and other gateway devices hard lately, especially those distributed by ISPs to their customers. By compromising such devices, attackers can snoop on, hijack or disrupt network traffic, or attack other devices inside local networks.

Courtesy-Fud

Microsoft Cuts Azure Pricing

January 29, 2016
Filed under Computing

Good news for businesses using Microsoft’s Azure cloud platform: their infrastructure bills may get somewhat smaller next month.

Microsoft announced that it will be permanently reducing the prices for its Dv2 compute instances by up to 17 percent next month, depending on the type of instance and what it’s being used for. Users will see the greatest savings if they’re running higher performance Linux instances — up to 17 percent lower prices than they’ve been paying previously. Windows instance discounts top out at a 13 percent reduction compared to current prices.

Right now, the exact details of the discount are a little bit vague, but Microsoft says that it will publish full pricing details in February when they go into effect. Dv2 instances are designed for applications that require more compute power and temporary disk performance than Microsoft’s A series instances.

They’re the successor to Azure’s D-series VMs, and come with processors that are 35 percent faster than their predecessors. Greater speed also corresponds to a higher price, but these discounts will make Dv2-series instances more price competitive with their predecessors. That’s good news for price-conscious users, who may be more inclined to reach for the higher-performance instances now that they’ll be cheaper.

The price changes come after Amazon earlier this week introduced scheduled compute instances, which let users pick out a particular time for their workloads to run on a regular basis, and get discounts based on when they decide to use the system. It’s a system that’s designed to help businesses that need computing power for routine tasks at non-peak times get a discount.

Microsoft’s announcement builds on the company’s longstanding history of reducing prices for Azure in keeping with Amazon’s price cuts in order to remain competitive.

Source-http://www.thegurureview.net/computing-category/microsoft-to-cut-azure-pricing.html

Ericsson And Cisco Join Forces

November 20, 2015
Filed under Computing

Mobile equipment maker Ericsson and U.S. networking company Cisco Systems Inc announced that they have agreed to a business and technology partnership that should generate additional revenues of $1 billion for each company by 2018.

Ericsson, whose like-for-like sales are down 7 percent so far this year and were roughly flat over the previous three years, said the partnership opens new areas of revenue, as it will boost the company’s addressable market, mainly in professional services, software and the resale of Cisco products.

“We are the wireless No. 1 in the world,” Ericsson Chief Executive Hans Vestberg told Reuters.

“Cisco is by far the No. 1 in the world when it comes to IP routers. Together we can create innovative solutions.”

The companies said in a statement they would together offer routing, data center, networking, cloud, mobility, management and control, and global services capabilities.

“The strategic partnership will be a key driver of growth and value for the next decade, with each company benefiting from incremental revenue in calendar year 2016 and expected to ramp (up) to $1 billion or more for each by 2018,” they said.

Ericsson expects full-year cost synergies of 1 billion Swedish crowns ($115 million) in 2018 due to the partnership and said it would continue to explore further joint business opportunities with Cisco.

Source http://www.thegurureview.net/aroundnet-category/ericsson-and-cisco-join-forces-in-network-partnership.html

Oracle’s M7 Processor Has Security On Silicon

November 10, 2015
Filed under Computing

Oracle started shipping systems based on its latest Sparc M7 processor, which the firm said will go a long way to solving the world’s online security problems by building protection into the silicon.

The Sparc M7 chip was originally unveiled at last year’s Openworld show in San Francisco, and was touted at the time as a Heartbleed-prevention tool.

A year on, and Oracle announced the Oracle SuperCluster M7, along with Sparc T7 and M7 servers, at the show. The servers are all based on the 32-core, 256-thread M7 microprocessor, which offers Security in Silicon for better intrusion protection and encryption, and SQL in Silicon for improved database efficiency.

Along with built-in security, the SuperCluster M7 packs compute, networking and storage hardware with virtualisation, operating system and management software into one giant cloud infrastructure box.

Oracle CTO Larry Ellison was on hand at Openworld on Tuesday to explain why the notion of building security into the silicon is so important.

“We are not winning a lot of these cyber battles. We haven’t lost the war but we’re losing a lot of the battles. We have to rethink how we deliver technology especially as we deliver vast amounts of data to the cloud,” he told delegates.

Ellison said that Oracle’s approach to this cyber war is to take security as low down in the stack as possible.

“Database security is better than application security. You should always push security as low in the stack as possible. At the bottom of the stack is silicon. If all of your data in the database is encrypted, that’s better than having an application code that encrypts your data. If it’s in the database, every application that uses that database inherits that security,” he explained.

“Silicon security is better than OS security. Then every operating system that runs on that silicon inherits that security. And the last time I checked, even the best hackers have not figured out a way to download changes to your microprocessor. You can’t alter the silicon, that’s really tricky.”

Ellison’s big idea is to take software security features out of operating systems, VMs and even databases in some cases – because software can be changed – and instead push them into the silicon, which can’t be. He is also urging for security to be switched on as default, without an option to turn it back off again.

“The security features should always be on. We provide encryption in our databases but it can be switched off. That is a bad idea. There should be no way to turn off encryption. The idea of being able to turn on and off security features makes no sense,” he said.

Ellison referred back to a debate that took place at Oracle when it first came up with its backup system: should the firm offer only encrypted backups? “We did a customer survey and customers said no, we don’t want to pay the performance penalty in some cases,” he recalled. “In that case customer choice is a bad idea. Maybe someone will forget to turn on encryption when it should have been turned on and you lose 10 million credit cards.”

The Sparc M7 is basically Oracle’s answer to this dire security situation. Ellison said that while the M7 has lots of software features built into the silicon, the most “charismatic” of these is Silicon Secured Memory, which is “deceptively simple” in how it works.

“Every time a computer program asks for memory, say you ask for 8MB of memory, we compute a key and assign this large number to that 8MB of memory,” he explained. “We take those bits and we lock that memory. We also assign that same number to the program. Every time the program accesses memory, we check that number to make sure it’s the memory you allocated earlier. That compare is done by the hardware.”

If a program tries to access memory belonging to another program, the hardware detects a mismatch and raises a signal, flagging up a possible breach or bug.
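As a rough software illustration only, and not Oracle's implementation (the real check happens in hardware on every memory access), the tag-and-check idea Ellison describes can be sketched like this:

```python
# Toy model of hardware memory tagging: each allocation gets a small key
# ("color"), the same key travels with the pointer, and every access
# compares the two. A mismatch flags a possible overflow or stale pointer.
import secrets

class TaggedMemory:
    def __init__(self):
        self._tags = {}   # allocation id -> key

    def allocate(self, alloc_id):
        key = secrets.randbits(4)   # a small per-allocation key
        self._tags[alloc_id] = key
        return key                  # the program carries this with the pointer

    def access(self, alloc_id, key):
        # On the M7 this compare is done by the hardware, not software.
        if self._tags.get(alloc_id) != key:
            raise MemoryError("tag mismatch: possible intrusion or bug")
        return True

mem = TaggedMemory()
key = mem.allocate("buffer_a")
assert mem.access("buffer_a", key)   # matching key: access allowed
```

An access carrying the wrong key, the way a Heartbleed- or Venom-style overwrite would, raises immediately rather than silently succeeding, which is the "always-on memory intrusion detection" being described.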

“We put always-on memory intrusion detection into the silicon. We’re always looking for Heartbleed and Venom-like violations. You cannot turn it off,” the CTO warned.

“We’ve also speeded up encryption and decompression, which is kind of related to encryption. It runs at memory speed; there’s zero cost in doing that. We turn it on, you can’t turn it off, it’s on all the time. It’s all built into the M7.”

Ellison claimed that running M7-based systems will stop threats like Heartbleed and Venom in their tracks.

“The way Venom worked, the floppy disc driver concealed this code. It’s the worst kind of situation, you’re writing into memory you’re not supposed to. You’re writing computer instructions into the memory and you’ve just taken over the whole computer,” he explained. “You can steal and change data. M7 – the second we tried to write that code into memory that didn’t belong to that program, where the keys didn’t match, that would have been detected real-time and that access would have been foiled.”

All well and good, except for the fact that nearly every current computer system runs on something other than the M7 processor. Ellison claimed that even if only three or four percent of the servers in the cloud an organisation is using have this feature, the organisation will still be protected, as those servers provide the early warning needed to deal with the issue across non-M7 systems.

“You don’t have to replace every microprocessor, you just have to replace a few so you get the information real-time,” he added.

“You’ll see us making more chips based on security, to secure our cloud and to sell to people who want to secure their clouds or who want to have secure computers in their datacentre. Pushing security down into silicon is a very effective way to do that and get ahead of bad guys.”

SuperCluster M7 and Sparc M7 servers are available now. Pricing has not been disclosed but based on normal Oracle hardware costs, expect to dig deep to afford one.

Source-http://www.thegurureview.net/computing-category/oracles-new-m7-processor-has-security-on-silicon.html

Qualcomm Goes LTE For Microsoft

October 22, 2015
Filed under Computing

Qualcomm has continued its friendship with Microsoft by extending its latest LTE-Advanced modem, the X12, to Windows 10 notebooks and tablets.

The chipmaker was the only major chip provider to optimize its architecture for Windows Phone and Microsoft’s Lumia devices, which run on Snapdragon 808 and 810 chips.

Windows 10 devices that come to market later this year will have the option of integrating cellular connectivity with the X12, X7 or X5 LTE modems, which support the Microsoft operating system’s native Mobile Broadband Interface Model (MBIM).

Qualcomm said this would give business users in particular a similar experience on their large-screened devices to the one they get on their smartphones, citing location-based services and security as examples of what is driving LTE usage on PCs and tablets.

Integrated cellular connectivity has not been so important for notebook users; outside of a few scenarios such as WiFi-less trains, most wireless access from notebooks, and even tablets, is over a WLAN.

Qualcomm makes WiFi chips for portable devices, but it does not have such a big market share there. Working with Microsoft means it could have a higher presence and a far better chance of delivering mass sales. The Surface Pro and the new Surface Book are getting good reviews and might even be popular.

Courtesy-http://www.thegurureview.net/computing-category/qualcomm-goes-lte-for-microsoft.html
