nVidia NVLINK 2.0 Going In IBM Servers

August 31, 2016 by  
Filed under Computing

On Monday, PCWorld reported that the first servers expected to use Nvidia’s second-generation NVLINK 2.0 technology will be arriving sometime next year using IBM’s upcoming Power9 chip family.

IBM unveiled its Power8 lineup of superscalar symmetric multiprocessors back in August 2013 at the Hot Chips conference, and the first systems became available in August 2014. The announcement was significant because it signaled the beginning of a continuing partnership between IBM and Nvidia to develop GPU-accelerated IBM server systems, beginning with the Tesla K40 GPU.

The result was an HPC “tag-team” in which IBM’s Power8 architecture, a 12-core chip with 96MB of embedded memory, would eventually be paired with Nvidia’s next-generation Pascal architecture, which debuted in April 2016 at the company’s GPU Technology Conference.

NVLINK, first announced in March 2014, uses a proprietary High-Speed Signaling interconnect (NVHS) developed by Nvidia. The company says NVHS transmits data over a differential pair running at up to 20Gbps, so eight of these differential 20Gbps connections will form a 160Gbps “Sub-Link” that sends data in one direction. Two sub-links—one for each direction—will form a 320Gbps, or 40GB/s bi-directional “Link” that connects processors together in a mesh framework (GPU-to-GPU or GPU-to-CPU).

NVLINK lanes upgrade from 20Gbps to 25Gbps

IBM is projecting its Power9 servers to be available beginning in the middle of 2017, with PCWorld reporting that the new processor lineup will include support for NVLINK 2.0 technology. Each NVLINK lane will communicate at 25Gbps, up from 20Gbps in the first iteration. With eight differential lanes, this translates to a 400Gbps (50GB/s) bi-directional link between CPUs and GPUs, or about 25 percent more performance if the information is correct.
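
To make the lane arithmetic concrete, here is a minimal Python sketch (ours, not Nvidia’s) of the sub-link and link maths described above, using the lane counts and signalling rates quoted in the article:

def nvlink_bandwidth(lane_gbps, lanes_per_sublink=8):
    # One sub-link carries data in a single direction.
    sublink_gbps = lane_gbps * lanes_per_sublink
    # Two sub-links, one per direction, form a bi-directional link.
    link_gbps = sublink_gbps * 2
    # Divide by 8 to convert gigabits to gigabytes per second.
    return sublink_gbps, link_gbps, link_gbps / 8

print(nvlink_bandwidth(20))  # NVLINK 1.0: (160, 320, 40.0)
print(nvlink_bandwidth(25))  # NVLINK 2.0: (200, 400, 50.0)
print(f"{50 / 40 - 1:.0%}")  # 25% - the generational uplift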

NVLINK 2.0 capable servers arriving next year

Meanwhile, Nvidia has yet to release any NVLINK 2.0-capable GPUs, but a company presentation slide in Korean suggests that the technology will first appear in Volta GPUs, which are also scheduled for release sometime next year. We were originally under the impression that the new GPU architecture would not arrive until 2018, as per Nvidia’s roadmap. But a source hinted last month that Volta would be getting the 16nm FinFET treatment and may show up in roughly the same timeframe as AMD’s HBM 2.0-powered Vega sometime in 2017. After all, it is easier for Nvidia to launch sooner if the new architecture is built on the same node as the Pascal lineup.

Still ahead of PCI-Express 4.0

Nvidia claims that PCI-Express 3.0 (32GB/s with x16 bandwidth) significantly limits a GPU’s ability to access a CPU’s memory system and is about “four to five times slower” than its proprietary standard. Even PCI-Express 4.0, releasing later in 2017, is limited to 64GB/s on a slot with x16 bandwidth.

To put this in perspective, Nvidia’s Tesla P100 Accelerator uses four 40GB/s NVLINK ports to connect clusters of GPUs and CPUs, for a total of 160GB/s of bandwidth.

With a generational NVLINK upgrade from 40GB/s to 50GB/s bi-directional links, the company could release a future Volta-based GPU with four 50GB/s NVLINK ports totaling 200GB/s of bandwidth, well above and beyond the specifications of the new PCI-Express standard.
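
Extending the same back-of-the-envelope sums to whole GPUs (the port counts and PCI-Express figures are those quoted in the article; the four-port Volta part is speculation, not an announced product):

p100_total = 4 * 40   # Tesla P100: four 40GB/s NVLINK ports = 160GB/s
volta_total = 4 * 50  # speculative Volta part: four 50GB/s ports = 200GB/s

pcie3_x16 = 32        # PCI-Express 3.0 x16, GB/s, as quoted by Nvidia
pcie4_x16 = 64        # PCI-Express 4.0 x16, GB/s

print(p100_total / pcie3_x16)   # 5.0 - Nvidia's "four to five times" claim
print(volta_total / pcie4_x16)  # 3.125 - still comfortably ahead of PCIe 4.0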

Courtesy-Fud

Graphene May Give Processors A Boost

June 28, 2016 by  
Filed under Computing

Researchers at MIT have figured out that graphene, sheets of atom-thick carbon, could be used to make chips a million times faster.

The researchers have worked out that slowing the speed of light to the extent that it moves slower than flowing electrons can create an “optical boom”, the optical equivalent of a sonic boom.

Slowing the speed of light is no mean feat, but the clever folks at MIT managed it by using the honeycomb shape of carbon to slow photons to several hundredths of their normal speed in free space, explained researcher Ido Kaminer.

Meanwhile, the characteristics of graphene speed up electrons to a million metres a second, or around 1/300 of the speed of light in a vacuum.
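
As a quick sanity check on those figures (our own arithmetic, not MIT’s), one million metres a second really is about 1/300 of the speed of light:

c = 3.0e8                  # speed of light in a vacuum, m/s
electron_speed = 1.0e6     # electron speed in graphene per the article, m/s
print(c / electron_speed)  # 300.0 - electrons move at roughly c/300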

The optical boom is caused when the electrons passing through the graphene catch up with the slowed light, effectively breaking the light barrier in the carbon honeycomb and causing a shockwave of light.

As electrons move faster than the trapped light, they bleed plasmons, a form of quasiparticle that represents the oscillation of electrons on the graphene’s surface.

Effectively, it is the equivalent of turning electricity into light. This is nothing new – Thomas Edison did it more than a century ago with the incandescent bulb – but this method can efficiently and controllably generate plasmons at a scale that works with microchip technology.

The discovery could allow chip components to be made from graphene to enable the creation of light-based circuits. These circuits could be the next step in the evolution of chip and computing technology, as the transfer of data through light is far faster than using electrons in today’s chips, even the fast pixel-pushing ones.

So much faster that it’s “six orders of magnitude higher than what is used in electronics”, according to Kaminer. That’s up to a million times faster in plain English.

“There’s a lot of excitement about graphene because it could be easily integrated with other electronics,” said physics professor Marin Soljačić, a researcher on the project, who is confident that MIT can turn this theoretical experiment into a working system. “I have confidence that it should be doable within one to two years.”

This is a pretty big concept and almost sci-fi stuff, but we’re always keen to see smaller and faster chips. It also shows that the future tech envisioned by the world of sci-fi may not be that far away.

Courtesy-TheInq

Qualcomm Releases Car Platform

June 23, 2016 by  
Filed under Computing

Qualcomm has released its Connected Car Reference Platform so that the car industry can build prototypes for the next-generation connected car.

Qualcomm could make piles of dosh if car-makers choose its platforms in the future. While the whole software and hardware package is not quite there yet, it gives developers something to play with, which should see it under the bonnet of the next generation of car automation.

The next trick will be to get autonomous steering and collision avoidance features into the package. Qualcomm will probably apply its machine learning SDK, announced just a few weeks ago, and the Snapdragon 820 processor.

In a press release Qualcomm said the Connected Car Reference Platform uses a common framework that scales from a basic telematics control unit (TCU) up to a highly integrated wireless gateway, connecting multiple electronic control units (ECUs) within the car and supporting critical functions, such as over-the-air software upgrades and data collection and analytics.

The vehicle’s connectivity hardware and software can be upgraded throughout its life cycle, providing automakers with a migration path from Dedicated Short Range Communications (DSRC) to hybrid/cellular V2X and from 4G LTE to 5G.

It can also manage concurrent operation of multiple wireless technologies using the same spectrum frequencies, such as Wi-Fi, Bluetooth and Bluetooth Low Energy.

The system supports OEM and third-party applications by providing a secure framework for the development and execution of custom applications.

Qualcomm appears to be working on the problem of over-the-air software updates. Updating software on a mission-critical system such as an autonomous car is a much harder problem than updating a smartphone because it has to be completely secure and work every time without reducing safety. However given that updates have stuffed up the mobile phone business and a car will need lots of them in its much longer working life, it is something which will need to be tackled.
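
The article does not say how Qualcomm implements its updates, but a common pattern for making over-the-air updates safe on mission-critical devices is an A/B (dual-slot) scheme: stage the new image in the inactive slot, verify it, and only then switch the boot marker, keeping the old image for rollback. A minimal Python sketch of that idea, with all names hypothetical:

import hashlib
from pathlib import Path

def apply_ota_update(image, expected_sha256, slot_a, slot_b, active_marker):
    # 1. Verify the downloaded image against a digest from a signed
    #    manifest. (A real system would also verify the signature itself.)
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        return False  # corrupt or tampered image: leave the car as it is

    # 2. Write the image to whichever slot is NOT currently active.
    active = active_marker.read_text().strip() if active_marker.exists() else "A"
    target = slot_b if active == "A" else slot_a
    target.write_bytes(image)

    # 3. Re-read and re-verify what actually landed on storage.
    if hashlib.sha256(target.read_bytes()).hexdigest() != expected_sha256:
        return False  # the write failed, but the old slot still boots

    # 4. Flip the marker last, so a failure anywhere above is harmless.
    active_marker.write_text("B" if active == "A" else "A")
    return True

The point of the design is that the running image is never touched: the switch happens only after the new image verifies, so an interrupted or corrupted update leaves the vehicle bootable on the old software.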

Qualcomm has to solve this problem anyway to accelerate shipments not only to the car market but to the IoT market, where it hopes to sell tens of billions of chips.

Qualcomm says it expects to ship the Connected Car Reference Platform to automakers, tier 1 auto suppliers and developers late this year.

Courtesy-Fud

Did Researchers Create Lifetime Batteries?

May 4, 2016 by  
Filed under Around The Net

Researchers at the University of California at Irvine (UCI) have accidentally – yes, accidentally – discovered a nanowire-based technology that could lead to batteries that can be charged hundreds of thousands of times.

Mya Le Thai, a PhD candidate at the university, explained in a paper published this week that she and her colleagues used nanowires, structures several thousand times thinner than a human hair that are extremely conductive and have a surface area large enough to support the storage and transfer of electrons.

Nanowires are extremely fragile and don’t usually hold up well to repeated discharging and recharging, or cycling. They expand and grow brittle in a typical lithium-ion battery, but Le Thai’s team fixed this by coating a gold nanowire in a manganese dioxide shell and then placing it in a Plexiglas-like gel to improve its reliability. All by accident.

The breakthrough could lead to laptop, smartphone and tablet batteries that last forever.

Reginald Penner, chairman of UCI’s chemistry department, said: “Mya was playing around and she coated this whole thing with a very thin gel layer and started to cycle it.

“She discovered that just by using this gel she could cycle it hundreds of thousands of times without losing any capacity. That was crazy, because these things typically die in dramatic fashion after 5,000 or 6,000 or 7,000 cycles at most.”

The battery-like structure was tested more than 200,000 times over a three-month span, and the researchers reported no loss of capacity or power.
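
Some quick perspective on that number (our arithmetic, not the researchers’):

cycles_tested = 200_000
typical_limit = 7_000  # the upper end of the usual 5,000-7,000 cycles
print(cycles_tested / typical_limit)  # ~28.6x a typical electrode's lifetime
print(cycles_tested / 365)            # ~548 years at one full charge per day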

“The coated electrode holds its shape much better, making it a more reliable option,” Thai said. “This research proves that a nanowire-based battery electrode can have a long lifetime and that we can make these kinds of batteries a reality.”

The breakthrough also paves the way for commercial batteries that could last a lifetime in appliances, cars and spacecraft.

British fuel-cell maker Intelligent Energy Holdings announced earlier this year that it is working on a smartphone battery that will need to be charged only once a week.

Courtesy-TheInq

Will Google Stop Using Java?

April 22, 2016 by  
Filed under Computing

Google is so hacked off with Oracle’s Java antics that it is seriously considering taking the language out of Android and replacing it with Apple’s open sauce Swift software.

While we would have thought that there would be little choice between Oracle and Apple as evil software outfits, the fact that Apple uncharacteristically made Swift open source might make life a bit brighter for Google. At the moment Oracle is suing Google for silly money for its Java use in Android.

Swift was created as a replacement for Objective-C and is pretty easy to write. It was introduced at WWDC 2014 and has major support from IBM, while major apps such as Lyft, Pixelmator and Vimeo have all rebuilt their iOS versions in Swift.

But since Apple open sourced Swift, Google, Facebook and Uber have all said that they are interested in it. Taking Java out of Android would be a big job: Google would have to make its entire standard library Swift-ready and support the language in its APIs and SDKs. Some low-level Android APIs are C++, which Swift cannot bridge to, and the higher-level Java APIs would also have to be rewritten.

Of course, if it did all this, Apple might realise that its biggest rival was using its own software to club it to death. It might not be so nice about allowing Swift out to play, and Google would eventually have to fork Swift and dump the Apple version. This would probably result in an angst-ridden moan album about how life is so unfair, which makes a fortune while scoring passive-aggressive revenge on the dumpee.

Courtesy-Fud

Dyreza Trojan Targeting Windows 10

December 9, 2015 by  
Filed under Computing

An infectious banking trojan has been updated so that it supports financial mayhem on the freshly baked Windows 10 operating system and supporting Microsoft Edge browser.

Microsoft reckons that Windows 10 is installed on over 100 million machines, and this suggests prime picking for people who deploy banking trojans, not to mention the fact that most people will still be getting used to the software and its services and features.

The newest addition to the Windows 10 threat landscape is a variant of the Zeus banking malware known as Dyreza. It is related to Dyre, a threat that we reported on earlier this year.

The warning at the time was that as many as one in 20 online banking users could be exposed to the threat, and things look as bad this time around. Heimdal Security said in a blog post that the malware has been strengthened in scale and capability.

“The info-stealer malware now includes support for Windows 10. This new variant can also hook to Microsoft Edge to collect data and then send it to malicious servers,” said the post.

“Moreover, the new Dyreza variant kills a series of processes linked to endpoint security software in order to make its infiltration in the system faster and more effective.”

The threat already has a footprint, and the people behind it have increased it. Heimdal said that, once Dyreza is done with your bank account, it will move you into position on a botnet. The firm estimates that this botnet is currently 80,000-strong.

“By adding support for Windows 10, the Dyreza malware creators have cleared their way to growing the number of infected PCs in their botnet. This financial trojan doesn’t only drain the infected computers of valuable data, it binds them into botnets,” said Heimdal.

Source- http://www.thegurureview.net/computing-category/dyreza-trojan-appears-to-be-targeting-windows-10.html

Is The Shifu Trojan Wreaking Havoc In Japan?

September 17, 2015 by  
Filed under Computing

Security research has found a banking trojan called Shifu that is going after Japanese financial firms in a big way.

Shifu is described as “masterful” by IBM X-Force, and is named after the Japanese word for thief, according to the firm. It is also the Chinese word for skilled person, or tutor.

X-Force said in a blog post that the malware has been active since the early summer and borrows mechanisms from a number of known trojans such as Dyre, Zeus and Dridex. It has been put together by people who know what they are doing, and sounds like a significant problem for the 20 institutions it is targeting.

“The Shifu trojan may be a new beast, but its inner workings are not entirely unfamiliar. The malware relies on a few tried-and-true trojan mechanisms from other infamous crimeware codes,” said the IBM researchers.

“It appears that Shifu’s internal makeup was composed by savvy developers who are quite familiar with other banking malware, dressing Shifu with selected features from the more nefarious of the bunch.”

The Shifu package offers a range of attack features as well as clean-up tools to cover its tracks. It reads like a Now that’s what I call … recent attacks compilation CD, and has some oldies but baddies.

“Shifu wipes the local System Restore point on infected machines in a similar way to the Conficker worm, which was popular in 2009,” added the firm as one example.

The package can wreak havoc on companies and their users. If we had a bucket of damp sand we would pour it all over Shifu and stamp on it.

“This trojan steals a large variety of information that victims use for authentication purposes. For example, it keylogs passwords, grabs credentials that users key into HTTP form data, steals private certificates and scrapes external authentication tokens used by some banking applications,” said IBM.

“These elements enable Shifu’s operators to use confidential user credentials and take over bank accounts held with a large variety of financial service providers.

“Shifu’s developers could be Russian speakers or native to countries in the former Soviet Union. It is also possible that the actual authors are obfuscating their true origin, throwing researchers off by implicating an allegedly common source of cybercrime.”

Source-http://www.thegurureview.net/computing-category/is-the-shifu-trojan-wreaking-havoc-in-japan.html

Yahoo Acquires Polyvore

August 12, 2015 by  
Filed under Around The Net

Yahoo Inc announced on Friday that it has agreed to acquire fashion start-up Polyvore to help drive traffic and strengthen its mobile and social offerings.

Yahoo, which did not disclose terms of the deal, said Polyvore will accelerate its ‘Mavens’ growth strategy.

The company has been focusing on four areas — mobile, video, native advertising and social — which it calls Mavens, to drive user engagement and ad sales as it battles intense competition from Google Inc and Facebook Inc.

Revenue from Mavens made up about one-third of the company’s total revenue in the quarter ended June 30.

The Mavens portfolio includes BrightRoll, mobile app network Flurry, mobile ad buying platform Yahoo Gemini and blogging site Tumblr.

Polyvore, the brainchild of three ex-Yahoo engineers, was started in 2007.

The Mountain View, California-based company allows users to mix-and-match articles of clothing and accessories and customize them into “sets”.

Polyvore’s co-founder and CEO Jess Lee was earlier part of Google Inc’s associate product manager program, which Marissa Mayer headed before joining Yahoo as CEO.

Source

Can Oracle Make Money Off Android?

August 6, 2015 by  
Filed under Computing

Database outfit Oracle’s move to try to copyright its APIs appears to be part of an attempt to make money from Android.

Oracle has asked a U.S. judge for permission to update its copyright lawsuit against Google to include the Android operating system, which it claims contains its Java APIs.

Oracle sued Google five years ago and is seeking roughly $1 billion in copyright claims. If it manages to convince a court that its APIs are in Android, it could raise the damages by several billion dollars.

In a letter to Judge William Alsup on Wednesday, Oracle wrote that the record of the first trial does not reflect recent developments in the market, including Google’s dramatically enhanced market position in search engine advertising and the overall financial results from its continuing and expanded infringement.

Last month, the US Supreme Court upheld an appeals court’s ruling that allows Oracle to seek licensing fees for the use of some of the Java language. Google had argued that it should be able to use the Java APIs without paying a fee.

Source

IBM Goes Bare Metal

March 18, 2015 by  
Filed under Computing

IBM has announced the availability of OpenPower servers as part of the firm’s SoftLayer bare metal cloud offering.

OpenPower, a collaborative foundation run by IBM in conjunction with Google and Nvidia, offers a more open approach to IBM’s Power architecture, and a more liberal licence for the code, in return for shared wisdom from member organisations.

Developed in conjunction with Tyan and Mellanox Technologies, both partners in the foundation, the bare metal servers are designed to help organisations easily and quickly extend infrastructure in a customised manner.

“The new OpenPower-based bare metal servers make it easy for users to take advantage of one of the industry’s most powerful and open server architectures,” said Sonny Fulkerson, CIO at SoftLayer.

“The offering allows SoftLayer to deliver a higher level of performance, predictability and dependability not always possible in virtualised cloud environments.”

Initially, the servers will run Linux applications and will be based on the IBM Power8 architecture, in the same mould as IBM Power Systems servers.

This will later expand to the Power ecosystem and then to independent software vendors that support Linux on Power application development, and are migrating applications from x86 to the Power architecture.

OpenPower servers are based on open source technology that extends right down to the silicon level, and can allow highly customised servers ranging from physical to cloud, or even hybrid.

Power systems are already installed in SoftLayer’s Dallas data centre, and there are plans to expand to data centres throughout the world. The system was first rolled out in 2014 as part of the Watson portfolio.

Prices will be announced when general availability arrives in the second quarter.

Source
