Syber Group

nVidia’s CUDA 5.5 Available

June 25, 2013
Filed under Computing

Nvidia has made its CUDA 5.5 release candidate, which supports ARM-based processors, available for download.

Nvidia has been aggressively pushing its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Now the firm has announced the availability of a CUDA 5.5 release candidate, the first version of the language that supports ARM-based processors.

Aside from ARM support, Nvidia has improved Hyper-Q support, which now allows developers to prioritise MPI workloads. The firm also touted improved performance analysis tools and better cross-compilation performance on x86 processors.

Ian Buck, GM of GPU Computing Software at Nvidia said, “Since developers started using CUDA in 2006, successive generations of better, exponentially faster CUDA GPUs have dramatically boosted the performance of applications on x86-based systems. With support for ARM, the new CUDA release gives developers tremendous flexibility to quickly and easily add GPU acceleration to applications on the broadest range of next-generation HPC platforms.”

Nvidia’s support for ARM processors in CUDA 5.5 is an indication that it will release CUDA-enabled Tegra processors in the near future. However, outside of the firm’s own Tegra processors, CUDA support is largely useless, as almost all other chip designers have chosen OpenCL as the programming language for their GPUs.

Nvidia did not say when it will release CUDA 5.5, but in the meantime the firm’s release candidate supports Windows, Mac OS X and just about every major Linux distribution.


Are CUDA Applications Limited?

March 29, 2013
Filed under Computing

Acceleware said at Nvidia’s GPU Technology Conference (GTC) today that most algorithms that run on GPGPUs are bound by GPU memory size.

Acceleware is partly funded by Nvidia to provide developer training for CUDA, helping sell the language to those who are used to traditional C and C++ programming. The firm said that most CUDA algorithms are now limited by GPU local memory size rather than by GPU computational performance.

Both AMD and Nvidia provide general purpose GPU (GPGPU) accelerator parts that deliver significantly faster computational processing than traditional CPUs. However, they have only between 6GB and 8GB of local memory, which constrains the size of the dataset the GPU can process. While developers can stream more data in from system main memory, the latency cost negates the raw performance benefit of the GPU.

Kelly Goss, training program manager at Acceleware, said that “most algorithms are memory bound rather than GPU bound” and “maximising memory usage is key” to optimising GPGPU performance.

She further said that developers need to understand and take advantage of the memory hierarchy of Nvidia’s Kepler GPU, and look for ways to reduce the number of memory accesses per line of GPU code.

The point Goss was making is that GPU computation is cheap in clock cycles compared with the time it takes to fetch data from local memory, let alone to load GPU memory from system main memory.

Goss, talking to a room full of developers, proceeded to outline some of the performance characteristics of the memory hierarchy in Nvidia’s Kepler GPU architecture, showing the level of detail that CUDA programmers need to pay attention to if they want to extract the full performance potential from Nvidia’s GPGPU computing architecture.

Given Goss’s observation that algorithms running on Nvidia’s GPGPUs are often constrained by local memory size rather than by the GPU itself, Nvidia might want to look at simplifying the tiers of memory involved and increasing the amount of GPU local memory so that CUDA software developers can process larger datasets.
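Goss’s “memory bound rather than GPU bound” point can be illustrated with a back-of-the-envelope roofline calculation. The sketch below is ours, not Acceleware’s, and the peak throughput and bandwidth figures are illustrative assumptions for a Kepler-class part, not vendor specifications.

```python
# Roofline sketch: a kernel is memory-bound when its arithmetic
# intensity (flops per byte moved) falls below the machine balance
# (peak flops divided by peak memory bandwidth).
# The two peak figures below are assumed, round numbers.

PEAK_GFLOPS = 3500.0        # assumed peak single-precision throughput
PEAK_BANDWIDTH_GBS = 250.0  # assumed peak local-memory bandwidth

def attainable_gflops(flops: float, bytes_moved: float) -> float:
    """Attainable throughput under the roofline model."""
    intensity = flops / bytes_moved  # flops per byte
    return min(PEAK_GFLOPS, intensity * PEAK_BANDWIDTH_GBS)

def is_memory_bound(flops: float, bytes_moved: float) -> bool:
    machine_balance = PEAK_GFLOPS / PEAK_BANDWIDTH_GBS  # 14 flops/byte here
    return (flops / bytes_moved) < machine_balance

# SAXPY (y = a*x + y) does 2 flops per element but moves 12 bytes
# (read x, read y, write y; 4 bytes each), so its intensity is ~0.17
# flops/byte -- far below the machine balance.
print(is_memory_bound(2, 12))       # → True (memory-bound)
print(attainable_gflops(2, 12))     # well below PEAK_GFLOPS
```

With the assumed figures, a SAXPY-style kernel can use only about 1 percent of the GPU’s peak arithmetic throughput, which is exactly why Goss stresses maximising memory usage over adding more arithmetic.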


IBM Moves Into Oracle And HP Turf

February 14, 2013
Filed under Computing

Big Blue wants to take on competitors such as Oracle and Hewlett Packard by offering a cheap and cheerful Power Systems server and storage product range.

Rod Adkins, a Senior Vice President in IBM’s Systems & Technology Group, said the company was rolling out new servers based on its Power architecture, with the Power Express 710 starting at $5,947. He said that the 710 is priced competitively against commodity hardware from Oracle and HP.

Adkins added that IBM is expanding its Power and Storage Systems business into SMB and growth markets. The products launch on Tuesday, and IBM said it will start delivering them by February 20.


Will Tegra 4 Launch In Q2?

January 17, 2013
Filed under Computing

Tegra 4 was supposed to be production ready in Q4 2012 and the general expectation was that CES 2013 would be marked by the launch of phones and tablets based on the new chipset.

It turns out that the chip needed another re-spin, something that usually creates a delay of roughly a quarter. We don’t know which part of the chip was to blame but our sources claim that Tegra 4 is a complex chip with a lot of components where many things can go wrong.

Nvidia dared to move to 28nm, change the core from A9 to A15 and find a way to make its LTE work. There were a lot of things that could go wrong and obviously some did.

This is why Intel first shrinks the core, for example from 32nm to 22nm, and then in its “tock” cycle goes for a newly designed core. Nvidia doesn’t have that luxury, as making a 28nm version of Tegra 3 would not be enough for the SoC market in 2013.

A few people at Nvidia have been telling us that the chip has been sampled to accounts and Nvidia is planning to have some designs announced at the Mobile World Congress. We managed to confirm this schedule with some Nvidia partners.


Will Tegra 4 Support USB 3.0?

January 4, 2013
Filed under Computing

Wayne, also known as Tegra 4, is coming out at CES 2013, some 10 days from now. Nvidia has an event planned days ahead of CES 2013 and the company will likely show some tablets and hybrids based on the new Tegra SoC. Let’s call Wayne Tegra 4 before it becomes official.

Nvidia had Wayne ready to launch in Q4 2012 but it had to wait for partners to release the designs based on it, and most of them wanted to do it at CES 2013. European phones based on Wayne are going to show up in February, at the Mobile World Congress.

This is the first quad-core A15 design, and it will bring a significant performance increase over Tegra 3; we are hearing that the four-plus-one core design will deliver a bigger performance boost than Tegra 3 did over Tegra 2. The fact that the new chip is 28nm and supports DDR3L also promises better efficiency.

USB 3.0 support is something that has us excited, as transferring anything onto tablets, phones and hybrids is usually quite slow. USB 3.0 will significantly increase data transfer speeds, and Tegra 4 will be among the first chips to support it.

The other thing that caught our attention is dual-display support, which lets you drive two independent screens. On Tegra 3 based devices you can only mirror the output. This could be a very interesting feature for dockable devices.


nVidia’s Tegra 4 Specs Spotted

December 28, 2012
Filed under Computing

Here is an interesting leak, just what the doctor ordered to spice up a rather slow news cycle. Chiphell has posted a slide containing a few Tegra 4 specs, but we still don’t know the clocks or a few other interesting details. Of course, the leak should be taken with a grain of salt, but the specs are more or less in line with what we were expecting all along.

Tegra 4, codenamed Wayne, is a 28nm part with revamped graphics and new ARM cores. Although the slide does not directly point to the type of ARM cores used in the design, the new chip is based on ARM’s latest A15 core. Like the Tegra 3, the new chip will also feature an additional companion core to improve energy efficiency. No surprises here really.

In terms of GPU performance, Nvidia promises to deliver a six-fold improvement over the Tegra 3 and a 20x improvement over Tegra 2 chips. Oddly enough, in spite of Nvidia’s graphics prowess, Tegra chips never featured world-beating graphics. This time around they could, thanks to the new 72-core GPU. The GPU will be able to cope with 2560×1600 screens at 120Hz, but it could also take on 4K resolutions, although details are still sketchy. At this point 4K support could only be relevant for next-generation smart TVs, with a huge price tag.

As far as other features go, Tegra 4 brings support for USB 3.0 and DDR3L dual-channel memory. The leak does not mention LTE support.

Tegra 4 will have to take on the likes of Samsung’s upcoming Exynos 5440, which should also debut in early 2013. Nvidia was first to market with a quad-core A9 chip, but this time around it will have to face off against the new Exynos and A15 quad-cores from other vendors.

Nvidia is expected to showcase the new chip at CES and we’ll be there to check it out.


nVidia Soars

November 23, 2012
Filed under Computing

Nvidia has published its third quarter earnings and the results are impressive to say the least. With record revenue of $1.2 billion, Nvidia’s net income in Q3 was $209.1 million (GAAP).

Quarterly revenue is up 12.9 percent year-over-year and represents a 15.3 percent sequential bump, beating analyst expectations. However, Nvidia expects its revenue to dip to between $1.02 billion and $1.17 billion in the fourth quarter.

The company blames the projected slump on a declining PC market. It seems Nvidia does not expect that Windows 8 will have a very positive impact on the PC market.


Amazon Goes To Court

November 9, 2012
Filed under Computing

Amazon is suing Daniel Powers, its former VP in charge of global sales for Amazon Web Services, because he joined Google in a cloud computing role.

Amazon asserts that taking the new job violates Powers’ non-compete agreement. Amazon let Powers go this summer with a reasonable severance package.

There is a risk that Powers could take important information he learned about the Amazon Web Services business to its rival, Google, and that is what the firm is seeking to stop.

According to Geekwire, Amazon wants an injunction against Powers to prevent him from “engaging in any activities that directly or indirectly support any aspect of Google’s cloud computing business”.

A court filing claims that Amazon has an agreement with Powers that says he will not join a rival for a “limited time following the termination of his employment”.

Powers, it warns, is a veteran who knows the cloud business from “top to bottom”, adding that he has “acquired and currently possesses extensive knowledge of Amazon’s trade secrets and its highly confidential information”.

The complaint says that he has extensive and detailed information about Amazon Web Services’ prospects, business, potential business partners, pricing strategies and goals.

Amazon has not provided us with further comment.


Will HP Be Broken Up?

October 17, 2012
Filed under Computing

HP has been urged by investment bank UBS to break itself up in order to boost its share price.

After years of mismanagement, HP’s stock price is far lower than it was during the heady dotcom bubble days when it pulled off one of the biggest mergers in recent years by buying Compaq. Now the firm’s stock price languishes around the $14 mark, a figure that could top $20 if HP were to break itself up, according to UBS.

UBS analysts, including Steven Milunovich, said the firm could “realise greater value” by splitting itself up. The analysts added that each separate division of HP is big enough to stand on its own, claiming, “HP’s units are not minnows but rather they are whales packed into the same pond.”

HP spokesman Michael Thacker claimed the firm’s customers want a big HP, effectively allowing them to have one supplier for their IT needs, a message the firm has been playing up for a number of years now. Thacker said, “No matter how you look at it we are confident that HP is stronger together than apart. The company’s operations across business units are deeply integrated and our customers have told us that they want One HP.”


GM Adds IT Jobs

October 15, 2012
Filed under Computing

General Motors Co said on Monday it will add 1,500 jobs at a new software development center in Michigan as part of the U.S. automaker’s previously announced plan to move information technology work back into the company.

GM said it will hire software developers, database experts, analysts and other IT staff over the next four years for the office in Warren, Michigan. It is the second of four software development centers GM plans to open, following one it announced last month in Austin, Texas.

In July, the Detroit automaker said it would reverse years of outsourcing IT work. GM now outsources about 90 percent of its IT services and provides the rest in-house, but it wants to flip those figures in the next three to five years.

The IT overhaul is spearheaded by GM Chief Information Officer Randy Mott, who outlined the plan to GM’s 1,500 IT employees in June. The former Hewlett-Packard Co executive believes the moves will make GM more efficient and productive.

GM, which has not disclosed the cost or savings of its strategy, plans to cut the automaker’s sprawling list of IT applications by at least 40 percent and move to a more standardized platform. GM will also simplify the way it transmits data.

