Syber Group
Toll Free : 855-568-TSTG(8784)

nVidia Releases CUDA

July 10, 2014
Filed under Computing

Nvidia has released CUDA – its parallel programming platform that lets developers run code on GPUs – to server vendors in order to get 64-bit ARM cores into the high performance computing (HPC) market.

The firm said today that ARM64 server processors, which are designed for microservers and web servers because of their energy efficiency, can now process HPC workloads when paired with GPU accelerators using the Nvidia CUDA 6.5 parallel programming framework, which supports 64-bit ARM processors.

“Nvidia’s GPUs provide ARM64 server vendors with the muscle to tackle HPC workloads, enabling them to build high-performance systems that maximise the ARM architecture’s power efficiency and system configurability,” the firm said.

The first GPU-accelerated ARM64 software development servers will be available in July from Cirrascale and E4 Computer Engineering, with production systems from those firms and the Eurotech Group expected to ship later this year.

Cirrascale’s system will be the RM1905D, a high density two-in-one 1U server with two Tesla K20 GPU accelerators, which the firm claims provides high performance and low total cost of ownership for private cloud, public cloud, HPC and enterprise applications.

E4’s EK003 is a production-ready, low-power 3U dual-motherboard server appliance with two Tesla K20 GPU accelerators designed for seismic, signal and image processing, video analytics, track analysis, web applications and MapReduce processing.

Eurotech’s system is an “ultra-high density”, energy efficient and modular Aurora HPC server configuration, based on proprietary Brick Technology and featuring direct hot liquid cooling.

Featuring Applied Micro X-Gene ARM64 CPUs and Nvidia Tesla K20 GPU accelerators, the new ARM64 servers will provide customers with an expanded range of efficient, high-performance computing options to drive compute-intensive HPC and enterprise data centre workloads, Nvidia said.

Nvidia added, “Users will immediately be able to take advantage of hundreds of existing CUDA-accelerated scientific and engineering HPC applications by simply recompiling them to ARM64 systems.”

ARM said that it is working with Nvidia to “explore how we can unite GPU acceleration with novel technologies” and drive “new levels of scientific discovery and innovation”.

Source

nVidia Goes For Raspberry Pi

April 14, 2014
Filed under Computing

nVidia has unveiled what it claims is “the world’s first mobile supercomputer”, a development kit powered by a Tegra K1 chip.

Dubbed the Jetson TK1, the kit is built for embedded systems to aid the development of computers attempting to simulate human recognition of physical objects, such as robots and self-driving cars.

Speaking at the GPU Technology Conference (GTC) on Tuesday, Nvidia co-founder and CEO Jen-Hsun Huang described it as “the world’s tiniest little supercomputer”, noting that it’s capable of running anything the Geforce GTX Titan Z graphics card can run, but at a slower pace.

With a total performance of 326 GFLOPS, the Jetson TK1 should be more powerful than the Raspberry Pi board, which delivers just 24 GFLOPS, but will retail for much more, costing $192 in the US – a number that matches the number of cores in the Tegra K1 processor that Nvidia launched at CES in Las Vegas in January.

Described by the company as a “super chip” that can bridge the gap between mobile computing and supercomputing, the Nvidia Tegra K1, which replaces the Tegra 4, is based on the firm’s Kepler GPU architecture.

The firm boasted at CES that the chip will be capable of bringing next-generation PC gaming to mobile devices, and Nvidia claimed that it will be able to match the PS4 and Xbox One consoles’ graphics performance.

Designed from the ground up for CUDA, which now has more than 100,000 developers, the Jetson TK1 Developer Kit includes the programming tools required by software developers to develop and deploy compute-intensive systems quickly, Nvidia claimed.

“The Jetson TK1 also comes with this new SDK called VisionWorks. Stacked onto CUDA, it comes with a whole bunch of primitives, whether it’s recognising corners or detecting edges, or it could be classifying objects. Parameters are loaded into this VisionWorks primitives system and all of a sudden it recognises objects,” Huang said.

“On top of it, there are simple pipelines we’ve created for you in sample code so that it helps you get started on what a structure-from-motion algorithm, object detection or object tracking algorithm would look like, and on top of that you could develop your own application.”

Nvidia also expects the Jetson TK1 to be able to operate in the sub-10 Watt market for applications that previously consumed 100 Watts or more.

Source

nVidia Outs CUDA 6

March 19, 2014
Filed under Computing

Nvidia has made the CUDA 6 Release Candidate, the latest version of its GPU programming language, available for developers to download for free.

The release arrives with several new features and improvements to make parallel programming “better, faster and easier” for developers creating next generation scientific, engineering, enterprise and other applications.

Nvidia has aggressively promoted its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Available now, the CUDA 6 Release Candidate brings a major new update in unified memory access, which lets CUDA applications access CPU and GPU memory without the need to manually copy data from one to the other.
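As an illustrative sketch rather than Nvidia’s own sample code, the difference is that a single managed allocation replaces the usual pairing of separate host and device buffers with explicit copies between them:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 20;
    float *x;

    // One managed allocation, visible to both CPU and GPU;
    // no explicit cudaMemcpy calls are needed.
    cudaMallocManaged(&x, n * sizeof(float));

    for (int i = 0; i < n; ++i) x[i] = 1.0f;   // written by the CPU

    scale<<<(n + 255) / 256, 256>>>(x, n);     // read and written by the GPU
    cudaDeviceSynchronize();                   // wait before the CPU reads back

    printf("x[0] = %f\n", x[0]);
    cudaFree(x);
    return 0;
}
```

Without unified memory, the same program would need a `cudaMalloc` for the device copy plus a `cudaMemcpy` in each direction around the kernel launch.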

“This is a major time saver that simplifies the programming process, and makes it easier for programmers to add GPU acceleration in a wider range of applications,” Nvidia said in a blog post on Thursday.

There’s also the addition of “drop-in libraries”, which Nvidia said will accelerate applications by up to eight times.

“The new drop-in libraries can automatically accelerate your BLAS and FFTW calculations by simply replacing the existing CPU-only BLAS or FFTW library with the new, GPU-accelerated equivalent,” the chip designer added.

Multi-GPU scaling has also been added to CUDA 6, introducing redesigned BLAS and FFT GPU libraries that automatically scale performance across up to eight GPUs in a single node. Nvidia said this provides over nine teraflops of double-precision performance per node and supports larger workloads, up to 512GB in size – more than it has supported before.
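A minimal sketch of what multi-GPU BLAS looks like from the caller’s side, assuming the cublasXt host API that shipped alongside CUDA 6 (exact signatures per Nvidia’s cuBLAS documentation; matrix initialisation omitted for brevity):

```cuda
#include <cstdlib>
#include <cublasXt.h>

int main()
{
    const size_t n = 4096;
    // Plain host buffers - the library tiles them onto the GPUs itself.
    float *A = (float *)malloc(n * n * sizeof(float));
    float *B = (float *)malloc(n * n * sizeof(float));
    float *C = (float *)malloc(n * n * sizeof(float));
    const float alpha = 1.0f, beta = 0.0f;

    cublasXtHandle_t handle;
    cublasXtCreate(&handle);

    // Tell the library which GPUs to spread the work across; the
    // tiling and host<->device transfers are handled internally.
    int devices[2] = {0, 1};
    cublasXtDeviceSelect(handle, 2, devices);

    // A standard GEMM call: C = alpha*A*B + beta*C, split across both GPUs.
    cublasXtSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                  n, n, n, &alpha, A, n, B, n, &beta, C, n);

    cublasXtDestroy(handle);
    free(A); free(B); free(C);
    return 0;
}
```

The point of the design is that scaling to more GPUs is a change to the `devices` array, not to the math calls themselves.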

“In addition to the new features, the CUDA 6 platform offers a full suite of programming tools, GPU-accelerated math libraries, documentation and programming guides,” Nvidia said.

The previous CUDA 5.5 Release Candidate was issued last June, and added support for ARM based processors.

Aside from ARM support, Nvidia also improved Hyper-Q support in CUDA 5.5, which allowed developers to use MPI workload prioritisation. The firm also touted improved performance analysis and improved performance for cross-compilation on x86 processors.

Source

nVidia Pays Up

January 10, 2014
Filed under Around The Net

Nvidia has agreed to pay any Canadian who had the misfortune to buy a certain laptop computer made by Apple, Compaq, Dell, HP, or Sony between November 2005 and February 2010. Apparently these models contained a dodgy graphics card which was not fixed for five years.

Under a settlement approved by the court, Nvidia will pay $1,900,000 into a fund for anyone who might have bought a faulty card. The Settlement Agreement provides partial cash reimbursement of the purchase price, and claims have to be submitted by February 25, 2014. You will know if your Nvidia card was faulty because your machine would have distorted or scrambled video, or no video on the screen even when the computer is on. There would be random characters, lines or garbled images – a bit like watching one of the Twilight series. There might also be intermittent video issues or a failure to detect the wireless adaptor or wireless networks.

The amount of compensation will be determined by the Claims Administrator who will apply a compensation grid and settlement administration guidelines. Cash compensation will also be provided for total loss of use based on the age of the computer; temporary loss of use having regard to the nature and duration of the loss of use; and reimbursement for out-of-pocket expenses caused by Qualifying Symptoms to an Affected Computer.

Source

Raspberry PI Breaks Record

November 13, 2013
Filed under Computing

A spiritual successor to the Sinclair ZX80 and a runaway success story, the Raspberry Pi might be about to get its own monitor after a Kickstarter campaign to create a low-cost 9in screen for it exceeded its $90,000 goal in a single weekend.

The HDMIPi monitor from startup Raspi.tv presently stands at $100,996 on Kickstarter, an increase of $8,000 in just the last four hours. The concept behind the monitor is to create something small and affordable but with as high a resolution as possible, up to 1920×1080. Even though the project has had to scale down its ambitions to a 1280×800 resolution to fit the business plan, Raspberry Pi fans have flocked to crowdfund the device.

Put in perspective, that’s higher than HD 720p resolution, or as they describe it, “slightly better resolution than the 720p HD footage on BBC iPlayer”.

Monitor cases will be available in a variety of colours, designed by none other than Paul Beech, who designed the original Raspberry Pi logo.

Although primarily designed for the Raspberry Pi, the HDMIPi is a standard HDMI monitor and can be used for other devices – Android sticks, video cameras, games consoles and beyond.

Raspi.tv has pledged to ship orders in February 2014, delays permitting, and is already working on enhancements. It has described touch functionality as something that might become available as a bolt-on at a later date, saying that “enough people have mentioned it that we are sitting up and taking notice”.

As ever with the Raspberry Pi ecosystem, everything is a bit Ryanair, and power supplies, surrounds and so on are not automatically included, though of course, in the true DIY spirit, you can always make your own.

Source

nVidia Launching New Cards

September 10, 2013
Filed under Computing

We weren’t expecting this and it is just a rumour, but reports are emerging that Nvidia is readying two new cards for the winter season. AMD of course is launching new cards four weeks from now, so it is possible that Nvidia would try to counter it.

The big question is with what?

VideoCardz claims one of the cards is an Ultra, possibly the GTX Titan Ultra, while the second one is a dual-GPU job, the Geforce GTX 790. The Ultra is supposedly GK110 based, but it has 2880 unlocked CUDA cores, which is a bit more than the 2688 on the Titan.

The GTX 790 is said to feature two GK110 GPUs, but Nvidia will probably have to clip their wings to get a reasonable TDP.

We’re not entirely sure this is legit. It is plausible, but that doesn’t make it true. It would be good for Nvidia’s image, especially if the revamped GK110 products manage to steal the performance crown from AMD’s new Radeons. However, with such specs, they would end up quite pricey and Nvidia wouldn’t sell that many of them – most enthusiasts would probably be better off waiting for Maxwell.

Source

nVidia’s CUDA 5.5 Available

June 25, 2013
Filed under Computing

Nvidia has made its CUDA 5.5 release candidate supporting ARM based processors available for download.

Nvidia has been aggressively pushing its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Now the firm has announced the availability of a CUDA 5.5 release candidate, the first version of the language that supports ARM based processors.

Aside from ARM support, Nvidia has improved Hyper-Q support and now allows developers to use MPI workload prioritisation. The firm also touted improved performance analysis and improved performance for cross-compilation on x86 processors.

Ian Buck, GM of GPU Computing Software at Nvidia said, “Since developers started using CUDA in 2006, successive generations of better, exponentially faster CUDA GPUs have dramatically boosted the performance of applications on x86-based systems. With support for ARM, the new CUDA release gives developers tremendous flexibility to quickly and easily add GPU acceleration to applications on the broadest range of next-generation HPC platforms.”

Nvidia’s support for ARM processors in CUDA 5.5 is an indication that it will release CUDA enabled Tegra processors in the near future. However outside of the firm’s own Tegra processors, CUDA support is largely useless, as almost all other chip designers have chosen OpenCL as the programming language for their GPUs.

Nvidia did not say when it will release CUDA 5.5, but in the meantime the firm’s release candidate supports Windows, Mac OS X and just about every major Linux distribution.

Source

nVidia Explains Tegra 4 Delays

May 23, 2013
Filed under Computing

nVidia’s CEO Jen-Hsun Huang gave a concrete reason for the Tegra 4 delays during the company’s latest earnings call.

The chip was announced back in January, but Jensen told investors that Tegra 4 was delayed because of Nvidia’s decision to pull in Grey, aka the Tegra 4i, by six months. Pulling the Tegra 4i in and scheduling it for Q4 2013 was, claims Jensen, the reason for the three-month delay in Tegra 4 production. On the other hand, we heard that early versions of Tegra 4 were simply getting too hot, and frankly we don’t see why Nvidia would delay its flagship SoC for tactical reasons.

Engaging the LTE market as soon as possible was the main reason for pulling in the Tegra 4i, claims Jensen. It looks to us like Tegra 4 will be delayed by more than three months, but we have been promised Tegra 4 based devices in Q2 2013, or by the end of June 2013.

Nvidia claims the Tegra 4i has many design wins and that it should be a very popular chip. Nvidia expects partners to announce devices based on this new LTE chip in early 2014. Some of them might showcase devices as early as January, but we would be surprised not to see Tegra 4i devices at the Mobile World Congress next year, which kicks off on February 24th 2014.

Jensen described the Tegra 4i as an incredibly well positioned product, saying that “it brings a level of capabilities and features of performance that that segment has just never seen”. The latter half of 2013 will definitely be interesting for Nvidia’s Tegra division and we are looking forward to seeing the first designs based on this new chip.

Source

nVidia Wins With Tegra 4

April 30, 2013
Filed under Computing

Nvidia’s first Tegra 4 design win is here, apparently, and it doesn’t appear very impressive at all. Tegra 4 is late to the party, so it is a bit short on design wins, to put it mildly.

Now a new ZTE smartphone has been spotted by Chinese bloggers and it seems to be based on Nvidia’s first A15 chip. The ZTE 988 is a phablet, with a 5.7-inch 720p screen. It has 2GB of RAM, a 13-megapixel camera and a 6.9mm thin body. It weighs just 110g, which is pretty surprising. The spec is rather underwhelming, especially in the display department.

However, a grain of salt is advised. It is still unclear whether the phone features a Tegra 4 or a Qualcomm chipset. Also, it is rather baffling to see just a 720p screen on a Tegra 4 phablet; the chip seems like overkill for it.

Source

TSMC Testing ARM’s Cortex A57

April 11, 2013
Filed under Computing

ARM and TSMC have manufactured the first Cortex A57 processor based on ARM’s next-gen 64-bit ARMv8 architecture.

The all-new chip was fabricated on TSMC’s equally new 16nm FinFET process. The A57 is ARM’s fastest chip to date and it will go after high-end tablets, and eventually it will find its place in some PCs and servers as well.

Furthermore the A57 can be coupled with frugal Cortex A53 cores in a big.LITTLE configuration. This should allow it to deliver relatively low power consumption, which is a must for tablets and smartphones. However, bear in mind that A15 cores are only now showing up in consumer products, so it might be a while before we see any devices based on the A57.

In terms of performance, ARM claims the A57 can deliver a “full laptop experience,” even when used in a smartphone connected to a screen, keyboard and mouse wirelessly. It is said to be more power efficient than the A15 and browser performance should be doubled on the A57.

It is still unclear when we’ll get to see the first A57 devices, but it seems highly unlikely that any of them will show up this year. Our best bet is mid-2014, and we are incorrigible optimists. The next big step in ARM evolution will be 20nm A15 cores with next-generation graphics, and they sound pretty exciting as well.

Source
