
AMD Finally Confirms Polaris Specs

July 1, 2016 by  
Filed under Computing


In official slides that have leaked, AMD has confirmed most of the specifications for both the Polaris 10 and Polaris 11 GPUs, which will power the upcoming Radeon RX 480, RX 470 and RX 460 graphics cards.

According to the slides published by Computerbase.de, both GPUs are based on AMD’s 4th-generation Graphics Core Next (GCN 4.0) architecture, offer a 2.8x performance-per-watt improvement over the previous generation, have 4K encode and decode capabilities, and bring DisplayPort 1.3/1.4 and HDR support.

Powering three different graphics cards, the two GPUs will cover different market segments. Polaris 10, codenamed Ellesmere, will power both the Radeon RX 480, aimed at affordable VR and 1440p gaming, and the recently unveiled RX 470, which covers the 1080p gaming segment. Polaris 10 packs 36 Compute Units (CUs), so it should end up with 2,304 Stream Processors. Both the RX 480 and RX 470 should come with 4GB or 8GB of GDDR5 memory paired with a 256-bit memory interface. The Ellesmere GPU offers over 5 TFLOPS of compute performance and should peak at 150W.
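As a sanity check on those numbers, GCN parts have historically used 64 stream processors per CU and can issue two floating-point operations (one fused multiply-add) per stream processor per clock, so a rough back-of-the-envelope calculation lines up with the quoted figures. The ~1.1GHz clock below is purely illustrative, not a confirmed spec:

```latex
36\ \text{CUs} \times 64\ \text{SPs/CU} = 2304\ \text{SPs}
\qquad
2304\ \text{SPs} \times 2\ \text{FLOPs/clock} \times 1.1\,\text{GHz} \approx 5.1\ \text{TFLOPS}
```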

The Radeon RX 470 should be based on the Ellesmere Pro GPU and will probably end up with lower clocks as well as fewer Stream Processors. According to our sources close to the company, it should launch with a US $179 price tag, while the RX 480 should launch on June 29th at US $199 for the reference 4GB version. Most AIB partners will come up with custom 8GB graphics cards, which should probably launch at US $279 and up.

The Polaris 11 GPU, codenamed Baffin, will have 16 CUs and should end up with 1,024 Stream Processors. The recently unveiled Radeon RX 460, based on this GPU, should come with 4GB of GDDR5 memory paired with a 128-bit memory interface. The Radeon RX 460 targets casual and MOBA gamers and should provide decent competition for the GeForce GTX 950, as both have TDPs below 75W and do not need additional PCIe power connectors.

According to earlier leaked benchmarks, AMD’s Polaris architecture packs quite a punch considering both its price and TDP, so AMD just might have a chance at a much-needed rebound in market share.

Courtesy-Fud

 

Micron Announces 3D NAND Based SSDs

June 16, 2016 by  
Filed under Computing


Micron has announced its first client- and OEM-oriented solid-state drives based on 3D NAND, the Micron 1100 and Micron 2100 series.

The Micron 1100 is a mainstream-oriented SSD based on Marvell’s 88SS1074 controller and Micron’s 384Gb 32-layer TLC NAND. Using a SATA 6Gbps interface and available in M.2 and 2.5-inch form factors, the Micron 1100 should replace Micron’s mainstream M600 series, which is based on 16nm MLC NAND.

The Micron 1100 SSD will be available in 256GB, 512GB, 1TB and 2TB capacities. It will offer sequential performance of up to 530MB/s for reads and up to 500MB/s for writes, with random 4K performance of up to 92K IOPS for reads and up to 83K IOPS for writes. With such performance, the Micron 1100 series is clearly targeting the mainstream market as a budget SSD.
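For context, those sequential figures sit close to the practical ceiling of the interface: SATA 6Gbps uses 8b/10b encoding, so the usable payload bandwidth works out to roughly 600MB/s, which is why no SATA drive strays far beyond the numbers quoted above:

```latex
6\ \text{Gb/s} \times \tfrac{8}{10} = 4.8\ \text{Gb/s} \approx 600\ \text{MB/s (theoretical payload ceiling)}
```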

The Micron 2100 is an M.2 PCIe NVMe SSD that is actually Micron’s first client-oriented PCIe SSD and also the first PCIe SSD based on 3D NAND. Unfortunately, Micron has not finalized the precise specifications, so we still do not have performance numbers, but it will be available in capacities of up to 1TB.

The Micron 1100 is expected to hit mass production in July, so we should see some of the first drives by the end of next month. The Micron 2100 will arrive by the end of the summer.

Courtesy-Fud

 

Will HBM 2.0 GPUs Show Up This Year?

April 29, 2016 by  
Filed under Computing


Our well-placed industry sources have told us that we should not expect to see HBM 2.0-based GPUs shipping anytime soon. Nvidia’s Pascal and AMD’s Polaris 10/11 will stick with GDDR5 memory for the time being.

Second-generation High Bandwidth Memory (HBM 2.0) for high-end GPUs might appear in very late Q4 2016, but realistically it probably won’t ship in any volume until 2017.

The first card that we expect to support HBM 2.0 is Greenland, a card that AMD might end up calling Vega. Even according to Radeon Technologies Group’s official GPU roadmap, Vega/Greenland now looks like a 2017 product, or at the very best a late 2016 card. Nvidia might make an HBM 2.0 version of the Titan card, but we don’t expect to see a GeForce GTX based on a Pascal GPU with HBM 2.0 coming to market this year.

We managed to talk to some of the memory manufacturers, and they told us that HBM 2.0 is very limited in supply, and limited supply makes things expensive.

It seems that the GPUs of 2016, including the new AMD Polaris and the new GeForce parts, will be stuck with GDDR5 or, in the best-case scenario, GDDR5X from Micron. The word on the street is that both the Pascal-based GeForce GTX and AMD/RTG’s Polaris 10 (Ellesmere) and Polaris 11 (Baffin) might launch at Computex, during the last days of May or early June 2016.

Courtesy-Fud

Samsung Shows Off The BGA SSD

April 4, 2016 by  
Filed under Around The Net


During Samsung’s 2016 SSD Forum in Japan, the company took the wraps off its first ever ball-grid array (BGA) solid state disk for mobile devices, the PM971. This particular SSD aims to replace module-based M.2 drives in the 2-in-1 hybrid PC market. The company is claiming it will offer improved thermals, up to 10-percent more battery life and a reduction in vertical storage height for OEMs, product designers and system manufacturers.

The Samsung PM971 is built using the company’s Photon controller and runs MLC 3D V-NAND (although PC Watch claims it is actually 3 bits per cell). The drive will be available in 128GB, 256GB and 512GB capacities and will feature sequential reads of up to 1,500MB/s, sequential writes of up to 600MB/s, random reads of up to 190,000 IOPS and random writes of up to 150,000 IOPS.

In general, SSDs with BGA packaging are considerably smaller than those using the M.2 form factor, and Intel has claimed that using a PCI-E BGA SSD could allow an increase in battery size of around 10 percent compared to using an M.2 2260 SSD (with GPIO using a 1.8V power rail instead of 3.3V), lower thermals than M.2 (thanks to BGA ball conduction into the motherboard rather than through M.2 mounting screws), and a vertical height saving of 0.5mm to 1.5mm in notebook devices.

The nice thing about BGA SSDs is that they are “complete” storage solutions, integrating the NAND flash memory, the NAND controller and DRAM into a single package. Currently, there are several BGA M.2 form factors being proposed that will make single-chip SSDs a reality sooner rather than later, as the result of a collaboration between HP, Intel, Lenovo, Micron, SanDisk, Seagate and Toshiba. The four BGA SSD packages proposed are Type 1620, Type 2024, Type 2228 and Type 2828, ranging anywhere between 16 x 20 millimeters and 28 x 28 millimeters with up to 2 millimeters of vertical height. It is currently unknown whether the Samsung PM971 adopts any of these proposed BGA M.2 standards.

Based on the demonstration at the 2016 Samsung SSD Forum in Japan, the PM971 offers decent performance thanks to a PCI-E 3.0 x4 interface and the company’s new Photon controller. According to the PC Watch website, the drive is physically smaller than an SD card and Samsung expects device manufacturers and OEMs to begin adoption in the second half of 2016 or the first half of 2017.

Courtesy-Fud

ARM Goes 4K With Mali

February 5, 2016 by  
Filed under Computing


ARM has announced a new mobile display processor, the Mali-DP650, which it said was designed to handle 4K content both on a device’s own screen and on an external display.

While the new Mali chip can push enough pixels for the local display, it is more likely that ARM is interested in using the technology for streaming.

Many smartphones can already record 4K video, which means a smartphone could be home to high-resolution content that can be streamed to a large, high-resolution screen.

It looks like the Mali-DP650 can juggle the device’s native resolution, the external display’s own resolution and variable refresh rates. At least, that is what ARM says it can do.

The chip is naturally able to handle different resolutions, but it is optimized for “2.5K”, which means WQXGA (2560×1600) on tablets and WQHD (2560×1440) on smartphones, as well as Full HD (1920×1080) for slightly lower-end devices.

Mark Dickinson, general manager of ARM’s media processing group, said: “The Mali-DP650 display processor will enable mobile screens with multiple composition layers, for graphics and video, at Full HD (1920×1080 pixels) resolutions and beyond, while maintaining excellent picture quality and extending battery life.”

“Smartphones and tablets are increasingly becoming content passports, allowing people to securely download content once and carry it to view on whichever screen is most suitable. The ability to stream the best quality content from a mobile device to any screen is an important capability ARM Mali display technology delivers.”

ARM did not say when the Mali-DP650 will be in the shops or which chips will be the first to incorporate its split-display mode feature.

Courtesy-Fud

Samsung Goes 4GB HBM

February 2, 2016 by  
Filed under Computing


Samsung has begun mass producing what it calls the industry’s first 4GB DRAM package based on the second-generation High Bandwidth Memory (HBM) 2 interface.

Samsung’s new HBM solution will be used in high-performance computing (HPC), advanced graphics, network systems and enterprise servers, and is said to offer DRAM performance that is “seven times faster than the current DRAM performance limit”.

This will apparently allow faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning.

“By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies,” said Samsung Electronics’ SVP of memory marketing, Sewon Chun.

“Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market.”

The 4GB HBM2 DRAM, which uses Samsung’s 20nm process technology and advanced HBM chip design, is specifically aimed at next-generation HPC systems and graphics cards.

“The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8Gb core dies on top. These are then vertically interconnected by TSV holes and microbumps,” explained Samsung.

“A single 8Gb HBM2 die contains over 5,000 TSV holes, which is more than 36 times that of an 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.”

Samsung’s new DRAM package features 256GBps of bandwidth, which is double that of an HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips.
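The seven-fold figure follows directly from the interface widths and per-pin speeds involved: an HBM2 package exposes a 1024-bit interface at 2Gbps per pin, while a GDDR5 chip has a 32-bit interface at the quoted 9Gbps per pin.

```latex
\text{HBM2: } \frac{1024\ \text{bits} \times 2\ \text{Gb/s}}{8} = 256\ \text{GB/s}
\qquad
\text{GDDR5: } \frac{32\ \text{bits} \times 9\ \text{Gb/s}}{8} = 36\ \text{GB/s}
\qquad
\frac{256}{36} \approx 7.1\times
```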

The firm’s 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb GDDR5-based solution, and embeds error-correcting code functionality to offer high reliability.

Samsung plans to produce an 8GB HBM2 DRAM package this year, and by integrating this into graphics cards the firm believes designers will be able to save more than 95 percent of space compared with using GDDR5 DRAM. This, Samsung said, will “offer more optimal solutions for compact devices that require high-level graphics computing capabilities”.

Samsung will increase production volume of its HBM2 DRAM over the course of the year to meet anticipated growth in market demand for network systems and servers. The firm will also expand its line-up of HBM2 DRAM solutions in a bid to “stay ahead in the high-performance computing market”.

Courtesy-TheInq

AMD Goes Polaris

January 19, 2016 by  
Filed under Computing


AMD has shown off its upcoming next-generation Polaris GPU architecture at CES 2016 in Las Vegas.

Based on the firm’s fourth-generation Graphics Core Next (GCN) architecture and built using a 14nm FinFET fabrication process, the upcoming GPUs are a big jump from the current 28nm process.

AMD said that it expects shipments of Polaris GPUs to begin in mid-2016, offering improvements such as HDR monitor support and better performance-per-watt.

The much smaller 14nm FinFET process means that Polaris will deliver “a remarkable generational jump in power efficiency”, according to AMD, offering fluid frame rates in graphics, gaming, virtual reality and multimedia applications running on small form-factor thin and light computer designs.

“Our new Polaris architecture showcases significant advances in performance, power efficiency and features,” said AMD president and CEO Lisa Su. “2016 will be a very exciting year for Radeon fans driven by our Polaris architecture, Radeon Software Crimson Edition and a host of other innovations in the pipeline from our Radeon Technologies Group.”

The Polaris architecture features AMD’s fourth-generation GCN architecture, a next-generation display engine with support for HDMI 2.0a and DisplayPort 1.3, and next-generation multimedia features including 4K h.265 encoding and decoding.

GCN enables gamers to experience high-performance video games with Mantle, a tool for alleviating CPU bottlenecks such as API overhead and inefficient multi-threading. Mantle, which is basically AMD’s answer to Microsoft’s DirectX, enables improvements in graphics processing performance. In the past, AMD has claimed that Kaveri, teamed with Mantle and built-in Radeon dual graphics, can provide performance boosts ranging from 49 percent to 108 percent.

The new GPUs are being sampled to OEMs at the moment and we can expect them to appear in products by mid-2016, AMD said. Once they are in the market, we can expect to see much thinner form factors in devices thanks to the much smaller 14nm chip process.

Courtesy-TheInq

AMD Goes Full Steam To Open-Source

December 30, 2015 by  
Filed under Computing


AMD and now RTG (Radeon Technologies Group) are involved in a major push to open source GPU resources.

According to Ars Technica, under the banner “GPUOpen” AMD is releasing a slew of open-source software and tools to give developers of games, heterogeneous applications and HPC applications deeper access to the GPU and GPU resources.

In a statement AMD said that as a continuation of the strategy it started with Mantle, it is giving even more control of the GPU to developers.

“As console developers have benefited from low-level access to the GPU, AMD wants to continue to bring this level of access to the PC space.”

The AMD GPUOpen initiative is meant to give developers the ability to use assets they’ve already made for console development. They will have direct access to GPU hardware, as well as access to a large collection of open source effects, tools, libraries and SDKs, which are being made available on GitHub under an MIT open-source license.

AMD hopes GPUOpen will enable console-style development for PC games through this open-source software initiative. It also includes an end-to-end open-source compute infrastructure for cluster-based computing and a new Linux software and driver strategy.

All this ties in with AMD’s Boltzmann Initiative and an HSA (Heterogeneous System Architecture) software suite that includes an HCC compiler for C++ development, which was supposed to expand the field of programmers who can use HSA. The new HCC C++ compiler was set up to enable developers to more easily use discrete GPU hardware in heterogeneous systems.

It also allows developers to convert CUDA code to portable C++. According to AMD, internal testing shows that in many cases 90 percent or more of CUDA code can be automatically converted into C++, with the remaining 10 percent or so converted manually. An early access program for the “Boltzmann Initiative” tools is planned for Q1 2016.
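To give a sense of what that conversion looks like in practice, here is a minimal sketch of a CUDA-style kernel after porting to HIP’s C++ API. This is an illustrative example rather than output from AMD’s own tooling; it assumes the standard HIP runtime calls (hipMalloc, hipMemcpy, hipLaunchKernelGGL), which map one-to-one onto their cuda* counterparts.

```cpp
// Minimal HIP port of a CUDA-style SAXPY kernel (illustrative sketch only).
#include <hip/hip_runtime.h>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same index math as CUDA
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    hipMalloc(reinterpret_cast<void**>(&dx), n * sizeof(float));        // was cudaMalloc
    hipMalloc(reinterpret_cast<void**>(&dy), n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice); // was cudaMemcpy
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Replaces CUDA's saxpy<<<blocks, threads>>>(...) launch syntax.
    hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                       n, 2.0f, dx, dy);
    hipDeviceSynchronize();

    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dx);
    hipFree(dy);
    return 0;
}
```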

GPUOpen also includes a new Linux driver model and runtime targeted at HPC cluster-class computing. The headless Linux driver is supposed to handle high-performance computing needs with low-latency compute dispatch and PCI Express data transfers, peer-to-peer GPU support, Remote Direct Memory Access (RDMA) from InfiniBand directly into GPU memory, and large single memory allocation support.

Courtesy-Fud

Will GDDR5 Rule In 2016?

December 21, 2015 by  
Filed under Computing


AMD over-hyped the new High Bandwidth Memory standard, and now the second-generation HBM 2.0 is coming in 2016. However, it looks like most of the GPUs shipped this year will still rely on the older GDDR5.

Most entry-level, mainstream and even performance graphics cards from both Nvidia and AMD will rely on GDDR5. This memory has been with us since 2007, but it has dramatically increased in speed. The memory chips have shrunk from 60nm in 2007 to 20nm in 2015, making higher clocks and lower voltages possible.

Some of the big players, including Samsung and Micron, have started producing 8Gb GDDR5 chips that will enable cards with 1GB of memory per chip. The GTX 980 Ti has 12 4Gb chips (512MB per chip), while the Radeon Fury X comes with four HBM 1.0 stacks of 1GB each at much higher bandwidth. The GeForce Titan X has 24 chips of 512MB each, bringing its total memory to 12GB.

Next-generation cards could therefore get 12GB of memory with 12 GDDR5 chips, or 24GB with 24 chips. Most mainstream and performance cards will come with much less memory.
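The chip counts above translate into capacity straightforwardly: each chip contributes its density divided by eight (bits to bytes), so, as an illustration, 12 of the new 8Gb chips land at the same 12GB as the 24 4Gb chips on today’s Titan X.

```latex
\text{capacity} = N_{\text{chips}} \times \frac{\text{density (Gb)}}{8}
\qquad
12 \times \tfrac{8}{8} = 12\ \text{GB}
\qquad
24 \times \tfrac{4}{8} = 12\ \text{GB}
```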

Only a few high-end cards, such as AMD’s Greenland FinFET solution and a GeForce version of Pascal, will come with the more expensive and much faster HBM 2.0 memory.

GDDR6 is arriving in 2016, at least at Micron, and the company promises much higher bandwidth than GDDR5. So there will be a few choices.

Source-http://www.thegurureview.net/computing-category/will-gddr5-rule-in-2016.html

AMD Appears To Be Pushing Its Boltzmann Plan

December 10, 2015 by  
Filed under Computing


Troubled chipmaker AMD is putting a lot of its limited investment money into the “Boltzmann Initiative”, which uses heterogeneous system architecture (HSA) to harness both the CPU and AMD GPU for compute efficiency through software.

VR-World says that stage one results are finished and were shown off this week at SC15. These included the Heterogeneous Compute Compiler (HCC); a headless Linux driver and HSA runtime infrastructure for cluster-class, high-performance computing (HPC); and the Heterogeneous-compute Interface for Portability (HIP) tool for porting CUDA-based applications to C++.

AMD hopes the tools will drive application performance from machine learning to molecular dynamics, and from oil and gas to visual effects and computer-generated imaging.

Jim Belak, co-lead of the US Department of Energy’s Exascale Co-design Center in Extreme Materials and senior computational materials scientist at Lawrence Livermore National Laboratory, said that AMD’s Heterogeneous-compute Interface for Portability enables performance portability for the HPC community.

“The ability to take code that was written for one architecture and transfer it to another architecture without a negative impact on performance is extremely powerful. The work AMD is doing to produce a high-performance compiler that sits below high-level programming models enables researchers to concentrate on solving problems and publishing groundbreaking research rather than worrying about hardware-specific optimizations.”

The new AMD Boltzmann Initiative suite includes an HCC compiler for C++ development, greatly expanding the field of programmers who can leverage HSA.

The new HCC C++ compiler is a key tool in enabling developers to easily and efficiently use the hardware resources in heterogeneous systems. The compiler offers simplified development via single-source execution, with both the CPU and GPU code in the same file.

The compiler automates the placement of code that executes on both processing elements for maximum execution efficiency.
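As a rough sketch of what that single-source model looks like, the snippet below uses the C++ AMP-derived “HC” dialect that HCC supported. The exact headers, namespaces and attributes varied between HCC releases, so treat the specific names (hc.hpp, hc::parallel_for_each, the [[hc]] attribute) as illustrative assumptions rather than a definitive reference.

```cpp
// Single-source host + device code, HC-style (illustrative sketch; API names
// are assumptions based on HCC's C++ AMP-derived "HC" dialect).
#include <hc.hpp>
#include <vector>

int main() {
    const int n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    hc::array_view<const float, 1> av(n, a);
    hc::array_view<const float, 1> bv(n, b);
    hc::array_view<float, 1> cv(n, c);

    // The lambda body is compiled for the GPU; everything around it runs on
    // the CPU, all from the same .cpp file.
    hc::parallel_for_each(cv.get_extent(), [=](hc::index<1> i) [[hc]] {
        cv[i] = av[i] + bv[i];
    });
    cv.synchronize();  // bring the results back into the host-side vector

    return 0;
}
```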

Source- http://www.thegurureview.net/computing-category/amd-appears-to-be-pushing-its-boltzmann-plan.html
