
Will Razer’s External Graphics Box Fail?

March 30, 2016 by  
Filed under Computing


We first saw the Razer Core, an external graphics box that connects to a notebook via a Thunderbolt 3 port, back at CES 2016 in January, and today Razer has finally revealed a few more details, including the price, availability date and compatibility.

At GDC 2016 in San Francisco, Razer announced that the Core will be ready in April at a price of US $499. As expected, it has only been validated with the Razer Blade Stealth and the newly introduced Razer Blade 2016 Edition notebooks, but since it uses the Thunderbolt 3 interface, it should be compatible with any other notebook, as long as the manufacturer enables support.

With dimensions of 105 x 353 x 220mm, the Razer Core is reasonably portable. It comes with a 500W PSU and features four USB 3.0 ports, Gigabit Ethernet and the Thunderbolt 3 port that is used to connect it to a notebook.

As far as graphics card support is concerned, Razer says that the Core will work with any AMD Radeon graphics card since the Radeon 290 series, including the latest R9 Fury, R9 Nano and Radeon 300 series, as well as pretty much any Nvidia Maxwell-based graphics card since the Geforce GTX 750/750 Ti, although we are not sure why you would pair a US $500 box with a US $130 graphics card. The maximum TDP for the graphics card is set at 375W, which rules out all dual-GPU solutions, so it will go as far as the R9 Fury X or the GTX Titan X.
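
To put that 375W ceiling in perspective, here is a minimal C++ sketch that checks a few cards against it. The limit comes from Razer's spec, but the board-power figures are approximate numbers we are quoting from memory, so treat the list as illustrative rather than definitive.

    #include <iostream>
    #include <string>
    #include <vector>

    // Illustrative only: approximate board-power figures for a few cards,
    // checked against the 375W card limit Razer quotes for the Core.
    struct Card { std::string name; int tdp_watts; };

    int main() {
        const int core_limit_watts = 375;          // Razer Core maximum card TDP
        const std::vector<Card> cards = {
            {"Radeon R9 Fury X", 275},             // approximate
            {"Geforce GTX Titan X", 250},          // approximate
            {"Radeon R9 295X2 (dual GPU)", 500},   // approximate
        };
        for (const auto& c : cards) {
            std::cout << c.name << ": "
                      << (c.tdp_watts <= core_limit_watts ? "fits" : "over the limit")
                      << "\n";
        }
        return 0;
    }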

There aren’t many notebooks that feature a Thunderbolt 3 port, and we have heard before that Thunderbolt 3 might have certain issues with latency, which is probably why other manufacturers like MSI and Alienware went with their own proprietary connectors. Of course, Razer has probably done the math, but we will surely keep a closer eye on it when it ships in April. Both AMD and Nvidia are tweaking their drivers and already have support for external graphics, so it probably will not matter which graphics card you pick.

According to Razer, the Razer Core will be available in April and priced at US $499. Razer has already started taking pre-orders for the Core and offers a US $100 discount if you buy it together with one of its notebooks, the Razer Blade 2016 or the Blade Stealth.

Courtesy-Fud

MediaTek Shows Off The Helio X25 Chip

March 28, 2016 by  
Filed under Computing


MediaTek has told Fudzilla that the Helio X25 SoC is not only real, but that it is a “turbo” version of the Helio X20.

Meizu is expected to be one of the first companies to use the X25. Last year it was also the first to use the MTK 6795T, in its Meizu MX5 phone. In that case the “T” suffix stood for Turbo, and the chip was clocked 200MHz higher than the standard “non-T” Helio X10.

In 2016, MediaTek decided to use the new Helio X25 name because of a commercial arrangement. MediaTek didn’t name any of the partners, but confirmed that the CPU and GPU will be faster. It did not mention specific clock speeds, but we assume that the first “eXtreme performance” cluster of the Helio X20 design will get a frequency boost, as will the GPU.

The Helio X25 will not bring any architectural changes; it is just a faster version of the X20, just as the MTK 6795T was a faster version of the MTK 6795. According to the company, the Helio X25 will be available in May.

This three-cluster Helio X25 SoC has real potential and should be one of the most advanced mobile solutions when it hits the market. The first leaked scores of the Helio X20 suggest great performance, and the X25 should score even better. There should be a dozen design wins for the Helio X20/X25, most of them yet to be announced. A few Helio X25 announcements should follow soon, but at least we now know that there will be an even faster version of the three-cluster processor.

Courtesy-Fud

 

Is The GPU Market Going Down?

March 25, 2016 by  
Filed under Computing


The global GPU market has fallen by 20 per cent over the last year.

According to Digitimes, it fell to less than 30 million units in 2015, and the outfit suffering most was AMD. The largest graphics card player, Palit Microsystems, which has several brands including Palit and Galaxy, shipped 6.9-7.1 million graphics cards in 2015, down 10 per cent on year. Asustek Computer shipped 4.5-4.7 million units in 2015, while Colorful shipped 3.9-4.1 million units and is aiming to raise its shipments by 10 per cent on year in 2016.

Micro-Star International (MSI) enjoyed healthy graphics card shipments of 3.45-3.55 million in 2015, up 15 per cent on year, and EVGA, which has tight partnerships with Nvidia, also saw significant shipment growth, while Gigabyte suffered a slight drop on year. Sapphire and PowerColor suffered dramatic drops in shipments in 2015.

There are fears that several of the smaller graphics card makers could be forced out of the market once AMD gets its act together with the arrival of Zen and Nvidia launches its next-generation GPU architecture later in 2016.

Courtesy-Fud

MediaTek Goes LTE CAT 6 On Low End SoCs

February 8, 2016 by  
Filed under Computing


MediaTek appears to be ready to give three more entry-level processors LTE Cat 6 so they can manage 300Mbit downloads and 50Mbit uploads. We already knew that the high-end deca-core X20 and the mainstream eight-core P10 were getting LTE Cat 6.

According to the Gizchina website, the three new SoCs carry the catchy titles of MT6739, MT6750 and MT6750T.

The MT6739 will probably replace the MT6735. Both have quad A53 cores, but the MT6739 gets an upgrade from Cat 4 to Cat 6. The MT6739 supports clock speeds of up to 1.5GHz, 512KB of L2 cache, 1280×720 displays at 60fps, 1080p 30fps video decode with H.264 and a 13-megapixel camera. This makes it an entry-level SoC for phones that might fit into the $100 price range.

The MT6750 and MT6750T look like twins, except that the T version supports full HD 1920×1080 displays. The MT6750 has eight cores, four A53 clocked at 1.5GHz and four A53 clocked at 1.0GHz, and is manufactured on TSMC’s new 28nm High Performance Mobile Computing process. This is the same manufacturing process MediaTek is using for the Helio P10 SoC. The new process allows lower leakage and better overall transistor performance at lower voltage.

The MT6750 SoC supports single-channel LPDDR3 at 666MHz and eMCP up to 4GB. It also supports eMMC 5.1, a 16-megapixel camera and 1080p 30fps decoding in both H.264 and H.265. It comes with an upgraded ARM Mali T860 MP2 GPU clocked at 350MHz and support for 1280×720 (HD720) displays at 60fps. This means the biggest change is the Cat 6 upgrade, and it makes sense: most European and American networks now demand a Cat 6 or higher modem that supports carrier aggregation.

This new SoC looks like a slowed-down version of the Helio P10 and should be popular in entry-level Android phones.

Courtesy-Fud

Samsung And TSMC Battle It Out

February 4, 2016 by  
Filed under Computing


Samsung and TSMC are starting to slug it out by introducing Gen.3 14-nano and 16-nano FinFET system semiconductor processes, but the cost could mean that smartphone makers shy away from the technology in the short term.

It is starting to look as if the sales teams for the pair are each trying to show that their technology cuts electricity consumption and production costs the most.

In its annual results for 2015, TSMC announced that it is planning to enter mass production of chips on its 16-nano FinFET Compact (FFC) process sometime during the first quarter of this year, having finished developing the process at the end of last year. During the announcement, TSMC talked up the fact that the 16-nano FFC process focuses on reducing production costs further than before and on lowering power consumption.

TSMC is apparently ready for mass production on the 16-nano FFC process sometime during the first half of this year and has secured Huawei’s affiliate HiSilicon as its first customer.

HiSilicon’s Kirin 950, used in Huawei’s premium Mate 8 smartphone, is produced on TSMC’s 16-nano FF process. Apple’s A9 chip, used in the iPhone 6S series, is mass-produced on the 16-nano FinFET Plus (FF+) process that was announced in early 2015. By adding the FFC process, TSMC now has three 16-nano processes in action.

Samsung is not far behind: it has mass-produced Gen.2 14-nano FinFET chips using a process called LPP (Low Power Plus). This has 15 per cent lower electricity consumption than the Gen.1 14-nano process, called LPE (Low Power Early).

Samsung Electronics’ 14-nano LPP process was seen in the Exynos 8 Octa series used in the Galaxy S7 and in Qualcomm’s Snapdragon 820. But Samsung Electronics is also preparing a Gen.3 14-nano FinFET process.

Vice-President Bae Young-chang of Samsung’s LSI Business Department’s Strategy Marketing Team said the Gen.3 process will be similar to the Gen.2 14-nano process.

Both Samsung and TSMC might have a few problems. It is not clear what the yields of these processes are, and poor yields could push up production costs.

Even if Samsung Electronics and TSMC finish developing their 10-nano processes at the end of this year and enter mass production next year, they will also have to upgrade their current 14-nano and 16-nano processes to make them more economical.

Even once the 10-nano process is commercialized, there will still be many fabless businesses using 14-nano and 16-nano processes because they are cheaper. While we might see a few flagship phones using the higher-priced chips, we may not see 10nm in the majority of phones for years.

 

Courtesy-Fud

Samsung Goes 4GB HBM

February 2, 2016 by  
Filed under Computing


Samsung has begun mass producing what it calls the industry’s first 4GB DRAM package based on the second-generation High Bandwidth Memory (HBM) 2 interface.

Samsung’s new HBM solution will be used in high-performance computing (HPC), advanced graphics, network systems and enterprise servers, and is said to offer DRAM performance that is “seven times faster than the current DRAM performance limit”.

This will apparently allow faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning.

“By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies,” said Samsung Electronics’ SVP of memory marketing, Sewon Chun.

“Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market.”

The 4GB HBM2 DRAM, which uses Samsung’s 20nm process technology and advanced HBM chip design, is specifically aimed at next-generation HPC systems and graphics cards.

“The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8Gb core dies on top. These are then vertically interconnected by TSV holes and microbumps,” explained Samsung.

“A single 8Gb HBM2 die contains over 5,000 TSV holes, which is more than 36 times that of an 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.”

Samsung’s new DRAM package features 256GBps of bandwidth, which is double that of an HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips.
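
The arithmetic behind those figures is straightforward. The sketch below is our own back-of-the-envelope check, assuming the usual interface widths of 32 bits per GDDR5 chip and 1,024 bits per HBM2 stack; the per-pin rates and die counts come from Samsung’s numbers above.

    #include <cstdio>

    // Peak bandwidth per device = interface width (bits) * per-pin rate (Gbps) / 8.
    // Widths are the usual figures (32-bit per GDDR5 chip, 1024-bit per HBM2 stack);
    // the per-pin rates are the ones quoted in the article.
    double peak_gb_per_s(int interface_bits, double gbps_per_pin) {
        return interface_bits * gbps_per_pin / 8.0;
    }

    int main() {
        std::printf("GDDR5 chip : %.0f GB/s\n", peak_gb_per_s(32, 9.0));    // ~36 GB/s
        std::printf("HBM2 stack : %.0f GB/s\n", peak_gb_per_s(1024, 2.0));  // ~256 GB/s
        // Package capacity: four 8Gb core dies = 32Gb = 4GB.
        std::printf("HBM2 package capacity: %d GB\n", 4 * 8 / 8);
        return 0;
    }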

The firm’s 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb GDDR5-based solution, and embeds error-correcting code functionality to offer high reliability.

Samsung plans to produce an 8GB HBM2 DRAM package this year, and by integrating this into graphics cards the firm believes designers will be able to save more than 95 percent of space compared with using GDDR5 DRAM. This, Samsung said, will “offer more optimal solutions for compact devices that require high-level graphics computing capabilities”.

Samsung will increase production volume of its HBM2 DRAM over the course of the year to meet anticipated growth in market demand for network systems and servers. The firm will also expand its line-up of HBM2 DRAM solutions in a bid to “stay ahead in the high-performance computing market”.

Courtesy-TheInq

AMD Goes Polaris

January 19, 2016 by  
Filed under Computing


AMD has shown off its upcoming next-generation Polaris GPU architecture at CES 2016 in Las Vegas.

Based on the firm’s fourth generation Graphics Core Next (GCN) architecture and built using a 14nm FinFET fabrication process, the upcoming architecture is a big jump from the current 28nm process.

AMD said that it expects shipments of Polaris GPUs to begin in mid-2016, offering improvements such as HDR monitor support and better performance-per-watt.

The much smaller 14nm FinFET process means that Polaris will deliver “a remarkable generational jump in power efficiency”, according to AMD, offering fluid frame rates in graphics, gaming, virtual reality and multimedia applications running on small form-factor thin and light computer designs.

“Our new Polaris architecture showcases significant advances in performance, power efficiency and features,” said AMD president and CEO Lisa Su. “2016 will be a very exciting year for Radeon fans driven by our Polaris architecture, Radeon Software Crimson Edition and a host of other innovations in the pipeline from our Radeon Technologies Group.”

The Polaris architecture features AMD’s fourth-generation GCN architecture, a next-generation display engine with support for HDMI 2.0a and DisplayPort 1.3, and next-generation multimedia features including 4K h.265 encoding and decoding.

GCN enables gamers to experience high-performance video games with Mantle, a tool for alleviating CPU bottlenecks such as API overhead and inefficient multi-threading. Mantle, which is basically AMD’s answer to Microsoft’s DirectX, enables improvements in graphics processing performance. In the past, AMD has claimed that Kaveri, teamed with Mantle and built-in Radeon dual graphics, can provide performance boosts ranging from 49 percent to 108 percent.

The new GPUs are being sampled to OEMs at the moment and we can expect them to appear in products by mid-2016, AMD said. Once they are in the market, we can expect to see much thinner form factors in devices thanks to the much smaller 14nm chip process.

Courtesy-TheInq

AMD Goes Full Steam To Open-Source

December 30, 2015 by  
Filed under Computing


AMD and now RTG (Radeon Technologies Group) are involved in a major push to open source GPU resources.

According to Ars Technica, under the handle “GPUOpen” AMD is releasing a slew of open-source software and tools to give developers of games, heterogeneous applications and HPC applications deeper access to the GPU and GPU resources.

In a statement, AMD said that, as a continuation of the strategy it started with Mantle, it is giving even more control of the GPU to developers.

“As console developers have benefited from low-level access to the GPU, AMD wants to continue to bring this level of access to the PC space.”

The AMD GPUOpen initiative is meant to give developers the ability to use assets they’ve already made for console development. They will have direct access to GPU hardware, as well as access to a large collection of open source effects, tools, libraries and SDKs, which are being made available on GitHub under an MIT open-source license.

AMD hopes GPUOpen will enable console-style development for PC games through this open source software initiative. It also includes an end-to-end open source compute infrastructure for cluster-based computing and a new Linux software and driver strategy.

All this ties in with AMD’s Boltzmann Initiative and an HSA (Heterogeneous System Architecture) software suite that includes the HCC compiler for C++ development. This is supposed to widen the field of programmers who can use HSA, as the HCC C++ compiler lets developers more easily target discrete GPU hardware in heterogeneous systems.

It also allows developers to convert CUDA code to portable C++. According to AMD, internal testing shows that in many cases 90 percent or more of CUDA code can be automatically converted into C++, with the final 10 percent converted manually. An early access program for the “Boltzmann Initiative” tools is planned for Q1 2016.
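
The tools are not out yet, so the following is only a sketch of what that conversion might look like, modelled on the HIP (Heterogeneous-compute Interface for Portability) layer AMD has described as part of Boltzmann; the kernel and buffer names here are made up for illustration. The point is that the device code stays the same and only the runtime calls are renamed.

    // A minimal before/after sketch of the kind of mechanical translation AMD
    // describes, assuming it works along the lines of the HIP portability layer;
    // kernel and buffer names are invented for this example.
    #include <hip/hip_runtime.h>   // the CUDA original would include <cuda_runtime.h>

    __global__ void scale(float* data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // kernel body is unchanged
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1024;
        float host[n];
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        float* dev = nullptr;
        hipMalloc(&dev, n * sizeof(float));                          // was: cudaMalloc
        hipMemcpy(dev, host, n * sizeof(float),
                  hipMemcpyHostToDevice);                            // was: cudaMemcpy
        hipLaunchKernelGGL(scale, dim3(n / 256), dim3(256), 0, 0,
                           dev, 2.0f, n);                            // was: scale<<<n/256, 256>>>(...)
        hipMemcpy(host, dev, n * sizeof(float), hipMemcpyDeviceToHost);
        hipFree(dev);                                                // was: cudaFree
        return 0;
    }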

AMD GPUOpen includes a new Linux driver model and runtime targeted at HPC cluster-class computing. The headless Linux driver is supposed to handle high-performance computing needs with low-latency compute dispatch and PCI Express data transfers, peer-to-peer GPU support, Remote Direct Memory Access (RDMA) from InfiniBand directly into GPU memory, and large single memory allocation support.

Courtesy-Fud

TSMC Goes Fan-Out Wafers

December 23, 2015 by  
Filed under Computing


TSMC is scheduled to move its integrated fan-out (InFO) wafer-level packaging technology to volume production in the second quarter of 2016.

Apparently the fruity cargo cult Apple has already signed up to adopt the technology, which means that the rest of the world’s press will probably notice.

According to the Commercial Times, TSMC will have 85,000-100,000 wafers fabricated with the foundry’s in-house developed InFO packaging technology in the second quarter of 2016.

TSMC has disclosed that its InFO packaging technology will be ready for mass production in 2016. Company president and co-CEO CC Wei remarked at an October 15 investors meeting that TSMC has completed construction of a new facility in Longtan, northern Taiwan.

TSMC’s InFo technology will be ready for volume production in the second quarter of 2016, according to Wei.

TSMC president and co-CEO Mark Liu disclosed the company is working on the second generation of its InFO technology for several projects on 10nm and 7nm process nodes.

Source-http://www.thegurureview.net/computing-category/tsmc-goes-fan-out-wafers.html

Will GDDR5 Rule In 2016

December 21, 2015 by  
Filed under Computing


AMD over-hyped the new High Bandwidth Memory standard, and now the second-generation HBM 2.0 is coming in 2016. However, it looks like most GPUs shipped this year will still rely on the older GDDR5.

Most of the entry-level, mainstream and even performance graphics cards from both Nvidia and AMD will rely on GDDR5. This memory has been with us since 2007, but it has dramatically increased in speed. The memory chips have shrunk from 60nm in 2007 to 20nm in 2015, making higher clocks and lower voltages possible.

Some of the big boys, including Samsung and Micron, have started producing 8Gb GDDR5 chips that will enable cards with 1GB of memory per chip. The GTX 980 Ti has 12 chips at 4Gb each (512MB per chip), while the Radeon Fury X comes with four HBM 1.0 stacks supporting 1GB per stack at much higher bandwidth. The Geforce Titan X has 24 chips with 512MB each, for a total of 12GB.
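
The arithmetic is simple: frame-buffer size is the number of chips multiplied by the capacity per chip. Here is a quick sketch of the figures above, with the Fury X stacks restated as 8Gb each, which is the same thing as 1GB:

    #include <cstdio>

    // Frame-buffer size = number of chips (or stacks) * capacity per chip in gigabits / 8.
    int total_gigabytes(int chips, int gigabits_per_chip) {
        return chips * gigabits_per_chip / 8;   // 8 bits per byte
    }

    int main() {
        std::printf("GTX 980 Ti : %d GB (12 x 4Gb GDDR5)\n", total_gigabytes(12, 4));  // 6GB
        std::printf("Titan X    : %d GB (24 x 4Gb GDDR5)\n", total_gigabytes(24, 4));  // 12GB
        std::printf("Fury X     : %d GB (4 x 8Gb HBM1)\n",   total_gigabytes(4, 8));   // 4GB
        std::printf("8Gb GDDR5  : %d GB from 12 chips\n",    total_gigabytes(12, 8));  // 12GB
        return 0;
    }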

The next-generation cards could therefore get 12GB of memory from 12 GDDR5 chips, or 24GB from 24 chips. Most of the mainstream and performance cards will come with much less memory.

Only a few high-end cards, such as AMD’s Greenland FinFET solution and a Geforce version of Pascal, will come with the more expensive and much faster HBM 2.0 memory.

GDDR6 is arriving in 2016, at least at Micron, and the company promises much higher bandwidth than GDDR5. So there will be a few choices.

Source-http://www.thegurureview.net/computing-category/will-gddr5-rule-in-2016.html
