Red Hat Goes Atomic
The Red Hat Summit kicked off in San Francisco on Tuesday, and continued today with a raft of announcements.
Red Hat launched a new fork of Red Hat Enterprise Linux (RHEL) under the title “Atomic Host”. The new version is stripped down to enable lightweight deployment of software containers. Although the mainline edition also supports software containers, this lightweight version improves portability.
This is part of a wider Red Hat initiative, Project Atomic, which also sees the Docker container platform updated as part of the ongoing partnership between the two organisations.
Red Hat also announced a release candidate (RC) for Red Hat Enterprise Linux 7. The beta version has already been downloaded 10,000 times. The Atomic Host fork is included in the RC.
Topping all that is the news that Red Hat’s latest stable release, RHEL 6.5, has been deployed at the European Organization for Nuclear Research, better known as CERN.
The European laboratory, which houses the Large Hadron Collider (LHC) and was the birthplace of the World Wide Web, has rolled out the latest versions of Red Hat Enterprise Linux, Red Hat Enterprise Virtualisation and Red Hat Technical Account Management. Although Red Hat has a long history with CERN, this has been a major rollout for the facility.
The logging server of the LHC is one of the areas covered by the rollout, as are the financial and human resources databases.
The infrastructure comprises a series of dual-socket Dell PowerEdge M610 servers, virtualised with up to 256GB of RAM per server and full redundancy to prevent the loss of mission-critical data.
Niko Neufeld, deputy project leader at the Large Hadron Collider, said, “Our LHCb experiment requires a powerful, very reliable and highly available IT environment for controlling and monitoring our 70 million CHF detectors. Red Hat Enterprise Virtualization is at the core of our virtualized infrastructure and complies with our stringent requirements.”
Other news from the conference includes the launch of OpenShift Marketplace, allowing customers to try solutions for cloud applications, and the release of Red Hat JBoss Fuse 6.1 and Red Hat JBoss A-MQ 6.1, standards-based integration and messaging products designed to manage everything from cloud computing to the Internet of Things.
Can DirectX 12 Give Mobile A Boost?
Microsoft announced DirectX 12 just a few days ago and for the first time Redmond’s API is relevant beyond the PC space. Some DirectX 12 tech will end up in phones and of course Windows tablets.
Qualcomm likes the idea, along with Nvidia. Qualcomm published a blog post on the potential impact of DirectX 12 on the mobile industry, and the takeaway is very positive indeed.
DirectX 12 equals less overhead, more battery life
Qualcomm says it has worked closely with Microsoft to optimise “Windows mobile operating systems” and make the most of Adreno graphics. The chipmaker points out that current Snapdragon chipsets already support DirectX 9.3 and DirectX 11. However, the transition to DirectX 12 will make a huge difference.
“DirectX 12 will turbocharge gaming on Snapdragon enabled devices in many ways. Just a few years ago, our Snapdragon processors featured one CPU core; now most Snapdragon processors offer four. The new libraries and APIs in DirectX 12 make more efficient use of these multiple cores to deliver better performance,” Qualcomm said.
DirectX 12 will also allow the GPU to be used more efficiently, delivering superior performance per watt.
“That means games will look better and deliver longer gameplay on a single charge,” Qualcomm’s gaming and graphics director Jim Merrick added.
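To illustrate the multi-core point, here is a simplified sketch in plain C++ of the pattern DirectX 12-style APIs enable: each CPU core records its own command list and the results are submitted together. The CommandList type and RecordDrawCalls helper below are hypothetical stand-ins rather than real Direct3D interfaces; in an actual engine each worker thread would fill an ID3D12GraphicsCommandList instead.

```cpp
// Simplified sketch: one command list recorded per CPU core, then a single
// submission. CommandList and RecordDrawCalls() are hypothetical stand-ins
// for ID3D12GraphicsCommandList and real draw-call recording.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <string>
#include <thread>
#include <vector>

struct CommandList {
    std::vector<std::string> commands;   // stand-in for recorded GPU commands
};

// Each worker records the draw calls for its slice of the scene.
void RecordDrawCalls(CommandList& list, int firstObject, int lastObject) {
    for (int i = firstObject; i < lastObject; ++i)
        list.commands.push_back("draw object " + std::to_string(i));
}

int main() {
    const int objectCount = 4000;
    const unsigned cores  = std::max(1u, std::thread::hardware_concurrency());

    std::vector<CommandList> lists(cores);
    std::vector<std::thread> workers;

    // Split the scene across cores; each core records independently, which is
    // where a DirectX 12-style API removes the single-threaded bottleneck.
    for (unsigned c = 0; c < cores; ++c) {
        const int first = static_cast<int>(objectCount * c / cores);
        const int last  = static_cast<int>(objectCount * (c + 1) / cores);
        workers.emplace_back(RecordDrawCalls, std::ref(lists[c]), first, last);
    }
    for (auto& w : workers) w.join();

    // A single "queue submission" of everything that was recorded in parallel.
    std::size_t total = 0;
    for (const auto& l : lists) total += l.commands.size();
    std::printf("Submitted %zu commands from %u command lists\n", total, cores);
    return 0;
}
```

The gain comes from the recording work no longer being funnelled through a single driver thread, which is where older APIs spent much of their CPU time, and that saved CPU time is what translates into the battery-life claims above.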
What about eye candy?
Any improvement in efficiency also tends to have a positive effect on overall quality. Developers can get more out of existing hardware because they have more resources at their disposal; it is as simple as that.
Qualcomm also points out that DirectX 12 is the first version to launch on Microsoft’s mobile operating systems at the same time as its desktop and console counterparts.
The company believes this underscores the growing shift towards, and consumer demand for, mobile gaming. Simultaneous releases should also make it easier to port desktop and console games to mobile platforms.
Of course, this does not mean that we’ll be able to play Titanfall on a Nokia Lumia, or that similarly demanding titles can be ported. However, it will speed up development and allow developers and publishers to recycle resources used in console and PC games. Since Windows Phone isn’t exactly the biggest mobile platform out there, this might be very helpful and it might attract more developers.
Intel Buys Into Altera
Technology gossip columns are full of news that Intel and Altera have expanded their relationship. Apparently, Altera has been Intel’s shoulder to cry on as the chip giant seeks to move beyond the declining PC market and the breakup of the Wintel alliance. Intel took the break-up very hard and there was talk that Altera might be just a rebound thing.
Last year Intel announced that it would manufacture Altera’s ARM-based quad-core Stratix 10 processors, as part of its efforts to grow its foundry business to make silicon products for third parties. Now the two vendors are expanding the relationship to include multi-die devices integrating Altera’s field-programmable gate arrays (FPGAs) and systems-on-a-chip (SoCs) with a range of other components, from memory to ASICs to processors.
Multi-die devices can drive down production costs and improve the performance and energy efficiency of chips for everything from high-performance servers to communications systems. They will take advantage of the Stratix 10 programmable chips that Intel is manufacturing for Altera with its 14-nanometer Tri-Gate process. Combining Intel’s three-dimensional transistor architecture with Altera’s FPGA redundancy technology allows Altera to create a highly dense, energy-efficient programmable die that offers better integration of components.
At the same time, Intel is looking for ways to make more cash from its manufacturing capabilities, including growing its foundry business by making chips for other vendors. CEO Brian Krzanich and other Intel executives have said they will manufacture third-party chips even if they are based on competing architectures, which is the case with Altera and its ARM-based chips.
nVidia Goes For Raspberry Pi
nVidia has unveiled what it claims is “the world’s first mobile supercomputer”, a development kit powered by a Tegra K1 chip.
Dubbed the Jetson TK1, the kit is built for embedded systems to aid the development of computers attempting to simulate human recognition of physical objects, such as robots and self-driving cars.
Speaking at the GPU Technology Conference (GTC) on Tuesday, Nvidia co-founder and CEO Jen-Hsun Huang described it as “the world’s tiniest little supercomputer”, noting that it is capable of running anything the GeForce GTX Titan Z graphics card can run, but at a slower pace.
With a total performance of 326 GFLOPS, the Jetson TK1 should be more powerful than the Raspberry Pi board, which delivers just 24 GFLOPS, but will retail for much more, costing $192 in the US, a figure that matches the 192 cores in the Tegra K1 processor that Nvidia launched at CES in Las Vegas in January.
Described by the company as a “super chip” that can bridge the gap between mobile computing and supercomputing, the Nvidia Tegra K1, which replaces the Tegra 4, is based on the firm’s Kepler GPU architecture.
The firm boasted at CES that the chip will be capable of bringing next-generation PC gaming to mobile devices, and Nvidia claimed that it will be able to match the PS4 and Xbox One consoles’ graphics performance.
Designed from the ground up for CUDA, which now has more than 100,000 developers, the Jetson TK1 Developer Kit includes the programming tools required by software developers to develop and deploy compute-intensive systems quickly, Nvidia claimed.
“The Jetson TK1 also comes with this new SDK called VisionWorks. Stacked onto CUDA, it comes with a whole bunch of primitives, whether it’s recognising corners or detecting edges, or it could be classifying objects. Parameters are loaded into this VisionWorks primitives system and all of a sudden it recognises objects,” Huang said.
“On top of it, there’s simple pipelines we’ve created for you in sample code so that it helps you get started on what structure-from-motion, object detection and object tracking algorithms would look like, and on top of that you could develop your own application.”
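To give a flavour of the primitives-and-pipelines model Huang describes, here is a purely hypothetical C++ sketch. None of the type or function names below come from the actual VisionWorks SDK; they only illustrate how GPU-accelerated primitives might be chained into a detection-and-tracking pipeline.

```cpp
// Purely illustrative: hypothetical names, not the real VisionWorks API.
#include <cstdio>
#include <vector>

struct Frame   { int width; int height; };          // a camera image
struct Edges   { std::vector<int> edgePixels; };    // edge-detection output
struct Objects { std::vector<int> boundingBoxes; }; // detected objects

// Primitives: each would map onto a GPU-accelerated building block.
Edges   DetectEdges(const Frame&)       { return Edges{}; }
Objects ClassifyObjects(const Edges&)   { return Objects{}; }
void    TrackObjects(Objects&)          { /* update positions frame to frame */ }

int main() {
    Frame frame{1280, 720};

    // A sample pipeline: primitives chained together, with parameters
    // supplied by the application, as in the sample code Huang describes.
    Edges edges     = DetectEdges(frame);
    Objects objects = ClassifyObjects(edges);
    TrackObjects(objects);

    std::printf("Tracked %zu objects in a %dx%d frame\n",
                objects.boundingBoxes.size(), frame.width, frame.height);
    return 0;
}
```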
Nvidia also expects the Jetson TK1 to be able to operate in the sub-10 Watt market for applications that previously consumed 100 Watts or more.
AMD, Intel & nVidia Go OpenGL
AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year’s Game Developers Conference (GDC).
Sharing a stage at the event in San Francisco, the three major chip designers explained how, with a little tuning, OpenGL can offer developers between seven and 15 times better performance, compared with the more widely recognised increases of around 1.3 times.
AMD software development manager Graham Sellers, Intel graphics software engineer Tim Foley, Nvidia OpenGL engineer Cass Everitt and Nvidia senior software engineer John McDonald presented their OpenGL techniques on real-world devices to demonstrate that they are suitable for use across multiple platforms.
During the presentation, Intel’s Foley talked up three techniques that can help OpenGL increase performance and reduce driver overhead: persistent-mapped buffers for faster streaming of dynamic geometry, MultiDrawIndirect (MDI) for faster submission of many draw calls, and packing 2D textures into arrays so that texture changes no longer break batches.
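For readers curious what the first two techniques look like in code, here is a minimal sketch assuming an OpenGL 4.4 context and an extension loader such as GLEW are already initialised; the buffer sizes, draw counts and missing vertex-array setup are placeholders rather than a complete renderer.

```cpp
// Sketch of two of the techniques, assuming an OpenGL 4.4 context and loader
// (e.g. GLEW) are already initialised. Sizes and counts are placeholders.
#include <GL/glew.h>

struct DrawElementsIndirectCommand {
    GLuint count;          // indices per draw
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
};

// 1. Persistent-mapped buffer: map once and keep writing every frame,
//    avoiding the Map/Unmap churn that costs driver time.
GLuint vbo = 0;
void*  persistentPtr = nullptr;

void CreatePersistentVertexBuffer(GLsizeiptr size) {
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);            // immutable storage
    persistentPtr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags); // stays mapped
}

// 2. MultiDrawIndirect: pack many draws into one buffer and submit them
//    with a single call instead of hundreds of glDrawElements calls.
GLuint indirectBuffer = 0;

void SubmitDraws(const DrawElementsIndirectCommand* cmds, GLsizei drawCount) {
    glGenBuffers(1, &indirectBuffer);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 drawCount * sizeof(DrawElementsIndirectCommand), cmds, GL_STATIC_DRAW);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, drawCount, 0);
}
```

The point of both techniques is the same: fewer, larger interactions with the driver, so the GPU spends its time drawing rather than waiting on API bookkeeping.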
They also mentioned during their presentation that with proper implementations of these high-level OpenGL techniques, driver overhead could be reduced to almost zero, something Nvidia’s software engineers have claimed is impossible with Direct3D and only possible with OpenGL.
Nvidia’s VP of game content and technology, Ashu Rege, blogged his account of the GDC joint session on the Nvidia blog.
“The techniques presented apply to all major vendors and are suitable for use across multiple platforms,” Rege wrote.
“OpenGL can cut through the driver overhead that has been a frustrating reality for game developers since the beginning of the PC game industry. On desktop systems, driver overhead can decrease frame rate. On mobile devices, however, driver overhead is even more insidious, robbing both battery life and frame rate.”
The talk was entitled Approaching Zero Driver Overhead.
At the Game Developers Conference, Microsoft also unveiled the latest version of its graphics API, DirectX 12, whose Direct3D 12 component promises more efficient gaming.
Showing off the new DirectX 12 API during a demo of the Xbox One racing game Forza 5 running on a PC with an Nvidia GeForce GTX Titan Black graphics card, Microsoft said DirectX 12 gives applications the ability to manage resources directly and perform their own synchronisation. As a result, developers of advanced applications can control the GPU to build games that run more efficiently.
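As a rough illustration of what managing synchronisation directly can mean in practice, here is a minimal Direct3D 12 fence sketch; it assumes an ID3D12Device and ID3D12CommandQueue have already been created elsewhere, and error handling is omitted for brevity.

```cpp
// Minimal Direct3D 12 CPU/GPU synchronisation sketch. Assumes the device and
// command queue were created elsewhere; error handling omitted for brevity.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Fence> fence;
UINT64 fenceValue = 0;
HANDLE fenceEvent = nullptr;

void CreateSyncObjects(ID3D12Device* device) {
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
}

// The application decides when the GPU is finished with a resource: signal
// the fence on the queue, then wait until the GPU has reached that value.
void WaitForGpu(ID3D12CommandQueue* queue) {
    const UINT64 valueToWaitFor = ++fenceValue;
    queue->Signal(fence.Get(), valueToWaitFor);           // GPU writes value when done

    if (fence->GetCompletedValue() < valueToWaitFor) {    // still in flight?
        fence->SetEventOnCompletion(valueToWaitFor, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);        // block until GPU catches up
    }
}
```

Handing this bookkeeping to the application rather than the driver is precisely the trade-off that lets advanced developers squeeze more efficiency out of the GPU.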
AMD To Focus On China
Advanced Micro Devices has relocated its desktop chip business operations from the U.S. to the growing market of China, adding to its research lab and testing plant there.
The desktop market in China is growing at a fast pace, and desktops and laptops ship there in roughly equal numbers, said Michael Silverman, an AMD spokesman, in an email. “The desktop market in China remains strong,” Silverman said.
The move of AMD’s desktop operations was first reported by technology news publication Digitimes, but the chip maker confirmed the news.
The company is also developing tailored products for users in China, Silverman said.
AMD’s move of its desktop operations to China brings the company closer to key customers such as Lenovo, said Dean McCarron, principal analyst at Mercury Research.
“Not that they don’t have their sales in the U.S.,” but a significant number of those PCs are made in China and then shipped internationally, McCarron said.
AMD is the world’s second-largest x86 processor maker behind Intel. Many PC makers like HP and Dell get products made in China.
Being in China also solves some desktop supply chain issues because it moves AMD closer to motherboard suppliers like Asustek and MSI, which are based in Taiwan, but get parts made in China. Chips will be shipped to customers faster and at a lower cost, which would reduce the time it takes for PCs to come to market, McCarron said.
AMD already has a plant in Suzhou, which Silverman said “represents half of our global back-end testing capacity.” AMD’s largest research and development center outside the U.S. is in Shanghai.
Some recent products released by the company have been targeted at developing countries. AMD recently started shipping Sempron and Athlon desktop chips for the Asia-Pacific and Latin America markets, and those chips go into systems priced between $60 and $399. AMD is targeting the chips at users who typically build systems at home and shop for processors, memory and storage. The chips, built on the Jaguar microarchitecture, go into AMD’s new AM1 socket, which will be on motherboards and is designed for users to easily upgrade processors.
China is also big in gaming PCs, and remains a key market for AMD’s desktop chips, said Nathan Brookwood, principal analyst at Insight 64. “White box integrators play a big role in China,” he said.
Do Chip Makers Have Cold Feet?
It is starting to look like chip makers are getting cold feet about moving to the next technology for chipmaking. Fabricating chips on larger silicon wafers is the next step in that transition, but according to the Wall Street Journal chipmakers are mothballing their plans.
Companies have to make massive upfront outlays for plants and equipment, and they are balking because the latest change could boost the cost of a single high-volume factory from around $4 billion to as much as $10 billion. Some companies have been reining in their investments, raising fears that the equipment needed to produce the new chips might be delayed for a year or more.
ASML, a maker of key machines used to define features on chips, recently said it had “paused” development of gear designed to work with the larger wafers. Intel said it has slowed some payments to the Netherlands-based company under a deal to help develop the technology.
Gary Dickerson, chief executive of Applied Materials, said that the move to larger wafers “has definitely been pushed out from a timing standpoint”.
Will Google Use Intel Inside?
It seems that Intel has elbowed its way under the bonnet of the high-profile Nexus 8 tablet. Word on the street is that the Moorefield chip, which is said to reach a top speed of around 2.33GHz when the wind is behind it, has kicked Qualcomm’s tried-and-tested Snapdragon out of the Nexus range.
The move would give the Nexus 8 some good GPU power thanks to the PowerVR G6430 graphics engine. Google may unveil the actual tablet during the Google I/O event, along with the next big upgrade to the Android software, dubbed Lollipop. Still, it is starting to look like Intel may really become a force to be reckoned with in mobile after all.
However, we should point out that Nexus 8 CPU rumors are nothing new. There was talk of Intel, Qualcomm and even Nvidia over the past couple of months – but we are still not entirely certain what’s under the bonnet.
Is AMD Worried?
AMD’s Mantle has been a hot topic for quite some time and, despite its delayed birth, it has finally delivered performance gains in Battlefield 4. Microsoft is not sleeping, either: it has its own answer to Mantle, which we mentioned here.
Oddly enough, we heard some industry people calling it DirectX 12 or DirectX Next, but it looks like Microsoft is finally getting ready to unveil the next generation of DirectX. From what we hear, the next-generation DirectX will fix some of the driver overhead problems addressed by Mantle, which is a good thing for the whole industry and, of course, for gamers.
AMD got back to us, officially stating: “AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle.”
AMD also told us that we can expect some information about this at the Game Developers Conference, which starts on March 17th, less than two weeks from now.
We have a feeling that Microsoft is finally ready to talk about DirectX Next, DirectX 11.X, DirectX 12 or whatever it ends up calling it, and we would not be surprised to see Nvidia’s 20nm Maxwell chips support this API, as well as future GPUs from AMD, possibly again 20nm parts.
Is Samsung Ditching Android?
Samsung appears to have delivered a huge snub to Android maker Google. Samsung’s new smartwatches, the Gear 2 and Gear 2 Neo, the sequels to the poorly reviewed original Galaxy Gear, are going to ship without Android.
Instead, the new Gears run Tizen, another open source operating system that Samsung, Intel, and others are working on. It is starting to look like Samsung wants to distance itself from its reliance on Google for software and services.
Samsung’s official reason is that Tizen has better battery life and performance. The new Gears can get up to an extra two days of battery life by running Tizen, even though they have the same size battery. The Galaxy Gear barely made it through a day on one charge.
To be fair, Android isn’t optimized to run on wearable devices like smartwatches, but Samsung didn’t want to wait around for Google to catch up. It was clearly concerned about beating Apple to market. So far, Apple has not shown up.