Can MediaTek Bring The Cortex-A72 To Market?

March 31, 2015 by  
Filed under Computing

MediaTek became the first chipmaker to publicly demo a SoC based on ARM’s latest Cortex-A72 CPU core, but the company’s upcoming chip still relies on the old 28nm manufacturing process.

We had a chance to see the upcoming MT8173 in action at the Mobile World Congress a couple of weeks ago.

The next step is to bring the new Cortex-A72 core to a new node and into mobiles. This is what MediaTek is planning to do by the end of the year.

Cortex-A72 smartphone parts coming in Q4

It should be noted that MediaTek’s 8000-series parts are designed for tablets, and the MT8173 is no exception. However, the new core will make its way to smartphone SoCs later this year, as part of the MT679x series.

According to Digitimes Research, MediaTek’s upcoming MT679x chips will utilize a combination of Cortex-A53 and Cortex-A72 cores. It is unclear whether MediaTek will use the planar 20nm node or 16nm FinFET for the new part.

By the looks of it, this chip will replace the 32-bit MT6595, which is MediaTek’s most successful high-performance part yet, with a few relatively big design wins including Alcatel, Meizu, Lenovo and Zopo. The new chip will also supplement, and possibly replace, the recently introduced MT6795, a 64-bit Cortex-A53 part used in the HTC Desire 826.

More questions than answers

Digitimes also claims the MT679x Cortex-A72 parts may be the first MediaTek products to benefit from AMD technology, but details are scarce. We can’t say whether or not the part will use AMD GPU technology, or some HSA voodoo magic. Earlier this month we learned that MediaTek is working with AMD and the latest report appears to confirm our scoop.

The other big question is the node. The chip should launch toward the end of the year, so we probably won’t see any devices prior to Q1 2016. While 28nm is still alive and kicking, by 2016 it will be off the table, at least in this market segment. Previous MediaTek roadmap leaks suggested that the company would transition to 20nm on select parts by the end of the year.

However, we are not entirely sure 20nm will cut it for high-end parts in 2016. Huawei has already moved to 16nm with its latest Kirin 930 SoC, Samsung stunned the world with the 14nm Exynos 7420, and Qualcomm’s upcoming Snapdragon 820 will be a FinFET part as well.

It is obvious that TSMC’s and Samsung’s 20nm nodes will not be used on most, if not all, high-end SoCs next year. With that in mind, it would be logical to expect MediaTek to use a FinFET node as well. On the other hand, depending on the cost, 20nm could still make sense for MediaTek – provided it ends up significantly cheaper than FinFET. While a 20nm chip wouldn’t deliver the same level of power efficiency and performance, with the right price it could find its way to more affordable mid-range devices, or flagships designed by smaller, value-oriented brands (especially those focusing on Chinese and Indian markets).

Source

March 30, 2015 by  
Filed under Computing

Target is reportedly close to paying out $10m to settle a class-action case that was filed after it was hacked and stripped of tens of millions of people’s details.

Target was smacked by hackers in 2013 in a massive cyber-thwack on its stores and servers that put some 70 million people’s personal information in harm’s way.

The hack has had massive repercussions. People are losing faith in industry and its ability to store their personal data, and the Target incident is a very good example of why people are right to worry.

As well as tarnishing Target’s reputation, the attack also led to a $162m gap in its financial spreadsheets.

The firm apologized to its punters when it revealed the hack, and chairman, CEO and president Gregg Steinhafel said he was sorry that they had to “endure” such a thing.

Now, according to reports, Target is willing to fork out another $10m to put things right, offering the money as a proposed settlement in one of several class-action lawsuits the company is facing. If accepted, the settlement could see affected parties awarded up to $10,000 each for their troubles.

We have asked Target to either confirm or comment on this, and are waiting for a response. For now we have an official statement at Reuters to turn to. There we see Target spokeswoman Molly Snyder confirming that something is happening but not mentioning the 10 and six zeroes.

“We are pleased to see the process moving forward and look forward to its resolution,” she said.

Not available to comment, not that we asked, will be the firm’s CIO at the time of the hack. Thirty-year Target veteran Beth Jacob left her role in the aftermath of the attack, and a replacement was immediately sought.

“To ensure that Target is well positioned following the data breach we suffered last year, we are undertaking an overhaul of our information security and compliance structure and practices at Target,” said Steinhafel then.

“As a first step in this effort, Target will be conducting an external search for an interim CIO who can help guide Target through this transformation.”

“Transformational change” pro Bob DeRodes took on the role in May last year and immediately began saying the right things.

“I look forward to helping shape information technology and data security at Target in the days and months ahead,” he said.

“It is clear to me that Target is an organization that is committed to doing whatever it takes to do right by their guests.”

We would ask Steinhafel for his verdict on DeRodes so far and the $10m settlement, but would you believe it, he’s not at Target anymore either, having left last summer with a reported $61m golden parachute.

Source

March 27, 2015 by  
Filed under Computing

IBM has started shipping its all-new z13 mainframe computer.

IBM has high hopes that the upgraded model will generate solid sales, based not only on usual customer upgrade patterns but also on a design aimed at helping customers cope with expanding mobile usage, data analysis, tighter security and more “cloud” remote computing.

Mainframes are still a major part of the Systems and Technology Group at IBM, which overall contributed 10.8 percent of IBM’s total 2014 revenues of $92.8 billion. But the z Systems and their predecessors also generate revenue from software, leasing and maintenance and thus have a greater financial impact on IBM’s overall picture.

The new mainframe’s claim to fame is its use of simultaneous multi-threading (SMT) to execute two instruction streams (or threads) on a processor core, which delivers more throughput for Linux on z Systems and IBM z Integrated Information Processor (zIIP) eligible workloads.

There is also Single Instruction Multiple Data (SIMD), a vector processing model that provides data-level parallelism to speed up workloads such as analytics and mathematical modeling. All this means COBOL 5.2 and PL/I 4.5 can exploit SIMD and floating point enhancements to deliver improved performance over and above that provided by the faster processor.
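
To picture the idea (a generic sketch, not tied to the z13 or to COBOL), the kind of loop below is what SIMD hardware accelerates: a vectorizing compiler can map the element-wise arithmetic onto vector instructions so several array elements are processed per instruction. The function name and the scale-and-add operation are purely illustrative.

#include <stddef.h>

/* Element-wise scale-and-add: the classic data-parallel pattern.
 * With optimisation enabled (e.g. -O3), a vectorizing compiler can
 * emit SIMD instructions that handle several doubles per iteration
 * instead of one, which is where the extra throughput comes from. */
void scale_add(double *restrict out, const double *restrict in,
               double factor, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        out[i] = in[i] * factor + 1.0;
    }
}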

Its on-chip cryptographic and compression coprocessors receive a performance boost, improving both general processor and Integrated Facility for Linux (IFL) cryptographic performance and allowing more data to be compressed, helping to save disk space and reduce data transfer time.

There is also a redesigned cache architecture, using eDRAM technology to provide twice as much second level cache and substantially more third and fourth level cache compared to the zEC12. Bigger and faster caches help to avoid untimely swaps and memory waits while maximising the throughput of concurrent workloads.

Tom McPherson, vice president of z System development, said that the new model was not just about microprocessors, though it has many eight-core chips in it. Everything has to be cooled by a combination of water and air, and semiconductor scaling is slowing down, so “you have to get the value by optimizing.”

The first real numbers on how the z13 is selling won’t be public until comments are made in IBM’s first-quarter report, due out in mid-April, when a little more than three weeks’ worth of billings will flow into it.

The company’s fiscal fortunes have sagged, with mixed reviews from both analysts and the blogosphere, much of which revolves around IBM’s lag in cloud services. IBM is positioning the mainframe as a prime cloud server, one of the systems that cloud computing actually runs on.

Source

March 26, 2015 by  
Filed under Computing

HGST has revealed the world’s first 10TB hard drive, but you probably won’t be installing one anytime soon.

The company has been working on the 10TB SMR HelioSeal hard drive for months and now it is almost ready to hit the market.

The drive uses Shingled Magnetic Recording (SMR) to boost density, enabling HGST to cram more data onto every platter. ZDNet got a quick peek at the drive at a Linux event in Boston, which also featured a burning effigy of Nick Farrell.

Although we’ve covered some SMR drives in the past, the technology is still not very mature and so far it’s been limited to niche drives and enterprise designs, not consumer hard drives. HGST’s new drive is no exception – it is designed for data centers rather than PCs. While you won’t use it to store your music and video, you might end up streaming them from one of these babies.
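
A rough way to picture why SMR stays in the archive tier (a conceptual sketch, not HGST’s actual firmware, and the zone size is made up): shingled tracks overlap like roof shingles and are grouped into zones, so data can be appended cheaply, but updating a track in place means rewriting everything shingled on top of it later in the zone.

#include <stdio.h>

/* Conceptual model of the SMR write penalty. Tracks overlap, so they
 * are grouped into zones that behave like append-only logs: updating
 * an early track forces a rewrite of every later track in the zone.
 * The zone size is illustrative, not an HGST specification. */

#define TRACKS_PER_ZONE 64

/* Cost (in track writes) of updating one track inside a zone. */
static int rewrite_cost(int track_in_zone)
{
    /* The updated track plus every overlapped track after it. */
    return TRACKS_PER_ZONE - track_in_zone;
}

int main(void)
{
    printf("update first track of a zone: %d track writes\n",
           rewrite_cost(0));
    printf("update last track of a zone:  %d track writes\n",
           rewrite_cost(TRACKS_PER_ZONE - 1));
    return 0;
}

That penalty is fine for data written once and streamed many times, which is exactly the data centre workload described above, and far less fine for a desktop drive full of constantly changing files.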

Although data centres are slowly turning to SSDs for many hosts, even on cheap shared hosting packages, there is still a need for affordable mechanical storage. Even when hard drives are completely phased out from the frontline, they still have a role to play as backup drives.

Source

March 25, 2015 by  
Filed under Computing

Every three years I install Linux and see if it is ready for prime time yet, and every three years I am disappointed. What is so disappointing is not so much that the operating system is bad, it never has been, it is just that whoever designs it refuses to think of the user.

To be clear I will lay out the same rider I have for my other three reviews. I am a Windows user, but that is not out of choice. One of the reasons I keep checking out Linux is the hope that it will have fixed the basic problems in the intervening years. Fortunately for Microsoft it never has.

This time my main computer had a serious outage caused by a dodgy Corsair (which is now a c word) power supply and I have been out of action for the last two weeks. In the meantime I had to run everything on a clapped-out Fujitsu notebook which took 20 minutes to download a webpage.

One Ubuntu Linux install later it was behaving like a normal computer. This is where Linux has always been far better than Windows – making rubbish computers behave. So I could settle down to work, right? Well, not really.

This is where Linux has consistently disqualified itself from prime-time every time I have used it. Going back through my reviews, I have been saying the same sort of stuff for years.

Coming from Windows 7, where a user can install the OS and start work with no learning curve, Ubuntu is impossible. There is a ton of stuff you have to download before you get anything that passes for an ordinary service, and this downloading is far too tricky for anyone who is used to Windows.

It is not helped by the Ubuntu Software Centre, which is supposed to make life easier for you. Say you need to download a flash player. Adobe has a flash player you can download for Ubuntu. Click on it and Ubuntu asks if you want to open the file with the Ubuntu Software Centre to install it. You would think you would want this, right? The thing is that pressing yes opens the Software Centre but does not download the Adobe flash player. The Centre then says it can’t find the software on your machine.

Here is the problem which I wrote about nearly nine years ago – you can’t download Flash or anything proprietary because that would mean contaminating your machine with something that is not Open Sauce.

Sure, Ubuntu will download all those proprietary drivers, but you have to know to ask – an issue which has been around for so long now that it is silly. The issue of proprietary drivers is only a problem for the hard core open saucers, and there are not enough of them to justify keeping an operating system in the dark ages for a decade. However, they have managed it.

I downloaded LibreOffice and all those other things needed to get a basic “windows experience” and discovered that all those typefaces you know and love are unavailable. They should have been in the proprietary pack but Ubuntu has a problem installing them. This means that I can’t share documents in any meaningful way with Windows users, because all my formatting is screwed.

LibreOffice is not bad, but it really is not Microsoft Word and anyone who tries to tell you otherwise is lying.

I downloaded and configured Thunderbird for mail and for a few good days it actually worked. However, yesterday it disappeared from the sidebar and I can’t find it anywhere. I am restricted to webmail and I am really hating Microsoft’s Outlook experience.

The only thing that is different between this review and the one I wrote three years ago is that there are now games which actually work, thanks to Steam. I have not tried this out yet because I am too stressed with the work backlog caused by having to work on Linux without my regular software, but there is a feeling that Linux is at last moving to a point where it can be a little bit useful.

So what are the main problems that Linux refuses to address? Usability, interface and compatibility.

I know Ubuntu is famous for its shit interface, and Gnome is supposed to be better, but both look and feel dated. I also hate Windows 8’s interface, which requires you to use all your computing power to navigate a touchscreen tablet interface when you have neither a touchscreen nor a tablet. It should have been an opportunity for open saucers to trump Windows with a nice interface – it wasn’t.

You would think that all the brains in the Linux community could come up with a simple, easy to use interface which lets you get at all the files you need without much trouble. The problem is that Linux fans like to tinker; they don’t want usability and they don’t have problems with command screens. Ordinary users, particularly the more recent generations, will not go near a command screen.

Compatibility issues for games have been pretty much resolved, but other key software is missing and Linux operators do not seem keen to get it on board.

I do a lot of layout and graphics work. When you complain about not being able to use Photoshop, Linux fanboys proudly point to GIMP and say it does the same things. You want to grab them by the throat and stuff their heads down the loo and flush. GIMP does less than a tenth of what Photoshop can do, and it does it very badly. There is nothing available on Linux that can do what CS or any real desktop publishing package can do.

Proprietary software designed for real people using a desktop tends to trump anything open saucy, even when the open saucy alternative is a technical marvel.

So in all these years, Linux has not attempted to fix any of the problems which have effectively crippled it as a desktop product.

I will look forward to next week when the new PC arrives and I will not need another Ubuntu desktop experience. Who knows, maybe they will have sorted it all out in another three years’ time.

Source

March 24, 2015 by  
Filed under Computing

Intel has announced details of its first Xeon system on chip (SoC), the new Xeon D 1500 processor family.

Although it is being touted as a server, storage and compute applications chip at the “network edge”, word on the street is that it could be under the bonnet of robots during the next apocalypse.

The Xeon D SoCs use the more useful bits of the E3 and Atom SoCs along with 14nm Broadwell core architecture. The Xeon D chip is expected to bring 3.4x better performance per watt than previous Xeon chips.

Lisa Spelman, Intel’s general manager for the Data Centre Products Group, lifted the kimono on the eight-core 2GHz Xeon D 1540 and the four-core 2.2GHz Xeon D 1520, both running at 45W. The family also features integrated I/O and networking to slot into microservers and appliances for networking and storage, the firm said.

The chips are also being touted for industrial automation and may see life powering robots on factory floors. Since simple robots can run on basic, low-power processors, there’s no reason why faster chips can’t be plugged into advanced robots for more complex tasks, according to Intel.

Source

March 23, 2015 by  
Filed under Computing

SUSE has released OpenStack Cloud 5, the latest version of its infrastructure-as-a-service private cloud distro.

Version 5 puts the OpenStack brand front and centre, and is based on the latest Juno build of the OpenStack open source platform.

This version includes enhanced networking flexibility, with additional plug-ins available and the addition of distributed virtual routing, which enables individual compute nodes to handle routing tasks directly or, if need be, to cluster together.

Increased operational efficiency comes in the form of a new seamless integration with existing servers running outside the cloud. In addition, log collection is centralized into a single view.

As you would expect, SUSE OpenStack 5 is designed to fit perfectly alongside the company’s other products, including the recently launched Suse Enterprise Storage and Suse Linux Enterprise Server 12 as well as nodes from earlier versions.

Deployment has also been simplified as part of a move to standardise “as-a-service” models.

Also included is the OpenStack Sahara data processing project, designed to run Hadoop and Spark on top of OpenStack without degradation. MapR has released support for its own service by way of a co-branded plug-in.

“Furthering the growth of OpenStack enterprise deployments, Suse OpenStack Cloud makes it easier for customers to realise the benefits of a private cloud, saving them money and time they can use to better serve their own customers and business,” said Brian Green, managing director, UK and Ireland, at Suse.

“Automation and high availability features translate to simplicity and efficiency in enterprise data centers.”

Suse OpenStack Cloud 5 is generally available from today.

Source

March 20, 2015 by  
Filed under Computing

Intel has confirmed that it will release Core M processors this year based on its new Skylake chip design.

Intel CEO Brian Krzanich said at the Goldman Sachs Technology and Internet conference that the new Core M chips are due in the second half of the year and will extend battery life in tablets, hybrids and laptop PCs.

The new chips will mean much thinner tablets and mobile PCs, which will make Apple’s MacBook Air look decidedly portly. Intel’s Core M chips, introduced last year, are based on Broadwell, but the Skylake chips should also improve graphics and general application performance.

The Skylake chips will be able to run Windows 10, as well as Google’s Chrome and Android OSes, Krzanich said. But most existing Core M systems run Windows 8.1, and Intel has said device makers haven’t shown a lot of interest in other OSes. So most Skylake devices will probably run Windows 10. Chipzilla is expected to give more details about the new Core M chips in June at the Computex trade show in Taipei.

Skylake systems will also support the second generation of Intel’s RealSense 3D camera technology, which uses a depth sensor to create 3D scans of objects, and which can also be used for gesture and facial recognition. The hope is that the combination of Skylake and a new Windows operating system will give the PC industry a much needed boost.

In related news, Intel announced that socketed Broadwell processors will be available in time for Windows 10.

Source

March 19, 2015 by  
Filed under Computing

HGST announced the acquisition of Belgian software-defined storage provider Amplidata.

Amplidata has been instrumental in HGST’s Active Archive elastic storage solution unveiled at the company’s Big Bang event last September in San Francisco.

Use of Amplidata’s Himalaya distributed storage system, combined with HGST’s unique Helium filled drives, creates systems that can store 10 petabytes on a single rack, designed for cold storage literally and figuratively.

Dave Tang, senior vice president and general manager of HGST’s Elastic Storage Platforms Group, said “Software-defined storage solutions are essential to scale-out storage of the type we unveiled in September. The software is vital to ensuring the durability and scalability of systems.”

Steve Milligan, president and chief executive of Western Digital, added: “We have had an ongoing strategic relationship with Amplidata that included investment from Western Digital Capital and subsequent joint development activity.

“Amplidata has deep technical expertise, an innovative spirit, and valuable intellectual property in this fast-growing market space.

“The acquisition will support our strategic growth initiatives and broaden the scope of opportunity for HGST in cloud data centre storage infrastructure.”

The acquisition is expected to be completed in the first quarter of the year. No financial terms were disclosed.

Amplidata will ultimately be incorporated into the HGST Elastic Storage Platforms Group, a recognition of the fact that every piece of hardware is, in part, software.

Mike Cordano, president of HGST, said at last year’s Big Bang event: “We laugh when we hear that we’re a hardware company. People don’t realise there’s over a million lines of code in that drive. That’s what the firmware is.

“What we’re starting to do now is add software to that and, along with the speed of the PCI-e interface, that makes a much bigger value proposition.”

Source

March 18, 2015 by  
Filed under Computing

IBM has announced the availability of OpenPower servers as part of the firm’s SoftLayer bare metal cloud offering.

OpenPower, a collaborative foundation run by IBM in conjunction with Google and Nvidia, offers a more open approach to IBM’s Power architecture, and a more liberal licence for the code, in return for shared wisdom from member organisations.

Developed in conjunction with Tyan and Mellanox Technologies, both partners in the foundation, the bare metal servers are designed to help organisations extend their infrastructure easily and quickly in a customised manner.

“The new OpenPower-based bare metal servers make it easy for users to take advantage of one of the industry’s most powerful and open server architectures,” said Sonny Fulkerson, CIO at SoftLayer.

“The offering allows SoftLayer to deliver a higher level of performance, predictability and dependability not always possible in virtualised cloud environments.”

Initially, servers will run Linux applications and will be based on the IBM Power8 architecture in the same mold as IBM Power system servers.

This will later expand to the Power ecosystem and then to independent software vendors that support Linux on Power application development, and are migrating applications from x86 to the Power architecture.

OpenPower servers are based on open source technology that extends right down to the silicon level, and can allow highly customised servers ranging from physical to cloud, or even hybrid.

Power systems are already installed in SoftLayer’s Dallas data centre, and there are plans to expand to data centres throughout the world. The system was first rolled out in 2014 as part of the Watson portfolio.

Prices will be announced when general availability arrives in the second quarter.

Source
