
Is Samsung Readying A 10nm SoC?

August 22, 2016 by  
Filed under Computing

Comments Off on Is Samsung Readying A 10nm SoC?

It is that time of the year again. Apple, Qualcomm, MediaTek and now Samsung will have 10nm SoCs ready for phones in early 2017. Samsung wants to use its own 10nm SoC in the Galaxy S8, expected in late February 2017, but will probably ship it alongside a mix of 10nm Snapdragon parts too.

Samsung’s next-generation Exynos has a very uninspired name. You don’t call your much better chip just the Exynos 8895, but that might not be the final name.

The Korean giant shipped the Exynos 7420, the first 14nm SoC for Android, in the Galaxy S6, and followed a year later with the Exynos 8890, still 14nm but with a custom Exynos M1 “Mongoose” plus Cortex-A53 eight-core combination.

The new SoC is rumored to come with a 4GHz clock. The same leak suggests that the Snapdragon 830 can reach 3.6GHz, which would be quite an increase from the 2.15GHz that the company gets with the Snapdragon 820. Samsung’s Exynos 8890 stops at 2.6GHz with one or two cores running, and drops to 2.3GHz when three or four cores from the main cluster run. Call us sceptics, but this 4GHz number sounds like quite a leap from the previous generation.

Let us remind ourselves that a clock speed on its own means very little, almost as little as an AnTuTu score. It tells you the maximum clock of an SoC, but what you really want to know is the performance per watt, or how many TFLOPS you can expect in the best case. A clock speed without knowledge of the architecture is insufficient for any analysis; we have seen 4GHz processors that were slower than 2.5GHz processors.
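To make that concrete, here is a toy Python sketch with invented figures showing how a lower-clocked but wider core can beat a higher-clocked narrow one on both raw throughput and performance per watt:

```python
# Toy comparison of two hypothetical SoC cores: clock speed alone does
# not predict performance. All figures below are invented for illustration.

def perf_per_watt(instr_per_cycle: float, clock_hz: float, watts: float) -> float:
    """Instructions per second per watt for a hypothetical core."""
    return (instr_per_cycle * clock_hz) / watts

# A wide, efficient 2.5GHz design vs a narrow 4GHz design.
wide = perf_per_watt(instr_per_cycle=4.0, clock_hz=2.5e9, watts=2.0)
narrow = perf_per_watt(instr_per_cycle=1.5, clock_hz=4.0e9, watts=3.5)

print(f"2.5GHz wide core:   {wide:.2e} instructions/s per watt")
print(f"4.0GHz narrow core: {narrow:.2e} instructions/s per watt")
# The lower-clocked core wins on throughput (10 vs 6 billion instr/s)
# and on efficiency, which is why a bare GHz figure tells you so little.
```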

The fact that Samsung stuck with the Snapdragon 820 for its latest and greatest Galaxy Note 7 means that the company still needs Qualcomm, and we don’t think this is going to change anytime soon. Qualcomm traditionally makes a better modem, well tailored for the USA, China, Japan and even complex Europe, as well as the rest of the world.

Courtesy-Fud

Is IBM Going To Court Over Unix Dispute?

April 15, 2016 by  
Filed under Computing

Comments Off on Is IBM Going To Court Over Unix Dispute?

Defunct Unix vendor SCO, which claimed that Linux infringed its intellectual property and sought as much as $5bn in compensation from IBM, has filed notice of yet another appeal in the 13-year-old dispute.

The appeal comes after a ruling at the end of February when SCO’s arguments claiming intellectual property ownership over parts of Unix were rejected by a US district court. That judgment noted that SCO had minimal resources to defend counter-claims filed by IBM due to SCO’s bankruptcy.

In a filing, Judge David Nuffer argued that “the nature of the claims are such that no appellate court would have to decide the same issues more than once if there were any subsequent appeals”, effectively suggesting that the case had more than run its course.

On 1 March, that filing was backed up by the judge’s full explanation, declaring IBM the emphatic victor in the long-running saga.

“IT IS ORDERED AND ADJUDGED that pursuant to the orders of the court entered on July 10, 2013, February 5, 2016, and February 8, 2016, judgment is entered in favour of the defendant and plaintiff’s causes of action are dismissed with prejudice,” stated the document.

Now, though, SCO has filed yet again to appeal that judgment, although the precise grounds it is claiming haven’t yet been disclosed.

SCO is being represented by the not-inexpensive law firm of Boies, Schiller & Flexner, which successfully represented the US government against Microsoft in the antitrust case in the late 1990s. Although SCO is officially bankrupt, it is unclear who continues to bankroll the case. Its one remaining “asset” is its claim for damages against IBM.

Meanwhile, despite the costs of the case, IBM has fought SCO vigorously, refusing even to throw a few million dollars at the company by way of settlement, since doing so would only encourage what remains of SCO to pursue other, presumably easier, open source targets.

Courtesy-TheInq

 

IBM’s Watson Goes IoT

January 4, 2016 by  
Filed under Computing

Comments Off on IBM’s Watson Goes IoT

 

IBM has announced a major expansion in Europe with the establishment of a new HQ for Watson Internet of Things (IoT).

The Munich site establishes a global headquarters for the Watson IoT program, which is dedicated to launching “offerings, capabilities and ecosystem partners” designed to bring the cognitive powers of the company’s game-show-winning supercomputer to billions of tiny devices and sensors.

Some 1,000 IBM developers, consultants, researchers and designers will join the Munich facility, which the company describes as an “innovation super center”. It is the biggest IBM investment in Europe for over 20 years.

IBM Cloud will power a series of APIs that will allow IoT developers to harness Watson within their devices.

“The IoT will soon be the largest single source of data on the planet, yet almost 90 percent of that data is never acted on,” said Harriet Green, general manager for Watson IoT and Education.

“With its unique abilities to sense, reason and learn, Watson opens the door for enterprises, governments and individuals to finally harness this real-time data, compare it with historical data sets and deep reservoirs of accumulated knowledge, and then find unexpected correlations that generate new insights to benefit business and society alike.”

The APIs were first revealed in September and new ones for the IoT were announced today.

These include the Natural Language Processing API, which derives the meaning of language from its context and is able to respond in the same plain terms; the Machine Learning Watson API, which can establish patterns in order to perform a repeated task better each time or change its method to suit; the Video and Image Analytics API, which can infer information from video feeds; and the Text Analytics Watson API, which can glean information from unstructured text data such as Twitter feeds.
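As a sketch of how developers might consume such services, the hypothetical Python snippet below posts text to a Watson-style REST endpoint. The URL, key and response fields are invented for illustration and do not document IBM’s actual API:

```python
# Hypothetical sketch of calling a Watson-style text analytics endpoint
# over REST. The URL, credentials and response shape are invented and
# stand in for whatever IBM's real IoT APIs expose.
import requests

API_URL = "https://watson-iot.example.com/v1/text-analytics"  # hypothetical
API_KEY = "YOUR_API_KEY"  # placeholder

def analyse_texts(documents):
    """Send unstructured text to the (hypothetical) analytics endpoint."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"documents": documents},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. per-document keyword or sentiment annotations

if __name__ == "__main__":
    print(analyse_texts(["Sensor 7 is running hot again", "All pumps nominal"]))
```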

The company will also open eight regional centres across four continents to give customers in those territories the opportunity to access information and experiences.

Courtesy-TheInq

 

Suse Goes 64-bit ARM Servers

July 28, 2015 by  
Filed under Computing

Comments Off on Suse Goes 64-bit ARM Servers

Suse wants to speed the development of server systems based on 64-bit ARM processors.

The outfit said that it is making available to its partners a version of Suse Linux Enterprise 12 ported to ARM’s 64-bit architecture (AArch64).

This will enable them to develop, test and deliver products to the market based on ARM chips.

Suse has also implemented support for AArch64 into its openSUSE Build Service. This allows the community to build packages against real 64-bit ARM hardware and the Suse Linux Enterprise 12 binaries.

Hopefully this will improve the time to market for ARM-based solutions, the firm said.

Suse partners include chip makers AMD, AppliedMicro and Cavium, as well as systems vendors Dell, HP and SoftIron. Suse wants ARM processors to be part of a scalable technology platform in the data centre.

Through participation in the programme, partners will be able to build solutions for various applications, from purpose-built appliances for security, medical and network functions, to hyperscale computing, distributed storage and software-defined networking.

There are multiple vendors using the same core technology licensed from ARM. This provides a common base for the OS vendors, like Suse, to build support in their kernel.

Suse has some competition for ARM-based systems. Last year, Red Hat started up its ARM Partner Early Access Programme (PEAP), while Canonical has offered ARM support in its Ubuntu platform for several years now, including a long-term support (LTS) release last year that included the OpenStack cloud computing framework.

Source

IBM Buys Blue Box

June 15, 2015 by  
Filed under Computing

Comments Off on IBM Buys Blue Box

IBM HAS ACQUIRED Blue Box in an attempt to make its cloud offering even bluer. The Seattle-based company specialises in simple platform-as-a-service clouds based on OpenStack.

This, of course, fits in with IBM’s new direction of a PowerPC and OpenStack cloud-based world, as demonstrated by its collaboration with MariaDB on TurboLAMP.

IBM’s move to the cloud is starting to pay off, with cloud revenue of $7.7bn in the 12 months to March 2015, growing more than 16 percent in the first quarter of this year.

The company plans to use the new acquisition to enable rapid integration between cloud-based applications and on-premise systems within the OpenStack managed cloud.

Blue Box also brings a remotely managed OpenStack offering that provides customers with a local cloud, better visibility and control, and tighter security.
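As a rough sketch of what driving an OpenStack cloud programmatically looks like, the snippet below uses the openstacksdk Python library to connect and list compute instances. The endpoint and credentials are placeholders, not Blue Box specifics:

```python
# Minimal sketch of talking to an OpenStack cloud via the openstacksdk
# library, the kind of API surface a remotely managed OpenStack exposes.
# The auth endpoint and credentials below are placeholders.
import openstack

conn = openstack.connect(
    auth_url="https://cloud.example.com:5000/v3",  # placeholder endpoint
    project_name="demo",
    username="demo",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# List compute instances running in the (hypothetical) local cloud.
for server in conn.compute.servers():
    print(server.name, server.status)
```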

“IBM is dedicated to helping our clients migrate to the cloud in an open, secure, data rich environment that meets their current and future business needs,” said IBM general manager of cloud services Jim Comfort.

“The acquisition of Blue Box accelerates IBM’s open cloud strategy, making it easier for our clients to move data and applications across clouds and adopt hybrid cloud environments.”

Blue Box will offer customers a more cohesive, consistent and simplified experience, while at the same time integrating with existing IBM packages like the Bluemix digital innovation platform. The firm also offers a single unified control panel for customer operations.

“No brand is more respected in IT than IBM. Blue Box is building a similarly respected brand in OpenStack,” said Blue Box founder and CTO Jesse Proudman.

“Together, we will deliver the technology and products businesses need to give their application developers an agile, responsive infrastructure across public and private clouds.

“This acquisition signals the beginning of new OpenStack options delivered by IBM. Now is the time to arm customers with more efficient development, delivery and lower cost solutions than they’ve seen thus far in the market.”

IBM has confirmed that it plans to help Blue Box customers to grow their technology portfolio, while taking advantage of the broader IBM product set.

Source

SUSE Brings Hadoop To IBM z Mainframes

April 1, 2015 by  
Filed under Computing

Comments Off on SUSE Brings Hadoop To IBM z Mainframes

SUSE and Apache Hadoop vendor Veristorm are teaming up to bring Hadoop to IBM z and IBM Power systems.

The result means that, regardless of system architecture, users will be able to run Apache Hadoop within a Linux container on their existing hardware, so more users than ever will be able to process big data into meaningful information to inform their business decisions.

Veristorm’s Data Hub and vStorm Enterprise Hadoop will now be available as zDoop, the first mainframe-compatible Hadoop iteration, running on SUSE Linux Enterprise Server for System z, or on IBM Power8 machines in little-endian mode, which makes it significantly easier for x86-based software to be ported to the IBM platform.
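For readers unfamiliar with the programming model, here is a minimal word-count sketch in the Hadoop Streaming style, the kind of job such a stack runs. Streaming pipes text through stdin/stdout, so any language works; the local pipeline at the bottom merely stands in for Hadoop’s map, shuffle and reduce phases:

```python
# Minimal word-count in the Hadoop Streaming style. In a real deployment
# the mapper and reducer ship as two separate scripts that Hadoop invokes;
# here they run back to back so the sketch is self-contained.
import sys
from itertools import groupby

def mapper(lines):
    """Emit a (word, 1) pair for every word seen in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reducer(pairs):
    """Sum counts per word; Hadoop delivers pairs grouped/sorted by key."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Local stand-in for the map -> shuffle/sort -> reduce pipeline.
    mapped = sorted(mapper(sys.stdin))
    for word, total in reducer(mapped):
        print(f"{word}\t{total}")
```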

SUSE and Veristorm have also committed to work together on educating partners and channels on the benefits of the overall package.

Naji Almahmoud, head of global business development for SUSE, said: “The growing need for big data processing to make informed business decisions is becoming increasingly unavoidable.

“However, existing solutions often struggle to handle the processing load, which in turn leads to more servers and difficult-to-manage sprawl. This partnership with Veristorm allows enterprises to efficiently analyse their mainframe data using Hadoop.”

Veristorm launched Hadoop for Linux in April of last year, explaining that it “will help clients to avoid staging and offloading of mainframe data to maintain existing security and governance controls”.

Sanjay Mazumder, CEO of Veristorm, said that the partnership will help customers “maximize their processing ability and leverage their richest data sources” and deploy “successful, pragmatic projects”.

SUSE has been particularly active of late, announcing last month that its software-defined Enterprise Storage product, built around the open source Ceph framework, was to become available as a standalone product for the first time.

Source

Is Oracle’s Linux 7 Unbreakable?

August 5, 2014 by  
Filed under Computing

Comments Off on Is Oracle’s Linux 7 Unbreakable?

Oracle has announced the release of its Linux distribution Oracle Linux 7.

Oracle Linux 7 is the latest release of the company’s enterprise-grade Linux flavour, which is a fork of Red Hat Enterprise Linux.

This latest release adds a range of features including XFS, Btrfs, Linux Containers (LXC), DTrace, Ksplice, Xen enhancements and Oracle’s Unbreakable Enterprise Kernel Release 3.

“Oracle Linux continues to provide the most flexible options for customers and partners, allowing them to easily innovate, collaborate, and create enterprise-grade solutions,” said Oracle SVP of Linux and Virtualization Engineering Wim Coekaerts.

“With Oracle Linux 7, users have more freedom to choose the technologies and solutions that best meet their business objectives. Oracle Linux allows users to benefit from an open approach for emerging technologies, like Openstack, and allows them to meet the performance and reliability requirements of the modern data center.”

Oracle’s outspoken CEO Larry Ellison recently claimed that its servers were “untouchable”, two weeks after it released patches for 36 vulnerabilities in its Java platform.

The company recently won a court case against Google after successfully arguing that the APIs used in Google’s Android mobile operating system infringed Oracle copyrights.

The Oracle Linux 7 operating system is freely downloadable, with updates and security fixes subsequently available from Oracle Yum servers. A paid option is also available for anyone wishing to buy Oracle support.

Oracle Linux 7 has a 10-year production lifecycle, or lifetime support for subscribers, with additional upgrade support available for users of the Unbreakable Enterprise Kernel.

Source

IBM Goes Linux

September 27, 2013 by  
Filed under Computing

Comments Off on IBM Goes Linux

IBM reportedly will invest $1bn in Linux and other open source technologies for its Power system servers.

The firm is expected to announce the news at the Linuxcon 2013 conference in New Orleans, pledging to spend $1bn over five years on Linux and related open source technologies.

The software technology will be used on IBM’s Power line of servers, which are based on the chip technology of the same name and used for running large scale systems in data centres.

Previously IBM Power systems have mostly run IBM’s proprietary AIX version of Unix, though some used in high performance computing (HPC) configurations have run Linux.

If true, this will be the second time IBM has coughed up a $1bn investment in Linux. IBM gave the open source OS the same vote of confidence around 13 years ago.

According to the Wall Street Journal, IBM isn’t investing in Linux to convert its existing AIX customers, but instead Linux will help support data centre applications driving big data, cloud computing and analytics.

“We continue to take share in Unix, but it’s just not growing as fast as Linux,” said IBM VP of Power development Brad McCredie.

The $1bn is expected to go mainly for facilities and staffing to help Power system users move to Linux, with a new centre being opened in France especially to help manage that transition.

Full details are planned to be announced at Linuxcon later today.

Last month, IBM swallowed Israeli security firm Trusteer to boost its customers’ cyber defences with the company’s anti-hacking technology.

Announcing that it had signed a definitive agreement with Trusteer to create a security lab in Israel, IBM said it planned to focus on mobile and application security, counter-fraud and malware detection staffed by 200 Trusteer and IBM researchers.

Source

Dell Promises ExaScale By 2015

June 17, 2013 by  
Filed under Computing

Comments Off on Dell Promises ExaScale By 2015

Dell has claimed it will make exascale computing available by 2015, as the firm enters the high performance computing (HPC) market.

Speaking at the firm’s Enterprise Forum in San Jose, Sam Greenblatt, chief architect of Dell’s Enterprise Solutions Group, said the firm will have exascale systems by 2015, ahead of rival vendors. However, he added that development will not be boosted by a doubling in processor performance, saying Moore’s Law is no longer valid and is actually presenting a barrier for vendors.

“It’s not doubling every two years any more, it has flattened out significantly,” he said. According to Greenblatt, the only way firms can achieve exascale computing is through clustering. “We have to design servers that can actually get us to exascale. The only way you can do it is to use a form of clustering, which is getting multiple parallel processes going,” he said.
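As a loose illustration of that clustering point, the Python sketch below (invented workload, standard library only) gets its throughput from many parallel processes rather than a faster single core; a real exascale system would spread the same idea across thousands of nodes, for example via MPI:

```python
# Toy illustration of "scale out, not up": once single-core speed stops
# doubling, throughput comes from running many processes in parallel.
# The workload is invented; multiprocessing stands in for a cluster.
from multiprocessing import Pool

def simulate_chunk(chunk_id: int) -> int:
    """Stand-in for one shard of a large numerical workload."""
    start, stop = chunk_id * 100_000, (chunk_id + 1) * 100_000
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    with Pool(processes=8) as pool:      # many workers, modest cores
        partials = pool.map(simulate_chunk, range(64))
    print(f"combined result: {sum(partials):.3e}")
```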

Not only did Greenblatt warn that hardware will have to be packaged differently to reach exascale performance, he said that programmers will also need to change. “This is going to be an area that’s really great, but the problem is you never programmed for this area, you programmed to that old Von Neumann machine.”

According to Greenblatt, the shifting of data will also be cut down, a move that he said will make network latency less of a performance issue. “Things are going to change very dramatically, your data is going to get bigger, processing power is going to get bigger and network latency is going to start to diminish, because we can’t move all this [data] through the pipe,” he said.

Greenblatt’s reference to data being closer to the processor is a nod to the increasing volume of data being handled. While HPC networking firms such as Mellanox and Emulex are increasing bandwidths on their respective switch gear, those bandwidth increases are being outpaced by the growth in the size of datasets used by firms deploying analytics workloads and by academic researchers.
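A back-of-the-envelope calculation shows why. With invented growth rates (data tripling yearly against bandwidth merely doubling), the time needed to move a dataset keeps rising even as the link gets faster:

```python
# Back-of-the-envelope illustration with invented numbers: if datasets
# grow faster than link bandwidth, time spent moving data rises even as
# the network gets quicker.
dataset_tb = 10          # today's dataset, terabytes (assumed)
bandwidth_gbps = 40      # today's link, gigabits per second (assumed)

for year in range(4):
    seconds = dataset_tb * 8_000 / bandwidth_gbps   # TB -> gigabits
    print(f"year {year}: {dataset_tb:>4} TB over {bandwidth_gbps} Gbit/s "
          f"= {seconds / 3600:.1f} hours")
    dataset_tb *= 3      # data triples yearly (assumed)
    bandwidth_gbps *= 2  # bandwidth merely doubles (assumed)
```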

That Dell is projecting 2015 for the arrival of exascale clusters is at least a few years sooner than firms such as Intel, Cray and HP, all of which have put a “by 2020” timeframe on the challenge. However, what Greenblatt did not mention is the projected power efficiency of Dell’s 2015 exascale cluster, something that will be critical to its usability.

Source

TSMC Testing ARM’s Cortex A57

April 11, 2013 by  
Filed under Computing

Comments Off on TSMC Testing ARM’s Cortex A57

ARM and TSMC have manufactured the first Cortex A57 processor based on ARM’s next-gen 64-bit ARMv8 architecture.

The all-new chip was fabricated on TSMC’s equally new 16nm FinFET process. The A57 is ARM’s fastest chip to date, and it will go after high-end tablets; eventually it will find its place in some PCs and servers as well.

Furthermore the A57 can be coupled with frugal Cortex A53 cores in a big.LITTLE configuration. This should allow it to deliver relatively low power consumption, which is a must for tablets and smartphones. However, bear in mind that A15 cores are only now showing up in consumer products, so it might be a while before we see any devices based on the A57.
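To illustrate the big.LITTLE idea, here is a toy Python sketch of the scheduling decision involved. The threshold and task loads are invented, and real schedulers weigh far more than a single load figure:

```python
# Toy sketch of big.LITTLE scheduling: park light tasks on the small,
# frugal cores (A53) and reserve the big, fast cores (A57) for heavy
# work, trading peak speed for power. All numbers here are invented.
HEAVY_THRESHOLD = 0.6  # assumed load fraction that justifies a big core

def assign_core(task_load: float) -> str:
    """Pick a core class for a task based on its estimated load."""
    return "A57 (big)" if task_load >= HEAVY_THRESHOLD else "A53 (LITTLE)"

tasks = {"game physics": 0.8, "UI animation": 0.9,
         "background sync": 0.1, "email poll": 0.05}
for name, load in tasks.items():
    print(f"{name:>16} -> {assign_core(load)}")
```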

In terms of performance, ARM claims the A57 can deliver a “full laptop experience,” even when used in a smartphone connected to a screen, keyboard and mouse wirelessly. It is said to be more power efficient than the A15 and browser performance should be doubled on the A57.

It is still unclear when we’ll get to see the first A57 devices, but it seems highly unlikely that any of them will show up this year. Our best bet is mid-2014, and we are incorrigible optimists. The next big step in ARM evolution will be 20nm A15 cores with next-generation graphics, and they sound pretty exciting as well.

Source
