
nVidia NVLINK 2.0 Going In IBM Servers

August 31, 2016
Filed under Computing

On Monday, PCWorld reported that the first servers expected to use Nvidia’s second-generation NVLINK 2.0 technology will be arriving sometime next year using IBM’s upcoming Power9 chip family.

IBM launched its Power8 lineup of superscalar symmetric multiprocessors back in August 2013 at the Hot Chips conference, and the first systems became available in August 2014. The announcement was significant because it signaled the beginning of a continuing partnership between IBM and Nvidia to develop GPU-accelerated IBM server systems, beginning with the Tesla K40 GPU.

The result was an HPC “tag-team” in which IBM’s Power8 architecture, a 12-core chip with 96MB of embedded memory, would eventually play host to Nvidia’s next-generation Pascal GPUs, which debuted in April 2016 at the company’s GPU Technology Conference.

NVLINK, first announced in March 2014, uses a proprietary High-Speed Signaling interconnect (NVHS) developed by Nvidia. The company says NVHS transmits data over a differential pair running at up to 20Gbps, so eight of these differential 20Gbps connections will form a 160Gbps “Sub-Link” that sends data in one direction. Two sub-links—one for each direction—will form a 320Gbps, or 40GB/s bi-directional “Link” that connects processors together in a mesh framework (GPU-to-GPU or GPU-to-CPU).

NVLINK lanes upgrade from 20Gbps to 25Gbps

IBM is projecting its Power9 servers to be available beginning in the middle of 2017, with PCWorld reporting that the new processor lineup will include support for NVLINK 2.0 technology. Each NVLINK lane will communicate at 25Gbps, up from 20Gbps in the first iteration. With eight differential lanes, this translates to a 400Gbps (50GB/s) bi-directional link between CPUs and GPUs, or about 25 percent more performance if the information is correct.
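
As a quick sanity check on those numbers, the sketch below recomputes the per-link figures for both NVLINK generations from the per-lane signalling rates quoted above. The lane counts and rates come from the article; the Python itself is just illustrative arithmetic, not anything Nvidia ships.

```python
# Back-of-the-envelope check of the NVLINK figures quoted above:
# 8 differential lanes per sub-link, two sub-links (one per direction) per link.
LANES_PER_SUBLINK = 8
SUBLINKS_PER_LINK = 2

def link_bandwidth_gbps(lane_rate_gbps: float) -> float:
    """Aggregate bi-directional bandwidth of one NVLINK link, in Gbps."""
    return lane_rate_gbps * LANES_PER_SUBLINK * SUBLINKS_PER_LINK

for gen, lane_rate in (("NVLINK 1.0", 20), ("NVLINK 2.0", 25)):
    gbps = link_bandwidth_gbps(lane_rate)
    print(f"{gen}: {gbps} Gbps = {gbps / 8:.0f} GB/s bi-directional per link")

# NVLINK 1.0: 320 Gbps = 40 GB/s bi-directional per link
# NVLINK 2.0: 400 Gbps = 50 GB/s bi-directional per link
```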

NVLINK 2.0 capable servers arriving next year

Meanwhile, Nvidia has yet to release any NVLINK 2.0-capable GPUs, but a company presentation slide in Korean suggests that the technology will first appear in Volta GPUs, which are also scheduled for release sometime next year. We were originally under the impression that the new GPU architecture would arrive in 2018, as per Nvidia’s roadmap. But a source hinted last month that Volta would be getting the 16nm FinFET treatment and may show up in roughly the same timeframe as AMD’s HBM 2.0-powered Vega sometime in 2017. After all, it is easier for Nvidia to launch sooner if the new architecture is built on the same node as the Pascal lineup.

Still ahead of PCI-Express 4.0

Nvidia claims that PCI-Express 3.0 (32GB/s on an x16 slot) significantly limits a GPU’s ability to access a CPU’s memory system and is about “four to five times slower” than its proprietary standard. Even PCI-Express 4.0, arriving later in 2017, is limited to 64GB/s on an x16 slot.

To put this in perspective, Nvidia’s Tesla P100 Accelerator uses four 40GB/s NVLINK ports to connect clusters of GPUs and CPUs, for a total of 160GB/s of bandwidth.

With a generational NVLINK upgrade from 40GB/s to 50GB/s bi-directional links, the company could release a future Volta-based GPU with four 50GB/s NVLINK ports totaling 200GB/s of bandwidth, well beyond the specifications of the new PCI-Express standard.
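
To put the comparison in one place, the short snippet below tallies the aggregate figures cited in this section: a single x16 PCI-Express slot against a GPU with four NVLINK ports. All numbers are the ones quoted in the article; the code is only a convenience for lining them up.

```python
# Aggregate bandwidth figures quoted in this section, in GB/s (bi-directional).
NVLINK_PORTS_PER_GPU = 4

configs = {
    "PCI-Express 3.0 x16 slot": 32,
    "PCI-Express 4.0 x16 slot": 64,
    "Tesla P100, 4 x 40 GB/s NVLINK ports": NVLINK_PORTS_PER_GPU * 40,
    "Future Volta GPU, 4 x 50 GB/s NVLINK 2.0 ports": NVLINK_PORTS_PER_GPU * 50,
}

for name, gbs in configs.items():
    print(f"{name}: {gbs} GB/s")
```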

Courtesy-Fud

IBM Acquires EZSource

June 14, 2016
Filed under Computing

The digital transformation revolution is already in full swing, but for companies with legacy mainframe applications, it’s not always clear how to get in the game. IBM announced an acquisition that could help.

The company will acquire Israel-based EZSource, it said, in the hopes of helping developers “quickly and easily understand and change mainframe code.”

EZSource offers a visual dashboard that’s designed to ease the process of modernizing applications. Essentially, it exposes application programming interfaces (APIs) so that developers can focus their efforts accordingly.

Developers must often manually check thousands or millions of lines of code, but EZSource’s software instead alerts them to the number of sections of code that access a particular entity, such as a database table, so they can check them to see if updates are needed.
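
As a toy illustration of the kind of impact analysis described here, the sketch below walks a source tree and reports which files reference a given entity, such as a database table. It is emphatically not EZSource’s product, whose internals aren’t public; the file extensions and the CUSTOMER_BALANCE table name are made-up examples.

```python
# Toy impact-analysis sketch: list the places that reference a named entity
# (e.g. a database table) across a source tree. Purely illustrative; this is
# not how EZSource's dashboard actually works under the hood.
import pathlib

def find_references(root, entity, suffixes=(".cbl", ".cob", ".pli")):
    """Return {path: [line numbers]} for lines mentioning `entity`."""
    hits = {}
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in suffixes:
            continue
        text = path.read_text(errors="ignore")
        lines = [no for no, line in enumerate(text.splitlines(), start=1)
                 if entity.upper() in line.upper()]
        if lines:
            hits[path] = lines
    return hits

# Hypothetical usage: which programs touch the CUSTOMER_BALANCE table?
for path, lines in find_references("src", "CUSTOMER_BALANCE").items():
    print(f"{path}: {len(lines)} reference(s) on lines {lines}")
```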

IBM’s purchase is expected to close in the second quarter of 2016. Terms of the deal were not disclosed.

Sixty-eight percent of the world’s production IT workloads run on mainframes, IBM said, amounting to roughly 30 billion business transactions processed each day.

“The mainframe is the backbone of today’s businesses,” said Ross Mauri, general manager for IBM z Systems. “As clients drive their digital transformation, they are seeking the innovation and business value from new applications while leveraging their existing assets and processes.”

EZSource will bring an important capability to the IBM ecosystem, said Patrick Moorhead, president and principal analyst with Moor Insights & Strategy.

“While IBM takes advantage of a legacy architecture with z Systems, it’s important that the software modernizes, and that’s exactly what EZSource does,” Moorhead said.

Large organizations still run a lot of mainframe systems, particularly within the financial-services sector, noted analyst Frank Scavo, president of Computer Economics.

“As these organizations roll out new mobile, social and other digital business experiences, they have no choice but to expose these mainframe systems via APIs,” Scavo said.

But in many large organizations, skilled mainframe developers are in short supply — especially those who really understand these legacy systems, he added.

“Anything to increase the productivity of these developers will go a long way to ensuring the success of their digital business initiatives,” Scavo said. “Automation tools to discover, expose and analyze the inner workings of these legacy apps are really needed.”

It’s a smart move for IBM, he added.

Source- http://www.thegurureview.net/computing-category/looking-to-transform-mainframe-business-ibm-acquires-ezsource.html

Oracle Goes Deeper Into The Cloud

May 13, 2016
Filed under Computing

Right on the heels of a similar acquisition last week, Oracle has announced it will pay $532 million to buy Opower, a provider of cloud services to the utilities industry.

Once a die-hard cloud holdout, Oracle has been making up for lost time by buying a foothold in specific industries through acquisitions such as this one. Last week’s Textura buy gave it a leg up in engineering and construction.

“It’s a good move on Oracle’s part, and it definitely strengthens Oracle’s cloud story,” said Frank Scavo, president of Computer Economics.

Opower’s big-data platform helps utilities improve customer service, reduce costs and meet regulatory requirements. It currently stores and analyzes more than 600 billion meter readings from 60 million end customers. Opower claims more than 100 global utilities among its clients, including PG&E, Exelon and National Grid.

Opower will continue to operate independently until the transaction closes, which is expected later this year. The union will create the largest provider of mission-critical cloud services to an industry that’s worth $2.3 trillion, Oracle said.

Oracle’s Utilities business delivers applications and cloud services that automate core operational processes and enable compliance for global electric, gas and water utilities.

“Oracle’s industry organizations maintain unique domain knowledge, specialized expertise and focused product investments,” said Rodger Smith, a senior vice president who leads the Utilities global business unit, in a letter to customers and partners. “This model has proven highly successful across several industries, and we look forward to bringing these same benefits to the customers of Opower.”

Source- http://www.thegurureview.net/aroundnet-category/oracle-pushes-deeper-into-cloud-computing-with-another-acquisition.html

FCC Votes To Tighten Broadband Providers’ Privacy Rules

April 19, 2016
Filed under Around The Net

The U.S. Federal Communications Commission is moving toward major new regulations requiring ISPs to get customer permission before using or sharing their Web-surfing history and other personal information.

The FCC voted 3-2 last week to approve a notice of proposed rule-making, or NPRM, the first step toward passing new regulations, over the objections of the commission’s two Republicans.

The rules, which will now be released for public comment, would require ISPs to get opt-in permission from customers if they want to use their personal information for most purposes besides marketing their own products.

Republican Commissioners Ajit Pai and Michael O’Rielly complained that the regulations target Internet service providers but not social networks, video providers and other online services.

“Ironically, selectively burdening ISPs, who are nascent competitors in online advertising, confers a windfall on those who are already winning,” Pai said. “The FCC targets ISPs, and only ISPs, for regulation.”

The proposed rules could prohibit some existing practices, including offering premium services in exchange for targeted advertising, that consumers have already agreed to, O’Rielly added. “The agency knows best and must save consumers from their poor privacy choices,” he said.

But the commission’s three Democrats argued that regulations are important because ISPs have an incredible window into their customers’ lives.

ISPs can collect a “treasure trove” of information about a customer, including location, websites visited, and shopping habits, said Commissioner Mignon Clyburn. “I want the ability to determine when and how my ISP uses my personal information.”

Broadband customers would be able to opt out of data collection for marketing and other communications-related services. For all other purposes, including most sharing of personal data with third parties, broadband providers would be required to get customers’ explicit opt-in permission.

The proposal would also require ISPs to notify customers about data breaches, and to notify those directly affected by a breach within 10 days of its discovery.

Courtesy- http://www.thegurureview.net/aroundnet-category/fcc-votes-to-tighten-broadband-providers-privacy-rules.html

Is IBM Going To Court Over Unix Dispute?

April 15, 2016
Filed under Computing

Defunct Unix vendor SCO, which claimed that Linux infringed its intellectual property and sought as much as $5bn in compensation from IBM, has filed notice of yet another appeal in the 13-year-old dispute.

The appeal comes after a ruling at the end of February when SCO’s arguments claiming intellectual property ownership over parts of Unix were rejected by a US district court. That judgment noted that SCO had minimal resources to defend counter-claims filed by IBM due to SCO’s bankruptcy.

In a filing, Judge David Nuffer argued that “the nature of the claims are such that no appellate court would have to decide the same issues more than once if there were any subsequent appeals”, effectively suggesting that the case had more than run its course.

On 1 March, that filing was backed up by the judge’s full explanation, declaring IBM the emphatic victor in the long-running saga.

“IT IS ORDERED AND ADJUDGED that pursuant to the orders of the court entered on July 10, 2013, February 5, 2016, and February 8, 2016, judgment is entered in favour of the defendant and plaintiff’s causes of action are dismissed with prejudice,” stated the document.

Now, though, SCO has filed yet again to appeal that judgment, although the precise grounds it is claiming haven’t yet been disclosed.

SCO is being represented by the not-inexpensive law firm of Boies, Schiller & Flexner, which successfully represented the US government against Microsoft in the antitrust case in the late 1990s. Although SCO is officially bankrupt, it’s unclear who continues to bankroll the case. Its one remaining “asset” is its claims for damages against IBM.

Meanwhile, despite the costs of the case, IBM has fought SCO vigorously, refusing even to throw a few million dollars at the company by way of a settlement, a move that would only encourage what remains of the company to pursue other, presumably easier, open source targets.

Courtesy-TheInq

IBM’s Watson Goes IoT

January 4, 2016
Filed under Computing

IBM has announced a major expansion in Europe with the establishment of a new HQ for Watson Internet of Things (IoT).

The Munich site establishes a global headquarters for the Watson IoT program, which is dedicated to launching “offerings, capabilities and ecosystem partners” designed to bring the cognitive powers of the company’s game show-winning supercomputer to billions of tiny devices and sensors.

Some 1,000 IBM developers, consultants, researchers and designers will join the Munich facility, which the company describes as an “innovation super center”. It is the biggest IBM investment in Europe for over 20 years.

IBM Cloud will power a series of APIs that will allow IoT developers to harness Watson within their devices.

“The IoT will soon be the largest single source of data on the planet, yet almost 90 percent of that data is never acted on,” said Harriet Green, general manager for Watson IoT and Education.

“With its unique abilities to sense, reason and learn, Watson opens the door for enterprises, governments and individuals to finally harness this real-time data, compare it with historical data sets and deep reservoirs of accumulated knowledge, and then find unexpected correlations that generate new insights to benefit business and society alike.”

The APIs were first revealed in September and new ones for the IoT were announced today.

These include the Natural Language Processing API, which interprets everyday language in context and can respond in the same simple terms; the Machine Learning Watson API, which establishes patterns in order to perform a repeated task better each time or to adjust its method to suit; the Video and Image Analytics API, which can infer information from video feeds; and the Text Analytics Watson API, which can glean information from unstructured text data such as Twitter feeds.
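
For a rough sense of how a device or gateway might hand data to such an API, here is a minimal sketch of a JSON-over-HTTPS call. The endpoint URL, token and payload shape are hypothetical placeholders, not IBM’s documented Watson interface, which the article does not spell out.

```python
# Hypothetical sketch: post a snippet of unstructured text to a cloud
# text-analytics endpoint. URL, token and payload shape are placeholders,
# not IBM's actual Watson API.
import json
import urllib.request

API_URL = "https://example.invalid/watson/text-analytics"  # placeholder
API_TOKEN = "REPLACE_ME"                                    # placeholder

def analyze_text(text):
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# e.g. analyze_text("Pump 7 vibration has been above threshold since 03:15")
```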

The company will also open eight regional centres across four continents to give customers in those territories the opportunity to access information and experiences.

Courtesy-TheInq

Oracle’s M7 Processor Has Security On Silicon

November 10, 2015
Filed under Computing

Oracle started shipping systems based on its latest Sparc M7 processor, which the firm said will go a long way to solving the world’s online security problems by building protection into the silicon.

The Sparc M7 chip was originally unveiled at last year’s Openworld show in San Francisco, and was touted at the time as a Heartbleed-prevention tool.

A year on, and Oracle announced the Oracle SuperCluster M7, along with Sparc T7 and M7 servers, at the show. The servers are all based on the 32-core, 256-thread M7 microprocessor, which offers Security in Silicon for better intrusion protection and encryption, and SQL in Silicon for improved database efficiency.

Along with built-in security, the SuperCluster M7 packs compute, networking and storage hardware with virtualisation, operating system and management software into one giant cloud infrastructure box.

Oracle CTO Larry Ellison was on hand at Openworld on Tuesday to explain why the notion of building security into the silicon is so important.

“We are not winning a lot of these cyber battles. We haven’t lost the war but we’re losing a lot of the battles. We have to rethink how we deliver technology especially as we deliver vast amounts of data to the cloud,” he told delegates.

Ellison said that Oracle’s approach to this cyber war is to take security as low down in the stack as possible.

“Database security is better than application security. You should always push security as low in the stack as possible. At the bottom of the stack is silicon. If all of your data in the database is encrypted, that’s better than having an application code that encrypts your data. If it’s in the database, every application that uses that database inherits that security,” he explained.

“Silicon security is better than OS security. Then every operating system that runs on that silicon inherits that security. And the last time I checked, even the best hackers have not figured out a way to download changes to your microprocessor. You can’t alter the silicon, that’s really tricky.”

Ellison’s big idea is to take software security features out of operating systems, VMs and even databases in some cases – because software can be changed – and instead push them into the silicon, which can’t be. He is also urging for security to be switched on by default, without an option to turn it back off again.

“The security features should always be on. We provide encryption in our databases but it can be switched off. That is a bad idea. There should be no way to turn off encryption. The idea of being able to turn on and off security features makes no sense,” he said.

Ellison referred back to a debate that took place at Oracle when it first came up with its backup system: should the firm offer only encrypted backups? “We did a customer survey and customers said no, we don’t want to pay the performance penalty in some cases,” he recalled. “In that case customer choice is a bad idea. Maybe someone will forget to turn on encryption when it should have been turned on and you lose 10 million credit cards.”

The Sparc M7 is basically Oracle’s answer to this dire security situation. Ellison said that while the M7 has lots of software features built into the silicon, the most “charismatic” of these is Silicon Secured Memory, which is “deceptively simple” in how it works.

“Every time a computer program asks for memory, say you ask for 8MB of memory, we compute a key and assign this large number to that 8MB of memory,” he explained. “We take those bits and we lock that memory. We also assign that same number to the program. Every time the program accesses memory, we check that number to make sure it’s the memory you allocated earlier. That compare is done by the hardware.”

If a program tries to access memory belonging to another program, the hardware detects a mismatch and raises a signal, flagging up a possible breach or bug.
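
To make that mechanism a little more concrete, below is a toy software model of the idea: every allocation gets a key, the owning program is handed the same key, and each access compares the two. On the M7 this compare is done by the hardware at full speed; the Python here is only a conceptual sketch, not how the chip is implemented.

```python
# Conceptual model of Silicon Secured Memory-style checking: allocations are
# tagged with a key, and every access must present the matching key. On the
# M7 the compare happens in hardware; this toy code just mimics the idea.
import secrets

class TaggedMemory:
    def __init__(self):
        self._regions = {}  # region id -> (key, buffer)

    def allocate(self, size):
        """Return (region_id, key); callers must present the key on access."""
        key = secrets.token_hex(4)          # stand-in for a small hardware tag
        region_id = len(self._regions)
        self._regions[region_id] = (key, bytearray(size))
        return region_id, key

    def write(self, region_id, key, offset, value):
        stored_key, buffer = self._regions[region_id]
        if key != stored_key:               # mismatch -> flag a possible breach or bug
            raise MemoryError("key mismatch: access to memory you do not own")
        buffer[offset] = value

mem = TaggedMemory()
region, key = mem.allocate(8)
mem.write(region, key, 0, 42)               # legitimate access: keys match
try:
    mem.write(region, "wrong-key", 0, 99)   # simulated rogue write
except MemoryError as err:
    print("detected:", err)
```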

“We put always-on memory intrusion detection into the silicon. We’re always looking for Heartbleed and Venom-like violations. You cannot turn it off,” the CTO warned.

“We’ve also speeded up encryption and decompression, which is kind of related to encryption. It runs at memory speed, so there’s zero cost in doing that. We turn it on, you can’t turn it off, it’s on all the time. It’s all built into the M7.”

Ellison claimed that running M7-based systems will stop threats like Heartbleed and Venom in their tracks.

“The way Venom worked, the floppy disc driver concealed this code. It’s the worst kind of situation, you’re writing into memory you’re not supposed to. You’re writing computer instructions into the memory and you’ve just taken over the whole computer,” he explained. “You can steal and change data. M7 – the second we tried to write that code into memory that didn’t belong to that program, where the keys didn’t match, that would have been detected real-time and that access would have been foiled.”

All well and good, except for the fact that nearly every current computer system doesn’t run on the M7 processor. Ellison claimed that even if only three or four percent of the servers in the cloud an organisation is using have this feature, the organisation will still be protected, as those machines provide the early warning needed to deal with the issue across non-M7 systems.

“You don’t have to replace every microprocessor, you just have to replace a few so you get the information real-time,” he added.

“You’ll see us making more chips based on security, to secure our cloud and to sell to people who want to secure their clouds or who want to have secure computers in their datacentre. Pushing security down into silicon is a very effective way to do that and get ahead of bad guys.”

SuperCluster M7 and Sparc M7 servers are available now. Pricing has not been disclosed, but based on normal Oracle hardware costs, expect to dig deep to afford one.

Source-http://www.thegurureview.net/computing-category/oracles-new-m7-processor-has-security-on-silicon.html

Oracle’s New Processor Goes For The Cheap

August 13, 2015
Filed under Computing

Oracle is looking to expand the market for its Sparc-based servers with a new, low-cost processor which it curiously called Sonoma.

The company isn’t saying yet when the chip will be in the shops, but the spec suggests that it could become a new rival for Intel’s Xeon chips and make Oracle’s servers more competitive.

Sonoma is named after a place where they make cheap, terrible Californian wine, and Oracle is aiming the chip at Sparc-based servers at “significantly lower price points” than today’s.

This means that companies can use them for smaller, less critical applications.

Oracle has not done much with its Sparc line-up for a couple of years, and Sonoma was one of a few new chips planned. The database maker will update its Sparc T5, used in its mid-range systems, and the high-end Sparc M7, and that technology is expected to filter down to the lower-tier Sonoma servers.

The Sparc M7 will have technologies for encryption acceleration and memory protection built into the chip. It will include coprocessors to speed up database performance.

According to IDG, Sonoma will take those same technologies and bring them down to lower price points, which means they can be used in cloud computing and for smaller applications.

Oracle didn’t talk about prices or say how much cheaper the new Sparc systems will be, and it could potentially be years before Sonoma comes to market.

Source

Suse Goes 64-bit ARM Servers

July 28, 2015
Filed under Computing

Suse wants to speed the development of server systems based on 64-bit ARM processors.

The outfit said that it is making available to its partners a version of Suse Linux Enterprise 12 ported to ARM’s 64-bit architecture (AArch64).

This will enable them to develop, test and deliver products to the market based on ARM chips.

Suse has also implemented support for AArch64 into its openSUSE Build Service. This allows the community to build packages against real 64-bit ARM hardware and the Suse Linux Enterprise 12 binaries.

Hopefully this will improve the time to market for ARM-based solutions, the firm said.

Suse partners include chip makers AMD, AppliedMicro and Cavium, as well as system builders Dell, HP and SoftIron. Suse wants ARM processors to be part of a scalable technology platform in the data centre.

Through participation in the programme, partners will be able to build solutions for various applications, from purpose-built appliances for security, medical and network functions, to hyperscale computing, distributed storage and software-defined networking.

There are multiple vendors using the same core technology licensed from ARM. This provides a common base for the OS vendors, like Suse, to build support in their kernel.

Suse has some competition for ARM-based systems. Last year, Red Hat started up its ARM Partner Early Access Programme (PEAP), while Canonical has offered ARM support in its Ubuntu platform for several years now, including a long-term support (LTS) release last year that included the OpenStack cloud computing framework.

Source

IBM Buys Blue Box

June 15, 2015
Filed under Computing

IBM HAS ACQUIRED Blue Box in an attempt to make its cloud offering even bluer. The Seattle-based company specialises in simple, remotely managed private clouds based on OpenStack.

This, of course, fits in with IBM’s new direction of a POWER and OpenStack cloud-based world, as demonstrated by its collaboration with MariaDB on TurboLAMP.

IBM’s move to the cloud is starting to pay off, seeing revenue of $7.7bn in the 12 months to March 2015 and growing more than 16 percent in the first quarter of this year.

The company plans to use the new acquisition to help clients rapidly integrate cloud-based applications and on-premise systems within an OpenStack-managed cloud.

Blue Box also brings a remotely managed OpenStack that provides customers with a local cloud, better visibility and control, and tighter security.

“IBM is dedicated to helping our clients migrate to the cloud in an open, secure, data rich environment that meets their current and future business needs,” said IBM general manager of cloud services Jim Comfort.

“The acquisition of Blue Box accelerates IBM’s open cloud strategy, making it easier for our clients to move data and applications across clouds and adopt hybrid cloud environments.”

Blue Box will offer customers a more cohesive, consistent and simplified experience, while at the same time integrating with existing IBM packages like the Bluemix digital innovation platform. The firm also offers a single unified control panel for customer operations.

“No brand is more respected in IT than IBM. Blue Box is building a similarly respected brand in OpenStack,” said Blue Box founder and CTO Jesse Proudman.

“Together, we will deliver the technology and products businesses need to give their application developers an agile, responsive infrastructure across public and private clouds.

“This acquisition signals the beginning of new OpenStack options delivered by IBM. Now is the time to arm customers with more efficient development, delivery and lower cost solutions than they’ve seen thus far in the market.”

IBM has confirmed that it plans to help Blue Box customers to grow their technology portfolio, while taking advantage of the broader IBM product set.

Source
