
Stagefright 2.0 Exploits Android Vulnerabilities

October 13, 2015
Filed under Computing


Newly found vulnerabilities in the way Android handles media files can allow attackers to compromise devices by tricking users into visiting maliciously crafted Web pages.

The vulnerabilities can lead to remote code execution on almost all devices running Android, from version 1.0 of the OS, released in 2008, to the latest, 5.1.1, researchers from mobile security firm Zimperium said in a report published Thursday.

The flaws are in the way Android processes the metadata of MP3 audio files and MP4 video files, and they can be exploited when the Android system or another app that relies on Android’s media libraries previews such files.
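The underlying bug class is typical of native media parsers: a length field read from attacker-controlled metadata is trusted when copying bytes. The C sketch below is purely illustrative of that pattern and its fix; it is hypothetical code, not the actual Android source.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical parser fragment (not Android's code): 'len' is read
     * straight out of the file's metadata, so an attacker controls it. */
    int parse_tag_unsafe(const uint8_t *data, size_t data_len) {
        uint32_t len;
        memcpy(&len, data, 4);      /* attacker-supplied length field */
        uint8_t buf[128];
        memcpy(buf, data + 4, len); /* overflows buf whenever len > 128 */
        (void)data_len;             /* bug: available size never checked */
        return 0;
    }

    /* Hardened variant: validate the length against both the destination
     * buffer and the bytes actually present in the file. */
    int parse_tag_safe(const uint8_t *data, size_t data_len) {
        if (data_len < 4) return -1;
        uint32_t len;
        memcpy(&len, data, 4);
        if (len > 128 || (size_t)len > data_len - 4) return -1;
        uint8_t buf[128];
        memcpy(buf, data + 4, len);
        return 0;
    }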

The Zimperium researchers found similar multimedia processing flaws earlier this year in an Android library called Stagefright that could have been exploited by simply sending Android devices a maliciously crafted MMS message.

Those flaws triggered a coordinated patching effort from device manufacturers that Android’s lead security engineer, Adrian Ludwig, called the “single largest unified software update in the world.” It also contributed to Google, Samsung and LG committing to monthly security updates going forward.

One of the flaws newly discovered by Zimperium is located in a core Android library called libutils and affects almost all devices running Android versions older than 5.0 (Lollipop). The vulnerability can also be exploited in Android Lollipop (5.0 – 5.1.1) by combining it with another bug found in the Stagefright library.

The Zimperium researchers refer to the new attack as Stagefright 2.0 and believe that it affects more than 1 billion devices.

Since the MMS attack vector was closed in newer versions of Google Hangouts and other messaging apps after the original Stagefright flaws were found, the most straightforward exploitation method for the latest vulnerabilities is through Web browsers, the Zimperium researchers said.

Zimperium reported the flaws to Google on Aug. 15 and plans to release proof-of-concept exploit code once a fix is released.

That fix will come on Oct. 5 as part of the new scheduled monthly Android security update, a Google representative said.

Source: http://www.thegurureview.net/mobile-category/stagefright-2-0-exploits-android-vulnerabilities.html

Microsoft Drops Ad Business

July 13, 2015
Filed under Around The Net


Microsoft Corp said that it will hand over its display advertising business to AOL Inc and sell some map-generating technology to ride-hailing app company Uber, as it scales back on unprofitable operations.

The moves mean Microsoft will focus on its growing search advertising business based on its Bing search engine, and on displaying maps on its Windows devices rather than generating the maps itself.

Microsoft, which employs hundreds of people in its display ad business around the world, said those employees would be offered the chance to transfer to AOL and that it was not making any layoffs.

The world’s largest software company no longer breaks out results for its online operations, chiefly its MSN web portal and Bing, but they have lost more than $10 billion over the past five years. Chief Executive Satya Nadella has said Bing will turn a profit next fiscal year.

“Today’s news is evidence of Microsoft’s increased focus on our strengths: in this case, search and search advertising and building great content and consumer services,” Microsoft said in a statement.

Under a 10-year deal struck with AOL, now a unit of Verizon Communications Inc, AOL will sell display ads on MSN, Outlook.com, Xbox, Skype and in some apps in major countries. As part of the deal, Bing will become the search engine behind web searches on AOL starting next year.

Microsoft also struck a multi-year extension to its existing deal with AppNexus, which provides the tech platform for buyers to purchase online ads.

Microsoft and Uber did not disclose financial terms of their deal, under which Uber will take over the part of Microsoft’s mapping unit that works on imagery acquisition and map data processing. Uber will offer jobs to the 100 or so Microsoft employees working in that area, according to a source familiar with the deal.

Source

Will Arm/Atom CPUs Replace Xeon/Opteron?

June 7, 2013
Filed under Computing


Analysts are saying that smartphone chips could one day replace the Xeon and Opteron processors used in most of the world’s top supercomputers. In a paper titled “Are mobile processors ready for HPC?” researchers at the Barcelona Supercomputing Center wrote that less expensive chips have a history of bumping out faster but higher-priced processors in high-performance systems.

In 1993, the list of the world’s fastest supercomputers, known as the Top500, was dominated by systems based on vector processors. They were nudged out by less expensive RISC processors. RISC chips were eventually replaced by cheaper commodity processors like Intel’s Xeon and AMD’s Opteron, and now mobile chips are likely to take over.

The transitions had a common thread, the researchers wrote: commodity microprocessors killed the vector supercomputers because they were “significantly cheaper and greener.” At the moment, low-power chips based on ARM designs fit the bill, but Intel is likely to catch up, so the shift is unlikely to mean the death of x86.

The report compared Samsung’s 1.7GHz dual-core Exynos 5250, Nvidia’s 1.3GHz quad-core Tegra 3 and Intel’s 2.4GHz quad-core Core i7-2760QM, which is a laptop chip rather than a server chip. The researchers found that the ARM processors were more power-efficient on single-core performance than the Intel processor, and that ARM chips can scale effectively in HPC environments. On a multi-core basis, the ARM chips were as efficient as Intel x86 chips at the same clock frequency, but Intel was more efficient at the highest performance levels, the researchers said.
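As a rough sketch of the metric behind such comparisons, power efficiency is sustained performance divided by power draw. The figures in this C snippet are placeholders, not the paper’s measurements:

    #include <stdio.h>

    /* Illustrative only: the GFLOPS and watt figures below are made-up
     * placeholders. Efficiency = sustained performance / power draw. */
    int main(void) {
        struct { const char *chip; double gflops; double watts; } c[] = {
            { "ARM dual-core (placeholder)",  3.0,  4.0 },
            { "x86 quad-core (placeholder)", 50.0, 45.0 },
        };
        for (int i = 0; i < 2; i++)
            printf("%-30s %6.2f GFLOPS/W\n", c[i].chip, c[i].gflops / c[i].watts);
        return 0;
    }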

Source

Do Supercomputers Lead To Downtime?

December 3, 2012
Filed under Computing


As supercomputers grow more powerful, they’ll also become more susceptible to failure, thanks to the increased amount of built-in componentry. A few researchers at the recent SC12 conference, held last week in Salt Lake City, offered possible solutions to this growing problem.

Today’s high-performance computing (HPC) systems can have 100,000 nodes or more — with each node built from multiple components of memory, processors, buses and other circuitry. Statistically speaking, all these components will fail at some point, and they halt operations when they do so, said David Fiala, a Ph.D. student at North Carolina State University, during a talk at SC12.

The problem is not a new one, of course. When Lawrence Livermore National Laboratory’s 600-node ASCI (Accelerated Strategic Computing Initiative) White supercomputer went online in 2001, it had a mean time between failures (MTBF) of only five hours, thanks in part to component failures. Later tuning efforts had improved ASCI White’s MTBF to 55 hours, Fiala said.

But as the number of supercomputer nodes grows, so will the problem. “Something has to be done about this. It will get worse as we move to exascale,” Fiala said, referring to how supercomputers of the next decade are expected to have 10 times the computational power that today’s models do.

Today’s techniques for dealing with system failure may not scale very well, Fiala said. He cited checkpointing, in which a running program is temporarily halted and its state is saved to disk. Should the program then crash, the system is able to restart the job from the last checkpoint.
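At the application level, checkpointing amounts to periodically serializing program state to disk and reloading it after a crash. The C sketch below is a minimal illustration; the file name, state layout and interval are assumptions, not details from Fiala’s talk.

    #include <stdio.h>

    /* Minimal checkpoint/restart sketch: dump the program state to disk
     * periodically, and resume from the last dump when restarted.
     * "state.ckpt" and the interval are arbitrary illustrative choices. */
    typedef struct { long iteration; double result; } State;

    static int save_checkpoint(const State *s) {
        FILE *f = fopen("state.ckpt", "wb");
        if (!f) return -1;
        size_t n = fwrite(s, sizeof *s, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }

    static int load_checkpoint(State *s) {
        FILE *f = fopen("state.ckpt", "rb");
        if (!f) return -1;                 /* no checkpoint: start fresh */
        size_t n = fread(s, sizeof *s, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }

    int main(void) {
        State s = { 0, 0.0 };
        load_checkpoint(&s);               /* resume if a checkpoint exists */
        for (; s.iteration < 1000000; s.iteration++) {
            s.result += 1.0 / (s.iteration + 1);  /* stand-in for real work */
            if (s.iteration % 100000 == 0)
                save_checkpoint(&s);       /* the halt-and-save step described above */
        }
        printf("done: %f\n", s.result);
        return 0;
    }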

The problem with checkpointing, according to Fiala, is that as the number of nodes grows, the amount of system overhead needed to do checkpointing grows as well — and grows at an exponential rate. On a 100,000-node supercomputer, for example, only about 35 percent of the activity will be involved in conducting work. The rest will be taken up by checkpointing and — should a system fail — recovery operations, Fiala estimated.
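One standard back-of-the-envelope for this trade-off, which comes from the checkpointing literature rather than from the talk, is Young’s approximation: the near-optimal checkpoint interval is sqrt(2 × checkpoint cost × MTBF), so as MTBF shrinks the system checkpoints more often and loses more time to overhead:

    #include <math.h>
    #include <stdio.h>

    /* Young's approximation (a classic rule of thumb, not from Fiala's
     * talk): interval ~ sqrt(2 * checkpoint_cost * MTBF). The two input
     * values below are assumptions chosen only for illustration. */
    int main(void) {
        double ckpt_cost = 600.0;        /* seconds to write one checkpoint */
        double mtbf      = 5.0 * 3600.0; /* system MTBF in seconds */
        double interval  = sqrt(2.0 * ckpt_cost * mtbf);
        double overhead  = ckpt_cost / interval; /* fraction lost to checkpointing */
        printf("interval %.0f s, overhead ~%.0f%%\n", interval, overhead * 100.0);
        return 0;
    }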

Because of all the additional hardware needed for exascale systems, which could be built from a million or more components, system reliability will have to be improved by 100 times in order to keep to the same MTBF that today’s supercomputers enjoy, Fiala said.
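The arithmetic behind a figure like that: if component failures are independent, system MTBF is roughly the component MTBF divided by the component count, so 100 times more components means a 100-times-worse MTBF unless each component becomes 100 times more reliable. The numbers in this sketch are assumptions chosen to mirror that ratio:

    #include <stdio.h>

    /* With independent failures, system MTBF ~ component MTBF / count.
     * The component counts and per-component MTBF are assumptions picked
     * to illustrate the 100x claim in the text. */
    int main(void) {
        double comp_mtbf_hours = 5.0e6;            /* per-component MTBF (assumed) */
        double today = comp_mtbf_hours / 1.0e4;    /* ~10,000 components */
        double exa   = comp_mtbf_hours / 1.0e6;    /* ~1 million components */
        printf("today: %.1f h, exascale: %.2f h (%.0fx worse)\n",
               today, exa, today / exa);
        return 0;
    }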

Fiala presented technology that he and his fellow researchers developed to help improve reliability. It addresses the problem of silent data corruption, in which systems make undetected errors while writing data to disk.

Source…