Syber Group

Stagefright 2.0 Exploits Android Vulnerabilities

October 13, 2015
Filed under Computing

Newly found vulnerabilities in the way Android handles media files can allow attackers to compromise devices by tricking users into visiting maliciously crafted Web pages.

The vulnerabilities can lead to remote code execution on almost all devices that run Android, from version 1.0 of the OS, released in 2008, through the latest 5.1.1, researchers from mobile security firm Zimperium said in a report published Thursday.

The flaws are in the way Android processes the metadata of MP3 audio files and MP4 video files, and they can be exploited when the Android system or another app that relies on Android’s media libraries previews such files.

The Zimperium researchers found similar multimedia processing flaws earlier this year in an Android library called Stagefright that could have been exploited by simply sending Android devices a maliciously crafted MMS message.

Those flaws triggered a coordinated patching effort from device manufacturers that Android’s lead security engineer, Adrian Ludwig, called the “single largest unified software update in the world.” It also contributed to Google, Samsung and LG committing to monthly security updates going forward.

One of the flaws newly discovered by Zimperium is located in a core Android library called libutils and affects almost all devices running Android versions older than 5.0 (Lollipop). The vulnerability can also be exploited in Android Lollipop (5.0 – 5.1.1) by combining it with another bug found in the Stagefright library.

The Zimperium researchers refer to the new attack as Stagefright 2.0 and believe that it affects more than 1 billion devices.

Since the MMS attack vector was closed in newer versions of Google Hangouts and other messaging apps after the original Stagefright flaws were found, the most straightforward exploitation method for the latest vulnerabilities is through Web browsers, the Zimperium researchers said.

Zimperium reported the flaws to Google on Aug. 15 and plans to release proof-of-concept exploit code once a fix is released.

That fix will come on Oct. 5 as part of the new scheduled monthly Android security update, a Google representative said.

Source: http://www.thegurureview.net/mobile-category/stagefright-2-0-exploits-android-vulnerabilities.html

Adobe Eases Privacy Concerns

November 14, 2014
Filed under Around The Net

Tests on the latest version of Adobe Systems’ e-reader software reveal that the company is now collecting less data following a privacy-related row last month, according to the Electronic Frontier Foundation.

Digital Editions version 4.0.1 appears to only collect data on e-books that have DRM (Digital Rights Management), wrote Cooper Quintin, a staff technologist with the EFF. DRM places restrictions on how content can be used with the intent of thwarting piracy.

Adobe was criticized in early October after it was discovered Digital Editions collected metadata about e-books on a device, even if the e-books did not have DRM. Those logs were also sent to Adobe in plain text.

Since that data was not encrypted, critics including the EFF contended it posed major privacy risks for users. For example, an interloper on the same public Wi-Fi network could intercept the plain text content sent from a user’s device.

Adobe said on Oct. 23 that it had fixed the issues in version 4.0.1, which would no longer collect data on e-books without DRM and would encrypt the data that is transmitted back to the company.

Quintin wrote the EFF’s latest test showed the “only time we saw data going back to an Adobe server was when an e-book with DRM was opened for the first time. This data is most likely being sent back for DRM verification purposes, and it is being sent over HTTPS.”

If an e-book has DRM, Adobe may record how long a person reads it or the percentage of the content that is read, which is used for “metered” pricing models.

Other technical metrics are also collected, such as the IP address of the device downloading a book, a unique ID assigned to the specific applications being used at the time and a unique ID for the device, according to Adobe.

Source

Was Dropbox Really Hacked?

January 24, 2014
Filed under Around The Net

Dropbox suffered a major outage over the weekend.

In one of the more bizarre recent incidents, after the service went down on Friday evening, a group of hackers claimed to have infiltrated the service and compromised its servers.

However, on the Dropbox blog, the company’s VP of engineering, Ardita Ardwarl, told users that hackers were not to blame.

Ardwarl said, “On Friday evening we began a routine server upgrade. Unfortunately, a bug installed this upgrade on several active servers, which brought down the entire service. Your files were always safe, and despite some reports, no hacking or DDoS attack was involved.”

The fault occurred when a bug in an upgrade script caused an operating system upgrade to be triggered on several live machines, rendering them inoperative. Although the fault was rectified in three hours, the knock-on effects led to problems that lasted through the weekend for some users.
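
The lesson generalizes: upgrade automation should refuse to act on machines that are still serving traffic. Here is a minimal, hypothetical sketch of such a guard in Python; this is illustrative only, not Dropbox’s actual tooling, and the upgrade command is a placeholder:

import socket
import subprocess
import sys

def is_serving_traffic(port: int = 443) -> bool:
    # Treat the host as active if anything is listening on the service port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex(('127.0.0.1', port)) == 0

if is_serving_traffic():
    # Refuse to upgrade a live machine; this is the kind of check whose
    # absence lets a buggy script take active servers down.
    sys.exit('refusing to upgrade: host is still serving traffic')

subprocess.run(['apply-os-upgrade'], check=True)   # placeholder command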

Dropbox has assured users that there are no further problems and that all users should now be back online. It said that at no point were files in danger, adding that the affected machines didn’t host any user data. In other words, the “hackers” weren’t hackers at all, but attention-seeking trolls.

Dropbox claims to have over 200 million users, many of whom it has acquired through strategic partnerships with device manufacturers offering free storage with purchases.

The company is looking ahead to an initial public offering (IPO) on the stock market, so the timing of such a major outage could not be worse. Dropbox, which counts Bono and The Edge from U2 among its investors, has recently enhanced its business offering to appeal to enterprise clients, and such a loss of uptime could affect its ability to attract customers.

Source

Amazon Debuts Cloud-based Transcoding Service

October 28, 2013
Filed under Computing

Amazon Web Services has rolled out the option to use its Elastic Transcoder for audio-only conversions.

Amazon Elastic Transcoder was developed to offer an easy and low-cost way to convert media files from their source format into versions that will play on devices like smartphones, tablets and PCs.

The new feature lets anyone use Amazon Elastic Transcoder to convert audio-only content like music or podcasts from one format to another. Users can also strip the audio tracks out of video files to create audio-only streams. This can be used, for example, to create podcasts from video originals that are compatible with iOS applications requiring an audio-only HTTP Live Streaming (HLS) file set, Amazon said.

The output from Elastic Transcoder is two-channel AAC, MP3 or Vorbis. Metadata like track name, artist, genre and album art is included in the output file and users can also specify replacement or additional album art.
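
As a concrete illustration, here is a minimal sketch of submitting an audio-only job with the AWS SDK for Python (boto3); the pipeline ID, S3 keys and preset ID are placeholders, not values from Amazon’s documentation:

import boto3

# Elastic Transcoder client; the region must match where the pipeline lives.
transcoder = boto3.client('elastictranscoder', region_name='us-east-1')

# Submit a job that reads a video from the pipeline's input bucket and
# writes an audio-only MP3 rendition to its output bucket.
response = transcoder.create_job(
    PipelineId='0000000000000-000000',        # placeholder pipeline ID
    Input={'Key': 'originals/episode-01.mp4'},
    Output={
        'Key': 'podcasts/episode-01.mp3',
        # Placeholder: substitute one of Amazon's system presets for
        # MP3, AAC or Vorbis, or a custom preset created for the account.
        'PresetId': '1351620000001-300000',
    },
)
print(response['Job']['Id'], response['Job']['Status'])

The job runs asynchronously; the returned status typically starts as “Submitted” and can be polled with read_job.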

Users of the service pay for the length of their converted content. For audio-only transcoding, prices start at $0.0045 per minute. That compares to the video version, which costs from $0.015 per minute for standard definition content and $0.03 per minute for high-definition clips, according to Amazon’s website.

For users who want to try out the service, the AWS Free Tier offers up to 20 minutes of free audio output per month. The service was announced for video in January and is still tagged as a beta.

Source

Cloud Storage Specs Approved

October 29, 2012
Filed under Computing

The International Organization for Standardization (ISO) has ratified the Cloud Data Management Interface (CDMI), a set of protocols defining how businesses can safely transport data between private and public clouds.

The Storage Networking Industry Association’s (SNIA) Cloud Storage Initiative Group submitted the standard for approval by the ISO last spring. CDMI is the first industry-developed open standard specifically for data storage as a service.

“There is strong demand for cloud computing standards and to see one of our most active consortia partners contribute this specification in such a timely fashion is very gratifying,” Karen Higginbottom, chairwoman of the ISO committee, said in a statement. “The standard will improve cloud interoperability.”

The CDMI specification defines an interface for accessing data in the cloud that preserves metadata about the information an enterprise stores there. Because the metadata stays associated with the information, companies can retrieve data no matter where it’s stored.

“With the metadata piece, it’s also complementary with existing interfaces. The standard can be used with Amazon, for file or block data and it can use any number of storage protocols, such as NFS, CIFS or iSCSI,” said SNIA Chairman Wayne Adams.

Based on a RESTful HTTP protocol, CDMI provides both a data path and control path for cloud storage and standardizes a common interoperable format for securely moving data and its associated data requirements from cloud to cloud. The standard applies to public, private and hybrid deployment models for storage clouds.
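
To make the RESTful model concrete, here is a minimal sketch of storing and reading back a CDMI data object using Python’s requests library; the endpoint, credentials and metadata values are hypothetical:

import requests

BASE = 'https://cloud.example.com/cdmi'    # hypothetical CDMI endpoint
HEADERS = {
    'X-CDMI-Specification-Version': '1.0.2',
    'Content-Type': 'application/cdmi-object',
    'Accept': 'application/cdmi-object',
}
AUTH = ('user', 'secret')                  # placeholder credentials

# Create a data object; CDMI carries user metadata alongside the value,
# which is what allows the data to be relocated between clouds later.
resp = requests.put(
    BASE + '/reports/q3.txt',
    headers=HEADERS,
    json={
        'mimetype': 'text/plain',
        'metadata': {'department': 'finance', 'retention': '7y'},
        'value': 'Q3 revenue summary...',
    },
    auth=AUTH,
)
resp.raise_for_status()

# Read the object back; the response includes both the value and metadata.
obj = requests.get(BASE + '/reports/q3.txt', headers=HEADERS, auth=AUTH).json()
print(obj['metadata'])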

Source…

The First PC Had a Birthday

August 20, 2011
Filed under Computing

IBM introduced its IBM PC model 5150 on August 12th, 1981, 30 years ago today.

The first IBM PC wasn’t much by today’s standards. It had an Intel 8088 processor that ran at the blazing speed of 4.77MHz. The base memory configuration was all of 16kB, expandable all the way up to 256kB, and it had two 5-1/4in, 160kB capacity floppy disk drives but no hard drive.

A keyboard and 12in monochrome monitor were included, with a colour monitor optional. The 5150 ran IBM BASIC in ROM and came with a PC-DOS boot diskette put out by a previously unknown software startup from the Seattle area named Microsoft.

IBM priced its initial IBM PC at a whopping $1,565, and that was a relatively steep price in those days, worth about $5,000 today, give or take a few hundred dollars. In the US in 1981 that was about the cost of a decent used car.

The IBM PC was meant to be sold to the general public, but because IBM didn’t have any retail stores, the company sold it through the stores of US catalogue retailer Sears & Roebuck.

Subsequently IBM released follow-on models through 1986, including the PC/XT, the first with an internal hard drive; the PC/AT, with an 80286 chip running at 6MHz and later 8MHz; the 6MHz XT/286, whose zero wait-state memory made it actually faster than the 8MHz PC/AT; the (not very) Portable and Convertible models; the ill-fated XT/370, AT/370, 3270 PC and 3270/AT mainframe terminal emulators; and the unsuccessful PCjr.

Read More….

IBM Debuts Fast Storage System

July 30, 2011
Filed under Computing

With an eye toward helping tomorrow’s data-intensive organizations, IBM researchers have developed a super-fast storage system capable of scanning 10 billion files in 43 minutes.

This system easily bested their previous system, demonstrated at Supercomputing 2007, which scanned 1 billion files in three hours.
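
A quick back-of-the-envelope comparison of the two demonstrations, in Python, using only the figures quoted above:

# Scan rates implied by the two demonstrations.
files_2011, secs_2011 = 10_000_000_000, 43 * 60    # 10 billion files in 43 minutes
files_2007, secs_2007 = 1_000_000_000, 3 * 3600    # 1 billion files in 3 hours

rate_2011 = files_2011 / secs_2011    # ~3.9 million files per second
rate_2007 = files_2007 / secs_2007    # ~93,000 files per second
print(round(rate_2011), round(rate_2007), round(rate_2011 / rate_2007))
# roughly a 42x improvement in scan rate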

Key to the increased performance was the use of speedy flash memory to store the metadata that the storage system uses to locate requested information. Traditionally, metadata repositories reside on disk, access to which slows operations.

“If we have that data on very fast storage, then we can do those operations much more quickly,” said Bruce Hillsberg, director of storage systems at IBM Research Almaden, where the cluster was built. “Being able to use solid-state storage for metadata operations really allows us to do some of these management tasks more quickly than we could ever do if it was all on disk.”

IBM foresees that its customers will be grappling with a lot more information in the years to come.

“As customers have to store and process large amounts of data for long periods of time, they will need efficient ways of managing that data,” Hillsberg said.

For the new demonstration, IBM built a cluster of 10 eight-core servers equipped with a total of 6.8 terabytes of solid-state memory. IBM used four 3205 solid-state storage systems from Violin Memory. The resulting system was able to read files at a rate of almost 5 GB/s (gigabytes per second).

Read More….