
Is Microsoft A Risk?

February 29, 2016
Filed under Security

Hewlett Packard Enterprise (HPE) has cast a harsh light on what it believes to be the biggest risks facing enterprises, and included on that list is Microsoft.

We ain’t surprised, but it is quite a shocking and naked fact when you consider it. The naming and resulting shaming happens in the HPE Cyber Risk Report 2016, which HPE said “identifies the top security threats plaguing enterprises”.

Enterprises, it seems, have myriad problems, of which Microsoft is just one.

“In 2015, we saw attackers infiltrate networks at an alarming rate, leading to some of the largest data breaches to date, but now is not the time to take the foot off the gas and put the enterprise on lockdown,” said Sue Barsamian, senior vice president and general manager for security products at HPE.

“We must learn from these incidents, understand and monitor the risk environment, and build security into the fabric of the organisation to better mitigate known and unknown threats, which will enable companies to fearlessly innovate and accelerate business growth.”

Microsoft earned its place in the enterprise nightmare probably because of its ubiquity. Applications, malware and vulnerabilities are a real problem, and it is Windows that provides the platform for this havoc.

“Software vulnerability exploitation continues to be a primary vector for attack, with mobile exploits gaining traction. Similar to 2014, the top 10 vulnerabilities exploited in 2015 were more than one year old, with 68 percent being three years old or more,” explained the report.

“In 2015, Microsoft Windows represented the most targeted software platform, with 42 percent of the top 20 discovered exploits directed at Microsoft platforms and applications.”

It is not all bad news for Redmond, as the Google-operated Android is also put forward as a professional pain in the butt. So is iOS, before Apple users get any ideas.

“Malware has evolved from being simply disruptive to a revenue-generating activity for attackers. While the overall number of newly discovered malware samples declined 3.6 percent year over year, the attack targets shifted notably in line with evolving enterprise trends and focused heavily on monetisation,” added the firm.

“As the number of connected mobile devices expands, malware is diversifying to target the most popular mobile operating platforms. The number of Android threats, malware and potentially unwanted applications has grown to more than 10,000 new threats discovered daily, reaching a total year-over-year increase of 153 percent.

“Apple iOS represented the greatest growth rate with a malware sample increase of more than 230 percent.”

Courtesy-TheInq

Microsoft Goes Deep With Underwater Data Center

February 12, 2016
Filed under Computing

Technology giants are finding some of the strangest places for data centers these days.

Facebook, for example, built a data center in Lulea in Sweden because the icy cold temperatures there would help cut the energy required for cooling. A proposed Facebook data center in Clonee, Ireland, will rely heavily on locally available wind energy. Google’s data center in Hamina in Finland uses sea water from the Bay of Finland for cooling.

Now, Microsoft is looking at locating data centers under the sea.

The company is testing underwater data centers with an eye to reducing data latency for the many users who live close to the sea, and also to enabling rapid deployment of a data center.

Microsoft designed, built and deployed its own subsea data center in the ocean in the space of about a year. It started working on the project in late 2014, a year after Microsoft employee Sean James, who served on a U.S. Navy submarine, submitted a paper on the concept.

A prototype vessel, named the Leona Philpot after an Xbox game character, operated on the seafloor about 1 kilometer from the Pacific coast of the U.S. from August to November 2015, according to a Microsoft page on the project.

The subsea data center experiment, called Project Natick after a town in Massachusetts, is in the research stage, and Microsoft warns that it is “still early days” when it comes to evaluating whether the concept could be adopted by the company and other cloud service providers.

“Project Natick reflects Microsoft’s ongoing quest for cloud datacenter solutions that offer rapid provisioning, lower costs, high responsiveness, and are more environmentally sustainable,” the company said.

Undersea data centers appeal because they could serve the 50 percent of people who live within 200 kilometers of the ocean. Microsoft said in an FAQ that deployment in deepwater offers “ready access to cooling, renewable power sources, and a controlled environment.” Moreover, a data center can be deployed from start to finish in 90 days.

Courtesy- http://www.thegurureview.net/aroundnet-category/microsoft-goes-deep-with-underwater-data-center.html

February 11, 2016
Filed under Around The Net

In a sweeping change of course directed at a tightly controlled television industry, cable and satellite operators in the United States would be required to let their customers freely choose which set-top boxes they use, under a proposal announced by the Federal Communications Commission on Wednesday.

The move is expected to have wide-ranging implications for large technology companies looking to get their brand names into every consumer’s living room. Under the new rules, for example, Google, Amazon and Apple would be allowed to create living room devices that blend Internet and cable programming in a way the television industry has until now resisted. Next-generation media players, including the Chromecast, Fire TV and Apple TV, could add coaxial inputs and integrate “smart access card” equivalents directly into device firmware, with a simple subscription activation process.

As the Wall Street Journal notes, Senators Edward Markey of Massachusetts and Richard Blumenthal of Connecticut investigated the cable set-top box market last summer and found that the cable industry generates roughly $19.1 billion in annual revenue from cable box rentals alone.

Meanwhile, the cost of cable set-top boxes has risen 185 percent since 1995, while the cost of PCs, televisions and smartphones has dropped by 90 percent. FCC Chairman Tom Wheeler admits that this imbalance doesn’t need to persist any longer.

The FCC says its focus will be primarily on improving the day-to-day television experience. In the past, the burdensome requirements of long-term contracts tethered to clunky, unsightly cable and satellite boxes have been a major source of customer complaints.

Wheeler has also said that access to specific video content shouldn’t be frustrating to the average consumer in an age where we are constantly surrounded by a breadth of information to sift through. “Improved search functions [can] lead consumers to a variety of video content that is buried behind guides or available on video services you can’t access with your set-top box today,” Wheeler says.

The FCC is expected to vote on the proposal on Thursday, February 18th.

Courtesy-Fud

February 10, 2016
Filed under Computing

Slapdash developers have been advised not to use the open source JSPatch method of updating their wares, because it is as vulnerable as a soft-boiled egg.

It’s FireEye that is giving JSPatch the stink eye and providing the warning that it has rendered over 1,000 applications open to copy and paste theft of photos and other information. And it doesn’t end there.

FireEye’s report said that Remote Hot Patching may sound like a good idea at the time, but it really isn’t. The technique is so widely used that it has opened up a security hole spanning 1,220 iOS applications. A better option, according to the security firm, is to stick with the Apple method, which should provide adequate and timely protection.

“Within the realm of Apple-provided technologies, the way to remediate this situation is to rebuild the application with updated code to fix the bug and submit the newly built app to the App Store for approval,” said FireEye.

“While the review process for updated apps often takes less time than the initial submission review, the process can still be time-consuming and unpredictable, and can cause loss of business if app fixes are not delivered in a timely and controlled manner.

“However, if the original app is embedded with the JSPatch engine, its behaviour can be changed according to the JavaScript code loaded at runtime. This JavaScript file is remotely controlled by the app developer. It is delivered to the app through network communication.”
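To make that mechanism concrete, here is a minimal sketch of the sort of JavaScript payload a JSPatch-enabled app might pull down at runtime. The class and method names are invented for illustration; require(), defineClass() and the ORIG prefix are JSPatch conventions, and the declare lines exist only so the TypeScript sketch stands alone.

    // Hypothetical JSPatch-style payload (a sketch, not FireEye's example).
    // JSPatch injects require() and defineClass() as globals; they are
    // declared here only so this file is self-contained.
    declare function require(classNames: string): void;
    declare function defineClass(
        name: string,
        instanceMethods: Record<string, (...args: unknown[]) => unknown>
    ): void;

    // Make the native classes the patch touches visible to the script.
    require('UIImage, NSData');

    // Redefine a method on an existing Objective-C class at runtime.
    // A legitimate hot fix would repair a bug here; an attacker who can
    // tamper with this file gets exactly the same power, running with the
    // app's own entitlements.
    defineClass('ProfileViewController', {
        viewDidLoad: function () {
            // JSPatch exposes the original implementation via an ORIG-prefixed
            // call (self.ORIGviewDidLoad()); injected behaviour goes here.
        },
    });

The security of the whole app, in other words, rests on the integrity of that one remotely fetched file.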

Let’s not make this all JSPatch’s problem, though, because presumably it is the developers who are being slapdash.

FireEye spoke up for the open source security gear while looking down its nose at hackers. “JSPatch is a boon to iOS developers. In the right hands, it can be used to quickly and effectively deploy patches and code updates. But in a non-utopian world like ours, we need to assume that bad actors will leverage this technology for unintended purposes,” the firm said.

“Specifically, if an attacker is able to tamper with the content of a JavaScript file that is eventually loaded by the app, a range of attacks can be successfully performed against an App Store application.”

Courtesy-TheInq

Facebook Exploring A Dedicated Video Service

February 9, 2016
Filed under Around The Net

Facebook is contemplating the development of a dedicated service or page where users will be able to watch videos without being bothered by other content.

The social network continues to see surging interest in video. During one day last quarter, its users watched a combined 100 million hours of video. Roughly 500 million users watch at least some video each day.

That’s a lot of video and a lot of viewers, and Facebook wants to capitalize on it.

“We are exploring a dedicated place on Facebook for when they just want to watch videos,” CEO Mark Zuckerberg said Wednesday during a conference call to discuss Facebook’s quarterly financial results.

But he was tight-lipped on how the video might actually be presented.

Asked if a stand-alone video app is in the cards, he mentioned the success of Messenger and a Facebook app for managing Pages. “I do think there are additional opportunities for this and we’ll continue looking at them,” he said.

Facebook wants to encourage more video viewing because it keeps users on the site longer, helping it to sell more ads.

“Marketers also really love video and it’s a compelling way to reach consumers,” COO Sheryl Sandberg said during the call.

Zuckerberg has been watching the growth of video for some time. At a town hall meeting in November 2014, he predicted, “In five years, most of [Facebook] will be video.”

And it’s likely that most of that video will be consumed over mobile networks.

Among Facebook’s heaviest users — the billion people who access it on a daily basis — 90 percent use a mobile device, either solely or in addition to their PC.

Its financial results for the fourth quarter were strong. Revenue was $5.8 billion, up 52 percent from the same period in 2014, while net profit more than doubled to $1.6 billion.

Courtesy- http://www.thegurureview.net/aroundnet-category/facebook-exploring-a-dedicated-video-service.html

February 8, 2016
Filed under Computing

MediaTek appears to be ready to give three more entry-level processors LTE Cat 6 support, so they can manage a 300 Mbit download and a 50 Mbit upload. We already knew that the high-end deca-core X20 and the mainstream eight-core P10 were getting LTE Cat 6.
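Those Cat 6 numbers are not arbitrary: the headline downlink rate comes from aggregating two 20MHz carriers, each good for roughly the Cat 4 rate. A rough sanity check follows; the 150 Mbit per-carrier figure is the standard LTE Cat 4 rate, not something from MediaTek.

    // Rough sanity check of the LTE Cat 6 figures quoted above (a sketch).
    const cat4PerCarrierMbit = 150;  // one 20MHz LTE carrier at Cat 4 rates
    const aggregatedCarriers = 2;    // Cat 6 aggregates 2 x 20MHz carriers
    const cat6DownlinkMbit = cat4PerCarrierMbit * aggregatedCarriers;
    console.log(`Cat 6 downlink: ~${cat6DownlinkMbit} Mbit`);  // ~300 Mbit
    console.log('Cat 6 uplink:   ~50 Mbit (single uplink carrier)');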

According to the Gizchina website, the three new SoCs carry the catchy titles of MT6739, MT6750 and MT6750T.

The MT6739 will probably replace the MT6735. Both have quad A53 cores, but the MT6739 gets a Cat 6 upgrade from Cat 4. The MT6739 supports clock speeds of up to 1.5GHz, 512KB of L2 cache, 1280×720 displays at 60fps, 1080p 30fps video decoding with H.264, and a 13-megapixel camera. That makes it an entry-level SoC for phones that might fit into the $100 price range.

The MT6750 and MT6750T look like twins; only the T version supports Full HD 1920×1080 displays. The MT6750 has eight cores, four A53 clocked at 1.5GHz and four A53 clocked at 1.0GHz, and is manufactured on TSMC’s new 28nm High Performance Mobile Computing process. This is the same manufacturing process MediaTek is using for the Helio P10 SoC. The new process allows lower leakage and better overall transistor performance at lower voltage.

The MT6750 SoC supports single-channel LPDDR3 at 666MHz and eMCP up to 4GB. It also supports eMMC 5.1, a 16-megapixel camera, and 1080p 30fps decoding of both H.264 and H.265. It comes with an upgraded ARM Mali T860 MP2 GPU at 350MHz and display support for 1280×720 (HD720) at 60fps. The biggest upgrade, then, is Cat 6, and it makes sense: most European and American networks now demand a Cat 6 or higher modem that supports carrier aggregation.

This new SoC looks like a slowed-down version of the Helio P10 and should be popular in entry-level Android phones.

Courtesy-Fud

February 5, 2016
Filed under Computing

ARM has announced a new mobile display processor, the Mali-DP650, which it said is designed to handle 4K content both on a device’s screen and on an external display.

While the new Mali chip can push enough pixels for the local display, it is more likely that ARM is interested in using the technology for streaming.

Many smartphones can record 4K video, which means a smartphone could be home to high-resolution content that can be streamed to a large, high-resolution screen.

It looks like the Mali-DP650 can juggle the device’s native resolution, the external display’s own resolution and variable refresh rates. At least that is what ARM says it can do.

The processor is naturally able to handle different resolutions, but it is optimized for “2.5K”, meaning WQXGA (2560×1600) on tablets and WQHD (2560×1440) on smartphones, as well as Full HD (1920×1080) for slightly lower-end devices.

Mark Dickinson, general manager of ARM’s media processing group, said: “The Mali-DP650 display processor will enable mobile screens with multiple composition layers, for graphics and video, at Full HD (1920×1080 pixels) resolutions and beyond while maintaining excellent picture quality and extending battery life.”

“Smartphones and tablets are increasingly becoming content passports, allowing people to securely download content once and carry it to view on whichever screen is most suitable. The ability to stream the best quality content from a mobile device to any screen is an important capability ARM Mali display technology delivers.”

ARM did not say when the Mali-DP650 will be in the shops or which chips will be the first to incorporate its split-display mode feature.

Courtesy-Fud

February 4, 2016
Filed under Computing

Samsung and TSMC are starting to slug it out over the introduction of Gen.3 14- and 16-nano FinFET system semiconductor processes, but the cost could mean that smartphone makers shy away from the technology in the short term.

It is starting to look as if the sales teams for the pair are each trying to show that their technology can deliver the biggest cuts in electricity consumption and production costs.

In its yearly results for 2015, TSMC announced that it is planning to enter mass production of chips on its 16-nano FinFET Compact (FFC) process sometime during the first quarter of this year. TSMC finished developing the 16-nano FFC process at the end of last year, and during the announcement talked up the fact that the process focuses on reducing production costs more than before while drawing less electricity.

TSMC is apparently ready for mass production on the 16-nano FFC process sometime during the first half of this year and has secured Huawei’s affiliate HiSilicon as its first customer.

HiSilicon’s Kirin 950, used in Huawei’s premium Mate 8 smartphone, is produced on TSMC’s 16-nano FF process, while Apple’s A9 chip, used in the iPhone 6S series, is mass-produced on the 16-nano FinFET Plus (FF+) process announced in early 2015. With the addition of FFC, TSMC now has three 16-nano processes in action.

Samsung is not far behind: it has mass-produced Gen.2 14-nano FinFET chips using a process called LPP (Low Power Plus), which has 15 per cent lower electricity consumption than the Gen.1 14-nano process, LPE (Low Power Early).

Samsung Electronics’ 14-nano LPP process has already been seen in the Exynos 8 Octa series used in the Galaxy S7 and in Qualcomm’s Snapdragon 820. But Samsung is also preparing a Gen.3 14-nano FinFET process.

Bae Young-chang, vice-president of the strategy marketing team in Samsung’s LSI business department, said the Gen.3 process will be similar to the Gen.2 14-nano process.

Both Samsung and TSMC might have a few problems. It is not clear what the yields of these processes are and this might increase the production costs.

Even if Samsung and TSMC finish developing their 10-nano processes at the end of this year and enter mass production next year, they will still have to upgrade their current 14- and 16-nano processes to make them more economical.

Even once the 10-nano process is commercialized, many fabless businesses will still use 14- and 16-nano processes because they are cheaper. While we might see a few flagship phones using the higher-priced chips, we may not see 10nm in the majority of phones for years.

 

Courtesy-Fud

February 3, 2016
Filed under Computing

Intel is reportedly going to release its first 10nm processor family in 2017, the first of three generations of processors expected to be fabbed on the 10nm process.

Guru 3D found a slide which suggests that Chipzilla will not be sticking to its traditional “tick-tock” model, under which each process shrink (tick) is followed by a new microarchitecture on the same process (tock). To be fair, Intel has already used the 14nm node for two generations – Broadwell and Skylake – and the Kaby Lake architecture, due later this year, will also use 14nm.

The slide tells us pretty much what we expected. The first processor family to be manufactured on a 10nm node will be Cannonlake, expected to launch in 2017. The following year, Intel will reportedly launch Icelake processors, again on the same 10nm node. Icelake will be succeeded in 2019 by Tigerlake, the third generation of Intel processors on the 10nm silicon fab process. The codename for Tigerlake’s successor is unknown; when it comes out in 2020, it will use 5nm.

 

Architecture         CPU series        Tick or Tock   Fab node   Year released
Presler/Cedar Mill   Pentium 4 / D     Tick           65 nm      2006
Conroe/Merom         Core 2 Duo/Quad   Tock           65 nm      2006
Penryn               Core 2 Duo/Quad   Tick           45 nm      2007
Nehalem              Core i            Tock           45 nm      2008
Westmere             Core i            Tick           32 nm      2010
Sandy Bridge         Core i 2xxx       Tock           32 nm      2011
Ivy Bridge           Core i 3xxx       Tick           22 nm      2012
Haswell              Core i 4xxx       Tock           22 nm      2013
Broadwell            Core i 5xxx       Tick           14 nm      2014 (2015 for desktops)
Skylake              Core i 6xxx       Tock           14 nm      2015
Kaby Lake            Core i 7xxx       Tock           14 nm      2016
Cannonlake           Core i 8xxx?      Tick           10 nm      2017
Icelake              Core i 8xxx?      Tock           10 nm      2018
Tigerlake            Core i 9xxx?      Tock           10 nm      2019
N/A                  N/A               Tick           5 nm       2020

Courtesy-Fud

February 2, 2016
Filed under Computing

Samsung has begun mass producing what it calls the industry’s first 4GB DRAM package based on the second-generation High Bandwidth Memory (HBM) 2 interface.

Samsung’s new HBM solution will be used in high-performance computing (HPC), advanced graphics, network systems and enterprise servers, and is said to offer DRAM performance that is “seven times faster than the current DRAM performance limit”.

This will apparently allow faster responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning.

“By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies,” said Samsung Electronics’ SVP of memory marketing, Sewon Chun.

“Also, in using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market.”

The 4GB HBM2 DRAM, which uses Samsung’s 20nm process technology and advanced HBM chip design, is specifically aimed at next-generation HPC systems and graphics cards.

“The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8Gb core dies on top. These are then vertically interconnected by TSV holes and microbumps,” explained Samsung.

“A single 8Gb HBM2 die contains over 5,000 TSV holes, which is more than 36 times that of an 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.”

Samsung’s new DRAM package features 256GBps of bandwidth, which is double that of an HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips.
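Those figures hang together on the back of an envelope. In the quick check below, the only number not taken from the article is the 32-bit interface width of a standard GDDR5 chip, which is assumed:

    // Back-of-the-envelope check of the capacity and bandwidth figures above.
    const coreDies = 4;              // 8Gb core dies stacked per HBM2 package
    const dieCapacityGbit = 8;
    const packageCapacityGB = (coreDies * dieCapacityGbit) / 8;   // 4 GB

    const gddr5PinRateGbps = 9;      // fastest per-pin rate cited above
    const gddr5PinsPerChip = 32;     // standard GDDR5 chip interface (assumed)
    const gddr5GBps = (gddr5PinRateGbps * gddr5PinsPerChip) / 8;  // 36 GBps

    const hbm2GBps = 256;            // quoted HBM2 package bandwidth
    const hbm1GBps = hbm2GBps / 2;   // HBM1 is half of HBM2, per the article

    console.log(`HBM2 package capacity: ${packageCapacityGB} GB`);
    console.log(`GDDR5 chip bandwidth:  ${gddr5GBps} GBps`);
    console.log(`HBM2 vs GDDR5: ${(hbm2GBps / gddr5GBps).toFixed(1)}x`); // ~7.1x
    console.log(`HBM2 vs HBM1:  ${hbm2GBps / hbm1GBps}x`);               // 2x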

The firm’s 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb GDDR5-based solution, and embeds error-correcting code functionality to offer high reliability.

Samsung plans to produce an 8GB HBM2 DRAM package this year, and by integrating this into graphics cards the firm believes designers will be able to save more than 95 percent of space compared with using GDDR5 DRAM. This, Samsung said, will “offer more optimal solutions for compact devices that require high-level graphics computing capabilities”.

Samsung will increase production volume of its HBM2 DRAM over the course of the year to meet anticipated growth in market demand for network systems and servers. The firm will also expand its line-up of HBM2 DRAM solutions in a bid to “stay ahead in the high-performance computing market”.

Courtesy-TheInq
