
Apple Buys Parts of Qualcomm

December 31, 2015 by  
Filed under Around The Net

Apple has bought one of Qualcomm’s Taiwan graphics labs and is operating it pretty much under everyone’s radar to “invent” something that Qualcomm tried and failed to make successful.

The lab was used by Qualcomm to develop its Interferometric Modulator Display (IMOD) technology, and AppleInsider claims it is now being used to develop thinner, lighter, brighter and more energy-efficient screens.

The lab employs at least 50 engineers and has recruited talent from display maker AU Optronics and Qualcomm. Outside the lab there is no signage, nor much else to indicate that the Fruity Cargo Cult has assumed control.

Government records show that the building is registered to Apple Taiwan, and staff in the building were observed wearing Apple ID badges.

Bloomberg thinks Apple wants to “reduce reliance on the technology developed by suppliers such as Samsung, LG, Sharp and Japan Display” and instead “develop the production processes in-house and outsource to smaller manufacturers such as Taiwan’s AU Optronics or Innolux”.

Apple currently uses LCD screens in its Macs and iOS devices and an OLED display for the Apple Watch, and the new lab was where Qualcomm tried to develop its own Mirasol displays.

Mirasol uses a different technology from backlit LCDs or OLEDs. It relies on an array of microscopic mirror-like elements that can reflect light of a specific colour. It does not need a backlight and, like E-Ink, only draws energy when an element is switched.

The downside to IMOD has historically been that it reproduces flat, unsaturated colours, a problem that may be possible to fix. Qualcomm introduced a Toq smartwatch with an IMOD screen, but the device flopped.

Qualcomm took a $142 million charge on its Mirasol display business and a year ago there were rumours Qualcomm was selling off its Longtan Mirasol panel plant to TSMC.

It appears that Jobs Mob may have bought more than just the facility: it may also have an interest in using the Mirasol IMOD technology itself, which could enable a new class of low-power displays for phones, tablets and wearables.

Courtesy-Fud

December 30, 2015 by  
Filed under Computing

AMD and now RTG (Radeon Technologies Group) are involved in a major push to open source GPU resources.

According to Ars Technica, under the banner “GPUOpen” AMD is releasing a slew of open-source software and tools to give developers of games, heterogeneous applications and HPC applications deeper access to the GPU and GPU resources.

In a statement AMD said that as a continuation of the strategy it started with Mantle, it is giving even more control of the GPU to developers.

“As console developers have benefited from low-level access to the GPU, AMD wants to continue to bring this level of access to the PC space.”

The AMD GPUOpen initiative is meant to give developers the ability to use assets they’ve already made for console development. They will have direct access to GPU hardware, as well as access to a large collection of open source effects, tools, libraries and SDKs, which are being made available on GitHub under an MIT open-source license.

AMD hopes GPUOpen will enable console-style development for PC games through this open source software initiative. It also includes an end-to-end open source compute infrastructure for cluster-based computing and a new Linux software and driver strategy.

All this ties in with AMD’s Boltzmann Initiative and an HSA (Heterogeneous System Architecture) software suite that includes an HCC compiler for C++ development, which is supposed to widen the field of programmers who can use HSA. The new HCC C++ compiler was set up to let developers more easily use discrete GPU hardware in heterogeneous systems.

It also allows developers to convert CUDA code to portable C++. According to AMD, internal testing shows that in many cases 90 percent or more of CUDA code can be converted automatically, with the remaining 10 percent or so ported by hand into portable C++. An early access program for the “Boltzmann Initiative” tools is planned for Q1 2016.
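For a sense of what that conversion amounts to, here is a minimal sketch of a vector-add written against a HIP-style portable runtime. AMD does not name the tool above, but HIP is the toolchain generally associated with the Boltzmann Initiative, so treat the calls below (hipMalloc, hipMemcpy, hipLaunchKernelGGL) as an illustration of the kind of mechanical CUDA-to-portable-C++ translation being described rather than as confirmed tooling.

```cpp
// Hedged sketch only: what a trivial CUDA vector-add looks like after a
// HIP-style translation. The kernel body itself is untouched; only the
// runtime calls change (cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy,
// the <<<>>> launch syntax -> hipLaunchKernelGGL).
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));   // was cudaMalloc
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));

    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);  // was cudaMemcpy
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // was: vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    hipLaunchKernelGGL(vector_add, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```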

AMD GPUOpen also includes a new Linux driver model and runtime targeted at HPC cluster-class computing. The headless Linux driver is supposed to handle high-performance computing needs with low-latency compute dispatch and PCI Express data transfers, peer-to-peer GPU support, Remote Direct Memory Access (RDMA) from InfiniBand directly into GPU memory, and support for large single memory allocations.

Courtesy-Fud

December 29, 2015 by  
Filed under Around The Net

Facebook has unveiled its next-generation GPU-based systems for training neural networks, Open Rack-compatible hardware code-named “Big Sur” which it plans to open source.

The social media giant’s latest machine learning system has been designed for artificial intelligence (AI) computing at a large scale, and for the most part has been crafted with Nvidia hardware.

Big Sur comprises eight high-performance GPUs of up to 300 watts each, with the flexibility to configure between multiple PCI-e topologies. It makes use of Nvidia’s Tesla Accelerated Computing Platform, and as a result is twice as fast as Facebook’s previous generation rack.

“This means we can train twice as fast and explore networks twice as large,” said the firm in its engineering blog. “And distributing training across eight GPUs allows us to scale the size and speed of our networks by another factor of two.”

Facebook claims that as well as better performance, Big Sur is also far more versatile and efficient than the off-the-shelf solutions in its previous generation.

“While many high-performance computing systems require special cooling and other unique infrastructure to operate, we have optimised these new servers for thermal and power efficiency, allowing us to operate them even in our own free-air cooled, Open Compute standard data centres,” explained the company.

We spoke to Nvidia’s senior product manager for GPU Computing, Will Ramey, ahead of the launch, who has been working on the Big Sur project alongside Facebook for some time.

“The project is the first time that a complete computing system that is designed for machine learning and AI will be released as an open source solution,” said Ramey. “By taking the purpose-built design spec that Facebook has designed for their own machine learning apps and open sourcing them, people will benefit from and contribute to the project so it can move the entire industry forward.”

While Big Sur was built with Nvidia’s new Tesla M40 hyperscale accelerator in mind, it can actually support a wide range of PCI-e cards, which Facebook believes could bring better efficiencies in production and manufacturing and more computational power for every penny it invests.

“Servers can also require maintenance and hefty operational resources, so, like the other hardware in our data centres, Big Sur was designed around operational efficiency and serviceability,” Facebook said. “We’ve removed the components that don’t get used very much, and components that fail relatively frequently – such as hard drives and DIMMs – can now be removed and replaced in a few seconds.”

Perhaps the most interesting aspect of the Big Sur announcement is Facebook’s plans to open-source it and submit the design materials to the Open Compute Project. This is a bid to make it easier for AI researchers to share techniques and technologies.

“As with all hardware systems that are released into the open, it’s our hope that others will be able to work with us to improve it,” Facebook said, adding that it believes open collaboration will help foster innovation for future designs, and put us closer to building complex AI systems that will probably take over the world and kill us all.

Nvidia released its end-to-end hyperscale data centre platform last month claiming that it will let web services companies accelerate their machine learning workloads and power advanced artificial intelligence applications.

Consisting of two accelerators, Nvidia’s latest hyperscale line aims to let researchers design new deep neural networks more quickly for the increasing number of applications they want to power with AI. It also is designed to deploy these networks across the data centre. The line also includes a suite of GPU-accelerated libraries.

Courtesy-TheInq

December 28, 2015 by  
Filed under Computing

WP Engine, a hosting service set up to better support WordPress, has failed users by suffering a security breach and behaving just like the rest of the internet.

WordPress and its themes are often cast in the dark light of a security vulnerability, but we do not hear about WP Engine very often. Regardless, it seems to do good business and is reaching out to those it does business with to tell them what went wrong and what they need to do about it.

A reasonable amount of threat mitigation is required, and if you are affected by the issue you are going to have to change your password – again, and probably keep a cautious eye on the comings and goings of your email and financial accounts.

“At WP Engine we are committed to providing robust security. We are writing today to let you know that we learned of an exposure involving some of our customers’ credentials. Out of an abundance of caution, we are proactively taking security measures across our entire customer base,” says the firm in an urgent missive on its web pages.

“We have begun an investigation, however there is immediate action we are taking. Additionally, there is action that requires your immediate attention.”

That action is probably to panic in the short term, and then to change your password and cancel out any instances of its re-use across the internet. You know the drill; this is practically a daily thing, right? Judging by the WP Engine statement, we are in the early days of internal investigation.
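If you want to do that properly rather than recycle an old favourite, something like the following sketch will spit out a long random replacement. The 20-character length and the character set are our own arbitrary choices, not anything WP Engine prescribes.

```cpp
// Illustrative only: generate one long, random, single-use password so the
// replacement credential is not reused anywhere else. Length and character
// set are arbitrary choices for the example.
#include <iostream>
#include <random>
#include <string>

int main() {
    const std::string charset =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "abcdefghijklmnopqrstuvwxyz"
        "0123456789"
        "!@#$%^&*()-_=+";

    std::random_device rd;  // non-deterministic where the platform supports it
    std::uniform_int_distribution<std::size_t> pick(0, charset.size() - 1);

    std::string password;
    for (int i = 0; i < 20; ++i)
        password += charset[pick(rd)];

    std::cout << password << '\n';
    return 0;
}
```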

“While we have no evidence that the information was used inappropriately, as a precaution, we are invalidating the following five passwords associated with your WP Engine account,” explains WP Engine as it reveals the scale of its (actually, your) problem. “This means you will need to reset each of them.”

Have fun with that.

Courtesy-TheInq

December 24, 2015 by  
Filed under Computing

Nokia and ARM want to spruce up the TCP/IP stack to make it better suited to networks that need to operate at high speed and/or low latency.

Legacy TCP/IP is seen as one of the things slowing down a lot of future IT, particularly 5G. LTE was IP-based, but it was hell on toast getting it to go, and as networks get faster and more virtualised the TCP/IP stack is failing to keep up.

At the moment Nokia and ARM are using 5G to drive other companies into looking at a fully revamped TCP/IP stack, optimized for the massively varied use cases of the next mobile generation, for cloud services, and for virtualization and software-defined networking (SDN).

The effort has been dubbed the OpenFastPath (OFP) Foundation, founded by Nokia Networks, ARM and industrial IT services player Enea. The cunning plan is to create an open source TCP/IP stack which can accelerate the move towards SDN in carrier and enterprise networks.

AMD, Cavium, Freescale, HPE and the ARM-associated open source initiative, Linaro are all on board with it.

The aim is to create open but secure network applications which harness IP packet processing. Some want very high throughput, others ultra-low latency, and others want both, so it is probably going to take a flexible standard to make it all go.

The standard would support faster packet forwarding, via low IP latency combined with high capacity, and so reduce deployment and management costs by making networks more efficient.

This appears to be based around getting TCP/IP out of the kernel. Using the kernel for packet processing involves a number of operations (moving packets into memory, then into the kernel, then back out to the interface) which could be streamlined to reduce latency.
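For context, this is the conventional kernel path being complained about, sketched with an ordinary POSIX UDP socket: every packet costs a system call and a copy between kernel and user space in each direction. Nothing below is OpenFastPath API; it is simply the baseline a user-space stack is trying to beat.

```cpp
// Baseline sketch, not OpenFastPath code: an ordinary POSIX UDP echo loop.
// Each packet is DMA'd into kernel memory, processed by the kernel's TCP/IP
// stack, copied out to user space by recvfrom(), and copied back in again by
// sendto(). These per-packet syscalls and copies are what a user-space stack
// tries to avoid.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);  // kernel-managed UDP socket
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    char buf[2048];
    for (;;) {
        sockaddr_in src{};
        socklen_t srclen = sizeof(src);
        // One syscall and one kernel-to-user copy per received packet.
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             reinterpret_cast<sockaddr*>(&src), &srclen);
        if (n < 0) break;
        // One syscall and one user-to-kernel copy to send it back out.
        sendto(sock, buf, static_cast<size_t>(n), 0,
               reinterpret_cast<sockaddr*>(&src), srclen);
    }
    close(sock);
    return 0;
}
```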

Courtesy-Fud

December 23, 2015 by  
Filed under Computing

TSMC is scheduled to move its integrated fan-out (InFO) wafer-level packaging technology to volume production in the second quarter of 2016.

Apparently the fruity cargo cult Apple has already signed up to adopt the technology, which means that the rest of the world’s press will probably notice.

According to the Commercial Times, TSMC will have 85,000-100,000 wafers fabricated with the foundry’s in-house developed InFO packaging technology in the second quarter of 2016.

TSMC has disclosed its InFO packaging technology will be ready for mass production in 2016. Company president and co-CEO CC Wei remarked at an October 15 investors meeting that TSMC has completed construction of a new facility in Longtan, northern Taiwan.

TSMC’s InFO technology will be ready for volume production in the second quarter of 2016, according to Wei.

TSMC president and co-CEO Mark Liu disclosed the company is working on the second generation of its InFO technology for several projects on 10nm and 7nm process nodes.

Source-http://www.thegurureview.net/computing-category/tsmc-goes-fan-out-wafers.html

December 22, 2015 by  
Filed under Around The Net, Internet

Samsung has announced it will begin manufacturing electronics parts for the automotive industry, with a primary focus on autonomous vehicles.

The South Korean electronics giant is only the latest tech firm to make a somewhat belated push into the carmaker industry, as vehicle computer systems and sensors become more sophisticated.

In October, General Motors announced a strategic partnership with South Korea’s LG Electronics. LG will supply a majority of the key components for GM’s upcoming electric vehicle (EV), the Chevrolet Bolt. LG has also been building computer modules for GM’s OnStar telecommunications system for years.

Apple and Google have also developed APIs that are slowly being embedded by automakers to allow smartphones to natively connect and display their infotainment screens. Those APIs led to the rollout in several vehicles this year of Apple’s CarPlay and Android Auto.

Having formerly balked at the automotive electronics market as too small, consumer computer chipmakers are now entering the space with fervor.

Dutch semiconductor maker NXP is closing an $11.8 billion deal to buy Austin-based Freescale, which makes automotive microprocessors. The combined companies would displace Japan’s Renesas as the world’s largest vehicle chipmaker.

German semiconductor maker Infineon Technologies has reportedly begun talks to buy a stake in Renesas.

Adding to growth in automotive electronics are regulations mandating technology such as backup cameras in the U.S. and “eCall” in Europe, which automatically dials emergency services in the event of an accident.

According to a report published by Thomson Reuters, Samsung and its tech affiliates are ramping up research and development for auto technology, with two-thirds of their combined 1,804 U.S. patent filings since 2010 related to electric vehicles and electric components for cars.

The combined automotive software, services and components market is worth around $500 billion, according to ABI Research.

Source-http://www.thegurureview.net/consumer-category/samsung-announces-entry-into-auto-industry.html

December 21, 2015 by  
Filed under Computing

AMD over-hyped the new High Bandwidth Memory (HBM) standard, and now the second generation, HBM 2.0, is coming in 2016. However, it looks like most of the GPUs shipping in 2016 will still rely on the older GDDR5.

Most of the entry-level, mainstream and even performance graphics cards from both Nvidia and AMD will rely on GDDR5. This memory has been with us since 2007, but it has dramatically increased in speed, and the chips have shrunk from 60nm in 2007 to 20nm in 2015, making higher clocks and lower voltages possible.

Some of the big boys, including Samsung and Micron, have started producing 8Gb GDDR5 chips that will enable cards with 1GB of memory per chip. The GTX 980 Ti has 12 4Gb chips (512MB per chip) for 6GB in total, while the Radeon Fury X comes with four HBM 1.0 stacks supporting 1GB per stack at much higher bandwidth. The GeForce Titan X has 24 chips of 512MB each, bringing its total memory to 12GB.

The next generation of cards could get 12GB of memory from 12 of the new GDDR5 chips, or 24GB from 24 chips. Most of the mainstream and performance cards will come with much less memory.
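The arithmetic is straightforward: chip density is quoted in gigabits, so total capacity in gigabytes is the chip count multiplied by gigabits per chip, divided by eight. A quick sketch using the configurations mentioned above:

```cpp
// Quick check of the memory arithmetic: density is quoted in gigabits (Gb)
// per chip, so total capacity in GB = chips * (Gb per chip) / 8.
#include <cstdio>

struct Card { const char* name; int chips; int gbits_per_chip; };

int main() {
    const Card cards[] = {
        {"GeForce GTX 980 Ti (4Gb GDDR5)", 12, 4},  // 12 x 512MB = 6GB
        {"GeForce Titan X (4Gb GDDR5)",    24, 4},  // 24 x 512MB = 12GB
        {"Radeon Fury X (HBM 1.0)",         4, 8},  //  4 x 1GB   = 4GB
        {"Next-gen, 12 x 8Gb GDDR5",       12, 8},  // 12 x 1GB   = 12GB
        {"Next-gen, 24 x 8Gb GDDR5",       24, 8},  // 24 x 1GB   = 24GB
    };
    for (const Card& c : cards)
        printf("%-32s %2d chips x %dGb = %4.1f GB\n",
               c.name, c.chips, c.gbits_per_chip,
               c.chips * c.gbits_per_chip / 8.0);
    return 0;
}
```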

Only a few high-end cards, such as AMD’s FinFET-based Greenland and a GeForce version of Pascal, will come with the more expensive and much faster HBM 2.0 memory.

GDDR6 is arriving in 2016, at least at Micron, and the company promises much higher bandwidth than GDDR5. So there will be a few choices.

Source-http://www.thegurureview.net/computing-category/will-gddr5-rule-in-2016.html

December 18, 2015 by  
Filed under Security

Hacking a major corporation is so easy that even an elderly grannie could do it, according to technology industry character John McAfee.

McAfee said that looking at the world’s worst hacks you can see a common pattern – they were not accomplished using the most sophisticated hacking tools.

Writing in IBTimes, he said that the worst was the 2012 attack on Saudi Aramco, one of the world’s largest oil companies. Within hours, nearly 35,000 distinct computer systems had their functionality crippled or destroyed, causing a massive disruption to the world’s oil supply chain. It was made possible by an employee who was fooled into clicking a bogus link sent in an email.

He said 90 per cent of hacking was social engineering, and it is the human elements in your organization that are going to determine how difficult, or how easy, it will be to hack you.

The user is the weakest link in the chain of computing trust, imperfect by nature. And all of the security software and hardware in the world will not keep a door shut if an authorized user can be convinced to open it, he said.

“Experienced hackers don’t concern themselves with firewalls, anti-spyware software, anti-virus software, encryption technology. Instead they want to know whether your management personnel are frequently shuffled; whether your employees are dissatisfied; whether nepotism is tolerated; whether your IT managers have stagnated in their training and self-improvement.”

Much of this information can be picked up on the dark web and the internet underground, he added.

“Are you prepared for a world where grandma or anyone else can quickly obtain, on the wide open web, all of the necessary information for a social engineering hack? Is your organization prepared?”

 

Source- http://www.thegurureview.net/computing-category/can-corporations-be-easily-hacked.html

December 17, 2015 by  
Filed under Security

A Russian cyberespionage group known as Pawn Storm has made use of new tools in an ongoing attack campaign against defense contractors with the goal of defeating network isolation policies.

Pawn Storm, also known as Sofacy, after its primary malware tool, has been active since at least 2007 and has targeted governmental, security and military organizations from NATO member countries, as well as media organizations, Ukrainian political activists and Kremlin critics.

Since August, the group has been engaged in an attack campaign focused on defense contractors, according to security researchers from Kaspersky Lab.

During this operation, the group has used a new version of a backdoor program called AZZY and a new set of data-stealing modules. One of those modules monitors for USB storage devices plugged into the computer and steals files from them based on rules defined by the attackers.

The Kaspersky Lab researchers believe that this module’s goal is to defeat so-called network air gaps, network segments where sensitive data is stored and which are not connected to the Internet to limit their risk of compromise.

However, it’s fairly common for employees in organizations that use such network isolation policies to move data from air-gapped computers to their workstations using USB thumb drives.

Pawn Storm joins other sophisticated cyberespionage groups, like Equation and Flame, that are known to have used malware designed to defeat network air gaps.

“Over the last year, the Sofacy group has increased its activity almost tenfold when compared to previous years, becoming one of the most prolific, agile and dynamic threat actors in the arena,” the Kaspersky researchers said in a blog post. “This activity spiked in July 2015, when the group dropped two completely new exploits, an Office and Java zero-day.”

Source- http://www.thegurureview.net/aroundnet-category/pawn-storm-hacking-group-develops-new-tools-for-cyberespionage.html
