Can Samsung Beat Intel?

April 25, 2016 by  
Filed under Computing

Comments Off on Can Samsung Beat Intel?

Samsung is closing in on Intel in the semiconductor sector as its market share increased by 0.9 percent when compared to a year earlier.

According to beancounters at IBS, the news comes on the heels of an announcement that the three-month average of the global semiconductor market ending in February fell 6.2 percent from the same period in 2015, a steeper drop than the 5.8 percent decline recorded in January.

IBS chief executive Handel Jones said:

“Based on talking to customers about buying patterns, we see softness. Smartphone sales are slowing, and the composition of the market is changing, with about half of all chips bought by companies in China that want low-end devices.”

In addition, over the past year memory prices have fallen by nearly half, both for DRAMs and for NAND-based solid-state drives, as vendors try to buy market share, said Jones. “It’s more of a price issue because volumes are up.”

Jones expects softness in the PC market will continue through this year. Demand for chips is rising in automotive and for the emerging Internet of Things, but so far both sectors are relatively small, he added.

The data shows that the market share gap between Intel and Samsung is narrowing. In 2012 the gap was 5.3 percent; it narrowed to 4.2 percent in 2013 and stood at 3.2 percent in 2015. SK Hynix, now the third largest semiconductor brand in the world, overtook Qualcomm with a market share of 4.8 percent.

Courtesy-Fud

Seagate Goes 8TB For Surveillance

November 13, 2015 by  
Filed under Computing

Comments Off on Seagate Goes 8TB For Surveillance

Seagate has become the first hard drive company to create an 8TB unit aimed specifically at the surveillance market, targeting system integrators, end users and system installers.

The Seagate Surveillance HDD, as those wags in marketing have named it, is the highest-capacity specialist drive yet built for security camera set-ups, and Seagate cites its main selling points as maximizing uptime while removing the need for excess support.

“Seagate has worked closely with the top surveillance manufacturers to evolve the features of our Surveillance HDD products and deliver a customized solution that has precisely matched market needs in this evolving space for the last 10 years,” said Matt Rutledge, Seagate’s senior vice president for client storage.

“With HD recordings now standard for surveillance applications, Seagate’s Surveillance HDD product line has been designed to support these extreme workloads with ease and is capable of a 180TB/year workload, three times that of a standard desktop drive.

“It also includes surveillance-optimized firmware to support up to 64 cameras and is the only product in the industry that can support surveillance solutions, from single-bay DVRs to large multi-bay NVR systems.”

The 3.5in drive is designed to run 24/7 and is able to capture 800 hours of high-definition video from up to 64 cameras simultaneously, making it ideal for shopping centers, urban areas, industrial complexes and anywhere else you need to feel simultaneously safe and violated. Its capacity will allow 6PB in a 42U rack.
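As a rough sanity check on that rack figure, using only the numbers above and assuming the rack is fully populated with these 8TB drives:

    \[ \frac{6\ \mathrm{PB}}{8\ \mathrm{TB\ per\ drive}} = 750\ \text{drives} \approx 18\ \text{drives per rack unit across 42U} \]

which is plausible for today’s high-density storage enclosures.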

Included in the deal is the Seagate Rescue Service, capable of restoring lost data in two weeks if circumstances permit, and sold with end users in mind for whom an IT support infrastructure is either non-existent or off-site. The service has a 90 percent success rate and is available as part of the drive cost for the first three years.

Seagate demonstrated the drive today at the China Public Security Expo. Where better than the home of civil liberty infringement to show off the new drive?

Earlier this year, Seagate announced a new co-venture with SSD manufacturer Micron, which will come as a huge relief after the recent merger announcement between WD and SanDisk.

Courtesy-http://www.thegurureview.net/computing-category/seagate-goes-8tb-for-surveillance.html

China Keeps Supercomputing Title

July 24, 2015 by  
Filed under Computing

Comments Off on China Keeps Supercomputing Title

A supercomputer developed by China’s National Defense University is still the fastest publicly known computer in the world, while the U.S. is close to an historic low in the latest edition of the closely followed Top 500 supercomputer ranking, which was just published.

The Tianhe-2 computer, based at the National Supercomputer Center in Guangzhou, has been at the top of the list for more than two years, and its maximum achieved performance of 33,863 teraflops is almost double that of the U.S. Department of Energy’s Cray Titan supercomputer at the Oak Ridge National Laboratory in Tennessee.

The IBM Sequoia computer at the Lawrence Livermore National Laboratory in California is the third fastest machine, and fourth on the list is the Fujitsu K computer at Japan’s Advanced Institute for Computational Science. The only new machine to enter the top 10 is the Shaheen II computer of King Abdullah University of Science and Technology in Saudi Arabia, which is ranked seventh.

The Top 500 list, published twice a year to coincide with supercomputer conferences, is closely watched as an indicator of the status of development and investment in high-performance computing around the world. It also provides insights into what technologies are popular among organizations building these machines, but participation is voluntary. It’s quite possible a number of secret supercomputers exist that are not counted in the list.

With 231 machines in the Top 500 list, the U.S. remains the top country in terms of the number of supercomputers, but that’s close to the all-time low of 226 hit in mid-2002. That was right about the time that China began appearing on the list. It rose to claim 76 machines this time last year, but the latest count has China at 37 computers.

The Top 500 list is compiled by supercomputing experts at the University of Mannheim, Germany; the University of Tennessee, Knoxville; and the Department of Energy’s Lawrence Berkeley National Laboratory.

Source

nVidia Releases CUDA

July 10, 2014 by  
Filed under Computing

Comments Off on nVidia Releases CUDA

Nvidia has released CUDA – its parallel programming platform that lets developers run code on GPUs – to server vendors in an effort to get 64-bit ARM cores into the high performance computing (HPC) market.

The firm said today that ARM64 server processors, which are designed for microservers and web servers because of their energy efficiency, can now process HPC workloads when paired with GPU accelerators using the Nvidia CUDA 6.5 parallel programming framework, which supports 64-bit ARM processors.

“Nvidia’s GPUs provide ARM64 server vendors with the muscle to tackle HPC workloads, enabling them to build high-performance systems that maximise the ARM architecture’s power efficiency and system configurability,” the firm said.

The first GPU-accelerated ARM64 software development servers will be available in July from Cirrascale and E4 Computer Engineering, with production systems expected to ship later this year. The Eurotech Group also plans to ship production systems later this year.

Cirrascale’s system will be the RM1905D, a high density two-in-one 1U server with two Tesla K20 GPU accelerators, which the firm claims provides high performance and low total cost of ownership for private cloud, public cloud, HPC and enterprise applications.

E4’s EK003 is a production-ready, low-power 3U dual-motherboard server appliance with two Tesla K20 GPU accelerators designed for seismic, signal and image processing, video analytics, track analysis, web applications and MapReduce processing.

Eurotech’s system is an “ultra-high density”, energy efficient and modular Aurora HPC server configuration, based on proprietary Brick Technology and featuring direct hot liquid cooling.

Featuring Applied Micro X-Gene ARM64 CPUs and Nvidia Tesla K20 GPU accelerators, the new ARM64 servers will provide customers with an expanded range of efficient, high-performance computing options to drive compute-intensive HPC and enterprise data centre workloads, Nvidia said.

Nvidia added, “Users will immediately be able to take advantage of hundreds of existing CUDA-accelerated scientific and engineering HPC applications by simply recompiling them to ARM64 systems.”
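The practical point behind recompiling existing applications is that CUDA device code is compiled for the GPU regardless of the host CPU, so a toy kernel like the one below (our own illustration, not Nvidia’s sample code) builds unchanged whether nvcc targets an x86 or an ARM64 host; only the host-side toolchain changes.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Single-precision a*x + y; nothing here depends on the host CPU architecture.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 16;
        float *x, *y;
        cudaMalloc((void **)&x, n * sizeof(float));
        cudaMalloc((void **)&y, n * sizeof(float));
        cudaMemset(x, 0, n * sizeof(float));
        cudaMemset(y, 0, n * sizeof(float));

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("kernel finished: %s\n", cudaGetErrorString(cudaGetLastError()));
        cudaFree(x);
        cudaFree(y);
        return 0;
    }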

ARM said that it is working with Nvidia to “explore how we can unite GPU acceleration with novel technologies” and drive “new levels of scientific discovery and innovation”.

Source

nVidia Outs CUDA 6

March 19, 2014 by  
Filed under Computing

Comments Off on nVidia Outs CUDA 6

Nvidia has made the CUDA 6 Release Candidate, the latest version of its GPU programming language, available for developers to download for free.

The release arrives with several new features and improvements to make parallel programming “better, faster and easier” for developers creating next generation scientific, engineering, enterprise and other applications.

Nvidia has aggressively promoted its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Available now, the CUDA 6 Release Candidate brings a major new update in unified memory access, which lets CUDA applications access CPU and GPU memory without the need to manually copy data from one to the other.

“This is a major time saver that simplifies the programming process, and makes it easier for programmers to add GPU acceleration in a wider range of applications,” Nvidia said in a blog post on Thursday.
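As a rough illustration of what that removes, here is a minimal sketch in the style of CUDA 6 unified memory (the kernel and sizes are our own, not Nvidia’s sample code): a single allocation from cudaMallocManaged is visible to both the host and the GPU, so the explicit cudaMemcpy calls that earlier CUDA versions required simply disappear.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Toy kernel: double each element in place.
    __global__ void scale(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 2.0f;
    }

    int main()
    {
        const int n = 1 << 20;
        float *data = NULL;

        // One allocation visible to both CPU and GPU; no cudaMemcpy needed.
        cudaMallocManaged((void **)&data, n * sizeof(float));

        for (int i = 0; i < n; ++i)   // the host writes through the same pointer
            data[i] = 1.0f;

        scale<<<(n + 255) / 256, 256>>>(data, n);
        cudaDeviceSynchronize();      // wait before the host reads the result

        printf("data[0] = %f\n", data[0]);   // prints 2.000000
        cudaFree(data);
        return 0;
    }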

There’s also the addition of “drop-in libraries”, which Nvidia said will accelerate applications by up to eight times.

“The new drop-in libraries can automatically accelerate your BLAS and FFTW calculations by simply replacing the existing CPU-only BLAS or FFTW library with the new, GPU-accelerated equivalent,” the chip designer added.
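In practice the drop-in idea means no source changes at all. A plain CPU-side BLAS call such as the small DGEMM below keeps its standard Fortran-style interface; whether it runs on the CPU or the GPU is decided purely by which BLAS library is linked or preloaded at run time. The program is an illustrative sketch of that mechanism, not Nvidia’s documentation.

    #include <stdio.h>

    /* Standard Fortran-interface BLAS routine. The symbol is resolved by
       whichever BLAS library is linked or preloaded: a CPU BLAS such as
       OpenBLAS, or a GPU-accelerated drop-in replacement. */
    extern void dgemm_(const char *transa, const char *transb,
                       const int *m, const int *n, const int *k,
                       const double *alpha, const double *a, const int *lda,
                       const double *b, const int *ldb,
                       const double *beta, double *c, const int *ldc);

    int main(void)
    {
        const int n = 2;
        double a[4] = {1, 2, 3, 4};   /* 2x2, column-major */
        double b[4] = {5, 6, 7, 8};
        double c[4] = {0, 0, 0, 0};
        double alpha = 1.0, beta = 0.0;

        /* C = alpha * A * B + beta * C */
        dgemm_("N", "N", &n, &n, &n, &alpha, a, &n, b, &n, &beta, c, &n);

        printf("C(1,1) = %f\n", c[0]);   /* 1*5 + 3*6 = 23 */
        return 0;
    }

Swapping in the GPU library is then a link- or load-time decision rather than a code change, which is what makes the "drop-in" label apt; on Linux this is typically a matter of preloading or linking the GPU library ahead of the CPU one.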

Multi-GPU scaling has also been added to the CUDA 6 programming language, introducing re-designed BLAS and FFT GPU libraries that automatically scale performance across up to eight GPUs in a single node. Nvidia said this provides over nine teraflops of double-precision performance per node and supports workloads of up to 512GB in size, larger than it has handled before.
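For a sense of what that looks like to the programmer, the multi-GPU BLAS work is exposed through the cublasXt interface that arrives with CUDA 6: create a handle, tell it which GPUs to use, and issue a GEMM on ordinary host arrays, which the library then tiles across the selected devices. The sketch below is our own minimal example (device IDs and matrix size are arbitrary), not Nvidia’s sample code.

    #include <stdlib.h>
    #include <cublasXt.h>

    int main(void)
    {
        const size_t n = 4096;
        /* Ordinary host allocations; the library handles staging to the GPUs. */
        double *A = (double *)calloc(n * n, sizeof(double));
        double *B = (double *)calloc(n * n, sizeof(double));
        double *C = (double *)calloc(n * n, sizeof(double));
        double alpha = 1.0, beta = 0.0;

        int devices[2] = {0, 1};          /* use the first two GPUs in the node */

        cublasXtHandle_t handle;
        cublasXtCreate(&handle);
        cublasXtDeviceSelect(handle, 2, devices);

        /* A standard-looking DGEMM; the work is split into tiles and streamed
           across the selected GPUs automatically. */
        cublasXtDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                      n, n, n, &alpha, A, n, B, n, &beta, C, n);

        cublasXtDestroy(handle);
        free(A); free(B); free(C);
        return 0;
    }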

“In addition to the new features, the CUDA 6 platform offers a full suite of programming tools, GPU-accelerated math libraries, documentation and programming guides,” Nvidia said.

The previous CUDA 5.5 Release Candidate was issued last June, and added support for ARM based processors.

Aside from ARM support, Nvidia also improved Hyper-Q support in CUDA 5.5, which allowed developers to use MPI workload prioritisation. The firm also touted improved performance analysis and improved performance for cross-compilation on x86 processors.

Source

nVidia Pays Up

January 10, 2014 by  
Filed under Around The Net

Comments Off on nVidia Pays Up

Nvidia has agreed to pay any Canadian who had the misfortune to buy a certain laptop computer made by Apple, Compaq, Dell, HP, or Sony between November 2005 and February 2010. Apparently these models contained a dodgy graphics card which was not fixed for five years.

Under a settlement approved by the court, Nvidia will pay $1,900,000 into a fund for anyone who might have bought a faulty card. The Settlement Agreement provides partial cash reimbursement of the purchase price, and claims must be submitted by February 25, 2014. You will know if your Nvidia card was faulty because the machine would show distorted or scrambled video, or no video on the screen even when the computer is on; random characters, lines or garbled images – a bit like watching one of the Twilight series; or intermittent video issues and a failure to detect the wireless adaptor or wireless networks.

The amount of compensation will be determined by the Claims Administrator who will apply a compensation grid and settlement administration guidelines. Cash compensation will also be provided for total loss of use based on the age of the computer; temporary loss of use having regard to the nature and duration of the loss of use; and reimbursement for out-of-pocket expenses caused by Qualifying Symptoms to an Affected Computer.

Source

App Stores For Supercomputers Enroute

December 13, 2013 by  
Filed under Computing

Comments Off on App Stores For Supercomputers Enroute

A major problem facing supercomputing is that the firms that could benefit most from the technology aren’t using it. It is a dilemma.

Supercomputer-based visualization and simulation tools could allow a company to create, test and prototype products in virtual environments. Couple this virtualization capability with a 3-D printer, and a company could revolutionize its manufacturing.

But licensing fees for the software needed to simulate wind tunnels, ovens, welds and other processes are expensive, and the tools require large multicore systems and skilled engineers to use them.

One possible solution: taking an HPC process and converting it into an app.

This is how it might work: A manufacturer designing a part to reduce drag on an 18-wheel truck could upload a CAD file, plug in some parameters, hit start and let it use 128 cores of the Ohio Supercomputer Center’s (OSC) 8,500-core system. The cost would likely be anywhere from $200 to $500 for a 6,000 CPU-hour run, or about 48 hours, to simulate the process and package the results up in a report.
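The 48-hour figure follows directly from the core count quoted above:

    \[ \frac{6000\ \mathrm{CPU\text{-}hours}}{128\ \mathrm{cores}} \approx 47\ \mathrm{hours\ of\ wall\text{-}clock\ time} \]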

Testing that 18-wheeler in a physical wind tunnel could cost as much as $100,000.

Alan Chalker, the director of the OSC’s AweSim program, uses that example to explain what his organization is trying to do. The new group has some $6.5 million from government and private groups, including consumer products giant Procter & Gamble, to find ways to bring HPC to manufacturers via an app store.

The app store is slated to open at the end of the first quarter of next year, with one app and several tools that have been ported for the Web. The plan is to eventually spin off AweSim into a private firm and populate the app store with thousands of apps.

Tom Lange, director of modeling and simulation in P&G’s corporate R&D group, said he hopes that AweSim’s tools will be used for the company’s supply chain.

The software industry model is based on selling licenses, which for an HPC application can cost $50,000 a year, said Lange. That price is well out of the reach of small manufacturers interested in fixing just one problem. “What they really want is an app,” he said.

Lange said P&G has worked with supply chain partners on HPC issues, but it can be difficult because of the complexities of the relationship.

“The small supplier doesn’t want to be beholden to P&G,” said Lange. “They have an independent business and they want to be independent and they should be.”

That’s one of the reasons he likes AweSim.

AweSim will use some open source HPC tools in its apps, and is also working on agreements with major HPC software vendors to make parts of their tools available through an app.

Chalker said software vendors are interested in working with AweSim because it’s a way to get to a market that’s inaccessible today. The vendors could get some licensing fees for an app and a potential customer for larger, more expensive apps in the future.

AweSim is an outgrowth of the Blue Collar Computing initiative that started at OSC in the mid-2000s with goals similar to AweSim’s. But that program required that users purchase a lot of costly consulting work. The app store’s approach is to minimize cost, and the need for consulting help, as much as possible.

Chalker has a half dozen apps already built, including one used in the truck example. The OSC is building a software development kit to make it possible for others to build them as well. One goal is to eventually enable other supercomputing centers to provide compute capacity for the apps.

AweSim will charge users a fixed rate for CPUs, covering just the costs, and will provide consulting expertise where it is needed. Consulting fees may raise the bill for users, but Chalker said it usually wouldn’t be more than a few thousand dollars, a lot less than hiring a full-time computer scientist.

The AweSim team expects that many app users, a mechanical engineer for instance, will know enough to work with an app without the help of a computational fluid dynamics expert.

Lange says that manufacturers understand that producing domestically rather than overseas requires making products better, being innovative and not wasting resources. “You have to be committed to innovate what you make, and you have to commit to innovating how you make it,” said Lange, who sees HPC as a path to get there.

Source

nVidia Launching New Cards

September 10, 2013 by  
Filed under Computing

Comments Off on nVidia Launching New Cards

We weren’t expecting this and it is just a rumour, but reports are emerging that Nvidia is readying two new cards for the winter season. AMD of course is launching new cards four weeks from now, so it is possible that Nvidia would try to counter it.

The big question is with what?

VideoCardz claims one of the cards is an Ultra, possibly the GTX Titan Ultra, while the second one is a dual-GPU job, the Geforce GTX 790. The Ultra is supposedly GK110-based, but with all 2880 CUDA cores unlocked, a bit more than the 2688 on the Titan.

The GTX 790 is said to feature two GK110 GPUs, but Nvidia will probably have to clip their wings to get a reasonable TDP.

We’re not entirely sure this is legit. It is plausible, but that doesn’t make it true. It would be good for Nvidia’s image, especially if the revamped GK110 products manage to steal the performance crown from AMD’s new Radeons. However, with such specs, they would end up quite pricey and Nvidia wouldn’t sell that many of them – most enthusiasts would probably be better off waiting for Maxwell.

Source

nVidia’s CUDA 5.5 Available

June 25, 2013 by  
Filed under Computing

Comments Off on nVidia’s CUDA 5.5 Available

Nvidia has made its CUDA 5.5 release candidate supporting ARM based processors available for download.

Nvidia has been aggressively pushing its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Now the firm has announced the availability of a CUDA 5.5 release candidate, the first version of the language that supports ARM based processors.

Aside from ARM support, Nvidia has improved Hyper-Q support and now allows developers to prioritise MPI workloads. The firm also touted improved performance analysis and improved performance for cross-compilation on x86 processors.

Ian Buck, GM of GPU Computing Software at Nvidia said, “Since developers started using CUDA in 2006, successive generations of better, exponentially faster CUDA GPUs have dramatically boosted the performance of applications on x86-based systems. With support for ARM, the new CUDA release gives developers tremendous flexibility to quickly and easily add GPU acceleration to applications on the broadest range of next-generation HPC platforms.”

Nvidia’s support for ARM processors in CUDA 5.5 is an indication that it will release CUDA enabled Tegra processors in the near future. However outside of the firm’s own Tegra processors, CUDA support is largely useless, as almost all other chip designers have chosen OpenCL as the programming language for their GPUs.

Nvidia did not say when it will release CUDA 5.5, but in the meantime the firm’s release candidate supports Windows, Mac OS X and just about every major Linux distribution.

Source

Are CUDA Applications Limited?

March 29, 2013 by  
Filed under Computing

Comments Off on Are CUDA Applications Limited?

Acceleware said at Nvidia’s GPU Technology Conference (GTC) today that most algorithms that run on GPGPUs are bound by GPU memory size.

Acceleware is partly funded by Nvidia to provide developer training for CUDA, helping sell the language to those who are used to traditional C and C++ programming. The firm said that most CUDA algorithms are now limited by GPU local memory size rather than by GPU computational performance.

Both AMD and Nvidia provide general purpose GPU (GPGPU) accelerator parts that deliver significantly faster computational processing than traditional CPUs; however, they have only between 6GB and 8GB of local memory, which constrains the size of the dataset the GPU can process. While developers can push more data across from system main memory, the latency cost negates the raw performance benefit of the GPU.

Kelly Goss, training program manager at Acceleware, said that “most algorithms are memory bound rather than GPU bound” and “maximising memory usage is key” to optimising GPGPU performance.

She further said that developers need to understand and take advantage of the memory hierarchy of Nvidia’s Kepler GPU and look at ways of reducing the number of memory accesses for every line of GPU computing.

The point Goss was making is that GPU computation is cheap in terms of clock cycles relative to the time it takes to fetch data from local memory, let alone to load GPU memory from system main memory.

Goss, talking to a room full of developers, proceeded to outline some of the performance characteristics of the memory hierarchy in Nvidia’s Kepler GPU architecture, showing the level of detail that CUDA programmers need to pay attention to if they want to extract the full performance potential from Nvidia’s GPGPU computing architecture.
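To make that concrete, the simplified 1D stencil below shows the kind of access pattern Goss was describing: each block stages its slice of the input in on-chip shared memory once, so neighbouring threads reuse those values instead of each issuing its own redundant global memory loads. The kernel and sizes are our own illustration, not Acceleware’s training material.

    #include <cuda_runtime.h>

    #define BLOCK  256
    #define RADIUS 3

    // Averaging stencil: out[i] is the mean of in[i-RADIUS .. i+RADIUS].
    // Staging a tile in shared memory means each input element is read from
    // global memory roughly once per block instead of up to 2*RADIUS+1 times.
    __global__ void stencil(const float *in, float *out, int n)
    {
        __shared__ float tile[BLOCK + 2 * RADIUS];

        int g = blockIdx.x * blockDim.x + threadIdx.x;  // global index
        int l = threadIdx.x + RADIUS;                   // index within the tile

        // Every thread loads one interior element (clamped at the array ends).
        tile[l] = in[min(g, n - 1)];

        // The first RADIUS threads also fill the halo on both sides of the tile.
        if (threadIdx.x < RADIUS) {
            tile[l - RADIUS] = in[max(g - RADIUS, 0)];
            tile[l + BLOCK]  = in[min(g + BLOCK, n - 1)];
        }
        __syncthreads();   // the whole tile is in shared memory before any reads

        if (g < n) {
            float sum = 0.0f;
            for (int off = -RADIUS; off <= RADIUS; ++off)
                sum += tile[l + off];   // all reads hit fast shared memory
            out[g] = sum / (2 * RADIUS + 1);
        }
    }

    int main()
    {
        const int n = 1 << 20;
        float *in, *out;
        cudaMalloc((void **)&in, n * sizeof(float));
        cudaMalloc((void **)&out, n * sizeof(float));
        cudaMemset(in, 0, n * sizeof(float));

        stencil<<<(n + BLOCK - 1) / BLOCK, BLOCK>>>(in, out, n);
        cudaDeviceSynchronize();

        cudaFree(in);
        cudaFree(out);
        return 0;
    }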

Given Goss’s observation that algorithms running on Nvidia’s GPGPUs are often constrained by local memory size rather than by the GPU itself, the firm might want to look at simplifying the tiers of memory involved and increasing the amount of GPU local memory so that CUDA software developers can process larger datasets.

Source
