Syber Group

Stanford Develops Carbon Nanotubes

October 17, 2013
Filed under Computing

Researchers at Stanford University have demonstrated the first functional computer constructed using only carbon nanotube transistors.

Scientists have been experimenting with transistors based on carbon nanotubes, or CNTs, as substitutes for silicon transistors, which may soon hit their physical limits.

The rudimentary CNT computer is said to run a simple operating system capable of multitasking, according to a synopsis of an article published in the journal Nature.

Made of 178 transistors, each containing between 10 and 200 carbon nanotubes, the computer can do four tasks summarized as instruction fetch, data fetch, arithmetic operation and write-back, and run two different programs concurrently.
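The four tasks map onto the classic fetch-execute cycle found in any stored-program machine. As a rough illustration only (a generic accumulator machine, not the actual Stanford CNT design), the cycle can be sketched as:

```python
# Illustrative sketch of the four-stage cycle described above:
# instruction fetch, data fetch, arithmetic operation, write-back.
# This models a generic accumulator machine, NOT Stanford's CNT computer.

def run(program, memory):
    """Execute a list of (op, addr) instructions against a memory list."""
    acc = 0
    for op, addr in program:        # 1. instruction fetch
        operand = memory[addr]      # 2. data fetch
        if op == "ADD":             # 3. arithmetic operation
            acc += operand
        elif op == "SUB":
            acc -= operand
        elif op == "STORE":         # 4. write-back
            memory[addr] = acc
    return acc, memory

# Example: compute 7 + 5, then store the result in memory slot 2.
result, mem = run([("ADD", 0), ("ADD", 1), ("STORE", 2)], [7, 5, 0])
```

Running two such programs concurrently, as the Stanford machine does, amounts to interleaving two instruction streams through the same cycle.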

The research team was led by Stanford professors Subhasish Mitra and H.S. Philip Wong.

“People have been talking about a new era of carbon nanotube electronics moving beyond silicon,” Mitra said in a statement. “But there have been few demonstrations of complete digital systems using [the] technology. Here is the proof.”

IBM last October said its scientists had placed more than 10,000 transistors made of nano-size tubes of carbon on a single chip. Previous efforts had yielded chips with just a few hundred carbon nanotubes.

Source

Will Arm/Atom CPUs Replace Xeon/Opteron?

June 7, 2013
Filed under Computing

Analysts are saying that smartphone chips could one day replace the Xeon and Opteron processors used in most of the world’s top supercomputers. In a paper titled “Are mobile processors ready for HPC?” researchers at the Barcelona Supercomputing Center wrote that less expensive chips could bump out faster but higher-priced processors in high-performance systems.

In 1993, the list of the world’s fastest supercomputers, known as the Top500, was dominated by systems based on vector processors. They were nudged out by less expensive RISC processors. RISC chips were eventually replaced by cheaper commodity processors like Intel’s Xeon and AMD’s Opteron, and now mobile chips are likely to take over.

The transitions had a common thread, the researchers wrote: microprocessors killed the vector supercomputers because they were “significantly cheaper and greener,” the report said. At the moment, low-power chips based on ARM designs fit the bill, but Intel is likely to catch up, so the shift is unlikely to mean the death of x86.

The report compared Samsung’s 1.7GHz dual-core Exynos 5250, Nvidia’s 1.3GHz quad-core Tegra 3 and Intel’s 2.4GHz quad-core Core i7-2760QM, a laptop chip rather than a server chip. The researchers said they found that the ARM processors were more power-efficient than the Intel processor on single-core performance, and that ARM chips can scale effectively in HPC environments. On a multi-core basis, the ARM chips were as efficient as Intel x86 chips at the same clock frequency, but Intel was more efficient at the highest performance levels, the researchers said.

Source

LG Buys webOS From HP

March 6, 2013
Filed under Computing

Hewlett-Packard has sold some of the rights to its webOS mobile operating system to LG Electronics for use in smart TVs manufactured by the South Korean electronics giant.

LG has agreed to acquire the source code, webOS engineering team and other assets from HP, in a deal announced on Monday. LG will also license HP patents related to webOS and cloud technology, the companies said.

Financial terms of the deal weren’t disclosed.

HP acquired the mobile operating system, along with device maker Palm, in February 2010. HP used the OS on its short-lived TouchPad device, which debuted in mid-2011 and disappeared weeks later.

HP announced a new tablet, the US$169 Slate 7, on Sunday. The Slate 7 will run the Android operating system.

LG will lead the Open webOS and Enyo open-source projects as part of the deal, the company said. HP will retain ownership of all of Palm’s cloud computing assets, including source code, talent, infrastructure and contracts.

HP said it will also continue to support Palm users.

LG will use the technology to expand the Web capabilities of its smart TVs, said Sam Chang, LG vice president and general manager of innovation and Smart TV, in an interview.

LG bought the webOS assets in part for the engineering team, which includes user experience engineers, he said. The webOS engineers who remained at HP — the companies aren’t saying how many there are — are to join LG’s Silicon Valley labs.

Source…

Is Non-Volatile Memory The Next Craze?

March 4, 2013
Filed under Computing

A report from analysts Yole Developpement claims that MRAM/STT-MRAM and PCM will lead the Emerging Non-Volatile Memory (ENVM) market, earning a combined $1.6bn by 2018, by which point the two will be the top-selling ENVM technologies on the market.

Yole’s Yann de Charentenay said that their combined sales will almost double each year, with double-density chips launched every two years. So far only FRAM, PCM and MRAM have been available, in low-density chips and to only a few players. The market was quite limited and considerably smaller than the DRAM and flash markets, which had combined revenues of more than $50bn in 2012, the report said. In the next five years the scalability and chip density of these memories will be greatly improved, which will spark many new applications, the report says.

ENVM will greatly improve the input/output performance of enterprise storage systems, whose requirements will intensify with the growing need for web-based data supported by cloud servers, the report said. Mobile phones will increase their adoption of PCM as a substitute for NOR flash memory in MCP packages, thanks to 1GB chips made available by Micron in 2012, it added. The next milestone, higher-density chips expected in 2015, will open the technology to the smartphones that are quickly replacing entry-level phones.

Source…

Do Supercomputers Lead To Downtime?

December 3, 2012
Filed under Computing

As supercomputers grow more powerful, they’ll also become more susceptible to failure, thanks to the increased amount of built-in componentry. A few researchers at the recent SC12 conference, held last week in Salt Lake City, offered possible solutions to this growing problem.

Today’s high-performance computing (HPC) systems can have 100,000 nodes or more, with each node built from multiple components of memory, processors, buses and other circuitry. Statistically speaking, all these components will fail at some point, and they halt operations when they do, said David Fiala, a Ph.D. student at North Carolina State University, during a talk at SC12.

The problem is not a new one, of course. When Lawrence Livermore National Laboratory’s 600-node ASCI (Accelerated Strategic Computing Initiative) White supercomputer went online in 2001, it had a mean time between failures (MTBF) of only five hours, thanks in part to component failures. Later tuning efforts improved ASCI White’s MTBF to 55 hours, Fiala said.

But as the number of supercomputer nodes grows, so will the problem. “Something has to be done about this. It will get worse as we move to exascale,” Fiala said, referring to how supercomputers of the next decade are expected to have 10 times the computational power that today’s models do.

Today’s techniques for dealing with system failure may not scale very well, Fiala said. He cited checkpointing, in which a running program is temporarily halted and its state is saved to disk. Should the program then crash, the system is able to restart the job from the last checkpoint.

The problem with checkpointing, according to Fiala, is that as the number of nodes grows, the amount of system overhead needed to do checkpointing grows as well — and grows at an exponential rate. On a 100,000-node supercomputer, for example, only about 35 percent of the activity will be involved in conducting work. The rest will be taken up by checkpointing and — should a system fail — recovery operations, Fiala estimated.
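The checkpoint/restart pattern Fiala describes can be sketched in a few lines. This is a generic illustration (the file name and state layout are made up for the example), not the specific schemes discussed at SC12:

```python
# Minimal checkpoint/restart sketch: periodically persist job state to disk
# so a crashed run can resume from the last checkpoint instead of step 0.
# The file name and state layout here are hypothetical.
import os
import pickle

CHECKPOINT = "job.ckpt"

def save_checkpoint(state):
    """Halt briefly and write the job's state to disk."""
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint():
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0}

state = load_checkpoint()
while state["step"] < 10:
    state["total"] += state["step"]   # the actual work
    state["step"] += 1
    if state["step"] % 5 == 0:        # checkpoint every 5 steps
        save_checkpoint(state)
# If the process dies mid-run, rerunning the script restarts
# from the most recent checkpoint, not from scratch.
```

The overhead Fiala points to comes from the save step: every checkpoint stalls useful work, and on very large machines the saved state must cross the I/O system, so the cost grows with node count.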

Because of all the additional hardware needed for exascale systems, which could be built from a million or more components, system reliability will have to be improved by 100 times in order to keep to the same MTBF that today’s supercomputers enjoy, Fiala said.
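The scaling argument behind that figure is simple: if component failures are independent, system MTBF falls in inverse proportion to component count. A back-of-the-envelope illustration (the component figures are hypothetical, chosen only to match the 50-hour ballpark):

```python
# System MTBF under the standard assumption of independent,
# exponentially distributed component failures. All numbers hypothetical.

def system_mtbf(component_mtbf_hours, n_components):
    """System MTBF shrinks in inverse proportion to component count."""
    return component_mtbf_hours / n_components

# Today: 100,000 components, each rated at 5,000,000 hours.
today = system_mtbf(5_000_000, 100_000)            # 50-hour system MTBF

# Exascale: 100x the components needs 100x the component
# reliability just to hold the same system MTBF.
exascale = system_mtbf(5_000_000 * 100, 100_000 * 100)
```

Under this model, both configurations land at the same system MTBF, which is exactly the 100-fold reliability improvement Fiala describes.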

Fiala presented technology that he and fellow researchers developed that may help improve reliability. The technology addresses the problem of silent data corruption, when systems make undetected errors writing data to disk.

Source…

Intel Preparing New SSDs

August 9, 2012
Filed under Computing

In addition to the recent price drop for its 320, 330 and 520 series SSDs, Intel is preparing a slight refresh scheduled to launch in Q3 and Q4 2012, according to the recently leaked roadmap at Chinese.VR-Zone.com.

The roadmap kicks off with a rather interesting entry-level 300 series, which will apparently get a new 335 series update in Q3 2012. According to the roadmap, the 335 series will initially launch in a 240GB capacity, with 80GB and 180GB models following in Q1 2013. The new 335 series will most likely still be based on the same SF-2281 controller, be available in a 2.5-inch form factor with the SATA 6Gbps interface, and will probably pair tweaked firmware with new 20nm NAND flash memory.

Source…

Artificial Photosynthesis Developed

August 6, 2012
Filed under Around The Net

Panasonic said on Monday it has created a new system for artificial photosynthesis that can remove carbon dioxide from the air almost as well as plants do, as part of the company’s entry into an industry-wide trend toward greener tech.

The company said its system uses nitride semiconductors, which are widely used in LEDs (light-emitting diodes) to convert light to energy, and a metal catalyst to convert carbon dioxide and water to formic acid, which is widely used in dyes, leather production and as a preservative.

Carbon dioxide is a major pollutant and considered to be a main cause of the “greenhouse effect,” which most climate scientists believe causes global warming.

Panasonic has struggled with its traditional electronics business and has made eco-friendly products and practices the key element in its turnaround plan. The company is hoping to leverage its large rechargeable battery and solar businesses, while joining the industry in embracing technologies that are friendlier to the environment. The issue is an important one with customers, as demonstrated by the outcry earlier this month when Apple was forced to rejoin a green standards program after clients complained about its earlier withdrawal.

Panasonic said the system can convert carbon dioxide and water to formic acid with an efficiency of 0.2 percent in laboratory conditions, which is similar to the conversion rate for green plants. The efficiency refers to the portion of the incoming light energy stored in materials produced during the process.

Source…

Super Talent Outs New SSDs

July 27, 2012
Filed under Computing

Super Talent has announced a new line of SATA III SSDs, the Super Talent SuperNova. Aimed at the business market, SuperNova SSDs will be available in 128GB and 256GB capacities.

Although it has not announced any details regarding the new SuperNova lineup in its official press release, Super Talent did note that SuperNova features high transfer speeds and “the most secure encryption” on the planet, as well as the proprietary RAISE technology that virtually eliminates unrecoverable read errors.

After some digging around, we managed to find that the SuperNova is based on the SandForce SF-2200 controller paired with ONFI synchronous MLC NAND chips, which should provide an enterprise level of reliability. Sequential performance is set at 555MB/s for reads and 525MB/s for writes, while random 4K performance is 90K IOPS for reads and 85K IOPS for writes, for both the 128GB and 256GB models.

Source…

U.S. Takes Back Supercomputing Crown

June 27, 2012
Filed under Computing

The U.S. is once again home to the world’s most powerful supercomputer, after being knocked off the top spot by China two years ago and again by Japan last year.

The top computer, an IBM system at the Department of Energy’s Lawrence Livermore National Laboratory, is capable of 16.32 sustained petaflops, according to the Top500 list, a global, twice-yearly ranking released Monday.

This system, named Sequoia, has more than 1.57 million compute cores and relies on architecture and parallelism, and not Moore’s Law, to achieve its speeds.

“We’re at the point where the processors themselves aren’t really getting any faster,” said Michael Papka, Argonne National Laboratory deputy associate director for computing, environment and life sciences.

The Argonne lab installed a similar IBM system, which ranks third on the new Top 500 list. “Moore’s Law is generally slowing down and we’re doing it (getting faster speeds) by parallelism,” Papka said.

U.S. high-performance computing technology dominates the world market. IBM systems claimed five of the top ten spots on the list, and 213 systems out of the 500.

Hewlett-Packard is number two, with 141 systems on the list. Nearly 75% of the systems on the list run Intel processors, and 13% use AMD chips.

Source…

Adata Outs 40MB/s UHS microSD Card

June 7, 2012
Filed under Computing

Adata has launched a 32GB UHS-I microSD card offering 40MB/s write bandwidth.

Adata, which recently has been making a big push in the solid-state disk (SSD) drive market, has announced its first microSD cards that support the UHS-I specification. The firm’s Premier Pro cards come in 8GB, 16GB and 32GB capacities, with the firm citing read bandwidth of 45MB/s and all-important write bandwidth of 40MB/s.

The SD Card Association defined the UHS-I specification as part of its SD Version 3.01 standard, and while Adata’s new cards boast impressive speeds there is a lot of headroom left, with UHS-I supporting bandwidths of up to 104MB/s. Roughly translated to the ‘X’ speed rating used on a number of memory cards, Adata’s cards come out at 266X.
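The 266X figure checks out against the definition of the ‘X’ rating, which is measured in multiples of the original 150KB/s CD-ROM transfer rate. A quick back-of-the-envelope calculation, using decimal megabytes:

```python
# 'X' speed ratings are multiples of the original 150KB/s CD-ROM rate.
BASE_KBPS = 150

def x_rating(mb_per_s):
    """Convert a decimal-MB/s bandwidth to its 'X' speed rating."""
    return int(mb_per_s * 1000 / BASE_KBPS)

# Adata quotes 40MB/s write bandwidth for the Premier Pro cards.
rating = x_rating(40)
```

Note that the rating is derived from the 40MB/s write figure; the 45MB/s read figure would work out to 300X.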

Ray Chu, product manager at Adata said, “These cards have the best read and write performance among all comparable products offered by the industry’s key players. When that is combined with the aggressive pricing options in store for this line, the result is going to be a bonanza for our customers worldwide.”

Source…
