
MIT Researchers Triple Wireless Speeds

August 29, 2016 by  
Filed under Around The Net

Comments Off on MIT Researchers Triple Wireless Speeds

MIT researchers have uncovered a way to transfer wireless data to a smartphone about three times faster, and over roughly twice the distance, than existing technology allows.

The researchers developed a technique to coordinate multiple wireless transmitters by synchronizing their wave phases, according to a statement from MIT on Tuesday. Multiple independent transmitters will be able to send data over the same wireless channel to multiple independent receivers without interfering with each other.
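
The core idea is that transmitters whose carrier phases stay locked together add up constructively at the intended receiver instead of canceling each other out. The sketch below is not MegaMIMO itself, only a minimal illustration of that phase-alignment effect; the phase values are arbitrary.

```python
# Toy illustration (not MIT's MegaMIMO implementation) of why phase
# synchronization matters: signals from several transmitters only add
# constructively at a receiver when their carrier phases line up.
import cmath
import math

def received_power(phases_rad):
    """Power at the receiver from unit-amplitude transmitters with the given carrier phases."""
    combined = sum(cmath.exp(1j * p) for p in phases_rad)
    return abs(combined) ** 2

aligned = received_power([0.0, 0.0])         # two phase-synchronized transmitters
misaligned = received_power([0.0, math.pi])  # the same two transmitters, 180 degrees apart

print(f"aligned:    {aligned:.1f}x single-transmitter power")    # 4.0x
print(f"misaligned: {misaligned:.1f}x single-transmitter power")  # ~0.0x
```

With two synchronized unit-power transmitters the receiver sees four times the power of a single transmitter, while the same two transmitters 180 degrees out of phase cancel almost completely.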

Since wireless spectrum is scarce, and network congestion is only expected to grow, the technology could have important implications.

The researchers called the approach MegaMIMO 2.0 (Multiple Input, Multiple Output).

For their experiments, the researchers set up four laptops in a conference-room setting, with signals roaming over 802.11a/g/n Wi-Fi. The speed and distance improvements are also expected to apply to cellular networks. A video describes the technology, and a technical paper (registration required) was presented this week at the Association for Computing Machinery’s Special Interest Group on Data Communication conference (SIGCOMM 16).

The researchers, from MIT’s Computer Science and Artificial Intelligence Lab, are: Ezzeldin Hamed, Hariharan Rahul, Mohammed Abdelghany and Dina Katabi.

Courtesy-http://www.thegurureview.net/mobile-category/mit-researchers-develop-technique-to-triple-wireless-speeds.html


PC Monitors Vulnerable To Hacking

August 12, 2016 by  
Filed under Security

Comments Off on PC Monitors Vulnerable To Hacking

You should probably be leery of what you see since, apparently, your computer monitor can be hacked.

Researchers at DEF CON presented a way to manipulate the tiny pixels found on a computer display.

Ang Cui and Jatin Kataria of Red Balloon Security were curious how Dell monitors worked and ended up reverse-engineering one.

They picked apart a Dell U2410 monitor and found that the display controller inside can be used to change and log the pixels across the screen.

During their DEF CON presentation, they showed how the hacked monitor could seemingly alter the details on a web page. In one example, they changed a PayPal account balance from $0 to $1 million, when in reality the pixels on the monitor had simply been reconfigured.

It wasn’t exactly an easy hack to pull off. To discover the vulnerability, Cui and Kataria spent two years of their spare time researching and picking apart the technology inside the Dell monitor.

However, they also looked at monitors from other brands, including Samsung, Acer and Hewlett-Packard, and found that it is theoretically possible to hack them in the same manner as well.

The key problem lies in the monitors’ firmware, or the software embedded inside. “There’s no security in the way they update their firmware, and it’s very open,” said Cui, who is also CEO of Red Balloon.

The exploit requires gaining access to the monitor itself, through the HDMI or USB port. Once done, the hack could potentially open the door for other malicious attacks, including ransomware.

For instance, cyber criminals could emblazon a permanent message on the display, and ask for payment to remove it, Kataria said. Or they could even spy on users’ monitors, by logging the pixels generated.

However, the two researchers said they made their presentation to raise awareness about computer monitor security. They’ve posted the code to their research online.

“Is monitor security important? I think it is,” Cui said.

Dell couldn’t be reached for immediate comment.

Source- http://www.thegurureview.net/computing-category/computer-monitors-are-also-vulnerable-to-hacking.html

China Keeps Supercomputing Title

July 24, 2015 by  
Filed under Computing

Comments Off on China Keeps Supercomputing Title

A supercomputer developed by China’s National Defense University is still the fastest publicly known computer in the world, while the U.S. is close to a historic low in the latest edition of the closely followed Top 500 supercomputer ranking, which was just published.

The Tianhe-2 computer, based at the National Super Computer Center in Guangzhou, has been at the top of the list for more than two years, and its maximum achieved performance of 33,863 teraflops is almost double that of the U.S. Department of Energy’s Cray Titan supercomputer, which is at the Oak Ridge National Laboratory in Tennessee.

The IBM Sequoia computer at the Lawrence Livermore National Laboratory in California is the third fastest machine, and fourth on the list is the Fujitsu K computer at Japan’s Advanced Institute for Computational Science. The only new machine to enter the top 10 is the Shaheen II computer of King Abdullah University of Science and Technology in Saudi Arabia, which is ranked seventh.

The Top 500 list, published twice a year to coincide with supercomputer conferences, is closely watched as an indicator of the status of development and investment in high-performance computing around the world. It also provides insights into what technologies are popular among organizations building these machines, but participation is voluntary. It’s quite possible a number of secret supercomputers exist that are not counted in the list.

With 231 machines in the Top 500 list, the U.S. remains the top country in terms of the number of supercomputers, but that’s close to its all-time low of 226, hit in mid-2002. That was right about the time China began appearing on the list. China rose to claim 76 machines this time last year, but the latest count puts it at 37 computers.

The Top 500 list is compiled by supercomputing experts at the University of Mannheim, Germany; the University of Tennessee, Knoxville; and the Department of Energy’s Lawrence Berkeley National Laboratory.

Source

Slack Acquires Screen Hero

February 11, 2015 by  
Filed under Around The Net

Comments Off on Slack Acquires Screen Hero

Slack, the IRC-for-businesses company, has acquired screen-sharing collaboration startup Screenhero with an eye toward adding valuable new communications capabilities to its software.

The deal, which was for an undisclosed sum of cash and stock, sees Screenhero’s six-person team joining Slack to add screen sharing, video chat and voice conferencing to the company’s core enterprise chat room service.

Screenhero is designed to let big teams work together like small teams, and it has found a dedicated customer base among developers, help desk workers and anybody else who needs to collaborate closely.

That’s a smart alignment with Slack’s own sales pitch. In fact, Screenhero CEO and co-founder Jahanzeb Sherwani said that 50% of Screenhero’s own customers are also Slack customers, even as both companies made use of each other’s products internally. He added that the company was “under no pressure to sell,” but decided that cozying up with Slack would allow Screenhero to do more with its core concept faster.

It sounds like a match made “in a Reese’s factory,” quipped Slack CEO and co-founder Stewart Butterfield.

Under this deal, Screenhero will continue to operate as a separate entity, and people can use it as they always have. But eventually, Sherwani said, all of its features will make it into Slack and the standalone product will be discontinued.

Butterfield said that it’s just a natural progression for Slack as it goes after “bigger and weirder” companies. You can still use whatever external services you’d like for video, voice and screen sharing, in keeping with Slack’s emphasis on supporting as many services as a customer might want to use through slick native integrations. But Butterfield wants to ensure that, out of the box, Slack customers get something broadly useful for collaboration without having to go through that effort.

Source

Samsung’s Eight-core Chip Goes Hacking

August 13, 2013 by  
Filed under Computing

Comments Off on Samsung’s Eight-core Chip Goes Hacking

A Samsung eight-core chip used in some Galaxy S4 mobile devices is now available for hackers to play with on a developer board from South Korea-based Hardkernel.

Hardkernel’s Odroid XU board incorporates Samsung’s eight-core Exynos 5 Octa 5410 chip, which is based on ARM’s latest processor designs. Samsung recently announced a new eight-core chip, the Exynos 5 Octa 5420, which packs faster graphics and application processing than the 5410. The 5420 has not shipped yet, however.

The Odroid board is priced at $149 through Aug. 31, after which it will be offered for $169. Samsung for many months has said that a board with an eight-core chip would be released, and has shown prototype developer boards at conferences.

Odroid-XU will provide developers an opportunity to write programs tuned for Samsung’s octa-core chip, which has been a source of controversy. Analysts have said the eight-core design is overkill for small devices like smartphones and tablets, which need long battery life.

The eight-core chip design also takes up a lot of space, which prevented Samsung from putting LTE radios inside some Galaxy S4 models. Qualcomm, which hesitantly moved from the dual-core to the quad-core design on its Snapdragon chips, on Friday criticized eight-core chips, calling the idea “dumb.”

Despite the criticism, the board will give developers a first true glimpse of, and an opportunity to write and test applications for, ARM’s big.LITTLE design. The design combines high-power cores for demanding applications with low-power cores for mundane tasks like texting and calling.

Samsung’s iteration of big.LITTLE in the Exynos 5 Octa 5410 chip combines four cores based on ARM’s latest Cortex-A15 processor design with four low-power Cortex-A7 cores. The Cortex-A15 is ARM’s latest processor design and succeeds the previous Cortex-A9 core, which was used in popular smartphones like Apple’s iPhone and the Galaxy S3. Samsung said the eight-core chip provides a balance of power and performance, with the high-power cores kicking in only when necessary.
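
The split described above lends itself to a simple threshold-style scheduling sketch: light tasks land on the low-power cluster, and the high-power cluster is used only when a task's demand crosses a cutoff. The snippet below is a conceptual toy, not ARM's or Samsung's actual core-switching logic; the core names and the threshold value are assumptions for illustration.

```python
# Conceptual sketch of big.LITTLE-style task placement, NOT ARM's or
# Samsung's real switching logic: light tasks run on low-power
# Cortex-A7-class cores, and the high-power Cortex-A15-class cores are
# used only when a task's estimated load crosses an arbitrary threshold.

LITTLE_CORES = ["A7-0", "A7-1", "A7-2", "A7-3"]
BIG_CORES = ["A15-0", "A15-1", "A15-2", "A15-3"]
HEAVY_LOAD_THRESHOLD = 0.6  # assumed fraction of a LITTLE core's capacity

def place_task(name: str, load: float) -> str:
    """Pick a core for a task based on its estimated load."""
    cluster = BIG_CORES if load > HEAVY_LOAD_THRESHOLD else LITTLE_CORES
    return cluster[hash(name) % len(cluster)]  # naive spread within the cluster

for task, load in [("sms", 0.05), ("voice-call", 0.20), ("3d-game", 0.90)]:
    print(f"{task:10s} -> {place_task(task, load)}")
```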

The board has an Imagination Technologies PowerVR SGX544MP3 graphics processor, 2GB of low-power DDR3 DRAM, two USB 3.0 ports and four USB 2.0 ports. Other features include Wi-Fi, Ethernet and optional Bluetooth. Google’s Android 4.2 operating system is preloaded, and support for other Linux distributions like Ubuntu is expected soon. The board has already been benchmarked on Ubuntu 13.04.

Source

Will Arm/Atom CPUs Replace Xeon/Opteron?

June 7, 2013 by  
Filed under Computing

Comments Off on Will Arm/Atom CPUs Replace Xeon/Opteron?

Analysts are saying that smartphone chips could one day replace the Xeon and Opteron processors used in most of the world’s top supercomputers. In a paper titled “Are mobile processors ready for HPC?” researchers at the Barcelona Supercomputing Center wrote that less expensive chips have a history of bumping out faster but higher-priced processors in high-performance systems.

In 1993, the list of the world’s fastest supercomputers, known as the Top500, was dominated by systems based on vector processors. They were nudged out by less expensive RISC processors. RISC chips were eventually replaced by cheaper commodity processors like Intel’s Xeon and AMD’s Opteron, and now mobile chips are likely to take over.

The transitions had a common thread, the researchers wrote: microprocessors killed the vector supercomputers because they were “significantly cheaper and greener.” At the moment, low-power chips based on ARM designs fit the bill, but Intel is likely to catch up, so this is not likely to mean the death of x86.

The report compared Samsung’s 1.7GHz dual-core Exynos 5250, Nvidia’s 1.3GHz quad-core Tegra 3 and Intel’s 2.4GHz quad-core Core i7-2760QM, a laptop chip rather than a server chip. The researchers said they found that the ARM processors were more power-efficient on single-core performance than the Intel processor, and that ARM chips can scale effectively in HPC environments. On a multi-core basis, the ARM chips were as efficient as Intel x86 chips at the same clock frequency, but Intel was more efficient at the highest performance levels, the researchers said.
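
The efficiency comparison ultimately comes down to energy-to-solution (power draw multiplied by runtime) rather than raw speed. The sketch below illustrates that metric with made-up placeholder numbers, purely to show how a slower, lower-power chip can still finish a job on less energy; these figures are not the paper's measurements.

```python
# Energy-to-solution comparison using PLACEHOLDER numbers (not the paper's
# measurements): a slower but lower-power chip can still use less energy
# to finish the same workload.
def energy_to_solution(power_watts: float, runtime_seconds: float) -> float:
    """Joules consumed to complete the workload."""
    return power_watts * runtime_seconds

hypothetical_chips = {
    "mobile-arm-class": {"power_watts": 4.0, "runtime_seconds": 300.0},
    "laptop-x86-class": {"power_watts": 45.0, "runtime_seconds": 60.0},
}

for name, spec in hypothetical_chips.items():
    print(f"{name}: {energy_to_solution(**spec):,.0f} J")
# mobile-arm-class: 1,200 J    laptop-x86-class: 2,700 J
```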

Source

Do Supercomputers Lead To Downtime?

December 3, 2012 by  
Filed under Computing

Comments Off on Do Supercomputers Lead To Downtime?

As supercomputers grow more powerful, they’ll also become more susceptible to failure, thanks to the increased amount of built-in componentry. A few researchers at the recent SC12 conference, held last week in Salt Lake City, offered possible solutions to this growing problem.

Today’s high-performance computing (HPC) systems can have 100,000 nodes or more, with each node built from multiple components: memory, processors, buses and other circuitry. Statistically speaking, all of these components will fail at some point, and they halt operations when they do, said David Fiala, a Ph.D. student at North Carolina State University, during a talk at SC12.

The problem is not a new one, of course. When Lawrence Livermore National Laboratory’s 600-node ASCI (Accelerated Strategic Computing Initiative) White supercomputer went online in 2001, it had a mean time between failures (MTBF) of only five hours, thanks in part to component failures. Later tuning efforts had improved ASCI White’s MTBF to 55 hours, Fiala said.

But as the number of supercomputer nodes grows, so will the problem. “Something has to be done about this. It will get worse as we move to exascale,” Fiala said, referring to how supercomputers of the next decade are expected to have 10 times the computational power that today’s models do.

Today’s techniques for dealing with system failure may not scale very well, Fiala said. He cited checkpointing, in which a running program is temporarily halted and its state is saved to disk. Should the program then crash, the system is able to restart the job from the last checkpoint.
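
In its simplest form, checkpoint/restart looks like the sketch below: the job periodically persists its state to disk, and after a crash it resumes from the most recent checkpoint instead of starting over. This is a minimal single-process illustration, not the coordinated, cluster-wide checkpointing Fiala describes; the file name and interval are arbitrary assumptions.

```python
# Minimal single-process checkpoint/restart sketch (illustrative only;
# real HPC checkpointing coordinates state across thousands of nodes).
import os
import pickle

CHECKPOINT_FILE = "job.ckpt"   # assumed path, for illustration
CHECKPOINT_INTERVAL = 1000     # iterations between checkpoints (arbitrary)

def load_checkpoint():
    """Resume from the last saved state, or start fresh if none exists."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE, "rb") as f:
            return pickle.load(f)
    return {"iteration": 0, "result": 0.0}

def save_checkpoint(state):
    """Briefly halt and persist the running state to disk."""
    with open(CHECKPOINT_FILE, "wb") as f:
        pickle.dump(state, f)

state = load_checkpoint()
for i in range(state["iteration"], 1_000_000):
    state["result"] += i * 1e-6        # stand-in for real computation
    state["iteration"] = i + 1
    if state["iteration"] % CHECKPOINT_INTERVAL == 0:
        save_checkpoint(state)         # this overhead grows with state size and node count
```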

The problem with checkpointing, according to Fiala, is that as the number of nodes grows, the amount of system overhead needed to do checkpointing grows as well — and grows at an exponential rate. On a 100,000-node supercomputer, for example, only about 35 percent of the activity will be involved in conducting work. The rest will be taken up by checkpointing and — should a system fail — recovery operations, Fiala estimated.

Because of all the additional hardware needed for exascale systems, which could be built from a million or more components, system reliability will have to be improved by 100 times in order to keep to the same MTBF that today’s supercomputers enjoy, Fiala said.

Fiala presented technology that he and fellow researchers developed to help improve reliability. The technology addresses the problem of silent data corruption, in which systems make undetected errors when writing data to disk.

Source…

Chase Building 1/2 Billion Dollar Data Center

August 24, 2012 by  
Filed under Around The Net

Comments Off on Chase Building 1/2 Billion Dollar Data Center

An enthusiastic backer of Enron and serial overcharger of mortgage payers, JPMorgan Chase has just splashed out on a new $500 million data center.

CEO Jamie Dimon announced the move, which practically everyone in the IT industry finds a bit strange. While Chase is the US’s largest bank, the new facilities are a little big by anyone’s standard. It is about the same amount of money that Google and Microsoft spend on their largest data centres for their cloud networks.

Dimon cited the figure as one of the advantages of being big: the bank can afford to invest cash in this way. Size lets Chase build a $500 million data centre that speeds up transactions and invest billions of dollars in products like ATMs and apps that let your iPhone deposit cheques, he enthused.

JPMorgan Chase operates two large data centres in Delaware and a 400,000-square-foot facility. It also acquired data centres in its deals for distressed rivals Bear Stearns and Washington Mutual in the early days of the 2008 financial crisis. So why it needs a huge new one is anyone’s guess.

Source…

Good Technology Updates Security

July 25, 2012 by  
Filed under Uncategorized

Comments Off on Good Technology Updates Security

Good Technology today announced two updates to its mobile security software products across iOS, Android and Windows Phone devices.

Powering mobile security for major enterprises such as Barclays, Sainsbury’s and LOCOG, Good Technology claims the releases are the first of their kind for the industry and address security threats linked to the bring-your-own-device (BYOD) policies in use at most big companies.

The first update announced by the firm is the addition of what it calls AppKinetics to its Good Dynamics line, which aims to solve the problem of private corporate data leakage.

“Good’s patented AppKinetics technology builds on the company’s proven ‘containerization’ security model to enable business apps from Good, its Good Dynamics partner independent software vendors (ISVs) and internal enterprise developers to securely exchange information within and between applications, and to create seamless multi-app workflows without compromising security or employees’ privacy and personal experience,” the firm said in a statement.

The firm’s second update is the addition of eight new partnered apps to its Good Dynamics ecosystem covering the areas of business intelligence, collaboration, document editing, document printing, file storage/content management, remote desktop management and mobile application development platforms (MADPs).

This update allows developers to integrate the Good Dynamics technology into apps so that companies can create secure end-to-end workflows of protected, mobile applications to drive business processes.

Good Technology’s EMEA GM Andy Jacques explained, “If you download the standard consumer document editing application you can copy and paste from that app into another app.”

He continued, “If you were to open a piece of corporate mission critical data you can copy and paste that and put it onto Hotmail for example.”

Source…

Cisco Lends A Hand In Fighting Fraud

May 15, 2012 by  
Filed under Computing

Comments Off on Cisco Lends A Hand In Fighting Fraud

Cisco this week released an API at the Interop 2012 conference for its branch routers, designed to enable third-party developers to write applications that beef up the security of phone calls over the router network.

The Cisco UC Gateway Services API is a Web-based programming interface that gives customers and developers access to call information, such as signaling and media, on a Cisco ISR G2 router at the edge of a voice network. This information can be used to detect and help prevent malicious activity such as social engineering and identity theft scams, contact center account takeover fraud, unauthorized network and service use, and denial-of-service attacks.

Applications written to the API can then apply appropriate action to terminate, redirect or record the call.
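
As a rough illustration of the kind of application the article describes, the sketch below screens call events and picks one of those actions. The event fields, function names and rules are hypothetical inventions for illustration; they are not the actual Cisco UC Gateway Services API.

```python
# Hypothetical call-screening logic of the kind the article describes.
# The event fields, names and rules below are illustrative inventions,
# NOT the real Cisco UC Gateway Services API.
from dataclasses import dataclass

@dataclass
class CallEvent:
    caller: str
    callee: str
    calls_in_last_minute: int    # simple rate signal derived from signaling data
    flagged_fraud_number: bool   # e.g. caller matched against a fraud blacklist

def decide_action(event: CallEvent) -> str:
    """Return 'terminate', 'redirect', 'record' or 'allow' for a call event."""
    if event.flagged_fraud_number:
        return "terminate"              # known-bad caller: drop the call
    if event.calls_in_last_minute > 100:
        return "redirect"               # possible flooding / DoS: divert to an IVR
    if event.callee.startswith("+1800"):
        return "record"                 # contact-center line: keep an audit trail
    return "allow"

print(decide_action(CallEvent("+15551234", "+18005550100", 3, False)))  # record
```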

Cisco, citing data from the Communications Fraud Control Association, says global telecom fraud losses are estimated to be $40 billion annually.

Source…
