Syber Group

Are CCTV Cameras Hackable?

June 28, 2013
Filed under Around The Net

When the surveillance-loving British rolled out CCTV cameras, worried citizens were assured that the cameras could not be hacked.

Now a US security expert says he has identified ways to remotely attack high-end surveillance cameras used by industrial plants, prisons, banks and the military. Craig Heffner said he discovered the previously unreported bugs in digital video surveillance equipment from firms including Cisco, D-Link and TRENDnet.

Heffner said the flaws pose a significant threat: an attacker could access a camera and view its feed, or use the device as a pivot point, an initial foothold, to get into the network and start attacking internal systems.

He will show how to exploit these bugs at the Black Hat hacking conference, which starts on July 31 in Las Vegas. Heffner said he has discovered hundreds of thousands of surveillance cameras that can be accessed via the public internet.

Source

June 27, 2013
Filed under Computing

Popular hardware identification tool CPU-Z has hit the Google Play Store. Like the PC version, the app is completely free and offers tons of information about your device.

It identifies the SoC on board, along with the architecture and clock speed for each core. It also reports the exact device brand, amount of RAM, storage, battery level, status and temperature. It can even tap into the device’s sensor array, which is more of a gimmick than anything useful.

It’s worth noting that the app is still in beta, so there might be some kinks to work out. If you’re eager to give it a go, you can find it here: https://play.google.com/store/apps/details?id=com.cpuid.cpu_z.
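
Tools in the CPU-Z mould typically pull this information from kernel interfaces; on Linux (and Android, which sits on a Linux kernel) the per-core details live in /proc/cpuinfo. A minimal sketch of that kind of parsing, for illustration only — field names vary by architecture, so this is not how the app itself necessarily works:

```python
# Minimal sketch of CPU-Z-style hardware identification on Linux:
# parse /proc/cpuinfo into one record per logical processor.
# Field names differ by architecture (ARM SoCs report "Hardware" and
# "Features" where x86 has "model name"), so this is illustrative.

def parse_cpuinfo(text):
    """Split /proc/cpuinfo-style text into one dict per processor block."""
    cpus = []
    for block in text.strip().split("\n\n"):
        info = {}
        for line in block.splitlines():
            key, sep, value = line.partition(":")
            if sep:
                info[key.strip()] = value.strip()
        if info:
            cpus.append(info)
    return cpus

# On a real system: cpus = parse_cpuinfo(open("/proc/cpuinfo").read())
sample = "processor\t: 0\nmodel name\t: Example SoC\n\nprocessor\t: 1\nmodel name\t: Example SoC\n"
cpus = parse_cpuinfo(sample)
print(len(cpus), "logical processors,", cpus[0]["model name"])
```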

Source

June 26, 2013
Filed under Computing

Intel has announced five Xeon Phi accelerators, including a high density add-in card, while upping memory capacity to 16GB.

Intel has managed to get its Xeon Phi accelerator cards to power the Tianhe-2 cluster to the summit of the Top 500 list, however the firm isn’t waiting around to bring out new products. At the International Supercomputing show, Intel extended its Xeon Phi range with five new products, all of which have more than one TFLOPS double precision floating point performance, and the Xeon Phi 7120P and 7120X cards, which have 16GB of GDDR5 memory.

Intel’s Xeon Phi 7120P and 7120X cards have peak double precision floating point performance of over 1.2 TFLOPS, with 352GB/s bandwidth to the 16GB of GDDR5 memory. The firm also updated its more modest Xeon Phi 3100 series with the 3120P and 3120A cards, both with more than one TFLOPS of double precision floating point performance and 6GB of GDDR5 memory with bandwidth of 240GB/s.
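
Peak double precision figures like these follow from simple arithmetic: cores × clock × FLOPs per cycle. A back-of-envelope sketch — the 61-core, ~1.24GHz, 16-FLOPs-per-cycle figures for the 7120P are commonly cited numbers assumed here, not taken from the article:

```python
# Back-of-envelope peak double precision throughput for a many-core
# accelerator: cores x clock x DP FLOPs per cycle per core.
# Assumed figures for a Xeon Phi 7120P-class part: 61 cores at
# ~1.238GHz with a 512-bit DP FMA unit (16 DP FLOPs/cycle).

def peak_dp_flops(cores, clock_hz, flops_per_cycle):
    return cores * clock_hz * flops_per_cycle

peak = peak_dp_flops(cores=61, clock_hz=1.238e9, flops_per_cycle=16)
print(f"peak: {peak / 1e12:.2f} TFLOPS")  # ~1.21 TFLOPS, consistent with "over 1.2 TFLOPS"
```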

Intel has also brought out the Xeon Phi 5120D, a high density card that uses mini PCI-Express slots. The firm said that the Xeon Phi 5120D card offers double precision floating point performance of more than one TFLOPS and 8GB of GDDR5 memory with bandwidth greater than 300GB/s.

That Intel is concentrating on double precision floating point performance with its Xeon Phi accelerators highlights the firm’s focus on research rather than graphics rendering or workstation tasks. However the firm’s ability to pack 16GB into its Xeon Phi 7100 series cards is arguably the most important development, as larger locally addressable memory means higher resolution simulations.

Intel clearly seems to believe that there is significant money to be made in the high performance PC market, and despite early reservations from industry observers the firm seems to be ramping up its Xeon Phi range at a rate that will start to give rival GPGPU accelerator designer Nvidia cause for concern.

Source

June 25, 2013
Filed under Computing

Nvidia has made its CUDA 5.5 release candidate supporting ARM based processors available for download.

Nvidia has been aggressively pushing its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Now the firm has announced the availability of a CUDA 5.5 release candidate, the first version of the language that supports ARM based processors.

Aside from ARM support, Nvidia has improved Hyper-Q support and now allows developers to prioritise MPI workloads. The firm also touted improved performance analysis and better cross-compilation performance on x86 processors.

Ian Buck, GM of GPU Computing Software at Nvidia said, “Since developers started using CUDA in 2006, successive generations of better, exponentially faster CUDA GPUs have dramatically boosted the performance of applications on x86-based systems. With support for ARM, the new CUDA release gives developers tremendous flexibility to quickly and easily add GPU acceleration to applications on the broadest range of next-generation HPC platforms.”

Nvidia’s support for ARM processors in CUDA 5.5 is an indication that it will release CUDA enabled Tegra processors in the near future. However outside of the firm’s own Tegra processors, CUDA support is largely useless, as almost all other chip designers have chosen OpenCL as the programming language for their GPUs.

Nvidia did not say when it will release CUDA 5.5, but in the meantime the firm’s release candidate supports Windows, Mac OS X and just about every major Linux distribution.

Source

June 24, 2013
Filed under Computing

Intel has been executing its tick-tock strategy flawlessly since January 2006, and now there is some indication that we might see the first slip in eight years come 2014. Intel’s latest roadmap claims that 12 months from now, in Q2 2014, Haswell will be replaced by a “Haswell refresh”.

Haswell is a tock, a new architecture on 22nm, while Broadwell is supposed to keep Haswell’s fundamentals but shrink them to 14nm like a proper “tick”. If the Haswell refresh turns out to be a tweaked 22nm core, it would mean that after seven years of execution and billions invested in cutting edge fabrication processes, Intel has had to slow things down.

It is not certain what would happen to 2015’s Skylake, a new 14nm architecture, or to the 10nm Skymont that is supposed to be its shrink, but if Broadwell gets pushed back by a year there is a big possibility that the whole roadmap slips with it.

When it is ready, the Haswell refresh (possibly a disguised name for Broadwell, ed.) will replace Core i7, Core i5, Core i3, Pentium and Celeron based Haswell chips, some sooner rather than later.

The chipsets for the Haswell refresh are already branded Z97 and H97, replacing the Z87 and H87 boards, which suggests the socket is likely to live on at least through 2014. It will be interesting to see how this develops and whether Broadwell is really delayed or this is just a game of words on Intel’s part.

Source

June 21, 2013
Filed under Around The Net

Microsoft has taken the first step in its integration roadmap for SharePoint and Yammer, allowing Office 365 customers to swap SharePoint Online’s activity stream with Yammer’s.

This first, modest integration point will let SharePoint Online users click on the Yammer link and launch a separate browser window where they’re asked to sign in.

Later this year, Microsoft will deepen the integration with a single sign-on and the addition of Yammer to the main Office 365 interface, which will begin to merge the two products’ user experience.

Next month, Microsoft will release a Yammer application for SharePoint that will let users embed a Yammer group feed into a SharePoint site. The application will work both with SharePoint Online and with SharePoint 2013, the on-premises version of the server.

Also in July, Microsoft will provide instructions for replacing the SharePoint 2013 newsfeed with Yammer’s.

For now, the first integration step is optional, but Microsoft is strongly suggesting that Office 365 customers make the activity stream switch to Yammer.

“Our recommendation is to use Yammer, since it’s our big bet for enterprise social, and we’re committed to making it the underlying social layer for all our products,” wrote Christophe Fiessinger, a Microsoft Office Division product marketing manager, in a blog post.

Customers should also accompany the technical change with an outreach effort to promote the benefits of using the enterprise social networking features of Yammer, according to Fiessinger.

“To drive adoption and really get the value out of Yammer, you need a strategy, advocates, and openness to the way it will transform the way people in your organization work and communicate,” he wrote.

Microsoft bought Yammer for $1.2 billion in mid-2012 in order to boost the development and availability of enterprise social collaboration features in SharePoint and in other Office and Microsoft business software like the Dynamics applications.

Microsoft makes a convincing case for the benefits of integrating Yammer with SharePoint and its other software to provide a common social collaboration layer, but the process is clearly complicated and will take years.

Source

June 20, 2013
Filed under Computing

Hewlett-Packard wants to help organizations rid themselves of useless data, all the information that is no longer necessary, yet still occupies expensive space on storage servers.

The company’s Autonomy unit has released a new module, called Autonomy Legacy Data Cleanup, that can delete data automatically based on the material’s age and other factors, according to Joe Garber, who is the Autonomy vice president of information governance.

Hewlett-Packard announced the new software, along with a number of other updates and new services, at its HP Discover conference, being held this week in Las Vegas.

For this year’s conference, HP will focus on “products, strategies and solutions that allow our customers to take command of their data that has value, and monetize that information,” said Saar Gillai, HP’s senior vice president and general manager for the converged cloud.

The company is pitching Autonomy Legacy Data Cleanup for eliminating no-longer-relevant data in old SharePoint sites and in e-mail repositories. The software requires the new version of Autonomy’s policy engine, ControlPoint 4.0.

HP Autonomy Legacy Data Cleanup evaluates whether to delete a file based on several factors, Garber said. One factor is the age of the material. If an organization has an information governance policy of only keeping data for seven years, for example, the software will delete any data older than seven years. It will root out and delete duplicate data. Some data is not worth saving, such as system files. Those can be deleted as well. It can also consider how much the data is being accessed by employees: Less consulted data is more suitable for deletion.
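
The decision logic Garber describes — an age threshold, duplicate detection, system files, access frequency — amounts to a chain of rules. A hedged sketch of that kind of policy; the function names, keys and thresholds below are invented for illustration and are not Autonomy’s actual API:

```python
# Illustrative sketch of an age/duplicate/access-based cleanup policy,
# along the lines described by Garber. All names and thresholds are
# invented for illustration; this is not Autonomy's API.
import hashlib
from datetime import datetime, timedelta

RETENTION = timedelta(days=7 * 365)          # e.g. a seven-year retention policy
SYSTEM_EXTENSIONS = {".tmp", ".log", ".dll"}  # "not worth saving" file types

def should_delete(doc, seen_hashes, now=None):
    """Return True if a document matches any deletion rule."""
    now = now or datetime.now()
    # Rule 1: older than the retention window.
    if now - doc["modified"] > RETENTION:
        return True
    # Rule 2: exact duplicate of content already kept.
    digest = hashlib.sha256(doc["content"]).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    # Rule 3: system files not worth archiving.
    if any(doc["path"].endswith(ext) for ext in SYSTEM_EXTENSIONS):
        return True
    # Rule 4: rarely consulted material.
    if doc["accesses_last_year"] == 0:
        return True
    return False
```

A real policy engine would of course evaluate these rules against an index rather than raw files, but the ordering shown (cheap metadata checks before content hashing) is the usual shape of such pipelines.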

Administrators can set other controls as well. If used in conjunction with the indexing and categorization capabilities in Autonomy’s Idol data analysis platform, the new software can eliminate clusters of data on a specific topic. “You apply policies to broad swaths of data based on some conceptual analysis you are able to do on the back end,” Garber said.

Source

June 19, 2013
Filed under Uncategorized

Last week Intel officially released Haswell, but there’s still life in good old Ivy Bridge. The chipmaker has announced a range of low-end Ivy parts and even a Sandy Bridge based Celeron.

The Celeron G470 is possibly the last consumer Sandy Bridge we will ever see. It is a single-core 35W part clocked at 2GHz and it’s priced at just $37.

However, the Ivy Bridge parts are a bit more interesting. They include the Celeron 1017, a dual-core, dual-thread chip clocked at 1.6GHz with a TDP of just 17W. It costs $86 and should be a nice part for low-end laptops and nettops. The Celeron 1005M also costs $86, but it has a 35W TDP and a 1.9GHz clock.

There are four new G2000 Pentiums as well. The G2140 and G2030 are 55W parts, clocked at 3.3GHz and 3GHz respectively. The G2120T and G2030T are 35W chips, clocked at 2.7GHz and 2.6GHz, and they cost $75 and $64 respectively. Of course, Pentiums don’t feature Hyper-Threading and all four of them are dual-core parts.

The Core i3 line-up also got some speed bumps. The Core i3-3245 and 3250 are clocked at 3.4GHz and 3.5GHz and both have a TDP of 55W. The 3245 features HD 4000 graphics and costs $134, while the 3250 ends up with HD 2500 graphics and a price tag of $138. Lastly, the Core i3-3250T is a 3GHz part with a 35W TDP; it costs $138, just like its 55W sibling.

Source

June 18, 2013
Filed under Computing

Carl Icahn reportedly is drawing up a shortlist of potential Dell CEO replacements for Michael Dell should his bid for the company be successful.

Icahn and Southeastern Asset Management have made a bid to rival that of Michael Dell and Silver Lake Partners in the high stakes fight over Dell and its board. Now it is being reported that Icahn has already started drawing up a list of candidates that he and Southeastern Asset Management will propose as replacements for Michael Dell as CEO of Dell.

Icahn has previously warned that should his offer for Dell be accepted by the shareholders he would look to not only oust Michael Dell as CEO but replace the firm’s board of directors. Reuters reports that Icahn is casting his net far and wide, including consideration of former HP CEO and current Oracle co-president Mark Hurd.

According to Reuters’ sources Cisco director Michael Capellas, IBM services head Michael Daniels and Oracle’s Hurd are all in the frame, although none of the individuals would confirm having been approached by Icahn.

Michael Dell’s initial plan to buy back the company he founded has met with strong opposition from existing shareholders, some of whom think they are being shortchanged. According to Michael Dell, the firm’s reorganisation into an enterprise IT vendor will be easier if the company goes private and doesn’t face investor and market scrutiny.

So far Dell’s board is backing Michael Dell’s and Silver Lake Partners’ buyout offer, suggesting that Icahn’s offer is short of cash. However some of Dell’s investors might like the drastic action that Icahn is promising, along with the fact that his offer allows existing shareholders to maintain a diluted stake in the company.

Should Icahn manage to get his takeover offer accepted by Dell’s shareholders, it will set up a sensational return to the PC industry for Hurd and give Dell renewed momentum to compete with HP.

Source

June 17, 2013
Filed under Computing

Dell has claimed it will make exascale computing available by 2015, as the firm enters the high performance computing (HPC) market.

Speaking at the firm’s Enterprise Forum in San Jose, Sam Greenblatt, chief architect of Dell’s Enterprise Solutions Group, said the firm will have exascale systems by 2015, ahead of rival vendors. However, he added that development will not be boosted by a doubling in processor performance, saying Moore’s Law is no longer valid and is actually presenting a barrier for vendors.

“It’s not doubling every two years any more, it has flattened out significantly,” he said. According to Greenblatt, the only way firms can achieve exascale computing is through clustering. “We have to design servers that can actually get us to exascale. The only way you can do it is to use a form of clustering, which is getting multiple parallel processes going,” he said.
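
Greenblatt’s clustering point can be put into rough numbers: at the roughly one teraFLOPS per accelerator that parts were delivering at the time, an exaFLOPS (10^18 FLOPS) machine needs on the order of a million of them. A back-of-envelope sketch, with assumed per-node figures:

```python
# Back-of-envelope: how many ~1 TFLOPS accelerators would an exascale
# (1e18 FLOPS) cluster need? The per-node figure is an assumption for
# illustration (roughly one Xeon Phi-class card per node).
import math

EXAFLOPS = 1e18
per_node_flops = 1.2e12  # assumed peak per node

nodes = math.ceil(EXAFLOPS / per_node_flops)
print(f"~{nodes:,} nodes")  # on the order of a million nodes
```

Numbers like these are why clustering, rather than waiting on per-chip gains, is the only plausible route Greenblatt sees to exascale.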

Not only did Greenblatt warn that hardware will have to be packaged differently to reach exascale performance, he said that programmers will also need to change. “This is going to be an area that’s really great, but the problem is you never programmed for this area, you programmed to that old Von Neumann machine.”

According to Greenblatt, shifting of data will also be cut down, a move that he said will lead to network latency being less of a performance issue. “Things are going to change very dramatically, your data is going to get bigger, processing power is going to get bigger and network latency is going to start to diminish, because we can’t move all this [data] through the pipe,” he said.

Greenblatt’s reference to data being closer to the processor is a nod to the increasing volume of data that is being handled. While HPC networking firms such as Mellanox and Emulex are increasing bandwidths on their respective switch gear, bandwidth increases are being outpaced by the growth in the size of datasets used by firms deploying analytics workloads or academic research.
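
The mismatch is easy to see with rough numbers — the 56Gb/s figure below is FDR InfiniBand’s line rate, and the one-petabyte dataset size is an assumption for illustration:

```python
# Back-of-envelope: time to move a dataset over a single network link.
# 56Gb/s is FDR InfiniBand's line rate; the 1PB dataset is an assumed
# size for illustration. Ignores protocol overhead and parallel links.

def transfer_seconds(num_bytes, bits_per_second):
    return num_bytes * 8 / bits_per_second

petabyte = 1e15
seconds = transfer_seconds(petabyte, 56e9)
print(f"{seconds / 3600:.0f} hours")  # ~40 hours for 1PB over one link
```

Even granting many parallel links, dataset growth outrunning bandwidth growth is exactly why keeping data close to the processor matters.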

That Dell is projecting 2015 for the arrival of exascale clusters is at least a few years sooner than firms such as Intel, Cray and HP, all of which have put a “by 2020” timeframe on the challenge. However what Greenblatt did not mention is the projected power efficiency of Dell’s 2015 exascale cluster, something that will be critical to its usability.

Source
