UPDATED 15:31 EDT / NOVEMBER 01 2016

NEWS

Symantec unveils new ‘endpoint’ security system powered by machine learning

Mountain View-based cyber security firm Symantec Corp., best known for its Norton antivirus suite, has just unveiled its newest security system for “endpoint” devices such as laptops and smartphones.

Symantec Endpoint Protection 14 is a major leap forward in security technology, the company says, thanks to its use of machine learning to fight exploits and coordinated attacks.

According to Symantec, machine learning allows Endpoint Protection to recognize the patterns that could signify an attack and actively work to mitigate those threats in real time. The company says its software has a 99.9 percent efficacy rate with a low number of false positives, although it did not share what exactly that low number is. Symantec also noted that thanks to its increased reliance on the cloud for threat lookups, Endpoint Protection 14 now boasts a 70 percent smaller footprint than its predecessor, making daily definition updates both smaller and faster.

“Symantec Endpoint Protection 14 is a major leap forward in endpoint protection, delivering the latest innovations in endpoint security on a single platform and from a security company you can trust,” Mike Fey, president and chief operating officer at Symantec, said in a statement.

The cyber security arms race

Machine learning and AI may offer a huge boost to computer security, but those same tools could also be used for the opposite purpose: to track down new vulnerabilities and exploit them before they can be plugged up. A common fear when it comes to AI is the idea that a computer could spontaneously become self-aware and declare war on humanity, but a more realistic near-term threat is the possibility that a malicious AI could be created intentionally as a sort of super-powered computer virus.

Earlier this year, researchers at the University of Louisville in Kentucky published a research paper called “Unethical Research: How to Create a Malevolent Artificial Intelligence,” which outlines the conditions and environment that could lead to the creation of a malicious AI, either accidentally or intentionally. The researchers concluded that a lack of oversight over the AI research community is a particularly important risk factor, along with the creation of closed-source AI software that is understood by only a select few.

Of course, these factors are primarily relevant only for large companies conducting AI research such as Google or Facebook, as these high-powered tools are beyond the capabilities of a lone developer — for now.

Image courtesy of Symantec
