2018-06-29
《Cybersecurity: An Asymmetrical Game of War》
8/28/2017
10:30 AM
To stay ahead of the bad guys, security teams need to think like criminals, leverage AI's ability to find malicious threats, and stop worrying that machine learning will take our jobs.
In the cybersecurity industry, we’ve all heard the old adage, "We have to be right 100 percent of the time. Cybercriminals only have to be right once."
It may be daunting, but it’s the reality in which the cybersecurity industry lives every day. We’re facing an asymmetrical game of war and, unfortunately, we’re up against an army of cybercriminals with a vast arsenal of weapons at their fingertips.
As in other combat arenas, asymmetrical warfare in cyberspace describes a situation where one side only has to invest modestly to achieve gains, while the other side must invest heavily to maintain an adequate defense. In the cybersecurity industry, the authors and promoters of malware and ransomware would be the former, while the security industry and potential victims make up the latter. This lopsided investment of time and resources is what makes this war asymmetrical.
Let’s take the recent WannaCry ransomware attack, for example. It was a simple enough form of malware, yet it took many by surprise. Through a unique combination of stolen technology and propagation, it landed on more than 400,000 machines, all with minimal effort on the part of the perpetrators.
Cybercriminals can afford to be creative and innovative, and to test new attacks. Meanwhile, security teams invest their resources in layered cybersecurity defenses and basics like network segmentation and phishing education.
And here’s a scary thought: what will happen when cybercriminals focus their energies on leveraging artificial intelligence (AI)?
AI in the wrong hands could cause an explosion of network penetrations, data theft, and a spread of computer viruses capable of shutting down devices left and right. It could lead to an AI arms race with unknown consequences. The little clues that hint an email or website isn’t really what it claims to be could be cleaned up by a sufficiently smart AI capability. And that’s scary.
While there is some machine power in polymorphic malware — malware that morphs when it lands on a new machine — this type of malware doesn’t evolve every day. Ransomware took off a few years ago, and it hasn’t changed much since then. We are seeing victims fall prey to the same types of attacks over and over again. Cybercriminals are still able to create complete chaos with their tried-and-true tools.
While we can’t always predict what cybercriminals will try next, some of us are already leveraging AI and machine learning to stay ahead of them. Machine learning is not a silver bullet, but it is fast becoming an important, possibly essential, tool for keeping ahead of, or at least quickly detecting, the latest types of attacks. It can improve security by watching the network on an ongoing basis and amplifying a threat research team’s abilities, creating a whole greater than the sum of its parts. It establishes a baseline that helps you detect anomalous behavior (see the sketch after the three points below). But to stay ahead of the bad guys, the security industry needs to accomplish a few things:
First, we need to think like cybercriminals. Their main motivation is simple — money. They’re constantly thinking about what small action they can take to produce a large outcome, hence the popularity of phishing. They can push out millions of emails with relative ease, send victims to a short-lived site and reap big benefits. Cybercriminals may tweak their approach, for instance, impersonating technology companies instead of financial institutions (as we found in our 2017 Threat Report), but the mechanisms remain the same. If we leverage machine learning to assist in the mundane or routine tasks of tracking and classifying, our creative minds can be free to think like criminals and come up with out-of-the-box solutions to the next attack.
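To make that hand-off of routine classification work concrete, here is a minimal sketch of a phishing triage model using scikit-learn. The tiny corpus, labels, and example message are all hypothetical; a production system would train on millions of messages with far richer features.

```python
# Minimal sketch: train a model to triage obvious phishing lures so
# researchers can spend their time on novel attacks. The example emails
# and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is suspended, verify your password at this link now",
    "Urgent: confirm your banking details to avoid service interruption",
    "Quarterly report attached, let me know if you have questions",
    "Lunch meeting moved to 1 PM tomorrow, same conference room",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF features feeding a linear classifier: crude, but enough to
# automate the tracking-and-classifying grunt work.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(emails, labels)

suspect = ["Please verify your password immediately or lose account access"]
print(pipeline.predict(suspect))        # predicted label, e.g. [1]
print(pipeline.predict_proba(suspect))  # class probabilities for triage
```

The point of a model like this is to buy back analyst time by automating the repetitive sorting, not to replace the researcher’s judgment.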
Second, we need security products that incorporate AI and truly exploit its inherent strengths to find malicious threats. These solutions must combine intelligence from the best threat researchers with models ready to analyze data and find the threats coming into businesses today. They can be generic or vertical-specific. If we can create programs that more companies can leverage, we will get a leg up on cybercrime.
Finally, we need not fear that machine learning will take our jobs. The real threat comes from not utilizing machine learning. Such avoidance forces your best researchers to complete tedious work instead of being creative and innovative, dreaming up ways to anticipate new forms of attack and protect against them. Machine learning provides supplemental help so that threat researchers can work on bigger issues. And because the human touch is essential to monitoring and shaping machine learning models, the result is a net increase in job creation.
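As promised above, here is a minimal sketch of the baseline-and-anomaly-detection idea, using scikit-learn’s IsolationForest on hypothetical per-host network features. The feature set and every number are invented for illustration; real deployments would build baselines from far more telemetry.

```python
# Minimal sketch: learn a baseline of "normal" per-host network activity,
# then flag observations that deviate from it. All numbers are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical hourly features per host:
# [connections_opened, megabytes_out, distinct_destination_ports]
baseline = np.array([
    [120, 35.2, 8],
    [131, 40.1, 9],
    [118, 33.7, 7],
    [125, 37.9, 8],
    [122, 36.0, 8],
    [129, 39.4, 9],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)  # the fitted model is the "baseline" of normal behavior

# New observations: the second row (a burst of outbound traffic across
# many ports, e.g. exfiltration or worm propagation) should stand out.
new_activity = np.array([
    [127, 36.4, 8],
    [940, 812.5, 60],
])
print(model.predict(new_activity))  # 1 = looks normal, -1 = anomalous
```

Against such a baseline, the anomaly becomes a lead for a human analyst rather than a needle lost in the haystack.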
If we want to level the playing field, we need to embrace machine learning to create a more secure world for everyone. While cybercriminals may not widely leverage machine learning today, it’s only a matter of time before they catch up. And when that day comes, security teams everywhere need to be prepared.
Attachments:
《Cybersecurity - An Asymmetrical Game of War》 -- original
《Cybersecurity - An Asymmetrical Game of War》 -- translation.pdf