
  • OPTIMIZING HUMANS + MACHINES FOR

    SECURITY TESTING AT SCALE

  • Trust is Everything.

    Delivering comprehensive penetration

    testing with actionable results.

    Securing continuously with the world’s most

    skilled ethical hackers and AI technology.

    We are Synack, the most trusted Crowdsourced Security Platform.

  • Table of Contents

    Optimizing Humans + Machines for Security

    Recent Challenges of Security Testing

    Machines—Scalable but Inefficient

    Developing Machines with Human Creativity

    Augmenting Human Strengths with Machines

    Why It’s Time For Humans + Machines

    Opportunities for Leveraging AI in 2020

    About Synack

  • SECURITY TESTING AT SCALE • SYNACK.COM

    Optimizing Humans + Machines for Security

    Gone are the days of check-the-box-style security

    involving perfunctory testing right before a product

    or code release. The increasingly rapid pace of

    technological advancement—spurred by digital

    migration, connected Internet of Things (IoT) devices,

    new Agile and DevOps methodologies, and new

    cloud infrastructure—broadens attack surfaces and

    increases the risk of new vulnerabilities every day. With

    conventional security approaches, security testing

    cycles are limited by resources and capabilities,

    confined to discrete periods or delivering inadequate coverage. Traditional

    security team engagements typically last several weeks

    or a few months, and any changes and vulnerabilities

    that occur within that window are usually picked

    up. However, once the engagement is completed,

    organizations face an increased risk of vulnerabilities.

    Adversaries and organized crime cells typically

    carry out their work over very long timeframes. Their

    “business” model often counts on big paydays resulting

    from years of probing and then exploiting weaknesses.

    More than ever before, organizations need to

    proactively and continuously search for, identify, and

    fix vulnerabilities before nation-state adversaries or

    cybercriminals can find them. However, to date, security

    teams have had to choose between effectiveness

    and efficiency. On the one hand, human penetration

    testers can be thorough and rigorous, but they are

    finite. Current estimates of the cyber talent gap predict

    3.5 million unfilled cyber positions by 2021.¹ On the

    other hand, vulnerability scanners can scale, but they

    cannot perform higher order thinking. More than 76

    percent of the vulnerabilities that security researchers

    can find could not be found by a traditional vulnerability

    scanner.² Clearly, security teams need a more effective

    and efficient approach—an integrated solution that

    does not force them to sacrifice quality for quantity or

    vice versa.

    To keep up with increasingly complex and ever-evolving

    attack surfaces, organizations need a way to scale

    their security testing. Crowdsourced security, where

    human security researchers are augmented with artificial intelligence (AI)-enabled machine scanning, is

    a promising new approach for bringing about exactly

    this kind of continuous security, at the scale needed in

    today’s digital ecosystem.

    Gartner estimates in a recent report that over 60

    percent of enterprises will use crowdsourcing to test

    their applications by 2022.³ For additional efficiency,

    enterprises are augmenting crowdsourcing with smart

    technology. Crowdsourcing and automation adoption is

    increasing quickly—and it’s no surprise given the rapid

    pace of development.

    The optimal combination of human intelligence and

    machine intelligence delivers continuous security

    testing at scale. Machines increase the effectiveness

    of humans by scanning large attack surfaces and

    producing vulnerability data to optimize human-

    driven vulnerability testing so humans can focus and

    prioritize their efforts. Researchers, in turn,

    provide valuable assistance to machine scanners to

    scan deeper into applications. While machine learning

    and AI greatly advance the effectiveness of security

    researchers, neither machines nor humans are as

    effective on their own as they are together, and neither

    can fully displace the other as the threat landscape

    becomes more complex.

    ¹ Steve Morgan, “Cybersecurity Jobs Report 2018–2021,” Cybercrime Magazine, May 31, 2017. https://cybersecurityventures.com/jobs/

    ² Synack proprietary data.

    ³ Gartner, “Competitive Landscape for Application Testing Services,” September 2019. https://www.gartner.com/document/3969692


    In this white paper we delve into these issues: AI will

    never be able to replace humans in security testing—

    but where exactly is the ceiling on machine capabilities

    in security testing? What are the technological factors

    that prevent machines from carrying out certain

    seemingly simple processes? Where are humans’ blind

    spots and biases? How can machines help humans

    become more efficient in their hunt for vulnerabilities?

    ⁴ “Many businesses are just beginning their digital transformations.” CIO. https://www.cio.com/article/3192152/many-businesses-are-just-beginning-their-digital-transformation-journeys.html

    Recent Challenges of Security Testing Give Rise to Humans + Machines

    Digital environments today are dynamic. However,

    much of the cybersecurity infrastructure within

    organizations was originally created for a largely

    static, wired, and controlled client-server environment.

    Firewalls, data loss prevention (DLP), identity and

    access management, and similar legacy technologies

    were created to keep adversaries out, control access

    to data, and protect sensitive information within

    the organization. While these security priorities all

    remain relevant objectives, there are many additional

    security considerations, given how much of our digital

    infrastructure has changed.

    Internet of Things

    To date, the effects of growth

    in IoT devices and systems are

    largely being felt within the retail, consumer

    products, healthcare, transportation, and shipping

    sectors, as well as facilities management functions.

    Regardless of the industry in question, however,

    the increasing number of devices and endpoints

    driven by IoT—from digital kiosks, RFID readers,

    sensors, and beacons to lighting arrays, HVAC

    systems, and scanners—is vastly complicating

    security scanning and penetration testing.

    Digital Migration

    A recent Fujitsu survey⁴ of 1,600+

    business leaders found that 89

    percent of organizations have started digital

    transformation initiatives, with 34 percent

    of the initiatives having already delivered

    business outcomes. These large organizations

    are facing cybersecurity challenges, such as

    broadening and shifting attack surfaces, as

    they adopt unfamiliar new systems and technologies

    such as off-premises cloud infrastructure. A steady

    stream of vulnerabilities requires systems to be

    patched or upgraded. Falling behind creates openings

    where hackers can exploit a known vulnerability in

    these new systems.


    Cloud Infrastructure

    New risks and challenges arise as

    enterprises adopt a cloud-native or

    hybrid cloud/on-premises model. Infrastructure-

    as-a-service and infrastructure as code allow

    responsibility for provisioning and deploying

    software services to spread beyond traditional

    operations or infrastructure teams, directly into

    development teams. Without organizational

    controls, Shadow IT can extend into the

    cloud, bringing additional security risks to the

    enterprise. Cloud resources may be dynamic and

    ephemeral, auto-scaling to match demand or living

    only long enough to complete a business

    operation. Additionally, misconfiguration of access

    control policies can expose critical data or open

    pathways for attackers to gain persistent access.

    Agile & DevSecOps Methodologies

    Two software product development

    methodologies, now prominent across

    multiple industries, share many attributes that

    raise security implications. Both involve shorter

    development cycles, with code releases being

    pushed out on a daily or hourly basis. With

    these shorter timelines, security and IT teams

    are leaving more of the testing and remediation

    work to developers, who are taking ownership

    of processes that were once exclusively the

    domain of IT operations team members. Finally,

    adoption of self-service tools has resulted

    in increased provisioning and managing of

    computing resources in the cloud.

    In light of these new challenges, many organizations have adopted machine scanning to establish

    a baseline in security testing. However, most scanners are too rudimentary to achieve the level of

    security testing we need today.


    Machines—Scalable but Inefficient

    Machine scanning is methodical. Scanners can quickly

    scale and continuously search for and discover

    vulnerabilities across vast attack surfaces. At the same

    time, machines are inefficient. The signal-to-noise ratio

    of many scanning products can be intolerably low,

    turning up false positives at such a rate that security

    teams often do not know where to start remediation.

    Additionally, scanners are notoriously incapable of

    carrying out tasks that are simple and intuitive for a

    human, such as web form submissions and multifactor

    authentication. So, the question becomes: How do we

    improve the technology so that automated scanners

    move beyond a blunt instrument status to become truly

    smart machines?

    The initial impulse often is to make automated scanners

    act more like humans. It’s a reasonable starting

    point to make scanners smarter, but to do so, one

    has to first understand the limitations of scanning

    capabilities. Which parts of scanning can be done

    effectively in a fully autonomous manner, and which

    parts require the scanner to ask a human for help? How

    much human behavior should the scanner emulate?

    In general, industry-standard web scanners like Burp

    Suite, Arachni, and OWASP ZAP require some amount

    of human configuration and effort along the way. Let’s

    consider some of these limitations.

    The Limitations of a Scanner

    Machines struggle with complex authentication. This is

    an acute flaw since identity and access management

    (IAM)—the software governing web access, service

    authorization and authentication—is built into virtually

    every online service and device today. Multiple trends

    within the IAM space are likely to further complicate the

    picture for machine scanning in the years ahead. First,

    two-step and multi-factor authentication, which were

    not widespread outside of security industry circles a

    decade ago, are now commonplace, or even standard

    for many user scenarios and devices. Second, the range

    of factors or mechanisms employed in authentication

    today is vast and growing. Years ago, the typical user

    gained access through passwords primarily, or maybe

    with a token fob that generated random authentication

    codes. Today, while passwords are still ever-present,

    authentication is increasingly accomplished through

    text messaging, push notifications, biometrics

    (thumbprint, iris scan, face recognition), or passive

    mechanisms including geolocation, body movement, IP

    address, or time of day. It’s not just that machines have

    difficulty finding the fields to inject credentials—long

    a shortcoming with web scanners. It’s that complex,

    multi-factor authentication today usually requires a

    human presence.
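    When this flow is automated, the practical pattern is to let the machine do what it can and hand the second factor to a person. Below is a minimal, self-contained sketch of that human-in-the-loop handoff; the `AuthFlow` model and `ask_human` callback are invented for illustration and do not correspond to any real scanner's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AuthFlow:
    """Simulated two-step login: password first, then a one-time code."""
    password: str
    otp: str
    state: str = "start"

    def submit_password(self, guess: str) -> str:
        if self.state == "start" and guess == self.password:
            self.state = "awaiting_otp"  # second factor now required
        return self.state

    def submit_otp(self, code: str) -> str:
        if self.state == "awaiting_otp" and code == self.otp:
            self.state = "authenticated"
        return self.state

def automated_login(flow: AuthFlow, password: str,
                    ask_human: Callable[[str], str]) -> bool:
    """The scanner handles the password itself but defers the
    out-of-band factor (SMS code, push approval) to a human."""
    if flow.submit_password(password) != "awaiting_otp":
        return False
    code = ask_human("Enter the one-time code sent to the test device: ")
    return flow.submit_otp(code) == "authenticated"

# Demo: the "human" callback supplies the code the machine cannot obtain.
flow = AuthFlow(password="hunter2", otp="492817")
ok = automated_login(flow, "hunter2", ask_human=lambda prompt: "492817")
print(ok)
```

    The point of the sketch is the division of labor: everything except the `ask_human` call can run unattended at scale.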

    Creativity & Context

    Closely related to the challenges with authentication,

    traditional machine scanners are weaker when it

    comes to submission flows. A machine doesn’t know

    what to expect when a form, presenting multiple fields,

    requires data entry. We’re all familiar with CAPTCHA

    tests. Whether a CAPTCHA requests that you type

    letters from a distorted image or identify which images

    in a group contain traffic lights, the process requires

    users to demonstrate recognition, segmentation,

    and parsing—each of which are processes that are

    difficult for scanners to complete. When sequenced

    together, they present nearly insurmountable scanning

    challenges for an automated machine.

    Form submission is highly contextual with multiple

    outcomes that can be generated depending on the

    use case, as opposed to the simple use case of

    password logins that move from unauthenticated

    to authenticated. For example, is the form a credit

    counseling flow? Or a credit card approval flow?

    A mortgage loan application is going to have a

    fundamentally different output than an online pizza

    order. Machine scanners will try to brute force different

    combinations of data until they can proceed to another


    page. Humans, on the other hand, can easily look at a form and assess how the outbound flows are likely to unwind.

    A mortgage application process might begin with basic personal information such as name, address, and phone

    number, but then would likely progress to employer information and income documentation—abstract concepts

    with any number of variables. This simple example demonstrates how difficult it is for a scanner to parse through

    multiple outbound flows without understanding the context. To date, conventional web scanning approaches that

    have tried to automate these types of workflows have not been successful. Building a scanner that is accurate and

    precise requires spending a significant amount of time and money to gather enough data points to map out the

    specific context for the scanner.

    While scanners struggle with context and business logic, researchers can apply common sense and creativity to

    discover these types of business logic vulnerabilities.

    ⁵ “Testing for business logic.” OWASP. https://www.owasp.org/index.php/Testing_for_business_logic

    Testing for business logic flaws in a multi-functional dynamic

    web application requires thinking in unconventional ways. If an

    application's authentication mechanism is developed with the

    intention of performing steps 1, 2, and 3 in that specific order

    to authenticate a user, what happens if the user goes from

    step 1 straight to step 3? In this simplistic example, does the

    application provide access by failing to open, deny access, or

    just error out with a 500 message? This type of vulnerability

    cannot be detected by a vulnerability scanner and relies upon

    the skills and creativity of the penetration tester. In addition, this

    type of vulnerability is usually one of the hardest to detect and

    is usually application specific but is typically one of the most

    detrimental to the application, if exploited.

    OPEN WEB APPLICATION SECURITY PROJECT WIKI⁵
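    The step-skipping scenario in the excerpt above is easy to probe once a human has understood the intended order. Here is a toy sketch; the three-step flow and its `enforce_order` flag are invented to model the flaw, not drawn from any real application:

```python
class ThreeStepFlow:
    """Toy flow meant to run steps 1 -> 2 -> 3 in order.
    `enforce_order=False` models the business-logic flaw OWASP describes."""
    def __init__(self, enforce_order: bool):
        self.enforce_order = enforce_order
        self.completed = set()

    def run_step(self, n: int) -> str:
        # A correct implementation refuses step n until step n-1 is done.
        if self.enforce_order and n > 1 and (n - 1) not in self.completed:
            return "denied"
        self.completed.add(n)
        return "granted" if n == 3 else "ok"

def step_skip_probe(app_factory) -> bool:
    """Return True if going from step 1 straight to step 3 still grants access."""
    app = app_factory()
    app.run_step(1)
    return app.run_step(3) == "granted"

vulnerable = step_skip_probe(lambda: ThreeStepFlow(enforce_order=False))
patched = step_skip_probe(lambda: ThreeStepFlow(enforce_order=True))
print(vulnerable, patched)
```

    The hard part in practice is not the probe itself but knowing, as the excerpt says, which step orders are worth probing, which is exactly where the tester's creativity comes in.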


    Developing Machines with Human Creativity

    Now that we’ve highlighted some of the limitations of machines, let’s identify a method that can be used to improve the accuracy and efficiency of scanners and make them smart. Although scanners struggle with higher-order thinking and business logic, there have been recent developments to improve their accuracy and efficiency. Form training is a distinct method by which humans can help machines perform better. As with CAPTCHA, web forms with multiple fields can trip up scanners, but there are ways to improve scanner performance by training them with scripts. A form submission script is run through the scanner to help it behave more like a human. The idea is to provide the scanner with vulnerability data captured from researchers to enter into form fields. Over time, the machine becomes familiar with, recognizes, and remembers similar patterns, and hunts more effectively. By executing certain actions in real time, it becomes possible to present a “real” user agent and distribute the workload across multiple applications at once to avoid tripping alarms. This is a key area where security teams can use automation to scale their efforts.

    Augmenting Human Strengths with Machines

    Crowdsourced penetration testing changed security testing by bringing a diverse crowd of the best security talent to hunt for vulnerabilities within organizations, finally allowing security tests to replicate the diversity of adversaries. However, while bringing a crowd of humans to hunt at depth has increased organizations’ resistance to malicious attack, attackers are getting smarter and are leveraging new technologies that enable them to find holes quickly. This, coupled with the continuous delivery of modern development organizations, makes it imperative that crowdsourced security continue to evolve to keep pace with today’s digital environments. Below are some of the challenges that security researchers alone face during security testing:

    Scale

    Humans cannot scan hundreds or thousands of targets at a fast pace. They must go

    through the exploit workflow to find a vulnerability, whereas scanners can quickly

    recognize one. The human testing process takes time, which can be a serious

    drawback when trying to scale quickly across an attack surface.


    Efficiency

    Researchers’ effort and time should be spent seeking the vulnerabilities that

    have a high probability of turning into an exploit. Researchers’ time is valued at

    hundreds of dollars per hour and this time is finite. This means researchers need

    tools to help them focus their efforts so they can spend their time on complex

    and creative tasks. They have started to leverage tools like scanners, fuzzers,

    machines, and open source plugins to help them scale their efforts and become

    more efficient at prioritization.

    Human Focus & Coverage

    People remain engaged if they are spending their time on tasks that are creative and

    mentally challenging. Similarly we see this in security, where researchers want to spend

    their time on tasks that are engaging and help them to learn new skills. If humans focus

    on what they perceive to be the most challenging and lucrative, this can lead to uneven

    coverage. While a low severity vulnerability on its own may carry acceptable risk, several

    low severity vulnerabilities, if strung together, could be debilitating to an organization.

    Machines can help to mitigate this bias by providing reconnaissance and change

    detection to highlight the areas in the attack surface where researchers should be

    focusing their efforts. Machines can identify a change in attack surface in real-time, and

    can guide researchers to the right place to look. Machines can be a valuable tool to help

    researchers make strides in their efforts and overcome bias by highlighting areas that

    should be scanned.

    Eluding Detection with Evasion

    To simulate a realistic attack environment, companies may choose to not switch off

    certain fraud detection or intrusion detection/prevention systems when going through a

    crowdsourced security test. In these instances, researchers need to stay low so they’re not

    immediately flagged and blocked. These systems detect both adversaries and researchers,

    who are frequently thwarted by perimeter protection systems such as those offered by

    Cloudflare, Fortinet, and other similar technology providers. Additionally, researchers

    can be stalled and waste time trying to bypass a fraud detection system, rather than

    accomplishing the task at hand—hunting for exploitable vulnerabilities. This is a big pain

    point of researchers, who often do not have the tools to help them with evasion. However, by

    leveraging scanners that can help evade fraud detection, researchers can spend their time

    hunting for vulnerabilities rather than fighting the fraud detection system.
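    To make the form-training idea from the previous section concrete, here is a minimal sketch of replaying researcher-captured values into form fields the scanner recognizes; the field-name heuristics and training data are hypothetical, invented only to illustrate the mechanism:

```python
import re

# Values a researcher captured once while completing the form by hand.
TRAINED_VALUES = {
    "email": "tester@example.com",
    "phone": "555-0100",
    "zip": "94043",
}

# Hypothetical heuristics mapping field names to trained values.
PATTERNS = {
    "email": re.compile(r"mail", re.I),
    "phone": re.compile(r"phone|tel", re.I),
    "zip": re.compile(r"zip|postal", re.I),
}

def fill_form(field_names):
    """Fill each field from training data when a pattern matches,
    falling back to a placeholder the scanner would brute-force."""
    filled = {}
    for name in field_names:
        for key, pat in PATTERNS.items():
            if pat.search(name):
                filled[name] = TRAINED_VALUES[key]
                break
        else:
            filled[name] = "FUZZ"  # no training data: left to brute force
    return filled

result = fill_form(["contact_email", "telephone", "postal_code", "promo"])
print(result)
```

    The more researcher-captured submissions feed the training data, the fewer fields fall through to brute force, which is the sense in which the machine "remembers similar patterns."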



    [Figure: the optimal combination of AI + HI to best augment the most elite security talent]

    Human Intelligence
    Strengths: creative tasks, business logic
    Weaknesses: repetitive tasks, scale, coverage

    Artificial Intelligence
    Strengths: scale, repetitive tasks
    Weaknesses: creative tasks, mimicking human behavior, producing noise/false positives

    Why It’s Time For Humans + Machines

    We’ve now examined a machine’s agility, but also

    where its limits lie. We’ve also explored human

    strengths in higher-order thinking, and their

    challenges in scaling. When machines augment human

    testers, two big benefits occur—researchers are much

    more efficient in their hunt for vulnerabilities, and

    security teams can prioritize results and findings. This

    allows them to spend their time on tasks that make

    the most of their efforts. Now that we’ve explored

    how machines can help researchers, let’s explore

    how an optimized approach of human + machine in

    crowdsourced security testing would help security

    teams become much more efficient.

    Coverage Analysis – Let’s consider an online retailer or a travel and hospitality enterprise. These types

    of organizations maintain enormous, dynamic web

    presences. With an intelligent machine, you can

    quickly map your site, develop comprehensive insights

    into traffic patterns, and view all the pages that were

    visited. Machines can perform the initial analysis and

    help to develop an understanding of security teams’

    entire attack surface and global coverage, guiding

    researchers to places they might not have visited

    themselves. Augmenting a crowd-based approach

    with smart scanning makes it possible to direct

    people to different locations, distribute resources

    based on vulnerability priorities, and achieve coverage

    across your entire surface. Coverage analysis can be

    further advanced by mapping pen tester activities:

    traffic from a diverse body of researchers can be

    captured through a Virtual Private

    Network (VPN). When this information is paired with

    an exhaustive web app map, a comprehensive target

    coverage map (including attack classification) can be

    continuously updated for security teams.
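    At its core, this kind of coverage analysis is a set comparison between the endpoints the crawler has mapped and the endpoints researcher traffic has actually touched. A minimal sketch, with invented endpoint names:

```python
def coverage_report(discovered, visited):
    """Compare the app map against researcher traffic seen at the VPN."""
    discovered, visited = set(discovered), set(visited)
    covered = discovered & visited
    return {
        "covered": sorted(covered),
        "untested": sorted(discovered - visited),  # guide researchers here
        "coverage_pct": round(100 * len(covered) / len(discovered), 1),
    }

# Crawl map and raw researcher traffic (paths may carry query strings).
site_map = ["/login", "/search", "/cart", "/admin", "/api/orders"]
vpn_traffic = ["/login", "/search", "/cart", "/search?q=x"]

# Normalize query strings before comparing.
seen = {p.split("?")[0] for p in vpn_traffic}
report = coverage_report(site_map, seen)
print(report["untested"], report["coverage_pct"])
```

    The "untested" bucket is what lets the platform direct researchers to places they might not have visited themselves.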

    DevSecOps Workflow – With Agile development, code is released constantly, altering the attack

    surface sometimes several times per day. DevSecOps

    shifts vulnerability assessment to the left of the

    development lifecycle by automatically responding

    to changes in the attack surface to reduce the

    lifecycle of vulnerabilities. Machine change detection


    technology can notify researchers in real time

    about changes in the attack surface. This type

    of coverage can help humans scale efficiently

    across an attack surface. Additionally, this type of

    detection testing can occur directly in pre-production

    environments. Machines can assist here to help

    highlight the changes to security teams via the use

    of an integration (Splunk, etc.) to further expedite

    the notification process and integrate into existing

    infrastructure. Think of having a static analyzer

    evaluate a Docker container for vulnerabilities as it

    is pushed to a registry, or discovering and scanning

    a new web application when infrastructure changes

    are detected. By integrating security directly into the

    software development process, it is no longer an

    afterthought. This approach helps organizations build

    and release truly secure products.
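    A bare-bones version of such change detection just fingerprints each endpoint's observable surface and diffs the fingerprints between releases. The endpoints below are illustrative; a real pipeline would push the diff to an integration such as Splunk:

```python
import hashlib
import json

def fingerprint(endpoint_meta: dict) -> str:
    """Stable hash of whatever we can observe about an endpoint."""
    blob = json.dumps(endpoint_meta, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def detect_changes(previous: dict, current: dict) -> dict:
    """Diff two {endpoint: fingerprint} snapshots taken on each release."""
    return {
        "added": sorted(set(current) - set(previous)),
        "removed": sorted(set(previous) - set(current)),
        "modified": sorted(e for e in current
                           if e in previous and current[e] != previous[e]),
    }

before = {"/login": fingerprint({"params": ["user", "pass"]}),
          "/cart": fingerprint({"params": ["item"]})}
after = {"/login": fingerprint({"params": ["user", "pass", "otp"]}),
         "/cart": before["/cart"],
         "/api/v2/orders": fingerprint({"params": ["id"]})}

changes = detect_changes(before, after)
print(changes["added"], changes["modified"])
```

    Anything in "added" or "modified" is a candidate for immediate re-test, which is the real-time notification the text describes.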

    Scanning at Scale – Automated smart scanning enables researchers to cover an attack surface that

    might otherwise require weeks or months of manual

    work by a single researcher. Port scanners can detect

    hosts with open ports over a wide attack surface

    for follow-up scanning by tools tailored to the types

    of services, applications, and specific software

    versions discovered. Web application resources can

    be automatically discovered through brute-force

    enumeration and link crawling. Accuracy of scanning

    results can be improved by using techniques such

    as spreading outgoing scan traffic across multiple

    cloud providers and regions to ensure diversity

    of originating IP addresses. By leveraging smart

    scanning during security testing, researchers can

    cover a larger attack surface and focus on areas that

    are most susceptible to attack.
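    The first discovery pass described above reduces to a TCP connect check fanned out across targets. This stdlib-only sketch probes a listener it starts itself, so it touches no real network:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """TCP connect check: the first-pass host/service discovery step."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we control, so no external host is probed.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # OS assigns a free port
server.listen(1)
host, port = server.getsockname()

found = port_open(host, port)
server.close()
print(found)
```

    A production scanner would fan `port_open` out over thread pools and, as the text notes, spread the originating traffic across cloud providers and regions.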

    [Figure: machines vs. humans. Machines: coverage analysis, DevSecOps workflow, scale, prediction analysis. Humans: creativity and context.]


    Optimal Human-Machine Workflow Engine: The average Common Vulnerability Scoring System (CVSS)

    score of a vulnerability found by a researcher is 7.1

    compared to a scanner’s 3.7.⁶ Humans will always find

    higher-severity vulnerabilities than scanners, even if at a

    slower pace. However, when machines are augmented

    by humans, the average CVSS score of a vulnerability

    caught by machine becomes 5.9—more impactful than

    a scanner alone, but nevertheless representative of the

    unique value and impact of humans. Let’s explore how

    this optimized approach would function.

    A crowdsourced approach where a scanner leverages

    human intelligence is optimal for discovering serious

    exploitable vulnerabilities. A machine is not going

    to discover many of the higher-order vulnerabilities

    that a human can, but a human can teach a machine

    to scan more accurately and pass certain human-

    centric barriers automatically, allowing it to hunt for

    vulnerabilities in a very wide scope at scale and present

    its findings to a human for triage. This type of human-

    aided navigation would function through a range of

    features, functions and processes:

    Combined workflow:

    • Web scans can be assisted by a human researcher to teach the scanner how to handle a webform it has encountered.

    • Automated findings can be triaged by a researcher, and false positives can be fed back to scanners to reduce noise.

    • Human findings can be fed back to a scanner as targets, so the scanner can re-scan if it missed the target or to increase thoroughness.

    • Initial web app scoping by a human can be captured and transferred to the scanner to continue from there.

    • Data that needs interpretation can be identified and sent via workflow to a human for classification (e.g., a hardcoded password in a comment without the word “password”).
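    The triage feedback loop described here can be sketched as a filter that learns from researcher verdicts; the finding fields and signature scheme below are hypothetical:

```python
class TriageLoop:
    """Researcher verdicts on machine findings feed back into the scanner
    as suppression signatures, so known noise is filtered on the next run."""
    def __init__(self):
        self.false_positive_sigs = set()

    @staticmethod
    def signature(finding: dict) -> tuple:
        # Hypothetical dedup key: check type plus location.
        return (finding["check"], finding["url"])

    def human_verdict(self, finding: dict, is_real: bool) -> None:
        if not is_real:
            self.false_positive_sigs.add(self.signature(finding))

    def filter_findings(self, findings: list) -> list:
        return [f for f in findings
                if self.signature(f) not in self.false_positive_sigs]

loop = TriageLoop()
scan1 = [{"check": "xss", "url": "/search"}, {"check": "sqli", "url": "/login"}]
loop.human_verdict(scan1[0], is_real=False)  # researcher: false positive
loop.human_verdict(scan1[1], is_real=True)   # researcher: confirmed

scan2 = [{"check": "xss", "url": "/search"}, {"check": "xss", "url": "/cart"}]
kept = loop.filter_findings(scan2)
print(kept)
```

    Each verdict does double duty: it resolves one finding and permanently reduces noise on every subsequent scan.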

    A smart machine can accurately identify and rank

    vulnerabilities and present information on potential

    exploitable vulnerabilities to researchers. Researchers

    are then empowered to triage those vulnerabilities in

    a much more targeted scope to find true exploitable

    vulnerabilities. Because humans are much better at

    identifying anomalies or suspicious activities worthy of

    a closer look, it’s important to retain the human element

    in triaging the noise from suspected vulnerabilities

    down to the true exploitable vulnerabilities. The

    capacity to operate this workflow at scale is essential.

    Additionally, security teams can look at these suspected

    vulnerabilities—those that meet a high confidence

    of becoming an exploit—in real time and start taking

    appropriate action as researchers simultaneously triage

    them to find an exploit.

    The optimal combination of humans and algorithmic

    machines filters out over 99 percent of noise⁷ for security

    teams. This is a huge value-add, as security teams otherwise spend

    their time sifting through pages of vulnerabilities with

    no starting point for remediation. With this starting

    point, there are more benefits to come as crowdsourced

    testing becomes even more extensively augmented with

    smart technology, such as AI.

    ⁶ Synack proprietary data.

    ⁷ Synack proprietary data.


    Opportunities for Leveraging AI in 2020

    Artificial Intelligence can augment security researchers to make them much more efficient and

    methodical when searching an attack surface for exploits.

    To scale up security testing, crowdsourced security with hundreds of researchers was introduced. Now,

    to scale that crowd, technology should be leveraged for efficient, automated security testing across

    growing attack surfaces. Machines will never be able to replace humans, but when machines augment researchers, security teams can expect to see 20-fold increases in operational efficiency due to improved

    noise reduction. Greater efficiency can allow them to prioritize remediation of high-impact vulnerabilities

    and share vulnerability intelligence with developers to enable them to become smarter with release

    lifecycles. As mentioned, Gartner is predicting explosive growth in automated and crowdsourced security

    testing together, with these new solutions employed by over 60 percent of enterprises by

    2022. It’s clear why humans augmented by machines constitute a new type of security solution that will

    soon be used by the mainstream.

    Pattern Recognition to Prediction Analysis/Forecasting

    As discussed earlier, change detection is a powerful tool. However, scans don’t necessarily

    need to be based exclusively on data changes. AI can predict where there might be a threat

    or a vulnerability based on past experience and large datasets. Ideally, a system could be created that

    is smart enough to predict where vulnerabilities normally occur within an attack surface based on past

    data—with patterns, dynamic code changes, and timing all becoming valuable data points. A security

    testing company that has access to a large data set due to performing security testing across hundreds

    of targets can use that data set to perform deep learning, with inputs being vulnerabilities, traffic, and

    targets. This type of deep learning could produce predictions to help the system understand what’s more

    likely to have changed in the attack surface, or where code changes might be expected to occur over

    a given period of time. These data sets would generally need to be large enough to produce accurate

    models. Collecting the requisite billions and trillions of data points is beyond the capability of all but the

    most sophisticated individual researchers.
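    Even without deep learning, the underlying intuition can be illustrated with a simple frequency prior over historical findings. This toy model (component names invented) ranks components by accumulated severity of past vulnerabilities, a crude stand-in for the learned likelihood model described above:

```python
from collections import defaultdict

# Historical findings across many past tests: (component, CVSS severity).
history = [
    ("auth", 9.1), ("auth", 7.4), ("file-upload", 8.2),
    ("auth", 6.8), ("search", 4.1), ("file-upload", 7.9),
]

def priority_scores(findings):
    """Score each component by the accumulated severity of its past
    findings, so historically vulnerable areas get scanned first."""
    totals = defaultdict(float)
    for component, severity in findings:
        totals[component] += severity
    return dict(totals)

scores = priority_scores(history)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # most historically vulnerable components first
```

    A real system would replace the severity sum with a model trained on vulnerabilities, traffic, and targets, but the output shape is the same: a prioritized list telling the scanner where to look next.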

    Coverage Analysis and Pattern Recognition to Gain Insights

    Earlier, we discussed crowdsourced security tests measuring traffic through the use of a

    VPN to create a comprehensive coverage map for security teams. This methodology also

    allows crowdsourcing to take continuous security one step further by using AI and machine learning

    to increase a machine’s fidelity—the rate at which a data change is recognized. Once research

    traffic is analyzed, ensuring complete coverage of the target IP ranges or application components

    is possible. Seeking patterns within the research traffic helps guide future platform development,

    including the AI components, to reduce the number of false positives and organize results in a

    digestible way for Red Team researchers.


    About Synack

    Synack, the most trusted crowdsourced security platform, delivers

    continuous and scalable penetration testing with actionable results. The

    company combines the world’s most skilled and trusted ethical hackers

    with AI-enabled technology to create an efficient and effective security

    solution. Headquartered in Silicon Valley with regional offices around

    the world, Synack protects leading global banks, federal agencies, DoD

    classified assets, and close to $1 trillion in Fortune 500 revenue. Synack

    was founded in 2013 by former US Department of Defense hackers Jay

    Kaplan, CEO, and Dr. Mark Kuhr, CTO. For more information, please visit

    www.synack.com.

  • © 2019 SYNACK, INC. ALL RIGHTS RESERVED.