
Please see the attached document and please make sure the last question is very interactive with the material provided.
ATTACHED FILE(S)
CHAPTER 16
Security Testing
In this chapter you will
• Explore the different types of security tests
• Learn about using scanning and penetration testing to find vulnerabilities
• Examine fuzz testing for vulnerabilities
• Examine security models used to implement security in systems
• Explore the types of adversaries associated with software security
When testing for vulnerabilities, a variety of techniques can be used to examine
the software under development. From generalized forms of testing, such as
scanning and fuzzing, to more specific methods, such as penetration testing and
cryptographic testing, different tools and methods can provide insights as to the
locations and levels of security vulnerabilities in the software.
Scanning
Scanning is automated enumeration of specific characteristics of an application
or network. These characteristics can be of many different forms, from operating
characteristics to weaknesses or vulnerabilities. Network scans can be performed
for the sole purpose of learning what network devices are available and
responsive. Systems can be scanned to determine the specific operating system
(OS) in place, a process known as OS fingerprinting. Vulnerability scanners can
scan applications to determine if specific vulnerabilities are present.
Scanning can be used in software development to characterize an application
on a target platform. It can provide the development team with a wealth of
information as to how a system will behave when deployed into production.
There are numerous security standards, including the Payment Card Industry
Data Security Standard (PCI DSS), that have provisions requiring the use of
scanners to identify weaknesses and vulnerabilities in enterprise platforms. The
development team should take note that enterprises will be scanning the
application as installed in the enterprise. Gaining an understanding of the
footprint and security implications of an application before shipping will help the
team to identify potential issues before they are discovered by customers.
Scanners have been developed to search for a variety of specific conditions.
There are scanners that can search code bases for patterns that are indicative of
elements of the OWASP Top 10 and the SANS Top 25 lists. There are scanners
tuned to produce reports for PCI and Sarbanes-Oxley (SOX) compliance. A
common mitigation for several regulatory compliance programs is a specific set
of scans against a specified set of vulnerabilities.
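The core of any scanner is simple enumeration logic. The following is a minimal sketch of that idea in Python, checking which TCP ports on a host accept connections; the host and port list are hypothetical examples, and real vulnerability scanners layer far more intelligence on top of this loop.

import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 indicates success
                open_ports.append(port)
    return open_ports

# Scan only hosts you are authorized to test.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))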
Attack Surface Analyzer
Microsoft has developed and released a tool called the Attack Surface Analyzer,
which is designed to measure the security impact of an application on a Windows
environment. Acting as a sophisticated scanner, the tool can detect the changes
that occur to the underlying Windows OS when an application is installed.
Designed to specifically look for and alert on issues that have been shown to
cause security weaknesses, the Attack Surface Analyzer enables a development
team or an end user to
• View changes in the Windows attack surface resulting from the installation of the
application
• Assess the aggregate attack surface change associated with the application in the
enterprise environment
• Evaluate the risk to the platform where the application is proposed to exist
• Provide incident response teams detailed information associated with a Windows
platform
One of the advantages of the Attack Surface Analyzer is that it operates
independently of the application that is under test. The Attack Surface Analyzer
scans the Windows OS environment and provides actionable information on the
security implications of an application when installed on a Windows platform.
For this reason, it is an ideal scanner for final security testing as part of the
secure development lifecycle (SDL) for applications targeted to Windows
environments.
Penetration Testing
Penetration testing, sometimes called pen testing, is an active form of examining
the system for weaknesses and vulnerabilities. While scanning activities are
passive in nature, penetration testing is more active. Vulnerability scanners
operate in broad sweeps, looking for vulnerabilities using limited intelligence;
penetration testing harnesses the power of human intellect to make a more
targeted examination. Penetration testers attack a system using information
gathered from it and expert knowledge in how weaknesses can exist in systems.
Penetration testing is designed to mimic the attacker’s ethos and methodology,
with the objective of finding issues before an adversary does. It is a highly
structured and systematic method of exploring a system and finding and
attacking weaknesses.
Penetration testing is a very valuable part of the SDL process. It can dissect a
program and determine if the planned mitigations are effective or not. Pen
testing can discover vulnerabilities that were not thought of or mitigated by the
development team. It can be done in white-, black-, or grey-box testing modes.
The Penetration Testing Process
Penetration testing is a structured test methodology. The following are the basic
steps employed in the process:
1. Reconnaissance (discovery and enumeration)
2. Attack and exploitation
3. Removal of evidence
4. Reporting
The penetration testing process begins with specific objectives being set out for
the tester to explore. For software under development, these could be input
validation vulnerabilities, configuration vulnerabilities, and vulnerabilities
introduced to the host platform during deployment. Based on the objectives, a
test plan is created and executed to verify that the software is free of known
vulnerabilities. As the testers probe the software, they take notes of the errors
and responses, using this information to shape subsequent tests.
Penetration testing is a slow and methodical process, with each step and its
results being validated. The records of the tests should demonstrate a
reproducible situation where the potential vulnerabilities are disclosed. This
information can give the development team a clear picture of what was found so
that the true root causes can be identified and fixed.
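As a hedged illustration of the reconnaissance step, the sketch below grabs service banners and records each response so later tests can be shaped by what was learned; the host and ports are hypothetical, and real engagements require explicit authorization.

import socket

def grab_banner(host, port, timeout=2.0):
    """Connect and return whatever the service announces, if anything."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(1024).decode(errors="replace").strip()
    except OSError as exc:
        return f"no banner ({exc})"

notes = {}  # a reproducible record of what each probe returned
for port in (21, 22, 25):
    notes[port] = grab_banner("127.0.0.1", port)
print(notes)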
Fuzzing
Fuzz testing is a brute-force method of addressing input validation issues and
vulnerabilities. The basis for fuzzing a program is the application of large
numbers of inputs to determine which ones cause faults and which ones might be
vulnerable to exploitation. Fuzz testing can be applied anywhere data is
exchanged to verify that input validation is being performed properly. Network
protocols can be fuzzed, file protocols can be fuzzed, web protocols can be
fuzzed. The vast majority of browser errors are found via fuzzing.
Fuzz testing works well in white-, black-, or grey-box testing, as it can be
independent of the specifics of the application under test. Fuzz testing works by
sending a multitude of input signals and seeing how the program handles them.
Specifically, malformed inputs can be used to vary parser operation and to check
for memory leaks, buffer overflows, and a wide range of input validation issues.
Since input validation errors are one of the top issues in software vulnerabilities,
fuzzing is the best method of testing against these issues, such as cross-site
scripting and injection vulnerabilities.
There are several ways to classify fuzz testing. One set of categories is smart
and dumb, indicating the type of logic used in creating the input values. Smart
testing uses knowledge of what could go wrong and creates malformed inputs
with this knowledge. Dumb testing just uses random inputs. Another set of terms
used to describe fuzzers is generation-based and mutation-based.

EXAM TIP Fuzz testing is a staple of SDL-based testing, finding a wide range of
errors with a single test method.
Generation-based fuzz testing uses the specifications of input streams to
determine the data streams that are to be used in testing. Mutation-based fuzzers
take known good traffic and mutate it in specific ways to create new input
streams for testing. Each of these has its advantages, and the typical fuzzing
environment involves both used together.
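A minimal sketch of the mutation-based approach, assuming a hypothetical parse_record() function standing in for the code under test: known-good input is randomly corrupted, expected rejections are ignored, and any unexpected fault is recorded for triage.

import random

def mutate(data: bytes, n_flips: int = 3) -> bytes:
    """Flip a few random bytes in otherwise-valid input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] = random.randrange(256)
    return bytes(buf)

def parse_record(data: bytes) -> None:
    # Hypothetical target; a real harness would invoke the parser under test.
    data.decode("utf-8")

seed = b"name=alice;age=30"  # known good traffic to mutate
for trial in range(10_000):
    sample = mutate(seed)
    try:
        parse_record(sample)
    except UnicodeDecodeError:
        pass  # clean rejection: input validation did its job
    except Exception as exc:  # unexpected fault: record for triage
        print(f"trial {trial}: {exc!r} on input {sample!r}")

A generation-based fuzzer would instead build samples directly from the input specification rather than mutating captured traffic.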
Simulation Testing
Simulation testing involves testing the application in an environment that
mirrors the associated production environment. Examining how configuration
issues affect the program outcome is important. Data issues that can result in
programmatic instability can also be investigated in the simulated environment.
Application setup and startup can be time consuming and expensive.
When developing a new application, considering the challenges associated with
the instantiation of the system can be important with respect to customer
acceptance. Simple applications may have simple setups, but complex
applications can have significant setup issues. Simulation testing can go a long
way toward discovering issues associated with the instantiation of an application
and its operation in the production environment.
Simulation testing can provide that last testing line of defense to ensure the
system is properly functioning prior to deployment. This is an opportunity to
verify that the interface with the OS is correct and that roles are properly
configured to support access and authorization. It also checks that firewall rules
(or other enforcement points) between tiers/environments are properly
documented, configured, and tested to ensure that attack surface/exposure is
managed. Other benefits of simulation testing include validating that the system
itself can stand up to the rigors of production performance—for example, using
load testing to “beat up” the application to ensure availability is sustainable and
that the controls don’t “break” when the load reaches a particular threshold.
Testing for Failure
Not all errors in code result in failure. Not all vulnerabilities are exploitable.
During the testing cycle, it is important to identify errors and defects, even those
that do not cause a failure. Although a specific error, say one in dead code that is
never executed, may not cause a failure in the current version, this same error
may become active in a later version and result in a failure. Leaving an error
such as this alone or leaving it for future regression testing is a practice that can
cause errors to get into production code.
Although most testing is for failure, it is equally important to test for conditions
that result in incorrect values, even if they do not result in failure. Incorrect
values have resulted in the loss of more than one spacecraft in flight; even though
the incorrect values did not cause the program itself to fail, they did result in the
loss of the system. A common test for failure conditions is load testing, where the
software is tested for capacity issues. Understanding how the software functions under heavy load
conditions can reveal memory issues and other scale-related issues. These
elements can cause failure in the field, and thus extensive testing for these types
of known software issues is best conducted early in the development process
where issues can be addressed prior to release.
Cryptographic Validation
Having secure cryptography is easy: Use approved algorithms and implement
them correctly and securely. The former is relatively easy—pick the algorithm
from a list. The latter is significantly more difficult. Protecting the keys and the
seed values, and ensuring proper operational conditions are met, have proven to
be challenging in many cases. Other cryptographic issues include proper random
number generation and key transmission.
Cryptographic errors come from several common causes. One typical mistake
is choosing to develop your own cryptographic algorithm. Developing a secure
cryptographic algorithm is far from an easy task, and even when done by experts,
weaknesses can occur that render an algorithm unusable. Cryptographic algorithms
become trusted after years of scrutiny and attacks, and any new algorithms
would take years to join the trusted set. If you instead decide to rest on secrecy,
be warned that secret or proprietary algorithms have never provided the desired
level of protection. One of the axioms of cryptography is that security through
obscurity has never worked in the long run.
Deciding to use a trusted algorithm is a proper start, but there still are several
major errors that can occur. The first is an error in instantiating the algorithm.
An easy way to avoid this type of error is to use a library function that has
already been properly tested. Sources of these library functions abound and
provide an economical way to meet this need. Given an algorithm and a proper
instantiation, the next item needed is a random number with which to generate a
key.
The generation of a real random number is not a trivial task. Computers are
machines that are renowned for reproducing the same output when given the
same input, so generating a string of pure, nonreproducible random numbers is a
challenge. There are functions for producing random numbers built into the
libraries of most programming languages, but these are pseudo-random number
generators, and although the distribution of output numbers appears random,
they generate a reproducible sequence. Given the same input, a second run of the
function will produce the same sequence of “random” numbers. Determining the
seed and the resulting sequence, and using that knowledge to “break” a
cryptographic function, is a technique that has been used more than once to
bypass security. This method was
used to subvert an early version of Netscape’s Secure Sockets Layer (SSL)
implementation. An error in the Debian instantiation of OpenSSL resulted in poor
seed generation, which then resulted in a small set of random values.

EXAM TIP Cryptographically random numbers are essential in cryptosystems
and are best produced through cryptographic libraries.
Using a number that is cryptographically random and suitable for an
encryption function resolves the random seed problem, and again, the use of
trusted library functions designed and tested for generating such numbers is the
proper methodology. Trusted cryptographic libraries typically include a
cryptographic random number generator.
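The contrast can be seen directly in Python, where the random module is a seeded pseudo-random generator while the secrets module draws from the operating system's cryptographic source; this is a sketch of the distinction, not a complete key-generation routine.

import random
import secrets

random.seed(42)
run1 = [random.randrange(256) for _ in range(4)]
random.seed(42)
run2 = [random.randrange(256) for _ in range(4)]
assert run1 == run2  # same seed, same "random" sequence: unfit for keys

key = secrets.token_bytes(32)  # 256 bits from a cryptographic source
print(key.hex())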
Poor key management has failed many a cryptographic implementation. In one
famous exploit, hackers obtained cryptographic keys from an executable and used
them to break DVD encryption, producing the DeCSS program. Tools have been
developed that can search code for “random” keys and extract them from the
code or running process. The bottom line is simple: Do not hard-code secret keys
in your code. They can, and will, be discovered. Keys should be generated and
then passed by reference, minimizing the travel of copies across a network or
application. Storing them in memory in a noncontiguous fashion is also
important to prevent external detection.
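A minimal sketch of the "no hard-coded keys" rule: the key is fetched from the environment (or, better, a secrets manager) at runtime rather than embedded in the binary where extraction tools can find it. The variable name APP_ENC_KEY is a hypothetical example.

import os

def load_key() -> bytes:
    """Fetch the key at runtime instead of hard-coding it in the source."""
    hex_key = os.environ.get("APP_ENC_KEY")
    if hex_key is None:
        raise RuntimeError("APP_ENC_KEY not set; refusing to start")
    return bytes.fromhex(hex_key)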
FIPS 140-2
FIPS 140-2 is a prescribed standard, part of the Federal Information Processing
Standards series that relates to the implementation of cryptographic functions.
FIPS 140-2 deals with issues such as the selection of approved algorithms, such as
AES, RSA, and DSA. FIPS 140-2 also deals with the environment where the
cryptographic functions are used, as well as the means of implementation.

EXAM TIP FIPS 140-2 specifies requirements, specifications, and testing of
cryptographic systems for the U.S. federal government.
Regression Testing
Software is a product that continually changes and improves over time. Multiple
versions of software can have different and recurring vulnerabilities. Anytime
that software is changed, whether by configuration, patching, or new modules,
the software needs to be tested to ensure that the changes have not had an
adverse impact on other aspects of the software. Regression testing is a minor
element early in a product’s lifecycle, but as a product gets older and has
advanced through multiple versions, including multiple customizations, etc., the
variance between versions can make regression testing a slow, painful process.
Regression testing is one of the most time-consuming issues associated with
patches for software. Patches may not take long to create—in fact, in some cases,
the party discovering the issue may provide guidance on how to patch. But before
this solution can be trusted across multiple versions of the software, regression
testing needs to occur. When software is “fixed,” several things can happen. First,
the fix may cause a fault in some other part of the software. Second, the fix may
undo some other mitigation at the point of the fix. Third, the fix may repair a
special case, entering a letter instead of a number, but miss the general case of
entering any non-numeric value. The list of potential issues can go on, but the
point is that when a change is made, the stability of the software must be
checked.
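The special-case trap described above is exactly what a regression test should pin down. Below is a hedged sketch, assuming a hypothetical validate_quantity() function: rather than testing only the letter that triggered the original bug report, the test asserts the general rule that any non-numeric input is rejected.

def validate_quantity(text: str) -> int:
    """Accept only non-negative integer strings."""
    if not text.isdigit():
        raise ValueError("quantity must be numeric")
    return int(text)

def test_rejects_any_non_numeric():
    for bad in ["a", "3.5", "-1", "12x", "", " "]:
        try:
            validate_quantity(bad)
        except ValueError:
            continue  # rejected as required
        raise AssertionError(f"accepted non-numeric input: {bad!r}")

test_rejects_any_non_numeric()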
Regression testing is not as simple as completely retesting everything—this
would be too costly and inefficient. Depending upon the scope and nature of the
change, an appropriate regression test plan needs to be crafted. Simple changes
to a unit may only require testing at the unit level, making
regression testing fairly simple. In other cases, regression testing can have a far-
reaching impact across multiple modules and use cases. A key aspect of the
patching process is determining the correct level, breadth, and scope of
regression testing that is required to cover the patch.
Specialized reports, such as delta analysis and historical trending reports, can
assist in regression testing efforts. These canned report types are available in a
variety of application security testing tools. When leveraging regular
scan and reporting cycles, remediation meetings use these reports to enable the
security tester to analyze and work with teams to fix the vulnerabilities
associated with each release—release 1 vs. release 2, or even over the
application’s release lifetime (compare release 1 to 2 to 3 and so on).
Impact Assessment and Corrective Action
Bugs found during software development are scored based on impact. During the
course of development, numerous bugs are recorded in the bug tracking system.
As part of the bug clearing or corrective action process, a prioritization step
determines which bugs get fixed and when. Not all bugs are exploitable, and
among those that are exploitable, some have a greater impact on the system. In
an ideal world, all bugs would be resolved at every stage of the development
process. In the real world, however, some errors are too hard (or expensive) to fix
and the risk associated with them does not support the level of effort required to
fix them in the current development cycle. If a bug requires a major redesign,
the cost of fixing it can be high. If the bug is critical to the success or failure of the
system, then resolving it becomes necessary. If it is inconsequential, then
resolution may be postponed until the next major update and redesign
opportunity.
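The prioritization logic can be sketched as a simple scoring sort; the impact and fix-cost fields below are illustrative assumptions rather than a standard scoring scheme such as CVSS.

bugs = [
    {"id": "B-1", "impact": 9, "fix_cost": 2},  # critical and cheap: fix now
    {"id": "B-2", "impact": 3, "fix_cost": 8},  # minor and expensive: defer
    {"id": "B-3", "impact": 7, "fix_cost": 7},
]
# Highest impact first; cheaper fixes break ties.
for bug in sorted(bugs, key=lambda b: (-b["impact"], b["fix_cost"])):
    print(bug["id"], "fix this cycle" if bug["impact"] >= 7 else "defer")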
Chapter Review
In this chapter, different types of security tests were presented. Scanning was
presented as a means of characterizing and identifying vulnerabilities. While
scanning tends to be broad in scope, the next technique, penetration testing,
tends to be very specific in its methods of finding vulnerabilities. The next
method, fuzzing, is specific in its target, but very general in its method of testing,
finding a wide range of problems. Simulation testing is where the application is
tested in a simulated production environment to find operational errors.
Testing for failures is important, but so is testing for errors that cause incorrect
values but not failure. Cryptographic systems can be complex and difficult to
implement properly. Testing the areas of failure associated with cryptographic
systems was covered. Testing various versions of software is referred to as
regression testing. The chapter closed by examining the impact of a bug and how
this is used in prioritizing corrective actions.
Quick Tips
• Scanning is the automated enumeration of specific characteristics of an
application or network.
• Penetration testing is an active form of examining the system for weaknesses and
vulnerabilities.
• Fuzz testing is a brute-force method of addressing input validation issues and
vulnerabilities.
• Simulation testing involves testing the application in an environment that
mirrors the associated production environment.
• Although most testing is for failure, it is equally important to test for conditions
that result in incorrect values, even if they do not result in failure.
• Only approved cryptographic algorithms should be used; creating your own
cryptography is a bad practice.
• Testing various versions of software is referred to as regression testing.
• Bugs are measured in terms of their impact on the system, and this impact can be
used to prioritize corrective action efforts.

CHAPTER 15
Security Quality Assurance Testing
In this chapter you will
• Explore the aspects of testing software for security
• Learn about standards for software quality assurance
• Discover the basic approaches to functional testing
• Examine types of security testing
• Explore the use of the bug bar and defect tracking in an effort to improve the SDL
process
Testing is a critical part of any development process and testing in a secure
development lifecycle (SDL) environment is an essential part of the security
process. Designing in security is one step, coding is another, and testing provides
the assurance that what was desired and planned becomes reality. Validation and
verification have been essential parts of quality efforts for decades, and software
is no exception. This chapter looks at how and what to test to obtain an
understanding of the security posture of software.
Standards for Software Quality Assurance
Quality is defined as fitness for use according to certain requirements. This can
be different from security, yet there is tremendous overlap in the practical
implementation and methodologies employed. In this regard, lessons can be
learned from international quality assurance standards, for although their goals
are more expansive than security alone, their methods apply to security as well.
ISO 9126
The International Standard ISO/IEC 9126 provides guidance for establishing
quality in software products. With respect to testing, this standard focuses on a
quality model built around functionality, reliability, and usability. Additional
issues of efficiency, maintainability, and portability are included in the quality
model of the standard. With respect to security and testing, it is important to
remember the differences between quality and security. Quality is defined as
fitness for use, or conformance to requirements. Security is less cleanly defined,
but can be defined by requirements. One issue addressed by the standard is the
human side of quality: requirements can shift over time, or be less clear than the
development team needs in order to address them properly. These are common
issues in all projects, and the standard works to ensure a common understanding
of the goals and objectives of the projects as described by requirements. This
information is equally applicable to security concerns and requirements.
SSE-CMM
The Systems Security Engineering Capability Maturity Model (SSE-CMM) is also
known as ISO/IEC 21827, and is an international standard for the secure
engineering of systems. The SSE-CMM addresses security engineering activities
that span the entire trusted product or secure system lifecycle, including concept
definition, requirements analysis, design, development, integration, installation,
operations, maintenance, and decommissioning. The SSE-CMM is designed to be
employed as a tool to evaluate security engineering practices and assist in the
definition of improvements to them. The SSE-CMM is organized into processes
and corresponding maturity levels. There are 11 processes that define what needs
to be accomplished by security engineering. The maturity level is a standard
CMM metric representing how well each process achieves a set of goals. As a
model, the SSE-CMM has become a de facto standard for evaluating security
engineering capability in an organization.
OSSTMM
The Open Source Security Testing Methodology Manual (OSSTMM) is a peer-
reviewed system describing security testing. OSSTMM provides a scientific
methodology for assessing operational security built upon analytical metrics. It is
broken into five sections: data networks, telecommunications, wireless, human,
and physical security, as shown in Table 15-1. The purpose of the OSSTMM is to
create a system that can accurately characterize the security of an operational
system in a consistent and reliable fashion.

Table 15-1 OSSTMM Sections and Test/Audit Areas
OSSTMM provides a scientific methodology that can be used in the testing of
security. The Institute for Security and Open Methodologies (ISECOM), the
developer of the OSSTMM, has built a range of training classes around the
methodology. The OSSTMM can also be used to assist in auditing, as it highlights
what is important to verify with respect to functional operational security.
Testing Methodology
Testing software during its development is an integral part of the development
process. Developing a test plan, a document detailing a systematic approach to
testing a system such as a machine or software, is the first step. The plan begins
with the test strategy, an outline that describes the overall testing approach. From
this plan, test cases are created. A test case is designed to answer the question,
“What am I going to test, and what will correct look like?” A document
enumerating these cases includes information such as a unique test identifier,
links to requirement references from design specifications, notes on any
preconditions, a reference to test harnesses and scripts required, and any
additional notes.
A test harness is a means of documenting the software, tools, samples of data
input and output, and configurations used to complete a set of tests. The
individual steps of the tests can be encoded in a series of test scripts. Test scripts
are important for several reasons. They replicate user actions, and by automating
the series of steps (also known as actions) to follow, including inputs, they
remove the errors that could occur with manual testing. They can also be
automated to collect outputs and compare the returned values to expected
results, improving the speed and accuracy of test interpretation.
The tests are sometimes grouped in collections referred to as test suites. It is
common to have test suites for specific functionality cases, such as security, user
inputs, boundary condition checking, databases, etc. Grouping tests into suites
makes management easier and promotes reuse rather than continual
redevelopment of the same types of materials.
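As a hedged sketch of these ideas, the following uses Python's unittest to encode a test case with a unique identifier, expected-versus-actual comparisons, and a suite grouping; the discount() function is a hypothetical unit under test.

import unittest

def discount(price: float, pct: float) -> float:
    """Hypothetical unit under test."""
    return round(price * (1 - pct / 100), 2)

class DiscountCase(unittest.TestCase):
    """Test case TC-001: traces back to a pricing requirement."""

    def test_expected_output(self):
        self.assertEqual(discount(100.0, 15), 85.0)  # expected vs. actual

    def test_boundary_conditions(self):
        self.assertEqual(discount(100.0, 0), 100.0)
        self.assertEqual(discount(100.0, 100), 0.0)

# Group the case into a suite and run it, as a test script would.
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountCase)
unittest.TextTestRunner().run(suite)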
Functional Testing
Functional software testing is performed to assess the level of functionality
associated with the software as expected by the end user. Functional testing is
used to determine compliance with requirements in the areas of reliability, logic,
performance, and scalability. Reliability measures whether the software functions as
the customer expects at all times. It is not just a measure of availability, but of
functionally complete availability. Resiliency is a measure of how well the
software continues to perform while under attack by an adversary.
Steps for Functional Testing
Functional testing involves the following steps, in order (a minimal sketch follows the list):
1. Identifying the functions (requirements) that the software is expected to perform
2. Creating input test data based on the function’s specifications
3. Determining expected output test results based on the function’s specifications
4. Executing the test cases corresponding to functional requirements
5. Comparing actual and expected outputs to determine functional compliance
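Here is a minimal sketch of the five steps, assuming a hypothetical requirement that a function return the sum of two integers; input data and expected outputs come from the specification, and the final comparison determines compliance.

def add(a: int, b: int) -> int:  # step 1: the specified function
    return a + b

test_data = [(1, 2), (0, 0), (-5, 5)]          # step 2: input test data
expected = [3, 0, 0]                           # step 3: expected outputs
actual = [add(a, b) for a, b in test_data]     # step 4: execute the tests
print("compliant" if actual == expected else "non-compliant")  # step 5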
Unit Testing
Unit testing is conducted by developers as they develop the code. This is the first
level of testing and is essential to ensure that logic elements are correct and that
the software under development meets the published requirements. Unit testing
is essential to the overall stability of the project, as each unit must stand on its
own before being connected together. At a minimum, unit testing will ensure
functional logic, understandable code, and a reasonable level of vulnerability
control and mitigation.

EXAM TIP One of the principal advantages of unit testing is that it is done by
the development team and catches errors early, before they leave the
development phase.
Integration or Systems Testing
Even if each unit tests properly per the requirements and specifications, a system
is built up of many units that work together to achieve a business objective. There
are emergent properties that occur in systems, and integration (or systems-level)
testing should be designed to verify that the correct form and level of the
emergent properties exist in the system. A system can be more than just the sum
of the parts, and if part of the “more” involves security checks, these need to be
verified.
Systems or integration testing is needed to ensure that the overall system is
compliant with the system-level requirements. It is possible for one module to be
correct and another module to also be correct but for the two modules to be
incompatible, causing errors when connected. System tests need to ensure that
the integration of components occurs as designed and that data transfers
between components are secure and proper.
Performance Testing
Part of the set of requirements for the software under development should be the
service levels that can be expected from the software. Typically,
these are expressed in the terms of a service level agreement (SLA). The typical
objective in performance testing is not the finding of specific bugs, but rather the
goal is to determine bottlenecks and performance factors for the systems under
test. These tests are frequently referred to as load testing and stress testing. Load
testing involves running the system in a controlled environment at defined load levels. Stress
testing takes the system past this operating point to see how it responds to
overload conditions.
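A hedged sketch of the load-testing idea, with a CPU-bound handler standing in for real request processing: many concurrent callers are driven against it and latencies are measured; stress testing would keep raising n_clients until latencies or errors spike.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> float:
    """Stand-in for real request work; returns its own latency."""
    start = time.perf_counter()
    sum(range(100_000))
    return time.perf_counter() - start

n_clients = 50
with ThreadPoolExecutor(max_workers=n_clients) as pool:
    latencies = list(pool.map(handle_request, range(n_clients)))
print(f"max latency under load: {max(latencies):.4f}s")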

EXAM TIP Recoverability is the ability of an application to restore itself to
expected levels of functionality after the security protection is breached or
bypassed.
Regression Testing
As changes to software code bases occur, they must be tested against functional
and nonfunctional requirements. This is the normal testing that occurs as part of
any change process. Regression testing is the testing of the changes when applied
to older versions of a code base. This is common in large software projects that
have multiple versions distributed across a customer base. The challenge is not in
the direct effects of a change, but in interactive changes that occur because of
other code differences between the two versions of a program. Regression testing
can be expensive and time consuming, and is one of the major challenges for a
software vendor that is supporting multiple versions of a product.
Security Testing
Testing includes white-box testing, where the test team has access to the design
and coding elements; black-box testing, where the team does not have access; and
grey-box testing, where the tester has more information than in black-box testing
but less than in white-box testing. This nomenclature does not describe the actual tests being
performed, but rather indicates the level of information present to the tester
before the test.
White-Box Testing
White-box testing is performed on a system with full knowledge of the working
components, including the source code and its operation. This is commonly done
early in the development cycle. The advantage of white-box testing is that the
attacker has knowledge of how the system works and can spend their time on
compromising it rather than on discovering how it operates. The unit testing of a
section of code by the development team is
an example of white-box testing. White-box testing, by design, provides the
attacker with complete documentation, including source code and configuration
parameters. This information can then be used to devise potential methods of
attacking the software. Thus, white-box testing can focus on the structural basis
of the software and the operational deployment considerations with respect to its
use or misuse.

EXAM TIP When testers have access to full knowledge of a system, including
source code, it is referred to as white-box testing.
Black-Box Testing
Black-box testing is where the attacker has no knowledge of the inner workings
of the software under test. This is common in more advanced system-level tests,
such as penetration testing. The lack of knowledge of the specific implementation
is not as limiting as one might think, for the attacker still has the same
knowledge that an end user would possess, so they know what inputs are
requested. Using their knowledge of how things work and what patterns of
vulnerabilities are likely to exist, an attacker is not as blind in black-box testing
as you might think. Black-box testing focuses on the behavioral characteristics of
the application.

EXAM TIP When testers have no knowledge of how a system works,
including no access to source code, it is referred to as black-box testing.
Grey-Box Testing
Grey-box testing is aptly named, as an attacker has more knowledge of the inner
workings, but less than total access to source code. Grey-box testing is relatively
rare outside of internal testing.
Environment
Software applications operate within a specific environment, which also needs to
be tested. Trust boundaries, described earlier in the book, are devices used to
demarcate the points where data moves from one module set to another. Testing
of data movement across trust boundaries from end to end of the application is
important. When the complete application, from end to end, is more than a single
piece of code, interoperability issues may arise and need to be tested for. When
security credentials, permissions, and access tokens are involved, operations
across trust boundaries and between modules become areas of concern.
Verifying that all dependencies across the breadth of the software are covered,
both logically and from a functional security credential point of view, is
important.
Comparison of Common Testing Types
• White-box testing: full knowledge of the system, including source code; typical
of unit testing early in development
• Grey-box testing: partial knowledge, more than black-box but short of full
source access; relatively rare outside internal testing
• Black-box testing: no knowledge of the inner workings; common in system-level
tests such as penetration testing
Bug Tracking
Software will always have errors, or bugs, and these bugs come in a variety of
shapes and sizes. Some are from design issues, some from coding, and some from
deployment. If the development team is going to manage these issues, they need
to be collected, enumerated, and prioritized. Tracking the defects as they become
known will allow for better access and management. Remediation of bugs can
take many forms, but typically four states are used:
• Removal of the defect
• Mitigation of the defect
• Transfer of responsibility
• Ignoring the issue
Sometimes, the removal of the defect is not directly possible. This could be
because other functionality would be lost in the removal process, or because
returning to design or another earlier step in the development process
would be too costly at this point in production. These four states
mirror the options associated with risk, and this makes sense, as bugs create risk
in the system.
The goal of tracking bugs is to ensure that at some point they get addressed by
the development team. As it may not be feasible to correct all bugs at or near the
time of discovery, logging and tracking them provide a means of ensuring that
what is found is eventually addressed. Logging them also provides a metric as to
code quality. By comparing the defect rate during development to other systems
of similar size and complexity, it is possible to get a handle on the development
team’s efficiency.
Software defects, or bugs, can be characterized in different ways. One method
is by the source or effect of the defect. Defects can be broken into five categories:
• Bugs: errors in coding
• Flaws: errors in design
• Behavioral anomalies: issues in how the application operates
• Errors and faults: outcome-based issues from other sources
• Vulnerabilities: items that can be manipulated to make the system operate
improperly
Defects
A defect database can be built to contain the information about defects as they
occur. Issues such as where the defect occurred, in what part of the code and in
what build it occurred, who developed it, who discovered it, how it was
discovered, and whether it is exploitable can be logged. Then, additional disposition
data can be tracked against these elements, providing information for security
reviews.
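A sketch of the kind of record such a database might hold; the field names are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: str
    module: str           # where in the code it occurred
    build: str            # the build in which it appeared
    reported_by: str
    discovered_via: str   # e.g., "fuzzing", "code review"
    exploitable: bool
    status: str = "open"  # disposition data for security reviews

db = [Defect("D-101", "auth/login.py", "1.4.2", "qa-team", "fuzzing", True)]
print([d.defect_id for d in db if d.exploitable])  # feed security reviews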
Tracking all defects, even those that have been closed, provides a wealth of
information to developers. What has gone wrong in the past, where, and how?
The defect database is a tremendous place to learn what not to do, and in some
cases, what not to repeat. This database provides testers with ammunition to go
out hunting for defects.
Errors
Errors are examples of things gone wrong. They can be of varying levels of
severity and impact. Some errors are not a significant issue at the present time,
for they do not carry immediate operational risk. But like all other issues, they
should be documented and put into the database. This allows them to be included
in quality assurance (QA) counts and can help provide an honest assessment of
code quality over time. Errors can be found through a wide variety of testing
efforts, from automated tests to unit tests to code walkthroughs. The important
issue with errors is collecting the information associated with them and
monitoring the metrics.
If testing is a data collection effort aimed at improving the SDL process, then
error data collection should not be an effort aimed at punitive results. The
collection should enable feedback mechanisms to provide information to the
development team, so that over time, fewer errors are made, as the previously
discovered and now-understood problems are not repeated. Monitoring error
levels as part of a long-term security performance metric provides meaningful,
actionable information to improve the efforts of the development team.
Vulnerabilities
Vulnerabilities are special forms of errors, in that they can be exploited by an
adversary to achieve an unauthorized result. As with all other types of defects,
vulnerabilities can range in severity, and this is measured by the potential impact
on the overall system. Vulnerabilities are frequently found during activities such
as penetration testing and fuzz testing. The nature of these testing environments
and the types of results make vulnerability discovery their target of opportunity.
By definition, these types of errors are potentially more damaging, and they will
score higher on bug bar criteria than many other error types.
Bug Bar
The concept of a bug bar is an operational measure for what constitutes a
minimum level of quality in the code. The bug bar needs to be defined at the
beginning of the project as a fixed security requirement. Doing this establishes an
understanding of the appropriate level of risk with security issues and establishes
a level of understanding as to what must be remediated before release. During
the testing phase, it is important to hold true to this objective and not let the bar
slip because of production pressures.
A detailed bug bar will list the types of errors that cannot go forward into
production. For instance, bugs labeled as critical or important may not be
allowed into production. These could include bugs that permit access violations,
elevation of privilege, denial of service, or information disclosure. The specifics of
what constitutes each level of bug criticality need to be defined by the security
team in advance of the project so that the testing effort will have concrete
guidance to work from when determining level of criticality and associated
go/no-go status for remediation.
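Enforcing a bug bar can be as simple as a release gate over the defect tracker. The sketch below assumes the severity labels used in the text; the gate logic itself is an illustrative assumption, not a prescribed process.

BAR = {"critical", "important"}  # severities that must be fixed pre-release

open_bugs = [("B-7", "critical"), ("B-9", "moderate"), ("B-12", "low")]
blockers = [bug_id for bug_id, sev in open_bugs if sev in BAR]
if blockers:
    print("release blocked by:", blockers)
else:
    print("go for release")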
Detailed requirements for testing may include references to the bug bar when
performing tests. For instance, fuzzing involves numerous iterations, so how
many is enough? Microsoft has published guidelines that indicate fuzzing should
be repeated until there are 100,000 to 250,000 clean samples, depending upon the
type of interface, since the last bug bar issue. These types of criteria ensure that
testing is thorough and does not get stopped prematurely by a few low-hanging
fruit–type errors.
Attack Surface Validation
The attack surface evaluation was extensively covered in the design portions of
this book. During the design phase, an estimate of the risks and the mitigation
efforts associated with the risks is performed. Based on the results of this design,
the system is developed, and during development, the actual system design goals
may or may not have been met. Testing the code for obvious failures at each step
along the way provides significant information as to which design elements were
not met.
It is important to document the actual attack surface throughout the
development process. Testing the elements and updating the attack surface
provide the development team with feedback, ensuring that the design attack
surface objectives are being met through the development process. Testing of
elements such as the level of code accessible by untrusted users, the quantity of
elevated privilege code, and the implementation of mitigation plans detailed in
the threat model is essential in ensuring that the security objectives are being met
through the development process.
Testing Artifacts
Testing is a multifaceted process that should occur throughout the development
process. Beginning with requirements, use and misuse cases are created and used
to assist in the development of the proper testing cases to ensure requirements
coverage. As software is developed, testing can occur at various levels—from the
unit level where code is first created to the final complete system and at multiple
stages in between. To ensure appropriate and complete testing coverage, it is
important for the testing group to work with the rest of the development team,
creating and monitoring tests for each level of integration to ensure that the
correct properties are examined at the correct intervals of the secure
development process.
Test Data Lifecycle Management
Testing can require specific useful data to perform certain types of tests. Whether
for error conditions or verification of correct referential integrity testing, test
data must be created to mimic actual production data and specific process
conditions. One manner of developing useable data, especially in complex
environments with multiple referential integrity constraints, is to use production
data that has been anonymized. This is a difficult task as the process of truly
anonymizing data can be more complex than just changing a few account
numbers and names. Managing test data and anonymizing efforts are not trivial
tasks and can require planning and process execution on the part of the testing
team.
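One way to preserve referential integrity while anonymizing, sketched below under the assumption of simple customer and order tables: a deterministic pseudonym function maps the same real account number to the same fake one, so cross-table references still line up. The field names and salt are hypothetical.

import hashlib

def pseudonymize(account_no: str, salt: str = "test-env-salt") -> str:
    """Deterministic mapping: same input always yields the same pseudonym."""
    digest = hashlib.sha256((salt + account_no).encode()).hexdigest()
    return "ACCT-" + digest[:10]

customers = [{"account": "1234567", "name": "Alice"}]
orders = [{"account": "1234567", "total": 99.0}]
for row in customers + orders:
    row["account"] = pseudonymize(row["account"])
print(customers, orders)  # both tables still share the same pseudonym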
Chapter Review
This chapter opened with a look at some standards associated with software
quality assurance. ISO/IEC 9126 details quality in software products, while ISO 21827
(SSE-CMM) details the processes of secure engineering of systems. The OSSTMM, a
scientific methodology for assessing operational security built upon analytical
metrics, was presented as an aid to testing and auditing. Functional testing,
including reliability and resiliency testing, was covered. The functional testing
elements of unit testing, systems testing, and performance testing were
presented. Security testing can be performed in white-, grey- or black-box modes,
depending upon the amount of information possessed by the tester. Performance
testing, including the elements of load and stress testing, was presented. Testing
of the operational environment was covered, as it is associated with the trust
boundaries and sets many security conditions on the application. The tracking of
bugs, including the various forms of bugs and the establishment of a bug bar, was
presented. The chapter closed with a discussion on validation of the attack
surface as part of testing.

Name:
ISEC 620 Homework 6
Testing is a crucial phase in the SDLC. The testing phase also comprises a diverse set of tools and techniques. Modules 6, 7, and 8 are dedicated to software testing and analysis. In this homework, you will compare software security analysis tools and techniques.
In the last module, you read Chapter 14 of Conklin & Shoemaker. In this module, you have been reading Chapters 15 and 16. These chapters contain a variety of different software security analysis tools and methods. These include, but are not limited to:
· Static Code Analysis
· Dynamic Code Analysis
· Peer Review
· Quality Assurance Testing
· Penetration Testing
· Fuzzing
Question 1
Briefly describe each method.
Question 2
Compare static and dynamic code analysis methods.
Question 3
What is the main difference between static & dynamic code analysis and penetration testing & fuzzing? Describe.
Question 4
How does the peer review process differ from other processes in the list? Describe.
Question 5
How does Quality Assurance Testing differ from the other processes in the list? Describe.
Question 6 - Weekly Learning and Reflection
In two to three paragraphs of prose (i.e., sentences, not bullet lists) using APA style citations if needed, summarize and interact with the content that was covered this week in class. In your summary, you should highlight the major topics, theories, practices, and knowledge that were covered. Your summary should also interact with the material through personal observations, reflections, and applications to the field of study. In particular, highlight what surprised, enlightened, or otherwise engaged you. Make sure to include at least one thing that you’re still confused about or ask a question about the content or the field. In other words, you should think and write critically not just about what was presented but also what you have learned through the session. Questions asked here will be summarized and answered anonymously in the next class.
