Defensive Coding Practices
In this chapter you will
• Learn the role of defensive coding in improving secure code
• Explore declarative vs. programmatic security
• Explore the implications of memory management and security
• Examine interfaces and error handling
• Explore the primary mitigations used in defensive coding
Secure code is more than just code that is free of vulnerabilities and defects. Developing code that will
withstand attacks requires additional measures, such as defensive coding practices. Adding in a series of
controls designed to enable the software to operate properly even when conditions change or attacks
occur is part of writing secure code. This chapter will examine the principles behind defensive coding.
Declarative vs. Programmatic Security
Security can be instantiated in two different ways in code: in the container itself or in the content of the
container. Declarative programming specifies what is to be accomplished, but not how. An example is
SQL, where the “what” is described and the SQL engine manages the “how.” Thus, declarative security
refers to defining security relations with respect
to the container. Using a container-based approach to instantiating security creates a solution that is
more flexible, with security rules that are configured as part of the deployment and not the code itself.
Security is managed by the operational personnel, not the development team.
Imperative programming, also called programmatic security, is the opposite case, where the security
implementation is embedded into the code itself. This can enable a much greater granularity in the
approach to security. This type of fine-grained security, under programmatic control, can be used to
enforce complex business rules that would not be possible under an all-or-nothing container-based
approach. This is an advantage for specific conditions, but it tends to make code less portable or
reusable because of the specific business logic that is built into the program.
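The contrast can be illustrated with a short sketch. The names here (ROLE_RULES, can_access, approve_payment) are hypothetical and not tied to any particular framework; this is only an illustration of where the rule lives in each style.

```python
# Declarative style: the rule lives in configuration (the "container"),
# not in the code, so operations staff can change it without a rebuild.
ROLE_RULES = {                          # in practice, loaded from a
    "/reports": {"analyst", "admin"},   # deployment descriptor or config file
    "/admin":   {"admin"},
}

def can_access(role: str, path: str) -> bool:
    return role in ROLE_RULES.get(path, set())

# Programmatic (imperative) style: the rule is embedded in code, which
# permits fine-grained business logic a container cannot express.
def approve_payment(role: str, amount: float) -> bool:
    if role == "manager":
        return amount <= 10_000     # managers limited to 10,000 per payment
    return role == "admin"          # admins have no limit
```

Note how the declarative rule can be reconfigured at deployment, while the amount threshold in the programmatic check would require a code change, which is exactly the portability trade-off described above.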
The choice of declarative or imperative security functions, or even a mix of both, is a design-level
decision. Once the system is designed with a particular methodology, then the secure development
lifecycle (SDL) can build suitable protections based on the design. This is one of the elements that
requires an early design decision, as many other elements are dependent upon it.
Bootstrapping
Bootstrapping refers to the self-sustaining startup process that occurs when a computer starts or a
program is initiated. When a computer system is started, an orchestrated set of activities is begun that
includes power on self-test (POST) routines, boot loaders, and operating system initialization activities.
Securing a startup sequence is a challenge—malicious software is known to interrupt the bootstrapping
process and insert its own hooks into the operating system.
When coding an application that relies upon system elements, such as environment variables like path,
care must be taken to ensure that values are not being changed outside the control of the application.
Using configuration files to manage startup elements and keeping them under application control can
help in securing the startup and operational aspects of the application.
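One way to keep such values under application control is to overwrite inherited environment settings at startup rather than trusting them. The sketch below is a hypothetical hardening step; the trusted directory list stands in for values an application would read from its own protected configuration file.

```python
import os

# Stand-in for directories read from an app-owned, protected config file.
TRUSTED_DIRS = ["/usr/bin", "/bin"]

def hardened_environment(env: dict) -> dict:
    """Return a copy of the environment with attacker-influenced values removed."""
    safe = dict(env)
    # Overwrite PATH with known-good values; never inherit it from the caller.
    safe["PATH"] = os.pathsep.join(TRUSTED_DIRS)
    # Drop variables known to allow code injection into child processes.
    safe.pop("LD_PRELOAD", None)
    return safe
```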
Cryptography is a complex issue, and one that changes over time as weaknesses in algorithms are
discovered. When an algorithm is known to have failed, as in the case of Data Encryption Standard
(DES), MD5, RC2, and a host of others, there needs to be a mechanism to efficiently replace it in
software. History has shown that the cryptographic algorithms we depend upon today will be
deprecated in the future. Cryptography can be used to protect confidentiality and integrity of data when
at rest, in transit (communication), or even in some cases when being acted upon. This is achieved
through careful selection of proper algorithms and proper implementation.
Cryptographic agility is the ability to manage the specifics of cryptographic function that are embodied
in code without recompiling, typically through a configuration file. Most often, this is as simple as
switching from an insecure to a more secure algorithm. The challenge is in doing this without replacing
the code itself.
Producing cryptographically agile code is not as simple as it seems. The objective is to create software
that can be reconfigured on the fly via configuration files. There are a couple of ways of doing this, and
they involve using library calls for cryptographic functions. The library calls are then abstracted in a
manner by which assignments are managed via a configuration file. This enables the ability to change
algorithms via a configuration file change and a program restart.
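A minimal sketch of this pattern follows, using Python's standard library. The configuration format and section names are illustrative assumptions; the point is that the algorithm is named in data, not in code, so replacing SHA-256 with SHA-3 is a configuration change plus a restart.

```python
import configparser
import hashlib
import io

def load_hash_function(config_text: str):
    """Build a hashing callable from a config file's algorithm setting."""
    cfg = configparser.ConfigParser()
    cfg.read_file(io.StringIO(config_text))   # stand-in for reading app.ini
    name = cfg["crypto"]["hash_algorithm"]
    if name not in hashlib.algorithms_available:
        raise ValueError(f"unapproved algorithm: {name}")
    # The rest of the program calls this abstraction, never hashlib directly.
    return lambda data: hashlib.new(name, data).hexdigest()
```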
Cryptographic agility can also assist in the international problem of approved cryptography. In some
cases, certain cryptographic algorithms are not permitted to be exported to or used in a particular
country. Rather than creating different source-code versions for each country, agility can allow the code
to be managed via configurations.
Cryptographic agility functionality is a design-level decision. Once the decision is made with respect to
whether cryptographic agility is included or not, then the SDL can build suitable protections based on
the design. This is one of the elements that requires an early design decision, as many other elements
are dependent upon it.
EXAM TIP When communications between elements involve sessions—unique communication channels
tied to transactions or users—it is important to secure the session to prevent failures that can cascade
into unauthorized activity. Session management requires sufficient security provisions to guard against
attacks such as brute-force, man-in-the-middle, hijacking, replay, and prediction attacks.
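One concrete defense against prediction attacks is generating session identifiers from a cryptographically secure random source rather than a counter or timestamp. A minimal sketch (the function name and length are illustrative choices, not a standard):

```python
import secrets

def new_session_id(nbytes: int = 32) -> str:
    # secrets draws from the OS cryptographic random source,
    # making session IDs infeasible to predict or brute-force.
    return secrets.token_urlsafe(nbytes)
```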
Handling Configuration Parameters
Configuration parameters can change the behavior of an application. Securing configuration parameters
is an important issue when configuration can change programmatic behaviors. Managing the security of
configuration parameters can be critical. To determine the criticality of configuration parameters, one
needs to analyze what application functionality is subject to alteration. The risk ranges from virtually
none, for parameters of no significance, to extremely high, when critical functions such as cryptographic
functions can be changed or disabled.
Securing critical data such as configuration files is not a subject to be taken lightly. As in all risk-based
security issues, the level of protection should be commensurate with the risk of exposure. When
designing configuration setups, it is important to recognize the level of protection needed. The simplest
levels include having the file in a directory protected by the access control list (ACL); the extreme end
would include encrypting the sensitive data that is stored in the configuration file.
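At the simpler end of that protection spectrum, an application can verify that its configuration file is not writable by untrusted users before loading it. The following is a hypothetical POSIX-only check, a sketch of the ACL-based approach rather than a complete access control solution:

```python
import os
import stat

def config_is_protected(path: str) -> bool:
    """Reject configuration files that other users could modify."""
    mode = os.stat(path).st_mode
    # Group- or world-writable config files can be altered by an attacker.
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))
```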
Configuration data can also be passed to an application by a calling application. This can occur in a
variety of ways—for example, as part of a URL string or as a direct memory injection—based on
information provided by the target application. Testing should explore the use of URLs, cookies, temp
files, and other settings to validate correct handling of configuration data.
Memory Management
Memory management is a crucial aspect of code security. Memory is used to hold the operational code,
data, variables, and working space. Memory management is a complex issue because of the dynamic
nature of the usage of memory across a single program, multiple programs, and the operating system.
The allocation and management of memory is the responsibility of both the operating systems and the
application. In managed code applications, the combination of managed code and the intermediate
code execution engine takes care of memory management, and type safety makes the tasking easier.
Memory management is one of the principal strengths of managed code. Another advantage of
managed code is the automatic lifetime control over all resources. Because the code runs in a sandbox
environment, the runtime engine maintains control over all resources.
In unmanaged code situations, the responsibility for memory management is shared between the
operating system and the application, with the task being even more difficult because of the issues
associated with variable type mismatch. In unmanaged code, virtually all operations associated with
resources and memory are the responsibility of the developer, including garbage collection, thread
pooling, memory overflows, and more. As in all situations, complexity is the enemy of security.
Type safety is the extent to which a programming language prevents errors resulting from different data
types in a program. Type safety can be enforced either statically at compile time or dynamically at
runtime to prevent errors. Type safety is linked to memory safety. Type-safe code will not inadvertently
access arbitrary locations of memory outside the expected memory range. Type safety requires that all
variables be declared with a type, and this typing defines their memory footprint. One result of this
definition is that type-safe programming resolves many memory-related issues automatically.
Locality is a principle that, given a memory reference by a program, subsequent memory accesses are
often predictable and in close proximity to previous references. Buffer overflows are a significant
issue associated with memory management and malicious code. There are various memory attacks that
take advantage of the locality principle, and there are corresponding defenses against memory
corruption based on locality attacks. Address Space Layout Randomization (ASLR) is a memory
management technique, implemented in Windows and other modern operating systems, that randomizes
memory layout to defend against locality attacks.
Error Handling
No application is perfect, and given enough time, they will all experience failure. How an application
detects and handles failures is important. Some errors are user driven; some can be unexpected
consequences or programmatic errors. The challenge is in how the application responds when an error
occurs. This is referred to as error handling. The specific coding aspect of error handling is referred to as
exception management.
When errors are detected and processed by an application, it is important for the correct processes to
be initiated. If logging of critical information is a proper course of action, one must take care not to
expose sensitive information such as personally identifiable information (PII) in the log entries. If
information is being sent to the screen or terminal, then again, one must take care as to what is
displayed. Disclosing paths, locations, passwords, userids, or any of a myriad of other information that
would be useful to an adversary should be avoided.
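These two rules, log the detail safely and show the user nothing sensitive, can be sketched as follows. The redaction pattern and messages are hypothetical examples; a production system would redact more categories of PII than the single email pattern shown here.

```python
import re

# Illustrative PII pattern; real redaction covers far more than email.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def safe_log_message(detail: str) -> str:
    """Redact obvious PII before the detail reaches a log entry."""
    return EMAIL.sub("[REDACTED]", detail)

def user_facing_error() -> str:
    # Never echo file locations, user IDs, or stack traces to the screen.
    return "An internal error occurred. Please contact support."
```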
Exception management is the programmatic response to the occurrence of an exception during the
operation of a program. Properly coded for, exceptions are handled by special functions in code referred
to as exception handlers. Exception handlers can be designed to specifically address known exceptions
and handle them according to pre-established business rules.
There are some broad classes of exceptions that are routinely trapped and handled by software.
Arithmetic overflows are a prime example. Properly coded for, trapped, and handled with business logic,
this type of error can be handled inside software itself. Determining appropriate recovery values from
arithmetic errors is something that the application is well positioned to do, and something that the
operating system is not.
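A trivial sketch of this idea: trap the arithmetic error where it occurs and substitute a recovery value that the business logic defines, something the operating system could never choose on the application's behalf. The function and the zero-orders rule are hypothetical.

```python
def average_order_value(total: float, order_count: int) -> float:
    try:
        return total / order_count
    except ZeroDivisionError:
        # Business rule: no orders means an average of zero, not a crash
        # handed off to the operating system.
        return 0.0
```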
Part of the development of an application should be an examination of the ways in which the
application could fail, and also the correct ways to address those failures. This is a means of defensive
programming, for if the exceptions are not trapped and handled by the application, they will be handled
by the operating system. The operating system (OS) does not have the embedded knowledge necessary
to properly handle the exceptions.
Exceptions are typically not security issues—however, unhandled exceptions can become security
issues. If the application properly handles an exception, then ultimately through logging of the condition
and later correction by the development team, rare, random issues can be detected and fixed over the
course of versions. Exceptions that are unhandled by the application or left to the OS to handle are the
ones where issues such as privilege escalation typically occur.
Interface Coding
Application programming interfaces (APIs) define how software components are connected to and
interacted with. Modern software development is done in a modular fashion, using APIs to connect the
functionality of the various modules. APIs are significant in that they represent entry points into
software. The attack surface analysis and threat model should identify the APIs that could be attacked
and the mitigation plans to limit the risk. Third-party APIs that are being included as part of the
application should also be examined, and errors or issues be mitigated as part of the SDL process. Older,
weak, and deprecated APIs should be identified and not allowed into the final application.
On all interface inputs into your application, it is important to have the appropriate level of
authentication. It is also important to audit the external interactions for any privileged operations
performed via an interface.
Primary Mitigations
There is a set of primary mitigations that have been established over time as proven best practices. As
a CSSLP, you should have these standard tools in your toolbox. An understanding of each, along with
where and how it can be applied, is essential knowledge for all members of the development team.
These will usually be employed through the use of the threat report. The standard best practice–based
primary mitigations are as follows:
• Lock down your environment.
• Establish and maintain control over all of your inputs.
• Establish and maintain control over all of your outputs.
• Assume that external components can be subverted and your code can be read by anyone.
• Use libraries and frameworks that make it easier to avoid introducing weaknesses.
• Use industry-accepted security features instead of inventing your own.
• Integrate security into the entire software development lifecycle.
• Use a broad mix of methods to comprehensively find and prevent weaknesses.
Defensive coding is not a black art; it is merely applying the materials detailed in the threat report.
Attack surface reduction, an understanding of common coding vulnerabilities, and standard mitigations
are the foundational elements of defensive coding. Additional items in the defensive coding toolkit
include code analysis, code review, versioning, cryptographic agility, memory management, exception
handling, interface coding, and managed code.
EXAM TIP Concurrency is the condition of two or more threads in a program executing simultaneously.
Concurrency can be an issue when these threads access a common object, creating a shared object
property. Should they change the state of the shared object, the conditions for a race condition apply.
Controlling concurrency is one method of controlling for race conditions.
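The following sketch shows the control described in the tip: two threads perform a read-modify-write on a shared object, and a lock serializes the critical section so the race condition cannot occur. The counter class is an illustrative example, not a library API.

```python
import threading

class SharedCounter:
    """A shared object whose state change is guarded by a lock."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:          # controlling concurrency controls the race
            current = self.value  # read
            self.value = current + 1  # modify-write, now atomic as a pair

def run(counter, n=10_000):
    """Two threads each increment n times; with the lock, no update is lost."""
    threads = [
        threading.Thread(target=lambda: [counter.increment() for _ in range(n)])
        for _ in range(2)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value
```

Without the lock, the interleaved read and write would intermittently lose increments, the classic symptom of an uncontrolled race condition.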
EXAM TIP To maintain the security of sensitive data, a common practice is tokenization. Tokenization is
the replacement of sensitive data with data that has no external connection to the sensitive data. In the
case of a credit card transaction, for example, the credit card number and expiration date are
considered sensitive and are not to be stored, so restaurants typically print only the last few digits with
XXXXs for the rest, creating a token for the data, but not disclosing the data.
Learning from Past Mistakes
Software engineering is not a new thing. Nor are security issues. One of the best sources of information
regarding failures comes from real-world implementation errors in the industry. When company ABC
makes the news that it has to remediate a security issue, such as a back door in a product left by the
development team, this should be a wake-up call to all teams in all companies.
Errors are going to happen. Mistakes and omissions occur. But to repeat problems once they are known
is a lot harder to explain to customers and management, especially when these errors are of significant
impact and expensive to remediate, both for the software firm and the customer. Learning from others
and adding their failures to your own list of failures to avoid is a good business practice.
Part of the role of the security team is keeping the list of security requirements up to date for projects.
Examining errors from other companies and updating your own set of security requirements to prevent
your firm from falling into known pitfalls will save time and money in the long run.
Chapter Review
This chapter opened with an analysis of the differences between declarative and programmatic security.
An examination of bootstrapping, cryptographic agility, and secure handling of configuration parameters
followed suit. Memory management and the related issues of type-safe practices and locality were
presented. Error handling, including exception management, was presented as an important element in
defensive coding. The security implications of the interface coding associated with APIs were presented.
The chapter closed with an examination of the primary mitigations that are used in defensive coding.
Quick Review
• Declarative security refers to defining security relations with respect to the container.
• Programmatic security is where the security implementation is embedded into the code itself.
• Cryptographic agility is the ability to manage the specifics of cryptographic function that are
embodied in code without recompiling, typically through a configuration file.
• Securing configuration parameters is an important issue when configuration can change programmatic
behaviors.
• Memory management is a crucial aspect of code security.
• In managed code applications, the combination of managed code and the intermediate code
execution engine takes care of memory management, and type safety makes the tasking easier.
• In unmanaged code situations, the responsibility for memory management is shared between the
operating system and the application, with the task being even more difficult because of the issues
associated with variable type mismatch.
• Type-safe code will not inadvertently access arbitrary locations of memory outside the expected
memory range.
• Locality is a principle that, given a memory reference by a program, subsequent memory accesses are
often predictable and are in close proximity to previous references.
• Exception management is the programmatic response to the occurrence of an exception during the
operation of a program.
• APIs are significant in that they represent entry points into software.
• A set of primary mitigations has been established over time as proven best practices.
Secure Software Coding Operations
In this chapter you will
• Learn how code reviews can improve security
• Learn basic tools used in building software
• Discover how static and dynamic code analysis can improve code
• Examine antitampering mechanisms that can improve integrity
• Explore the use of configuration management with source code and versioning
When coding operations commence, tools and techniques can be used to assist in the assessment of the
security level of the code under development. Code can be analyzed either statically or dynamically to
find weaknesses and vulnerabilities. Manual code reviews by the development team can provide
benefits both to the code and the team. Code quality does not end with development, as the code
needs to be delivered and installed both intact and correctly on the target system.
Code Analysis (Static and Dynamic)
Code analysis is a term used to describe the processes to inspect code for weaknesses and
vulnerabilities. It can be divided into two forms: static and dynamic. Static analysis involves examination
of the code without execution. Dynamic analysis involves the execution of the code as part of the
testing. Both static and dynamic analyses are typically done with tools, which are much better at the
detailed analysis steps needed for any but the smallest code samples.
Code analysis can be performed at virtually any level of development, from unit level to subsystem to
system to complete application. The higher the level, the greater the test space and more complex the
analysis. When the analysis is done by teams of humans reading the code, typically at the smaller unit
level, it is referred to as code reviews. Code analysis should be done at every level of development,
because the sooner that weaknesses and vulnerabilities are discovered, the easier they are to fix. Issues
found in design are cheaper to fix than those found in coding, which are cheaper than those found in
final testing, and all of these are cheaper than fixing errors once the software has been deployed.
Static code analysis is when the code is examined without being executed. This analysis can be
performed on both source and object code bases. The term source code is typically used to designate
the high-level language code, although technically, source code is the original code base in any form,
from high language to machine code. Static analysis can be performed by humans or tools, with humans
limited to the high-level language, while tools can be used against virtually any form of code base.
Static code analysis is frequently performed using automated tools. These tools are given a variety of
names, but are commonly called source code analyzers. Sometimes, extra phrases, such as binary
scanners or byte code scanners, are used to differentiate the tools. Static tools use a variety of
mechanisms to search for weaknesses and vulnerabilities. Automated tools can provide advantages
when checking syntax, approved function/library calls, and examining rules and semantics associated
with logic and calls. They can catch elements a human might overlook.
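The essence of what such a tool does can be sketched in a few lines: parse the source into a syntax tree and inspect it without ever executing it. The deny list below is an illustrative fragment, not any real scanner's rule set.

```python
import ast

# Illustrative deny list; real analyzers carry thousands of rules.
BANNED_CALLS = {"eval", "exec"}

def find_banned_calls(source: str) -> list:
    """Statically flag calls to known-dangerous functions, with line numbers."""
    findings = []
    for node in ast.walk(ast.parse(source)):   # inspect, never execute
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                findings.append((node.func.id, node.lineno))
    return findings
```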
Dynamic analysis is performed while the software is executed, either on a target or emulated system.
The system is fed specific test inputs designed to produce specific forms of behaviors. Dynamic analysis
can be particularly important on systems such as embedded systems, where a high degree of
operational autonomy is expected. As a case in point, inadequate testing of reused software on the
Ariane rocket program led to the loss of the first Ariane 5 booster during launch in 1996. Subsequent
analysis showed that if proper testing had been performed, the error conditions could have been
detected and corrected without the loss of the flight vehicle.
Dynamic analysis requires specialized automation to perform specific testing. There are dynamic test
suites designed to monitor operations for programs that have high degrees of parallel functions. There
are thread-checking routines to ensure multicore processors and software are managing threads
correctly. There are programs designed to detect race conditions and memory addressing errors.
Code reviews are a team-based activity where members of the development team inspect code. The
premise behind peer-based code review is simple. Many eyes can discover what one does not see. This
concept is not without flaws, however, as humans have a limited ability to parse multilayered,
obfuscated code. But herein lies the rub: the objective of most programming efforts is to produce
clean, highly legible code that works not only now, but that a new developer can later understand,
see how it works, and modify appropriately. This makes the primary mission of code review a shared
one: finding potential weaknesses or vulnerabilities and assisting developers in producing clean,
understandable code.
The process of the review is simple. The author of the code explains to the team, step by step, line by
line, how the code works. The rest of the team can look for errors that each has experienced in the past
and observe coding style, level of comments, etc. Having to present and explain their code to the team
leads developers to write cleaner, more defensible code, which has the benefit of making the code more
maintainable in the long run. Explaining how the code works also helps others on the team understand
it and provides backup coverage if a developer leaves the team and someone else is assigned to modify
the code.
Code walkthroughs are ideal times for checking for and ensuring mitigation against certain types of
errors. Lists of common defects, such as the SANS Top 25 and the OWASP Top 10, can be checked. The
list of previous errors experienced by the firm can be checked, for if it happened once, it is best not to
repeat those issues. Unauthorized code elements, including Easter eggs and logic bombs, are much
harder to include in code if the entire team sees all the code. A partial list of errors and how they can be
caught with walkthroughs is shown in Table 14-1.
Table 14-1 Issues for Code Reviews
Another advantage of code reviews is in the development of junior members of the development team.
Code walkthroughs can be educational, both to the presenter and to those in attendance. Members of
the team automatically become familiar with aspects of a project that they are not directly involved in
coding, so if they are ever assigned a maintenance task, the total code base belongs to the entire team,
not different pieces to different coders. Treating the review as a team event, with learning and in a
nonhostile manner, produces a stronger development team as well.
Creating software in a modern development environment is a multistep process. Once the source code
is created, it must still be compiled, linked, tested, packaged (including signing), and distributed. There is
typically a tool or set of tools for each of these tasks. Building software involves partially applying these
tools with the correct options set to create the correct outputs. Options on elements such as compilers
are important, for the options can determine what tests and error checks are performed during the
build process.
Organizations employing a secure development lifecycle (SDL) process will have clearly defined
processes and procedures to ensure the correct tools are used and used with the correct settings. Using
these built-in protections can go a long way toward ensuring that the code being produced does not
have issues that should have been caught during development.
EXAM TIP Compilers can have flag options, such as Microsoft’s /GS compiler switch, which enables
stack overflow protection in the form of a cookie to be checked at the end of the function, prior to the
use of the return address. Use of these options can enhance code security by mitigating common stack
overflow vulnerabilities.
Determining the correct set of tools and settings is not a simple task. Language dependencies and legacy
issues make these choices difficult, and yet these are essential steps if one is to fully employ the
capabilities of these tools. Microsoft’s SDL guidelines have required settings for compilers, linkers, and
code analysis tools. Enabling these options will result in more work earlier in the process, but will reduce
the potential for errors later in the development process, where remediation is more time consuming.
In addition to the actual tools used for building, there is an opportunity to define safe libraries.
Approved libraries for cryptographic and other difficult tasks can make function call errors less
likely. Create a library of safe function calls for common problem areas such as buffer
overflows, XSS, and injection attacks. Examples of these libraries are the OWASP Enterprise Security API
project and the Microsoft Anti-Cross Site Scripting Library for .NET.
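The idea behind such safe libraries can be sketched with output encoding, one of the standard defenses against XSS: route every piece of user input through a single escaping helper instead of concatenating it into markup by hand. This sketch uses Python's standard library escaper and a hypothetical rendering function; it is analogous in spirit to, not an implementation of, the libraries named above.

```python
import html

def render_comment(user_input: str) -> str:
    """All HTML output passes through one escaping chokepoint."""
    # Escaping turns markup characters into inert entities, so injected
    # <script> tags render as text rather than executing.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"
```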
Integrated Development Environment (IDE)
Automated tools can be built into the integrated development environment, making it easy for the
developer to perform both static and dynamic checking automatically. Integrated development
environments have come a long way in their quest to improve workflow and developer productivity.
The current version of Microsoft’s Visual Studio integrates from requirements to data design to coding
and testing, all on a single team-based platform that offers integrated task management, workflow,
code analysis, and bug tracking.
A wide array of IDEs exists for different platforms and languages, with varying capabilities. Using
automation such as a modern IDE is an essential part of an SDL, for it eliminates a whole range of simple
errors and allows tracking of significant metrics. Although using an advanced IDE means a learning curve
for the development team, this curve is short compared to the time saved by using the tool. Each daily
build, and each issue prevented early through more efficient work, saves time that would otherwise be
lost to rework and repair after issues are found, either later in testing or in the field.
Antitampering Techniques
An important factor in ensuring that software is genuine and has not been altered is a method of testing
the software integrity. With software being updated across the Web, how can one be sure that the code
received is genuine and has not been tampered with? The answer comes from the application of digital
signatures to the code, a process known as code signing.
Code signing involves applying a digital signature to code, providing a mechanism where the end user
can verify the code integrity. In addition to verifying the integrity of the code, digital signatures provide
evidence as to the source of the software. Code signing rests upon the established public key
infrastructure. To use code signing, a developer will need a key pair. For this key to be recognized by the
end user, it needs to be signed by a recognized certificate authority.
Automatic update services, such as Microsoft’s Windows Update service, use code signing technologies
to ensure that updates are only applied if they are proper in content and source. This technology is built
into the update application, requiring no specific interaction from the end user to ensure authenticity or
integrity of the updates.
EXAM TIP Code signing provides a means of authenticating the source and integrity of code. It cannot
ensure that code is free of defects or bugs.
Steps to Code Signing
1. The code author uses a one-way hash of the code to produce a digest.
2. The digest is encrypted with the signer’s private key.
3. The code and the signed digest are transmitted to end users.
4. The end user produces a digest of the code using the same hash function as the code author.
5. The end user decrypts the signed digest with the signer’s public key.
6. If the two digests match, the code is authenticated and integrity is assured.
Code signing should be used for all software distribution, and is essential when the code is distributed
via the Web. End users should not update or install software without some means of verifying the proof
of origin and the integrity of the code being installed. Code signing will not guarantee that the code is
defect free; it only demonstrates that the code has not been altered since it was signed and identifies
the source of the code.
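Steps 1, 4, and 6 above can be sketched with the standard library: produce a digest of the code, reproduce it on the receiving end, and compare. The private-/public-key encryption of the digest (steps 2 and 5) requires an asymmetric cryptography library, so it is represented here by a keyed MAC purely as an illustrative stand-in; this is not how real code signing exchanges keys.

```python
import hashlib
import hmac

SECRET = b"stand-in for the signer's key pair"  # illustrative only

def sign_code(code: bytes) -> bytes:
    digest = hashlib.sha256(code).digest()              # step 1: hash the code
    return hmac.new(SECRET, digest, "sha256").digest()  # stand-in for step 2

def verify_code(code: bytes, signature: bytes) -> bool:
    digest = hashlib.sha256(code).digest()              # step 4: rehash locally
    expected = hmac.new(SECRET, digest, "sha256").digest()
    return hmac.compare_digest(expected, signature)     # step 6: digests match?
```

As the exam tip notes, a matching result proves only origin and integrity; tampered code fails verification, but signed code can still contain defects.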
Configuration Management: Source Code and Versioning
Development of computer code is not a simple “write it and be done” task. Modern applications take
significant time to build all the pieces and assemble a complete functioning product. The individual
pieces all go through a series of separate builds or versions. Some programming shops do daily builds,
slowly building a stable code base from stable parts. Managing the versions and changes associated
with all these individual pieces is referred to as version control. Sometimes referred to as revision
control, the objective is to uniquely mark and manage each individually different release. This is typically
done with numbers or combinations of numbers and letters, with numbers to the left of the decimal
point indicating major releases, and numbers on the right indicating the level of change relative to the
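The major/minor numbering scheme described above can be compared programmatically by splitting on the decimal point. This is a hypothetical sketch (the `parse` helper is illustrative, not from the text):

```python
def parse(version: str) -> tuple:
    """Split a dotted version string into a tuple of integers so that
    tuple comparison orders releases correctly."""
    return tuple(int(part) for part in version.split("."))

# A minor change within major release 2: 2.10 is newer than 2.9
# (naive string comparison would get this wrong, since "2.10" < "2.9").
print(parse("2.10") > parse("2.9"))   # True

# A major release outranks any prior minor release.
print(parse("3.0") > parse("2.15"))   # True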
As projects grow in size and complexity, a version control system, capable of tracking all the pieces and
enabling complete management, is needed. Suppose you need to go back two minor versions on a
config file—which one is it, how do you integrate it into the build stream, and how do you manage the
variants? These are questions the management team must answer, and they are handled by the version
control system. The version control system can also manage access to source files, locking sections of
code so that only one developer at a time can check out and modify pieces of code. This prevents two
different developers from silently overwriting each other’s work. This can also be handled by
allowing multiple edits and then performing a version merge of the changes, although this can create
issues if collisions are not properly managed by the development team.
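The lock-based checkout model described above can be sketched in a few lines. This is a hypothetical in-memory demo (the `check_out`/`check_in` helpers are illustrative, not the API of any real version control system):

```python
# Pessimistic locking: only one developer may hold a file at a time.
checked_out = {}  # filename -> developer currently holding the lock


def check_out(filename: str, developer: str) -> None:
    """Grant the lock, or refuse if another developer already holds it."""
    if filename in checked_out:
        raise RuntimeError(f"{filename} is locked by {checked_out[filename]}")
    checked_out[filename] = developer


def check_in(filename: str, developer: str) -> None:
    """Release the lock, but only for the developer who holds it."""
    if checked_out.get(filename) == developer:
        del checked_out[filename]


check_out("auth.c", "alice")
try:
    check_out("auth.c", "bob")   # second checkout is refused
except RuntimeError as err:
    print(err)                   # auth.c is locked by alice
check_in("auth.c", "alice")
check_out("auth.c", "bob")       # lock released, so bob may now edit
```

Optimistic systems instead allow concurrent edits and merge the results, trading the checkout bottleneck for the collision-management burden the text mentions.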
Configuration management and version control operations are highly detailed, with lots of
recordkeeping. Management of this level of detail is best done with an automated system that removes
human error from the operational loop. The level of detail across the breadth of a development team
makes automation the only way in which this can be done in an efficient and effective manner. A wide
range of software options are available to a development team to manage this information. Once a
specific product is chosen, it can be integrated into the SDL process to make its use a nearly transparent
operation from the development team’s perspective.
Chapter Summary
In this chapter, you were acquainted with the tools and techniques employed in the actual creation of
software. The use of code analyzers, both static and dynamic, to ensure that the pieces of software
being constructed are free of weaknesses and vulnerabilities was covered. An examination of the
advantages of code walkthroughs was presented, along with a list of typical errors that should be
uncovered during such an exercise.
An examination of the build environment, the tools, and the processes was presented. Compilers should
be properly configured with the specific options associated with the type of program to ensure proper
error checking and defensive build elements. Using tools and compilers to do bounds checking and to
create stack overflow mitigations provides significant benefit to the overall code resilience. The use of
integrated development environments to automate development and manage testing functions was also
presented.
Software is built from many smaller pieces, each of which requires tracking and versioning. The
advantages of automated version control systems were presented, along with version tracking
methodologies. The use of antitampering mechanisms and code signing was presented as a method of
extending control from the development team to the operational team upon installation.
• Code should be inspected during development for weaknesses and vulnerabilities.
• Static code analysis is performed without executing the code.
• Dynamic code analysis examines the code while it is executing, under realistic operating conditions.
• Code walkthroughs are team events designed to find errors using human-led inspection of source code.
• Software development is a highly automated task, with many tools available to assist developers in
efficient production of secure code.
• Integrated development environments provide a wide range of automated functionality designed to
make the development team more productive.
• Compilers and tools can be configured to do specific testing of code during the production process,
and they need to be integrated into the SDL environment.
• Code can be cryptographically signed to demonstrate both authenticity and integrity.
• The management of the various elements of code, files, and settings requires a configuration
management/versioning control system to do this efficiently and effectively.
ISEC 620 Homework 5
Defensive coding practices are among the most critical proactive security countermeasures in the SDLC. If software developers follow certain security best practices, most weaknesses can be eliminated. In this module’s readings, you looked at defensive tactics used in the development of software. You also learned about the OWASP proactive controls.
Question 1 – Extract the defensive coding practices from Chapter 13 of Conklin & Shoemaker. Explain each coding practice in one short paragraph.
Question 2 – For each coding practice, describe a corresponding CWE (https://cwe.mitre.org/) and OWASP proactive control (https://owasp.org/www-project-proactive-controls/).
Question 3 – Weekly Learning and Reflection
In two to three paragraphs of prose (i.e., sentences, not bullet lists) using APA style citations if needed, summarize and interact with the content that was covered this week in class. In your summary, you should highlight the major topics, theories, practices, and knowledge that were covered. Your summary should also interact with the material through personal observations, reflections, and applications to the field of study. In particular, highlight what surprised, enlightened, or otherwise engaged you. Make sure to include at least one thing that you’re still confused about or ask a question about the content or the field. In other words, you should think and write critically not just about what was presented but also what you have learned through the session. Questions asked here will be summarized and answered anonymously in the next class.