Proposing SQL Statement Coverage Metrics

Revision as of 18:26, 11 March 2013

Ben Smith, Younghee Shin, and Laurie Williams

Abstract

An increasing number of cyber attacks occur at the application layer, when attackers use malicious input. These input validation vulnerabilities can be exploited by (among others) SQL injection, cross-site scripting, and buffer overflow attacks. Statement coverage and similar test adequacy metrics have historically been used to assess the level of functional and unit testing which has been performed on an application. However, these currently available metrics do not highlight how well the system protects itself through validation. In this paper, we propose two SQL injection input validation testing adequacy metrics: target statement coverage and input variable coverage. A test suite which satisfies both adequacy criteria can be leveraged as a solid foundation for input validation scanning with a blacklist. To determine whether it is feasible to calculate values for our two metrics, we perform a case study on a web healthcare application and discuss some implementation issues we have encountered. We find that the web healthcare application scored 96.7% target statement coverage and 98.5% input variable coverage.

1. Introduction

According to the National Vulnerability Database (NVD), more than half of the ever-increasing number of cyber vulnerabilities reported from 2002 to 2006 were input validation vulnerabilities. As Figure 1 shows, the number of input validation vulnerabilities is still increasing.

PLACEHOLDER FOR FIGURE 1<ref>We counted the reported instances of vulnerabilities by using the keywords “SQL injection”, “cross-site scripting”, “XSS”, and “buffer overflow” within the input validation error category from NVD.</ref>

Figure 1 illustrates the number of reported instances of each type of cyber vulnerability listed in the series legend for each year displayed on the x-axis. The curve with square-shaped points is the sum of all reported vulnerabilities that fall into the categories “SQL injection”, “XSS”, or “buffer overflow” when querying the National Vulnerability Database. The curve with diamond-shaped points represents all cyber vulnerabilities reported for the year on the x-axis. For several years now, the number of reported input validation vulnerabilities has been half the total number of reported vulnerabilities. Additionally, the graph demonstrates that these curves are monotonically increasing, indicating that we are unlikely to see a future drop in the ratio of reported input validation vulnerabilities.

Input validation testing is the process of writing and running test cases to investigate how a system responds to malicious input, with the intention of using tests to mitigate the risk of a security threat. Input validation testing can increase confidence that input validation has been properly implemented. The goal of input validation testing is to check whether input is validated against the constraints given for that input. Input validation testing should test both whether legal input is accepted and whether illegal input is rejected. A coverage metric can quantify the extent to which this goal has been met. Various coverage criteria have been defined based on the target of testing (a specification or a program) and the underlying testing methods (structural, fault-based, and error-based) <ref name="a19">H. Zhu, P. A. V. Hall, and J. H. R. May, "Software Unit Test Coverage and Adequacy," ACM Computing Surveys, vol. 29, no. 4, 1997.</ref>. Statement coverage and branch coverage are well-known program-based structural coverage criteria <ref name="a19" />.
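The two directions described above can be sketched in a small, self-contained example. This is not code from the paper's case study: the validator, its whitelist rule, and all names are hypothetical, chosen only to illustrate that a test suite should exercise both acceptance of legal input and rejection of illegal input.

```java
import java.util.regex.Pattern;

public class InputValidationSketch {
    // Hypothetical whitelist constraint: usernames are 3-20 alphanumeric characters.
    static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9]{3,20}$");

    static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        // Direction 1: legal input must be accepted.
        System.out.println(isValidUsername("alice42"));      // true
        // Direction 2: illegal (potentially malicious) input must be rejected.
        System.out.println(isValidUsername("' OR '1'='1"));  // false
    }
}
```

A test suite that checks only the first direction can score well on functional criteria while saying nothing about how the system handles hostile input, which is the gap the proposed metrics aim to expose.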

However, current structural coverage metrics, and the tools which implement them, do not provide specific information about insufficient or missing input validation. New coverage criteria that measure the adequacy of input validation testing can highlight the level of security testing performed. Our research objective is to propose and validate two input validation testing adequacy metrics related to SQL injection vulnerabilities. Our current input validation coverage criteria consist of two experimental metrics: input variable coverage, which measures the percentage of input variables used in at least one test; and target statement coverage, which measures the percentage of SQL statements executed in at least one test.

An input variable is any dynamic, user-assigned variable which an attacker could manipulate to send malicious input to the system. In the context of the Web, any field on a web form is an input variable, as are any number of other client-side input spaces. In the context of SQL injection attacks, an input variable is any variable which is sent to the database management system, as will be illustrated in further detail in Section 2. A target statement is any statement in an application which is subject to attack via malicious input; for this paper, our target statements are all SQL statements found in production code. Other input sources can be leveraged to form an attack, but we have chosen not to focus on them for this study because they comprise less than half of recently reported cyber vulnerabilities (see Figure 1 and its explanation).
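Both metrics reduce to the same ratio: the number of inventoried items (input variables or target SQL statements) touched by at least one test, over the total inventoried. The sketch below computes that ratio; the inventories and the sets of items exercised by a test run are invented example data, not figures from the paper.

```java
import java.util.Set;

public class CoverageMetricsSketch {
    // Coverage = |items exercised by >= 1 test| / |all items| * 100.
    static double coverage(Set<String> all, Set<String> exercised) {
        if (all.isEmpty()) return 100.0;
        long covered = all.stream().filter(exercised::contains).count();
        return 100.0 * covered / all.size();
    }

    public static void main(String[] args) {
        // Hypothetical inventory of the application under test.
        Set<String> inputVars = Set.of("username", "password", "searchTerm", "zipCode");
        Set<String> sqlStmts  = Set.of("SELECT_PATIENT", "INSERT_VISIT", "UPDATE_RECORD");

        // Items that a run of the test suite actually touched (assumed data).
        Set<String> varsUsed  = Set.of("username", "password", "searchTerm");
        Set<String> stmtsRun  = Set.of("SELECT_PATIENT", "INSERT_VISIT");

        System.out.printf("input variable coverage: %.1f%%%n",
                coverage(inputVars, varsUsed));   // 75.0%
        System.out.printf("target statement coverage: %.1f%%%n",
                coverage(sqlStmts, stmtsRun));    // 66.7%
    }
}
```

In the paper's study the inventories were counted manually and the exercised sets recorded dynamically during test execution; only the final division is shown here.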

In practice, even software development teams who use metrics such as traditional statement coverage often do not achieve 100% values in these metrics before production <ref name="a1">B. Beizer, Software Testing Techniques. New York, NY: Van Nostrand Reinhold Co., 1990.</ref>. If the lines left uncovered contain target statements, traditional statement coverage could be very high while little to no input validation testing is performed on the system. Conversely, a system whose target statements and input variables are each involved in at least one test achieves high scores in our input validation coverage metrics, yet may still remain insecure if those test cases do not utilize malicious forms of input. However, a system with a high score in the metrics we define has a foundation for thorough input validation testing: testers can relatively easily reuse existing test cases with multiple forms of good and malicious input. Our vision is to automate such reuse.
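The reuse idea can be illustrated by replaying one covered input path with several payloads. The guard function below is a hypothetical stand-in for the application's validation layer, and the payload list is our own example; the paper envisions automating this replay rather than hand-coding it.

```java
import java.util.List;

public class PayloadReuseSketch {
    // Hypothetical stand-in for an existing functional test path that
    // drives one covered input variable through the validation layer.
    static boolean search(String term) {
        return term.matches("^[A-Za-z0-9 ]{1,40}$");
    }

    public static void main(String[] args) {
        // One benign value plus malicious variants, replayed through the
        // same path so a covered input variable is also attacked.
        List<String> payloads = List.of(
                "aspirin",                 // good input: should be accepted
                "'; DROP TABLE users;--",  // classic SQL injection attempt
                "' OR '1'='1");            // tautology-based injection
        for (String p : payloads) {
            System.out.println(p + " -> " + (search(p) ? "accepted" : "rejected"));
        }
    }
}
```

A test case that already reaches a target statement is thus the cheapest vehicle for injecting malicious input: only the payload changes, not the test harness.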

We evaluated our metrics on the server-side code of a JavaServer Pages web healthcare application that had an extensive set of JUnit test cases. We manually counted the number of input variables and SQL statements found in this system and dynamically recorded how many of these statements and variables were used in executing a given test set. The rest of this paper is organized as follows: First, Section 2 defines SQL injection attacks. Then, Section 3 introduces our experimental metrics. Section 4 provides a brief summary of related work. Next, Section 5 describes our case study and the application of our technique. Section 6 reports the results of our study and discusses their implications. Then, Section 7 illustrates some limitations of our technique and our metrics. Finally, Section 8 concludes and discusses the future use and development of our metrics.

2. Background

Section 2.1 explains the fundamental difference between traditional testing and security testing. Then, Section 2.2 describes SQL injection.

2.1 Testing for Security

Web applications are inherently insecure <ref name="a15">D. Scott and R. Sharp, "Developing secure Web applications," IEEE Internet Computing, vol. 6, no. 6, pp. 38-45, 2002.</ref> and web applications’ attackers look the same as any other customer to the server <ref name="a12">E. Ogren, "App Security's Evolution," in DarkReading.com, 2007.</ref>. Developers should, but typically do not, focus on building security into web applications <ref name="a10">G. McGraw, Software Security: Building Security In. Upper Saddle River, NJ: Addison-Wesley Professional, 2006.</ref>. Security has been added to the list of web application quality criteria <ref name="a11">J. Offutt, "Quality attributes of Web software applications," IEEE Software, vol. 19, no. 2, pp. 25-32, 2002.</ref>, and as a result, companies have begun to incorporate security testing (including input validation testing) into their development methodologies <ref name="a3">B. Brenner, "CSI 2007: Developers need Web application security assistance," in SearchSecurity.com, 2007.</ref>. Security testing is contrasted with traditional testing, as illustrated by Figure 2: Functional vs. Security Testing, adapted from <ref name="a17">H. H. Thompson and J. A. Whittaker, "Testing for software security," Dr. Dobb's Journal, vol. 27, no. 11, pp. 24-34, 2002.</ref>.

PLACEHOLDER FOR FIGURE 2

Represented by the left-hand circle in Figure 2, the current software development paradigm includes a list of testing strategies to ensure the correctness of an application's functionality and usability as indicated by a requirements specification. With respect to intended correctness, verification typically entails creating test cases designed to discover faults by causing failures. Oracles tell us what the system should do, and failures tell us that the system does not do what it is supposed to do. The right-hand circle in Figure 2 indicates that we validate not only that the system does what it should, but also that the system does not do what it should not: the right-hand circle represents a failure occurring in the system which causes a security problem. The circles intersect because some intended functionality can introduce indirect vulnerabilities when privacy and security were not considered in designing the required functionality <ref name="a17"></ref>. Testing for functionality only validates that the application achieves what was written in the requirements specification. Testing for security validates that the application prevents undesirable security risks from occurring, even when the nature of this functionality is spread across several modules and might be

9. References

<references />