Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities

B. Smith, L. Williams, "Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities", Proceedings of the International Conference on Software Testing, Verification and Validation (ICST 2011), Berlin, Germany, pp. 220-229, 2011.

Abstract

Development organizations often do not have time to perform security fortification on every file in a product before release. One way of prioritizing security efforts is to use metrics to identify core business logic that could contain vulnerabilities, such as database interaction code. Database code is a source of SQL injection vulnerabilities, but, importantly, may also be home to other, unrelated types of vulnerabilities. The goal of this research is to improve the prioritization of security fortification efforts by investigating the ability of SQL hotspots to be used as the basis for a heuristic for prediction of all vulnerability types. We performed empirical case studies of 15 releases of two open source PHP web applications: WordPress, a blogging application, and WikkaWiki, a wiki management engine. Using statistical analysis, we show that the more SQL hotspots a file contains per line of code, the higher the probability that file will contain any type of vulnerability.

1. Introduction

We can get good designs by following good practices instead of poor ones.
~F. Brooks, Jr.

The war for a trustworthy Internet continues. The popular social networking site Twitter was recently compromised by two cross-site scripting attacks, which are common and easy-to-execute exploits of a code-level programming error[5]. Input validation vulnerabilities1 like this are in the CWE/SANS Top 25 Most Dangerous Programming Errors for 20102, despite the plethora of proposed techniques for protecting against code-level attacks (e.g., the context-sensitive string evaluation method proposed by[11]). Additionally, the SANS list of Top Cyber Security Risks3 indicates that input validation vulnerabilities, such as SQL injection, cross-site scripting, and file inclusion, continue to be the three most popular techniques used for compromising web sites.

Although techniques such as code reviews and design discussions can help developers reduce the number of vulnerabilities they introduce into the source code, the software development community currently has no single solution that will eliminate all security issues[7]. Furthermore, development organizations often do not have the time or resources to perform vulnerability detection efforts on every source file in a product before its release. Validation and verification (V&V) must therefore be prioritized in such a way that security fortification starts with the files that are most likely to be vulnerable. SQL hotspots may help development organizations prioritize security fortification efforts. SQL hotspots (or just "hotspots" in this paper) are any point in the application source code where the system interacts with a database management system[3, 6]. Hotspots are typically associated with input validation vulnerabilities like SQL injection4, but they might also be useful for predicting any web application vulnerability since they protect the typical web application's most valuable asset: the database[3, 6].
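
To make the notion of a hotspot concrete, the following is a minimal sketch (not the authors' tooling) of counting candidate hotspots in a PHP source string. The regular expressions and function name are illustrative assumptions; a real detector would parse the code rather than pattern-match.

  import re

  # Minimal sketch (not the authors' tooling): count candidate SQL hotspots in a
  # PHP source string by matching common database-interaction calls.
  # The patterns below are illustrative assumptions only.
  HOTSPOT_PATTERNS = [
      r"\bmysql_query\s*\(",   # legacy PHP MySQL API
      r"\bmysqli_query\s*\(",  # procedural mysqli API
      r"->query\s*\(",         # e.g. $wpdb->query(...) in WordPress, or PDO::query()
      r"->prepare\s*\(",       # prepared statements also reach the database
  ]

  def count_hotspots(php_source: str) -> int:
      """Return the number of database-interaction points found in the source text."""
      return sum(len(re.findall(pattern, php_source)) for pattern in HOTSPOT_PATTERNS)

  print(count_hotspots('$wpdb->query("SELECT * FROM wp_posts WHERE ID = $id");'))  # 1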

The goal of this research is to improve the prioritization of security fortification efforts by investigating the ability of SQL hotspots to be used as the basis for a heuristic for the prediction of all vulnerability types. We have previously defined how hotspots are identified[14], and demonstrated[15] that testers can target hotspots at the system level to expose error message information leakage vulnerabilities5. In this paper, we evaluate the ability of hotspots, combined with the number of lines of code, to serve in prediction models that can point testers to the files in the source code that are most likely to contain any type of web application vulnerability. We include lines of code in our model as a way of normalizing the number of SQL hotspots per file, making comparisons between files more accurate even as file sizes vary.
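
The normalization itself is straightforward; as a worked illustration with invented numbers (not values taken from the study):

  def hotspot_density(hotspots: int, sloc: int) -> float:
      """SQL hotspots per source line of code; treat an empty file as 0.0."""
      return hotspots / sloc if sloc else 0.0

  # Invented example: a 500-line file with 12 hotspots (density 0.024) would be
  # prioritized ahead of a 500-line file with 2 hotspots (density 0.004).
  print(hotspot_density(12, 500), hotspot_density(2, 500))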

We built and analyzed a prediction model based on the security vulnerability reports of two open source PHP web applications: nine releases of WordPress6, a blogging application, and six releases of WikkaWiki7, a wiki management engine. We compared our model's ability to predict vulnerable files against a random guess calculated from the distribution of vulnerabilities within each system. The contributions of this paper are as follows:

  • Empirical evidence that SQL hotspots can be used along with lines of code as the basis for a heuristic for prioritizing security V&V efforts because they are predictive of all types of web application vulnerabilities.
  • A resultant design strategy that recommends separating the database concern of an application into a single file to produce a lower proportion of input validation vulnerabilities.

The rest of this paper is organized as follows. Section 2 presents background information related to vulnerability identification. Then, Section 3 reviews related work. Next, Section 4 presents our methodology for gathering and analyzing the vulnerability data. Section 5 presents the results of the study and Section 6 presents the limitations of this study. Finally, Section 7 concludes.

2. Background

According to the ISO, a vulnerability is “...an instance of a [fault] in the specification, development, or configuration of software such that its execution can violate an [implicit or explicit] security policy” [4]. Since no single validation or verification practice can detect every vulnerability in a system[7], we have to assume that any file may have latent, undiscovered vulnerabilities. We call files that have been changed due to a vulnerability report vulnerable, and files that have not been changed due to any vulnerability report neutral.

A predictive model for classifying components as either vulnerable or neutral will make either correct or incorrect classifications. As such, for a given test of the model, there are true positives, where the model correctly classifies a component as vulnerable, and true negatives, where the model correctly classifies a component as neutral. When the model is wrong, there are false positives, where the model classifies a component as vulnerable but the component was neutral, and false negatives, where the model failed to identify a vulnerable component. The performance of a model that classifies components into one of two classes is often evaluated using two measurements: precision and recall[10].

Precision is defined in Equation 1, where tp is the number of true positives identified by the model, and fp is the number of false positives identified by the model. Precision can be viewed as a measure of the exactness a model exhibits.

precision = tp / (tp + fp)   (1)

Recall measures the proportion of vulnerable files the model retrieves, and is defined in Equation 2, where tp is the number of true positives and fn is the number of false negatives.

recall = tp / (tp + fn)   (2)
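
A minimal sketch of Equations 1 and 2; the counts used in the example are invented for illustration.

  def precision(tp: int, fp: int) -> float:
      """Equation 1: the fraction of files flagged vulnerable that really are."""
      return tp / (tp + fp) if (tp + fp) else 0.0

  def recall(tp: int, fn: int) -> float:
      """Equation 2: the fraction of truly vulnerable files the model retrieves."""
      return tp / (tp + fn) if (tp + fn) else 0.0

  # Invented counts: 8 true positives, 4 false positives, 2 false negatives.
  print(precision(8, 4))  # 0.666...
  print(recall(8, 2))     # 0.8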

3. Related Work

Other researchers have empirically examined the vulnerability reports of open source applications to determine the best predictive models for vulnerability locations. Neuhaus et al.[9] use their tool, Vulture, to predict vulnerable software components in versions of the Mozilla web browser. They demonstrate that vulnerabilities correlate with component imports and that component imports in the Mozilla web browser can be used to consistently and accurately predict vulnerable components. Specifically, Neuhaus et al. found that certain imports are almost guaranteed to produce security problems in the importing component later in time.

Zimmermann et al. contend that predicting security vulnerabilities can be thought of as "searching for a needle in a haystack" since the vulnerabilities in their dataset are so small in number and produce a significant bias in the results[17]. These researchers analyzed the predictive power of classical software metrics such as complexity, churn, and code coverage by calculating the correlation coefficient of each metric with vulnerabilities discovered in the Windows Vista operating system. This analysis indicated that these classical metrics can be used in vulnerability prediction models with high precision but low recall. Additionally, their results demonstrated that dependencies can be used in a predictive model with high recall but low precision.

Gegick et al. [2] use code churn, lines of code, and static analysis alerts from the Fortify tool to predict vulnerable software components in a large telecommunications software system containing over one million lines of code that had been deployed to the field for two years. Gegick et al. determined that a model combining churn and static analysis alerts was the most useful for predicting vulnerable files, and that models combining their chosen metrics were more effective than any metric on its own.

Meneely et al. use developer activity metrics to evaluate and predict vulnerable software components[8]. Developer activity metrics measure the amount of interaction that occurs between developers by analyzing which files developers have touched within a small time period. These researchers performed an empirical case study on the Red Hat Enterprise Linux 4 kernel and found that files developed by otherwise-independent developer groups were more likely to contain a vulnerability. They also discovered that files with changes from nine or more developers were more likely to have a vulnerability than files changed by fewer than nine developers.

Shin and Williams[13] investigated the relationship between classical complexity metrics and vulnerabilities. Shin and Williams performed an empirical case study on the JavaScript Engine in the Mozilla application framework and discovered that nine complexity measures such as McCabe's cyclomatic complexity and nesting are weakly correlated with the number of vulnerabilities. These researchers indicate that complexity measures could be used as a predictor of security vulnerabilities in an application, but that other measures of complexity should be developed that more accurately capture the type of complexity that leads to security issues.

Shin, et al. [12] investigated whether complexity, code churn, and developer activity metrics could be used as effective discriminators of software vulnerabilities in two widely-used, open source projects. Shin et al. found that 24 of the 28 metrics they investigated were discriminative of vulnerabilities. Shin et al. found that using all three types of metrics together allowed the production of a model that predicted 80% of the known vulnerable files with less than 25% false positives for both projects.

Other authors have compared the security posture of applications by using static analysis alerts as a proxy measurement of reported vulnerabilities. Walden et al. [16] compare the security posture of web applications using PHP and web applications that use Java. Walden et al. introduce a security metric, CVD: the common vulnerability density. These researchers define CVD as the density per line of code for four different vulnerability types that are common to both Java and PHP. Walden et al. used the Fortify static analysis tool to gather the reported values of CVD for two revisions of 11 projects. They found that although PHP had a higher value for CVD on all of the projects, CVD was decreasing more quickly overall in the measured PHP projects than in the Java projects.

4. Methodology

We conducted two case studies to empirically investigate eight hypotheses related to hotspot source code locations and vulnerabilities reported in the systems' bug tracking systems. We present these hypotheses, as well as their results, in Table 1. We will further explain the results in Section 5. Our hypotheses point to the research objective: to improve the prioritization of security fortification efforts by investigating the ability of SQL hotspots to be used as the basis for a heuristic for the prediction of all vulnerability types. We also include lines of code in our analysis as a way of improving the accuracy and predictive power of our heuristic alongside SQL hotspots. Specifically, we look at the relationship between hotspots and files (H1-H2), the amount of code change as related to the vulnerability type (H3), the predictive ability of hotspots for any vulnerability type (H4-H5), and the effect that collocating hotspots can have on the number and types of vulnerabilities in a given system (H6-H8).
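
As an illustration of the kind of file-level classifier such a heuristic suggests, one could fit a logistic regression on hotspot density and file size and rank files by their predicted probability of being vulnerable. This is a sketch only, not the authors' statistical procedure; the feature values, labels, and file names below are invented.

  from sklearn.linear_model import LogisticRegression

  # Sketch only: each row is [hotspots_per_loc, source_lines_of_code] for one file,
  # and the label is 1 for vulnerable, 0 for neutral. All values are invented.
  X = [[0.000, 120], [0.004, 800], [0.020, 350],
       [0.050, 600], [0.001, 1500], [0.030, 200]]
  y = [0, 0, 1, 1, 0, 1]

  model = LogisticRegression(max_iter=1000).fit(X, y)

  # Rank unseen files so fortification effort starts with the likeliest candidates.
  # The file names are hypothetical.
  candidates = {"db-layer.php": [0.045, 900], "front-page.php": [0.000, 60]}
  ranked = sorted(candidates.items(),
                  key=lambda item: -model.predict_proba([item[1]])[0][1])
  for name, features in ranked:
      prob = model.predict_proba([features])[0][1]
      print(f"{name}: P(vulnerable) = {prob:.2f}")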

For these case studies, we analyzed the Trac issue reports for two open source web applications, WordPress8 and WikkaWiki9. Trac is a web-based issue management system, similar to Bugzilla10, which integrates Subversion11 repository information. The details of our analysis are provided in Sections 4.2 through 4.5.

4.1 Selecting the Study Subjects

To improve the accuracy of tracing vulnerabilities to source code, we chose projects that use the Trac issue-management system. The Trac Users page 12 lists the development teams who choose to report that they use the Trac issue-management system to track their defects. We selected the two projects for the case study (hereafter, our "subjects") by inspecting each of the projects on the Trac Users page for projects that had the following attributes.

  • Implemented in PHP - We chose subjects that were written in PHP. Recent usage statistics indicate that 30% of web applications are implemented using PHP, which is more than any other framework13. We were also interested in controlling language-dependent factors of our analysis since we are not interested in comparing programming languages in terms of their security.
  • Database Interaction - Since we are interested in studying the relationship between hotspots and vulnerabilities, we chose web applications that facilitated some type of database interaction.
  • Traceable Code Changes - One of the main issues in selecting the subjects for this study is that we are interested in tracing vulnerabilities from the issue reports to the files containing the vulnerabilities by analyzing changes made in the Subversion repository. In Trac-based projects, a developer with commit access to the project would commit a set of changes to the repository that contained an approved version of a patch that users had either suggested or created themselves. The developer making the commit would then leave a comment on the issue report similar to “Fixed by [3350]”, which indicates that the issue was resolved by repository revision 3350 (a parsing sketch follows this list). In both WordPress and WikkaWiki, the developer communities were consistent about indicating that a given issue was fixed by a certain revision number in the repository.
  • Contained Security Issues - Since we are interested in studying security vulnerabilities, we looked for subjects that contained more than five reported issues that were clear-cut security problems. When we examined a project's Trac web page, we browsed through the issue reports for the project (if there were any) and looked for reports that we could classify using a CWE14 grouping. The CWE15 provides a unified list of prevalent security vulnerabilities with detailed descriptions, definitions, and a unique classification number. In particular, we were interested in comparing the proportion of input validation vulnerabilities in each project, so we also produced a yes/no indicator for whether an issue report described a CWE-classified input validation vulnerability. We had to manually determine the CWE classification by reading the issue report description and attempting to map this information to a CWE classification definition. Sometimes the issue description did not map to a CWE type, and in these cases, we determined that the issue report was not a security problem. When a project contained no security issue reports in its Trac web page, we rejected the project.
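
The tracing step described under "Traceable Code Changes" can be sketched as follows. The comment text, the regular expression, and the follow-up svn command are illustrative assumptions rather than the scripts used in the study.

  import re

  # Sketch of tracing a fixed issue to a Subversion revision: pull revision numbers
  # out of Trac issue comments such as "Fixed by [3350]". Illustrative only.
  FIXED_BY = re.compile(r"[Ff]ixed\s+(?:by|in)\s+\[(\d+)\]")

  def fixing_revisions(comments):
      """Return every repository revision number cited as fixing the issue."""
      revisions = []
      for text in comments:
          revisions.extend(int(rev) for rev in FIXED_BY.findall(text))
      return revisions

  print(fixing_revisions(["Confirmed on 2.3", "Fixed by [3350]."]))  # [3350]
  # The files changed by that revision could then be listed with, for example:
  #   svn log -v -r 3350 <repository-url>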

Using these criteria and searching Trac’s Users page, we arrived at two study subjects out of 532 possible subjects:

  1. WordPress - advanced blog management software that requires the MySQL database management system v4.1.2 or greater. Recent usage statistics have indicated that 74% of websites that are running blogging software are using WordPress16. WordPress contains 138,967 source lines of code as determined by CLOC17. We examined issue reports on WordPress ranging from December 2004 through August 2009 and spanning nine public releases from Version 1.5 to Version 2.8. In WordPress, security issues are flagged using a user-specified indicator on Trac. We found that 88 out of the 6,647 (or 1.3%) total reported issues in WordPress were security-related. This low density of security-related reports is not uncommon[17].

5. Results

6. Limitations

7. Conclusion

8. Acknowledgements

9. References

[1] T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters, vol. 27, no. 8, pp. 861-874, 2006.
[2] M. Gegick, L. Williams, J. Osborne, and M. Vouk, "Prioritizing software security fortification through code-level metrics," in ACM Workshop on Quality of Protection (QoP2008), Alexandria, Virginia, 2008, pp. 31-38.
[3] W. G. J. Halfond and A. Orso, "AMNESIA: analysis and monitoring for neutralizing SQL-injection attacks," in 20th IEEE/ACM Conference on Automated Software Engineering, Long Beach, CA, USA, 2005, pp. 174-183.
[4] ISO/IEC, "DIS 14598-1 Information technology - Software product evaluation," 1996.
[5] J. Kirk, "Twitter Contains Second worm in a Week," in PCWorld Business Center, 2010, http://www.pcworld.com/businesscenter/article/206232/twitter_contains_second_worm_in_a_week.html.
[6] Y. Kosuga, K. Kono, M. Hanaoka, M. Hishiyama, and Y. Takahama, "Sania: syntactic and semantic analysis for automated testing against SQL injection," in 23rd Annual Computer Security Applications Conference, Miami Beach, FL, 2007, pp. 107-117.
[7] G. McGraw, Software Security: Building Security In. Reading, Massachusetts: Addison-Wesley Professional, 2006.
[8] A. Meneely and L. Williams, "Secure open source collaboration: an empirical study of Linus' law," in ACM Conference on Computer and Communications Security (CCS2009), Chicago, Illinois, 2009, pp. 453-462.
[9] S. Neuhaus, T. Zimmermann, C. Holler, and A. Zeller, "Predicting vulnerable software components," in ACM Conference on Computer and Communications Security, Alexandria, Virginia, USA, 2007, pp. 529-540.
[10] D. L. Olson and D. Delen, Advanced Data Mining Techniques. Berlin Heidelberg: Springer, 2008.
[11] T. Pietraszek and C. V. Berghe, "Defending Against Injection Attacks Through Context-Sensitive String Evaluation," in Recent Advances in Intrusion Detection, Springer LNCS 3858, Seattle, Washington, 2006, pp. 124-145.
[12] Y. Shin, A. Meneely, L. Williams, and J. A. Osborne, "Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities," IEEE Transactions on Software Engineering, 2010, to appear. DOI 10.1109/TSE.2010.81.
[13] Y. Shin and L. Williams, "Is complexity really the enemy of software security?," in ACM workshop on Quality of protection (QoP2008), Alexandria, Virginia, 2008, pp. 47-50.
[14] B. Smith, Y. Shin, and L. Williams, "Proposing SQL Statement Coverage Metrics," in Software Engineering for Secure Systems (SESS2008), co-located with ICSE 2008, Leipzig, Germany, 2008, pp. 49-56.
[15] B. Smith, L. Williams, and A. Austin, "Idea: Using system level testing for revealing SQL-injection-related error message information leaks," Lecture Notes in Computer Science, vol. 5965, pp. 192-200, Symposium on Engineering Secure Software and Systems 2010 (ESSoS 2010), 2010.
[16] J. Walden, M. Doyle, R. Lenhof, and J. Murray, "Idea: Java vs. PHP: Security Implications of Language Choice for Web Applications," in Engineering Secure Software and Systems, Springer LNCS 5965, Pisa, Italy, 2010, pp. 61-69.
[17] T. Zimmermann, N. Nagappan, and L. Williams, "Searching for a Needle in a Haystack: Predicting Security Vulnerabilities for Windows Vista," in International Conference on Software Testing, Verification and Validation (ICST 2010), Paris, France, 2010, pp. 421-428.

10. End Notes

  1. Input validation vulnerabilities occur when a system does not assert that input falls within an acceptable range, allowing the system to be exploited to perform unintended functionality.
  2. http://cwe.mitre.org/top25/
  3. http://www.sans.org/critical-security-controls/#summary
  4. SQL injection vulnerabilities occur when a lack of input validation could allow a user to force unintended system behavior by altering the logical structure of a SQL statement using SQL reserved words and special characters.
  5. Error message vulnerabilities occur when the system does not correctly handle an exceptional condition, causing sensitive information to be leaked in an error message.
  6. http://wordpress.org/
  7. http://wikkawiki.org/HomePage
  8. http://core.trac.wordpress.org/
  9. http://wush.net/trac/wikka/
  10. http://www.bugzilla.org/
  11. http://subversion.tigris.org/
  12. http://trac.edgewall.org/wiki/TracUsers
  13. From http://trends.builtwith.com/framework. ASP.NET follows a close second with 25%, and all other frameworks each comprise less than 20% of the web.
  14. http://cwe.mitre.org/data/slices/2000.html
  15. http://cwe.mitre.org/data/slices/2000.html
  16. http://trends.builtwith.com/blog
  17. http://cloc.sourceforge.net/. Version 1.52.
