Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks
# '''Identify and Instrument Hotspots'''. We manually inspected the source code to discover any point where the system interacts with the database. We note here that hotspots can take many forms; we explain this issue more below. We have written the Java program <code>SQLMarker</code>, introduced in our earlier work<sup>[9]</sup>. <code>SQLMarker</code> keeps a record of the execution state at runtime for each uniquely identified hotspot<sup>13</sup>. <code>SQLMarker</code> has a method, <code>SQLMarker.mark()</code>, which passes the line number and file name to a research database that stores whether the hotspot has been executed (a minimal sketch of such a marker appears after this list).
# '''Record Hotspots'''. A second class we wrote, called <code>Instrumenter</code>, provides each manually marked hotspot with a unique identifier composed of the file name and line number, and outputs the number of hotspots found. Once we manually marked each hotspot, we executed <code>Instrumenter</code> to store a record of each of these hotspots (see the sketch after this list).
# '''Execute Original Unit Tests'''. After instrumenting each subject to mark its executed SQL hotspots, we executed the intrinsic unit tests and recorded the resultant number of executed statements.
# '''Create Test Cases'''. We used the file name and line number of the hotspot stored in Step 1 to construct an automated system level test with HtmlUnit<sup>14</sup> that executed the SQL statement at that location. We constructed an initial automated test for each hotspot by using a call hierarchy and manual testing to make web requests until the hotspot was marked as being executed, and then modeled our automated test after the use case we discovered<sup>15</sup> (an illustrative HtmlUnit test appears after this list).
# '''Apply Malicious Input'''. We modified the test defined in Step 4 to emulate a malicious user by using 132 forms of malicious input in an attack list from NeuroFuzz<sup>10</sup> in place of normal input (see the sketch after this list). This part of the procedure is similar to “fuzzing”. The difference here is that fuzzing is a semi-random, black-box activity; our approach specifically targets the areas where user input might reach a hotspot.
# '''Record Result'''. We then marked each test that caused incorrect SQL operations or an application error in Step 5 as a successful attack and its corresponding SQL statement as a vulnerability.
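
The exact implementation of <code>SQLMarker</code> is described in our earlier work<sup>[9]</sup>; the following is only a minimal sketch of what the marker call in Step 1 might look like. The JDBC URL, table name, and column names are assumptions made for illustration.

<pre>
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

/**
 * Illustrative sketch of a hotspot marker (Step 1). A call such as
 * SQLMarker.mark("LoginServlet.java", 112) is placed immediately before
 * the SQL statement at that location; at runtime the call records in a
 * separate research database that the hotspot was reached.
 * The JDBC URL, table, and column names here are assumptions.
 */
public final class SQLMarker {

    private static final String RESEARCH_DB_URL =
            "jdbc:mysql://localhost/sqlmarker_results";   // assumed location

    private SQLMarker() { }

    public static void mark(String fileName, int lineNumber) {
        // The file name plus line number uniquely identifies the hotspot.
        String sql = "UPDATE hotspots SET executed = 1 "
                   + "WHERE file_name = ? AND line_number = ?";
        try (Connection c = DriverManager.getConnection(RESEARCH_DB_URL);
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, fileName);
            ps.setInt(2, lineNumber);
            ps.executeUpdate();
        } catch (Exception e) {
            // Instrumentation must never change subject behaviour,
            // so failures are only logged.
            e.printStackTrace();
        }
    }
}
</pre>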
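A sketch of <code>Instrumenter</code> (Step 2) is shown below, under the assumption that it locates the manually inserted <code>SQLMarker.mark()</code> calls by scanning the source tree; the directory layout is an assumption, and the actual tool also stores each identifier in the research database, which this version only prints.

<pre>
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/**
 * Illustrative sketch of Step 2: give each manually marked hotspot a
 * unique identifier (file name + line number) and report how many were
 * found. Here hotspots are located by scanning the source tree for
 * SQLMarker.mark() calls; persistence to the research database is elided.
 */
public class Instrumenter {

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Paths.get(args.length > 0 ? args[0] : "src");
        int hotspotCount = 0;

        try (Stream<Path> files = Files.walk(sourceRoot)) {
            List<Path> javaFiles = files
                    .filter(p -> p.toString().endsWith(".java"))
                    .collect(Collectors.toList());

            for (Path file : javaFiles) {
                List<String> lines = Files.readAllLines(file);
                for (int i = 0; i < lines.size(); i++) {
                    if (lines.get(i).contains("SQLMarker.mark(")) {
                        // Unique identifier: file name plus line number.
                        System.out.println(file.getFileName() + ":" + (i + 1));
                        hotspotCount++;
                    }
                }
            }
        }
        System.out.println("Hotspots found: " + hotspotCount);
    }
}
</pre>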
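The following is an illustrative HtmlUnit test for a single hotspot (Step 4), assuming an HtmlUnit 2.x release and JUnit 4. The URL, form name, and field values are placeholders; each real test mirrors whatever sequence of web requests was found, via the call hierarchy and manual testing, to drive execution to the marked statement.

<pre>
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlSubmitInput;
import org.junit.Assert;
import org.junit.Test;

/**
 * Illustrative system level test for one hotspot (Step 4). The URL,
 * form name, and field names are assumptions; after the request, the
 * research database (see SQLMarker above) should show the hotspot at
 * the stored file name and line number as executed.
 */
public class LoginHotspotTest {

    @Test
    public void benignLoginReachesHotspot() throws Exception {
        try (WebClient webClient = new WebClient()) {
            HtmlPage page = webClient.getPage("http://localhost:8080/subject/login.jsp");
            HtmlForm form = page.getFormByName("login");
            form.getInputByName("username").setValueAttribute("alice");
            form.getInputByName("password").setValueAttribute("correct-horse");

            HtmlSubmitInput submit = form.getInputByName("submit");
            HtmlPage result = submit.click();

            // Here we only assert that the application responded normally;
            // hotspot coverage is read from the research database.
            Assert.assertTrue(result.asText().contains("Welcome"));
        }
    }
}
</pre>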
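Step 5 can then reuse the same request with each entry of the attack list substituted for the benign input, roughly as sketched below. The attack-list file name, the target form, and the error-detection heuristic (looking for a SQL error message or stack trace in the response) are assumptions for illustration; the study used the 132-entry NeuroFuzz attack list<sup>10</sup> and recorded incorrect SQL operations or application errors as successful attacks (Step 6).

<pre>
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

/**
 * Illustrative sketch of Step 5: rerun the Step 4 request once per
 * attack-list entry in place of the benign input. File name, URL,
 * form/field names, and the error-detection heuristic are assumptions.
 */
public class MaliciousInputRunner {

    public static void main(String[] args) throws Exception {
        List<String> attacks = Files.readAllLines(Paths.get("attack-list.txt"));

        for (String attack : attacks) {
            try (WebClient webClient = new WebClient()) {
                // HtmlUnit throws on HTTP 500 by default; treat such a
                // response as a finding instead so the run continues.
                webClient.getOptions().setThrowExceptionOnFailingStatusCode(false);

                HtmlPage page = webClient.getPage("http://localhost:8080/subject/login.jsp");
                HtmlForm form = page.getFormByName("login");
                form.getInputByName("username").setValueAttribute(attack);
                form.getInputByName("password").setValueAttribute(attack);
                HtmlPage result = form.getInputByName("submit").click();

                // A leaked SQL error message or stack trace in the response
                // marks the attack as successful and the hotspot as vulnerable.
                String body = result.asText();
                if (body.contains("SQLException") || body.contains("syntax error")) {
                    System.out.println("Possible vulnerability with input: " + attack);
                }
            }
        }
    }
}
</pre>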