<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://bw.kn1.us/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Programsam</id>
	<title>Ben Works - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://bw.kn1.us/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Programsam"/>
	<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Special:Contributions/Programsam"/>
	<updated>2026-04-18T14:22:58Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.0</generator>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=811</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=811"/>
		<updated>2025-07-12T14:27:56Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 7/12/2025&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=810</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=810"/>
		<updated>2025-02-02T20:29:14Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 2/2/2025&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=807</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=807"/>
		<updated>2024-10-26T14:06:38Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 10/26/2024&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=805</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=805"/>
		<updated>2024-06-29T16:26:45Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 6/29/2024&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=803</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=803"/>
		<updated>2024-03-31T18:01:17Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 3/31/2024&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=802</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=802"/>
		<updated>2023-09-30T18:18:48Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 9/30/2023&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=800</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=800"/>
		<updated>2023-09-30T17:47:27Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 9/30/2023&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=798</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=798"/>
		<updated>2023-04-25T15:38:32Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 4/25/2023&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=796</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=796"/>
		<updated>2022-12-28T21:24:59Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 12/28/2022&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=795</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=795"/>
		<updated>2022-07-22T15:01:54Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 7/22/2022&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=794</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=794"/>
		<updated>2021-12-03T17:12:52Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 12/3/2021&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=791</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=791"/>
		<updated>2021-10-09T15:40:49Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 10/9/2021&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=789</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=789"/>
		<updated>2021-07-06T16:02:48Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 7/6/2021&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=788</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=788"/>
		<updated>2021-05-23T18:10:36Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Mediawiki {{CURRENTVERSION}} -- upgraded 5/16/2021&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=787</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=787"/>
		<updated>2021-05-23T18:10:23Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;big&amp;gt;Mediawiki {{CURRENTVERSION}}&amp;lt;/big&amp;gt; -- upgraded 5/16/2021.&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=786</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=786"/>
		<updated>2021-05-16T14:59:14Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Using MediaWiki v1.35.2 -- upgraded 5/16/2021.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=785</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=785"/>
		<updated>2021-05-16T14:59:05Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;**Using MediaWiki v1.35.2 -- upgraded 5/16/2021.**&lt;br /&gt;
&lt;br /&gt;
Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Proposing_SQL_Statement_Coverage_Metrics&amp;diff=781</id>
		<title>Proposing SQL Statement Coverage Metrics</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Proposing_SQL_Statement_Coverage_Metrics&amp;diff=781"/>
		<updated>2021-05-16T14:46:31Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;B. Smith, Y. Shin, and L. Williams, &amp;quot;Proposing SQL Statement Coverage Metrics&amp;quot;, Proceedings of the Fourth International Workshop on Software Engineering for Secure Systems (SESS 2008), co-located with ICSE, pp. 49-56, 2008.&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&#039;&#039;An increasing number of cyber attacks are occurring at the application layer when attackers use malicious input. These input validation vulnerabilities can be exploited by (among others) SQL injection, cross site scripting, and buffer overflow attacks. Statement coverage and similar test adequacy metrics have historically been used to assess the level of functional and unit testing which has been performed on an application. However, these currently-available metrics do not highlight how well the system protects itself through validation. In this paper, we propose two SQL injection input validation testing adequacy metrics: target statement coverage and input variable coverage. A test suite which satisfies both adequacy criteria can be leveraged as a solid foundation for input validation scanning with a blacklist. To determine whether it is feasible to calculate values for our two metrics, we perform a case study on a web healthcare application and discuss some implementation issues we have encountered. We find that the web healthcare application scored 96.7% target statement coverage and 98.5% input variable coverage.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
According to the National Vulnerability Database (NVD)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;, more than half of the ever-increasing number of cyber vulnerabilities reported in 2002-2006 were input validation vulnerabilities. As Figure 1 shows, the number of input validation vulnerabilities is still increasing. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:Sess-figure-1.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Figure 1. NVD&#039;s reported cyber vulnerabilities&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Figure 1 illustrates the number of reported instances of each type of cyber vulnerability listed in the series legend for each year displayed on the x-axis. The curve with square-shaped points is the sum of all reported vulnerabilities that fall into the categories “SQL injection”, “XSS”, or “buffer overflow” when querying the National Vulnerability Database. The curve with diamond-shaped points represents all cyber vulnerabilities reported for the year on the x-axis. For several years now, the number of reported input validation vulnerabilities has been half the total number of reported vulnerabilities. Additionally, the graph demonstrates that these curves are monotonically increasing, indicating that we are unlikely to see a future drop in the ratio of reported input validation vulnerabilities. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Input validation testing&#039;&#039; is the process of writing and running test cases to investigate how a system responds to malicious input, with the intention of using tests to mitigate the risk of a security threat. Input validation testing can increase confidence that input validation has been properly implemented. The goal of input validation testing is to check whether input is validated against the constraints given for that input. Input validation testing should test both whether legal input is accepted and whether illegal input is rejected. A coverage metric can quantify the extent to which this goal has been met. Various coverage criteria have been defined based on the target of testing (specification or program as a target) and underlying testing methods (structural, fault-based, and error-based)&amp;lt;sup&amp;gt;[19]&amp;lt;/sup&amp;gt;. Statement coverage and branch coverage are well-known program-based structural coverage criteria&amp;lt;sup&amp;gt;[19]&amp;lt;/sup&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
However, current structural coverage metrics and the tools which implement them do not provide specific information about insufficient or missing input validation. New coverage criteria to measure the adequacy of input validation testing can be used to highlight a level of security testing. &#039;&#039;Our research objective is to propose and to validate two input validation testing adequacy metrics related to SQL injection vulnerabilities&#039;&#039;. Our current input validation coverage criteria consist of two experimental metrics: input variable coverage, which measures the percentage of input variables used in at least one test; and target statement coverage, which measures the percentage of SQL statements executed in at least one test. &lt;br /&gt;
&lt;br /&gt;
An &#039;&#039;input variable&#039;&#039; is any dynamic, user-assigned variable which an attacker could manipulate to send malicious input to the system. In the context of the Web, any field on a web form is an input variable as well as any number of other client-side input spaces. Within the context of SQL injection attacks, input variables are any variable which is sent to the database management system, as will be illustrated in further detail in Section 2. A target statement is any statement in an application which is subject to attack via malicious input; for this paper, our target statements will be all SQL statements found in production code. Other input sources can be leveraged to form an attack, but we have chosen not to focus on them for this study because they comprise less than half of recently reported cyber vulnerabilities (see Figure 1 and explanation). &lt;br /&gt;
&lt;br /&gt;
In practice, even software development teams who use metrics such as traditional statement coverage often do not achieve 100% values in these metrics before production&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. If the lines left uncovered contain target statements, traditional statement coverage could be very high while little to no input validation testing is performed on the system. Conversely, a system in which every target statement and input variable is involved in at least one test achieves high input validation coverage metrics, yet may still be insecure if those test cases did not use a malicious form of input. However, a system with a high score in the metrics we define has a foundation for thorough input validation testing: testers can relatively easily reuse existing test cases with multiple forms of good and malicious input. Our vision is to automate such reuse. &lt;br /&gt;
&lt;br /&gt;
We evaluated our metrics on the server-side code of a Java Server Pages web healthcare application that had an extensive set of JUnit&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt; test cases. We manually counted the number of input variables and SQL statements found in this system and dynamically recorded how many of these statements and variables are used in executing a given test set. The rest of this paper is organized as follows: First, Section 2 defines SQL injection attacks. Then, Section 3 introduces our experimental metrics. Section 4 provides a brief summary of related work. Next, Section 5 describes our case study and application of our technique. Section 6 reports the results of our study and discusses their implications. Then, Section 7 illustrates some limitations on our technique and our metrics. Finally, Section 8 concludes and discusses the future use and development of our metrics.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
Section 2.1 explains the fundamental difference between traditional testing and security testing. Then, Section 2.2 describes SQL injection. &lt;br /&gt;
&lt;br /&gt;
=== 2.1 Testing for Security ===&lt;br /&gt;
&lt;br /&gt;
Web applications are inherently insecure&amp;lt;sup&amp;gt;[15]&amp;lt;/sup&amp;gt; and web applications’ attackers look the same as any other customer to the server&amp;lt;sup&amp;gt;[12]&amp;lt;/sup&amp;gt;. Developers should, but typically do not, focus on building security into web applications&amp;lt;sup&amp;gt;[[#mcgraw|[6]]]&amp;lt;/sup&amp;gt;. Security has been added to the list of web application quality criteria&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;, and as a result companies have begun to incorporate security testing (including input validation testing) into their development methodologies&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Security testing is contrasted with traditional testing in Figure 2 (Functional vs. Security Testing, adapted from&amp;lt;sup&amp;gt;[17]&amp;lt;/sup&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Sess-figure-2.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Figure 2. Intended vs. Actual Behavior, (adapted from &amp;lt;sup&amp;gt;[17]&amp;lt;/sup&amp;gt;)&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Represented by the left-hand circle in Figure 2, the current software development paradigm includes a list of testing strategies to ensure the correctness of an application in functionality and usability as indicated by a requirements specification. With respect to intended correctness, verification typically entails creating test cases designed to discover faults by causing failures. Oracles tell us what the system should do and failures tell us that the system does not do what it is supposed to do. The right-hand circle in Figure 2 indicates that we validate not only that the system does what it should, but also that the system does not do what it should not: the right-hand circle represents a failure occurring in the system which causes a security problem. The circles intersect because some intended functionality can cause indirect vulnerabilities because privacy and security were not considered in designing the required functionality&amp;lt;sup&amp;gt;[17]&amp;lt;/sup&amp;gt;. Testing for functionality only validates that the application achieves what was written in the requirements specification. Testing for security validates that the application prevents undesirable security risks from occurring, even when the nature of this functionality is spread across several modules and might be due to an oversight in the application’s design. To adapt to the new paradigm, companies have started to incorporate new techniques. Some companies use vulnerability scanners, which behave like a hacker to make automated attempts at gaining access or misusing the system to discover its flaws&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt;. A blacklist is a representative or comprehensive set of all input validation attacks of a given type (such as SQL injection, see Section 2.2). These vulnerability scanners typically use a blacklist to test potential vulnerabilities against all attacks (or a set of representative attacks). 
Coverage criteria for target statements can help companies assess how much of their system has the framework for a range of input validation testing. A vulnerability scanner is ineffective if its blacklist is not tested against every target statement in the system.&lt;br /&gt;
&lt;br /&gt;
=== 2.2 SQL Injection Attacks ===&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;SQL injection attack&#039;&#039; is performed when a user exploits a lack of input validation to force unintended system behavior by altering the logical structure of a SQL statement with special characters. The lack of input validation to prevent SQL injection attacks is known as a SQL injection vulnerability&amp;lt;sup&amp;gt;[2, 5, 6, 8, 9, 13-16]&amp;lt;/sup&amp;gt;. Our example of this type of input validation vulnerability begins with the login form presented in Figure 3.&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:Sess-figure-3.png]] &amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Figure 3. Example login form&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usernames typically consist of alphanumeric characters, underscores, periods and dashes. Passwords also typically consist of these character ranges and additionally allow for some other non-alphanumeric characters such as $, ^ or #. The authentication mechanism functions by a code segment resembling the one in Figure 4. Assume there exists some table maintaining a list of all usernames, passwords, and most likely some indication of the role of each unique username.&lt;br /&gt;
&lt;br /&gt;
  //for simplicity, this example is given in PHP. &lt;br /&gt;
  //first, extract the input values from the form &lt;br /&gt;
  $username = $_POST[&#039;username&#039;]; &lt;br /&gt;
  $password = $_POST[&#039;password&#039;]; &lt;br /&gt;
  &lt;br /&gt;
  //query the database for a user with username/pw &lt;br /&gt;
  $result = mysql_query(&amp;quot;select * from users where username = &#039;$username&#039; AND password = &#039;$password&#039;&amp;quot;); &lt;br /&gt;
  &lt;br /&gt;
  //extract the first row of the resultset &lt;br /&gt;
  $firstresult = mysql_fetch_array($result); &lt;br /&gt;
  &lt;br /&gt;
  //extract the &amp;quot;role&amp;quot; column from the result &lt;br /&gt;
  $role = $firstresult[&#039;role&#039;]; &lt;br /&gt;
  &lt;br /&gt;
  //set a cookie for the user with their role &lt;br /&gt;
  setcookie(&amp;quot;userrole&amp;quot;, $role); &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 4. Example authentication code&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The code in Figure 4 performs the following. First, it queries the database for every entry with the entered username and password. Typically, we use the first row of returned SQL results (which is retrieved by &amp;lt;code&amp;gt;mysql_fetch_array&amp;lt;/code&amp;gt; and stored in &amp;lt;code&amp;gt;$firstresult&amp;lt;/code&amp;gt;) because the web application (or the database management system) ensures that there are no duplicate usernames and that every username is given the appropriate role. Finally, we extract the role field from the first result and give the user a cookie&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;, which allows the login to be persistent (i.e., the user does not have to log in to view every protected page).&lt;br /&gt;
&lt;br /&gt;
The example we have presented in Figure 4 performs no input validation, and as a result it contains at least three input validation vulnerability locations. The first two are the username and password fields given in the web form in Figure 3. An attacker could cause the code change shown in Figure 5 simply by entering the SQL command fragment &amp;quot;&amp;lt;code&amp;gt;&#039; OR 1=1 -- AND&amp;lt;/code&amp;gt;&amp;quot; in the input field instead of a valid user name in Figure 3.&lt;br /&gt;
&lt;br /&gt;
  //from Figure 4; original code &lt;br /&gt;
  $result = mysql_query(&amp;quot;select * from users where username = &#039;$username&#039; AND password = &#039;$password&#039;&amp;quot;);&lt;br /&gt;
  &lt;br /&gt;
  //code with inserted attack parameters &lt;br /&gt;
  $result = mysql_query(&amp;quot;select * from users where username = &#039;&#039; OR 1=1 -- AND password = &#039;PASSWORD&#039;&amp;quot;); &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 5. Example SQL statement, before and after&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The single quotation mark (&amp;lt;code&amp;gt;&#039;&amp;lt;/code&amp;gt;) indicates to the SQL parser that the character sequence for the username column is closed, the fragment &amp;lt;code&amp;gt;OR 1=1&amp;lt;/code&amp;gt; is interpreted as always true, and the hyphens (&amp;lt;code&amp;gt;--&amp;lt;/code&amp;gt;) tell the parser that the SQL command is over and the remainder of the query is a comment. With these values, the $result variable contains a list of every user in the table (and their associated role) because the where clause is always true. Which row is returned first is not defined and will vary based on the database configuration. Regardless, the role of the user in the first returned row will be extracted and assigned to a cookie on the attacker’s machine. The consequence is as follows: assuming the attacker is not a registered user of the system, he or she has just been granted unauthorized access to the system with the role (and identity) associated with the first username in the table. The password field shown in Figure 3 is also vulnerable, but we do not demonstrate this attack for space reasons. Because no input validation was performed, the system can be exploited for a use that was unintended by its developers. &lt;br /&gt;
&lt;br /&gt;
The exploitation of the third vulnerability requires slightly more work than the first two, but is more threatening. Presumably, the developer of this example web application provides different content to a given web user (or provides no content at all) depending on the role parameter, which is stored in a cookie. Example code for this cookie-based design decision is shown in Figure 6.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;$_COOKIE[&#039;role&#039;]&amp;lt;/code&amp;gt; expression extracts the value stored on the user’s machine for the parameter passed (in this case “role”). The web application provides one set of content for users with the administrator role and another set of content for those with the employee role. If the role parameter is anything else, the user is redirected to &amp;lt;code&amp;gt;authrequired.html&amp;lt;/code&amp;gt;, which presumably contains some type of message to the user that authentication is required to access the requested page. The vulnerability stems from the relatively well-known fact that HTTP cookies are usually stored in a text file on the user’s machine. In this case, the attacker need only edit this file, see that there is a parameter named “role”, and make a reasonable guess for its value, such as “admin”. The consequence is as follows: if the attacker succeeds in guessing a correct value, the system provides content to a user who was not authorized to view it, and the system has been exploited.&lt;br /&gt;
&lt;br /&gt;
  if ($_COOKIE[&#039;role&#039;] == &#039;admin&#039;) &lt;br /&gt;
  { &lt;br /&gt;
   //give admin access &lt;br /&gt;
  } &lt;br /&gt;
  else if ($_COOKIE[&#039;role&#039;] == &#039;employee&#039;) &lt;br /&gt;
  { &lt;br /&gt;
   //give employee access &lt;br /&gt;
  } &lt;br /&gt;
  else &lt;br /&gt;
  { &lt;br /&gt;
   //no role or unrecognizable role, &lt;br /&gt;
   //redirect to an error page. &lt;br /&gt;
   header(&amp;quot;Location: authrequired.html&amp;quot;); &lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 6. Example authentication persistence&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A countermeasure for the form input field vulnerability is simply to escape all control characters (such as &#039; or #) in the input variables. For the cookie vulnerability, a countermeasure would be to dynamically generate a unique identifier for the current session and store it in the cookie along with the associated user role. Because these vulnerabilities can be prevented with input validation, they are known as input validation vulnerabilities. Figure 6 does not depict a SQL injection attack; however, it still represents an input validation vulnerability. We have included it here in the interest of completeness, but we will not focus on this type of vulnerability in the rest of this paper. &lt;br /&gt;
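A minimal sketch of the escaping countermeasure is shown below (in Java for consistency with the system studied later; the function name is ours, and parameterized queries remain the more robust defense). Doubling each single quote prevents the attacker's quote from closing the SQL string literal early.&lt;br /&gt;

```java
// Minimal sketch of the escaping countermeasure: double each single
// quote so user input cannot terminate the SQL string literal early.
// Illustrative only; parameterized queries (PreparedStatement) are
// the preferred, more robust defense.
public class EscapeExample {
    public static String escapeSqlLiteral(String input) {
        return input.replace("'", "''");
    }

    public static void main(String[] args) {
        String attack = "' OR 1=1 --";
        // The escaped value now stays inside the string literal.
        System.out.println("select * from users where username = '"
                + escapeSqlLiteral(attack) + "'");
    }
}
```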
&lt;br /&gt;
Although a number of techniques exist to mitigate the risks posed by SQL injection vulnerabilities&amp;lt;sup&amp;gt;[2, 6, 8, 9, 13, 14]&amp;lt;/sup&amp;gt;, none of these techniques measures test adequacy in terms of how many of the commands issued to a database management system are exercised by the test suite.&lt;br /&gt;
&lt;br /&gt;
== 3. Coverage Criteria ==&lt;br /&gt;
&lt;br /&gt;
We define two criteria for input validation testing coverage. Client-side input validation can be bypassed by attackers&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. Therefore, we only measure the coverage of server-side code. The following basic terms are used to define the input validation coverage criteria. &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Target statement&#039;&#039;&#039;: A target statement (within our context) is a SQL statement which could cause a security problem when malicious input is used. For example, consider the statement &lt;br /&gt;
&lt;br /&gt;
  java.sql.Statement.executeQuery(String sql) &lt;br /&gt;
&lt;br /&gt;
A SQL injection attack can happen when an attacker uses maliciously-devised input as explained in Section 2. Let &#039;&#039;&#039;T&#039;&#039;&#039; be the set of all the SQL statements in an application.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Input variable&#039;&#039;&#039;: An input variable is any variable in the serverside production code which is dynamically user-assigned and sent to the database management system. Let &#039;&#039;&#039;F&#039;&#039;&#039; represent the set of all input variables in all SQL statements occurring in the production code. &lt;br /&gt;
&lt;br /&gt;
=== 3.1 Target Statement Coverage ===&lt;br /&gt;
&lt;br /&gt;
Target statement coverage measures the percentage of SQL statements executed at least once during execution of the test suite. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Definition&#039;&#039;&#039;: A set of input validation tests satisfies target statement coverage if and only if for every SQL statement &#039;&#039;t&#039;&#039; &amp;amp;isin; &#039;&#039;&#039;T&#039;&#039;&#039;, there exists at least one test in the input validation test cases which executes &#039;&#039;t&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Metric&#039;&#039;&#039;: The target statement coverage criterion can be measured as the percentage of SQL statements tested at least once by the test set out of the total number of SQL statements. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Server-side target statement coverage&#039;&#039;&#039; = [[File:Sess-eqn-1.png]]&lt;br /&gt;
&lt;br /&gt;
where Test(&#039;&#039;t&#039;&#039;) denotes a SQL statement &#039;&#039;t&#039;&#039; tested at least once. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Coverage interpretation&#039;&#039;&#039;: A low value for target statement coverage indicates that testing was insufficient. Programmers need to add more test cases to the input validation set for untested SQL statements to improve target statement coverage.&lt;br /&gt;
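A minimal sketch of this computation follows (the statement identifier sets are assumed to come from instrumentation such as that described in Section 5; this is not the tooling used in the study):&lt;br /&gt;

```java
import java.util.Set;

// Sketch of the target statement coverage metric: the percentage of
// target statements in T that are executed by at least one test.
// Statement identifiers are assumed to come from code instrumentation.
public class TargetStatementCoverage {
    public static double coverage(Set<Integer> allStatements,
                                  Set<Integer> testedStatements) {
        if (allStatements.isEmpty()) return 100.0;
        long covered = testedStatements.stream()
                .filter(allStatements::contains)
                .count();
        return 100.0 * covered / allStatements.size();
    }

    public static void main(String[] args) {
        // Two of four target statements executed by the test set -> 50%.
        System.out.println(coverage(Set.of(1, 2, 3, 4), Set.of(1, 3)));
    }
}
```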
&lt;br /&gt;
=== 3.2 Input Variable Coverage ===&lt;br /&gt;
&lt;br /&gt;
Input variable coverage measures the percentage of input variables used in at least one test at the server-side. Input variable coverage does not consider all the constraints for the input variable. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Definition&#039;&#039;&#039;: A set of tests satisfies the input variable coverage criterion if and only if for every input variable &#039;&#039;f&#039;&#039; &amp;amp;isin; &#039;&#039;&#039;F&#039;&#039;&#039;, there exists at least one test that uses that input variable at least once. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Metric&#039;&#039;&#039;: The input variable coverage criterion can be measured as the percentage of input variables tested at least once by the test set out of the total number of input variables found in any target statement in the production code of the system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Input variable coverage&#039;&#039;&#039; = [[File:Sess-eqn-2.png]]&lt;br /&gt;
&lt;br /&gt;
where Test(&#039;&#039;f&#039;&#039;) denotes an input variable &#039;&#039;f&#039;&#039; used in at least one test.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Coverage interpretation&#039;&#039;&#039;: A low value for input variable coverage indicates that input validation testing is insufficient. Programmers need to add more test cases for untested input variables to improve input variable coverage. &lt;br /&gt;
&lt;br /&gt;
We note here that a test set which achieves 100% input variable coverage and 100% target statement coverage may not contain any tests with malicious input. Such a test set nevertheless provides a foundation for attack-based testing: consider a test set which satisfies both coverage criteria and leverages a blacklist to test for input validation attacks. This test set ensures that every input variable in every target statement is tested with every attack in the blacklist. &lt;br /&gt;
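Such a pairing can be sketched as a loop that replays a covered input point with every payload from a blacklist (the payload list and the rejection predicate below are illustrative stand-ins, not a real attack corpus):&lt;br /&gt;

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of reusing a covered input variable with every entry in a
// blacklist: any payload that the system's validation fails to reject
// is reported. The payload list here is illustrative, not exhaustive.
public class BlacklistDriver {
    static final List<String> BLACKLIST = List.of(
            "' OR 1=1 --",
            "'; DROP TABLE users; --",
            "\" OR \"\"=\"");

    // rejects stands in for the system's input validation logic.
    public static List<String> undetected(Predicate<String> rejects) {
        List<String> missed = new ArrayList<>();
        for (String payload : BLACKLIST) {
            if (!rejects.test(payload)) {
                missed.add(payload);
            }
        }
        return missed;
    }

    public static void main(String[] args) {
        // A validator that only rejects single quotes misses the
        // double-quote payload.
        System.out.println(undetected(s -> s.contains("'")));
    }
}
```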
&lt;br /&gt;
The relationship between target statement coverage and input variable coverage is not yet known; however, we contend that input variable coverage is a useful, finer-grained measurement. &lt;br /&gt;
&lt;br /&gt;
Input variable coverage has the effect of weighting a target statement which has more input variables more heavily. Since most input variables are each a separate potential vulnerability if not adequately validated, a target statement which contains more input variables is of a higher threat level.&lt;br /&gt;
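The weighting effect can be made concrete with a small numeric sketch (statement identifiers and variable counts are illustrative): with one statement containing three input variables and another containing one, covering only the single-variable statement yields 50% target statement coverage but only 25% input variable coverage.&lt;br /&gt;

```java
import java.util.List;
import java.util.Map;

// Numeric sketch of the weighting effect: input variable coverage
// counts each variable separately, so statements with more input
// variables carry more weight. Assumes all variables of a covered
// statement are used by some test.
public class WeightingExample {
    // varsPerStatement maps statement id -> number of input variables;
    // executed lists the statement ids covered by the test set.
    public static double inputVariableCoverage(
            Map<Integer, Integer> varsPerStatement, List<Integer> executed) {
        int total = varsPerStatement.values().stream()
                .mapToInt(Integer::intValue).sum();
        int used = executed.stream()
                .mapToInt(varsPerStatement::get).sum();
        return total == 0 ? 100.0 : 100.0 * used / total;
    }

    public static void main(String[] args) {
        // Statement 1 has three input variables, statement 2 has one.
        Map<Integer, Integer> vars = Map.of(1, 3, 2, 1);
        // Covering only statement 2: 1 of 4 variables -> 25%.
        System.out.println(inputVariableCoverage(vars, List.of(2)));
    }
}
```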
&lt;br /&gt;
== 4. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Halfond and Orso&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt; introduce an approach for evaluating the number of database interaction points which have been tested within a system. Database interaction points are similar to target statements in that they are defined by Halfond and Orso as any statement in the application code where a SQL command is issued to a relational database management system. These authors chose to focus on dynamically-generated queries, and define a &#039;&#039;command form&#039;&#039; as a single grammatically distinct structure for a SQL query which the application under test can generate. Using their tool &amp;lt;code&amp;gt;DITTO&amp;lt;/code&amp;gt; on an example application, Halfond and Orso demonstrate that it is feasible to perform automated instrumentation on source code to gather &#039;&#039;command form coverage&#039;&#039;, which is expressed as the number of covered command forms divided by the total number of possible command forms. &lt;br /&gt;
&lt;br /&gt;
Willmor and Embury&amp;lt;sup&amp;gt;[18]&amp;lt;/sup&amp;gt; assess database coverage in the sense of whether the output received from the relational database system itself is correct and whether the database is structured correctly. The authors contend that the view of one system to one database is too simplistic; the research community has yet to consider the effect of incorrect database behavior on multiple concurrent applications or when using multiple database systems. The authors define the &#039;&#039;All Database Operations&#039;&#039; criteria as being satisfied when every database operation, which exists as a control graph node in the system under test, is executed by the test set in question.&lt;br /&gt;
&lt;br /&gt;
== 5. Case Study ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Research Question&#039;&#039;: Is it possible to manually instrument an application which interacts with a database, marking each target statement and input variable, and then dynamically gather the number of target statements executed by a test set? &lt;br /&gt;
&lt;br /&gt;
To answer our research question, we performed a case study on iTrust&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;, an open source web application designed for storing and distributing healthcare records in a secure manner. Section 5.1 describes the architecture and implementation specifics of iTrust. Then, Section 5.2 gives more information about how our case study was conducted.&lt;br /&gt;
&lt;br /&gt;
=== 5.1 iTrust ===&lt;br /&gt;
&lt;br /&gt;
iTrust is a web application, written in Java, that stores medical records for patients for use by healthcare professionals. Code metrics for iTrust Fall 2007 can be found in Table 1. The intent of the system is to be compliant with the Health Insurance Portability and Accountability Act&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt; privacy standard, which requires that medical records be accessible only by authorized persons. Since 2005, iTrust has been developed and maintained by teams of graduate students at North Carolina State University who have used the application as a part of their Software Reliability and Testing coursework or for research purposes. As such, students were required in their assignments to achieve high statement coverage, as measured via the djUnit&amp;lt;sup&amp;gt;7&amp;lt;/sup&amp;gt; coverage tool. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Table 1. Code Metrics for iTrust Fall 2007 (7707 LoC in 143 classes Total)&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Package&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Java Class&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;LoC&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Statements&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Methods&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Variables&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Test Cases&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Line Coverage&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;17&amp;quot;|edu.ncsu.csc.itrust.dao.mysql&lt;br /&gt;
|AccessDAO&lt;br /&gt;
| 156&lt;br /&gt;
| 6&lt;br /&gt;
| 8&lt;br /&gt;
| 1&lt;br /&gt;
| 12&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
|AllergyDAO&lt;br /&gt;
| 61&lt;br /&gt;
| 2&lt;br /&gt;
| 3&lt;br /&gt;
| 2&lt;br /&gt;
| 5&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| AuthDAO&lt;br /&gt;
| 184&lt;br /&gt;
| 8&lt;br /&gt;
| 10&lt;br /&gt;
| 2&lt;br /&gt;
| 23&lt;br /&gt;
| 98%&lt;br /&gt;
|-&lt;br /&gt;
| BkpStandardsDAO&lt;br /&gt;
| 61&lt;br /&gt;
| 1&lt;br /&gt;
| 5&lt;br /&gt;
| 4&lt;br /&gt;
| 0&lt;br /&gt;
| 0%&lt;br /&gt;
|-&lt;br /&gt;
| CPTCodesDAO&lt;br /&gt;
| 123&lt;br /&gt;
| 4&lt;br /&gt;
| 5&lt;br /&gt;
| 2&lt;br /&gt;
| 8&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| EpidemicDAO&lt;br /&gt;
| 141&lt;br /&gt;
| 2&lt;br /&gt;
| 5&lt;br /&gt;
| 1&lt;br /&gt;
| 6&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| FamilyDAO&lt;br /&gt;
| 112&lt;br /&gt;
| 3&lt;br /&gt;
| 5&lt;br /&gt;
| 2&lt;br /&gt;
| 6&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| HealthRecordsDAO&lt;br /&gt;
| 65&lt;br /&gt;
| 2&lt;br /&gt;
| 3&lt;br /&gt;
| 2&lt;br /&gt;
| 6&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| HospitalsDAO&lt;br /&gt;
| 180&lt;br /&gt;
| 7&lt;br /&gt;
| 8&lt;br /&gt;
| 2&lt;br /&gt;
| 18&lt;br /&gt;
| 88%&lt;br /&gt;
|-&lt;br /&gt;
| ICDCodesDAO&lt;br /&gt;
| 123&lt;br /&gt;
| 4&lt;br /&gt;
| 5&lt;br /&gt;
| 2&lt;br /&gt;
| 1&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| NDCodesDAO&lt;br /&gt;
| 122&lt;br /&gt;
| 4&lt;br /&gt;
| 5&lt;br /&gt;
| 2&lt;br /&gt;
| 8&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| OfficeVisitDAO&lt;br /&gt;
| 362&lt;br /&gt;
| 15&lt;br /&gt;
| 20&lt;br /&gt;
| 6&lt;br /&gt;
| 30&lt;br /&gt;
| 99%&lt;br /&gt;
|-&lt;br /&gt;
| PatientDAO&lt;br /&gt;
| 322&lt;br /&gt;
| 14&lt;br /&gt;
| 15&lt;br /&gt;
| 4&lt;br /&gt;
| 38&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| PersonnelDAO&lt;br /&gt;
| 196&lt;br /&gt;
| 10&lt;br /&gt;
| 8&lt;br /&gt;
| 3&lt;br /&gt;
| 15&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| RiskDAO&lt;br /&gt;
| 126&lt;br /&gt;
| 3&lt;br /&gt;
| 8&lt;br /&gt;
| 1&lt;br /&gt;
| 3&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| TransactionDAO&lt;br /&gt;
| 135&lt;br /&gt;
| 5&lt;br /&gt;
| 7&lt;br /&gt;
| 3&lt;br /&gt;
| 10&lt;br /&gt;
| 93%&lt;br /&gt;
|-&lt;br /&gt;
| VisitRemindersDAO&lt;br /&gt;
| 166&lt;br /&gt;
| 2&lt;br /&gt;
| 3&lt;br /&gt;
| 1&lt;br /&gt;
| 6&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;2&amp;quot;|edu.ncsu.csc.itrust.dao&lt;br /&gt;
|DBUtil&lt;br /&gt;
| 29&lt;br /&gt;
| 1&lt;br /&gt;
| 2&lt;br /&gt;
| 0&lt;br /&gt;
| 1&lt;br /&gt;
| 69%&lt;br /&gt;
|-&lt;br /&gt;
| DAO Classes: 20 Total&lt;br /&gt;
| 2378&lt;br /&gt;
| 93&lt;br /&gt;
| 125&lt;br /&gt;
| 40&lt;br /&gt;
| 196&lt;br /&gt;
| 92%&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In a recent refactoring effort, the iTrust architecture has been formulated to follow a paradigm of Action and Database Access Object (DAO) stereotypes. As shown in Figure 7, iTrust contains JSPs which are the dynamic web pages served to the client. In general, each JSP corresponds to an Action class, which allows the authorized user to view or modify various records contained in the iTrust system. While the Action class provides the logic for ensuring the current user is authorized to view a given set of records, the DAO provides a modular wrapper for the database. Each DAO corresponds to a certain related set of data types, such as Office Visits, Allergies or Health Records. Because of this architecture, every SQL statement used in the production code of iTrust exists in a DAO. iTrust testing is conducted using JUnit v3.0 test cases which make calls either to the Action classes or the DAO classes. Since we are interested in how much testing was performed on the aspects of the system which interact directly with the database, we focus on the DAO classes for this study. &lt;br /&gt;
&lt;br /&gt;
iTrust was written to conform to a MySQL&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt; back-end. The MySQL JDBC connector was used to implement the data storage for the web application by connecting to a remotely executing instance of MySQL v5.1.11-remote-nt. The &amp;lt;code&amp;gt;java.sql.PreparedStatement&amp;lt;/code&amp;gt; class is one way of representing SQL statements in the JDBC framework. Statement objects contain a series of overloaded methods all beginning with the word execute: &amp;lt;code&amp;gt;execute(…)&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;executeQuery(…)&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;executeUpdate(…)&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;executeBatch()&amp;lt;/code&amp;gt;. These methods are the java.sql way of issuing commands to the database and each of them represents a potential change to the database. These method calls, which we have previously introduced as &#039;&#039;target statements&#039;&#039;, are the focus of our coverage metrics. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:SESS-Figure7.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Figure 7. General iTrust architecture&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The version of iTrust we used for this study is referred to as iTrust Fall 2007, named by the year and semester it was built and redistributed to a new set of graduate students. iTrust was written to execute in Java 1.6 and thus our testing was conducted with the corresponding JRE. Code instrumentation and testing were conducted in Eclipse v3.3 Europa on an IBM Lenovo T61p running Windows Vista Ultimate with a 2.40Ghz Intel Core Duo and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
=== 5.2 Study Setup ===&lt;br /&gt;
&lt;br /&gt;
The primary challenge in collecting both of our proposed metrics is that there is currently no static tool which can integrate with the test harness JUnit to determine when SQL statements found within the code have been executed. As a result, we computed our metrics manually and via code instrumentation. &lt;br /&gt;
&lt;br /&gt;
The code fragment in Figure 8 demonstrates the execution of a SQL statement found within an iTrust DAO. Each of the JDBC execute method calls represents communication with the DBMS and has the potential to change the database. &lt;br /&gt;
&lt;br /&gt;
We assign each execute method call a unique identifier id in the range 1, 2, ..., n where n is the total number of execute method calls. We then instrument the code to contain a call to &amp;lt;code&amp;gt;SQLMarker.mark(id)&amp;lt;/code&amp;gt;. This &amp;lt;code&amp;gt;SQLMarker&amp;lt;/code&amp;gt; class interfaces with a research database we have set up to hold status information for each statically identified execute method call. Before running the test suite, we load (or reload) a SQL table with records corresponding to each unique identifier from 1 to n. These records all contain a field &amp;lt;code&amp;gt;marked&amp;lt;/code&amp;gt; which is set to &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt;. The &amp;lt;code&amp;gt;SQLMarker.mark(id)&amp;lt;/code&amp;gt; method changes &amp;lt;code&amp;gt;marked&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt;. If &amp;lt;code&amp;gt;marked&amp;lt;/code&amp;gt; is already true, it will remain true. &lt;br /&gt;
&lt;br /&gt;
Using this technique, we can monitor the call status of each execute statement found within the iTrust production code. When the test suite has finished executing, the table in our research database contains n unique records which correspond to each execute method call in the iTrust production code. Each record contains a boolean flag indicating whether the statement was called during test suite execution. The line with the comment &amp;lt;code&amp;gt;//instrumentation&amp;lt;/code&amp;gt; shows how this method is implemented in the example code in Figure 8.&lt;br /&gt;
&lt;br /&gt;
  java.sql.Connection conn = factory.getConnection(); &lt;br /&gt;
  java.sql.PreparedStatement ps = conn.prepareStatement(&amp;quot;UPDATE globalVariables SET Value = ? WHERE Name = &#039;Timeout&#039;&amp;quot;); &lt;br /&gt;
  ps.setInt(1, mins); &lt;br /&gt;
  SQLMarker.mark(1, 1); //instrumentation &lt;br /&gt;
  int rows = ps.executeUpdate();&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 8. Code Instrumentation&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;SQLMarker.mark&amp;lt;/code&amp;gt; is always placed immediately before the execute method call (the target statement), so that the call&#039;s execution is recorded even if the statement throws an exception while executing. Determining the number of SQL statements actually possible in the production code raises some issues; these are addressed in Section 7.&lt;br /&gt;
&lt;br /&gt;
To calculate input variable coverage, we added a second parameter to the &amp;lt;code&amp;gt;SQLMarker.mark&amp;lt;/code&amp;gt; method which allows us to record the number of input variables set for the execute method. Initially, the input variable record of each execute method is set to zero, and the &amp;lt;code&amp;gt;SQLMarker.mark&amp;lt;/code&amp;gt; method sets it to the passed value. iTrust uses PreparedStatements for its SQL statements and, as Figure 8 demonstrates, the number of input variables is always clearly visible in the production code because PreparedStatements require the explicit setting of each variable included in the statement. As with the determination of SQL statements, there are similar issues with determining the number of SQL input variables, which we present in Section 7.&lt;br /&gt;
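The two-argument form of the marking call can be sketched in the same spirit; again, this is an illustrative in-memory stand-in, not the database-backed implementation used in the study:&lt;br /&gt;

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the two-argument mark: alongside the marked flag,
// record how many input variables were set before each execute call.
public class InputVariableMarker {
    private static final Map<Integer, Integer> varsSet = new HashMap<>();

    // One record per execute call; input variable counts start at zero.
    public static void reset(int n) {
        varsSet.clear();
        for (int id = 1; id <= n; id++) varsSet.put(id, 0);
    }

    // Called immediately before execute call id, which set varCount variables.
    public static void mark(int id, int varCount) {
        varsSet.put(id, varCount);
    }

    public static int totalVariablesCovered() {
        return varsSet.values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        reset(2);
        mark(1, 1); // e.g. the single setInt call in Figure 8
        System.out.println(totalVariablesCovered());
    }
}
```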
&lt;br /&gt;
== 6. Results and Discussion ==&lt;br /&gt;
&lt;br /&gt;
We found that 90 of the 93 SQL statements in the iTrust server-side production code were executed by the test suite, yielding a SQL statement coverage score of 96.7%. We found that 209 of the 212 SQL input variables found in the iTrust back-end were executed by the test suite, yielding a SQL input variable coverage score of 98.5%. We find that iTrust is a very testable system with respect to SQL statement coverage because each SQL statement, in essence, is embodied within a method of a DAO. This architectural decision is designed to allow the separation of concerns: for example, the action of editing a patient’s records via the user interface is separated from the action of actually updating that patient’s records in the database. We find that even though the refactoring of iTrust was intended to produce this high testability, there are still untested SQL statements within the production code. The Action classes of the iTrust framework represent procedures the client can perform with proper authorization. Since iTrust’s line coverage is 91%, the results for iTrust are actually &#039;&#039;better&#039;&#039; than they would be for many existing systems due to its high testability. &lt;br /&gt;
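As an arithmetic check, both reported scores follow from the raw counts when the percentage is truncated (rather than rounded) to one decimal place:&lt;br /&gt;

```java
public class CoverageScores {
    // Coverage as a percentage, truncated to one decimal place
    // (90/93 is 96.77..., which truncates to the reported 96.7).
    static double coverage(int executed, int total) {
        return Math.floor(1000.0 * executed / total) / 10.0;
    }

    public static void main(String[] args) {
        System.out.println(coverage(90, 93));   // SQL statement coverage: 96.7
        System.out.println(coverage(209, 212)); // input variable coverage: 98.5
    }
}
```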
&lt;br /&gt;
The three uncovered SQL statements occurred in methods which were never called by any Action class and thus are never used in production. Two of the statements related to the management of hospitals and one statement offered an alternate way of managing procedural and diagnosis codes. The uncovered statements could eventually be used by new features added to production, so the fact that they are not executed by any test is still pertinent.&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
Certain facets of the JDBC framework, and of SQL in general, make it difficult to establish a denominator for the ratio described for each of our coverage metrics. For example, recall that in calculating SQL statement coverage, we must find, mark, and count each statically occurring SQL statement within the production code. The fragment presented in Figure 9 contains Java batch SQL statements. Similar to &#039;&#039;batch mode&#039;&#039; in MySQL, each statement is pushed into a single batch statement and then the statements are all executed with one commit. Batch statements can be used to increase efficiency or to help manage concurrency. We can count the number of executed SQL statements in a batch: a dummy variable could be instrumented within the for loop demonstrated in Figure 9, incrementing each time a batch statement is added (e.g., &amp;lt;code&amp;gt;ps.addBatch()&amp;lt;/code&amp;gt;). How many SQL statements are possible, though? The numerator will always be the same as the number of &amp;lt;code&amp;gt;DiagnosisBeans&amp;lt;/code&amp;gt; in the variable &amp;lt;code&amp;gt;updateDiagnoses&amp;lt;/code&amp;gt;. These beans are parsed from input the user passes to the Action class via the JSP to make changes to several records in one web form submission. The denominator, however, is potentially infinite. &lt;br /&gt;
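The dummy-variable counting described above can be sketched without a live JDBC connection; the commented-out line here stands in for the &amp;lt;code&amp;gt;setBoolean&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;setLong&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;addBatch&amp;lt;/code&amp;gt; calls of Figure 9 (class and method names are illustrative):&lt;br /&gt;

```java
import java.util.List;

// Sketch of counting executed batch SQL statements: a dummy counter is
// incremented once per loop iteration, mirroring instrumentation placed
// next to ps.addBatch() in Figure 9. The returned count is the numerator
// of the coverage ratio; the denominator is unbounded, since it depends
// on how many beans the user submits in one form submission.
public class BatchCounter {
    public static int countBatchedStatements(List<?> updateDiagnoses) {
        int batched = 0;
        for (Object bean : updateDiagnoses) {
            // ps.setBoolean(1, ...); ps.setLong(2, ...); ps.addBatch();
            batched++; // one SQL statement queued per bean
        }
        return batched;
    }

    public static void main(String[] args) {
        // Three stand-in "beans" yield three batched statements.
        System.out.println(countBatchedStatements(List.of("d1", "d2", "d3")));
    }
}
```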
&lt;br /&gt;
Additionally, the students who have worked on iTrust were required to use PreparedStatements, which elevates our resulting input variable coverage because PreparedStatements require explicit assignment to each input variable; this may not be the case with other SQL connection methodologies. Furthermore, our metrics do not give any indication of how many input values have been tested in each input variable in each target statement. &lt;br /&gt;
&lt;br /&gt;
This technique is currently only applicable to Java code which implements a JDBC interface and uses PreparedStatements to interact with a SQL database management system. Finally, we recognize that much legacy code is implemented using dynamically generated SQL queries; while our metric for target statement coverage could still be applied, our metric for input variable coverage does not contain an adequate definition for counting the input variables in a dynamically generated query. Our approach is repeatable and can generalize to other applications matching the above restrictions.&lt;br /&gt;
&lt;br /&gt;
  public void updateDiscretionaryAccess(List&amp;lt;DiagnosisBean&amp;gt; updateDiagnoses) &lt;br /&gt;
  { &lt;br /&gt;
    java.sql.Connection conn = factory.getConnection(); &lt;br /&gt;
    java.sql.PreparedStatement ps = conn.prepareStatement(&amp;quot;UPDATE OVDiagnosis SET DiscretionaryAccess=? WHERE ID=?&amp;quot;); &lt;br /&gt;
    for (DiagnosisBean d : updateDiagnoses) { &lt;br /&gt;
      ps.setBoolean(1, d.isDiscretionaryAccess()); &lt;br /&gt;
      ps.setLong(2, d.getOvDiagnosisID()); &lt;br /&gt;
      ps.addBatch(); &lt;br /&gt;
    } &lt;br /&gt;
    SQLMarker.mark(1, 2); &lt;br /&gt;
    ps.executeBatch(); &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 9. Batch SQL Statements&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 8. Conclusions and Future Work ==&lt;br /&gt;
&lt;br /&gt;
We have shown that a major portion of recent cyber vulnerabilities occur due to a lack of input validation, and that testing strategies should incorporate new techniques to account for the likelihood of input validation attacks. Structural coverage metrics allow us to see how much of an application is executed by a given test set, and we have shown that the notion of coverage can be extended to target statements and their input values, introducing a technique for determining these coverage values. Finally, we have answered our research question with a case study which demonstrates that, using the technique we describe, it is possible to dynamically gather accurate coverage metric values produced by a given test set.&lt;br /&gt;
&lt;br /&gt;
Future improvements can make these metrics portable to different database management systems as well as usable in varying development languages.  We would eventually extend our metric to evaluate the percentage of all sources of user input that have been involved in a test case.  We would like to automate the process of collecting SQL statement coverage into a tool or plug-in, which can help developers rapidly assess the level of security testing which has been performed as well as find the statements that have not been tested with any test set.  This work will eventually be extended to cross-site scripting attacks and buffer overflow vulnerabilities.  Finally, we would like to integrate these coverage metrics with a larger framework which will allow target statements and variables which are included in the coverage to be tested against sets of pre-generated good and malicious input.&lt;br /&gt;
&lt;br /&gt;
== 9. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
This work is supported by the National Science Foundation under CAREER Grant No. 0346903.   Any opinions expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== 10. References ==&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt; B. Beizer, Software testing techniques: Van Nostrand Reinhold Co. New York, NY, USA, 1990.&lt;br /&gt;
: &amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt; S. W. Boyd and A. D. Keromytis, &amp;quot;SQLrand: Preventing SQL injection attacks,&amp;quot; in Proceedings of the 2nd Applied Cryptography and Network Security (ACNS) Conference, Yellow Mountain, China, pp. 292-304, 2004. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; B. Brenner, &amp;quot;CSI 2007: Developers need Web application security assistance,&amp;quot; in SearchSecurity.com, 2007. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; M. Cobb, &amp;quot;Making the case for Web application vulnerability scanners,&amp;quot; in SearchSecurity.com, 2007. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt; W. G. Halfond, J. Viegas, and A. Orso, &amp;quot;A Classification of SQL-Injection Attacks and Countermeasures,&amp;quot; in Proceedings of the International Symposium on Secure Software Engineering, March, Arlington, VA, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt; W. G. J. Halfond and A. Orso, &amp;quot;AMNESIA: analysis and monitoring for NEutralizing SQL-injection attacks,&amp;quot; in Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, Long Beach, CA, USA, pp. 174-183, 2005. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt; W. G. J. Halfond and A. Orso, &amp;quot;Command-Form Coverage for Testing Database Applications,&amp;quot; Proceedings of the IEEE and ACM International Conference on Automated Software Engineering, pp. 69–78, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt; Y. W. Huang, S. K. Huang, T. P. Lin, and C. H. Tsai, &amp;quot;Web application security assessment by fault injection and behavior monitoring,&amp;quot; in Proceedings of the 12th International Conference on World Wide Web, Budapest, Hungary, pp. 148-159, 2003. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt; S. Kals, E. Kirda, C. Kruegel, and N. Jovanovic, &amp;quot;SecuBat: a web vulnerability scanner,&amp;quot; in Proceedings of the 15th international conference on World Wide Web, Edinburgh, Scotland pp. 247-256, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt; G. McGraw, Software Security: Building Security in. Upper Saddle River, NJ: Addison-Wesley Professional, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt; J. Offutt, &amp;quot;Quality attributes of Web software applications,&amp;quot; IEEE Software, vol. 19, no. 2, pp. 25-32, 2002. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[12]&amp;lt;/sup&amp;gt; E. Ogren, &amp;quot;App Security&#039;s Evolution,&amp;quot; in DarkReading.com, 2007. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; T. Pietraszek and C. V. Berghe, &amp;quot;Defending against injection attacks through context-sensitive string evaluation,&amp;quot; in Recent Advances in Intrusion Detection (RAID). Seattle, WA, 2005. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[14]&amp;lt;/sup&amp;gt; F. S. Rietta, &amp;quot;Application layer intrusion detection for SQL injection,&amp;quot; in Proceedings of the 44th annual southeast regional conference, New York, NY, pp. 531-536, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[15]&amp;lt;/sup&amp;gt; D. Scott and R. Sharp, &amp;quot;Developing secure Web applications,&amp;quot; Internet Computing, IEEE, vol. 6, no. 6, pp. 38-45, 2002. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[16]&amp;lt;/sup&amp;gt; Z. Su and G. Wassermann, &amp;quot;The essence of command injection attacks in web applications,&amp;quot; in Proceedings of the Annual Symposium on Principles of Programming Languages, Charleston, SC, pp. 372-382, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[17]&amp;lt;/sup&amp;gt; H. H. Thompson and J. A. Whittaker, &amp;quot;Testing for software security,&amp;quot; Dr. Dobb&#039;s Journal, vol. 27, no. 11, pp. 24-34, 2002.&lt;br /&gt;
: &amp;lt;sup&amp;gt;[18]&amp;lt;/sup&amp;gt; D. Willmor and S. M. Embury, &amp;quot;Exploring test adequacy for database systems,&amp;quot; in Proceedings of the 3rd UK Software Testing Research Workshop, Sheffield, UK, pp. 123-133, 2005. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[19]&amp;lt;/sup&amp;gt; H. Zhu, P. A. V. Hall, and J. H. R. May, &amp;quot;Software Unit Test Coverage and Adequacy,&amp;quot; ACM Computing Surveys, vol. 29, no. 4, 1997.&lt;br /&gt;
&lt;br /&gt;
== 11. End Notes ==&lt;br /&gt;
&lt;br /&gt;
# http://nvd.nist.gov/&lt;br /&gt;
# In Figure 1, we counted the reported instances of vulnerabilities by using the keywords &amp;quot;SQL injection&amp;quot;, &amp;quot;cross-site scripting&amp;quot;, &amp;quot;XSS&amp;quot;, and &amp;quot;buffer overflow&amp;quot; within the input validation error category from NVD.&lt;br /&gt;
# http://www.junit.org&lt;br /&gt;
# A cookie is a piece of information that is sent by a web server when a user first accesses the website and saved to a local file. The cookie is then used in consecutive requests to identify the user to the server. See http://www.ietf.org/rfc/rfc2109.txt.&lt;br /&gt;
# http://sourceforge.net/projects/itrust/&lt;br /&gt;
# US Pub. Law 104-192, est. 1996.&lt;br /&gt;
# http://works.dgic.co.jp/djunit/&lt;br /&gt;
# For our case study, we used MySQL v5.0.45-community-nt found at http://www.mysql.com/&lt;br /&gt;
&lt;br /&gt;
[[Category:Workshop Papers]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Proposing_SQL_Statement_Coverage_Metrics&amp;diff=780</id>
		<title>Proposing SQL Statement Coverage Metrics</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Proposing_SQL_Statement_Coverage_Metrics&amp;diff=780"/>
		<updated>2021-05-16T14:46:20Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; B. Smith, Y. Shin, and L. Williams, &amp;quot;Proposing SQL Statement Coverage Metrics&amp;quot;, Proceedings of the Fourth International Workshop on Software Engineering for Secure Systems (SESS 2008), co-located with ICSE, pp. 49-56, 2008.&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&#039;&#039;An increasing number of cyber attacks are occurring at the application layer when attackers use malicious input. These input validation vulnerabilities can be exploited by (among others) SQL injection, cross-site scripting, and buffer overflow attacks. Statement coverage and similar test adequacy metrics have historically been used to assess the level of functional and unit testing which has been performed on an application. However, these currently available metrics do not highlight how well the system protects itself through validation. In this paper, we propose two SQL injection input validation testing adequacy metrics: target statement coverage and input variable coverage. A test suite which satisfies both adequacy criteria can be leveraged as a solid foundation for input validation scanning with a blacklist. To determine whether it is feasible to calculate values for our two metrics, we perform a case study on a web healthcare application and discuss some implementation issues we have encountered. We find that the web healthcare application scored 96.7% target statement coverage and 98.5% input variable coverage.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
According to the National Vulnerability Database (NVD)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;, more than half of all of the ever-increasing number of cyber vulnerabilities reported in 2002-2006 were input validation vulnerabilities. As Figure 1 shows, the number of input validation vulnerabilities is still increasing. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:Sess-figure-1.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Figure 1. NVD&#039;s reported cyber vulnerabilities&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Figure 1 illustrates the number of reported instances of each type of cyber vulnerability listed in the series legend for each year displayed on the x-axis. The curve with square-shaped points is the sum of all reported vulnerabilities that fall into the categories “SQL injection”, “XSS”, or “buffer overflow” when querying the National Vulnerability Database. The curve with diamond-shaped points represents all cyber vulnerabilities reported for the year on the x-axis. For several years now, the number of reported input validation vulnerabilities has been half the total number of reported vulnerabilities. Additionally, the graph demonstrates that these curves are monotonically increasing, indicating that we are unlikely to see a future drop in the ratio of reported input validation vulnerabilities. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Input validation testing&#039;&#039; is the process of writing and running test cases to investigate how a system responds to malicious input, with the intention of using tests to mitigate the risk of a security threat. Input validation testing can increase confidence that input validation has been properly implemented. The goal of input validation testing is to check whether input is validated against the constraints given for that input. Input validation testing should test both whether legal input is accepted and whether illegal input is rejected. A coverage metric can quantify the extent to which this goal has been met. Various coverage criteria have been defined based on the target of testing (specification or program as a target) and underlying testing methods (structural, fault-based, and error-based)&amp;lt;sup&amp;gt;[19]&amp;lt;/sup&amp;gt;. Statement coverage and branch coverage are well-known program-based structural coverage criteria&amp;lt;sup&amp;gt;[19]&amp;lt;/sup&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
However, current structural coverage metrics and the tools which implement them do not provide specific information about insufficient or missing input validation. New coverage criteria to measure the adequacy of input validation testing can be used to highlight a level of security testing. &#039;&#039;Our research objective is to propose and to validate two input validation testing adequacy metrics related to SQL injection vulnerabilities&#039;&#039;. Our current input validation coverage criteria consist of two experimental metrics: input variable coverage, which measures the percentage of input variables used in at least one test; and target statement coverage, which measures the percentage of SQL statements executed in at least one test. &lt;br /&gt;
&lt;br /&gt;
An &#039;&#039;input variable&#039;&#039; is any dynamic, user-assigned variable which an attacker could manipulate to send malicious input to the system. In the context of the Web, any field on a web form is an input variable, as is any number of other client-side input spaces. Within the context of SQL injection attacks, an input variable is any variable which is sent to the database management system, as will be illustrated in further detail in Section 2. A &#039;&#039;target statement&#039;&#039; is any statement in an application which is subject to attack via malicious input; for this paper, our target statements are all SQL statements found in production code. Other input sources can be leveraged to form an attack, but we have chosen not to focus on them for this study because they comprise less than half of recently reported cyber vulnerabilities (see Figure 1 and explanation). &lt;br /&gt;
&lt;br /&gt;
In practice, even software development teams who use metrics such as traditional statement coverage often do not achieve 100% values in these metrics before production&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. If the lines left uncovered contain target statements, traditional statement coverage could be very high while little to no input validation testing is performed on the system. Conversely, a system whose target statements and input variables are each involved in at least one test achieves high input validation coverage metrics yet may still be insecure if the test cases did not utilize a malicious form of input. However, a system with a high score in the metrics we define has a foundation for thorough input validation testing: testers can relatively easily reuse existing test cases with multiple forms of good and malicious input. Our vision is to automate such reuse. &lt;br /&gt;
&lt;br /&gt;
We evaluated our metrics on the server-side code of a Java Server Pages web healthcare application that had an extensive set of JUnit&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt; test cases. We manually counted the number of input variables and SQL statements found in this system and dynamically recorded how many of these statements and variables are used in executing a given test set. The rest of this paper is organized as follows: First, Section 2 defines SQL injection attacks. Then, Section 3 introduces our experimental metrics. Section 4 provides a brief summary of related work. Next, Section 5 describes our case study and application of our technique. Section 6 reports the results of our study and discusses their implications. Then, Section 7 illustrates some limitations on our technique and our metrics. Finally, Section 8 concludes and discusses the future use and development of our metrics.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
Section 2.1 explains the fundamental difference between traditional testing and security testing. Then, Section 2.2 describes SQL injection. &lt;br /&gt;
&lt;br /&gt;
=== 2.1 Testing for Security ===&lt;br /&gt;
&lt;br /&gt;
Web applications are inherently insecure&amp;lt;sup&amp;gt;[15]&amp;lt;/sup&amp;gt; and web applications’ attackers look the same as any other customer to the server&amp;lt;sup&amp;gt;[12]&amp;lt;/sup&amp;gt;. Developers should, but typically do not, focus on building security into web applications&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;. Security has been added to the list of web application quality criteria&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt; and the result is that companies have begun to incorporate security testing (including input validation testing) into their development methodologies&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Security testing is contrasted with traditional testing, as illustrated by Figure 2: Functional vs. Security Testing, adapted from&amp;lt;sup&amp;gt;[17]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Sess-figure-2.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Figure 2. Intended vs. Actual Behavior, (adapted from &amp;lt;sup&amp;gt;[17]&amp;lt;/sup&amp;gt;)&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Represented by the left-hand circle in Figure 2, the current software development paradigm includes a list of testing strategies to ensure the correctness of an application in functionality and usability as indicated by a requirements specification. With respect to intended correctness, verification typically entails creating test cases designed to discover faults by causing failures. Oracles tell us what the system should do and failures tell us that the system does not do what it is supposed to do. The right-hand circle in Figure 2 indicates that we validate not only that the system does what it should, but also that the system does not do what it should not: the right-hand circle represents a failure occurring in the system which causes a security problem. The circles intersect because some intended functionality can cause indirect vulnerabilities when privacy and security were not considered in designing the required functionality&amp;lt;sup&amp;gt;[17]&amp;lt;/sup&amp;gt;. Testing for functionality only validates that the application achieves what was written in the requirements specification. Testing for security validates that the application prevents undesirable security risks from occurring, even when the nature of this functionality is spread across several modules and might be due to an oversight in the application’s design. To adapt to the new paradigm, companies have started to incorporate new techniques. Some companies use vulnerability scanners, which behave like a hacker to make automated attempts at gaining access or misusing the system to discover its flaws&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt;. A blacklist is a representative or comprehensive set of all input validation attacks of a given type (such as SQL injection, see Section 2.2). These vulnerability scanners typically use a blacklist to test potential vulnerabilities against all attacks (or a set of representative attacks). Coverage criteria for target statements can help companies assess how much of their system has the framework for a range of input validation testing. A vulnerability scanner is ineffective if its blacklist is not tested against every target statement in the system.&lt;br /&gt;
&lt;br /&gt;
=== 2.2 SQL Injection Attacks ===&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;SQL injection attack&#039;&#039; is performed when a user exploits a lack of input validation to force unintended system behavior by altering the logical structure of a SQL statement with special characters. The lack of input validation to prevent SQL injection attacks is known as a SQL injection vulnerability&amp;lt;sup&amp;gt;[2, 5, 6, 8, 9, 13-16]&amp;lt;/sup&amp;gt;. Our example of this type of input validation vulnerability begins with the login form presented in Figure 3.&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:Sess-figure-3.png]] &amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Figure 3. Example login form&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usernames typically consist of alphanumeric characters, underscores, periods and dashes. Passwords also typically consist of these character ranges and additionally allow for some other non-alphanumeric characters such as $, ^ or #. The authentication mechanism functions by a code segment resembling the one in Figure 4. Assume there exists some table maintaining a list of all usernames, passwords, and most likely some indication of the role of each unique username.&lt;br /&gt;
&lt;br /&gt;
  //for simplicity, this example is given in PHP. &lt;br /&gt;
  //first, extract the input values from the form &lt;br /&gt;
  $username = $_POST[&#039;username&#039;]; &lt;br /&gt;
  $password = $_POST[&#039;password&#039;]; &lt;br /&gt;
  &lt;br /&gt;
  //query the database for a user with username/pw &lt;br /&gt;
  $result = mysql_query(&amp;quot;select * from users where username = &#039;$username&#039; AND password = &#039;$password&#039;&amp;quot;); &lt;br /&gt;
  &lt;br /&gt;
  //extract the first row of the resultset &lt;br /&gt;
  $firstresult = mysql_fetch_array($result); &lt;br /&gt;
  &lt;br /&gt;
  //extract the &amp;quot;role&amp;quot; column from the result &lt;br /&gt;
  $role = $firstresult[&#039;role&#039;]; &lt;br /&gt;
  &lt;br /&gt;
  //set a cookie for the user with their role &lt;br /&gt;
  setcookie(&amp;quot;userrole&amp;quot;, $role); &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 4. Example authentication code&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The code in Figure 4 performs the following. First, query the database for every entry with the entered username and password. Typically, we use the first row of returned SQL results (which is retrieved by &amp;lt;code&amp;gt;mysql_fetch_array&amp;lt;/code&amp;gt; and stored in &amp;lt;code&amp;gt;$firstresult&amp;lt;/code&amp;gt;) because the web application (or the database management system) will ensure that there are no duplicate usernames and will ensure that every user name is given the appropriate role. Finally, we extract the role field from the first result and give the user a cookie&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;, which allows the login to be persistent (i.e., the user does not have to log in to view every protected page).&lt;br /&gt;
&lt;br /&gt;
The example we have presented in Figure 4 performs no input validation, and as a result the example contains at least three input validation vulnerability locations. The first two are the username and password fields as given in the web form in Figure 3. An attacker could cause the code fragment change shown in Figure 5 simply by entering the SQL command fragment &amp;lt;code&amp;gt;&#039; OR 1=1 -- AND&amp;lt;/code&amp;gt; in the input field instead of any valid user name in Figure 3.&lt;br /&gt;
&lt;br /&gt;
  //from Figure 4; original code &lt;br /&gt;
  $result = mysql_query(&amp;quot;select * from users where username = &#039;$username&#039; AND password = &#039;$password&#039;&amp;quot;);&lt;br /&gt;
  &lt;br /&gt;
  //code with inserted attack parameters &lt;br /&gt;
  $result = mysql_query(&amp;quot;select * from users where username = &#039;&#039; OR 1=1 -- AND password = &#039;PASSWORD&#039;&amp;quot;); &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 5. Example SQL statement, before and after&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The single quotation mark (&amp;lt;code&amp;gt;&#039;&amp;lt;/code&amp;gt;) indicates to the SQL parser that the character sequence for the username column is closed, the fragment &amp;lt;code&amp;gt;OR 1=1&amp;lt;/code&amp;gt; is interpreted as always true, and the hyphens (&amp;lt;code&amp;gt;--&amp;lt;/code&amp;gt;) tell the parser that the SQL command is over and the fragment of the query after the hyphens is a comment. With these values, the $result variable contains a list of every user in the table (and their associated role) because the where clause is always true. Which row is returned first from the database is unknown and will vary based on the database configuration. Regardless, the role of the user in the first returned row will be extracted and assigned to a cookie on the attacker’s machine. The consequence is as follows: assuming the attacker is not a registered user of the system, he or she has just been granted unauthorized access to the system with the role (and identity) associated with the first username in the table. The password field shown in Figure 3 is also vulnerable, but we do not demonstrate this attack for space reasons. Because no input validation was performed, the system can be exploited for a use that was unintended by its developers. &lt;br /&gt;
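The structural change can be reproduced by interpolating input into the query string the way Figure 4 does. This Java sketch (the class and method names are hypothetical) shows the benign and malicious cases:&lt;br /&gt;

```java
public class InjectionDemo {
    // Mirrors the unvalidated string interpolation from Figure 4.
    static String buildQuery(String username, String password) {
        return "select * from users where username = '" + username
             + "' AND password = '" + password + "'";
    }

    public static void main(String[] args) {
        // Benign input: the WHERE clause constrains both columns.
        System.out.println(buildQuery("alice", "s3cret"));
        // Malicious input: the quote closes the literal, OR 1=1 makes the
        // clause a tautology, and -- comments out the password check.
        System.out.println(buildQuery("' OR 1=1 -- AND", "PASSWORD"));
    }
}
```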
&lt;br /&gt;
The exploitation of the third vulnerability requires slightly more work than the first two, but is more threatening. Presumably, the developer of this example web application provides different content to a given web user (or provides no content at all) depending on the role parameter, which is stored in a cookie. Figure 6 shows example code for this design decision.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;$_COOKIE[&#039;role&#039;]&amp;lt;/code&amp;gt; superglobal array lookup retrieves the value stored on the user’s machine for the named parameter (in this case “role”). The web application provides one set of content for users with the administrator role and another set for those with the employee role. If the role parameter is anything else, the user is redirected to &amp;lt;code&amp;gt;authrequired.html&amp;lt;/code&amp;gt;, which presumably contains some type of message that authentication is required to access the requested page. The vulnerability stems from the relatively well-known fact that HTTP cookies are usually stored in a text file on the user’s machine. In this case, the attacker need only edit this file to see that there is a parameter named “role”, for which a reasonable guess at an authorized value would be “admin”. The consequence is as follows: if the attacker succeeds in guessing a correct value, the system provides content to a user who is unauthorized to view it, and the system has been exploited.&lt;br /&gt;
&lt;br /&gt;
  if ($_COOKIE[&#039;role&#039;] == &#039;admin&#039;) &lt;br /&gt;
  { &lt;br /&gt;
   //give admin access &lt;br /&gt;
  } &lt;br /&gt;
  else if ($_COOKIE[&#039;role&#039;] == &#039;employee&#039;) &lt;br /&gt;
  { &lt;br /&gt;
   //give employee access &lt;br /&gt;
  } &lt;br /&gt;
  else &lt;br /&gt;
  { &lt;br /&gt;
   //no role or unrecognizable role, &lt;br /&gt;
   //redirect to an error page. &lt;br /&gt;
   header(&amp;quot;Location: authrequired.html&amp;quot;); &lt;br /&gt;
  } &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 6. Example authentication persistence&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A countermeasure for the form input field vulnerability is simply to escape all control characters (such as &#039; or #) in the input variables. For the cookie vulnerability, a countermeasure would be to dynamically generate a unique identifier for the current session, store that identifier in the cookie, and keep the association between the identifier and the user role on the server. Because these vulnerabilities can be prevented with input validation, they are known as input validation vulnerabilities. The vulnerability in Figure 6 is not a SQL injection vulnerability; however, it still represents an input validation vulnerability. We have included it here in the interest of completeness, but we will not focus on this type of vulnerability in the rest of this paper. &lt;br /&gt;
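As an illustrative sketch of both countermeasures (the paper’s examples are in PHP; this Java version and all names in it are hypothetical): parameter binding lets the JDBC driver escape control characters such as &#039;, and a randomly generated session identifier replaces the guessable role value in the cookie.&lt;br /&gt;

```java
import java.security.SecureRandom;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class Countermeasures {
    // Countermeasure 1: bind input variables instead of concatenating them,
    // so control characters in the input cannot change the query structure.
    static ResultSet safeLogin(Connection conn, String user, String pass)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "select * from users where username = ? AND password = ?");
        ps.setString(1, user);
        ps.setString(2, pass);
        return ps.executeQuery();
    }

    // Countermeasure 2: store an unguessable session identifier in the cookie
    // and keep the identifier-to-role mapping on the server.
    static String newSessionId() {
        byte[] bytes = new byte[16];
        new SecureRandom().nextBytes(bytes);
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```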
&lt;br /&gt;
Although a number of techniques exist to mitigate the risks posed by SQL injection vulnerabilities&amp;lt;sup&amp;gt;[2, 6, 8, 9, 13, 14]&amp;lt;/sup&amp;gt;, none of them measures test adequacy in terms of how many of the commands issued to the database management system are exercised by the test suite.&lt;br /&gt;
&lt;br /&gt;
== 3. Coverage Criteria ==&lt;br /&gt;
&lt;br /&gt;
We define two criteria for input validation testing coverage. Client-side input validation can be bypassed by attackers&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. Therefore, we only measure the coverage of server-side code. The following basic terms are used to define the input validation coverage criteria. &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Target statement&#039;&#039;&#039;: A target statement (within our context) is a SQL statement which could cause a security problem when malicious input is used. For example, consider the statement &lt;br /&gt;
&lt;br /&gt;
  java.sql.Statement.executeQuery(String sql) &lt;br /&gt;
&lt;br /&gt;
A SQL injection attack can happen when an attacker uses maliciously-devised input as explained in Section 2. Let &#039;&#039;&#039;T&#039;&#039;&#039; be the set of all the SQL statements in an application.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Input variable&#039;&#039;&#039;: An input variable is any variable in the server-side production code which is dynamically user-assigned and sent to the database management system. Let &#039;&#039;&#039;F&#039;&#039;&#039; represent the set of all input variables in all SQL statements occurring in the production code. &lt;br /&gt;
&lt;br /&gt;
=== 3.1 Target Statement Coverage ===&lt;br /&gt;
&lt;br /&gt;
Target statement coverage measures the percentage of SQL statements executed at least once during execution of the test suite. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Definition&#039;&#039;&#039;: A set of input validation tests satisfies target statement coverage if and only if for every SQL statement &#039;&#039;t&#039;&#039; &amp;amp;isin; &#039;&#039;&#039;T&#039;&#039;&#039;, there exists at least one test in the input validation test cases which executes &#039;&#039;t&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Metric&#039;&#039;&#039;: The target statement coverage criterion can be measured as the percentage of SQL statements tested at least once by the test set out of the total number of SQL statements. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Server-side target statement coverage&#039;&#039;&#039; = [[File:Sess-eqn-1.png]]&lt;br /&gt;
&lt;br /&gt;
where Test(&#039;&#039;t&#039;&#039;) is a SQL statement tested at least once. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Coverage interpretation&#039;&#039;&#039;: A low value for target statement coverage indicates that testing was insufficient. Programmers need to add more test cases to the input validation set for untested SQL statements to improve target statement coverage.&lt;br /&gt;
&lt;br /&gt;
=== 3.2 Input Variable Coverage ===&lt;br /&gt;
&lt;br /&gt;
Input variable coverage measures the percentage of input variables used in at least one test at the server-side. Input variable coverage does not consider all the constraints for the input variable. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Definition&#039;&#039;&#039;: A set of tests satisfies the input variable coverage criterion if and only if for every input variable &#039;&#039;f&#039;&#039; &amp;amp;isin; &#039;&#039;&#039;F&#039;&#039;&#039;, there exists at least one test that uses &#039;&#039;f&#039;&#039; at least once. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Metric&#039;&#039;&#039;: The input variable coverage criterion can be measured as the percentage of input variables tested at least once by the test set out of the total number of input variables found in any target statement in the production code of the system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Input variable coverage&#039;&#039;&#039; = [[File:Sess-eqn-2.png]]&lt;br /&gt;
&lt;br /&gt;
where Test(&#039;&#039;f&#039;&#039;) is an input variable used in at least one test.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Coverage interpretation&#039;&#039;&#039;: A low value for input variable coverage indicates that input validation testing is insufficient. Programmers need to add more test cases for untested input variables to improve input variable coverage. &lt;br /&gt;
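Both metrics reduce to the same ratio of tested elements to total elements. As a small illustrative sketch (not the paper’s tooling), using the counts reported later in Section 6 (90 of 93 target statements and 209 of 212 input variables tested):&lt;br /&gt;

```java
public class CoverageMetrics {
    // Percentage of elements (target statements or input variables)
    // exercised at least once by the test set.
    static double coverage(int tested, int total) {
        return 100.0 * tested / total;
    }

    public static void main(String[] args) {
        System.out.printf("target statement coverage: %.1f%%%n", coverage(90, 93));
        System.out.printf("input variable coverage: %.1f%%%n", coverage(209, 212));
    }
}
```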
&lt;br /&gt;
We note here that a test set which achieves 100% input variable coverage and 100% target statement coverage may not contain any tests with malicious input. By contrast, consider a test set which satisfies both coverage criteria and leverages a blacklist of known attacks: such a test set ensures that every input variable in every target statement is tested with every attack in the blacklist. &lt;br /&gt;
&lt;br /&gt;
The relationship between target statement coverage and input variable coverage is not yet known; however, we contend that input variable coverage is a useful, finer-grained measurement. &lt;br /&gt;
&lt;br /&gt;
Input variable coverage has the effect of weighting a target statement with more input variables more heavily. Since each input variable is a separate potential vulnerability if not adequately validated, a target statement which contains more input variables poses a higher threat level.&lt;br /&gt;
&lt;br /&gt;
== 4. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Halfond and Orso&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt; introduce an approach for evaluating the number of database interaction points which have been tested within a system. Database interaction points are similar to target statements in that they are defined by Halfond and Orso as any statement in the application code where a SQL command is issued to a relational database management system. These authors chose to focus on dynamically-generated queries, and define a &#039;&#039;command form&#039;&#039; as a single grammatically distinct structure for a SQL query which the application under test can generate. Using their tool &amp;lt;code&amp;gt;DITTO&amp;lt;/code&amp;gt; on an example application, Halfond and Orso demonstrate that it is feasible to perform automated instrumentation on source code to gather &#039;&#039;command form coverage&#039;&#039;, which is expressed as the number of covered command forms divided by the total number of possible command forms. &lt;br /&gt;
&lt;br /&gt;
Willmor and Embury&amp;lt;sup&amp;gt;[18]&amp;lt;/sup&amp;gt; assess database coverage in the sense of whether the output received from the relational database system itself is correct and whether the database is structured correctly. The authors contend that the view of one system to one database is too simplistic; the research community has yet to consider the effect of incorrect database behavior on multiple concurrent applications or when using multiple database systems. The authors define the &#039;&#039;All Database Operations&#039;&#039; criterion as being satisfied when every database operation that appears as a control-flow graph node in the system under test is executed by the test set in question.&lt;br /&gt;
&lt;br /&gt;
== 5. Case Study ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Research Question&#039;&#039;: Is it possible to manually instrument an application which interacts with a database, marking each target statement and input variable, and then dynamically gather the number of target statements executed by a test set? &lt;br /&gt;
&lt;br /&gt;
To answer our research question, we performed a case study on iTrust&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;, an open source web application designed for storing and distributing healthcare records in a secure manner. Section 5.1 describes the architecture and implementation specifics of iTrust. Then, Section 5.2 gives more information about how our case study was conducted.&lt;br /&gt;
&lt;br /&gt;
=== 5.1 iTrust ===&lt;br /&gt;
&lt;br /&gt;
iTrust is a web-based application, written in Java, that stores medical records for patients for use by healthcare professionals. Code metrics for iTrust Fall 2007 can be found in Table 1. The intent of the system is to be compliant with the Health Insurance Portability and Accountability Act&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt; privacy standard, which requires that medical records be accessible only by authorized persons. Since 2005, iTrust has been developed and maintained by teams of graduate students at North Carolina State University who have used the application as part of their Software Reliability and Testing coursework or for research purposes. As such, students were required in their assignments to achieve high statement coverage, as measured via the djUnit&amp;lt;sup&amp;gt;7&amp;lt;/sup&amp;gt; coverage tool. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Table 1. Code Metrics for iTrust Fall 2007 (7707 LoC in 143 classes Total)&#039;&#039;&#039;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;Package&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Java Class&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;LoC&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Statements&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Methods&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Variables&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Test Cases&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;Line Coverage&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;17&amp;quot;|edu.ncsu.csc.itrust.dao.mysql&lt;br /&gt;
|AccessDAO&lt;br /&gt;
| 156&lt;br /&gt;
| 6&lt;br /&gt;
| 8&lt;br /&gt;
| 1&lt;br /&gt;
| 12&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
|AllergyDAO&lt;br /&gt;
| 61&lt;br /&gt;
| 2&lt;br /&gt;
| 3&lt;br /&gt;
| 2&lt;br /&gt;
| 5&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| AuthDAO&lt;br /&gt;
| 184&lt;br /&gt;
| 8&lt;br /&gt;
| 10&lt;br /&gt;
| 2&lt;br /&gt;
| 23&lt;br /&gt;
| 98%&lt;br /&gt;
|-&lt;br /&gt;
| BkpStandardsDAO&lt;br /&gt;
| 61&lt;br /&gt;
| 1&lt;br /&gt;
| 5&lt;br /&gt;
| 4&lt;br /&gt;
| 0&lt;br /&gt;
| 0%&lt;br /&gt;
|-&lt;br /&gt;
| CPTCodesDAO&lt;br /&gt;
| 123&lt;br /&gt;
| 4&lt;br /&gt;
| 5&lt;br /&gt;
| 2&lt;br /&gt;
| 8&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| EpidemicDAO&lt;br /&gt;
| 141&lt;br /&gt;
| 2&lt;br /&gt;
| 5&lt;br /&gt;
| 1&lt;br /&gt;
| 6&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| FamilyDAO&lt;br /&gt;
| 112&lt;br /&gt;
| 3&lt;br /&gt;
| 5&lt;br /&gt;
| 2&lt;br /&gt;
| 6&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| HealthRecordsDAO&lt;br /&gt;
| 65&lt;br /&gt;
| 2&lt;br /&gt;
| 3&lt;br /&gt;
| 2&lt;br /&gt;
| 6&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| HospitalsDAO&lt;br /&gt;
| 180&lt;br /&gt;
| 7&lt;br /&gt;
| 8&lt;br /&gt;
| 2&lt;br /&gt;
| 18&lt;br /&gt;
| 88%&lt;br /&gt;
|-&lt;br /&gt;
| ICDCodesDAO&lt;br /&gt;
| 123&lt;br /&gt;
| 4&lt;br /&gt;
| 5&lt;br /&gt;
| 2&lt;br /&gt;
| 1&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| NDCodesDAO&lt;br /&gt;
| 122&lt;br /&gt;
| 4&lt;br /&gt;
| 5&lt;br /&gt;
| 2&lt;br /&gt;
| 8&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| OfficeVisitDAO&lt;br /&gt;
| 362&lt;br /&gt;
| 15&lt;br /&gt;
| 20&lt;br /&gt;
| 6&lt;br /&gt;
| 30&lt;br /&gt;
| 99%&lt;br /&gt;
|-&lt;br /&gt;
| PatientDAO&lt;br /&gt;
| 322&lt;br /&gt;
| 14&lt;br /&gt;
| 15&lt;br /&gt;
| 4&lt;br /&gt;
| 38&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| PersonnelDAO&lt;br /&gt;
| 196&lt;br /&gt;
| 10&lt;br /&gt;
| 8&lt;br /&gt;
| 3&lt;br /&gt;
| 15&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| RiskDAO&lt;br /&gt;
| 126&lt;br /&gt;
| 3&lt;br /&gt;
| 8&lt;br /&gt;
| 1&lt;br /&gt;
| 3&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
| TransactionDAO&lt;br /&gt;
| 135&lt;br /&gt;
| 5&lt;br /&gt;
| 7&lt;br /&gt;
| 3&lt;br /&gt;
| 10&lt;br /&gt;
| 93%&lt;br /&gt;
|-&lt;br /&gt;
| VisitRemindersDAO&lt;br /&gt;
| 166&lt;br /&gt;
| 2&lt;br /&gt;
| 3&lt;br /&gt;
| 1&lt;br /&gt;
| 6&lt;br /&gt;
| 100%&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;2&amp;quot;|edu.ncsu.csc.itrust.dao&lt;br /&gt;
|DBUtil&lt;br /&gt;
| 29&lt;br /&gt;
| 1&lt;br /&gt;
| 2&lt;br /&gt;
| 0&lt;br /&gt;
| 1&lt;br /&gt;
| 69%&lt;br /&gt;
|-&lt;br /&gt;
| DAO Classes: 20 Total&lt;br /&gt;
| 2378&lt;br /&gt;
| 93&lt;br /&gt;
| 125&lt;br /&gt;
| 40&lt;br /&gt;
| 196&lt;br /&gt;
| 92%&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In a recent refactoring effort, the iTrust architecture has been formulated to follow a paradigm of Action and Database Access Object (DAO) stereotypes. As shown in Figure 7, iTrust contains JSPs which are the dynamic web pages served to the client. In general, each JSP corresponds to an Action class, which allows the authorized user to view or modify various records contained in the iTrust system. While the Action class provides the logic for ensuring the current user is authorized to view a given set of records, the DAO provides a modular wrapper for the database. Each DAO corresponds to a certain related set of data types, such as Office Visits, Allergies or Health Records. Because of this architecture, every SQL statement used in the production code of iTrust exists in a DAO. iTrust testing is conducted using JUnit v3.0 test cases which make calls either to the Action classes or the DAO classes. Since we are interested in how much testing was performed on the aspects of the system which interact directly with the database, we focus on the DAO classes for this study. &lt;br /&gt;
&lt;br /&gt;
iTrust was written to conform to a MySQL&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt; back-end. The MySQL JDBC connector was used to implement the data storage for the web application by connecting to a remotely executing instance of MySQL v5.1.11-remote-nt. The &amp;lt;code&amp;gt;java.sql.PreparedStatement&amp;lt;/code&amp;gt; class is one way of representing SQL statements in the JDBC framework. Statement objects provide a series of overloaded methods, all beginning with the word execute: &amp;lt;code&amp;gt;execute(…)&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;executeQuery(…)&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;executeUpdate(…)&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;executeBatch()&amp;lt;/code&amp;gt;. These methods are the &amp;lt;code&amp;gt;java.sql&amp;lt;/code&amp;gt; way of issuing commands to the database, and each of them represents a potential change to the database. These method calls, which we have previously introduced as &#039;&#039;target statements&#039;&#039;, are the focus of our coverage metrics. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:SESS-Figure7.png]]&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Figure 7. General iTrust architecture&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The version of iTrust we used for this study is referred to as iTrust Fall 2007, named by the year and semester it was built and redistributed to a new set of graduate students. iTrust was written to execute in Java 1.6 and thus our testing was conducted with the corresponding JRE. Code instrumentation and testing were conducted in Eclipse v3.3 Europa on an IBM Lenovo T61p running Windows Vista Ultimate with a 2.40Ghz Intel Core Duo and 2 GB of RAM.&lt;br /&gt;
&lt;br /&gt;
=== 5.2 Study Setup ===&lt;br /&gt;
&lt;br /&gt;
The primary challenge in collecting both of our proposed metrics is that there is currently no tool that integrates with the JUnit test harness to determine when the SQL statements found within the code have been executed. As a result, we computed our metrics manually and via code instrumentation. &lt;br /&gt;
&lt;br /&gt;
The code fragment in Figure 8 demonstrates the execution of a SQL statement found within an iTrust DAO. Each of the JDBC execute method calls represents communication with the DBMS and has the potential to change the database. &lt;br /&gt;
&lt;br /&gt;
We assign each execute method call a unique identifier &#039;&#039;id&#039;&#039; in the range 1, 2, ... , &#039;&#039;n&#039;&#039;, where &#039;&#039;n&#039;&#039; is the total number of execute method calls. We then instrument the code to contain a call to &amp;lt;code&amp;gt;SQLMarker.mark(id)&amp;lt;/code&amp;gt;. This &amp;lt;code&amp;gt;SQLMarker&amp;lt;/code&amp;gt; class interfaces with a research database we have set up to hold status information for each statically identified execute method call. Before running the test suite, we load (or reload) a SQL table with records corresponding to each unique identifier from 1 to &#039;&#039;n&#039;&#039;. These records all contain a field &amp;lt;code&amp;gt;marked&amp;lt;/code&amp;gt; which is initially set to &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt;. The &amp;lt;code&amp;gt;SQLMarker.mark(id)&amp;lt;/code&amp;gt; method changes &amp;lt;code&amp;gt;marked&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt;; if &amp;lt;code&amp;gt;marked&amp;lt;/code&amp;gt; is already true, it remains true. &lt;br /&gt;
&lt;br /&gt;
Using this technique, we can monitor the call status of each execute statement found within the iTrust production code. When the test suite finishes executing, the table in our research database contains &#039;&#039;n&#039;&#039; unique records, one for each execute method call in the iTrust production code, and each record contains a boolean flag indicating whether the statement was called during test suite execution. The line with the comment &amp;lt;code&amp;gt;//instrumentation&amp;lt;/code&amp;gt; in Figure 8 shows how this method call is placed in the example code.&lt;br /&gt;
&lt;br /&gt;
  java.sql.Connection conn = factory.getConnection(); &lt;br /&gt;
  java.sql.PreparedStatement ps = conn.prepareStatement(&amp;quot;UPDATE globalVariables SET Value = ? WHERE Name = &#039;Timeout&#039;;&amp;quot;); &lt;br /&gt;
  ps.setInt(1, mins); &lt;br /&gt;
  SQLMarker.mark(1, 1); //instrumentation &lt;br /&gt;
  int rows = ps.executeUpdate();&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 8. Code Instrumentation&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;SQLMarker.mark&amp;lt;/code&amp;gt; is always placed immediately before the execute call (the target statement) so that the statement’s execution is recorded even if the statement throws an exception during its execution. There are issues in determining the number of SQL statements actually possible in the production code; these will be addressed in Section 7.&lt;br /&gt;
&lt;br /&gt;
To calculate input variable coverage, we included a second variable in the &amp;lt;code&amp;gt;SQLMarker.mark&amp;lt;/code&amp;gt; method which allows us to record the number of input variables which were set in the execute method. Initially, the input variable records of each execute method are set to zero, and the &amp;lt;code&amp;gt;SQLMarker.mark&amp;lt;/code&amp;gt; method sets them to the passed value. iTrust uses PreparedStatements for its SQL statements and as Figure 8 demonstrates, the number of input variables is always clearly visible in the production code because PreparedStatements require the explicit setting of each variable included in the statement. As with the determination of SQL statements, there are similar issues with determining the number of SQL input variables which we present in Section 7.&lt;br /&gt;
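The paper’s &amp;lt;code&amp;gt;SQLMarker&amp;lt;/code&amp;gt; records marks in a separate research database; the following is a simplified in-memory sketch of the same bookkeeping, and every name other than &amp;lt;code&amp;gt;mark&amp;lt;/code&amp;gt; is an assumption.&lt;br /&gt;

```java
import java.util.HashMap;
import java.util.Map;

public class SQLMarker {
    // id -> number of input variables set for that execute call; a missing
    // key means the target statement was never executed by the test suite.
    private static final Map<Integer, Integer> marked = new HashMap<>();

    // Placed immediately before each execute method call; idempotent, so a
    // statement that was covered once stays covered.
    public static void mark(int id, int inputVariables) {
        marked.put(id, inputVariables);
    }

    public static boolean wasExecuted(int id) {
        return marked.containsKey(id);
    }

    // Target statement coverage over n statically identified execute calls,
    // as a percentage.
    public static double targetStatementCoverage(int n) {
        int covered = 0;
        for (int id = 1; id <= n; id++) {
            if (wasExecuted(id)) {
                covered++;
            }
        }
        return 100.0 * covered / n;
    }
}
```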
&lt;br /&gt;
== 6. Results and Discussion ==&lt;br /&gt;
&lt;br /&gt;
We found that 90 of the 93 SQL statements in the iTrust server-side production code were executed by the test suite, yielding a SQL statement coverage score of 96.7%. We found that 209 of the 212 SQL input variables found in the iTrust back-end were exercised by the test suite, yielding a SQL variable coverage score of 98.5%. We find that iTrust is a very testable system with respect to SQL statement coverage, because each SQL statement, in essence, is embodied within a method of a DAO. This architectural decision is designed to allow separation of concerns. For example, the action of editing a patient’s records via the user interface is separated from the action of actually updating that patient’s records in the database. We find that even though the refactoring of iTrust was intended to produce this high testability, there are still untested SQL statements within the production code. The Action classes of the iTrust framework represent procedures the client can perform with proper authorization. Since iTrust’s line coverage is 91%, the results for iTrust are likely &#039;&#039;better&#039;&#039; than they would be for many existing systems, due to its high testability. &lt;br /&gt;
&lt;br /&gt;
The three uncovered SQL statements occurred in methods which were never called by any Action class and thus are never used in production. Two of the statements related to the management of hospitals, and one statement offered an alternate way of managing procedural and diagnosis codes. The uncovered statements could eventually be used by new features added to the production code, so the fact that they are not executed by any test is still pertinent.&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
Certain facets of the JDBC framework and of SQL in general make it difficult to establish a denominator for the ratio described for each of our coverage metrics. For example, remember that in calculating SQL statement coverage, we must find, mark and count each statically occurring SQL statement within the production code. The fragment presented in Figure 9 contains Java batch SQL statements. Similar to &#039;&#039;batch mode&#039;&#039; in MySQL, each statement is pushed into a single batch statement and then the statements are all executed with one commit. Batch statements can be used to increase efficiency or to help manage concurrency. We can count the number of executed SQL statements in a batch: a dummy variable could be instrumented within the for loop demonstrated in Figure 9 which increments each time a batch statement is added (e.g., &amp;lt;code&amp;gt;ps.addBatch()&amp;lt;/code&amp;gt;). How many SQL statements are possible, though? The numerator will always be the same as the number of &amp;lt;code&amp;gt;DiagnosisBeans&amp;lt;/code&amp;gt; in the variable &amp;lt;code&amp;gt;updateDiagnoses&amp;lt;/code&amp;gt;. These beans are parsed from input the user passes to the Action class via the JSP to make changes to several records in one web form submission. The denominator is potentially infinite, however. &lt;br /&gt;
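The counter idea described above can be sketched as follows; the method below stands in for the instrumented loop of Figure 9, and its name and parameter are hypothetical.&lt;br /&gt;

```java
import java.util.List;

public class BatchCounter {
    // One SQL statement is issued per addBatch() call, so a dummy counter
    // incremented inside the loop yields the numerator of the coverage ratio.
    // The denominator is unbounded: it tracks the size of the user-supplied list.
    static int countBatchedStatements(List<Integer> updateIds) {
        int batchedStatements = 0;
        for (Integer id : updateIds) {
            // ps.setLong(2, id); ps.addBatch();  -- as in Figure 9
            batchedStatements++;
        }
        return batchedStatements;
    }
}
```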
&lt;br /&gt;
Additionally, the students who have worked on iTrust were required to use PreparedStatements, which elevates our resultant input variable coverage because PreparedStatements require explicit assignment to each input variable, and this may not be the case with other SQL connection methodologies. Furthermore, our metrics do not give any indication of how many input values have been tested in each input variable in each target statement. &lt;br /&gt;
&lt;br /&gt;
This technique is currently only applicable to Java code which implements a JDBC interface and uses PreparedStatements to interact with a SQL database management system. Finally, we recognize that much legacy code is implemented using dynamically generated SQL queries; while our metric for target statement coverage could be applied, our metric for input variable coverage does not contain an adequate definition for counting the input variables in a dynamically generated query. Our approach is repeatable and can be generalized to other applications that satisfy the above restrictions.&lt;br /&gt;
&lt;br /&gt;
  public void updateDiscretionaryAccess(List&amp;lt;DiagnosisBean&amp;gt; updateDiagnoses) &lt;br /&gt;
  { &lt;br /&gt;
    java.sql.Connection conn = factory.getConnection(); &lt;br /&gt;
    java.sql.PreparedStatement ps = conn.prepareStatement( &lt;br /&gt;
        &amp;quot;UPDATE OVDiagnosis SET DiscretionaryAccess=? WHERE ID=?&amp;quot;); &lt;br /&gt;
    for (DiagnosisBean d : updateDiagnoses) { &lt;br /&gt;
      ps.setBoolean(1, d.isDiscretionaryAccess()); &lt;br /&gt;
      ps.setLong(2, d.getOvDiagnosisID()); &lt;br /&gt;
      ps.addBatch(); &lt;br /&gt;
    } &lt;br /&gt;
    SQLMarker.mark(1, 2); &lt;br /&gt;
    ps.executeBatch(); &lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&#039;&#039;&#039;Figure 9. Batch SQL Statements&#039;&#039;&#039;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 8. Conclusions and Future Work ==&lt;br /&gt;
&lt;br /&gt;
We have shown that a major portion of recent cyber vulnerabilities occur due to a lack of input validation testing. Testing strategies should incorporate new techniques to account for the likelihood of input validation attacks. Structural coverage metrics allow us to see how much of an application is executed by a given test set, and we have shown that the notion of coverage can be extended to target statements and their input variables. Finally, we have answered our research question with a case study demonstrating that, using the manual instrumentation technique we describe, it is possible to dynamically gather accurate coverage metric values for a given test set.&lt;br /&gt;
&lt;br /&gt;
Future improvements can make these metrics portable to different database management systems as well as usable in varying development languages.  We would eventually extend our metric to evaluate the percentage of all sources of user input that have been involved in a test case.  We would like to automate the process of collecting SQL statement coverage into a tool or plug-in, which can help developers rapidly assess the level of security testing which has been performed as well as find the statements that have not been tested with any test set.  This work will eventually be extended to cross-site scripting attacks and buffer overflow vulnerabilities.  Finally, we would like to integrate these coverage metrics with a larger framework which will allow target statements and variables which are included in the coverage to be tested against sets of pre-generated good and malicious input.&lt;br /&gt;
&lt;br /&gt;
== 9. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
This work is supported by the National Science Foundation under CAREER Grant No. 0346903.   Any opinions expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.&lt;br /&gt;
&lt;br /&gt;
== 10. References ==&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt; B. Beizer, Software testing techniques: Van Nostrand Reinhold Co. New York, NY, USA, 1990.&lt;br /&gt;
: &amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt; S. W. Boyd and A. D. Keromytis, &amp;quot;SQLrand: Preventing SQL injection attacks,&amp;quot; in Proceedings of the 2nd Applied Cryptography and Network Security (ACNS) Conference, Yellow Mountain, China, pp. 292-304, 2004. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; B. Brenner, &amp;quot;CSI 2007: Developers need Web application security assistance,&amp;quot; in SearchSecurity.com, 2007. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; M. Cobb, &amp;quot;Making the case for Web application vulnerability scanners,&amp;quot; in SearchSecurity.com, 2007. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt; W. G. Halfond, J. Viegas, and A. Orso, &amp;quot;A Classification of SQL-Injection Attacks and Countermeasures,&amp;quot; in Proceedings of the International Symposium on Secure Software Engineering, March, Arlington, VA, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt; W. G. J. Halfond and A. Orso, &amp;quot;AMNESIA: analysis and monitoring for NEutralizing SQL-injection attacks,&amp;quot; in Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, Long Beach, CA, USA, pp. 174-183, 2005. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt; W. G. J. Halfond and A. Orso, &amp;quot;Command-Form Coverage for Testing Database Applications,&amp;quot; Proceedings of the IEEE and ACM International Conference on Automated Software Engineering, pp. 69–78, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt; Y. W. Huang, S. K. Huang, T. P. Lin, and C. H. Tsai, &amp;quot;Web application security assessment by fault injection and behavior monitoring,&amp;quot; in Proceedings of the 12th International Conference on World Wide Web, Budapest, Hungary, pp. 148-159, 2003. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt; S. Kals, E. Kirda, C. Kruegel, and N. Jovanovic, &amp;quot;SecuBat: a web vulnerability scanner,&amp;quot; in Proceedings of the 15th International Conference on World Wide Web, Edinburgh, Scotland, pp. 247-256, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt; G. McGraw, Software Security: Building Security in. Upper Saddle River, NJ: Addison-Wesley Professional, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt; J. Offutt, &amp;quot;Quality attributes of Web software applications,&amp;quot; IEEE Software, vol. 19, no. 2, pp. 25-32, 2002. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[12]&amp;lt;/sup&amp;gt; E. Ogren, &amp;quot;App Security&#039;s Evolution,&amp;quot; in DarkReading.com, 2007. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; T. Pietraszek and C. V. Berghe, &amp;quot;Defending against injection attacks through context-sensitive string evaluation,&amp;quot; in Recent Advances in Intrusion Detection (RAID). Seattle, WA, 2005. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[14]&amp;lt;/sup&amp;gt; F. S. Rietta, &amp;quot;Application layer intrusion detection for SQL injection,&amp;quot; in Proceedings of the 44th annual southeast regional conference, New York, NY, pp. 531-536, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[15]&amp;lt;/sup&amp;gt; D. Scott and R. Sharp, &amp;quot;Developing secure Web applications,&amp;quot; Internet Computing, IEEE, vol. 6, no. 6, pp. 38-45, 2002. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[16]&amp;lt;/sup&amp;gt; Z. Su and G. Wassermann, &amp;quot;The essence of command injection attacks in web applications,&amp;quot; in Proceedings of the Annual Symposium on Principles of Programming Languages, Charleston, SC, pp. 372-382, 2006. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[17]&amp;lt;/sup&amp;gt; H. H. Thompson and J. A. Whittaker, &amp;quot;Testing for software security,&amp;quot; Dr. Dobb&#039;s Journal, vol. 27, no. 11, pp. 24-34, 2002.&lt;br /&gt;
: &amp;lt;sup&amp;gt;[18]&amp;lt;/sup&amp;gt; D. Willmor and S. M. Embury, &amp;quot;Exploring test adequacy for database systems,&amp;quot; in Proceedings of the 3rd UK Software Testing Research Workshop, Sheffield, UK, pp. 123-133, 2005. &lt;br /&gt;
: &amp;lt;sup&amp;gt;[19]&amp;lt;/sup&amp;gt; H. Zhu, P. A. V. Hall, and J. H. R. May, &amp;quot;Software Unit Test Coverage and Adequacy,&amp;quot; ACM Computing Surveys, vol. 29, no. 4, 1997.&lt;br /&gt;
&lt;br /&gt;
== 11. End Notes ==&lt;br /&gt;
&lt;br /&gt;
# http://nvd.nist.gov/&lt;br /&gt;
# In Figure 1, we counted the reported instances of vulnerabilities by using the keywords &amp;quot;SQL injection&amp;quot;, &amp;quot;cross-site scripting&amp;quot;, &amp;quot;XSS&amp;quot;, and &amp;quot;buffer overflow&amp;quot; within the input validation error category from NVD.&lt;br /&gt;
# http://www.junit.org&lt;br /&gt;
# A cookie is a piece of information that is sent by a web server when a user first accesses the website and saved to a local file. The cookie is then used in consecutive requests to identify the user to the server. See http://www.ietf.org/rfc/rfc2109.txt.&lt;br /&gt;
# http://sourceforge.net/projects/itrust/&lt;br /&gt;
# US Public Law 104-191, enacted 1996.&lt;br /&gt;
# http://works.dgic.co.jp/djunit/&lt;br /&gt;
# For our case study, we used MySQL v5.0.45-community-nt found at http://www.mysql.com/&lt;br /&gt;
&lt;br /&gt;
[[Category:Workshop Papers]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=779</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Main_Page&amp;diff=779"/>
		<updated>2021-05-02T13:13:53Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Thoughts: Maybe each paper should be a category?&lt;br /&gt;
 &lt;br /&gt;
* [[An Empirical Evaluation of the MuJava Mutation Operators]], MUTATION2007&lt;br /&gt;
* [[Proposing SQL Statement Coverage Metrics]], SESS2008&lt;br /&gt;
* [[Idea: Using System Level Testing for Revealing SQL Injection-Related Error Message Information Leaks]], ESSoS2010&lt;br /&gt;
* [[Using SQL Hotspots in a Prioritization Heuristic for Detecting All Types of Web Application Vulnerabilities]], ICST2011&lt;br /&gt;
* [[Truckers Drive Their Own Assessment for Obstructive Sleep Apnea: A Collaborative Approach to Online Self-Assessment for Obstructive Sleep Apnea]], JCSM2011&lt;br /&gt;
* [[On Guiding the Augmentation of an Automated Test Suite via Mutation Analysis]], ESE2009&lt;br /&gt;
* [[Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms]], IHI2012, in progress&lt;br /&gt;
&lt;br /&gt;
Stuff having to do with this wiki (not to be printed)&lt;br /&gt;
* [[How to do References]]&lt;br /&gt;
* [[Notes about Images]]&lt;br /&gt;
* [[Formatting decisions]]&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=776</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=776"/>
		<updated>2014-01-05T22:39:05Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 5. Case Studies */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria, derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and presents our case studies of evaluating the open-source EHR audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents our discussion. Section 9 presents future work in the field of EHR audit mechanisms. Finally, Section 10 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attack. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;.  According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several challenges and limitations because of technology. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware, itself, provides limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
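&lt;br /&gt;
One common way to manage audit storage under such hardware limits is size-based log rotation, in which the oldest audit data is discarded once a fixed budget is reached. The following sketch uses the Python standard logging library; file names and size thresholds are illustrative only, not drawn from any system studied here.&lt;br /&gt;

```python
# Sketch: size-capped audit logging so storage stays within a fixed
# budget. File names and thresholds are illustrative only.
import logging
import logging.handlers
import os
import tempfile

def make_capped_audit_logger(path, max_bytes=1024, backups=3):
    # Keeps at most (backups + 1) files of roughly max_bytes each.
    logger = logging.getLogger("audit-demo")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.RotatingFileHandler(
        path, maxBytes=max_bytes, backupCount=backups)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)
    return logger

tmpdir = tempfile.mkdtemp()
log_path = os.path.join(tmpdir, "audit.log")
audit = make_capped_audit_logger(log_path)
for i in range(200):
    audit.info("user=alice action=view_record id=%d", i)

# However many events arrive, total bytes on disk stay bounded.
total_bytes = sum(
    os.path.getsize(os.path.join(tmpdir, name)) for name in os.listdir(tmpdir))
```

Note the trade-off: rotation bounds storage by discarding the oldest entries, and discarded entries can no longer support user-based non-repudiation, which is one reason limited storage is a genuine challenge for audit mechanisms.&lt;br /&gt;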
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, the physically distributed nature of the overall application makes software audit mechanisms harder to centralize and implement. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves reliability of the audit mechanism, itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware or indifferent to the implications of unprotected log files and inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
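&lt;br /&gt;
One widely known way to make log files tamper-evident is hash chaining: each entry stores a digest computed over its own content and the digest of the previous entry, so any in-place modification breaks every subsequent link. The sketch below illustrates the technique in general; it is not the mechanism of any system studied in this paper.&lt;br /&gt;

```python
# Sketch of a tamper-evident audit log via hash chaining: each entry
# commits to the previous digest, so editing any record breaks the chain.
import hashlib
import json

GENESIS = "0" * 64  # placeholder digest for the first entry

def append_entry(log, record):
    # Chain the new record to the digest of the previous entry.
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})

def verify_chain(log):
    # Recompute every link; True only if no entry was altered in place.
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

audit_log = []
append_entry(audit_log, {"user": "alice", "action": "view", "item": "labs"})
append_entry(audit_log, {"user": "bob", "action": "update", "item": "meds"})
intact = verify_chain(audit_log)                 # True for an untouched log
audit_log[0]["record"]["user"] = "mallory"       # simulate tampering
tampered_detected = not verify_chain(audit_log)  # modification is caught
```

Chaining makes silent modification detectable; guarding against truncation of the newest entries additionally requires committing the latest digest to storage outside the reach of the attacker.&lt;br /&gt;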
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally over software companies and internally over software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.  Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;, viewing and reading data in EHR systems is a vital concern when managing protected health information.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis seems to fall into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may be preventable in principle, but in practice must instead be identified so that the user who performed them can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
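&lt;br /&gt;
As an illustration of the identify, extract, and document steps, the following sketch filters a structured audit log for a single accountholder and renders the result for review. The field names, user names, and values are hypothetical.&lt;br /&gt;

```python
# Sketch: the identify/extract/document steps applied to a structured
# audit log. Field names and values are hypothetical.
log_entries = [
    {"timestamp": "2011-03-01T09:00:00", "user": "reception1",
     "action": "view_demographics"},
    {"timestamp": "2011-03-01T09:02:10", "user": "doctor7",
     "action": "update_diagnosis"},
    {"timestamp": "2011-03-01T09:05:42", "user": "reception1",
     "action": "view_diagnosis"},
]

def extract_user_trail(entries, user):
    # Identify and extract every entry attributable to one accountholder.
    return [e for e in entries if e["user"] == user]

def document_trail(trail):
    # Render the extracted entries as human-readable lines for review.
    return ["{timestamp} {user} {action}".format(**e) for e in trail]

trail = extract_user_trail(log_entries, "reception1")
report = document_trail(trail)
```

The preserve and interpret steps remain outside the sketch: preservation requires protecting the source log (see Section 3.1.2), and interpretation requires a trained human reviewer.&lt;br /&gt;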
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify &#039;&#039;what information&#039;&#039; is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:&lt;br /&gt;
&lt;br /&gt;
* Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.&lt;br /&gt;
* The Certification Commission for Health Information Technology (CCHIT)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt; specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health &amp;amp; Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;. We collect 17 auditable events from this source.&lt;br /&gt;
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. We collect 18 auditable events from this source.&lt;br /&gt;
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;. We collect 8 auditable events from this source.&lt;br /&gt;
&lt;br /&gt;
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After combining duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four suggested auditable events sets is “security administration event”, suggesting all four sources are concerned about software security. Out of the 28 unique events, 18 (64.3%) are contained in at least two of the source sets. Ten events (35.7%) are only contained in one source set. The overlap among the four sources suggests some common understanding and agreement of general events that should be logged, yet the disparity seems to indicate disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit Mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;(Yes or No)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| View data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that &#039;&#039;affect&#039;&#039; user-based non-repudiation, and events that &#039;&#039;do not affect&#039;&#039; user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 actions, only 9 events (56.25%) are suggested by two or more of the sources. The remaining 7 events (43.75%) are contained in only one source set.&lt;br /&gt;
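The categorization and counts above can be sketched in code. The rows below are an illustrative subset of Table 1 (the event names, source counts, and Y/N flags follow the table, but this is not the full set of 28 rows):&lt;br /&gt;

```python
# An event affects user-based non-repudiation when it can be traced to a
# specific user accountholder.  Illustrative subset of Table 1 only.
EVENTS = {
    # name: (number of source sets suggesting the event, traceable to a user?)
    "View data": (3, True),
    "Delete data": (3, True),
    "Query data": (1, True),
    "Export data": (1, True),
    "Signature created/validated": (1, True),
    "Node-authentication failure": (3, False),
    "Application process abort/failure": (2, False),
}

affects = {name: n for name, (n, traceable) in EVENTS.items() if traceable}
multi_source = [name for name, n in affects.items() if n >= 2]
single_source = [name for name, n in affects.items() if n == 1]

print(len(affects), len(multi_source), len(single_source))  # 5 2 3
```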
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
For each EHR system, we deploy the software on a local web server following the deployment instructions provided by each EHR’s community website. Next, we consult official documentation typically provided on the website for each of the EHR systems. In the documentation (typically user guides, development guides, or community wiki pages) we search for sections on auditing and logging to understand how to access these mechanisms in the actual application. Once we understand how to access the auditing mechanism, we open our locally-deployed EHR system and attempt to access these features to continue our analysis. We document all of our observations or difficulties during this analysis process for reflection after the analysis is complete. &lt;br /&gt;
&lt;br /&gt;
Once we have either physical access to or a general understanding of the given application’s auditing mechanism, we record the following information:&lt;br /&gt;
&lt;br /&gt;
# A flag (satisfied or unsatisfied) for each of the assessment criteria listed in the “Logging Actions” column of Table 2.&lt;br /&gt;
# Any observations or important findings that may influence the results or provide justifications for the results.&lt;br /&gt;
&lt;br /&gt;
We repeat this process for each of the three EHR systems in the study.&lt;br /&gt;
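The two items we record per system can be held in a small structure. A minimal sketch (the system name, criterion names, and results below are hypothetical, for illustration only):&lt;br /&gt;

```python
from dataclasses import dataclass, field

@dataclass
class AuditAssessment:
    """One EHR system's high-level assessment record (sketch; the real
    criteria are the 'Logging Actions' rows of Table 2)."""
    system: str
    flags: dict = field(default_factory=dict)   # criterion -> satisfied?
    notes: list = field(default_factory=list)   # observations / difficulties

    def record(self, criterion: str, satisfied: bool, note: str = "") -> None:
        self.flags[criterion] = satisfied
        if note:
            self.notes.append(f"{criterion}: {note}")

    def satisfied_fraction(self) -> float:
        return sum(self.flags.values()) / len(self.flags)

a = AuditAssessment("OpenEMR")   # hypothetical results
a.record("View data", True)
a.record("Delete data", False, "no audit entry found after record deletion")
print(a.satisfied_fraction())    # 0.5
```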
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
Our low-level assessment of user-based non-repudiation involves constructing a black-box test plan for testing an EHR system’s recording of &#039;&#039;specific&#039;&#039; auditable events (such as “view diagnosis data”). In this paper, we briefly describe the process for creating the audit test cases used to evaluate user-based non-repudiation audit functionality.  We developed this methodology in earlier work&amp;lt;sup&amp;gt;[14]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In 2006, through a consensus-based process that engaged stakeholders, CCHIT defined certification criteria focused on the functional capabilities that should be included in ambulatory (outpatient) and inpatient EHR systems.  The requirements specifications contain 284 different functional descriptions of EHR behavior. &lt;br /&gt;
&lt;br /&gt;
The CCHIT ambulatory certification criteria contain eight requirements related to audit.  The audit requirements contain functionality such as “The system shall allow an authorized administrator to set the inclusion or exclusion of auditable events based on organizational policy &amp;amp; operating requirements/limits.”  One CCHIT audit criterion states that the set of auditable events in an EHR system should include the following fourteen items:&lt;br /&gt;
&lt;br /&gt;
# Application start/stop&lt;br /&gt;
# User login/logout&lt;br /&gt;
# Session timeout&lt;br /&gt;
# Account lockout&lt;br /&gt;
# Patient Record created/viewed/updated/deleted&lt;br /&gt;
# Scheduling&lt;br /&gt;
# Query&lt;br /&gt;
# Order&lt;br /&gt;
# Node-authentication failure&lt;br /&gt;
# Signature created/validated&lt;br /&gt;
# PHI Export (e.g. print)&lt;br /&gt;
# PHI import&lt;br /&gt;
# Security administration events&lt;br /&gt;
# Backup and restore&lt;br /&gt;
&lt;br /&gt;
The list is provided here verbatim from the CCHIT ambulatory criteria.  The criteria are vague. For example, the phrase “security administration events” is undefined and could relate to authentication attempts, deletion of log files, or assigning user privileges. Likewise, the term “scheduling” could relate to scheduling patient appointments, scheduling system backups, or scheduling system down-time for maintenance. The interpretation of these phrases varies, and the intended meanings are ambiguous.&lt;br /&gt;
&lt;br /&gt;
Due to the vagueness in these auditable events, we elected to approach the CCHIT certification criteria as a general functional requirements specification. The criteria describe functionality for EHR systems, such as editing a patient’s health record, signing a note about a patient, and indicating advance directives (e.g. a do-not-resuscitate order). Using these functional CCHIT requirements&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;, we develop a set of 58 black-box test cases that assess the ability of an EHR system to audit the user actions specified by these CCHIT requirements.  These test cases all involve a registered user performing a given action within the EHR system, therefore representing an assessment of user-based non-repudiation within each EHR system. The 58 test cases correspond to 58 individual CCHIT requirements statements.  Our test plan covers the 20.4% of the CCHIT requirements that are relevant to personal or protected health information.  The remaining 79.6% of the CCHIT requirements do not pertain to personal health information, and therefore do not necessitate an audit record for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
We iterated through each of the 284 ambulatory CCHIT requirements, extracting keywords and applying the template to produce a test case when necessary. Certain keywords within a requirements statement signal that the statement should result in a test case.  For example, requirements that include phrases like “problem list,” “clinical documents,” and “diagnostic test” all indicate the user’s interaction with a piece of a patient’s protected health information.&lt;br /&gt;
&lt;br /&gt;
Additionally, we extract an action phrase (e.g. “edit”) and an object phrase (e.g. “demographics”) from each relevant requirement to construct the black-box test case.  We present the template used for these black-box tests in Section 4.2.1, and present an example of a test case and its corresponding requirement in Section 4.2.2. &lt;br /&gt;
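The keyword screen and phrase extraction can be sketched as follows. The keyword list is the illustrative subset quoted above, and the extraction heuristic is our own simplification for illustration, not the procedure from [14]:&lt;br /&gt;

```python
import re

# Phrases signalling that a requirement touches protected health
# information (illustrative subset from the text above).
PHI_KEYWORDS = ["problem list", "clinical documents", "diagnostic test",
                "demographics", "medications"]

def needs_audit_test(requirement: str) -> bool:
    text = requirement.lower()
    return any(kw in text for kw in PHI_KEYWORDS)

def extract_phrases(requirement: str):
    """Rough action/object extraction: the word after 'ability to' is the
    action phrase, the remainder is the object phrase (hypothetical heuristic)."""
    m = re.search(r"ability to (\w+)\s+(.*?)[.;]", requirement.lower())
    return (m.group(1), m.group(2)) if m else (None, None)

req = ("The system shall provide the ability to edit "
       "patient demographics data.")
print(needs_audit_test(req))   # True
print(extract_phrases(req))    # ('edit', 'patient demographics data')
```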
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
Test Procedure Template: &lt;br /&gt;
# Authenticate as &amp;lt;&#039;&#039;insert a registered user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Open the user interface for &amp;lt;&#039;&#039;insert action phrase&#039;&#039;&amp;gt;ing an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt;.&lt;br /&gt;
# &amp;lt;&#039;&#039;insert action phrase&#039;&#039;&amp;gt; an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt; with details.&lt;br /&gt;
# Logout as &amp;lt;&#039;&#039;insert a registered user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Authenticate as &amp;lt;&#039;&#039;insert an administrator’s user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Open the audit records for today’s date.&lt;br /&gt;
&lt;br /&gt;
Expected Results Template:&lt;br /&gt;
* The audit records should show that registered user &amp;lt;&#039;&#039;insert action phrase&#039;&#039;&amp;gt;ed an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt;.&lt;br /&gt;
* The audit records should be clearly readable and easily accessible.&lt;br /&gt;
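Filling the template is mechanical once the action and object phrases are extracted. A minimal sketch (the naive "+ed"/"+ing" inflection is our simplification; irregular verbs would need handling):&lt;br /&gt;

```python
def make_test_case(action: str, obj: str, user: str, admin: str):
    """Instantiate the Section 4.2.1 template for one action/object pair."""
    procedure = [
        f"Authenticate as {user}.",
        f"Open the user interface for {action}ing a {obj}.",
        f"{action.capitalize()} a {obj} with details.",
        f"Logout as {user}.",
        f"Authenticate as {admin}.",
        "Open the audit records for today's date.",
    ]
    expected = [
        f"The audit records should show that {user} {action}ed a {obj}.",
        "The audit records should be clearly readable and easily accessible.",
    ]
    return procedure, expected

steps, expected = make_test_case("edit", "demographics record",
                                 "Dr. Robert Alexander", "Denny Hudzinger")
print(steps[1])  # Open the user interface for editing a demographics record.
```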
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
Example Natural Language Artifact:&lt;br /&gt;
* CCHIT Criteria: AM 03.08.01 – The system shall provide the ability to associate orders and medications with one or more codified problems/diagnoses.&lt;br /&gt;
&lt;br /&gt;
Example Test Procedure:&lt;br /&gt;
&lt;br /&gt;
# Authenticate as Dr. Robert Alexander.&lt;br /&gt;
# Remove the association between Theodore S. Smith’s Hypertension diagnosis and Zantac.&lt;br /&gt;
# Add the association back between Theodore S. Smith’s Hypertension diagnosis and Zantac.&lt;br /&gt;
# Logout as Dr. Robert Alexander.&lt;br /&gt;
# Authenticate as Denny Hudzinger.&lt;br /&gt;
# Open the audit records for today’s date. If necessary, focus on patient Theodore S. Smith.&lt;br /&gt;
&lt;br /&gt;
Example Expected Results:&lt;br /&gt;
&lt;br /&gt;
* The audit records should show adding and removing the association of Theodore S. Smith’s Hypertension diagnosis and Zantac, both linked to Dr. Robert Alexander, and with today’s date.&lt;br /&gt;
* The audit records should be clearly readable and easily accessible.&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
Section 5.1 describes the EHR systems we used in this case study. Section 5.2 describes our EHR audit mechanism assessment based on the high-level assessment criteria from Section 4.1.  Then, Section 5.3 describes our low-level black-box test case evaluation of three open-source EHR systems.&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
In this study, we compare and contrast audit mechanisms from three open-source EHR systems. The criteria for inclusion in this study involved (1) being open-source for ease-of-access, and (2) having a fully-functional default demo deployment available online. For this study, we assess the following EHR systems:&lt;br /&gt;
&lt;br /&gt;
* Open Electronic Medical Records (OpenEMR)&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; system&lt;br /&gt;
* Open Medical Record System (OpenMRS)&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt; system, with the added Access Logging Module&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;&lt;br /&gt;
* Tolven Healthcare Innovations’ Electronic Clinician Health Record (eCHR)&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt; system, with the added Performance Plugin&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt; module&lt;br /&gt;
&lt;br /&gt;
A summary of these software applications appears in Table 2.&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=775</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=775"/>
		<updated>2014-01-05T22:37:30Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 4.2.2 Audit Test Case Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attack. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;.  According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware, itself, provides limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are not as centralized or easy to implement with the physically distributed nature of the overall software application. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
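One possible placement, sketched below as our own illustration (not a design prescribed by the literature above), is to have every tier forward structured events to a single central, append-only audit log, so the audit site is defined once rather than per host. The component names and event fields are hypothetical:&lt;br /&gt;

```python
import json, time

class CentralAuditLog:
    """Central collection point for audit events from distributed tiers.
    The in-memory list stands in for durable, protected storage."""
    def __init__(self):
        self._entries = []

    def append(self, source: str, user: str, action: str) -> None:
        self._entries.append(json.dumps({
            "ts": time.time(), "source": source,
            "user": user, "action": action,
        }))

    def __len__(self):
        return len(self._entries)

log = CentralAuditLog()
# Both the web tier and the database tier report to the same log.
log.append("web-server", "ralexander", "view patient record 1042")
log.append("db-server", "ralexander", "SELECT on table diagnoses")
print(len(log))  # 2
```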
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware of, or indifferent to, the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
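One standard tamper-evidence technique, shown here for illustration only (it is not a mechanism the assessed EHR systems implement), is to chain each log entry to a hash of the previous entry, so that any later modification or deletion breaks verification:&lt;br /&gt;

```python
import hashlib

class HashChainedLog:
    """Tamper-evident log sketch: each entry stores the hash of the
    previous entry chained with its own message."""
    def __init__(self):
        self.entries = []           # list of (message, chained_hash)
        self._last = b"\x00" * 32   # genesis hash

    def append(self, message: str) -> None:
        h = hashlib.sha256(self._last + message.encode()).digest()
        self.entries.append((message, h))
        self._last = h

    def verify(self) -> bool:
        prev = b"\x00" * 32
        for message, h in self.entries:
            if hashlib.sha256(prev + message.encode()).digest() != h:
                return False
            prev = h
        return True

log = HashChainedLog()
log.append("ralexander viewed record 1042")
log.append("ralexander edited record 1042")
assert log.verify()
# Tampering with an earlier entry is now detectable.
log.entries[0] = ("ralexander viewed record 9999", log.entries[0][1])
print(log.verify())  # False
```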
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally over software companies and internally over software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.  Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;, viewing and reading data in EHR systems is a vital concern when managing protected health information.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis seems to fall into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to a defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) cannot always be prevented, but they must be identified so that the user who performed them can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify &#039;&#039;what information&#039;&#039; is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:&lt;br /&gt;
&lt;br /&gt;
* Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.&lt;br /&gt;
* The Certification Commission for Health Information Technology (CCHIT)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt; specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health &amp;amp; Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;. We collect 17 auditable events from this source.&lt;br /&gt;
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. We collect 18 auditable events from this source.&lt;br /&gt;
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;. We collect 8 auditable events from this source.&lt;br /&gt;
&lt;br /&gt;
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After merging duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four suggested auditable events sets is “security administration event”, suggesting all four sources are concerned about software security. Out of the 28 unique events, 18 (64.3%) are contained in at least two of the source sets. Ten events (35.7%) are only contained in one source set. The overlap among the four sources suggests some common understanding and agreement of general events that should be logged, yet the disparity seems to indicate disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;(Yes or No)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| View data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that &#039;&#039;affect&#039;&#039; user-based non-repudiation, and events that &#039;&#039;do not affect&#039;&#039; user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 events, only 9 (56.25%) are suggested by two or more of the sources. The remaining 7 events (43.75%) are contained in only one source set.&lt;br /&gt;
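&lt;br /&gt;
The deduplication and overlap counts above can be sketched in Python. The event sets below are illustrative subsets only, not the full checklists from Table 1:&lt;br /&gt;
&lt;br /&gt;
```python
from collections import Counter

# Illustrative subsets of the four checklists (not the full lists in Table 1).
sources = {
    "Chuvakin/Peterson": {"system startup", "user login/logout", "create data",
                          "security administration event"},
    "CCHIT": {"user login/logout", "session timeout", "create data",
              "security administration event"},
    "SANS": {"system startup", "user login/logout", "create data",
             "security administration event"},
    "IEEE": {"security administration event", "grant access rights"},
}

# Count how many source checklists suggest each event.
tally = Counter(event for events in sources.values() for event in events)

unique_events = set(tally)                        # deduplicated union of all sets
shared = [e for e, n in tally.items() if n >= 2]  # suggested by two or more sources
in_all_four = [e for e, n in tally.items() if n == len(sources)]
```
&lt;br /&gt;
Applied to the full Table 1 data, the same tally would yield the 28 unique events and the overlap counts reported above.&lt;br /&gt;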
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
For each EHR system, we deploy the software on a local web server following the deployment instructions provided by each EHR’s community website. Next, we consult official documentation typically provided on the website for each of the EHR systems. In the documentation (typically user guides, development guides, or community wiki pages) we search for sections on auditing and logging to understand how to access these mechanisms in the actual application. Once we understand how to access the auditing mechanism, we open our locally-deployed EHR system and attempt to access these features to continue our analysis. We document all of our observations or difficulties during this analysis process for reflection after the analysis is complete. &lt;br /&gt;
&lt;br /&gt;
Once we have either physical access to or a general understanding of the given application’s auditing mechanism, we record the following information:&lt;br /&gt;
&lt;br /&gt;
# A flag (satisfied or unsatisfied) for each of the assessment criteria listed in the “Logging Actions” column of Table 2.&lt;br /&gt;
# Any observations or important findings that may influence the results or provide justification for them.&lt;br /&gt;
&lt;br /&gt;
We repeat this process for each of the three EHR systems in the study.&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
Our low-level assessment of user-based non-repudiation involves constructing a black-box test plan for testing an EHR system’s recording of &#039;&#039;specific&#039;&#039; auditable events (such as “view diagnosis data”). In this paper, we briefly describe the process for deriving the audit test cases used to evaluate user-based non-repudiation audit functionality.  We developed this methodology in earlier work&amp;lt;sup&amp;gt;[14]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In 2006, through a consensus-based process that engaged stakeholders, CCHIT defined certification criteria focused on the functional capabilities that should be included in ambulatory (outpatient) and inpatient EHR systems.  The requirements specifications contain 284 different functional descriptions of EHR behavior. &lt;br /&gt;
&lt;br /&gt;
The CCHIT ambulatory certification criteria contain eight requirements related to audit.  The audit requirements contain functionality such as “The system shall allow an authorized administrator to set the inclusion or exclusion of auditable events based on organizational policy &amp;amp; operating requirements/limits.”  One CCHIT audit criterion states that the set of auditable events in an EHR system should include the following fourteen items:&lt;br /&gt;
&lt;br /&gt;
# Application start/stop&lt;br /&gt;
# User login/logout&lt;br /&gt;
# Session timeout&lt;br /&gt;
# Account lockout&lt;br /&gt;
# Patient Record created/viewed/updated/deleted&lt;br /&gt;
# Scheduling&lt;br /&gt;
# Query&lt;br /&gt;
# Order&lt;br /&gt;
# Node-authentication failure&lt;br /&gt;
# Signature created/validated&lt;br /&gt;
# PHI Export (e.g. print)&lt;br /&gt;
# PHI import&lt;br /&gt;
# Security administration events&lt;br /&gt;
# Backup and restore&lt;br /&gt;
&lt;br /&gt;
The list is provided here verbatim from the CCHIT ambulatory criteria.  The criteria are vague. For example, the phrase “security administration events” is undefined and could relate to authentication attempts, deletion of log files, or assigning user privileges. Likewise, the term “scheduling” could relate to scheduling patient appointments, scheduling system backups, or scheduling system down-time for maintenance. The interpretation of these phrases varies, and the intended meanings are ambiguous.&lt;br /&gt;
&lt;br /&gt;
Due to the vagueness in these auditable events, we elected to approach the CCHIT certification criteria as a general functional requirements specification. The criteria describe functionality for EHR systems, such as editing a patient’s health record, signing a note about a patient, and indicating advance directives (e.g. a do-not-resuscitate order). Using these functional CCHIT requirements&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;, we develop a set of 58 black-box test cases that assess the ability of an EHR system to audit the user actions specified by these CCHIT requirements.  These test cases all involve a registered user performing a given action within the EHR system, therefore representing an assessment of user-based non-repudiation within each EHR system. The 58 test cases correspond to 58 individual CCHIT requirements statements.  Our test plan covers the 20.4% of the CCHIT requirements that are relevant to personal or protected health information.  The remaining 79.6% of the CCHIT requirements do not pertain to personal health information, and therefore do not necessitate an audit record for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
We iterated through each of the 284 ambulatory CCHIT requirements, extracting keywords and applying the template to produce a test case when necessary. We generate a test case from a requirement when certain keywords appear within the requirements statement.  For example, requirements that include phrases like “problem list,” “clinical documents,” and “diagnostic test” all indicate the user’s interaction with a piece of a patient’s protected health information.&lt;br /&gt;
&lt;br /&gt;
Additionally, we extract an action phrase (e.g. “edit”) and an object phrase (e.g. “demographics”) from each relevant requirement to construct the black-box test case.  We present the template used for these black-box tests in Section 4.2.1, and present an example of a test case and its corresponding requirement in Section 4.2.2. &lt;br /&gt;
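&lt;br /&gt;
The keyword screening and phrase extraction described above can be sketched as follows. The keyword list and the extraction heuristic are hypothetical stand-ins; the paper’s full keyword list and procedure are not reproduced here:&lt;br /&gt;
&lt;br /&gt;
```python
import re

# Hypothetical PHI-signaling keywords (stand-ins, not the paper's full list).
PHI_KEYWORDS = {"problem list", "clinical documents", "diagnostic test",
                "demographics"}

def extract_test_case(requirement):
    """Return an (action, object) pair if the requirement touches PHI, else None."""
    text = requirement.lower()
    if not any(k in text for k in PHI_KEYWORDS):
        return None  # no PHI keyword: the requirement needs no audit test case
    # Illustrative extraction: the verb after "ability to" becomes the action
    # phrase; the matched keyword becomes the object phrase.
    match = re.search(r"ability to (\w+)", text)
    action = match.group(1) if match else "modify"
    obj = next(k for k in PHI_KEYWORDS if k in text)
    return action, obj

case = extract_test_case(
    "The system shall provide the ability to edit patient demographics.")
```
&lt;br /&gt;
Here the example requirement yields the pair (“edit”, “demographics”), which would then be filled into the template of Section 4.2.1.&lt;br /&gt;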
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
Test Procedure Template: &lt;br /&gt;
# Authenticate as &amp;lt;&#039;&#039;insert a registered user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Open the user interface for &amp;lt;&#039;&#039;insert action phrase&#039;&#039;&amp;gt;ing an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt;.&lt;br /&gt;
# &amp;lt;&#039;&#039;Insert action phrase&#039;&#039;&amp;gt; an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt; with details.&lt;br /&gt;
# Logout as &amp;lt;&#039;&#039;insert a registered user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Authenticate as &amp;lt;&#039;&#039;insert an administrator’s user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Open the audit records for today’s date.&lt;br /&gt;
&lt;br /&gt;
Expected Results Template:&lt;br /&gt;
* The audit records should show that registered user &amp;lt;&#039;&#039;insert action phrase&#039;&#039;&amp;gt;ed an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt;.&lt;br /&gt;
* The audit records should be clearly readable and easily accessible.&lt;br /&gt;
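&lt;br /&gt;
A minimal sketch of instantiating this template in Python (the function and its phrasing are illustrative, not part of the paper’s tooling):&lt;br /&gt;
&lt;br /&gt;
```python
def audit_test_procedure(user, action, obj, admin):
    """Fill the Section 4.2.1 test procedure template with concrete phrases."""
    return [
        f"Authenticate as {user}.",
        f"Open the user interface for {action}ing a {obj}.",
        f"{action.capitalize()} a {obj} with details.",
        f"Logout as {user}.",
        f"Authenticate as {admin}.",
        "Open the audit records for today's date.",
    ]

steps = audit_test_procedure("Dr. Robert Alexander", "edit", "patient record",
                             "Denny Hudzinger")
```
&lt;br /&gt;
Note that naive “-ing” suffixing breaks for verbs such as “delete” (“deleteing”); a real test case generator would need simple inflection rules.&lt;br /&gt;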
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
Example Natural Language Artifact:&lt;br /&gt;
* CCHIT Criteria: AM 03.08.01 – The system shall provide the ability to associate orders and medications with one or more codified problems/diagnoses.&lt;br /&gt;
&lt;br /&gt;
Example Test Procedure:&lt;br /&gt;
&lt;br /&gt;
# Authenticate as Dr. Robert Alexander.&lt;br /&gt;
# Remove the association between Theodore S. Smith’s Hypertension diagnosis and Zantac.&lt;br /&gt;
# Add the association back between Theodore S. Smith’s Hypertension diagnosis and Zantac.&lt;br /&gt;
# Logout as Dr. Robert Alexander.&lt;br /&gt;
# Authenticate as Denny Hudzinger.&lt;br /&gt;
# Open the audit records for today’s date. If necessary, focus on patient Theodore S. Smith.&lt;br /&gt;
&lt;br /&gt;
Example Expected Results:&lt;br /&gt;
&lt;br /&gt;
* The audit records should show adding and removing the association of Theodore S. Smith’s Hypertension diagnosis and Zantac, both linked to Dr. Robert Alexander, and with today’s date.&lt;br /&gt;
* The audit records should be clearly readable and easily accessible.&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=774</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=774"/>
		<updated>2014-01-05T22:36:29Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 4.2.1 Audit Test Case Template */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria, derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and presents our case studies of evaluating the open-source EHR audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
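&lt;br /&gt;
B&amp;amp;ouml;ck et al.’s application-based non-repudiation concern can be illustrated with a common technique that is not drawn from this paper: chaining a keyed MAC (HMAC) through successive log entries, so that entries cannot be forged or silently altered without the trusted application’s secret key. The key and entry format below are hypothetical:&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib
import hmac

SECRET = b"demo-key"  # hypothetical; in practice held only by the trusted application

def append_entry(log, message):
    """Append (message, tag), where the tag chains in the previous entry's tag."""
    prev_tag = log[-1][1] if log else b""
    tag = hmac.new(SECRET, prev_tag + message.encode(), hashlib.sha256).digest()
    log.append((message, tag))

def verify(log):
    """Recompute the chain; any altered, forged, or removed entry breaks it."""
    prev_tag = b""
    for message, tag in log:
        expected = hmac.new(SECRET, prev_tag + message.encode(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        prev_tag = tag
    return True

log = []
append_entry(log, "user=alice action=view-record patient=1234")
append_entry(log, "user=alice action=logout")
```
&lt;br /&gt;
Changing any recorded message without the key causes verification to fail, providing the tamper evidence these reliability concerns call for.&lt;br /&gt;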
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attack. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;.  According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware, itself, provides limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. Given the physically distributed nature of the overall software application, audit mechanisms in such systems are harder to centralize and implement. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
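The question of where to generate audit trails can be made concrete with a small sketch. The following hypothetical Python fragment (the `CentralAuditLog` class, component names, user names, and record fields are illustrative assumptions, not part of any EHR system studied) shows one way a web tier and a database tier could forward audit records to a single collection point so that a user's actions remain traceable across machines:

```python
# Illustrative sketch: centralizing audit records from physically separate
# tiers (web server, database server) of a distributed system. All names
# here are hypothetical.
import json
from datetime import datetime, timezone

class CentralAuditLog:
    """A single collection point, so neither tier owns the audit trail alone."""
    def __init__(self):
        self.records = []

    def record(self, component, user, action):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "component": component,  # e.g. "web-server" or "db-server"
            "user": user,            # accountholder, for non-repudiation
            "action": action,
        }
        self.records.append(json.dumps(entry))

audit = CentralAuditLog()
# The web tier logs the user-visible action...
audit.record("web-server", "dr_smith", "view patient record 42")
# ...while the database tier logs the underlying query, to the same store.
audit.record("db-server", "dr_smith", "SELECT on table patient_records")
```

Because both tiers write to one store with a common user field, a later reviewer can reconstruct the full cross-machine trail of a single accountholder.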
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware of, or indifferent to, the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
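One standard technique for making log files tamper-evident, offered here only as an illustrative sketch and not as a mechanism any of the studied systems implements, is a hash chain: each entry's digest covers both the entry and the previous digest, so modifying any stored entry breaks verification of the chain.

```python
# Sketch of a tamper-evident log via hash chaining (illustrative only).
import hashlib

def append_entry(log, message):
    """Append (message, digest) where digest covers the previous digest too."""
    prev_digest = log[-1][1] if log else "0" * 64
    digest = hashlib.sha256((prev_digest + message).encode()).hexdigest()
    log.append((message, digest))

def verify(log):
    """Recompute the chain; any modified entry makes a digest mismatch."""
    prev_digest = "0" * 64
    for message, digest in log:
        expected = hashlib.sha256((prev_digest + message).encode()).hexdigest()
        if digest != expected:
            return False  # this entry (or an earlier one) was altered
        prev_digest = digest
    return True

log = []
append_entry(log, "dr_smith viewed patient 42")
append_entry(log, "dr_smith updated diagnosis for patient 42")
assert verify(log)                                   # intact chain passes
log[0] = ("dr_smith viewed patient 99", log[0][1])   # tamper with an entry
assert not verify(log)                               # verification now fails
```

A real deployment would also need to protect the chain head (e.g. by periodically anchoring it off-host), since an attacker who can rewrite the whole file can rebuild the whole chain.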
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally across software companies and internally across software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.  Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;, viewing and reading data in EHR systems is a vital concern when managing protected health information.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis falls into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to a defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may not always be preventable; instead, the user who performed the unacceptable actions must be identified and reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify &#039;&#039;what information&#039;&#039; is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:&lt;br /&gt;
&lt;br /&gt;
* Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.&lt;br /&gt;
* The Certification Commission for Health Information Technology (CCHIT)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt; specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health &amp;amp; Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;. We collect 17 auditable events from this source.&lt;br /&gt;
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. We collect 18 auditable events from this source.&lt;br /&gt;
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;. We collect 8 auditable events from this source.&lt;br /&gt;
&lt;br /&gt;
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After removing duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four source sets is “security administration event”, suggesting all four sources are concerned about software security. Out of the 28 unique events, 18 (64.3%) are contained in at least two of the source sets. Ten events (35.7%) are only contained in one source set. The overlap among the four sources suggests some common understanding and agreement of general events that should be logged, yet the disparity seems to indicate disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;(Yes or No)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| View data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that &#039;&#039;affect&#039;&#039; user-based non-repudiation, and events that &#039;&#039;do not affect&#039;&#039; user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 events, only 9 (56.25%) are suggested by two or more of the sources. The remaining 7 events (43.75%) are contained in only one source set.&lt;br /&gt;
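The tallies above can be rechecked mechanically from Table 1. In the following sketch, event names are abbreviated and the dictionary encoding is ours, not from any source checklist; each event maps to the number of source sets listing it and to our yes/no non-repudiation categorization.

```python
# Recomputing the counts reported in Sections 4.1.1 from Table 1.
# Each entry: (number of source sets listing the event,
#              whether it affects user-based non-repudiation).
events = {
    "system startup": (3, False), "system shutdown": (3, False),
    "system restart": (1, False), "user login/logout": (3, True),
    "session timeout": (1, True), "account lockout": (1, True),
    "create data": (3, True), "update data": (3, True),
    "delete data": (3, True), "view data": (3, True),
    "query data": (1, True), "node-authentication failure": (3, False),
    "signature created/validated": (1, True), "export data": (1, True),
    "import data": (1, True), "security administration event": (4, False),
    "scheduling": (1, False), "system backup": (2, True),
    "system restore": (1, True), "initiate network connection": (3, False),
    "accept network connection": (2, False), "grant access rights": (3, True),
    "modify access rights": (3, True), "revoke access rights": (3, True),
    "system/network/services changes": (3, False),
    "application process abort/failure": (2, False),
    "detection of malicious activity": (2, False),
    "changes to audit log configuration": (1, False),
}
assert len(events) == 28                               # unique events
assert sum(n >= 2 for n, _ in events.values()) == 18   # in 2+ sources (64.3%)
assert sum(n == 1 for n, _ in events.values()) == 10   # single-source (35.7%)
affects = [n for n, yes in events.values() if yes]
assert len(affects) == 16                              # affect non-repudiation
assert sum(n >= 2 for n in affects) == 9               # 9/16 = 56.25%
```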
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
For each EHR system, we deploy the software on a local web server following the deployment instructions provided by each EHR’s community website. Next, we consult official documentation typically provided on the website for each of the EHR systems. In the documentation (typically user guides, development guides, or community wiki pages) we search for sections on auditing and logging to understand how to access these mechanisms in the actual application. Once we understand how to access the auditing mechanism, we open our locally-deployed EHR system and attempt to access these features to continue our analysis. We document all of our observations or difficulties during this analysis process for reflection after the analysis is complete. &lt;br /&gt;
&lt;br /&gt;
Once we have either physical access to or a general understanding of the given application’s auditing mechanism, we record the following information:&lt;br /&gt;
&lt;br /&gt;
# A flag (satisfied or unsatisfied) for each of the assessment criteria listed in the “Logging Actions” column of Table 2.&lt;br /&gt;
# Any observations or important findings that may influence the results or provide justification for the results.&lt;br /&gt;
&lt;br /&gt;
We repeat this process for each of the three EHR systems in the study.&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
Our low-level assessment of user-based non-repudiation involves constructing a black-box test plan for testing an EHR system’s recording of &#039;&#039;specific&#039;&#039; auditable events (such as “view diagnosis data”). In this paper, we briefly describe the process for the audit test cases used to evaluate user-based non-repudiation audit functionality.  We developed this methodology in earlier work&amp;lt;sup&amp;gt;[14]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In 2006, through a consensus-based process that engaged stakeholders, CCHIT defined certification criteria focused on the functional capabilities that should be included in ambulatory (outpatient) and inpatient EHR systems.  The requirements specifications contain 284 different functional descriptions of EHR behavior. &lt;br /&gt;
&lt;br /&gt;
The CCHIT ambulatory certification criteria contain eight requirements related to audit.  The audit requirements contain functionality such as “The system shall allow an authorized administrator to set the inclusion or exclusion of auditable events based on organizational policy &amp;amp; operating requirements/limits.”  One CCHIT audit criterion states that the set of auditable events in an EHR system should include the following fourteen items:&lt;br /&gt;
&lt;br /&gt;
# Application start/stop&lt;br /&gt;
# User login/logout&lt;br /&gt;
# Session timeout&lt;br /&gt;
# Account lockout&lt;br /&gt;
# Patient Record created/viewed/updated/deleted&lt;br /&gt;
# Scheduling&lt;br /&gt;
# Query&lt;br /&gt;
# Order&lt;br /&gt;
# Node-authentication failure&lt;br /&gt;
# Signature created/validated&lt;br /&gt;
# PHI Export (e.g. print)&lt;br /&gt;
# PHI import&lt;br /&gt;
# Security administration events&lt;br /&gt;
# Backup and restore&lt;br /&gt;
&lt;br /&gt;
The list is provided here verbatim from the CCHIT ambulatory criteria.  The criteria are vague. For example, the phrase “security administration events” is undefined and could relate to authentication attempts, deletion of log files, or assigning user privileges. Likewise, the term “scheduling” could relate to scheduling patient appointments, scheduling system backups, or scheduling system down-time for maintenance. The interpretation of these phrases varies, and the intended meanings are ambiguous.&lt;br /&gt;
&lt;br /&gt;
Due to the vagueness in these auditable events, we elected to approach the CCHIT certification criteria as a general functional requirements specification. The criteria describe functionality for EHR systems, such as editing a patient’s health record, signing a note about a patient, and indicating advance directives (e.g. a do-not-resuscitate order). Using these functional CCHIT requirements&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;, we develop a set of 58 black-box test cases that assess the ability of an EHR system to audit the user actions specified by these CCHIT requirements.  These test cases all involve a registered user performing a given action within the EHR system, therefore representing an assessment of user-based non-repudiation within each EHR system. The 58 test cases correspond to 58 individual CCHIT requirements statements.  Our test plan covers the 20.4% of the CCHIT requirements that are relevant to personal or protected health information.  The remaining 79.6% of the CCHIT requirements do not pertain to personal health information, and therefore do not necessitate an audit record for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
We iterate through each of the 284 ambulatory CCHIT requirements, extracting keywords and applying the template to produce a test case when necessary.  We determine whether a CCHIT requirements statement should yield a test case based on certain keywords within the statement.  For example, requirements that include phrases like “problem list,” “clinical documents,” and “diagnostic test” all indicate the user’s interaction with a piece of a patient’s protected health information.&lt;br /&gt;
&lt;br /&gt;
Additionally, we extract an action phrase (e.g. “edit”) and an object phrase (e.g. “demographics”) from each relevant requirement to construct the black-box test case.  We present the template used for these black-box tests in Section 4.2.1, and present an example of a test case and its corresponding requirement in Section 4.2.2. &lt;br /&gt;
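The keyword screen described above can be sketched as a simple predicate. The keyword list and requirement texts below are illustrative examples, not the full set used in the study:

```python
# Hypothetical sketch of screening CCHIT requirements for audit test cases:
# a requirement yields a test case only if it mentions protected health
# information, signaled by certain keywords.
PHI_KEYWORDS = ("problem list", "clinical documents", "diagnostic test",
                "demographics", "medication")

def needs_audit_test(requirement):
    """True if the requirement text touches protected health information."""
    text = requirement.lower()
    return any(keyword in text for keyword in PHI_KEYWORDS)

requirements = [
    "The system shall provide the ability to edit patient demographics.",
    "The system shall display online help for all screens.",
]
# Only the demographics requirement yields a black-box audit test case;
# the online-help requirement involves no protected health information.
relevant = [r for r in requirements if needs_audit_test(r)]
```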
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
Test Procedure Template: &lt;br /&gt;
# Authenticate as &amp;lt;&#039;&#039;insert a registered user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Open the user interface for &amp;lt;&#039;&#039;insert action phrase&#039;&#039;&amp;gt;ing an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt;.&lt;br /&gt;
# &amp;lt;&#039;&#039;Insert action phrase&#039;&#039;&amp;gt; an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt; with details.&lt;br /&gt;
# Logout as &amp;lt;&#039;&#039;insert a registered user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Authenticate as &amp;lt;&#039;&#039;insert an administrator’s user name&#039;&#039;&amp;gt;.&lt;br /&gt;
# Open the audit records for today’s date.&lt;br /&gt;
&lt;br /&gt;
Expected Results Template:&lt;br /&gt;
* The audit records should show that registered user &amp;lt;&#039;&#039;insert action phrase&#039;&#039;&amp;gt;ed an &amp;lt;&#039;&#039;insert object phrase&#039;&#039;&amp;gt;.&lt;br /&gt;
* The audit records should be clearly readable and easily accessible.&lt;br /&gt;
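The templates above can be instantiated mechanically from the extracted phrases. In this hypothetical sketch, the user names, the action phrase ("edit"), and the object phrase ("demographics record") are illustrative stand-ins:

```python
# Sketch: filling the test procedure and expected-results templates with an
# action phrase and an object phrase extracted from a requirement.
PROCEDURE = [
    "Authenticate as {user}.",
    "Open the user interface for {action}ing a {obj}.",
    "{Action} a {obj} with details.",
    "Logout as {user}.",
    "Authenticate as {admin}.",
    "Open the audit records for today's date.",
]
EXPECTED = "The audit records should show that {user} {action}ed a {obj}."

def make_test_case(user, admin, action, obj):
    """Return the concrete procedure steps and the expected result."""
    steps = [s.format(user=user, admin=admin, action=action,
                      Action=action.capitalize(), obj=obj)
             for s in PROCEDURE]
    return steps, EXPECTED.format(user=user, action=action, obj=obj)

steps, expected = make_test_case("nurse_jones", "admin_root",
                                 "edit", "demographics record")
# steps[1] -> "Open the user interface for editing a demographics record."
# expected -> "The audit records should show that nurse_jones edited a
#              demographics record."
```

A simple template like this works for regular verbs ("edit" / "edited"); irregular action phrases would need explicit past-tense forms.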
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=773</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=773"/>
		<updated>2014-01-05T22:34:54Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 4.2. Low-level Assessment using Black-box Test Cases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and presents our case studies of evaluating the open-source EHR audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several challenges and limitations because of technology. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware itself imposes limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are not as centralized or easy to implement with the physically distributed nature of the overall software application. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within them is unmodified, accurate, and reliable. Engineering this protection may be challenging; it may also be overlooked by system developers who are unaware of, or indifferent to, the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
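&lt;br /&gt;
One common way to protect log integrity is to make the log tamper-evident, so that any after-the-fact modification of an entry is detectable. The chained-HMAC sketch below is a standard technique offered for illustration, not a scheme prescribed by the sources cited here; in practice the signing key would live in a key store rather than in the code.&lt;br /&gt;

```python
import hashlib
import hmac

KEY = b"audit-signing-key"  # assumption: a secret managed outside the log itself

def append_entry(chain, message):
    """Append a log line whose MAC also covers the previous MAC, forming a chain."""
    prev = chain[-1][1] if chain else b"genesis"
    tag = hmac.new(KEY, prev + message.encode(), hashlib.sha256).digest()
    chain.append((message, tag))

def verify(chain):
    """Recompute every MAC; any edited, reordered, or removed entry breaks the chain."""
    prev = b"genesis"
    for message, tag in chain:
        expect = hmac.new(KEY, prev + message.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            return False
        prev = tag
    return True

chain = []
append_entry(chain, "dr_smith viewed patient 1042")
append_entry(chain, "dr_smith updated patient 1042 allergies")
assert verify(chain)

chain[0] = ("dr_smith viewed patient 9999", chain[0][1])  # tamper with an entry
assert not verify(chain)
```

A scheme like this does not prevent modification, but it ensures a modified log can no longer pass verification, preserving the trustworthiness the paragraph above describes.&lt;br /&gt;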
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally across software companies and internally across software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare, viewing and reading data in EHR systems is a vital concern when managing protected health information&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis generally falls into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may not be preventable, and must instead be identified so that the user who performed them can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify &#039;&#039;what information&#039;&#039; is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:&lt;br /&gt;
&lt;br /&gt;
* Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.&lt;br /&gt;
* The Certification Commission for Health Information Technology (CCHIT)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt; specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health &amp;amp; Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;. We collect 17 auditable events from this source.&lt;br /&gt;
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. We collect 18 auditable events from this source.&lt;br /&gt;
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;. We collect 8 auditable events from this source.&lt;br /&gt;
&lt;br /&gt;
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After removing duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four source sets is “security administration event”, suggesting all four sources are concerned about software security. Out of the 28 unique events, 18 (64.3%) are contained in at least two of the source sets. Ten events (35.7%) are only contained in one source set. The overlap among the four sources suggests some common understanding and agreement of general events that should be logged, yet the disparity seems to indicate disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;(Yes or No)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| View data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that &#039;&#039;affect&#039;&#039; user-based non-repudiation, and events that &#039;&#039;do not affect&#039;&#039; user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 actions, only 9 events (56.25%) are suggested by two or more of the sources. The remaining 7 events (43.75%) are contained in only one source set.&lt;br /&gt;
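&lt;br /&gt;
These tallies can be reproduced directly from Table 1 with simple set operations. The sketch below transcribes the table’s checkmarks (event names lightly abbreviated from the row labels):&lt;br /&gt;

```python
# The four source sets of non-specific auditable events, as marked in Table 1.
CHUVAKIN = {"system startup", "system shutdown", "user login/logout",
            "create data", "update data", "delete data", "view data",
            "node-authentication failure", "security administration event",
            "system backup", "initiate a network connection",
            "grant access rights", "modify access rights",
            "revoke access rights", "system/network/services changes",
            "application process abort", "detection of malicious activity"}
CCHIT = {"system startup", "system shutdown", "user login/logout",
         "session timeout", "account lockout", "create data", "update data",
         "delete data", "view data", "query data",
         "node-authentication failure", "signature created/validated",
         "export data", "import data", "security administration event",
         "scheduling", "system backup", "system restore"}
SANS = {"system startup", "system shutdown", "system restart",
        "user login/logout", "create data", "update data", "delete data",
        "view data", "node-authentication failure",
        "security administration event", "initiate a network connection",
        "accept a network connection", "grant access rights",
        "modify access rights", "revoke access rights",
        "system/network/services changes", "application process abort",
        "detection of malicious activity"}
IEEE = {"security administration event", "initiate a network connection",
        "accept a network connection", "grant access rights",
        "modify access rights", "revoke access rights",
        "system/network/services changes", "changes to audit log configuration"}

sources = [CHUVAKIN, CCHIT, SANS, IEEE]
unique = set().union(*sources)
two_plus = {e for e in unique if sum(e in s for s in sources) >= 2}
all_four = {e for e in unique if all(e in s for s in sources)}

# Events marked "Y" (affects user-based non-repudiation) in Table 1.
NON_REPUDIATION = {"user login/logout", "session timeout", "account lockout",
                   "create data", "update data", "delete data", "view data",
                   "query data", "signature created/validated", "export data",
                   "import data", "system backup", "system restore",
                   "grant access rights", "modify access rights",
                   "revoke access rights"}

print(len(unique), len(two_plus), len(unique - two_plus))  # 28 18 10
print(all_four)  # only "security administration event"
print(len(NON_REPUDIATION), len(NON_REPUDIATION & two_plus),
      len(NON_REPUDIATION - two_plus))  # 16 9 7
```

The printed counts match the figures reported in the text: 28 unique events, 18 in at least two sources, 10 in only one, and 16 non-repudiation events of which 9 appear in two or more sources.&lt;br /&gt;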
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
For each EHR system, we deploy the software on a local web server following the deployment instructions provided by each EHR’s community website. Next, we consult official documentation typically provided on the website for each of the EHR systems. In the documentation (typically user guides, development guides, or community wiki pages) we search for sections on auditing and logging to understand how to access these mechanisms in the actual application. Once we understand how to access the auditing mechanism, we open our locally-deployed EHR system and attempt to access these features to continue our analysis. We document all of our observations or difficulties during this analysis process for reflection after the analysis is complete. &lt;br /&gt;
&lt;br /&gt;
Once we have either physical access to or a general understanding of the given application’s auditing mechanism, we record the following information:&lt;br /&gt;
&lt;br /&gt;
# A flag (satisfied or unsatisfied) for each of the assessment criteria listed in the “Logging Actions” column of Table 2.&lt;br /&gt;
# Any observations or important findings that may influence the results or provide justifications for results&lt;br /&gt;
&lt;br /&gt;
We repeat this process for each of the three EHR systems in the study.&lt;br /&gt;
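&lt;br /&gt;
The recording step above amounts to flagging each criterion and computing the fraction satisfied. A small sketch follows; the criteria names are the “Y” rows of Table 1, while the observations for the system under study are purely hypothetical.&lt;br /&gt;

```python
# The 16 high-level assessment criteria (the "Y" rows of Table 1).
CRITERIA = ["user login/logout", "session timeout", "account lockout",
            "create data", "update data", "delete data", "view data",
            "query data", "signature created/validated", "export data",
            "import data", "system backup", "system restore",
            "grant access rights", "modify access rights",
            "revoke access rights"]

# Hypothetical observations for one EHR system: events its audit mechanism
# was seen to log during the walkthrough.
observed = {"user login/logout", "update data"}

flags = {criterion: criterion in observed for criterion in CRITERIA}
satisfied = sum(flags.values())
print(f"{satisfied}/{len(CRITERIA)} criteria satisfied "
      f"({satisfied / len(CRITERIA):.1%})")  # prints: 2/16 criteria satisfied (12.5%)
```

Alongside the flags, the free-form observations mentioned in item 2 would be recorded separately for later reflection.&lt;br /&gt;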
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
Our low-level assessment of user-based non-repudiation involves constructing a black-box test plan for testing an EHR system’s recording of &#039;&#039;specific&#039;&#039; auditable events (such as “view diagnosis data”). In this paper, we briefly describe the process for the audit test cases used to evaluate user-based non-repudiation audit functionality.  We developed this methodology in earlier work&amp;lt;sup&amp;gt;[14]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In 2006, through a consensus-based process that engaged stakeholders, CCHIT defined certification criteria focused on the functional capabilities that should be included in ambulatory (outpatient) and inpatient EHR systems.  The requirements specifications contain 284 different functional descriptions of EHR behavior. &lt;br /&gt;
&lt;br /&gt;
The CCHIT ambulatory certification criteria contain eight requirements related to audit.  The audit requirements contain functionality such as “The system shall allow an authorized administrator to set the inclusion or exclusion of auditable events based on organizational policy &amp;amp; operating requirements/limits.”  One CCHIT audit criterion states that the set of auditable events in an EHR system should include the following fourteen items:&lt;br /&gt;
&lt;br /&gt;
# Application start/stop&lt;br /&gt;
# User login/logout&lt;br /&gt;
# Session timeout&lt;br /&gt;
# Account lockout&lt;br /&gt;
# Patient Record created/viewed/updated/deleted&lt;br /&gt;
# Scheduling&lt;br /&gt;
# Query&lt;br /&gt;
# Order&lt;br /&gt;
# Node-authentication failure&lt;br /&gt;
# Signature created/validated&lt;br /&gt;
# PHI Export (e.g. print)&lt;br /&gt;
# PHI import&lt;br /&gt;
# Security administration events&lt;br /&gt;
# Backup and restore&lt;br /&gt;
&lt;br /&gt;
The list is provided here verbatim from the CCHIT ambulatory criteria.  The criteria are vague. For example, the phrase “security administration events” is undefined and could relate to authentication attempts, deletion of log files, or assigning user privileges. Likewise, the term “scheduling” could relate to scheduling patient appointments, scheduling system backups, or scheduling system down-time for maintenance. The interpretation of these phrases varies, and the intended meanings are ambiguous.&lt;br /&gt;
&lt;br /&gt;
Due to the vagueness in these auditable events, we elected to approach the CCHIT certification criteria as a general functional requirements specification. The criteria describe functionality for EHR systems, such as editing a patient’s health record, signing a note about a patient, and indicating advance directives (e.g. a do-not-resuscitate order). Using these functional CCHIT requirements&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;, we develop a set of 58 black-box test cases that assess the ability of an EHR system to audit the user actions specified by these CCHIT requirements.  These test cases all involve a registered user performing a given action within the EHR system, therefore representing an assessment of user-based non-repudiation within each EHR system. The 58 test cases correspond to 58 individual CCHIT requirements statements.  Our test plan covers the 20.4% of the CCHIT requirements that are relevant to personal or protected health information.  The remaining 79.6% of the CCHIT requirements do not pertain to personal health information, and therefore do not necessitate an audit record for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
We iterated through each of the 284 ambulatory CCHIT requirements, extracting keywords and applying our template to produce a test case when necessary. A CCHIT requirements statement yields a test case when it contains certain keywords; for example, requirements that include phrases like “problem list,” “clinical documents,” and “diagnostic test” all indicate the user’s interaction with a piece of a patient’s protected health information.&lt;br /&gt;
&lt;br /&gt;
Additionally, we extract an action phrase (e.g. “edit”) and an object phrase (e.g. “demographics”) from each relevant requirement to construct the black-box test case.  We present the template used for these black-box tests in Section 4.2.1, and present an example of a test case and its corresponding requirement in Section 4.2.2. &lt;br /&gt;
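&lt;br /&gt;
The keyword screening and phrase extraction can be sketched as follows. The keyword list, the action-verb pattern, and the template wording here are illustrative stand-ins, not the full sets used in the study.&lt;br /&gt;

```python
import re

# Keywords indicating a requirement touches protected health information
# (illustrative subset, not the study's full list).
PHI_KEYWORDS = ("problem list", "clinical documents", "diagnostic test",
                "demographics", "medication")

TEMPLATE = ("Log in as a registered user, {action} the patient's {obj}, "
            "then inspect the audit log for a matching entry.")

def make_test_case(requirement):
    """Return a black-box audit test case if the requirement involves PHI."""
    text = requirement.lower()
    if not any(k in text for k in PHI_KEYWORDS):
        return None  # no PHI involved, so no audit record is required
    # Extract an action phrase and an object phrase (simplified heuristic).
    m = re.search(r"\b(create|view|edit|modify|update|delete)\b", text)
    action = m.group(1) if m else "access"
    obj = next(k for k in PHI_KEYWORDS if k in text)
    return TEMPLATE.format(action=action, obj=obj)

req = "The system shall provide the ability to edit patient demographics."
print(make_test_case(req))
```

For the sample requirement, the sketch produces a test case built around the action phrase “edit” and the object phrase “demographics,” mirroring the extraction described above.&lt;br /&gt;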
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=772</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=772"/>
		<updated>2014-01-05T22:32:28Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria, derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organization&#039;s information systems use these systems to sabotage the organization&#039;s IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some hardware configuration, and hardware itself imposes limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are not as centralized or easy to implement with the physically distributed nature of the overall software application. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves reliability of the audit mechanism, itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware or indifferent to the implications of unprotected log files and inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
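One commonly discussed safeguard for log file integrity (offered here as an illustration, not as a mechanism prescribed by the cited sources or implemented by the EHR systems studied) is hash chaining: each log entry embeds a digest of its predecessor, so modifying or deleting any earlier entry invalidates every later entry during verification. A minimal sketch:&lt;br /&gt;

```python
import hashlib

# Minimal sketch of a hash-chained audit log: each entry carries a
# SHA-256 digest of the previous entry plus its own message, so
# tampering with any earlier entry breaks verification of the chain.
# This is one illustrative integrity technique, not the mechanism of
# any specific EHR system.

def _digest(prev_hash: str, message: str) -> str:
    return hashlib.sha256((prev_hash + message).encode()).hexdigest()

def append(log: list, message: str) -> None:
    prev = log[-1][1] if log else "0" * 64   # genesis hash for the first entry
    log.append((message, _digest(prev, message)))

def verify(log: list) -> bool:
    prev = "0" * 64
    for message, entry_hash in log:
        if _digest(prev, message) != entry_hash:
            return False
        prev = entry_hash
    return True

log = []
append(log, "user=alice action=view object=demographics")
append(log, "user=bob action=edit object=problem-list")

# Rewriting the first entry without recomputing the chain is detectable:
tampered = [("user=mallory action=view object=demographics", log[0][1])] + log[1:]
```

Note that hash chaining makes tampering detectable but not preventable; an attacker with full control of the log store can still truncate the chain, which is why such schemes are usually paired with off-host anchoring of the latest digest.&lt;br /&gt;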
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally over software companies and internally over software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.  Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;, viewing and reading data in EHR systems is a vital concern when managing protected health information.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis falls into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may not always be preventable; instead, the user who performed the unacceptable actions must be identified and reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify &#039;&#039;what information&#039;&#039; is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:&lt;br /&gt;
&lt;br /&gt;
* Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.&lt;br /&gt;
* The Certification Commission for Health Information Technology (CCHIT)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt; specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health &amp;amp; Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;. We collect 17 auditable events from this source.&lt;br /&gt;
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. We collect 18 auditable events from this source.&lt;br /&gt;
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;. We collect 8 auditable events from this source.&lt;br /&gt;
&lt;br /&gt;
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After removing duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four source sets is “security administration event”, suggesting all four sources are concerned about software security. Of the 28 unique events, 18 (64.3%) are contained in at least two of the source sets; ten events (35.7%) are contained in only one source set. The overlap among the four sources suggests some common understanding and agreement on general events that should be logged, yet the disparity indicates disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;(Yes or No)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| View data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that &#039;&#039;affect&#039;&#039; user-based non-repudiation, and events that &#039;&#039;do not affect&#039;&#039; user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 events, only 9 (56.25%) are suggested by two or more of the sources; the remaining 7 (43.75%) are contained in only one source set.&lt;br /&gt;
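The deduplication and overlap counts described in this section reduce to simple set operations. The sketch below uses a small illustrative subset of event names rather than the full 60-item collection, so its totals differ from the paper's:&lt;br /&gt;

```python
# Sketch of the dedup-and-overlap computation behind Table 1, using a
# small illustrative subset of events rather than the full 60 items
# collected from the four sources.
sources = {
    "Chuvakin": {"system startup", "user login/logout", "create data",
                 "security administration event"},
    "CCHIT":    {"system startup", "user login/logout", "create data",
                 "session timeout", "security administration event"},
    "SANS":     {"system startup", "user login/logout", "create data",
                 "system restart", "security administration event"},
    "IEEE":     {"security administration event",
                 "changes to audit log configuration"},
}

# Union = the deduplicated event set; count how many sources list each.
all_events = set().union(*sources.values())
support = {e: sum(e in s for s in sources.values()) for e in all_events}

in_all_four = [e for e, n in support.items() if n == 4]
in_two_or_more = [e for e, n in support.items() if n >= 2]
```

On this toy subset, only “security administration event” appears in all four sets, mirroring the pattern reported for the full collection.&lt;br /&gt;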
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
For each EHR system, we deploy the software on a local web server following the deployment instructions provided by each EHR’s community website. Next, we consult official documentation typically provided on the website for each of the EHR systems. In the documentation (typically user guides, development guides, or community wiki pages) we search for sections on auditing and logging to understand how to access these mechanisms in the actual application. Once we understand how to access the auditing mechanism, we open our locally-deployed EHR system and attempt to access these features to continue our analysis. We document all of our observations or difficulties during this analysis process for reflection after the analysis is complete. &lt;br /&gt;
&lt;br /&gt;
Once we have either physical access to or a general understanding of the given application’s auditing mechanism, we record the following information:&lt;br /&gt;
&lt;br /&gt;
# A flag (satisfied or unsatisfied) for each of the assessment criteria listed in the “Logging Actions” column of Table 2.&lt;br /&gt;
# Any observations or important findings that may influence the results or provide justification for them.&lt;br /&gt;
&lt;br /&gt;
We repeat this process for each of the three EHR systems in the study.&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=771</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=771"/>
		<updated>2014-01-05T22:32:13Z</updated>

		<summary type="html">&lt;p&gt;Programsam: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=770</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=770"/>
		<updated>2014-01-05T22:30:45Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;(Yes or No)&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| View data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=769</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=769"/>
		<updated>2014-01-05T22:24:06Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;Yes or No&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-  style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=768</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=768"/>
		<updated>2014-01-05T22:23:13Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;Yes or No&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=767</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=767"/>
		<updated>2014-01-05T22:22:14Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;Yes or No&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=766</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=766"/>
		<updated>2014-01-05T22:21:59Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;Yes or No&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|- style=&amp;quot;font-weight: bold; background-color: #EEEEEE&amp;quot;&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=765</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=765"/>
		<updated>2014-01-05T22:20:21Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| &#039;&#039;Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&#039;&#039;&lt;br /&gt;
| &#039;&#039;Yes or No&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=764</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=764"/>
		<updated>2014-01-05T22:19:57Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&lt;br /&gt;
| CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&lt;br /&gt;
| SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&lt;br /&gt;
| IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Yes or No&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| User login/logout&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Session timeout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Account lockout&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Create data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Update data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Delete data&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Query data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Node-authentication failure&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Signature created/validated&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
|  &lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Export data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Import data&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Security administration event&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Scheduling&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System backup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System restore&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Initiate a network connection&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Accept a network connection&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Grant access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y &lt;br /&gt;
|-&lt;br /&gt;
| Modify access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| Revoke access rights&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| Y&lt;br /&gt;
|-&lt;br /&gt;
| System, network, or services changes&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Application process abort/failure/abnormal end&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Detection of malicious activity&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| Changes to audit log configuration&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
| X&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=763</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=763"/>
		<updated>2014-01-05T22:01:48Z</updated>

		<summary type="html">&lt;p&gt;Programsam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation&lt;br /&gt;
! Auditable Events&lt;br /&gt;
! colspan=4 | Source of Software Audit mechanism Checklist&lt;br /&gt;
! Affects User-based Non-repudiation&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;Log Entry Item&#039;&#039;&lt;br /&gt;
| Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;&lt;br /&gt;
| CCHIT&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;&lt;br /&gt;
| SANS&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;&lt;br /&gt;
| IEEE&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;&lt;br /&gt;
| Yes or No&lt;br /&gt;
|-&lt;br /&gt;
| System startup&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System shutdown&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| X&lt;br /&gt;
| &lt;br /&gt;
| N&lt;br /&gt;
|-&lt;br /&gt;
| System restart&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| X&lt;br /&gt;
|&lt;br /&gt;
| N&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=762</id>
		<title>IHI Table1</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=IHI_Table1&amp;diff=762"/>
		<updated>2014-01-05T21:58:09Z</updated>

		<summary type="html">&lt;p&gt;Programsam: Created page with &amp;quot;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot; |+ Table 1.  Line, Field, Method Counts for iTrust v2a-d for Java package edu.ncsu.itrust. ! ! colspan=4 | Line Cou...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align: left; width: 100%;&amp;quot;&lt;br /&gt;
|+ Table 1.  Line, Field, Method Counts for iTrust v2a-d for Java package edu.ncsu.itrust.&lt;br /&gt;
!&lt;br /&gt;
! colspan=4 | Line Count for Team&lt;br /&gt;
! colspan=4 | Field Count for Team&lt;br /&gt;
! colspan=4 | Method Count for Team&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Class&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;A&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;B&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;C&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;D&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;A&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;B&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;C&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;D&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;A&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;B&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;C&#039;&#039;&#039;&lt;br /&gt;
| &#039;&#039;&#039;D&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Auth&lt;br /&gt;
| 280&lt;br /&gt;
| 299&lt;br /&gt;
| 278&lt;br /&gt;
| 278&lt;br /&gt;
| 2&lt;br /&gt;
| 2&lt;br /&gt;
| 2&lt;br /&gt;
| 2&lt;br /&gt;
| 16&lt;br /&gt;
| 16&lt;br /&gt;
| 16&lt;br /&gt;
| 16&lt;br /&gt;
|-&lt;br /&gt;
| Demographics&lt;br /&gt;
| 628&lt;br /&gt;
| 544&lt;br /&gt;
| 828&lt;br /&gt;
| 540&lt;br /&gt;
| 27&lt;br /&gt;
| 25&lt;br /&gt;
| 25&lt;br /&gt;
| 25&lt;br /&gt;
| 22&lt;br /&gt;
| 18&lt;br /&gt;
| 26&lt;br /&gt;
| 19&lt;br /&gt;
|-&lt;br /&gt;
| Transactions&lt;br /&gt;
| 123&lt;br /&gt;
| 120&lt;br /&gt;
| 133&lt;br /&gt;
| 183&lt;br /&gt;
| 2&lt;br /&gt;
| 2&lt;br /&gt;
| 2&lt;br /&gt;
| 2&lt;br /&gt;
| 7&lt;br /&gt;
| 7&lt;br /&gt;
| 7&lt;br /&gt;
| 10&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=761</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=761"/>
		<updated>2014-01-05T21:57:04Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 4.1.2 High-level Assessment Methodology */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria, derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and presents our case studies of evaluating the open-source EHR audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents our discussion. Section 9 presents future work in the field of EHR audit mechanisms. Finally, Section 10 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
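The reliability concerns above can be made concrete with a small sketch. The hash-chained log below illustrates one way to make undetected modification of stored entries detectable; it is not drawn from the paper or from any EHR system studied, and all field names are assumptions.

```python
import hashlib
import json

def append_entry(log, user, action, timestamp, detail=""):
    """Append an audit entry whose digest chains to the previous entry.

    Chaining makes silent modification or deletion of earlier entries
    detectable, which speaks to the reliability concerns listed above.
    Illustrative sketch only; the field names are assumptions, not a
    schema from the paper.
    """
    prev_digest = log[-1]["digest"] if log else "0" * 64
    entry = {"user": user, "action": action,
             "timestamp": timestamp, "detail": detail}
    payload = json.dumps(entry, sort_keys=True) + prev_digest
    entry["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every digest; return False if any entry was altered."""
    prev_digest = "0" * 64
    for entry in log:
        fields = {k: v for k, v in entry.items() if k != "digest"}
        payload = json.dumps(fields, sort_keys=True) + prev_digest
        if hashlib.sha256(payload.encode()).hexdigest() != entry["digest"]:
            return False
        prev_digest = entry["digest"]
    return True
```

A chain like this addresses tamper evidence for stored entries only; machine- and application-based non-repudiation and transmission confidentiality would still require signing and transport-level protection.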
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attack. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;.  According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several challenges and limitations because of technology. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware, itself, provides limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are not as centralized or easy to implement with the physically distributed nature of the overall software application. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves reliability of the audit mechanism, itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware or indifferent to the implications of unprotected log files and inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally over software companies and internally over software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.  Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;, viewing and reading data in EHR systems is a vital concern when managing protected health information.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis seems to fall into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may be preventable, but must instead be identified so that the user who performed them can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify &#039;&#039;what information&#039;&#039; is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:&lt;br /&gt;
&lt;br /&gt;
* Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.&lt;br /&gt;
* The Certification Commission for Health Information Technology (CCHIT)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt; specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health &amp;amp; Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;. We collect 17 auditable events from this source.&lt;br /&gt;
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. We collect 18 auditable events from this source.&lt;br /&gt;
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;. We collect 8 auditable events from this source.&lt;br /&gt;
&lt;br /&gt;
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After combining duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four suggested auditable events sets is “security administration event”, suggesting all four sources are concerned about software security. Out of the 28 unique events, 18 (64.3%) are contained in at least two of the source sets. Ten events (35.7%) are only contained in one source set. The overlap among the four sources suggests some common understanding and agreement of general events that should be logged, yet the disparity seems to indicate disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.&lt;br /&gt;
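&lt;br /&gt;
The tally above can be sketched as a small set computation. A minimal sketch, assuming illustrative placeholder event names rather than the actual Table 1 entries:&lt;br /&gt;

```python
# Sketch of the Section 4.1.1 tally: combine the four source sets of
# non-specific auditable events, deduplicate, and measure their overlap.
# The event names below are illustrative placeholders, not the actual
# Table 1 entries; only the procedure mirrors the paper.

sources = {
    "Chuvakin and Peterson": {"view data", "create data", "security administration event"},
    "CCHIT": {"view data", "update data", "security administration event"},
    "SANS": {"delete data", "login failure", "security administration event"},
    "IEEE": {"print document", "security administration event"},
}

# Total events across all sources (60 in the paper) and unique events (28).
all_events = [e for events in sources.values() for e in events]
unique_events = set(all_events)

# How many of the four source sets suggest each unique event.
coverage = {e: sum(1 for s in sources.values() if e in s) for e in unique_events}

in_all_four = sorted(e for e, n in coverage.items() if n == 4)
in_two_or_more = sorted(e for e, n in coverage.items() if n >= 2)
in_only_one = sorted(e for e, n in coverage.items() if n == 1)

print(len(all_events), len(unique_events))
print(in_all_four)
print(len(in_two_or_more), len(in_only_one))
```

Against the actual Table 1 data, the same procedure yields the 60 total events, the 28 unique events, and the 18-of-28 (64.3%) overlap reported above.&lt;br /&gt;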
&lt;br /&gt;
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that &#039;&#039;affect&#039;&#039; user-based non-repudiation, and events that &#039;&#039;do not affect&#039;&#039; user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 events, only 9 (56.25%) are suggested by two or more of the sources. The remaining 7 events (43.75%) are contained in only one source set.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
For each EHR system, we deploy the software on a local web server following the deployment instructions provided by each EHR’s community website. Next, we consult official documentation typically provided on the website for each of the EHR systems. In the documentation (typically user guides, development guides, or community wiki pages) we search for sections on auditing and logging to understand how to access these mechanisms in the actual application. Once we understand how to access the auditing mechanism, we open our locally-deployed EHR system and attempt to access these features to continue our analysis. We document all of our observations or difficulties during this analysis process for reflection after the analysis is complete. &lt;br /&gt;
&lt;br /&gt;
Once we have either physical access to or a general understanding of the given application’s auditing mechanism, we record the following information:&lt;br /&gt;
&lt;br /&gt;
# A flag (satisfied or unsatisfied) for each of the assessment criteria listed in the “Logging Actions” column of Table 2.&lt;br /&gt;
# Any observations or important findings that may influence the results or provide justifications for results.&lt;br /&gt;
&lt;br /&gt;
We repeat this process for each of the three EHR systems in the study.&lt;br /&gt;
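&lt;br /&gt;
The record-keeping in the two steps above amounts to a satisfied/unsatisfied flag matrix per system, plus free-form notes. A minimal sketch, assuming hypothetical criterion and system names rather than the actual “Logging Actions” entries of Table 2:&lt;br /&gt;

```python
# Minimal sketch of the Section 4.1.2 record-keeping: one satisfied or
# unsatisfied flag per assessment criterion per EHR system, plus free-form
# observations. Criterion and system names are hypothetical placeholders.

criteria = ["logs view data", "logs update data", "logs user login"]
systems = ["EHR-A", "EHR-B", "EHR-C"]

# Start every flag as unsatisfied; flip flags as the walkthrough proceeds.
flags = {s: {c: False for c in criteria} for s in systems}
observations = {s: [] for s in systems}

# Example entries from a hypothetical walkthrough of one system.
flags["EHR-A"]["logs user login"] = True
observations["EHR-A"].append("Audit log reachable only through a database query.")

def satisfied_fraction(system):
    """Fraction of the assessment criteria a system's audit mechanism satisfies."""
    return sum(flags[system].values()) / len(criteria)

for s in systems:
    print(s, satisfied_fraction(s))
```

Averaging the satisfied fractions over the three systems gives the kind of summary figure reported in the abstract.&lt;br /&gt;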
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=760</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=760"/>
		<updated>2014-01-05T21:55:58Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 4.1.1 Derivation of Non-specific Auditable Events */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 introduces the open-source EHR systems studied and presents our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several challenges and limitations because of technology. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration, and hardware itself imposes limitations that affect the software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, the physically distributed nature of the overall software application makes audit mechanisms harder to centralize and implement. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves reliability of the audit mechanism itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware of, or indifferent to, the implications of unprotected log files and inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary across software companies and even across applications from the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; however, they tend to overlook the viewing or reading of data&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare, viewing and reading data in EHR systems is a vital concern when managing protected health information&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis seems to fall into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) cannot always be prevented, but must instead be identified so that the user who performed the unacceptable actions can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics: computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify &#039;&#039;what information&#039;&#039; is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:&lt;br /&gt;
&lt;br /&gt;
* Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.&lt;br /&gt;
* The Certification Commission for Health Information Technology (CCHIT)&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt; specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health &amp;amp; Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;. We collect 17 auditable events from this source.&lt;br /&gt;
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. We collect 18 auditable events from this source.&lt;br /&gt;
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;. We collect 8 auditable events from this source.&lt;br /&gt;
&lt;br /&gt;
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After combining duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four suggested auditable events sets is “security administration event”, suggesting all four sources are concerned about software security. Out of the 28 unique events, 18 (64.3%) are contained in at least two of the source sets. Ten events (35.7%) are only contained in one source set. The overlap among the four sources suggests some common understanding and agreement of general events that should be logged, yet the disparity seems to indicate disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.&lt;br /&gt;
&lt;br /&gt;
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that &#039;&#039;affect&#039;&#039; user-based non-repudiation, and events that &#039;&#039;do not affect&#039;&#039; user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.&lt;br /&gt;
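&lt;br /&gt;
The two-way categorization described above amounts to a traceability predicate over events. A minimal sketch, assuming illustrative events and traceability judgments rather than the actual Table 1 data:&lt;br /&gt;

```python
# Sketch of the Section 4.1.1 categorization: an event affects user-based
# non-repudiation exactly when it can be traced to a specific user
# accountholder. The events and traceability judgments below are
# illustrative placeholders, not the Table 1 data.

traceable_to_accountholder = {
    "view data": True,                     # e.g., a physician's account viewed a record
    "update data": True,
    "application process failure": False,  # internal system state change, no user action
    "system startup": False,
}

affects = sorted(e for e, t in traceable_to_accountholder.items() if t)
does_not_affect = sorted(e for e, t in traceable_to_accountholder.items() if not t)

print(affects)
print(does_not_affect)
```

Applied to the 28 events of Table 1, this predicate yields the 16 events that affect user-based non-repudiation.&lt;br /&gt;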
&lt;br /&gt;
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 events, only 9 (56.25%) are suggested by two or more of the sources. The remaining 7 events (43.75%) are contained in only one source set.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=759</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=759"/>
		<updated>2014-01-05T18:56:03Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 4.1.1 Derivation of Non-specific Auditable Events */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
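As a concrete illustration of the &#039;&#039;what&#039;&#039; and &#039;&#039;when&#039;&#039; of such logging, a minimal user-activity log record might capture the actor, action, target, and timestamp. The field names below are assumptions for illustration, not drawn from any standard.&lt;br /&gt;

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """One user-activity log record: who did what to which data, and when."""
    user_id: str    # authenticated accountholder (the 'who')
    action: str     # e.g. "view", "create", "update", "delete" (the 'what')
    target: str     # record or data element affected
    timestamp: str  # ISO 8601 UTC time (the 'when')

def record(user_id, action, target):
    # In a real system this would append to protected, append-only storage.
    return AuditEntry(user_id, action, target,
                      datetime.now(timezone.utc).isoformat())

entry = record("dr_smith", "view", "patient/1234/demographics")
```

The &#039;&#039;how&#039;&#039; and the monitoring schedule remain policy questions that a record format alone cannot settle.&lt;br /&gt;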
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria, derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
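The shape of such an audit-related black-box test case can be sketched as: perform one specific user action against the system under test, then verify that the audit log gained an entry tying that user to that action. The stand-in functions below are hypothetical, not part of any EHR system under study.&lt;br /&gt;

```python
# Sketch of a black-box audit test case, under assumed helper functions.
def run_audit_test(perform_action, read_log, expected_action, user):
    """Pass iff the log gained an entry tying `user` to `expected_action`."""
    before = len(read_log())
    perform_action()
    new_entries = read_log()[before:]
    return any(e["user"] == user and e["action"] == expected_action
               for e in new_entries)

# Toy stand-in for an EHR: viewing demographics appends a log entry.
_log = []
def view_demographics():
    _log.append({"user": "nurse_jones", "action": "view patient demographics"})

passed = run_audit_test(view_demographics, lambda: list(_log),
                        "view patient demographics", "nurse_jones")
```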
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attack. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;.  According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware, itself, provides limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are not as centralized or easy to implement with the physically distributed nature of the overall software application. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves reliability of the audit mechanism, itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware or indifferent to the implications of unprotected log files and inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
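One common tamper-evidence technique (offered here as a sketch, not as the paper's proposal) is to chain each log entry to the hash of the previous entry, so that any in-place modification breaks verification of every later entry.&lt;br /&gt;

```python
import hashlib

def append_entry(log, message):
    """Append (message, digest), chaining to the previous entry's digest."""
    prev_hash = log[-1][1] if log else "0" * 64
    digest = hashlib.sha256((prev_hash + message).encode()).hexdigest()
    log.append((message, digest))

def verify(log):
    """Recompute the chain; any modified entry invalidates the log."""
    prev_hash = "0" * 64
    for message, digest in log:
        if hashlib.sha256((prev_hash + message).encode()).hexdigest() != digest:
            return False
        prev_hash = digest
    return True

log = []
append_entry(log, "dr_smith viewed patient 1234")
append_entry(log, "dr_smith updated allergy list")
ok_before = verify(log)
log[0] = ("dr_smith viewed patient 9999", log[0][1])  # tamper in place
ok_after = verify(log)
```

Hash chaining makes tampering detectable, though it does not by itself prevent wholesale truncation or protect the storage medium.&lt;br /&gt;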
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally over software companies and internally over software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.  Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;, viewing and reading data in EHR systems is a vital concern when managing protected health information.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored. Log file analysis falls into three categories: manual, automated, or a combination of both. However, the current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
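A minimal sketch of what automated log review could look like: scan user-activity entries and flag actions that a role is not permitted to perform. The role/permission table below is illustrative, not taken from any regulation or system studied.&lt;br /&gt;

```python
# Hypothetical role/permission table for illustration only.
PERMITTED = {
    "physician": {"view data", "create data", "update data"},
    "receptionist": {"view schedule"},
}

def flag_violations(entries):
    """Return entries whose action falls outside the actor's permitted set."""
    return [e for e in entries
            if e["action"] not in PERMITTED.get(e["role"], set())]

entries = [
    {"role": "physician", "action": "view data"},
    {"role": "receptionist", "action": "view data"},  # not permitted
]
violations = flag_violations(entries)
```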
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may be preventable, but under an accountability strategy must instead be detected so that the responsible user can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify &#039;&#039;what information&#039;&#039; is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:&lt;br /&gt;
&lt;br /&gt;
* Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.&lt;br /&gt;
* The Certification Commission for Health Information Technology (CCHIT)  specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health &amp;amp; Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use&amp;lt;sup&amp;gt;[2]&amp;lt;/sup&amp;gt;. We collect 17 auditable events from this source.&lt;br /&gt;
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems&amp;lt;sup&amp;gt;[7]&amp;lt;/sup&amp;gt;. We collect 18 auditable events from this source.&lt;br /&gt;
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events&amp;lt;sup&amp;gt;[6]&amp;lt;/sup&amp;gt;. We collect 8 auditable events from this source.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=758</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=758"/>
		<updated>2014-01-05T18:54:50Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 4.1 High-level Assessment using Audit Guidelines and Checklists */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria, derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several challenges and limitations because of technology. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware, itself, provides limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, the physically distributed nature of the overall software application makes audit mechanisms harder to centralize and implement. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves reliability of the audit mechanism, itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware or indifferent to the implications of unprotected log files and inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally over software companies and internally over software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.  Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare, viewing and reading data in EHR systems is a vital concern when managing protected health information&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis seems to fall into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may not always be preventable; instead, they must be identified after the fact so that the user who performed them can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=757</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=757"/>
		<updated>2014-01-05T18:54:34Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 4. Assessment Methodology */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by deriving a set of 16 general assessment criteria from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high- and low-levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and presents our case studies of evaluating the open-source EHR audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several challenges and limitations because of technology. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware, itself, provides limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, the physically distributed nature of the overall software application makes audit mechanisms harder to centralize and implement. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves reliability of the audit mechanism, itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware or indifferent to the implications of unprotected log files and inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
&lt;br /&gt;
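The kind of log file protection described above can be made concrete with tamper-evident logging. The sketch below chains each entry to its predecessor with an HMAC so that modifying or deleting any entry breaks verification; the field names and key handling are illustrative assumptions, not details taken from the paper or from any EHR system studied.&lt;br /&gt;
&lt;br /&gt;
```python
# Tamper-evident audit log sketch: each entry's MAC covers the previous
# entry's MAC, so editing or removing any entry invalidates the chain.
# Field names and key handling are hypothetical, for illustration only.
import hashlib
import hmac
import json

KEY = b"demo-secret-key"  # in practice, protected application-held key material

def append_entry(log, user, action):
    """Append an audit record whose MAC covers its content and predecessor."""
    prev_mac = log[-1]["mac"] if log else "genesis"
    record = {"user": user, "action": action, "prev": prev_mac}
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    log.append(record)

def verify_chain(log):
    """Recompute every MAC; any modification or deletion returns False."""
    prev_mac = "genesis"
    for record in log:
        body = {k: v for k, v in record.items() if k != "mac"}
        if body["prev"] != prev_mac:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, record["mac"]):
            return False
        prev_mac = record["mac"]
    return True

log = []
append_entry(log, "reception1", "view:patient-demographics")
append_entry(log, "nurse7", "modify:diagnosis")
assert verify_chain(log)
log[0]["action"] = "view:nothing"  # tamper with a stored entry...
assert not verify_chain(log)       # ...and verification detects it
```
&lt;br /&gt;
A real deployment would additionally protect the key itself and forward entries to separate storage, consistent with the storage and transmission confidentiality concerns discussed in Section 2.&lt;br /&gt;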
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally over software companies and internally over software applications of the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.  Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare, viewing and reading data in EHR systems is a vital concern when managing protected health information&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis seems to fall into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analyzers&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may not always be preventable; instead, they must be identified after the fact so that the user who performed them can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”).  Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
=== 4.1. High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1. Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2. High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1. Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2. Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3. Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=756</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=756"/>
		<updated>2014-01-05T18:54:15Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 3.2.2. Ineffective Log Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms using a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and presents our case studies of evaluating the open-source EHR audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
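As one illustration of how application-based non-repudiation might be approached in practice, an application can tag each log entry with a keyed hash (HMAC) so that entries forged without the application&#039;s key are detectable. This is only a sketch under our own assumptions; it is not a design prescribed by the work cited above, and the key name and entry format are hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret held only by the trusted logging application.
APP_KEY = b"trusted-application-secret"

def sign_entry(entry_text):
    """The trusted application tags each log line with an HMAC."""
    tag = hmac.new(APP_KEY, entry_text.encode(), hashlib.sha256).hexdigest()
    return entry_text + "|" + tag

def entry_is_authentic(line):
    """Reject lines whose tag was not produced with the application key."""
    entry_text, _, tag = line.rpartition("|")
    expected = hmac.new(APP_KEY, entry_text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

A manually fabricated log line then fails verification, addressing the concern that malicious users could create fake entries; protecting the key itself remains a separate infrastructure problem.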
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organization&#039;s information systems use those systems to sabotage the organization&#039;s IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several challenges and limitations because of technology. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some sort of hardware configuration. Hardware itself imposes limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, the physically distributed nature of the overall application makes software audit mechanisms harder to centralize and implement. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware of or indifferent to the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
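One well-known way to make log modification detectable (shown purely as an illustrative sketch; none of the systems or sources discussed here prescribe this design) is to chain each entry to its predecessor with a cryptographic hash, so that altering or deleting any earlier entry invalidates every later one:

```python
import hashlib
import json

def append_entry(log, entry):
    """Append an audit entry, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"entry": entry, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    record["hash"] = digest
    log.append(record)
    return log

def verify_chain(log):
    """Return True only if no entry has been modified, reordered, or removed."""
    prev_hash = "0" * 64
    for record in log:
        body = {"entry": record["entry"], "prev": record["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != digest:
            return False
        prev_hash = record["hash"]
    return True
```

A tampered log then fails verification, which directly supports the log file reliability concern above; in practice the chain head would also need to be anchored somewhere the attacker cannot rewrite.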
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary across software companies and even among applications within the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Distributed web services, for example, may have different policies based on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. Again, the organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authority, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not currently enable security features (such as software audit mechanisms) by default; instead, software organizations must deliberately enable auditing features. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; however, they tend to overlook viewing or reading of data&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare, viewing and reading data in EHR systems is a vital concern when managing protected health information&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability for non-repudiation remains unmonitored. Log file analysis falls into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Software companies tend to inadequately prepare, support, and maintain human log file analysts&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Preparation, support, and maintenance of effective human analysts should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Schneider&amp;lt;sup&amp;gt;[13]&amp;lt;/sup&amp;gt; compares accountability to a defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may not always be preventable; instead, the user who performed them must be identified and reprimanded. Schneider suggests that analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon&amp;lt;sup&amp;gt;[4]&amp;lt;/sup&amp;gt; also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1. High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1. Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2. High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1. Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2. Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3. Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=755</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=755"/>
		<updated>2014-01-05T18:53:28Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 3.2.1. Ill-defined Standards, Policies, and Regulations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms using a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and presents our case studies of evaluating the open-source EHR audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organization&#039;s information systems use those systems to sabotage the organization&#039;s IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some hardware configuration, and hardware itself imposes limitations on software. For example, information storage may be restricted to a single hard drive with limited capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are less centralized and harder to implement because of the physically distributed nature of the overall software application. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breaches of audit log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within them is unmodified, accurate, and reliable. Engineering this protection may be challenging; it may also be overlooked by system developers who are unaware of, or indifferent to, the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally across software companies and internally across applications within the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Distributed web services, for example, may have different policies depending on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set. In addition, the physical location of the distributed systems may raise concerns. The organization (or country) that hosts the database server likely has different policies and regulations than the organization (or country) that hosts the web server. Furthermore, data transmitted between these servers may pass through additional organizational authorities, which likely introduces yet another layer of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without auditing enabled by default, user-based non-repudiation and the enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even when software audit mechanisms are enabled, they still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data, while overlooking the viewing or reading of data&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;, however, the viewing and reading of data in EHR systems is a vital concern when managing protected health information.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=754</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=754"/>
		<updated>2014-01-05T18:52:59Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 3.2. Challenges in Policy, Regulations, and Compliance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms using a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
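A low-level audit test case of the kind described here pairs a specific user action with a check that the audit log gained a matching entry. The sketch below is hypothetical and not from the paper; the InMemoryEHR class and its method names are invented stand-ins for a real EHR interface driven through its actual user-facing operations.

```python
# Hypothetical sketch of a black-box audit test case: perform a specific user
# action, then assert that the audit log gained exactly one entry naming the
# user, the action, and the affected record. InMemoryEHR is an invented
# stand-in for illustration only.
import datetime

class InMemoryEHR:
    def __init__(self):
        self.audit_log = []
        self.demographics = {}

    def _audit(self, user, action, record_id):
        self.audit_log.append({
            "user": user, "action": action, "record": record_id,
            "timestamp": datetime.datetime.now(datetime.timezone.utc),
        })

    def modify_demographics(self, user, record_id, fields):
        self.demographics.setdefault(record_id, {}).update(fields)
        self._audit(user, "modify_demographics", record_id)

def test_modify_demographics_is_audited():
    ehr = InMemoryEHR()
    before = len(ehr.audit_log)
    ehr.modify_demographics("nurse01", "patient-7", {"address": "12 Elm St"})
    new = ehr.audit_log[before:]
    # Exactly one new entry must tie the user, action, and record together.
    assert len(new) == 1
    assert new[0]["user"] == "nurse01"
    assert new[0]["action"] == "modify_demographics"
    assert new[0]["record"] == "patient-7"

test_modify_demographics_is_audited()
```

A system that audits only data modifications but not views would pass this test yet fail an analogous test for a "view demographics" action, which is the distinction the low-level assessment probes.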
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work with audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
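Two of these concerns, the integrity of stored log data and application-based non-repudiation, can be approximated in software with a hash-chained, MAC-protected log. The following sketch is illustrative only and not from the paper; key management is deliberately simplified to a single in-process constant. Each entry binds to its predecessor by hash and carries an HMAC under a key held only by the trusted application, so modified or forged entries are detectable at verification time.

```python
# Illustrative sketch (not from the paper): a hash-chained audit log in which
# each entry commits to its predecessor and carries an HMAC under a key held
# by the trusted application, so users without the key cannot silently alter
# existing entries or forge new ones. Real deployments would protect the key
# in an HSM or separate log server rather than a constant.
import hashlib
import hmac

APP_KEY = b"application-secret-key"  # assumption: held by the application only

def append_entry(log, message):
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    payload = f"{prev_hash}|{message}"
    entry_hash = hashlib.sha256(payload.encode()).hexdigest()
    tag = hmac.new(APP_KEY, entry_hash.encode(), hashlib.sha256).hexdigest()
    log.append({"message": message, "prev_hash": prev_hash,
                "entry_hash": entry_hash, "hmac": tag})

def verify_log(log):
    prev_hash = "0" * 64
    for e in log:
        payload = f"{prev_hash}|{e['message']}"
        if hashlib.sha256(payload.encode()).hexdigest() != e["entry_hash"]:
            return False  # chain broken: an entry was modified or removed
        expected = hmac.new(APP_KEY, e["entry_hash"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, e["hmac"]):
            return False  # entry was not produced by the trusted application
        prev_hash = e["entry_hash"]
    return True

log = []
append_entry(log, "user=alice action=view record=42")
append_entry(log, "user=bob action=modify record=42")
assert verify_log(log)
log[0]["message"] = "user=alice action=none"  # tampering breaks verification
assert not verify_log(log)
```

Note that this addresses detection, not prevention: an attacker who controls the host can still truncate the whole log, which is why the transmission of entries to a separate trusted store remains one of the four concerns above.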
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to the organization&#039;s information systems use those systems to sabotage its IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some hardware configuration, and hardware itself imposes limitations on software. For example, information storage may be restricted to a single hard drive with limited capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are less centralized and harder to implement because of the physically distributed nature of the overall software application. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breaches of audit log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within them is unmodified, accurate, and reliable. Engineering this protection may be challenging; it may also be overlooked by system developers who are unaware of, or indifferent to, the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.&lt;br /&gt;
&lt;br /&gt;
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally across software companies and internally across applications within the same company&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Distributed web services, for example, may have different policies depending on the host machines&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;; the database server may have one set of auditing policies, while the web server may have a completely different set. In addition, the physical location of the distributed systems may raise concerns. The organization (or country) that hosts the database server likely has different policies and regulations than the organization (or country) that hosts the web server. Furthermore, data transmitted between these servers may pass through additional organizational authorities, which likely introduces yet another layer of varying policies and regulations. Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt; state that administrators of such complicated distributed systems may not enable security features (such as software audit mechanisms) by default; instead, software organizations must actively enable auditing features by choice. Without auditing enabled by default, user-based non-repudiation and the enforcement of accountability would likely decline.&lt;br /&gt;
&lt;br /&gt;
Even when software audit mechanisms are enabled, they still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data, while overlooking the viewing or reading of data&amp;lt;sup&amp;gt;[11]&amp;lt;/sup&amp;gt;. In healthcare&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;, however, the viewing and reading of data in EHR systems is a vital concern when managing protected health information.&lt;br /&gt;
&lt;br /&gt;
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=753</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=753"/>
		<updated>2014-01-05T18:50:50Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 3.1.2. Log File Reliability */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by applying a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
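The low-level test cases described above follow a common pattern: perform one specific user action, then check whether the audit log gained a matching entry. The sketch below makes that pattern concrete; FakeEHR is an in-memory stand-in invented for illustration and is not the API of any EHR system in this study.&lt;br /&gt;

```python
# Hedged sketch of one audit-related black-box test. FakeEHR is a
# hypothetical stand-in client, not a real EHR interface.

class FakeEHR:
    def __init__(self):
        self.audit_log = []   # stand-in for the user-activity log file
        self.user = None

    def login(self, user_id):
        self.user = user_id
        self.audit_log.append({"user": user_id, "action": "login", "target": "-"})

    def view_patient_demographics(self, patient_id):
        # A compliant audit mechanism records this *specific* action.
        self.audit_log.append({"user": self.user, "action": "view",
                               "target": f"patient/{patient_id}/demographics"})

def audit_test_view_demographics(ehr):
    """Pass iff viewing demographics leaves an entry naming user and action."""
    before = len(ehr.audit_log)
    ehr.login("clerk7")
    ehr.view_patient_demographics(1001)
    new_entries = ehr.audit_log[before:]
    return any(e["user"] == "clerk7" and e["action"] == "view"
               and "demographics" in e["target"] for e in new_entries)

result = audit_test_view_demographics(FakeEHR())
```

A test like this passes only when the specific action is traceable to the authenticated accountholder, which is the pass criterion our low-level assessment applies.&lt;br /&gt;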
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work on audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, impose legal sanctions for tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some hardware configuration, and hardware itself imposes limitations on software. For example, information storage may be restricted to a single hard drive with limited capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, the physically distributed nature of the overall application makes software audit mechanisms harder to centralize and implement. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breach of audit mechanism log data&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. Audit mechanism log files need protection to ensure that the data contained within them is unmodified, accurate, and reliable. Engineering this protection may be challenging; it may also be overlooked by system developers who are unaware of, or indifferent to, the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.&lt;br /&gt;
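One widely used way to make a log file tamper-evident, shown below purely as our own illustration rather than a technique the surveyed systems implement, is to chain each entry to its predecessor with a cryptographic hash, so that any in-place modification of an earlier entry invalidates every later link.&lt;br /&gt;

```python
import hashlib

GENESIS = "0" * 64  # digest assumed for the (empty) start of the chain

def append_entry(chain, message):
    """Append (message, digest) where digest covers the previous digest too."""
    prev_digest = chain[-1][1] if chain else GENESIS
    digest = hashlib.sha256((prev_digest + message).encode()).hexdigest()
    chain.append((message, digest))

def verify_chain(chain):
    """Recompute every link; any in-place edit breaks verification."""
    prev_digest = GENESIS
    for message, digest in chain:
        if hashlib.sha256((prev_digest + message).encode()).hexdigest() != digest:
            return False
        prev_digest = digest
    return True

log = []
append_entry(log, "clerk7 viewed patient/1001")
append_entry(log, "clerk7 modified patient/1001/demographics")
assert verify_chain(log)
log[0] = ("clerk7 viewed patient/9999", log[0][1])  # tamper with an old entry
assert not verify_chain(log)
```

Hash chaining makes tampering detectable but not preventable; protecting the chain head (for example, by periodically signing it or shipping it off-host) is still required for the log to remain trustworthy.&lt;br /&gt;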
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=752</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=752"/>
		<updated>2014-01-05T18:50:30Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 3.1.1. Limited Infrastructure Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by applying a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study and some key terms and definitions. Section 3 discusses related work on audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organizations&#039; information systems use these systems to sabotage their organizations&#039; IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, impose legal sanctions for tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some hardware configuration, and hardware itself imposes limitations on software. For example, information storage may be restricted to a single hard drive with limited capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, the physically distributed nature of the overall application makes software audit mechanisms harder to centralize and implement. Here, the actual site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=751</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=751"/>
		<updated>2014-01-05T18:50:09Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 3. Related Work */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by applying a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study, including key terms and definitions. Section 3 discusses related work on audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
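&lt;br /&gt;
To make the log-reliability concerns concrete, here is a minimal sketch (our illustration, not a design from B&amp;amp;ouml;ck, et al., or from the paper) of hash-chained audit entries: each record commits to the hash of its predecessor, so retroactive modification, reordering, or deletion of any entry is detectable on verification.&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry


def _digest(record):
    # Deterministic serialization so the hash is stable across runs.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def append_entry(log, user, action, prev_hash):
    """Append a tamper-evident entry chained to the previous entry's hash."""
    record = {"user": user, "action": action, "prev": prev_hash}
    log.append((record, _digest(record)))
    return log[-1][1]


def verify_chain(log):
    """Recompute every hash; a modified, reordered, or deleted entry breaks the chain."""
    prev = GENESIS
    for record, digest in log:
        if record["prev"] != prev or _digest(record) != digest:
            return False
        prev = digest
    return True


log = []
h = append_entry(log, "nurse01", "view patient demographics", GENESIS)
h = append_entry(log, "nurse01", "modify diagnosis", h)
print(verify_chain(log))            # True
log[0][0]["user"] = "someone_else"  # retroactive tampering...
print(verify_chain(log))            # False: detected
```
A scheme like this addresses tamper evidence only; the storage- and transmission-confidentiality concerns above would still require separate protections.&lt;br /&gt;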
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organization&#039;s information systems use these systems to sabotage the organization&#039;s IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
Behind every piece of software lies some hardware configuration, and hardware itself imposes limitations on software. For example, information storage may be restricted to a single hard drive with limited capacity. As a result, EHR systems must manage storage resources carefully.&lt;br /&gt;
&lt;br /&gt;
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms is determining where log files should be generated, stored, and managed to best serve the subject domain and intent of the software application&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. In these systems, software components may run on separate host machines; for example, one machine may host a database server while a separate machine hosts a web server. The physically distributed nature of the overall application makes audit mechanisms harder to centralize and implement, and the proper site of the audit logging functionality is not easy to define&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;. Should software generate audit trails at the web server level, at the database server level, at both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.&lt;br /&gt;
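&lt;br /&gt;
One way to reason about the siting question is to have every tier emit audit entries tagged with their origin and a shared request identifier, then merge them at a central collector so one user action can be reconstructed across physically separate hosts. The sketch below is a hypothetical illustration; the field names and the two-tier setup are our assumptions, not from the cited work.&lt;br /&gt;
&lt;br /&gt;
```python
import heapq

# Each tier (web server, database server) tags its audit entries with an
# "origin" and a shared request id "req", so a central collector can
# correlate the pieces of one user action.
web_log = [
    {"ts": 1, "origin": "web", "req": "r1", "user": "doc01", "event": "GET /patient/42"},
]
db_log = [
    {"ts": 2, "origin": "db", "req": "r1", "user": "doc01", "event": "SELECT demographics"},
]


def merge_by_time(*logs):
    """Merge per-tier logs (each already time-ordered) into one timeline."""
    return list(heapq.merge(*logs, key=lambda e: e["ts"]))


timeline = merge_by_time(web_log, db_log)
print([e["origin"] for e in timeline])  # ['web', 'db']
```
Tagging entries at their source keeps each tier's log self-describing, so the choice of where to store the merged trail can be made independently of where events are generated.&lt;br /&gt;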
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=750</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=750"/>
		<updated>2014-01-05T18:49:12Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 2. Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms using a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study, including key terms and definitions. Section 3 discusses related work on audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organization&#039;s information systems use these systems to sabotage the organization&#039;s IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers were given administrative or high-level privileges to the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* Lack of access controls facilitated IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=749</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=749"/>
		<updated>2014-01-05T18:49:01Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 2. Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement, “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms using a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study, including key terms and definitions. Section 3 discusses related work on audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
B&amp;amp;ouml;ck, et al., identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;storage confidentiality&#039;&#039; &amp;amp;ndash; malicious users should not be able to access log entries &lt;br /&gt;
* &#039;&#039;machine-based non-repudiation&#039;&#039; &amp;amp;ndash; log files can be traced to a specific machine to identify the source of the audit entries&lt;br /&gt;
* &#039;&#039;application-based non-repudiation&#039;&#039; &amp;amp;ndash; log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries&lt;br /&gt;
* &#039;&#039;transmission confidentiality&#039;&#039; &amp;amp;ndash; accuracy and integrity of log file data is preserved during transmission&lt;br /&gt;
&lt;br /&gt;
Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. B&amp;amp;ouml;ck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.&lt;br /&gt;
&lt;br /&gt;
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An &#039;&#039;insider attack&#039;&#039; occurs when employees of an organization with legitimate access to their organization&#039;s information systems use these systems to sabotage the organization&#039;s IT infrastructure or commit fraud&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of insider IT sabotage between 1996 and 2002&amp;lt;sup&amp;gt;[9]&amp;lt;/sup&amp;gt;. According to the study:&lt;br /&gt;
&lt;br /&gt;
* 90% of insider attackers had been given administrative or other high-level privileges on the target system.&lt;br /&gt;
* 81% of the incidents involved losses to the organization, with dollar amounts estimated between &amp;quot;five hundred dollars&amp;quot; and &amp;quot;tens of millions of dollars.&amp;quot;&lt;br /&gt;
* The majority of attacks occurred after the employees were terminated from the organization.&lt;br /&gt;
* A lack of access controls facilitated the IT sabotage.&lt;br /&gt;
&lt;br /&gt;
Although federal laws such as HIPAA provide legal sanctions against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=748</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=748"/>
		<updated>2014-01-05T18:46:35Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 2. Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and to determine whether high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by applying a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
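A low-level black-box audit test of the kind described above can be sketched as follows. The harness, the event names, and the toy FakeEHR stand-in are hypothetical illustrations (they are not the paper&#039;s actual test cases or any real EHR interface); a real assessment would drive the system under test through its user interface:

```python
# Toy EHR stand-in: it audits password changes but silently skips
# demographic edits, so the second test below should fail.
class FakeEHR:
    def __init__(self):
        self._log = []

    def audit_entries(self):
        return list(self._log)

    def change_password(self, user):
        self._log.append({"user": user, "event": "change user password"})

    def modify_demographics(self, user):
        pass  # action succeeds, but no audit record is written

def run_audit_test(system, user, perform, expected_event):
    """Perform a specific user action, then check the audit log for a
    matching entry attributed to that user (pass/fail oracle)."""
    before = len(system.audit_entries())
    perform(system, user)
    new_entries = system.audit_entries()[before:]
    return any(e["user"] == user and expected_event in e["event"]
               for e in new_entries)

ehr = FakeEHR()
assert run_audit_test(ehr, "admin", FakeEHR.change_password,
                      "change user password")
assert not run_audit_test(ehr, "admin", FakeEHR.modify_demographics,
                          "modify patient demographics")
```

The oracle checks only that some new entry attributes the expected event to the acting user, which mirrors the pass/fail criterion of an audit test case without depending on any particular log format.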
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study, including key terms and definitions. Section 3 discusses related work on audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents the limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
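In code, user-based non-repudiation amounts to binding every auditable event to the authenticated account that performed it. The following minimal sketch assumes actions run only through an authenticated session object; all class and method names here are invented for illustration and do not come from the paper or any EHR system:

```python
import datetime

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, user_id, event):
        # Each auditable event is tied to the authenticated account and a
        # timestamp, so the action cannot later be plausibly denied.
        self.entries.append({
            "user": user_id,
            "event": event,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

class AuthenticatedSession:
    """Stand-in for a logged-in EHR session; every action logs first."""
    def __init__(self, user_id, audit_log):
        self.user_id = user_id
        self.audit = audit_log

    def change_password(self, new_password):
        self.audit.record(self.user_id, "change user password")
        # ... actual password update elided ...

    def modify_demographics(self, patient_id, fields):
        self.audit.record(self.user_id,
                          f"modify demographics for patient {patient_id}")
        # ... actual record update elided ...

log = AuditLog()
session = AuthenticatedSession("nurse_smith", log)
session.change_password("s3cret!")
session.modify_demographics("p-1001", {"address": "..."})
assert [e["user"] for e in log.entries] == ["nurse_smith", "nurse_smith"]
```

Note that this only establishes who acted within the application; without the log-protection measures discussed below, an attacker who can edit the log file directly can still repudiate the action.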
B&amp;amp;ouml;ck et al. identify four primary concerns regarding software audit mechanism reliability&amp;lt;sup&amp;gt;[1]&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
	<entry>
		<id>https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=747</id>
		<title>Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms</title>
		<link rel="alternate" type="text/html" href="https://bw.kn1.us/wiki/index.php?title=Modifying_Without_a_Trace:_High-level_Audit_Guidelines_are_Inadequate_for_Electronic_Health_Record_Audit_Mechanisms&amp;diff=747"/>
		<updated>2014-01-05T18:44:39Z</updated>

		<summary type="html">&lt;p&gt;Programsam: /* 2. Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;J. King, B. Smith, L. Williams, &amp;quot;Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms&amp;quot;, Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. &#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and to determine whether high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems&#039;&#039;. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology (CCHIT) certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.&lt;br /&gt;
&lt;br /&gt;
== 1. Introduction ==&lt;br /&gt;
&lt;br /&gt;
Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson&amp;lt;sup&amp;gt;[3]&amp;lt;/sup&amp;gt;, “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations&amp;lt;sup&amp;gt;[8]&amp;lt;/sup&amp;gt;. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”&amp;lt;sup&amp;gt;[5]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of &#039;&#039;what&#039;&#039; should be logged, &#039;&#039;how&#039;&#039; it should be logged, and &#039;&#039;when&#039;&#039; logged information should be monitored.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation&#039;&#039;. In performing this study, we investigate the following questions:&lt;br /&gt;
&lt;br /&gt;
* R1: What events should be included in an EHR log file for non-repudiation?&lt;br /&gt;
* R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?&lt;br /&gt;
&lt;br /&gt;
Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms by applying a set of 16 general assessment criteria derived from four academic and professional sources of &#039;&#039;non-specific&#039;&#039; auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess &#039;&#039;specific&#039;&#039; user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.&lt;br /&gt;
&lt;br /&gt;
The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study, including key terms and definitions. Section 3 discusses related work on audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents the limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.&lt;br /&gt;
&lt;br /&gt;
== 2. Background ==&lt;br /&gt;
&lt;br /&gt;
The United States Department of Justice’s Global Justice Information Sharing Initiative defines:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action&amp;lt;sup&amp;gt;[10]&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With software systems that manage protected, sensitive data (including EHR systems), a more specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;user-based non-repudiation&#039;&#039; &amp;amp;ndash; a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.&lt;br /&gt;
&lt;br /&gt;
== 3. Related Work ==&lt;br /&gt;
&lt;br /&gt;
=== 3.1. Challenges in Technology ===&lt;br /&gt;
&lt;br /&gt;
==== 3.1.1. Limited Infrastructure Resources ====&lt;br /&gt;
&lt;br /&gt;
==== 3.1.2. Log File Reliability ====&lt;br /&gt;
&lt;br /&gt;
=== 3.2. Challenges in Policy, Regulations, and Compliance ===&lt;br /&gt;
&lt;br /&gt;
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====&lt;br /&gt;
&lt;br /&gt;
==== 3.2.2. Ineffective Log Analysis ====&lt;br /&gt;
&lt;br /&gt;
== 4. Assessment Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===&lt;br /&gt;
&lt;br /&gt;
==== 4.1.1 Derivation of Non-specific Auditable Events ====&lt;br /&gt;
&lt;br /&gt;
==== 4.1.2 High-level Assessment Methodology ====&lt;br /&gt;
&lt;br /&gt;
=== 4.2. Low-level Assessment using Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
==== 4.2.1 Audit Test Case Template ====&lt;br /&gt;
&lt;br /&gt;
==== 4.2.2 Audit Test Case Example ====&lt;br /&gt;
&lt;br /&gt;
== 5. Case Studies ==&lt;br /&gt;
&lt;br /&gt;
=== 5.1. Open-source EHR Systems Studied ===&lt;br /&gt;
&lt;br /&gt;
=== 5.2. High-level User-based Non-repudiation Assessment ===&lt;br /&gt;
&lt;br /&gt;
=== 5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases ===&lt;br /&gt;
&lt;br /&gt;
== 6. Modifying without a Trace ==&lt;br /&gt;
&lt;br /&gt;
== 7. Limitations ==&lt;br /&gt;
&lt;br /&gt;
== 8. Future Work ==&lt;br /&gt;
&lt;br /&gt;
== 9. Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== 10. Acknowledgements ==&lt;br /&gt;
&lt;br /&gt;
== 11. References ==&lt;/div&gt;</summary>
		<author><name>Programsam</name></author>
	</entry>
</feed>