Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms

 


* Chuvakin and Peterson<sup>[3]</sup> provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.
* The Certification Commission for Health Information Technology (CCHIT)<sup>1</sup> specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health & Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use<sup>[2]</sup>. We collect 17 auditable events from this source.
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems<sup>[7]</sup>. We collect 18 auditable events from this source.
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events<sup>[6]</sup>. We collect 8 auditable events from this source.
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After combining duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four source sets is “security administration event”, suggesting that all four sources are concerned with software security. Of the 28 unique events, 18 (64.3%) appear in at least two of the source sets; the remaining 10 (35.7%) appear in only one. The overlap among the four sources suggests some common understanding of the general events that should be logged, yet the disparity indicates disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.
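As an illustration of this tallying, the deduplication and overlap counts can be reproduced with basic set operations. The following Python sketch uses a small illustrative subset of event names rather than the full source lists:

<syntaxhighlight lang="python">
# A minimal sketch of the deduplication and overlap counting described
# above. The event names here are an illustrative subset, not the full
# lists collected from the four sources.
chuvakin = {"system startup", "user login/logout", "create data",
            "security administration event"}
cchit    = {"system startup", "user login/logout", "session timeout",
            "security administration event"}
sans     = {"system startup", "user login/logout",
            "security administration event"}
ieee     = {"security administration event",
            "changes to audit log configuration"}

sources = [chuvakin, cchit, sans, ieee]

# Unique auditable events across all four sources.
unique_events = set().union(*sources)

# Count how many sources suggest each event.
support = {e: sum(e in s for s in sources) for e in unique_events}

in_all_four    = [e for e, n in support.items() if n == 4]
in_two_or_more = [e for e, n in support.items() if n >= 2]
only_one       = [e for e, n in support.items() if n == 1]

print(f"{len(unique_events)} unique events; "
      f"{len(in_two_or_more)} in two or more sources; "
      f"{len(only_one)} in exactly one source; "
      f"in all four: {in_all_four}")
</syntaxhighlight>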
{| class="wikitable" style="text-align: left; width: 100%;"
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation
! Auditable Events
! colspan=4 | Source of Software Audit Mechanism Checklist
! Affects User-based Non-repudiation
|-
| ''Log Entry Item''
| ''Chuvakin and Peterson<sup>[3]</sup>''
| ''CCHIT<sup>[2]</sup>''
| ''SANS<sup>[7]</sup>''
| ''IEEE<sup>[6]</sup>''
| ''(Yes or No)''
|-
| System startup
| X
| X
| X
|
| N
|-
| System shutdown
| X
| X
| X
|
| N
|-
| System restart
|
|
| X
|
| N
|- style="font-weight: bold; background-color: #EEEEEE"
| User login/logout
| X
| X
| X
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Session timeout
|
| X
|
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Account lockout
|
| X
|
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Create data
| X
| X
| X
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Update data
| X
| X
| X
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Delete data
| X
| X
| X
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| View data
| X
| X
| X
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Query data
|
| X
|
|
| Y
|-
| Node-authentication failure
| X
| X
| X
|
| N
|-  style="font-weight: bold; background-color: #EEEEEE"
| Signature created/validated
|
| X
|
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Export data
|
| X
|
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Import data
|
| X
|
|
| Y
|-
| Security administration event
| X
| X
| X
| X
| N
|-
| Scheduling
|
| X
|
|
| N
|-  style="font-weight: bold; background-color: #EEEEEE"
| System backup
| X
| X
|
|
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| System restore
|
| X
|
|
| Y
|-
| Initiate a network connection
| X
|
| X
| X
| N
|-
| Accept a network connection
|
|
| X
| X
| N
|-  style="font-weight: bold; background-color: #EEEEEE"
| Grant access rights
| X
|
| X
| X
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Modify access rights
| X
|
| X
| X
| Y
|-  style="font-weight: bold; background-color: #EEEEEE"
| Revoke access rights
| X
|
| X
| X
| Y
|-
| System, network, or services changes
| X
|
| X
| X
| N
|-
| Application process abort/failure/abnormal end
| X
|
| X
|
| N
|-
| Detection of malicious activity
| X
|
| X
|
| N
|-
| Changes to audit log configuration
|
|
|
| X
| N
|}
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that ''affect'' user-based non-repudiation, and events that ''do not affect'' user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 actions, only 9 events (56.25%) are suggested by two or more of the sources. The remaining 7 events (43.75%) are contained in only one source set.
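This categorization amounts to a membership test against the “Y” rows of Table 1. The following Python sketch encodes that rule; the event set is transcribed from Table 1, and the predicate itself is only an illustration of our manual categorization:

<syntaxhighlight lang="python">
# Events from Table 1 marked "Y": actions traceable to a specific user
# accountholder, and therefore affecting user-based non-repudiation.
AFFECTS_NON_REPUDIATION = {
    "user login/logout", "session timeout", "account lockout",
    "create data", "update data", "delete data", "view data", "query data",
    "signature created/validated", "export data", "import data",
    "system backup", "system restore",
    "grant access rights", "modify access rights", "revoke access rights",
}

def affects_user_based_non_repudiation(event: str) -> bool:
    """True if the event can be traced to a specific user accountholder."""
    return event.lower() in AFFECTS_NON_REPUDIATION

# A physician viewing protected data is traceable to a user account;
# an internal process failure is not.
assert affects_user_based_non_repudiation("View data")
assert not affects_user_based_non_repudiation(
    "Application process abort/failure/abnormal end")
</syntaxhighlight>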


==== 4.1.2 High-level Assessment Methodology ====
For each EHR system, we deploy the software on a local web server following the deployment instructions provided by each EHR’s community website. Next, we consult the official documentation provided on the website for each EHR system. In the documentation (typically user guides, development guides, or community wiki pages), we search for sections on auditing and logging to understand how to access these mechanisms in the application itself. Once we understand how to access the auditing mechanism, we open our locally deployed EHR system and attempt to access these features to continue our analysis. We document all observations and difficulties during this analysis process for reflection after the analysis is complete.
Once we have either physical access to or a general understanding of the given application’s auditing mechanism, we record the following information:
# A flag (satisfied or unsatisfied) for each of the assessment criteria listed in the “Logging Actions” column of Table 2.
# Any observations or important findings that may influence the results or provide justifications for them.
We repeat this process for each of the three EHR systems in the study.
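The per-system record we keep during this process can be sketched as a simple data structure. In the following Python sketch, the criterion name and observation are hypothetical placeholders for entries from the “Logging Actions” column of Table 2:

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class HighLevelAssessment:
    """One record per EHR system assessed against the Table 2 criteria."""
    ehr_system: str
    # criterion name -> satisfied (True) or unsatisfied (False)
    criteria: dict[str, bool] = field(default_factory=dict)
    # free-form observations that may influence or justify the results
    notes: list[str] = field(default_factory=list)

# Hypothetical usage; the criterion name stands in for a Table 2 entry.
record = HighLevelAssessment("OpenEMR")
record.criteria["Logs user login/logout"] = True
record.notes.append("Audit view reachable only from the administration menu.")
</syntaxhighlight>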


=== 4.2. Low-level Assessment using Black-box Test Cases ===
Our low-level assessment of user-based non-repudiation involves constructing a black-box test plan for testing an EHR system’s recording of ''specific'' auditable events (such as “view diagnosis data”). In this paper, we briefly describe the process for the audit test cases used to evaluate user-based non-repudiation audit functionality.  We developed this methodology in earlier work<sup>[14]</sup>.
In 2006, through a consensus-based process that engaged stakeholders, CCHIT defined certification criteria focused on the functional capabilities that should be included in ambulatory (outpatient) and inpatient EHR systems.  The requirements specifications contain 284 different functional descriptions of EHR behavior.
The CCHIT ambulatory certification criteria contain eight requirements related to audit.  The audit requirements contain functionality such as “The system shall allow an authorized administrator to set the inclusion or exclusion of auditable events based on organizational policy & operating requirements/limits.”  One CCHIT audit criterion states that the set of auditable events in an EHR system should include the following fourteen items:
# Application start/stop
# User login/logout
# Session timeout
# Account lockout
# Patient Record created/viewed/updated/deleted
# Scheduling
# Query
# Order
# Node-authentication failure
# Signature created/validated
# PHI Export (e.g. print)
# PHI import
# Security administration events
# Backup and restore
The list is provided here verbatim from the CCHIT ambulatory criteria. The criteria are vague. For example, the phrase “security administration events” is undefined and could relate to authentication attempts, deletion of log files, or assigning user privileges. Likewise, the term “scheduling” could relate to scheduling patient appointments, scheduling system backups, or scheduling system down-time for maintenance. The interpretation of these phrases varies, and the intended meanings are ambiguous.
Due to the vagueness of these auditable events, we elected to approach the CCHIT certification criteria as a general functional requirements specification. The criteria describe functionality for EHR systems, such as editing a patient’s health record, signing a note about a patient, and indicating advance directives (e.g. a do-not-resuscitate order). Using these functional CCHIT requirements<sup>[2]</sup>, we develop a set of 58 black-box test cases that assess the ability of an EHR system to audit the user actions specified by these CCHIT requirements. These test cases all involve a registered user performing a given action within the EHR system, and therefore represent an assessment of user-based non-repudiation within each EHR system. The 58 test cases correspond to 58 individual CCHIT requirements statements. Our test plan covers the 20.4% (58 of 284) of the CCHIT requirements that are relevant to personal or protected health information. The remaining 79.6% of the CCHIT requirements do not pertain to personal health information, and therefore do not necessitate an audit record for user-based non-repudiation.
We iterated through each of the 284 ambulatory CCHIT requirements, extracting keywords and applying our test case template to produce a test case when warranted. Whether a requirements statement should result in a test case is determined by keywords within the statement. For example, requirements that include phrases like “problem list,” “clinical documents,” and “diagnostic test” all indicate the user’s interaction with a piece of a patient’s protected health information.
Additionally, we extract an action phrase (e.g. “edit”) and an object phrase (e.g. “demographics”) from each relevant requirement to construct the black-box test case.  We present the template used for these black-box tests in Section 4.2.1, and present an example of a test case and its corresponding requirement in Section 4.2.2.
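The following Python sketch illustrates this keyword-driven generation step. The keyword list is an illustrative subset, and the phrase extraction is a rough stand-in for what was, in our study, a manual step:

<syntaxhighlight lang="python">
import re

# Illustrative subset of keywords indicating interaction with protected
# health information; the full set is derived from the CCHIT criteria.
PHI_KEYWORDS = ("problem list", "clinical documents", "diagnostic test",
                "medication", "diagnoses", "demographics")

def requires_audit_test(requirement: str) -> bool:
    """A requirement yields a test case if it mentions a PHI keyword."""
    text = requirement.lower()
    return any(kw in text for kw in PHI_KEYWORDS)

def extract_phrases(requirement: str):
    """Very rough extraction of an action phrase and an object phrase.

    The actual extraction was manual; this regex only illustrates the
    idea of pulling "edit"/"demographics"-style pairs from a requirement.
    """
    m = re.search(r"ability to (\w+)[^.]*?(problem list|demographics|orders?)",
                  requirement.lower())
    return (m.group(1), m.group(2)) if m else (None, None)

req = ("AM 03.08.01 - The system shall provide the ability to associate "
       "orders and medications with one or more codified problems/diagnoses.")
if requires_audit_test(req):
    print(extract_phrases(req))  # ('associate', 'orders')
</syntaxhighlight>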


==== 4.2.1 Audit Test Case Template ====
Test Procedure Template:
# Authenticate as <''insert a registered user name''>.
# Open the user interface for <''insert action phrase''>ing an <''insert object phrase''>.
# <''insert action phrase''> an <''insert object phrase''> with details.
# Logout as <''insert a registered user name''>.
# Authenticate as <''insert an administrator’s user name''>.
# Open the audit records for today’s date.
Expected Results Template:
* The audit records should show that registered user <''insert action phrase''>ed an <''insert object phrase''>.
* The audit records should be clearly readable and easily accessible.
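This template lends itself to programmatic instantiation. The following Python sketch fills the placeholders for a given user, action phrase, and object phrase; the English suffix and article handling is deliberately naive and only illustrative. The example in Section 4.2.2 corresponds to one such instantiation:

<syntaxhighlight lang="python">
def audit_test_case(user: str, admin: str, action: str, obj: str) -> str:
    """Instantiate the audit test procedure and expected-results templates.

    `action` is the action phrase (e.g. "edit") and `obj` the object
    phrase (e.g. "demographics"); the "ing"/"ed" suffixing is naive.
    """
    steps = [
        f"Authenticate as {user}.",
        f"Open the user interface for {action}ing a {obj}.",
        f"{action.capitalize()} a {obj} with details.",
        f"Logout as {user}.",
        f"Authenticate as {admin}.",
        "Open the audit records for today's date.",
    ]
    expected = [
        f"The audit records should show that {user} {action}ed a {obj}.",
        "The audit records should be clearly readable and easily accessible.",
    ]
    procedure = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return procedure + "\nExpected:\n- " + "\n- ".join(expected)

# Hypothetical users; the phrases mirror the template in Section 4.2.1.
print(audit_test_case("Dr. Robert Alexander", "Denny Hudzinger",
                      "edit", "demographics record"))
</syntaxhighlight>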


==== 4.2.2 Audit Test Case Example ====
Example Natural Language Artifact:
* CCHIT Criteria: AM 03.08.01 – The system shall provide the ability to associate orders and medications with one or more codified problems/diagnoses.
Example Test Procedure:
# Authenticate as Dr. Robert Alexander.
# Remove the association between Theodore S. Smith’s Hypertension diagnosis and Zantac.
# Add the association back between Theodore S. Smith’s Hypertension diagnosis and Zantac.
# Logout as Dr. Robert Alexander.
# Authenticate as Denny Hudzinger.
# Open the audit records for today’s date. If necessary, focus on patient Theodore S. Smith.
Example Expected Results:
* The audit records should show adding and removing the association of Theodore S. Smith’s Hypertension diagnosis and Zantac, both linked to Dr. Robert Alexander, and with today’s date.
* The audit records should be clearly readable and easily accessible.


== 5. Case Studies ==
Section 5.1 describes the EHR systems we used in this case study. Section 5.2 describes our EHR audit mechanism assessment based on the high-level assessment criteria from Section 4.1.  Then, Section 5.3 describes our low-level black-box test case evaluation of three open-source EHR systems.


=== 5.1. Open-source EHR Systems Studied ===
In this study, we compare and contrast audit mechanisms from three open-source EHR systems. The criteria for inclusion in this study involved (1) being open-source for ease-of-access, and (2) having a fully-functional default demo deployment available online. For this study, we assess the following EHR systems:
* the Open Electronic Medical Records (OpenEMR)<sup>2</sup> system,
* the Open Medical Record System (OpenMRS)<sup>3</sup> system, with the added Access Logging Module<sup>4</sup>, and
* Tolven Healthcare Innovations’ Electronic Clinician Health Record (eCHR)<sup>5</sup> system, with the added Performance Plugin<sup>6</sup> module.
A summary of these software applications appears in Table 2.


=== 5.2. High-level User-based Non-repudiation Assessment ===