Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms
One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attacks. An ''insider attack'' occurs when employees of an organization with legitimate access to their organizations' information systems use these systems to sabotage their organizations' IT infrastructure or commit fraud<sup>[9]</sup>. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002<sup>[9]</sup>. According to the study:
* 90% of insider attackers were given administrative or high-level privileges to the target system.
== 3. Related Work ==
Related literature has identified several challenges and limitations with software audit mechanisms. Here, we discuss challenges in technology and challenges with policy, regulations, and compliance.
=== 3.1. Challenges in Technology ===
Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.
==== 3.1.1. Limited Infrastructure Resources ====
Behind every piece of software lies some sort of hardware configuration, and hardware itself imposes limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.
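For instance, a size-bounded, rotating log handler is one simple way to keep an audit trail within a fixed storage budget. The sketch below is illustrative only and is not a configuration drawn from the EHR systems studied.

<syntaxhighlight lang="java">
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

/** Illustrative sketch: cap audit log storage with a rotating file handler. */
public class BoundedAuditLog {
    public static Logger create() throws IOException {
        // Rotate across 5 files of at most 10 MB each (~50 MB total on disk),
        // appending to existing files across restarts.
        FileHandler handler = new FileHandler("audit-%g.log", 10_000_000, 5, true);
        handler.setFormatter(new SimpleFormatter());
        Logger audit = Logger.getLogger("audit");
        audit.addHandler(handler);
        return audit;
    }
}
</syntaxhighlight>

Note that rotation eventually overwrites the oldest entries, so a system concerned with non-repudiation would archive rotated files rather than discard them.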
Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application<sup>[3]</sup>. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are less centralized and harder to implement because of the physically distributed nature of the overall software application, and the appropriate site for audit logging functionality is not easy to define<sup>[3]</sup>. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.
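As a concrete (and purely hypothetical) sketch of logging at the web-server level, the servlet filter below records every request, including read-only views, while the authenticated user accountholder is still known; the same requests arriving at the database tier often share a single connection-pool account, which weakens user-based non-repudiation there. The sketch assumes the Servlet 4.0 API, in which Filter's init and destroy methods have default implementations.

<syntaxhighlight lang="java">
import java.io.IOException;
import java.time.Instant;
import java.util.logging.Logger;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

/**
 * Hypothetical audit filter placed at the web-server tier, where the
 * authenticated accountholder is still known to the application.
 */
public class AuditLoggingFilter implements Filter {

    private static final Logger AUDIT = Logger.getLogger("audit");

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        String user = http.getRemoteUser();      // authenticated accountholder (null if unauthenticated)
        String action = http.getMethod();        // GET (view) as well as POST/PUT/DELETE
        String resource = http.getRequestURI();  // which record or screen was touched

        // Record the event before passing the request on, so even failed
        // attempts leave a trace attributable to the user account.
        AUDIT.info(String.format("%s user=%s action=%s resource=%s",
                Instant.now(), user, action, resource));

        chain.doFilter(req, res);
    }
}
</syntaxhighlight>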
==== 3.1.2. Log File Reliability ====
Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breach of audit mechanism log data<sup>[8]</sup>. Audit mechanism log files need protection to ensure that the data contained within them is unmodified, accurate, and reliable. Engineering this protection may be challenging; it may also be overlooked by system developers who are unaware of, or indifferent to, the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.
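One well-known way to make log data tamper-evident (shown here as a minimal sketch of the general technique, not as functionality present in the systems we study) is to chain each entry to its predecessor with a cryptographic hash, so that modifying or deleting an earlier entry invalidates every later digest. The sketch assumes Java 17 for records and HexFormat.

<syntaxhighlight lang="java">
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

/** Minimal hash-chained audit log: each entry commits to the one before it. */
public class ChainedAuditLog {

    /** One log record plus the chained digest that protects it. */
    public record Entry(String message, String digest) { }

    private final List<Entry> entries = new ArrayList<>();
    private String previousDigest = "GENESIS";

    public void append(String message) throws NoSuchAlgorithmException {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update(previousDigest.getBytes(StandardCharsets.UTF_8));
        sha256.update(message.getBytes(StandardCharsets.UTF_8));
        String digest = java.util.HexFormat.of().formatHex(sha256.digest());
        entries.add(new Entry(message, digest));
        previousDigest = digest;
    }

    /** Recompute the chain; any in-place modification breaks verification. */
    public boolean verify() throws NoSuchAlgorithmException {
        String expected = "GENESIS";
        for (Entry e : entries) {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            sha256.update(expected.getBytes(StandardCharsets.UTF_8));
            sha256.update(e.message().getBytes(StandardCharsets.UTF_8));
            expected = java.util.HexFormat.of().formatHex(sha256.digest());
            if (!expected.equals(e.digest())) {
                return false;
            }
        }
        return true;
    }
}
</syntaxhighlight>

Keeping a separate, write-protected copy of the most recent digest (for example, on a different host) would additionally let an auditor detect truncation of the log.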
=== 3.2. Challenges in Policy, Regulations, and Compliance ===
As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.
==== 3.2.1. Ill-defined Standards, Policies, and Regulations ====
Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.
Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary across software companies and even across applications within the same company<sup>[8]</sup>. Distributed web services, for example, may have different policies based on the host machines<sup>[3]</sup>; the database server may have one set of auditing policies, while the web server may have a completely different set. In addition, the physical location of the distributed systems may cause concern: the organization (or country) that hosts the database server likely has different policies and regulations than the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authorities, which likely introduces yet another set of varying policies and regulations. Chuvakin and Peterson<sup>[3]</sup> state that administrators of such complicated distributed systems may not enable security features (such as software audit mechanisms) by default; instead, software organizations must actively choose to enable auditing features. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.
Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however<sup>[11]</sup>. In healthcare<sup>[5]</sup>, viewing and reading data in EHR systems is a vital concern when managing protected health information.
Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms<sup>[3]</sup>, including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.
==== 3.2.2. Ineffective Log Analysis ====
With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis generally falls into three categories: manual, automated, or a combination of both. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review<sup>[11]</sup>.
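To make the contrast with manual review concrete, the routine below is a toy example of automated analysis; the entry format and the notion of a patient's "care team" are our own illustrative assumptions. It flags view events performed by accountholders who are not authorized for a given patient.

<syntaxhighlight lang="java">
import java.util.List;
import java.util.Set;

/** Toy automated analysis pass over already-parsed audit entries. */
public class AuditAnalyzer {

    /** Hypothetical parsed audit entry: who did what to which patient record. */
    public record AuditEntry(String userId, String action, String patientId) { }

    /**
     * Returns the entries in which a user viewed a patient record without
     * being on that patient's authorized care team.
     */
    public static List<AuditEntry> suspiciousViews(List<AuditEntry> log,
                                                   String patientId,
                                                   Set<String> careTeam) {
        return log.stream()
                .filter(e -> e.patientId().equals(patientId))
                .filter(e -> e.action().equals("VIEW"))
                .filter(e -> !careTeam.contains(e.userId()))
                .toList();
    }
}
</syntaxhighlight>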
Software companies tend to inadequately prepare, support, and maintain human log file analyzers<sup>[8]</sup>. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation<sup>[8]</sup>.
Schneider<sup>[13]</sup> compares accountability to defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) might be prevented outright, but under an accountability approach they are instead identified after the fact so that the user who performed them can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon<sup>[4]</sup> also suggests this notion of computer forensics: computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.
== 4. Assessment Methodology ==
Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”). Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.
=== 4.1 High-level Assessment using Audit Guidelines and Checklists ===
Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.
==== 4.1.1 Derivation of Non-specific Auditable Events ====
Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify ''what information'' is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:
* Chuvakin and Peterson<sup>[3]</sup> provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.
* The Certification Commission for Health Information Technology (CCHIT)<sup>1</sup> specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health & Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use<sup>[2]</sup>. We collect 17 auditable events from this source.
* The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems<sup>[7]</sup>. We collect 18 auditable events from this source.
* The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events<sup>[6]</sup>. We collect 8 auditable events from this source.
Combining all four sets of data, we collect 60 total non-specific auditable events and event types. After combining duplicates, our set contains 28 unique auditable events and event types. The only item appearing in all four suggested auditable events sets is “security administration event”, suggesting all four sources are concerned about software security. Out of the 28 unique events, 18 (64.3%) are contained in at least two of the source sets. Ten events (35.7%) are only contained in one source set. The overlap among the four sources suggests some common understanding and agreement of general events that should be logged, yet the disparity seems to indicate disagreement about the scope and breadth of auditable events. Table 1 provides a comparison of the four source sets of non-specific auditable events and event types.
{| class="wikitable" style="text-align: left; width: 100%;" | |||
|+ Table 1. A comparison of auditable events by source, with a categorization of events affecting user-based non-repudiation | |||
! Auditable Events | |||
! colspan=4 | Source of Software Audit mechanism Checklist | |||
! Affects User-based Non-repudiation | |||
|- | |||
| ''Log Entry Item'' | |||
| ''Chuvakin and Peterson<sup>[3]</sup>'' | |||
| ''CCHIT<sup>[2]</sup>'' | |||
| ''SANS<sup>[7]</sup>'' | |||
| ''IEEE<sup>[6]</sup>'' | |||
| ''(Yes or No)'' | |||
|- | |||
| System startup | |||
| X | |||
| X | |||
| X | |||
| | |||
| N | |||
|- | |||
| System shutdown | |||
| X | |||
| X | |||
| X | |||
| | |||
| N | |||
|- | |||
| System restart | |||
| | |||
| | |||
| X | |||
| | |||
| N | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| User login/logout | |||
| X | |||
| X | |||
| X | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Session timeout | |||
| | |||
| X | |||
| | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Account lockout | |||
| | |||
| X | |||
| | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Create data | |||
| X | |||
| X | |||
| X | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Update data | |||
| X | |||
| X | |||
| X | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Delete data | |||
| X | |||
| X | |||
| X | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| View data | |||
| X | |||
| X | |||
| X | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Query data | |||
| | |||
| X | |||
| | |||
| | |||
| Y | |||
|- | |||
| Node-authentication failure | |||
| X | |||
| X | |||
| X | |||
| | |||
| N | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Signature created/validated | |||
| | |||
| X | |||
| | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Export data | |||
| | |||
| X | |||
| | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Import data | |||
| | |||
| X | |||
| | |||
| | |||
| Y | |||
|- | |||
| Security administration event | |||
| X | |||
| X | |||
| X | |||
| X | |||
| N | |||
|- | |||
| Scheduling | |||
| | |||
| X | |||
| | |||
| | |||
| N | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| System backup | |||
| X | |||
| X | |||
| | |||
| | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| System restore | |||
| | |||
| X | |||
| | |||
| | |||
| Y | |||
|- | |||
| Initiate a network connection | |||
| X | |||
| | |||
| X | |||
| X | |||
| N | |||
|- | |||
| Accept a network connection | |||
| | |||
| | |||
| X | |||
| X | |||
| N | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Grant access rights | |||
| X | |||
| | |||
| X | |||
| X | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Modify access rights | |||
| X | |||
| | |||
| X | |||
| X | |||
| Y | |||
|- style="font-weight: bold; background-color: #EEEEEE" | |||
| Revoke access rights | |||
| X | |||
| | |||
| X | |||
| X | |||
| Y | |||
|- | |||
| System, network, or services changes | |||
| X | |||
| | |||
| X | |||
| X | |||
| N | |||
|- | |||
| Application process abort/failure/abnormal end | |||
| X | |||
| | |||
| X | |||
| | |||
| N | |||
|- | |||
| Detection of malicious activity | |||
| X | |||
| | |||
| X | |||
| | |||
| N | |||
|- | |||
| Changes to audit log configuration | |||
| | |||
| | |||
| | |||
| X | |||
| N | |||
|} | |||
Next, we categorize each individual auditable event or event type from Table 1 into one of two categories: events that ''affect'' user-based non-repudiation, and events that ''do not affect'' user-based non-repudiation. Our categorization is denoted in Table 1 under the “Affects User-based Non-repudiation” column. When categorizing these events, we determine if the given event can be traced to a specific user accountholder in an EHR system. If so, we categorize this event as one that affects user-based non-repudiation. If the event cannot be traced to a specific user accountholder, we categorize the event as one that does not affect user-based non-repudiation. For example, the “view data” event suggests a user accountholder (such as a physician) has authenticated into an EHR system and is viewing protected patient health information. The action of viewing this protected data can be traced to the physician’s user account. Therefore, this event is categorized as one that does affect user-based non-repudiation. On the other hand, an “application process failure” does not suggest any intervention by a user accountholder. Instead, this event suggests an internal EHR system state change. Therefore, we categorize this event as not affecting user-based non-repudiation.
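The categorization rule can be stated as a simple predicate, sketched below with a hypothetical event type of our own: an auditable event affects user-based non-repudiation exactly when it carries an identifiable user accountholder.

<syntaxhighlight lang="java">
import java.util.Optional;

/** Hypothetical auditable event that may or may not be attributable to a user account. */
public record AuditableEvent(String name, Optional<String> userAccountId) {

    /**
     * An event affects user-based non-repudiation only if it can be traced to a
     * specific accountholder; purely internal events (e.g. an application
     * process failure) cannot be, and are categorized "N" in Table 1.
     */
    public boolean affectsUserBasedNonRepudiation() {
        return userAccountId().isPresent();
    }
}
</syntaxhighlight>

Under this rule, a “view data” event constructed with a present user account is categorized Y, while an “application process failure” event constructed with an empty account is categorized N.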
Of the 28 total auditable events and event types, we identify 16 events that affect user-based non-repudiation. Of these 16 actions, only 9 events (56.25%) are suggested by two or more of the sources. The remaining 7 events (43.75%) are contained in only one source set.
==== 4.1.2 High-level Assessment Methodology ====
For each EHR system, we deploy the software on a local web server following the deployment instructions provided by each EHR’s community website. Next, we consult official documentation typically provided on the website for each of the EHR systems. In the documentation (typically user guides, development guides, or community wiki pages) we search for sections on auditing and logging to understand how to access these mechanisms in the actual application. Once we understand how to access the auditing mechanism, we open our locally-deployed EHR system and attempt to access these features to continue our analysis. We document all of our observations or difficulties during this analysis process for reflection after the analysis is complete.
Once we have either physical access to or a general understanding of the given application’s auditing mechanism, we record the following information:
# A flag (satisfied or unsatisfied) for each of the assessment criteria listed in the “Logging Actions” column of Table 2.
# Any observations or important findings that may influence the results or provide justifications for the results.
We repeat this process for each of the three EHR systems in the study.
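For bookkeeping, this amounts to recording, per EHR system, a satisfied/unsatisfied flag for each criterion together with free-form notes. A minimal sketch of such a record (the class and field names are our own, not part of any tool we distribute) follows.

<syntaxhighlight lang="java">
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative per-system record of the high-level assessment results. */
public class AssessmentRecord {
    private final String ehrSystem;
    private final Map<String, Boolean> criterionSatisfied = new LinkedHashMap<>();
    private final StringBuilder observations = new StringBuilder();

    public AssessmentRecord(String ehrSystem) {
        this.ehrSystem = ehrSystem;
    }

    /** Flag one assessment criterion as satisfied or unsatisfied. */
    public void flag(String criterion, boolean satisfied) {
        criterionSatisfied.put(criterion, satisfied);
    }

    /** Keep free-form observations that may justify or qualify the results. */
    public void note(String observation) {
        observations.append(observation).append('\n');
    }
}
</syntaxhighlight>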
=== 4.2. Low-level Assessment using Black-box Test Cases ===
Our low-level assessment of user-based non-repudiation involves constructing a black-box test plan for testing an EHR system’s recording of ''specific'' auditable events (such as “view diagnosis data”). In this paper, we briefly describe the process for the audit test cases used to evaluate user-based non-repudiation audit functionality. We developed this methodology in earlier work<sup>[14]</sup>.
In 2006, through a consensus-based process that engaged stakeholders, CCHIT defined certification criteria focused on the functional capabilities that should be included in ambulatory (outpatient) and inpatient EHR systems. The requirements specifications contain 284 different functional descriptions of EHR behavior.
The CCHIT ambulatory certification criteria contain eight requirements related to audit. The audit requirements contain functionality such as “The system shall allow an authorized administrator to set the inclusion or exclusion of auditable events based on organizational policy & operating requirements/limits.” One CCHIT audit criterion states that the set of auditable events in an EHR system should include the following fourteen items:
# Application start/stop
# User login/logout
# Session timeout
# Account lockout
# Patient Record created/viewed/updated/deleted
# Scheduling
# Query
# Order
# Node-authentication failure
# Signature created/validated
# PHI Export (e.g. print)
# PHI import
# Security administration events
# Backup and restore
The list is provided here verbatim from the CCHIT ambulatory criteria. The criteria are vague. For example, the phrase “security administration events” is undefined and could relate to authentication attempts, deletion of log files, or assigning user privileges. Likewise, the term “scheduling” could relate to scheduling patient appointments, scheduling system backups, or scheduling system down-time for maintenance. The interpretation of these phrases varies, and the intended meanings are ambiguous.
Due to the vagueness in these auditable events, we elected to approach the CCHIT certification criteria as a general functional requirements specification. The criteria describe functionality for EHR systems, such as editing a patient’s health record, signing a note about a patient, and indicating advance directives (e.g. a do-not-resuscitate order). Using these functional CCHIT requirements<sup>[2]</sup>, we develop a set of 58 black-box test cases that assess the ability of an EHR system to audit the user actions specified by these CCHIT requirements. These test cases all involve a registered user performing a given action within the EHR system, therefore representing an assessment of user-based non-repudiation within each EHR system. The 58 test cases correspond to 58 individual CCHIT requirements statements. Our test plan covers the 20.4% of the CCHIT requirements that are relevant to personal or protected health information. The remaining 79.6% of the CCHIT requirements do not pertain to personal health information, and therefore do not necessitate an audit record for user-based non-repudiation.
We iterated through each of the 284 ambulatory CCHIT requirements, extracting keywords and applying the test case template to produce a test case where necessary. We determine whether a CCHIT requirements statement should result in a test case based on certain keywords within the statement. For example, requirements that include phrases like “problem list,” “clinical documents,” and “diagnostic test” all indicate the user’s interaction with a piece of a patient’s protected health information.
Additionally, we extract an action phrase (e.g. “edit”) and an object phrase (e.g. “demographics”) from each relevant requirement to construct the black-box test case. We present the template used for these black-box tests in Section 4.2.1, and present an example of a test case and its corresponding requirement in Section 4.2.2.
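A simplified sketch of the keyword screen appears below; the keyword list is only a sample of the phrases we looked for, and the class itself is our own illustration rather than the tooling we used.

<syntaxhighlight lang="java">
import java.util.List;
import java.util.Locale;

/** Simplified screen for deciding whether a CCHIT requirement needs an audit test case. */
public class RequirementScreen {

    // Sample keywords indicating that a requirement touches protected health information.
    private static final List<String> PHI_KEYWORDS = List.of(
            "problem list", "clinical documents", "diagnostic test",
            "medication", "demographics", "diagnoses");

    /** Returns true if the requirement statement involves PHI and so warrants a test case. */
    public static boolean needsAuditTestCase(String requirement) {
        String text = requirement.toLowerCase(Locale.ROOT);
        return PHI_KEYWORDS.stream().anyMatch(text::contains);
    }
}
</syntaxhighlight>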
==== 4.2.1 Audit Test Case Template ====
Test Procedure Template:
# Authenticate as <''insert a registered user name''>.
# Open the user interface for <''insert action phrase''>ing an <''insert object phrase''>.
# <''Insert action phrase''> an <''insert object phrase''> with details.
# Logout as <''insert a registered user name''>.
# Authenticate as <''insert an administrator’s user name''>.
# Open the audit records for today’s date.
Expected Results Template:
* The audit records should show that the registered user <''insert action phrase''>ed an <''insert object phrase''>.
* The audit records should be clearly readable and easily accessible.
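Filling the template is mechanical once an action phrase and an object phrase have been extracted from a requirement. The generator below is an illustrative sketch of that substitution (not the tool we used), following the same step ordering as the template above.

<syntaxhighlight lang="java">
import java.util.List;

/** Illustrative generator that instantiates the audit test case template of Section 4.2.1. */
public class AuditTestCaseGenerator {

    public static List<String> testProcedure(String user, String admin,
                                             String actionPhrase, String objectPhrase) {
        return List.of(
                "Authenticate as " + user + ".",
                "Open the user interface for " + actionPhrase + "ing an " + objectPhrase + ".",
                capitalize(actionPhrase) + " an " + objectPhrase + " with details.",
                "Logout as " + user + ".",
                "Authenticate as " + admin + ".",
                "Open the audit records for today's date.");
    }

    public static String expectedResult(String user, String actionPhrase, String objectPhrase) {
        return "The audit records should show that " + user + " " + actionPhrase
                + "ed an " + objectPhrase + ".";
    }

    private static String capitalize(String s) {
        return s.isEmpty() ? s : Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }
}
</syntaxhighlight>

For example, calling testProcedure with the hypothetical arguments "Dr. Robert Alexander", "Denny Hudzinger", "edit", and "demographics record" yields a procedure analogous to the example in Section 4.2.2.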
==== 4.2.2 Audit Test Case Example ====
Example Natural Language Artifact:
* CCHIT Criteria: AM 03.08.01 – The system shall provide the ability to associate orders and medications with one or more codified problems/diagnoses.
Example Test Procedure:
# Authenticate as Dr. Robert Alexander.
# Remove the association between Theodore S. Smith’s Hypertension diagnosis and Zantac.
# Add the association back between Theodore S. Smith’s Hypertension diagnosis and Zantac.
# Logout as Dr. Robert Alexander.
# Authenticate as Denny Hudzinger.
# Open the audit records for today’s date. If necessary, focus on patient Theodore S. Smith.
Example Expected Results:
* The audit records should show adding and removing the association of Theodore S. Smith’s Hypertension diagnosis and Zantac, both linked to Dr. Robert Alexander, and with today’s date.
* The audit records should be clearly readable and easily accessible.
== 5. Case Studies ==
Section 5.1 describes the EHR systems we used in this case study. Section 5.2 describes our EHR audit mechanism assessment based on the high-level assessment criteria from Section 4.1. Then, Section 5.3 describes our low-level black-box test case evaluation of three open-source EHR systems.
=== 5.1. Open-source EHR Systems Studied ===
In this study, we compare and contrast audit mechanisms from three open-source EHR systems. The criteria for inclusion in this study involved (1) being open-source for ease-of-access, and (2) having a fully-functional default demo deployment available online. For this study, we assess the following EHR systems:
* Open Electronic Medical Records (OpenEMR)<sup>2</sup> system,
* Open Medical Record System (OpenMRS)<sup>3</sup> system, with added Access Logging Module<sup>4</sup>, and
* Tolven Healthcare Innovations’ Electronic Clinician Health Record (eCHR)<sup>5</sup> system, with added Performance Plugin<sup>6</sup> module.
A summary of these software applications appears in Table 2.
=== 5.2. High-level User-based Non-repudiation Assessment ===