Modifying Without a Trace: High-level Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms

J. King, B. Smith, L. Williams, "Modifying Without a Trace: General Audit Guidelines are Inadequate for Electronic Health Record Audit Mechanisms", Proceedings of the International Health Informatics Symposium (IHI 2012), pp. 305-314, 2012.

Abstract

Without adequate audit mechanisms, electronic health record (EHR) systems remain vulnerable to undetected misuse. Users could modify or delete protected health information without these actions being traceable. The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. We qualitatively assess three open-source EHR systems. In our high-level analysis, we derive a set of 16 non-specific auditable event types that affect non-repudiation. We find that the EHR systems audit an average of 12.5% of non-specific event types. In our lower-level analysis, we generate 58 black-box test cases based on specific auditable events derived from the Certification Commission for Health Information Technology certification criteria. We find that only 4.02% of these test executions pass. Additionally, 20% of tests fail in all three EHR systems on actions including the modification of patient demographics, assignment of user privileges, and change of user passwords. The ambiguous nature of non-specific auditable event types may explain the overall inadequacy of auditing for non-repudiation. EHR system developers should focus on specific auditable events for managing protected health information instead of non-specific auditable event types derived from generalized guidelines.

1. Introduction

Without adequate audit systems to ensure accountability, electronic health record (EHR) systems remain vulnerable to undetected misuse, both malicious and accidental. Users could modify or delete protected health information without these actions being traceable to the modifier. According to Chuvakin and Peterson[3], “If [an organization’s information technology] isn’t accountable, the organization probably isn’t either.” Patients need to trust the privacy practices and accountability of healthcare organizations. Administering software audit mechanisms forms a basis for privacy-driven and accountability-driven policy and regulations, including government regulations[8]. The United States Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule states that one must implement “mechanisms that record and examine activity in information systems that contain or use electronic protected health information”[5].

Storing an accurate history of user interaction with a software application and its underlying data helps build a sense of accountability, since a user cannot expressly deny performing certain actions that were recorded by the audit mechanism. In the case of a medical mistake, audit mechanisms can provide a record by which healthcare practitioners can exonerate themselves from legal action by demonstrating that they prescribed the correct drug at a certain time, or that a certain test result was, in fact, what they claim it was. The health informatics field needs standards that address the implementation of software audit mechanisms to monitor access and information disclosure, including details of what should be logged, how it should be logged, and when logged information should be monitored.
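
For example, a minimal user-activity audit record for user-based non-repudiation might capture who acted, what they did, which record was affected, and when. The sketch below (in Python, with hypothetical field names not drawn from any particular standard or EHR system) illustrates one possible shape of such a record.

  # Hypothetical sketch of a user-activity audit record; the field names are
  # illustrative and not drawn from any particular standard or EHR system.
  from dataclasses import dataclass, asdict
  from datetime import datetime, timezone
  import json

  @dataclass
  class AuditRecord:
      user_id: str      # authenticated accountholder who performed the action
      action: str       # e.g., "view", "create", "update", "delete"
      object_type: str  # e.g., "patient_demographics", "diagnosis"
      object_id: str    # identifier of the affected record
      timestamp: str    # UTC time at which the action occurred
      outcome: str      # "success" or "failure"

  def record(user_id, action, object_type, object_id, outcome="success"):
      """Build an audit record stamped with the current UTC time."""
      return AuditRecord(user_id, action, object_type, object_id,
                         datetime.now(timezone.utc).isoformat(), outcome)

  # Example: record that a user viewed a patient's demographics.
  entry = record("jdoe", "view", "patient_demographics", "patient-1042")
  print(json.dumps(asdict(entry)))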

The objective of this paper is to assess electronic health record audit mechanisms to determine the current degree of auditing for non-repudiation and determine if high-level audit guidelines adequately address non-repudiation. In performing this study, we investigate the following questions:

  • R1: What events should be included in an EHR log file for non-repudiation?
  • R2: What are the strengths and weaknesses of software auditing mechanisms in EHR systems?

Software audit log files may include system logs and server logs that assist with debugging and troubleshooting. For this paper, we focus on user activity logs that contain data related to user actions within an EHR system for the purpose of audit and user accountability. In this study, we first perform a high-level analysis of EHR audit mechanisms using a set of 16 general assessment criteria derived from four academic and professional sources of non-specific auditable events (such as “view data” and “create data”). Next, we perform a lower-level analysis by deriving 58 audit-related black-box test cases to assess specific user actions (such as “view diagnosis data” and “view patient demographics”) in an EHR system. By assessing each EHR’s audit mechanism at both the high and low levels, our goal is to compare and contrast the results and suggest techniques for healthcare software developers to strengthen EHR audit mechanisms.
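
As an illustration of the lower-level analysis, an audit-related black-box test generally performs a specific user action and then checks whether a matching entry appears in the audit log. The following sketch assumes hypothetical perform_action and read_audit_log hooks; it shows the general shape of such a test, not the actual test harness used in this study.

  # Hypothetical shape of an audit black-box test case; perform_action and
  # read_audit_log stand in for driving a real EHR system and reading its log.
  def audit_test(perform_action, read_audit_log,
                 expected_user, expected_action, expected_object):
      """Return True if the performed action leaves a matching log entry."""
      before = len(read_audit_log())
      perform_action()                      # e.g., view a patient's diagnosis
      new_entries = read_audit_log()[before:]
      return any(e.get("user_id") == expected_user and
                 e.get("action") == expected_action and
                 e.get("object_type") == expected_object
                 for e in new_entries)

  # Tiny demonstration with an in-memory list standing in for a real audit log.
  fake_log = []
  passed = audit_test(
      perform_action=lambda: fake_log.append(
          {"user_id": "jdoe", "action": "view", "object_type": "diagnosis"}),
      read_audit_log=lambda: fake_log,
      expected_user="jdoe", expected_action="view", expected_object="diagnosis")
  assert passed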

The remainder of this paper is organized as follows. Section 2 briefly discusses background information related to this study, along with key terms and definitions. Section 3 discusses related work on audit mechanisms. Section 4 describes the formulation of our high-level and low-level assessment criteria for analyzing non-repudiation in EHR systems. Section 5 presents the open-source EHR systems studied and our case studies evaluating their audit mechanisms. Section 6 discusses the implications and significance of our evaluations. Section 7 presents limitations of our work. Section 8 presents future work in the field of EHR audit mechanisms. Finally, Section 9 summarizes our findings and concludes the paper.

2. Background

The United States Department of Justice’s Global Justice Information Sharing Initiative defines:

  • non-repudiation – a technique used to ensure that someone performing an action on a computer cannot falsely deny that they performed that action. Non-repudiation provides undeniable proof that a user took a specific action[10].

With software systems that manage protected, sensitive data (including EHR systems), a more-specific definition of non-repudiation is needed. We further define the following term based on the definition of non-repudiation above:

  • user-based non-repudiation – a technique used to ensure that an authenticated user accountholder performing an action within a software system cannot falsely deny that they performed that action.

Böck, et al., identify four primary concerns regarding software audit mechanism reliability[1]:

  • storage confidentiality – malicious users should not be able to access log entries
  • machine-based non-repudiation – log files can be traced to a specific machine to identify the source of the audit entries
  • application-based non-repudiation – log entries can be traced to trusted software applications such that malicious users cannot manually create fake log entries
  • transmission confidentiality – accuracy and integrity of log file data is preserved during transmission

Satisfying these concerns is not a simple task, especially for software developers who may implement software audit mechanisms without proactively considering the protection and reliability of the data contained within the log files. Böck, et al., suggest that these four concerns should be considered as a core set of requirements for any software audit mechanism[1]. Yet actually implementing the software and hardware infrastructure to fulfill these requirements may prove challenging. Combined with limited resources and a concern for user-based non-repudiation, the difficult task of satisfying these requirements may lead some system architects and software developers to abandon the idea of a reliable software audit mechanism in favor of a simplified, more vulnerable one based upon limited storage, unprotected log files, and weak non-repudiation.

One motivation for implementing EHR audit mechanisms for user-based non-repudiation involves the mitigation of insider attack. An insider attack occurs when employees of an organization with legitimate access to their organizations' information systems use these systems to sabotage their organizations' IT infrastructure or commit fraud[9]. Researchers at the Software Engineering Institute at Carnegie Mellon University released a comprehensive study on insider threats that reviewed 49 cases of Insider IT Sabotage between 1996 and 2002[9]. According to the study:

  • 90% of insider attackers were given administrative or high-level privileges to the target system.
  • 81% of the incidents involved losses to the organization, with dollar amounts estimated between "five hundred dollars" and "tens of millions of dollars."
  • The majority of attacks occurred after the employees were terminated from the organization.
  • Lack of access controls facilitated IT sabotage.

Although federal laws, such as HIPAA, provide legal sanction against tampering with or stealing medical records, we cannot assume that employees working within a medical organization will always follow the rules.

3. Related Work

Related literature has identified several challenges and limitations of software audit mechanisms. Here, we discuss challenges in technology and challenges in policy, regulations, and compliance.

3.1. Challenges in Technology

Audit mechanisms in EHR systems face several technological challenges and limitations. We group these challenges into two categories: limited infrastructure resources and log file reliability.

3.1.1. Limited Infrastructure Resources

Behind every piece of software lies some sort of hardware configuration. Hardware itself imposes limitations that affect software. For example, information storage may be restricted to a single hard drive with a limited storage capacity. As a result, EHR systems must manage storage resources carefully.

Another challenge involves distributed software systems. Chuvakin and Peterson suggest that the biggest technological challenge of audit mechanisms involves determining the location at which generating, storing, and managing the log files will be most beneficial for the subject domain and intent of the software application[3]. In these systems, software components may run on separate host machines. For example, one machine may host a database server while a separate machine hosts a web server. In this situation, software audit mechanisms are less centralized and harder to implement because of the physically distributed nature of the overall software application. Here, the actual site of the audit logging functionality is not easy to define[3]. Should software generate audit trails at the web server level, at the database server level, both, or at some third-party location? Software architects must determine the ideal location of user-based non-repudiation audit mechanisms to ensure all user accountholder actions are recorded and monitored.
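
One possible arrangement is for every tier to report audit events to a single append-only store rather than logging separately at each server. The sketch below assumes a hypothetical CentralAuditSink with a simple file-based store; it is an illustration of the idea, not a recommendation from the cited sources.

  # Illustrative central audit sink shared by all application tiers; the class
  # name, method names, and file-based store are hypothetical.
  import json
  import threading
  from datetime import datetime, timezone

  class CentralAuditSink:
      """Append-only, thread-safe sink that every tier writes to."""

      def __init__(self, path="audit.log"):
          self._path = path
          self._lock = threading.Lock()

      def record_event(self, source, user_id, action, detail):
          entry = {
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "source": source,        # e.g., "web-server" or "db-server"
              "user_id": user_id,
              "action": action,
              "detail": detail,
          }
          with self._lock, open(self._path, "a") as log_file:
              log_file.write(json.dumps(entry) + "\n")

  sink = CentralAuditSink()
  # The web tier and the database tier both report to the same sink.
  sink.record_event("web-server", "jdoe", "view", "patient demographics form")
  sink.record_event("db-server", "jdoe", "select", "patients table, id 1042")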

3.1.2. Log File Reliability

Another technological challenge facing software audit mechanisms involves the reliability of the audit mechanism itself. NIST highlights the issue of breach of audit mechanism log data[8]. Audit mechanism log files need protection to ensure that the data contained within the log files is unmodified, accurate, and reliable. Engineering this protection of the audit mechanism log files may be challenging; it may also be overlooked by system developers who are unaware of or indifferent to the implications of unprotected log files and the inaccurate data that may result from modified logs. In this unprotected situation, log files are no longer trustworthy, the audit mechanism is no longer effective for monitoring user-based non-repudiation, and the accountability of the system is weakened.
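
One common technique for making such tampering detectable is to chain log entries with a cryptographic hash, so that modifying or deleting an earlier entry invalidates every later entry. The following minimal sketch illustrates the idea; it is not a mechanism taken from the systems or sources discussed here.

  # Minimal hash-chained audit log sketch: each entry's hash covers the previous
  # entry's hash, so modifying or deleting any entry breaks the chain.
  import hashlib
  import json

  def append_entry(log, record):
      prev_hash = log[-1]["hash"] if log else "0" * 64
      payload = json.dumps(record, sort_keys=True)
      entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
      log.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

  def verify_chain(log):
      prev_hash = "0" * 64
      for entry in log:
          payload = json.dumps(entry["record"], sort_keys=True)
          expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
          if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
              return False
          prev_hash = entry["hash"]
      return True

  log = []
  append_entry(log, {"user": "jdoe", "action": "update", "object": "diagnosis"})
  append_entry(log, {"user": "asmith", "action": "view", "object": "lab result"})
  assert verify_chain(log)
  log[0]["record"]["action"] = "view"   # simulate tampering with an old entry
  assert not verify_chain(log)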

3.2. Challenges in Policy, Regulations, and Compliance

As previously discussed in Section 1, policies and regulations such as those defined by HIPAA suggest a foundation for software audit mechanisms, yet fail to provide any fundamental guidance for software developers to build compliant software systems. In this section, we group policy and regulatory challenges into two categories: ill-defined standards, policies, and regulations; and ineffective log analysis.

3.2.1. Ill-defined Standards, Policies, and Regulations

Standards provide a foundation for consistency and quality. With software systems, coding standards provide a set of guidelines and suggestions for making program code style consistent across software applications; software developers may choose to ignore standards if they wish, but overall quality and understandability may be sacrificed.

Software audit mechanisms are inconsistent. Log file content, timestamps, and formats may vary externally across software companies and internally across software applications within the same company[8]. Distributed web services, for example, may have different policies based on the host machines[3]; the database server may have one set of auditing policies, while the web server may have a completely different set of auditing policies. In addition, the physical location of the distributed systems may cause concern. The organization (or country) that hosts the database server likely has different policies and regulations compared to the organization (or country) that hosts the web server. Furthermore, the transmission of data between these servers may pass through additional organizational authorities, which likely introduces an additional degree of varying policies and regulations. Chuvakin and Peterson[3] state that administrators of such complicated distributed systems may not enable security features (such as software audit mechanisms) by default; instead, software organizations must actively choose to enable auditing features. Without a default auditing system enabled, user-based non-repudiation and enforcement of accountability would likely decline.
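
In the absence of a shared standard, a pragmatic first step toward consistency is for each component to normalize its entries into a shared structure with UTC timestamps before storage or comparison. The sketch below uses invented input formats purely for illustration.

  # Illustrative normalization of audit entries from two components into one
  # shared structure; the input formats shown here are invented for the example.
  from datetime import datetime, timezone

  def normalize_web_entry(raw):
      # Assumed input: {"ts": "2012-01-28 14:03:22", ...}, treated as UTC here.
      ts = datetime.strptime(raw["ts"], "%Y-%m-%d %H:%M:%S")
      return {"timestamp": ts.replace(tzinfo=timezone.utc).isoformat(),
              "user_id": raw["user"], "action": raw["event"], "source": "web"}

  def normalize_db_entry(raw):
      # Assumed input: {"epoch": 1327759402, "login": "jdoe", "op": "SELECT"}
      ts = datetime.fromtimestamp(raw["epoch"], tz=timezone.utc)
      return {"timestamp": ts.isoformat(), "user_id": raw["login"],
              "action": raw["op"].lower(), "source": "db"}

  merged = sorted(
      [normalize_web_entry({"ts": "2012-01-28 14:03:22", "user": "jdoe",
                            "event": "view"}),
       normalize_db_entry({"epoch": 1327759402, "login": "jdoe",
                           "op": "SELECT"})],
      key=lambda entry: entry["timestamp"])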

Even if software audit mechanisms are enabled, these mechanisms still face other challenges, such as ambiguous logging requirements. When implementing audit mechanisms, software developers may focus on recording only additions, deletions, and modifications of data; the developers tend to overlook viewing or reading of data, however[11]. In healthcare[5], viewing and reading data in EHR systems is a vital concern when managing protected health information.
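
One lightweight way to avoid overlooking read access is to route every read-only accessor through a wrapper that records a “view” event alongside create, update, and delete events. The decorator below is a hypothetical illustration, not a pattern drawn from the studied EHR systems.

  # Hypothetical decorator that records a "view" event so that read access is
  # audited alongside create, update, and delete; audit_log is a stand-in sink.
  import functools
  from datetime import datetime, timezone

  audit_log = []

  def audited_view(object_type):
      """Wrap a read-only accessor so every call is logged as a view event."""
      def decorator(func):
          @functools.wraps(func)
          def wrapper(user_id, object_id, *args, **kwargs):
              audit_log.append({
                  "timestamp": datetime.now(timezone.utc).isoformat(),
                  "user_id": user_id,
                  "action": "view",
                  "object_type": object_type,
                  "object_id": object_id,
              })
              return func(user_id, object_id, *args, **kwargs)
          return wrapper
      return decorator

  @audited_view("diagnosis")
  def get_diagnosis(user_id, patient_id):
      return {"patient": patient_id, "diagnosis": "..."}  # placeholder lookup

  get_diagnosis("jdoe", "patient-1042")  # leaves a "view" entry in audit_log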

Without well-defined standards and regulations by a central governing body, the industry has no widely accepted standard for software audit mechanisms[3], including audit mechanisms in EHR systems. This leaves the responsibility of interpreting and complying with vague regulatory verbiage to individual software development teams who may be unprepared, untrained, or unaware of policies and regulations that govern the software systems upon which they work.

3.2.2. Ineffective Log Analysis

With respect to software audit mechanisms, accountability and non-repudiation imply that the stored log files should be analyzed to monitor compliance; without log analysis, the audit trail remains unseen, compliance remains unchecked, and accountability remains unmonitored for non-repudiation. Log file analysis falls into three categories: manual, automated, or a combination of the two. However, a current lack of efficient automated log file analysis policies and tools often leads to manual log file review[11].
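
As a small illustration of what automated review can look like, the sketch below flags two patterns a compliance reviewer might examine: access outside normal working hours and unusually high volumes of record views by a single user. The thresholds and field names are assumptions made for this example.

  # Illustrative automated scan over normalized audit entries; the thresholds
  # and field names are assumptions made for this sketch.
  from collections import Counter
  from datetime import datetime

  def flag_suspicious(entries, start_hour=7, end_hour=19, view_threshold=100):
      """Return after-hours entries and users with an excessive number of views."""
      after_hours = []
      views_per_user = Counter()
      for entry in entries:
          hour = datetime.fromisoformat(entry["timestamp"]).hour
          if not (start_hour <= hour < end_hour):
              after_hours.append(entry)
          if entry["action"] == "view":
              views_per_user[entry["user_id"]] += 1
      heavy_viewers = [user for user, count in views_per_user.items()
                       if count > view_threshold]
      return after_hours, heavy_viewers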

Software companies tend to inadequately prepare, support, and maintain human log file analyzers[8]. Preparation, support, and maintenance of effective human analyzers should include two activities: initial training in current regulations, and continued training in evolving policy, regulation, and case law. The current ineffective training practices in industry likely result in diminished control of accountability and non-repudiation[8].

Schneider[13] compares accountability to a defensive strategy: unacceptable actions (such as a receptionist viewing protected health data without authorization) may not be preventable in advance; instead, they must be identified after the fact so that the user who performed them can be reprimanded. Schneider suggests analysis methods must be mature enough to identify these users based on digital evidence (such as audit mechanism data), just as law enforcement investigators collect fingerprints from a crime scene. Dixon[4] also suggests this notion of computer forensics – computer data must be preserved, identified, extracted, documented, and interpreted when legal or compliance issues transpire. Likewise, effective software audit mechanism analysis must preserve, identify, extract, document, and interpret log file entries for user-based non-repudiation.

4. Assessment Methodology

Section 4.1 describes our high-level user-based non-repudiation assessment criteria for EHR audit mechanisms, based on non-specific auditable events (such as “view data” and “create data”). Section 4.2 describes the development and execution of our lower-level black-box test plan to help evaluate the logging of specific auditable events (such as “view diagnosis data” and “view patient demographics data”) for user-based non-repudiation.

4.1 High-level Assessment using Audit Guidelines and Checklists

Section 4.1.1 describes the derivation of our high-level assessment criteria for user-based non-repudiation based on non-specific auditable event types. Section 4.1.2 describes our methodology for assessing EHR system audit mechanisms.

4.1.1 Derivation of Non-specific Auditable Events

Our high-level assessment of user-based non-repudiation first involves compiling a list of non-specific events that should be logged in software audit mechanisms, according to other researchers and standards organizations. Non-specific events include basic actions such as “viewing” and “updating”, but these events do not specify what information is viewed or updated. Our goal is to compile a set of common non-specific auditable event types for user-based non-repudiation based on the general guidelines and checklists from four academic and professional sources:

  • Chuvakin and Peterson[3] provide a general checklist of items that should be logged in web-based software applications. We collect 17 auditable events from this source.
  • The Certification Commission for Health Information Technology (CCHIT) specifies an appendix of auditable events specific to EHR systems. CCHIT is a certification body authorized by the United States Department of Health & Human Services for the purpose of certifying EHR systems based on satisfactory compliance with government-developed criteria for meaningful use[2]. We collect 17 auditable events from this source.
  • The SysAdmin, Audit, Network, Security (SANS) Institute provides a checklist of information system audit logging requirements to help advocate appropriate and consistent audit logs in software information systems[7]. We collect 18 auditable events from this source.
  • The “IEEE Standard for Information Technology: Hardcopy Device and System Security” presents a section on best practices for logging and auditability, including a listing of suggested auditable events[6]. We collect 8 auditable events from this source.

4.1.2 High-level Assessment Methodology

4.2. Low-level Assessment using Black-box Test Cases

4.2.1 Audit Test Case Template

4.2.2 Audit Test Case Example

5. Case Studies

5.1. Open-source EHR Systems Studied

5.2. High-level User-based Non-repudiation Assessment

5.3 Low-level User-based Non-repudiation Assessment with Black-box Test Cases

6. Modifying without a Trace

7. Limitations

8. Future Work

9. Conclusion

10. Acknowledgements

11. References