The coming age of continuous assurance

The emerging field of continuous auditing attempts to better match internal and external auditing practices to the reality of the modern IT-dominated environment in order to provide stakeholders with more timely and meaningful assurance.


By Miklos A. Vasarhelyi1

 

Introduction


In recent decades, businesses worldwide have been transformed by powerful information technologies to operate in what has been labelled the ‘now economy’, characterised by 24-hour globalised operations, customer interactions and management decisions. These electronic transformations have affected the entire business cycle by incorporating a multiplicity of technologies into organisational processes. Financial processes have also been transformed, supported by the evolution of information technology (IT). Typically, IT and financial processes have progressed more quickly than the assurance (audit) process. Assurance has lagged, stifled by the conservatism of its practitioners, obsolete regulations, and the absence of social or economic catalysts. The emerging field of continuous auditing attempts to better match internal and external auditing practices to the reality of the modern IT-dominated environment in order to provide stakeholders with more timely and meaningful assurance.


The real-time economy


Businesses have accelerated activities in every possible domain in order to achieve two interlinked objectives:

–  Decreasing process costs through automation; and
–  Increasing process accuracy.

Incorporating computers into business processes reduces manual processing and the costs of the associated human errors. Importantly, computerisation changes the intrinsic nature of these processes, requiring adjustments to the socio-technical structures of organisations.


The development of mainframes, microcomputers and, most recently, internetworking – using the internet for a wide range of business transactions – has provided the major drivers of business electronisation. However, the real-time economy has mainly depended on the evolution of (i) sensing devices that automatically record economic events, (ii) enterprise resource planning (ERP) packages that integrate automated business processes, (iii) specialised business-reporting languages, and (iv) methods of integrating automated- and human-decision processes. Despite these developments, business processes must continue to evolve in order to leverage technology more effectively. These changes include business-process re-engineering and process reorganisation.


The continuous-audit paradigm represents a shift in practice toward a degree of automation better suited to the technologically sophisticated, cost-efficient modern entity. The development of continuous auditing requires a fundamental reconsideration of all aspects of auditing, particularly:

–  How data is made available to auditors;
–  What tests auditors conduct;
–  How alarms triggered by audits are dealt with; and
–  The nature, frequency and direction of assurance reports.

The importance of some of these issues will only become apparent as continuous auditing is implemented. However, the auditing profession and other stakeholders must start thinking now about the impact of continuous auditing when it is easier to guide this change, rather than after systems have become established.


Latencies


Four major types of latency (delay) can be reduced through the adoption of new technologies:


1. Intra-process latency: The time taken for a process (e.g. updating accounts payable) to be performed. These latencies are affected by the degree of automation of process steps.

2. Inter-process latency: The time taken for data to pass between processes. These latencies are influenced by the adoption of interoperability standards such as XML (eXtensible Markup Language), a set of rules for encoding information into machine-readable form. The financial value-chain will be substantially enriched when XML-coded transactions interact with XBRL (eXtensible Business Reporting Language) for general-ledger postings and, ultimately, the preparation of financial statements.

3. Decision latency: The time taken for a decision to be made, reduced to nanoseconds if decisions are made electronically. Auditors typically make a series of decisions based on errors detected in samples of populations. These human interventions take time. Rules can be developed to automatically highlight items for further examination or to accept samples as representative of populations (a minimal sketch follows this list).

4. Decision-implementation latency: The time taken for the implementation of decisions, contingent on the nature of processes and the types of connections between processes. Once a sample is deemed to require more intensive examination, original documents must be retrieved for scrutiny and analysis. Automation can reduce this latency by automatically subjecting sub-samples to increased filtering and analysis.
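
To make the decision-latency point concrete, the following minimal Python sketch encodes a sampling decision as an automatic rule. The record layout and the acceptance threshold are hypothetical, chosen for illustration only.

# Minimal sketch of automating a sampling decision (hypothetical
# record layout and threshold, for illustration only).

def review_sample(sample, error_rate_threshold=0.02):
    """Accept the sample as representative, or escalate the exceptions."""
    errors = [item for item in sample if item["error"]]
    error_rate = len(errors) / len(sample)
    if error_rate <= error_rate_threshold:
        return "accept", []       # population accepted without human review
    return "escalate", errors     # only the exceptions go to an auditor

decision, exceptions = review_sample(
    [{"id": 1, "error": False}, {"id": 2, "error": True}]
)                                 # -> ("escalate", [{"id": 2, "error": True}])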


Elements of progressive automation


Starting 25 years ago with US telecommunications giant American Telephone & Telegraph Company (AT&T), I have worked with many colleagues on a plethora of projects aimed at increasing the timeliness of auditing and improving the quality of organisational data. This work has been undertaken primarily in conjunction with firms’ internal auditors, but also occasionally with their external counterparts or process managers. These projects have generated a series of tentative conclusions about where we stand today, namely:

–  Auditing must evolve toward a radically different methodology to satisfy the general objectives of assurance and data integrity.
–  Regulations governing the scope of audits are too narrow. The domain needs to expand, involving a reconsideration of the objectives of audits.
–  The automation of audit processes is just one part of a wider set of economic, technological and process innovations.
–  A new framework of assurance is evolving to incorporate more advanced analytics, attempting to leapfrog the lag of assurance technology behind business practice.
–  After scope and analytics, the two major changes awaiting the audit process are the timing and location of assurance work.


Electronic measurement and reporting (XBRL)


XBRL, in both its external financial reporting (FR) and general ledger (GL) versions, represents a very positive step towards automation, but it still perpetuates some of the limitations of the ‘paper-oriented’ reporting model. To improve their social-agency function, audits should encompass corporate measurements and databases, not just financial statements. XBRL is an imperfect, rigid model for representing the interlinked fuzzy-boundary business organisations of today. However, electronic assurance functions will be influenced by, and will influence the evolution of, XBRL.

Like most substantive regulatory-based changes, the evolution of XBRL is generating a series of unintended (positive and negative) consequences including (i) pressure towards standardisation of reporting, (ii) facilitation of more-frequent reporting, and (iii) standardisation of the semantics of accounting reporting.
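
As an illustration of machine-readable business reporting, the minimal Python sketch below emits a journal entry as XML. The element names are schematic stand-ins chosen for readability, not actual XBRL-GL taxonomy tags.

# Schematic sketch of emitting a journal entry as machine-readable XML.
# Element names are illustrative, not actual XBRL-GL taxonomy tags.
import xml.etree.ElementTree as ET

entry = ET.Element("journalEntry", id="JE-1001")
ET.SubElement(entry, "postingDate").text = "2010-10-11"
line = ET.SubElement(entry, "line")
ET.SubElement(line, "account").text = "AccountsPayable"
ET.SubElement(line, "amount", currency="AUD").text = "1250.00"

print(ET.tostring(entry, encoding="unicode"))

Once every posting carries such tags, downstream systems can parse the ledger directly, without the re-keying and reconciliation that paper-oriented reports require.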


Monitoring and control


Conceptually, the processes of measurement, monitoring, control and assurance are tightly interlinked. Control typically focuses on the comparison of actual values with predetermined benchmarks to produce variances or discrepancies. When the absolute value of a discrepancy exceeds a standardised amount, an alarm or alert occurs. Although extensive effort has gone into improving the measurement of actual values through the use of IT, there has been limited work into defining the models or standards that should be used under different circumstances, particularly when seasonality, growth rates, business cycles and extraordinary events all influence actual measures.
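
The comparison-and-alarm logic just described can be sketched minimally as follows; the quarterly benchmark table and tolerance are hypothetical, and a production standard would also have to model growth rates, business cycles and extraordinary events.

# Minimal sketch of the control comparison described above: compare an
# actual value with a (here, seasonal) benchmark and raise an alarm when
# the absolute discrepancy exceeds a tolerance. All figures hypothetical.

SEASONAL_BENCHMARK = {"Q1": 100_000, "Q2": 120_000, "Q3": 90_000, "Q4": 150_000}

def check_control(actual, quarter, tolerance=10_000):
    benchmark = SEASONAL_BENCHMARK[quarter]
    discrepancy = actual - benchmark
    return discrepancy, abs(discrepancy) > tolerance

discrepancy, alarm = check_control(actual=135_000, quarter="Q2")
# discrepancy == 15_000, alarm == True -> route an alert to the process owner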


The proliferation of business processes and the ubiquity of technology and automation will not only change the minimum level of control from accounts (embodying multiple transactions) to individual transactions but will also require a different set of meta-controls to be enacted. These are part of the natural changes in business processes discussed earlier.


Two key considerations have been embedded in assurance regulations and procedures. First is the trade-off between the costs and benefits of audits, which has led to materiality thresholds that are high given today’s technology. Second is the atomicity and observability of controls, which has led most audit research to focus on data auditing rather than on the structure of controls and compliance with them.


The advent of the Sarbanes-Oxley Act in the US in 2002 – which tightened financial reporting and audit requirements – prompted a reconsideration of the latter, further emphasising management’s accountability for control and accounting, with assurers clearly relegated to non-operational review roles.


Continuous assurance


The first recognisable example of what would today be termed continuous auditing was our large-scale auditing system developed in the late 1980s at Bell Laboratories, the research arm of AT&T. That project relied on the pre-internet advanced information technologies of the day (PCs, databases, corporate networks) to assure the reliability of the entity’s billing systems through automated data acquisition and analysis, and the electronic communication of alarms for a customer base of over 50 million households.


This system was designed to monitor and audit the billing system of AT&T in the context of the corporation’s ‘take back’ strategy, which involved billing clients directly rather than through operating companies. As the system – labelled Continuous Process Audit System (CPAS) – was enormous and highly sensitive, data was extracted through a semantic process in which electronic versions of reports were captured via a remote job-entry system and pattern-scanned for specific content. Figure 1 shows schematically how the system’s electronic remote job-entry reports were filtered through semantic extraction procedures and placed in a relational database. This database was queried by screen-based reports that described the system visually in a flow chart-like presentation familiar to auditors.



Figure 1 CPAS Architecture

Internal auditors, who participated intensively in the effort, were ‘knowledge engineered’ to capture their expertise in the system and its audit rules. Past audit reports were also used to identify sources of data (metrics), types of analysis performed (analytics) and comparative models (standards), and to determine when alarms should be issued.


This effort in actual data monitoring – to identify process flaws or data exceptions – was originally termed ‘continuous audit’ but today would be labelled ‘continuous data audit’. It demonstrated that the ultimate aim of continuous auditing is to bring auditing closer to operational processes, and away from the traditional backward-looking annual examination of financial statements. The CPAS project was eventually paralleled by the ‘Prometheus’ project that delivered information to billing managers in a manner analogous (but not identical) to the process-monitoring features of CPAS.

We progressively re-conceptualised continuous auditing with three main components: (i) continuous data audit, (ii) continuous control monitoring and (iii) continuous risk monitoring and assessment.


Continuous data assurance (CDA)


Continuous data assurance (CDA) uses software to extract data from IT systems for analysis at the transactional level to provide more detailed assurance. CDA systems provide the ability to design expectation models for analytical procedures at the business-process level, as opposed to the current practice of relying on ratio or trend analysis at higher levels of data aggregation. Testing the content of an entity’s data flow against such process-level benchmarks focuses on examining both exceptional transactions and exceptional outcomes of expected transactions. CDA software can continuously and automatically monitor transactions, comparing their generic characteristics with these predetermined benchmarks, thereby identifying anomalous situations. When significant discrepancies occur, alarms are triggered and routed to appropriate stakeholders.
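
A minimal sketch of such transaction-level screening follows; the benchmarks, field names and routing are hypothetical, and a real CDA system would run continuously against the transaction feed.

# Minimal sketch of screening each transaction against process-level
# benchmarks (hypothetical benchmarks and field names).

BENCHMARKS = {"max_amount": 50_000, "allowed_currencies": {"AUD", "USD"}}

def screen(txn):
    """Return the list of benchmark violations for one transaction."""
    violations = []
    if txn["amount"] > BENCHMARKS["max_amount"]:
        violations.append("amount above process benchmark")
    if txn["currency"] not in BENCHMARKS["allowed_currencies"]:
        violations.append("unexpected currency")
    return violations

for txn in [{"id": "T1", "amount": 72_000, "currency": "AUD"}]:
    for violation in screen(txn):
        print(f"ALARM {txn['id']}: {violation}")  # route to the stakeholder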


Transaction verification is essential in most CDA implementations, especially in entities with disparate legacy IT systems rather than a single, integrated, ERP system. When data is uploaded to a firm’s data warehouse, potential errors that may be introduced to the data set have to be identified and removed before the data is suitable for automated testing. This step is undertaken by the transaction-verification component of a CDA system. In a tightly integrated enterprise environment with automated business-process controls, such data errors may be prevented by the client’s ERP system.


Transaction verification is implemented by specifying data validity, consistency and referential integrity rules, which are then used to filter the population of data. These rules are designed to detect and remove two types of data errors (a minimal sketch follows the list):

1. Data-integrity violations, including invalid purchase quantities, receiving quantities, and cheque numbers.

2. Referential-integrity violations, largely caused by unmatched records among different business processes. For example, if a receiving transaction cannot be matched with any related ordering transaction, the indication is that a payment is being requested for a non-existent purchase order.
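
A minimal sketch of these filtering rules, under assumed field names, combining a data-integrity check with the referential-integrity match described in item 2:

# Minimal sketch of transaction verification: filter out records that
# violate data-integrity or referential-integrity rules before automated
# testing. Field names are assumed for illustration.

purchase_orders = {"PO-1", "PO-2"}           # keys from the ordering process
receipts = [
    {"id": "R-1", "po": "PO-1", "qty": 10},
    {"id": "R-2", "po": "PO-9", "qty": 5},   # unmatched order -> violation
    {"id": "R-3", "po": "PO-2", "qty": -4},  # invalid quantity -> violation
]

clean, violations = [], []
for receipt in receipts:
    if receipt["po"] not in purchase_orders:
        violations.append((receipt["id"], "no matching purchase order"))
    elif receipt["qty"] <= 0:
        violations.append((receipt["id"], "invalid receiving quantity"))
    else:
        clean.append(receipt)                # suitable for automated testing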

CDA can be used for verifying master data, transactions and key process metrics using analytics (including continuity equations).
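
Continuity equations express stock-and-flow identities that a process’s transaction streams must satisfy. A minimal sketch, with assumed figures:

# Minimal sketch of a continuity-equation check: ending inventory must
# equal beginning inventory plus receipts minus shipments; a residual
# beyond tolerance flags the period for examination. Figures assumed.

def continuity_check(begin, receipts, shipments, end, tolerance=0):
    residual = end - (begin + receipts - shipments)
    return residual, abs(residual) > tolerance

residual, flagged = continuity_check(begin=500, receipts=120, shipments=80, end=545)
# residual == 5, flagged == True -> five units unaccounted for this period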


For Itau Unibanco (Brazil’s largest private financial institution and the twelfth largest bank in the world) we have been developing a set of CDA applications that include (i) auditor branch monitoring, (ii) transitory account monitoring, and (iii) branch sales analysis and monitoring. These endeavours are focused on detecting errors, deterring inappropriate events and behaviours, reducing or avoiding financial losses, and helping assure compliance with existing laws, policies, norms and procedures.


We have also worked with the audit-innovation team of consumer products multinational Procter & Gamble in three projects, involving (i) identifying inventory problems at over 160 locations using key performance indicators, (ii) examining worldwide vendor files and understanding vendor structures, duplicate payments and other issues, and (iii) automating the order-to-cash audit process on a stepwise basis.


Continuous controls monitoring (CCM)


The advent of the Sarbanes-Oxley Act and its assurance provisions on controls brought increased attention to the configurable nature of ERP controls. Continuous monitoring of business-process controls relies on automatic procedures, presuming that both the controls themselves and the monitoring procedures are formal or able to be formalised.


Working with German manufacturing conglomerate Siemens AG, we made our first attempt to prototype control monitoring, which was followed by a more complete audit-automation study. We are currently working on the formalisation of job functions and their consequences for separation of duties. This work has taught us about Siemens’ audit action sheets (AASs), which provide a step-by-step review of its internal systems. In the Procter & Gamble audit-automation project, we first created AASs when deciding which steps of the order-to-cash process to automate.


Among the lessons from the Siemens projects are that (i) ERP systems are very opaque, (ii) process- and control-rating schemata are desirable, (iii) 20-40 per cent of controls may be deterministically monitored, (iv) perhaps another 20-40 per cent are potentially monitorable, (v) CCM produces a new form of alarm evidence that we do not yet know how to deal with, and (vi) continuous risk monitoring and assessment is needed for weighing evidence and choosing procedures.


CCM can be used for monitoring access control and authorisations, system configurations, and business process settings.
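
One such CCM test is a separation-of-duties check over user authorisations. A minimal sketch, assuming a user-to-role extract with hypothetical role names:

# Minimal sketch of a separation-of-duties (SoD) test: flag any user
# holding a pair of conflicting roles. Role names are hypothetical.

CONFLICTING = {("create_vendor", "approve_payment")}

user_roles = {
    "alice": {"create_vendor"},
    "bob": {"create_vendor", "approve_payment"},   # conflicting pair
}

def sod_violations(assignments):
    hits = []
    for user, roles in assignments.items():
        for role_a, role_b in CONFLICTING:
            if role_a in roles and role_b in roles:
                hits.append((user, role_a, role_b))
    return hits

print(sod_violations(user_roles))  # [('bob', 'create_vendor', 'approve_payment')]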


CDA and CCM are complementary processes. Neither process is self-sufficient or comprehensive. Even if no data faults are found, it cannot be concluded that controls are fail-safe. Further, even if controls are implemented, data integrity cannot be assumed. When combined, however, these monitoring approaches present a more complete reliance picture. Yet we were encouraged to take the work further, particularly following the financial crisis of 2007-08 and the pressure of the Public Company Accounting Oversight Board (established in the US under the Sarbanes-Oxley Act) for ‘risk-based audits’. We hence conceptualised the Continuous Risk Monitoring and Assessment (CRMA) approach.


Continuous risk monitoring and assessment (CRMA)


In compliance with the Sarbanes-Oxley Act, management must monitor internal controls to ensure that risks are being adequately assessed and managed. Enterprise risk management systems help companies identify and manage corporate risks. In compliance with the Basel protocols, banks are required to assess their overall capital adequacy in relation to their risk profiles. Regulators have encouraged financial institutions to validate their risk models to increase the reliability of risk assessments. The Public Company Accounting Oversight Board has exerted substantial pressure on audit firms to reduce audit costs through smarter use of audit procedures.


CRMA is a real-time integrated risk assessment approach, aggregating data across different functional tasks in organisations to assess risk exposures and provide reasonable assurance on firms’ risk assessments. It includes processes that:

–  Measure risk factors on a continuing basis;
–  Integrate different risk scenarios into quantitative frameworks; and
–  Provide inputs for audit planning.

For the Itau Unibanco project, we have been developing a set of risk indicators for the product-to-sales cycle, covering four different banking products, as well as a set of macro risk indicators, all of which will be added to a normative risk dashboard (a display of system measures and alerts) and used as audit evidence in deciding the extent and scope of audit procedures. The CRMA area is still incipient and requires extensive thought, experimentation and prototyping, but we feel that this will be a very important and valuable area of research.
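
As a minimal sketch of how such indicators might feed a planning score, with hypothetical indicator names and weights (a real dashboard would normalise each indicator against its own history):

# Minimal sketch of aggregating risk indicators into one score used to
# prioritise audit work. Indicator names and weights are hypothetical.

WEIGHTS = {"transitory_balance": 0.5, "reversal_rate": 0.3, "staff_turnover": 0.2}

def risk_score(indicators):
    """Weighted average of indicators already scaled to [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

branch = {"transitory_balance": 0.8, "reversal_rate": 0.4, "staff_turnover": 0.1}
score = risk_score(branch)  # 0.54 -> prioritise this branch in the audit plan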


Future developments and constraints


Our experiences – gained from working on over 10 corporate partnership projects – have led us to believe that substantive changes to both measurement and assurance models are necessary. The same rules that helped to formalise and evolve the role of auditing in modern life now inhibit the evolution of many business measurement processes, including assurance. These changes, forced by the progressive advent of the real-time economy, must be research-based, tested in practice, and then promoted and incorporated by standard setters.


The top priorities in assurance research should encompass creating:

–  Control-system measurement and monitoring schemata, including attempts at formalisation and structuring. This should include some form of control representation and taxonomy, methods of quantification of control combinations, and methods of incorporating the results of control-monitoring into quantitative assessment of control design and effectiveness;
–  Standards for business process monitoring and alarming;
–  Automatic confirmation tools;
–  A variety of modular audit bots (agents) to be incorporated into programs of audit automation; and
–  Alternative real-time audit reports for different compliance masters.

At the same time, complementary research should focus on extending assurance to non-financial processes through continuity equations, developing standards for continuous auditing, and developing complementary assurance products.

We must also reconsider concepts and standards, particularly:
–  Independence, which needs to be redefined;
–  The external audit billing model, which should be restructured to bill on function, not hours;
–  Audit firms’ knowledge collection and management processes, which must be improved to feed their analytical toolkits;
–  Audit firms’ engagement in audit automation, proactively promoting corporate data collection during the automation process;
–  Value-adding, which must be justified in terms of data quality; and
–  The concept of materiality, which needs to be redefined.


This lecture examined the changes occurring in the real-time economy and how they necessitate something close to real-time assurance. It then discussed the evolution of continuous assurance and its current conceptualisation, and finished by listing a series of priorities for research in the area.

References


Alles, M.A., Brennan, G., Kogan, A. and Vasarhelyi, M.A., 2006, ‘Continuous monitoring of business process controls: A pilot implementation of a continuous auditing system at Siemens’, International Journal of Accounting Information Systems, June, pp. 137-161.

The Institute of Internal Auditors, 2005, Continuous Auditing: Implications for Assurance, Monitoring, and Risk Assessment, GTAG #3, Altamonte Springs, Florida.

Vasarhelyi, M.A. and Greenstein, M.L., 2003, ‘Underlying principles of the electronization of business: A research agenda’, International Journal of Accounting Information Systems, March, pp. 1-25.

Vasarhelyi, M.A. and Halper, F., 1991, ‘The continuous audit of online systems’, Auditing: A Journal of Practice and Theory, vol. 10 (1), pp. 110-125.


1 The author thanks Qi Liu and J.P. Krahel for their advice on this paper.


A condensed version of the 71st CPA Australia/University of Melbourne Annual Research Lecture presented on 11 October 2010. A recorded version of the lecture is also available.

Miklos A. Vasarhelyi is KPMG Professor of AIS, Rutgers Business School; Technology Consultant, AT&T Labs; and editor of the Artificial Intelligence in Accounting and Auditing series and AAA’s Journal of Information Systems.

