Artificial women against corruption: in search of legitimacy at the Brazilian Federal Court of Accounts (Tribunal de Contas da União)

Keywords: Artificial Intelligence Systems, Audit Processes, Theorization, Corruption

Abstract

This study describes the search for legitimacy of four information technology artifacts designed to assist auditors in the surveillance against fraud and corruption at the Brazilian Federal Court of Accounts (Tribunal de Contas da União, TCU). ALICE, ADELE, MONICA and SOFIA are Artificial Intelligence (AI) systems proposed to support audit processes in the public sector. An online questionnaire gathered responses from 60 auditors across Brazil, complemented by semi-structured interviews with the Chief Data Officer, three IT developers and five TCU audit managers selected by purposive sampling. The research shows that the use of the AI-based systems among TCU auditors is low, owing to a limited perceived benefit. Although some respondents recognize the advantages of AI-based systems, their use is delayed by weak theorization and diffusion of the meaning and use of these systems within the organization; auditors prioritized traditional audit methods over digital innovation, restricting the control potential of the technological artifacts against corruption.

Published: 2019-11-28

How to cite: Neves, F. R., da Silva, P. B., & Carvalho, H. L. M. de. (2019). Mulheres artificiais contra a corrupção: em busca de legitimidade no Tribunal de Contas da União. Revista de Contabilidade e Organizações, 13, 31-50. https://doi.org/10.11606/issn.1982-6486.rco.2019.158530

Section: Fraudes e Corrupção: o que Contabilidade e Organizações têm a dizer? (Fraud and Corruption: what do Accounting and Organizations have to say?)

1 INTRODUCTION

The use of Artificial Intelligence (AI) in public organizations has the potential to push forward the anticorruption agenda through new methods of detection, prevention and analysis of cybercrimes, fraud and corruption (Chen et al., 2004; Salovaara, 2012; Valle-Cruz & Sandoval-Almazan, 2018). Governments at all levels should utilize digital innovations to respond to technological changes (Hinings, Gegenhuber & Greenwood, 2018; Sousa, Melo, Bermejo, Farias & Gomes, 2019).

The term Artificial Intelligence (AI) covers technologies mimicking capabilities of a human mind to solve complex problems, handling a specific set of inputs, and driving the computing process that draws conclusions or suggests an output (Muggleton, 2014; Jordan & Mitchell, 2015; Davenport & Kirby, 2016; Frey & Osborne, 2017).

Previous research has reported that anticorruption technologies are designed and used specifically for detecting fraud, including Expert Systems (ES) or “intelligent mining” of data sets on administrative procedures (Williams, 1995; Othman, Aris, Mardziyah, Zainan & Amin, 2015; Perols, Bowen, Zimmermann & Samba, 2016; Zerbino, Aloini, Dulmin & Mininno, 2018; Goto, 2018). Furthermore, this digital innovation can also play an essential role in pushing organizations to explore new ways of structuring their workforce (IBA, 2017; Frey & Osborne, 2017).

However, little is known about what would lead government institutions to implement new practices based on Artificial Intelligence systems. In general, the Brazilian literature on related cases only describes the experiences of these implementations. Examples include the use of an Artificial Intelligence-based system (called Harpia) by the Brazilian Internal Revenue Service against customs fraud (Digiampietri et al., 2008); the use of Data Mining to enhance cartel detection in government procurement by the Brazilian Office of the Comptroller General [i] (hereafter CGU) (Ralha & Silva, 2012); the use of an AI-based artifact (called Watson) in complex criminal investigations by the Brazilian Federal Police (Sperb, 2017); and the use of Machine Learning in the control practices of the Court of Accounts of the State of Maranhão (Carmo, Souza, Reis & Vieira, 2018).

In turn, the international literature has a range of approaches for investigating the effects of AI-based systems on novice auditors' training (Wongpinunwatana, Ferguson & Bowen, 2000); the impact of information technology on the audit process (Bierstaker, Burnaby & Thibodeau, 2001); the use of computer-assisted audit tools and techniques (Braun & Davis, 2003; Mahzan & Lymer, 2014) and the use of AI-based systems by the Big 4 accounting firms in Japan (Goto, 2018).

Those publications are predominantly linked to the private sector, with a few exceptions portraying the government sector (see Braun & Davis, 2003; Mikhaylov, Esteve & Campion, 2018), and are mostly conceptual, as in Kokina and Davenport (2017), Hinings, Gegenhuber and Greenwood (2018) and Sousa et al. (2019), which provide an overview of the appearance of Artificial Intelligence in accounting and auditing without empirical data. Although several scholars continue to study the introduction and implementation of AI-based systems in the private and the public sector from different theoretical perspectives, there has been a paucity of research concerning how these technological artifacts get implemented, especially in the governmental context.

We investigate how technology affects the organization at a micro-level, contributing empirical data that depict the internal theorization process among organizational members. To become established, digital innovation artifacts should gain legitimacy through processes of theorization (Suchman, 1995; Strang & Meyer, 1993), in which innovators (e.g., IT Developers) present arguments concerning the problems they are solving. In our case study, more active institutional work by key actors is still needed to incorporate the new AI solutions and practices.

We use the case of the Brazilian Federal Court of Accounts (hereafter TCU), the Brazilian Supreme Audit Institution (SAI), which implemented AI systems to analyze the procurement processes of the federal administration. The following acronyms are used: Analysis of Bids, Contracts and Public Calls (ALICE); Analysis of the Dispute in Electronic Bids (ADELE); Integrated Monitoring for Acquisition Control (MONICA); and Guidance System on Facts and Evidence for the Auditor (SOFIA). These artifacts have been integrated into the auditing workflow since 2015 to foster internal and external information consumption (Silva, 2016).

In order to conduct this research, the main question that guides our study is: How can AI-based systems gain legitimacy in a Supreme Audit Institution focused on surveillance against fraud and corruption?

We obtained data from nine semi-structured interviews with IT Developers, Audit Managers, and the Chief Data Officer, and from a survey questionnaire completed by 60 auditors of the national SAI. Our results provide evidence that the use of Artificial Intelligence-based systems depends on how they are theorized within the organization. To increase the use of these systems, the SAI's IT Developers and Managers should improve auditors' skills, emphasizing how the technologies can enhance audit efficiency and performance in their daily practices and in corruption surveillance.

This paper is organized as follows: first, we review previous research on the theorization of technological institutional change; second, we present the case study and its framework; third, we describe the methodology and the context through the perceptions of the main actors within the Supreme Audit Institution; finally, we present the analysis and conclusions.

2 THEORIZATION OF TECHNOLOGICAL INSTITUTIONAL CHANGE

Previous research on institutional change has generally focused on organizations' responses to external changes. Tolbert and Zucker (1983) presented a model of the institutionalization of new practices and argued that organizations adopting changes early have different objectives (the search for efficiency or problem solving) from organizations adopting them later (the search for legitimation). Institutional change occurs when events or ‘jolts’ break up settled practices; these jolts may take the form of social turmoil, technological disruptions, or regulatory change (Greenwood, Suddaby & Hinings, 2002).

DiMaggio and Powell (1983) discussed the trend of similar behavior among organizations in response to external pressures. Oliver (1991) argued that organizations could give different answers to institutional pressures, and may accept or reject them, or decide on ceremonial action for compliance purposes.

Subsequently, the literature sought a new explanation regarding the responses given by organizations showing that changes can be affected by intra-organizational factors such as leadership style, power relations, technical capacity and existence of diffusion channels (Liguori, Sicilia & Steccolini, 2012). Finally, studies on digital innovation and transformation seek to explain how some types of digitally institutional arrangements emerge and diffuse through fields and organizations (Currie, 2009; Yoo, Boland, Lyytinen, & Majchrzak, 2012; Hinings et al., 2018).

Some previous studies called for refocusing institutional analysis on individuals, pointing out that there is still a limited understanding of how individuals make sense of the institutional pressures that generate changes throughout the organization (Barley & Tolbert, 1997). The change process, however, may face barriers to its implementation: individuals perceive the pressures to which they are exposed in different ways, which can generate differentiated actions (Greenwood & Hinings, 1996).

One cause of individuals' resistance to change may be the previously institutionalized symbolic values and practices they hold, which may conflict with the content of the proposed change. When a new practice arises or changes, it lacks initial social legitimacy and is not widely accepted. The decision to adopt a new practice therefore differs from the decision to follow an already institutionalized practice, since the latter would be "objective and external," with low risk of criticism or questioning (Meyer & Rowan, 1977). Thus, in the face of change, there is a decision-making process on whether or not to adopt it.

There may also be internal organizational pressures arising from institutional complexity, defined as pressures stemming from the conflicting demands of different actors holding power, from the misalignment of means and ends within those demands, and from the ambiguity of institutional demands (Greenwood, Raynard, Kodeih, Micelotta & Lounsbury, 2011).

This resistance may be reduced through a form of institutional work (Lawrence & Suddaby, 2006) known as theorization, which can be understood as the process by which organizational ideas become abstracted into theoretical models to support their diffusion in time and space (Mena & Suddaby, 2016). Theorization also refers to the framing of new ideas into conceptual models of cause and effect (Strang & Meyer, 1993).

Institutional change could follow a specific pattern of stages in which theorization plays a key role by providing legitimacy for innovation, allowing the innovation to become a taken-for-granted object (Tolbert & Zucker, 1996). Much of the existing organizational research has focused on theorization as a process targeting internal members of an organizational field in order to develop a common understanding (Greenwood et al., 2002; Greenwood & Suddaby, 2006; Smets, Morris & Greenwood, 2012).

In the stage of theorization, the change should become meaningful to the individuals, increasing the chance of internalization as well as diffusion (Strang & Meyer, 1993). Theorizing should present a problem requiring a publicly recognized resolution. Then organizational actors affected by the problem diagnose the sources of dissatisfaction or failures, allowing a unique solution or treatment to be developed, which may raise the legitimacy of the change proposed (Tolbert & Zucker, 1996). Diffusion follows successful theorization processes (Greenwood et al., 2002).

One factor linked to institutionalization is legitimacy, a generalized perception or assumption that an action is desirable or appropriate within a socially constructed system of norms, values, beliefs, and definitions (Suchman, 1995). Legitimacy plays a central role in organizational change, affecting "not only how people act toward organizations, but also how they understand them" (Suchman, 1995, p. 575).

Thus, theorization involves two essential tasks: problematization and justification. Problematization is the specification of general organizational problems, “generating public recognition of a consistent pattern of dissatisfaction or organizational failing that is characteristic of some array of organizations” (Tolbert & Zucker, 1996, p. 183). On the other hand, justification is achieved by giving moral legitimacy and/or asserting pragmatic legitimacy, and “developing theories that provide a diagnosis of the sources of dissatisfaction or failings, theories that are compatible with a particular structure as a solution or treatment” (Tolbert & Zucker, 1996, p. 183).

Such models rationally present the innovative idea, suggesting that the proposed relationship holds regardless of context (Strang & Meyer, 1993; Greenwood et al., 2002). They may be more or less complicated, varying from simple causal relations (Greenwood et al., 2002) to more elaborate theories (Strang & Meyer, 1993; Lounsbury & Crumley, 2007). At the very least, theorization makes new, and even potentially complex, ideas readily understandable to large audiences (Nigam & Ocasio, 2010; Mena & Suddaby, 2016).

3 AI USES BY SUPREME AUDIT INSTITUTIONS FOR FRAUD AND CORRUPTION SURVEILLANCE

Preventing corruption is not the main objective of Supreme Audit Institutions (like TCU). Nonetheless, it is during the process of auditing that most frauds and evidence of corruption are detected (Borge, 1999; Kayrak, 2008). This set of assignments allows the TCU to prevent and fight cases of corruption (Melo, Pereira & Figueiredo, 2009).

However, the TCU's structure is limited relative to the number of actions that need to be controlled (Silva, 2016). Furthermore, Dye (2007) suggests that SAIs cannot remain indifferent to cases of fraud and corruption, especially in less developed countries. Recent scholarship shows that SAIs contribute to improving government efficiency and have a significant influence on perceived levels of corruption (Blume & Voigt, 2011; Tara, Gherai, Laurentiu & Matica, 2016), and that international pressure on Supreme Audit Institutions to fight corruption is increasing (Reichborn-Kjennerud, González-Díaz, Bracci, Carrington, Hathaway, Jeppesen & Steccolini, 2019).

Therefore, the TCU has been exploring the potential of AI technology to identify and extract possible indications of fraud and corruption from public documents (Ramirez & Perez, 2016; Silva, 2016; OECD, 2017). Technological artifacts are thus expected to optimize tasks and provide more effective results, not only aiding manual labor but also being widely used for calculations, monitoring, and communication.

Data Mining and Machine Learning (AI-based applications) are the current technological procedures used for such tasks. They enable public sector auditors to explore vast amounts of data quickly and efficiently and, rather than operating static rules written by a programmer, allow the computers to learn and draw conclusions (Williams, 1995; Jans, Alles & Vasarhelyi, 2013; Jordan & Mitchell, 2015; Silva, 2016; Taurion, 2016).
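
To make the contrast between programmer-written static rules and learning-based analysis concrete, the sketch below compares the two approaches on a handful of toy procurement records. It is a minimal illustration in Python, assuming scikit-learn is available; the features, thresholds, and records are invented for the example and are not the typologies or data actually used by the TCU systems.

```python
# Illustrative only: a static rule vs. a learned anomaly detector on toy procurement
# records. Features, thresholds, and data are hypothetical, not taken from ALICE/ADELE.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy records: [estimated_value, winning_bid, number_of_bidders]
records = np.array([
    [100_000,  97_000, 8],
    [250_000, 248_000, 6],
    [ 80_000,  79_500, 1],   # single bidder, bid close to the estimate
    [500_000, 499_900, 2],
    [120_000, 118_000, 7],
])

# (a) Static rule written by a programmer: few bidders AND bid very close to the estimate.
rule_flags = (records[:, 2] <= 2) & (records[:, 1] / records[:, 0] > 0.98)

# (b) Learned detector: models what "typical" records look like and scores outliers,
#     without the analyst having to enumerate every typology in advance.
model = IsolationForest(contamination=0.2, random_state=0).fit(records)
ml_flags = model.predict(records) == -1   # -1 marks records unlike the bulk of the data

print("Static-rule flags:", rule_flags)
print("Model flags:      ", ml_flags)
```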

Nonetheless, the way these professionals deal with such cognitive technologies, their low diffusion, and the lack of effectiveness in their use may limit the potential of these digital innovations in the organization. The mere existence of artifacts such as ALICE, ADELE, MONICA, and SOFIA may therefore not translate into more effective surveillance against fraud and corruption if their potential users have not internalized their value, or if the existing structure has regulatory or organizational bottlenecks.

Table 1 below shows how the AI artifacts are used in the TCU audit processes, describing each artifact and its output and, based on the Davenport and Kirby (2016) framework, how autonomously it operates.

Table 1:
Mapping TCU's Cognitive Technologies
Artifact | Task Description | Output | Level of Intelligence* | Task Types*
ALICE | Accesses the Comprasnet system and collects files and data from all bidding processes published throughout the day; typologies (possible inconsistencies) are then tested against the documents. | E-mail + Panel | Context Awareness and Learning | Analyzing Numbers & Words
ADELE | Displays information on electronic auctions processed through Comprasnet in a dashboard; information about the competition (or lack thereof) in a particular bidding session can be obtained graphically. | Panel | Support for Humans | Analyzing Numbers & Words
MONICA | A dashboard that shows all public purchases, including those that ALICE may miss. | Panel | Support for Humans | Analyzing Numbers & Words
SOFIA | Works as an automatic auditor assistant, through a macro in the Microsoft Word processor, identifying relevant elements that are searched in the TCU's databases. | Word icon | Support for Humans | Analyzing Numbers & Words

Source: Research Data.

Note: *Based on Davenport & Kirby (2016).
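
As a purely illustrative reading of Table 1, the sketch below mimics the kind of daily routine described for ALICE: collect the day's bid notices, test typologies (possible inconsistencies), and summarize the hits for an e-mail or panel. Every class, function, field, and typology name here is hypothetical; the actual Comprasnet integration and the TCU's typologies are not disclosed in this paper.

```python
# Hypothetical sketch of an ALICE-like daily batch: collect notices, test typologies,
# report hits. Data, checks, and names are invented placeholders.
from dataclasses import dataclass

@dataclass
class BidNotice:
    agency: str
    description: str
    estimated_value: float
    deadline_days: int    # days between publication and bid opening

def typology_short_deadline(n: BidNotice) -> bool:
    # Illustrative check: unusually short window for bidders to respond.
    return n.deadline_days < 8

def typology_vague_description(n: BidNotice) -> bool:
    # Illustrative check: object description too short to define what is being bought.
    return len(n.description.split()) < 3

TYPOLOGIES = {
    "short deadline": typology_short_deadline,
    "vague description": typology_vague_description,
}

def run_daily_batch(notices):
    """Return a list of (notice, typologies hit) for the day's collected notices."""
    alerts = []
    for n in notices:
        hits = [name for name, check in TYPOLOGIES.items() if check(n)]
        if hits:
            alerts.append((n, hits))
    return alerts

if __name__ == "__main__":
    todays_notices = [
        BidNotice("Ministry X", "IT services", 1_200_000.0, 5),
        BidNotice("Agency Y", "acquisition of 200 office chairs", 90_000.0, 15),
    ]
    for notice, hits in run_daily_batch(todays_notices):
        # In ALICE's case, a summary like this would feed the daily e-mail and the panel.
        print(f"[ALERT] {notice.agency} - {notice.description}: {', '.join(hits)}")
```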

4 METHODOLOGY AND DATA COLLECTION

To inductively explore the use of the AI systems in the auditors' daily practice, we conducted a case study (Modell, 2005; Bryman, 2012) in the context of a public organization. The research draws on information from the Chief Data Officer, three IT Developers (of the AI systems) and five TCU Audit Managers, selected by purposive sampling (Rapley, 2014; Patton, 2014) through prior contact by institutional e-mail. An online survey questionnaire (Evans & Mathur, 2005) was sent to the TCU auditors who work in typical public procurement control units.

Protocols for both the interviews and the questionnaire were developed (Roulston, 2014; Gepp, Linnenluecke, O’Neill, & Smith, 2018). For the interviews, we first identified the key actors involved in the development of the AI systems to learn more about the systems and how they assist control actions within the organization.

The data collection occurred from March to May 2019. Evidence was collected via semi-structured interviews (Bryman, 2012). Face-to-face interviews were held at the organization’s facilities whenever possible, and otherwise through Skype, allowing better conditions for the interviewee. This flexibility allowed the researchers to “reach key informants and increase participation” (Janghorban, Roudsari & Taghipour, 2014, p. 1; Deakin & Wakefield, 2013).

Due to their social position and expertise, these participants (the Chief Data Officer, IT Developers and TCU Audit Managers) are considered elite professionals in their field (Empson, 2017). The interviews lasted one hour on average and were partially transcribed using NVivo 12® software, as the intention was to preserve only the passages directly related to the research issue. Some field notes were also taken.

Interviews were conducted in Brazilian Portuguese, and some quotes were translated into English. Concerning translation in the analytic process, our concern was how to convey the original contextual meanings of the interviewees' narratives (Roulston, 2014).

All interview questions were open-ended so as to influence the respondents' answers as little as possible and avoid projecting the researchers' own values (Easterby-Smith, Thorpe & Jackson, 2008). To guarantee anonymity, the interviewees received identity codes, as shown in Table 2. The interview protocols in Brazilian Portuguese are shown in Appendix B and were applied separately to: (i) IT Developers (with 23 questions); and (ii) Audit Senior Managers (with six questions).

We use thematic analysis (Braun & Clarke, 2012) in a search for themes that might emerge as better describing the phenomenon. The transcripts were summarized separately by outlining the key points made by participants (noting individual comments) in response to the questions. During the initial coding of transcripts, inductive codes were assigned to segments of data that described themes observed in the text (Braun & Clarke, 2012). Themes within each data group (IT Developers and Audit Senior Managers) were clustered, with differences identified between the responses of groups. A table with their quotations in English was coded into themes using NVivo12® Software to structure the analysis (see Appendix A).
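
As a rough illustration of the clustering step described above (and not of the NVivo workflow itself), the short sketch below groups coded segments by respondent group and theme so that differences between groups can be compared; the groups echo the study's, but the codes and quotes are invented placeholders rather than actual study data.

```python
# Illustrative grouping of inductively coded segments by respondent group and theme.
# The (group, theme, quote) triples below are placeholders, not the study's data.
from collections import defaultdict

coded_segments = [
    ("IT Developers",         "lack of training",   "There was no formal training..."),
    ("Audit Senior Managers", "lack of training",   "There was no training."),
    ("Audit Senior Managers", "extra work",         "Most of the time it was an extra job..."),
    ("IT Developers",         "diffusion strategy", "Some presentation talks were held..."),
]

themes_by_group = defaultdict(lambda: defaultdict(list))
for group, theme, quote in coded_segments:
    themes_by_group[group][theme].append(quote)

# Compare how often each theme appears in each group's transcripts.
for group, themes in themes_by_group.items():
    print(group)
    for theme, quotes in themes.items():
        print(f"  {theme}: {len(quotes)} segment(s)")
```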

Some video content analyses (Banks, 2007; Christianson, 2016) were also carried out using videos available on YouTube: a thirty-minute lecture delivered by the Chief Data Officer about the diffusion strategy and a one-hour recorded lecture given by a TCU Audit Manager on AI technology tools at a national congress were used to collect additional information about the systems and the organizational strategies.

Table 2:
List of interviews
Interview length Date Nature Interviewee
1 (01:34:00) March 19th, 2019 Face to face IT Developer 1
2 (01:34:00) March 19th, 2019 Face to face IT Developer 2
3 (01:29:00) March 20th, 2019 Face to face IT Developer 3
4 (00:22:56) April 4th, 2019 Skype Audit Senior Manager 1
5 (01:42:28) April 4th, 2019 Face to face Audit Senior Manager 2
6 (00:40:00) April 15th, 2019 Skype Audit Senior Manager 3
7 (01:10:00) April 15th, 2019 Skype Audit Senior Manager 4
8 (00:30:00) April 30th, 2019 Skype Chief Data Officer
9 (00:38:33) May 14th, 2019 Face to face Audit Senior Manager 5

Source: Research Data.

Note: Chief Data Officer: a person in charge of a department responsible for advanced data analysis, prospecting of AI methods and technologies into new products, business, and services.

An online survey questionnaire (Evans & Mathur, 2005) was administered to the TCU auditors who work directly in typical public procurement control units. Other auditors were not considered because they were not working specifically in the functions under study. To increase validity, we conducted a pre-test of the questionnaire (Krosnick, 2018). Four auditors checked the pre-test questionnaire in advance and made suggestions for improvements that were taken into consideration; these four auditors were not included in the final administration. The pre-test provided feedback on specific instrument design issues. For example, pilot participants provided examples of the AI systems used, and we made minor wording changes to ensure that respondents were able to distinguish between each type of AI system. Further, pilot participants reviewed the instrument to ensure that the wording taken from the audit standards was clear.

Using the SurveyMonkey® web platform, the auditors answered 17 questions. A total of 320 questionnaires were sent through institutional e-mail, and 60 valid responses were received from geographically different regions of Brazil, a response rate of approximately 19%. The questionnaires were collected from March 28th to April 26th, 2019. The sample is shown in Figure 1.
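
A quick arithmetic check of the figures reported above (only the 320 questionnaires sent and the 60 received come from the text):

```python
# Response-rate check: 60 completed questionnaires out of 320 sent.
sent, received = 320, 60
print(f"Response rate: {received / sent:.1%}")   # 18.8%, reported as approximately 19%
```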

The questionnaire protocol (see Appendix C), in Brazilian Portuguese, was divided into three parts. The first part contained questions about the respondents' general knowledge of the systems, their skills, and their professional background. In the second part, respondents were asked about their use of the technologies, indicating frequency of use (e.g., daily, twice a week, three times a week) and possible bottlenecks. The third and last part contained questions about which technological tools are used effectively and about the effects of technology on the auditors' daily practices.

Some questions had an open field (e.g., "others") (Krosnick, 2018), where the auditors could comment freely on the subject. We then used methodological triangulation (Modell, 2005) to combine the questionnaire data with the data obtained from the interviews and strengthen our research.

Figure 1: Sample questionnaire

5 FINDINGS AND DISCUSSIONS

Our findings are organized into three themes to better understand the AI systems' theorization process. The quotations representing each theme are shown in Appendix A; some are highlighted in the text for illustration. These induced themes serve as the basis for the subsequent analyses and discussions.

Theorizing digital innovation emergence

The first thing observed in talking with the IT Developers and planners was the construction of justifications for the development of the technological solutions. The implementation of the AI systems occurred due to needs identified by some auditors directly linked to control activities at the TCU. Given the lack of human resources, the need to enhance internal processes justified the development and implementation of the AI systems in the organization.

Before the implementation of the systems, according to the Audit Managers, the auditors had to await some external representation (formal complaint) regarding public biddings that had already occurred (and sometimes been concluded). With the emergence of these new technologies, they were able to anticipate such representations (IT Developer 1; Audit Managers 1, 2 and 3). This meant, among other advantages, gains in time, efficiency, and cost savings.

[before the use of AI tools] It was very intuitive. Our job was more reactive; we kept waiting for the bidder company to make an official representation, to bring the problem to us (Audit Manager 2).

ALICE was already in use as an AI system at the CGU. Given the range of possibilities the tool could bring to the organization and, according to one of the Audit Managers, its low usage at the CGU, the idea was presented to the Logistics Procurement Secretariat (SELOG), which supported its implementation at the TCU. This step started in 2015, and in May 2016 ALICE was transferred to the TCU.

In 2015, the TCU conducted some efforts to change their workflow and go beyond a conventional “pay and chase” model of surveillance to a more preventive action to avoid fraud and corruption through the use of data analytics (Audit Manager 3).

ADELE arose from the developers' intention to increase the accuracy of the verification of competitiveness among bidders within the Federal Public Administration. During an external audit, one of ADELE's developers performed an extensive data analysis using spreadsheets; he verified that the results were valid for the analyses, but producing them demanded much time and effort. The search for efficiency was one of the justifications presented for the development of the system (IT Developer 3).

SOFIA is a virtual assistant that checks for errors and suggests changes in the texts of managers and auditors. The Chief Data Officer noted the potential that could be achieved by incorporating cognitive technology and began the project in the first half of 2016. For example, if the auditor is working on a text that proposes a penalty against a company, SOFIA can indicate whether there are sanctions against the company or some previous punishment at the TCU, and it points out whether the company has other contracts with the public administration (Audit Manager 5).
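
To illustrate the kind of assistance described for SOFIA, the sketch below scans an auditor's draft for company identifiers (formatted Brazilian CNPJ numbers) and checks them against lookup tables. The regular expression matches the standard CNPJ format; the sanction and contract records, the draft text, and the lookup logic are invented placeholders and do not represent the TCU's databases or SOFIA's actual implementation, which runs as a Microsoft Word macro.

```python
# Hypothetical SOFIA-like check: find company identifiers in a draft and report
# prior sanctions and other contracts. All records below are invented placeholders.
import re

CNPJ_PATTERN = re.compile(r"\b\d{2}\.\d{3}\.\d{3}/\d{4}-\d{2}\b")

SANCTIONS = {"12.345.678/0001-90": "suspended from bidding (2018)"}
CONTRACTS = {"12.345.678/0001-90": ["Contract 042/2017 - Ministry X"]}

def review_draft(text: str):
    """Return notes about every company identifier found in the auditor's draft."""
    notes = []
    for cnpj in sorted(set(CNPJ_PATTERN.findall(text))):
        if cnpj in SANCTIONS:
            notes.append(f"{cnpj}: prior sanction on record - {SANCTIONS[cnpj]}")
        for contract in CONTRACTS.get(cnpj, []):
            notes.append(f"{cnpj}: other contract with the administration - {contract}")
    return notes

draft = "We propose a penalty against the company 12.345.678/0001-90 for overpricing."
for note in review_draft(draft):
    print(note)
```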

These three initiatives were recognized in 2017 and 2018 with an internal award (Prêmio Reconhe-Ser) [ii], one of the TCU's recognition initiatives. The awards are organized into three categories: innovative work, outstanding work (external control or governance and management categories), and innovative ideas. The winners also give presentations to disseminate experiences and promote knowledge within the organization. The awards event is one way of conferring legitimacy on the AI-based systems.

MONICA arose from the need to quickly visualize data such as the contractor's view, the most contracted suppliers and the most used service types. Initial training was conducted with those who volunteered to operate the system. Finally, in 2018, the TCU's minister acknowledged [iii] the use of Artificial Intelligence-based systems in audit processes, but their use has not become mandatory.

The lack of AI systems diffusion

When theorization is weak, the diffusion of new technologies is compromised. Despite criticisms of the lack of theorization and suggestions for improvements to the systems, all the managers agreed that the systems had brought many improvements, especially to the TCU's workflow: what was previously done reactively started to be carried out proactively, with significant savings of time and resources.

Even so, Audit Managers do not encourage auditors to use the systems, which results in a certain lack of knowledge among auditors about some of the tools. “Even some managers are unaware of some of them” (Audit Manager 2), which may be a sign of weak theorizing resulting in weak diffusion.

Audit managers revealed a lack of system diffusion by IT Developers. They understand that training would be necessary, which could increase the use and applicability of the tools. The IT Developers also acknowledge the lack of training.

It [TCU] should diffuse the domain definition of each [IA Artifacts], i.e., to what types of actions and circumstances they are useful and to what extent. Disseminate also the risks involved as well as their interactions with existing systems such as e-TCU [procedural management system], once this is defined, one can insert them into the priority list and customize the training (questionnaire respondent).

Nonetheless, the Chief Data Officer confirmed that the absence of training was intentional and part of an internal strategy. According to him, the systems must be intuitive and friendly, and any need for training would be an indication that they are not meeting organizational needs: the AI systems should be intuitive enough to be used without previous training.

It has to be like the Netflix interaction. Have you ever been trained to use Netflix? The auditors don’t need to be trained to use the products; if they do, something is wrong. If the solutions use complex algorithms based on machine learning and cognitive processing, what should matter to the auditor are the results that are obtained from those products and how reliable they are for each purpose (Chief Data Officer).

This bet, however, proved risky: persuading managers and auditors that their work is more valuable when they use the AI systems than when they do not has been a constant process of convincing.

The Chief Data Officer pointed to the organization's initiative of offering a Master of Business course in Data Analysis to the TCU's auditors and mid-level technicians. The idea is to “sensitize and train the auditor to the potential of using data to improve their control activities” (Chief Data Officer). Nevertheless, the course is intended for professionals who are already familiar with databases and have notions of programming, which may limit the pool of entrants or dampen some novice auditors' interest. The number of seats is also limited to 28 (two of which are available to CGU auditors). There are also meetings [iv] and lectures given by the Chief Data Officer and some IT Developers, but the subject is treated superficially, and the Audit Managers’ participation is not mandatory.

Among the Audit Managers interviewed, ALICE (3 of 5) is the most used system, followed by SOFIA (2 of 5), ADELE (1 of 5) and MONICA (1 of 5). However, when questioned about the benefits that the use of the systems could bring to the workflow, many surveyed auditors stated that the systems did not bring changes or implications to their work, as shown in Figure 2.

Figure 2: AI Systems' impact on the auditors' workflow

If Audit Managers have no incentive to use the AI systems and believe the systems make no difference in their day-to-day practices, they will not encourage the auditors below them in the hierarchy to use them either.

Digital change among organizational members

The data revealed weak internalization and control of the AI initiatives in the organization by the IT Developers and the Chief Data Officer. This could be explained by the lack of coordination among departments, changes in organizational strategies, or the low understanding of AI systems by auditors shown in the questionnaire answers. Other auditors resist the use of AI systems because of their negative perception of the technology. One respondent reported:

I confess that I do not use these tools. Perhaps because of the lack of disclosure, perhaps because of [my own] lack of interest in researching the models of tools in the workplace (questionnaire respondent).

The use of AI-based systems typically requires some computer and data skills; auditors need at least a basic knowledge of data management. This suggests a need for general training for auditors to use AI-based systems, with a corresponding cost in time and money, and this cost may outweigh the anticipated benefits of adopting the technologies. Surprisingly, one of the Audit Managers did not even know that one of the AI-based systems existed.

To be honest, I do not even know what ADELE is. What is ADELE? (Audit Manager 2).

ALICE, ADELE, MONICA, and SOFIA are put aside by the auditors, who still follow old practices such as working with text editors and spreadsheets, as shown in Figure 3.

Figure 3: Most used tools by the auditors (in order of use)

One of the managers pointed to the possibility that auditors perceive “the systems as extra work, with no solid benefits” (Audit Manager 2). Another respondent mentioned that:

From the results that we already presented, we would like it [ALICE] to be more used by the Court, and often it does not happen because the staff has a goal to fulfill, and ALICE is not exactly at the staff's target (IT Developer 1).

Other IT Developers indicated that the low acceptance of ALICE could be related to the overload of e-mails sent to the Audit Managers and auditors, a bottleneck, and to the fact that:

Once you have information available, you may be penalized if you do not use this information (IT Developer 2).

Based on the above findings, it is clear that the systems were designed and implemented to enhance organizational performance and, consequently, to act proactively against fraud and corruption. However, without their practical use, the digital change envisioned by the IT Developers may not occur.

6 CONCLUDING REMARKS

This study revealed how a Supreme Audit Institution is using AI capabilities to strengthen its surveillance against possible irregularities of fraud and corruption in the public sector. The present case-based research suggests that internal theorization is influenced by the way these artifacts arrived in the organization and by whether or not they seemed useful to individuals.

This research showed that auditors confer low legitimacy on the systems, reflected in the slow integration of AI-based systems into their daily practices, which can hinder fraud and corruption surveillance with the use of digital innovations. However, the auditors' narratives indicate that, in general, they have begun to use big data analytics, although at a slow pace.

The bottlenecks to the use of AI-based systems in the organization may restrict the potential of anticorruption control, requiring more active institutional work by key actors and attention to legitimizing these technological tools among organizational members. AI-based systems usage at the TCU is voluntary; additional research could investigate whether our findings differ in settings where usage is mandatory rather than voluntary.

A well-defined communication strategy could help to align the interests and expectations of all the organizational members, especially when it comes to discussing the opportunities that AI can bring to a particular project, meaning that, behind all the technical complexities of implementing AI initiatives, key actors should always transmit the values a project ultimately pursues.

Moreover, leveraging auditors' insights could ensure the use of these digital innovations in problem solving, increasing the quality and efficiency of their workflow. Different strategies for the use of these ‘Artificial Ladies’ would be dependent on their sensemaking process to redefine the value, services, required skills and future vision of auditing.

We focused on a single entity and do not intend to generalize the results to other organizations. However, extending the study to how other organizations are treating the emergence of these digital transformations in their workflow would also provide an excellent field for future studies. These further studies would be beneficial by pointing out new risks and challenges of the use of such technological innovations to account for what AI can and cannot do and by opening the smart algorithms' black box of active and/or latent errors, which humans may have difficulty peeking inside.

REFERENCES

  1. (). . . London: Sage Publications. .
  2. , (). Institutionalization and Structuration: Studying the Links between Action and Institution. Organization Studies 18(1), 93-117.
  3. , , (). The impact of information technology on the audit process: an assessment of the state of the art and implications for the future. Managerial Auditing Journal 16(3), 159-164.
  4. , (). Does organizational design of supreme audit institutions matter? A cross-country assessment. European Journal of Political Economy 27(2), 215-229.
  5. ..
  6. , (). Computer-assisted audit tools and techniques: analysis and perspectives. Managerial Auditing Journal 18(9), 725-731.
  7. , , , , , , , (). . . Washington, DC, US: American Psychological Association. .
  8. (). . (4th ). London: Oxford University Press. .
  9. , , , .. São Luiz: JIM.
  10. , , , , , (). Crime data mining: A general framework and some examples. Computer 37(4), 50-56.
  11. (). Mapping the terrain: The use of video-based research in top-tier organizational journals. Organizational Research Methods
  12. (). Contextualizing the IT artifact: towards a wider research agenda for IS using institutional theory. Information Technology & People 22(1), 63-77.
  13. , (). . . MIT Sloan Management Review (Spring). .
  14. , (). . . Qualitative Research. .
  15. , (). The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review 48(2), 147-160.
  16. , , , , , , , , , , , , , .. 181-187.
  17. (). . . Washington, DC: World Bank. .303-322.
  18. , , (). . . London: SAGE Publications. .
  19. (). Elite interviewing in professional organizations. Journal of Professions and Organization 5(1), 58-69.
  20. , (). The value of online surveys. Internet research 15(2), 195-219.
  21. , (). The future of employment: How susceptible are jobs to computerisation. Technological Forecasting and Social Change 114, 254-280.
  22. , , , (). Big data techniques in auditing research and practice: Current trends and future opportunities. Journal of Accounting Literature 40, 102-115.
  23. ..
  24. , (). Understanding radical organizational change: Bringing together the old and the new institutionalism. Academy of Management Review 21(4), 1022-1054.
  25. , , (). Theorizing change: The role of professional associations in the transformation of institutionalized fields. Academy of Management Journal 45(1), 58-80.
  26. , (). Institutional entrepreneurship in mature fields: The big five accounting firms. Academy of Management Journal 49(1), 27-48.
  27. , , , , (). Institutional complexity and organizational responses. Academy of Management Annals 5(1), 317-371.
  28. , , (). Digital Innovation and transformation: An institutional perspective. Information and Organization 28(1), 52-61.
  29. (). . . .
  30. , , (). Skype interviewing: The new generation of online synchronous interview in qualitative research. International Journal of Qualitative Studies on Health and Well-being 9(1), 24152.
  31. , , (). The case for process mining in auditing: Sources of value added and areas of application. International Journal of Accounting Information Systems 14(1), 1-20.
  32. , (). Machine learning: Trends, perspectives, and prospects. Science 349(6245), 255-260.
  33. (). Evolving challenges for supreme audit institutions in struggling with corruption. Journal of Financial Crime 15(1), 60-70.
  34. , (). The Emergence of Artificial Intelligence: How Automation is Changing Auditing. Journal of Emerging Technologies in Accounting 14(1), 115-122.
  35. (). . . Palgrave Macmillan. .439-455.
  36. , (). Institutions and Institutional Work. In: The Sage Handbook of Organization Studies 215-254.
  37. , , (). Some like it non-financial… Politicians’ and managers’ views on the importance of performance information. Public Management Review 14(7), 903-922.
  38. , (). New practice creation: An institutional perspective on innovation. Organization Studies 28(7), 993-1012.
  39. , (). Examining the adoption of computer-assisted audit tools and techniques: Cases of generalized audit software use by internal auditors. Managerial Auditing Journal 29(4), 327-349.
  40. , , (). Political and Institutional Checks on Corruption. Comparative Political Studies 42(9), 1217-1244.
  41. , (). Theorization as institutional work: The dynamics of roles and practices. Human Relations 69(8), 1669-1708.
  42. , (). Institutionalized Organizations: Formal Structure as Myth and Ceremony. American Journal of Sociology 83(2), 340-363.
  43. , , (). Artificial intelligence for the public sector: opportunities and challenges of cross-sector collaboration. Philosophical Transactions of the Royal Society A: Mathematical. Physical and Engineering Sciences 376(2128), 20170357.
  44. (). Triangulation between case study and survey methods in management accounting research: An assessment of validity implications. Management Accounting Research 16, 231-254.
  45. (). Alan Turing and the development of Artificial Intelligence. AI Communications 27(1), 3-10.
  46. , (). Event attention, environmental sensemaking, and change in institutional logics: An inductive analysis of the effects of public attention to Clinton's health care reform initiative. Organization Science 21(4), 823-841.
  47. (). Strategic Responses to Institutional Processes. The Academy of Management Review 16(1), 145.
  48. (). . . Paris: OECD Publishing. .
  49. , , , , (). Fraud Detection and Prevention Methods in the Malaysian Public Sector: Accountants’ and Internal Auditors’ Perceptions. Procedia Economics and Finance 28, 59-67.
  50. (). . . SAGE Publications. .
  51. , , , (). Finding needles in a haystack: Using data analytics to improve fraud prediction. The Accounting Review 92(2), 221-245.
  52. , (). A multi-agent data mining system for cartel detection in Brazilian government procurement. Expert Systems with Applications 39(14), 11642-11656.
  53. , (). Impact of Supreme Audit Institutions on the Phenomenon of Corruption: An International Empirical Analysis. Journal of Public Governance and Policy: Latin American Review 2(3), 34-59.
  54. (). . . London: The SAGE Handbook of Qualitative Data Analysis, Sage. .49-63.
  55. , , , , , , (). Sais work against corruption in Scandinavian, South-European and African countries: An institutional analysis. The British Accounting Review 100842.
  56. (). Analysing interviews. The SAGE Handbook of Qualitative Data Analysis 297-312.
  57. (). Beth Simone Noveck: Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful. Washington: Brookings Institution Press. Journal of Media and Communication Research 28(52), 4.
  58. (). The pillars of the data analysis strategy and consumption of information at the TCU. Federal Court of Accounts Journal 48(137), 13-17.
  59. , , (). From Practice to Field: A Multilevel Model of Practice-Driven Institutional Change. Academy of Management Journal 55(4), 877-904.
  60. , , , , (). How and where is artificial intelligence in the public sector going? A literature review and research agenda. Government Information Quarterly 101392.
  61. (). . . Veja online. .
  62. , (). Institutional conditions for diffusion. Theory and Society 22(4), 487-511.
  63. (). Managing legitimacy: strategic and institutional approaches. Academy Management Review 20(3), 571-560.
  64. , , , (). The Social Role of the Supreme Audit Institutions to Reduce Corruption in the European Union-Empirical Study. Revista de Cercetare si Interventie Sociala 52, 217-240.
  65. (). Technological innovations in government auditing. Federal Court of Accounts Journal 48(137), 7-12.
  66. , (). Institutional sources of change in the formal structure of organizations: The diffusion of civil service reform, 1880-1935. [Electronic version]. Administrative Science Quarterly 28, 22-39.
  67. , , , , (). . . London: SAGE. .
  68. , ..
  69. (). The use of audit software in fraud detection. Journal of Financial Crime 2(4), 305 - 310.
  70. , , (). An experimental investigation of the effects of artificial intelligence systems on the training of novice auditors. Managerial Auditing Journal 15(6), 306-318.
  71. , , , (). Organizing for Innovation in the Digitized World. Organization Science 23(5), 1398-1408.
  72. , , , (). . . Expert Systems with Applications. .
Our findings contribute to governmental agencies, researchers, and practitioners by helping to develop and nurture digital literacy within organizations, supporting educational diffusion and local development, and strengthening cross-organizational networks to improve the competencies of organizations and practitioners in the rapidly developing field of digital innovations.

NOTES

[i] The Brazilian Office of the Comptroller General (CGU) is an agency of the federal government in charge of internal control activities, public audits, corrective and disciplinary measures, corruption prevention and combat, and the coordination of ombudsman activities. The CGU is also in charge of technically supervising all the departments making up the internal control system, the disciplinary system, and the ombudsman units of the federal executive branch, providing normative guidance as required. http://www.cgu.gov.br/

[ii] See, for example: https://www.cgu.gov.br/noticias/2018/04/servidores-sao-reconhecidos-em-premio-do-tcu.

[iii] The document is available at: http://www.tcu.gov.br/Consultas/Juris/Docs/CONSES/TCU_ATA_0_N_2018_19.pdf

[iv] One of the initiatives is the Seminar on Data Analysis in Public Administration promoted by the TCU: https://www.youtube.com/watch?v=mHZIARyDTfk

Appendix A

Level
IT Developers Managers Auditors
1. Theorizing digital innovation change
“There was no such thing [no training], except for the Directors' Meeting, called Coffee with Analytics, but it was not mandatory. It was more for spreading what the tool is, what it could do. There was no training. Some ALICE presentation talks were held at TCU events”. (IT developer 2) “There was no training.” (managers 1, 2, 3, 4 and 5) “There was only initial dissemination of the tools, but there is no continuous stimulus in the work environment for the use of these tools” (questionnaire respondent)
“There was no training” (Manager 3, talking about MONICA and ADELE)
“I guess it could have been [training]. Not to teach us how to use the systems, but a campaign to promote their use.” (Manager 4)
“It [TCU] should diffuse the domain definition of each [IA Artifacts], i.e. to what types of actions and circumstances they are useful and to what extent. Disseminate also the risks involved as well as their interactions with existing systems such as e-TCU [procedural management system], once this is defined, one can insert them into the priority list and customize the training.” (questionnaire respondent)
“No [talking over training], and there is no internal regulation in TCU on what to do with these reports, each secretary does it differently.” (manager 2) 90% of the survey respondents stated that there was no training.
“It has to be like the Netflix interaction. Have you ever been trained to use Netflix? The auditors don’t need to be trained to use the products; if they do, something is wrong. If the solutions use complex algorithms based on machine learning and cognitive processing, what should matter to the auditor are the results that are obtained from those products and how reliable they are for each purpose.” (Chief Data Officer)
Level
IT Developers Managers Auditors
2. The lack of AI systems diffusion
“[before the use of AI tools] It was very intuitive. Our job was more reactive; we kept waiting for the bidder company to make an official representation, to bring the problem to us.” (Manager 2, emphasis added) “I am not aware of the disclosure or use of these systems in my work unit.” (questionnaire respondent)
“[before the use of AI tools] It was more repressive. With ALICE, it became more preventive.” (Manager 3, emphasis added) “I confess that I do not use these tools. Perhaps because of the lack of disclosure, perhaps because of [my own] lack of interest in researching the models of tools in the workplace.” (questionnaire respondent)
“Oh, it was terrible [before]. I had to do everything manually by checking thru the Federal Official Journal [Journal of official government acts].” (manager 1, emphasis added)
“I've never heard anyone say ‘Wow, what a huge difference SOFIA makes’ [ironic tone].” (Manager 1, emphasis added)
“I've never used ADELE, but I know what it is” (Manager 4)
“To be honest, I do not even know what ADELE is. What is ADELE?” (Manager 2)
3. Digital change internalization among organizational members
“From the results that we already presented, we wanted it [ALICE] to be more used by the Court, and often it does not happen because the staff has a goal to fulfill, and ALICE is not exactly at the staff's target.” (IT developer 1, emphasis added) “Most of the time, it [ALICE] was an extra job [...] we worked with goals, and it has no goal, it doesn't count at all... it's a job that you do to the detriment of other jobs that you should be doing, but it's an important job [...] it should be part of the national unit plan workflow.” (manager 2, emphasis added)
“Since it was not something conceived as a product, it [talking about MONICA] turned out to be a sub product [...] Because it is not part of the work process, the person [the auditors] does not have to use [...] there are people who use it to have more productivity in their work. I think that's the difference.” (IT developer 3, emphasis added) “The department that developed the robots does not get in touch [with the managers] to try to find out how users are using it, in order to capture difficulties solving them.” (manager 1, emphasis added)
“Here at TCU, there was this fear of ... [overwhelming managers and auditors with ALICE's e-mails]. It's comfortable to say that you do not have the information, and you do not have the information. Then when you start to get the information, you kind of 'Okay, I'll be charged because I have the information and do nothing!' [...] We also worked internally on that.” (IT developer 2, emphasis added) “We don’t have contact with them [the IT developers]. There’s no institutional feedback.” (manager 4, emphasis added)

Appendix B

Interview protocols (in Brazilian Portuguese)
Interview protocol applied to the systems' developers
Questions Specific Issues / Observations
Qual a origem dos Sistemas Alice, Mônica e Adele? Esses sistemas foram importados de outros órgãos? Caso positivo, qual o motivo? Há compartilhamento atual com esses órgãos de “origem”?
Qual o interesse estratégico do TCU na implantação desses sistemas? Foi uma demanda de natureza técnica (solicitação das unidades técnicas) ou uma determinação política (pressão de órgãos parceiros e/ou presidência/autoridades do TCU)? Quais as ferramentas utilizadas antes da criação dos sistemas?
A concepção dos sistemas contou com a participação de unidades técnicas? De quais unidades do TCU (Sede e Estados)?
Qual foi o tempo necessário para desenvolvimento e utilização dos sistemas? É possível mensurar o investimento realizado?
Como ocorre a operacionalização/utilização dos Sistemas Alice, Mônica e Adele? Qual a interface para os usuários dos Sistemas (unidades técnicas ou parceiros).
Houve divulgação dos Sistemas e capacitação para sua utilização (seja por demanda das unidades ou por uma necessidade previamente detectada)?
Há solicitações de ajustes/melhorias acerca dos sistemas pelos usuários?
Já receberam algum apontamento pelos auditores de irregularidades que não foram identificados pelos sistemas?
Qual o sistema mais utilizado? Por quê?
Existe alguma estratégica para desenvolvimento tecnológico em controle de auditoria?
Como são feitas as atualizações dos sistemas? Existem demandas para atualizações?
Quais os resultados potenciais e efetivos dos Sistemas? A utilização pelos usuários ocorreu como esperado?
As ações de controle utilizando os sistemas ocorrem como esperado?
É possível mensurar os ganhos de produtividade e eficácia do controle com os sistemas?
Se os resultados estão aquém das expectativas, quais os possíveis entraves dessa situação.
Como o(a) sr(a) vê a atuação dos sistemas Alice, Adele e Monica no sentido de auxiliar o trabalho dos auditores no combate a fraudes e corrupção?
Interview protocol applied to the Audit Managers
Como o(a) sr(a) vê os avanços em análises e inteligência artificial sendo usados em controle de auditoria?
Dentro do plano de desenvolvimento da unidade existe alguma estratégia de desenvolvimento de tecnologias?
Quais barreiras/dificuldades foram encontradas quando da utilização dos sistemas na unidade técnica?
Houve treinamento prévio para a utilização dos sistemas?
A forma de apresentação/disponibilização das informações permite um bom entendimento acerca do que precisa ser feito?
Você identificou alguma falha nos sistemas? E o que foi feito na tentativa de resolver esse caso?

Appendix C

Questionnaire protocol - in Brazilian Portuguese
Questionário aplicado aos auditores
Questões Categoria da resposta Alternativas de respostas
Parte I - Conhecimento geral sobre os sistemas e habilidades profissionais
1. Quais das ferramentas abaixo você usa no dia-a-dia do seu trabalho? Múltipla escolha + Sofia
+ Alice
+ Monica
+ Adele
+ Excel
+ Geocontrole
+ Sistema de Análise de Riscos (SAR)
+ DGI
+ LabContas
2. Considerando seu conhecimento geral, como você descreve seu entendimento sobre Inteligência Artificial? Escala Likert 0 nenhuma habilidade
1 pouca habilidade
2 alguma habilidade
3 habilidades suficientes
4 habilidades excelentes
5 sou perito em IA
Parte II - Uso das tecnologias e possíveis gargalos
3. Quais implicações as ferramentas Alice, Adele, Monica e/ou Sofia trouxeram para seu trabalho/organização? Múltipla escolha + Alterou a natureza do trabalho
+ Criou novas oportunidades para desenvolver meu trabalho
+ Alterou a forma que lido com minhas tarefas
+ Possibilitou mais tempo para processar as análises
+ Outros (aberto para escrita)
4. Na sua opinião, quais obstáculos/barreiras são observados no uso das ferramentas Alice, Adele, Monica e Sofia? Múltipla escolha + Falta de treinamento para operar as ferramentas
+ Atualização insuficiente (tornaram-se obsoletos)
+ Infraestrutura insuficiente (por exemplo, computadores, rede, acesso à Internet)
+ Aumento do volume da demanda de trabalho
+ Não foram identificadas barreiras
+ Outros (aberto para escrita)
5. Qual a razão para uso do Alice como ferramenta de trabalho? Múltipla escolha + É uma normativa interna do órgão
+ Traz maior celeridade ao meu trabalho
+ É essencial às minhas atividades
+ Não interfere nas minhas análises
+ Não utilizo essa ferramenta
6. De que forma você utiliza tecnologia em procedimentos de controle de auditoria? Múltipla escolha + No planejamento das ações de controle
+ Na execução de controle
+ Na confecção de relatórios de controle
+ Outros (aberto para escrita)
7. Qual sua frequência de acesso aos sistemas de inteligência artificial em suas atividades? Única opção + Diariamente
+ Duas vezes por semana
+ Três vezes por semana
+ Outros (aberto para escrita)
8. Houve treinamento prévio para as ferramentas de análise por recursos de inteligência artificial? Única opção + Sim
+ Não
Questionário aplicado aos auditores
Questões Categoria da resposta Alternativas de respostas
Parte III - Identificação das formas como as ferramentas são utilizadas
9. Você recebe informações do ALICE ("Informe de Licitações" e "Informe de Atas") via e-mail da sua Unidade Técnica? Única opção + Sim
+ Não
10. As informações produzidas pelas ferramentas abaixo (Alice, Monica, Adele e Sofia) são apresentadas de forma simples, e permitem o bom entendimento da demanda e desenvolvimento das minhas tarefas? Escala Likert + Totalmente em desacordo
+ Em desacordo
+ Não concordo nem discordo
+ De acordo
+ Totalmente de acordo
+ Não sei informar
11. Você acessa informações do ALICE (Aba "Ata/Edital") via sistema DGI Consultas? Única opção +Sim
+ Não
12. O uso das ferramentas Alice, Adele, Monica e Sofia aumentou a qualidade do seu trabalho Escala Likert + Discordo totalmente
+ Discordo em parte
+ Indiferente
+ Concordo em parte
+ Concordo totalmente
13. Qual o prazo de devolução dos processos relacionados às ferramentas Alice, Adele e Monica? Única opção + Depende da minha agenda de trabalho
+ O prazo é definido pelo meu superior
+ Dentro do prazo definido no protocolo de atendimento/instrução do órgão/unidade
+ Não há disciplinamento acerca de prazos
+ Não recebo processos das ferramentas Alice, Adele e Monica.
14. Você já identificou alguma irregularidade que não foi apontada pela ferramenta Alice? Única opção + Sim
+ Não
15. Neste caso, foi feita notificação ao pessoal de TI a respeito? Única opção + Sim
+ Não
16. Foi tomada alguma providência em relação ao caso? Única opção + Sim
+ Não
17. Qual sua formação? Única opção + Ciência da Computação
+ Matemática/ Física
+ Engenharia
+ Negócios (Administração, Contabilidade, Economia)
+ Direito
+ Outra (aberto para texto)