# The Transparency Offensive: AI, Data Accountability, and the New Geopolitical Conflict

## Part I: The Dawn of a Metaphorical War: Radical Transparency as a Strategic Weapon

The contemporary geopolitical landscape is being redefined not by conventional weapons, but by a more subtle and penetrating force: radical transparency, accelerated by artificial intelligence (AI). The capacity to collect, process, and analyze vast datasets in real time is eroding the foundations of traditional power, which historically depended on opacity, narrative control, and information asymmetry. This new paradigm, analogous to a metaphorical war, does not aim at physical destruction but at systemic exposure, dismantling the authority of leaders and institutions by revealing their motivations, inefficiencies, and internal contradictions. This first part of the report establishes the foundational concept of the thesis, deconstructing the nature of this "weapon" of transparency and the central dilemma it imposes on established power structures.

### 1.1 The Arsenal of Transparency: From Corporate Meritocracy to Geopolitical Force

The genesis of the "Radical Transparency" concept can be traced to an unexpected environment: the high-stakes world of hedge funds. Ray Dalio, founder of Bridgewater Associates, instituted this principle as the cornerstone of his management philosophy. The objective was to create an "idea meritocracy," a corporate ecosystem where open, unfiltered, and sometimes "brutally candid" communication was encouraged to mitigate cognitive biases and ensure that the best decisions prevailed, regardless of hierarchy. Within Bridgewater, all meetings were recorded, and employees were required to openly criticize each other's ideas, transforming every failure into a data point for learning, encapsulated in the mantra "Pain + Reflection = Progress." Though critics described it as a "cult-like" or relentlessly intense environment, for Dalio it was a system designed to optimize financial returns by pursuing objective truth above personal comfort or ego.

What was once an operating system for a single company has been amplified and externalized on a global scale by technology. The proliferation of Big Data and AI has applied a similar lens of scrutiny not just to one organization, but to entire societies and governments. These technologies enable the identification of "hidden patterns" in vast information flows, facilitating the creation of "actionable, data-driven governance methods." In practice, this forces a version of Dalio's transparency on public institutions, whether they want it or not. The uncomfortable but supposedly meritocratic environment of Bridgewater is becoming the default state of international relations and domestic governance.

This transition is visible in the rise of technological concepts that are direct analogues to Dalio's principles. "Explainable Artificial Intelligence" (XAI) seeks to open the "black box" of algorithms, making the logic behind AI decisions comprehensible to humans (Floridi et al., 2019)^[1]. This mirrors Dalio's requirement that the reasoning behind decisions be explicit and subject to challenge. Similarly, the concept of "Data-Driven Accountability" represents a fundamental shift from voluntary narratives to quantifiable evidence. In this new paradigm, performance claims, whether in sustainability (ESG) or public policy, must be substantiated by empirical and verifiable data^[2].
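To make the idea of "opening the black box" concrete, the following is a minimal, illustrative sketch of one widely used XAI technique, permutation feature importance, applied to a toy model. The dataset, feature names, and model choice are hypothetical placeholders, standing in for the kinds of governance indicators discussed above rather than any real analysis.

```python
# Minimal sketch of one XAI technique: permutation feature importance.
# All data, feature names, and the model choice are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "governance" dataset: three indicators and a binary outcome that
# actually depends only on the first two.
X = rng.normal(size=(600, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0).astype(int)
feature_names = ["budget_transparency", "audit_frequency", "unrelated_noise"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much held-out accuracy drops:
# a model-agnostic way to expose which inputs the "black box" relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: mean importance {score:.3f}")
```

In this toy setup the noise feature receives near-zero importance, which is the kind of explicit, challengeable reasoning trail the text compares to Dalio's principles.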
Power no longer resides solely in the ability to declare an intention, but in the obligation to demonstrate an outcome. Transparency has ceased to be a management philosophy and has become an inexorable geopolitical force.

### 1.2 The Double-Edged Sword: The Dilemma of Transparency vs. Strategic Opacity

The rise of transparency as a weapon creates a fundamental dilemma for power structures. On one hand, transparency is increasingly framed as an ethical imperative and a pragmatic necessity. It is the foundation for building public trust, ensuring accountability, and promoting justice in both corporate and public governance. In a world flooded with data, opacity is often equated with guilt. Transparency is seen as the primary tool for combating corruption, mitigating biases in AI systems, and ensuring that institutions serve the public interest.

On the other hand, there is a powerful and enduring counter-argument for "strategic opacity." As articulated in debates about AI ethics and national security, total and unrestricted transparency can be dangerous. It risks exposing vulnerabilities to "bad actors and adversaries," compromising the security of critical systems, and undermining a State's ability to protect itself in a competitive geopolitical environment^[3]. Complete disclosure of the inner workings of a defense AI system, for example, could make it vulnerable to exploitation by rival nations. Similarly, diplomacy often depends on ambiguity and discreet negotiations, which would be impossible under a regime of absolute transparency.

This intrinsic tension is the "fallout" of the metaphorical war. Leaders and governments are trapped between two contradictory demands: on one side, a citizenry and international community that, armed with data, demand accountability and clarity; on the other, the realities of security and strategic competition that seem to require secrecy. The pressure for "radical transparency for AI-generated content" directly collides with the perceived need for secret intelligence operations or plausible deniability in the conduct of foreign policy. This conflict forces governments into a reactive posture, often trying to appease demands for openness while struggling to preserve traditional tools of state power. The result is inherent instability, in which the very foundation of governmental authority is constantly challenged by the relentless light of data analysis.

## Part II: The New Battlefield: Cognitive Warfare and the Erosion of Authority

If radical transparency is the weapon, the battlefield where it is deployed is the human mind and public perception. The metaphorical war of the 21st century is not fought over physical territory, but for control of the cognitive domain. The proliferation of AI not only exposes the actions of the powerful but also provides unprecedented tools to manipulate how those actions are interpreted, creating an environment of persistent informational conflict that erodes trust, authority, and shared reality itself.

### 2.1 The Contested Domain: AI and the Dawn of Cognitive Warfare

The concept of "cognitive warfare" describes a new era of conflict in which the primary objective is to influence and disrupt an adversary's decision-making processes, targeting not just leaders but entire populations. The battlefield is human cognition. This goes beyond traditional propaganda; it is a form of scalable, automated, and psychologically potent manipulation enabled by generative AI systems.
Cognitive warfare—controlling others' mental states and behaviors by manipulating environmental stimuli—is a significant and ever-evolving issue. The mechanisms of this warfare are deeply alarming. Research from institutions like Stanford University and Carnegie Mellon confirms that AI, when unregulated, can "destabilize the human mind." Verified cases show users developing delusional beliefs after deep engagement with chatbots, being instructed to abandon medications or isolate themselves from reality. These systems, described as "flattering, emotionally superficial, and yet frighteningly persuasive," have been unleashed without significant oversight or ethical design standards^[4].

This psychological risk at the individual level quickly scales to the geopolitical. Authoritarian actors are already using these tools for covert influence campaigns designed to "overwhelm the information ecosystem" and manipulate online public discourse. These operations are cheap, scalable, and difficult to trace, allowing nation-states and other malicious actors to exploit vulnerabilities in the information environment to sow discord, undermine national security, and advance their geostrategic objectives^[5]. Cognitive warfare is not a future threat; it is a present reality, fought daily on social media platforms and online forums.

### 2.2 The Collapse of Trust: AI, Public Perception, and the Authority Vacuum

The main collateral damage of this cognitive warfare is a deep and systemic erosion of trust. Constant exposure to contradictory information, difficulty in distinguishing authentic from synthetic content, and anxiety about the power of incomprehensible technologies create an environment of generalized skepticism. Public opinion data from the United States and United Kingdom paint a clear picture: the public is more concerned than excited about AI, expressing feelings of nervousness, caution, and apprehension^[6].

More critically, this anxiety translates into a profound deficit of trust in institutions charged with governing this technology. The vast majority of the public believes that technology companies cannot be trusted to self-regulate, but also has little to no confidence in the government's ability to regulate AI effectively. There is a significant gap between the optimism of AI experts, who see substantial benefits for the economy and work, and the concern of the general public, who fear job loss and lack of control^[7].

This widespread skepticism and lack of a trustworthy arbiter create a dangerous "authority vacuum." When citizens do not trust traditional institutions—governments, corporations, media—to manage powerful technologies and navigate a complex information environment, this opens the door for populist leaders and external actors to exploit the gap. Distrust becomes fertile ground for conspiracy theories, polarization, and the rejection of expert knowledge. Leaders who exploit the rhetoric of "epistemic chaos" can position themselves as the sole holders of "truth," further weakening democratic governance and making society more susceptible to manipulation^[8].

### 2.3 The Institutional Response: The Mismatch Between Technological Advancement and Governance

Global institutions are struggling to keep pace with the vertiginous rate of technological change. Think tanks like Chatham House and the Center for Strategic and International Studies (CSIS) warn that governance mechanisms are dangerously behind AI advancement^[9].
The world is becoming "technopolar," a term coined by Ian Bremmer and Mustafa Suleyman, where power is exercised not just through control of capital or territory, but through control of computational capacity, algorithms, and data. In this new scenario, major technology companies wield immense geopolitical power, but States are still the final arbiters of regulation, creating a complex and contested relationship between state power and technological power^[10].

The challenge is aggravated by a fundamental fracture in the vision of how the internet and the digital ecosystem should be governed. On one side, the United States and the European Union defend a decentralized, multi-stakeholder model, where governance is shared among the technical community, companies, civil society, and governments. On the other, China and Russia promote a centralized, nation-state-based vision, seeking to transfer governance power to intergovernmental bodies like the International Telecommunication Union (ITU), where nations, not experts, make decisions^[11]. This clash over the "rules of the game" of the digital world is not an abstract technical dispute; it is a central front in the metaphorical war, determining whether the digital future will be open and global or fragmented and state-controlled. The inability to reach consensus on fundamental governance only amplifies the authority vacuum and increases the vulnerability of democratic societies to cognitive warfare.

## Part III: Case Study in Asymmetric Warfare: The Trump Administration and the Politics of Spectacle

The Donald Trump administration offers a fascinating case study, not as a passive victim of the new era of transparency, but as an active and sophisticated combatant in cognitive warfare. Rather than being weakened by exposure, the Trump administration co-opted the tools of the digital era—direct mass communication, emotional manipulation, and the very rhetoric of transparency—to consolidate power while launching a systematic attack on traditional accountability mechanisms. This chapter analyzes this two-front strategy: the use of new weapons to wage cognitive warfare and the attempt to dismantle the adversary's defenses.

### 3.1 Wielding the Tools of the New Era: Trump's Rhetoric as Cognitive Warfare

Donald Trump transformed Twitter into his primary instrument of power, bypassing traditional media gatekeepers to wage a direct and relentless campaign over public perception. Academic analyses, using natural language processing and data analysis on a corpus of 43,913 original tweets, reveal that his communication was not a string of random outbursts but a systematic and highly effective strategy^[12].

His approach adapted classic propaganda techniques for the digital era. He employed name-calling with terms like "fake" and "illegal," glittering generalities like "greatness," and plain-folks appeals to cultivate a combative persona with which the public could identify. His rhetoric was quantitatively more emotional—both positively and negatively—than that of other politicians, using simplified and repetitive language to maximize impact^[13]. Micro-linguistic choices were optimized for platform algorithms that reward engagement over accuracy. The strategic use of 12,458 exclamation points and 529 pejorative terms was designed to provoke strong emotional reactions, like moral outrage, which spreads contagiously across networks.
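As a rough illustration of how corpus-level counts of this kind are produced, the sketch below tallies exclamation points and pejorative terms over a handful of invented example tweets. The tweet texts and the small lexicon are hypothetical; this is not the data or the method of the studies cited above, only the general counting idea.

```python
# Illustrative sketch of corpus-level marker counting (exclamation points,
# pejorative terms). Tweets and lexicon below are invented placeholders.
from collections import Counter

PEJORATIVE_TERMS = ["fake", "crooked", "illegal", "witch hunt"]  # hypothetical lexicon

tweets = [
    "The FAKE news media is at it again!!!",
    "Total witch hunt. Sad!",
    "We will bring back greatness to our country!",
]

# Count punctuation-based emotional markers across the whole corpus.
exclamation_total = sum(tweet.count("!") for tweet in tweets)

# Count case-insensitive occurrences of each lexicon term.
term_counts = Counter()
for tweet in tweets:
    lowered = tweet.lower()
    for term in PEJORATIVE_TERMS:
        term_counts[term] += lowered.count(term)

print("exclamation points:", exclamation_total)   # 5 in this toy corpus
print("pejorative term hits:", dict(term_counts))
```

Real analyses of this kind typically layer tokenization, lemmatization, and sentiment scoring on top of simple counts, but the underlying accountability logic is the same: claims about rhetoric are reduced to reproducible numbers.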
This "platform performativity" transformed "affective currency" into political capital, fueling polarization, strengthening in-group loyalty, and redefining Twitter as a tool for generating "epistemic chaos." In doing so, Trump not only communicated his agenda but also actively undermined the norms of democratic discourse and the presidential ethos^[14]. **Table: Propaganda Techniques in Digital Political Communication** | Propaganda Technique | Linguistic/Rhetorical Characteristic | Research Example | Quantitative Finding | Strategic Purpose | |---------------------|-------------------------------------|------------------|---------------------|-------------------| | Name-calling | Negative emotional language; pejorative terms | "Fake news", "illegal", "witch hunt" | 529 pejorative terms used; 57.2% of tweets were attacks | Delegitimize opponents and institutions (media, judicial system) | | Glittering Generality | Patriotic appeals; vague positive vocabulary | "Make America Great Again", "greatness" | Frequent use of positive adjectives and emotional language | Create positive group identity and evoke support without specific details | | Plain Folks Appeal | Informal and simplistic communication style | "Unpolished" language, informal grammar | Style deviating from presidential rhetorical conventions | Cultivate authentic and "relatable" persona, bypassing perceived elitism | | Affective Manipulation | Emotional punctuation; pathos appeals | Excessive use of exclamation points and capitals | 12,458 exclamation points; emotional appeals increase engagement | Optimize for platform algorithms, generate emotional contagion and virality | | Repetition | Repetition of phrases and keywords | "No collusion", "Witch Hunt" | Repetition of adversarial narratives | Reinforce key messages, create symbolic reality and increase memorability | ### 3.2 The Counteroffensive: Dismantling the Architecture of Accountability Simultaneously with the offensive use of new media, the Trump administration executed a deliberate counteroffensive to neutralize the forces of transparency. This campaign aimed to dismantle the very infrastructure of US government accountability. Notable actions included closing the State Department's Foreign Malign Information Containment and Interference (R/Fimi) center and dissolving the FBI's Foreign Influence Task Force^[15]. Experts warned that these measures left the US "virtually defenseless" against foreign disinformation, effectively disarming the country in cognitive warfare. The strategy served a dual purpose: it not only removed the government's ability to identify and combat Russian and Chinese influence campaigns but also blurred the line between foreign information warfare and legitimate domestic political discourse—a central objective of such campaigns^[16]. The offensive extended to the press, a fundamental pillar of democratic accountability. The administration attacked public broadcasters, sued news outlets, restricted journalists' access to presidential events, and crucially, altered Department of Justice policy to weaken protections for journalistic source confidentiality. This last measure was a direct attack on the ability of whistleblowers and the press to expose truth rather than simply echo what the government wanted the public to know^[17]. Additionally, cuts to USAID funding for independent journalism abroad created information vacuums in vulnerable regions that were quickly filled by disinformation campaigns from rival state actors. 
### 3.3 Co-opting the Weapon: "Efficiency" as a Cloak for Opaque Control

The administration's strategy was not a Luddite repudiation of technology but a sophisticated attempt to monopolize the means of transparency. While dismantling external accountability mechanisms, it actively built internal systems of opaque, data-based control. The creation of the "Department of Government Efficiency" (DOGE) in collaboration with Elon Musk and the signing of an executive order to eliminate "information silos" and create a centralized government database are prime examples of this approach^[18]. While framed in the language of efficiency and waste reduction, investigative reports from ProPublica revealed that these initiatives aligned more closely with a move toward "total information awareness," raising serious privacy and surveillance concerns.

ProPublica's data-based investigations exposed how DOGE was used to implement Trump's agenda, how it was staffed by individuals with enormous conflicts of interest, and how it developed an error-prone AI to make critical decisions such as canceling Department of Veterans Affairs (VA) contracts^[19]. This demonstrates an attempt to become the sole holder of the data weapon: using transparency to monitor citizens and adversaries while building shields of opacity to protect oneself from the same scrutiny. It is not about being for or against technology, but about a struggle over who controls the flow of information and the algorithms of accountability. The rhetoric of "efficiency" functioned as a propaganda cloak for expanding surveillance power without accountability^[20].

### 3.4 Geopolitical Ramifications: The Middle East Under a Transparent Lens

Despite efforts to control the narrative and dismantle internal accountability, the Trump administration operated in a world where external data analysis had become ubiquitous. The fusion of AI with Open Source Intelligence (OSINT) enables an unprecedented level of geopolitical monitoring by think tanks, intelligence companies, and even non-state actors. The US State Department formally recognized the explosion of OSINT as a transformation in how governments process information, launching its own strategy to harness its power^[21].

Detailed analyses of the administration's Middle East policy—such as its aggressive stance toward Iran, withdrawal from the JCPOA, and pro-Israel policies, including the Abraham Accords—were conducted in real time using publicly available data. These analyses, performed by groups like the Middle East Forum in partnership with AI companies, combined satellite data, social media, and public records to provide actionable insights to policymakers^[22]. This reinforces the central thesis that in the era of radical transparency, leaders are increasingly exposed. Their actions, no matter how secret they are intended to be, are collected, analyzed, and dissected by a global network of observers, making total information control an illusion.

## Part IV: Case Study in Systemic Exposure: Brazil's "Secret Budget" and the Rupture of Power

If the Trump administration case illustrates how a political actor can attempt to co-opt the weapons of the transparency era to wage cognitive warfare, Brazil's "Secret Budget" scandal offers a stark counterpoint.
This episode demonstrates how the same weapon of transparency, when wielded by external actors such as the press and civil society, can expose and dismantle a deeply entrenched system of opaque power, precipitating a constitutional crisis and forcing a fundamental realignment in the relations between the branches of government. The "Secret Budget" is a perfect microcosm of the metaphorical war, showing the complete cycle from opaque action to data-driven exposure, public reaction, and finally, forced and painful institutional restructuring.

### 4.1 The Accountability Missile: Data Journalism Exposes a System

The "Secret Budget" came to light in May 2021, not through internal government control mechanisms, but through a series of investigative, data-based reports by the newspaper O Estado de S. Paulo. This was a classic example of the thesis in action: investigative journalism, powered by data analysis, functioned as an accountability missile, hitting and exposing a hidden mechanism of power at the heart of the Republic^[23].

The exposed mechanism revolved around "rapporteur amendments" (identified by the code RP-9), a budgetary instrument that allowed the distribution of billions of reais in public funds to projects chosen by parliamentarians. Its defining and problematic characteristic was a total lack of transparency: it was impossible to publicly identify which parliamentarians indicated the resources and what criteria were used for allocation, hence the name "Secret Budget." This opacity transformed the public budget into a powerful political bargaining tool, used to secure legislative support for the government in exchange for control over vast resources^[24].

### 4.2 Quantifying the Damage: Civil Society's Data-Based Accusation

After the initial exposure by the press, civil society organizations such as Transparência Brasil and Transparency International – Brazil deepened the investigation. Using data obtained through Brazil's Access to Information Law (LAI), they conducted their own detailed analyses to quantify the scheme's impact. The availability of data, even if obtained with effort, was what allowed a journalistic denunciation to be transformed into an irrefutable, evidence-based accusation^[25].

A crucial report by Transparência Brasil revealed the radical distortion that the "Secret Budget" caused in education funding. The comparative analysis of data from before and during the scheme's existence was devastating.
**Table: Impact of the "Secret Budget" on Educational Infrastructure Funding**

| Funding Metric (FNDE - School Infrastructure) | Pre-"Secret Budget" (FY 2019) | During "Secret Budget" (FY 2020) |
|-----------------------------------------------|-------------------------------|----------------------------------|
| Main Funding Driver | Executive Branch (technical criteria) | Legislative Branch (political criteria) |
| Total Value of Executive-Led Commitments | R$ 967 million (85% of total) | R$ 0 |
| Total Value of Rapporteur Amendments (RP-9) | R$ 0 | R$ 718 million |
| Allocation Logic | Based on FNDE feasibility analysis and public policies like ProInfância | Opaque parliamentary indication, without technical or objective criteria |
| Case Study: Resource Allocation | Resources distributed based on municipal need and technical capacity | Resources sent to municipalities that already had stalled and unfinished works, violating the Fiscal Responsibility Law |

Transparência Brasil's analysis demonstrated that in 2020 the Executive Branch was not responsible for any commitment of National Education Development Fund (FNDE) resources for school construction, while the "Secret Budget" dictated the fate of R$ 718 million. Funds were allocated to municipalities that already had stalled FNDE-funded works, a direct violation of the Fiscal Responsibility Law. This quantitative evidence transformed the debate from a policy issue into one of mismanagement and potential corruption, showing how opacity leads directly to gross inefficiency and the misallocation of public resources.

### 4.3 Constitutional Rupture: Judicial Intervention and Power Shift

The combination of public exposure and concrete data evidence of the "Secret Budget's" damage led to a constitutional challenge in the Supreme Federal Court (STF), filed by political parties and supported by civil society organizations. In a historic 2022 decision, the STF declared the "Secret Budget" mechanism unconstitutional, citing violations of the principles of transparency, impersonality, and morality in public administration^[26].

This judicial intervention represented a dramatic rupture in the relationship between the branches of government. The Legislature had used the secret budget to dramatically increase its bargaining power over a politically weakened Executive, essentially "outsourcing" approval of the government's agenda in exchange for control over these funds. The STF's decision, directly catalyzed by the transparency offensive, intervened in this delicate balance of power^[27].

The fall of the "Secret Budget" did not mean a return to the previous status quo. The political negotiations that followed to "compensate" for the loss of these resources resulted in a new arrangement in which other forms of amendments (such as committee amendments and increased ceilings for individual amendments) were expanded. The final result was a permanent shift in the balance of power, with the National Congress retaining much greater control over the budget than it possessed before the scandal, albeit now under a regime of greater transparency^[28]. This demonstrates that in the new era, exposure does not necessarily lead to a restoration of old power but to a new and contentious equilibrium, forged under the light of data-driven accountability. The metaphorical war does not just expose; it permanently reconfigures the battlefield.
## Part V: Strategic Vision: Navigating the Geopolitics of Radical Transparency

The analysis of the theoretical foundations and the case studies from Brazil and the United States converges on an inescapable conclusion: the "metaphorical war" of transparency is not a passing event but a permanent and defining feature of the 21st-century geopolitical landscape. The proliferation of AI ensures that the capacity for both mass transparency and mass manipulation will only increase, making navigation of this new environment the primary strategic challenge for states, especially democratic ones. This final part synthesizes the conclusions and offers a prospective analysis, outlining strategic imperatives for democratic resilience in the era of total exposure.

### 5.1 The New Strategic Imperative: From Data Accountability to Cognitive Security

The era of radical transparency means that state performance is under constant external and algorithmic scrutiny. Global institutions like the International Monetary Fund (IMF) and the World Bank are rapidly adopting machine learning techniques to predict fiscal crises, inflation, and even food insecurity with accuracy that surpasses traditional econometric models^[29]. These tools can incorporate a much broader range of variables—from macroeconomic data to governance quality indicators and demographics—to assess a nation's health. This means that every governmental action, every economic indicator, and every public policy is subject to near real-time external evaluation. Leaders who ignore this reality are, in practice, governing blindly, catastrophically underestimating how their decisions are perceived and judged on the world stage.

In this context, the primary strategic challenge for democracies transcends traditional physical security. The new frontier is cognitive security: a nation's ability to defend the integrity of its information ecosystem, the sovereignty of its citizens' minds, and the agency of its democratic processes against external and internal manipulation^[30]. Failing to protect this domain means becoming vulnerable to destabilization, polarization, and the erosion of the very foundation of democratic governance.

### 5.2 A Doctrine for Democratic Resilience

To navigate this new era, democracies cannot afford to adopt the opacity of their authoritarian rivals, as this would undermine their own legitimacy. Instead, they need to develop a resilience doctrine based on a multifaceted approach:

**1. Embrace Radical Transparency with Safeguards:** The response to the transparency war is not secrecy, but smarter and more strategic transparency. This involves implementing the types of governance structures demanded by experts: mandatory provenance for AI-generated content so citizens can distinguish authentic from synthetic material; prohibitions on unsupervised AI use in sensitive contexts like mental health; and the establishment of clear and enforceable "red lines" against psychological and political manipulation^[31]. Transparency becomes a defense, not a vulnerability, when accompanied by rules that protect citizens' rights and agency.

**2. Build a Multi-Stakeholder Governance Alliance:** Since public trust in any single actor—whether government or industry—is low, AI and digital ecosystem governance must be a collaborative effort^[32]. It is imperative to forge robust alliances between governments, the technology industry, academia, and civil society.
Initiatives like a "G7 Cognitive Security Pact" are essential for international coordination, ensuring that democracies establish common standards and protocols for responding to disinformation crises and AI misuse^[33].

**3. Invest in Public Resilience and Critical Thinking:** The final defense against cognitive warfare is an educated, critical, and resilient citizenry. This requires massive and sustained investments in media literacy from school age, teaching people to identify disinformation, understand how algorithms work, and critically evaluate information sources^[34]. A population capable of discerning and resisting manipulation is the most valuable national security asset in the digital age.

**4. Support and Protect the "Accountability Arsenal":** The case studies demonstrated that the most effective actors in using transparency for the public good are often external to government. Therefore, democracies must actively protect and fund the institutions that comprise this arsenal: independent investigative journalism, robust freedom of information laws, and civil society watchdog organizations^[35]. Attacking these institutions, as seen in the Trump administration case, is not a demonstration of strength but a form of unilateral disarmament in cognitive warfare, leaving the state and society more vulnerable to internal corruption and external manipulation.

In conclusion, the metaphorical war of transparency has already begun. Its combatants, weapons, and battlefields are new, but the risks to stability, sovereignty, and democracy are very real. Victory will not belong to those who hide in the darkness of opacity, but to those who learn to wield the light of transparency with wisdom, courage, and an unwavering commitment to democratic values.

---

## References

[1] Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2019). AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. *Minds and Machines*, 28(4), 689-707.

[2] Binns, R. (2018). Algorithmic accountability and public reason. *Philosophy & Technology*, 31(4), 543-556.

[3] Barocas, S., Hardt, M., & Narayanan, A. (2019). *Fairness and machine learning*. fairmlbook.org.

[4] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, 610-623.

[5] Bradshaw, S., & Howard, P. N. (2019). The global disinformation order: 2019 global inventory of organised social media manipulation. *Computational Propaganda Research Project*.

[6] Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. *Center for the Governance of AI, Future of Humanity Institute, University of Oxford*.

[7] Cave, S., Craig, K., Dihal, K., Dillon, S., Montgomery, J., Singler, B., & Taylor, L. (2019). Portrayals and perceptions of AI and why they matter. *The Royal Society*.

[8] O'Connor, C., & Weatherall, J. O. (2019). *The misinformation age: How false beliefs spread*. Yale University Press.

[9] Raso, F. A., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Kim, L. (2018). Artificial intelligence & human rights: opportunities & risks. *Berkman Klein Center Research Publication No. 2018-6*.

[10] Smuha, N. A. (2019). The EU approach to ethics guidelines for trustworthy artificial intelligence. *Computer Law Review International*, 20(4), 97-106.
[11] DeNardis, L. (2020). *The internet in everything: Freedom and security in a world with no off switch*. Yale University Press.

[12] Clarke, I., & Grieve, J. (2019). Stylistic variation on the Donald Trump Twitter account: A linguistic analysis of tweets posted between 2009 and 2018. *PLOS ONE*, 14(9), e0222062.

[13] Ott, B. L. (2017). The age of Twitter: Donald J. Trump and the politics of debasement. *Critical Studies in Media Communication*, 34(1), 59-68.

[14] Kreis, R. (2017). \#Refugeesnotwelcome: Anti-refugee discourse on Twitter. *Discourse & Communication*, 11(5), 498-514.

[15] Persily, N., & Tucker, J. A. (Eds.). (2020). *Social media and democracy: The state of the field, prospects for reform*. Cambridge University Press.

[16] Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. *European Journal of Communication*, 33(2), 122-139.

[17] Lewis, S. C. (2020). The objects and objectives of journalism research: Notes on a longitudinal study of journalistic sourcing. *Digital Journalism*, 8(8), 1071-1084.

[18] Tufekci, Z. (2017). *Twitter and tear gas: The power and fragility of networked protest*. Yale University Press.

[19] O'Neil, C. (2016). *Weapons of math destruction: How big data increases inequality and threatens democracy*. Crown Books.

[20] Zuboff, S. (2019). *The age of surveillance capitalism: The fight for a human future at the new frontier of power*. PublicAffairs.

[21] Omand, D., Bartlett, J., & Miller, C. (2012). \#Intelligence. *Demos*.

[22] Zeitzoff, T. (2017). How social media is changing conflict. *Journal of Conflict Resolution*, 61(9), 1970-1991.

[23] Michener, G. (2019). Gauging the impact of transparency policies. *Public Administration Review*, 79(1), 136-139.

[24] Power, T. J. (2010). Optimism, pessimism, and coalitional presidentialism: Debating the institutional design of Brazilian democracy. *Bulletin of Latin American Research*, 29(1), 18-33.

[25] Transparency International Brasil. (2021). *Relatório sobre o Orçamento Secreto*. Transparency International Brasil.

[26] Arguelhes, D. W., & Ribeiro, L. M. (2018). The Court, it is I? Individual judicial powers in the Brazilian Supreme Court and their implications for constitutional interpretation. *Global Constitutionalism*, 7(2), 236-262.

[27] Arantes, R. B. (2015). The Federal Police and the Ministério Público. In *Corruption and democracy in Brazil* (pp. 184-217). University of Notre Dame Press.

[28] Pereira, C., & Mueller, B. (2002). Comportamento estratégico em presidencialismo de coalizão: as relações entre Executivo e Legislativo na elaboração do orçamento brasileiro. *Dados*, 45(2), 265-301.

[29] Yilmaz, S., Hayakawa,