Projects
AI Design Futures
AI impacts humans at the existential level, influencing how we experience and think about time, vulnerability and finitude, and how we make sense of the world. AI also raises normative questions about the social and political futures we imagine and about how we should design the societal institutions and socio-technical systems in which AI systems are embedded. These simultaneous and multi-layered processes of design are inextricably intertwined.
This research project will address these issues by studying and discussing how algorithms shape our temporal imagination at the existential and political levels, and how our images of the future feed back into the design of AI and the institutions in which AI is embedded. The project seeks to conceptualise and bring about existential, political and institutional design futures by engaging both political and existential philosophy, so as to imagine and create new narratives for alternative AI Design Futures.
The project is a joint venture between the Department of Business Studies and the Department of Informatics and Media at Uppsala University.
Project members
- Mark Coeckelbergh, project leader and guest professor at the Department of Informatics and Media / Department of Business Studies, Uppsala University
- Amanda Lagerkvist, professor, Department of Informatics and Media, Uppsala University
- Magnus Strand, Department of Business Studies, Uppsala University
- Matilda Tudor, researcher, Department of Informatics and Media, Uppsala University
Project period
January 2023–December 2027
Funding
WASP-HS (The Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society)
AI and the Financial Markets: Accountability and Risk Management with Legal Tools
This project explores AI-related risks in the financial markets. Situated at the intersection of law and business, it examines the management of risks (real or perceived) that emerge from the use of AI algorithms in the financial sector. At the individual level, this concerns the risk of harm to individuals from the use of AI. At the societal level, the question is whether the use of AI in the financial sector could increase systemic risk in the financial markets.
While law-making is always reactive, and therefore intrinsically lags behind technological developments, financial sector businesses need to manage the risks connected to their use of AI proactively. To do this, they also need to understand how the law allocates accountability for AI solutions. This project can contribute to such an understanding. We will unravel the existing “accountability infrastructure” regarding AI in the financial sector, including its gaps and inconsistencies. The project also aims to clarify, through empirical study, how the affected actors actually try to manage the risks connected to their use of AI. Finally, we aim to find more precise and more effective ways to design the “accountability infrastructure” for AI, and to scrutinise the salience of business theories of risk management in the context of risk management with legal tools such as contracting.
The project will be carried out by a team of three senior researchers and two young researchers, using a combination of traditional legal research methods and empirical methods. Legal materials and contracts will be analysed with legal methods, and these analyses will be complemented by qualitative interviews with key players in the industry. A reference group will include industry and public agency representatives, as well as researchers from the social sciences. Results will be disseminated through scientific publications, seminars and conferences, as well as through popular and industry media.
This project includes Magnus Strand (PI), Annina H. Persson, Malou Larsson Klevhill, Jason Crawford, Johanna Chamberlain, Andreas Kotsios and Ensieh Mahi.
Project leader
Magnus Strand
Project period
2021–2023
Funding
Marianne and Marcus Wallenberg Foundation
AI-based RegTech
Businesses in the financial markets are in the midst of overwhelming legislative activity from multiple levels of norm-making in Europe and internationally. The tendency in new legislation has been to introduce ever more detailed control, supervision and enforcement. Indeed, the level of detail in the rules and the volume of reporting with which finance businesses must comply have become too vast to survey without automated assistance for reporting and compliance management.
This situation has created a surge in the market for advanced data services for compliance management. Applications for compliance with regulation (but also with business codes, contracts, and ethical standards) have become known as RegTech. The most advanced of these applications include artificial intelligence algorithms that make use of machine learning and natural language processing; indeed, a 2018 IBM report indicated that these are the most commonly used RegTech products. According to a Deloitte survey, the most common fields of application for AI-based RegTech are compliance management, risk management, transaction monitoring, mandatory reporting, and identification.
AI-based RegTech is very attractive to businesses struggling to cope with abundant reporting and monitoring obligations. It increases efficiency and reduces costs, supplying bank officers and other professionals with big data analyses that would otherwise be tremendously time-consuming. For instance, AI-based RegTech helps detect suspicious transactions that could form part of financial crime such as money laundering.
However, AI-based RegTech also raises questions and concerns, perhaps most importantly with regard to the very purpose of reporting standards and internal monitoring. Such measures are primarily intended to raise ethical awareness in organizations and to educate leaders on how to maintain the standards set to protect investors, consumers, and taxpayers. We know that a significant portion of compliance efforts is already handled by algorithms. It may therefore legitimately be asked: are the people in the organizations actually learning anything, or are the algorithms the only ones learning? Some critics even say that the algorithms have been taught how to tick all the boxes and to report impeccable but potentially misleading data, thus masking a reality of decision-making that may (again) grow increasingly reckless.
Such concerns illustrate why it is crucial to study the use of AI-based RegTech in the financial sector, to place its uses in the theoretical contexts of risk management and compliance management, and to trace in what ways it actually contributes to (or reduces) transparency and accountability. We will be able to produce empirical results for an informed discussion of which uses of AI-based RegTech in the financial markets serve the purposes of legislative and ethical policies, and which do not. We can also contribute to designing more efficient means of promoting transparency, accountability, and ethical stringency in the financial markets.
Moreover, we recognize that the phenomena we have identified are not confined to the financial sector: they are also present in algorithm-based reporting to, for example, tax authorities and in accounting, uses that are relevant to complex business structures regardless of the industry they belong to. AI has an impact on the organization of any complex business. Consequently, this project has broad theoretical implications for management and organizational theory, making it a crucial research and teaching topic.
This project includes Donal Casey (PI), Magnus Strand, Peter Thilenius, Subhalagna Choudhury and Karim Nasr.
Project leader
Donal Casey
Project period
2021–2026
Funding
WASP-HS
Book project: Legal Accountability in EU Markets for Financial Instruments – The Dual Role of Investment Firms
This book is one of several publications resulting from an interdisciplinary research project carried out in collaboration between Uppsala University and the European University Institute in Florence. In the project we have studied modern patterns of centralised rulemaking in the EU internal market, focusing on the financial market(s), a set of sectors within the internal market where the development of rulemaking and accountability has general relevance and offers ample opportunities for both legal and political science research.
The book concentrates on the legal consequences of centralised rulemaking for the EU system of governance, with particular attention to accountability. To deepen the study, the regulatory framework on markets in financial instruments is placed at its centre. The book does not impose any fixed notion of ‘accountability’ (or, indeed, ‘governance’) on its authors; this is one of its main strengths. There are many ideas about the meaning of accountability and the context in which it belongs, but at the root lies the question whether someone is responsible for what they do and able to give satisfactory reasons for their actions. With regard to legal accountability and, in particular, judicial control, the questions discussed in this book relate inter alia to the level and nature of adjudication, the criteria for locus standi (and for which actors), the types of remedies available (annulment or repeal types of action, infringement types of action, non-contractual liability and the like), and the intensity of the review of legality.
The proper functioning of the EU financial market is protected by public actors – both national and supranational – responsible for rulemaking and for the supervision of investment firms. But the EU legal system also depends on the vigilance of private actors, such as investment firms and their clients, invoking EU law before national authorities and courts. This means that investment firms have a dual role within the financial accountability system: they are subjects of control and enforcement, but also agents in the maintenance of the rule of law.
The book is published by Oxford University Press. The editorial team consists of Magnus Strand (Commercial Law at the Department of Business Studies) and Professor Carl Fredrik Bergström (Department of Law). Authors include the commercial law colleagues Annina H. Persson, Malou Larsson Klevhill, and Magnus Strand.