ARTIFICIAL INTELLIGENCE ACT AND CUSTOMER SAFETY IN THE BANKING SECTOR – LEGAL AND ORGANISATIONAL CHALLENGES
Keywords:
AI Act, artificial intelligence, customer security, algorithmic risk, financial regulation
Abstract
The aim of the article is to comprehensively analyze the impact of the Artificial Intelligence Act (AI Act) on customer security in the banking sector and to identify the most important legal, technological and organizational challenges related to the implementation of artificial intelligence systems. The study employs a systematic literature review, a comparative analysis of regulatory documents and a review of supervisory reports. The results indicate that AI systems used in banking – especially in credit risk assessment and fraud prevention – generate new types of risk, including algorithmic risk, data bias, opacity of decision-making processes, and susceptibility of models to manipulation. The AI Act introduces obligations concerning data quality, auditability, human oversight and model monitoring that are intended to strengthen consumer protection. The identified challenges include a lack of specialist competence, immaturity of the data infrastructure, high implementation costs and the need to reorganize model management processes. The conclusions indicate that the effectiveness of the AI Act depends on the ability of financial institutions to adapt technologically and organizationally, as well as on building a culture of responsible and transparent use of AI in banking.
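As an illustrative aside not drawn from the article: the kind of data-bias and model-monitoring check the abstract refers to can be pictured as a minimal group-level approval-rate comparison for a credit model. The group labels, threshold and data in the sketch below are hypothetical assumptions, not requirements stated by the AI Act or the article.

```python
# Minimal sketch of a group-level bias check on credit decisions.
# All groups, data and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical decisions: (demographic group, approved?)
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(decisions)
    ratio = disparity_ratio(rates)
    print(f"approval rates: {rates}")
    print(f"disparity ratio: {ratio:.2f}")
    if ratio < 0.8:  # a common rule-of-thumb threshold, not an AI Act figure
        print("warning: approval rates diverge across groups - review data and model")
```

Such a check is only one fragment of the monitoring, documentation and human-oversight duties the abstract describes; it is included here purely to make the notion of "data bias" concrete.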
License
Copyright (c) 2026 Taikomieji tyrimai studijose ir praktikoje - Applied research in studies and practice

This work is licensed under a Creative Commons Attribution 4.0 International License.
Please read the Copyright Notice in Journal Policy.
