2025-03-03

Under the DeepSeek wave: data compliance risks of AI applications in fintech

In recent years, artificial intelligence (AI) has been deployed across virtually every industry, and it has reshaped the service models and business forms of the financial sector to a considerable extent. With DeepSeek's breakout in 2025, and with an open-source strategy that supports local, private deployment, DeepSeek has swept through the fintech sector, and a large number of financial institutions have connected it to their in-house systems. In February 2025, for example, OneConnect released a self-developed intelligent-agent platform that integrates open-source large models such as DeepSeek and Tongyi Qianwen, launching a full-scenario AI solution for the banking industry. In the same month, Tencent Licaitong announced official access to the full version of the DeepSeek-R1 model, upgrading its financial services to be more professional and more timely. Fintech companies have announced DeepSeek integrations one after another, joining the tide of AI applications.

AI technology not only improves the efficiency of financial services but also broadens the boundaries of financial innovation. The tension between technological innovation and legal regulation, however, has always existed, and the application of AI in fintech has given rise to a series of legal risks such as data leakage and algorithmic discrimination.

The "Financial Technology Development Plan (2022-2025)", released by the People's Bank of China on January 4, 2022, calls on the industry to "seize the new opportunities in the development of global artificial intelligence, comprehensively promote the deeper application of intelligent technology in the financial field, strengthen the ethical governance of science and technology, and strive to create a new form of smart financial services featuring scene perception, human-machine collaboration, and cross-border integration; realize intelligence across the whole life cycle of financial services; and effectively enhance the people's sense of gain, security, and happiness." This article analyzes, from a legal perspective, the core risks at the intersection of AI and fintech and explores compliance response paths.

PART.01

The integration and application of AI in fintech

The application scenarios of AI in fintech are extensive; a few typical scenarios are introduced below.

(1) Intelligent investment advisory

Intelligent investment advisory, also known as robo-advisory, is a service model that uses AI algorithms, big-data analysis, algorithmic trading, and related technologies to provide customers with automated, personalized investment advice and asset-allocation plans based on their risk appetite, financial status, investment goals, and other factors. Compared with traditional investment advisory, robo-advisory is widely used because of its low service cost, personalized customization, and high investment efficiency. In February 2025, for example, Jiufang Intelligent Investment Holdings (09636.HK) announced that its robo-advisory digital human "Jiu Ge" and its new-generation stock-investment dialogue assistant "Jiufang Lingxi" had officially connected to DeepSeek-R1 to serve investors more professionally and efficiently: they can reason through complex investment problems, provide concrete scenarios and cases, and clearly show their reasoning process, helping users learn financial-market analysis methods.
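As a purely illustrative sketch (the 1-10 scoring scale, thresholds, and asset classes here are assumptions, not any provider's actual model), the core step a robo-advisor automates can be reduced to mapping a client's risk profile to portfolio weights:

```python
# Hypothetical robo-advisory sketch: map a client risk-appetite score
# to model portfolio weights. All numbers are illustrative assumptions.

def allocate(risk_score: int) -> dict:
    """Map a 1-10 risk appetite score to portfolio weights summing to ~1."""
    if not 1 <= risk_score <= 10:
        raise ValueError("risk score must be between 1 and 10")
    equity = 0.10 + 0.08 * (risk_score - 1)   # 10% .. 82% equities
    bonds = (1.0 - equity) * 0.8              # most of the rest in bonds
    cash = 1.0 - equity - bonds               # remainder held as cash
    return {"equities": round(equity, 2),
            "bonds": round(bonds, 2),
            "cash": round(cash, 2)}

conservative = allocate(2)   # low risk score -> low equity weight
aggressive = allocate(9)     # high risk score -> high equity weight
```

Real systems weigh far more inputs (financial status, investment horizon, liquidity needs), which is precisely why they need the broad data collection discussed later in this article.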

(2) Risk management

The "Fintech Development Plan (2022-2025)" explicitly calls for institutions to "improve automated risk-control mechanisms: ex ante, use big data, artificial intelligence, and other technologies to expand the dimensions of risk-information acquisition, build a customer-centric panoramic view of risk, intelligently identify potential risk points and transmission paths, and make risk management more forward-looking and predictive." It follows that AI has an important role to play in risk management. Financial institutions can use AI to analyze massive amounts of data and assess credit risk, market risk, interest-rate risk, and more with greater accuracy. By analyzing a customer's credit history, financial status, and consumption behavior, AI can assess that customer's credit risk; by analyzing market data, it can predict market trends and help institutions develop more effective risk-management strategies; by monitoring trading-system logs and employee-behavior data, it can detect abnormal operations in time to prevent potential risks; and by analyzing transaction patterns, device fingerprints, and customer behavior, it can identify fraud in real time. The application of AI in financial risk management continues to deepen and will provide financial institutions with ever more efficient and accurate tools.
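One simple form of the transaction-pattern analysis described above can be sketched as a statistical outlier check. This is a minimal illustration, not any institution's actual risk engine; the sample data and the z-score threshold are assumptions:

```python
# Illustrative anomaly-detection sketch: flag transactions whose amount
# deviates strongly from a customer's own spending history.
from statistics import mean, stdev

def flag_anomalies(history, new_txns, z_threshold=3.0):
    """Return new transactions more than z_threshold std devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:          # flat history: no basis for a z-score
        return []
    return [t for t in new_txns if abs(t - mu) / sigma > z_threshold]

# Hypothetical customer history (typical amounts around 100-140)
past = [120, 95, 130, 110, 105, 98, 125, 140, 100, 115]
flagged = flag_anomalies(past, [118, 5000, 90])   # only 5000 is flagged
```

Production systems combine many such signals (device fingerprints, login behavior, counterparty graphs) and feed them to trained models rather than a single univariate rule.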

(3) Customer service

AI applications in customer service are also maturing. Intelligent customer-service systems can serve customers online 24 hours a day, answering questions quickly and accurately and resolving customer concerns. This improves the quality and efficiency of customer service while reducing financial institutions' operating costs. At the same time, by analyzing customer questions, intelligent customer service gives institutions insight into customer needs and helps them optimize products and services. For example, GF Securities has launched "Intelligent Ben", an intelligent customer-service system based on large-model technology, on its customer-service platform. Through training on high-quality Q&A data, multi-dimensional parameter optimization, and contextual semantic understanding, the system achieves accurate knowledge matching and intelligent content generation, responding quickly to customer inquiries and improving service efficiency.

PART.02

Legal risks in the application of AI in fintech

(1) Data collection risks

AI technology is data-driven at its core, and AI applications in fintech depend heavily on the collection of massive amounts of data, a process that often conflicts with the basic principles of data protection. In addition, financial data security standards such as the Technical Specifications for Personal Financial Information Protection (JR/T 0171-2020), the Financial Data Security - Data Life Cycle Security Specification (JR/T 0223-2021), and the Financial Data Security - Guidelines for Data Security Classification (JR/T 0197-2020) impose special requirements on the collection and use of the financial data and personal financial information involved in fintech. Here, "financial data" refers to the various types of data required or generated by financial institutions in conducting financial business, providing financial services, and carrying out daily operation and management; "personal financial information" refers to personal information obtained, processed, and retained by financial institutions through the provision of financial products and services or other channels, including account information, identification information, financial transaction information, personally identifiable information, property information, loan information, and other information reflecting particular circumstances of specific individuals.

The Personal Information Protection Law clearly stipulates the principle of minimum necessity: only the minimum amount of data directly related to the purpose of processing may be collected. AI models (such as credit-scoring or anti-fraud systems), however, often require many dimensions of user information (spending history, social behavior, geographic location, and so on) to improve predictive accuracy, which can lead financial institutions to over-collect data in practice. When a financial institution promotes its smart advisory service by requiring users to authorize access to their social media accounts so it can analyze investment preferences, the practice makes the service more personalized but very likely exceeds the "minimum necessary" scope, creating a risk of user privacy disclosure.
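In engineering terms, the minimum-necessity principle can be enforced by collecting only fields declared for a stated processing purpose. The sketch below is a hypothetical illustration; the purpose-to-field map is an assumption, not a regulatory list:

```python
# Illustrative data-minimization sketch: drop any field not declared
# necessary for the stated processing purpose. The purpose map below
# is a hypothetical example, not a legal standard.

ALLOWED_FIELDS = {
    "credit_scoring": {"income", "repayment_history", "existing_debt"},
    "identity_check": {"name", "id_number"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields declared necessary for the given purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"name": "A", "income": 9000, "social_media_handle": "@a",
       "repayment_history": "good", "location": "Shenzhen"}
minimal = minimize(raw, "credit_scoring")   # social data is never retained
```

A gate like this, applied at the point of collection rather than after the fact, is one way to make the "directly related to the purpose of processing" requirement auditable.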

Under the Financial Data Security - Data Life Cycle Security Specification (JR/T 0223-2021) and other relevant regulations, the collection of personal financial information must meet the following requirements:

a) Institutions without financial-industry qualifications shall not be entrusted or authorized to collect C2 and C3 information;

b) The sources from which information is collected shall be traceable;

c) Technical measures (such as pop-up windows or prominently placed URL links) shall be used to guide personal financial information subjects to read the privacy policy, and personal financial information shall be collected only after their express consent has been obtained;

d) When C3 information is collected through acceptance terminals, client application software, browsers, and the like, technical measures such as encryption shall be used to ensure the confidentiality of the data and prevent it from being obtained by unauthorized third parties;

e) When guiding users to enter (or set) bank-card passwords or online-payment passwords through an acceptance terminal, client application software, or browser, display-shielding measures shall be taken to prevent passwords from being shown in plain text, and display shielding shall likewise be applied to other password information.
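The display-shielding requirement in item e) above is straightforward to illustrate. This is a minimal sketch of the masking idea only (the field formats are assumptions); it does not cover the transport-layer encryption required by item d), which in practice relies on TLS and vetted cryptographic libraries:

```python
# Minimal sketch of display shielding for sensitive financial fields,
# per item e) above. Field formats and masking rules are illustrative.

def mask_card_number(pan: str) -> str:
    """Show only the last 4 digits of a card number; shield the rest."""
    digits = pan.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def mask_password(pw: str) -> str:
    """Never echo password characters in the UI."""
    return "*" * len(pw)

masked_pan = mask_card_number("6222 0212 3456 7890")
masked_pw = mask_password("secret")
```

The key design point is that shielding happens wherever the value is rendered, so plain-text secrets never reach screens, logs, or screenshots.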

(2) Data sharing risks

Fintech innovation often requires data sharing across institutions and domains. Data sharing happens constantly, and the security and compliance risks in the process cannot be ignored. First, data sharing may increase the risk of data leakage. Second, the compliance requirements for data sharing are complex: under Article 23 of the Personal Information Protection Law, a personal information processor that provides the personal information it processes to another processor must inform the individual of the recipient's name, contact information, purpose of processing, method of processing, and the types of personal information involved, and must obtain the individual's separate consent. The recipient must process the personal information within the scope of those purposes, methods, and types, and must obtain the individual's consent anew if it changes the original purpose or method of processing. Third, data sharing can lead to ownership disputes. These issues highlight the multiple security and compliance challenges facing data-sharing mechanisms.

(3) Transparency and fairness risks of automated decision-making

The "black box" nature of AI algorithms is a major hidden danger in data applications. A typical case is the Apple Card credit service launched in 2019 in partnership with Goldman Sachs. The service uses AI algorithms to automatically set users' credit limits, but multiple users, including Apple co-founder Steve Wozniak, publicly complained that female users were generally given lower credit limits than male users with similar financial situations. The incident exposed the possibility of implicit gender discrimination in AI-driven decision-making: although Goldman Sachs claimed that its algorithm "does not make decisions based on gender, race, or other protected categories," outsiders could not verify its fairness because of the algorithm's "black box" nature, and the matter ultimately triggered a regulatory investigation.
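One basic audit a lender or regulator could run on such a system, even without opening the "black box," is a disparate-impact check comparing approval rates across a protected attribute. The sketch below is illustrative only; the sample data and the 0.8 ("four-fifths") threshold convention are assumptions, not the test actually applied in the Apple Card investigation:

```python
# Illustrative disparate-impact check on automated credit decisions.
# Sample data and the 0.8 threshold convention are assumptions.

def approval_rate(decisions, group):
    """Fraction of applicants in `group` who were approved."""
    picked = [d["approved"] for d in decisions if d["group"] == group]
    return sum(picked) / len(picked)

def disparate_impact(decisions, group_a, group_b):
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    ra = approval_rate(decisions, group_a)
    rb = approval_rate(decisions, group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical decision log: group F approved 2/5, group M approved 4/5
sample = (
    [{"group": "F", "approved": a} for a in [1, 0, 0, 1, 0]] +
    [{"group": "M", "approved": a} for a in [1, 1, 0, 1, 1]]
)
ratio = disparate_impact(sample, "F", "M")   # 0.5, well below 0.8
```

Outcome-level checks like this do not explain *why* a model discriminates, but they make disparities measurable, which is a precondition for the transparency obligations discussed next.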

Such cases show that the transparency and fairness of AI automated decision-making is not only a technical issue but one that goes to the heart of law and ethics, so a balance must be struck between technological innovation on one side and fairness and transparency on the other. To address the problem, China has successively issued the Provisions on the Administration of Algorithm Recommendation in Internet Information Services, the Provisions on the Administration of Deep Synthesis of Internet Information Services, and the Interim Measures for the Administration of Generative Artificial Intelligence Services, which require algorithm-recommendation service providers with public-opinion attributes or social-mobilization capabilities, deep-synthesis service providers and their technical supporters, and generative AI service providers to complete algorithm-filing procedures.

Algorithm filing works through publicity: without disclosing trade secrets, the enterprise discloses the name, basic principles, operating mechanism, application scenarios, and purpose of the algorithm it intends to use. This helps the public understand the basic logic of the algorithm's operation, improves users' trust in the product, and mitigates the hidden danger of the algorithmic "black box." Relevant enterprises should therefore respond actively to laws, regulations, and local policy requirements and fulfill their algorithm-filing obligations in earnest.

PART.03

Conclusion

This article has surveyed some of the important legal risks in the application of AI in fintech. Beyond the data- and algorithm-related risks discussed above, the field also involves compliance risks in financial marketing, anti-money laundering, anti-fraud, and the use of open-source software.

The integration of AI and fintech is both an opportunity and a challenge. Preventing and controlling legal risks requires weighing technological innovation against institutional constraints, and balancing efficiency with fairness and local practice with international rules. As the Financial Technology Development Plan (2022-2025) advances further and the regulatory framework matures through 2025, AI should become not merely a tool for financial services but a partner in compliance governance, jointly building a safe, inclusive, and transparent financial ecosystem.
