INTRODUCTION: Gauging the potential

Our ability to adapt to change is remarkable. In just a few short years, artificial intelligence has become integrated into many aspects of our lives - and we have barely blinked. From the personal assistants in mobile phones to the driver assistance systems in cars, the use of AI technology has seamlessly shifted from science fiction to everyday fact.

Governments are hastening to tap its potential. For most businesses too, the application of AI has moved from being a “nice-to-have” to a “must”. In all their forms and variations, AI and machine learning have started to deliver real value to organisations: automating manual processes, improving customer service and boosting enterprise-wide efficiency.

Financial institutions are leading the way, as AI and machine learning technology has come to play an integral role in a range of operations, from portfolio management to fraud prevention. It is also transforming the customer experience. Chatbots are interacting with customers and solving problems before human staff get involved. Automation is also being used to execute trades more efficiently, and progress is being made towards a future where customisable solutions will be the norm.

Yet, while AI’s potential is huge, much of it is as yet unrealised. Many financial institutions have only begun to scratch the surface of AI functionality as adoption proves more complicated and slower than initial forecasts predicted.

The opportunities and challenges as we edge closer to an AI-centric world are highlighted in our global survey of 355 senior executives from financial institutions and FinTech companies, and in interviews conducted with leading experts in the field.

This report is a follow-up to our 2016 research Ghosts in the Machine: Artificial intelligence, risks and regulation in financial markets. By comparing data across the two surveys, we have been able to derive unique insights into the development of AI in financial markets to date and its potential for the future.

"Many financial institutions have only begun to scratch the surface of AI functionality as adoption proves more complicated and slower than initial forecasts predicted.”

In the past two years, lessons have been learned. Progress has been made as AI-driven models have moved out of the research phase and into the real world. However, things have not always developed as planned, as institutions have come to recognise the current limitations of AI and machine learning solutions. Practical challenges, from a shortage of skilled AI developers to the cost of implementation, have also inhibited the roll-out of AI solutions.

So has an uneven regulatory response. Compared with 2016, there is a greater recognition that forward-thinking regulators are getting it right, being proactive and formulating clear and sensible guidelines on AI for financial markets and companies. However, other regulators appear to be further behind in their thinking and remain largely silent on the potential uses and abuses of AI technology, making the regulatory landscape uncertain for businesses operating across multiple jurisdictions.

5 in 10 believe that AI will empower, rather than replace, the workforce

Technologies have changed fast; organisations less so. A large number of survey respondents remain concerned that not all AI-related legal risks have been understood by their organisations. In the light of the ongoing debate about potential abuses of data, there is a recognition that ethical risks need to be more carefully considered - as well as an admission by many that they are confused about how to respond.

Job displacement also remains a fear. However, there seems to be a greater appreciation of AI’s ability to enhance the workplace: technology can free employees from routine, repetitive tasks, enabling them to focus on higher-value activities.

Although it will take time for the full potential of AI and machine learning to be realised, for those organisations ready to embrace new technologies, the opportunities are significant.

CHAPTER 1: From scale of capital to scale of data

AI has been around for several decades, but it is only in the last few years that it has really taken off. That is down to two key factors: faster and smarter technology and “big data”.

Vast improvements in processing power mean that the performance of today’s neural networks far surpasses that of previous algorithms. This has coincided with the availability of huge amounts of data, facilitated by rapid advances in cheap data storage.

“A seismic shift has occurred over the last two years in terms of what it takes to succeed as a financial business,” Jesse McWaters, Financial Innovation Lead at the World Economic Forum, says. “We are seeing a shift from scale of capital being critically important to scale of data.”

“We are seeing a shift from scale of capital being critically important to scale of data.” Jesse McWaters, Financial Innovation Lead, World Economic Forum

For businesses able to access and extract valuable data and convert it into insight, there are the rewards of what Andrew Ng, former Baidu Chief Scientist, Coursera co-founder and Stanford Adjunct Professor, calls the “virtuous circle”: a product attracts users; the users generate data; the data is processed by AI algorithms. The resulting insights make the product better, attracting more users, who generate more data, and so on.

“The big difference now is the sheer volume of the data available and the complexity of some of that data,” says Andrew Rear, Chief Executive of Digital Partners at Munich Re. “Using unstructured data to underwrite insurance forces you to use technologies at the cutting edge of data science.”

As a rule, the more data the better the results will be, notes Dan Latimore, Senior Vice President at Celent, the consulting firm. This is one area, he says, where larger financial firms have a definite size advantage.

“The amount of data that a machine learns from is critical. Gathering huge amounts of input will give the algorithms more opportunities to test and learn, thereby refining their rules and, over time, producing better results. Firms with access to more data will have a leg up over those firms whose data is sparser.”
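Latimore’s intuition is easy to demonstrate. The sketch below is our illustration, not Celent’s: it trains the same model on progressively larger slices of a synthetic dataset, standing in for a firm’s records, and scores it on held-out data.

```python
# A minimal sketch of "more data, better results": train the same model
# on growing slices of a dataset and score it on held-out records.
# The dataset is synthetic; real firm data would replace it.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

for n in (100, 1_000, 10_000):  # data-poor firm -> data-rich firm
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>6} records: held-out accuracy {acc:.3f}")
```

On most datasets, accuracy climbs steeply at first and then flattens, which suggests the advantage of extra data is largest for firms moving off a sparse base.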

Accordingly, the incentive to harness data and AI technologies is strong across the financial industry. The use of AI in trading and investment programmes was covered in depth in our previous report Ghosts in the Machine. Two years on, robo-advisors are embedded in the investment landscape, as both online-only and traditional wealth management companies see them as a tool to reach new customers. Meanwhile, the world’s biggest investment group, BlackRock, has established a “BlackRock Lab for Artificial Intelligence”, dedicated to research in this area.

“Chatbots” and “conversational interfaces” are appearing in growing numbers, and improving as natural language processing and natural language generation advance. Bank of America’s 25 million mobile app customers have been able to chat via voice or text message with the bank’s virtual assistant named Erica - a play on the bank’s name - for more than a year. UBS has been experimenting with a digital clone of its chief economist using the computer gaming industry’s latest animation techniques.

Indeed, customer-facing applications are now seen as one of the key areas for the use of artificial intelligence in financial services. In 2016, only 20% of survey respondents said the biggest potential of AI was its ability to improve the customer experience. In 2018, this figure more than doubled to 42%. No other segment saw such a sizeable jump over just two years (see chart 1).

Arguably, insurers, sitting on a wealth of data, have been slower to monetise this asset than other segments of the financial sector. However, that is starting to change. China’s Ping An Insurance is one of the leaders in the field. Ping An is using AI-enabled solutions in its customer service centre, to speed up car insurance claims and even to develop music to boost “user stickiness” among its 400 million-plus clients.

The goal, as it is for many financial institutions, is to use AI to grow market share by offering a more personalised user experience combined with lower fixed costs.

Meanwhile, Munich Re has been experimenting with AI in a wide range of areas, from using new data to distinguish between higher- and lower-risk young drivers to using self-learning algorithms to combat claims fraud.

To fulfil their AI objectives, institutions rely primarily on internal research and development (57% of respondents), but they are increasingly also working with external advisors and consultants (46%), participating in innovation hubs and incubators (31%) and partnering with start-ups (25%).

CHAPTER 2: New realism

In recent years, countless predictions have been made about the far-reaching impact AI and machine learning will have on financial services.

Nevertheless, while it is hard to overestimate AI’s long-term potential for the sector, all of the experts interviewed for this report stressed that financial institutions should not get too carried away by the hype just yet.

Celent’s Latimore says some banks have been guilty of overpromising and underdelivering on AI projects. However, he believes lessons have been learned as both institutions and their customers have become a lot more realistic about what can be achieved.

“All implementation of AI should be crafted for a specific purpose,” he says. “An axe and a scalpel are both cutting instruments, but you wouldn’t use a scalpel to cut down a tree or an axe to perform surgery. Banks need to be specific about what type of AI they want to focus on and what problem it is there to solve.”

Our survey respondents acknowledge that when AI is focused on clearly defined tasks it can live up to its potential. One commented: “With the help of AI we are able to generate tailored and actionable insights for our clients at a speed and scale human advisors would find hard to match.” Another noted: “Most of our underwriting process is automated, thanks to AI.”

However, there are also respondents who contend that progress has been slower than sometimes suggested. “As a company that has been utilising AI for the past eight years, I feel that much of the industry is merely using AI as a buzzword with little actual material implementation of the technology in any way that is truly impactful.” It will “take five to eight more years for AI to completely disrupt the market,” predicts another participant.

Greater realism has led to the recognition that AI is not a quick fix. A significant minority (42%) contend that AI capabilities in financial services have progressed more slowly than expected (chart 2). This rises to 47% amongst survey respondents from the Americas and 50% amongst larger firms with turnover greater than US$2bn.

A more mature understanding of AI and its current limitations has developed.

Janet Yuen, Asia Pacific Head of Innovation at HSBC, points out that even the most advanced conversational AI applications are still a long way from being able to cope with the subtleties of human speech.

“AI today can outperform people in situations with a constrained set of knowable outcomes,” she says. “Most conversations aren't like that at all.

“If we shift the topic ever so slightly, or speak abstractly about something, or use colloquialisms - what we might call noisy text - that the AI doesn’t understand, it can’t respond. It doesn’t know how to contextualise the meaning of all the words in a language in a way that humans can.”
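A toy example makes Yuen’s point concrete. The snippet below is our illustration, not HSBC’s system: a keyword-based intent matcher copes only with the phrasings it was built for, and anything colloquial falls through to a human.

```python
import re

# Toy intent matcher: each intent is recognised by a fixed keyword set.
INTENTS = {
    "check_balance": {"balance", "account"},
    "report_fraud": {"fraud", "stolen", "unauthorised"},
}

def classify(utterance: str) -> str:
    """Return the first intent whose keywords appear, else hand off."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return intent
    return "hand_off_to_human"  # "noisy text" lands here

print(classify("What is my account balance?"))  # -> check_balance
print(classify("I'm brassic, what's left?"))    # -> hand_off_to_human
```

Production systems use statistical language models rather than keyword lists, but the failure mode is analogous: wording outside what the system was trained on cannot be contextualised.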

"With the help of AI we are able to generate tailored and actionable insights for our clients at a speed and scale human advisors would find hard to match." Survey participant

For our survey respondents, the potential challenges with AI are as much operational as technological. Competing against leading technology companies for a limited pool of top AI talent is never going to be easy. For 38% of survey respondents, a shortage of specialist skills to operate and maintain the technology is among the toughest obstacles their organisation faces when seeking to introduce AI into new areas. Twenty-five percent point to a shortage of analytical skills.

There are also fears about the misuse or hidden risks of the emerging technologies, with 32% of respondents citing cyber security concerns as a factor slowing down the adoption of AI.

But by far the biggest obstacle holding back AI implementation is the cost of AI systems (50%). Perhaps unsurprisingly, it is a greater problem for smaller organisations: 53% of those with turnover of less than $2bn cite it as an obstacle, compared with 37% of firms with turnover of more than $2bn (chart 3 below).

CHART 3: Major obstacles for large and small organisations seeking to introduce AI (multiple answers possible; smaller companies vs large companies with turnover above $2bn)
Shortage of specialist skills to operate/maintain technology: 36% vs 50%
Integrating humans + tech: 26% vs 32%
Regulatory constraints: 7% vs 17%
Board/senior management buy-in: 26% vs 28%
Data privacy concerns: 21% vs 30%
Ethical concerns: 9% vs 3%
Cyber security concerns: 33% vs 30%
Identifying + mitigating all material legal risks: 12% vs 8%
Shortage of analytical skills: 27% vs 17%
Cost of AI systems: 53% vs 37%

There is also the hurdle of embedded organisational intransigence and silo-based thinking. For Theodore Ling, Partner, Technology at Baker McKenzie, “technology isn’t the main problem”. Ling argues: “The bigger challenge seems to be organisational as financial institutions deal with legacy systems and platforms. We are really close, in terms of AI transforming financial institutions, but the last step is going to be a really difficult one.”

Firms need to bring together the right stakeholders and create an environment where they are collaborating towards the same objectives, he continues.

Focus on risk

Credit assessment and risk management are the two areas where our survey respondents expect to see the most dramatic changes in the next three years.

CHART 4: Respondents expecting AI to “completely” or “substantially” change risk management within three years. 2016: 59%; 2018: 75%.

Indeed, risk management’s increasing importance in the innovation agenda is demonstrated by the 75% of respondents who expect to see substantial or complete change in this area, compared with 59% in 2016 (chart 4).

Specifically asked about their own organisations, 56% of respondents say they expect to introduce AI in risk management over the next three years.

Taking into account that 18% already use artificial intelligence in risk management today, this will make it one of the hottest areas for the use of AI within the next few years (chart 5).

Amongst insurers, the trend is even more pronounced. While only 16% use AI in risk management today, a further 67% expect to introduce it by late 2021.

Deepak Soni, Director of Commercial Intermediary at the insurer AXA, says that AI technologies are already helping his organisation to more accurately segment risk.

“Machine learning enables us to highlight the risks that different types of organisations and businesses face and how we can potentially mitigate that risk,” he says. “It enables us to better understand the risks and the opportunities to create more relevant products.”

Empowering the workforce

AI’s impact on the workforce remains a subject of debate. Consistent with our findings in 2016, AI is predicted to transform the labour market. Seventy-eight percent of survey respondents expect to see complete or substantial change to their own jobs within 15 years (chart 6).

CHART 6 - AI will substantially or completely change my job
Over the next 3 years: 17%
Over the next 15 years: 78%

We also asked survey participants whether the impact of AI on a range of aspects of financial markets would be positive or negative. The “structure of the human workforce” was the only aspect to receive a net negative rating. While 21% say the impact will be positive or very positive, 44% expect it to be negative or even very negative, resulting in a net rating of -23% (chart 7).

CHART 7: AI’s impact on financial services (Net positive/negative views* on expected impact of AI on financial services) Credit assessment: 77%; Risk management: 65%; Compliance: 61%; Competitiveness of markets: 53%; Market liquidity: 36%; Regulation of markets: 30%; Market stability: 4%; Structure of the human workforce: -23%. *Number of negative responses subtracted from positive responses

However, in this year’s survey a greater recognition of the potential benefits that AI could bring to employment also emerges: 52% of survey respondents expect AI to empower, rather than replace, the workforce.

“AI is enabling organisations to shift their human talent to higher value activities, taking things off their plate that machines can do better,” says Theodore Ling of Baker McKenzie.

For example, back office functions are being transformed as AI takes on rote tasks previously completed manually by thousands of employees. At JPMorgan Chase, a Contract Intelligence (COiN) platform has been installed to analyse commercial credit agreements. The platform does in seconds what once kept legal teams busy for 360,000 hours annually.

Royal Bank of Canada has been working with a number of third party providers to test and implement AI-driven solutions to, for example, generate legal contracts. In Singapore, OCBC Bank has teamed up with FinTech firm ThetaRay to make its anti-money laundering process more efficient.

Where AI is employed to improve the customer experience, it often helps to make employees look smarter and more effective. However, human guidance is still an essential prerequisite for the effective application of AI, argues Celent’s Dan Latimore.

“Having a human check in on the results and give the algorithms nudges or feedback, based on perceptions that are intuitive to humans but difficult for models to pick up, can generally speed up the learning process significantly while improving accuracy.”
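The feedback loop Latimore describes is essentially the human-in-the-loop pattern known as active learning. The sketch below is an assumption on our part, not Celent’s method: the model flags the predictions it is least confident about, a human supplies labels for just those cases, and the model is retrained.

```python
# A minimal active-learning loop: label only the cases the model finds
# hardest, retrain, and repeat. Data here is synthetic; the known labels
# stand in for a human reviewer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2_000, n_features=20, random_state=1)
labelled = list(range(50))       # start with 50 human-labelled cases
pool = list(range(50, len(X)))   # unlabelled pool

model = LogisticRegression(max_iter=1_000)
for round_no in range(5):
    model.fit(X[labelled], y[labelled])
    # Confidence = distance of the predicted probability from 0.5.
    conf = np.abs(model.predict_proba(X[pool])[:, 1] - 0.5)
    hardest = [pool[i] for i in np.argsort(conf)[:20]]
    labelled += hardest          # the "human" labels these cases
    pool = [i for i in pool if i not in hardest]
    print(f"round {round_no}: accuracy {model.score(X, y):.3f}")
```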

"To serve our customers artificial intelligence alone isn't sufficient - we really need human intelligence. It is about blending the best of the two." Janet Yuen, Head of Innovation APAC, HSBC

If anything, the advances in AI have given a deeper insight into just how complex the human mind is. Janet Yuen of HSBC points out that, in the 1968 sci-fi classic 2001: A Space Odyssey, director Stanley Kubrick predicted a world where computers not only talked but were self-aware and capable of self-preservation.

“We are 17 years on from 2001 and not there yet because there is so much complexity in how we think and act - we interpret, we plan, we reason, we learn before we respond.” She adds that the approach HSBC is taking is to look at how to solve problems that are easy for humans and hard for computers, and in so doing develop a set of capabilities in conversational AI to work across customer needs, markets and channels.

“To serve our customers artificial intelligence alone isn't sufficient - we really need human intelligence. It is about blending the best of the two.”

CASE STUDY: Keeping a close eye on the robots

Artificial intelligence systems can read and analyse data in a way that humans often can’t match. However, fully-automated systems can have their own problems, according to Jeff Holman, Chief Investment Officer at Sentient Technologies, responsible for the company’s trading and investment arm.

Sentient Investment Management operates a hedge-fund strategy that uses deep learning and one of the world’s largest evolutionary computation systems to derive trading strategies from previously unseen patterns.

Originally, the idea was to offer a fully autonomous hedge fund that made all of its stock trades using AI. But, during testing, limitations were spotted in this approach. The San Francisco-based company now pairs its AI capabilities with human portfolio managers.

“We realised that a purely AI-driven approach can be very difficult to manage,” says Holman. “You might exclude them from strategies that you don’t want to engage in, like market timing, and occasionally, without human oversight, they might find ways to engage in them.”

"We realised that a purely AI-driven approach can be very difficult to manage. You might exclude them from strategies that you don’t want to engage in, like market timing, and occasionally, without human oversight, they might find ways to engage in them." Jeff Holman, Chief Investment Officer, Sentient Technologies

To prevent this, human portfolio managers are there to check that the AI isn’t misbehaving. Holman says that portfolio managers will, for example, run AI-generated strategies through standard performance analytics, like risk decomposition and performance attribution. “You are trying to ensure it isn’t generating returns in a way you find objectionable, such as systematic exposure to well-known risk factors or through excessive concentration,” he adds.
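One such check can be sketched as an ordinary least-squares regression of a strategy’s returns on known factor returns; large betas flag the systematic exposure Holman describes. The data and factor names below are hypothetical, and this is our illustration rather than Sentient’s tooling.

```python
# Sketch of a factor-exposure check: regress strategy returns on known
# risk factors and inspect the fitted betas.
import numpy as np

rng = np.random.default_rng(0)
n_days = 250
# Hypothetical daily returns for three well-known risk factors.
factors = rng.normal(0, 0.01, size=(n_days, 3))
# A strategy that (undesirably) leans on the first factor.
strategy = 0.8 * factors[:, 0] + rng.normal(0, 0.005, n_days)

# OLS: strategy ~ alpha + beta_1*f1 + beta_2*f2 + beta_3*f3
X = np.column_stack([np.ones(n_days), factors])
betas, *_ = np.linalg.lstsq(X, strategy, rcond=None)
for name, b in zip(("alpha", "market", "value", "momentum"), betas):
    print(f"{name:>8}: {b:+.4f}")  # a market beta near +0.8 raises a flag
```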

CHAPTER 3: New limitations, new opportunities

In April 2018, the Monetary Authority of Singapore (MAS) announced that it was working with a group of industry players to develop a guide to promote the responsible and ethical use of AI and data analytics by financial institutions. Other regulators have been slower to respond to the emergence of AI technology.

Indeed, only 32% of survey respondents believe that financial regulators have sufficient understanding of financial technologies and their impact on the current financial services sector.

“There have been some regulators that have been slow to see this coming, adopting a wait and see approach,” says Maria McDermott, Director of Corporate Affairs and Risk Management at global FinTech company Know Your Customer. “Other regulators have been very proactive making statements about the rules they expect companies developing technology to follow. It would be good to see more regulators take the latter approach and make statements about what they expect to see.”

“It would be good to see more regulators make statements about what they expect to see.” Maria McDermott, Director of Corporate Affairs and Risk Management, Know Your Customer

In Europe, regulators have recently been focused on introducing a number of transformative regulations that could potentially affect the future development of AI capabilities: namely, the revised Payment Services Directive (PSD2) and General Data Protection Regulation (GDPR).

Jesse McWaters from the World Economic Forum says about the GDPR, which places new limitations and requirements on the collection, transmission, and storage of personal data: “We are seeing an environment where European financial institutions are significantly constrained in terms of the data they can use, whereas, say, Chinese firms’ data regulations are more expansive.”

6 in 10 say existing regulation isn’t sufficient to address the issues posed by AI

Although data protection compliance concerns could be seen as a blocker to AI deployment in Europe, it isn’t a black and white picture. Europe has been a leader in driving open banking, and progressive regulators like the UK’s Financial Conduct Authority (FCA) have been at the forefront of innovation in financial services, including promoting the use of machine learning to make better decisions and the creation of machine-readable rules.

In addition, the FCA has been a key proponent of greater global cooperation between regulators and recently unveiled the Global Financial Innovation Network, an alliance of regulators from around the world created to share policy ideas and keep up to date with developments in emerging technologies, including AI. The network includes the US Bureau of Consumer Financial Protection, MAS, the Hong Kong Monetary Authority, the Australian Securities & Investments Commission (ASIC) and other agencies.

John Price, Commissioner at ASIC, says: “When you dig down to the details the sort of regulatory issues we all face are quite similar and it is really helpful to share experiences and cooperate across different markets.”

The regulatory issues concerning AI’s use in financial markets have become sharper and more nuanced recently, according to Price.

“More focused questions are being asked about accountability: who is responsible for the algorithms making what can be life-changing decisions? There is also a focus on issues like algorithm bias and how to deal with algorithms that have gone wrong.”

Regulation at the crossroads

Yet, despite the evident attention of some watchdogs, when asked if existing regulation is sufficient to address the issues posed by AI and machine learning, a majority of survey respondents (59%) see gaps. Still, this represents a decrease of 10 percentage points compared with our previous survey.

In 2018, one in five (21%) say existing regulation is not sufficient at all, while 38% think regulators are moving in the right direction but that further regulation must be drafted and implemented.

Asked what single step they would like regulators to take to manage the risks associated with machine learning, respondents’ most popular suggestion (26%) is to collect more data to understand how technology is changing financial markets (chart 8). The second-largest group (23%) suggests increased collaboration between regulators and FinTech companies.

CHART 8: What market regulators should do to address the impact of new technologies. Collect more data to understand how technology is changing financial markets: 26%; Increase collaboration with fintech adopters: 23%; Co-ordinate regulatory efforts across markets, in a systematic global fashion: 18%; Oblige market participants to share more information on their technology and how it operates with regulators: 12%; Increase surveillance of markets by acquiring the latest technology: 12%; Introduce machine-readable regulations: 5%; Other: 3%.

Managing legal challenges and ethical risk

There remains a great deal of uncertainty among survey respondents as to whether their own organisations understand the legal risks associated with emerging technologies. Some 53% say they are not very confident or not confident at all that they do (chart 9).

CHART 9: My organisation understands all material legal risks associated with new financial technologies. Very confident: 6%; Confident: 32%; Not very confident: 41%; Not confident at all: 12%; Don’t know: 8%.

Sue McLean, Partner, Technology at Baker McKenzie, isn't surprised by the uncertainty when it comes to legal challenges. "It's still early days in terms of the deployment of these technologies and the legal risks and challenges are going to differ depending on the nature of the solution and the particular use case.” She notes that there is not going to be a one-size-fits-all approach.

“Legal teams need to work closely with the business leads to identify early on the relevant legal issues - which may include regulatory compliance or data protection concerns or questions of liability or intellectual property ownership."

She adds: "Firms seeking to engage AI solution providers may also need to revisit their standard contract templates to consider whether they need adjustment to take account of some of the unique issues that AI projects raise."

Engaging early on with business partners is essential to mitigate risk, agrees Giuliana Marinelli, Senior Counsel at Royal Bank of Canada.

“Use of data raises the issue of data privacy law, so there are important questions to be answered about how we, the machine and our business partners are ultimately going to use this information,” explains Marinelli. “The use of certain AI technologies may trigger export controls, even in the early research or testing phase of an organisation. That should be considered. It is also essential to establish who is liable for the decisions made by a machine.”

"Use of data raises the issue of data privacy law. There are important questions to be answered about how we, the machine and our business partners are ultimately going to use this information." Giuliana Marinelli, Senior Counsel, Royal Bank of Canada

Giuliana Marinelli observes that ethical concerns related to the use of AI are also starting to rise up the corporate agenda. However, our survey demonstrates that a focus on ethics is still in its infancy.

Only 7% of our respondents identify ethical concerns as an obstacle to introducing AI in new areas.

A majority of AI users either don’t consider their use of AI to raise ethical concerns (34%) or admit that ethical concerns exist but are not being addressed (22%) (chart 10).

Regulators, however, are taking ethical risks very seriously, according to ASIC Commissioner Price. Attention is being focused on biases that may be baked into certain datasets and the algorithms used to mine them. On a practical level, biased data could lead to significant consumer detriment, for instance meaning the difference between getting a mortgage and not.

Interestingly, although a large proportion (41%) of respondents say that their compliance team oversees AI ethical issues at their organisation, 24% say their executive board is responsible for ethics.

Stephanie Magnus, Head of Baker McKenzie’s Singapore Financial Services Regulatory and FinTech Practice, agrees that senior management need to be focused on these issues. “In jurisdictions like the UK, Singapore, Hong Kong and Australia, measures have recently been introduced to hold senior management more accountable for decisions taken within their organisations,” she says. “That includes accountability for their use of technology.”

And it looks like organisations are starting to develop a more focused approach to ethical issues. Nearly one in five of our respondents (18%) indicates that their organisation already has either an independent ethics council overseeing AI-related issues or even a dedicated AI ethics team.

However, as with so many aspects related to AI and machine learning, addressing the ethical dilemmas associated with AI is still very much a work in progress.

As Sue McLean explains: "Even for those financial institutions who have created ethical principles for AI, the challenge is how to operationalise those principles. Firms are looking at how to embed those values in an effective manner, for example by carrying out ethical impact assessments when they kick off an AI project and carrying out ongoing monitoring and testing of their AI solutions. They are also looking for suppliers who share this commitment to responsible AI."

Conclusion

While the futuristic notion of businesses filled with robotic workers is worlds away, AI, in all its forms and variations, is already helping to transform the financial services sector. However, it's clear that institutions, from banks to insurers, asset managers to payment operators, are still feeling their way forward rather than racing towards an AI-enabled future.

There is a widespread recognition that forward-thinking organisations that step into the future with their eyes open, focusing on the risk and the possibilities of the new technology, could well gain a competitive edge.

Yet, while some organisations have their eyes on the prize, others have yet to formulate how AI might help their business and how to implement it to best effect. Almost a third (29%) of survey respondents say that their company still doesn’t have a strategy in place to deal with the wider impact of new technologies.

With use of AI still in its early stages, it is understandable that many financial services institutions are proceeding with caution. But, as the use of AI technology ramps up (and with other technologies like blockchain not far behind), not having a clearly defined strategy in place increasingly looks like a business risk.

About this report

Defining our terms

AI is an umbrella term encompassing several fields of research in computer science, all of which seek to enable computer systems to perform tasks normally requiring human intelligence, such as visual perception and decision-making. Machine learning is a branch of AI that provides computer systems with the ability to learn and adapt independently, based on algorithms and the analysis of data.
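The distinction can be made concrete in a few lines of Python. In the entirely made-up example below, no rule for approving loans is hand-coded; the model infers one from labelled examples, which is the “learning” in machine learning.

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up applicants: [annual income in £k, existing debt in £k].
X = [[20, 15], [35, 5], [60, 40], [90, 10], [25, 20], [80, 5]]
y = [0, 1, 0, 1, 0, 1]  # 1 = loan repaid, 0 = defaulted (fictional)

model = DecisionTreeClassifier(random_state=0).fit(X, y)  # learn from data
print(model.predict([[50, 8]]))  # apply the learned rule to a new case
```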

The research

For this report, commissioned by Baker McKenzie, Euromoney Thought Leadership Consulting surveyed 355 senior executives working for financial institutions globally. Fifty-nine percent of respondents work in C-level positions; the others are senior decision-makers in a variety of roles, including data, technology, legal and compliance.

Eighteen percent of participants work in asset management and 16% in investment banking, followed by retail banking (14%), insurance (12%), hedge funds and quant funds (9%), and other segments of the financial services industry.

Thirty-five percent of respondents work for companies with between 100 and 1,000 employees. A further 33% work for larger organisations.

Thirty-nine percent are based in the Americas, 36% in Europe, Middle East and Africa, and 25% in Asia-Pacific.

The survey was conducted during the period 19 June to 24 July 2018.

In addition, in-depth interviews with 12 senior industry executives and experts were held throughout July and August 2018.

Expert interviews were conducted with:

Jeff Holman, Chief Investment Officer, Sentient Technologies
Dan Latimore, Senior Vice President, Celent
Theodore Ling, Partner - Technology, Baker McKenzie, Toronto
Stephanie Magnus, Head of Financial Services Regulatory and FinTech Practice, Baker McKenzie, Singapore
Giuliana Marinelli, Senior Counsel, Royal Bank of Canada
Maria McDermott, Director of Corporate Affairs and Risk Management, Know Your Customer
Sue McLean, Partner - Technology, Baker McKenzie, London
Jesse McWaters, Financial Innovation Lead, World Economic Forum
John Price, Commissioner, Australian Securities and Investments Commission (ASIC)
Andrew Rear, Chief Executive, Munich Re Digital Partners
Deepak Soni, Director of Commercial Intermediary, AXA
Janet Yuen, Asia Pacific Head of Innovation, HSBC

Appendix

Q1.
How much do you think the following financial services functions will be changed by AI and machine-learning technology over the next 3 years?

Credit assessment, Completely: 24%, Substantially: 52%, Moderately: 19%, Little: 3%
Risk management, Completely: 27%, Substantially: 48%, Moderately: 20%, Little: 5%
Financial analysis, Completely: 23%, Substantially: 51%, Moderately: 21%, Little: 4%
Trading, Completely: 15%, Substantially: 56%, Moderately: 22%, Little: 5%
IT, Completely: 23%, Substantially: 40%, Moderately: 25%, Little: 10%
Investment/Portfolio management, Completely: 12%, Substantially: 49%, Moderately: 30%, Little: 7%
Clerks/Administrators, Completely: 13%, Substantially: 32%, Moderately: 28%, Little: 19%, Not at all: 6%
Legal and compliance, Completely: 10%, Substantially: 31%, Moderately: 37%, Little: 16%
Sales, Completely: 10%, Substantially: 24%, Moderately: 43%, Little: 19%
General management, Completely: 11%, Substantially: 23%, Moderately: 33%, Little: 29%

Q2.
In which financial services sectors do you expect AI and machine learning to have the most transformative impact over the next 5 years?
(Select up to three)

Mobile payments and digital wallets, 41%;  Asset management, 40%;  Provision of credit (eg. credit cards; unsecured loans; car finance), 37%;  Retail banking, 31%;  Stock and trading exchanges, 27%;  Insurance, 26%;  Hedge funds, 25%;  Investment banking, 15%;  Peer-to-peer lending/ Shadow banking, 14%;  Corporate finance, 12%;  Private wealth management, 11%;  Private equity, 10%;  Other, 1%.

Q3.
What impact will AI and machine learning have on the following aspects of the financial markets over the next 3 years?

Credit assessment, Very positive: 25%, Positive: 56%, Neutral: 14%
Risk management, Very positive: 21%, Positive: 50%, Neutral: 20%, Negative: 6%
Compliance, Very positive: 22%, Positive: 45%, Neutral: 25%, Negative: 5%
Competitiveness of markets, Very positive: 20%, Positive: 45%, Neutral: 23%, Negative: 10%
Market liquidity, Very positive: 9%, Positive: 40%, Neutral: 35%, Negative: 11%
Regulation of markets, Very positive: 10%, Positive: 37%, Neutral: 33%, Negative: 15%
Market stability, Very positive: 12%, Positive: 24%, Neutral: 28%, Negative: 30%
Structure of the human workforce, Positive: 18%, Neutral: 34%, Negative: 37%, Very negative: 7%

Q4.
How much do you think your own job will be changed by AI and machine learning technology over the medium and longer terms?

Over the next 3 years, Substantially: 16%, Moderately: 48%, Little: 30%, Not at all: 5%
Over the next 10 years, Completely: 14%, Substantially: 51%, Moderately: 26%, Little: 5%
Over the next 15 years, Completely: 41%, Substantially: 37%, Moderately: 12%, Don’t know: 6%


Q5.
Where in your organisation is AI being used today, and where do you expect it to be introduced in the next 3 years? (Select only those that apply)

Financial analysis/research, Today: 31%, Within the next 3 years: 50%
IT, Today: 35%, Within the next 3 years: 40%
Risk management, Today: 18%, Within the next 3 years: 56%
Trading, Today: 21%, Within the next 3 years: 51%
Investment/Portfolio management, Today: 23%, Within the next 3 years: 48%
Fraud prevention including KYC and AML, Today: 25%, Within the next 3 years: 43%
Credit approval process, Today: 22%, Within the next 3 years: 43%
Sales, Today: 12%, Within the next 3 years: 48%
Administration, Today: 21%, Within the next 3 years: 37%
Legal and compliance, Today: 15%, Within the next 3 years: 42%

Q6.
By which means is your organisation developing AI capabilities?
(Select all that apply)

Internal research and development: 57%;  Working with advisors/consultants: 46%;  Participation in innovation hubs and incubators: 31%;  Talent acquisition: 30%;  Partnering with start-ups: 25%;  Commissioning or collaborating with universities or research institutes: 21%;  Joint venturing and M&A: 20%;  Outsourced R&D to technology firms: 13%;  Crowdsourcing: 9%;  Other: 1%;  We are not developing AI/machine learning: 15%.

Q7.
For my organisation, AI’s biggest potential lies in:
(Select up to three)

Increased efficiency of operations: 52%;  Improving the customer experience: 42%;  Improving risk management: 40%;  Expanding into new business areas: 28%;  Keeping pace with competitors: 28%;  Reducing costs of human workforce: 26%;  Minimising emotions in decision making: 22%;  Improving fraud prevention including KYC/AML processes: 18%;  Other: 3%;  We have not identified any potential: 5%.

Q8.
What are the toughest obstacles your organisation faces in seeking to introduce AI in new areas?
(Select up to three)

Cost of AI systems: 50%;  Shortage of specialist skills to operate/maintain the technology: 38%;  Cyber security concerns: 32%;  Integrating humans and technology: 26%;  Senior management/ board buy-in: 25%;  Shortage of analytical skills: 25%;  Data privacy concerns: 23%;  Risks of malfunctioning technology: 15%;  Identifying and mitigating all material legal risks: 12%;  Customer buy-in: 10%;  Regulatory constraints: 9%;  Ethical concerns: 7%;  Other: 2%.

Q9.
Which of the following best describes your organisation’s approach to ethical risk arising from the use of AI?

We are in the process of developing an AI-related ethics policy/framework: 34%;  We don’t consider the way we use AI to raise ethical concerns: 34%;  Ethical concerns exist but we are not addressing them: 22%;  AI-related ethics policy/framework already in place: 8%;  Other: 2%.

*survey participants that are using AI only

Q10.
Who within your organisation oversees AI-related ethical issues?

Compliance team: 41%;  Executive board: 24%;  Independent ethics council: 13%;  Other: 11%;  Legal counsel: 7%;  Dedicated AI ethics team: 5%.

*survey participants that are using AI only

Q11.
Does your company have a strategy in place to deal with the wider impact of new technologies on its business?

Yes: 61%; No: 29%; Don't know: 10%.

Q12.
What in your view best describes the evolution of AI capabilities in financial services over the past 3 years?

Evolution not revolution: AI improved slower than expected: 42%; AI capabilities progressed as expected: 36%; Revolution not evolution: AI capabilities progressed faster than expected: 22%.

Q13.
Do you agree or disagree with the following statements?

AI in financial services will empower, rather than replace, the workforce, Strongly agree: 10%, Agree: 43%, Neither agree nor disagree: 24%, Disagree: 15%
The real potential of AI lies in new businesses and products, not back office efficiencies, Strongly agree: 11%, Agree: 28%, Neither agree nor disagree: 21%, Disagree: 28%, Strongly disagree: 8%
The problems AI has solved so far are very narrow and computers won’t replace humans any time soon, Strongly agree: 10%, Agree: 28%, Neither agree nor disagree: 23%, Disagree: 30%, Strongly disagree: 6%
AI is just another means to reduce staff costs, Agree: 26%, Neither agree nor disagree: 28%, Disagree: 30%, Strongly disagree: 9%

Q14.
Which of the following is the single most important step regulators should take to address the impact of new technologies on financial markets?

Collect more data to understand how technology is changing financial markets: 26%;  Increase collaboration with fintech adopters: 23%;  Co-ordinate regulatory efforts across markets, in a systematic global fashion: 18%;  Increase surveillance of markets by acquiring the latest technology: 12%;  Oblige market participants to share more information on their technology and how it operates with regulators: 12%;  Introduce machine-readable regulations: 5%;  Other: 3%.

Q15.
Do you agree or disagree with the following statements?

Some financial firms will gain an unfair advantage with the introduction of AI, Strongly agree: 15%, Agree: 45%, Neither agree nor disagree: 21%, Disagree: 12%
Market abuse will rise as a result of AI, Strongly agree: 8%, Agree: 26%, Neither agree nor disagree: 27%, Disagree: 26%
Financial regulators have sufficient understanding of financial technologies and their impact on the current financial services sector, Strongly agree: 8%, Agree: 24%, Neither agree nor disagree: 16%, Disagree: 32%, Strongly disagree: 18%

Q16.
Where do you think regulators should enforce/encourage the adoption of AI technology first, to reduce regulatory risk?

Anti-money laundering (AML) and Know Your Customer (KYC) processes: 39%,  Market misconduct: 30%,  Liquidity risk assessment: 16%,  Portfolio risk assessment: 14%,  Other: 1%.

Q17.
Do you think existing regulation is sufficient to address the issues posed by AI /machine learning?

Yes, financial institutions are already overregulated: 14%,  Yes, existing regulation is at the right level: 20%,  In part, but further regulation must be drafted and implemented: 38%,  No, not at all: 21%,  Don’t know: 7%.

Q18.
What is the biggest AI-related macro risk for financial services?

A widespread use of opaque models may result in unintended consequences: 24%,  Cyber security: 23%,  Emergence of new and unexpected forms of interconnectedness between financial markets & institutions: 20%,  Unintended creation of new systemically important players, possibly outside regulatory supervision: 19%,  Misplaced trust in AI increases risk appetite, making a new financial crash more likely: 14%.

Q19.
How confident are you that all material legal risks associated with new financial technologies have been properly understood by your organisation?

Very confident: 6%,  Confident: 32%,  Not very confident: 41%,  Not confident at all: 12%,  Don’t know: 8%.

Managing Editor: Ben Bschor
Writer: David Budworth
Designer: Claire Boston