Artificial intelligence (AI) is one of those areas of development that could revolutionise our day-to-day lives, much as the internet did in the last century.
AI at work
The term AI is often used to refer to the development of computers, computer programs or applications that can, without direct human intervention, replicate processes traditionally considered to require a degree of intelligence, such as strategy, reasoning, decision-making, problem-solving and deduction. For example, an AI program can use algorithms to analyse datasets, and make decisions and take actions based on the output of that analysis, work that would traditionally be done by a human. AI programs can also be developed to interact with people in ways that mimic natural human interaction, for example in online customer service support, sometimes to an extent that the difference is hard to recognise (the 'uncanny valley').
AI has the potential to supplant a great number of human processes, and to do so more cheaply, faster and without human error. In practice, however, current applications and opportunities are much more limited, constrained by practical factors such as the sheer processing power required, especially pending a breakthrough in quantum computing, and by 'design' limitations such as the inability to learn by extrapolating from limited failures, or to apply common sense to new scenarios.
Is this development a good thing? AI can cut costs, eliminate human error, and potentially make products and services available to those who might not otherwise be able to access them. But what about the possible downsides?
A fear of AI
Fifty years ago, in the film 2001: A Space Odyssey, an AI slowly turns from being the humans' assistant to pitting itself against them. HAL, the Heuristically programmed ALgorithmic computer, 'realises' that human fallibility stands in the way of achieving its operational objectives and therefore seeks to remove the obstacle. Presciently, this film encapsulated many of the present concerns about AI: what will stop the machines 'deciding' to exercise the powers they are given in a way that we don't like? For example, what is our recourse when we need a computer to evaluate a request from us, such as deciding whether or not to accept a job application, and the computer says no? We can try to appeal to other humans on an emotional level, or challenge the basis for their decision; a computer program implacably applying an incomprehensible algorithm does not present that option.
Regulation is the most frequent knee-jerk response to any such 'what if' question. However, many regulators are cautious about imposing regulation in a vacuum, preferring to focus on particular applications of a technology rather than prescribing or proscribing the technology itself. The well-known risk of regulating a technology directly is that it will develop so quickly that regulation always lags behind.
In the financial services space, AI has already been making inroads into market practice, as evidenced by:
- Behavioural premium pricing: Insurance companies have been deploying algorithms to, for example, price motor insurance policies based on data gathered about the prospective policyholder's driving habits (sketched in the example after this list).
- Automated decision-making: Credit card companies can decide whether or not to grant a credit card application based on data gathered about the applicant's spending habits and credit history, as well as their age and postcode.
- Robo-advice: A number of firms have developed offerings that provide financial advice to consumers without the need for direct human involvement, based on data input by the customer about their means, wants and needs, measured against product models and performance data to find appropriate investments.
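To make the first of these concrete, here is a minimal sketch of how a behavioural premium pricing algorithm might work. The risk factors, weightings and base premium are hypothetical illustrations for this article, not any insurer's actual model.

```python
# A minimal, hypothetical sketch of behavioural premium pricing.
# The factors, weightings and base premium below are illustrative only.
from dataclasses import dataclass

@dataclass
class DrivingProfile:
    miles_per_year: float            # telematics-reported annual mileage
    harsh_braking_per_100mi: float   # harsh-braking events per 100 miles
    night_driving_share: float       # fraction of miles driven late at night

BASE_PREMIUM = 400.00  # hypothetical base annual premium, in GBP

def price_policy(profile: DrivingProfile) -> float:
    """Adjust the base premium using behavioural risk factors."""
    premium = BASE_PREMIUM
    premium *= 1.0 + 0.10 * min(profile.miles_per_year / 10_000, 2.0)
    premium *= 1.0 + 0.02 * profile.harsh_braking_per_100mi
    premium *= 1.0 + 0.25 * profile.night_driving_share
    return round(premium, 2)

# A cautious low-mileage driver pays less than a heavy night-time driver.
print(price_policy(DrivingProfile(5_000, 0.5, 0.05)))   # 429.50
print(price_policy(DrivingProfile(20_000, 4.0, 0.40)))  # 570.24
```

Note how the risks flagged later in this article follow directly from the design: each factor compounds the price, so a sufficiently risky profile can be priced arbitrarily high unless a cap is imposed.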
Automating these processes with AI offers the ability to manage down the costs of servicing a given market while potentially eliminating the rogue variables introduced by human fallibility. AI could thereby help make financial services products more accessible, enabling them to be offered at a price affordable to a greater section of the public.
AI versus human emotion
However, we cannot forget the potential risks: what if an insurance pricing algorithm becomes so keenly aligned to risk that a segment of higher-risk, and potentially vulnerable, customers is effectively priced out of the market? How can an algorithm be held accountable if a customer feels that a decision about their credit card application was wrong? And what if the questions about investment intentions focus too narrowly on what customers say they want, missing the nuances of a customer's wishes and fears that an experienced human adviser would know to pick up on and pursue?
What could regulators do to address these potential risks, and the consumer detriment that would follow if they materialised? One option, and likely only part of any solution, is to ensure that firms are mindful of the consumer and market protection outcomes and objectives at the root of the regulations with which they must comply, and that they will be held accountable when their products and services fail to deliver those outcomes. For example, the UK's Financial Conduct Authority (FCA) requires firms providing services to consumers to treat their customers fairly and to communicate in a way that is clear, fair and not misleading. The onus is then on firms to ensure that, whatever new developments they introduce, these outcomes are consistently achieved. For the insurance firm described above, this could mean paying close attention to the parameters and design of the pricing algorithm, for example to ensure that a certain pricing threshold is not breached. For the credit card firm, it could mean ensuring that a customer whose application is declined is told how that decision was reached and what factors it was based upon. For the robo-adviser proposition, it could mean a periodic review of investments and portfolios by a human adviser.
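As an illustration of the first two safeguards, the sketch below caps an algorithmic premium at a set multiple of a reference price, and attaches human-readable reasons to an automated credit decision so the customer can be told how it was reached. The threshold, decision factors and reason texts are hypothetical, not drawn from any regulator's rules or any firm's actual controls.

```python
# Hypothetical sketches of two safeguards discussed above: a pricing
# threshold, and an explainable automated credit decision.

MAX_PREMIUM_MULTIPLE = 3.0  # hypothetical fairness threshold

def capped_premium(algorithmic_premium: float, reference_premium: float) -> float:
    """Never charge more than a set multiple of a reference premium,
    however high the algorithm's risk-based price climbs."""
    return min(algorithmic_premium, reference_premium * MAX_PREMIUM_MULTIPLE)

def decide_credit_application(applicant: dict) -> tuple[bool, list[str]]:
    """Return a decision together with the factors it was based on,
    so a declined customer can be told how it was reached."""
    reasons = []
    if applicant["missed_payments_last_year"] > 2:
        reasons.append("more than two missed payments in the last 12 months")
    if applicant["credit_utilisation"] > 0.9:
        reasons.append("credit utilisation above 90% of existing limits")
    return (not reasons), (reasons or ["all automated checks passed"])

print(capped_premium(algorithmic_premium=2_500.0, reference_premium=400.0))  # 1200.0
approved, reasons = decide_credit_application(
    {"missed_payments_last_year": 3, "credit_utilisation": 0.45}
)
print(approved, reasons)  # False ['more than two missed payments in the last 12 months']
```

The design choice here is that explainability is built into the decision function itself, rather than reconstructed afterwards: the decision and its reasons are produced together, so the two can never diverge.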
Regulating AI
Practically, regulators will need to work with firms to ensure that the need to deliver such outcomes does not block development. Since 2016, the FCA has made a regulatory 'sandbox' available to firms, letting them develop new ideas in a 'safe' environment that contains the risk of customer detriment while products are in development, and offering support in identifying appropriate consumer protection safeguards that can be built into new products and services. The FCA is now exploring expanding this sandbox to a global stage, working with other regulators around the world to support firms that may offer their products in more than one regulatory jurisdiction. The FCA has also been meeting organisations working to expand the current boundaries and applications of AI at specialist events around the UK, such as the FinTech North 2018 series of conferences, which raises the profile of FinTech capability in the North of England.
By working together to balance potentially competing factors such as technological development and consumer protection, regulators and the industry may be able to provide a stable platform for developing AI, while overcoming, or at least assuaging, the fears of its target audience. In 2001: A Space Odyssey, the conflict between AI and humans was resolved only by the 'death' of the AI. Let's hope that in real life a way of co-existence can be found instead.
Roseyna Jahangir, Associate at Womble Bond Dickinson (UK) LLP