Financial Services Intelligence Watch

Ethics in AI

Publish date: 25 January 2019
Issue Number: 9
Diary: CompliNEWS Ethics
Category: Ethics

World Economic Forum 

Everyone from Stephen Hawking to Bill Gates and Elon Musk has discussed the philosophy of AI. Now that companies around the world are creating AI products at an incredible rate, it is increasingly urgent that we stop talking about how to build ethical safeguards into AI and start doing it.

The race to build the first fully autonomous vehicle (AV) has brought this issue front and centre. The death of a pedestrian in March 2018 raised concerns not only about the safety of AVs but also about their ethical implications. How do you teach a machine to 'think' ethically? And who decides who lives and who dies? While this is an obvious (and impending) example, ethical questions about AI are all around us.

Why are ethics so important?
The areas where AI stands to benefit us the most are also those with the greatest potential to harm us. Take healthcare, an industry where decisions are not always black and white. AI is far from being able to make complex diagnoses or replicate the 'gut feelings' of a human. Even if it could, would AI doctors be ethical? Could AI be trained to increase profits at the patient’s expense? And in the case of malpractice, who would the patient sue? The robot?

AI has been projected to manage $1 trillion in assets by 2020. As in healthcare, not all financial decisions can be made on logic alone. The variables that play into managing a portfolio are complex and one false move could lead to millions in losses. Could AI be used to exploit customer behaviour and data? What about hacking? Would you trust a machine to manage your money?

AI warfare raises the most concerning ethical flags. Fully autonomous 'mobile intelligent entities' are coming and they promise to change warfare as we know it. What happens when an AI missile makes a mistake? How many errors are 'acceptable'?

The only way to make sure we do not create a monster that could turn against its creators is to incorporate ethical safeguards into the architecture of the AI being built today.

Three strategies are proposed that anyone currently building AI should consider:

  1. Bring in a human in sensitive scenarios
  2. Put safeguards in place so machines can self-correct
  3. Create an ethics code
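The first two strategies can be made concrete in software. As a minimal sketch (all names, thresholds, and fields below are illustrative assumptions, not any company's actual system), sensitive or low-confidence decisions can be routed to a human reviewer rather than executed automatically:

```python
# Hypothetical sketch: route sensitive or low-confidence AI decisions
# to a human instead of acting automatically (strategy 1), so the
# system can be corrected before harm occurs (strategy 2).
from dataclasses import dataclass


@dataclass
class Decision:
    action: str        # what the model proposes to do
    confidence: float  # model's confidence, 0.0 to 1.0
    sensitive: bool    # does this touch safety, health, or money?


def route(decision: Decision, threshold: float = 0.9) -> str:
    """Return who should finalise the decision."""
    if decision.sensitive or decision.confidence < threshold:
        return "human_review"   # bring a human into the loop
    return "auto_approve"       # safe to let the machine proceed


print(route(Decision("approve_loan", 0.95, sensitive=True)))   # human_review
print(route(Decision("tag_photo", 0.97, sensitive=False)))     # auto_approve
```

The point of the sketch is that the escalation rule lives in one auditable place, so an ethics code (strategy 3) can be reviewed against the code that enforces it.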

This may seem obvious, but it is surprising how few companies are actually doing this. Whether it’s about data privacy, personalization or deep learning, every organization should have a set of standards it operates by. According to Apple CEO Tim Cook, 'the best regulation is self regulation'. For Apple, this means carefully examining every app on its platform to make sure they aren’t violating users’ privacy.

This is not a one-size-fits-all solution; the ethical code you enact must be dictated by the way you are using AI. If your company breaks (or nears breaking) a standard, employees should be encouraged to raise the flag, and you, as a leader, are responsible for taking those concerns seriously.

Here are some recommendations for creating an ethics code:

  • When personal data is at stake, pledge to aggregate and anonymise it to the best of your ability, treating consumers’ data as you would your own.
  • Pledge to enact safeguards at multiple intervals in the process to ensure the machine isn’t making harmful decisions.
  • Pledge to retrain all employees who have been displaced by AI in a related role.
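The first pledge above can be illustrated in code. The sketch below is an assumption-laden toy (the field names, salt, and schema are invented for illustration): direct identifiers are replaced with a salted one-way hash, and only aggregates ever leave the pipeline.

```python
# Illustrative sketch of "aggregate and anonymise" for personal data.
# The record schema and salt value here are hypothetical examples.
import hashlib
from collections import defaultdict


def anonymise(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Replace the direct identifier with a salted one-way hash
    and drop fields with no analytic value."""
    out = dict(record)
    out["customer_id"] = hashlib.sha256(
        (salt + record["customer_id"]).encode()
    ).hexdigest()[:12]
    out.pop("name", None)
    return out


def aggregate_by_region(records: list) -> dict:
    """Report regional totals only, never individual balances."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["balance"]
    return dict(totals)


data = [
    {"customer_id": "c1", "name": "Ann", "region": "ZA", "balance": 100.0},
    {"customer_id": "c2", "name": "Ben", "region": "ZA", "balance": 50.0},
]
safe = [anonymise(r) for r in data]
print(aggregate_by_region(safe))  # {'ZA': 150.0}
```

A real deployment would go further (rotating salts, differential privacy, access controls), but the principle is the same: the raw identifiers never reach the analysis layer.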

As the architects of the future, we have a responsibility to build technologies that enhance human lives, not harm them. The opportunity to take a step back and really understand how these product decisions can impact human lives is now. By doing so, we can collectively become stewards of an ethical future.

This article is part of the World Economic Forum Annual Meeting

Working Smart

By Lee Rossini

A brand identity is an important factor in the success of a financial advice business; it is essential to be noticed in a competitive environment. Clients are becoming increasingly discerning about the businesses they trust with their financial well-being. Therefore, building a brand that resonates with your target audience is essential not only for attracting clients but also for fostering trust and credibility. Here are some guidelines on how you can successfully create a strong brand identity.

CPD

Subscribers are reminded that they can now complete their monthly CPD quizzes and claim CPD hours. For more on accessing the CPD quizzes, please click on the CPD FAQs button on the top bar of the screen. 
