
24-02-2023 · Insight

Mitigating the risks of Artificial Intelligence

Companies are becoming more aware of the risks that artificial intelligence (AI) can pose to society, but much still needs to be done. That was the outcome of three years of engagement by Robeco’s Active Ownership team with ten companies at the forefront of the technology.

    Author

  • Daniëlle Essink-Zuiderwijk - Engagement Specialist

AI can offer considerable advantages, from simple machine learning that seems to know what you meant to type, to more complex algorithms that can predict health care needs and detect patterns in climate change. It is now routinely used across the tech spectrum and often kicks in without the user even realizing that it is there.

However, it also poses significant threats to privacy, data management, and the prospect of machine ‘learning’ that leads to unwanted surveillance, racial profiling or discrimination. And it is hard to know the true state of affairs due to a lack of disclosure about companies’ AI activities.

This lack of information was one of the reasons why the Active Ownership team was only able to successfully conclude five of the ten cases in the engagement program that ran from 2019 to 2022. The other five cases were transferred to the team’s SDG Engagement theme to further engage these companies on their societal impact.

Aligning practices

“Through our engagement, we learned that companies are gradually aligning internal practices to principles of responsible AI,” says engagement specialist Daniëlle Essink. “Many companies have formalized AI principles that address topics like inclusion, fairness and transparency.”

“Additionally, companies are increasingly pursuing a collaborative approach by actively contributing to initiatives that aim to advance responsible governance and best practices. These types of initiatives play an important role in promoting trustworthy AI across the industry.”

“However, ethical principles on their own do not ensure the responsible development and deployment of AI. Businesses require robust governance mechanisms to effectively implement their principles.”

Lack of disclosure

A major stumbling block is the lack of disclosure about what companies are actually doing to address these concerns, along with how willing they are to engage in the first place. Much of the AI technology, and how it is implemented on different platforms, is still shrouded in secrecy.

“In our engagement, we observed that transparency around governance and implementation remained low, as most companies’ public disclosures lacked clarity about how such principles translate into practice, and which checks and balances are in place,” says engagement specialist Claire Ahlborn, who also worked on the program.

“After talking to the companies, we learned about the specifics of the implementation, which then gave us the confidence to close some of the objectives successfully. The engagement results of this theme are therefore highly correlated with companies’ willingness to set up constructive dialogues.”


Huge growth predicted

The International Data Corporation’s Worldwide Artificial Intelligence Software Forecast 2022 projects that the worldwide AI software market will grow at a compound annual growth rate of 18.6% between 2022 and 2026.
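
As a rough, back-of-the-envelope illustration of what that growth rate implies (arithmetic only, not a figure taken from the IDC report), compounding 18.6% a year over the four years from 2022 to 2026 comes close to doubling the market:

    # Back-of-the-envelope check of what an 18.6% CAGR implies.
    # Only the growth multiple matters here, so no base-year value is needed.
    cagr = 0.186
    years = 4  # 2022 -> 2026

    multiple = (1 + cagr) ** years
    print(f"Growth multiple over {years} years: {multiple:.2f}x")  # ~1.98x, close to doubling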

Yet the potential benefits come with risks that are not yet fully explored, let alone understood, Essink says. To realize the full potential of AI, companies need to manage the risks that come with developing and using the technology, including human rights-related risks.

“Given the speed at which AI is being developed, there is no doubt that in the next few decades, this technology will transform our economy and society in ways we cannot imagine,” Essink says.

Positive changes

“This type of growth represents massive opportunities for AI to contribute to positive changes, such as detecting patterns in environmental data, or improving the analysis of health information.”

“At the same time, AI could cause new problems or aggravate existing ones if companies do not have enough understanding of the risks associated with these technologies. For example, using AI algorithms for profiling can have discriminatory effects, such as credit rating algorithms disfavoring people from certain ethnic backgrounds, or those living in certain areas.”

“Similarly, AI can be used for surveillance – in public spaces but also in the workplace – putting the right to privacy at risk. This shows a growing need for the responsible governance of AI systems to ensure that such systems conform to ethical values, norms, and the growing number of AI regulations.”
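
The profiling risk Essink describes is often screened for with simple group-level statistics. Below is a minimal, illustrative Python sketch assuming hypothetical approval decisions and the commonly cited 80% (“four-fifths”) rule of thumb; the data, group labels and threshold are assumptions, not taken from the article.

    # Minimal sketch: compare a scoring model's approval rates across groups
    # and flag a potential disparate impact. Data and threshold are illustrative.
    from collections import defaultdict

    # Hypothetical model decisions: (group label, approved?)
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1

    rates = {g: approved / total for g, (approved, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())

    print(rates)
    if ratio < 0.8:  # commonly used "four-fifths" rule of thumb
        print(f"Potential disparate impact: approval-rate ratio {ratio:.2f} < 0.80")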

Upcoming regulation

Such regulatory moves and policy proposals have already been launched by governments, ethics committees, non-profit organizations, academics and the EU. In April 2021, the European Commission proposed the AI Act, which sets out clear requirements and obligations for developers, deployers and users regarding specific uses of AI.

The proposal identifies four regulatory categories based on the level of risk. Systems posing an unacceptable risk, such as social scoring by public authorities, would be banned outright. AI systems identified as high-risk, such as CV-scanning tools that rank job applicants, will be subject to strict obligations including enhanced risk management processes and human oversight. Limited-risk systems face lighter transparency obligations, while minimal-risk systems remain largely unregulated.
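
As a simplified illustration of how a company might encode this risk-based classification in an internal compliance check, the sketch below maps hypothetical use cases to the proposal’s four tiers; the example systems and shorthand obligations are assumptions for illustration, not text from the AI Act.

    # Illustrative mapping of example AI use cases to the AI Act proposal's four
    # risk tiers and the kind of obligation each tier implies (simplified).
    RISK_TIERS = {
        "unacceptable": {
            "examples": ["social scoring by public authorities"],
            "obligation": "prohibited",
        },
        "high": {
            "examples": ["CV-scanning tool that ranks job applicants"],
            "obligation": "risk management, documentation, human oversight",
        },
        "limited": {
            "examples": ["customer-service chatbot"],
            "obligation": "transparency: disclose that users interact with AI",
        },
        "minimal": {
            "examples": ["spam filter"],
            "obligation": "largely unregulated",
        },
    }

    def tier_for(system: str) -> str:
        """Return the first risk tier whose examples mention the given system."""
        for tier, info in RISK_TIERS.items():
            if any(system in example for example in info["examples"]):
                return tier
        return "minimal"  # default assumption for systems not listed

    print(tier_for("CV-scanning tool that ranks job applicants"))  # -> "high"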

“This growing legislative pressure around AI could pose serious regulatory risks for companies that are not well prepared to conform with the rising obligations,” says Ahlborn.

Aligning engagement with the SDGs

“Meanwhile, the alignment of AI technologies with ethical values and principles will be critical to promote and protect human rights in society. As a result, we will continue our engagement work with a selection of companies in the tech sector under our ‘Sustainable Development Goals (SDG) engagement’ theme.”

“These dialogues have a strong focus on human rights and societal impact, and highlight topics like misinformation, content moderation and stakeholder collaboration. We will focus on how companies can contribute to SDG 10 (Reduced inequalities) and SDG 16 (Peace, justice and strong institutions) by safeguarding human rights in the development and use of AI and promoting social, economic and political inclusion.”

Read the full Q4 Active Ownership report here

