Date: Thu, 30 May 2019
Time: 9.00am – 11.00am
Venue: Telenor Asia (1 Wallich St, Guoco Tower #28-01, S078881)
About the Forum:
Can the ‘perfect’ algorithm have a moral conscience?
The development of artificial intelligence (AI) is driven by the goal of optimising decision-making and improving quality of life, whether in a business or personal setting. In a society that increasingly prizes efficiency and depends on the convenience such technologies provide, ethical concerns can sometimes run contrary to these goals. Several AI tools have also been found to lack transparency and the ability to read emotions, offer empathy, or decipher social cues, making it all the more pertinent to evaluate the growing influence of these “black boxes”. As we continue to look towards a more AI-driven future, how do we ensure that we do not lose sight of the need to balance development and application with ‘humanity’ and ‘ethics’?
To this end, governments in Singapore and Europe have developed guidelines on AI development. The Infocomm Media Development Authority (IMDA) recently released a Model AI Governance Framework that includes provisions for human-centricity, transparency, fairness, and explainability. Within the EU, an expert group was established to develop guidelines on AI ethics, and 25 European countries signed the “Declaration of Cooperation on Artificial Intelligence”. In April, the High-Level Expert Group on AI published its Ethics Guidelines for Trustworthy AI and launched a pilot project to test the guidelines in the actual development and application of AI.
In this session, we will unpack the issues arising from the increasing deployment of AI, the cultural and ethical nuances of data collection for AI, and the frameworks in place for businesses and policy makers, as well as address:
- How do we define ethics, and how then do we measure and assess how ‘ethical’ an AI is?
- Who should get to define and enforce the ethical regulation of AI? Comparing Singapore and the EU, how have cultural differences influenced the conception of ethics in the AI governance frameworks that were released?
- How should responsibility and accountability be effectively apportioned among the various stakeholders?
- How much consent is explicitly provided, and are users aware of whether consent has been given or of the extent to which their data is being used?
- Can human instincts be developed or designed into AI?
Speakers:
- Ieva Martinkenaite – VP, AI and IoT Business Development, Telenor Group; Expert Member, High-Level Expert Group on AI, European Commission
- Lee Wan Sie – Director (Data Innovation Programme Office), Infocomm Media Development Authority of Singapore
- Jana Marlé-Zizková – Co-Founder and CEO, Meiro; Co-Founder, She Loves Data
- Moderated by Jeremy Tan – Director, CMS Holborn Asia
Photos from the event