The priority for AI implementation should be trust, not regulation – UnionBank data science, AI expert – Manila Bulletin

Dr David Hardoon, UnionBank's senior advisor for data and artificial intelligence, was recently invited to share his views on data science and artificial intelligence (AI) at a best-practices forum of the EFMA community on sustainability and regulation.

The data and AI expert believes that "regulating AI is not the right goal," arguing instead that the goal must be safety and equality first, promoting trust in the technology, with safety nets in place to mitigate the associated risks.

Focus on AI

“Data, and to some extent AI as a mechanism and tool that manifests possibilities from data, is an onion,” said Hardoon. “And what you find with this onion is that it’s not just about data. It’s not just about application. It’s not just about consumer engagement. It is also a question of history. It is also about our understanding of our own current behavior. It essentially opens up a huge view that we potentially ignored before.”

Dr Hardoon explained that AI breaks down into at least three buckets. First, there is the data, which can be historically good or bad, since it is a true representation of problems or errors that have happened in the past or could occur again in the future. Then there is the AI system itself, in particular the approach to extracting information from the available data. And finally, there is the operationalization of the information that comes out.

“When considering operationalizing AI governance, it’s imperative to have a broad appreciation of the risk that comes from your available historical data – the potential drawbacks, or the mistakes, or the issues, or the things that may lead to a lack of confidence that can stem from this,” said Dr Hardoon.

He stressed that the most important thing in operationalizing AI in an organization is trust. Dr Hardoon compared it to how individuals trust their closest friends and family members.

“Our confidence in them isn’t that they’re always right or even always telling the truth, but it’s in their ability to say, ‘I’m sorry, I made a mistake. Let me correct myself.’ This is exactly the same principle we need to hold ourselves accountable to when we apply new technology: making sure that we put safety nets in place, making sure that we can validate what we’re doing, and making sure that we’re doing the right thing,” he said.

Dr Hardoon said that part of the “peeling” process, especially with a new set of technologies, is to always keep humans in the loop.

“Not that humans are better, but we trust humans a little more for now, until we get to the point where we realize it’s good. Or maybe in some areas we just have to accept that AI should never play a role, because we want to retain the capacity to continuously intervene in terms of results,” concluded Dr Hardoon.


