8 Feb 2022

Artificial intelligence (AI) has become ubiquitous in almost every part of our lives, and society faces a challenge to keep pace with its advancements in the face of the Fourth Industrial Revolution.

The risks of AI in society

AI holds exciting potential for many aspects of our lives, from improving the safety and efficiency of cities and guiding autonomous vehicles, to understanding consumer behaviour in-store and supporting public health measures.

But with this potential comes risk. That’s why more organisations are taking a hard look at how AI might worsen societal inequality and biases, and how to combat this by developing ethical and responsible AI. The Hanwha Group wants to elevate the quality of life with its innovations and solutions. AI plays a key role in this, as long as it’s developed and used responsibly.

Smarter, responsible, and scalable AI

Building integrity into an organisation's AI strategy and ensuring all products that use AI do so ethically

Recently, Gartner identified ‘smarter, responsible and scalable AI’ as the number one market trend for 2021, a trend that will need to continue in 2022, with public trust remaining at significantly low levels.

Almost two-thirds of people are inclined to distrust organisations. The key to countering this is building integrity into an organisation's AI strategy and ensuring all products that use AI do so ethically and responsibly. It’s equally important that any partner or vendor an organisation aligns itself with shares the same values and the same sense of responsibility to do the right thing.

Collaborating to build integrity

Edelman CEO Richard Edelman recommends breaking the current cycle of distrust by uniting people on common-ground issues and making clear progress on areas of concern. Additionally, he advises institutions to provide factual information that doesn’t rely on outrage, fear, or clickbait, and that instead informs and educates on major societal issues.

Therein lies a clear opportunity to build greater integrity into AI. Trust relies on everyone being on the same page, with the same access to the facts, and an ability to relay their thoughts and feedback to product creators and business leaders.

Communication about new AI solution

In practice, that means communicating the use and benefits of AI to stakeholders, including customers, partners, investors, and employees. However, research has shown that people perceive the ‘threat’ of AI differently based on factors like their age, gender, and prior subject knowledge. The same study found that there’s a huge gap between laypeople’s perception and reality when it comes to AI, with many AI applications (like crime prevention and AI art) still requiring significant explanation.

Therefore, when communicating any new AI solution, it’s worth considering the different knowledge levels that need to be accommodated. Better still, look at the differing priorities, pain points, and concerns of each audience group and tailor your message to this. This ensures everyone is coming to the table with the same basic level of knowledge about an AI use case.

Acting on values

Values are front and centre for all organisations. Yet, it’s one thing to have a set of values, and another to proactively act on those values with every business decision. Planning is the secret to being a values-driven and responsible organisation.

Having strategies and tools in place will help uphold corporate values in the moment, no matter the urgency

Having strategies and tools in place will help uphold corporate values in the moment, no matter the urgency. Hanwha Techwin's values of “Challenge, Dedication, and Integrity” and spirit of “Trust and Loyalty” are built into everything it does, from its strategy to its products to its innovations.

Integrity at Hanwha means that it sticks to principles, is impartial, and takes pride in doing so. In practice, it means that Hanwha follows through on its promises to customers and partners, that it doesn’t take shortcuts with product development, that its performance and quality remain consistent, and that every employee understands Hanwha's values and expectations.

Corporate governance

Educating leadership and the board about the potential and risks of AI is core to demonstrating top-down ownership and ethical leadership. It ensures that AI is overseen from the very top and that this direction is followed at every level of the organisation.

Some of the areas that senior leadership can steer on include:

  • Communicating clearly to all stakeholders about how data is collected, secured, and used.
  • Ensuring transparency around an AI’s decision-making process, with human oversight.
  • Ensuring AI systems are used ethically, are free from bias, and that protected attributes aren’t being used.
  • Securing data and AI systems against malicious actors.

Continuing to lead

At Hanwha Techwin, integrity is in its DNA. Without trust in its products, Hanwha cannot continue to innovate and find solutions for the world’s challenges. AI itself will be hindered without widespread understanding and acceptance. That’s why integrity is as vital to AI as the data used to train it.