Most tech leaders agree that a technology business without an AI strategy is fast becoming a ship without a rudder. Once a strategy is in place, however, it is the task of senior leadership to ensure that responsible AI is integral to its design. Convincing enterprise stakeholders of the importance of an ethically designed AI strategy should not be difficult when presented with the alternative.
“No business wants to make headlines because they have used or misused customer or employee information in a way that causes harm,” says GlobalData enterprise practice director Rena Bhattacharya. And reputational damage is not the only potential outcome of poorly implemented AI, adds Bhattacharya, noting that in certain industries or regions, hefty fines or legal action could ensue.
There are numerous ways this could happen, from failing to adequately safeguard personal information or to incorporate guardrails that prevent the distribution of misinformation, to using AI in a manner that is biased against, or withholds privileges from, a particular demographic group, explains Bhattacharya.
Five globally agreed principles of responsible AI
The principles governing responsible AI are broadly: fairness, explainability, accountability, robustness, security, and privacy. But assurances to customers, employees, partners, and suppliers that an organisation is implementing these appropriately are hard to substantiate, according to Bhattacharya.
While it is early days for global AI standards, there are best practices related to security, data governance, transparency, use cases, model management, and auditing that organisations can strive to implement. “But the concept of responsible AI is still fluid,” adds Bhattacharya.
“Companies should look towards their corporate ethics policies for guidance and pull together multidisciplinary teams that track and review not only how AI is being used across the organisation, but also understand the data sources informing the models,” says Bhattacharya.
Susan Taylor Martin is CEO of the British Standards Institution (BSI) – the UK’s national standards body – and has worked with the UK Department for Science, Innovation and Technology on the creation of a self-assessment tool that aims to help organisations assess and implement responsible AI management systems and processes.
Taylor Martin says that there is an onus on leaders to show how they are taking steps to manage AI responsibly within their companies. “The way forward on the safe use of AI may be unclear at this stage, but we are not powerless to influence the direction of travel, or to learn lessons from the past,” she says.
However, this is far from straightforward: companies face a patchwork of global regulation, with over 70 pieces of legislation under review and different jurisdictions taking very different approaches.
In 2023, the UK government published a white paper outlining its pro-innovation approach to AI. In February 2024, Rishi Sunak’s Conservative government issued guidance that built on the white paper’s five pro-innovation regulatory principles. To date, there is no statutory regulation of AI in the UK. The Labour government’s AI Blueprint, published on 13 January, referenced regulation only with: “the government will set out its approach on AI regulation.”
Europe leads on AI regulation, with its EU AI Act entering into force in August 2024 and potential fines of up to €35m or up to 7% of global annual revenue for companies in breach of its risk-based rules.
China aims to become the world’s AI leader by 2030, with top-down edicts for AI governance. According to GlobalData TS Lombard macroeconomic research, the Chinese government has shown a surprising emphasis on promoting AI innovation and a willingness to tolerate some experimentation, albeit within CCP-defined and -controlled areas.
In the US, President-elect Trump made an election manifesto promise to advance AI development with light-touch regulation. This follows outgoing President Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of AI, signed in October 2023, which saw the establishment of a US AI Safety Institute.
While AI is by its very nature “boundaryless,” there are practical “no regrets” actions that organisations can take in the short term, according to Taylor Martin.
There are myriad training opportunities available, from instructor-led training to on-demand e-learning courses, such as the ones the BSI offers. While global regulation takes shape, the BSI’s AI management system standard (BS ISO/IEC 42001), released a year ago, is already being used by organisations including KPMG Australia. “Becoming certified will help organisations be prepared for the regulation coming down the track,” says Taylor Martin.
Other AI risk management standards have been under development since 2017 by the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC). And to Taylor Martin’s point about preparation for more formal regulation, the EU AI Act draws heavily on ISO’s risk management standards and ISO/IEC’s AI terminology standards.
Aside from formal standards, many companies are devising their own internal risk-based frameworks. Bosch UK & Ireland’s Steffen Hoffmann says any regulatory standards will first require a more common understanding of AI’s potential risks and effects. As a company, Bosch acknowledges the risks but also that these are manageable. The company has devised its own AI Code of Ethics based on best practices as a starting point, the central premise of which is that humans should be the ultimate arbiter of any AI-based decisions. “With innovation comes social responsibility, that is basically in the DNA of Bosch coming from our founder,” says Hoffmann.
French telecoms giant Orange is another large corporation prioritising responsible AI. “Transparency and accountability are key – our algorithms must allow users to understand how decisions are made, what data is being used, and whether any biases are present in these systems,” says Orange Business international CEO Kristof Symons.
Symons says this goes even further to include educating businesses about the biases that AI can introduce, explaining how their data is being processed, and offering clarity on where and how that data is stored. The company launched a responsible GenAI solution, Live Intelligence, for customers wishing to harness the benefits of GenAI without compromising data security.
Orange has also created a Data and AI Ethics Council comprising 11 independent experts to advise and ensure that “humans remain central to technology and its benefits, and it’s crucial we stay in control,” says Symons, who considers responsible practices business critical, as they directly impact an organisation’s reputation and long-term success.
Sally Epstein, chief innovation officer at Cambridge Consultants, the deep tech arm of IT services company Capgemini, agrees that companies deploying best practice guidelines might find themselves ahead of the competition. Epstein warns against ignoring the principles of responsible AI, drawing a parallel with the development of genetically modified seeds.
“The science behind GM foods was fundamentally good, but public lack of confidence held the technology back. Decades on, we are still feeling the consequences of this, a technology that has the potential to reduce the amount of pesticides, herbicides, and fertilisers,” says Epstein.
As in the case of GM foods, Epstein puts user trust above all. “If people don’t trust AI, they won’t use it, no matter how advanced or beneficial it might be,” she adds.
Trust begins with purpose-driven applications, says Ulf Persson, CEO of AI software company ABBYY. “Tailored AI solutions that address real business challenges while ensuring measurable outcomes not only boost efficiency but also mitigate risks associated with human error or bias, which is critical in regulated industries like healthcare and logistics,” says Persson. ABBYY’s own AI-driven intelligent document processing has helped some customers deliver goods to market 95% faster and created efficiencies that continue to build trust.
Transparency is key to building trust in AI, says Nathan Marlor, head of data and AI at software services company Version 1. He recommends prioritising explainability in AI solution design. “The technical complexity of many AI models, especially those driven by neural networks, often limits their inherent explainability, making it challenging for users to understand how decisions are made,” says Marlor.
“AI is being used in some high-stakes scenarios that can have life-changing consequences – making understanding why an AI made a particular decision essential. People are more likely to trust AI if they understand how it arrives at decisions,” he adds.
To address this, tools such as counterfactual explanations, LIME (Local Interpretable Model-agnostic Explanations), and SHAP (SHapley Additive exPlanations) can be leveraged to shed light on these “black-box” systems. “Whether through LIME’s local approximations, SHAP’s comprehensive global insights, or counterfactuals offering actionable feedback, explainable AI is key to building AI systems that we can truly understand and rely on when it comes to decision-making processes,” says Marlor.
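To make this concrete, the sketch below shows one way SHAP can surface feature-level explanations for a tree-based model. It is an illustrative example only, assuming the open-source shap and scikit-learn Python packages; the diabetes dataset and random forest used here are placeholders chosen for brevity, not a reference to any system built by Version 1 or described by Marlor.

```python
# Illustrative sketch: local and global SHAP explanations for a tree model.
# Assumes the open-source `shap` and `scikit-learn` packages are installed;
# the dataset and model are placeholders chosen purely for brevity.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles:
# each value is one feature's contribution to one individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Local explanation: which features pushed the first test prediction up or down.
print(dict(zip(X_test.columns, shap_values[0].round(2))))

# Global view: mean impact of each feature across the whole test set.
shap.summary_plot(shap_values, X_test)
```

A LIME-based explanation of the same model would follow a similar pattern, fitting a simple interpretable surrogate around an individual prediction (for tabular data, via lime.lime_tabular.LimeTabularExplainer), while counterfactual tools instead ask what minimal change to the inputs would have flipped the outcome.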
AI hallucinations further risk undermining trust in AI. The antidote, says Marlor, is education. “Organisations should prioritise user education by clearly communicating the limitations of AI, offering guidance on validating outputs, and implementing mechanisms to identify or correct inaccuracies,” he adds.
The concept of responsible AI is still fluid, and it will likely change over time and vary by region, making it difficult to hold businesses accountable to the same standards globally. Indeed, a consensus on the definition of responsible AI may still be some way off. But businesses must still act now to ensure future compliance.