How can startups create AI responsibly?

The AI hype cycle has moved faster than any other in human history. From the introduction of ChatGPT and the proliferation of AI applications over the last year to investor reticence over the cost of GPU-heavy workloads, the pace has been unprecedented. At the same time, discussions on ethics have led to agreements like the EU’s AI Act and the Bletchley Declaration.


But what is ‘responsible AI’? How can developers ensure that the design, development, deployment and ongoing evolution of their AI apps are handled responsibly? These questions pose something of a rabbit hole for most startups and founders, who may even begin to question what the very idea of ‘responsibility’ means!

This isn’t simply a philosophical question: the EU’s AI Act can already impose penalties of up to €35m for the most serious violations. Other regions are sure to follow, so startups should be considering this from the outset. Thankfully, several industry heavyweights have already made their views clear on what ‘responsible AI’ should look like.

How to create responsible AI in practice

There’s plenty of guidance to follow when it comes to responsible AI. Three of the key institutions in this area are the World Economic Forum (WEF), the UK’s Alan Turing Institute, and the US’s National Institute of Standards and Technology (NIST) – all of which broadly agree on the most important points. There is even an ISO standard, ISO/IEC 42001, published in 2023, which specifies requirements that AI providers and users should follow to ensure the responsible development and use of AI systems. For clarity, our recommendations follow the Alan Turing Institute’s easy ‘FAST’ framework (Fair, Accountable, Sustainable and Transparent) and draw on content from all three organisations.

Fair

Startups and scaleups should always ensure that their AI systems are fair – that is, they should avoid both human bias and unfair harm. From a process perspective, this means:

  • Making sure that training datasets are as free of bias and discrimination as possible – a simple automated check is sketched after this list.
  • Checking that training data contains ethical content that has been obtained properly – for example, respecting copyright laws.
  • Having processes in place to look for and manage implicit human bias, such as using diverse ‘red teams’ who look for problems with systems.
  • Ensuring that AI systems always respect human autonomy, privacy and dignity, from both a data-management and an application perspective. This includes not only safeguarding data but also not creating apps that support weapons development.
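
Bias checks don’t need heavyweight tooling to get started. Below is a minimal sketch of the kind of pre-training dataset check mentioned above, assuming a pandas DataFrame with hypothetical ‘gender’ and ‘label’ columns – the column names, toy data and review threshold are illustrative, not prescribed by any of the frameworks:

```python
# A minimal sketch of a pre-training bias check. The DataFrame, the
# "gender" sensitive attribute and the "label" outcome column are
# hypothetical placeholders for your own data.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, sensitive: str, label: str) -> float:
    """Gap between the highest and lowest positive-outcome rates
    across groups; 0.0 means outcomes are perfectly balanced."""
    rates = df.groupby(sensitive)[label].mean()
    return float(rates.max() - rates.min())

# Toy data: positive outcomes are skewed towards one group.
df = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m"],
    "label":  [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(df, "gender", "label")
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 here – large enough to review
```

A check like this won’t catch every problem, but running it automatically on every new training set makes skewed data visible before a model ever sees it.
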
Accountable

As well as being fair and minimising harm, AI systems must be auditable so that they can be trusted. The three organisations have the following advice for startups:

  • Build audit capability into AI systems: ensure that code is explainable, that datasets are retraceable, and that human teams can answer questions about how systems work.
  • Enable user feedback and ensure that any non-human interactions are disclosed so that testers and users know when they’re talking to AI! A sketch of both ideas follows this list.
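
Even a lightweight audit trail goes a long way here. The sketch below assumes a generate() stub in place of a real model call; the record fields and the disclosure wording are illustrative assumptions rather than anything the frameworks mandate:

```python
# A minimal sketch of an audit record plus AI disclosure. generate()
# is a stub standing in for a real model call; field names and the
# disclosure wording are illustrative assumptions.
import hashlib
import json
import time

AI_DISCLOSURE = "You are talking to an AI assistant, not a human."

def audit_record(model_version: str, dataset_hash: str,
                 prompt: str, response: str) -> dict:
    """Link an output back to the model version and training-data
    snapshot that produced it, so the interaction can be retraced."""
    return {
        "timestamp": time.time(),
        "model_version": model_version,
        "dataset_hash": dataset_hash,  # e.g. a hash of the training-data manifest
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }

def generate(prompt: str) -> str:
    return "stub response"  # replace with your actual model call

prompt = "Summarise our refund policy."
print(AI_DISCLOSURE)  # disclose the non-human interaction up front
response = generate(prompt)
print(json.dumps(audit_record("demo-model-1.0", "sha256:abc123",
                              prompt, response), indent=2))
```

Hashing prompts and responses rather than logging them raw, and storing the records append-only, keeps the trail auditable without hoarding user data.
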
Transparency and more

There are a few other challenges that the global organisations have highlighted for growing companies, from ensuring transparency to engaging with broader social issues. These include:

  • Being able to explain why AI models perform as they do, and what goes into a model to start with – see the explainability sketch after this list.
  • Taking part in broader AI initiatives to help the public understand and use AI responsibly.
  • Taking part in international initiatives that encourage collaboration.
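
Explainability is a large research field in its own right, but a common model-agnostic starting point is permutation importance. This sketch uses scikit-learn with a toy dataset and model, both of which are illustrative stand-ins for your own:

```python
# A minimal sketch of model-agnostic explainability via permutation
# importance; the dataset and model here are illustrative toys.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# big drops identify the features the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```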

As AI gains pace and systems become increasingly sophisticated, it’s more important than ever that we’re mindful of creating responsible AI systems that embody the principles of fairness, accountability, sustainability and transparency. This can seem far from straightforward, but by embracing the simple FAST principles, startup leaders can make sure that they’re on the right path to better, fairer and ultimately more responsible AI.


To read the reports in depth, you can download the NIST AI Risk Management Framework, the WEF’s Presidio Recommendations, the Alan Turing Institute’s guide to understanding AI ethics and safety, and the ISO/IEC 42001 standard.



If you’re not an OVHcloud Startup Program member and would like to get help building your responsible AI solution, you can apply to join the program here to benefit from an accompanied journey onto our cloud with up to €100k in free cloud credits and free tech consultation.


Start-Up Program Manager

UK & North Europe Cluster

OVHcloud