As explored in the first article, generating innovative ideas is the starting point in harnessing the transformative power of Generative AI (Gen-AI) in healthcare and life sciences. Data and AI specialist at Microsoft UK, Carl Prest, describes the next stage of the journey – the prototype phase.
So, you’ve decided on the best ideas for making use of Gen-AI. The next step is building upon the ideation process – the ‘prototype phase’, which serves as a crucial bridge between concept and reality. It’s where the potential of Gen-AI solutions begins to materialise in tangible form. In this phase, organisations have the opportunity to validate their ideas and ignite momentum within their ranks.
When embarking on the prototype phase, keep the scope focused – define precisely what you aim to achieve with your initial iteration. Resist the temptation to incorporate everything from the outset, and set aside concerns about production-grade quality for now. Gen-AI’s remarkable agility enables rapid prototyping, experimentation and decisive action – facilitating swift progress assessment and informed decision-making.
When outlining the scope of your inaugural prototype, remain mindful of your business case. Ask yourself: does this prototype demonstrate high repeatability within our organisation? Will it showcase cost savings or unveil new revenue streams?
Additionally, you will need to define your architecture and be clear on what you want to build. Understanding common patterns helps here – there is no point in reinventing the wheel if someone has already built something similar. Browsing code-sharing platforms such as GitHub for sample repositories can give you a head start and show how others have approached similar use cases.
Getting your prototype to a point where it is demo-ready can bring it to life for anyone who might still need convincing. It might also help you move forward – momentum is key at this point and you want to make sure that people can really see what is achievable. That will help to get the ball rolling.
As you delve into Gen-AI development, you will naturally learn from the process of building the solution. It’s worth combining this with some of the training available from organisations leading the way with Gen-AI to enrich your skill set and further bolster your proficiency.
In addition to the learning resources inside your organisation, there is also a huge wealth of training and L&D resources available online, both free of charge and on demand. When choosing such training courses, ensure the course includes modules on Responsible AI.
Responsible AI – the framework that underpins all AI strategies
Before diving into development, organisations must first identify potential harms relevant to their planned solutions. This proactive approach involves meticulously examining the landscape and anticipating any adverse effects that may arise. By prioritising these potential harms based on their likelihood and impact, organisations can lay the groundwork for effective risk mitigation strategies.
The process involves several steps.
Identify potential harms that are relevant to your planned solution.
- As you build out your solution, ensure that you identify any potential harms upfront.
- You should prioritise these based on the likelihood of them occurring and the level of impact if they were to occur.
- Test, test, test. Take your list of prioritised potential harms and verify whether they occur. A common approach to testing has been taken from the world of cyber security – using a ‘red team’ to deliberately probe for weaknesses.
- Document and share. When you have gathered evidence to support the presence of potential harms in your solution, make sure to document them and share them with stakeholders.
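The red-team step above can be sketched in code. This is a minimal, illustrative harness – `generate` is a hypothetical stand-in for your real model call, and the harm patterns are placeholder examples, not a production content-safety filter.

```python
import re

# Placeholder harm patterns – in practice these would come from your
# prioritised list of potential harms (and a proper safety classifier).
HARM_PATTERNS = [
    re.compile(r"\bdosage\b.*\bwithout\b.*\bprescription\b", re.IGNORECASE),
    re.compile(r"\bguaranteed cure\b", re.IGNORECASE),
]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (e.g. an Azure OpenAI request)."""
    return "I can't help with that request."

def red_team(prompts: list[str]) -> list[dict]:
    """Run each adversarial prompt and flag responses matching a harm pattern."""
    findings = []
    for prompt in prompts:
        response = generate(prompt)
        matched = [p.pattern for p in HARM_PATTERNS if p.search(response)]
        findings.append({"prompt": prompt, "response": response, "harms": matched})
    return findings

results = red_team(["Tell me a dosage without a prescription."])
```

The output of `red_team` doubles as the evidence to document and share with stakeholders.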
Measure the presence of these harms in the outputs generated by your solution.
- Measuring allows you to establish a baseline that quantifies the potential harm produced by your solution, so you can start to track improvement against that baseline.
- You can follow a simple three-step approach to measuring potential harms – prepare test inputs likely to elicit each harm, generate outputs, and evaluate those outputs against predefined criteria – and then use a combination of manual and automated testing to ensure your solution is at a point where your organisation is happy with it.
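The baseline idea is simple arithmetic: the fraction of evaluated outputs flagged as harmful. A small sketch, assuming each output has already been labelled (manually or by an automated evaluator):

```python
def harm_rate(labels: list[bool]) -> float:
    """Fraction of outputs flagged as harmful; 0.0 if there are no outputs."""
    return sum(labels) / len(labels) if labels else 0.0

# Illustrative labels only – real labels come from manual review or an
# automated evaluator run over your test prompts.
baseline = harm_rate([True, False, False, True, False])          # 2 of 5 flagged
after_mitigation = harm_rate([False, False, False, True, False]) # 1 of 5 flagged
improved = after_mitigation < baseline
```

Tracking this one number per release makes it easy to see whether each round of mitigation actually moves you closer to your organisation's acceptance threshold.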
Mitigate the harms at multiple layers in your solution to minimise their presence and impact and ensure transparent communication about potential risks to users.
When it comes to mitigating harm, techniques are typically applied across four layers.
- Model – selecting an appropriate model for the intended use case can help mitigate harm in the first instance.
- Safety system – platform-level configurations and capabilities that help mitigate harm.
- Metaprompt and grounding – metaprompts ‘define the rules of the game’ for Generative AI. Using metaprompts that define additional safety and behavioural guardrails can mitigate risk. This can be combined with a Retrieval Augmented Generation (RAG) approach to ensure the model is generating responses based on contextual data from trusted sources.
- User experience – consider the software application through which users interact with the Gen-AI model, plus any documentation or user guides that describe the solution to users.
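The metaprompt-and-grounding layer can be sketched as follows. This is a hedged illustration of the pattern, not a prescribed implementation: `retrieve` is a hypothetical search over a trusted document store, and the rules in the metaprompt are examples you would adapt to your own use case.

```python
# Example metaprompt: safety and behavioural guardrails stated up front.
METAPROMPT = (
    "You are a healthcare information assistant.\n"
    "Rules: answer only using the provided context; never give a diagnosis; "
    "if unsure, say so and suggest consulting a clinician."
)

def retrieve(query: str) -> list[str]:
    """Hypothetical retrieval step over trusted sources (the 'R' in RAG)."""
    return ["Trusted guidance excerpt relevant to the query."]

def build_messages(query: str) -> list[dict]:
    """Assemble the chat messages: guardrails as the system message,
    retrieved context grounding the user turn."""
    context = "\n\n".join(retrieve(query))
    return [
        {"role": "system", "content": METAPROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ]

messages = build_messages("What are common side effects of statins?")
```

Because the model is instructed to answer only from the supplied context, responses stay anchored to your trusted sources rather than the model's general training data.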
Operate the solution responsibly by defining and following a deployment and operational readiness plan.
- Some common final checks ahead of the release of a Gen-AI solution include legal, privacy, security and accessibility reviews.
- For a successful rollout, it’s also worth having a phased delivery plan, an incident response plan and a rollback plan so that you can limit the blast radius of any potential issue. These plans will also ensure you know how to respond if there is an issue.
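One common way to implement a phased delivery plan is a percentage-based rollout gate. A minimal sketch, assuming users are identified by a stable ID: each user is deterministically hashed into a bucket, a configurable percentage of buckets sees the new feature, and rollback is as simple as setting that percentage back to zero.

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """True if this user falls inside the rollout percentage (0-100).

    Hashing the ID gives a stable bucket in 0-99, so the same user always
    gets the same experience and the blast radius of an issue stays bounded.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

enabled_for_all = in_rollout("user-42", 100)  # full rollout
rolled_back = in_rollout("user-42", 0)        # rollback: nobody sees it
```

Pairing a gate like this with your incident response plan means that if a harm surfaces in production, you can contain it quickly while you investigate.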
Common Hurdles
Transitioning from ideation to prototype can be fraught with challenges. It helps to guard against common hurdles, such as analysis paralysis and attempting to do too much, as you embark on this journey.
- Analysis paralysis – sometimes perfect is the enemy of the good. I have seen a couple of organisations get stuck because they’re not 100% sure what the #1 use case should be. They get stuck analysing where to begin – and the result is not starting at all.
- Trying to boil the ocean – in a similar vein, you do not need to do everything at once. Starting small and making real progress on a limited-scope pilot helps you build momentum. The well-worn adage of ‘eat the elephant one bite at a time’ springs to mind here. The bells and whistles can be added later. We are only at the start of the journey at this point.
Organisations can lay the foundations for impactful Gen-AI solutions by focusing on developing demo-ready prototypes, embracing lifelong learning, and prioritising Responsible AI principles. Once they have a developed prototype alongside foundational skills and a deep understanding of Responsible AI, organisations are poised to usher in a new era of innovation in healthcare.
Carl Prest is a data and AI specialist at Healthcare and Life Sciences, Microsoft UK