UX Healthcare 2025 Discussion
By Amanda Ho-Lyn and Katie Buntic, on 25 June 2025
Earlier this year, Katie and I attended a relatively small conference, hosted by UCL no less, on User Experience Design in healthcare. The talks were varied, with several personal stories highlighting shortcomings in healthcare that stem from a lack of, or misaligned, consideration of users when designing products and services.
In this post, we’ll explore some of the themes and ideas in a conversational Q&A format.
Genomics England
AMANDA: So Katie, we both enjoyed the Genomics England presentation, which I think was one of the few large-scale projects mentioned. What was the least expected aspect they undertook, in your opinion?
KATIE: One of the most unexpected but really insightful things Genomics England did, I thought, was fully embedding the study into existing hospital workflows. For example, they deliberately avoided using ‘Generation Study’ stickers and just used the hospital’s normal ones. That might seem like a small detail, but it made a big difference in how easily hospitals could take it on. Instead of adding a visible ‘research layer’, which could feel like extra work for staff, they designed the study to blend in. Midwives are already overstretched; they don’t have the time or capacity to manage a whole new process. By keeping things familiar, they made it feel like part of the usual care. To make this work, they really got to know each hospital: they would tour the sites, understand how things run locally, and adapt to fit. They also had research midwives who collected samples once a day, and set up clearly labelled boxes for staff to drop samples into if they were taken overnight. Simple things, but thoughtful.
AMANDA: That’s so true, and I’m sure the staff appreciated being taken into consideration and accommodated rather than just being told what to do, regardless of how it might affect their schedules. It probably also made the adjustment period much shorter. The fact that they went to the effort of getting floorplans to plot out the logistics of handing over samples most efficiently both surprised and impressed me; I somehow didn’t expect such a huge endeavour to go to those lengths, though perhaps it was precisely because it was such a large undertaking that they could. I don’t think I’d heard of The Generation Study by name, but I did know about taking blood from the heel/umbilical cord and testing for disorders. Had you?
KATIE: Yes! I thought the same; there’s something really respectful about that approach. Instead of just parachuting in a big national study and expecting hospitals to adjust, they actually adapted to the people already doing the work. I think that’s probably why it landed so well. It’s such a good reminder that if you want something to be sustainable, it has to feel doable for the people delivering it. And yeah, the floorplans! That level of detail surprised me too, but it’s those unglamorous logistics that make or break these kinds of studies. I really liked how much thought they put into getting the practical side right. I didn’t know the name “Generation Study” either at first, but I had heard about the cord blood sampling; I always assumed it was part of standard screening, so it’s interesting to realise it was part of this bigger initiative.
AMANDA: Hopefully that team will be in charge of designing some more UK-wide healthcare initiatives; they did a great job.
Accessibility
AMANDA: We both appreciated Kardo Ayoub’s talk as well. I had no idea some people could lose vision in that way (hemianopia), did you? After that, it’s tempting to want to try to cater to everyone, but that’s not really feasible, is it? What do you think are the main takeaways that we can actually bear in mind and apply to our projects?
KATIE: Kardo’s talk really stayed with me too. I had no idea someone could lose vision in that particular way either! It was a reminder of how quickly things can change, and how easy it is to take accessibility for granted until it affects you or someone you know. It’s definitely tempting to want to design for everyone, especially after a talk like that… but you’re right, that’s not always realistic. I think the key takeaway for me was that accessibility doesn’t have to mean reinventing everything. It’s often about small, thoughtful decisions that make a big difference. For example, tagging images properly or checking how something sounds with a screen reader doesn’t take much extra time, but it can completely change someone’s experience. Kardo also made a really good point about prioritising practicality over aesthetics: beautiful design means nothing if people can’t use it. For example, a scroll that goes sideways may look cool but is completely impractical in most designs. Can you think of any other examples? So for our own projects, I think we can aim to build accessibility in from the start, and keep asking ‘Have I done everything I can to make this as accessible as possible?’. Also, if we’re unsure, we can ask one of UCL’s Accessibility Champions for advice! What do you think?
AMANDA: Oh yes! He also mentioned that another thing we can do is make sure we have dynamic resizing – responsive design. I could imagine some of the artsy parallax websites are probably not particularly accessibility-friendly – the ones where it’s basically one long page where elements move at different speeds as you scroll. Good idea to contact some of the Accessibility Champions; I believe Kimberly Meechan, Alessandro Felder and Jeremy Stein are all Champions from ARC.
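As a small aside on the ‘tagging images properly’ point: some accessibility checks are easy to automate. Here’s a minimal sketch in TypeScript (the function name and sample HTML are our own, and a real audit would use a proper HTML parser or a dedicated tool such as axe-core rather than a regex) showing the idea of flagging images that lack alt text:

```typescript
// Minimal sketch: flag <img> tags in an HTML snippet that have no alt attribute.
// Illustrative only - a regex is not a robust HTML parser.
function findImagesMissingAlt(html: string): string[] {
  const imgTags = html.match(/<img\b[^>]*>/gi) ?? [];
  return imgTags.filter((tag) => !/\balt\s*=/i.test(tag));
}

const page = `
  <img src="chart.png" alt="Blood sample collection rates by site">
  <img src="decorative-border.png">
`;

// Only the second image is flagged, since it has no alt attribute.
console.log(findImagesMissingAlt(page));
```

Note that even this toy check has nuance: purely decorative images should have an *empty* alt attribute (`alt=""`) rather than none at all, so a screen reader knows to skip them.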
AI in Healthcare
AMANDA: There were quite a few talks about the crossover of AI in healthcare, particularly regarding mental health chatbots. As you’re a Mental Health First Aider & Wellbeing Champion, do you think this is something we should use? Where’s the line for risk?
KATIE: That’s such an important question; there was such a theme around mental health and AI! AI definitely has potential in mental healthcare, where access to support can be patchy or delayed. Chatbots can offer a kind of immediate, 24/7 presence that is helpful for things like signposting, check-ins, or even just reducing the barrier to talking about how you are feeling. For someone who is struggling but not ready to talk to a human, it might be the first step toward seeking help. But as a Mental Health First Aider and Wellbeing Champion, I think we have to be really careful not to overpromise what AI can (or should) do. It is not therapy and definitely not crisis care. And it definitely shouldn’t replace human connection, which I think makes the biggest difference. One of the problems is that AI does not hold responsibility. If an AI tool is giving advice, making interpretations, or engaging with vulnerable users, there need to be clear safeguards. Can it pick up on body language? Can it read between the lines? What happens if someone discloses suicidal thoughts? Does the tool escalate? Who would be accountable? So yeah, I cautiously think there is space for it – but only as part of a wider, human-led support system.
AMANDA: All very good points. I agree, it does seem like it could help fill in the gaps at the lower level of support, where someone just needs to vent or get their thoughts out initially, but it needs some sort of system to loop in a real human professional if things like suicidal thoughts or self-harm come up. I do also think that even if a real human is looped in, the patient (for lack of a better term) might not really distinguish between the bot and the human if it’s through a chat, and I’m not sure if that’s a good thing or not. What do you think? Something else that occurred to me during those talks – and this might be a tad melodramatic – is how this might affect our already declining population. If people get more used to talking to bots, which tend to be sycophantic and can even be customised, than to having relationships with real people, which often involve conflict and disagreements, then I can imagine a lot of people not forming deep, meaningful relationships with others, accelerating the population decline – because why endure something challenging if you can have your own hype-man dopamine echo chamber? But also – whilst we say they aren’t very good at picking up on the nuances of being a human now, do you think when we start putting them into robots and the technology progresses, they could actually end up replacing therapists, and maybe even some doctors? On a slightly lighter note, I think the ability of some of these chatbots to monitor and collate symptoms to come up with a diagnosis is probably quite helpful, since humans often forget things whereas an AI can remember everything. Of course this would still need to be checked by a professional, but I think it’s a bit like how AI is quite good at identifying cancerous growths from scans and stuff, you know?
KATIE: Yeah, I’ve thought about that too… it’s a bit of a grey area. If someone’s struggling and gets support through a chat, they might not care whether it’s a bot or a human as long as it feels helpful. But does that blur the lines too much? Should we be clearer about who or what people are talking to? I do wonder what impact that has on trust long term. And if people start turning to bots because they’re more agreeable or less emotionally demanding… is that helping, or just avoiding the harder, real-life stuff? I love the phrase ‘hype-man dopamine echo chamber’ – yeah, it makes me think about how people might start avoiding difficult conversations entirely. If all we’re left with are shallow, curated interactions, won’t that just lead to more loneliness and disconnection? We’re already struggling with that as a society. In terms of AI replacing therapists or doctors – I think it could definitely support them, especially with remembering things, spotting patterns, or flagging concerns early. But therapy isn’t just about giving information; it’s about connection and presence. Can an AI ever really offer that? And even if it could… would we want it to? And yeah, the tracking and diagnostic side is probably where AI shines most right now. It doesn’t forget details like we do, especially in stressful moments. That could be so useful. But who’s checking its conclusions? And how do we make sure people don’t just accept what it says as gospel, especially if they’re in a vulnerable state?
AMANDA: I suppose all we can do for now is watch this space and see how things unfold…
So concludes our discussion. If you’d like to discuss further, feel free to comment or contact us!