Britta Evans-Fenton

Accelerating Accessibility in React Native with AI: Meeting the EAA Deadline

Transcript

Britta

So according to the World Health Organization, approximately 90 million people in the European region alone have some form of visual impairment. Let that sink in. That is more than the population of many entire countries. And with the aging European population, this number is expected to rise in the coming decades. But this isn't just a statistic for me. My grandmother lived with macular degeneration. Her vision slowly declined, similar to the representation being shown here, which meant that she slowly lost her ability to read, recognize faces, and navigate the physical and digital world independently. Right. And I carry the gene associated with this condition, which means when I advocate for digital accessibility, I'm not just thinking about some abstract user or the 90 million in Europe. I'm thinking about my late grandmother and I'm thinking about my own future. And there's really no better time than now to get these things fixed.

Can I get a quick, just a single clap if you are aware of the European Accessibility Act, or EAA? Okay, that's more than I thought. That's good. So for those of you who didn't clap, just a heads up: in less than three months from today, so June 28th, 2025, organizations with a European presence must comply with the EAA. I am going to talk more about this later in my talk, but know that this is not an arbitrary deadline. It represents a critical turning point where digital services can either become part of the solution or will remain part of the problem. And don't worry, with AI and the prevalent use of large language models in our lives today, meeting the EAA requirements is no longer an impossible challenge, even for those who haven't started yet. But the trick is really recognizing when you can rely on AI and when you need to question its output.

So hi, I'm Britta. I am a senior software developer at Shopify working on the point-of-sale application. And AI has really changed how I do my daily tasks and my workflows, really transforming how I approach challenges and solve problems. I'm going to talk a little bit about how I do that. Today I'm going to show you not just how to leverage AI for screen reader accessibility in React Native, but how to recognize when it's leading you astray. So again, we're going to do the single clap thing. Who here uses AI or LLMs on a regular basis for their development process? Yeah, I am so fascinated. A year ago, that would have been far fewer people. It's amazing.

So similar to how I handle working with an LLM, I'm going to start with a bit of context first. I think that's a really good way to start. It just comes down to speaking the same language and making the same assumptions about expectations. Accessibility. So the EAA and accessibility aren't just about screen reader use. While vital to many, the EAA addresses a much broader range of needs, including cognitive, physical, and sensory disabilities, across multiple types of barriers. Screen readers are an area where there's a lot of misinformation, and a lot of common ground across mobile applications. And since I have a personal connection there, I do tend to focus there, and that's what I'll be focusing on in this talk as well. Blindness exists on a spectrum. In fact, 93% of people who are considered legally blind do have some form of vision. And this is why solutions like high contrast, scalable text, and customized interfaces matter just as much as screen reader capability. Brief note on language: disabled. Disabled is not a bad word.
A couple of years ago there was a bit of a trend around avoiding it. But I will say that it is people-first language that is seen as the most appropriate or acceptable when addressing a person or the community as a whole. That being said, there are some exceptions to people-first language, like identity-first language; the autistic community is a really good example of this. And the only word I would really stay away from is handicap or handicapped. This is just seen as outdated now and unacceptable.

So, as promised, the EAA. It isn't just another regulation. It represents a fundamental shift in how we talk about digital inclusion across Europe. It aims to ensure that millions of Europeans with disabilities can fully participate in an increasingly digital society. The EAA was adopted in 2019 and it has a compliance deadline of June 28th, 2025. What's critical to understand is that the EAA is actually a directive. It means that there's a framework in place and each country is able to establish its own enforcement and penalties. And as long as they hit the minimum set in the directive, they can actually create stricter standards. Even businesses that are outside of the EU but do trade within it are expected to comply. So your specific obligations may vary depending on the app functionality and the countries that you operate in. I strongly encourage you to consult your legal team or seek specialized legal counsel to understand your unique compliance requirements if you are unaware of them. I have included a link at the very end of my slides; there'll be a QR code with a bunch of information in there, including some resources, an in-depth analysis that's been put together by an Irish law firm, as well as a specific link to something more French, if you're interested in that. But do remember that just because you're headquartered in France doesn't mean you only need to be concerned about French regulation if you do business outside of France.

So now that I've just stressed everyone out, I started to ask myself, okay, how can we accelerate accessibility implementation in React Native? Can AI help us meet these requirements efficiently? So I started to conduct an experiment. I built an app. Actually, another fun fact about me is that rather than doing just one New Year's resolution like most people, I try to accomplish 101 things in a year. And actually I lied to you, I didn't build the app. I had AI build it, Claude specifically. I vibe-coded the app, I guess, is what we're saying now. This really was the first part of the experiment: to see if I could be as hands-off with the code as possible, because my question was simple. Would AI, unprompted, include accessibility features in the development process? Spoiler alert: it did not.

But let's take a look at how VoiceOver interprets code with no accessibility features added. You might be thinking that isn't that bad, but really the big offenders are that there are no roles and there are no states. The whole point of the app is to keep track of what I've accomplished and what I haven't, and as a screen reader user, I wouldn't be able to know that. Labels are really the only thing I'd say you get for free, that the screen reader will actually see. There is a bit of grouping with lists. I'm using FlashList, but I believe FlatList would do the same.
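Roughly, an untouched list item like the one described looks something like this in code; the component shape and names below are illustrative assumptions rather than the actual app's code:

```tsx
import React from 'react';
import { Pressable, Text, View } from 'react-native';

type GoalItemProps = {
  title: string;
  completed: boolean;
  onToggle: () => void;
};

// VoiceOver and TalkBack will read the <Text> content "for free",
// but nothing announces that this row is tappable (no role) or whether
// the goal is done (no state); completion is only a visual glyph.
export function GoalItem({ title, completed, onToggle }: GoalItemProps) {
  return (
    <Pressable onPress={onToggle}>
      <View style={{ flexDirection: 'row', alignItems: 'center' }}>
        <Text>{completed ? '✓' : '○'}</Text>
        <Text>{title}</Text>
      </View>
    </Pressable>
  );
}
```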
So, things to note for that as well. Those could be wins, they could be not, but something to remember for later. But yeah, as it stands, you would not be able to use this as a user. So let's go from the top and see what you can and cannot use. First of all, calling out all labels worked. Great, that's awesome. It didn't call out that that first thing is actually a button. It didn't group that; that's more just wishful thinking that it would group those pieces of text to all be one. It did skip that progress bar completely. It should have skipped that decorative icon; there's no information from it, no added value. It did call out that it was a text input, so that's good, marked that as green. It grouped that, which was good. But it didn't call it a button, it didn't include any of the statuses, and I have these quick actions in some of those list items, and it grouped those in so you weren't actually able to reach them. So, not super accessible.

But here's where things start getting interesting. At this point in the experiment, I actually asked Claude, "How could I further improve my app?" And four out of the five times that I asked, Claude suggested some accessibility improvements. Now, those accessibility suggestions were all web-oriented, but the intention was there. And my next step would be much harder if it didn't understand the importance of accessibility. So that did feel like a win. The experiment revealed something crucial: AI tools understand that accessibility matters, but they don't inherently prioritize it unless directed to do so. And even when they try, the influence of web accessibility is just so much more prevalent on the internet that it bleeds through into the recommendations for mobile applications.

So I took the next logical step: I gave AI my code and asked it to make it accessible. Since it was a relatively small code base, I figured it would at least meet me halfway. I was wrong. Rather than targeted accessibility improvements, it actually added so many layers of complexity that I didn't have anything meaningful to show you on a slide, so this is what we have instead. It was like watching somebody try to do simple math and reinvent calculus from scratch.

So when I had originally asked Claude to build the app, I was aware that there were going to be some areas that were a little bit trickier when building it. So I scaled back my approach and just focused on one component, the to-do item. This seemingly simple component has several accessibility challenges. It's a button; it has multiple elements to it, so there'll be some level of grouping; it has a mix of decorative and non-decorative icons, so ones that would need labels and ones that don't; it has text elements; and it has a secondary button inside it. That button actually only has an icon, so it definitely needs a label. And it's a nested interactive element, just to add to all that.

So I ran it through AI again, prompting it and asking for accessibility. Let's take a look at the code. Hopefully this is visible enough on the screen, but this is definitely a lot better than what we had before. To start with some good things: we actually have the role there, so it is calling out that it's a button now. And it even recognized that some of the elements were not important for accessibility, so it put those props in.
Sorry, they're at the bottom. Now, I will say this is an Android-only prop. I would have expected the iOS prop to also be there, but because of the grouping it actually didn't need to be, so that's fine. But just so you know, if you're trying to figure out why something is working on Android and not iOS, or vice versa, that would be it. It did add an accessibility hint. That's just a bonus; that's another thing you kind of get for free when you put a role on, so you don't really need it. But it's a nice little add-on, sure.

Now let's talk about labels. Okay, so is this accessible? Yes. Is it an approach I like? No. Maybe we should start with this: as you remember, we get a lot of labels for free. VoiceOver and TalkBack will already read everything on the screen that is a text component. So what you're doing here is decoupling what's on the screen and visible from what's being read by a screen reader. And when you're dealing with a large code base, like I am at Shopify, there are multiple people working on these different pieces. You're now dealing with two different pieces of code that need to be maintained and kept in sync. Especially when there's actually duplicate code here. Why not just keep it as one? Why not just utilize the stuff that's already there? So I don't love this approach; that's more of my opinion.

States. This is also the accessibility state; this is used to know whether something's disabled or checked or what have you. And you don't need the falses there; that's just added on. So that was a bit of an orange flag, not quite a red flag, but an orange flag for me.

And finally we have, let's see, oh, the nested quick action. The rendered quick action there, it didn't do anything to make sure it was pulled out. And with the accessible true, it even squashed that. Accessible true takes that view and collapses it to be one group on iOS. So it doubled down and made that mistake. So there are a couple of red flags that you should be worried about: if it's going to add accessible true everywhere, that's a bit of a red flag with AI. And if it's adding in all these other things that you don't need, like false for your statuses, but also just in general, you don't need this many accessibility props. So that's a bit of a red flag.

So at this point, I wondered, is this just Claude's limitation? So I tried it with ChatGPT, with Mistral, with Gemini, and they all had similar results. The pattern was consistent: AI tools would recognize the need for accessibility, but would struggle to implement it correctly in React Native. When I asked for a baseline understanding of accessibility, it actually recommended this line: "use semantic elements when possible, add ARIA when necessary." This is actually kind of a great concept, and I was surprised that it made this recommendation, but it did highlight two issues that were happening. First, misapplied knowledge. That line confirms Claude's web-based mindset on accessibility, and sadly that line doesn't fully translate into React Native. And second, the assumptions. AI assumed that web patterns would work in React Native, which simply isn't the case for accessibility. What you get for free with semantic HTML versus React Native components is just fundamentally different. With AI, when you're speaking different languages and you have different assumptions, it's really hard to get what you want.
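To make that difference concrete, here is a small sketch with a hypothetical component of what a React Native row has to spell out explicitly, compared with what a semantic element gives you for free on the web:

```tsx
// On the web, a semantic element carries its role (and conventions for
// state) for free:
//
//   <button aria-pressed={completed}>{title}</button>
//
// In React Native there is no semantic element to lean on, so the same
// information has to be declared explicitly with accessibility props.
import React from 'react';
import { Pressable, Text } from 'react-native';

type ToggleRowProps = {
  title: string;
  completed: boolean;
  onToggle: () => void;
};

export function ToggleRow({ title, completed, onToggle }: ToggleRowProps) {
  return (
    <Pressable
      onPress={onToggle}
      accessibilityRole="button"                   // announced as a button
      accessibilityState={{ checked: completed }}  // announced as checked/unchecked
    >
      <Text>{title}</Text>
    </Pressable>
  );
}
```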
And that mismatch is exactly what's happening, since LLMs are primarily trained on web accessibility content, not mobile. So, similar to how I started my talk, I started to work through building an understanding with Claude. I took several steps. I took the React Native documentation as a PDF, downloaded it, and uploaded it as reference material in Claude's project instructions. I asked it to make a component accessible, linking to that component file. I then corrected any of the assumptions it got wrong, had it write some instructions for me that I could add back to those Claude project instructions, and I did, and then I just repeated that process over and over and over again. One quick caveat: at some point I just wasn't really seeing progression happen, and this is because my instructions were getting to be so long that I had Claude condense them down for me as well.

So after several iterations of pumping out code and reviewing it, I eventually developed a workflow that actually produced fairly accessible screen reader components. This is what the workflow ended up being. I go component by component. I ask AI to provide an analysis of that component, which forces it to think about the component's purpose before attempting to find the solution. I correct its analysis if it needs it. That's very important. And then I request the accessibility implementation using that analysis. I found that only after AI understood the component and its challenges did it make it properly accessible. I did try to condense this into one step, but I found that just wasn't as thorough.

So, let's take a look at the results. There were two corrections I did make in the end, although I really should have probably also fixed the "Toto" pronunciation, one of those quirks that you learn from using a screen reader. The first was that it actually made up an accessibility role, which was caught by my IDE. The second was that it added an accessibility label to a text input. So it was actually duplicating things, because it was calling out both the value and the label for the text input. So it read "search goal, search goal," which, again, not super needed. And I found that through manual testing. But if you remember, when I was doing that iterative process, I was doing it on a component that didn't have a text input, so it never got feedback on that. If I had had a lot of text inputs in my app, I would have probably gone back and added that context in there. But I had only a few places, so I just manually did it myself. One more thing is that every time you ask Claude or any AI or LLM to interpret something, you are going to get a different output coming back out. So I was seeing a little bit of variation in that as well.

But let's take a look at how one of the other LLMs responded. So with Mistral, yes, I have to use the local hero here, I created an agent and provided it with the same instructions that I did with Claude. So I gave it the code and asked it to describe it, then copied in the accessibility docs in the next step when asking it to make the code accessible. Now, I will admit the mistake I made here was assuming it would just take the same instructions that Claude had written and be able to use them. But Claude's context and understanding was its own, and I really should have built that context up with Mistral, which I didn't. So I did notice a few things.
The first is that it seemed to understand the React Native components themselves a lot better. So my custom components, which were mainly built on React Native components, it was able to get those to a standard that I actually preferred over what Claude was producing. But when it came to the App.tsx file, which was mainly using my custom components, it struggled a little bit, because these were custom components and it was just making up accessibility props. So I should have fixed that with better context and by building up those instructions; it is a fixable problem for sure. And lastly, it didn't seem to know, off the bat, the difference between what was decorative and non-decorative. It just labeled everything. So this isn't inaccessible; it just slows down the user by calling out icons that have no value.

So, how about larger codebases? The approach really is the same. I acknowledge that the temptation to skip steps increases as your codebase increases, but maintaining this structured approach is really critical. You also may have additional documents that you want to include, such as design system documentation, or maybe custom approaches that your company aligns with, principles or general guidelines that you use. And I would strongly encourage you to build into your context something that conveys to your LLM to keep accessibility labels to a minimum, because, like I was saying before, extra labels decouple things, and keeping them to a minimum just means more maintainable code.

So, conclusion. This experiment really taught me some valuable lessons about AI and accessibility in React Native, and these are the things that I'd like you to hopefully take away from this talk. AI will not prioritize accessibility unless you do; it needs explicit direction to focus on accessibility features. Context is everything: providing reference materials and correcting assumptions dramatically improves results. Break it down component by component: component-level analysis yields better results than attempting to fix everything all at once. And verify and test. I know I only briefly touched on this, but you really need to test with assistive technology. It is really the way that your users are using your apps. And if you don't know how to test with VoiceOver or TalkBack, I do have a link to that in my resource guide.

So, yeah, thank you so much. This is the resource guide if you do want it. There's a lot of information on there on how to get started with accessibility, a few more talks, and a recap of the slides as well. So, yeah, thank you.
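For reference, here is a rough sketch of the kind of end result this workflow aims for on the to-do item discussed above. The component shape, the glyphs, and the "Edit goal" label are assumptions for illustration; the accessibility props are the standard React Native ones mentioned in the talk:

```tsx
import React from 'react';
import { Pressable, Text, View } from 'react-native';

type TodoItemProps = {
  title: string;
  completed: boolean;
  onToggle: () => void;
  onQuickAction: () => void;
};

export function TodoItem({ title, completed, onToggle, onQuickAction }: TodoItemProps) {
  return (
    <View style={{ flexDirection: 'row', alignItems: 'center' }}>
      {/* Decorative status glyph: hidden from TalkBack (Android) and
          VoiceOver (iOS) because it adds nothing beyond the state below. */}
      <View
        importantForAccessibility="no-hide-descendants"
        accessibilityElementsHidden={true}
      >
        <Text>{completed ? '✓' : '○'}</Text>
      </View>

      {/* Main tap target: announced as a button with its checked state.
          No accessibilityLabel, so the visible title text is what gets read
          and nothing is decoupled from what sighted users see. */}
      <Pressable
        onPress={onToggle}
        accessibilityRole="button"
        accessibilityState={{ checked: completed }}
        style={{ flex: 1 }}
      >
        <Text>{title}</Text>
      </Pressable>

      {/* Icon-only quick action kept as a sibling rather than nested inside
          the grouped Pressable, so it stays reachable as its own element.
          Because it has no visible text, it needs a short explicit label. */}
      <Pressable
        onPress={onQuickAction}
        accessibilityRole="button"
        accessibilityLabel="Edit goal"
      >
        <Text>✎</Text>
      </Pressable>
    </View>
  );
}
```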

Mo

Thank you very much, Britta. Thank you. I think this is really important, especially given the... Deadline? The deadlines, yeah. I think it's probably something that people want to start looking at a little bit more seriously. So it's a very well-timed talk. Cool. I want to start with a fun fact. Oh, yeah, for sure. Until people get some of their questions in. But it seems like you've done everything: a paper crafting business, competitive ballroom dancing, figure skating. God, I wish I was you. What else? Magician's assistant. Yeah, that was the wildest job I've had. Very cool. I never performed, though. I have that in a fun fact. I never actually performed, but I did learn all the tricks. That's very, very cool. You met King Charles. That's impressive as well. That's not the fun fact of that one. Fun fact. No, no. I don't want to get in trouble. That's fair. But I actually want you to, if you can, I felt like I couldn't do it justice, but I really like the bus pass story. Do you want to tell everyone the bus pass story?

Britta

So, yeah, I had a more extended version of that fun fact; I talked about this more last night. But I was part of this program when, you know, bus passes like the Oyster card or... Navigo here, yeah. The one that was done in Ottawa and Toronto was called Presto. And I was part of the beta program. I joined and helped out with their testing. And then they asked me to be one of the people they would take a photo of for the ads when they were publicly launching it. And they thought I was younger, because my email address kind of had a joke in there. So they assumed I was, like, 16. And at the time I was probably, like, 25. So I showed up and they were like, oh, you're older than we thought. And I was like, yeah. And I understood at that point, because they kept pushing the, like, hey, have your parents sign this. And I'm like, why? I'm over the age of 18. It's fine. So I didn't think anything of it. And yeah, when the poster finally came out, I noticed that they had stretched my face out and made all my moles red, so they looked like pimples. So they tried to make me into the 16-year-old that they wanted. So, no one in town... like, in Ottawa I knew a lot of people, and everyone kept saying, oh, your younger sister is on all these Presto ads. But I don't have a younger sister. It was just me, photoshopped to look, I guess, worse. It's concerning. Like, the other way than you'd expect. So, yeah.

Mo

Well, my condolences on that. Well, that's a great story. Okay, cool. I'll start with a quick one. So, it seemed like your workflow was: let's give it the React Native accessibility docs and go through. Did you experiment with giving it more prescriptive instructions and creating almost your own custom context? Because one of the things that I've done is I've told it, you must do this. Sometimes I capitalize it, and the AI listens to me more when I yell at it in caps. Did you try to give it a workflow? Almost like, step one, you need to look for accessibility labels, and don't add them here, add them here.

Britta

Well, I used the strategy more to, like, negotiate with Claude and be like, hey, this is what I'm looking for; what instructions would you need to get there if I wanted this to translate into that? And then it would write those instructions, and then I would add them to the project instructions. That's just the way that you can build up context in Claude, if you're unfamiliar with that LLM. So I would actually just get it to write me the instructions that it wanted. And that's where my mistake was with Mistral: I just grabbed the Claude ones. I'd built that context up around what Claude was expecting, and I'm sure there are a lot of differences that I'm just unaware of, but I was like, oh, use one for them all. That was kind of my goal with the talk, I wanted one thing for people to just be able to use, and I realized there's a lot more nuance in that. So when you do pull that prompt, because it is available in that reference guide, when you pull it and you use it, make sure you use it with the LLM that you're using, make sure that you actually have some conversation with it, and do a bit of that iterative process to make sure that it works for your app.

Mo

Cool. Someone's asked, have you tried using AI, specifically Cursor rules, to improve the workflow? Do you have a set of rules that you would recommend, or a specific agent to work with?

Britta

Ooh, that's... I have not. I will admit, sometimes, especially when you're working with a very large code base like I am at work, Cursor doesn't seem to see all of it; we're hitting boundaries. And so I don't always find it to be helpful in that context, which is where I'm doing the majority of, like, that's where I'm usually trying to accelerate that accessibility, and where I'm trying to teach other people. So I haven't used that, but that is a really good question. I know there are a bunch more things that I wanted to explore and just didn't get time to, and it's probably going to be the thing I do this next year, just exploring all the tools for that. So I'll keep that one in mind, thank you.

Mo

Cool. So someone's mentioned that in their specific instance of an app, they're using, you know, your run-of-the-mill average third-party libraries, what's there in the ecosystem, effectively. And they're quite lacking, you know, the libraries are quite lacking in accessibility support. Do you know of an initiative across the community to work with other React Native library maintainers on the EAA deadline? Or is there sort of no visibility on this?

Britta

Well, one of the reasons I wanted to give the talk is because I just wasn't seeing a lot of things out there on the deadline. And, like, it's coming up. You have three months, and it depends on how big your code base is. Now, mind you, I will say, and I'm not a lawyer, please seek legal advice, I'm just going to disclaimer that, there is a bit of a phased approach; there's a grace period for products that are currently up and running. But keep in mind it depends on which country, so just take a look at it, don't just take my word, please seek legal advice. But I forgot the question now... is there an initiative across the community libraries? I feel like, since some of my talks, I've started to see more people add even just accessibility in their documentation. So William Candillon, when we were chatting at App.js Conf, I asked him, oh, I really want to do a YouTube video with you on accessibility. And he's like, yeah, I'll make sure there are more accessibility docs in there. And he started to really put them in. And I've noticed that trend grow, even from last year. Even if it's just mentioning it, I think it's really important, because a lot of people forget, and it's kind of one of those out-of-sight, out-of-mind things.

Mo

No, a hundred percent. Well, that's great. That's great to hear. This is something that I didn't realize bothered me, but now that the question's there, I do realize that it did bother me, which is: can you add a pause between some of the screen reader elements and how it's reading out the elements on the screen? Because it's reading everything in this one long string without any pauses in between. How much control do you have over that?

Britta

I mean, most people do it in a hacky way, but I would actually encourage you not to. When you're able to see the screen, you're quickly glancing at it and quickly looking at everything. For someone who's using a screen reader, the way that they get that information is by having it read to them really fast, right? And that takes much longer to navigate. And so that's why I really encourage: if it's a decorative icon, don't put the details there. It does not need those details. Keep labels really short. You want to provide that context as quickly as possible. So I would almost encourage, like, don't worry about it. In fact, the videos I showed you, I really, really slowed down. I'm not used to having the screen reader be that slow; it's usually way faster. Every time I've shown videos of how fast I do it, people are like, I'm not hearing any of the words.
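As a small illustration of that advice (the component, icon asset, and label text here are made up for the example), a short label on an icon-only control is usually all a screen reader user needs:

```tsx
import React from 'react';
import { Image, Pressable } from 'react-native';

export function ShareButton({ onPress }: { onPress: () => void }) {
  return (
    <Pressable
      onPress={onPress}
      accessibilityRole="button"
      // Short and to the point: "Share", not something like
      // "Blue arrow icon that opens the share options for this goal".
      accessibilityLabel="Share"
    >
      {/* The button's label is what gets announced, so the icon itself
          doesn't need its own description. */}
      <Image source={require('./share-icon.png')} />
    </Pressable>
  );
}
```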

Mo

And do people get used to it being that rapid, that fast?

Britta

Oh, yeah. It's really common, because something being listenable for you versus listenable for somebody who's using a screen reader all the time is going to be a very different speed. And you can change that speed; it's adjustable.

Mo

That makes sense. Okay, cool. Yeah, it's the classic question with AI: is the time invested in providing context and doing loops over the context worth it? Or is it just faster doing it by yourself?

Britta

I think it depends on how big your code base is. And I think this talk was really about, when it feels like an impossible task, getting you started on something and moving forward, and just helping speed you up and get you to the right place. Yeah, I think it really depends. Does this app need that? Probably not. Did I spend more time building up the context than changing those three pieces of code? Absolutely. But Shopify's apps, those are big code bases. And if one of those wasn't accessible and I had to do it this way, I would definitely want to use AI to help with that.

Mo

I think it's just the scale, right? Like, I was working on migrating a component library from one styling solution to another, and we used AI for it. It would take two hours per component manually, and after spending a few hours to get the prompt right, it was 15 minutes. So it just depends on the scale and how much you're needing to go through. Let's do the last question. And even though the pronunciation was "Toto", which kind of sounds a little bit like Theodo, so it was a bit of free marketing for us: is there a way to change how the voice pronounces certain words if it's just getting it really wrong, like "Toto"?

Britta

So that was because, as humans, we read it as "to do" even though it's written as one word. That was the thing: I would have just added a space and it would have read it as "to do". I just didn't catch that at the time because I was listening to it quickly. But I wouldn't worry so much about that. That's one of the things that I've often been told: people who use their devices are used to the quirks, they understand their quirks. And when you try to change that, you run the risk of having more code to maintain that's maybe a disconnect from what's on the screen. There have been a couple of things, like periods and dots and hyphens and dashes, that can sometimes be interpreted different ways, and we do try to solve those problems. But for the most part, if it's just saying a word wrong, I wouldn't worry too much about that.

Mo

Fair enough. Cool. Well, there are a lot of questions, actually. There are still like six or seven questions. Come talk to me... or rather, grab Britta during the breaks and at the after party. I think it sounds like people are very interested in the accessibility topic, which is great, good on you.

Britta

Yeah, I really appreciate that.

Mo

Cool. Well, massive round of applause for Britta. Thank you.
