Read our case study to learn more about our collaboration with the Australian Government’s Department of Agriculture, Fisheries & Forestry.
Thank you. As Jed said, I'm from Queensland on the Sunshine Coast. This is actually my first React Sydney event ever. Thank you all for having me. Very exciting to be here tonight. I'm going to be talking about accessibility and design systems, so let's get into it. I'm Jordan, and I work at a company called Thinkmill. For the last year, I've been working with the Federal Australian Government's Department of Agriculture, Fisheries, and Forestry on a new open-source design system. As it is open source, all the code is available to view on GitHub, and we publish everything to npm as well. I've put a little link there if anyone's interested in checking that out.
At the end of last year, we undertook a big accessibility and usability audit which was facilitated by a company called Intopia. Tonight, we'll talk about what the Agriculture Design System (AGDS) is, followed by a summary of the accessibility and usability audit last year. And finally, we'll look at how our team approaches accessibility and usability at a design system level.
Let's get into it by looking at what AGDS is. To give you some visual context, this is our Storybook kitchen sink example, essentially a dump of all our components in the system in no particular order. We're using Storybook here because it's a great tool for switching between our light and dark palettes. AGDS is also a themeable system. In this video right now, I'm switching between what we call the “Agriculture” theme (which is what we use in the department) and what is known as the “Gold” theme (we’ll take a look at Gold in just a minute).
As you can see, one part of AGDS is a React component library, but at a higher level it's really a shared language between the design, development, and content practitioners at the department. As well as the components you just saw, we have templates for common page compositions and patterns that exist in the department, and for the designers we have a Figma library where all the components and tokens live.
In the video I demonstrated switching between the Agriculture theme and the Gold theme. A lot of you probably don't know what Gold is, so we're going to look into that right now. Gold is an open-source design system for building government services and products. It's designed to replace the former Australian Government Design System (AuDS), which was funded by the DTA but unfortunately decommissioned in 2021.
Gold and AuDS have been widely used across many government products: the Department of Health, the Department of Veterans' Affairs, and the New South Wales government have all used Gold or AuDS at some point.
It's no secret that Gold has been the inspiration for AGDS. We've really tried to take its aesthetic and design principles and extend them to meet the needs of the department. The main reason is that Gold did a really good job in terms of usability and accessibility – a lot of research went into its development, and we wanted to leverage as much of that as possible. But Gold was built a long time ago and things move very quickly in tech, so I like to think of AGDS as a modern interpretation of Gold. We've built everything from the ground up using React and TypeScript, all the styling is done with Emotion (another Thinkmill package, for CSS-in-JS), and we have a bunch of really cool shiny toys for our monorepo setup and documentation site.
At a really high level, this is the architecture of the design system. We have the components, which are the things you typically think of when you think of a design system – things like an accordion or a select component. But really they are a composition of our primitive components. We expose these primitives to our consumers so they can build components that we don't have in the system. The cool thing with primitive components is that they're really flexible, and they have style props which allow us to interact with our design tokens – the values for colour, typography, borders, etc. We have a nice way of referencing those values in React that's both type-safe and responsive.
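As a rough illustration of how style props can map back to design tokens in a type-safe way, here's a minimal sketch in plain TypeScript. The token names, values, and function names are hypothetical – this is not the actual AGDS API, just the general idea.

```typescript
// Hypothetical design tokens. `as const` lets TypeScript infer the
// exact token names, so only known tokens are accepted as style props.
const colorTokens = {
  bodyText: '#313131',
  action: '#00698f',
  border: '#808080',
} as const;

type ColorToken = keyof typeof colorTokens;

// Style props accept token *names*, not raw CSS values.
interface BoxStyleProps {
  color?: ColorToken;
  background?: ColorToken;
}

// Resolve style props into a plain CSS-in-JS style object.
// A typo like { color: 'actoin' } fails at compile time.
function resolveStyleProps(props: BoxStyleProps): Record<string, string> {
  const styles: Record<string, string> = {};
  if (props.color) styles.color = colorTokens[props.color];
  if (props.background) styles.backgroundColor = colorTokens[props.background];
  return styles;
}
```

A primitive component can spread the resolved object into its styles, so every colour on screen traces back to a named token.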
As an example of that, here's our global alert component that typically sits at the top of the page. You can see it's got a background colour, an icon, and some typography. The code for this component is pretty simple – it's just a composition of a flexbox, an icon, and a couple of buttons. This syntax might look weird at first for people who aren't familiar with it, but it's a really powerful and flexible way of handling styling in our design system. As I mentioned before, it makes sure everything maps back to our tokens correctly.
Now, let's get into the accessibility audit. As mentioned, it was facilitated by a third-party company called Intopia. Neither the department nor Thinkmill has any affiliation with them whatsoever, and they did a fantastic job. The audit was split into three phases: an audit of the components in isolation, followed by an audit of our page templates and documentation site (which is built using the design system components), and finally some usability sessions. These sessions were with users who had specific accessibility requirements: screen reader users, low-vision users who needed screen magnification software, and neurodivergent users. Having Storybook for this phase was amazing because it does such a good job of providing a playground for testing components in isolation. In this video, I'm just stepping through our text input component, seeing all the different variations it can be in, and the auditors had a nice little playground down the bottom for changing all the props and performing their checks.
Phase one. What we were looking for here was making sure the components respond nicely to both screen reader and keyboard input, making sure they behave when placed in Windows high contrast mode, making sure everything reflows when you zoom all the way to 400%, making sure colours have sufficient contrast between background and foreground elements, and making sure the HTML is valid. There were some cases where we were rendering a div instead of a span, which yielded HTML validation errors – invalid HTML can cause a lot of issues for screen readers. I'm pretty proud to say our team got an 80% pass for AA compliance before we did any remediation work.
A couple of issues came out of this phase. One was the button's loading state. We have a loading prop on the button, and when it's true, we overlay the text with some loading dots. What we actually had to do was put an aria-live region inside the button, and when the loading prop is true, insert the text we want announced to the screen reader, because simply toggling the button's content with a ternary operator wasn't enough for some screen readers.
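To sketch the idea framework-free: the live region stays in the DOM the whole time and only its text content changes, which is what makes the announcement fire. The markup, labels, and class names below are illustrative, not the actual AGDS implementation.

```typescript
// Sketch of the loading-button pattern: a *persistent* aria-live region
// inside the button whose text is swapped, rather than a region that is
// conditionally mounted and unmounted. Markup is illustrative only.
function renderLoadingButton(loading: boolean): string {
  // Visible label is replaced by loading dots while loading.
  const visibleLabel = loading ? '···' : 'Submit';
  // The live region is always present; only its text changes, so
  // screen readers announce the transition into the busy state.
  const announcement = loading ? 'Busy, please wait' : '';
  return [
    '<button type="button">',
    `  <span>${visibleLabel}</span>`,
    `  <span aria-live="assertive" class="visually-hidden">${announcement}</span>`,
    '</button>',
  ].join('\n');
}
```

Note the aria-live span exists in both states – removing and re-adding the region along with its text is exactly what some screen readers fail to announce.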
Then there was Windows high contrast mode: in our date range picker, you couldn't see the selected date range because we were relying on background colour alone. So we added a font weight as well as an outline on the selected dates to indicate the selection. There was also an example of content not reflowing when you zoom all the way to 400%: we forgot to add scrolling to our main nav component (on the left here), so you couldn't see the items that were cut off by your screen.
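The fix works because forced-colors mode strips author background colours but preserves font weight and draws outlines with system colours. Here's a hedged sketch of what such a style object could look like in a CSS-in-JS setup – the values are illustrative, not the actual AGDS styles.

```typescript
// Sketch: styling a selected calendar day so the selection survives
// Windows high contrast (forced colors) mode. Values are illustrative.
const selectedDayStyles = {
  backgroundColor: '#00698f', // lost entirely in forced-colors mode
  color: '#ffffff',
  fontWeight: 'bold', // preserved in forced-colors mode
  // A transparent outline is invisible normally, but high contrast mode
  // repaints it with a system colour, so the selection stays visible.
  outline: '2px solid transparent',
  '@media (forced-colors: active)': {
    outline: '2px solid Highlight', // CSS system colour keyword
  },
};
```

The general lesson from the audit finding: never encode state in background colour alone – pair it with weight, an outline, or an icon.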
Once we made all the fixes for the issues caught in phase one, we moved on to phase two, where we audited the templates and documentation site.
Here's an example of one of the templates we offer: a simple sign-in form. Although it's a really simple layout, it does a good job of demonstrating how different components can be composed together on a page. We have some skip links at the top of the page, followed by the main nav, the main content area, and a footer. Then we have a form with client-side validation, a button with a loading state to mimic an API call, and an API response returning an error. We're testing that our components fit nicely when placed together on a page and that we're managing focus correctly as well. We did a pretty good job in that phase too: an 86% pass for AA compliance [WCAG 2.1]. A lot of the issues that came out of this were on our documentation site – we have a live code editor for some snippets, and there were a couple of issues there with keyboard input.
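For anyone unfamiliar with the skip links mentioned above, the pattern is a set of links that are the first focusable elements on the page and jump keyboard users past repeated navigation. A rough sketch (the markup, IDs, and class names are hypothetical, not the actual AGDS template):

```typescript
// Sketch of the skip-link pattern: visually hidden links at the very top
// of the page that become visible on focus and jump to page landmarks.
// Markup, IDs, and class names are illustrative only.
function renderSkipLinks(): string {
  return [
    '<nav aria-label="skip links">',
    '  <a class="skip-link" href="#main-content">Skip to main content</a>',
    '  <a class="skip-link" href="#main-nav">Skip to main navigation</a>',
    '</nav>',
    // Later in the page, the targets. tabindex="-1" lets the <main>
    // landmark receive programmatic focus when the link is followed.
    '<main id="main-content" tabindex="-1">...</main>',
  ].join('\n');
}
```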
Once that was done, we moved on to the usability sessions, where we observed users with specific accessibility requirements (screen readers, etc.). We asked them to register a pet for a fake government service we created. I'm just going to let this video play and talk a little bit about what's going on. This is a form flow for registering a pet for a fake government service. The reason we built this, instead of using a product that already exists in the department, is that using some of those products requires quite specific domain knowledge in agriculture, which is really hard to recruit for. So we took the UI and UX patterns that exist in the department – huge forms, hundreds of form fields long, split up into multiple tasks, which are further split into multiple sections that the user steps through in a flow – and mimicked that as best we could with something a regular user could understand. Here, we're just registering a fake pet.
The findings that came out of this were really fascinating. The screen reader users provided overall positive feedback: they were able to navigate around the page using shortcuts and interact with our components using keyboard input as expected. But the visual users were pretty confused by some elements on the page – some icons looked like radio buttons, and some of the copy on the page was unclear. So we knew we had to do some work for our visual users. Before we made any changes, here's what one page in that form looked like, and here it is after (I'll just quickly show before and after again). Some things to note: we added some colour and restyled the progress indicator (which is what you see on the left) to look more like a timeline of steps rather than individual tasks that can be completed in any order. We added a subheading above the progress indicator to make it clearer what step you're on, and we consulted with the content team to make the labels in that component as clear as possible. Finally, down the bottom here is our summary list component. We made sure the key and the value were as close together as possible, because a lot of users weren't sure they were related – they were like, "Well, this one thing's all the way over here, and this is over here." This was really interesting to me because it showed that accessibility does not mean usability. You can spend a lot of time making everything as accessible as possible, but as soon as you get it in front of your users, they might not understand it. It's really important that usability sessions happen early.
At the end of this three-phase approach, Intopia gave the AGDS team a certificate of compliance and said, "Hey, you did a great job." Because it's open source, we can now point potential future clients at it and say, "Here's how you can implement an accessible React-based design system." So that was great, but it was no accident – there was a lot of hard work that went into making it possible. I wanted to share with you how we tackle that as a team.
Let's get into it. I think the main reason is that we make accessibility a core pillar of AGDS – it's fundamental to everything we do. Whenever a design decision or a code change is made, we ask ourselves, "Is this the most accessible thing we can do, or is there an alternative solution?" It does take a lot of time, but we have that principle printed on our website, which makes the time spent easy to justify. It helps that this is a government service and we are legally obliged to make it as accessible as possible. But for private companies who don't have that same requirement, you still have to find a reason why accessibility is important to you and write it down somewhere, because if you don't, it can very easily fall through the cracks.
Another thing we do is our way of working. We have a philosophy that design is everyone's job – and by design, I don't mean jumping into Figma and changing the border radii of components. I mean design as the act of problem solving. Whenever a problem is brought to us, we jump on a call together and discuss what needs to be done, both engineers and designers. This gives everyone an opportunity to bring in their areas of expertise and catch things early, rather than a traditional software setup where you have designers in one room and developers in another, and the developers just do whatever the designers say, slap some ARIA attributes on at the end, and try to make it accessible. We try to do everything in a collaborative manner.
The other thing we do is try to get things into code as quickly as possible. While Figma is an awesome tool for coming up with ideas and testing them quickly, it is just a picture at the end of the day – it's not what the end user is going to see. Being able to get ideas into code quickly, test with a screen reader, test with a keyboard, and see how it behaves in a real browser is amazing for us. That's not unique to our team; it's the standard way of working at Thinkmill. Boris and Jed have done a great job of publishing that on the Thinkmill website – if anyone is interested, there's a link available.
Here’s a tool we use for prototyping in code. It's called Playroom, it's a website you can visit where you get a code editor at the bottom with all your components loaded in, providing autocomplete and snippets you can use. The top section shows a live demo of your code, and you can preview it in various screen sizes that you can configure. You can share that [demo] with your team as well - the URL syncs up with the code, making it easy to share around the team.
This is an awesome tool for us to use internally – we can just jump in, write some code, and share it with the designers. Usually we have designers or developers come to us and say, "Hey, what's the most accessible way to do this or that?", and it's awesome to jump on a call with them, show them Playroom, work through it together, and then they can fix the problem in their own code base. This has been an amazing tool for us.
Another thing we do is make sure to manually test in as many different browsers and assistive technologies as possible. I know as engineers we love to automate as much as possible, but unfortunately there are just so many issues that aren't captured by automated testing. GOV.UK published an article saying that only something like 30% of accessibility issues can be caught with automated tests, so there's just no way around manual testing. We try to do a bunch of different manual testing with every code or design change.
I just wanted to give a shout-out to a service called Assistiv Labs. It's like BrowserStack but for screen readers: you can remotely test a website in NVDA, JAWS, or VoiceOver, as well as Windows high contrast mode. It's a really great tool for testing.
And finally, have multiple respected accessibility references. Whenever you want to build something new, it's good to reference someone who has done it before. I always love to reference the W3C Web Accessibility Initiative's ARIA Authoring Practices Guide (WAI-ARIA APG). It has really clear guidance on how to implement common components such as dialogs, sliders, and carousels. But it is quite verbose, and the code examples are outdated at times, so seeing how people tackle those issues in more modern libraries like Reach UI, React Aria, or Radix UI is really useful as well.
To wrap up: I think accessibility needs to be made a priority, not just a nice-to-have. Check out the Thinkmill Method, because that way of working has a direct impact on the accessibility of an application. And finally, do not rely solely on automated testing – there's just no way around it yet.
Thank you so much. Any questions? Otherwise, I'll leave it at that.
Q: What are the most suitable methods for testing different screen resolutions across devices?
A: A bunch of different devices: a phone, an iPad, a laptop, and a smaller tablet as well. I try to do a bunch of testing on Windows as well as Apple devices. I don't think there's a silver bullet just yet.
Q: Is the primary motivation of making AGDS open source for others outside government to use?
A: Mainly for other government departments outside the Department of Agriculture to use. The government is trying to work more in the open, but a lot of departments don't. Mainly it's to allow other departments to pick it up in the future (hopefully as a Gold replacement one day).
Q: What kind of tools can you use for testing Accessibility?
A: Good question. Engaging with a third-party accessibility company is good because they can perform audits and give you a certificate of compliance; if you're just doing it on your own, you're kind of saying, "Trust me." You can use the WCAG guidelines as your baseline – they do have pretty clear guidance, like "this type of component needs to respond to these keyboard events" or "this pattern needs these elements" – you can follow that, but it is quite verbose.
Q: What inspiration from React Aria, Radix UI, Radix Primitives, and Reach UI was used to save development time?
A: We decided not to use any external libraries at first because we were trying to be a Gold replacement (which had almost zero dependencies), but as the design system grew we needed a combobox component, so in that case we used Downshift. There are some things where we would save a lot of time by using Radix or Reach UI, but we didn't want to be locked into their way of thinking. I think Reach UI is great, but it got deprecated and doesn't support React 18, which is... if you're relying on it, what do you do?
Q: From the perspective of managing multiple design systems, with both Gold and AGDS now, do you think more departments will branch out and invest in their own design systems?
A: I don't think we've reached that bridge yet, but hopefully other departments can take this as a starting point and extend and change it to meet their needs, because every department has quite specialised requirements. It's possible now for another department to pick it up and theme it, but we haven't seen that just yet.
Q: Why did you use Emotion, and did you evaluate anything else? What are your thoughts on CSS-in-JS?
A: Emotion is great for design systems right now. It doesn't really require build tooling, so when getting it into existing products that already have their own ways of styling, we can incrementally adopt components without having to change too much. I know a lot of Thinkmill people are exploring Vanilla Extract and some other cool styling libraries, but we've gone with the classic Emotion for now.