Not in the loop on Assistive Access? Beyond Apple’s announcement earlier this year, there’s an awesome WWDC session on Assistive Technology by none other than Allen Whearry.
⚠️ Note that Assistive Access is currently still beta software, and may have bugs that I therefore won’t judge too harshly. ⚠️
It’s interesting that the initial setup of Assistive Access is a little different than the setup for subsequent changes; the initial setup, however, nicely guides you along and introduces certain concepts and gotchas.
For example, you can set up the “appearance” in Assistive Access to be row-based or grid-based.
After this, we can start setting up apps.
There are a few first-party apps that have been optimized for Assistive Access, by means of having fewer features and a bigger, bolder user interface. Unfortunately, this is not available for any third party apps… yet.
We can see that Calls has setup options to enable or disable certain functionalities. Note also that this “Calls” app actually combines features from both the “Phone” and “FaceTime” apps in one app — neat!
For third party apps, I was delighted to see that, whereas they are not optimized, we can still set them up specifically for Assistive Access. For example by setting their language, and allowing access to things like Camera, Contacts, Live Activities, etc. — depending on what the app supports, of course.
After setting up the apps, there are some more screens in the initial setup, sharing some “things to know” about features that are unavailable in Assistive Access mode, like notifications, software updates and more. There’s also an explanation of how to exit Assistive Access (triple-clicking the home button; I assume this is triple-clicking the side button for devices without a home button).
… and with that, we’ve set up Assistive Access — now let’s explore it!
Enabling Assistive Access can be done through the Accessibility settings, or the Accessibility Shortcut after adding it there. It does not seem to work via Siri (just yet).
Requiring an “admin” password to enter and exit Assistive Access means users can’t accidentally exit the mode. Using the grid appearance, all we see is our apps in large tiles; no time, battery level, connectivity, etc. It also seems like non-default app icons, if set up, are not shown.
Optimized apps follow a layout similar to the grid appearance we set, creating an experience similar to the “home screen”. Like Messages here, for example.
From there, we can enter a conversation and participate in it using, for example, the emoji keyboard.
The Camera app is even cleaner and simpler than its (non-Assistive Access) counterpart. A view finder, and a “Take Photo” button. Easy as that.
Looking at non-optimized apps… we immediately get a sense of just how large the gap in experience is between optimized apps and those that are not. Non-optimized apps only get a “Back” button that is not context-aware, but instead always goes back to the home screen.
What’s also interesting is that iPhone-only apps running in Assistive Access mode on iPad (which admittedly is quite an edge case) seem to “emulate” an iPad environment of sorts — and that iPad emulation may lead to UI bugs and crashes.
You can tell this mode is still under construction, as there have been many improvements during the beta cycle, like supporting dark mode, tweaking how app settings can be changed, and more. It’s got a ways to go, but I’m beyond excited by this new assistive technology and what it can do.
Furthermore, I am so proud of all the people that have worked, and are working, on this new assistive technology. You know who you are; you rock!
And now we patiently wait for some more APIs to optimize third party apps..!
Thanks so much to James Sherlock for proofreading!
Hover Text is a macOS feature that lets you view text at larger sizes, typically by holding a modifier key and hovering over the text or element (hence the name). What’s neat about it is that it will show you the accessibility label of any element, not just text, including in the iOS simulator. Meaning Hover Text can be a rather quick and frictionless way to get an idea of elements not having an accessibility label, or having an awkward one. You can find Hover Text under System Settings (or System Preferences if you’re not yet running macOS 13 Ventura) > Accessibility > Zoom > Hover Text.
And this is what Hover Text looks like in practice, here as seen in the simulator.
A neat start, but let’s take a look at macOS VoiceOver next…
Using macOS VoiceOver to test your iOS app on the simulator is a bit of a power user trick. If you build apps for iOS, there’s a chance you’re not familiar with VoiceOver on macOS — I wasn’t until joining the macOS Accessibility team at Apple. There’s a steep learning curve, which people familiar with iOS VoiceOver will probably know.
System Settings > Accessibility > VoiceOver > Open VoiceOver Training… is a good place to start, but let’s go over an even quicker quickstart.
To turn on VoiceOver on macOS, given a keyboard with Touch ID, hold ⌘, then triple-press the Touch ID button. You can also ask Siri to turn it on or off.
Then, as you can imagine, VoiceOver navigation is going to be quite different from iOS. No touch screen, but a keyboard to interact with things. Oh, the possibilities!
The first thing to know is that there are so-called “VoiceOver modifier” keys. By default, that’ll be either Caps Lock or ⌃+⌥. I’ve gotten used to the latter.
To navigate, hold your modifier key(s), and use the arrow keys. That’s like a right (or left) swipe on iOS, and honestly, with that you’re off to the races. Try it out!
Activating things like buttons is done with the modifier keys and the space bar.
That concludes our quickstart for now, let’s get back to the (fun and) profit.
Macs with Apple’s M-chips can run iOS apps on the Mac — which is practically the same thing the simulator does, and how Catalyst apps work. And so we can leverage this with the Simulator app, simply by navigating our macOS VoiceOver cursor… to our iOS app!
You can already tell the… awkward bits that come with this, underlining the importance of treating iOS devices as the source of truth in all cases. I have no idea why the VoiceOver cursor is off — it normally works okay, but apparently not in the simplest of demo applications. Rest assured, things work. We can activate the button, and we can verify VoiceOver labels and elements.
Voice Control also works, but the element position is out of whack there too, by the looks of it.
What’s neat about this is that it gives us a few more options than the Accessibility Inspector, although, as you can see, either is going to have some tricky things to work around. Some things that the simulator can’t do, but macOS VoiceOver can, are navigating and inspecting AXCustomContent, for example, or performing the “Zorro gesture” (maybe more commonly known as the “accessibility escape”) by telling Voice Control to “go back”.
AXCustomContent example

Alright, let me also give an example using custom content, as that is something you can’t verify with the Accessibility Inspector. It also requires a tiny bit more macOS VoiceOver magic. Let’s take a look first:
You might have been able to read along with the caption panel, but to show the entries for custom content, use your VoiceOver modifier keys, plus ⌘, plus /. You can then navigate through them using just the up and down arrow keys, without any modifiers.
These menu-style actions are the equivalent of iOS rotors. And as you might imagine, this only scratches the surface of the capabilities of just rotors on macOS. Ah, the power of macOS VoiceOver… perhaps that’ll leave you wanting to explore it further. (Pro tip: the hints will help you gradually explore more options and actions as you come across them, like custom actions and custom content.)
Making use of the tools we have to help ourselves is something I really like exploring. Sometimes things don’t work out, other times we take away these (little) things that can help our workflows and more. I’ve been using these tricks to drastically improve my testing workflows… and to have a lot of fun!
Whilst these tricks are a niche way of testing your iOS apps for accessibility, I feel they are great to know about and add to your toolbelt. But beware: as is (unfortunately) also true for the Accessibility Inspector, only an actual iOS device is going to give you the exact experience your user has.
macOS is, as we know, quite different from iOS, and looking at the aforementioned iOS on Mac and Catalyst and how they work, they will change certain behaviors compared to iOS. A major example is navigation: whilst on iOS we have a (mostly) flat structure, macOS has a rich, hierarchical structure. What does that mean, you might wonder? Well, containers like table and collection views are to be drilled into. Where on iOS you’ll navigate a table view cell by cell (unless you have a (custom) rotor), on macOS you’ll be drilling into it using the VoiceOver keys plus down arrow (to enter), and up arrow (to exit).
So: always test on actual iOS devices, too, and take those as your source of truth.
And oh yeah… I’ve just been reminded how lucky we are that the iOS screen recording automatically includes Voice Control and VoiceOver sounds… which is not true for macOS. Guess I’ll add that to the pile of Feedbacks to file as a result of writing this.
I’d love to hear from you if you’ve tried this out!
Special thanks to Chris Wu, Nathan Tannar and Rob Whitaker for their proofreading and feedback!
… nil after all. That crash because the constant that wouldn’t change… did in fact change.

Not that I’m speaking from experience… why’d you ask?

These things happen. And arguably, it’s OK for them to happen. Yes, there’s a myriad of things we do — consciously or not — to prevent these things from happening. In the end, however, it’s because of bugs, of issues, of changes, that we programmers are, well, programmers. We’d not be needed otherwise, I’d argue.
One of my favorite things in programming is being able to write a piece of software that is well-defined — as well as moulding a thought, requirement, or project to become well-defined.
It’s then, with that knowledge, we can work in a structured way. We can define logic that can be tested. Well tested. With test code that not only becomes as important as the code we ship, but also as understandable, as readable.
The magic moment kicks in when an issue or (other) edge case is found — as it eventually always will, even though our software was so well-defined. We’re humans after all, and we do seem to make mistakes.
I digress… but now we can write a new test — with our now known failing input — and have that test fail. Of course. But this is great! We have a newly defined input with a certain unexpected output. We use our newfound knowledge to fix the bug, et voilà, the test passes. What a wonderful feeling.
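To make that tangible, here’s a minimal sketch of what such a regression test could look like, assuming a hypothetical Parser type that misbehaved on empty input:

func test_thatParsingAnEmptyStringDoesNotThrow() throws {
    // Given: the input that was found to trigger the bug.
    let input = ""

    // When/Then: this fails before the fix, and passes after it.
    XCTAssertNoThrow(try Parser().parse(input))
}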
A perfect scenario, one might argue, but one that isn’t always as straightforward. Not as straightforward to write that (as) testable code. Not as straightforward to understand the underlying issue and apply a fix. Not as straightforward to find that issue or unexpected output in the first place.
We could see the code and the accompanying test that fixes the issue as some form of “pure” documentation: you can’t argue, by reading the fix and the test, that the issue isn’t fixed. And you can logically reason about it. (Of course, this isn’t exactly true. We could introduce a new issue with our fix, of course, especially when dealing with more complex challenges. Nor should we expect anyone to take a fix — however trivial — for granted, or to understand said fix in exactly the ways our mind does).
While our code is a form of documentation, we’re all humans, and we all think differently. Have a different perspective.
When it comes to documentation, different things work (better) for different people. There’s no one way to document things. One might prefer video content over documentation that is written. Examples can make something tangible. That oh-so-understandable code does not hold up when someone unfamiliar with code is looking into things. Or your future self, not having used that programming language in a while.
Having things documented in multiple forms can help keep things approachable for different people, at their own speed, in their own way. And having that kind of track record of how we work will help us keep a better understanding of our software (this, of course, applies to much more than just fixing issues, like processes, decisions, and proposals).
Documentation can help in finding issues, unexpected outputs, and bugs, too. We certainly don’t always have a well-defined bug report, failing input, or other tangible information to guide us in the correct — or even a — direction. And so we should be extra careful in tracing our steps, documenting our findings, and understanding our path from an issue being flagged to understanding (and, I hope, fixing!) it.
… or, not. It just seems to happen that we’ll not always find the underlying issue, understand what’s responsible for that crash, or get the time to investigate something not critical enough at that point in time.
What I’ve seen happen in this case, from time to time, is that rather than fixing the issue, we patch it. Whilst you could argue that patching something would also fix it, what I mean by patching is preventing the issue at hand without having a full understanding of why it happens in the first place.
For example, if we’re unexpectedly returning nil given some unknown input, we can default to returning, say, an empty string (""). If we know that sometimes we index into an array but go out of bounds, we can return early if we determine we would go out of bounds.
As we can see, however, these examples don’t fix the issue. They dodge it, move around it, ignore it. That is what I see as a patch. And, alongside it, as something that could, down the line, cause undefined behavior — as in, it can cause us to evaluate code at a later point with input we don’t expect. Luckily for us, we have (among others) assertions to help us there.
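As a minimal sketch of the second example, assuming a hypothetical titles array:

func title(at index: Int) -> String {
    // The patch: return early rather than crash with an out-of-bounds access…
    guard titles.indices.contains(index) else {
        // … while this assertion still flags the underlying issue in debug builds.
        assertionFailure("Unexpected out-of-bounds index: \(index)")
        return ""
    }
    return titles[index]
}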
Patching issues isn’t something objectively bad. Sometimes we deal with a complex piece of code and we’ll have to make do with what we do know. Add that extra piece of logging to help catch the gnarly input that causes us problems. Write a workaround because an API that’s not in our hands causes issues.
It’s in those cases that proper documentation (and potential follow-up steps) should be written. That means, first of all, knowing and understanding the difference between a patch and a fix. And the ideal goal of that documentation (as, I’d argue, the ideal goal of any documentation) is for anyone else (within reason, like others within the company, or other people on the team) to be able to go through it, learn from it, and figure out next steps.
Oh and as it turns out, doing that isn’t easy. Isn’t straightforward. Takes time, energy, experience and communication. But we can all try and learn.
Working on projects, we’ll all at some point have to deal with bugs, unexpected behavior, and other issues. They will all be of different magnitude and size, and we’ll have to approach them accordingly.
Whereas in some cases we can be confident of a fix, and document it, in other cases fixing something isn’t straightforward, or (in the short term) feasible at all. It’s then that we may be patching issues instead — and making sure there’s a shared understanding of what has (and has not) been done, as well as providing documentation on the process and next steps, is a crucial step that should not be neglected.
Whilst possible, it was far from straightforward. In this post, I want to talk about matching the built-in behavior of UITabBarController, by supporting the Large Content Viewer as well as the magical (as you will see later) .tabBar trait.
You can find the code from this blog post on GitHub.
To start off, let’s take a look at what our custom tab bar looks like, and how we’ve built it.
What we’re looking at is a blank, plain UIViewController that contains a UIStackView. That stack view has four buttons added as arranged subviews.
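Something like the following sketch, give or take; the tab titles are just placeholders:

let bar = UIStackView()
bar.axis = .horizontal
bar.distribution = .fillEqually

// Add a button per "tab" as arranged subviews.
for title in ["Camera", "Photos", "Search", "Settings"] {
    let button = UIButton(type: .system)
    button.setTitle(title, for: .normal)
    bar.addArrangedSubview(button)
}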
Grand. So these are buttons, which we can tap and react to. Now, we want to make sure we can long-press these to show the Large Content Viewer; something that comes out of the box with UITabBarController. To see it, we first set the font size to one of the accessibility sizes; it will not appear otherwise.
To do so, navigate to the debug bar at the top of the debug area in Xcode, activate “Environment Overrides”, enable “Text” and set the Dynamic Type to something in the Accessibility category.
Alternatively, you can do the same in the Accessibility Inspector’s “settings” tab.
Alright, so… here we go! Long press on a button…
… and observe nothing happens. What gives?
Now, Apple recommends always preferring elements that can grow to smaller or bigger sizes. In many cases, that’s what you want — and you will not need to add support for the Large Content Viewer, as the elements themselves grow.
But as you can imagine, this becomes tricky in certain scenarios, and a tab bar is one of them. Growing the tab bar means we’re taking up more and more screen real estate, meaning we have less space to show the rest of our app.
Hence, we want to support the Large Content Viewer for these buttons in our case.
To do so, we use the showsLargeContentViewer API that is available on UIView.
button.showsLargeContentViewer = true
Alternatively, we can also set a largeContentTitle to go alongside the viewer, indicating the title of our tab/button.
button.largeContentTitle = NSLocalizedString(
    "Camera",
    comment: "The title describing the `Camera` tab."
)
Build and run the app and…
Oh no, it still does not work?!
showsLargeContentViewer’s documentation mentions:

For this property to take effect, the view must have a UILargeContentViewerInteraction.

… yet neither largeContentTitle’s nor largeContentImage’s documentation does. OK, so let’s add the interaction:
bar.addInteraction(UILargeContentViewerInteraction())
Tada!
Now, there’s one more thing we can do to improve this. The largeContentImage is picked up from the UIButton’s image, but it may not really grow to take up the space in the large content viewer. Even though that’s quite a big part of why we have this in the first place. If you need this and enable it, you may want to “preserve vector data” for the image so that it doesn’t get blurry when scaled up.
button.scalesLargeContentImage = true
All of the above is also neatly summed up and touched upon by Sommer Panage in the WWDC video Large Content Viewer - Ensuring Readability for Everyone.
The other part of making a custom tab bar accessible, is making sure it is seen as a tab bar by assistive technologies like VoiceOver. If you’re unsure what that feels like, try out a standard tab bar in an app by navigating through it with VoiceOver; it’ll add a bunch of great information, like which tab you’re on, and that we’re dealing with a tab bar in the first place — as well as making a container element for it so it’s easier to navigate to.
Whilst in theory this seemed straightforward in my head — add a trait to those buttons — it wasn’t as easy as I’d hoped.
First of all, you don’t add a trait to the buttons themselves; instead you add a trait to the parent view — the “tab bar” if you wish. Which feels… weird. It’s certainly not something common in terms of assigning traits.
Anyway. So the documentation further mentions that:

If an accessibility element has this trait, return false for isAccessibilityElement.
When I read this, on top of the unusual way of adding a trait to the parent view, a visualization of my brain would’ve been this:
Anyhow, I tried doing what the documentation said:
tabBar.accessibilityTraits.insert(.tabBar)
tabBar.isAccessibilityElement = false
… and ran the app.
Womp womp. Nothing tab bar related, even though that is what we’d expect given the documentation. Even the spoken output that is supposed to mimic VoiceOver just speaks “Camera, button”.
So, this was not… completely unexpected? I was (still) confused about how this was supposed to work in the first place. Or, as I described it in the pull request once Sommer finally found out what was going on:
The documentation says that this is how you set up a custom tab bar. You set the parent element to have the .tabBar trait, and then you set isAccessibilityElement on said parent to false. Which makes like, zero sense. The API “contract” is to not listen to anything that has isAccessibilityElement = false in terms of accessibility. Yet here it does mean something and has a side effect.

But so apart from that, things still did not seem to work. Which was like half expected, as per what I just noted. The inspector says “no bueno”, not adding the internal “tab” trait. So no idea how to debug this or look into it.
Turns out, it does actually work… only on device. Even if inspecting the device with Accessibility Inspector, it’ll still pretend that things don’t work (read: no tab trait), yet VoiceOver reads everything correctly.
sigh time to write a blog post and file a bunch of radars on this.
… seems like it all “worked” after all. Thanks (again) to Sommer (and someone at Apple) for helping me stay sane here.
Now the only thing you’d have left to do is to insert or remove the .selected trait of the “active” tab bar item.
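A minimal sketch of that, assuming the bar’s buttons live in a buttons array:

func selectTab(at selectedIndex: Int) {
    for (index, button) in buttons.enumerated() {
        if index == selectedIndex {
            button.accessibilityTraits.insert(.selected)
        } else {
            button.accessibilityTraits.remove(.selected)
        }
    }
}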
Phew. Though there are a bunch of gotchas when it comes to making a custom tab bar accessible, it is certainly possible. The good thing is that it all works for the end user. The not so great thing is that implementing it isn’t straightforward, and the usual tools like the Accessibility Inspector will give you wrong information.
Hopefully this post has been helpful if you have a custom tab bar in your app that you want to make accessible!
Let me know your thoughts on this post, and if you have any questions, I’d love to help!
You can find the code from this blog post on GitHub.
… and there’s some smarts built into the system to support those cases where you’re limited on space and can’t really show a larger font.
The typography documentation in Apple’s Human Interface Guidelines gives you a good idea of the thought process behind typography as a whole, and Dynamic Type specifically, with another page giving more accessibility-specific information on how to deal with text.
The good news about Dynamic Type is that it almost comes out of the box. The bad news is the almost.
To make sure Dynamic Type is supported by your elements, verify that the adjustsFontForContentSizeCategory property for your element is set to true:
let label = UILabel()
label.font = .preferredFont(forTextStyle: .body)
label.numberOfLines = 0
label.adjustsFontForContentSizeCategory = true
Make note of a few things:

- We use the preferredFont(forTextStyle:) API, which provides a system font. If you’re using custom fonts, you’ll have to make sure you are properly supporting Dynamic Type using UIFontMetrics (see the sketch below).
- We set the numberOfLines property to zero. While this is not a strict requirement, you can imagine that with a larger font, more lines may be needed to properly lay out your text.

With this setup for your elements, you’ll be able to take a look at your app as a whole with a different Dynamic Type size; either by changing the system-wide text size (Settings > Accessibility > Display & Text Size > Larger Text Size), or on a per-app basis (with iOS 15): Settings > Accessibility > Per-App Settings > Add App > Larger Text.
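As a quick sketch of the custom font case mentioned above (the font name here is just a stand-in):

let label = UILabel()
if let customFont = UIFont(name: "AvenirNext-Regular", size: 17) {
    // UIFontMetrics scales the custom font according to the user's text size.
    label.font = UIFontMetrics(forTextStyle: .body).scaledFont(for: customFont)
}
label.adjustsFontForContentSizeCategory = true
label.numberOfLines = 0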
You’ll probably note certain places in your app where, because of either the use of a much smaller (or much larger) font, certain layouts either break or become a little awkward.
While it’s challenging to keep all layouts optimal regardless of the user’s text size, one useful API is isAccessibilityCategory, which allows you to query whether an accessibility text size is being used. With that information, you may consider switching your layout from something horizontal to something vertical, giving you more space to gracefully handle the larger text size.
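A minimal sketch of what that could look like, assuming a view controller with a stackView laying out its content:

override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
    super.traitCollectionDidChange(previousTraitCollection)

    // Lay out vertically when an accessibility text size is used.
    let isAccessibilitySize = traitCollection.preferredContentSizeCategory.isAccessibilityCategory
    stackView.axis = isAccessibilitySize ? .vertical : .horizontal
}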
Unfortunately, certain elements are built in such a way that they are not expected to increase beyond a certain size. Examples are tab bar items, segmented controls and things like overlays.
To solve this, Apple introduced the Large Content Viewer, in, I think, iOS 11. In iOS 13, an API was introduced, too, enabling us to adopt the large content viewer for custom controls. The Large Content Viewer is shown based on the Dynamic Type settings: when using an accessibility text size, elements that can’t grow in size are expected to show it; the previously mentioned tab bar items and segmented control work out of the box.
Note that at the time of writing this, Large Content Viewer is unfortunately not supported in SwiftUI; you’ll have to implement it for custom controls using UIKit.
Supporting Dynamic Type may seem to require only a few changes, yet in reality most apps won’t come with a great experience for it just like that.
Luckily, there are APIs that let us customize our layouts based on text size, allowing us to improve that experience for smaller and larger text sizes.
Let me know your thoughts on this post, and if you have any questions, I’d love to help!
Whereas VoiceOver is a screen reader, reading what’s on the screen, Voice Control lets users navigate their devices by voice.
In this post, we’ll look at Voice Control, how it works, and how you can make improvements to your app to make the Voice Control experience even better.
Apple announced Voice Control for iOS and macOS at WWDC in 2019; the video below gives a great understanding of what Voice Control can do.
This is so cool! As you may have noticed, Voice Control definitely builds on top of accessibility labels to activate certain controls, like share buttons and more.
While macOS lacks some Voice Control functionality compared to iOS (where we can, for example, show labels for elements), proper, Voice Control-friendly labels are crucial for a great Voice Control experience on iOS.
Take a look at how to use Voice Control on iOS:
You’ll notice that commands like “Go back” or “Swipe left” are something we can make work as expected with the existing accessibility APIs that are also leveraged by VoiceOver.
And even more so than with VoiceOver, having succinct labels to use makes or breaks your Voice Control experience. Consider the following example:
Tap Outdoor Walk 2.13km today
// vs
Tap Outdoor Walk
// When there are multiple elements with the same name, iOS will overlay
// numbers for the matching elements.
Tap 2
Furthermore, you may want to provide multiple labels that can trigger Voice Control. Imagine a settings button in your app is indicated with a cog. You can make sure users can activate it with all of the following:
Tap Settings
Tap Cog
Tap Preferences
Tap Prefs
Tap Gear
To do so (and to also improve your app for Full Keyboard Access’s Find), look no further than accessibilityUserInputLabels:
Use this property when the accessibilityLabel isn’t appropriate for dictated or typed input. For example, an element that contains additional descriptive information in its accessibilityLabel can return a more concise label. The primary label is first in the array, optionally followed by alternative labels in descending order of importance.
For the aforementioned settings button, we can do the following:
settingsButton.accessibilityUserInputLabels = [
    NSLocalizedString("Settings", comment: ""),
    NSLocalizedString("Preferences", comment: ""),
    NSLocalizedString("Prefs", comment: ""),
    NSLocalizedString("Gear", comment: ""),
    NSLocalizedString("Cog", comment: "")
]
Et voilà; any of these inputs will now be supported to either speak (with Voice Control) or to find the element (with Full Keyboard Access).
Kristina Fox wrote a great blog post on Voice Control back in 2019, which I’d highly encourage you to check out. It gives a great example on how to make those small tweaks to make Voice Control easier to use, and gives a great overview of the idea with example images, too!
Supporting VoiceOver lays the groundwork for other assistive technologies, like Voice Control. With limited changes, we can build upon our accessibility work to branch out to other assistive technologies and make our apps even more accessible.
Let me know your thoughts on this post, and if you have any questions, I’d love to help!
Well, good news: there are a bunch of options to test your accessibility-related code. Let’s take a look at the Accessibility Inspector.
Accessibility Inspector is an app that comes bundled with Xcode, alongside other apps like the Simulator and Instruments. As the name suggests, the Accessibility Inspector lets you inspect apps, scanning their accessibility elements to get a better understanding (or to verify) their accessibility.
Apart from (manually) inspecting apps, the Accessibility Inspector comes with an automated audit tool, an (accessibility) notification console, and a color contrast calculator. It’s quite packed!
To inspect elements in apps, start by choosing your device — by default the Mac the inspector is running on. You can (and should) change that to the appropriate device, like the simulator or your (wirelessly) connected device. When inspecting on macOS, also make sure to pick the process you want to inspect.
On the Mac, you’ll be presented with multiple sections; note that ignored elements can be shown via Inspection > Show Ignored Elements.

When inspecting on the simulator, or an iOS device, the sections are mostly the same, yet missing the “advanced” view… which is a shame, as it houses a bunch of useful information that would also be interesting to inspect for iOS apps. Alas.
Custom Content, unfortunately, can’t be checked using the inspector (FB9824602), making it a bit trickier to verify. A shame, as it can really improve the accessibility experience.
What you could do is to a) verify on-device, or b) enable macOS VoiceOver, navigate to the view (including the iOS view in the simulator), and use the VoiceOver modifier keys + command + /; note that the latter requires macOS Monterey or later.
With the Accessibility Audit, you can run an audit on a specific view. By default, it scans element descriptions, hit regions, contrast, element detection, parent/child relationships and actions on Mac, and on iOS, element descriptions, contrast, hit regions, element detection, clipped text, traits and large text.
Running an audit can be a fantastic way to get an overview of what can be done to make a specific view more accessible. While it isn’t guaranteed to point out everything, it will most certainly give you some good indications of what can be improved, from color contrast issues to labels duplicating information already included through traits.
In the Accessibility Inspector’s menu bar, go to Window > Show Notifications to open the notification console. This console polls for accessibility notifications an app sends out, and can help you debug accessibility issues that rely on notifications, like moving an assistive technology’s pointer focus, or announcements.
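For example, notifications like the following, both of which the console can pick up (the continueButton is hypothetical):

// Announce something to the user:
UIAccessibility.post(
    notification: .announcement,
    argument: NSLocalizedString("Upload finished", comment: "")
)

// Move assistive technology focus after a layout change:
UIAccessibility.post(notification: .layoutChanged, argument: continueButton)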
In the menu bar, there’s another tool: Window > Show Color Contrast Calculator opens a window with a tool to calculate the color contrast of text on a background. Useful to check whether those custom (brand) colors you’re using pass the expected color contrast.
The Accessibility Inspector gives you a great tool to better understand accessibility in your app, allowing you to verify information without needing a physical (iOS) device. Aside from the manual verification it allows, it comes with a built-in audit that gives you a rundown of a certain view’s accessibility.
From here, with your app starting to become more accessible, and insights on how to verify it, you may want to (further) look into testing certain parts of accessibility with unit and/or UI tests.
Let me know your thoughts on this post, and if you have any questions, I’d love to help!
We’ll take a look at accessibilityValues, accessibilityHints, accessibilityPerformEscape(), accessibilityCustomActions, and accessibilityCustomContent. Quite a bit to get through, so let’s get started!
Where previously we talked about accessibility labels, there’s also the concept of accessibility values. From the documentation:
The value is a localized string that contains the current value of an element. For example, the value of a slider might be 9.5 or 35% and the value of a text field is the text it contains.
While I think these are two great examples, it may be hard to wrap your head around what should (and shouldn’t) go into the accessibilityValue. Note, though, that it is more than OK (and not unlikely) for it to be empty. Plus, it describes a value of the element. So it wouldn’t make too much sense to put in some data of a complicated cell, for example.
Going back to the example of a tweet, it wouldn’t make sense to put in, for example, the number of likes; this is not a value of the element.
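As a small sketch, assuming a custom slider-like volume control (a standard UISlider provides its value for free):

// The label describes the element; the value describes its current state.
volumeControl.accessibilityLabel = NSLocalizedString("Volume", comment: "")
volumeControl.accessibilityValue = "35%"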
Accessibility hints are, as it says on the tin, hints. There are two important things to note about them: one is that they should never contain crucial information, as users can turn them off in their settings.
Secondly, there are some guidelines on their contents, as per the documentation:
The hint is a brief, localized description of the result of performing an action on the element without identifying the element or the action. For example, the hint for a table row that contains an email message might be “Selects the message,” but not “Tap this row to select the message.”
Particularly, while this documentation doesn’t go into great detail, the “tap” terminology is something to avoid — in accessibility hints and, really, any similar strings. Users may not be interacting with your application by tapping; imagine a Catalyst app running on the Mac, for example, or an iOS app running on the Mac… or on an iPad with a hardware keyboard and trackpad. Even for Voice Control this becomes awkward… but we’ll talk about that more in a following post.
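Following the documentation’s example of a message row, a hint could then look like this (messageCell being hypothetical):

messageCell.accessibilityHint = NSLocalizedString(
    "Selects the message.",
    comment: "Hint describing what activating a message row does."
)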
VoiceOver has the concept of performing an “escape”, which is a way to dismiss the currently shown view — where sensible. Think a modal view, or popping a view in a navigation stack.
The system does a great job of giving you this for free when, for example, using UINavigationController. But it’d be good to audit your application and check whether it is possible to perform these escapes where you expect them to work — especially around custom (modally displayed) views.
You may be wondering how a user can perform this escape, though? And thus, how you can test it? Well…
With two fingers, draw a “Z” on the screen: the Zorro gesture, as I like to call it.
In those views that require adding support for escapes, override accessibilityPerformEscape(), implement your dismissal, and return whether it succeeded, like so:
override func accessibilityPerformEscape() -> Bool {
    dismiss(animated: true)
    return true
}
Let’s go back to imagining a tweet. It has replies, retweets, and likes; those are (already) three actions within one element; one cell. When directly interacting with it (outside of VoiceOver), we can activate those buttons as we would with any other button.
But in VoiceOver, while we could expose all of these buttons as separate elements, that would add a lot of elements to swipe through, and with that, possibly quite a lot of clutter. It’s unlikely you’ll want to reply to, retweet, and like every tweet you navigate to. So what can we do? Meet custom actions.
Custom actions let you add, well, custom actions to accessibility elements. Actually, there are a few places where the system does this automatically already, such as with UISwipeActionsConfigurations.
To use actions, you’ll first need to get acquainted with the rotor, which is being used in the following video.
Now, knowing how to use the rotor “knob”, navigate to an app with those swipe actions, like Mail.app, Notes.app, or perhaps your own app. If turned on in VoiceOver settings, VoiceOver will say “Actions available” for a mail in Mail.app or a note in Notes.app; when in the Actions rotor, swiping up or down with one finger will guide you through the list of custom actions. From there, you can double-tap to activate one.
To add your own custom actions, like in our example with replying, retweeting, and liking, use UIAccessibilityCustomAction.
let tweetCell = TweetCell()
tweetCell.accessibilityCustomActions = [
    .init(
        name: NSLocalizedString("Reply", comment: ""),
        image: replyImage
    ) { action in
        reply()
        return true
    },
    // don't forget to update the name and image of this action when it changes;
    // i.e. "Undo retweet"
    .init(
        name: NSLocalizedString("Retweet", comment: ""),
        image: retweetImage
    ) { action in
        showRetweetOptions()
        return true
    },
    // don't forget to update the name and image of this action when it changes;
    // i.e. "Undo like"
    .init(
        name: NSLocalizedString("Like", comment: ""),
        image: likeImage
    ) { action in
        toggleLike()
        return true
    }
]
Note that we pass in an image. You may wonder why. Which, if you did wonder, is a great observation. This is something that is interesting to other assistive technologies, like Switch Control. For more information on Switch Control, I’d highly encourage you to watch this WWDC session, which, in fact, introduces this exact addition to the custom action API!
Finally, let’s talk custom content. Remember the advice to keep your labels short, even if that would mean you’d leave out some information?
To prevent long accessibility labels, and to make information accessible in a more piecemeal fashion, Custom Content was introduced with iOS 14. Let’s take a look at its documentation:
An AXCustomContent object contains the accessibility strings for the labels you apply to your accessibility content [..] to allow your users to experience the content in a more appropriate manner for each assistive technology.
While significantly improving the VoiceOver experience for users, this API is, in my opinion, rather awkward to work with, requiring the addition of quite some (unnecessarily?) complicated code. While we’ve seen APIs like accessibilityCustomActions being available on an(y) NSObject, this is not exactly the case for AXCustomContent.
Not only does AXCustomContent live in its own framework, Accessibility; it comes with a protocol that needs to be implemented, and some awkward baggage from it being written in Objective-C and arguably not bridged to Swift in the best way possible.
I unfortunately don’t know the reasons behind this — all I know is that it hurts the developer, and I really, really wish it wouldn’t have had to.
Because why can’t we have
let tweetCell = TweetCell()
tweetCell.accessibilityCustomContent = [
    .init(
        label: NSLocalizedString("Replies", comment: ""),
        value: String(describing: 30)
    ),
    .init(
        label: NSLocalizedString("Retweets", comment: ""),
        value: String(describing: 9)
    ),
    .init(
        label: NSLocalizedString("Likes", comment: ""),
        value: String(describing: 95)
    )
]
… which neatly mirrors the accessibilityCustomActions API, but instead have to go through the following hoops:
import Accessibility

// Note this is (and has to be) a class inheriting from NSObject, as
// `AXCustomContentProvider` inherits from `NSObjectProtocol`?!
class MyObjectContainingTweetCell: NSObject, AXCustomContentProvider {
    // Required for storage, as we won't necessarily have all information to
    // compute the custom content when initializing.
    var _accessibilityCustomContent: [AXCustomContent] = []

    // Note this is an implicitly unwrapped optional, as this "supports"
    // `null_resettable` — meaning setting it to `nil` means it internally
    // is expected to (but not guaranteed by the compiler) set itself to `[]`
    var accessibilityCustomContent: [AXCustomContent]! {
        get { _accessibilityCustomContent }
        set { _accessibilityCustomContent = newValue }
    }

    func setupTweetCell(with tweet: Tweet) {
        accessibilityCustomContent = [
            .init(
                label: NSLocalizedString("Replies", comment: ""),
                value: String(describing: tweet.replies.count)
            ),
            .init(
                label: NSLocalizedString("Retweets", comment: ""),
                value: String(describing: tweet.retweets.count)
            ),
            .init(
                label: NSLocalizedString("Likes", comment: ""),
                value: String(describing: tweet.likes.count)
            )
        ]
    }
}
… from importing another framework, to requiring classes (although one could still store the information in, say, a ViewModel that is a struct, and pass it along to the view that is a class), to having to deal with an awkward null_resettable, as well as backing storage. Woof.
And then there’s one more note: the AXCustomContent API has a notion of importance. By default, content has an importance of .default, and it can be set to .important. I would recommend against using this. What it does is add the value (or values, in case you mark multiple custom content items as important) to the end of your initial VoiceOver output — i.e. after the label and value.
This can (more often than not) lead to information going missing. As with the tweets, for example: say likes are marked important; “95” by itself doesn’t give enough information as to what it’s about. So, if there’s important information, consider making it part of your accessibilityLabel instead.
Alas. The good news: like we’ve seen before with custom actions, users can now use the rotor to navigate to custom content, then swiping up or down with one finger to go through this information.
If you want to try out how this works, consider checking out Photos.app — it uses custom content to give a bunch more information about pictures. And as with actions — depending on your settings — the system will indicate more content is available.
For more on custom content, check out Rob Whitaker’s blog post on the topic.
While reading all of this (and perhaps applying some of it in your app) might feel like a lot of work, take a step back, and appreciate how a lot of what it takes to make your app accessible is, arguably, part of the design.
Try to take this VoiceOver-mindset to this new feature, or new design, and think: what are the actions here? Are we using any images to represent actions? If so, what do these represent, in words? How can we split up information, making use of custom content?
And with that, you’re halfway there already… now the (other) fun part of implementing it is a lot clearer, and part of the design rather than an afterthought. Heck — I would be surprised if this doesn’t affect your initial design at all, perhaps making small tweaks that benefit everyone.
Let me know your thoughts on this post, and if you have any questions, I’d love to help!
I wanted to write some thoughts on this topic. For one, because I think it’s something more people than just those who have reached out to me would be interested in; and to have a reference for those reaching out.
More specifically, what I want to look at is not “what APIs are there to help make an app accessible?”, but rather “how do I use the available APIs to make my app as accessible as possible?”. For the former, I highly, highly recommend Rob Whitaker’s blog, Mobile A11y, as well as the Developing Inclusive Mobile Apps book he wrote, which covers both iOS & Android and can give another layer of both appreciation and insight into accessibility.
In this post, I want to look at — what I would say — one of the three major assistive technologies you may want to pay attention to on Apple platforms. There’s Dynamic Type, VoiceOver and Voice Control.
So you’re interested in improving the VoiceOver experience for your users. VoiceOver is Apple’s screen reader, which also comes with support for Braille.
Where do we start? Probably the best way to start is to get a feel of the technology by using it yourself. As neatly put into words by Tommy Edison:
Try and use accessibility on your phone or computer for a day. When you get to experience some of the frustrations of a site that’s not done properly or an app that doesn’t work for you.
Also — I highly, highly recommend taking a look at Tommy’s YouTube channel to get a better understanding of how users rely on VoiceOver (and much more).
The easiest way to turn VoiceOver on (and off again) would probably be through Siri: “Hey Siri, turn on VoiceOver” will get you off to the races. What’s trickier is knowing how to navigate in VoiceOver. Apple Support made a great introduction video that, honestly, is a must-watch:
Now that you’ve seen some basic usage of VoiceOver — moving forward (with swipe-right), backward (with swipe-left), and activating items (by double-tapping), try it out in your own app.
A great place to start making your app more accessible is here, with VoiceOver.
It is not unlikely you’ll be navigating past items that are announced as nothing more than “button” by VoiceOver. Or, perhaps, you’ll navigate to elements that are decorative, and do not add to (but rather distract from) the VoiceOver experience. You’ll improve this using accessibilityLabel, which we’ll get to in just a second.
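As a small sketch of both cases (the element names are hypothetical):

// Give an icon-only button a meaningful label…
shareButton.accessibilityLabel = NSLocalizedString("Share", comment: "")

// … and hide a purely decorative element from VoiceOver.
decorativeImageView.isAccessibilityElement = false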
Then, perhaps, your actionable cells aren’t announced as buttons to VoiceOver, your images aren’t announced as such, or a header isn’t, well, a header. This is another crucial VoiceOver need — both as information for the user, but also the system… so it can do some really neat work under the hood.
For this information, rather than adding this to your label, you’ll be wanting to use accessibilityTraits — ranging from the aforementioned .button, .image, .header, to .selected, .notEnabled, and .adjustable.
Take another pass through your app, or another (system) app, and note these traits being part of the respective elements.
So now the task is for you to go through these elements that aren’t accessible as expected — adding accessibilityLabels, as well as accessibilityTraits. Note that the latter is an OptionSet — an element can have multiple traits. APIs like UIKit and SwiftUI often give great defaults, so you’ll (most often) want to insert new traits, rather than override the traits completely.
myCustomControl.accessibilityTraits.insert(.button)
// rather than
myCustomControl.accessibilityTraits = .button
For your labels, try to keep these short. At a later stage, we’ll be adding more information to elements, but not necessarily in the form of a label.
VoiceOver users navigate the app mainly based on these labels, and if they are very long, or contain a lot of information, that makes an app a lot harder to parse and use. For a tweet, for example, the label might include the author and the tweet itself, but exclude the number of retweets and likes.
There’s a lot more to learn about VoiceOver, and a bunch more you can do to further enhance the experience; we’re only partially there.
Yet, you’ve learned a thing or two about using VoiceOver, navigated your app using the technology, and perhaps made some improvements using labels and traits. For now, try to use your app with these improvements in place and ask yourself: what information is missing for these users? How would I express this information to them?
Then, we’ll take a look at custom actions, custom content, performing escapes, hints, and values, next.
Let me know your thoughts on this introduction, and if you have any questions, I’d love to help!
Writing code for Apple platforms, the default (and pretty snazzy!) framework for testing is XCTest. There are some amazing people working on it, and it has seen some great improvements over the last few major releases, like throwing tests, setUpWithError, XCTUnwrap, etcetera. The team really seems to keep up well with new (Swift) features, like async/await, making sure that writing tests is as fun as possible, while also being as understandable as possible.
On that note of understandability: treat your test code as if it were production code; or at least close to it. If your tests are hard to understand, their value will eventually be impacted.
One of the things to keep in mind to keep your tests understandable is their names. I prefer making them very explicit, to the extent that I introduce snake case in them (you read that right!). So, test_thatFunctionDoesNotThrow instead of testThrowingFunction, which is more ambiguous.
The “given, when, then” strategy can also greatly help you understand (and break up) your tests.
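As a minimal sketch, using the little Engine type we’ll meet below:

func test_thatStoppingTheEngineTurnsItOff() {
    // Given
    var engine = Engine()
    engine.start()

    // When
    engine.stop()

    // Then
    XCTAssertFalse(engine.isOn)
}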
Perhaps even more important than the things mentioned above is having understandable error messages. Your tests will eventually (or more often) fail; whether in the above example of adding a test for a bug you found, or when refactoring. While the test’s structure can help you understand what is being tested, the most immediate starting point (and thus, arguably, one of the most important things) is the error message(s) your test produces.
Surely we’ve all seen those alerts indicating “something went wrong”. Well, duh! But what? Luckily, XCTest has a whole set of assertion functions that each cater to specific, well, assertions, producing helpful and understandable error messages.
Note that all tests in this post fail; this is on purpose, so that we can show and inspect their failure messages!
The most basic set of assertion functions test booleans. Understandable, as for any tests, we’ll be wanting to verify something against something else. So, technically, all assertions are boolean assertions. Let’s take a look at this most basic set of assertions in XCTest:
XCTAssert(false)
// XCTAssertTrue failed
XCTAssertTrue(false)
// XCTAssertTrue failed
XCTAssertFalse(true)
// XCTAssertFalse failed
As we can see, these basic assertions come with basic failure messages; simply because there is not much more information to present.
Having said that, note that every assertion function comes with an optional message parameter, where you can pass a String further explaining the failure:
struct Engine {
    var isOn = true // Oops!

    mutating func start() {
        isOn = true
    }

    mutating func stop() {
        isOn = false
    }
}
let engine = Engine()
XCTAssertFalse(
engine.isOn,
"The engine isn't expected to have been started yet!"
)
// XCTAssertFalse failed - The engine isn't expected to have been started yet!
Having these more specific assertion functions that provide better, more insightful failure messages, however, will render a bunch of the manual messages less useful; which is great, as they are arguably a “weak” point in the test. Imagine changing a test, but forgetting to update the message… that could quickly become confusing, with all the consequences that may entail.
As we’ll dive into further here, we will see how specific assertion functions become more and more useful, given that we pass them more information to work with. nil isn’t that useful for XCTest just yet; it could pretty much be compared with “true” or “false”, but does become a little more useful:
var myString: String?
myString = "Hello"
XCTAssertNil(myString)
// XCTAssertNil failed: "Hello"
myString = nil
XCTAssertNotNil(myString)
// XCTAssertNotNil failed
_ = try XCTUnwrap(myString)
// XCTUnwrap failed: expected non-nil value of type "String"
As we can see, these provide a little more information than just “assertion failed”. Perhaps their error messages could benefit from being a tad more verbose; I think the second failure message would be easier to parse had it been the same as XCTUnwrap’s failure message. (FB9681950)
Although, note you can do exactly what was done above (discarding the result of XCTUnwrap) and you get the same test as XCTAssertNotNil with a better diagnostic. And, arguably, verifying the result of your unwrap is something you may want to consider anyhow, so win-win.
While not fully used here, XCTUnwrap is a particularly neat addition to XCTest, introduced in Xcode 11. Where before we’d have to manually verify something was non-nil, then force unwrap it, this is now “baked into” the assertion, which will return the unwrapped value if present, and otherwise, as we can see above, throw an error.
Equality assertions are pretty much just “assertions”. As I mentioned earlier, when we assert, we verify x against y, and thus, arguably, their equality. We could “rewrite” the most basic assert:
XCTAssertEqual(false, true)
// XCTAssertEqual failed: ("false") is not equal to ("true")
… and we can see how this impacts the failure message. Anyway, on to some more descriptive examples:
var myString: String?
let myOtherString = "Hello"
XCTAssertEqual(myString, myOtherString)
// XCTAssertEqual failed: ("nil")
// is not equal to ("Optional("Hello")")
myString = "Hello"
XCTAssertNotEqual(myString, myOtherString)
// XCTAssertNotEqual failed: ("Optional("Hello")")
// is equal to ("Optional("Hello")")
let myObject = NSDate(timeIntervalSince1970: 10)
let myOtherObject = NSDate(timeIntervalSince1970: 0)
XCTAssertIdentical(myObject, myOtherObject)
// XCTAssertIdentical failed: ("1970-01-01 00:00:10 +0000")
// is not identical to ("1970-01-01 00:00:00 +0000")
XCTAssertNotIdentical(myObject, myObject)
// XCTAssertNotIdentical failed: ("1970-01-01 00:00:10 +0000")
// is identical to ("1970-01-01 00:00:10 +0000")
let percentage = 0.333
let otherPercentage = 0.666
XCTAssertEqual(percentage, otherPercentage, accuracy: 0.1)
// XCTAssertEqualWithAccuracy failed: ("0.333")
// is not equal to ("0.666") +/- ("0.1")
XCTAssertNotEqual(percentage, percentage, accuracy: 0.3)
// XCTAssertNotEqualWithAccuracy failed: ("0.333")
// is equal to ("0.333") +/- ("0.3")
There’s equality, and there’s comparability. Let’s take a look at some of the examples of the latter below.
XCTAssertGreaterThan(1, 1)
// XCTAssertGreaterThan failed: ("1") is not greater than ("1")
XCTAssertGreaterThanOrEqual(0, 1)
// XCTAssertGreaterThanOrEqual failed: ("0") is less than ("1")
XCTAssertLessThan(1, 1)
// XCTAssertLessThan failed: ("1") is not less than ("1")
XCTAssertLessThanOrEqual(1, 0)
// XCTAssertLessThanOrEqual failed: ("1") is greater than ("0")
I’m not sure if I love how “Objective-C like” XCTest is here in its function names. Instead of XCTAssertGreaterThan(1, 1), I could imagine an XCTAssert(1, greaterThan: 1) being more readable. The same could apply to equality, actually. Alas.
struct MyError: Error {}

func throwingFunc(shouldThrow: Bool) throws {
    if shouldThrow {
        throw MyError()
    }
}
XCTAssertThrowsError(try throwingFunc(shouldThrow: false))
// XCTAssertThrowsError failed: did not throw an error
XCTAssertNoThrow(try throwingFunc(shouldThrow: true))
// XCTAssertNoThrow failed: threw error "MyError()"
Note that these assertions, like XCTUnwrap above, take throwing expressions (hence the try). There’s no need to wrap these expressions themselves in a do/catch block; instead, the test function itself can be marked throws, making the code less distracting (and preventing too much indentation). Neat!
Sometimes, you will need to unconditionally fail a test. For example when a setup can’t be completed.
XCTFail()
// failed
To the point.
From time to time, a failure may be expected, or something that can’t (or shouldn’t) be fixed in the current patch. In Xcode 12.5, it is now possible to expect these kinds of failures, and better reason about them (as well as having better diagnostics).
XCTExpectFailure { // Expected failure but none recorded
    XCTAssertFalse(false)
}

XCTExpectFailure {
    XCTAssertFalse(true)
    // Expected failure: XCTAssertFalse failed
}

let options = XCTExpectedFailure.Options()
options.issueMatcher = { issue in
    issue.compactDescription.contains("Hello")
}

XCTExpectFailure(options: options) { // Expected failure but none recorded
    XCTAssertFalse(false, "Hello")
}

XCTExpectFailure(options: options) {
    XCTAssertFalse(true, "Hello")
    // Expected failure: XCTAssertFalse failed - Hello
}
I’d be wary of expected failures with complex issue matching. The last example here, I think, already adds additional overhead that may make things more complex than they need to be.
Whilst expected failures can be used when things are (temporarily) expected to not pass, we can skip assertions entirely, too. We may want to do this, for example, if a feature isn’t implemented on a specific platform.
try XCTSkipIf(true)
// Test skipped
try XCTSkipUnless(false)
// Test skipped
Note also that with expected failures, assertions are still run. Assertions below a “skip” function are entirely skipped, as per the name, so take extra precaution not to write false-positive expressions, potentially skipping tests when you didn’t intend to.
I hope this overview gave you some insights into XCTest’s various assertion functions, and how they can help make your test failures more understandable, especially at a glance.
Perhaps you can adopt XCTUnwrap in places previously using XCTAssertNotNil and sequential unwrapping. Or perhaps your assertions exclusively rely on XCTAssert()? Let’s hope not, but in that case, you’re going to be able to make some awesome improvements to your test code.