Author Archives: Sarah Horton

The latest from Rosenfeld Media

  • HTML5 with Steve Faulkner

    Web accessibility takes place on a foundation of technologies, the most common of which are developed and maintained by the World Wide Web Consortium, or W3C. Its success depends on how well these underlying technologies support accessible user experiences. Fortunately for us, people like Steve Faulkner devote much of their time to ensuring that technology specifications, such as HTML5, include the hooks that make it possible to build an accessible and enjoyable user experience for everyone, including people who use assistive technologies, such as screen reader and screen magnification software, and different display and interaction modalities, such as user stylesheets and keyboard navigation.

    The web was created with accessibility as part of its framework. Steve’s focus is to ensure accessibility remains a fundamental component of the web’s foundational technologies. Steve is co-editor of the HTML5 specification and has been closely involved in the development of other W3C specifications, including the Accessible Rich Internet Applications (WAI-ARIA) specification. In this podcast Steve joins Sarah Horton to tell us about:

    • The current status of the HTML5 specification
    • How WAI-ARIA and HTML5 work together to support accessibility
    • How accessibility is integrated into specification development
    • What it’s like to work on a W3C specification

    Transcript available · Download file (mp3, duration: 44:19, 25MB)

    Steve Faulkner has been working in accessibility since 2001, first with Vision Australia and currently with The Paciello Group (TPG), where he is Principal Accessibility Engineer. He is involved with several W3C working groups, including the HTML Working Group and the Protocols and Formats Working Group, and is author of the helpful resource, Techniques for providing useful text alternatives. He is also creator and lead developer of the Web Accessibility Toolbar, a resource for evaluating web accessibility.

    A Podcast for Everyone is brought to you by UIE, Rosenfeld Media, The Paciello Group, and O’Reilly.

    Subscribe on iTunes · Follow @awebforeveryone · Podcast RSS feed (xml)

    Transcript

    Sarah Horton: Hi, I am Sarah Horton, and I am co-author with Whitney Quesenbery of A Web for Everyone, with Rosenfeld Media. I am here today with Steve Faulkner. Steve is my friend and colleague.

    He is technical director with The Paciello Group. Before joining TPG, he worked as an accessibility consultant with Vision Australia. Steve has been a mentor for me since I started working in accessibility. Much of what I know about accessibility, I learned from Steve.

    In addition to his day job, Steve is an active participant in key W3C initiatives promoting web accessibility. His tireless work has led to key accessibility resources, including the very helpful Techniques for Providing Useful Text Alternatives document.

    As co-editor of the HTML5 specification, much of his current work is focused on ensuring the specification moves forward in a way that supports and extends all the great work that’s been done to bring accessibility to the web.

    We’re here to learn from Steve about the status of the specification and whether there are changes that we should anticipate as this spec is approved and implemented.

    We also want to get a sense for what it’s like to work on this spec and how accessibility concerns are addressed in that process. Steve, thanks for joining us.

    Steve Faulkner: Thank you.

    Sarah: Let’s start by talking about how you got involved in accessibility in the first place.

    Steve: In the late ’90s, I became involved in doing some HTML development. It was at a time when, if you could spell HTML, you could get a job as an HTML coder. I was in the UK at the time, and I did some work over here.

    When I decided to go back to Australia, which is my home country, I looked for a job, and there was a job at Vision Australia. At that point, I didn’t have any idea about accessibility. I had very little knowledge.

    But I had an interest in working with people with disabilities and aging people, which I had done previous to starting the HTML development. I did a degree in psychology. I gravitated towards a social work type of career path, I suppose you could say.

    It interested me to be able to combine both the technical aspects, the HTML development work, and working with people with disabilities, and hopefully make a positive difference.

    When I started the job, as I said, I didn’t know anything about accessibility. This was around 2000, and it was at a time when the accessibility field was still in its infancy, I suppose you could say. It probably wasn’t an infant—it was probably more of a toddler.

    My first day on the job my then boss, Andrew Arch, threw a copy of the WCAG 1.0 Guidelines at me and said, “Learn this.” So that’s where I started, auditing and reviewing websites for accessibility against the WCAG 1.0 Guidelines.

    Sarah: Did you read the guidelines from beginning to end?

    Steve: Yes, I did. In those days, there weren’t a lot of the resources that are available now, 13 or 14 years later. It was very much explore and find out. There wasn’t the community that we have today through Twitter, Facebook, and all those things.

    There were mailing lists, and there was the WAI, the Web Accessibility Initiative, but those were not exactly as user-friendly as the informal networks that have grown up over time.

    Sarah: When you think about the changes between WCAG 1.0 and now, and the issues that you were encountering when you were auditing things back then, are you seeing different types of issues? Are things getting better, are they worse, or both?

    Steve: Obviously, we’re seeing different types of issues, in that the interactions and processes that we regularly encounter on websites and web applications are much more complex than they were at that point.

    We weren’t dealing with a static web at that point, but we were dealing with a web that had a lot more server-side functionality, so a lot of the processing happened on the server side. You’d fill in a form, you’d submit it, and then you’d get some results sent back, for example.

    These days, everything happens in the browser, or a lot of things happen in the browser. The user-interface elements that you’re dealing with, you could describe them as a house of cards built in HTML.

    They often don’t have the required accessibility information, such as the roles, states, and properties. They don’t work correctly with the keyboard, or don’t work as expected.

    Things look like a button, but you can’t use them with a keyboard, for example. Whereas, in the olden days, when people stuck more to using the basic, native HTML controls, it was a bit easier in that sense.

    The complexity of the content, in general, was less problematic. Because it was less complex, there were fewer things to go wrong.

    Back to your question, “Are things getting better or worse,” as complexity increases, then obviously new issues arise. At the same time, there’s a lot more awareness, these days, about accessibility. There’s a lot of information out there to be able to draw upon, to be able to fix issues.

    Hopefully, developers, in general, are more aware of taking into account building usable and accessible interfaces from the get-go, rather than having to come back and add this stuff in all the time, even though we still do encounter that a lot.

    Sarah: Then things are getting worse, in the sense that the interfaces are getting more complex and more difficult to support and code accessibly, but you’re seeing, at the same time, a lot more awareness on the developer end of things, and more resources to support accessible coding practices?

    Steve: Yeah, I would generally agree with what you’ve just said. Except that I don’t think they’re becoming more difficult to make accessible, because the tools that we have available now, such as the Accessible Rich Internet Applications specification and the features it describes, which are implemented in browsers, make it a lot easier, and possible, to make more complex, non-standard user-interface elements accessible.

    Previously, they couldn’t be. In that sense, things have got a lot better. But, as I say, the complexity does breed new issues. There’ll be challenges, and we face them and find out a way to fix them.

    Sarah: You do a good deal of work with W3C working groups and task forces. What about that work interests you?

    Steve: The main thing that interests me is that I have the opportunity to influence and contribute to defining the standards that define the web and how it communicates. For probably about seven years now, I’ve been involved in the HTML Working Group at the W3C, and HTML is the language of the web.

    Every web page is written in HTML. A lot of the information that’s communicated to users with disabilities who have to use assistive technology is communicated through the HTML content and structures that are defined within the HTML specifications.

    To be able to contribute to the further development of the HTML language is both an honor and a very exciting privilege.

    Sarah: What about the work you’re doing right now with the HTML spec? I see a lot going on about that.

    Steve: It’s got to do with process within the W3C. The specifications proceed along a particular track. They’re initially published as working drafts, and then they’re reviewed and further work is done on those.

    They get input from implementers. Anybody is free to provide feedback, and that feedback is taken into account. The editors make changes to the documents.

    At the same time, new features are being added, either because implementers are thinking, “We want to have this feature,” or because developers have developed some custom feature that is popular, and then it becomes standardized. These things are constantly being drawn into the specification.

    At the same time, they’re trying to stabilize the specification at a particular point to get it through the W3C standardization process. It goes through an iterative process of feedback, editing, and then moving to the next stage.

    The stage we’re at in the current process for HTML5, you could call it, is that we have reached what is called last call. That means that what’s in the HTML5 specification is fairly stable; all the requirements are there and the features are there.

    There’s not going to be any new features added to HTML5. There’s a last call, which means that people get to provide some feedback.

    Once that feedback’s been dealt with, it moves to proposed recommendation, and then, ultimately, to become a recommendation, where the content of that is set in stone, so we’ll have the HTML5 recommendation.

    At the same time, we are working on the HTML 5.1 recommendation, which is essentially a superset of what’s in the HTML5 specification plus new features. And so things are being added in parallel; there are a number of different versions of the document.

    That’s the most stable version, which eventually will be locked down. And then there’s the continuing 5.1 spec, and that will go through the same process. In 2016, it’s set to go through this process again, becoming a proposed recommendation.

    In the meantime, people will start working on a 5.2, so there will be further extensions. It’s a living language; things are constantly being added. What’s written in the specification is essentially a set of rules for what browsers are supposed to implement.

    And browsers, being Chrome, Firefox, et cetera, are supposed to implement the features defined in the specifications. If you’re familiar with HTML, all of the HTML elements and attributes are defined, what they do is defined within the specification, and how they’re supposed to be implemented is defined in the specification.

    As well as that, there are a lot of what we call “author requirements”—these are requirements on authors about how to use particular features of HTML. That’s where I have a lot of interest, in defining the author requirements, because how authors use HTML affects the accessibility of HTML.

    If authors misuse the features—write the code incorrectly, don’t use headings in the appropriate way, don’t mark up data tables correctly using the correct features—then that has a negative effect upon users, especially users with disabilities. I spend a lot of time working on the text of the specification in those areas, in particular.
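    As a sketch of the kind of author requirement Steve is describing, a correctly marked-up data table might look like this (the table contents are invented for illustration):

        <table>
          <caption>Podcast episodes</caption>
          <tr>
            <!-- th with scope tells assistive technology which column
                 each data cell belongs to -->
            <th scope="col">Episode</th>
            <th scope="col">Duration</th>
          </tr>
          <tr>
            <td>HTML5 with Steve Faulkner</td>
            <td>44:19</td>
          </tr>
        </table>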

    Sarah: I see. You’re focusing on authoring aspects of the specs, primarily?

    Steve: Yeah. That’s primarily what I do. But I also, obviously, review and provide feedback to implementers about aspects of the implementations, mainly in regards to the accessibility of the implementation.

    If a particular feature is a new control but it doesn’t describe how it’s going to be used by someone with a keyboard, for example, or it doesn’t seem to have the hooks in there to make it keyboard accessible or to provide an accessible name and label for it appropriately, then the process is that we file bugs against the specifications. But also, when things get implemented, we file bugs against the software itself.

    I spend quite a bit of time, filing bugs against Chrome, Firefox, Internet Explorer, and Safari, to say, “Well, this is supposed to be implemented this way, but you’ve implemented it in some other way, and it’s made it less accessible. So, let’s try to get it implemented as it’s defined in the specification.”

    Or, if the way it’s defined in the specification is wrong for particular reasons, then, provide some information about the reasons and we can see if we can change the specification to fit the reality.

    Sarah: You’re looking out for accessibility as the specification is built out, and how it’s supported within the browser.

    Steve: That’s right. Over the last weekend, or week, I’ve done a lot of testing. As part of going through this recommendation process, one of the criteria for the specification to become a recommendation is that whatever is defined as a requirement upon browsers, the particular implementation requirements, gets tested.

    You need to write tests, and you test that the assertions in the specification are implemented as they are asserted in the specification.

    There’s a fairly big section of the specification that defines how the roles, states, and properties for HTML elements are supposed to be exposed by the browsers within the accessibility APIs in the software.

    In simple terms, if you have a button element, then that button element must be exposed with a role of button within the appropriate accessibility API. This is done by the browser. So you need to go in and check that that assertion is being supported within the browsers.

    You create the files, you check, you use some software that looks at what the accessibility information is, and in the case of the button element, is it exposing a role of button? And happily, it is. But you can imagine that, for the accessibility aspects, there’s a lot of testing.
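    As a rough illustration of the testing Steve describes, a test file might be little more than a page of HTML whose expected role mappings are then checked with an accessibility inspection tool (the elements chosen here are illustrative):

        <!-- Each element below has a defined implicit role that the
             browser must expose through the platform accessibility API. -->
        <button>Submit</button>      <!-- expected role: button -->
        <a href="/home">Home</a>     <!-- expected role: link -->
        <h1>Results</h1>             <!-- expected role: heading -->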

    That is a similar model to the one used for the whole specification, and remembering that the HTML specification is 600 to 800 pages long, with lots and lots of assertions in there, there are hundreds of thousands of tests that need to be written and run.

    There’s a big effort within the W3C and within the development community to create these tests and do the testing, to show that what the HTML specification defines is in line with what is implemented in browsers.

    Sarah: I seem to remember reading somewhere along the way that, in order for a specification to be finalized, it needs to be fully implemented in two browsers—is that correct?

    Steve: Yeah. Those are known as the exit criteria, where the thing can pass. They are not always the same across specifications, but pretty much the rule of thumb is that, if the specification has an assertion in there that says that X must do Y, then that assertion needs to be implemented in two browsers, at least two.

    Obviously, the more the better, but the whole idea is to get the features of HTML implemented interoperably—implemented in the same way in multiple pieces of software.

    Sarah: Sounds like a pretty rigorous process.

    Steve: It can be. A lot of time consumed, a lot of people spending time on this thankless task, that’s for sure.

    Sarah: We’re all very thankful that you are doing it. Just so you know.

    Steve: Thank you. As always, what’s wonderful about doing this work is you’re working with a large community of people. A lot of people are giving their time and knowledge to improve the web.

    Sarah: Pretty impressive. Another specification that you have worked on quite a bit is the WAI-ARIA spec. Could you tell us a bit about that specification, what it’s for, and its role moving forward, as this new HTML spec comes into play?

    Steve: Sure. I will preface that by saying that I have been involved with the development of the WAI-ARIA specification, but my main work has been in integrating the HTML specification and the appropriate bits of the ARIA specification. The majority of the work on that specification was done by some wonderful other folk within the accessibility community.

    As I was explaining before, the button element has a button role, and that role is conveyed, or is hooked up in the software, by the browser.

    Previous to ARIA coming along, there was no way for an author to set that button role within the browser. What ARIA has done is allow authors to set that information, to expose that information, to say, “This has a role of button,” and then the browser sends that information to the accessibility API, so it’s exposed correctly within the accessibility API.

    For most HTML elements, you don’t need to do this because the browsers do it automatically. But as we mentioned previously, what we have is a lot of development of custom widgets—things that aren’t represented as native HTML controls—quite often built using non-interactive HTML elements that have no meaning in themselves, like divs and spans, or built on top of native controls that do have meaning, such as anchor elements and buttons, but adding to their functionality and changing the meaning of what they are.

    Before ARIA, you couldn’t do anything about that, or it was very difficult to convey the changing meaning that the author has imposed upon it through scripting and coding.

    With ARIA, if you change a link into a button—I’m not saying that you should do that, but people often do—then through the use of ARIA roles, states, and properties, you can provide that information to let the user know that, hey, this is not a link, it’s a button, and you can interact with it as a button. It does button-like things, not link-like things.
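    As a sketch of the repurposed-link case Steve mentions (the id and handlers are invented for illustration; a native button element would avoid all of this):

        <a href="#" role="button" id="save">Save draft</a>

        <script>
          // A native link only activates on Enter, so a link acting as
          // a button also needs to respond to the Space key.
          var save = document.getElementById('save');
          save.addEventListener('click', function (e) {
            e.preventDefault();
            // ... perform the button-like action here ...
          });
          save.addEventListener('keydown', function (e) {
            if (e.key === ' ') {
              e.preventDefault();
              save.click();
            }
          });
        </script>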

    Sarah: When you look at that specification, are there things happening within the new HTML spec that would change how ARIA is used?

    Steve: To a certain extent. There are some newer HTML features, such as the dialog element, which didn’t make it into 5.0 because it hasn’t been interoperably implemented. It was implemented in one browser, and it was very unlikely that it was going to be implemented elsewhere within the timescale of the 5.0 specification becoming a recommendation, but it is still present and living on in the 5.1 specification.

    The dialog element, as the name implies, provides a native dialog control. Instead of having to script divs and spans to make it look like a dialog, you can use the dialog element and it has more of the features of a dialog.

    Some of the interaction that you need to script in for custom dialogs is already built in. For example, when you open a modal dialog using the dialog element, the keyboard interaction will stay inside the modal dialog until the dialog is closed, whereas with custom dialogs, you have to use scripting to listen for the events and control the keyboard behavior.

    The good thing is that with the dialog element, once it’s implemented, you no longer have to do that. Another thing is that, like most user-interface elements, it has a role. With custom elements, with the advent of ARIA, you could add a role, as well as all the other things you need to do.

    You could represent it as what it is, but with a built-in native dialog element, you don’t have to do that. There are other things, like the visual effects that you get. You get this dimming of the screen behind. You get those automatically.

    It makes it a lot simpler for developers to create the things that they’re already creating using divs, spans, CSS, and JavaScript trickery. It’s done natively in the browser, simply by having the dialog element and controlling the display of that element.
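    A minimal sketch of the element as Steve describes it in the 5.1 draft (the ids and text are invented, and browser support at the time of this conversation is limited):

        <button id="open">Delete file…</button>

        <dialog id="confirm">
          <p>Delete this file?</p>
          <button id="close">Cancel</button>
        </dialog>

        <script>
          // showModal() keeps keyboard focus inside the dialog and dims
          // the page behind it, with no focus-trapping script needed.
          var dialog = document.getElementById('confirm');
          document.getElementById('open').addEventListener('click', function () {
            dialog.showModal();
          });
          document.getElementById('close').addEventListener('click', function () {
            dialog.close();
          });
        </script>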

    Sarah: That sounds great. Less trickery.

    Steve: Less quackery even.

    Sarah: [laughs] Do you see a time when ARIA would be obsolete, in the sense that there would be native elements that make those additional attributes that ARIA brings to the process unnecessary?

    Steve: No.

    Sarah: Why not?

    Steve: The reason is that there are a lot of controls that come into and fall out of favor, but they’re never standardized. Not every type of UI control will be standardized and implemented. There’ll always be room for the use of ARIA, or reasons you need to use ARIA.

    Also, note that one of the things I always like to say is that people see ARIA as a bridging technology—that at some point, everything will be wonderful and we will have native support for all of these interactions and roles, states and properties.

    But when we look at, for example, the humble button, as I was mentioning earlier, the button element has been around since ’94, so we’re talking about 20 years. Still today, people are building buttons out of any other element you can think of but a button.

    I’m not saying that’s correct and they should be doing that, but they do it, and there’s no way to stop them doing it, because part of the integral features of HTML is that every element, whether it be a control or a paragraph element, can have interaction and events associated with it. If it can, then people will do it.

    Another aspect is that the focus of ARIA is shifting to a certain extent, in the sense that instead of becoming this bridging technology, it’s starting to become the abstract layer, the way that accessibility semantics are described.

    They’re starting to use ARIA as the generic way to describe the accessibility semantics for a number of platforms. What happens is that if you have an ARIA button role, that equates or maps to a number of accessibility APIs and is coded into various platforms and browsers.

    In that sense, it’s being used more as a generic identifier these days. Even apart from its usefulness as a way to plug gaps in native functionality, it’s got a different usefulness within itself.

    To give you another example—or to give you an example, even—the way that the requirements for HTML elements, the accessibility mappings that they require, are couched in terms of ARIA requirements.

    In the specifications, it doesn’t say, “A button element needs to have a button role in accessibility API X, Y, or Z on platform X.” It says, “It needs to have an ARIA button role.”

    It’s describing in terms of ARIA, because ARIA provides that mapping information through a sister specification to the ARIA specification called the ARIA Implementation Guide.

    That has the nuts and bolts of how an ARIA button role equates to a button in Safari on the Mac, for example, what role that equates to. I don’t know if I’m getting it across, but it’s like a dictionary of terms that are defined for a variety of platforms and software.

    On the one hand, you have that usefulness of ARIA, but on the other hand, ARIA, like other specifications, is still in development. The 1.0 version of ARIA has shipped, but there is currently work on ARIA 1.1, which is in active development.

    There’s work around, for example, extending ARIA properties to be able to describe annotation information. For example, currently there’s no way for me as an author to adequately convey that I have crossed something out, using a line through it or whatever, in a word processing program.

    There’s no way for me to convey that unambiguously to an assistive technology user through the accessibility API because there’s no semantics for that that are clearly available.

    What they’re looking into in developing ARIA 1.1 is additional attributes that will allow you to describe all those sorts of things, like footnotes, comments, et cetera, that we’re typically used to using within Word documents and things like that, or on the web.

    You’ll be able to use ARIA to provide clear, unambiguous descriptions for those things. Also, new roles and states are being developed. One of the new controls that has become ubiquitous, especially on mobile, is the toggle button. The “switch” is another word for it.

    Previously, there hasn’t been a role for that type of control. There are roles being added for that and for some other things, so there are further refinements and additions as new things get invented.
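    A sketch of how such a toggle might look with the switch role under discussion for ARIA 1.1 (the control is invented for illustration, and the role was still being specified at the time of this conversation):

        <button role="switch" aria-checked="false" id="wifi">Wi-Fi</button>

        <script>
          // Flip aria-checked so assistive technology announces the
          // on/off state rather than a plain button press.
          var wifi = document.getElementById('wifi');
          wifi.addEventListener('click', function () {
            var on = wifi.getAttribute('aria-checked') === 'true';
            wifi.setAttribute('aria-checked', String(!on));
          });
        </script>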

    The other aspect, talking about things becoming native in HTML, is one of the developments that is exciting, and also potentially problematic from an accessibility standpoint: what is known as “web components.”

    This is a whole suite of specifications that essentially allow developers to create new HTML elements that they can use in their pages, and add new functionality. It’s like they’re supercharged custom controls, like supercharged jQuery, but they allow developers to do all these great new things.

    A lot of the accessibility semantics, a lot of the information that will be required to be provided, will need ARIA to be able to do it. If anything, ARIA is becoming more important and more integral to the story for HTML going forward, rather than becoming less important.

    Sarah: It seems that these things move forward in tandem, one supporting the other. How is ARIA support on mobile devices, for that switch, for example, that’s used a lot in iOS? The little on/off switch, is that what you were talking about?

    Steve: Yeah. Obviously, for something like that which is still being specified, there is no support. In general, the problem with the support for accessibility on various devices is that there’s a different browser culture within mobile devices than there is on the desktop.

    On the desktop, you have the four major browsers. Depending upon the device that you are using, say an iOS device, on the surface you might have Chrome and WebKit and even Firefox…I don’t know if you can get Firefox for iOS…but underneath it, on the iOS platform, the rendering engine, the engine that powers the browser, is the WebKit engine, which is the Safari browser engine.

    It’s much more dependent on one vendor or one browser engine implementing something. But on the desktop, Chrome isn’t that great and IE’s pretty shabby, but then Firefox is great, and WebKit on OS X is great, as far as accessibility support goes, so you’ve got some choice.

    Where you have the more controlled environments of mobile, then things aren’t as good. Having said that, WebKit and Apple, on both iOS and OS X, have put a lot of work into it. It’s obviously not perfect, but they put a lot of work into it.

    In a general sense, you would find that the accessibility support for ARIA implementation is a little bit behind on the mobile devices, for a variety of reasons.

    Sarah: Recently I read, somewhere on the Twitters or something, that the alt attribute on images isn’t required for validation with the new HTML specification. Did I get that right?

    Steve: Not exactly, no. Within HTML5, the alt attribute is required. To have a conforming document, you need to have an alt attribute with an appropriate text alternative on every image element.

    The exception is the case where you have an image element that is contained within a figure element that has a non-empty figcaption element. Essentially, it means that if you have a caption for an image, you don’t necessarily need an alt attribute.

    It’s quite a refined rule, and it’s quite narrow in its scope. What it’s designed to do is to reflect the reality of situations like where you have image upload sites.

    You have photo sites, where people upload thousands of images with no desire to add text alternatives to each of them, often no time, or no opportunity to do that. At least, if you have a caption, you have something that identifies the image and gives some information about it.

    If you had an empty alt on there, which is what people have put on there in the past, what that essentially does is say to the assistive technology user, “This image is of no interest.”

    With the implementation of the HTML5 semantics for images, an image with an empty alt will no longer be represented in the accessibility tree. The caption is a way of saying, “This image may have some information of interest. It doesn’t have a text alternative, but it does have a caption.” It tells you something about the image, and that the image is there. Whereas if you put an empty alt on there, the image would disappear.
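    A sketch of the two cases Steve contrasts (the filenames and caption text are invented):

        <!-- Conforming without alt: the image sits inside a figure with
             a non-empty figcaption, so the caption still identifies it. -->
        <figure>
          <img src="harbour.jpg">
          <figcaption>Sunrise over the harbour, June 2014</figcaption>
        </figure>

        <!-- An empty alt removes the image from the accessibility tree
             entirely; the user never knows it is there. -->
        <img src="harbour.jpg" alt="">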

    Sarah: I think I got that. I understand the use case. You’re saying I have an image and I have a caption, and so that will be wrapped in this figure element.

    Because it’s within the figure element, it knows that it has a caption, and so doesn’t feel as though the image needs to be described in any way. It’s essentially like automatically adding null alt text to the image?

    Steve: Yeah. Though the spec doesn’t say the image doesn’t need to be described. The spec’s quite clear in that it says that if there’s any chance at all that the image can be provided with an appropriate text alternative, it must be provided. The reality is that quite often it’s not.

    What’s happened in the past is that, quite often when it’s not, the software or authors put an empty alt attribute on there. In doing that, they’re essentially hiding the image, and the user’s not going to know there’s an image there at all, even if they wanted to interrogate it.

    Sarah: Okay.

    Steve: They won’t know, so they can’t even say to their mate, “Hey, there’s an image here. You’ve got a pair of eyes. Can you tell me what it is?” because they won’t know the image is there.

    Previously, a way to provide the caption for an image was via the title attribute. There are a number of problems with this: the title attribute itself doesn’t carry the semantics of a caption, and the title attribute has various problems with the way it’s implemented in browsers.

    On mobile, you don’t see it; you can’t access the title attribute at all. And if you’re using a keyboard, the information is hidden from you.

    With the use of the figure and figcaption pattern, what you’re doing is providing a programmatically associated caption, because the figure wraps around the image and the caption. You’re saying these things are associated, and you’re providing a visible label for the image, so the user knows something about the image.

    Sarah: Nice.

    Steve: I like it.

    Sarah: [laughs] It sounds like a really elegant way to present the design pattern in a consistent way. Hopefully, it will lead to more accurate, descriptive, and helpful descriptions of images, since the text displays on the page.

    Steve: That’s the hope. One of the issues is getting people to provide a text alternative, full stop—Maslow’s hierarchy of needs. [laughs] Get them to provide some text, and then make that text an appropriate text alternative, make it useful and meaningful. Each step is a little bit harder to take.

    The thing is, quite often what’s happened in the past is that a text alternative would be provided using the alt attribute, when what it was, semantically, was a caption. Now we have an element with which you can provide a caption for something.

    It has a different meaning and it has a prescribed meaning that’s conveyed. For example, in Firefox, the figcaption element is exposed with a role of caption. That can be conveyed to the user, so they get told that this is a caption. This text is a caption. It’s not a text alternative or a replacement for the image.

    Sarah: Oh, OK. That’s what I was thinking, that captions might have a really different function than a text alternative.

    Steve: They do, in most circumstances. But, if people aren’t going to provide the text alternative, but they are going to provide a caption, at least the user knows that this is a caption and it’s a caption for something that’s not adequately described.

    Sarah: It sounds like you don’t just look at the specifications. This is an instance where you’re looking at the real world and thinking about how things are implemented, the process for putting images on pages, and stuff like that.

    When you’re working on specifications, do you talk a lot about the implications of changes on processes? In this case, we spend a lot of time in accessibility talking to people about creating good text alternatives for images.

    This adds a little twist to that. Is that part of the spec development process, thinking about the cultural impacts of these things?

    Steve: I think it definitely is. That’s the thing: a lot of the things that make it into the HTML specification, and other specifications, come from looking at the way authors use HTML in the world. They look at design patterns that are used, and say, “Let’s formalize that design pattern.”

    The provision of a caption for an image is…it’s not ubiquitous, but there’s a lot of it in the print media, academic textbooks, anything that has captions.

    Newspapers and online news media have captions for images. There are a lot of circumstances where you have a visible piece of text that is associated with a particular image.

    Previously, there was no way to say, “Hey, this is a caption for this image.” Now there is. It’s really formalizing design patterns or code structures that developers have been using.

    Sarah: As someone who builds things and evaluates things that other people build, I find it really reassuring that the web standards people are looking at what’s happening and adapting those standards to support it. That’s great.

    Steve: Yeah, that’s one of the things I’ve been particularly interested in. There have been big changes over the last 10 years in the way that the HTML specification, particularly, has been developed, how things get added, and what gets described.

    A large part of that has been a renewal of browser implementation information, ensuring that it is correct, working closely with the browser vendors, and things like that. The authoring guidance within the specification has not kept pace with the same rigor, and the real-world perspective hasn’t been applied to it.

    One of the things that I’ve tried to do since I’ve been involved is get the guidance in the specification to reflect both the constraints and the best practices that exist in the real world.

    To give you an example, one of the new features in HTML5—it’s new, though it’s been around for a while now—is the placeholder attribute, which essentially allows you to define a piece of text that is displayed within a text-edit field or a text area control.

    The thing that’s usually light gray and difficult to read, and then disappears. It’s a feature that a lot of people were doing via scripting.

    You could use JavaScript to change the value of the control so it’d have that. It’s also a feature that’s available in other platforms, desktop platforms, et cetera, so it’s been added…but.

    One of the things that I wanted to ensure is that the advice around the use of that feature is clear that it should not be abused. It shouldn’t be used as a label in place of a visible label. We can’t always stop people doing that.

    At the same time as not wanting it used in place of a visible label, one of the things that’s important, and that I have worked on and tested, is that in the absence of any other label text, it is exposed as a label through the accessibility APIs to assistive technology users.

    At least, again, as a realistic point, they get some information. At the same time, the specification echoes advice provided by usability gurus like the Nielsen Norman Group: that you shouldn’t use a placeholder as a replacement for a label.

    It states that clearly in the specification as author guidance, and states the reasons why: there are problems with the color, and the text disappears, so it has an effect on users with cognitive impairments. There’s a whole litany of reasons why it’s not a good idea.

    Those are clearly stated within the specification. That’s the kind of thing that wasn’t there, but that I have been working to get into the specification.
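    A sketch of the guidance in practice (the field names are invented): the placeholder supplements, rather than replaces, a visible label.

        <label for="email">Email address</label>
        <input type="email" id="email" name="email"
               placeholder="e.g. pat@example.com">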

    Even though the HTML specification is meant to be primarily for browser implementers, it’s also looked to and used as a valuable resource by HTML authors, developers, et cetera.

    Sarah: I know I speak for many, many people in thanking you for bringing that perspective into the development of the specifications, and for all of the hard work that you and everyone else involved in developing these specifications put in. Thanks very much for talking to us today.

    Steve: Thank you.

    Sarah: It’s been very informative and enlightening.

    Steve: I hope I haven’t gabbled on too much. I tend to gabble, especially when I’m sitting alone in my hallway talking to people.

    Sarah: [laughs] It’s good gabbling. Let me put it that way.

    Steve: Excellent. Thanks very much, Sarah. I had an enjoyable time speaking with you today.

    Sarah: This has been Steve Faulkner sharing insights on how specifications like HTML5 and WAI-ARIA are there to help us in building a Web for everyone.

    Many thanks to you for listening, and to our sponsors, UIE, Rosenfeld Media, the Paciello Group, and O’Reilly for making this podcast possible. Follow us @awebforeveryone on Twitter, that’s @awebforeveryone. Until next time.

  • Audio Accessibility with Svetlana Kouznetsova

    Audio accessibility is concerned with making information provided audibly available to people who are deaf and hard of hearing. We see examples of audio accessibility in captions and live captioning. Like all forms of accessibility, there is a spectrum defined by features that influence the quality of the experience. At one end of the spectrum, a text version of the spoken content is provided and is somewhat accurate. At the other end, the text closely matches the audio, with accuracy, sound description, and punctuation helping to provide an equivalent experience. To provide a quality user experience, we must make use of all the features that go into an accessible and enjoyable experience of audio content for people who are deaf or hard of hearing.

    In this podcast we hear from Svetlana Kouznetsova. Sveta is a user experience designer and appreciates the value of providing a good experience. She brings this perspective to her work as an audio accessibility consultant. She joins Sarah Horton for this episode of A Podcast for Everyone to answer these questions:

    • What is the current state of audio accessibility?
    • What are different features that influence user experience with regard to audio accessibility?
    • Does speech-to-text technology help in creating accessible audio experiences?
    • What should we be thinking about with speech-based interfaces?
    • How can we better promote audio accessibility?

    Sveta is deaf, so we did the interview on Skype using video and chat. That way we could see each other’s expressions and reactions to the text-based conversation. It was Sveta’s idea to conduct the interview in this format and it worked very well. Also, at the end of the interview we had the transcript, included below (note the transcript is a little different since it came from a text conversation—for example, it has smileys). To make the interview accessible to podcast listeners, Elaine Matthias from Rosenfeld Media and Sarah connected on Skype to read and record the interview. It was a great process, and a wonderful way to share Sveta’s insights with both readers and listeners.

    Transcript available · Download file (mp3, duration: 17:31, 10.8MB)

    Svetlana Kouznetsova is a user experience designer, accessibility specialist, and captioning consultant at SVKNYC Web Consulting Services. She also provides audio accessibility services and resources through the Audio Accessibility website, including information about deafness and hearing loss and best practices for accessible media.

    A Podcast for Everyone is brought to you by UIE, Rosenfeld Media, The Paciello Group, and O’Reilly.

    Subscribe on iTunes · Follow @awebforeveryone · Podcast RSS feed (xml)

    Transcript

    Sarah Horton: Hi, I’m Sarah Horton, and I’m co-author with Whitney Quesenbery of A Web for Everyone from Rosenfeld Media.

    I’m here today with Svetlana Kouznetsova. Sveta is a user experience designer who knows the value of a successful and enjoyable user experience, and focuses on usable and accessible design above all else. She is also an audio accessibility consultant, helping companies produce accessible video and audio experiences. Sveta is very active in the user experience, design, and accessibility communities, advocating for audio accessibility and advancing access to media for people who are deaf or hard of hearing.

    As an aside, Sveta is deaf, so we did the interview as a text conversation using Skype. We both prefer talking face-to-face, so we used the video feature, so we could see each other’s expressions, smiles, and laughter, without relying on emoticons. After the interview, I read the transcript of my part of the conversation and Elaine Matthias from Rosenfeld Media read Sveta’s part, and we recorded the reading to use as the audio podcast. Kind of a reverse process, where typically we record audio and transcribe to text for accessibility. It was Sveta’s idea to text and then voice the interview, and it worked really well. And it’s wonderful to be able to share her insights, with listeners and readers alike.

    Sveta, many thanks for joining us.

    Svetlana Kouznetsova: Nice “meeting” you.

    Sarah: First of all, let me say how happy I am that we figured out how to make this work. I really wasn’t sure how a podcast, which is first an audible experience, would work, and it was great thinking it through with you and exploring other options, learning from you other ways to communicate. I’m grateful for the insights, and wonder if you would share them with our listeners, on ways you use technology to communicate?

    Sveta: Okay. Email is my primary way to communicate if people want to reach me. I also use texting when needed—mostly to send brief messages.

    When communicating with people online in real time, I usually use instant messaging features like Skype, Gtalk, AIM, etc.

    I also use video when Skyping with people so that we can see each other and each other’s facial expressions and body language—it’s similar to when people listen to each other’s voices over the phone and hear voice intonations.

    Sarah: Nice, makes sense, and works well! It’s great to see you and talk to you.

    Sveta: Likewise. 🙂

    Sarah: So, how did you get started working in user experience?

    Sveta: I was originally trained as a graphic designer, but I also liked doing websites, and I did a lot of web design work at my first job. I got interested in coding and got a degree in Internet Technology. While in graduate school, I took some business classes—marketing and management.

    I liked the marketing classes a lot and had fun doing customer research, but I did not like the idea of asking customers to buy things—I wanted products to be more usable for them. Later I found out that there’s a part of user experience that is similar to marketing, but the difference is that it focuses on improving the experience for users.

    Also, when working on websites, I did sketches and wireframes and loved it. I had no idea that it is also part of user experience and information architecture.

    At another job I was collaborating with a developer who encouraged me to make coding cleaner, and from there I somehow found out about web accessibility and user experience.

    I learned more about it from reading online information and attending events and conferences.

    Sarah: When you talk about user experience and marketing, is that like “experience marketing”—where in providing an excellent and enjoyable experience you end up getting people to buy things?

    Sveta: Yes, I believe that user experience and marketing can go hand in hand and are interdependent.

    I think that marketing is about attracting customers and user experience is about keeping them.

    Sarah: Yes, I like that part, too. It’s not always easy to convince companies that UX will help with the bottom line, though. Have you found that to be the case?

    Sveta: It’s hard. And many businesses think that UX = visual design and coding.

    Sarah: It’s a tough nut to crack, for sure. Let’s talk about audio accessibility; what’s your sense of where we are? Are people thinking about this? Are you finding that companies are receptive to working toward audio accessibility?

    Sveta: It’s something that I still keep needing to educate more people about. Even when talking about accessibility in general, many think it’s about coding and doing alternative descriptions for images, and they focus more on people with visual and mobility difficulties, and less often on hearing and cognitive difficulties.

    Many think that it’s enough to just have hearing aids or turn up the volume to listen to audio, which is not necessarily true. Sadly, hearing loss is very stigmatized by society, so it’s not discussed that much. For example, many people take eye exams—but how many people would take a hearing test?

    More people are willing to wear eyeglasses, but many would try to conceal hearing aids or not wear them at all. Those who are deaf and hard of hearing often will not ask for access—only those who are involved in advocacy work do. So it gives people not familiar with deafness the impression that if few or no people ask for access to aural information, there’s no demand for it.

    When I ask people to caption video, provide transcripts for audio, or provide real-time captioning at events, I’m often told that I’m the only person asking for it. They do not realize that I speak for about 50 million deaf and hard of hearing people in the USA. Also, captioning benefits many more people than those who are deaf—like people who are foreign language learners or remedial readers, people having a hard time understanding foreign accents, or people who happen to be in noisy or quiet situations, for example.

    Sarah: It’s true that discussions about accessibility often come down to numbers—how many people will really benefit from this? That’s unfortunate, since, as you point out, so many people benefit. With captions, it seems like the benefits are more widely understood and felt than with some other accessibility features, like alt text.

    What I encounter a lot is a resource argument against captions, because it’s something extra.

    Some aspects of accessibility require people—designers, developers—to do things differently. Like using a different design or interaction pattern—a disclosure widget instead of a tooltip to display supplementary information, for example. Or, in code, adding attributes so information is available to assistive technology. But captions are different—they require people to do something more. And usually they cost money. In my experience it can be a hard sell, making a case for spending additional time and resources on captions. How do you approach that challenge in your work? What arguments can we use to make a compelling case?

    Sveta: It would be no different from spending money on making buildings accessible for wheelchair users, for example. You need to add ramps and elevators. They are as universal as captioning in the sense that they benefit more people than those who are in wheelchairs—like parents with baby strollers or workers pushing carts.

    It is also no different from spending on other things like editing audio and video—it also costs extra money.

    It is also a better investment than spending more money on lawsuits.

    Lawsuits would not only cause businesses to lose money, but also give them a bad reputation. Providing accessibility is the right thing to do. And like ramps, captions benefit more people than just those with disabilities.

    Many businesses do not realize that people with disabilities make up the largest minority, with significant spending power. They make up a $1 trillion market in the USA and a $4 trillion market in the world—the latter is about the same size as that of China.

    And businesses would also get more customers, like families, friends, and coworkers—they add an additional 2 billion people in the world with a disposable income of $8 trillion.

    So it’s a pretty significant number of potential customers that many businesses ignore.

    Sarah: Yes, there is a strong business case. Also, when it comes right down to getting videos transcribed, I’ve found the cost is not that significant. I think we all wring our hands about how expensive it is, but when it comes right down to it, it’s pretty small in comparison with other costs, as you mention, like shooting and especially editing.

    Part of the difficulty is getting the process embedded smoothly into the overall process. Some people hope technology is the solution.

    I remember getting very excited back in 2006 reading an article about IBM’s “superhuman speech recognition,” which set a goal and timeline for recognizing speech as well as humans do. At the time I was working on a lecture capture project where we were trying to automate transcription of recorded lectures. Since then we have Siri, which works pretty well, and YouTube auto captions, which don’t work so well. How realistic is it to look to speech-to-text technology for help with creating accessible audio-based experiences?

    Sveta: That’s the issue. It’s important not just to provide speech-to-text translation, but also to make it of good quality. No matter how much speech recognition has advanced lately, machines are still not as good as humans. Even people who use speech recognition to provide real-time captioning—called voice writers or re-speakers—are not as good as steno captioners. For example, the BBC uses voice writers for real-time captioning, and that makes many deaf Brits frustrated because the captions have so many errors.

    And a couple weeks ago I was provided with voice writers by an event organizer who went against my advice to hire steno captioners. Those voice writers had years of experience, and yet they made more errors than skilled steno captioners.

    If human voice writers using speech recognition cannot provide smooth real-time captioning, machines cannot do it any better.

    Sarah: Can you explain what a voice writer is?

    Sveta: There are 2 types of real-time captioners. One is a steno captioner who uses a steno machine—like a court reporter. The other is a captioner who uses speech recognition software like Dragon, voicing instead of typing.

    I’m not a captioner so I cannot explain it in detail. From what I have seen, they speak into a microphone and make words appear on screen. To reduce mistakes, they need to practice a lot to add words into the vocabulary. Steno captioners also practice by adding specific strokes into the vocabulary.

    Sarah: Ah, I think I get it.

    Sveta: Another thing about automatic speech recognition is that machines are not able to add proper punctuation, speaker identification, and sound description—that can be done only by humans. Proper punctuation is as important in transcription as voice intonation is in human speech.

    Another thing also is that speech recognition is not good at foreign accents and background noises.

    Research shows that error rates of more than 3% make it harder to read and understand material in print. That’s why real-time captioners are expected to type at least 220 words per minute with at least 98 to 99% accuracy in real time.

    For these reasons you need to train a machine to recognize your voice—you cannot just turn on speech recognition and start captioning. From what I understand, it takes as much practice for a voice writer as for a steno captioner to provide smooth real-time captioning with fewer errors.

    Voice writing may be better for transcribing recorded audio and video, where there is time to clean up the text afterwards. For real-time captioning, however, I would recommend steno captioners.

    Sarah: Speaking of voice and dictation, what about speech-based interfaces, like Siri? What should we be thinking about to make sure those tools are accessible?

    Sveta: I think that Siri may be good for voice commands and other functions, but not for real-time captioning. I did try it myself—sometimes it may transcribe speech well, sometimes not. The main issue is the time lag in having speech transcribed.

    Siri may be fun for informal conversation—some people tried to use that with me. However, it’s not good for real time captioning.

    Sarah: Thinking about using speech to interact with features, do we need to be sure to have alternative interaction features? Like with Siri, you can speak a search term or enter it as text. As long as the text option is available, there won’t be barriers for people who can’t speak?

    Sveta: Yes.

    Sarah: Great. I see more interfaces, particularly apps, that use speech, and I’m not sure we are all thinking about having that redundancy.

    Sveta: Even though I can speak, I have a Russian accent and a deaf voice, so speech recognition would not understand me. It’s hard enough for some humans to understand my speech, to say nothing of machines.

    Sarah: 🙂

    So the last thing is, if you have one bit of advice to offer someone on a product team who needs to advocate for audio accessibility, what would it be? What one argument or rationale could we use to persuade people to commit across the board to, for example, CART for events or transcribing podcasts?

    One note on that—when Whitney and I started doing podcasts with UIE, we didn’t need to convince—they were already transcribing all their media, which was pretty awesome!

    So how do we get others to do that?

    Sveta: I really appreciate that you and Whitney try to make sure that aural information is accessible via quality transcription.

    It may be surprising, but even some people who advocate for accessibility do not practice what they preach as they do not think of making their audio accessible.

    And when posting podcasts or videos, they say, “Transcript and captions to come soon.” This is not a good practice. Many of us deaf and hard of hearing people are often told, “I’ll tell you later,” when we ask people to repeat what they say or discuss in group conversations. That “coming soon” message is equivalent to saying, “I’ll tell you later.” So it is advisable to post audio and video online only after they are made accessible.

    An example is the recent CSUN conference about accessibility—they posted videos online without captions! They said to be patient and wait. Why should deaf people be patient and wait? Why the rush for hearing people to get to listen to audio and video? If we are told to wait, then hearing people can wait, too.

    To answer the last question, I would say that captioning and transcription is universal access and not something that needs to be asked for in advance.

    Hearing loss is very stigmatized and many deaf and hard of hearing people would not ask for it. There are also many people who are late deafened and trying to cope with their hearing loss. So captioning would benefit everyone.

    Generally I would say that accessibility and user experience should not be separate—things need to be usable and accessible to everyone, regardless of whether you have a disability or not. What benefits people with disabilities also benefits others.

    Last but not least, it’s important to provide quality transcription and captioning. It’s more than just converting speech to words. And it also depends on the type of audio—transcription for a podcast is different from transcription for video and live events.

    Sarah: So, don’t wait for someone to ask for captions or transcripts—just do them, do them well, and everyone benefits.

    Sveta: Yes. Just like you don’t wait for someone to ask for ramps and elevators—they benefit everyone.

    Sarah: Right! This has been very helpful and informative. Thanks very much, Sveta!

    Sveta: My pleasure! 🙂

    Sarah: This has been Svetlana Kouznetsova sharing insights on what we can do to design accessible audio experiences, and work toward building a web for everyone.

    Thanks, also, to Elaine Matthias at Rosenfeld Media for voicing Sveta’s part of the text interview for the audio version of the podcast.

    And many thanks to you for listening, and to our sponsors—UIE, Rosenfeld Media, The Paciello Group, and O’Reilly—for making this podcast possible. Follow us @awebforeveryone on Twitter. That’s @awebforeveryone. Until next time!

    Join us in celebrating Global Accessibility Awareness Day

    Posted on

    Global Accessibility Awareness Day officially begins at 8pm EST on May 15, 2014. The first GAAD was in 2012, and was inspired by Joe Devon’s challenge to mainstream accessibility. He called for “a day of the year where web developers across the globe try to raise awareness and know-how on making sites accessible.”

    While GAAD continues to be an awareness-raising event, this year it has taken on a celebratory tone. There has been real progress in the last few years, as reflected by the number and diversity of GAAD events—in-person and virtual. People are talking about and engaging with accessibility around the globe. It’s definitely an exciting time to be involved with accessibility.

    This year’s GAAD coincides with UXPA Boston, and Whitney will be presenting on Personas for Accessible UX. I will be presenting on Involving People with Disabilities in UX Research as part of Inclusive Design 24, a 24-hour online GAAD celebration hosted by The Paciello Group and Adobe. And Rosenfeld Media is celebrating GAAD by offering a 40% discount on A Web for Everyone. Just choose your format, click Add to Cart, and enter the discount code GAAD.

    We hope you will join us in celebrating all the great work and progress, and take time to learn more about how you can help in making a web for everyone!

    Accessibility Research Methods with Jonathan Lazar

    Posted on

    A Podcast for Everyone cover
    Accessibility research can help us better understand how people with disabilities use the web and what we in product design and development can do to make that experience more successful and enjoyable. However, accessibility research is often carried out in academia. The valuable insights gained through research are shared and built upon among scholars, but often do not make their way into the practice of people who are designing and building digital products and services.

    Photo of Jonathan Lazar
    In this podcast we hear from Dr. Jonathan Lazar, a computer scientist specializing in human-computer interaction with a focus on usability and accessibility. Jonathan has done a great deal of work bridging the gap between research and practice. He joins Sarah Horton for this episode of A Podcast for Everyone to answer these questions:

    • What are the different accessibility research methods, and what are they good for? And when are they most effective in the product development lifecycle?
    • What are the broad benefits of accessibility research?
    • How can you get organizational buy-in for conducting accessibility research?
    • How can researchers and practitioners work together to advance accessibility?

    Transcript available · Download file (mp3, duration: 37:38, 34.9MB)

    Jonathan Lazar is Professor of Computer and Information Sciences at Towson University, where he directs the Information Systems program and the Universal Usability Laboratory. He has been working in accessibility research for 15 years, most recently focusing on law and public policy related to web accessibility for people with disabilities. His publication credits include Research Methods in Human-Computer Interaction, Web Usability: A User-Centered Design Approach, and Universal Usability: Designing Computer Interfaces for Diverse User Populations.

    Resources mentioned in this podcast

    A Podcast for Everyone is brought to you by UIE, Rosenfeld Media, The Paciello Group, and O’Reilly.

    Subscribe on iTunes · Follow @awebforeveryone · Podcast RSS feed (xml)

    Transcript

    Sarah Horton: Hi, I’m Sarah Horton. I’m co-author with Whitney Quesenbery of “A Web for Everyone,” from Rosenfeld Media. I’m here today with Jonathan Lazar.

    Jonathan is a computer scientist who works on topics related to accessibility and universal usability. Among his many activities, he directs the Information Systems program at Towson University, conducting research on what people with disabilities need to successfully use the web and how well or how poorly we, who build the web, are meeting those needs.

    Jonathan has a great deal of experience with different research methods. We’re here to learn the benefits and drawbacks of different methods for evaluating accessibility of websites, applications, and apps, and how we can incorporate accessibility assessment into practice. Jonathan, thanks so much for joining us.

    Jonathan Lazar: It’s a pleasure to be here with you today, Sarah.

    Sarah: First of all, can you tell us about the Information Systems program at Towson?

    Jonathan: Certainly. Since 2003, I’ve been director of the undergraduate program in Information Systems at Towson University. At Towson, all of the computing programs are in one department. We have computer science, information technology, and information systems.

    In our program at Towson, we like to say that information systems focus on the four Ps — People, Process, Policy, and Profit.

    Students in our program learn a lot about human computer interaction. They learn a lot about using technology to meet business needs, hence the Profit. They learn about international technical standards and the laws related to technology. They learn about design methods for including users in design to ensure a good outcome. They learn about testing and evaluation.

    Again, starting with a P for People, they learn all about human computer interaction and how to build interfaces that meet the needs of users.

    We’re really excited. This fall, we’re actually implementing four new career tracks at Towson University for our undergraduate students. Students will pick from one of these career tracks to help them focus their electives towards a specific job goal.

    Those four career tracks are user interface design, systems analyst, business analyst, and e-government. We’re excited; students are already signing up for them. We’re going to help them go directly into these different careers, and the tracks really help define what we’re interested in and what our goals are in information systems.

    Sarah: That sounds like a great program. Sounds like something I’d like to sign up for. [laughs]

    Also, I know you direct the Universal Usability Lab, which is also somewhere I’d like to spend some time. It sounds like it’s right up my alley. Can you tell us a little bit about that as well?

    Jonathan: I founded the Universal Usability Laboratory at Towson in 2003. We just celebrated our 10th anniversary. The goal of our laboratory is really to do research but research focused on practitioners and policy makers. We’re not doing theoretical models. What we do is research that can really help improve the outcomes of technology in our community and in society.

    There are a number of people involved: myself, Heidi Feng, Joyram Chakraborty, and Suranjan Chakraborty, who are all faculty, as well as a number of doctoral students and a number of graduates of the lab.

    The idea is we do research about user diversity. We’re interested in people with perceptual impairments, motor impairments, and cognitive impairments. We’re also interested in users with very little computer experience. We’re interested in older users and younger users. We’re very interested in these issues of user diversity.

    Our research tends to be targeted towards industry and practitioners. For instance, we do a lot of work that we publish with UXPA. We’re also very interested in doing research that informs policy makers.

    I can talk about this a little bit more later, if you want. Typically, if you’re aiming your research towards policy makers, versus those who fund public policy and government policy, it’s a little bit different.

    Our research is really aimed at impacting policy makers and impacting practitioners and developers.

    Sarah: You must have quite a toolbox of methods for doing that research. One thing we’d really like to learn from you today is about some of those methods.

    Do you have favorite methods? Are some things good for some tasks and not for others? And how do you convey the information from your research to those policy makers in a way that is persuasive, and hopefully prompts some action on their part?

    Jonathan: Generally, we have three different types of evaluation methods related to accessibility. You could have your typical usability testing, where you have people with disabilities attempt to perform tasks on whatever level of interface you have, whether it’s an early prototype or a fully finished interface.

    You can have expert inspections, or expert reviews, where you have interface experts or accessibility experts go through a series of interfaces, whether we’re talking about screens on a desktop, mobile devices, or an operating system. These are, again, inspections; it’s not users attempting tasks.

    For things like websites, you also could do automated accessibility reviews, where you have a software tool: something like Deque WorldSpace, free tools on the web like WAVE, or paid tools like Comply First.

    There are a lot of tools out there that you could use. They have a limit though. Really what you need is some combination of user testing, expert reviews, and automated reviews.

    The question really is, which are the appropriate methods to use, in which scenario?

    Let’s start from a practitioner point of view. The most important thing to do is actually impact the design. Your number one goal is to impact the design. You could do a perfect method and a perfect data collection, and it could last six months. If you spent six months doing it, you wouldn’t actually influence the design. They would have moved ahead without you.

    Let’s first say that the most appropriate method that you can use is the method that will actually lead to results. Given that, you have to look at what the budget is. You have to look at what the timeline is. You have to look at where you could actually impact the design to improve accessibility. That’s the first thing.

    Sarah, I think you would agree with me. If you’re going to do a perfect study that won’t get anything done, why do it?

    Sarah: Exactly. It’s a very good point about timing, and things like that. That really influences whether anyone can actually do something with the results of your research.

    Jonathan: I always when I talk with my students, I tell them, “What’s the right number of users? What’s the right level of testing?” It’s whatever you can get into the timeline, given the budget, given the limitations, that actually will allow you to influence the design. That’s what we’re interested in. We’re interested in influencing the design.

    For someone to say, “Well, we must do a strict 40-user usability test,” that’s not likely to actually impact the design.

    Let’s talk about some of the strengths and weaknesses about each one of these three methods. When we talk first of all about Usability Testing — we’re talking about getting users with disabilities — one of the challenges there is that you have to determine which users with disabilities and which disabilities are likely to use whatever that interface is, that website, that device.

    One of the challenges is if you say, “Well, I’m going to have blind people test it. I’m going to assume that applies to everyone with every disability.” Clearly, that’s not true.

    What you need to do is get a sampling of different disabilities that represent the target user for whatever operating system, interface, website, device you’re referring to.

    You need to make sure that you have a sampling of not just people with one disability. People with a disability typically can only find accessibility problems that relate to their disability.

    One of the strengths of user testing, first of all, is that it goes beyond simple technical accessibility to make sure the interface is easy to use.

    One of the core problems we often talk about is that someone will say, “Well, I followed the technical standards, so I think it’s accessible.” In fact, it may be accessible, but really hard to use. So it’s not really usable.

    User testing is also really good for determining usability of multi-step processes. Let’s say you have a dialog box or a series of screens: something like signing up for a new email service, purchasing an item online, or registering for classes, where it’s not just one screen but five or six different screens that you have to go through. Usability testing is most accurate on that.

    Next, on to Expert Inspections. Expert Inspections, as you know, should always be done before Usability Testing, if possible in the schedule. Experts can find some of the obvious flaws first, get those improved, and then hopefully users can find the more fine-grained problems related to accessibility. So if possible, an Expert Inspection first.

    We typically have people who are experts in accessibility. Sarah, I’m sure you’ve done many expert inspections, right?

    Sarah: That’s right. What you’re saying is you do an Expert Inspection prior to Usability Testing?

    Jonathan: Absolutely. If you can do that in the schedule with the budget, you should do that. Hopefully, the expert in interface design can find a few major accessibility flaws, get those fixed, and remove those by the time that the users actually get to evaluate the interface.

    Expert Inspections typically won’t figure out if the interface works really well, in terms of ease of use. They can determine if it technically meets the requirements, but there may be things that relate specifically to the task. Obviously, the expert may not have deep task knowledge. They may have deep knowledge of the interface.

    One thing an Expert Inspection is really good at, though, is determining compliance with either technical guidelines or legal requirements.

    Typically, usability testing will tell you where there are some flaws. But you can’t run three blind users through an interface and have them tell you whether it complies with Section 508 of the Rehab Act or not.

    An expert inspection is typically better at evaluating a series of interfaces strictly for compliance. Basically, the expert’s going, “OK, does it meet paragraph A? Does it meet paragraph B?” That’s expert inspections.

    Automated reviews are actually the weakest evaluation method; however, they have one strength.

    I’ll tell you why they’re weak first. They’re weak because, obviously, if you can have human experts inspect an interface, that’s better. Or, if you can have real users with disabilities test an interface, that’s better. The real weakness of automated accessibility testing tools is that they often can’t tell if something is useful.

    The example that’s given often is that all web pages need to have alt text: alternative text that describes what is on the screen for a certain image. Suppose there’s alt text for an image, and the alt text is the word “blank,” or it’s “picture here.” Or, let’s say there are 20 pictures on a web page and the alt text for every single picture is “hamburger.”

    Unless you’re actually running a hamburger store and all your pictures are indeed hamburgers, an automated tool would mark this as meeting that paragraph of the law, because you have alternative text for a picture, even though the alternative text might not be useful at all. It’s “hamburger, hamburger,” or it’s “picture here.”
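    As a concrete, deliberately naive illustration of Jonathan’s point, here is a minimal sketch (in Python) of the kind of presence-and-plausibility check an automated tool might run. The suspect-phrase list and the repetition threshold are invented for this example; real tools such as WAVE or Deque WorldSpace are far more sophisticated.

        # Minimal sketch of a heuristic alt-text audit, for illustration only.
        from collections import Counter
        from html.parser import HTMLParser

        # Hypothetical list of placeholder values; a real tool's rules differ.
        SUSPECT_ALT = {"blank", "picture here", "image", "photo"}

        class AltTextAuditor(HTMLParser):
            def __init__(self):
                super().__init__()
                self.alts = []      # alt values seen, in document order
                self.missing = 0    # <img> elements with no alt attribute at all

            def handle_starttag(self, tag, attrs):
                if tag != "img":
                    return
                attrs = dict(attrs)
                if "alt" not in attrs:
                    self.missing += 1
                else:
                    self.alts.append((attrs["alt"] or "").strip().lower())

        def audit(html):
            auditor = AltTextAuditor()
            auditor.feed(html)
            counts = Counter(auditor.alts)
            return {
                "missing_alt": auditor.missing,
                "suspect_alt": [a for a in auditor.alts if a in SUSPECT_ALT],
                # Twenty images all captioned "hamburger" pass a presence
                # check but trip this simple repetition heuristic.
                "repeated_alt": [a for a, n in counts.items() if n > 3 and a],
                # Empty alt ("") is legitimate for decorative images, which is
                # exactly why a human still has to review these results.
                "empty_alt": auditor.alts.count(""),
            }

        print(audit('<img src="a.jpg" alt="hamburger">' * 20))

    A check like this can only say that alt text is present and not obviously boilerplate; it cannot say that “hamburger” is the wrong description for a photo of a menu, which is why Jonathan ranks automated review as the weakest of the three methods.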

    The idea is that automated accessibility inspections…the software tool is actually the weakest. However, they scale really well.

    Given that, with user testing, you want to get as many users as possible but you’re probably not going to be able to get 50, 100, 150 users, and given that expert inspections are great but you’re probably not going to have time for the expert to inspect every single web page in a website, the automated accessibility tools scale really well.

    You can have a tool spider through your whole website. You could have it examine 10,000 web pages.
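    To give a sense of how that scaling works, a checker like the one sketched above can be driven page by page from a simple breadth-first crawler. This is a bare sketch under stated assumptions: audit() is the hypothetical checker from the earlier sketch, the page limit is a placeholder, and a production spider would also need robots.txt handling, throttling, and better error recovery.

        # Bare same-site crawler sketch; pairs with the audit() sketch above.
        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urlparse
        from urllib.request import urlopen

        class LinkExtractor(HTMLParser):
            def __init__(self, base_url):
                super().__init__()
                self.base_url = base_url
                self.links = set()

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    href = dict(attrs).get("href")
                    if href:
                        self.links.add(urljoin(self.base_url, href))

        def crawl(seed, max_pages=10000):
            """Yield (url, audit report) for every same-site page reached."""
            host = urlparse(seed).netloc
            queue, seen = deque([seed]), {seed}
            while queue and len(seen) <= max_pages:
                url = queue.popleft()
                try:
                    html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
                except OSError:
                    continue                    # skip unreachable pages
                yield url, audit(html)          # audit() from the sketch above
                extractor = LinkExtractor(url)
                extractor.feed(html)
                for link in extractor.links:
                    if urlparse(link).netloc == host and link not in seen:
                        seen.add(link)
                        queue.append(link)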

    Automated tools are good at giving you the overall picture of how your website is doing and what types of flaws occur most often. But a lot of times the automated tools will either give you information that can be misleading: “Oh, it has alt text,” when the alt text isn’t actually useful.

    Or, it may give you results that you have to interpret. It’ll say “There are manual checks required due to the presence of these features.”

    Really, you want some mix of automated review, expert inspection and usability testing. Of course what combination you use and how much depends on the timeline for your project. It depends on the budget for your project. It depends on how you will be able to influence the design.

    We shouldn’t focus on doing perfect experimental design of our research here. We should focus on, “How do I influence the design and what methods will give me the information I need to fit into the timeline and budget, to actually influence the design?”

    Sarah: Thanks, Jonathan, for that very clear and thorough exposition of those three methods.

    It’s great to think about how each of them might be used in different ways within a project and, certainly, how each has strengths within the context of different constraints, like time, resources, and the number of people available to do usability testing.

    A couple questions come to mind. One is, who should be asking these questions about what needs to be done and when? How can user experience designers, in particular, know when and what to do, what tools to use, and how to add these tools into their tool sets for making decisions about design and integrating these research questions into the process of designing and building?

    Jonathan: User experience professionals are really in the right situation to be able to have an impact. User experience professionals really need to advocate for accessibility.

    Accessibility happens as a group effort. If you think about a university, for example, you have to have a number of different people involved. Not only do you have to have the disability student services office, you also have to have the CIO’s office, the people who control the technology. You need to have the people who are involved with diversity. You need to have the provost involved.

    The user experience professionals have an advocacy role to get out there and inform people about technology accessibility. Really informing people and making them aware is what I’d say is the biggest challenge. In many cases, people simply don’t know.

    If they didn’t grow up with people who are blind, people who are deaf, people in wheelchairs, they don’t know. They’ve never considered how someone who’s different than them might use technology. Very often it’s an awareness issue.

    User experience professionals, because they’re already going out there informing people about usability and all those issues for user centered design, they’re really in a great position to advocate for accessibility. To let people know, “Hey, you have all these different users. Here’s how they use your technology.

    Here’s what you can do to make it better. We have these technical standards. Do you love what we’ve already done with user experience? Guess what we can do related to accessibility.”

    You asked also, I believe about tools, right?

    Sarah: Yeah.

    Jonathan: One of the problems with most developer tools is that there isn’t much front and present to developers about accessibility in the tools.

    Imagine if you had a web development tool that every time you insert an image, it would immediately say, “What is the alt text?” If you didn’t type in alt text, it would stop you from moving any further. Wouldn’t that be great?

    Sarah: It would be great.

    Jonathan: One of the problems is that developer tools — whether they’re web developer tools or any other developer tools — often aren’t really transparent about accessibility. Yeah, there might be some accessibility features hidden away somewhere that you have to look and search for. It’s not front and present.
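    Jonathan’s imagined tool is straightforward to approximate today as a lint step, without waiting for authoring tools to catch up. The sketch below is one hypothetical way to wire it up: a script that scans HTML files and exits nonzero when an img has no alt attribute, so a pre-commit hook can refuse the change. The command-line convention is an assumption; in practice a team might instead enable an accessibility rule in whatever linter it already runs.

        # Sketch of a "stop me if I forget alt text" gate for a pre-commit hook.
        import sys
        from html.parser import HTMLParser

        class MissingAltFinder(HTMLParser):
            def __init__(self):
                super().__init__()
                self.problems = []   # (line, column) of each <img> lacking alt

            def handle_starttag(self, tag, attrs):
                if tag == "img" and "alt" not in dict(attrs):
                    self.problems.append(self.getpos())

        def main(paths):
            failed = False
            for path in paths:
                finder = MissingAltFinder()
                with open(path, encoding="utf-8") as f:
                    finder.feed(f.read())
                for line, col in finder.problems:
                    print(f"{path}:{line}:{col}: <img> has no alt attribute")
                    failed = True
            return 1 if failed else 0   # nonzero exit blocks the commit

        if __name__ == "__main__":
            sys.exit(main(sys.argv[1:]))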

    That’s a common problem in many organizations. You ask them, “How are we doing on IT accessibility?” They say, “Well, we don’t know,” rather than saying, “Here’s how we’re doing. In the month of May, here’s the type of evaluation we did for accessibility. Here’s how we did, but we’re going to work on improvement.”

    Think about the airlines. The airlines post, “In the last month we were 87 percent on time, which was an improvement over last year.” Why can’t we get organizations to do that for accessibility?

    “We’re 87 percent accessible, and we’re getting better every month.” You rarely see any organizations with that level of transparency, right Sarah?

    Sarah: Right, but I guess my initial question would be, are they doing those assessments to begin with? Are there numbers to report on?

    That actually feeds very well into my next question about what organizations can do to incorporate some of the research methods that you’ve been talking about to build better products. Then they would have numbers to share.

    At this point, I don’t know if many companies, organizations or design teams are actually monitoring accessibility in quite that way.

    Jonathan: You read my mind, because that was the next thing I was going to talk about. Why do they often not report statistics? Because very often they’re not doing evaluations. The organizations — whether they’re government agencies, universities or companies — they don’t know how they’re doing. They’re not doing the evaluations.

    It’s a hidden secret: “We don’t know how we’re doing, so don’t ask about it.” Instead, what we need to get organizations to do is to talk about it and say, “Look, we haven’t done evaluations. We’re going to start.”

    If you look at 2010, there was a memo coming out of the US federal government that talked about the fact that the Justice Department hadn’t been doing their evaluations of federal government websites since 2003, but they were going to start doing it again.

    I really give the federal government credit for that. There were a number of parties involved, the Justice Department, OMB, GSA, and they really all said, “Yeah, we haven’t done this, and the law requires this, and we’re going to start doing it again.” And the Justice Department did.

    What we need is to get companies to follow the lead of the Justice Department, in saying that, and admitting that. “Yeah, we haven’t really paid attention, but we’re going to start paying more attention, we’re going to start doing more evaluations, we’re going to start making this a priority.”

    Because once you find out how you’re doing, of course, what’s the next step? “Oh, we’re not doing well. Well, we need to start making some improvements.”

    Sarah: Jonathan, I really agree with what you’re saying about organizations needing to be more transparent about the work they are doing in accessibility. What I’m trying to get to in this conversation is some practical guidance on how to go about doing that research. This is one thing that organizations struggle with, not having the tools and the knowledge that you’ve been sharing with us today about how to assess accessibility and improve it.

    As you know, because you were there, I recently attended the Cambridge Workshop on Universal Access and Assistive Technology, which you co-direct. It was a really great conference, and I was delighted to be able to participate. I was so struck by the insights that were shared at that conference, from the research community and the scholarly community, primarily.

    I also came away, though, with the feeling that all of that great information and insight isn’t making its way into practice, into organizations, so that groups know how to move forward, based on the insights gained through that knowledge. People who are looking really deeply into accessibility are in one place, and the people who are trying to provide accessibility are in another. There is a big gap, in between.

    From what I know of your experience, you’re a person who is both a researcher and a practitioner. You work with researchers and you work with practitioners, and have found a way to bridge that gap. Could you tell us a bit about how you’ve found ways to do that, so that organizations can move forward with accessibility in a more deliberate and informed way, benefiting from all the insights in the research community?

    Jonathan: My approach is simply that I talk with everyone. If a group wants to have me present, if there are some people I want to do outreach to, I just go talk with them. A lot of times, there’s a hesitation by, especially university researchers, not as much the industrial researchers, but there’s a hesitation to work with practitioners.

    There’s this silly hesitation about, “Well, we want to stay in our research lab, we want to do a clean study. We want to do it this way.”

    I say, the world is messy. Let’s get out there and work on influencing the world. Yes, it’s messy. Yes, you can’t control as many factors. Yes, it may not be a neat study that you can publish in a certain journal. But the reality is, what could be better than actually influencing practitioners in UX, or developers, or policy-makers?

    The key thing is to first engage with people and say, “I’d really like to talk with you more about the topic of accessibility.” UX developers go out and talk to researchers, researchers go talk to practitioners and policy-makers.

    The first step is simply to engage, and to talk. The next step is to find out, what do you need? What information do you need, and in what format do you need it? Because, if you present to different communities, if you present to all these different people I talk with, researchers and practitioners, they all have different needs. We talk about user-centered design, you need to understand the user needs.

    For instance, if you look at public policy-makers, there’s very little data from the UX and HCI communities that actually is influencing public policy. Why? Because we don’t have things prepared in the format that policy-makers need. For instance, policy-makers are very interested in year-after-year studies.

    They don’t want to know if we’re doing perfect work, they want to know, are we improving? So you’re saying, maybe, “Three years ago, our websites were 50 percent accessible, and now we’re at 70 percent. Our goal next year is 75 percent.” That’s great, we’re making progress. You might say, “But we’re only at 75 percent.” But a policy-maker sees progress.

    They’re very much interested in longitudinal studies. In the HCI research community, we don’t do many longitudinal studies. On the other hand, if you look at, let’s say, the healthcare community, the medical research community, they do tons of longitudinal studies. We have to figure out, what do other communities need?

    When I present to practitioners, I always make sure to give lots of examples, and be very specific about policies. One thing that we as researchers often do is say, “There’s a law that says that it must be accessible.” We need to learn, when speaking with these other communities, to be specific. What, specifically, is the law? Is it a federal law? Is it a state law? What does it cover? Who’s covered? What type of compliance mechanisms are in place?

    It’s very often about first engaging with these other communities, and then really trying to figure out: what do they need? What format do they need things in? What are their questions? Say, “What are the questions you’d really like to have answered?” That’s how you do it. And don’t be scared of messiness. The world is messy. Yes, we have to get out of our universities and engage with the world.

    I’ll give you an example. I’m working on a project, right now, with my undergraduate students. I teach a class that’s just about technology designed for blind users. The students are working with Baltimore County Public Library, to evaluate the services that the Baltimore County Public Library offers for people who are blind or low-vision.

    I tell the students up front, “I think this is going to be the most awesome project we’ve ever done in this class. I could be wrong. It may not work out well. But that depends partially on the amount of work you put into it.” When you do a real world project, you have to be up front about that. Yes, it’s going to be messy.

    Rather than be in our usability lab and focused just on experimental design, if we’re going out and trying to implement things in the real world, there will be lots of unexpected things that we find along the way. We’ll probably find some new flaws in our interface. We’ll probably find some technical challenges that we have. That’s exactly why we should do it.

    Because, if we do only experimental studies in the lab, very controlled, we’re not really able to influence the world. We have to get out there in the world, and see what real challenges people face. What are the real challenges with the technologies we build? How do we tweak the technologies? Where are there problems in our technologies?

    How do we impact on public policy? How do we impact developers? How do we impact practitioners’ end-user experience?

    These are really my ideas. Engage with people, find out what they need, and don’t be afraid of failure, don’t be afraid of messiness.

    Sarah: Sounds good. That’s really great. You’re putting the onus primarily on the research community. Is there something that the practitioner community can do to work the other way? There were very few people from industry at that workshop I mentioned. How should the practitioners, the people who are out there building things, be engaging with the research community?

    Jonathan: Practitioners should do everything they can to reach out to university researchers, as well as industrial researchers, and say, “Here are some questions that we don’t have resolved in our community. What could you contribute to making this happen? We want to find a way to work together.”

    And realize that, as they do that, you have to get to know other communities. You have to get to know, what are their reward structures? What do they get credit for? What do they get dinged for? That’s part of it.

    I wasn’t just speaking as a researcher. I was saying practitioners should absolutely go out, talk with researchers. Both practitioners and researchers should go talk with policy-makers. Go talk with your local government official, who probably will actually, really want to talk with you. They’ll want to say, “Yeah, I’ve been having these problems, and I don’t have data on this. I don’t understand the problem. Can you give me some more information?”

    It’s many different communities. It’s technology developers and software engineers. It’s UX practitioners. It’s researchers. It’s policy-makers. We all have to get out of our comfort zones, and go out there and explore with other communities. Be willing to fail. Be willing to say, “It may be messy, but we need to at least start the engagement process.” A lot of people never even get that far.

    Sarah: You’ve mentioned a few times, policy. I know that you spent a year at Harvard, as a Fellow, researching public policy. It’s interesting how some of your insights today relate to ways to use the research techniques and results in a way that are going to be persuasive, and affect public policy. If you could talk to us a little bit about how you ended up taking this path of researching public policy as a computer scientist. It’s not a common path.

    Jonathan: [laughs] No, it certainly is not. It was very interesting. I’ve been doing accessibility research for about 15 years, now. I’d been doing user diversity, before that, but accessibility for about 15 years, now. What kept happening is that I would get calls and emails from policy-makers, asking me, “Hey, do you have any data on this?”

    I would receive requests from the disability community, from the advocacy community, saying, “Can you go talk as a researcher, about research foundations for this bill in the legislature?” Policy-makers at the federal level, too, would reach out to me. Over time, I kept seeing this pattern. I kept getting requests for information to really help influence public policy.

    No one’s asking me to be an expert on law or policy. What they’re saying is, “Can you give me data? Do you have research studies that can answer some of our policy questions?” That’s, at the core, what I’m interested in. I’m interested in, at least from the policy point of view, how can we use human-computer interaction, user experience, and accessibility research data to help inform policy-making?

    Because a lot of policy-making in this area doesn’t have any data, it doesn’t have any research behind it. There are a lot of other fields that do much better at this, than we do.

    Over the years, I kept getting requests for, again, “Do you have any data on the following topic? How is our state government doing? Can you talk a little bit about this bill and, from a scientific point of view, what this bill in the legislature means?” Over time, I kept getting more and more requests. I freely admit, I don’t have a public policy or a law background. My background is in human-computer interaction. But I was getting request after request.

    I’d been involved with SIGCHI, a special-interest group on computer-human interaction. I had been a founding member of the US Public Policy Committee for SIGCHI. Later, that role was expanded, where in 2010, they created a new position, called the International Chair of Public Policy, and they asked me to serve in that role.

    I’m doing more and more public policy, and I thought, because I’m doing public policy, I really need to have a little bit more of a foundation in disability rights law and public policy.

    I applied a number of different places. I was very thrilled that I won a fellowship at the Radcliffe Institute for Advanced Study at Harvard University. The Radcliffe Institute is a fantastic place. They specialize in people who do interdisciplinary work, and they will fund people. They will fund a portion of your salary, to spend a year at the Radcliffe Institute.

    It’s about a five percent acceptance rate when you apply for one of these Radcliffe fellowships. They have 50 every year, across all fields.

    I was thrilled. I won one of the Radcliffe Fellowships, so I was the Shutzer Fellow at the Radcliffe Institute for Advanced Study. I spent a year investigating and researching the intersection between disability rights law and public policy, and human-computer interaction for people with disabilities.

    As you know, I was really involved with the Harvard Law School Project on Disability and Michael Stein. In fact, we had some publications out already about these topics related to, for instance, societal discrimination. There’s a great video, also, if you go to YouTube and search on Jonathan Lazar Harvard. There is a great YouTube video of my fellowship presentation, which talks all about societal discrimination against people with disabilities occurring when a website is inaccessible.

    If a technology is inaccessible, how does that lead to a form of discrimination, like employment discrimination or pricing discrimination? Over time, I got more and more involved with public policy and I said, “I want to do something related to policy and law for my sabbatical.”

    Again, I applied and I was thrilled that I won one of the fellowships at the Radcliffe Institute. That really has helped me get a much deeper understanding of public policy and disability rights law related to my human-computer interaction.

    For instance, I continue to do work in ACM SIGCHI, where I continue to serve as International Chair of Public Policy. We’re working on a report to serve as a foundation for understanding human-computer interaction and public policy.

    Also, if you look at SIGCHI, I’ve been involved with, though I’m not leading, the effort to make SIGCHI more inclusive for researchers, practitioners, developers, and students with disabilities. SIGCHI has been working both on conference accessibility, making sure that our conference locations are accessible for people with physical disabilities.

    We also have been working on digital accessibility, working on improving our conference website, working on improving our submissions to the digital library, so we are making progress on making SIGCHI a more inclusive organization.

    Sarah: Now that you’re back at Towson after that year of doing research into public policy, are there things that have changed the way that you approach accessibility research on, for example, the three methods that we talked about earlier and pulling that all together at this point? Are there ways of doing accessibility research that we should be looking to in the profession, in the UX profession, to influence things like public policy, in terms of how we administer and how we use our research methods to learn about accessibility in products and services?

    Jonathan: I certainly learned a lot last year on the fellowship. I learned a lot about disability rights law and have a much deeper understanding of the law. One of the things I think that is important for all UX professionals to understand is that anytime you talk about policies or laws, be very specific.

    That’s something I really learned last year: people in policy and law cite specifically, “Title II of the Americans with Disabilities Act, Paragraph III.” That’s the way they typically refer to things, rather than saying, “There’s a law that said so.”

    Anytime we reference a law or a policy, we need to be very specific about what we’re referring to. I do think that when you look into not only the laws, but the regulations, when you look into legal settlements, you see a little bit of a trend where the legal settlements now are being much more specific about the evaluation methods required.

    You didn’t use to see that. It used to be that some form of testing would be required, some evaluation. Now, for instance, if you look at the two recent legal settlements with the University of Montana and Louisiana Tech, they’re very specific about the type of evaluation methods required.

    For instance, for one of the settlements, the university has to file an annual report documenting compliance with the Department of Justice. With the other one, they have to do user testing involving people with disabilities.

    That’s slowly starting to become more encoded in all the various forms of policy: the statutory laws, the regulations, the legal settlements, and such. That’s something that we really could help with. The more the UX profession can help inform policymakers about the different methods of evaluating for accessibility, and their strengths and weaknesses, the more information we can put out there.

    Again, the more transparency we can get, the more we can talk about it because a lot of people still don’t know. If you went to these universities, a lot of the higher-ups say, “Well, I had no idea. I didn’t know.” We need to do a much better job educating people out there about accessibility and different evaluation methods for accessibility and why it’s important.

    That’s my charge to everyone who’s listening to this podcast: get out there. Talk with people. Connect with people. Inform them about accessibility and why it’s important. Give them your business card. Make sure that you do your best to get the word out, because there are still a lot of people out there who are not aware.

    Awareness, openness, and transparency are really the best ways that we can move this topic and this agenda forward.

    Sarah: Thank you so much, Jonathan. That’s all really helpful and insightful. This has been Jonathan Lazar talking to us about the best ways to gain and share insights through research to help in building a web for everyone.

    Many thanks to you all for listening and to our sponsors — UIE, Rosenfeld Media, The Paciello Group, and O’Reilly — for making this podcast possible. Follow us @awebforeveryone on Twitter. That’s @awebforeveryone. Until next time.

    Design Education: An interview with Valerie Fletcher

    Posted on

    An edited version of this interview appears in Chapter 11 of A Web for Everyone.

    Photo of Valerie
    Valerie Fletcher, Executive Director of the Institute for Human Centered Design since 1998, helped shape the Principles of Universal Design. With many years of engagement in advancing accessibility and universal design in the public and private sectors, Valerie has a deep knowledge and clear perspective of the challenges and opportunities that exist in moving forward the agenda of universal design for web accessibility.

    We wanted to learn what she considers to be the greatest challenge in integrating accessibility into the practice of web design.

    The state of accessibility and universal design

    Valerie Fletcher has been Executive Director of the Institute for Human Centered Design since 1998. That year the Institute, at that time called Adaptive Environments, took the lead and collaborated with the Center for Universal Design in Raleigh, North Carolina, Hofstra University, and the Universal Design Newsletter on sponsoring the first International Conference on Universal Design, called “Designing for the 21st Century.” It was at this conference that the Principles of Universal Design were disseminated for the first time to an international audience.

    The Institute for Human Centered Design was a key partner in developing and promulgating the principles, and has been instrumental in promoting universal design through training, education, and its multi-disciplinary design services. It provides consulting services that include accessibility compliance and design solutions that integrate universal design features in built environments, products, and Information and Communication Technology.

    That long engagement has given Valerie both tremendous hope and real concern about the pace of change. “I have both tremendous optimism and tremendous anxiety that we’re never going to get anywhere. Good ideas don’t thrive just because they are good ideas. But still, I feel more optimistic than I did five years ago.”

    Trends transform the practice of design

    Design responds to trends. Back in the 1970s and 1980s, architects were thinking about such challenges as designing affordable housing so people could live in peace—so they would not deface walls or be at war with their neighbors. This notion of “behavioral design” focused on the function and power of design—on how design could be put to the task of creating better living environments.

    In the 1990s, functional or environment/behavioral design was replaced by “form as the ultimate good.” “Function became the thing you had to live with because the law required it.” And the field of architecture became one for “lone wolves who felt they had to be more creative, more brilliant than others to succeed.” Much was lost in the shift from human-centered to designer-centered design, including attention to the power of architecture to influence human experience.

    In architecture, Valerie notes, “The only time an architect is likely to touch accessibility in a serious way is during licensing. Most commonly in the U.S., the only time one is taught anything about accessibility, let alone universal design, is likely in the context of an introduction to code requirements that teach plumbing code, electrical code, and accessibility. Is it any wonder people think [accessibility and universal design] is about cutting into your creative brilliance?”

    Education is a catalyst for change

    Education often drives design trends.  Students identify with what they are taught, and how they are introduced to their field. If they are taught that form is paramount, then more functional concerns, such as accessibility, will always be secondary.

    Today’s design curriculum does not do an adequate job of covering accessibility and universal design, and the information that is provided is geared toward meeting compliance standards. This results in a “just tell me what I have to do” attitude toward accessibility, which, in turn, produces inadequate designs. “‘Just tell me what I have to do’ has not resulted in the kind of creative energy and true innovation we need to make progress in this area,” says Valerie.

    What is needed is for universal design to be integrated into the practice of designing buildings, spaces, communications, products, interiors, and software: into the design of the things we use, the spaces we inhabit, and the way we learn and communicate.

    Building a curriculum in universal design and accessibility

    Valerie sees design education as the critical component, but notes a lack of rigor in the curriculum requirements, and a lack of commitment from the instructors. “There is a readiness among the students that is not quite met by the readiness of the faculty, but it is a short bridge. I think they can get there especially in light of the adoption of the value of environmental sustainability.”

    Universal design needs to be adopted by faculty and then taught to students. With students, universal design needs to be intrinsic to their practice—fundamental to how they brand their work and pitch themselves professionally. “If we miss that opportunity, then it becomes a case of the one-off student—the one who, against all odds, persists.”

    And she sees responsiveness in the students. Because universal design and accessibility have not been perceived as significant elements of the curriculum, schools do not have faculty teaching the subject. This is where the Institute for Human Centered Design has stepped in, teaching class sessions and seminars on accessibility and universal design. Valerie has found a great deal of receptivity among the students. “With every year, the students have become more and more interested.” Many students seek internships with the Institute.

    She also sees a rise in attention paid to accessibility by the accrediting organizations, especially for interior design and architecture. The fields of interior and industrial design are the most progressive today, whereas architecture has historically fallen short. It’s reported that a dominant shortcoming in schools of architecture during accreditation visits is the availability and quality of instruction for accessibility and universal design.

    Accessibility guidelines set the baseline

    Regarding the accessibility of the digital environment, Valerie notes that there are good efforts underway worldwide. She points in particular to the work of the Web Accessibility Initiative (WAI) of the Worldwide Web Consortium in promoting web accessibility through the development of standards, guidelines, and best practices; it is clearly the most widely accepted global source of such guidance. She also notes that web accessibility has more policy supporting its efforts than other design fields. There are many examples of organizations and institutions adopting a policy of meeting accessibility or universal design standards, even when it’s not a legal obligation. With reliable guidance such as Section 508 in the United States or the W3C/WAI, “mandating policy for inclusive design is a choice any organization can make.”

    But Section 508 and the Web Content Accessibility Guidelines are in some ways equivalent to building codes in the built environment. They help in establishing “a floor of accessibility and, in the case of W3C/WAI, beyond that to a high standard of usability.” However, design and designers are too often not part of the discussion, and Valerie sees this as a major failing. “There’s still a big gap in being able to identify great websites that look as good as they act. A lot of designers have yet to be convinced that you can get great design and great usability performance in a single site.”

    Great examples inspire great designs

    Bringing universal design to bear adds a whole new dimension to the discussion. “You need to drive the conversation by capturing people with great case studies, great examples, great leaders, and bringing in a global perspective. You will see better outcomes if you inspire and catalyze.”

    Take, for example, the NAO robot and the iPhone and iPad. NAO is a programmable humanoid robot that is being used for such tasks as providing assistance to people with significant dexterity and mobility limitations and teaching social interaction to children with autism. Apple’s mobile devices offer a wide array of interaction modes, including speech recognition and text-to-speech, so people can interact with software on the device in whatever way suits their context. As Valerie notes, “People learned a lot about design that is user friendly from Apple.”

    Looking ahead, Valerie sees the demand for customization and personalization of technology products and services as a catalyst for adoption of accessibility and universal design concepts by designers. “If you take ‘we’re all different’ as the starting point, and then train designers to respond to that reality, then you hit the sweet spot.”

    Toward Universal Usability: An interview with Ben Shneiderman

    Posted on

    An edited version of this interview appears in Chapter 10 of A Web for Everyone.

    Photo of Ben
    For over 30 years, Human-Computer Interaction (HCI) pioneer Ben Shneiderman has worked to keep the “human” in HCI broadly defined. Through research and teaching, writing and speaking, convening and facilitating, he has advocated for and assisted in the creation of technology tools in support of the common good. His award-winning book, Leonardo’s Laptop: Human Needs and the New Computing Technologies, is a call to action, urging users to expect success from their technology tools, and challenging designers and developers to satisfy those expectations.

    Since Ben invented the concept of universal usability, we wanted to get his take on how designers are measuring up, and what is keeping them from moving forward more effectively.

    We are making progress toward universal usability

    In his May 2000 Communications of the ACM article, Ben raises the bar from accessibility to “universal usability,” going beyond technical accessibility for people with disabilities to successful use of computers by everyone. To achieve universal usability, we need to account for technology and user diversity, as well as gaps in knowledge—to “bridge the gap between what users know and what they need to know.” And he establishes as a success measure “more than 90% of all households as successful users of information and communications services at least once a week.”[1]

    Now, more than a decade later, Ben is optimistic. “The software we have today is far better than what we had ten years ago.” We have examples of software and devices, including mobile apps, mobile phones, and digital cameras, where “most people can succeed most of the time.”

    Also, many more people are engaged in promoting accessibility and universal usability. For example, the Association for Computing Machinery, or ACM, recognizes universal usability as an important issue, as indicated by their journal Transactions on Accessible Computing.[2] “As a research topic, accessibility has become a respected part of the computer science discipline.”

    “There are many people at work on universal usability. It’s gratifying to see that when we speak about it, students, researchers, professionals, and policy makers listen.” And along with attention comes results.

    To illustrate, Ben tells a story of a recent plane trip. He was seated next to a businesswoman who was blind, which he knew because of the cane he helped tuck away in the overhead bin. “She sat down next to me, took out her iPad and keyboard, plugged in her earphones, and began to work.” During the flight Ben had the opportunity to chat with her. “I asked whether she was using special software and she said no.” The current implementation on the Apple iPad provided everything she needed to perform her work. “That’s the kind of progress that inspires me in a wonderful way. It is gratifying to know that thoughtful design enables users with disabilities to hold challenging jobs and lead more fulfilling lives.”

    Universal usability is about satisfying experiences

    “‘Accessibility’ defines a set of technical requirements that could be met and yet the result may not be universally usable. ‘Universal usability’ specifies not just the attributes of the technology but the experience of the users.” Universal usability is evaluated and measured very differently than accessibility, by way of real users. And this, Ben acknowledges, “is a serious challenge.”

    “The expectation of satisfying the full range of human diversity is an enormously high achievement to push toward.” But he also believes it is achievable if people give it the care and attention that they give to other priorities. “Health is achievable. We have times when our health is better than others, but we strive to be healthy all the time.” Similarly, we should strive to satisfy people “with different hardware, different network connections, different abilities, and different levels of knowledge about using computer technology.”

    Expecting to be successful in our use of technology

    Much has to do with our expectations as consumers of technology—whether we expect to be satisfied, or to satisfice.

    Take, for example, digital cameras. We started out with small digital cameras that were able to take fuzzy images, and built up to easy-to-use, high-resolution cameras that are integrated into other devices. “As time goes by and technology improves and advances, our expectations of what we can accomplish grow ever higher. We now expect to be able to take good photos indoors without a flash on a cell phone.”

    But in many cases, our expectations have not been forceful enough to effect change. Software still produces “frustration and difficulty.” University and commercial websites are not accessible. Even government agency websites that are under strict legal requirements to be accessible often aren’t. To make real gains toward universal usability, people must expect satisfying and successful experiences from all of our technology tools. Every user must become an activist, speaking up to influence those who can make change happen.

    Strategies for delivering universally usable experiences

    One approach to designing universally usable software is using multi-layer interfaces that include a basic mode that is easy to use and error free, but with more features and functions available as users become more proficient. Ben calls these “karate interfaces,” in that users move metaphorically from white belt to black. At each step, there are different things to learn, and with mastery of each step comes increased proficiency. “More attention to multi-layer interfaces could make systems usable by people with low skills and low needs, as well as people with high ability and high needs.”[3]

    Ben recognizes that this type of interface requires more effort from designers and developers, but asserts, “It’s something we should all expect. Moderate effort by the design team can bring huge benefits for millions of users.”

    We expect automobiles to have levels of adjustability. We can move the seat, tilt the steering wheel, angle the mirrors, raise the lighting—there are so many adjustable features. Of course, it takes more time to design and may cost more, but the benefits to usability and safety are enormous.

    Mature technologies have many forms of adjustability that are easy to use, enabling people to move gracefully from simple use to more elaborate use. They empower people to do remarkable things.

    Building awareness and expertise in the profession

    Ben sees examples such as Apple’s iOS as influential in raising awareness and moving toward universal usability, since the main force holding people back is lack of knowledge in the profession. “We need to tell the good stories about those who have done the right thing and have done a good job. That will encourage others to follow in the same way.”

    One byproduct of a lack of knowledge is general uneasiness about the implications of building software for universal usability. Since accessibility and universal usability are not typically part of education and training, most people who are building sites and applications are not proficient in these areas. “The expectation of many designers, engineers, and programmers is that it’s going to be very difficult to do.”

    A key way to address this knowledge gap is through textbooks. Ben suggests a checklist review process, in which any book intended to support the computer science curriculum is checked for whether it includes universal usability. “That kind of review would make authors, adopters, professors, and university departments aware that universal usability is an essential part of computer science.”

    Ideally, the topic would be integrated into every aspect of the book, as with Ben’s seminal textbook, Designing the User Interface: Strategies for Effective Human-Computer Interaction, with Catherine Plaisant, Maxine Cohen, and Steven Jacobs. In the current 5th edition, Ben notes that “There is no chapter about universal usability—the whole book is about universal usability!”

    In addition, Ben would like to see more rigorous professional standards to support the practice of universal usability. “There is a growing movement in support of software engineer certification. I’m in favor of that, and I think one of the criteria should be that their training covers accessibility and universal usability.”

    Universal usability shouldn’t be a special course that someone has to take. It should be part of the preparation for anyone who learns about computer science and training for every computing professional. I want to be in a discipline and part of a profession that is proud of its role in achieving universal usability.

    CVAA with Larry Goldberg

    Posted on

    If you work in media broadcasting or telecommunications, you have probably heard of the U.S. legislation called CVAA, shorthand for the 21st Century Communications and Video Accessibility Act. This law, signed by President Obama in October 2010, seeks to ensure that accessibility requirements keep pace with advances in communication technologies.

    Like most legal documents, CVAA is difficult to decipher: it’s hard to extract the key points and determine what actions we need to take.

    Lucky for us, Larry Goldberg is here to help. Larry was co-chair of the Video Programming Accessibility Advisory Committee (VPAAC), which provided reports that helped shape the legislation. He joins Sarah Horton for this episode of A Podcast for Everyone to answer key questions, including:

    • How did CVAA get started and what is it for?
    • What do web professionals need to know about CVAA?
    • Are there standards we should be looking to for guidance on CVAA compliance?

    Transcript available · Download file (mp3, duration: 24:33, 14.4MB)

    Larry Goldberg is Director of Community Engagement at WGBH, the company that pioneered captioned television in 1972. He has been with WGBH since 1985, serving for many years as Director of the Carl and Ruth Shapiro Family National Center for Accessible Media, and has been a leader in advancing accessible media at WGBH and worldwide.

    A Podcast for Everyone is brought to you by UIE, Rosenfeld Media, The Paciello Group, and O’Reilly.

    Subscribe on iTunes · Follow @awebforeveryone · Podcast RSS feed (xml)

    Transcript

    Sarah Horton: Hi, I’m Sarah Horton, and I’m coauthor with Whitney Quesenbery of “A Web For Everyone,” from Rosenfeld Media. I’m here today with Larry Goldberg, who is Director of Community Engagement at WGBH, the company that pioneered captioned television in 1972.

    Larry has been at WGBH since 1985, and has been a leader in advancing accessible media at WGBH and worldwide. We’re here to learn from Larry about the 21st Century Communications and Video Accessibility Act, or CVAA.

    Larry was closely involved in getting this legislation passed and we would like to know more about what it is, and how it affects people who make websites and applications.

    Hi, Larry, thanks so much for joining us.

    Larry Goldberg: Great to join you Sarah.

    Sarah: Can you tell us briefly about the origins of CVAA, how it got started, what problems it was meant to address?

    Larry: CVAA came out of a roadblock that the blind community was facing back in the early 2000s. We had found ways to achieve requirements for closed captioning of television for the deaf community, but the blind community wanted to see more video description.

    In the Telecommunications Act that was passed in 1996, captioning was required, but video description was left somewhat open. The FCC at that time decided they had a mandate to require video description, in a limited amount, only on certain channels.

    They went ahead and required it, but the communications industry took the FCC to court and won. The video description mandate was overturned, and the only solution was to go back to Congress and try to get a new law passed that would actually give the FCC jurisdiction to require video description.

    At the same time, many others in the disability community were concerned that digital technology was changing everything, that there were new forms of media coming out, new ways of using telephony, that weren’t really covered by existing laws and regulations, so a combination of these pressures came together to craft a new law that became the 21st Century Communications and Video Accessibility Act.

    Sarah: Sounds like quite a journey. Our audience is mainly people who make websites and apps, sort of strategy people, designers, people who code, people responsible for content. What do they need to know about CVAA in order to do their work?

    Larry: Well, there are two basic sections of the CVAA. Title One covers mostly telephony, including smart phones and mobile devices used for advanced communication services, and Title Two focuses on online media and Internet communication.

    For anyone who is developing mobile technology, tablets or smartphones where one can surf the web, receive email, get text messages, all of that must be made accessible now. By “made accessible,” we traditionally mean focusing on the needs of people who are blind or visually impaired.

    The user interface needs to be navigable by audio, and the content needs to be accessible, usually by transforming it into speech or into braille tactile versions. So, from the beginning of the design of a mobile device, it is now required that mobile browsers be usable by people who can’t see.

    The same goes for the other advanced communication services: you need to be able to email on a mobile device if you can’t see, to use text services, and to do various other related activities, on mobile devices and really any kind of new digital technology that gives you access to these kinds of what’s called “ACS,” advanced communication services.

    Title Two is a very interesting, different area, and that is where the requirement for video description on television came into play. Nine channels, the four major broadcast networks and the five top cable channels, now have to provide 50 hours per quarter of video description on their TV channels.

    On the Internet, closed captioning that has existed in broadcast needs to be retained if the programming is carried online. That means the video players need to be able to support display of captions, but it’s also important to note that this only covers previously captioned material that has been aired on television.

    It is not user-generated content, it is not the entire world of YouTube; it really is those channels that are carrying previously broadcast content.

    Sarah: What is the status of all of this at this point in time?

    Larry: Well, the bill was signed into law in 2010, and immediately the FCC got to work issuing a long series of proposed rulemakings. Comment periods went into effect, and then the rules began to take effect as well.

    Captioning is now required on the Internet under those parameters. Video description is required on TV. Smart phones presently have to be accessible, and the next round of accessibility is a really fascinating one: the world of over-the-top devices, set-top boxes for cable systems, and smart TVs, which in two years will have to be navigable by persons who can’t see.

    I know many companies are working quite hard right now to make their user interface designs accessible without vision.

    Sarah: Those are like the menu systems and things like that?

    Larry: The menus, the programming grid, basically command and control.

    Sarah: I see.

    Larry: What’s interesting is that some companies are actually providing voice command of these user interfaces. That’s not required, but it’s a nice combination of audio feedback and voice input. You’ll begin seeing some of that come out in technologies that exist.

    In fact, there are already navigable user interfaces on the Apple TV box. It has built-in VoiceOver, which is the screen reader software. You can navigate, and turn on captions or subtitles, on programming you’re watching from iTunes on an Apple TV.

    Sarah: This sounds like another of those advances for accessibility that everyone’s going to enjoy.

    Larry: I think it’s going to be a great time. I think engineers and developers who read your book could have a lot of fun with this, because it’s a whole new way of thinking about good user interface design. The mouse, the keyboard, and the display monitor: we’ve been stuck with those for a long time now.

    For many years, people have been thinking that there must be a new paradigm, and the needs of this particular community are really now driving it. What’s interesting is the law and the FCC are not mandating a particular way to make your user interface accessible.

    It’s really up to you. We’re going to see all kinds of ways of controlling your online experience, and already there are many apps that can control a set-top box, and the apps are accessible.

    You can download the AT&T U-verse app, the Verizon FiOS app, or the Comcast app and use your smart phone to control your TV by voice and in an accessible app. In many ways, the emergence of small devices and this need for making accessible user interfaces are coming together right at the right time.

    Sarah: Sounds like it. If I’m someone building one of these apps that’s running on one of these devices and I need to comply with CVAA, how do I make sure that I’m compliant? Is there a standard that I can measure against?

    Larry: There are functional requirements, and there are best practices that have been derived from the world of computer-based software and accessibility. We’ve built a lot on the basis of the work the W3C has done with their Web Content Accessibility Guidelines and their User Agent Accessibility Guidelines.

    You should find all of that at w3.org, but we also have the federal regulation, Section 508, and many of those will give you the basic equivalents or requirements on the functionality. There is not an explicit standard for a smart device to be accessible via audio.

    There’s no ISO standard for instance, but there are significant amounts of existing technologies and the functional requirements are fairly clear.

    Sarah: One thing that is really helpful for developers, designers, and content people who are trying to build websites and applications for accessibility is having those guidelines to measure against. It’s just a very straightforward way to try to meet accessibility requirements.

    You’ve mentioned that there are guidelines that are good as companions, but will there ever be a standard in place that would be used by CVAA to test for compliance, something like WCAG or Section 508?

    Larry: The way the CVAA was written and negotiated really shaped how the technology and compliance will roll out. It’s very much a bill that was negotiated by industry with consumers; both sides won some aspects and lost others. It was the telephone industry and the TV industry that were subject to these requirements.

    They argued pretty strongly that they did not want to have the FCC mandate a particular standard or a particular way of achieving accessibility, especially when they were talking about things like the smart TVs that were rolling out as the law was written.

    Those didn’t really exist then, nor did smart phones with these kinds of capabilities, so there was a real hesitation to have the law or the FCC explicitly name a particular standard.

    They will make reference to the functional equivalence. They will make reference to the kind of things that you will see in the W3C Web Accessibility Initiative or Section 508, but the law really said that manufacturers have the leeway to develop their accessibility according to their own best attempts at achieving the end goal of making their technology accessible.

    If any of the industry standards groups gets together and creates a voluntary standard to achieve, for instance, a talking electronic program guide, they could do that, but it’s actually an area where these companies compete with each other. They compete very strongly on who’s got the best EPG.

    In some ways, this is a competitive issue. At this point it’s unlikely that you will see an explicit standard for how to command and control a TV set or a cable box. As I mentioned before, some cable companies might want to proliferate small devices with apps on them, and not build speech into their set-top box but actually have it external; they’re allowed to do that.

    The industry was very clear that the technology is emerging and changing, and they weren’t prepared to have a standard imposed on them, which tends to be the way they go.

    There will be industry groups that, for reasons of interoperability, for reasons of consumer friendliness, might get together and say, “Let’s all follow this one particular route to accessibility,” but most of the time, it is pretty much every device will have its own way of achieving these requirements.

    Sarah: That’s interesting. It sounds like an opportunity and a challenge at the same time, but there are the performance objectives. Can you talk a bit about those, what those are, and how those play a part?

    Larry: Yes, they’re different for whichever portion of the law we’re talking about. For instance, we’ve talked about a mobile device, a tablet or a phone.

    The functional equivalent is basically usable without sight, so you start off with the notion that, if you’ve got a touchscreen or you’ve got soft buttons, they need to be programmed so that they can be discovered and used by a person who can’t see your device. It really starts there.

    How you navigate to those controls will again be a usability test as much as an accessibility one. The notion of usable accessibility is an important aspect, because there are devices, particularly in the Android world and some of the other platforms, where, yes, you can make them accessible to a blind person.

    They can do it, but it would be quite a struggle, with special software that has to be downloaded and special configuration you have to do, and that’s not necessarily the best way to do it. Built-in, inherent accessibility at startup and setup is really the way to go.

    Out of the box, you open it up, you start it up, and it should present you with a talking user interface right away. It starts there, and extends through all the setup of your device.

    Setup could be done on a website and then transferred to your device; that would be fine. A very accessible website, which is perhaps a common practice today, could certainly be a way of setting up a new device. But setup is often where the frustration begins.

    These are the kinds of things: you need to be able to move from one screen to another, you need to be able to enter characters, you need to have those characters echoed back to you, you need to be able to switch between modes, caller ID needs to be audible, your battery level needs to be audible.

    All of those aspects that you might take for granted if you’re sighted are things that need to be spoken out loud, and hopefully, presented to the user in a way that is readily achievable.

    That’s really the challenge for a good designer, to really come up with ways that make it easy, with gestures, with buttons, hard buttons, soft buttons, that really lead the person to a comfortable way of navigating.

    I don’t know that anyone’s come up with the best way yet. Of course, we all look to Apple and how they’ve done with VoiceOver on the iPhone. There may be better ways still, and they and others will find them.

    Sarah: It seems like this is a good point to talk about one of the things that I like about CVAA, which is this notion of considering performance objectives early in the design process. That’s written into some of those lengthy tomes that I have dug my way through. I wondered if you could talk a little bit about that.

    What I understand is that one of the requirements of CVAA is this notion that you would be considering those performance objectives, like how this product is usable without sight, right at the beginning of the process, versus at the end of the process looking at something you’ve built and saying, “OK, how do I make this usable without sight?”

    I think that’s a brilliant inclusion. I would love to hear about how that got there.

    Larry: You would think that that would be common sense. It is less expensive to build it in from the beginning, and very difficult to retrofit a product when it’s about to go out the door. It is quite a groundbreaking aspect of the CVAA that companies are required to take into account the accessibility of their devices, their software, and their hardware at the design phase, and to certify that they’re doing that.

    They also have to take into account the opinions and expertise of the consumer community. It’s required, and must be certified, that you have reached out to the disability community early in your design process to get their input.

    I’m not a lawyer or a legislator, but I don’t know of other legislation that has required what you and I might consider pure common sense. Of course you ask your users how their technology is working. But as it’s written into the law, a document must be signed off by the responsible party annually to show that you have considered accessibility in your design and that you have reached out to the user community and experts in the field. Pretty impressive.

    Sarah: I love that part. Getting people to think about accessibility right from the start is a big part of the book, A Web For Everyone. We recognize that it’s a real challenge for many organizations to actually make that change, because it is a change in a lot of cases.

    Do you see any opportunities coming out of CVAA that might help make a case for that kind of change, so that people move from being reactive to proactive about accessibility?

    Larry: For those of us who have been in this field for many years, toiling and trying to raise awareness and convince people that it’s the right thing to do, we’re beginning to actually see a change in the mindset, a heightened awareness, from many of the major companies.

    Perhaps some of the smaller companies, some of the smaller app developers, are still a bit confused about what they’re supposed to do, but I have seen a pervasive understanding among the major carriers and the hardware and software manufacturers that they need to follow these requirements.

    They’ve hired staff, they’ve reached out to consultants, and organizations that are experts in this. There’s now even an International Association of Accessibility Professionals being formed, and we’ll see how that goes.

    The aim is to really professionalize the field, the way privacy and security did. Today, security is a key function for anyone dealing with technology; you would absolutely put a lot of resources into ensuring the privacy and security of your technology.

    Accessibility will eventually match that as well, in that you would never consider putting out a product where you haven’t at least considered the question of how accessible it is to people with varying disabilities.

    My hope, and a lot of professionals in the field share it, is that what has been a growing grassroots movement might finally become embedded in common practice.

    Sarah: In terms of looking forward, what do we have to look forward to about media accessibility in general?

    Larry: Well, we’ve had a great sea change in the world of deliverable media. We’re no longer watching just our TVs, though the latest high-def TV sets are pretty astounding.

    Captioning is pretty pervasive there and works in a significant number of cases; almost 100 percent of television is now captioned, with rare exceptions. Now we’re watching television and videos on our devices, and the new law actually carries a lot of captions over to those devices, and it’s working remarkably well.

    You can take your tablet, your iPhone, your iPad, your Android tablet and download the Hulu, Netflix, or iTunes apps, and captions will work beautifully and very visibly. There’s a requirement now that the user can adjust the size, the font, the color. It works well.

    That has been one of the greatest successes since the initiation of the CVAA. I have to say that WGBH’s National Center for Accessible Media created this thing called the “Internet Captioning Forum” ten years ago, with the participation of AOL, Google, Yahoo, and Microsoft.

    This is what we were working towards; it’s working, so we’re proud of that.

    Video description is available, too: 450 hours per quarter of video description on television across the covered channels. We are still encountering a lot of problems with delivery of description, getting it from broadcast to the cable outlet to your TV set, due to some legacy standards in technology. But it’s great to see the pervasiveness of description on commercial and public television.

    I think the new challenge is getting that description onto the web. There is no description on any of the streaming media sites. You won’t find it yet, though we hear that some initial inquiries have begun. It’s not required under the CVAA, but I know a lot of people who are blind and visually impaired have really been pushing hard for description online.

    After that, it will be a question of what’s next in the media. Will we be looking at other kinds of devices? Take Google Glass: there’s no reason why that couldn’t be an accessible and usable device for people with disabilities, displaying enhancements of media, captions, descriptions, other languages.

    The whole world of second and third screens will be a very interesting way of delivering captions and descriptions to personal viewing experience. You’re gathered with your family, you’re all watching TV. In my family, my wife and child do not like captions. I need captions, and of course, it’s my living.

    What about turning the captions on on my iPhone? I’ll watch them privately and everyone else will watch the uncaptioned version. That is actually quite doable today; no one’s implemented it yet. Anyone who wants to try it, give me a call. Same thing for description.

    It’s already been done in movie theaters with private viewing devices and there’s no reason it couldn’t be done on digital television. Of course, the web gives you full control over your own personal experience.

    The concept is personalized video, shape it to the way you like it, to the way it works for you, including even pausing to explain what’s going on if you have a cognitive disability. All those are quite doable today with the web architecture we have and the way media is being served up.

    Sarah: That sounds really great. Thank you so much Larry. This has been Larry Goldberg talking to us about CVAA and what it means to you and me, as we work toward building a web for everyone.

    Many thanks to you for listening and to our sponsors, UIE, Rosenfeld Media, and The Paciello Group for making this podcast possible.

    Follow us at “A Web for Everyone” on Twitter, that’s @AWebforEveryone.

    Until next time…

    Accessible Media: An interview with Larry Goldberg

    Posted on

    An edited version of this interview appears in Chapter 9 of A Web for Everyone.

    Larry Goldberg was Director of the Carl and Ruth Shapiro Family National Center for Accessible Media (NCAM) at WGBH Boston, one of the most accessibility-aware media companies in the world. He is now Director of Community Engagement. In addition to producing award-winning, captioned, and described television and web programs, WGBH hosts NCAM, a research and development group focused on ensuring equity in media access. Larry oversees NCAM, where his dedication to developing technologies, policies, and practices to support accessible media has been instrumental in mainstreaming captions, video description, and other innovative technologies.

    We asked Larry what we could learn from the process of bringing captioning to television that will help us mainstream accessible media on the web.

    Integrated technology as the tipping point

    Captioned television is everywhere: in bars, airports, gyms—wherever hearing is difficult, and we need to see what is said. But that wasn’t always the case. There was a time when captions were an add-on, delivered using separate technology in the form of a set-top box purchased by deaf and hard-of-hearing television viewers. The tipping point for captions came when the capability for displaying captions was built into standard television sets—by means of an act of Congress. With the technology in place, the challenge became to produce captions for all television programming.

    Getting there wasn’t easy, as Larry can attest, having been around through much of the process. The first step was to dispel the notion that captions were costly and benefitted only a small number of viewers. “You don’t want to forget the primary purpose—that deaf people needed captions—but when it became obvious that captions helped comprehension and late-night television watching, and when the TV production community saw that they could integrate captioning into the production process without a lot of time and expense, they said, ‘Fine, go ahead.’”

    Like captions for television, most web media players can have caption display capacity built in. With Flash, QuickTime, and Windows Media, you can add a caption track to any video; however, in most cases, captions are not required. In the United States, captioning recently became required under the new 21st Century Communications and Video Accessibility Act (CVAA), but only for previously broadcast video, not for user-generated or web-only video.

    Perhaps the “CC” button on the YouTube player will play a role similar to what built-in captioning technology did for television, by compelling web media producers to provide a caption track in response to viewer expectations.
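    In HTML5, that display capacity is part of the standard video player: a track element associates a timed text file with the video, and supporting browsers offer the familiar “CC” toggle. A minimal sketch, with hypothetical file names:

        <video controls>
          <source src="interview.mp4" type="video/mp4">
          <!-- kind="captions" asks the player to offer this file as closed captions -->
          <track kind="captions" src="interview-en.vtt" srclang="en" label="English" default>
        </video>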

    Becoming part of the process

    Process integration was key to mainstreaming captions. Captioning services like those at WGBH had to get faster and more efficient, and integrate seamlessly into the media production workflow. “We had to work fast so we didn’t hold up delivery deadlines.” This meant overnight shifts, but also creating better tools so the captioners could work more quickly. It also meant coming up with workflows that would integrate into production. “Once captioning became a line-item in budgets, and an expected check-point in the production flow, it became an accepted way of doing things.” Expectations for captioned television in bars and health clubs also helped.

    When a TV producer who may never have met a deaf person goes to the gym every day and sees captions, they just accept it. Or they look and go, “Hey! Why isn’t that show captioned? There’s an interview on—I want to know what he’s saying!” Or they’re at a bar and there’s a game on, and they say, “What just happened? What’s that call? Hey! Could somebody turn on the captions?” These wider circles of usage certainly help.

    Once people stopped asking the question, “How many people are going to see these captions,” and captioning services became fast and cost-effective, captioning became part of the process of producing and distributing television programs.

    Enhancing media with accessible features

    With web-based digital technology, the broad benefits of accessibility features are even greater than with television. “In the earliest days, even in QuickTime 1.0, the benefits of searchability were fairly obvious,” offering the ability to find key words in a video by searching a synchronized text track. “Captions became a universal design enhancement that was feeding the world of search.”

    There is evidence that the presence of captions increases the attention to and time spent with video. “We believe captions are driving viewership and ‘stickiness.’” And text has myriad benefits over other media when it comes to sharing.

    You put time into creating a video, even if it’s a throwaway, even if it’s only going to be online for half a day. If there’s value to it and you want people to see it, then creating a text enhancement is going to help—for cut-and-paste, for sharing. Sharing video is kind of hard, especially since different devices have different support. But sharing text is pervasive. So if you have a text file of your media, whether it starts as audio or video, it’s much more readily shared. And you can tell people about it in all your social media tools by pulling pieces of text out, posting or tweeting the text, and driving people to your media.

    Some companies are starting to exploit accessibility features for other purposes, such as popping up advertisements based on what is said within a piece of media. “We will see a lot more targeted advertising in video,” Larry predicts.

    Making text from audio

    “It’s the transcribing aspect that takes time,” and speech-to-text software is only partially helpful, such as YouTube’s automatic speech transcription, “which is frequently only partially accurate.” Most media companies outsource transcription and captioning because the expertise needed is not typically part of a media production team. Some, like Netflix, are even now experimenting with crowd-sourcing their caption work. Services like the WGBH Media Access Group make it easy to outsource caption creation, and the prices for transcribing and captioning have come way down. Plus, there’s more to good captions than simply transcribing audio to text.

    High-quality captions are crafted to be more readable. YouTube and other auto-captioning tools won’t do that: things like breaking the sentence in the right place, and removing captions during long pauses. Our captioners do everything in one step: they transcribe, time, place, and add extra stylistic aspects. So far, we have found that using speech transcription as a first step does not save us time, because our captioners are trained to do all the enhancements in the first pass.

    But there are instances when outsourcing may not be necessary. If you start your media production process with a transcript or teleprompter text, it can become the basis for captions. Services like YouTube’s auto-timing work fairly well for synchronizing a prepared and accurate transcript with video.
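    On the web, those timed caption files are commonly written in WebVTT, the format HTML5 text tracks use. A hypothetical fragment, with invented timestamps, shows what the craft described above looks like on the page:

        WEBVTT

        1
        00:00:01.000 --> 00:00:04.500
        Once captioning became a line-item in budgets,
        it became an accepted way of doing things.

        2
        00:00:07.000 --> 00:00:09.500
        Expectations helped, too.

    The line break in the first cue falls at a phrase boundary, and the gap between cues, from 4.5 to 7 seconds, keeps the screen clear during a pause in the audio.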

    Partnering with transcription software

    Speech transcription software can be a help, but only with clear audio. “You can’t just take random, noisy, multi-speaker audio and expect high-quality automatic transcription.” But, with care, it’s possible to transcribe, with enough accuracy, a clean recording of clearly spoken audio using speech-to-text software like Dragon NaturallySpeaking. As an example, Larry cites the Liberated Learning Consortium, an IBM research project in which professors record lectures using high-quality mics and trained software to produce accessible lecture materials.

    I know we want the tools to shape themselves to us and not us shape ourselves to the tools, but… if you talk a little bit more robotically and you enunciate properly you can actually get a decent transcript using automatic speech recognition tools.

    Adding captioning to the web media production workflow

    As for who should be responsible for integrating captions, Larry suggests it’s all part of post-production—editing the media and digitizing for different platforms. “The people who know video and editing tools get this,” as adding captions is similar to titling video and adding credits—and adding other forms of metadata.

    Some organizations offer services to their constituencies to support the practice of accessible media. Several California colleges and universities offer an online service that manages the captioning workflow. Faculty submit a lecture recording, for example, and the service manages the transcription, captioning, and publishing, typically outsourcing at least the transcription part of the process. With low-cost transcription services available, the overall cost for the service becomes quite manageable.

    Looking ahead for accessible media

    The research and development aspect of Larry’s NCAM work looks at new technologies, “making sure that everyone can use whatever new, essential, or cool thing that’s coming up that will have an effect on people’s lives—at home, in school, and in the office and community. Can we make sure it’s a level playing field?” The other aspect is finding ways to exploit those technologies for accessibility.

    For example, the HTML5 media architecture offers capabilities for specialized content. “With HTML5, you can link to different types of synchronized streams within the same webpage.” For an instructional video containing information written on a board, the same information could open in a new window as text. Or the information could be inserted into the video as a text track, and viewers could pause the video, listen to the synthesized text, and resume playing the video, making the content accessible to people who are blind or visually impaired.
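    One way to carry such a stream in HTML5 is a track element with kind="descriptions", which players and assistive technologies can render as synthesized speech alongside any captions. A sketch under that assumption, with hypothetical file names:

        <video controls>
          <source src="lecture.mp4" type="video/mp4">
          <track kind="captions" src="lecture-captions.vtt" srclang="en" label="English captions">
          <!-- a descriptions track carries timed text describing what is written on the board -->
          <track kind="descriptions" src="lecture-descriptions.vtt" srclang="en" label="Board text">
        </video>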

    Given the demonstrated value-added nature of captions and other accessibility features, Larry predicts that “as more captions come online as part of the new requirements, others not covered by the rules will begin providing captions too, because they see the value.”

    Universal Plain Language: An interview with Ginny Redish

    Posted on

    An edited version of this interview appears in Chapter 8 of A Web for Everyone.

    Ginny Redish has been helping people write clearly for all of her career. She does research and analysis to understand what’s hard about reading and writing, and follows up with guidelines that people can use to make reading and writing easier.

    In our experience, language and content often get less attention than other elements in design projects. We wanted to learn from Ginny how to make language more of a priority.

    Plain language is important for accessibility

    Plain language is all about accessibility—making information understandable for everyone. Of course, plain language specifically benefits people with low literacy, of which there are many. “The rate of functional illiteracy in many countries of the world is shockingly large.”

    However, we all have difficulty reading at some time or another, for physical or cognitive reasons, or when encountering an unknown topic or language. “Even people who are high literacy sometimes have problems reading—when they are under stress, or when it’s an unfamiliar topic.” In the end, “plain language is valuable—even necessary—to just about everybody.”

    Plain language is particularly important on “functional websites,” where people go to get information and accomplish tasks. When you use plain language, “people can find what they need, understand what they find, and use that to accomplish whatever it is they need to accomplish.” And plain does not mean dull.

    “It’s not a matter of dumbing down. It’s a matter of meeting people where they are and saving people’s time. People don’t come to functional websites to waste time. They are very busy with other parts of their lives. They need to be able to find, understand, and use the information in the time and effort that they think it’s worth.”

    Plain language fits well with the concepts of universal design

    Can one source of information work for everyone? For universal plain language, “whenever possible, you want to have one source that works for everybody, and when that is not possible, you want to satisfy everybody’s needs.” In this way, Ginny’s approach maps well to the universal design concepts of same means of use, equivalent use, and accommodation.

    Same means of use. In universal design, the idea is to build in flexibility so that different people can use the same design with individual tweaks to meet their needs. “In some aspects of the web, you can build in flexibility that allows people to take something and make it work for them.” For example, people with low vision can enlarge text or switch to a high contrast view.
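    On the presentation side, that flexibility can be as simple as sizing text in relative units, so the reader’s own browser settings take effect, and honoring a platform-level request for more contrast. A minimal CSS sketch, with hypothetical selectors:

        /* Relative units let readers enlarge text through their browser settings */
        body {
          font-size: 100%;
          line-height: 1.5;
        }
        h1 {
          font-size: 2em; /* scales in proportion when text is enlarged */
        }

        /* Offer a high contrast view where the platform supports the request */
        @media (prefers-contrast: more) {
          body {
            background: #000;
            color: #fff;
          }
        }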

    “However, with the language part of a design it’s harder because you have to choose which words to use.” Plain language gives you the broadest “same means of use.” In most cases, following plain language guidelines will allow you to reach all your audiences with a single content source.

    Equivalent use. For times when one size does not fit all, it may be necessary to provide different versions. “When you have different audiences who are coming to the same topic with different backgrounds, different needs, and different vocabularies, you may need to provide different content.” For example, Ginny worked on the National Cancer Institute website, which provides two sets of information: one for patients and families and one for health professionals. “We can think of this as ‘equivalent means of use’ because the more technical language used for health professionals is ‘plain’ from their perspective. And both sets of information are available to everyone, so individuals choose which to read—or read both.”

    In all cases, following plain language guidelines is critical. As Ginny stresses, “You always want to write straightforward sentences.”

    Accommodation. “If you write your main content in plain language, you will reach a wider range of your audience, but there could be people for whom that is not simple enough.” In these cases, it may be necessary to look to an accommodation, such as Easy Read Online. This service uses techniques like video, images, and simplified text to modify documents for people with learning disabilities and little or no reading ability.

    The risk with alternative versions is maintenance–keeping the main and alternative content in sync and up to date.

    Ten years ago, the solution to accessibility for many websites—if they did anything at all—was to create a separate, text-only version. That turns out to be a very bad idea. They are meant to be equivalent, but after a short time they aren’t equivalent. Separate but equal is never equal.

    For this reason, if you decide to create multiple versions, you should do so deliberately and with caution. When working on a project that appears to require different versions, Ginny notes, “I only agree if I know we have different audiences who need different content.”

    Design projects need content people

    Key decisions such as this illustrate the importance of having a content strategist on the design team. Typically, teams don’t consider content until the very end of the design process, and then content providers scramble to replace “lorem ipsum” placeholder text with actual information. And more often than not, the people producing the words are not trained as writers, never mind in the techniques of plain language. As a result, the very thing people come for—information—is often the most poorly implemented part of a design.

    People who come to websites don’t come to navigate. They don’t come to admire your design. Obviously, the design and navigation are potential barriers, and they have to be good so as not to be barriers. But what people come for is the content, and the content is both information design and language. Understanding its importance and making content an integral part of the process is critical.

    Planning is critical to successful plain language

    With Ginny, plain language starts at the beginning of the design process, with three planning questions.

    The first question is: “Why? What are you trying to achieve?” In considering content, you may have one purpose or many, but the purposes cannot be vague, like “to give people information.” They must be “actionable purposes with measurable results.” Ginny gives as an example the purpose of information contained in a university catalog: “We want people who have never been to a university to make good choices about programs that would be appropriate for them to take, and we want them to choose to come to our university.”

    The second planning question is: “Who? Who are your site visitors?” “One of the problems with websites is that the people writing the website are often extremely knowledgeable about the domain of the website. They forget that a lot of the people coming to the site are not as familiar.” Personas are a great way to “see” the people you are writing for. For universal plain language, personas should represent varying abilities with language and literacy, including non-native-language speakers, people with little education, and people with cognitive disabilities.

    The third planning question is: “Why are these people coming to your website?” To answer this question, you need to get inside the heads of your personas to learn what they want to know.

    When users go to visit a website, they have a goal, a need, a task in their heads. They are starting the conversation with the site.

    In that way, the web is different from paper. If I get an envelope in the mail, the writer has started the conversation. Sure, I have to open and read it; but I don’t start with something in mind. My first question is: “What is the author trying to do or say to me?”

    Online, it’s always the site visitor who starts the conversation. So the only way to write clearly is to ask yourself: “If someone comes to my website and they are interested in the topic, what do they want to know? What is the conversation?”

    This notion of writing for conversation can be a difficult concept to get across, especially to people who have no training in writing or who were trained to write for academic journals. “When people hire me to conduct a workshop on writing for the web, they assume I’m going to jump in and teach 10 plain language guidelines, but I don’t start there.” Instead, she starts with purposes and personas and the need for conversation. “You have to convince them that the only way to achieve business goals is to satisfy the site visitor’s conversation. Only then are they ready to work through the guidelines.”

    Plain language must be part of the design process from the start

    Implementing plain language in the design process requires content people, real content, and a commitment to conversation.

    • Content people. Every project should have professional content people on the team from the start–people who know how to write “clearly and conversationally.”
    • Real content. Prototypes should use real content from the beginning. And teams should test and modify content throughout the process, along with other design elements. Ginny urges, “No more lorem ipsum!”
    • Commitment to conversation. The design team should adopt a philosophy based on engaging in a conversation. “Your content strategy can’t be a one-way spewing of information. It needs to be answering site visitors’ questions. And if you think about content as a conversation, you are much more likely to write in plain language.”

    Making a commitment to plain language and integrating plain language into the design process improves accessibility in an integrated and holistic way. No one is adversely affected by language that is clear and to the point—in fact, everyone understands better. Working toward the goal of universal plain language is one of the best ways to improve the user experience for everyone.

    One of the most interesting aspects of the ADA movement has been how often something created to meet the needs of a special group of people has turned out to be useful for everybody. Plain language is the same. People think of plain language for a low literacy audience. But when we simplify and clarify for a low literacy audience, high literacy people benefit just as much, and sometimes even more.


    Responsive Design: An interview with Ethan Marcotte

    Posted on

    An edited version of this interview appears in Chapter 7 of A Web for Everyone.

    Ethan Marcotte is a designer who codes and a coder who designs. While many web professionals have this combined skillset, Ethan brings a high level of mastery to both disciplines. And like all great masters, Ethan is also a teacher. Ethan literally wrote the book on a flexible design approach called “responsive design.”

    We wanted to learn from Ethan how a flexible approach supports accessibility.

    Ethan started his career in the late 1990s, and has worked in design studios for most of it, including several years as Interactive Design Director for the award-winning design studio Happy Cog. Currently, he runs his own web practice, and has worked for a variety of clients including Stanford University, Sundance Film Festival, and New York Magazine. He recently finished a large-scale and groundbreaking redesign of the Boston Globe website, with Filament Group, in which he employed the “flexibility in use” principle in the design approach he invented.

    Balancing control and flexibility through responsive design

    “Flexibility is near and dear to my heart.” Ethan’s early career was spent creating flat graphics in Photoshop or Illustrator, vetting the designs with clients, and then implementing the design in code. “I started off building sites that were 620 pixels wide, then 760, then 960. Every couple of years there was a universal consensus established: ‘Okay, now it’s safe to upgrade.’ But natively, the web doesn’t understand width or height or anything like that.” This disconnect got him thinking about a more flexible approach.

    The essay by John Allsopp, “A Dao of Web Design,” helped Ethan see that by bringing preconceptions to the web based on a completely different medium–print–designers were getting in the way of the flexibility inherent in the web. And that flexibility was key to accessibility. Ethan began to explore ways to achieve “controlled flexibility” in web design, a place somewhere between absolute flexibility and absolute control.

    What is so fundamentally powerful about the web is that promise of access. We have the technology and approach in place to design sites that are as viewable on a feature phone as they are on a 27-inch monitor with the latest Chrome browser. Flexibility doesn’t require a sacrifice in the quality of the experience on one end or the other. It’s all about delivering content to people.

    Ethan says that content delivery is the primary goal of a responsive design approach: making sure that content is universally accessible, regardless of how it is accessed. It starts with structured HTML for content, and then uses CSS and JavaScript to progressively enhance the experience. CSS3 media queries provide the opportunity to “tune the display,” adapting elements such as color and size, depending on the needs of the user.

    “Responsive design is really just a new name for a lot of old thinking about capitalizing on the flexibility inherent in the web. We use flexible layouts by default, but then get some of the control and constraint we need as designers using media queries and other approaches to enrich the experience.” It’s an approach that looks at flexibility “not as something we need to work around or constrain, but rather as an opportunity to enhance the design.”
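    A minimal sketch of that combination, with hypothetical class names and breakpoint: the layout is proportional by default, and a media query adds the designer’s constraint only where the viewport allows it.

        /* Flexible by default: the column adapts to any viewport */
        .article {
          width: 90%;
          max-width: 40em;
          margin: 0 auto;
        }

        /* Controlled where it helps: wider screens get a two-column arrangement */
        @media screen and (min-width: 48em) {
          .article { width: 64%; float: left; margin: 0; }
          .sidebar { width: 32%; float: right; }
        }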

    In essence, responsive design is about “bringing the design to meet the users.”

    Redesigning the Boston Globe website

    Ethan and his colleagues at Filament Group implemented responsive design on the Boston Globe website. “It was a seriously fun project. There were a lot of interesting problems to solve. And the Globe was really committed to the idea of making their content as universally accessible as possible.”

    “We didn’t want to think about accessibility at the end of the project, as is usually the case. We tried to think broadly throughout the process, and work proactively rather than reactively.” Rather than work from a list of browsers and devices, they focused on characteristics, such as the size of the screen, the input model, and the availability of technologies such as CSS and JavaScript. “Those kinds of categories really helped when we were thinking more broadly about the design, both from the layout perspective and an interface perspective.” For example, rather than test how a carousel would work on a specific device, they asked how it would be accessible to someone who didn’t have JavaScript. Working from a master list of categories, they established a baseline for accessible content, and used progressive enhancement techniques to enhance the experience for more capable systems.

    The new bostonglobe.com, launched in September of 2011, has been cited as a “major step in the evolution of website design” (Beaconfire), providing an “elegant, readable website no matter what screen size you’re using” (Webmonkey). Ethan has been getting positive feedback on the site’s responsiveness, including in some scenarios they did not even consider during the design process. “Somebody posted a screenshot on Flickr after the site launched of the Globe as rendered on an Apple Newton. It’s structured HTML–there are headings, paragraphs, lists. You can still browse the site. That’s a testament to the portability of the technologies we work with.”

    Supporting responsive design in the design and development process

    The design process for responsive design is a departure from more commonly used methods. “It’s common in web shops to finish a design and then throw it over the wall to the development group, and those two groups never interact.” With responsive design, the approach needs to be more collaborative and iterative, testing ideas in a responsive framework and then iterating as needed.

    Also, with responsive design, it’s critical to move to HTML and CSS early in the design process, allowing for testing across browsers and devices, and using the built-in capabilities of different devices, such as VoiceOver on iOS devices. “If you treat mock-ups in the design phase as a ‘catalog of assumptions,’ and move quickly to responsive prototypes, you can test those assumptions directly in the browser. That cuts down the number of surprises.” And testing is key to accessibility: making sure that content is accessible and legible at the baseline and then enhancing up. For the Globe, they used the BlackBerry 4 browser as one such baseline: “If you give that browser more than a couple kilobytes of JavaScript to parse, it completely falls on its face.” In considering the experience of the site at the baseline, “You could argue that some of those experiences are less rich. What we found was that they matched the readers’ expectations for the device they had in their hand.”
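    One common way to hold a baseline like that, a sketch of the general technique rather than the Globe’s actual code, is to feature-test before loading any enhancement script, so less capable browsers simply keep the working HTML and CSS:

        <script>
          // Only browsers that pass the feature test download the enhancements;
          // everything else keeps the baseline HTML/CSS experience untouched.
          if ('querySelector' in document && 'addEventListener' in window) {
            var enhancements = document.createElement('script');
            enhancements.src = 'enhancements.js'; // hypothetical enhancement bundle
            document.getElementsByTagName('head')[0].appendChild(enhancements);
          }
        </script>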

    In some ways, the key to a successful responsive design is not the design or the technologies used: it’s the content. A “mobile first” approach is a great way to regulate the content requirements. “We talked about whether the content was useful to our mobile users. If the answer was ‘no,’ then we asked if it had value to anyone. Making a commitment to content across every device and every context was key.” The content management system and publishing workflows are important considerations as well. “If you are hoping to deploy the same content in different scenarios, then you have to look at the quality of the markup.”

    Ethan is a designer who appreciates the fine details of the craft, such as the rhythm and measure of a well-set block of text. In discussing design, he invoked Robert Bringhurst in speaking about “tuning the measure to the distance between the eye and the screen.” At the same time, he acknowledges, “We have no way to predict that.”

    But Ethan does not feel limited as a designer by web technologies. “Some people see HTML and CSS as a limiting factor. For me, there’s always been a way to do what I need to do.” In large part, that is because Ethan does not try to control his designs: “The notion of control is foreign to the web.” He finds that “letting go of the assumption of what you expect your design to look like is actually liberating.”

    I like thinking about how a design is going to read on a Kindle or a feature phone. Those might not be the most visually arresting experiences, but there are still opportunities for things to be designed. Even if someone is having content read aloud on a page, that experience is something we can design–we have the technology to do that now. For me it’s a process of discovery, trying to establish ideas on a page and then moving into code to finish the design process. That’s been fun for me.

    Ahead: More opportunities for responsiveness

    Looking ahead, Ethan sees more opportunities to build responsiveness into websites and web applications, including new CSS3 modules like flexbox and grid layout, which will allow greater control over the display order of elements. But in order to move forward with a responsive approach, he believes that we need to revise our expectations about control. For so long, our efforts have been about “slotting paragraphs into specific positions on pages.” For universally accessible content, Ethan urges designers to work with the native flexibility of the web, rather than trying to constrain it. “If your web app is a call to one JavaScript function that spits out HTML to the page, think about what that experience is going to be like for less capable browsers and readers. If you think flexibly from the outset, it makes a lot of things much easier.”
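    For example, assuming flexbox support, and with hypothetical class names, the document source can keep content first, which serves screen readers and baseline browsers, while the visual order changes:

        /* Content precedes navigation in the HTML source */
        .page {
          display: flex;
          flex-direction: column;
        }
        .navigation { order: 1; } /* drawn first on screen */
        .content    { order: 2; } /* drawn second, though it comes first in the source */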