Accessibility Research Methods with Jonathan Lazar


    Accessibility research can help us better understand how people with disabilities use the web and what we in product design and development can do to make that experience more successful and enjoyable. However, accessibility research is often carried out in academia. The valuable insights gained through research are shared and built upon among scholars, but often do not make their way into the practice of people who are designing and building digital products and services.

    In this podcast we hear from Dr. Jonathan Lazar, a computer scientist specializing in human-computer interaction with a focus on usability and accessibility. Jonathan has done a great deal of work bridging the gap between research and practice. He joins Sarah Horton for this episode of A Podcast for Everyone to answer these questions:

    • What are the different accessibility research methods, and what are they good for? When are they most effective in the product development lifecycle?
    • What are the broad benefits of accessibility research?
    • How can you get organizational buy-in for conducting accessibility research?
    • How can researchers and practitioners work together to advance accessibility?

    Transcript available · Download file (mp3, duration: 37:38, 34.9MB)

    Jonathan Lazar is Professor of Computer and Information Sciences at Towson University, where he directs the Information Systems program and the Universal Usability Laboratory. He has been working in accessibility research for 15 years, most recently focusing on law and public policy related to web accessibility for people with disabilities. His publication credits include Research Methods in Human-Computer Interaction, Web Usability: A User-Centered Design Approach, and Universal Usability: Designing Computer Interfaces for Diverse User Populations.

    Resources mentioned in this podcast

    A Podcast for Everyone is brought to you by UIE, Rosenfeld Media, The Paciello Group, and O’Reilly.

    Subscribe on iTunes · Follow @awebforeveryone · Podcast RSS feed (xml)

    Transcript

    Sarah Horton: Hi, I’m Sarah Horton. I’m co-author with Whitney Quesenbery of “A Web for Everyone,” from Rosenfeld Media. I’m here today with Jonathan Lazar.

    Jonathan is a computer scientist who works on topics related to accessibility and universal usability. Among his many activities, he directs the Information Systems program at Towson University and conducts research on what people with disabilities need to use the web successfully, and how well or how poorly we who build the web are meeting those needs.

    Jonathan has a great deal of experience with different research methods. We’re here to learn the benefits and drawbacks of different methods for evaluating accessibility of websites, applications, and apps, and how we can incorporate accessibility assessment into practice. Jonathan, thanks so much for joining us.

    Jonathan Lazar: It’s a pleasure to be here with you today, Sarah.

    Sarah: First of all, can you tell us about the Information Systems program at Towson?

    Jonathan: Certainly. Since 2003, I’ve been director of the undergraduate program in Information Systems at Towson University. At Towson, all of the computing programs are in one department. We have computer science, information technology, and information systems.

    In our program at Towson, we like to say that information systems focus on the four Ps — People, Process, Policy, and Profit.

    Students in our program learn a lot about human-computer interaction. They learn a lot about using technology for business needs, hence the Profit. They learn about international technical standards and the laws related to technology. They learn about design methods for including users in design to ensure a good outcome. They learn about testing and evaluation.

    Again, starting with a P for People, they learn all about human computer interaction and how to build interfaces that meet the needs of users.

    We’re really excited. This fall, we’re actually implementing four new career tracks at Towson University for our undergraduate students. Students will pick from one of these career tracks to help them focus their electives towards a specific job goal.

    Those four career tracks are user interface design, systems analyst, business analyst, and e-government. We’re excited, and students are already signing up for them. We’re going to help them go directly into these different careers, and it really helps define what we’re interested in and what our goals are in information systems.

    Sarah: That sounds like a great program. Sounds like something I’d like to sign up for. [laughs]

    Also, I know you direct the Universal Usability Lab, which is also somewhere I’d like to spend some time. It sounds like it’s right up my alley. Can you tell us a little bit about that as well?

    Jonathan: I founded the Universal Usability Laboratory at Towson in 2003. We just celebrated our 10th anniversary. The goal of our laboratory is really to do research but research focused on practitioners and policy makers. We’re not doing theoretical models. What we do is research that can really help improve the outcomes of technology in our community and in society.

    There are a number of people involved: myself, Heidi Feng, Joyram Chakraborty, and Suranjan Chakraborty, who are all faculty, as well as a number of doctoral students and a number of graduates of the lab.

    The idea is we do research about user diversity. We’re interested in people with perceptual impairments, motor impairments, and cognitive impairments. We’re also interested in users with very little computer experience. We’re interested in older users and younger users. We’re very interested in these issues of user diversity.

    Our research tends to be targeted towards industry and practitioners. For instance, we do a lot of work that we publish through the UXPA. We’re also very interested in doing research that informs policy makers.

    I can talk about this a little bit more later, if you want. Typically, if you’re aiming your research towards policy makers versus impacting those who fund public policy and government policy, it’s a little bit different.

    Our research is really aimed at impacting policy makers and impacting practitioners and developers.

    Sarah: You must have quite a toolbox of methods for doing that research. One thing we’d really like to learn from you today is about some of those methods.

    Do you have favorite methods? Are some things good for some tasks and not for others? How do you convey the information from your research to those policy makers in a way that is persuasive, and hopefully prompts some action on their part?

    Jonathan: Generally, we have three different types of evaluation methods related to accessibility. You could have your typical usability testing, where you have people with disabilities attempt to perform tasks on whatever level of interface you have, whether it’s an early prototype or a fully finished interface.

    You can have expert inspections, or expert reviews, where you have interface experts or accessibility experts go through a series of interfaces, whether we’re talking about screens on a desktop, mobile devices, or an operating system. These are, again, inspections; it’s not users attempting tasks.

    For things like websites, you also could do automated accessibility reviews, where you have a software tool: something like Deque WorldSpace, something like the free tools on the web, such as WAVE, or something like the paid tools, such as Comply First.

    There are a lot of tools out there that you could use. They have limits, though. Really, what you need is some combination of user testing, expert reviews, and automated reviews.

    The question really is, which are the appropriate methods to use, and in which scenario?

    Let’s start from a practitioner point of view. The most important thing to do is actually impact the design. Your number one goal is to impact the design. You could do a perfect method and a perfect data collection, and it could last six months. If you spent six months doing it, you wouldn’t actually influence the design. They would have moved ahead without you.

    Let’s first say that the most appropriate method that you can use is the method that will actually lead to results. Given that, you have to look at what the budget is. You have to look at what the timeline is. You have to look at where you could actually impact the design to improve accessibility. That’s the first thing.

    Sarah, I think you would agree with me. If you’re going to do a perfect study that won’t get anything done, why do it?

    Sarah: Exactly. It’s a very good point about timing, and things like that. That really influences whether anyone can actually do something with the results of your research.

    Jonathan: When I talk with my students, I always tell them, “What’s the right number of users? What’s the right level of testing?” It’s whatever you can fit into the timeline, given the budget, given the limitations, that actually will allow you to influence the design. That’s what we’re interested in. We’re interested in influencing the design.

    For someone to say, “Well, we must do a strict 40-user usability test,” that’s not likely to actually impact the design.

    Let’s talk about some of the strengths and weaknesses of each of these three methods. When we talk first of all about Usability Testing — we’re talking about getting users with disabilities — one of the challenges there is that you have to determine which users with disabilities, and which disabilities, are likely to use whatever that interface is, that website, that device.

    One of the challenges is if you say, “Well, I’m going to have blind people test it. I’m going to assume that applies to everyone with every disability.” Clearly, that’s not true.

    What you need to do is get a sampling of different disabilities that represent the target user for whatever operating system, interface, website, device you’re referring to.

    You need to make sure that you have a sampling of not just people with one disability. People with a disability typically can only find accessibility problems that relate to their disability.

    One of the strengths of user testing, first of all, is that it goes beyond simple technical accessibility to make sure the interface is easy to use.

    One of the core problems we often talk about is that someone will say, “Well, I followed the technical standards, so I think it’s accessible.” In fact, it may be accessible, but really hard to use. So it’s not really usable.

    User testing is also really good for determining the usability of multi-step processes. Let’s say you have dialog boxes or a series of screens, something like signing up for a new email service, purchasing an item online, or registering for classes, where it’s not just one screen but five or six different screens that you have to go through. Usability Testing is most accurate on that.

    Next, on to Expert Inspections. Expert Inspections, as you know, should always be done before Usability Testing, if the schedule allows. Experts can find some of the obvious flaws first, get those improved, and then hopefully the users can find the more fine-grained problems related to accessibility. So, if possible, do an Expert Inspection first.

    We typically have people who are experts in accessibility. Sarah, I’m sure you’ve done many expert inspections, right?

    Sarah: That’s right. What you’re saying is you do an Expert Inspection prior to Usability Testing?

    Jonathan: Absolutely. If you can do that in the schedule with the budget, you should do that. Hopefully, the expert in interface design can find a few major accessibility flaws, get those fixed, and remove those by the time that the users actually get to evaluate the interface.

    Expert Inspections typically won’t figure out if the interface works really well, in terms of ease of use. They can determine if it technically meets the requirements, but there may be things that relate specifically to the task. Obviously, the expert may not have deep task knowledge. They may have deep knowledge of the interface.

    One thing an Expert Inspection is really good at, though, is determining compliance with either technical guidelines or legal requirements.

    Typically, usability testing will tell you where there are some flaws, but you can’t run three blind users through an interface and have them tell you whether it complies with Section 508 of the Rehab Act or not.

    An expert inspection is typically better at evaluating a series of interfaces strictly for compliance. Basically, the expert’s going, “OK, does it meet paragraph A? Does it meet paragraph B?” That’s expert inspections.

    Automated reviews are actually the weakest evaluation method; however, they have one strength.

    I’ll tell you why they’re weak first. They’re weak because, obviously, if you can have human experts inspect an interface, that’s better. Or, if you can have real users with disabilities evaluate an interface, that’s better. The real weakness of automated accessibility testing tools is that they often can’t tell if something is useful.

    The example that’s given often is that images on web pages need to have alt text: alternative text that describes what is shown in a certain image, or something like that. Maybe there’s alt text for an image, but the alt text is the word “blank,” or the alt text is “picture here.” Or, let’s say there are 20 pictures on a web page and the alt text for every single picture is “hamburger.”

    Unless you’re actually running a hamburger store and all your pictures are indeed hamburgers, an automated tool would mark this as meeting that paragraph of the law, because you have alternative text for a picture, even though the alternative text might not be useful at all. It’s “hamburger, hamburger,” or it’s “picture here.”

    The idea is that automated accessibility inspections, the software tools, are actually the weakest. However, they scale really well.

    Given that with user testing you want to get as many users as possible, but you’re probably not going to be able to get 50, 100, 150 users, and given that expert inspections are great but you’re probably not going to have time for the expert to inspect every single web page in a website, the automated accessibility tools scale really well.

    You can have a tool spider through your whole website. You could have it examine 10,000 web pages.

    Automated tools are good at giving you the overall picture of how your website is doing and what types of flaws occur most often. A lot of times, though, the automated tools will either give you information that can be misleading: “Oh, it has alt text,” but the alt text isn’t actually useful.

    Or, it may give you results that you have to interpret. It’ll say “There are manual checks required due to the presence of these features.”
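    To make that limitation concrete, here is a minimal sketch, in Python, of the kind of alt-text check an automated tool might perform. It is an illustration only, not the behavior of any particular product such as WAVE or WorldSpace; the placeholder list and the sample markup are assumptions for the example.

        # Minimal sketch of an automated alt-text check. It flags images with
        # missing or placeholder alt text, but it cannot judge whether an alt
        # value like "hamburger" genuinely describes the picture; that still
        # takes an expert inspection or user testing.
        from html.parser import HTMLParser

        PLACEHOLDERS = {"", "blank", "picture here", "image", "photo"}  # assumed list

        class AltTextChecker(HTMLParser):
            def __init__(self):
                super().__init__()
                self.issues = []

            def handle_starttag(self, tag, attrs):
                if tag != "img":
                    return
                attrs = dict(attrs)
                alt = attrs.get("alt")
                src = attrs.get("src", "<unknown>")
                if alt is None:
                    self.issues.append(f"{src}: missing alt attribute")
                elif alt.strip().lower() in PLACEHOLDERS:
                    self.issues.append(f"{src}: placeholder alt text {alt!r}")
                # A present-but-useless alt such as "hamburger" passes silently here.

        checker = AltTextChecker()
        checker.feed('<img src="logo.png"><img src="menu.jpg" alt="picture here">')
        for issue in checker.issues:
            print(issue)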

    Really, you want some mix of automated review, expert inspection and usability testing. Of course what combination you use and how much depends on the timeline for your project. It depends on the budget for your project. It depends on how you will be able to influence the design.

    We shouldn’t focus on doing perfect experimental design of our research here. We should focus on, “How do I influence the design and what methods will give me the information I need to fit into the timeline and budget, to actually influence the design?”

    Sarah: Thanks, Jonathan, for that very clear and thorough exposition of those three methods.

    It’s great to think about how each of them might be used in different ways within a project, and how each has strengths within the context of different constraints, like time, resources, and the number of people available to do usability testing, for example.

    A couple of questions come to mind. One is, who should be asking these questions about what needs to be done and when? How can user experience designers, in particular, know when and what to do, what tools to use, and how to add these tools to their tool sets for making decisions about design and integrating these research questions into the process of designing and building?

    Jonathan: User experience professionals are really in the right situation to be able to have an impact. User experience professionals really need to advocate for accessibility.

    Accessibility happens as a group effort. If you think about a university, for example, you have to have a number of different people involved. Not only do you have to have the disability student services office, you also have to have the CIO’s office, the people who control the technology. You need to have anyone who is involved with diversity. You need to have the provost involved.

    The user experience professionals have an advocacy role to get out there and inform people about technology accessibility. Really informing people and making them aware is what I’d say is the biggest challenge. In many cases, people simply don’t know.

    If they didn’t grow up with people who are blind, people who are deaf, people in wheelchairs, they don’t know. They’ve never considered how someone who’s different than them might use technology. Very often it’s an awareness issue.

    User experience professionals, because they’re already going out there informing people about usability and all those issues for user centered design, they’re really in a great position to advocate for accessibility. To let people know, “Hey, you have all these different users. Here’s how they use your technology.

    Here’s what you can do to make it better. We have these technical standards. Do you love what we’ve already done with user experience? Guess what we can do related to accessibility.”

    You asked also, I believe about tools, right?

    Sarah: Yeah.

    Jonathan: One of the problems with most developer tools is that there isn’t much about accessibility that is front and present to developers in the tools.

    Imagine if you had a web development tool that every time you insert an image, it would immediately say, “What is the alt text?” If you didn’t type in alt text, it would stop you from moving any further. Wouldn’t that be great?

    Sarah: It would be great.

    Jonathan: One of the problems is that developer tools — whether they’re web developer tools or any other developer tools — often don’t make accessibility front and present. Yeah, there might be some accessibility features hidden away somewhere that you have to look and search for, but they’re not front and present.
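    As a rough illustration of the kind of gate Jonathan imagines, here is a small hypothetical sketch in the same spirit; the function name and placeholder list are assumptions for illustration, not any real authoring tool’s API.

        # Hypothetical sketch of an authoring-tool check: refuse to emit an
        # <img> tag until the author supplies alt text that isn't an obvious
        # placeholder. Illustrative only; not a real tool's API.
        PLACEHOLDERS = {"", "blank", "picture here", "image"}

        def insert_image(src, alt=None):
            """Return an <img> tag, but only if descriptive alt text is provided."""
            if alt is None or alt.strip().lower() in PLACEHOLDERS:
                raise ValueError(f"Provide descriptive alt text for {src!r} before continuing.")
            return f'<img src="{src}" alt="{alt}">'

        # insert_image("logo.png") would raise; this call succeeds:
        print(insert_image("team.jpg", alt="Lab faculty and students at the 10th anniversary celebration"))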

    That’s a common problem in many organizations. You ask them, “How are we doing on IT accessibility?” They say, “Well, we don’t know,” rather than saying, “Here’s how we’re doing. In the month of May, here’s the type of evaluation we did for accessibility. Here’s how we did, but we’re going to work on improvement.”

    Think about the airlines. The airlines post, “In the last month we are 87 percent on time, which was an improvement over last year.” Why can’t we get organizations to do that for accessibility?

    “We’re 87 percent accessible, and we’re getting better every month.” You rarely see any organizations with that level of transparency, right Sarah?

    Sarah: Right, but I guess my initial question would be, are they doing those assessments to begin with? Are there numbers to report on?

    That actually feeds very well into my next question about what organizations can do to incorporate some of the research methods that you’ve been talking about to build better products. Then they would have numbers to share.

    At this point, I don’t know if many companies, organizations or design teams are actually monitoring accessibility in quite that way.

    Jonathan: You read my mind, because that was the next thing I was going to talk about. Why do they often not report statistics? Because very often they’re not doing evaluations. The organizations — whether they’re government agencies, universities or companies — they don’t know how they’re doing. They’re not doing the evaluations.

    It’s a hidden secret. “We don’t know how we’re doing, so don’t ask about it.” Instead, what we need to get organizations to do is to talk about it and say, “Look, we haven’t done evaluations. We’re going to start.”

    If you look at 2010, there was a memo that came out of the US federal government that talked about the fact that the Justice Department hadn’t been doing its evaluations of federal government websites since 2003, but was going to start doing them again.

    I really give the federal government credit for that. There were a number of parties involved: the Justice Department, OMB, GSA. They really all said, “Yeah, we haven’t done this, the law requires it, and we’re going to start doing it again.” And the Justice Department did.

    What we need is to get companies to follow the lead of the Justice Department, in saying that, and admitting that. “Yeah, we haven’t really paid attention, but we’re going to start paying more attention, we’re going to start doing more evaluations, we’re going to start making this a priority.”

    Because once you find out how you’re doing, of course, what’s the next step? “Oh, we’re not doing well. Well, we need to start making some improvements.”

    Sarah: Jonathan, I really agree with what you’re saying about organizations needing to be more transparent about the work that they are doing in accessibility. What I’m trying to get to in this conversation is some practical guidance as to how to go about doing that research. This is one thing that organizations struggle with, not really having the tools and the knowledge that you’ve been sharing with us today about how to assess accessibility and improve it.

    As you know, because you were there, I recently attended the Cambridge Workshop on Universal Access and Assistive Technology, which you co-direct. It was a really great conference, and I was delighted to be able to participate. I was so struck by the insights that were shared at that conference, from the research community and the scholarly community, primarily.

    I also came away, though, with the feeling that all of that great information and insight isn’t making its way into practice, into organizations, so that groups know how to move forward, based on the insights gained through that knowledge. People who are looking really deeply into accessibility are in one place, and the people who are trying to provide accessibility are in another. There is a big gap, in between.

    From what I know of your experience, you’re a person who is both a researcher and a practitioner. You work with researchers and you work with practitioners, and you have found a way to bridge that gap. Could you tell us a bit about how you’ve found ways to do that, so that organizations can move forward with accessibility in a more deliberate and informed way, benefiting from all the insights in the research community?

    Jonathan: My approach is simply that I talk with everyone. If a group wants to have me present, if there are some people I want to do outreach to, I just go talk with them. A lot of times there’s a hesitation, especially among university researchers, not as much the industrial researchers, to work with practitioners.

    There’s this silly hesitation about, “Well, we want to stay in our research lab, we want to do a clean study. We want to do it this way.”

    I say, the world is messy. Let’s get out there and work on influencing the world. Yes, it’s messy. Yes, you can’t control as many factors. Yes, it may not be a neat study that you can publish in a certain journal. But the reality is, what could be better than actually influencing practitioners in UX, or developers, or policy-makers?

    The key thing is to first engage with people and say, “I’d really like to talk with you more about the topic of accessibility.” UX developers go out and talk to researchers, researchers go talk to practitioners and policy-makers.

    The first step is simply to engage, and to talk. The next step is to find out: what do you need? What information do you need, and in what format do you need it? Because if you present to different communities, to all these different people I talk with, researchers and practitioners, they all have different needs. We talk about user-centered design; you need to understand the user’s needs.

    For instance, if you look at public policy-makers, there’s very little data from the UX and HCI communities that actually is influencing public policy. Why? Because we don’t have things prepared in the format that policy-makers need. For instance, policy-makers are very interested in year-after-year studies.

    They don’t want to know if we’re doing perfect work, they want to know, are we improving? So you’re saying, maybe, “Three years ago, our websites were 50 percent accessible, and now we’re at 70 percent. Our goal next year is 75 percent.” That’s great, we’re making progress. You might say, “But we’re only at 75 percent.” But a policy-maker sees progress.

    They’re very much interested in longitudinal studies. In the HCI research community, we don’t do many longitudinal studies. On the other hand, if you look at, let’s say, the healthcare community, the medical research community, they do tons of longitudinal studies. We have to figure out, what do other communities need?

    When I present to practitioners, I always make sure to give lots of examples, and be very specific about policies. One thing that, as researchers, we often do is say, “There’s a law that says that it must be accessible.” We need to learn, when speaking with these other communities, to be specific. What, specifically, is the law? Is it a federal law? Is it a state law? What does it cover? Who’s covered? What type of compliance mechanisms are in place?

    It’s very often about first engaging with these other communities, and then really trying to figure out: what do they need? What format do they need things in? What are their questions? Say, “What are the questions you’d really like to have answered?” That’s how you do it. And don’t be scared of messiness. The world is messy. Yes, we have to get out of our universities and engage with the world.

    I’ll give you an example. I’m working on a project, right now, with my undergraduate students. I teach a class that’s just about technology designed for blind users. The students are working with Baltimore County Public Library, to evaluate the services that the Baltimore County Public Library offers for people who are blind or low-vision.

    I tell the students up front, “I think this is going to be the most awesome project we’ve ever done in this class. I could be wrong. It may not work out well. But that depends partially on the amount of work you put into it.” When you do a real world project, you have to be up front about that. Yes, it’s going to be messy.

    Rather than be in our usability lab and focused just on experimental design, if we’re going out and trying to implement things in the real world, there will be lots of unexpected things that we find along the way. We’ll probably find some new flaws in our interface. We’ll probably find some technical challenges that we have. That’s exactly why we should do it.

    Because, if we do only experimental studies in the lab, very controlled, we’re not really able to influence the world. We have to get out there in the world, and see what real challenges people face. What are the real challenges with the technologies we build? How do we tweak the technologies? Where are there problems in our technologies?

    How do we impact on public policy? How do we impact developers? How do we impact practitioners’ end-user experience?

    These are really my ideas. Engage with people, find out what they need, and don’t be afraid of failure, don’t be afraid of messiness.

    Sarah: Sounds good. That’s really great. You’re putting the onus primarily on the research community. Is there something that the practitioner community can do to work the other way? There were very few people from industry at that workshop I mentioned. How should practitioners, the people who are out there building things, be engaging with the research community?

    Jonathan: Practitioners should do everything they can to reach out to university researchers, as well as industrial researchers, and say, “Here are some questions that we don’t have resolved in our community. What could you contribute to making this happen? We want to find a way to work together.”

    And realize that, as they do that, you have to get to know other communities. You have to get to know, what are their reward structures? What do they get credit for? What do they get dinged for? That’s part of it.

    I wasn’t just speaking as a researcher. I was saying practitioners should absolutely go out, talk with researchers. Both practitioners and researchers should go talk with policy-makers. Go talk with your local government official, who probably will actually, really want to talk with you. They’ll want to say, “Yeah, I’ve been having these problems, and I don’t have data on this. I don’t understand the problem. Can you give me some more information?”

    It’s many different communities. It’s technology developers and software engineers. It’s UX practitioners. It’s researchers. It’s policy-makers. We all have to get out of our comfort shell, and go out there and explore with other communities. Be willing to fail. Be willing to say, “It may be messy, but we need to at least start the engagement process.” A lot of people never even get that far.

    Sarah: You’ve mentioned policy a few times. I know that you spent a year at Harvard, as a Fellow, researching public policy. It’s interesting how some of your insights today relate to ways to use research techniques and results that are going to be persuasive and affect public policy. Could you talk to us a little bit about how you ended up taking this path of researching public policy as a computer scientist? It’s not a common path.

    Jonathan: [laughs] No, it certainly is not. It was very interesting. I’ve been doing accessibility research for about 15 years now, and I’d been doing user diversity work before that. What kept happening is that I would get calls and emails from policy-makers, asking me, “Hey, do you have any data on this?”

    I would receive requests from the disability community, from the advocacy community, saying, “Can you go talk as a researcher, about research foundations for this bill in the legislature?” Policy-makers at the federal level, too, would reach out to me. Over time, I kept seeing this pattern. I kept getting requests for information to really help influence public policy.

    No one’s asking me to be an expert on law or policy. What they’re saying is, “Can you give me data? Do you have research studies that can answer some of our policy questions?” That’s, at the core, what I’m interested in. I’m interested in, at least from the policy point of view, how can we use human-computer interaction, usability experience, accessibility research data, to help inform policy-making?

    Because a lot of policy-making in this area doesn’t have any data, it doesn’t have any research behind it. There are a lot of other fields that do much better at this, than we do.

    Over the years, I kept getting requests for, again, “Do you have any data on the following topic? How is our state government doing? Can you talk a little bit about this bill and, from a scientific point of view, what this bill in the legislature means?” Over time, I kept getting more and more requests. I freely admit, I don’t have a public policy or a law background. My background is in human-computer interaction. I was getting request after request.

    I’d been involved with SIGCHI, the special interest group on computer-human interaction. I had been a founding member of the US Public Policy Committee for SIGCHI. Later, that role was expanded: in 2010, they created a new position called the International Chair of Public Policy, and they asked me to serve in that role.

    I’m doing more and more public policy, and I thought, because I’m doing public policy, I really need to have a little bit more of a foundation in disability rights law and public policy.

    I applied to a number of different places, and I was very thrilled that I won a fellowship at the Radcliffe Institute for Advanced Study at Harvard University. The Radcliffe Institute is a fantastic place. They specialize in people who do interdisciplinary work, and they will fund a portion of your salary to spend a year at the Radcliffe Institute.

    You apply for one of these Radcliffe fellowships, and it’s about a five percent acceptance rate. They have 50 every year, across all fields.

    I was thrilled. I won one of the Radcliffe Fellowships, so I was the Shutzer Fellow at the Radcliffe Institute for Advanced Study. I spent a year investigating and researching the intersection between disability rights law and public policy, and human-computer interaction for people with disabilities.

    As you know, I was really involved with the Harvard Law School Project on Disability and Michael Stein. In fact, we already had some publications out about these topics, related to, for instance, societal discrimination. There’s also a video: if you go to YouTube and search on Jonathan Lazar Harvard, you’ll find a video of my fellowship presentation, which talks all about the societal discrimination against people with disabilities that occurs when a website is inaccessible.

    If a technology is inaccessible, how does that lead to a form of discrimination, like employment discrimination or pricing discrimination? Over time, I got more and more involved with public policy and I said, “I want to do something related to policy and law for my sabbatical.”

    Again, I applied, and I was thrilled that I won one of the fellowships at the Radcliffe Institute. That really has helped me get a much deeper understanding of public policy and disability rights law related to my human-computer interaction work.

    For instance, I continue to do work in ACM SIGCHI, where I continue to serve as International Chair of Public Policy. We’re working on a report to serve as a foundation for understanding human-computer interaction and public policy.

    Also, if you look at SIGCHI, I’ve been involved with, though I’m not leading, the effort to make SIGCHI more inclusive for researchers, practitioners, developers, and students with disabilities. SIGCHI has been working both on conference accessibility, making sure that our conference locations are accessible for people with physical disabilities.

    We also have been working on digital accessibility, working on improving our conference website, working on improving our submissions to the digital library, so we are making progress on making SIGCHI a more inclusive organization.

    Sarah: Now that you’re back at Towson after that year of doing research into public policy, are there things that have changed the way you approach accessibility research, for example the three methods that we talked about earlier, and how you pull that all together at this point? Are there ways of doing accessibility research that we in the UX profession should be looking to in order to influence things like public policy, in terms of how we administer and use our research methods to learn about the accessibility of products and services?

    Jonathan: I certainly learned a lot last year on the fellowship. I learned a lot about disability rights law and have a much deeper understanding of the law. One of the things I think is important for all UX professionals to understand is that anytime you talk about policies or laws, you should be very specific.

    That’s something I really learned last year: people cite specifically, “Title II of the Americans with Disabilities Act, Paragraph III.” That’s the way that people in policy and law typically refer to things, rather than saying, “There’s a law that said so.”

    Anytime we reference a law or a policy, we need to be very specific about what we’re referring to. I do think that when you look into not only the laws, but the regulations, when you look into legal settlements, you see a little bit of a trend where the legal settlements now are being much more specific about the evaluation methods required.

    You didn’t use to see that. It used to be that some form of testing would be required, some evaluation. Now, for instance, if you look at the two recent legal settlements with the University of Montana and Louisiana Tech, they’re very specific about the type of evaluation methods required.

    For instance, for one of the settlements, the university has to file an annual report documenting compliance with the Department of Justice. With the other one, they have to do user testing involving people with disabilities.

    That’s slowly starting to become more encoded in all the various forms of policy: the statutory laws, the regulations, the legal settlements, and such. That’s something that we really could help with. The more the UX profession can help inform policymakers about the different methods of evaluating for accessibility, and their strengths and weaknesses, the more information we can put out there.

    Again, the more transparency we can get, the more we can talk about it because a lot of people still don’t know. If you went to these universities, a lot of the higher-ups say, “Well, I had no idea. I didn’t know.” We need to do a much better job educating people out there about accessibility and different evaluation methods for accessibility and why it’s important.

    That’s my charge to everyone who’s listening to this podcast: get out there. Talk with people. Connect with people. Inform them about accessibility and why it’s important. Give them your business card. Make sure that you do your best to get the word out, because there are still a lot of people out there who are not aware.

    Awareness, openness, and transparency are really the best ways that we can move this topic and this agenda forward.

    Sarah: Thank you so much, Jonathan. That’s all really helpful and insightful. This has been Jonathan Lazar talking to us about the best ways to gain and share insights through research to help in building a web for everyone.

    Many thanks to you all for listening and to our sponsors — UIE, Rosenfeld Media, The Paciello Group, and O’Reilly — for making this podcast possible. Follow us @awebforeveryone on Twitter. That’s @awebforeveryone. Until next time.