Minutes from 10-11 April UA face-to-face available

Hello all,

The minutes are available online ([1], [2]) and linked from the home
page. The Working Group was able to make a pass through all of the
issues, and to resolve almost all of them. I will update the issues
list as soon as possible. I have quoted the minutes below in text.
These are raw minutes and may be edited.

On behalf of the Working Group, I would like to thank George Kerscher
and RFB&D for hosting our meeting. Everything went very smoothly.

Thank you,

 - Ian

[1] http://www.w3.org/WAI/UA/2000/04/rfbd-20000410
[2] http://www.w3.org/WAI/UA/2000/04/rfbd-20000411

-- 
Ian Jacobs (jacobs@w3.org)   http://www.w3.org/People/Jacobs
Tel:                         +1 831 457-2842
Cell:                        +1 917 450-8783


              Minutes from 10 April 2000 UA face-to-face at RFBD

Participants

     * Jon Gunderson (Chair)
     * Ian Jacobs (Scribe)
     * Harvey Bingham
     * Mickey Quenzer
     * Gregory Rosmaita
     * Charles McCathieNevile
     * Hans Riesebos
     * Rich Schwerdtfeger
     * George Kerscher
     * Eric Hansen

   By phone:
     * Jim Allan
     * Kitch Barnicle
     * Mark Novak
     * Madeleine Rothberg

   Agenda

Issue 208

   Resolved:
    1. This checkpoint was meant to allow configuration of prompting;
       this is a clarification.

   Action IJ: Propose wording to the list.

Issue 209

   Resolved:
    1. If a document is not rendered when there is no style sheet, this
       is not an accessibility issue.
    2. The user agent should make available in an accessible manner the
       fact that only the default style sheet is active, which may cause
       no rendering.

   Action IJ: Add to techniques.

Issue 210

   Resolved:
    1. Define "author-specified" to mean what the UA recognizes in markup.

   Action IJ: Add to document.

Issue 211

   Resolved: Add to checkpoint 2.5 that equivalent alternatives need to
   be recognized.

   Action IJ: Propose change to the list.

Issue 212

   RS: We talked about different versions of the Guidelines in the
   future.

   HB: We want new technologies to come on board with where we are.

   RS: We don't have a crystal ball to predict all the devices that are
   coming out.

   JG: We may want to attend conferences where new devices are presented
   and discuss participation in WAI by those developers.

   JG: One reason we did not expand the scope of the guidelines was that
   some AT developers were not thrilled with the requirement for
   interoperability/communication.

   EH: Definitions seem tidy in PR version of guidelines, so we don't
   need to say much more about what devices or not are "covered". What
   more do we need to do?

   RS: Why don't these guidelines cover more? E.g., could have a palm
   device with the DOM.

   CMN: I gave a talk in Japan. The mobile people didn't understand why
   they should be concerned. It's still valuable to point people to
   these guidelines, even if they don't apply entirely to a given
   device.

   RS: There is a need for more detailed guidelines for other devices. I
   think we should say that these guidelines were designed primarily to
   address desktop and other "heavy-weight" user agents and that we'll
   try to address pervasive user agents in future work.

   MQ: We don't want to discourage readers. Some of the guidelines apply
   anyway.

   EH: I think that we ought to use the terms already defined, say that
   not every checkpoint is applicable to every user agent, and leave it
   at that. This means that some specialized user agents may have only
   6 applicable checkpoints, but they're still user agents and should
   satisfy them.

   Resolved:
    1. The Guidelines address applicability already. This means that we
       recognize that some checkpoints don't apply to all user agents,
       but many may apply to new user agents coming to market, and we
       want these developers to consider how these checkpoints apply in
       their devices.
    2. The WG intentionally chose to limit the intended audience of
       user agents in this version to graphical desktop user agents
       (e.g., graphical browsers, media players). In part, this was
       decided because assistive technology developers were concerned
       by the requirements for communication with other user agents.
       The WG chose to address general-purpose graphical user agents
       instead of specialized user agents because the latter generally
       address their target audience well and are not intended to be
       universally accessible.
    3. The WG is developing a new charter and plans to address how
       future versions of the Guidelines may apply more specifically to
       assistive technologies and mobile devices.
    4. Rapid evolution of these devices makes it difficult today to
       provide them with more guidance than we already do.

   Action HB:
    1. Ensure that the EO considers this question for the FAQ: Which
       user agents are covered by these Guidelines? Mention that the WG
       will consider other classes of user agents in future work and
       will seek participation by developers.

Issue 213

   MQ: It's a problem that applications don't know about each other's
   hot keys.

   CMN: We can't solve that, but we can say don't make it worse.

   Resolved:
    1. Change "language features" to "markup language features".
       Clarification.
    2. Add examples, at the end of the paragraph before the note, of
       what readers will find in UI style guidelines.

Issue 214

   RS: You can't get at image map information for server-side image maps
   since it's on the server.

   CMN: You could get at the information on the server by flooding the
   server. If we don't say "client-side" then we are requiring flooding.

   MQ: We don't support server-side image maps. I don't quite understand
   the reason.

   RS: Server-side image maps are based on a pointing device.

   CMN: The other alternative is to expose the image map and ask users
   to name points. And you've met the requirement.

   Resolved: Leave it as is.

Issue 215

   Resolved: Say that the button should have a text equivalent (adopt
   the proposal).

Issue 216

   CMN: It's just an example.

   Resolved: Add "For example" at the beginning of the sentence.

Issue 217

   CMN: Yes, this is an alternative equivalent.

   MQ: I think we're covered.

   HR: A location equivalent would mean something like "these controls
   are close together and that means that they're related". The sounds
   are not equivalent alternatives if they don't provide that
   information. Three-d sound does exist...

   IJ: This grouping should be addressed by authors with markup (e.g.,
   FIELDSET, OPTGROUP). We are covered by structured navigation.

   CMN: My interpretation of the question was that it was about "where
   am I right now?"

   RS: ATs provide "where am I" functionality.

   Resolved: No change.
    1. The author should not rely on graphical rendering to convey
       semantics. When markup is provided, the UA should follow the
       spec.
    2. "Where am I" functionalities are offered by ATs.

Issue 218

   Resolved:
    1. Move and edit the sentence to 3.1 Note.

Issue 219

   Resolved:
    1. Adopt proposed editorial change.

Issue 220

   EH: I think that the expression "respect sync cues" is nice and more
   general. Sometimes synchronization is tighter or looser.

   KB: There may be times when people want to view them at different
   rates, and this doesn't exclude that.

   HR: Not all media can be slowed down to the same rate.

   Resolved:
    1. No change. Add a cross reference to 2.6
    2. In 2.6, change to author-specified.

Issue 221

   RS: Recall that if the OS feature is used, it must be accessible.

   CMN: The time when you need controls is not the gross volume, it's
   the mixing volume. You'll need the UA to provide easy access to that
   (which typically will be by punting to the OS).

   Resolved:
    1. This is covered by definition of native support.

   Action IJ: Add note to Techniques.

Issue 222

   Resolved:
    1. This is covered by definition of native support.

   Action IJ: Add note to Techniques.

Issue 223

   IJ: "Changes" is generic. Not clear what is required.

   RS: I don't think we intended to allow the user to configure keyboard
   focus changes.

   GR: There's the issue of how ATs pick up focus on new windows.

   Resolved:
    1. Allow the user to configure how the focus moves when there are
       multiple viewports.

   Action IJ: Tailor this wording and propose to list.

Issue 224

   HB: Include notification that another viewport has opened.

   JG: The two main things have been focus changes and programmatic
   notification (covered elsewhere). Also, that user configurations are
   inherited by new viewport instances.

   JA: I think that the minimum is what 5.7 says (notification).

   CMN: I think that the minimum requirement relates to "configuration":
   The minimum requirement is to configure those things specified by the
   document. And perhaps turning off is part of the minimum
   requirement. Point people to the definition of configure.

   EH: We can add examples to the checkpoint. If you're relying on the
   definition of "configure" and it has circularity with the
   checkpoints, that needs to be corrected.

   EH: I don't think that our clarification means we need to state a
   minimal requirement, but clarification is necessary.

   The WG feels that this checkpoint includes:
     * Notification (covered by 5.7)
     * Prompting for opening or not, plus configuration of prompting
     * Focus changes (covered by 4.15)
     * Inheritance of configuration.

   MN: These are listed in the techniques document.

   RS: We shouldn't have dialogs prior to opening new viewports.

   IJ: Recall that we used to allow turn on/off, but the SYMM WG said
   this didn't work for SMIL presentations. Thus, configurability.

   GR: I think that notification is key. Duplicate views should respect
   focus position, otherwise it might be disorienting.

   IJ: I don't believe that we have a requirement for notification
   through the UI when the focus changes viewports.

   IJ:
    1. I don't think having several viewports is an accessibility issue.
    2. I don't think changes in the number of viewports are an
       accessibility issue if (1) there is notification programmatically
       and through the UI, (2) the user can navigate to the new
       viewport, and (3) the user can configure how and when the focus
       changes.

   RS: User agents today don't have a mechanism for programmatic
   notification of change other than a focus change. DOM 3 has a notion
   of views and we should address this in DOM 3.

   Action RS: Take this to PF as a DOM 3 requirement.

   IJ Proposed: The requirement is that the user be informed
   (accessibly) when a viewport is created or destroyed (that has not
   been created or destroyed on request from the user). The same
   requirement should apply for the focus.

   RS: Both of these requirements are covered inherently in the user
   interface design. People with ATs get the information
   programmatically, which we cover elsewhere (5.7).

   MN: IE 4 notifies you programmatically when a new viewport has been
   created.

   KB: There are situations where the user may actually request
   something but they don't realize it. The user should be able to
   query how many viewports are open.

   Action GR: Send to list screen shot of JFW Window list.

   EH: We may need a definition of window...

   HB: There's a class of things we don't expect to cover (e.g., MS blue
   screens).

   RS: I think we are covered except for those things that we have no
   control over. Do we want to limit this to application-generated
   events and not all system-generated events?

   KB: If a Web page says "Following this link will open a new window",
   is that considered an explicit user request?

   CMN: I would have thought that was an explicit request, but that's
   hard to find for the user agent. There can be something in markup
   saying a new window will open. The UA should provide this type of
   information.

   Proposed: Delete 4.16.

   Is inheritance of configuration in new windows a requirement?

   Proposed: Add "inherit configurations in new viewports" to definition
   of "configure".

   JG: In G8, we have some checkpoints about links. We might want to
   require that the user agent inform the user that following a link may
   open a new window (recognized in markup).

   RS: When you attach javascript, it may open a new window.

   IJ: Note that UI notification of changes to prompts/windows is not
   covered in the guidelines. However, as with changes to viewports,
   there is programmatic notification and the assumption that users of
   the primary interface will know.

   RS: I think that it's implied that new viewports inherit features.

   /* The WG will chew this over */

Issue 225

   Resolved: This is covered by checkpoint 9.3

Issue 226

   Resolved: This is editorial.

   Action IJ: Clarify definitions of content, user interface (possibly
   chrome), etc. Refer also to issue 207.

Issue 227

   Resolved: Don't add "where available" since we have an applicability
   provision that applies globally.

Issue 228

   Resolved: No change. There's already a cross-reference to checkpoint
   5.5, which talks about standard API.

Issue 229

   CMN: There are a "bizillion" examples of accessibility settings in
   earlier checkpoints.

   Resolved: Editorial

   Action IJ: Add a couple of examples (sticky keys, mouse keys, show
   sounds).

   /* Lunch 12:30 ET */

Issue 230

   Does "default keyboard configuration" mean style-guide-recommended
   key sequences for accelerators and the like, keyboard layouts (QWERTY
   vs. Dvorak vs. ?), both, or more? It should be clearer.

   CMN: The answer is surely dependent on the system. If your system
   doesn't care what keyboard you have, one set of guidelines. If
   there's a keypad, another.

   RS: Default keyboard is some combination of what is specified by the
   OS user interface and what the application specifies as its default
   keyboard interface.

   Resolved: Delete "default" from "default keyboard configuration". The
   details of which keyboards are supported, etc. depend on the system.

Issue 231

   IJ: The proposal seems to suggest another checkpoint requiring the
   use of accessible specifications.

   Resolved:
    1. Reverse sentences of Guideline subhead.
    2. Add a note to G6 rationale that the scope is more than W3C specs.
    3. Add a note to 6.1 as well to this effect.
    4. Do not add a checkpoint requiring support for accessible
       specifications. You can still provide accessibility even if you
       don't.
    5. Editorial: change "supported" to "implemented" (with
       rewording...)

   Action IJ: Add Java to the techniques document (and point to Java
   accessibility). SAMI?

Issue 232

   IJ: Two issues:
    1. Education
    2. Scope (discussed in issue 212).

   JG: Point the reviewer back to discussion about the multitude of
   navigation checkpoints and how they got reduced: different display
   control functionalities have their own place; we included those that
   crossed boundaries into the guidelines themselves.

   IJ: Refer to UA Responsibilities document for rationale.

   EH: We can't predict every type of AT. We have extensive treatment of
   applicability. We also have the impact matrix.

   EH: I don't want to change the scope of the document.

   GR: I think one of the main points of the comment is that it should
   be highlighted to AT developers what's expected of a "mainstream"
   UA. It's bidirectional.

   Resolved:
    1. Since this document is meant for a certain class of UAs, we are
       not concerned about the redundancy. The "line is drawn" by these
       Guidelines.

   Action IJ:
    1. Editorial change: add explanation of how UAs and ATs interact,
       and/or point to UA Responsibilities.
    2. Editorial change: say to ATs that this is what they can expect
       conforming UAs to do; if the mainstream browsers don't, then they
       may pick up the slack.
    3. Maybe add another statement about audience to the conformance
       section.

   GR: One of the biggest advantages for AT developers is the use of
   standard interfaces (the DOM). A single navigation mechanism may be
   used with different independent UAs.

   RS: One complaint about HPR was that it didn't support Windows
   navigation mechanisms. We will fix this; the market demands it and I
   don't know whether we need to require ATs to support them. Just
   because an AT implements these guidelines doesn't mean it's a
   general-purpose user agent.

Issue 233

   Proposed: Checkpoint 7.6: Change "structure" to "document object".

   RS: If the UA provides programmatic access to the DOM, does this
   suffice?

   CMN: No. You may require access through the UI. The minimum
   definition of structural navigation in ATAG is "element by element".
   In many cases, this will be painful, but it's clearly identifiable.

   EH: Will switching to the term "document object" extend the scope?
   (Or narrow it?)

   CMN: I don't think that the new term extends the scope. However, I
   don't think that "document object" by itself provides the necessary
   piece for a developer. (Ian notes that he has an action item to
   include a definition.) What needs to be addressed is what navigation
   mechanisms are (minimally) required (e.g., up the tree, next sibling,
   back, etc.). There are markup languages that don't have an inherent
   tree structure (e.g., PostScript). What navigation is required for
   such markup languages? You do cover the structure of a language like
   PostScript by referring to "document object".

   IJ: Note that we've already had a long discussion about the myriad
   useful navigation techniques and resolved to have a single (open)
   checkpoint since we could not come up with a minimal set.

   JG: I'm concerned about specifying what minimal set of structured
   navigation techniques should be used. It depends on the content, how
   it's rendered, etc.

   CMN: I would be very concerned about not specifying a minimum
   conformance requirement for navigation mechanisms.

   RS: You want to be able to navigate to all rendered content. You use
   the DOM to traverse it in a logical sequence.

   HR: I think that structural navigation has a purpose: get the
   structure of the document without the details of the content. As long
   as it meets this goal, sufficient.

   EH: "Allow the user to navigate according to structure (e.g., forward
   and backward through rendered elements)."

   CMN: We're not talking about the W3C DOM per se; we're talking about
   a generic document model. I think HR is saying that this is a way of
   getting around the document (in addition to the linear reading).

   IJ: Speed/efficiency is the other advantage. Note that outline view
   also gives you a vision of the structure.

   GR: I like having open-endedness and configurability (chunk-by-chunk,
   then lower detail).

   IJ: The minimal requirement is access to every piece of the document
   object.

   RS: If you're navigating through the UI, it's only access to what's
   rendered in the UI.

   EH: You have different classes of object within the object model.
   One piece of efficiency is the ability to navigate objects of the
   same class (e.g., headings).

   Proposed:
    1. Change "structure" to document object.
    2. Minimal requirement of navigation is access to every element of
       the document object.

   EH: Note that point two misses the point of efficiency, which was the
   key to the checkpoint. This is the same as viewing the content
   serially.

   JG: Add a note that this checkpoint is designed to improve efficient
   access.

   IJ: What about a minimal requirement of "more than sequential access"
   to the document object.

   MN: "Document object" confuses me more than "structure". Also, there
   are objects within the document, etc.

   IJ: Propose adding point 3: Because this checkpoint is meant to make
   access more efficient, user agents are expected to provide more than
   minimal access. Point to techniques.

   RS: Should we add the term "iterator"?

   HR: I am in favor of both improved efficiency and local/global
   inspection of the structure.

   MN: I agree with HB - elements and attributes.

   JA: I agree with points 1, 2, and 3 together.

   Resolved (pending a proposal from Ian on definitions of
   content/document object/user interface/element, etc.):
    1. Change "structure" to "document object".
    2. Minimal requirement of navigation is access to every element of
       the document object.
    3. Add a note that we expect more than minimal access (point to
       techniques).
    4. Make clear that this is not about the W3C DOM for all types of
       content (it's about a "document object").

   RS: Ensure that the user agent doesn't supply a different object
   model than the DOM for XML/HTML content.
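
   A minimal sketch, for illustration only, of what element-by-element
   navigation over a generic document object could look like. The
   DocNode type and nextElement function below are assumptions made for
   this sketch; they are not the W3C DOM API and are not taken from the
   minutes.

      // A generic document-object node (illustrative, not the W3C DOM).
      interface DocNode {
        parent: DocNode | null;
        children: DocNode[];
      }

      // Depth-first "next element": the first child if there is one,
      // otherwise the next sibling of the nearest ancestor that has one.
      function nextElement(node: DocNode): DocNode | null {
        if (node.children.length > 0) {
          return node.children[0];
        }
        let current: DocNode | null = node;
        while (current !== null && current.parent !== null) {
          const siblings = current.parent.children;
          const index = siblings.indexOf(current);
          if (index + 1 < siblings.length) {
            return siblings[index + 1];
          }
          current = current.parent;
        }
        return null;
      }

      // Repeated calls to nextElement() reach every element of the
      // document object (the proposed minimal requirement); richer moves
      // (parent, previous sibling, same-class jumps) address the
      // efficiency goal discussed above.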

Issue 234

   Resolved: Editorial. Move to G9 (Checkpoint 9.4).

Issue 235

   Resolved:
    1. Add HTML examples to 8.1
    2. In section 1.3 of document, explain that some checkpoints are
       important special cases of others and have been included to
       highlight particularly important requirements.

Issue 236

   Resolved: Editorial. Add a cross-ref to G5.

Issue 237

   CMN: I suggest we remove the word "mobility" from the note.

   MN: We usually talk about "built-in" accessibility features.

   CMN: How about "default"?

   Resolved: Editorial. Delete "mobility". Maybe add a required term to
   glossary.

Issue 238

   CMN: This is "applicability of available keys".

   Resolved: Add a note that in some modes (e.g., text input mode), this
   is not required due to the nature of the mode.

Issue 239

   Proposed: For example, on some operating systems, when developers
   specify which command sequences will activate which functionalities,
   standard user interface components display those bindings to the
   user. For example, if a functionality is available from a menu, the
   letter of the activating key is underlined in the menu.

   Resolved: Editorial. Adopt some form of above proposal.

Issue 240

   Resolved: Editorial. Don't feel it's necessary to add.

Issue 241

   IJ: Relates to issue 207. Does a structured view suffice for some
   types of content? Previous discussions about 207 suggest that the WG
   feels that a source view does not suffice for content that may be
   rendered through the UI (it's too hard to navigate the entire
   structure to get at the "title" attribute).

   Resolved: This is resolved according to the outcome of issue 207.

Issue 242

   Resolved: No, DOM access is not sufficient. All checkpoints meant to
   be satisfied natively through the UI unless explicitly stated
   otherwise. Refer also to issue 233.

   (Note to self: ensure we say that all checkpoints imply access
   through the UI unless it's stated explicitly that it's programmatic
   or both programmatic/UI.)

Issue 243

   CMN: IE ? has an option that prompts you whether you want to submit.

   JG: This is done for reasons of security as well.

   EH: (refer also to issue discussed previously).

   RS: How hard will this be to implement?

   CMN: Easy: UA knows when it's about to send a POST request.

   MQ: I'm not sure that having to answer a "don't post yet" prompt
   every time that you select a new item is a good idea.

   CMN: You are not forced every time - you can turn it off.

   GR: I proposed a two-part solution: the submit mechanism was one
   part, and scripted stuff was the other. The first part was for
   inadvertent form submission. The second was to disable behavior such
   as a menu item selection triggering the form submission.

   IJ: MQ's concern is addressed by the ability to turn off scripts
   (although it may be burdensome to have to do this repeatedly for a
   given form).

   RS: When do you know to turn off scripts (how does the user know that
   there are scripts bound to a select item)?

   Resolved: Leave P2 since it affects users who may be disoriented
   (blindness, CD).

   IJ: I propose addressing MQ's concern more in the techniques for 9.2
   (e.g., for long lists, don't prompt 100 times).

   RS: This is more of a usability than an accessibility issue.

   MQ: In WebSpeak, if there is no explicit submit button, we create
   one.

   Action IJ: Add this to the techniques document...

   GR: For the "50 states in a list" box example, this form is embedded
   in a larger form.

   RS: I have the same problem as users with disabilities.

   GR: But you know the change has taken place visually. For me, it's
   much more difficult and disorienting to get back to the previous
   state. I think that this is P3 for usability, P2 for accessibility.

   CMN: This is a "curb cut" type checkpoint. It's helpful for many
   people (P3), but very important (P2) for some users.

   GR: Refer to my (archived) problem statement that explains all of the
   accessibility problems associated with this situation.

   RS: What if the AT says "A new page has been loaded."

   MQ: Users don't know that they need to turn off scripts to achieve
   this goal.

   GR: Also, by turning off scripts, they may lose other capabilities.

   MQ: I think this needs to be a P1, not a P2. (Ready to register a
   minority objection to it being P2).

   HR: In the example we are using (menu items), do you have to use the
   mouse?

   JG: Yes.

   JA, MN: Leave as P2.

   GR: I think in Austin, we also talked about notification, and that
   helped out (less than P1).

   Proposed:
    1. There is an authoring problem (that is addressed in WCAG).
    2. This is both a usability and an accessibility issue.
    3. In cases of lists that trigger scripts, if sequential access to a
       list triggers a form submission, you might never get past list
       item 1 unless you can control it. You can do this by turning off
       scripts (and later turning them back on). However, you don't know
       in advance that you have to do this.

   Straw poll:
    1. P1: GR, MQ
    2. P2: CMN, HB, EH, JG, HR, MN, JA, IJ
    3. P3: RS

   IJ: I feel that, without new information, I will hesitate to allow
   proposed changes at this time. The WG already agreed to make this a
   P2 as of Last Call. It should be harder to make changes at this
   point, and I want to make it harder.

   CMN: I propose that we put an action item on anyone who feels this
   has to be changed to register a minority objection (that will be
   presented to the Director).

   HR: Outspoken (screen-reader) handles this case gracefully.

   Resolution: Leave as a Priority 2.

   Action: Anyone who objects can register their objection on the list.

Issue 244

   CMN:
    1. This is common in speech presentations. I don't know of
       reference implementations for video. You can do it in hardware.
    2. I spoke to the Real Media people and they said that it would be
       difficult due to timing issues and slowing audio.

   RS: Unless there's a reference implementation, I don't think this
   should be a P1 requirement.

   CMN: I don't find that argument convincing. People might not do it
   because it's hard, but that's not sufficient.

   MQ: "LP Player" lets you speed up and slow down audio.

   IJ: Priority levels are based on user need, not implementability.

   RS: I don't want to make the guidelines so strict that it's not
   possible to reach P1. I think we lose our credibility if it's too
   hard to do.

   IJ: Applicability kicks in here: if not possible by spec, you aren't
   required to do it.

   GR: About raising the bar - the reason we are here is not to raise
   the bar but to put the bar where it belongs (since it may have been
   knocked down in some implementations). I understand the concerns of
   developers, but it's not just about developers - it's for users,
   too. We need to work with them.

   CMN: There's an unresolvable tension between losing "credibility"
   with developers and losing credibility with the users who are meant
   to benefit. The priority scheme is based entirely on user need (and
   this is a fairly important feature). All three guidelines groups
   have tried other systems, but in each case, it's been a complete
   minefield.

   IJ: Would users not have access if they couldn't slow down the
   presentation?

   MQ: Partially deaf people can have access to audio by slowing it
   down.

   MQ: Have you looked at "Sound Forge"? It plays back MP3 (and authors
   it) and allows you to change the presentation playback rate (you
   change time base and pitch).

   JG: Why is slowing video important?

   IJ: Physical disabilities may require slowing down.

   CMN: One possibility is that being able to step through frames is a
   sufficient slowing of video. You can't step through slowing of audio,
   however.

   IJ: If we split into separate requirements (video, audio, animations)
   does this get easier? Does it get easier for us to resolve if we talk
   about synchronized multimedia separately?

   CMN: It is difficult to change time base, change pitch, keep it
   synched, etc.

   EH: If we deconstruct the checkpoint, we should look at the usage of
   the words (audio, video, and animation) as well. Perhaps we should
   distinguish "audio presentation" from "multimedia presentation". An
   "audio presentation" is audio only (e.g., a radio broadcast). A
   multimedia presentation is either movies or animations.

   CMN: Another question is "What is the need for slowing down
   presentations?"
     * For audio presentations: slowing down the presentation will
       benefit users with cognitive disabilities.
     * For animation, the concern may be seizure disorders.
     * Who benefits from slower video?

   CMN: For slowing pure audio presentation, that's fairly easy. For
   video only, also fairly easy. The problem is combining them as
   multimedia.

   RS: Yes, I have a problem with the combination.

   CMN: Synchronization is an accessibility issue because the audio
   information is required to make the presentation accessible.

   GR: We should review EH's last call comments...Recall also that DA
   required slowing down by configuration (rather than dynamic
   button-based slowing) since otherwise some users with physical
   disabilities would not be able to slow down.

   MQ: LP Player basically does the sync of audio and text.

   CMN: Another nearly reference implementation is video-editing
   software.

   RS: This is expensive...

   CMN: Yes, but it can be done. And it's not tremendously expensive.
   What this software doesn't do as a rule is adjust audio when the rate
   changes.

   Review:
     * For auditory presentations, slowing down a P1.
     * For video presentations, slowing definitely helps users with CD,
       probably helps users with low vision. So P1.
     * For animations, ability to slow might help users with seizure
       disorders. So P1.

   MN: No one has said that this can't be done. It can be difficult.

   IJ: Maybe this is the case: synchronization is required at "normal"
   rate (2.6). But the "slowing" requirement may not be the same
   priority if those who benefit from slowing are not an intersection
   of those who benefit from the individual pieces.

   HR: I don't think that slowing these media should be a P1 since you
   can start, stop, pause, and rewind.

   JG: It is a P1 for audio since you can't step through audio. You
   could step through video and get some information out.

   CMN: Use case - cricket! You get a noise from the "stumps" and video
   from the "thingy". You need to synchronize the two to know at which
   point they came together. You can get this information by stepping
   through (since you don't care about the quality of the sound). But I
   imagine that the quality of sound is important in some cases.

   GR: When I hear things like animation, I think of things like
   Macromedia Flash and Shockwave (and not just SMIL). When you have to
   respond to an on-screen video event and a sound, you need to be able
   to slow down.

   Resolved:
     * Slowing down audio is a P1 (you can't step through it). You may
       not be able to slow down past a certain point (but some access is
       better than no access).
     * Need to review the priority of slowing down video presentation.
     * Need to distinguish "sounds" from "audio presentations" (refer to
       EH's proposal in last call).

   RS: I don't think that the benefits of slowing down the presentation
   warrant it being a P1. And the cost is very high.

   /* Discussion of slowing according to pre-determined increments,
   e.g., half-speed */

   /* Madeleine Rothberg joins */

   MR: I think that a lot of the slowing down issues (especially for
   animations) were intended for users with CD, and that's not my
   expertise. The techniques for this say that this is for people with
   CD, people new to a language, and people with newly acquired sensory
   disabilities. For auditory presentations, what if you understand it
   by listening several times? I'm not sure about the P1 level for
   audio presentations.

   JG: I don't think that we've heard P1 for sight disabilities.

   CMN: I have to speak (Australian) more slowly to be understood in the
   states than I would at home.

   MQ: People that are partially deaf have to slow down material in
   order to understand.

   CMN: Maybe we should assign actions to review this carefully?

   HB: In the Daisy guidelines, they have a speech range for the player.
   It's something like half-speed to 2.5 times.

   JA: Yes, I think it's down to 25% and up to double speed.

   JG: Note that speeding up audio for users who are blind is not an
   accessibility issue but a usability issue.

   JA: Depending on the type of vision, some students cannot use a
   presentation at full speed. They simply can't see it. If they can
   slow down video, they can get at the information.

   JG: Is there a range that we can specify?

   JA: No, the kids have very different requirements.

   MR: It also depends on the initial rate of the animation. Many
   variables.

   JA: I think that much beyond a 25% reduction, you start losing your
   audio anyway.

   MR: The players I've seen that play video more slowly, keep sync of
   captions, but turn off audio. Windows Media player can be scripted to
   slow down caption presentation rate (it's not in the player itself).

   /* George Kerscher pulled in from the hall */

   GK: There are many implementations that let you slow down audio
   presentations (VisuAid, Victor, PlexTalk (by Plextor), LP Player by
   PW, and Labyrinten). These are standard tools on the market to do
   this.

   Action MR: Talk to Geoff Freed about implementations that slow down
   multimedia presentations.

   GK: Some people with learning disabilities need to slow down the
   audio in order to process the information. Synchronized with text.

   MN: I'm concerned that we're going to establish priorities based on
   reference implementations. This is not what we're charged to do.

   GK: In the SMIL WG, there's a requirement to have two implementations
   of any feature.

   JG: Yes, but our priorities are not based on existing implementations
   (though implementations help us show how it's done); they're based on
   user needs.

   Action JG: Write email to the list asking for information about which
   user groups require the ability to slow down presentations because
   otherwise access is impossible. (Get information from people with
   experience/research in this area.)

   GK: I believe that in the SMIL specification, the notion of the wall
   clock is there. For people who benefit from a complex multimedia
   presentation, it's clear that slowing down is obviously needed by
   some users.

   IJ: Please note that I believe we're only talking about explicitly
   synchronized multimedia presentations.

   RS: Yes, that's fine. But you shouldn't be required to slow down to
   the same rate two pieces of content that have not been explicitly
   synchronized. Otherwise there would be no use for SMIL.

   EH: Even though impact determines priority, it's my opinion that we
   should not include impossible or extremely costly checkpoints. I
   don't have a lot of expertise on how important these things are to
   users with disabilities. I'm withholding judgment now since I suppose
   that this checkpoint has had a lot of review at this level.

   CMN: I don't think there are many cases when you have several pieces
   of content together but aren't synchronized explicitly.

   Proposed: Clarify for this checkpoint that we do not require slowing
   down of pieces of content that haven't been synchronized in the
   format but are playing together.

   RS: I don't think QuickTime counts as a synchronization format. Same
   for AVI.

   EH: This is a very late stage in the process. I assume that this has
   had a lot of review. I'm inclined to go with P1, unless this is a
   total show-stopper.

   RS: Once again, I don't want to make the barrier too high initially.
   We need to weigh the benefits against the costs.

   EH: I hammered on WCAG for this - how do you define the reference
   groups? You can always find individuals who need a particular
   feature. I pushed WAI to identify target groups. I suggested (even
   though I knew it wouldn't be popular) that we say something like "a
   substantial majority would find it impossible, beneficial, etc.".

   GR: Users with a disability who get to the table early and who are
   vocal tend to get their issues addressed. But users who haven't had
   access to information, and who haven't been able to speak up, are
   being overlooked.

   JG: We will postpone this issue and not start with it first thing
   tomorrow. I suggest that we address it on Thursday.

   /* 5:30 pm adjourned */
------------


              Minutes from 11 April 2000 UA face-to-face at RFBD

Participants

     * Jon Gunderson (Chair)
     * Ian Jacobs (Scribe)
     * Harvey Bingham
     * Mickey Quenzer
     * Gregory Rosmaita
     * Charles McCathieNevile
     * Hans Riesebos
     * Rich Schwerdtfeger

   By phone:
     * Jim Allan
     * Kitch Barnicle
     * Mark Novak
     * Madeleine Rothberg (before the break)
     * Eric Hansen (late)

   Agenda

Issue 245

   Proposed: s/functionality/information and s/button/graphical icon.

   Resolved: The checkpoint is only about messages, not all UI
   components.

   Action IJ: Propose a new note to the WG (with graphics and sounds).

Issue 249

   (Refer also to Issue 271).

   CMN: CSS2 positioning. Problem with zooming languages that have no
   text flow (e.g., SVG).

   IJ: I've also heard that arbitrary positioning isn't required.

   MR: An issue about content being obscured. And if the user magnifies
   the screen, they need to be able to move captions.

   IJ: I don't think you can make an absolute statement that one piece
   of content must not obscure another piece of content.

   CMN: I think that Sausage SMIL Composer lets you move captions.

   Resolved: No change.
     * There's a user need for this functionality; priority not based on
       existing implementation.
     * Arbitrary repositioning is not required - the goal is to ensure
       that text is not obscured.
     * Quicktime allows you to do this.

Issue 271

   CMN: The question is what's the minimal requirement? I think that:
    1. You should first implement the capabilities of the spec (e.g.,
       SMIL layout, Quicktime, SAMI?).
    2. The user has to at least be able to ensure that text equivalents
       are not obscured by other content.

   IJ Proposed:
     * Add "When text equivalents may obscure or be obscured by other
       content, ...."

   IJ: Does the player need to allow repositioning when it knows (e.g.,
   geometries) that the content does not overlap?

   CMN: Yes, when zoomed.

   MR: One scenario when it's useful to overlap: when you're viewing on
   a small screen - or when your screen geometry is different from what
   the author intended.

   MQ: The AT might be able to find the information when it's in a
   particular position.

   CMN: I think that arbitrary positioning is what you need to be able
   to do.

   IJ: Why in this case and not others (e.g., colors, fonts, etc.)?

   MQ: Putting alt content in another window may make it more available
   to other technologies.

   CMN: I think that the checkpoint should require the user agent to
   allow arbitrary repositioning. For example, if captions overlap
   subtitles, need to move them out of the way.

   IJ: I don't think the text says that today, given the definition of
   "configure".

   IJ: I hear two cases for minimal requirement:
    1. Minimal requirement is to prevent content from being obscured.
    2. Device limitations mean that I need to overlap information.

   GR: You need both at the same time in both cases (you need the
   synchronization).

   KB: If I have narrow vision, I need to move things into my range of
   vision.

   Resolved:
     * Goal: Ensure that the user has access to content (whether that
       means choosing to obscure other content or preventing that).
     * Minimal requirement:
          1. Implement the capabilities for repositioning of the markup
             language that the UA can recognize.
          2. If the UA can recognize different pieces but the markup
             language doesn't have features for repositioning, use
             repair techniques.
          3. If the UA cannot recognize different pieces, then
             applicability applies (e.g., GIF).
     * Examples of user needs that mean arbitrary repositioning is
       required: a restricted field of view requires overlapping; you
       don't want to obscure content and don't always know where the key
       information on the display is; both for screen magnification and
       refreshable braille you can position content in a given place and
       have ATs monitor that.

   Action IJ: Propose a Note stating the minimal requirement and
   emphasizing the goal. Add examples to techniques document.

Issue 246

   Resolved: Editorial - make the suggested change in light of other
   discussions on "author-specified" and the results of checkpoint 2.1.

Issue 247

   Resolved:
     * Add "When the user agent can recognize..."
     * Add technique that prose doesn't count.

Issue 248

   GR: There are fonts that people cannot use for a variety of reasons.
   At any size.

   Resolved: No change.

   Action:
     * Add minimal requirement that for CSS fonts, use the generic
       fonts.

   Action CMN: Find out from I18N how to generalize the accessibility
   provided by sans-serif fonts.

   IJ: Any minimal font size?

   GR: I would make my font as small as possible to get more content on
   the page.

Issue 250

   CMN: The use of micropayments is a technique in this case. The
   requirement is about spending money, but micropayments are just an
   example.

   HR: Is paying the only bad thing that can happen when you follow a
   link?

   JG: The WG felt that this was important enough.

   GR: I would have pulled out other ones (and not this one) but we were
   able to separate this one because of the micropayments draft.

   CMN: Al Gilman said that this is a CD issue. You have to make it
   clear to people that their money is disappearing.

   HR: I have problems with the current wording. I think the general
   requirement is to avoid bad things happening when you follow a link.

   CMN: I spoke to developers about a checkpoint for making available
   useful information (this was the original checkpoint). It was too
   ambiguous. Checkpoint 8.4 says "make available everything you know."
   But since paying money was more serious, it was special cased at a
   higher priority level.

   IJ: Just because we identified one important requirement and not
   others, doesn't mean we should delete the first one. Also, the fact
   that we require making available all information (8.4) means you can
   avoid bad situations as well as pursue good ones.

   Resolved: No change.
     * Important special case of 8.4 as indicated.
     * This is important for users with CD.

   /* Break */

Issue 251

   CMN: How does the author know what affects accessibility?

   IJ: Read this document!

   IJ: Does new support for the Thai language affect accessibility?

   GR: Yes, if you're a Thai user with a disability.

   RS: But that affects all users who speak Thai.

   CMN: The things that affect accessibility are those things mentioned
   in this document. There are specific checkpoints about language
   support in the guidelines.

   Resolved:
    1. Adopt proposal.
    2. Minimal requirement is to cover the features of this document.

Issue 252

   Resolved:
    1. No change. The WG has already considered two conformance schemes
       that would allow for more granularity (based on user needs and
       checklist-based).
    2. The WAI CG might want to consider this issue at a higher level.

   Action JG: Take this to the WAI CG.

Note:

   Issues 253 to 276 were not part of the formal review but should be
   considered by the WG.

Issue 253

   RS: MSAA does not solve all problems. It doesn't provide access to
   text in all applications. You need to do both. There are some ATs
   that don't support all of MSAA (for example).

   HR: On the Mac, there isn't an accessibility API. Even for our PC, we
   rely on the offscreen model, in part because we cannot use MSAA on
   Win 95 because it's not internationalized.

   RS: You are a UA with a custom control that MSAA doesn't provide
   access to. The AT (like a screen reader) would need to read the text
   that you drew to the screen.

   IJ: Are you always required to use the devices?

   RS: MSAA is limited today to standard controls. It doesn't handle
   custom controls well.

   MN: MSAA is for getting input, not writing to the screen.

   JG: There are two parts - the part where the developer creates
   objects compatible with MSAA, and the other where the UA gets events
   from it.

   RS: For input, for custom controls, you want to be able to respond to
   serial keys.

   MN: I don't think it's important. MSAA is just one of several
   technologies.

   RS: You need to always support standard input since MSAA or others
   have nothing to do with standard input. For standard output ("Can I
   write to the screen or use MSAA?"), if you use standard controls you
   don't have to do anything anyway. If you are going to write a custom
   control, MSAA is not always reliable and there are older screen
   readers that don't use MSAA. So there is a requirement to do both.

   GR: There is also an I18N lag with MSAA.

   IJ: It seems new to me that we are requiring redundant output.

   RS: Suppose I'm writing a custom button that has its own window
   class. To be accessible to an AT that doesn't support MSAA for the
   custom component, you have to use the standard API so that the AT can
   get the information. Another example: Suppose that you're Mozilla,
   designed for cross-platform use. For that reason, you don't support
   MSAA and need to draw text to the screen (until you support the
   DOM...).

   MN: I don't think you need to get into details about which API a
   developer needs to use. I agree with the reviewer: the UA should be
   able to conform by providing info through either one API or the
   other.

   RS: If there's an engineered API (e.g., MSAA) you should use that API
   and ensure that it works. And if this is more accessible, this should
   take precedence over drawing to the screen.

   MN: I see 5.5 as a special case of 2.1.

   RS: For output, you should implement MSAA or the DOM first (if
   applicable). If they don't apply, draw text to the screen.

   IJ: Add a note to checkpoint 1.2 that says "When available, it is
   preferable to use the APIs discussed in G5 instead of using standard
   device APIs directly."

   CMN: It's preferable to use both directly. If you use MSAA or the
   DOM and you also rasterize a picture, then you need to use the
   standard APIs.

   IJ: I have heard:
    1. Do both
    2. Do either
    3. Use MSAA as a preference.

   RS: In Java 2, you have to use the accessibility APIs.

   Resolved: No change.

   Action IJ: Add a cross reference to guideline 5. In techniques,
   discuss advantages of doing both.

Issue 254

   JG: Is "zoom" the right term.

   CMN: In HTML and CSS, you can increase the font size and the text
   reflows nicely. And the reviewer's comment is true. In SVG, you get
   no reflow when the font size is changed: text may overlap when the
   text is resized, so zoom is the preferred technique in this case.

   JG: "Zoom" in one context can mean to take one pixel and make it four
   pixels.

   IJ: I think that "zoom" must means go in and out.

   HR: I don't think that "zoom" is an adequate term.

   CMN: Some user agents rescale and reflow as their zoom.

   HB: I think that magnify-and-reflow is one of the most important
   accessibility techniques for someone with low vision.

   MQ: We want to be able to make content more accessible, and word wrap
   is important to this.

   Resolved: No change.

   Action IJ: In techniques document, discuss what CMN has been
   discussing. Just changing font size may obscure information and
   scaling would be better. Reflowing (e.g., word wrap) is a good thing
   to do and should be discussed in techniques.

Issue 255

   Various pieces required:
    1. Use standard APIs for devices, as opposed to non-standard APIs.
    2. Support devices considered standard for the platform.
    3. Support the keyboard (on systems where standard).

   CMN: If the keyboard is a standard input API for your system, you
   have to use it.

   JG: I think we resolved that you don't have to support all standard
   APIs.

   CMN: I'm not sure I agree. Depends on the meaning of "standard".

   Proposed:
     * 1.2 For all supported input and output devices, use the standard
       device APIs of the operating system. (i.e., supported by the UA).
     * 1.4 On systems where a keyboard API is available, ensure that
       every functionality available through the user interface is
       available through it.
     * The definition of standard device API includes info about
       expected support.

   CMN: I think the UA should support all standard APIs for the
   operating system. It's not sufficient to expect support for a subset
   of them.

   JG: The WG has already agreed that this is an undue burden - I don't
   have to support the bar code reader API.

   CMN: If the standard API allows you just to dump a rasterized image
   to the screen, does this suffice? This does not make the information
   accessible to ATs.

   IJ: I don't see how the misuse of an API is resolved by requiring the
   use of more APIs.

   GR: We need to (a) highlight in the text of the guideline that user
   agents should use higher level routines.

   RS: In Windows, you would use "textout" or "exttextout" to draw
   text...

   JG: CMN, do you want UAs to draw information more than one way to the
   screen?

   CMN: I want one standard API that does the redundant work for you
   (and so that you don't have to draw manually through the other API).

   RS: If you use MFC or visual basic, you should ensure that those
   libraries default to the standard system API for drawing text.

   CMN: You could imagine a system where there is a keyboard API and a
   generic text input API.

   Proposed:
    1. Use appropriate APIs. (Use the generic one, use the right device
       API for a given content type, etc.).

   GR: I think this should be in the prose.

   CMN Proposes: Delete "device" from 1.1 and 1.2. The question remains
   - if you use a good programming language, it will automatically put
   your information through the standard device APIs. "Use the standard
   input and output APIs for the operating system."

   IJ: What's the scope of "input and output"? Does this include port 80
   for HTTP? The "stdout" on Unix?

   IJ Questions:
     * How much redundancy required? If you have to support all input or
       output APIs, do you have to use all of them for all input or
       output?
       CMN: Redundancy only required when information isn't propagated
       by the API to others.
     * How many APIs required?
       CMN: This has been answered by MN and HR. MSAA may not be the
       best way to get access.

   JG: Where do you stop? Infrared access? Writing to disk?

   CMN: On most systems, redirect is automatic.

   RS: You could say "for those devices that allow the user to interact
   with the system."

   CMN: You can push info around through MSAA. But if you put it into
   MSAA, it gets propagated.

   HR: You don't have to give all functionality to the user through the
   voice API. You do through the keyboard.

   Resolved:
     * 1.1: Delete "device"
     * 1.2: Use the standard input and output APIs of the operating
       system.
       - Point out that APIs should be used appropriately - use the text
       API for text, don't use the graphical API.
       - Point out that people should not work around standard APIs.
       - Point out that there may be preferences in APIs (e.g., use more
       abstract over lower-level, but ensure that information reaches
       lower-level APIs).
     * 1.4: On systems that support a keyboard API, ensure that every
       functionality available through the user interface is available
       through the keyboard API.

Issue 256

   JG: Conformance does not *necessarily* guarantee accessibility (and
   non-conformance doesn't guarantee inaccessibility). Refer to last
call
   issue.

   CMN: In 5.5, we guarantee programmatic access. This means you can run
   whatever device you want.

   CMN: Max (Nakane) has a telephone that he uses to access the Web. But
   the output mode is through the screen, and that's all. The phone
   doesn't export anything as far as I know.

   IJ: Consider for this case, a kiosk that doesn't allow you to plug
   into it (Guideline 5 drops) or a handheld device that has limited RAM
   (no room for other software, or it's not a multitasking system).

   MQ: This means no mobile device can conform to these guidelines.

   CMN: If you have just speech output, or just keyboard input, and
   hardwired programming and no way in, you can claim that 5.5 doesn't
   apply. This means that an inaccessible device could comply.

   IJ: I consider dropping this clause of the applicability provision a
   significant change to the guidelines.

   CMN: As do I.

   HR: I don't think it's non-conforming then - we want the devices to
   meet as many checkpoints as they can.

   GR: The way the current conformance statement is stated, checkpoints
   considered inapplicable must be stated up front.
   Consumers/Purchasers/Regulators can establish whether it meets their
   particular needs.

   CMN: Conformance is not the end-all of the accessibility of the tool.

   IJ: I still think that hardware and software limitations affect the
   range of configurability (e.g., colors, fonts).

   RS: We don't need to delete the provision since we are not really
   addressing mobile devices. More work needs to be done.

   JG: Do people understand that the current applicability provision
   means that, for any mobile device that doesn't allow communication
   with other devices, Guideline 5 doesn't apply?

   /* Everyone agrees */

   JG: How many feel the document should become a Recommendation this
   way?

   HB: I would like to make it explicit that we are excluding devices.

   JG: We do not have guidelines for a user agent that does
   "everything". We can always find some group that doesn't have
   access, so we require communication of content and user interface
   (interoperability). We've already discussed "stand-alone"
   conformance.

   CMN: I don't think the guidelines should become a Rec with the
   current provision about hardware limitations.

   Resolved:
    1. Delete the applicability provision about hardware.
    2. Add a comment about how system limitations may affect ranges of
       configurability.
    3. We argue that this change is in line with (i.e., a clarification
       of) what the intended and documented audience of UAs is meant to
       be.

Issue 257

   IJ: Two parts:
    1. We could improve the techniques document by classifying
       techniques (informative, sufficient, beneficial).
    2. We should make clear in the guidelines the minimal requirement
       for each checkpoint.

   CMN: I think that number 2 is very important. The Director has said
   that the Recommendation (the guidelines) must be able to stand on its
   own - you must be able to derive what's required for conformance.

   Action JG: Identify the minimal requirement for each checkpoint.

Issue 258

   Resolved: Adopt proposal

   Action IJ: Add a statement up front that everything applies through
   the UI except where it is stated to be through the API or both.

Issue 259

   Resolved: No change. Using OS features is a good thing, but must be
   accessible.

Issue 260

   Resolved: Editorial.

   Action IJ: Propose changed second sentence of 1.1 to the list.

Issue 261

   Resolved: The intent is indeed support for every supported input
   device. No change.

   /* Eric joins */

Issue 262

   CMN: As we've discussed at length today, there are a lot of operating
   system conventions for accessibility (e.g., standard APIs). There's
   nothing in the guidelines that says "don't provide a better
   installation setup." The Guidelines do say "use the standards since
   some device you didn't think of may not be able to use it."

   EH: Need to clarify what the accessibility settings are. Are we the
   judges about what the conventions are?

   IJ: We refer to system guidelines for accessibility. I propose:
    1. New wording: "Follow operating system conventions that affect
       accessibility."
    2. This means that you can do better than the conventions and still
       claim single-A, but only double-A if you use the standards.
    3. The conventions are those of OS guidelines and what is described
       in this document.

Issue 263

   JG: For users with screen magnifiers, context-sensitive access is
   important.

   GR: Two-dimensional tables rely on understanding relationships
   expressed through layout.

   CMN: I've argued in the past that the "standard" graphical rendering
   of a table in two-dimensional layout is a sufficient technique for
   making clear the relationship among table cells.

   Resolved:
    1. This is a P1 requirement since relationships need to be available
       to users to understand the table.
    2. Graphical grid rendering is sufficient to meet the requirement
       for a graphical desktop user agent. This is a P1 because it helps
       users with screen magnifiers or CD who are dealing with large
       tables.
    3. You must be able to get at all the cells in their relationship.
       Scrolling in two dimensions is a sufficient technique. Structured
       navigation is a sufficient technique. Lynx fails because the
       table cannot be understood.

   Action CMN: Propose a technique that explains how serialization plus
   navigation would suffice.
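
   A minimal sketch, for illustration only, of how serialization plus
   navigation could keep cell relationships available; the SimpleTable
   shape and the describeCell function are assumptions made for this
   sketch, not CMN's actual proposed technique.

      // A table reduced to headers plus a cell grid (illustrative only).
      interface SimpleTable {
        columnHeaders: string[];   // one header per column
        rowHeaders: string[];      // one header per row
        cells: string[][];         // cells[row][col]
      }

      // Serialize one cell together with its row and column headers so
      // that the relationship survives a linear (speech or braille)
      // rendering.
      function describeCell(t: SimpleTable, row: number, col: number): string {
        return `${t.rowHeaders[row]}, ${t.columnHeaders[col]}: ` +
               t.cells[row][col];
      }

      // Navigating cell by cell and announcing describeCell() for each
      // cell keeps row/column relationships available without relying on
      // a two-dimensional rendering.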

Issue 264

   CMN: Users have access to the text according to 2.5 (being able to
   select alternatives). Thus no changes required to provide what he's
   asking for.

   EH: The definition of alternative equivalent does make a distinction
   between primary and alternative content. A strict reading of the term
   "equivalent alternative" would mean that the image wouldn't count.

   IJ: Have we heard that images are distracting to users with CD? If
   not, why is this checkpoint here?

   EH: Images may also bother users with low vision (who may be
   distracted).

   EH: Up to this point, people I've spoken to would distinguish between
   CD and learning disabilities.

   Action IJ: Ask reviewer for more data.

Issue 265

   IJ: It is disorienting for users with CD, or who are blind or
   accessing information serially. I can see that it doesn't prevent
   access to content; however, it may make it nearly impossible for some
   users (e.g., with short-term memory problems) to locate where they
   were.

   JG: At some point, inconvenience makes something unusable.

   RS: Very large documents are a P1 problem.

   Resolved: No change.

Issue 266

   Resolved: This is covered by structure navigation.

Issue 267

   CMN: The WG intentionally did not choose a relative priority rating
   for this and other checkpoints related to Web content. In this case,
   knowing the feature is there is critical to being able to learn to
   use the tool.

   Resolved: This is critical for using the tool. No change.

Issue 268

   Resolved: Editorial. Adopt suggestion.

Issue 269

   Resolved:
    1. There is no guarantee that the reviewer's recommended strategy
       will provide access.
    2. This is an authoring problem, not a UA responsibility.

Issue 270

   Resolved: Editorial

   Action IJ: Clarify the usage of "checkpoints for content
   accessibility", notably in G2.

Issue 272

   Resolved: This is covered by the structured navigation requirement

Issue 273

   Resolved: Editorial

   Action IJ: Clarify checkpoint wording:

   For graphical user interfaces, allow the user to configure the
   arrangement of user interface controls.

Issue 274

   Resolved: Editorial.

   Action IJ: Verify how it's used in the document. If not used, move to
   techniques. Or move to glossary. Or replicate in glossary.

Issue 275

   Resolved: Editorial.

   Action IJ: In that paragraph, move usability and accessibility to the
   sentences about consistency.

Issue 276

   Resolved: Editorial. No change.

   /* Adjourned 15:40 EDT */
