From kb8aey at verizon.net Sat May 1 00:54:55 2010 From: kb8aey at verizon.net (Mike Coulombe) Date: Fri, 30 Apr 2010 17:54:55 -0700 Subject: movie player and youtube Message-ID: <4BDB7BDF.5040007@verizon.net> Hi, I am having a problem that just started with movie player. Here is the message I get when using movie player to search on you tube. Nice that this can be red in movie player with orca using the arrow keys. The response from the server could not be understood. Please check you are running the latest version of libgdata. What package do I have to install to get this. My regular ubuntu machine gives the same message when I try to search for something on you tube in movie player. By the way, this happened before and after I installed the ubuntu-restricted-extra package. Mike. From tcross at rapttech.com.au Sat May 1 00:59:49 2010 From: tcross at rapttech.com.au (Tim Cross) Date: Sat, 1 May 2010 10:59:49 +1000 Subject: upgrading to lucid In-Reply-To: References: <19418.20433.66012.560917@rapttech.com.au> Message-ID: <19419.32005.827432.762616@rapttech.com.au> I notice your still back on hardy. I expect you will need to upgrade through jaunty and karmic to get to lucid. Its going to take some time. Probably best to wait until things quieten down. It took several attempts to download the packages yesterday due to high traffic. Tim aerospace1028 at hotmail.com writes: > Thanks Tim, > I was leaning towards do-release-upgrade because I'm a little more comfortable with the comand-line than graphical applications. I think I'm going to wait a couple of days before attempting the upgrade to let the traffic through the repositories slow down. > > thanks:-) > > > Date: Fri, 30 Apr 2010 13:34:41 +1000 > > To: aerospace1028 at hotmail.com > > CC: ubuntu-accessibility at lists.ubuntu.com > > Subject: Re:upgrading to lucid > > From: tcross at rapttech.com.au > > > > > > I have always used do-release-upgrade. This morning, I used it to upgrade to > > lucid and all looks OK so far. > > > > I'm a bit old fashioned though. My main interface is based on emacspeak. I've > > not used orca that much. Therefore, I tend to use text based apps over > > graphics based ones, despite the fact I run under X. > > > > Tim > > > > aerospace1028 at hotmail.com writes: > > > greetings, > > > I did some more research on how to update ubuntu distributions from the command line. The two options appear to be: > > > > > > (1) sudo update-manager > > > > > > (2) sudo do-release-upgrade -m desktop > > > > > > method 1 would just launch the graphical application from the command line (gnome-terminal) and in general is the recommended distribution upgrade method from the ubuntu wiki. I can't find much doccumentation on do-release-upgrade, are there any drastic differences btween what these two programs do? > > > > > > Does anyone have any advice on which is the better (faster? more accessible?) method for updateing my Ubuntu system to lucid? > > > > > > Thanks:-) > > > > > > > > > _________________________________________________________________ > > > The New Busy is not the old busy. Search, chat and e-mail from your inbox. 
> > > http://www.windowslive.com/campaign/thenewbusy?ocid=PID28326::T:WLMTAGL:ON:WL:en-US:WM_HMP:042010_3 > > > ---------------------------------------------------------------------- > > > -- > > > Ubuntu-accessibility mailing list > > > Ubuntu-accessibility at lists.ubuntu.com > > > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility > > > > -- > > Tim Cross > > tcross at rapttech.com.au > > > > There are two types of people in IT - those who do not manage what they > > understand and those who do not understand what they manage. > > -- > > Tim Cross > > tcross at rapttech.com.au > > > > There are two types of people in IT - those who do not manage what they > > understand and those who do not understand what they manage. > > _________________________________________________________________ > The New Busy think 9 to 5 is a cute idea. Combine multiple calendars with Hotmail. > http://www.windowslive.com/campaign/thenewbusy?tile=multicalendar&ocid=PID28326::T:WLMTAGL:ON:WL:en-US:WM_HMP:042010_5 -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. From cjk at teamcharliesangels.com Sat May 1 01:05:31 2010 From: cjk at teamcharliesangels.com (Charlie Kravetz) Date: Fri, 30 Apr 2010 19:05:31 -0600 Subject: upgrading to lucid In-Reply-To: <19419.32005.827432.762616@rapttech.com.au> References: <19418.20433.66012.560917@rapttech.com.au> <19419.32005.827432.762616@rapttech.com.au> Message-ID: <20100430190531.1411bb65@teamcharliesangels.com> On Sat, 1 May 2010 10:59:49 +1000 "Tim Cross" wrote: > > I notice your still back on hardy. I expect you will need to upgrade through > jaunty and karmic to get to lucid. Its going to take some time. Probably best > to wait until things quieten down. It took several attempts to download the > packages yesterday due to high traffic. > > Tim Actually, there is an upgrade direct from hardy 8.04 to Lucid 10.04, since they are both LTS. LTS to LTS is always okay, and the only time you can skip versions on upgrading. -- Charlie Kravetz Linux Registered User Number 425914 [http://counter.li.org/] Never let anyone steal your DREAM. [http://keepingdreams.com] From phillw at phillw.net Sat May 1 01:21:56 2010 From: phillw at phillw.net (Phillip Whiteside) Date: Sat, 1 May 2010 02:21:56 +0100 Subject: upgrading to lucid In-Reply-To: <20100430190531.1411bb65@teamcharliesangels.com> References: <19418.20433.66012.560917@rapttech.com.au> <19419.32005.827432.762616@rapttech.com.au> <20100430190531.1411bb65@teamcharliesangels.com> Message-ID: Hi, It is vitally important that you update your 8.04 system just before you make the leap to 10.04. 8.04.4 has the required files for the LTS --> LTS transfer and they must be on your 8.04 system before you update to 10.04, updating will ensure that they are there. (Yes, I know you already know that, but someone may not) Regards, Phill. On Sat, May 1, 2010 at 2:05 AM, Charlie Kravetz wrote: > On Sat, 1 May 2010 10:59:49 +1000 > "Tim Cross" wrote: > > > > > I notice your still back on hardy. I expect you will need to upgrade > through > > jaunty and karmic to get to lucid. Its going to take some time. Probably > best > > to wait until things quieten down. It took several attempts to download > the > > packages yesterday due to high traffic. 
> > > > Tim > > Actually, there is an upgrade direct from hardy 8.04 to Lucid 10.04, > since they are both LTS. LTS to LTS is always okay, and the only time > you can skip versions on upgrading. > > > -- > Charlie Kravetz > Linux Registered User Number 425914 [http://counter.li.org/] > Never let anyone steal your DREAM. [http://keepingdreams.com] > > -- > Ubuntu-accessibility mailing list > Ubuntu-accessibility at lists.ubuntu.com > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tcross at rapttech.com.au Sat May 1 02:06:54 2010 From: tcross at rapttech.com.au (Tim Cross) Date: Sat, 1 May 2010 12:06:54 +1000 Subject: upgrading to lucid In-Reply-To: <20100430190531.1411bb65@teamcharliesangels.com> References: <19418.20433.66012.560917@rapttech.com.au> <19419.32005.827432.762616@rapttech.com.au> <20100430190531.1411bb65@teamcharliesangels.com> Message-ID: <19419.36030.801093.448183@rapttech.com.au> Hi Charlie, thanks for the clarification. I wasn't sure and figured best to play it safe. I thought it might only have been OK to jump the .10 i.e. 9.04 to 10.04, skipping 9.10 was OK but 8.04 had to go via 9.04 to get to 10.04. I guess it makes sense since 8.04 would still be 'supported'. Personally, I'm just too impatient to wait that long! Bring on October! Tim Charlie Kravetz writes: > On Sat, 1 May 2010 10:59:49 +1000 > "Tim Cross" wrote: > > > > > I notice your still back on hardy. I expect you will need to upgrade through > > jaunty and karmic to get to lucid. Its going to take some time. Probably best > > to wait until things quieten down. It took several attempts to download the > > packages yesterday due to high traffic. > > > > Tim > > Actually, there is an upgrade direct from hardy 8.04 to Lucid 10.04, > since they are both LTS. LTS to LTS is always okay, and the only time > you can skip versions on upgrading. > > > -- > Charlie Kravetz > Linux Registered User Number 425914 [http://counter.li.org/] > Never let anyone steal your DREAM. [http://keepingdreams.com] -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. From everett at zufelt.ca Sat May 1 04:36:30 2010 From: everett at zufelt.ca (E.J. Zufelt) Date: Sat, 1 May 2010 00:36:30 -0400 Subject: Lucid accessible install Message-ID: <25446AA6-FF76-49BB-83A0-061EE0C24E75@zufelt.ca> Good evening, I don't monitor this list very well, so apologies if I have missed something. A couple of months ago I received some convoluted directions on launching Orca from the Lucid live CD. I'm curious if the process has been streamlined at all? Directions I received were: Press space every 3-4 seconds, several times, then enter, then F5, then 3, then enter Thanks, Everett Zufelt http://zufelt.ca Follow me on Twitter http://twitter.com/ezufelt View my LinkedIn Profile http://www.linkedin.com/in/ezufelt -------------- next part -------------- An HTML attachment was scrubbed... URL: From oyvind.lode at lyse.net Sat May 1 22:35:33 2010 From: oyvind.lode at lyse.net (=?iso-8859-1?Q?=D8yvind_Lode?=) Date: Sun, 2 May 2010 00:35:33 +0200 Subject: test message - please ignore! 
Message-ID: <00c901cae97e$9cb81020$d6283060$@lode@lyse.net> I'm just testing my new email server settings. Please ignore! From pstowe at gmail.com Sun May 2 15:05:46 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Sun, 2 May 2010 11:05:46 -0400 Subject: Next Meeting **Revised Date**: May 6 2010 10:00 UTC Message-ID: Hi, Due to feedback from people, the next meeting will now be on Thursday May 6 2010 at 10:00 UTC (that's 11:00AM BST). I look forwatd to seeing you all there! Thanks! Penelope From vilmar at informal.com.br Sun May 2 16:45:23 2010 From: vilmar at informal.com.br (jose vilmar estacio de souza) Date: Sun, 02 May 2010 13:45:23 -0300 Subject: eclipse Message-ID: <4BDDAC23.2060901@informal.com.br> Hi all, There is an annoying problem when I use eclipse on ubuntu, although I can not say that only happens with ubuntu. Every time I close eclipse, the window disappears but the eclipse and the java VM are still running, and I am forced to kill the java VM. I read in several places that if I disable the option of assistive technologies the problem goes away, which I naturally can not do. Recently I discovered that if instead of killing the Java VM I kill the process at-spi-registryd, the eclipse is terminated normally. Any suggestions of what can be done? Thanks! From brunogirin at gmail.com Sun May 2 17:07:23 2010 From: brunogirin at gmail.com (Bruno Girin) Date: Sun, 02 May 2010 18:07:23 +0100 Subject: eclipse In-Reply-To: <4BDDAC23.2060901@informal.com.br> References: <4BDDAC23.2060901@informal.com.br> Message-ID: <1272820043.1564.4.camel@nuuk> On Sun, 2010-05-02 at 13:45 -0300, jose vilmar estacio de souza wrote: > Hi all, > There is an annoying problem when I use eclipse on ubuntu, although I > can not say that only happens with ubuntu. > Every time I close eclipse, the window disappears but the eclipse and > the java VM are still running, and I am forced to kill the java VM. > I read in several places that if I disable the option of assistive > technologies the problem goes away, which I naturally can not do. > Recently I discovered that if instead of killing the Java VM I kill the > process at-spi-registryd, the eclipse is terminated normally. > Any suggestions of what can be done? > Thanks! Jose, This is bug 68714 [1]. It may also be related to bug 477978 [2]. Both have been confirmed and reported upstream. Maybe the best thing to do is contribute to the comments in the upstream bug tracker to help the developers isolate the problem. Bruno [1] https://bugs.launchpad.net/ubuntu/+source/at-spi/+bug/68714 [2] https://bugs.launchpad.net/at-spi/+bug/477978 From hammera at pickup.hu Mon May 3 05:01:02 2010 From: hammera at pickup.hu (Hammer Attila) Date: Mon, 03 May 2010 07:01:02 +0200 Subject: An interesting problem when I using longer time with Ubuntu 10.04 release Message-ID: <4BDE588E.10606@pickup.hu> Dear List, I see an interesting problem with my final Ubuntu 10.04 system, prewious time I not see this problem. I installed with my system the 2010.04-27 daily live CD, and install updates with all day. Some time if I using longer time the Orca screen reader and for example browsing the internet, and try changing for example top/bottom panel with Ctrl+Alt+Tab key combination, Orca is does'nt spokening the actual choosed panel before I not release this combination. Another issue if I try change task with Alt+Tab key combination, I not hear application name with I jump before I release this key combination. If I restart Orca Screen Reader, again working fine this two task. 
I see this type problem after 20-30 minute use, and this problem repeat again after this time period. Compiz is not enabled. Now, all updates are installed. gnome-orca package version is 2.30.0-0ubuntu3, and at-spi package version is 1.30.0-0ubuntu2. Anybody see this type problem? Possible help if I reinstall my system with final Ubuntu CD iso? This problem is related with following at-spi bug? The link is following: https://bugs.launchpad.net/bugs/562776 Thank you the answers, Attila From hammera at pickup.hu Mon May 3 16:11:07 2010 From: hammera at pickup.hu (Hammer Attila) Date: Mon, 03 May 2010 18:11:07 +0200 Subject: When I launch Help browser, press a Tab key and Shift+Tab key, I hear a "Gecko based application application Ubuntu help center HTML content" message Message-ID: <4BDEF59B.3040409@pickup.hu> Dear List, Please anybody confirm following bug if a normal new installed Ubuntu 10.04 system this problem is reproducable with another machine: https://bugs.launchpad.net/bugs/574549 The problem is following: When I clicking system/help and support menu item, tabbing for example the "New Ubuntu user?" link and press a Shift+Tab key, Orca spokening following message: "gecko based application application Ubuntu help center HTML content" Prewious I do a bugreport with Orca with this related problem, but I am not sure this is an Orca or Yelp bug. Prewious Ubuntu Lucid packaged Yelp package versions I not see this bug. Yelp Ubuntu package version now following: 2.30.0-0Ubuntu2 This problem related Orca bugreport is following: https://bugzilla.gnome.org/show_bug.cgi?id=616650 Thank you the help, Attila From pstowe at gmail.com Mon May 3 16:59:16 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Mon, 3 May 2010 12:59:16 -0400 Subject: Next Meeting **Revised Date**: May 6 2010 10:00 UTC In-Reply-To: References: Message-ID: The meeting will be in #ubuntu-accessibility. And for the person who asked, yes, that's 6AM EDT in the US. I know it's really early, but last time most of the people at the meeting were in the UK/Europe and also due to my work schedule this week, I needed to hold an early meeting. Please let me know if you have any further questions! ~Penelope From esj at harvee.org Mon May 3 17:06:51 2010 From: esj at harvee.org (Eric S. Johansson) Date: Mon, 03 May 2010 13:06:51 -0400 Subject: Next Meeting **Revised Date**: May 6 2010 10:00 UTC In-Reply-To: References: Message-ID: <4BDF02AB.5020508@harvee.org> On 5/3/2010 12:59 PM, Penelope Stowe wrote: > The meeting will be in #ubuntu-accessibility. > > And for the person who asked, yes, that's 6AM EDT in the US. I know > it's really early, but last time most of the people at the meeting > were in the UK/Europe and also due to my work schedule this week, I > needed to hold an early meeting. > > Please let me know if you have any further questions! I could use a graphics person with some animation knowledge. I've been advocating a tool and a new form of user interface to make speech user interface is far more practical and discoverable. Obviously my mouse hand is on the fritz and, well speech just as the wrong tool for creating graphics. I could really use the help with this because I believe that these user interface models or something derived from them would be the next step in a better direction for us. Everyone I've given the whiteboard talk to really loves it but fuzzy whiteboard and me waving my hands around just doesn't translate to the Internet. 
:-) --- eric From phillw at phillw.net Mon May 3 18:23:09 2010 From: phillw at phillw.net (Phillip Whiteside) Date: Mon, 3 May 2010 19:23:09 +0100 Subject: Next Meeting **Revised Date**: May 6 2010 10:00 UTC In-Reply-To: <4BDF02AB.5020508@harvee.org> References: <4BDF02AB.5020508@harvee.org> Message-ID: Hi, not sure I understand, do you want something like a slide-presentations (power-point style) as done on these https://wiki.ubuntu.com/UbuntuDeveloperWeek or a screen cast such as http://lubuntu.net/node/32? If it is the latter, I will ask leszek if he could spare some time to help you. If it is the former, I will make some further enquiries Regards, Phill. On Mon, May 3, 2010 at 6:06 PM, Eric S. Johansson wrote: > On 5/3/2010 12:59 PM, Penelope Stowe wrote: > > The meeting will be in #ubuntu-accessibility. > > > > And for the person who asked, yes, that's 6AM EDT in the US. I know > > it's really early, but last time most of the people at the meeting > > were in the UK/Europe and also due to my work schedule this week, I > > needed to hold an early meeting. > > > > Please let me know if you have any further questions! > > I could use a graphics person with some animation knowledge. I've been > advocating a tool and a new form of user interface to make speech user > interface > is far more practical and discoverable. Obviously my mouse hand is on the > fritz > and, well speech just as the wrong tool for creating graphics. > > I could really use the help with this because I believe that these user > interface models or something derived from them would be the next step in a > better direction for us. Everyone I've given the whiteboard talk to really > loves > it but fuzzy whiteboard and me waving my hands around just doesn't > translate to > the Internet. :-) > > --- eric > > -- > Ubuntu-accessibility mailing list > Ubuntu-accessibility at lists.ubuntu.com > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility > -------------- next part -------------- An HTML attachment was scrubbed... URL: From esj at harvee.org Mon May 3 22:07:51 2010 From: esj at harvee.org (Eric S. Johansson) Date: Mon, 03 May 2010 18:07:51 -0400 Subject: user interface animations Re: Next Meeting **Revised Date**: May 6 2010 10:00 UTC In-Reply-To: References: <4BDF02AB.5020508@harvee.org> Message-ID: <4BDF4937.9070908@harvee.org> On 5/3/2010 2:23 PM, Phillip Whiteside wrote: > Hi, > > not sure I understand, do you want something like a slide-presentations > (power-point style) as done on these > https://wiki.ubuntu.com/UbuntuDeveloperWeek or a screen cast such as > http://lubuntu.net/node/32? > > If it is the latter, I will ask leszek if he could spare some time to > help you. If it is the former, I will make some further enquiries The end result I want is an animated sequence intermixing audio and graphics representing a user interface and its operation. F take a simple example of changing a directory. For the most part, the is an open loop system where you have the grammar of "cd " And dir is just a list of names. Usually static or, at best customized by the developer on each machine. This sucks. open a terminal window Try "push to " observe: sidebar with last 10 reference directories on-screen. "Push to 10" The 10th item on the list is used to execute a pushd command and some marker is left on the screen somewhere indicating it's been pushed. Maybe a [dir] in the title bar. But what if you didn't want one of the 10 on the first list. 
You want something from the most frequently used list "Go frequently used" and the sidebar indication would change to list of most frequently used directories. One could also search so with something like "Name starts with albc" a search box would open up on top of the sidebar and you could edit by voice or by hand search box. However, at no point the user interface give you any capability whatsoever to do anything by hand other than the simple editing feature. Everything must be driven by voice As you can see, this is a moderately complex user interaction that should be simpler when presented visually and by request for a helpful animator. Things get more complicated why start talking about the enhanced dictation box I've been pushing for a couple of years. It's probably the best solution available for bridging between NaturallySpeaking and Linux. I advocate for the solution because I do not believe it is possible for the open-source community to come up with a workable speech recognition solution by the time I die. I would like to spend the next 15 years of my life being effective, working with speech recognition to make money, so I can write, and so I can have a comfortable life. I want to make programming by voice work far better than we've done so far. Enhanced dictation box is core to my ideas on that topic. But, enhanced dictation box is a simple complicated idea that needs animation to make the concept accessible to many people. now that I'm already wound up on the topic. :-) Enhanced dictation box is the same as the regular dictation box with four major differences. 1) user definable cut and paste sequences (mouse control, keystroke sequences) etc. 2) persistence. Does not go away until you tell it to. Implies one copy per active application but active dictation box matches window focus 3) internal log/journaling system. Can't lose data, cannot figure out what you said 4) input and output transformations. After you grab data, and transform it into some speech recognition friendly form. When you paste it, the data returns to the speech recognition hostile form If one is truly clever, the application we split into two parts. The first part is the user interface. Light, fast, doesn't degrade recognition. The second part talks to the first part by the net which means that do not need to be on the same machine. Do it right and you can dictate on Windows or wine and have your results show up in Linux. Not that difficult to build if you have hands but potentially extremely useful. > > Regards, > > Phill. > > > On Mon, May 3, 2010 at 6:06 PM, Eric S. Johansson > wrote: > > On 5/3/2010 12:59 PM, Penelope Stowe wrote: > > The meeting will be in #ubuntu-accessibility. > > > > And for the person who asked, yes, that's 6AM EDT in the US. I know > > it's really early, but last time most of the people at the meeting > > were in the UK/Europe and also due to my work schedule this week, I > > needed to hold an early meeting. > > > > Please let me know if you have any further questions! > > I could use a graphics person with some animation knowledge. I've been > advocating a tool and a new form of user interface to make speech > user interface > is far more practical and discoverable. Obviously my mouse hand is > on the fritz > and, well speech just as the wrong tool for creating graphics. > > I could really use the help with this because I believe that these user > interface models or something derived from them would be the next > step in a > better direction for us. 
Everyone I've given the whiteboard talk to > really loves > it but fuzzy whiteboard and me waving my hands around just doesn't > translate to > the Internet. :-) > > --- eric > > -- > Ubuntu-accessibility mailing list > Ubuntu-accessibility at lists.ubuntu.com > > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility > > From pstowe at gmail.com Thu May 6 09:50:23 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Thu, 6 May 2010 05:50:23 -0400 Subject: Reminder: Meeting This Morning at 10:00 UTC! Message-ID: Just a final reminder that we have a meeting in about 12 minutes in #ubuntu-accessibility ! The agenda is at https://wiki.ubuntu.com/Accessibility/Team/MeetingAgenda I look forward to seeing whoever can make it! From waywardgeek at gmail.com Thu May 6 21:23:32 2010 From: waywardgeek at gmail.com (Bill Cox) Date: Thu, 6 May 2010 17:23:32 -0400 Subject: Accessibility improved in Synaptic Message-ID: I've patched gtk+ to allow programmers to easily add descriptions to images. This is probably useful in many places, but I decided to start with Synaptic. Users can now hear the status of a package read to them, not just "icon". Remember to right-click on package items with Orca+8 to get the package menu. I've filed a bug at bugzilla.gnome.org on this, and submitted a patch: https://bugzilla.gnome.org/show_bug.cgi?id=617629 Without this patch or something like it, it is not possible for programmers to add accessible descriptions to icons in a tree view, which is also used for lists of items with check boxes. Should I file a bug at launchpad.net also? Vinux users can test the new synaptic version if they add the Vinux/Ubuntu Lucid Testing PPA. Bill From hanke at brailcom.org Fri May 7 10:56:32 2010 From: hanke at brailcom.org (Hynek Hanke) Date: Fri, 07 May 2010 12:56:32 +0200 Subject: Speech Dispatcher 0.7 Beta3 -- Please help with testing Message-ID: <4BE3F1E0.8020509@brailcom.org> Dear all, we have uploaded a second public beta version for the 0.7 release of Speech Dispatcher The Beta 3 differs from Beta 1 in the following aspects: * Unix sockets are by default placed in ~/.speech-dispatcher/ thus fixing a DoS security concern,libraries now respect the SPEECHD_SOCK environment variable * Speech Dispatcher now compiles on MacOS * Generic module fixed to respect new audio setting mechanism * Bugfixes We would like to ask you to help us with testing and report any issues so that we can fix them before the final release. You can find the 0.7 Beta 3 version here: http://www.freebsoft.org/pub/projects/speechd/speech-dispatcher-0.7-beta3.tar.gz This release is based on the great work done in the unofficial development branch managed by Luke Yelavich, but some parts needed to be reworked before an official release to ensure a cleaner design, conformance to standards and smoother interoperability with the rest of the system; the new changes were also documented etc. Most important improvements in the 0.7 version are: * Speech Dispatcher uses UNIX style sockets as default means of communication, thus avoiding the necessity to choose a numeric port and greatly easying session integration. Inet sockets are however still supported for communication over network. * Autospawn -- server is started automatically when a client requests it It can be forbidden in the appropriate server configuration file. 
* Pulse Audio output reworked and fixed
* Dispatcher runs as user service (not system service) by default and doesn't require the previous presence of ~/.speech-dispatcher directory
* All logging is now managed centrally, not by separate options
* Graceful audio fallback (e.g. if Pulse is not working, use Alsa...)
* Various bugfixes and fine-tunings
* Updated documentation

For a more detailed description of the changes, please see the Git log: http://git.freebsoft.org/?p=speechd.git The documentation can be found in the doc/ directory of the .tar.gz package. With Best regards, Hynek Hanke Brailcom, o.p.s.

From labrad0r at edpnet.be Sat May 8 14:58:45 2010 From: labrad0r at edpnet.be (Labrador) Date: Sat, 8 May 2010 16:58:45 +0200 Subject: Lucid and brltty v. 4.1 - bug in brltty.conf Message-ID: <20100508145845.GA3663@jupiter>

Hi, as long as I kept Serial: as the braille-device parameter in the Ubuntu Lucid brltty.conf instead of /dev/ttyS0, my Alva ABT380 wasn't starting, even though RUN_BRLTTY=yes was set in /etc/default/brltty. Why not fix such a bad bug for good, or explain why Serial: is shipped as the default (probably wrong) parameter? Labrador

From samuel.thibault at ens-lyon.org Sun May 9 22:02:12 2010 From: samuel.thibault at ens-lyon.org (Samuel Thibault) Date: Mon, 10 May 2010 00:02:12 +0200 Subject: Lucid and brltty v. 4.1 - bug in brltty.conf In-Reply-To: <20100508145845.GA3663@jupiter> References: <20100508145845.GA3663@jupiter> Message-ID: <20100509220212.GC5381@const.famille.thibault.fr>

Labrador, on Sat 08 May 2010 16:58:45 +0200, wrote: > Why not fix such a bad bug for good, or explain why Serial: is shipped > as the default (probably wrong) parameter? Because nobody reported such an issue. Are you really using an upper-case 'S' in Serial? It might just be that. Samuel

From waywardgeek at gmail.com Mon May 10 10:47:55 2010 From: waywardgeek at gmail.com (Bill Cox) Date: Mon, 10 May 2010 06:47:55 -0400 Subject: Fix to ibmtts.c for Ubuntu Lucid x64 Message-ID:

I've figured out how to make voxin work on Ubuntu Lucid x64. I don't understand why the fix works, but here it is: just delete the "DBG" statement from the module_audio_init function in src/modules/ibmtts.c in the speech-dispatcher package. Recompile speech-dispatcher on a 32-bit machine, with voxin already installed, and it will create a new sd_ibmtts file that works on Lucid x64. Copy that to your 64-bit machine at /usr/lib/speech-dispatcher-modules/sd_ibmtts. I've attached the diff file, which is a one-line delete. I don't understand why this makes sd_ibmtts work on Lucid x64. I did read that ctime is not thread safe, and that may have something to do with it. It may also be interacting with an odd optimization bug. More likely, something else is trashing a bit of memory. To cause the bug to occur, you only need three lines from the DBG macro at the top of module_audio_init: if (Debug) { time_t t = time(NULL); ctime(&t); } Bill

-------------- next part -------------- A non-text attachment was scrubbed... Name: ibmtts.c.diff Type: text/x-patch Size: 246 bytes Desc: not available URL:

From pstowe at gmail.com Mon May 10 13:46:49 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Mon, 10 May 2010 15:46:49 +0200 Subject: Ubuntu Accessibility at the Ubuntu Developer Summit Message-ID:

Hiya, There are currently two different sessions at UDS dealing with accessibility. The first one is on the desktop track so it will be development oriented (I suspect).
It's called "Get the accessibility infrastructure updated for use with GNOME 3 in Ubuntu" and is currently scheduled for Wednesday at 9:00 UTC in the Cocobolo 1 room. (If you participate remotely, the corresponding IRC channel is #ubuntu-uds-cocobolo-1 ). The second one is on the community track and is "Reviving and Reorganizing the Ubuntu Accessibility Team" and is about getting ourselves as a community team more organized (things like creating our goals and plans as was discussed in the last meeting -which I will have notes from up later today). It's currently scheduled for Friday at 15:15 UTC, however, if we want, we can move it earlier in the week. Personally, I'd prefer to move it earlier, however, if any of you have any objections, please let me know ASAP. If I haven't heard any objections by the time I go to bed tonight, I'm going to ask them to move it earlier. Feel free to also let me know if certain times are better for you. We're in Belgium, so figure we will be having sessions between 8:00 to 18:00 UTC most days. For those of you who aren't here (which I know is most of you), you can find out information about participating remotely on https://wiki.ubuntu.com/UDS-M/RemoteParticipation I hope as many people can make these sessions (either in person or remotely) as possible! And please do give me feedback on timing for the community-related Ubuntu Accessibility Team session! Thank you! Penelope (Pendulum) From aerospace1028 at hotmail.com Mon May 10 13:47:13 2010 From: aerospace1028 at hotmail.com (aerospace1028 at hotmail.com) Date: Mon, 10 May 2010 09:47:13 -0400 Subject: upgrade failure Message-ID: Greetings, I seem to have run into trouble upgrading from Ubuntu8.04 to Ubuntu10.04. The other night, I figured I would try the upgrade process while the internet trafic should be low. I logged into my administrator account and ran sudo aptitude update followed by sudo aptitude safe-upgrade. There were about 8 packages that were updated and none listed as "NOT UPGRADABLE." After the standard update process completed, from the gnome-terminal, I ran sudo do-release-upgrade. Every thing started off fine. It took about half an hour to download the necessary packages from the repositories. I monitored the process for a little while (10-15 minutes) after that: It looked like the standard update messages to me ("preparing to replace package version x with package version y ..."). Since the entire process might take afew hours (Before downloading, it said there were over 600 packages to instal and well over 1,000 to upgrade) I decided to stretch my legs a little and came back to check on the progress every 15-30 minutes. After about 2 hours, orca stopped speaking and I found a small rectangular dialog box in the middle of the screen. I wasn't sure if orca just lost focus so first I tried altTabbing to give the dialog focus. Nothing seemed to happen. I tried bringing up the run dialog to re-launch orca (I'm still in gnome-2.22, orca randomly stops and I find re-launching it manually appears to work), but the run dialog didn't appear to pop up. Before I could try anything else, the screen went almost blank. To me it looked deep blue, and I saw some stuff an the extreme top and bottom of the screen which I assume was the outline of the top and bottom panels. I wasn't sure what was going on, but the processor sounded like it was still chugging along, so I left everything alone until it slowed down. 
When the processor sounded like it stopped cranking, I waited about fifteen minutes and nothing happened. I switched to text console 1 and logged in. I connected my BrailleNote and attempted to launch brltty. I never received any output on the braille display, but I noticed the screen filled with messages. They kept coming and coming. Visually, it looked like when I run an update process; they had a little bit of variation in length, and some popped up quickly while others took longer before the next message joined the queue. I hoped the release process was still going. I stepped back before my impatience got the better of me. I came back about every half hour and pressed the control key to reactivate the screen (I'm on a laptop, and after thirty or so minutes of inactivity the screen goes to sleep, I guess). Eventually, the screen appeared to stop scrolling. All the messages stacked up in columns. It took another hour before I could get sighted assistance to review what was going on (about 7 hours from the start of do-release-upgrade). I was getting an infinite loop of messages stating brltty couldn't open the device. I had disconnected the BrailleNote a while back to keep it from getting dropped. I tried reconnecting it, but it didn't appear to have any effect. After another 15 or so minutes, I just gave up, pressed the power button on the laptop and killed everything.

When I came back to reboot, I got the regular grub menu (I dual boot with Windows XP). Sighted assistance says the Ubuntu entry is 8.04. When I try to launch Ubuntu, I'm not surprised to get a static screen full of messages. From talking with sighted assistance (who doesn't know Linux), it appears a basic shell is launched after several modules and devices fail to load. There's no login, so I presume the users aren't being initialized. help gave a long list of commands that appear to be standard shell commands (cat, mount, umount, &c.).

Before I do anything drastic, is there a way to recover? From the basic shell, can I remount my root and home partitions and either restart or continue the installation process? Or is there at least a way to capture the messages (or any other relevant information), say to a text file on a USB stick? Thanks in advance for any suggestions.

_________________________________________________________________ The New Busy is not the too busy. Combine all your e-mail accounts with Hotmail. http://www.windowslive.com/campaign/thenewbusy?tile=multiaccount&ocid=PID28326::T:WLMTAGL:ON:WL:en-US:WM_HMP:042010_4 -------------- next part -------------- An HTML attachment was scrubbed... URL:

From pstowe at gmail.com Mon May 10 14:52:13 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Mon, 10 May 2010 16:52:13 +0200 Subject: Notes from the May 6 2010 meeting Message-ID:

Hiya! So we met last meeting and I just wanted to post to the list notes from the last meeting (I'll also put this up on the wiki). The main reason for this meeting was to start thinking about things we wanted to have covered in the session this week at the Ubuntu Developer Summit. Here are some things which people came up with. I'd like some feedback on list as well, as it would be wonderful if as many people as possible could make the session! So you know, the Accessibility Team session that I set up is in the community track; however, there is also a desktop track session that I think Luke is doing. I'll post the information for both sessions to the list in a separate e-mail.
The main things we discussed as needing to work on for the Maverick cycle (in no particular order): 1) Organizing and getting structure to the team 2) Creating a statement of where the team is and where we want to be. 3) Documentation (ranging from how-tos to basic information about what accessibility programs are in universe and what other things might be useful) 4) structure for the team Please feel free to discuss any of these things here on the list (especially if you can't make the sessions) and we'll try to discuss any concerns. Also if you have any other things you think might be good to discuss specifically to be done during the next 6 months (during the Maverick cycle), please send them to list. Please note that the track that I am running at UDS is really based on community and getting the team working as a functioning group and less on specific development. I certainly think it would be good for as many developers interested in accessibility to attend as possible, however, this particular session is community driven. Thank you! Penelope From cjk at teamcharliesangels.com Mon May 10 18:00:07 2010 From: cjk at teamcharliesangels.com (Charlie Kravetz) Date: Mon, 10 May 2010 12:00:07 -0600 Subject: Ubuntu Accessibility at the Ubuntu Developer Summit In-Reply-To: References: Message-ID: <20100510120007.5af774c3@teamcharliesangels.com> On Mon, 10 May 2010 15:46:49 +0200 Penelope Stowe wrote: > Hiya, > > There are currently two different sessions at UDS dealing with accessibility. > > The first one is on the desktop track so will be development oriented > (I suspect). It's called "Get the accessibility infrastructure updated > for use with GNOME 3 in Ubuntu" and is currently scheduled for > Wednesday at 9:00 UTC in the Cocobolo 1 room. (If you participate > remotely, the corresponding IRC channel is #ubuntu-uds-cocobolo-1 ). > > > The second one is on the community track and is "Reviving and > Reorganizing the Ubuntu Accessibility Team" and is about getting > ourselves as a community team more organized (things like creating our > goals and plans as was discussed in the last meeting -which I will > have notes from up later today). It's currently scheduled for Friday > at 15:15 UTC, however, if we want, we can move it earlier in the week. > Personally, I'd prefer to move it earlier, however, if any of you have > any objections, please let me know ASAP. If I haven't heard any > objections by the time I go to bed tonight, I'm going to ask them to > move it earlier. Feel free to also let me know if certain times are > better for you. We're in Belgium, so figure we will be having sessions > between 8:00 to 18:00 UTC most days. > > For those of you who aren't here (which I know is most of you), you > can find out information about participating remotely on > https://wiki.ubuntu.com/UDS-M/RemoteParticipation > > > I hope as many people can make these sessions (either in person or > remotely) as possible! And please do give me feedback on timing for > the community-related Ubuntu Accessibility Team session! > > Thank you! > Penelope (Pendulum) > I am attending remotely, from Idaho in the USA. I show "Get the accessibility infrastructure updated for use with GNOME 3 in Ubuntu" at 08:00 UTC, on the revised schedule. I have both Wednesday and Thursday afternoon on the schedule free for the most part. Don't know if it matters much, but I will try to be there for both sessions. -- Charlie Kravetz Linux Registered User Number 425914 [http://counter.li.org/] Never let anyone steal your DREAM. 
[http://keepingdreams.com] From pstowe at gmail.com Tue May 11 08:43:52 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Tue, 11 May 2010 10:43:52 +0200 Subject: Fwd: The Excalibur System In-Reply-To: <4BE916C9.8010000@gmx.net> References: <4BE916C9.8010000@gmx.net> Message-ID: I got this from the Gnome Accessibility list, although it looks like the original e-mail went to ubuntu-devel-discuss . I'm ccing the original writer just so he knows we exist! I thought it's something you all will also be interested in. Penelope > > -------- Original Message -------- > Subject: The Excalibur System > Date: Mon, 10 May 2010 14:20:54 -0400 > From: Ryan Oram > To: ubuntu-devel-discuss at lists.ubuntu.com > > I've caught a big fish for you guys. My university (Trent University) > has agreed to sponsor me to develop a Ubuntu-based system to replace > the current Windows/Netware system currently employed at Trent. > > This system will be centered around thin clients, running NX Client, > remote desktoping into a Lucid-based server with NX Server installed. > It will be called the Excalibur System. Trent IT has also agreed to > put NX Client on the Windows Image at Trent, so every computer will be > able to access the Excalibur System. > > A copy of my proposal is availible here: > http://tinyurl.com/excalibur-system > > I have also posted screenshots of my prototype here: > http://tinyurl.com/excalibur-screens > > There is a caveat. The accessibility frameworks on Linux are frankly > crap. Because of this, the Excalibur thin client OS will always be > dual-booted with Windows on any computers it is installed on. > Additionally, it will not be made default on any public labs at Trent. > These stipulations will stay in place until the accessibility > frameworks meet the requirements of the Disability Services Office. > > > The requirements of the Disability Services Office are as follows: > > 1. A comprehensive reading and writing support framework (such as Read > & Write or Kurzweil). > > Ocra and aspell could likely be used for this, but grammar support > would be needed as well. > > 2. Mindmapping software (such as Inspiration) > > The DSO has told me that the current open source solutions are > insufficient but could be extended to fit their needs. > > 3. A speech recognition application (like Dragon Naturally Speaking) > > This can come later. > > > You may ask why Canonical would even develop this software. There is a > simple reason: It would make Edubuntu feasible. If Canonical writes > the software that the Disability Services Office wants (which were a > voice recongition system, a replacement for Kurzweil, and extending > the open source mind-mapping software), Edubuntu would instantly > become the preferred platform for every school on the planet. Why > spend money on Windows and Mac OS X when you can get the software you > license for thousands upon thousands of dollars for free, with the > exception of tech support costs? Canonical would be able to make a > killing on supporting schools using this software, easily getting back > their investment. > > Keep in mind too, this is a university. I'm sure there would be a big > list of alumni willing to fund such a project, if external funding is > needed. I'm already working on getting the current head of the > Concurrent Education program at Trent to support the proposal and get > the teacher's union in Ontario aboard. 
The possibility of having a > Kurzweil equivalent available to every student regardless of wealth or > background is frankly the dream of every teacher. > > > Please let me know what you guys think of all of this. > > Thanks, > Ryan Oram > > -- > Ubuntu-devel-discuss mailing list > Ubuntu-devel-discuss at lists.ubuntu.com > Modify settings or unsubscribe at: > https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss > > _______________________________________________ > gnome-accessibility-list mailing list > gnome-accessibility-list at gnome.org > http://mail.gnome.org/mailman/listinfo/gnome-accessibility-list > From pstowe at gmail.com Tue May 11 09:33:56 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Tue, 11 May 2010 11:33:56 +0200 Subject: New Time and Location: Reorganizing and Reviving the Ubuntu Accessibility Team Message-ID: Hello, The community-based Reorganizing and Reviving the Ubuntu Accessibility Team has been moved to tomorrow (Wednesday, May 12) at 13:00 UTC in the Snakewood room (#ubuntu-uds-snakewood) as the feedback I got from people was unanimously in favour of moving the meeting up. Hopefully none of the scheduling changes which have been happening here will change any of this, however, I'll keep you all informed if it does! Thanks, Penelope From francesco.fumanti at gmx.net Tue May 11 12:25:56 2010 From: francesco.fumanti at gmx.net (Francesco Fumanti) Date: Tue, 11 May 2010 14:25:56 +0200 Subject: Notes from the May 6 2010 meeting In-Reply-To: References: Message-ID: <4BE94CD4.4000302@gmx.net> Hi, It might also be good to have a page listing the various accessibility problems, shortcomings and useful enhancements of the accessibility in Ubuntu. So let me begin with three points: - The incompatibility of gksu with at-spi: There are applications like the Synaptic Package Manager that use gksu to get root privileges. However, gksu is not compatible to at-spi, resulting in a partially freezed desktop when there is an application that actively uses at-spi. Thus, if Maverick is not shipping at-spi2, I might be good considering whether it would make sense and be feasable to replace gksu with something compatible to at-spi. Some time ago, I started a thread about this on the Ubuntu development discussion list: https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2010-March/010770.html - Dwell click during GDM: The Ubuntu desktop ships the dwell click feature, that enables users to do clicks by software. (In other words, it enables users to perform the various mouseclicks without using a hardware button.) This feature is not yet available during GDM and the problem has already been discussed and a patch provided in GNOME: https://bugzilla.gnome.org/show_bug.cgi?id=589906 The solution chosen in the patch adds an icon to the panel in GDM; this item activates dwelling when the user hovers with the mouse over the icon. Another solution would have been to make the already available accessibility icon dwellable; this would however also require a dwellable item in the accessibility dialog of GDM; and above all, I have been told that because of the nature of the accessibility icon on the GDM panel, this solution would require considerably more work. (Unfortunately, I don't know the exact details.) - Do not hide the Universal Access menu: Each time I submit a new version of onboard to the sponsors of main, the package gets patched to hide the desktop file and the Universal Access menu. 
I would appreciate if the Universal Access menu and the items in it would be visible by default. This might especially be important for new users (not only disabled users, but for example also TabletPC users) that do not know that Ubuntu ships some accessibility tools or that do not know how to make them appear. If these points (especially the first) might be relevant during UDS, it would be great if there would be somebody to represent them. Thanks in advance for reading this, Francesco. PS: Cc'ing Gerd Kohlberger,the author of mousetweaks and the patch to add dwelling to GDM. On 05/10/2010 04:52 PM, Penelope Stowe wrote: > Hiya! > > So we met last meeting and I just wanted to post to the list notes > from the last meeting (I'll also put this up on the wiki). > > The main reason for this meeting was to start thinking about things we > wanted to have covered in the session this week at the Ubuntu > Developer Summit. > > Here are some things which people came up with. I'd like some feedback > on list as well as it would be wonderful if as many people as possible > could make the session! So you know, the Accessibility Team session > that I set up is in the community track, however, there is also a > desktop track session that I think Luke is doing. I'll post the > information for both sessions to the list in a separate e-mail. > > The main things we discussed as needing to work on for the Maverick > cycle (in no particular order): > > 1) Organizing and getting structure to the team > > 2) Creating a statement of where the team is and where we want to be. > > 3) Documentation (ranging from how-tos to basic information about what > accessibility programs are in universe and what other things might be > useful) > > 4) structure for the team > > Please feel free to discuss any of these things here on the list > (especially if you can't make the sessions) and we'll try to discuss > any concerns. > > Also if you have any other things you think might be good to discuss > specifically to be done during the next 6 months (during the Maverick > cycle), please send them to list. > > Please note that the track that I am running at UDS is really based on > community and getting the team working as a functioning group and less > on specific development. I certainly think it would be good for as > many developers interested in accessibility to attend as possible, > however, this particular session is community driven. > > Thank you! > Penelope > From mj at mjw.se Tue May 11 21:13:18 2010 From: mj at mjw.se (mattias) Date: Tue, 11 May 2010 23:13:18 +0200 Subject: brltty Message-ID: Are brltty included in the alternate cd yet I meen 10.04 From pstowe at gmail.com Wed May 12 05:47:54 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Wed, 12 May 2010 07:47:54 +0200 Subject: Remind: Ubuntu Accessibility Sessions today at UDS! Message-ID: Hiya, Just a final reminder that there are two accessibility sessions today at UDS. Information about remote participation: https://wiki.ubuntu.com/UDS-M/RemoteParticipation 8:00 UTC - Get the accessibility infrastructure updated for use with Gnome 3.0 (desktop track) --Cocobolo 1 (#ubuntu-uds-cocobolo-1 on IRC and see remote participation link for how to get audio) 13:00 UTC - Reorganizing and Reviving the Ubuntu Accessibility Team (community track) --Snakewood (#ubuntu-uds-snakewood on IRC and see remote participation link for how to get audio) I hope as many of you as possible can attend! 
Thanks, Penelope From themuso at ubuntu.com Wed May 12 12:43:31 2010 From: themuso at ubuntu.com (Luke Yelavich) Date: Wed, 12 May 2010 14:43:31 +0200 Subject: Heads up, shortcut key to access messaging/sound indicator menus. Message-ID: <20100512124331.GA32707@barbiton.yelavich.home> Hi guys I just found out today that there is a shortcut key to access the messaging, bluetooth, power and sound indicator menus from anywhere within the GNOME/Ubuntu desktop. At the moment, I believe this keyboard shortcut is not configurable, however perhaps someone can look into making this configurable, if upstream haven't looked into doing so already. To access the messaging, bluetooth, sound, and power indicator menus, press Windows key + M, or if you prefer the Unix term, Super + M. Then you use the arrow keys to move between the menus. When you press escape, focus will return to whereever you were prior to entering the menus. I really hope this helps some people. I know I am finding this useful myself. Luke From hammera at pickup.hu Wed May 12 12:59:09 2010 From: hammera at pickup.hu (Hammer Attila) Date: Wed, 12 May 2010 14:59:09 +0200 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <20100512124331.GA32707@barbiton.yelavich.home> References: <20100512124331.GA32707@barbiton.yelavich.home> Message-ID: <4BEAA61D.7000004@pickup.hu> Hy Luke, Unfortunately my system this key combination is nothing to do. What packages and applets need installed to work this? I removed my system the evolution related packages, because I better like with Thunderbird, and removed Empathy related packages, because better like Pidgin. :-):-) Possible this is the matter? Attila From huntp at ukonline.co.uk Wed May 12 13:09:46 2010 From: huntp at ukonline.co.uk (Paul Hunt) Date: Wed, 12 May 2010 14:09:46 +0100 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <4BEAA61D.7000004@pickup.hu> References: <20100512124331.GA32707@barbiton.yelavich.home> <4BEAA61D.7000004@pickup.hu> Message-ID: <4BEAA89A.3090501@ukonline.co.uk> Hi Attila, The keystroke isn't working on my laptop either. And I haven't removed either Evolution or Empathy, although I have installed Pidgin and Thunderbird as well. Paul On 12/05/10 13:59, Hammer Attila wrote: > Hy Luke, > > Unfortunately my system this key combination is nothing to do. What > packages and applets need installed to work this? > I removed my system the evolution related packages, because I better > like with Thunderbird, and removed Empathy related packages, because > better like Pidgin. :-):-) > Possible this is the matter? > > Attila > > From j.schmude at gmail.com Wed May 12 13:11:01 2010 From: j.schmude at gmail.com (Jacob Schmude) Date: Wed, 12 May 2010 09:11:01 -0400 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <20100512124331.GA32707@barbiton.yelavich.home> References: <20100512124331.GA32707@barbiton.yelavich.home> Message-ID: <4BEAA8E5.1000305@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi Luke Awesome, this is a huge help. Configurability would be a good thing though, and maybe the default could be mapped to something a bit more natural, e.g. panel menu is alt+f1, run is alt+f2, search is alt+f3 etc, perhaps alt+f4 or something? Super+M just seems arbitrary by GNOME convensions. 
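One quick way to check whether the Windows key is actually delivering Super to GNOME -- the question that comes up in the replies that follow -- is from a terminal, using the standard X utilities xmodmap and xev (usually installed by default on an Ubuntu desktop). This is only a rough sketch; the exact modifier lines vary by keyboard layout:

xmodmap -pm
# prints the modifier map; Super_L/Super_R normally appear on the mod4 line

xev | grep --line-buffered -i keysym
# press the Windows key inside the xev window and check which keysym it reports

If the key shows up as Multi_key (compose), or under a modifier other than mod4, the Super+M binding described above will not fire until the mapping is changed.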
On 05/12/2010 08:43 AM, Luke Yelavich wrote: > Hi guys > I just found out today that there is a shortcut key to access the messaging, bluetooth, power and sound indicator menus from anywhere within the GNOME/Ubuntu desktop. At the moment, I believe this keyboard shortcut is not configurable, however perhaps someone can look into making this configurable, if upstream haven't looked into doing so already. > > To access the messaging, bluetooth, sound, and power indicator menus, press Windows key + M, or if you prefer the Unix term, Super + M. Then you use the arrow keys to move between the menus. When you press escape, focus will return to whereever you were prior to entering the menus. > > I really hope this helps some people. I know I am finding this useful myself. > > Luke > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkvqqOQACgkQybLrVJs+Wi451ACfaHw2dJ1RLpyQsaeWRoI3aZ5I LtMAn2jkW+bq6vYIDRuiAVuGShAI7UCu =7YxA -----END PGP SIGNATURE----- From j.schmude at gmail.com Wed May 12 13:12:28 2010 From: j.schmude at gmail.com (Jacob Schmude) Date: Wed, 12 May 2010 09:12:28 -0400 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <4BEAA89A.3090501@ukonline.co.uk> References: <20100512124331.GA32707@barbiton.yelavich.home> <4BEAA61D.7000004@pickup.hu> <4BEAA89A.3090501@ukonline.co.uk> Message-ID: <4BEAA93C.5000200@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi Do either of you have another key mapped to the super key (or windows key if you prefer)? It's common to map this to the compose key, however this changes the key from a Super to a Multi and thus renders the shortcut invalid. On 05/12/2010 09:09 AM, Paul Hunt wrote: > Hi Attila, > > The keystroke isn't working on my laptop either. > > And I haven't removed either Evolution or Empathy, although I have > installed Pidgin and Thunderbird as well. > > Paul > > > On 12/05/10 13:59, Hammer Attila wrote: >> Hy Luke, >> >> Unfortunately my system this key combination is nothing to do. What >> packages and applets need installed to work this? >> I removed my system the evolution related packages, because I better >> like with Thunderbird, and removed Empathy related packages, because >> better like Pidgin. :-):-) >> Possible this is the matter? >> >> Attila >> >> > > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkvqqTwACgkQybLrVJs+Wi5PmACePEhIje5Hk6V1Uphj/09jmDPg aIYAnjLN2Ksvb6bKg4Qnp+WTXswvLVz3 =47ON -----END PGP SIGNATURE----- From huntp at ukonline.co.uk Wed May 12 13:53:43 2010 From: huntp at ukonline.co.uk (Paul Hunt) Date: Wed, 12 May 2010 14:53:43 +0100 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <4BEAA93C.5000200@gmail.com> References: <20100512124331.GA32707@barbiton.yelavich.home> <4BEAA61D.7000004@pickup.hu> <4BEAA89A.3090501@ukonline.co.uk> <4BEAA93C.5000200@gmail.com> Message-ID: <4BEAB2E7.6000908@ukonline.co.uk> Hi, I've got it working. Seems I was missing the panel applet from my top panel. Paul On 12/05/10 14:12, Jacob Schmude wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Hi > Do either of you have another key mapped to the super key (or windows > key if you prefer)? It's common to map this to the compose key, however > this changes the key from a Super to a Multi and thus renders the > shortcut invalid. 
> > On 05/12/2010 09:09 AM, Paul Hunt wrote: > >> Hi Attila, >> >> The keystroke isn't working on my laptop either. >> >> And I haven't removed either Evolution or Empathy, although I have >> installed Pidgin and Thunderbird as well. >> >> Paul >> >> >> On 12/05/10 13:59, Hammer Attila wrote: >> >>> Hy Luke, >>> >>> Unfortunately my system this key combination is nothing to do. What >>> packages and applets need installed to work this? >>> I removed my system the evolution related packages, because I better >>> like with Thunderbird, and removed Empathy related packages, because >>> better like Pidgin. :-):-) >>> Possible this is the matter? >>> >>> Attila >>> >>> >>> >> >> > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iEYEARECAAYFAkvqqTwACgkQybLrVJs+Wi5PmACePEhIje5Hk6V1Uphj/09jmDPg > aIYAnjLN2Ksvb6bKg4Qnp+WTXswvLVz3 > =47ON > -----END PGP SIGNATURE----- > > From themuso at ubuntu.com Wed May 12 15:02:16 2010 From: themuso at ubuntu.com (Luke Yelavich) Date: Wed, 12 May 2010 17:02:16 +0200 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <4BEAA8E5.1000305@gmail.com> References: <20100512124331.GA32707@barbiton.yelavich.home> <4BEAA8E5.1000305@gmail.com> Message-ID: <20100512150216.GB32707@barbiton.yelavich.home> On Wed, May 12, 2010 at 03:11:01PM CEST, Jacob Schmude wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Hi Luke > Awesome, this is a huge help. Configurability would be a good thing > though, and maybe the default could be mapped to something a bit more > natural, e.g. panel menu is alt+f1, run is alt+f2, search is alt+f3 etc, > perhaps alt+f4 or something? Super+M just seems arbitrary by GNOME > convensions. Alt + F4 is close window/app, so thats no use, but I wil see what I can do about changing it for future releases. Luke From j.schmude at gmail.com Wed May 12 15:31:13 2010 From: j.schmude at gmail.com (Jacob Schmude) Date: Wed, 12 May 2010 11:31:13 -0400 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <20100512150216.GB32707@barbiton.yelavich.home> References: <20100512124331.GA32707@barbiton.yelavich.home> <4BEAA8E5.1000305@gmail.com> <20100512150216.GB32707@barbiton.yelavich.home> Message-ID: <4BEAC9C1.5010103@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Lol, true. That was an idiot moment on my part. On 05/12/2010 11:02 AM, Luke Yelavich wrote: > Alt + F4 is close window/app, so thats no use, but I wil see what I can do about changing it for future releases. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAkvqycEACgkQybLrVJs+Wi56KwCbBJ35KiW2xErHeTpZlLSkPHBp SdEAnAqU22s/wBeH81WxY/xIFI+zNrtu =Jnpo -----END PGP SIGNATURE----- From elle.uca at ubuntu.com Wed May 12 16:00:26 2010 From: elle.uca at ubuntu.com (Luca Ferretti) Date: Wed, 12 May 2010 18:00:26 +0200 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <4BEAA61D.7000004@pickup.hu> References: <20100512124331.GA32707@barbiton.yelavich.home> <4BEAA61D.7000004@pickup.hu> Message-ID: <1273680026.1753.5.camel@turnip> See https://bugs.launchpad.net/bugs/577226 It seems there is a bug preventing you to use this shortcut if you are using Compiz window manager. Basically Win+M is used by "neg" and "mag" Compiz plugin too. 
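Two stop-gap workarounds come up in this thread: run Metacity instead of Compiz for a while, or keep Compiz and move the Negative plugin off Super+M with the CompizConfig Settings Manager, as Tom describes below. A rough sketch, assuming the 10.04 package name for the settings manager:

    # option 1: swap window managers for this session (Compiz effects are lost)
    metacity --replace &
    # ...and swap back later
    compiz --replace &

    # option 2: keep Compiz, but rebind the Negative plugin
    sudo apt-get install compizconfig-settings-manager
    ccsm    # open the Negative plugin and change its toggle key, e.g. to Ctrl+Super+M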
See the bug for details, but currently I can't suggest a workaround that works for sure; maybe switch to the Metacity window manager. However, there is another useful shortcut: Win+S to open the session indicator menu.

Cheers, Luca

Il giorno mer, 12/05/2010 alle 14.59 +0200, Hammer Attila ha scritto: > Hy Luke, > > Unfortunately my system this key combination is nothing to do. What > packages and applets need installed to work this? > I removed my system the evolution related packages, because I better > like with Thunderbird, and removed Empathy related packages, because > better like Pidgin. :-):-) > Possible this is the matter? > > Attila >

From hammera at pickup.hu Thu May 13 04:37:06 2010 From: hammera at pickup.hu (Hammer Attila) Date: Thu, 13 May 2010 06:37:06 +0200 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <1273680026.1753.5.camel@turnip> References: <20100512124331.GA32707@barbiton.yelavich.home> <4BEAA61D.7000004@pickup.hu> <1273680026.1753.5.camel@turnip> Message-ID: <4BEB81F2.6040006@pickup.hu>

Hy Luca,

This shortcut (Super+s) works if the indicator applet and the indicator applet session applet are added to the top or bottom panel. But if the indicator session applet is added to any panel, it removes the log out and shut down menu items from the System menu. I think that with an accessible install using any blindness profile, the indicator applet session is not added to a panel by default.

For example, if I add this applet to my bottom panel and press the Super+s key combination, the guest user choice is available. If this applet is not enabled, is there another way to choose the guest session? Does the guest session user have a default password, or is nothing defined, so that it is enough to choose, for example, switch user, then pick the other... item in the accessible login dialog and type the guest username without a password? I have never tried the guest session before, sorry if this is a trivial question.

Attila

From thomaslloyd at yahoo.com Thu May 13 11:43:50 2010 From: thomaslloyd at yahoo.com (Tom Lloyd) Date: Thu, 13 May 2010 04:43:50 -0700 (PDT) Subject: Ubuntu-accessibility Digest, Vol 54, Issue 12 In-Reply-To: Message-ID: <342040.3757.qm@web54101.mail.re2.yahoo.com>

I posted the bug about super+m and compiz negative. The way to change it is to install the compiz config manager and then change the preferred key combination to something like ctrl+super+m for negative. The negative plug-in by default on 10.04 is broken. You also have to change the exclude value in the negative configuration and delete its contents to get it to work. Sometimes it does, other times it doesn't with the default settings, as of yesterday. Again, I have posted a bug report on this matter. The disadvantage of doing this is that the desktop background is also inverted, but at least the rest of it works.

Tom

From elle.uca at ubuntu.com Thu May 13 21:28:00 2010 From: elle.uca at ubuntu.com (Luca Ferretti) Date: Thu, 13 May 2010 23:28:00 +0200 Subject: Heads up, shortcut key to access messaging/sound indicator menus. In-Reply-To: <4BEB81F2.6040006@pickup.hu> References: <20100512124331.GA32707@barbiton.yelavich.home> <4BEAA61D.7000004@pickup.hu> <1273680026.1753.5.camel@turnip> <4BEB81F2.6040006@pickup.hu> Message-ID: <1273786080.22381.16.camel@turnip>

Il giorno gio, 13/05/2010 alle 06.37 +0200, Hammer Attila ha scritto: > Hy Luca, > > This shortcut (Super+s) works if the indicator applet and the indicator > applet session applet are added to the top or bottom panel. But if the > indicator session applet is added to any panel, it removes the log out and > shut down menu items from the System menu.

That's true: if you remove the "indicator session" applet from a panel, the System menu shows the upstream GNOME configuration, i.e. menu entries to lock the screen, log out (log out or change user) and shut down (shut down, reboot, suspend).

> I think that with an accessible install using any blindness profile, > the indicator applet session is not added to a panel by default.

Unfortunately I have no confirmation of this. I'll try it on a virtual machine; I hope to find some time...

> For example, if I add this applet to my bottom panel and press the Super+s > key combination, the guest user choice is available. If this applet is not > enabled, is there another way to choose the guest session?

It seems that Ubuntu and Canonical provide a graphical user interface for the guest session only in the "indicator session" applet. Basically, though, that menu entry should simply launch the script /usr/share/gdm/guest-session/guest-session-launch. You could try adding a launcher for this script.

> Does the guest session user have a default password, or is nothing defined, > so that it is enough to choose, for example, switch user, then pick the > other... item in the accessible login dialog and type the guest username > without a password?

The guest session is not related to any existing user account, but in order to use a guest session you need to have a "real" user account logged in. We could say that when running a guest session the operating system adds a temporary (and restricted) user with a temporary home folder. When you log out of the guest session, both the temporary user and the folder are removed.

Cheers, Luca

From labrad0r at edpnet.be Sat May 15 09:43:16 2010 From: labrad0r at edpnet.be (Labrador) Date: Sat, 15 May 2010 11:43:16 +0200 Subject: still no any effort noted to make braille-support MUCH easier in Ubuntu Message-ID: <20100515094316.GD3672@jupiter>

Hello,

I'm still (and in fact more and more) disappointed and irritated to see not a milligram of effort put into making braille support in Ubuntu match reality: when someone wants braille support, why does it require, after installation, going and setting run_brltty= to yes in /etc/default/brltty? What kind of absurdity is this? Either (a) the person is blind and wants braille support whenever a braille display is detected, or (b) he or she isn't blind and the problem doesn't exist. I'm really tired of seeing things stuck at the level they were at a few years ago: why not resolve them definitively, with a good long-term method?

BTW, launching Orca for the first time is also a problem: when people have to choose the right language, it talks on endlessly instead of simply letting you choose with the up/down cursor keys. Is it so unrealistic to implement such a simple method? And why not say what the default is at that moment? Isn't it OK to just press OK if you agree with the default setting? Nothing told you that.

Note: of course I may be wrong on several points; my observations come from an LTS system upgraded to Lucid, so please don't hesitate to correct anything that doesn't match reality. There were also lots of problems going from LTS to LTS, but that is OT here. I would appreciate any explanation of the current state of the implementation, if any work has been done.

Labrador

From esj at harvee.org Sat May 15 21:21:45 2010 From: esj at harvee.org (Eric S.
Johansson) Date: Sat, 15 May 2010 17:21:45 -0400 Subject: ideological speed bumps Message-ID: <4BEF1069.8020901@harvee.org> I've had this conversation with a couple of OSS developers and the answers always leave me very uncomfortable. The problem is how does one live by OSS principals when essential tools are vehemently closed and the barriers to replacements are decade scale and no one is working on them? The problem I refer to is the use of speech recognition as a tool for dealing with upper extremity disabilities. There is only one vendor for continuous speech large vocabulary recognition. There is maybe one or two universities in the world conducting research into speech recognition. All the open-source toolkits are hampered by design criteria (fixed grammar small vocabulary) and there is no corpus sufficient to build acoustical models. Recognition engines are multimillion dollar efforts to build and corpus collection is even more expensive. Speech recognition also requires very specialized knowledge and the people skilled in the art are owned by industry. Therefore, if a rational person would assume that OSS speech recognition is not coming anytime in the near future, maybe not even in my lifetime. A rational person would also assume that part of the way to tackle the problem is to nibble at the edges from the application side to the recognizer side, gradually increasing the availability of OSS components so that the disabled person can minimize their dependence on proprietary or closed source applications. A lot of us disabled programmers have done a good job the nibbling around the edges but there's a lot of cases where we don't have the knowledge and need the help of project related people for example, Emacs integration mode with NaturallySpeaking (VR-mode) doesn't work right. It is incredibly fragile and breaks apparently at random. When I asked for help from various Emacs wizards to help keep it up-to-date and maybe even integrated into Emacs source in the hopes that it would be less likely to break, I was told there was no chance of help because it was linked to a proprietary package. That doesn't leave us in a very good place because if that attitude persists from the ideologically pure, disable users have a shrinking number of open-source applications they can use because, the users require the use of a proprietary package. How does one deal with the real world issue that disabled users will need proprietary packages integrated with open source applications to keep them from being forced into using 100% proprietary applications with no options? From tcross at rapttech.com.au Sun May 16 00:59:55 2010 From: tcross at rapttech.com.au (Tim Cross) Date: Sun, 16 May 2010 10:59:55 +1000 Subject: ideological speed bumps In-Reply-To: <4BEF1069.8020901@harvee.org> References: <4BEF1069.8020901@harvee.org> Message-ID: <19439.17291.713808.930013@rapttech.com.au> Eric S. Johansson writes: > I've had this conversation with a couple of OSS developers and the answers > always leave me very uncomfortable. > > The problem is how does one live by OSS principals when essential tools are > vehemently closed and the barriers to replacements are decade scale and no one > is working on them? > > The problem I refer to is the use of speech recognition as a tool for dealing > with upper extremity disabilities. There is only one vendor for continuous > speech large vocabulary recognition. There is maybe one or two universities in > the world conducting research into speech recognition. 
All the open-source > toolkits are hampered by design criteria (fixed grammar small vocabulary) and > there is no corpus sufficient to build acoustical models. Recognition engines > are multimillion dollar efforts to build and corpus collection is even more > expensive. Speech recognition also requires very specialized knowledge and the > people skilled in the art are owned by industry. Therefore, if a rational > person would assume that OSS speech recognition is not coming anytime in the > near future, maybe not even in my lifetime. > > A rational person would also assume that part of the way to tackle the problem > is to nibble at the edges from the application side to the recognizer side, > gradually increasing the availability of OSS components so that the disabled > person can minimize their dependence on proprietary or closed source applications. > > A lot of us disabled programmers have done a good job the nibbling around the > edges but there's a lot of cases where we don't have the knowledge and need the > help of project related people for example, Emacs integration mode with > NaturallySpeaking (VR-mode) doesn't work right. It is incredibly fragile and > breaks apparently at random. When I asked for help from various Emacs wizards to > help keep it up-to-date and maybe even integrated into Emacs source in the hopes > that it would be less likely to break, I was told there was no chance of help > because it was linked to a proprietary package. > > That doesn't leave us in a very good place because if that attitude persists > from the ideologically pure, disable users have a shrinking number of > open-source applications they can use because, the users require the use of a > proprietary package. > > How does one deal with the real world issue that disabled users will need > proprietary packages integrated with open source applications to keep them from > being forced into using 100% proprietary applications with no options? > Hi Eric, the points you raise and your observations are all true, but I don't think there is a good answer. What it really boils down to is that OSS is largely about solutions that have been developed by users scratching their own itch. Unfortunately, voice recognition is an extremely complex and difficult to scratch itch and the number of developers with the necessary skills that want to scratch it is very small. I don't think the problem is impossible to fix, but it is likely that it will take some time. In the mid 90's, after losing my sight, there were no decent OSS text-to-speech systems and hardly anything available for blind users to use Unix or Linux. Essentially, we had to use windows/dos and a terminal. Now, 15 years later, the situation is very different. There are some good quality TTS engines, some quite sophisticated TTS baed interfaces for both terminal and GUI environments and both good quality free and relatively cheap TTS engines available. Back int he mid 90's, many thought we would never have good quality OSS TTS engines. It has been a umber of years since I've looked at the status of voice recognition in the OSS world. Working on these projects would seem to be a good proactive approach. 
In addition to this, two other approaches that might be worth pursuing, especially by anyone who is interested in this area and doesn't feel they have the technical skill to actually work in the development area, would be to lobby commercial vendors to either make some of their code open source or to provide low cost licenses and to lobby for project funding to support OSS projects that are working on VR. A significant amount of the work done to improve TTS interfaces has been made possible because of effective loggying and gaining of support from commercial and government bodies. As an example of what can be achieved here. Through lobbying efforts, it is now possible to obtain an end-user license for ViaVoice TTS at a very reasonable price. Previously, you had to purchase the whole SDK to get the runtime and it was very expensive. This provided users with a good quality TTS. While it is not OSS, it has provided a 'bridge' while decent OSSS TTS engines have been developed. I used this solution for a number of years. Now, since swithcing to 64bit, I use a good quality OSS TTS engine. An option that was not available, or more precisely, was not yet mature enough, only a few years ago. I'm possibly a little more optimistic regarding the future of OSS VR. Voice recognition is rapidly moving from living in a very specialised domain to being much more general purpose. This is largely due to the growth in small form factor devices, such as mobile phones. I've been told that the Google Nexus 1 phone has quite good VR support. This is an indication that decent VR applications that run in an OSS environment are becoming more prevalent. While its true that most of these apps have been developed commercially and are not OSS, I suspect they will 'leak' into the OSS world over time. The gorwth in commercial VR solutions also adds to our knowledge and understanding of VR. While much of this knowledge may still be proprietary in nature, this sort of knowledge tends to find it way out into the public domain over time. It is also likely as demand increases for VR solutions that more University research will occur as it will be seen as something with good commercial potential i.e. good funding opportunities. Unfortunately, it is also true that the accessibility benefits of technology such as VR will all too often be a secondary issue to commercial interests. There will be a lag time between this technology existing and it being accessible to those who would really benefit from it. This is probably the downside of the free market economy where developments are driven by profits. However, it is also the percieved profits that ensure commercial resources are invested into understanding the problem and developing workable solutions. We are still a long way from the sort of society that would put the accessibility needs before individual and corporate greed. In fact, we are still a long way from getting mainstream recognition of accessibility issues to the level they should be, which is why I think lobbying and raising issues outside the accessibility community is so important. Tim -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. From esj at harvee.org Sun May 16 03:06:13 2010 From: esj at harvee.org (Eric S. 
Johansson) Date: Sat, 15 May 2010 23:06:13 -0400 Subject: ideological speed bumps In-Reply-To: <19439.17291.713808.930013@rapttech.com.au> References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> Message-ID: <4BEF6125.9080501@harvee.org> On 5/15/2010 8:59 PM, Tim Cross wrote: > Hi Eric, > > the points you raise and your observations are all true, but I don't think > there is a good answer. What it really boils down to is that OSS is largely > about solutions that have been developed by users scratching their own itch. > Unfortunately, voice recognition is an extremely complex and difficult to > scratch itch and the number of developers with the necessary skills that want > to scratch it is very small. thanks for a great series responses to a complex question. As for scratching your own it, there's one big difference. I can't scratch my own itch because my hands don't work right. It's roughly the same problem as telling a blind person that they can write their own code in an IDE that has lots of wonderful graphical images that tell you what you need to do... whoops > It has been a umber of years since I've looked at the status of voice > recognition in the OSS world. Working on these projects would seem to be a > good proactive approach. In addition to this, two other approaches that might > be worth pursuing, especially by anyone who is interested in this area and > doesn't feel they have the technical skill to actually work in the development > area, would be to lobby commercial vendors to either make some of their code > open source or to provide low cost licenses and to lobby for project funding to > support OSS projects that are working on VR. A significant amount of the work > done to improve TTS interfaces has been made possible because of effective > loggying and gaining of support from commercial and government bodies. The vast majority of the speech recognition efforts today are for IVR, interactive voice response systems such as those you would ask. "Weather in Boston" and get a text-to-speech response like "the weather in Boston is hostile to out-of-towners and not very kind to locals either" The difference between speech recognition and text-to-speech today is that usable text-to-speech is easy to create with a team of grad students. Speech recognition takes generations of grad students. Witness how little progress has been made on the Sphinx toolkit's since its creation. We have three different engines all with different characteristics but all on the same problem space. We don't have proper acoustic modeling. We don't have proper language modeling etc. etc. I know I'm being a broken record but, these are huge obstacles to general-purpose use. I would love to see us license for little or no money the nuance NaturallySpeaking toolkit for purposes of developing accessibility interfaces. I can't even get them to return a phone call what I'm calling about a commercial application. If it's for accessibility, they don't even pick up the phone. This tells me it may be time for some guerrilla action. If someone has a spare $2000, I have a scanner and I'm sure we can find some good friends in Europe and Japan. not that I'm saying or even suggesting we should violate nuanc's copyright of course because that would be as wrong as denying disabled people information they need to make themselves more independent and increasing their prospects for working. > I'm possibly a little more optimistic regarding the future of OSS VR. 
Voice > recognition is rapidly moving from living in a very specialised domain to > being much more general purpose. This is largely due to the growth in small form > factor devices, such as mobile phones. I've been told that the Google Nexus 1 > phone has quite good VR support. This is an indication that decent VR > applications that run in an OSS environment are becoming more prevalent. here's a dirty little secret. They didn't do the speech recognition in the phone. Not enough horsepower or memory space for vocabularies. They ship the audio to a server which then does speech recognition not real-time and shoves the text back to the cell phone. this may unfortunately be our future for disability use. We'll no longer have control over speech recognition engines but instead rent recognition time off the cloud. I really hate the cloud. I understand why pilots hate them as well because if you fly to the big fluffy thing, the fluffy soft thing can turn really really hard as you run into a mountain hidden within the cloud. boink! I'm wait for the equivalent to happen in the software cloud world. > It is > also likely as demand increases for VR solutions that more University research > will occur as it will be seen as something with good commercial potential i.e. > good funding opportunities. Speech recognition research is aimed at IVR. Funding has plateaued or even dropped because recognition accuracy is not improving. The techniques have run out of steam. It will take a radically new approach to put any fire under speech recognition again. Sometimes I think the only way nuance is improving NaturallySpeaking is by fixing bugs. I doubt there's no new technology going on inside. > Unfortunately, it is also true that the accessibility benefits of technology > such as VR will all too often be a secondary issue to commercial interests. > There will be a lag time between this technology existing and it being > accessible to those who would really benefit from it. This is probably the > downside of the free market economy where developments are driven by profits. > However, it is also the percieved profits that ensure commercial resources are > invested into understanding the problem and developing workable solutions. > We are still a long way from the sort of society that would put > the accessibility needs before individual and corporate greed. In fact, we are > still a long way from getting mainstream recognition of accessibility issues > to the level they should be, which is why I think lobbying and raising issues > outside the accessibility community is so important. yes. It would be interesting to do the calculation but I think there's a good chance that sticking disabled people on disability and low-income housing may be cheaper to society than all the efforts put into making software and life space disabilities accessible. This is also why I advocate for putting disability hooks in every machine (i.e. low-cost, at little or no administration) and every disabled person carry their own machine with a disability user interface (i.e. text-to-speech or speech recognition) so that the cost of enabling a machine for accessibility is lower than it is now. it's all economics. I think if we can come up with a way that tweaks or leverages economics in our favor, we can make a big difference. If it's strictly "do it because this is right", it cannot fail. Another example of this in a different field is light pollution. 
Light pollution is a good idea to control because it reduces energy, makes nighttime driving safer, makes it possible for elderly to drive out there is insufficient economic incentive to fix streetlights and high glare security lighting to make any progress. Therefore any changes based on moral arguments are hard fought hard one battles and usually overturned when the people driving the argument vanished from th political scene because economic/business people push back to the status quo (i.e. short-term goal driven) We will suffer the same fate with our arguments if we can't provide a good economic argument in addition to our technical and moral/ethical arguments. From tcross at rapttech.com.au Sun May 16 07:27:43 2010 From: tcross at rapttech.com.au (Tim Cross) Date: Sun, 16 May 2010 17:27:43 +1000 Subject: [OFF-TOPIC] Re: ideological speed bumps In-Reply-To: <4BEF6125.9080501@harvee.org> References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> <4BEF6125.9080501@harvee.org> Message-ID: <19439.40559.874107.506443@rapttech.com.au> Hi Eric, I've added comments in-line below. However, this is probably beginning to get a little off topic for the list. Maybe take further discussion off list if you want to respond further. Alternatively, maybe you have some suggestions or ubuntu specific points that could be brought in to get things more on-topic? I'm not up-to-date enough with current VR issues to be able to provide any really constructive advice. However, I also understand how important it can be to have general discussion and possibly find the ideas or energy to carry things forward further. I'm happy to discuss further off list if you wish. regards, Tim Eric S. Johansson writes: > On 5/15/2010 8:59 PM, Tim Cross wrote: > > > Hi Eric, > > > > the points you raise and your observations are all true, but I don't think > > there is a good answer. What it really boils down to is that OSS is largely > > about solutions that have been developed by users scratching their own itch. > > Unfortunately, voice recognition is an extremely complex and difficult to > > scratch itch and the number of developers with the necessary skills that want > > to scratch it is very small. > > thanks for a great series responses to a complex question. As for scratching > your own it, there's one big difference. I can't scratch my own itch because my > hands don't work right. It's roughly the same problem as telling a blind person > that they can write their own code in an IDE that has lots of wonderful > graphical images that tell you what you need to do... whoops > Yes, I understand the difficulty and frustration. I wasn't meaning to imply that you or any other individual should fix the problem directly, though I suspect there are some who would benefit from VR who are in a position to assist in code writing for OSS projects. The other point I wanted to make is that coding is not the only way to help. To a large extent, the lobbying aspect is also important. A lot of the battle is getting the recognition of the importance of OSS and low cost solutions in the adaptive technology space. This is an area most can asist with and in fact, you have demonstrated in starting this thread. What is needed is to move this sort of discussion more into the mainstream development area and working towards adaptive tech being considered as a first class consideration and not as an afterthought, as is too often the situation. 
> > It has been a umber of years since I've looked at the status of voice > > recognition in the OSS world. Working on these projects would seem to be a > > good proactive approach. In addition to this, two other approaches that might > > be worth pursuing, especially by anyone who is interested in this area and > > doesn't feel they have the technical skill to actually work in the development > > area, would be to lobby commercial vendors to either make some of their code > > open source or to provide low cost licenses and to lobby for project funding to > > support OSS projects that are working on VR. A significant amount of the work > > done to improve TTS interfaces has been made possible because of effective > > loggying and gaining of support from commercial and government bodies. > > The vast majority of the speech recognition efforts today are for IVR, > interactive voice response systems such as those you would ask. "Weather in > Boston" and get a text-to-speech response like "the weather in Boston is hostile > to out-of-towners and not very kind to locals either" > > The difference between speech recognition and text-to-speech today is that > usable text-to-speech is easy to create with a team of grad students. Speech > recognition takes generations of grad students. Witness how little progress has > been made on the Sphinx toolkit's since its creation. We have three different > engines all with different characteristics but all on the same problem space. We > don't have proper acoustic modeling. We don't have proper language modeling etc. > etc. I know I'm being a broken record but, these are huge obstacles to > general-purpose use. > I only partially agree on both your points. Yes, much of the VR work to date has been for IVR systems, but as the technology improves, I believe this is changing. For example, the VR support I mentioned on the Nexus phone is for dictation of SMS text messages. The new phone system we recently installed at work has VR capabilities that translates voice messages to text and sends it via SMS an email. I think this type of application of VR will see rapid development over the next few years and represents the next sttage and a higher level of sophistication past the IVR model with its limited recognition abilities. Yes, this is a difficult problem. However, it is interesting to note that your arguments are very similar to the ones that were common in the mid 90's. At that time, software TTS was thought to be too comutationally intensive to be practicle for real-time TTS. Creating voices was considered to be an art that only a very few people could do and many argued it would be many years before we had a decent OSS TTS engine available. I don't think we will see anything in the OSS world that is of production quality and able to meet the needs of adaptive tech users next month or even next year. It is a hard problem and will take considerable resources to address. However, it may not take as long or as many resources as you fear. It is very difficult to predict the rate of development in these areas. For all we know, there may be ground braking hardware or algorithms just around the corner that will completely change the landscape. I feel quite positive about developments in this area because I can see generalised VR becoming more common. The growth in demand/popularity for smart phones and other small form factor devices is being hampered because keyboards, both software and hardware, are still the main interface. 
However, hardware keyboards are difficult to fit in small devices and software ones are slow and somewhat inconvenient. Generalised VR will be the commercial solution in this area. Initially, much of it will be limited IVR type solutions, but as shown with the Nexus, more general support to dictate messages etc will also increase in popularity. If this technology becomes part of things like the Android OS, then this technology will slowly find its way into the OSS world. > I would love to see us license for little or no money the nuance > NaturallySpeaking toolkit for purposes of developing accessibility interfaces. I > can't even get them to return a phone call what I'm calling about a commercial > application. If it's for accessibility, they don't even pick up the phone. This > tells me it may be time for some guerrilla action. If someone has a spare $2000, > I have a scanner and I'm sure we can find some good friends in Europe and Japan. > not that I'm saying or even suggesting we should violate nuanc's copyright of > course because that would be as wrong as denying disabled people information > they need to make themselves more independent and increasing their prospects for > working. > There were a number of attempts to get IBM to make their ViaVoice Outloud TTS engine available as open source and to make the runtime free or at a low license cost before someone was actualy successful in finding a model that was acceptable to the vendor and provided a reasonable outcome for users. I suspect it depends on individual tenacity, personality and possibly some degree of luck. I think having a good understanding of business and things that are likely to motivate any business into accepting or supporting any proposal is also essential. Most businesses are not well motivated by altruistic concerns. Many of them still don't undersatnd OSS - some have even believed the FUD put out by companies like Microsoft. Some even fear losing lucrative contracts with anti-OSS vendors if theya re seen to support such initiatives. Trying to convince a large vendor to provide their product at a lower prices for people with a disability is unlikely to gain much traction unless it boils down to good business sense. The difficult part is in identifying a strong convincing buinsess case that the vendor will see as a positive and which has benefits for those with a disability that need such solutions. > > I'm possibly a little more optimistic regarding the future of OSS VR. Voice > > recognition is rapidly moving from living in a very specialised domain to > > being much more general purpose. This is largely due to the growth in small form > > factor devices, such as mobile phones. I've been told that the Google Nexus 1 > > phone has quite good VR support. This is an indication that decent VR > > applications that run in an OSS environment are becoming more prevalent. > > here's a dirty little secret. They didn't do the speech recognition in the > phone. Not enough horsepower or memory space for vocabularies. They ship the > audio to a server which then does speech recognition not real-time and shoves > the text back to the cell phone. I wasn't aware of that. So, if I've got this right, you speak the message you want to send, this is recorded and sent to a removte server and then a text version is returned that is sent as the SMS message? 
It must be fairly close to real-time as the person I was talking to said that as they speak the message it is rendered as text on the screen, which enables them to correct any errors before sending. > > this may unfortunately be our future for disability use. We'll no longer have > control over speech recognition engines but instead rent recognition time off > the cloud. I suspect this could well be the model we are moving to generally and not just with respect to adaptive technology. From this perspective, provided the costs are reasonable, we will not be any worse off than other users who are also just as dependent for all their services. Of course, this does not address the issue of anyone being or becoming dependent on technology that we don't have control over or access to. This is largely the underlying concern that RMS had when forming the FSF. While you could argue that those with a disability are possibly at a greater disadvantage because the technology is percieved as being more important or critical to them. However, I think we need to be careful of such arguments. Yes, technology enables me as someone with a disability to do things, many of them independently, that were not possible before we had this technology. However, to argue that my needs are greater or that my pain would be greater if I lost access to this technology than it would be for someone without a disability who has lost control or access to some technology they rely on is dangerous. It runs the risk of creating an 'us and them' paradigm and is based on subjective value statements that are impossible to quantify. It distracts from the real issue - ensuring all have access and the ability to control or own the technology that becomes critical in how we live our lives. > > I really hate the cloud. I understand why pilots hate them as well because if > you fly to the big fluffy thing, the fluffy soft thing can turn really really > hard as you run into a mountain hidden within the cloud. boink! > > I'm wait for the equivalent to happen in the software cloud world. > The 'cloud' is just marketing hype. Its like Web 2.0 - it means nothing and everything all at the same time. Technically, there is nothing new here. It is just a swing back to the old 'thin client' and centrally provided service model that I've seen come and go already during my short career. Yes, its more sophisticated in some ways and has some improved architecture - thank god we have learnt something in the last 40 years! Some of the cloud services being provided are good, some are bad and some are dangerous. There have already been major stuff ups - ask a sidekick user wha they think! However, I don't feel anyone should fee any more threatened by the cloud than they do regarding the many proprietary systems they have been putting data into for the last 20 years. As a friend of mine says - "Its all just hem lines, they will go up and they will go down". > > It is > > also likely as demand increases for VR solutions that more University research > > will occur as it will be seen as something with good commercial potential i.e. > > good funding opportunities. > > Speech recognition research is aimed at IVR. Funding has plateaued or even > dropped because recognition accuracy is not improving. The techniques have run > out of steam. It will take a radically new approach to put any fire under speech > recognition again. Sometimes I think the only way nuance is improving > NaturallySpeaking is by fixing bugs. 
I doubt there's no new technology going on > inside. > Possibly, I am not up on current research in this area and can only speculate. I once had similar concerns regarding TTS. Nearly all the research was towards the fdevelopment of more natural sounding voices, usually using the concatenative approach. While this style of TTS does appear to generate more human sounding voices, it also suffers from the limitation that pronounciation quality falls dramatically as speech rates increase. Sounds wonderful when the rate is a normal speaking rate, but you cannot understand it once you increase the rate. As a blind user, I'm use to listening at high speech rates. If I had to lisen at a normal speaking rate to all the data I need to process each day, I would never get things done. However, the newer TTS engines are less useful to me than older systems that use mathematically derived approximations of speech, which sound less natural, but at least can be understood at high speech rates. > > Unfortunately, it is also true that the accessibility benefits of technology > > such as VR will all too often be a secondary issue to commercial interests. > > There will be a lag time between this technology existing and it being > > accessible to those who would really benefit from it. This is probably the > > downside of the free market economy where developments are driven by profits. > > However, it is also the percieved profits that ensure commercial resources are > > invested into understanding the problem and developing workable solutions. > > We are still a long way from the sort of society that would put > > the accessibility needs before individual and corporate greed. In fact, we are > > still a long way from getting mainstream recognition of accessibility issues > > to the level they should be, which is why I think lobbying and raising issues > > outside the accessibility community is so important. > > yes. It would be interesting to do the calculation but I think there's a good > chance that sticking disabled people on disability and low-income housing may be > cheaper to society than all the efforts put into making software and life space > disabilities accessible. This is also why I advocate for putting disability > hooks in every machine (i.e. low-cost, at little or no administration) and every > disabled person carry their own machine with a disability user interface (i.e. > text-to-speech or speech recognition) so that the cost of enabling a machine for > accessibility is lower than it is now. > > it's all economics. I think if we can come up with a way that tweaks or > leverages economics in our favor, we can make a big difference. If it's strictly > "do it because this is right", it cannot fail. Another example of this in a > different field is light pollution. Light pollution is a good idea to control > because it reduces energy, makes nighttime driving safer, makes it possible for > elderly to drive out there is insufficient economic incentive to fix > streetlights and high glare security lighting to make any progress. Therefore > any changes based on moral arguments are hard fought hard one battles and > usually overturned when the people driving the argument vanished from th > political scene because economic/business people push back to the status quo > (i.e. short-term goal driven) > > We will suffer the same fate with our arguments if we can't provide a good > economic argument in addition to our technical and moral/ethical arguments. 
Yep, the moral agument tends to fail because corporate capitalism is largely amoral. We need to demonstrate strong business cases to justify the outcomes we want. If a decision is percieved as a good business choice, it is far more likely to be adopted. However, sometimes, we really need to be quite creative and use a lot of imagination to formulate such a business case. -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. From pmikeal at comcast.net Sun May 16 13:51:28 2010 From: pmikeal at comcast.net (Pia) Date: Sun, 16 May 2010 09:51:28 -0400 (EDT) Subject: [OFF-TOPIC] Re: ideological speed bumps In-Reply-To: <19439.40559.874107.506443@rapttech.com.au> References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> <4BEF6125.9080501@harvee.org> <19439.40559.874107.506443@rapttech.com.au> Message-ID: I just wanted to ask that you guys not take this topic off list. It was one of the most seriously useful conversations that has been on here for a long time, because it looks at the future of a barely functional state of things which is really what we all should be concerned about. So, I have been reading the thread closely. I just have not added much yet, because I would just be repeating much of what has already be said at this point. Kind Regards, Pia From waywardgeek at gmail.com Sun May 16 14:19:17 2010 From: waywardgeek at gmail.com (Bill Cox) Date: Sun, 16 May 2010 10:19:17 -0400 Subject: [OFF-TOPIC] Re: ideological speed bumps In-Reply-To: References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> <4BEF6125.9080501@harvee.org> <19439.40559.874107.506443@rapttech.com.au> Message-ID: I'm also following this thread. I had to program by voice for three years in the '90s, first with Dragon Dictate, and then with Naturally Speaking. I eventually wrote 1,600 voice macros mostly to control emacs to help me do my job. When I started with Dragon Dictate, I was excited about the rapid progress for the disabled. Dragon Systems was doing wonderful things for us. Then, Dragon Systems shipped a tool for voice-dictation aimed at regular users. Progress stopped, almost dead right then, and never picked up again. I want to add voice recognition solutions to Vinux, which is built on Ubuntu Lucid. However, Naturally Speaking remains the best voice recognition engine, and there's little reason to believe the recent owners, Nuance, will port it. Nuance also bought Eloquence, the best TTS engine for the blind, IMO, since it can be well understood at very high speeds. Eloquence use to run on Linux, but there is no evidence that Nuance will release any new version for our platform. Modern open-source research and advancement is somewhat promising. Espeak seems to get better each year, though it's far behind Eloquence for high speed. Then there's svox pico around the corner from Google, which may help bring open-source natural voices along. On the recognition side, there's some advancement, but I have yet to see any good FOSS demo on Linux. One dumb thought I had this morning: Could we just call the original developers and ask for their help as consultants on FOSS ASR and TTS? They must be long gone from their companies, and I imagine that their non-competes have expired. What really counts is the know-how. 
If they could consult on algorithm specification and development, without giving up any trade-secrets, they wouldn't have to write one line of code. I'd be we'd find FOSS devs willing to code it up. Bill On Sun, May 16, 2010 at 9:51 AM, Pia wrote: > I just wanted to ask that you guys not take this topic off list.  It was > one of the most seriously useful conversations that has been on here for a > long time, because it looks at the future of a barely functional state of > things which is really what we all should be concerned about.  So, I have > been reading the thread closely.  I just have not added much yet, because > I would just be repeating much of what has already be said at this point. > > Kind Regards, > > Pia > > > -- > Ubuntu-accessibility mailing list > Ubuntu-accessibility at lists.ubuntu.com > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility > From hgs at dmu.ac.uk Sun May 16 14:40:04 2010 From: hgs at dmu.ac.uk (Hugh Sasse) Date: Sun, 16 May 2010 15:40:04 +0100 (BST) Subject: [OFF-TOPIC] Re: ideological speed bumps In-Reply-To: <19439.40559.874107.506443@rapttech.com.au> References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> <4BEF6125.9080501@harvee.org> <19439.40559.874107.506443@rapttech.com.au> Message-ID: On Sun, 16 May 2010, Tim Cross wrote: > Hi Eric, > > I've added comments in-line below. However, this is probably beginning to get > a little off topic for the list. Maybe take further discussion off list if you I beg to differ. This is an accessibility list, and these questions are surely fundamental to accessibility issues. This is about finding out how to deal with issues that affect all access technologies in an open source ecosystem. Most other questions get shunted out to the orca list, or the gnome list, etc, because they end up being too specific for this list. > want to respond further. Alternatively, maybe you have some suggestions or > ubuntu specific points that could be brought in to get things more on-topic? Alternatively, would you like to suggest a list that is better for this kind of question, given that it relates to open source accessibility issues? > I'm not up-to-date enough with current VR issues to be able to provide any > really constructive advice. However, I also understand how important it can be > to have general discussion and possibly find the ideas or energy to carry > things forward further. I'm happy to discuss further off list if you wish. > > regards, > > Tim > Hugh From hgs at dmu.ac.uk Sun May 16 15:07:39 2010 From: hgs at dmu.ac.uk (Hugh Sasse) Date: Sun, 16 May 2010 16:07:39 +0100 (BST) Subject: ideological speed bumps In-Reply-To: <4BEF6125.9080501@harvee.org> References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> <4BEF6125.9080501@harvee.org> Message-ID: One would have hoped that 19 years after the Americans with Disabilities Act, and 15 years after similar UK legislation was enacted, things would have improved. I wonder if the Electronic Frontier Foundation could use such legislation to get more cross-platform support from the large commercial interests. As for the "This won't happen, because that application is commercial": producing an interface standard for Voice Recognition would allow the open source community to program to an interface without having to compromise with whatever is on the other side. VoiceXML is not it, because that is only for the voice response, "weather in Boston", systems AFAIK. 
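To make the "program to an interface" idea concrete, here is a deliberately tiny sketch of what an engine-neutral recognition interface might look like. Everything in it is hypothetical (it is not an existing standard or the API of any real engine), but it shows the shape of the thing applications would code against while engines compete behind it:

    # Hypothetical sketch only; names and methods are made up for illustration.
    from abc import ABCMeta, abstractmethod

    class SpeechRecognizer(object):
        """What an application would see, whatever engine sits behind it."""
        __metaclass__ = ABCMeta  # Python 2 style, matching the era of this thread

        @abstractmethod
        def load_grammar(self, name, phrases):
            """Register a named command grammar (a list of phrases)."""

        @abstractmethod
        def start_dictation(self, on_text):
            """Begin free dictation; call on_text(result) as recognised text arrives."""

        @abstractmethod
        def stop(self):
            """Stop listening and release the audio device."""

A proprietary engine, Sphinx, or a network service could each be wrapped behind the same three calls, which is what would let the open source side and the closed side evolve independently.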
The incentive for vendors is that compliance to the interface is another feature to sell. A common interface would probably start as a lowest common denominator, so this would not solve the "only one good (unfortunately commercial) system" problem immediately. But a standard could drive innovation in some cases. This hasn't worked perfectly for HTML and browsers, but it has worked to some extent, I think. Hugh From lists at janc.be Sun May 16 21:24:03 2010 From: lists at janc.be (Jan Claeys) Date: Sun, 16 May 2010 23:24:03 +0200 Subject: [Fwd: Taking A Break from Ubuntu] Message-ID: <1274045043.3785.31446.camel@saeko.local> Hello Michael, I'm forwarding your mail to the Ubuntu Accessibility mailing list, as the Accessibility team got revived recently, and although they have heaps of work already, the team can probably help you raise this issue (or at least document it in some central space). ------- Doorgestuurd bericht ------- Van: Michael Haney Aan: sounder at lists.ubuntu.com Onderwerp: Taking A Break from Ubuntu Datum: Sun, 16 May 2010 08:31:39 -0400 I've decided to take a nice long break from Ubuntu. This wasn't an easy decision for me because I really like Ubuntu. What I don't like is having to fight with the desktop screen resolution every time I install a new version. I refuse to fight that battle again. Every time it happens its NEVER really fixed. Oh, I get the resolution I want (1024x768) but only via a down-and-dirty workaround that ends up breaking something else. I depend on the Desktop Zoom and color inversion capabilities of Compiz Fusion because of my visual disability. The problem is NOT getting the Nvidia drivers installed. Its getting a desktop resolution of higher than 640x480. My monitor is non-standard. Its a Sun Microsystem CRT with dual inputs, one is a huge plug for a Sun workstation, and the other is for standard VGA. Prior to Ubuntu 7.10, where you selected your screen resolution there was a tab where you could scroll through a list of hardware manufacturers and select your specific model Monitor. If it wasn't listed you could at least select one of the Default options. I usually selected Generic 1024x768 Monitor from the list, and I was good to go. This feature was removed. I issued bug reports about it, made complaints, and not a GODDAMN thing has been done to address the problem. This is such a simple problem. Why can't Canonical just make a separate GUI for selecting your monitor's make and model? WHY THE FUCK HASN'T THIS BEEN FIXED YET? I'm not the only one with this problem. Don't they pay attention to the bug reports? Didn't someone look at it and think "hey, that feature we removed that let you change what kind of monitor you from the screen resolution GUI is causing big problems for a lot of our users, lets fix it!" But, no, it just sat there for more than a year with barely any activity and no announcement at all that anyone was going to try to fix it or propose that it should be fixed. This is a crippling problem keeping a lot of people from using Ubuntu and its been ignored. WTF? The Nvidia X Server Configurator doesn't fix this problem. You cannot select your monitor make and model or change anything about your monitor in any way from that GUI. I tried using my down-and-dirty workaround by editing the xorg.conf file using my monitor settings from a version of Ubuntu that does this correctly. I searched Google for solutions too. 
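For reference, the manual override being described usually amounts to giving X.org explicit monitor ranges and a preferred mode in /etc/X11/xorg.conf, roughly as below. This is only a sketch: the identifiers are arbitrary, the Device name must match whatever your existing Device section is called, and the sync/refresh ranges are placeholders that must be replaced with the figures from the monitor's own manual, since wrong values can drive a CRT out of spec.

    Section "Monitor"
        Identifier  "SunCRT"
        HorizSync   30.0 - 70.0    # placeholder, use the monitor's documented range
        VertRefresh 50.0 - 75.0    # placeholder
    EndSection

    Section "Screen"
        Identifier   "Screen0"
        Device       "Configured Video Device"   # must match the existing Device section
        Monitor      "SunCRT"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
            Modes "1024x768" "800x600" "640x480"
        EndSubSection
    EndSection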
I've tried randr, I've tried tried running the setup wizard for X.org from the command line after finally getting X.org to shut down. The wizard didn't even give me the option of choosing my monitor, just the keyboard. In many instances X.org wouldn't even start afterward. So, for now and until this is fixed I'm done. Maybe one of you has more influence in the community or knows the right strings to pull to get this issue looked into and corrected. For now, I refuse to go through any more frustration and pain to fix something that is so basic. I can't just run out and get a better monitor or video card, not on my budget. One day maybe Canonical will get a clue and fix this problem. Until then I have to so goodbye to Ubuntu. I've found that many other distros DO THIS CORRECTLY, and Mandriva actually installs Compiz Fusion with the Nvidia drivers installed at 1024x768 by default. So, I have Mandria 2010 running on my machine right now. If anyone has a "known to actually work" solution to this specific monitor problem I'll reinstall 10.04 and try it. -- Michael "TheZorch" Haney "The greatest tragedy in mankind's entire history may be the hijacking of morality by religion." ~ Arthur C. Clarke "The suppression of uncomfortable ideas may be common in religion and politics, but it is not the path to knowledge, and there is no place for it in the endeavor of science. " ~ Carl Sagan Visit My Site: http://sites.google.com/site/thezorch/home-1 To Contact Me: http://sites.google.com/site/thezorch/home-1/zorch-central---contacts Free Your PC from the Bondage of Windows http://www.ubuntu.com -- sounder mailing list sounder at lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/sounder -- Jan Claeys From tcross at rapttech.com.au Sun May 16 23:18:39 2010 From: tcross at rapttech.com.au (Tim Cross) Date: Mon, 17 May 2010 09:18:39 +1000 Subject: [OFF-TOPIC] Re: ideological speed bumps In-Reply-To: References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> <4BEF6125.9080501@harvee.org> <19439.40559.874107.506443@rapttech.com.au> Message-ID: <19440.32079.30700.8356@rapttech.com.au> I suggested moving this discussion off list as I've found in the past that general accessibility issues on a distribution specific list are not always welcomed. If the general feeling of list participants is that this sort of discussion is on topic, interesting and useful, I have no problems keeping it on the list. Having said that, I also think it would be valuable if we try to also consider and discuss, what, if anything, we can do to improve the situation on Ubuntu in particular and linux in general. While I do feel slightly out of my depth with respect to VR related issues, I am still interested in the topic. I do think we need to have some focus on what we can do to improve the situation, even if that improvement is only to increase awareness and understanding of the issues. What I'd like to avoid is ending up in a circular discussion of the issues which ends up being only a philosophical debate that simply re-hashes the same old accessibility issues we are all too familiar with. It would be good if we could arrive, after discussion and debate, at a point where some strategy could be defined to actually move things forward. Is this possible or do we still need to understand the issue more? Are we in a position to even look at this yet or do we still need to work at understanding the issue and what may be possible. 
Do we run a risk of over thinking things and what we really need to do is just push full speed ahead and damn the torpedoes or do we need to gather more resources and people before we can do anything? So many questions, so little time! Tim Pia writes: > I just wanted to ask that you guys not take this topic off list. It was > one of the most seriously useful conversations that has been on here for a > long time, because it looks at the future of a barely functional state of > things which is really what we all should be concerned about. So, I have > been reading the thread closely. I just have not added much yet, because > I would just be repeating much of what has already be said at this point. > > Kind Regards, > > Pia -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. From pmikeal at comcast.net Mon May 17 03:12:03 2010 From: pmikeal at comcast.net (Pia) Date: Sun, 16 May 2010 23:12:03 -0400 (EDT) Subject: [OFF-TOPIC] Re: ideological speed bumps In-Reply-To: <19440.32079.30700.8356@rapttech.com.au> References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> <4BEF6125.9080501@harvee.org> <19439.40559.874107.506443@rapttech.com.au> <19440.32079.30700.8356@rapttech.com.au> Message-ID: One thing I have been wondering about for a long time is how to actually contribute as an Ubuntu Accessibility maintainer. Though I have asked and tried to get some input on it, no one has offered assistance from the official team. My real problem is that the directions and documentation for steps on how to help out seem difficult and convoluted to try and understand. It appears very time consuming to try and get onto a team such as becoming a MOTU or joining the accessibility team and so with no guidance or everyone who has the power to make changes to the distro ignoring requests for information about how to get changes into the official release, I have had trouble contributing. One example is that it would be easy for me to package a speakup kernel module, but the ones they have had in the past in source code would be broken or not even patch correctly. It would be nice to have a binary package or at least source that would work via module assistant. I would be glad to help with that, but no one ever responds as to how I can help in that way for the official distro. It isn't just the accessibility team though. I have wanted to help where certain scientific packages were concerned but did not find it easy to figure out how to help and submit a package. Sometimes technologies can be glued together to work within a distro by people who are familiar with using them, but it seems difficult to try and get on the official team. Thanks, Pia On Mon, 17 May 2010, Tim Cross wrote: > > I suggested moving this discussion off list as I've found in the past that > general accessibility issues on a distribution specific list are not always > welcomed. If the general feeling of list participants is that this sort of > discussion is on topic, interesting and useful, I have no problems keeping it > on the list. > > Having said that, I also think it would be valuable if we try to also consider > and discuss, what, if anything, we can do to improve the situation on Ubuntu > in particular and linux in general. 
While I do feel slightly out of my depth > with respect to VR related issues, I am still interested in the topic. > > I do think we need to have some focus on what we can do to improve the > situation, even if that improvement is only to increase awareness and > understanding of the issues. What I'd like to avoid is ending up in a circular > discussion of the issues which ends up being only a philosophical debate > that simply re-hashes the same old accessibility issues we are all too > familiar with. It would be good if we could arrive, after discussion and > debate, at a point where some strategy could be defined to actually move > things forward. Is this possible or do we still need to understand the issue > more? Are we in a position to even look at this yet or do we still need to > work at understanding the issue and what may be possible. Do we run a risk of > over thinking things and what we really need to do is just push full speed > ahead and damn the torpedoes or do we need to gather more resources and people > before we can do anything? > > So many questions, so little time! > > Tim > > Pia writes: > > I just wanted to ask that you guys not take this topic off list. It was > > one of the most seriously useful conversations that has been on here for a > > long time, because it looks at the future of a barely functional state of > > things which is really what we all should be concerned about. So, I have > > been reading the thread closely. I just have not added much yet, because > > I would just be repeating much of what has already be said at this point. > > > > Kind Regards, > > > > Pia > > -- > Tim Cross > tcross at rapttech.com.au > > There are two types of people in IT - those who do not manage what they > understand and those who do not understand what they manage. > -- > Tim Cross > tcross at rapttech.com.au > > There are two types of people in IT - those who do not manage what they > understand and those who do not understand what they manage. > From valdis at odo.lv Mon May 17 08:15:09 2010 From: valdis at odo.lv (Valdis) Date: Mon, 17 May 2010 08:15:09 +0000 (UTC) Subject: [Fwd: Taking A Break from Ubuntu] References: <1274045043.3785.31446.camel@saeko.local> Message-ID: ... > I depend on the Desktop Zoom and color inversion capabilities of > Compiz Fusion because of my visual disability. The problem is NOT > getting the Nvidia drivers installed. Its getting a desktop > resolution of higher than 640x480. My monitor is non-standard. Its a > Sun Microsystem CRT with dual inputs, one is a huge plug for a Sun > workstation, and the other is for standard VGA. Prior to Ubuntu 7.10, > where you selected your screen resolution there was a tab where you > could scroll through a list of hardware manufacturers and select your > specific model Monitor. If it wasn't listed you could at least select > one of the Default options. I usually selected Generic 1024x768 > Monitor from the list, and I was good to go. This feature was > removed. I issued bug reports about it, made complaints, and not a > GODDAMN thing has been done to address the problem ... IMHO forcing monitor to low resolution to show bigger fonts is outdated/wrong solution, because it causes weird aliasing artifacts. Much better is to change dots per inch for the monitor, i.e. leave monitor native resolution, but go to system-preferences-appearance-fonts-details and set "dots per inch" for about 200 or even more. 
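For those who prefer the command line, the same DPI change can be scripted. A minimal sketch, assuming the stock GNOME 2 desktop shipped with Lucid (the GConf key name is quoted from memory and worth double-checking on your own system):

    # Force font rendering to 200 DPI, the equivalent of the Fonts > Details dialog
    gconftool-2 --type float --set /desktop/gnome/font_rendering/dpi 200

    # Remove the override again and fall back to the detected value
    gconftool-2 --unset /desktop/gnome/font_rendering/dpi

Note that this only scales font rendering; icon and widget sizes are controlled separately by the theme.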
Valdis

From thomaslloyd at yahoo.com Mon May 17 11:30:45 2010
From: thomaslloyd at yahoo.com (Thomas Lloyd)
Date: Mon, 17 May 2010 12:30:45 +0100
Subject: How and where to develop
In-Reply-To:
References:
Message-ID: <1274095845.3258.16.camel@ubuntu-10>

Hi Pia,

Thought I would point you in the direction of Vinux, which is Ubuntu based. They are welcoming contributions that improve accessibility and I am sure could benefit from your efforts. Ubuntu are making noises about including some of their developments into the main distro but I have no experience of this happening yet.

Back to Speech to Text. I am going to plug my project again because I am still working on open-sapi, which is an interface into the commercial MS TTS & STT system. I have not heard of anyone using the MS systems, or of anyone who has trained it up to be anything worth considering. This is why I have left the SR element of the project well alone at the moment. But I am implementing the text to speech so that any SAPI compliant voice will work under Linux. I am close to a stable release but have not been working on this for over a year or so. I have struggled to get speech-dispatcher integration and, again, if there is anyone able to help please drop me a line. If I can get it all to work under Ubuntu 10.04 then I will release a deb, but until then I will keep working away.

Getting back to my original point: commercial engines for assistive technologies can be incorporated into Linux with a bit of glue and sticky tape.

You can get to the project site here.
Project site: http://code.google.com/p/open-sapi/
Discussion Group: http://groups.google.com/group/open-sapi?pli=1

The project has been designed with the cloud in mind, so until MS allow us to use MS SAPI software without an OS license, cloud based it might have to stay. That should not be such a big problem though. Someone might just have to foot the bill :(

From esj at harvee.org Mon May 17 13:04:00 2010
From: esj at harvee.org (Eric S. Johansson)
Date: Mon, 17 May 2010 09:04:00 -0400
Subject: ideological speed bumps
In-Reply-To:
References: <4BEF1069.8020901@harvee.org> <19439.17291.713808.930013@rapttech.com.au> <4BEF6125.9080501@harvee.org>
Message-ID: <4BF13EC0.9010909@harvee.org>

On 5/16/2010 11:07 AM, Hugh Sasse wrote:
> One would have hoped that 19 years after the Americans with
> Disabilities Act, and 15 years after similar UK legislation was
...
> (unfortunately commercial) system" problem immediately. But a
> standard could drive innovation in some cases. This hasn't worked
> perfectly for HTML and browsers, but it has worked to some extent, I
> think.

Warning: I had way too much caffeine yesterday and I'm running on about two hours sleep, so this may not make sense. Normally I would delay responding, but I'm running short on time in the near future and don't want to lose the thread; hence the too much caffeine and two hours sleep.

Hugh, I think this is a good segue from some of the other things I was talking about. As one person said (effectively), WWUD: what would Ubuntu do? Canonical has a long history of crossing the boundaries between commercial and open source components and so they would be a natural to help support an effort for handicap accessibility using both commercial and open-source components. I also have an unshakable belief that a cross-machine solution is the best one for the short and long term. As I've said elsewhere, I will be a broken record and say here that accessibility features should be owned by the disabled person.
If you're blind, you own the text-to-speech engine; if your hands are broken, you own the speech recognition engine. If all you can do is use a unicorn stick, then you own the stick and the special keyboard. It is not the responsibility of the host running the application to provide any accessibility tools or capabilities. It is the responsibility of the application host to provide you access to every single function the application provides publicly to any interface. This also includes the information the application operates on.

I prefer this solution because it is far more testable with lower resource demands on any application developer, therefore they have no excuse for not making it work. No more will we hear "we don't have any disabled people, therefore we can't test text-to-speech or speech keys or, or, or". A standard API-driven test can simply and easily validate accessibility functionality. One great test for the usefulness of the API in an application is if you can extract the GUI and make it run separately from the application. Like I said, all interfaces, one API.

Another reason why cross-machine solutions are better is the simple issue of licensing. I'm not going to run NaturallySpeaking on every single platform I use. Never mind that most of them are Linux; I would need to run it on something like 30 to 50 machines per year, and at $500 per license, that would break the bank. But if I have a single license on my portable platform and connect to virtually any machine, this is a good thing because it reduces the number of instances of closed source software you need to use and maximizes OSS visibility for disabled users and hopefully their employers, once they can do more things with a Linux computer than they can with Windows.

What this interface looks like is, quite frankly, not clear. For example, one of my favorite tools to build would be the enhanced dictation box. How do you define that? Is it an application built on top of a standard disability API? Don't know.

On the legislative/legal question, one axiom I've come to embrace is that law cannot prevent what technology enables. There are lots of practical examples of this in the disability world; law cannot prevent disabled people from being completely locked out by application technology. Every time a new revision comes, a new technology arrives (think iPhone), or a new business practice (cloud computing) arrives on the scene, we keep fighting the same old battle over and over again about putting in disability access, and businesses whingeing about how hard it is. If we can fight back with our own technology which reduces the cost of inclusion, and, if it's built in, makes the cost almost nothing, then legislative efforts have a chance at succeeding. This is especially true if we could use public watchdogs to run test suites to say "fails or passes". If they want the answers as to how, then they pay the watchdog organization for that information so the watchdog organization can keep doing what it does.

Anyway, got to go do other things which will hopefully involve sleep sometime in the next few hours. :-)

From waywardgeek at gmail.com Tue May 18 08:21:58 2010
From: waywardgeek at gmail.com (Bill Cox)
Date: Tue, 18 May 2010 04:21:58 -0400
Subject: Anyone know how to activate iaccessible2 interface to Qt apps?
Message-ID:

I saw a demo of a calculator showing its accessible objects through dbus from 2007. I also see that the code to present iaccessible2 objects is in the Qt source code. Anyone know how to turn it on?
Thanks,
Bill

From pstowe at gmail.com Tue May 18 09:50:28 2010
From: pstowe at gmail.com (Penelope Stowe)
Date: Tue, 18 May 2010 05:50:28 -0400
Subject: Follow-up to UDS Meeting
Message-ID:

Hi,

We had a very successful session about the team at UDS and decided to have a meeting next week to follow up with the entire team.

Please e-mail me your availability by this Friday, May 21, at 12:00 UTC so I can come up with a meeting time.

Please note that I will probably try to turn this meeting time into a monthly meeting time.

Thanks!
Penelope

From pstowe at gmail.com Tue May 18 09:51:47 2010
From: pstowe at gmail.com (Penelope Stowe)
Date: Tue, 18 May 2010 05:51:47 -0400
Subject: IRC Channel
Message-ID:

I just wanted to send out a reminder that we do have an IRC channel that people can use and/or hang out in. It's #ubuntu-accessibility on freenode. I've noticed in meetings that there's a lot of general discussion that's happened that isn't always on-topic for the meeting and wanted to remind people that they're welcome to discuss things in the channel all the time!

Thanks,
Penelope

From milton at tomaatnet.nl Tue May 18 11:18:56 2010
From: milton at tomaatnet.nl (milton)
Date: Tue, 18 May 2010 13:18:56 +0200
Subject: Orca disappears in Lucid
Message-ID: <1274181536.1797.6.camel@milton-desktop>

Dear List,
I'm an end user trying to work with Orca in Ubuntu.
I successfully did a fresh installation of Lucid with speech. Everything went fine and Orca 2.30.0 was doing great.
Following the instructions on live.gnome.org/Orca I did the git thing for atk, at-spi and orca to try 2.31.1 pre.
When I started up the machine this morning Orca did not come up.
Can you help me to solve this please?
I can of course do a fresh install, but how can I prevent running into the same problem?
With the script command I captured the output below. Thank you in advance.
Regards, Milton

milton at milton-desktop: ~milton at milton-desktop:~$ orca
orca:1516): atk-bridge-WARNING **: AT_SPI_REGISTRY was not started at session startup.

(orca:1516): atk-bridge-WARNING **: IOR not set.

(orca:1516): atk-bridge-WARNING **: Could not locate registry

** (orca:1516): WARNING **: Trying to register gtype 'WnckWindowState' as enum when in fact it is of type 'GFlags'

** (orca:1516): WARNING **: Trying to register gtype 'WnckWindowActions' as enum when in fact it is of type 'GFlags'

** (orca:1516): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags'

Contraction tables for liblouis cannot be found.
This usually means orca was built before liblouis was installed. Contracted braille will not be available.
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/lib/python2.6/dist-packages/orca/orca.py", line 1805, in main
    init(pyatspi.Registry)
  File "/usr/local/lib/python2.6/dist-packages/orca/orca.py", line 1303, in init
    registry.registerEventListener(_onChildrenChanged,
  File "/usr/lib/python2.6/dist-packages/pyatspi/registry.py", line 331, in __getattribute__
    raise RuntimeError('Could not find or activate registry')
RuntimeError: Could not find or activate registry
]0;milton at milton-desk
(orca:1558): atk-bridge-WARNING **: AT_SPI_REGISTRY was not started at session startup.
]0;milton at milton-desktop: ~milton at milton-desktop:~$ uname -a Linux milton-desktop 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010 i686 GNU/Linux ]0;milton at milton-desk From j.orcauser at googlemail.com Tue May 18 12:06:06 2010 From: j.orcauser at googlemail.com (Jon) Date: Tue, 18 May 2010 13:06:06 +0100 Subject: Orca disappears in Lucid In-Reply-To: <1274181536.1797.6.camel@milton-desktop> References: <1274181536.1797.6.camel@milton-desktop> Message-ID: <20100518120446.GA22589@jupiter.uk.to> Hi, What happens when you do: orca -t on the command line? and then log out and log back in at the end. -Jon On Tue 18/05/2010 at 13:18:56, milton wrote: > Dear List, > I'm an enduser and try to work with Orca in Ubuntu. > I did succesful a fresh installation of Lucid with speech. Everything > went fine and Orca 2.30.0 was doing great. > With the instruction on live.gnome.org/Orca I did the git thingfor atk, > at-spi and orca to try 2.31.1 pre. > When I start up the machine this morning Orca did not comes up. > Can you help me to solve this please? > I can sure do a fresh install but how can I prevent to run in the same > problem? > With the script command I capture the following below. Thank you in > advance. > Regards, Milton > milton at milton-desktop: ~milton at milton-desktop:~$ orca > orca:1516): atk-bridge-WARNING **: AT_SPI_REGISTRY was not started at > session startup. > > (orca:1516): atk-bridge-WARNING **: IOR not set. > > (orca:1516): atk-bridge-WARNING **: Could not locate registry > > ** (orca:1516): WARNING **: Trying to register gtype 'WnckWindowState' > as enum when in fact it is of type 'GFlags' > > ** (orca:1516): WARNING **: Trying to register gtype 'WnckWindowActions' > as enum when in fact it is of type 'GFlags' > > ** (orca:1516): WARNING **: Trying to register gtype > 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags' > Contraction tables for liblouis cannot be found. > This usually means orca was built before > liblouis was installed. Contracted braille will > not be available. > Traceback (most recent call last): > File "", line 1, in > File "/usr/local/lib/python2.6/dist-packages/orca/orca.py", line 1805, > in main > init(pyatspi.Registry) > File "/usr/local/lib/python2.6/dist-packages/orca/orca.py", line 1303, > in init > registry.registerEventListener(_onChildrenChanged, > File "/usr/lib/python2.6/dist-packages/pyatspi/registry.py", line 331, > in __getattribute__ > raise RuntimeError('Could not find or activate registry') > RuntimeError: Could not find or activate registry > ]0;milton at milton-desk > (orca:1558): atk-bridge-WARNING **: AT_SPI_REGISTRY was not started at > session startup. > > ]0;milton at milton-desktop: ~milton at milton-desktop:~$ uname -a > Linux milton-desktop 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 > 13:27:30 UTC 2010 i686 GNU/Linux > ]0;milton at milton-desk > > > -- > Ubuntu-accessibility mailing list > Ubuntu-accessibility at lists.ubuntu.com > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility From hammera at pickup.hu Tue May 18 12:24:06 2010 From: hammera at pickup.hu (Hammer Attila) Date: Tue, 18 May 2010 14:24:06 +0200 Subject: Orca disappears in Lucid In-Reply-To: <20100518120446.GA22589@jupiter.uk.to> References: <1274181536.1797.6.camel@milton-desktop> <20100518120446.GA22589@jupiter.uk.to> Message-ID: <4BF286E6.90708@pickup.hu> Hy, Milton, you using automatic login or normal GDM accessible login feature with screen reader support? 
If you use accessible login, does Orca talk on the GDM screen but not after you log in?
If this problem is happening, once you are logged in you may need to run the killall speech-dispatcher command to stop Speech Dispatcher, and then press some keys in gnome-terminal so it reinitializes. If this workaround helps you, we have begun to narrow down what the problem is with your machine.
If this step helps, the problem is happening because of the Git master Orca version: for the Ubuntu-packaged Orca version, Luke made a patch that fixes this problem by killing speech-dispatcher automatically after login, before Orca starts, but this patch is not part of the Orca git master version.
For example, because I am using the Orca git master version, I applied the following workaround on my system to prevent this problem; this workaround is not needed if you are using the Ubuntu-packaged original Orca versions:
1. I put the killall speech-dispatcher command at the end of my .bash_logout file.
2. I put the same line in the /etc/gdm/PostSession/Default file, before the exit line.

Hope this helps,

Attila

From milton at tomaatnet.nl Tue May 18 15:13:07 2010
From: milton at tomaatnet.nl (milton)
Date: Tue, 18 May 2010 17:13:07 +0200
Subject: Orca disappears in Lucid
Message-ID: <1274195587.1763.1.camel@milton-desktop>

Hi Attila,
Before, I had an accessible login with speech, and after login Orca started automatically. Right now I hear only the drums at login and, after login, only the startup sound of Ubuntu. So I tried putting in the killall speech-dispatcher command as you described. After logout and login nothing happens, still no Orca.
I also run Ubuntu from an external drive that I had upgraded from Karmic to Lucid. There I have no problems at all doing the git thing and using Orca 2.31.1 pre.
Milton

----- Original Message -----
From: "Hammer Attila"
To: "ubuntu"
Sent: Tuesday, May 18, 2010 2:24 PM
Subject: Re: Orca disappears in Lucid

> Hi,
>
> Milton, are you using automatic login or the normal GDM accessible login feature with screen reader support? If you use accessible login, does Orca talk on the GDM screen but not after you log in?
> If this problem is happening, once you are logged in you may need to run the killall speech-dispatcher command to stop Speech Dispatcher, and then press some keys in gnome-terminal so it reinitializes. If this workaround helps you, we have begun to narrow down what the problem is with your machine.
> If this step helps, the problem is happening because of the Git master Orca version: for the Ubuntu-packaged Orca version, Luke made a patch that fixes this problem by killing speech-dispatcher automatically after login, before Orca starts, but this patch is not part of the Orca git master version.
> For example, because I am using the Orca git master version, I applied the following workaround on my system to prevent this problem; this workaround is not needed if you are using the Ubuntu-packaged original Orca versions:
> 1. I put the killall speech-dispatcher command at the end of my .bash_logout file.
> 2. I put the same line in the /etc/gdm/PostSession/Default file, before the exit line.
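In concrete terms, the workaround described above amounts to something like the following; this is only a sketch, since the exact contents of the files on Attila's system are not shown in the thread:

    # Append the command to ~/.bash_logout so Speech Dispatcher is stopped at logout
    echo "killall speech-dispatcher" >> ~/.bash_logout

    # Then, as root, edit /etc/gdm/PostSession/Default and add the same
    # "killall speech-dispatcher" line just before the final "exit" line.

The idea is that no stale speech-dispatcher instance survives into the next login session, so Orca can start a fresh one.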
> > Hope this helps, > > Attila > > -- > Ubuntu-accessibility mailing list > Ubuntu-accessibility at lists.ubuntu.com > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility From milton at tomaatnet.nl Tue May 18 15:15:07 2010 From: milton at tomaatnet.nl (milton) Date: Tue, 18 May 2010 17:15:07 +0200 Subject: re Orca disappears in Lucid Message-ID: <1274195707.1763.3.camel@milton-desktop> Hi Jon, Than Orca ask to choose for standard, espeak or dummy and keeps repeating it. I cannot stop the speech until typing killall speech-dispatcher in a console. But when I do orca -t in a terminal I can continuing the steps to configure till the end to press enter for to logout. When I press enter everyting is silent and a message appears of cannot registry. I also run Ubuntu from an external drive and had upgraded Karmic to Lucid. Here I have nog problems at all doing the git thing and using Orca 2.31.1 pre. ----- Original Message ----- From: "Jon" To: Sent: Tuesday, May 18, 2010 2:06 PM Subject: Re: Orca disappears in Lucid > Hi, > > What happens when you do: > orca -t > on the command line? > > and then log out and log back in at the end. > > -Jon > On Tue 18/05/2010 at 13:18:56, milton wrote: >> Dear List, >> I'm an enduser and try to work with Orca in Ubuntu. >> I did succesful a fresh installation of Lucid with speech. Everything >> went fine and Orca 2.30.0 was doing great. >> With the instruction on live.gnome.org/Orca I did the git thingfor atk, >> at-spi and orca to try 2.31.1 pre. >> When I start up the machine this morning Orca did not comes up. >> Can you help me to solve this please? >> I can sure do a fresh install but how can I prevent to run in the same >> problem? >> With the script command I capture the following below. Thank you in >> advance. >> Regards, Milton >> milton at milton-desktop: ~milton at milton-desktop:~$ orca >> orca:1516): atk-bridge-WARNING **: AT_SPI_REGISTRY was not started at >> session startup. >> >> (orca:1516): atk-bridge-WARNING **: IOR not set. >> >> (orca:1516): atk-bridge-WARNING **: Could not locate registry >> >> ** (orca:1516): WARNING **: Trying to register gtype 'WnckWindowState' >> as enum when in fact it is of type 'GFlags' >> >> ** (orca:1516): WARNING **: Trying to register gtype 'WnckWindowActions' >> as enum when in fact it is of type 'GFlags' >> >> ** (orca:1516): WARNING **: Trying to register gtype >> 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags' >> Contraction tables for liblouis cannot be found. >> This usually means orca was built before >> liblouis was installed. Contracted braille will >> not be available. >> Traceback (most recent call last): >> File "", line 1, in >> File "/usr/local/lib/python2.6/dist-packages/orca/orca.py", line 1805, >> in main >> init(pyatspi.Registry) >> File "/usr/local/lib/python2.6/dist-packages/orca/orca.py", line 1303, >> in init >> registry.registerEventListener(_onChildrenChanged, >> File "/usr/lib/python2.6/dist-packages/pyatspi/registry.py", line 331, >> in __getattribute__ >> raise RuntimeError('Could not find or activate registry') >> RuntimeError: Could not find or activate registry >> ]0;milton at milton-desk >> (orca:1558): atk-bridge-WARNING **: AT_SPI_REGISTRY was not started at >> session startup. 
>>
>> ]0;milton at milton-desktop: ~milton at milton-desktop:~$ uname -a
>> Linux milton-desktop 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010 i686 GNU/Linux
>> ]0;milton at milton-desk
>>
>>
>> --
>> Ubuntu-accessibility mailing list
>> Ubuntu-accessibility at lists.ubuntu.com
>> https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility
>
> --
> Ubuntu-accessibility mailing list
> Ubuntu-accessibility at lists.ubuntu.com
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility

From cjk at teamcharliesangels.com Tue May 18 17:28:55 2010
From: cjk at teamcharliesangels.com (Charlie Kravetz)
Date: Tue, 18 May 2010 11:28:55 -0600
Subject: Follow-up to UDS Meeting
In-Reply-To:
References:
Message-ID: <20100518112855.19946659@teamcharliesangels.com>

On Tue, 18 May 2010 05:50:28 -0400 Penelope Stowe wrote:
> Hi,
>
> We had a very successful session about the team at UDS and decided to
> have a meeting next week to follow-up with the entire team.
>
> Please e-mail me your availability by this Friday, May 21, at 12:00
> UTC so I can come up with a meeting time.
>
> Please note that I will probably try to turn this meeting time into a
> monthly meeting time.
>
> Thanks!
> Penelope
>

Well, that's a pretty wide time frame. However, I can be available almost any time that is good for the group. However, in my own best interests, the following are ideal: Monday-Friday, 16:00 - 23:30 UTC except Wednesday 17:00 - 18:00 UTC.

--
Charlie Kravetz
Linux Registered User Number 425914 [http://counter.li.org/]
Never let anyone steal your DREAM. [http://keepingdreams.com]

From milton at tomaatnet.nl Wed May 19 15:27:31 2010
From: milton at tomaatnet.nl (milton)
Date: Wed, 19 May 2010 17:27:31 +0200
Subject: help to restore Orca in Lucid
Message-ID: <1274282851.1685.4.camel@milton-desktop>

Dear List,
I'm an end user.
Can you please tell me how to restore Orca 2.30 in Lucid?
After I tried Orca 2.31.1 pre something went wrong and Orca disappeared.
Typing orca into the Alt+F2 box at startup won't even bring up the Orca window.
Thank you in advance.
Milton

From milton at tomaatnet.nl Thu May 20 09:14:58 2010
From: milton at tomaatnet.nl (milton)
Date: Thu, 20 May 2010 11:14:58 +0200
Subject: Help to restore Orca in Lucid
Message-ID: <1274346898.1816.3.camel@milton-desktop>

Hi Bill,
Still Orca refuses to come up. I just did a fresh install of Lucid and I will stick to Orca's stable branch. How can I update Orca within the stable branch?
Thanks anyway.
Milton

If you can get to a console and log in, you can try "sudo apt-get install --reinstall gnome-orca". That should do it.

Bill

On Wed, May 19, 2010 at 11:27 AM, milton < milton at tomaatnet.nl > wrote:
> Dear List,
> I'm an end user.
> Can you please tell me how to restore Orca 2.30 in Lucid?
> After I tried Orca 2.31.1 pre something went wrong and Orca disappeared.
> Typing orca into the Alt+F2 box at startup won't even bring up the Orca window.
> Thank you in advance.
> Milton
>
>
> --
> Ubuntu-accessibility mailing list
> Ubuntu-accessibility at lists.ubuntu.com
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility
>

From pstowe at gmail.com Thu May 20 09:57:16 2010
From: pstowe at gmail.com (Penelope Stowe)
Date: Thu, 20 May 2010 05:57:16 -0400
Subject: Follow-up to UDS Meeting
In-Reply-To: <20100518112855.19946659@teamcharliesangels.com>
References: <20100518112855.19946659@teamcharliesangels.com>
Message-ID:

Just a reminder that I really need to know what times people are available! I've heard from only two people.
The last time I scheduled a meeting several people complained about the time, so I'd really like to get feedback from more people.

Thank you!
Penelope

From hammera at pickup.hu Fri May 21 05:56:23 2010
From: hammera at pickup.hu (Hammer Attila)
Date: Fri, 21 May 2010 07:56:23 +0200
Subject: When I booting Lucid final live CD, setting accessibility mode and press a Tab key and choose the install Ubuntu old menu item, Orca is not talking when installer is present
Message-ID: <4BF62087.6010802@pickup.hu>

Hi List,

When I tried to install Ubuntu in the following way, Orca did not talk in the installer:
1. I pressed the Tab key to get back to the old menu style.
2. I chose the language and set the accessibility mode to screen reader mode.
3. I chose the Install Ubuntu menu item and pressed the Enter key.
When the installer appears on the display, Orca does not speak. If I use the Try Ubuntu menu item, Orca talks wonderfully and it is possible to install the system fine by launching the desktop icon.
Luke, when I report this bug, is this an Ubiquity or a casper related issue? Is it possible to fix this problem in the Lucid 10.04.2 maintenance release, or only in Maverick Meerkat?

Attila

From hammera at pickup.hu Fri May 21 06:42:33 2010
From: hammera at pickup.hu (Hammer Attila)
Date: Fri, 21 May 2010 08:42:33 +0200
Subject: When I booting Lucid final live CD, setting accessibility mode and press a Tab key and choose the install Ubuntu old menu item, Orca is not talking when installer is present
In-Reply-To: <91DEC33393514898AA4C27C04A3F987A@milton>
References: <4BF62087.6010802@pickup.hu> <91DEC33393514898AA4C27C04A3F987A@milton>
Message-ID: <4BF62B59.1010701@pickup.hu>

Hi Milton,

I see you wrote about a problem when there is a not-yet-partitioned area on the hard disk, for example a free space area left after you delete a partition with the partitioner.
If I try to choose a filesystem type in the modify dialog, Orca only speaks the word "filesystem", not the actual file system type (ext4, ext3 etc.). Do you see this problem too?
If a partition already exists and I make modifications (for example choosing another filesystem type and marking the partition for formatting), Orca correctly speaks the right filesystem type. Do you see a similar problem?

Attila

From rao.nischal at gmail.com Fri May 21 15:04:37 2010
From: rao.nischal at gmail.com (Nischal Rao)
Date: Fri, 21 May 2010 20:34:37 +0530
Subject: VEDICS Speech Assistant
Message-ID:

Hi,

Some friends and I have created speech assistant software for Linux called VEDICS (Voice Enabled Desktop Interaction and Control System). Using this software the user can access any element found on the user's screen through speech. The user can also navigate the filesystem through speech.

We have created some demo screencasts of the software:
1. Accessing the gnome panel and applications: http://www.youtube.com/watch?v=WrVaJXtv0WU
2. Changing the theme and background: http://www.youtube.com/watch?v=zRgX94qGj3g
3. Navigating directories and playing songs: http://www.youtube.com/watch?v=kVQwAoeIavk
4. Running a slide show: http://www.youtube.com/watch?v=JtzA8TFwvuI
5. Running default applications and window operations: http://www.youtube.com/watch?v=iCEANbu8p50
6. Stopping and starting vedics: http://www.youtube.com/watch?v=TLFtdrlt3lM
7. Creating and deleting files: http://www.youtube.com/watch?v=_3CFAl22h2o
8. Navigating links: http://www.youtube.com/watch?v=AufBaaJazKU

Currently the software doesn't support the dictation facility. However, we are planning to add this feature in the future.
The best part of this software is that it is speaker independent, no training is required and it can recognize words not present in the English dictionary. Currently it works well on ubuntu 9.10 and ubuntu 10.04 You can find the source code at : http://sourceforge.net/projects/vedics/ -- regards, Nischal E Rao blogs.sun.com/nischal Join RVCE OSUM at http://osum.sun.com/group/rvceosum -------------- next part -------------- An HTML attachment was scrubbed... URL: From esj at harvee.org Fri May 21 20:21:53 2010 From: esj at harvee.org (Eric S. Johansson) Date: Fri, 21 May 2010 16:21:53 -0400 Subject: VEDICS Speech Assistant In-Reply-To: References: Message-ID: <4BF6EB61.6080500@harvee.org> On 5/21/2010 11:04 AM, Nischal Rao wrote: > Hi, > > I and some of my friends have created a speech assistant software for > linux called VEDICS(Voice Enabled Desktop Interaction and Control > System). Using this software the user can access any element found on > the user's screen through speech. The user can also navigate the > filesystem through speech. > > We have created some demo screencasts of the software: > > 1. Accessing the gnome panel and application. > http://www.youtube.com/watch?v=WrVaJXtv0WU > > 2. Changing the theme and background. > http://www.youtube.com/watch?v=zRgX94qGj3g > > 3. Navigating directories and playing songs: > http://www.youtube.com/watch?v=kVQwAoeIavk > > 4. Running a slide show: > http://www.youtube.com/watch?v=JtzA8TFwvuI > > 5. Running default applications and window operations: > http://www.youtube.com/watch?v=iCEANbu8p50 > > 6. Stopping and starting vedics: > http://www.youtube.com/watch?v=TLFtdrlt3lM > > 7. Creating and deleting files: > http://www.youtube.com/watch?v=_3CFAl22h2o > > 8. Navigating links: > http://www.youtube.com/watch?v=AufBaaJazKU > > > Currently the software doesn't support the dictation facility. However, > we are planning to add this feature in the future. > The best part of this software is that it is speaker independent, no > training is required and it can recognize words not present in the > English dictionary. > > Currently it works well on ubuntu 9.10 and ubuntu 10.04 > > You can find the source code at : http://sourceforge.net/projects/vedics/ very nice. have you thrown away your keyboard yet? please do so and send a message to the list without keyboard. From kenny at hittsjunk.net Sat May 22 07:03:21 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Sat, 22 May 2010 02:03:21 -0500 Subject: VEDICS Speech Assistant In-Reply-To: <4BF6EB61.6080500@harvee.org> References: <4BF6EB61.6080500@harvee.org> Message-ID: <20100522070321.GE14174@blackbox.hittsjunk.net> Hi. On Fri, May 21, 2010 at 04:21:53PM -0400, Eric S. Johansson wrote: > On 5/21/2010 11:04 AM, Nischal Rao wrote: > > > > Currently the software doesn't support the dictation facility. However, > > we are planning to add this feature in the future. > > The best part of this software is that it is speaker independent, no > > training is required and it can recognize words not present in the > > English dictionary. > > > > Currently it works well on ubuntu 9.10 and ubuntu 10.04 > > > > You can find the source code at : http://sourceforge.net/projects/vedics/ > > very nice. have you thrown away your keyboard yet? please do so and send a > message to the list without keyboard. > Before you post such a negative message, you should really read first. This is not even a stable tarball release yet. The author stated clearly dictation wasn't available, but is planned to be added. 
If he had claimed that you could do dictation, your post would make since, but since he didn't, you look like a winy ass. When a project like this is still at such an early stage, bad attitude will cause a developer to wonder if the trouble is really worth it. One note to those people who have recently started to get a ubuntu accessibility group going again: you really need to subscribe to gnome-accessibility. All the developments in accessibility are happening in upstream gnome and not ubuntu. There was a more complete discussion about this particular app on gnome-accessibility. Kenny From esj at harvee.org Sat May 22 13:13:47 2010 From: esj at harvee.org (Eric S. Johansson) Date: Sat, 22 May 2010 09:13:47 -0400 Subject: VEDICS Speech Assistant In-Reply-To: <20100522070321.GE14174@blackbox.hittsjunk.net> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> Message-ID: <4BF7D88B.7040304@harvee.org> On 5/22/2010 3:03 AM, Kenny Hitt wrote: > Hi. > > On Fri, May 21, 2010 at 04:21:53PM -0400, Eric S. Johansson wrote: >> On 5/21/2010 11:04 AM, Nischal Rao wrote: > > >>> >>> Currently the software doesn't support the dictation facility. However, >>> we are planning to add this feature in the future. >>> The best part of this software is that it is speaker independent, no >>> training is required and it can recognize words not present in the >>> English dictionary. >>> >>> Currently it works well on ubuntu 9.10 and ubuntu 10.04 >>> >>> You can find the source code at : http://sourceforge.net/projects/vedics/ >> >> very nice. have you thrown away your keyboard yet? please do so and send a >> message to the list without keyboard. >> > Before you post such a negative message, you should really read first. > This is not even a stable tarball release yet. The author stated clearly > dictation wasn't available, but is planned to be added. > If he had claimed that you could do dictation, your post would make since, but since > he didn't, you look like a winy ass. > When a project like this is still at such an early stage, bad attitude will cause a developer > to wonder if the trouble is really worth it. those who are unaware of history are doomed to repeat it... badly This is about the 5th time I've seen this sort of project get started. I've seen every single commercial equivalent fail. I've watched people get excited over and over again thinking that at IVR level recognition engine can be used to replicate NaturallySpeaking functionality only to have their hopes crushed and energy wasted when they discover the two engine types are radically different. This is not to say the project could be useful in a particular problem domain such as robotic control or command and control by telephone it's just that history shows that this idea has failed when applied to accessibility needs because the vast majority of speech recognition use by disabled person is the creation of text, not noodling around on the desktop. After all, what value does setting the font have when your hands won't let you type the text. I've been involved in the Linux desktop recognition issue for a very long time. I've had conversations with senior management at Dragon Systems (pre-buyout) on the market strategy (they still can't figure out how to make money in Linux today because it will only cannibalize the Windows market and depressed pricing). 
I've participated in the creation of a nonprofit focused on creating Linux desktop speech recognition systems and watched its dissolution because we couldn't get the technology and, we couldn't get sufficient technical support from the OSS community to build what was needed. They wanted to build something based on sphinx or Julius, both of which would not meet our needs. this opinion of suitability came from the developers of the SR engine projects. If by being a whiny ass, you mean being a historian and making people aware of how they are wasting their time, unfairly raising people's hopes and building something for which they've not even studied the basic use case, then yes, I'll be a whiny ass. Ever since I've been injured, I've been watching upper extremity disabled people jumping up and down, waving their hands or something that doesn't hurt as badly, saying "hey hey hey, we need help over here!" unfortunately, the people who can write code are somewhere else saying "this should be useful because we know how to write this code." End result being being we get nothing we can use and developers think we are a bunch of ungrateful shits because we don't think their projects are wonderful. A further bit of insanity comes when someone asks for help integrating an open-source program (Emacs via VR-mode) to work with NaturallySpeaking. If we get a response, it's frequently "we can't do that, it would encourage people to use proprietary software". Head-desk. There seems to be a blind spot recognizing that what it takes to make a good speech user interface is complex, in many ways far more complex than almost all accessibility interfaces put together. Speech UIs really any need to be built at first and let the recognition engine come second or third in your priority list because the first priority should be making an environment work for speech recognition users. a related winey ass bit is that I've got ideas for speech UIs, I can't implement them because my hands are broken. I need someone to be a coding Buddy to work with as a team (two minds, one set of hands) to bring my ideas to fruition and find them solve the problems with them. This pair problem is also one of the reasons why speech recognition users have made so little progress improving their own lot over the past decade. We can't find developers with hands who are willing to truly listen to us. As result, we pick out code slowly and with lots of errors and sometimes, it just gets to be too much, too exhausting and projects fall by the wayside. hopefully this is my last whiny ass bit. OSS is nice, OSS should be added incrementally but if the ideology gets in the way of people being able to make money, to live independently then it should be sidelined until it no longer gets in the way. Otherwise, why should you bother with accessibility at all? > One note to those people who have recently started to get a ubuntu accessibility group going again: > you really need to subscribe to gnome-accessibility. All the developments in accessibility are happening > in upstream gnome and not ubuntu. > There was a more complete discussion about this particular app on gnome-accessibility. do you have a URL to the archives or a rough of conversation time so I can take a look? 
From tcross at rapttech.com.au Sun May 23 06:49:54 2010 From: tcross at rapttech.com.au (Tim Cross) Date: Sun, 23 May 2010 16:49:54 +1000 Subject: VEDICS Speech Assistant In-Reply-To: <4BF7D88B.7040304@harvee.org> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> Message-ID: <19448.53266.142791.77211@rapttech.com.au> Eric S. Johansson writes: > On 5/22/2010 3:03 AM, Kenny Hitt wrote: > > Hi. > > > > On Fri, May 21, 2010 at 04:21:53PM -0400, Eric S. Johansson wrote: > >> On 5/21/2010 11:04 AM, Nischal Rao wrote: > > > > > >>> > >>> Currently the software doesn't support the dictation facility. However, > >>> we are planning to add this feature in the future. > >>> The best part of this software is that it is speaker independent, no > >>> training is required and it can recognize words not present in the > >>> English dictionary. > >>> > >>> Currently it works well on ubuntu 9.10 and ubuntu 10.04 > >>> > >>> You can find the source code at : http://sourceforge.net/projects/vedics/ > >> > >> very nice. have you thrown away your keyboard yet? please do so and send a > >> message to the list without keyboard. > >> > > Before you post such a negative message, you should really read first. > > This is not even a stable tarball release yet. The author stated clearly > > dictation wasn't available, but is planned to be added. > > If he had claimed that you could do dictation, your post would make since, but since > > he didn't, you look like a winy ass. > > When a project like this is still at such an early stage, bad attitude will cause a developer > > to wonder if the trouble is really worth it. > > those who are unaware of history are doomed to repeat it... badly > > This is about the 5th time I've seen this sort of project get started. I've seen > every single commercial equivalent fail. I've watched people get excited over > and over again thinking that at IVR level recognition engine can be used to > replicate NaturallySpeaking functionality only to have their hopes crushed and > energy wasted when they discover the two engine types are radically different. > > This is not to say the project could be useful in a particular problem domain > such as robotic control or command and control by telephone it's just that > history shows that this idea has failed when applied to accessibility needs > because the vast majority of speech recognition use by disabled person is the > creation of text, not noodling around on the desktop. After all, what value does > setting the font have when your hands won't let you type the text. > > I've been involved in the Linux desktop recognition issue for a very long time. > I've had conversations with senior management at Dragon Systems (pre-buyout) > on the market strategy (they still can't figure out how to make money in Linux > today because it will only cannibalize the Windows market and depressed > pricing). I've participated in the creation of a nonprofit focused on creating > Linux desktop speech recognition systems and watched its dissolution because we > couldn't get the technology and, we couldn't get sufficient technical support > from the OSS community to build what was needed. They wanted to build something > based on sphinx or Julius, both of which would not meet our needs. this opinion > of suitability came from the developers of the SR engine projects. 
> > If by being a whiny ass, you mean being a historian and making people aware of > how they are wasting their time, unfairly raising people's hopes and building > something for which they've not even studied the basic use case, then yes, I'll > be a whiny ass. > > Ever since I've been injured, I've been watching upper extremity disabled people > jumping up and down, waving their hands or something that doesn't hurt as badly, > saying "hey hey hey, we need help over here!" unfortunately, the people who can > write code are somewhere else saying "this should be useful because we know how > to write this code." End result being being we get nothing we can use and > developers think we are a bunch of ungrateful shits because we don't think their > projects are wonderful. > > A further bit of insanity comes when someone asks for help integrating an > open-source program (Emacs via VR-mode) to work with NaturallySpeaking. If we > get a response, it's frequently "we can't do that, it would encourage people to > use proprietary software". Head-desk. There seems to be a blind spot recognizing > that what it takes to make a good speech user interface is complex, in many ways > far more complex than almost all accessibility interfaces put together. Speech > UIs really any need to be built at first and let the recognition engine come > second or third in your priority list because the first priority should be > making an environment work for speech recognition users. > > a related winey ass bit is that I've got ideas for speech UIs, I can't implement > them because my hands are broken. I need someone to be a coding Buddy to work > with as a team (two minds, one set of hands) to bring my ideas to fruition and > find them solve the problems with them. This pair problem is also one of the > reasons why speech recognition users have made so little progress improving > their own lot over the past decade. We can't find developers with hands who are > willing to truly listen to us. As result, we pick out code slowly and with lots > of errors and sometimes, it just gets to be too much, too exhausting and > projects fall by the wayside. > > hopefully this is my last whiny ass bit. OSS is nice, OSS should be added > incrementally but if the ideology gets in the way of people being able to make > money, to live independently then it should be sidelined until it no longer gets > in the way. Otherwise, why should you bother with accessibility at all? > > > > One note to those people who have recently started to get a ubuntu accessibility group going again: > > you really need to subscribe to gnome-accessibility. All the developments in accessibility are happening > > in upstream gnome and not ubuntu. > > There was a more complete discussion about this particular app on gnome-accessibility. > > do you have a URL to the archives or a rough of conversation time so I can take > a look? > Hi Eric, while I can appreciate the frustration you express in your posts, I have to agree with Kenny on this one. Your points regarding history being repeated etc mayb e valid. However, you made no reference to any of the points you later expanded upon in your original post. As Kenny points out, you didn't even acknowledge what the OP stated as the limitations in their system. I suspect you didn't even look into it any further than that simple introductory post. Your response was flippent and negative. The issues you raise are real and complex. 
They are going to be difficult to resolve and ther are almost certainly going to be many failures before we have some success. I suspect you are correct in that many with the technical skills don't understand the underlying issues well and frustratingly, we are destined to see the same mistakes being made. I believe this is because the problem is generally not well understood and as a consequnce, the outcomes are less than we would hope. However, I also feel that this is part of the process and it very much mirrors developments in other areas. Frequently, we learn more from our failures than we do from our successes. A frustrating part of software development is that, unlike the real sciences, we don't document and publish our failures. If we did, maybe the forward progress would be better. I also disagree with the view/belief that ignorance of history always means that the same mistakes are just repeated. Sometimes, ignorance of history results in fresh new approaches that find a solution. In some cases, awareness of history can have negative impact as well. It tends to constrain/define the approaches taken. In computing in particular, there have been a number of great advances made by people who did not come from a computing background, who were not aware of past history and attempts. In some cases, they did things that those who were more aware of the past and informed about the technology had already discounted because of their past experiences or because of theoretical limitations. In fact, this is a frequent pattern in many areas. Consider where we would be now if the Wright brothers had just looked at the past history of our attempts to fly! We should be aware of past history and we should try to learn from it. However, we also need to be balanced and sometimes, we just need to have a go. We may well fail, thats not the issue. What we need to do is pick ourselves up again after the failure, learn fromt he experiences and try again. I also have a very different view to yours regarding OSS. I don't see OSS as some separate culture or group. OSS is only an ideology and you cannot give up that ideology for expedience. Doing so means you end up with something else completely. It is true that adopting such an ideology can make some things more difficult and it is true that it will impose different limits or constraints. However, you adopt the ideology because you believe that in the end, the results will be, on the whole, better. However, I also think its a bit like religion. Its not for everyone and there are many different forms. Some people will get great comfort and inspiration from it, othes will not. For those who find it beneficial, great, for those who don't, great. The example you give regarding emacs and VR is a limited perspective. Write now, I'm writing to you using emacspeak, which also uses proprietary software. While we would not be able to get emacspeak bundled into emacs and while many hard core OSS developrs would not work on it because of this, it has not stopped its development and use. Likewise, finding new profitable business models that are self-sustaining is difficult because you really do need to approach things from a very different perspective, its not impossible and there are a growing number of successful businesses built on top of OSS. You and I may not be able to define or recognise such business models, but that does not mean it is impossible. 
Likewise, Dragon may have difficulty at the moment in recognising how to make their products profitable on the Linux platform, but that does not mean it cannot be done. Take a look at Oracle for an example of a company that is successful and has successfully moved their product to being supported under Linux. As I mentioned in an earlier post, often, companies are just not in a position to recognise the potentials of either OSS or supporting their product on other platforms. They may never do this or they may have a strategic change next week. As I've mentioned before, in OSS and I believe in the areas of adaptive technology, we need to scratch our own itch. Often when I say this, the response comes back that the individual doesn't have the technical skill, the time or cannot do it because of their disability. I think this is just a total cop out. There are many ways of helping to scratch your own itch. Even just getting the issues out ther in front of people is a start. Yes, it might take me longer to code the program because of my disability, but maybe the result will be better because of my close association and understanding or simply because it more precisely scratches my own individual itch. My strength lies in programming. I would be less successful in other areas, such as convincing a commercial entity into porting their product to Linux, supporting an OSS project or raising awareness of the issues amongst others. We all have skills and ways to contribute. The tricky part is recognising what our skills really are and how they can be applied. I disagree with your assessment that you cannot do much because of your disability. You have mentioned you need someone to code because you cannot due to your injuries. Yet, you are able to write these messages. If you can write an email, then why can you not write code? I recognise it may be slow and/or it may be difficult, but as you have demonstrated the ability to write reasonably long emails, you could put that effort into writing code as well. I'm not saying it is easy, but it would be the best way to get what *you* want - at least better than waiting for someone else to do it for you. Maybe coding isn't the best way for you to contribute. Maybe it is design, or lobbying, or testing, or ....... You mentioned that you have lots of good ideas and indicate you even know how to solve some of the issues, but need someone to help you code. Maybe this would be easier to do if you document, plan and design what you want done. Maybe someone looking for a project will see it and think your ideas are interesting. Maybe others will have some suggestions and improvements to make or maybe someone out there is already working on similar ideas. The point is, get it down and out of your head and then in front of people and you are likely to get more real progress than is currently occuring. For example, if you had a clearly defined project, maybe it wold be possible to find participants to work on it as part of the next Google Summer of Code? Maybe someone will pick it up as part of a reserach or teaching project or maybe you will write it up in such a way that it inspires somoene to contribute, support or fund. I am quite sure there will be other reasons you can point out to why this still won't work and maybe many of them are valid. I don't know the precise circumstances you find yourself in and I'm not trying to be 'nasty' or overly critical. 
However, in all your posts I've seen so far, essentially all that has come across has been very whiny and negative assessments of why it is all no good. You have indicated that you know of things that can be done to improve matters, but not provided anything of any real substance. Write your ideas up, put them on a web page and then start asking people for input and feedback. To make things change, you have to generate some interest and some motivation. Nobody is going to be as motivated to address the limitations you face as you are. If you're not able to get motivated enough to change the situation, it is very unlikely anyone else will.

This will probably come across as harsh - I don't mean it to be, but I believe it needs to be said. Much of what you have written is true and it is obvious that you are frustrated. I'd even go so far as to say there is a strong element of negativity and some underlying anger in what you have written. There is also an element of 'hopelessness'. Parts of it even come across a little bitter and can sound like being resigned to be a victim. I know this feeling and I know how hard it can be to not let the frustrations, lack of change and feelings of injustice become all encompassing. I sincerely hope this is just a temporary downswing. Possibly there is just a need to vent a little to reduce the pressure - I get that and I've been there. An old boss of mine used to say that on some days, all you can do is hold the line. That's fine. What we need to do is recognise when things are like this and acknowledge there are times we probably just need to let things go.

At the end of the day, much of what you have written is true and all too familiar to all of us with a reliance on adaptive technology. It hasn't added anything new. This is possibly my main issue with what you have posted. From what you have written, it is apparent you have considerable first-hand knowledge and experience in the VR field. Unfortunately, there is little of substance that could be used to either move things forward or assist others in avoiding some of the pitfalls. This is a pity.

Perhaps the question to ask is how we can change things. What can we do as individuals to improve the situation? If you have ideas, I'd strongly recommend putting them up on a website and then posting to the various lists asking others to read and provide input/feedback. While this will almost certainly not result in any great fundamental change, it may just provide the inspiration or prevent/reduce wasted effort. If we don't want history to be repeated, it needs to be documented and accessible. Of course, we all also need to recognise that sometimes our abilities to communicate and motivate also fail, so try not to be discouraged if initial responses are poor or there appears to be little interest or acknowledgement. Instead, adapt and try again. The important things are never easy and we rarely succeed initially. We need to have confidence and belief in what we are doing and keep pushing forward.

regards,

Tim

--
Tim Cross
tcross at rapttech.com.au

There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage.
From kenny at hittsjunk.net Sun May 23 09:23:32 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Sun, 23 May 2010 04:23:32 -0500 Subject: VEDICS Speech Assistant In-Reply-To: <19448.53266.142791.77211@rapttech.com.au> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> Message-ID: <20100523092332.GF14174@blackbox.hittsjunk.net> Hi. On Sun, May 23, 2010 at 04:49:54PM +1000, Tim Cross wrote: > > do you have a URL to the archives or a rough of conversation time so I can take > > a look? > > Try looking at: http://mail.gnome.org/archives/gnome-accessibility-list/2010-May/msg00099.html That was the start of the thread. Thanks for you post. You did a much better job at expressing my views than I would have done. Also, you were much more polite than I will be in this situation. I have no tolerance for disabled people wanting someone else to do it for them in open source. The thing people don't seem to get is open source puts the power in the hands of the user instead of some developer out to make money. The high cost of access technology was the first reason I switched to Linux. Once I started using it and learning about what it offered, I realized Linux was the best thing to happen for accessibility in a long time. Linux is still the only operating system I can install without sighted help. That was true even back in 2000 when I did my first Linux install. Access isn't perfect, but I have the power to fix any issues that really bother me enough. Kenny > > Hi Eric, > > while I can appreciate the frustration you express in your posts, I have to > agree with Kenny on this one. Your points regarding history being repeated etc > mayb e valid. However, you made no reference to any of the points you later > expanded upon in your original post. As Kenny points out, you didn't even > acknowledge what the OP stated as the limitations in their system. I suspect > you didn't even look into it any further than that simple introductory post. > Your response was flippent and negative. > > The issues you raise are real and complex. They are going to be difficult to > resolve and ther are almost certainly going to be many failures before we have > some success. I suspect you are correct in that many with the technical skills > don't understand the underlying issues well and frustratingly, we are destined > to see the same mistakes being made. I believe this is because the problem is > generally not well understood and as a consequnce, the outcomes are less than > we would hope. However, I also feel that this is part of the process and it > very much mirrors developments in other areas. Frequently, we learn more from > our failures than we do from our successes. A frustrating part of software > development is that, unlike the real sciences, we don't document and publish > our failures. If we did, maybe the forward progress would be better. > > I also disagree with the view/belief that ignorance of history always means > that the same mistakes are just repeated. Sometimes, ignorance of history > results in fresh new approaches that find a solution. In some cases, awareness > of history can have negative impact as well. It tends to constrain/define the > approaches taken. In computing in particular, there have been a number of > great advances made by people who did not come from a computing background, > who were not aware of past history and attempts. 
In some cases, they did > things that those who were more aware of the past and informed about the > technology had already discounted because of their past experiences or because > of theoretical limitations. In fact, this is a frequent pattern in many areas. > Consider where we would be now if the Wright brothers had just looked at the > past history of our attempts to fly! > > We should be aware of past history and we should try to learn from it. > However, we also need to be balanced and sometimes, we just need to have a go. > We may well fail, thats not the issue. What we need to do is pick ourselves up > again after the failure, learn fromt he experiences and try again. > > I also have a very different view to yours regarding OSS. I don't see OSS as > some separate culture or group. OSS is only an ideology and you cannot give up > that ideology for expedience. Doing so means you end up with something else > completely. It is true that adopting such an ideology can make some things > more difficult and it is true that it will impose different limits or > constraints. However, you adopt the ideology because you believe that in the > end, the results will be, on the whole, better. However, I also think its a > bit like religion. Its not for everyone and there are many different forms. > Some people will get great comfort and inspiration from it, othes will not. > For those who find it beneficial, great, for those who don't, great. > > The example you give regarding emacs and VR is a limited perspective. Write > now, I'm writing to you using emacspeak, which also uses proprietary software. > While we would not be able to get emacspeak bundled into emacs and while many > hard core OSS developrs would not work on it because of this, it has not > stopped its development and use. Likewise, finding new profitable business > models that are self-sustaining is difficult because you really do need to > approach things from a very different perspective, its not impossible and > there are a growing number of successful businesses built on top of OSS. You > and I may not be able to define or recognise such business models, but that > does not mean it is impossible. Likewise, Dragon may have difficulty at the > moment in recognising how to make their products profitable on the Linux > platform, but that does not mean it cannot be done. Take a look at Oracle for > an example of a company that is successful and has successfully moved their > product to being supported under Linux. As I mentioned in an earlier post, > often, companies are just not in a position to recognise the potentials of > either OSS or supporting their product on other platforms. They may never do > this or they may have a strategic change next week. > > As I've mentioned before, in OSS and I believe in the areas of adaptive > technology, we need to scratch our own itch. Often when I say this, the > response comes back that the individual doesn't have the technical skill, the > time or cannot do it because of their disability. I think this is just a total > cop out. There are many ways of helping to scratch your own itch. Even just > getting the issues out ther in front of people is a start. Yes, it might take > me longer to code the program because of my disability, but maybe the result > will be better because of my close association and understanding or simply > because it more precisely scratches my own individual itch. My strength lies > in programming. 
I would be less successful in other areas, such as convincing > a commercial entity into porting their product to Linux, supporting an OSS > project or raising awareness of the issues amongst others. We all have skills > and ways to contribute. The tricky part is recognising what our skills really > are and how they can be applied. > > I disagree with your assessment that you cannot do much because of your > disability. You have mentioned you need someone to code because you cannot due > to your injuries. Yet, you are able to write these messages. If you can write > an email, then why can you not write code? I recognise it may be slow and/or > it may be difficult, but as you have demonstrated the ability to write > reasonably long emails, you could put that effort into writing code as well. > I'm not saying it is easy, but it would be the best way to get what *you* want > - at least better than waiting for someone else to do it for you. Maybe coding > isn't the best way for you to contribute. Maybe it is design, or lobbying, or > testing, or ....... > > You mentioned that you have lots of good ideas and indicate you even know how > to solve some of the issues, but need someone to help you code. Maybe this > would be easier to do if you document, plan and design what you want done. > Maybe someone looking for a project will see it and think your ideas are > interesting. Maybe others will have some suggestions and improvements to make > or maybe someone out there is already working on similar ideas. The point is, > get it down and out of your head and then in front of people and you are > likely to get more real progress than is currently occuring. > > For example, if you had a clearly defined project, maybe it wold be possible > to find participants to work on it as part of the next Google Summer of Code? > Maybe someone will pick it up as part of a reserach or teaching project or > maybe you will write it up in such a way that it inspires somoene to > contribute, support or fund. > > I am quite sure there will be other reasons you can point out to why this still > won't work and maybe many of them are valid. I don't know the precise > circumstances you find yourself in and I'm not trying to be 'nasty' or overly > critical. However, in all your posts I've seen so far, essentially all that > has come across has been very winy and negative assessments of why it is all > no good. You have indicated that you know of things that can be done to > improve matters, but not provided anything of any real substance. Write your > ideas up, put them on a web page and then start asking people for input and > feedback. To make things change, you have to generate some interest and some > motivation. Nobody is going to be as motivated to address the limitations you > face as much as you are. If your not able to get motivated enough to change > the situation, it is very unlikely anyone else will. > > This will probably come across as harsh, I don't mean it to be, but believe it > needs to be said. Much of what you have written is true and it is obvious that > you are frustrated. I'd even go so far as to say there is a strong element of > negativity and some underlying anger in what you have written. There > is also an element of 'hopelessness'. Parts of it even come across a little > bitter and can sound like being resigned to be a victim. I know this feeling > and I know how hard it can be to not let the frustrations, lack of change and > feelings of injustice become all encompassing. 
I sincerely hope this is just a > temprary downswing. Possibly there is just a need to vent a little to reduce > the pressure - I get that and I've been there. An old boss of mine you to say > that on some days, all you can do is hold the line. Thats fine. What we need > to do is recongise when things are like this and acknowledge there are times > we probably just need to let things go. > > At the end of the day, much of what you have written is true and all to > familiar to all of us with a reliance on adaptive technology. It hasn't added > anything new. This is possibly my main issue with what you have posted. From > what you have written, it is apparent you have considerable first hand > knowledge and experience in the VR field. Unfortunately, there is little of > substance that could be used to either move things forward or assist others in > avoiding some of the pitfalls. This is a pity. > > Perhaps the question to ask is how can we change things. What can we do as > individuals to improve the situation. If you have ideas I'd strongly recommend > putting them up on a website and then post to the various lists asking others > to read and provide input/feedback. While this will almost certainly not > result in any great fundamental change, it may just provide the inspiration or > prevent/reduce wasted effort. If we don't want history to be repeated, it > needs to be documented and accessible. Of course, we all also need to > recognise that sometimes our abilities to communicate and motivate also fail, > so try not to be discouraged if initial responses are poor or there appears to > be little interest or acknowledgement. Instead, adapt and try again. The > important things are never easy and we rarely succeed initially. We need to > have confidence and belief in what we are doing and kee pushing forward. > > regards, > > Tim > > -- > Tim Cross > tcross at rapttech.com.au > > There are two types of people in IT - those who do not manage what they > understand and those who do not understand what they manage. > -- > Tim Cross > tcross at rapttech.com.au > > There are two types of people in IT - those who do not manage what they > understand and those who do not understand what they manage. > > -- > Ubuntu-accessibility mailing list > Ubuntu-accessibility at lists.ubuntu.com > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility From saatyan.kfb at gmail.com Sun May 23 10:21:53 2010 From: saatyan.kfb at gmail.com (nalin linux) Date: Sun, 23 May 2010 06:21:53 -0400 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: References: Message-ID: Dear friends,easy-ocr 1.5 is released and we have made it more user friendly by introducing 2 engines, clean output folder, and facility to read unlimited number of pages. now there is no more inconvenience of creating a user in advance of installation. please. go through the read me carefully and give suggestions to improve the package. Read me easyocr 1.5 perfect for ubuntu 10.4. you can download from here http://code.google.com/p/easy-ocr/ now a visually impaired person can read print using freely distributable software. Easy to install. we have provided 3 types of installation mode. gui mode, easy mode, and text mode.. first enter the easyocr package and extract it to your computer and then go to the folder, select mode and enter. in gui mode just tab to run and enter then follow the instructions. 
Easy installation: in this mode, after entering the folder, select "easy installation" and press enter, tab to "run", enter your password and wait. The system will reboot automatically after installation.

Text mode installation: after entering the easyocr folder, select text mode and press enter, then tab to "run in terminal", press enter and follow the instructions.

Steps to be followed (be careful to select a scanner which has full XSane support):

1. After rebooting and connecting the scanner to the computer, press super+x to open XSane.
2. Tab to the directory field /home/username/OCR/1.png, delete the word "username" and type your own username.
3. Change the colour mode to lineart, binary or gray (if none of these options is available, you can proceed with the colour option).
4. Change brightness and resolution as needed; the resolution is usually 300.
5. Changing the rotation: if you are placing the book or letter on the scanner at 90 or 270 degrees, alt+tab to the preview menu and press shift+tab to reach the "000" combo box, then change it to 90 or 270 as needed. A caution for visually impaired users: please select the 90 which comes below 000. Although the other four settings will remain the same, you will have to set the rotation each time you open XSane.
6. Alt+tab back to the scanner window and press control+enter to start scanning. You can go on scanning as many pages as you wish.
7. For converting and reading, press super+f9 and enter the first page number and last page number when the programme asks for them. After the text appears, you can use the reading key to read it. Please note that this is not the page number of the book but the number of the page in the directory /home/username/OCR/1.png.
8. If you are reading the document later, press super+f and go to the output folder to read your text material. You have individual pages and the entire document there.
9. You can clear the output folder by pressing super+delete.
10. There is a facility for converting text into wav format: super+a will do it, and the output will appear on the Desktop.

Special features

Two engines: easyocr 1.5 has two engines. You can select engine1 by pressing super+f1 (windows key+f1) and engine2 with super+f2. Engine1 is good for fast text conversion and picture skipping; engine2 is good for layout analysis. Both engines are almost 99 percent accurate.

No limitation on the number of pages for text conversion. You can go on scanning and convert the text by following these steps:

1. After scanning, press super+f9.
2. easyocr will ask you to enter the number of the beginning page and then the number of the end page; type each number and press enter. Conversion will then start, and it is noteworthy that Orca will announce the number of the page being converted. After conversion the text will appear and you can press the add button to read it.

The output folder is now kept clean. At any time you can go to your text material by pressing super+f, and the output folder will appear. From the folder you can select any page by pressing the number of the page, and you can select your full text by pressing the first page number followed by a dash. You can clean the output folder by pressing super+delete.

Reading letters or checking output quality: super+1 will always read the first page in the directory. After opening XSane by pressing super+x, you can tab to the directory field and change the page number to 1.png, then tab to the "plus 1" combo box, press space and set it to 0; you will then remain at the same page even after scanning.
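For readers who want to see roughly what these keystrokes are doing under the hood, the sketch below is only an illustration of the kind of scan-and-OCR pipeline a package like this automates, written against commonly available tools rather than against easy-ocr itself. It assumes scanimage (from SANE), the tesseract OCR engine and espeak are installed; easy-ocr's actual engines, file layout and hotkeys may differ, and the script name scan-page.sh is made up for the example.

    #!/bin/bash
    # scan-page.sh -- illustrative sketch only, not part of easy-ocr.
    # Assumes: scanimage (SANE), tesseract, espeak. Scanner options such as
    # --resolution vary by backend.

    OUT="$HOME/OCR"          # working directory, mirroring the readme's layout
    mkdir -p "$OUT"
    PAGE=${1:-1}             # page number to scan; defaults to 1

    # 1. Scan one page to a TIFF image at 300 dpi.
    scanimage --format=tiff --resolution 300 > "$OUT/$PAGE.tif"

    # 2. Recognise the text; tesseract writes "$OUT/$PAGE.txt".
    tesseract "$OUT/$PAGE.tif" "$OUT/$PAGE"

    # 3. Print the text (for a screen reader) and save a wav of it on the Desktop.
    cat "$OUT/$PAGE.txt"
    espeak -f "$OUT/$PAGE.txt" -w "$HOME/Desktop/$PAGE.wav"

Running "bash scan-page.sh 2" would then scan, recognise and record page 2 under these assumptions.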
Wav conversion: you can convert the text into wav by pressing super+a. As in the case of text conversion, you enter the page number and the output will be saved on the Desktop.

easyocr is made as user friendly as possible; you can make it more friendly through your suggestions. Please contact the following emails: saatyan.kfb at gmail.com and nalin4linux77 at gmail.com

Copyright (c) 2010 easy ocr development team. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the easyocr team nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

From kenny at hittsjunk.net Sun May 23 10:37:51 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Sun, 23 May 2010 05:37:51 -0500 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: References: Message-ID: <20100523103751.GG14174@blackbox.hittsjunk.net>

Hi. On Sun, May 23, 2010 at 06:21:53AM -0400, nalin linux wrote: > Dear friends,easy-ocr 1.5 is released and we have made it more user > friendly by introducing 2 engines, clean output folder, and facility to > read unlimited number of pages. now there is no more inconvenience of > creating a user in advance of installation. > > in this mode after entering the folder select easy installation and > enter and tab to select run and then enter password and wait . system > will reboot after installation automatically. >

What the fuck! Why are you rebooting a Linux system after a simple install of a user space app?

Kenny

From pstowe at gmail.com Sun May 23 13:16:41 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Sun, 23 May 2010 09:16:41 -0400 Subject: Next Meeting: May 25 at 21:00 UTC Message-ID: Hi, I just wanted to announce that the next meeting will be Tuesday, May 25, at 21:00 UTC. Meetings after this one will be on Wednesdays; however, I'm moving this Wednesday so had to push it up a day. I hope to see as many of you there as possible! Thanks, Penelope

From esj at harvee.org Sun May 23 14:54:49 2010 From: esj at harvee.org (Eric S.
Johansson) Date: Sun, 23 May 2010 10:54:49 -0400 Subject: VEDICS Speech Assistant In-Reply-To: <19448.53266.142791.77211@rapttech.com.au> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> Message-ID: <4BF941B9.5010503@harvee.org> On 5/23/2010 2:49 AM, Tim Cross wrote: > while I can appreciate the frustration you express in your posts, I have to > agree with Kenny on this one. Your points regarding history being repeated etc > mayb e valid. However, you made no reference to any of the points you later > expanded upon in your original post. As Kenny points out, you didn't even > acknowledge what the OP stated as the limitations in their system. I suspect > you didn't even look into it any further than that simple introductory post. > Your response was flippent and negative. Yes, I accept your correction. I was being flippant and negative. I am extremely frustrated by developers who do not ask the users what they need. I don't mean short-term users but, people who have been in a particular field for a long time. In my sarcasm, I did acknowledge the limitations and the type of limitations. Problem being that you had to understand and know the types of systems, the limitations, and the tasks users need to perform in order understand sarcasm. This also points to another problem in the disability community, a lack of knowledge of necessary components. > > The issues you raise are real and complex. They are going to be difficult to > resolve and ther are almost certainly going to be many failures before we have > some success. I suspect you are correct in that many with the technical skills > don't understand the underlying issues well and frustratingly, we are destined > to see the same mistakes being made. I believe this is because the problem is > generally not well understood and as a consequnce, the outcomes are less than > we would hope. However, I also feel that this is part of the process and it > very much mirrors developments in other areas. Frequently, we learn more from > our failures than we do from our successes. A frustrating part of software > development is that, unlike the real sciences, we don't document and publish > our failures. If we did, maybe the forward progress would be better. I spent 18 years as a software developer before my hands went pop. I've since then spent 15+ years as an analyst/designer. The points you made are true for the coder world. There is very little knowledge of what has gone before, or whether professionals or do. But if you spend a few hours a month reading research papers on what people are doing with software development and especially the psychology of software development, you could learn some amazing things. I particularly loved "software practice and experience". I don't know if it's still a good given the current model of no editing, no vetting, publish on the net culture of today but, it's worth a shot. The ACM has some good journals as well. Point being, you don't have to make the same mistakes. You can learn from others and make different mistakes. I'm not seeing different mistakes being made here. I would be much more supportive if I was. > > I also disagree with the view/belief that ignorance of history always means > that the same mistakes are just repeated. Sometimes, ignorance of history > results in fresh new approaches that find a solution. In some cases, awareness > of history can have negative impact as well. 
It tends to constrain/define the > approaches taken. In computing in particular, there have been a number of > great advances made by people who did not come from a computing background, > who were not aware of past history and attempts. In some cases, they did > things that those who were more aware of the past and informed about the > technology had already discounted because of their past experiences or because > of theoretical limitations. In fact, this is a frequent pattern in many areas. > Consider where we would be now if the Wright brothers had just looked at the > past history of our attempts to fly! I know I sound like an ass by continually disagreeing but, the Wright brothers were aware of the history of other flyers. They knew you needed to make a lightweight motor and have a certain amount of lift . They chose Kitty Hawk for the steady laminar flow winds and the landscape. they also lifted a lot of work from Otto Lilienthal http://en.wikipedia.org/wiki/Otto_Lilienthal You'll notice that his work was well documented and repeatable so you could always build a wing that glided consistently. In any case, your point is made and wonderfully highlighted the absolutely unprofessional and unacceptable shortcomings of the software development arena. > > We should be aware of past history and we should try to learn from it. > However, we also need to be balanced and sometimes, we just need to have a go. > We may well fail, thats not the issue. What we need to do is pick ourselves up > again after the failure, learn fromt he experiences and try again. My original background was astronomy and physics. One thing that was apparent once I got into the experimental portion was that you replicate a mistake to understand the mistake and then you go to make a new mistake which you document so somebody else can repeat the cycle and hopefully make progress. as you've pointed out elsewhere, there's insufficient documentation of mistakes and analysis of those mistakes. I've obviously failed in showing how historically, particular approaches have failed. So how do we document failures and get people to read them before they try to do them again? I think this may be a horse and water problem where the water is smarter. > > I also have a very different view to yours regarding OSS. I don't see OSS as > some separate culture or group. OSS is only an ideology and you cannot give up > that ideology for expedience. Doing so means you end up with something else > completely. It is true that adopting such an ideology can make some things > more difficult and it is true that it will impose different limits or > constraints. However, you adopt the ideology because you believe that in the > end, the results will be, on the whole, better. However, I also think its a > bit like religion. Its not for everyone and there are many different forms. > Some people will get great comfort and inspiration from it, othes will not. > For those who find it beneficial, great, for those who don't, great. what I'm about to say is potentially far more offensive than anything else I've said so far but it is necessary, I believe, to get the point across. I apologize for any offense I cause but look deeper at why you're offended and feel free to talk with me about it privately. In a horribly vicious and ugly part of American history, we had institutionalized racism/discrimination in the ideology of "separate but equal". This meant that whites and nonwhite minorities had different facilities and responsibilities within the culture. 
In reality, it was "separate but unequal" in that whites had better facilities or more services available to them. For examples, see the history books on race relations. OSS ideology effectively creates a "separate but equal" computing environment. I am discriminated against in the OSS world because I can't get tools that are ideologically pure (i.e. white) in order for me to use computers. I am denied assistance by those who hold the OSS ideology dear. Solutions that incrementally move my assistance closer to an OSS ideal are also discouraged leaving me in that separate but unequal world. ths OSS world claims to be inclusive but in practice they're not. To be completely frank, the OSS world is like a child of bigoted parents. in this case, the parents are even more bigoted against disabled people and there is a lot significant amount of pressure to isolate, to ghettoize the disabled. For example, I use speech recognition (obviously) and using speech recognition in an open office environment is incredibly disruptive and therefore, it is cheaper to replace "me" with someone who isn't disabled. If I am allowed to work in a company, I'm usually given some broom closet as a "separate but equal" workspace where I'm isolated from the team and not allowed to integrate. again, I encourage you to look at American history of where black people died because they were denied medical care. They were denied transportation, they were forced to live in ghettos because nobody would sell the property in the "good sections of town". The parallels between race relations and disability relations are discouraging and frightening. > The example you give regarding emacs and VR is a limited perspective. Write > now, I'm writing to you using emacspeak, which also uses proprietary software. > While we would not be able to get emacspeak bundled into emacs and while many > hard core OSS developrs would not work on it because of this, it has not > stopped its development and use. that's because your hands work and you've been bootstrapped. See discussion later > Likewise, finding new profitable business > models that are self-sustaining is difficult because you really do need to ... > often, companies are just not in a position to recognise the potentials of > either OSS or supporting their product on other platforms. They may never do > this or they may have a strategic change next week. According to the 10K filed by nuance. In 2009 they spent $120 million on R&D. If the usual proportions apply, that means the total annual expenses for the products run around $250-$300 million per year. I haven't had the chance to sit down and tear apart the 10K completely but I'm guessing that NaturallySpeaking's portion of this is probably in the $20-$40 million range. This is just the cost of maintaining the product in the marketplace. Do you have any examples of single applications that make that much money? Right now, they support Windows, Mac OS 10, and let's pretend they support Linux. This means they make money 80%, 15%, 5% ratio but the development costs are more likely to be 60%, 30%, 30%. If you look at the numbers, you need to raise the price of Linux product so that with 5% of your sales, you make 30% of your revenue. Isn't going to happen. People will stay on Windows because it's so much cheaper. As for making NaturallySpeaking an OSS application, I think the sun will go dark long before that happens. 
They have literally hundreds of millions of dollars in R&D invested in the product and they have responsibility to people who invested in the company that helped leverage multiple buyouts to make money to pay them back. Assuming they'll, for the moment that they have no such debit. How are you going to generate the hundreds of millions of dollars necessary to do that R&D over 20 years? These are the financial realities you need to look at in order to figure out how to make your business stay alive. I'm not seeing any narrow focus OSS businesses of sufficient scale to be able to support a product like NaturallySpeaking. Personally, something like this is so culturally important for disableed users the core should be socialized (workers own the means of production socialized) and licensed to all application developers but that's just me. > As I've mentioned before, in OSS and I believe in the areas of adaptive > technology, we need to scratch our own itch. Often when I say this, the > response comes back that the individual doesn't have the technical skill, the > time or cannot do it because of their disability. I think this is just a total > cop out. There are many ways of helping to scratch your own itch. Even just > getting the issues out ther in front of people is a start. Yes, it might take > me longer to code the program because of my disability, but maybe the result > will be better because of my close association and understanding or simply > because it more precisely scratches my own individual itch. My strength lies > in programming. I would be less successful in other areas, such as convincing > a commercial entity into porting their product to Linux, supporting an OSS > project or raising awareness of the issues amongst others. We all have skills > and ways to contribute. The tricky part is recognising what our skills really > are and how they can be applied. you know, I'm afraid I'm going to have to cry BS on this one. I had the technical chops to write the code. I do the design work or its equivalent everyday that I can work. Touching a keyboard causes significant pain. Imagine touching a keyboard and having someone set your forearm on fire. Not a very big fire but just enough to really let that burn sink into your skin. And then it happens again and again on every keystroke. Writing code is the verbal equivalent of touching the keyboard. So much of code is non-pronounceable. Yes, we can write macros to generate symbol names and painstakingly constructs statements but, you're lucky to write one line of debugged code per day. If you have something like programming focused macros, you might get to 1 1/2 lines of code a day. With something like VR-mode, you might get to 2.5 to 3 lines of code per day. If you are extremely lucky, and you've taken care of your throat, you might be able to do this two or three days a week without damaging your throat. Believe me, you do not want to damage your voice because if you do, just hang it up, get disability, go sit in some disabled complex until your life ends. This is not just my experience but, the experience reported by multiple people in the field. This is why we are searching for something that makes it possible to run at a rate of 5 to 10 lines of code per day both creating and editing. Yes I have a solution that's been vetted. It's not easy but it will work well. 
If you don't believe me, give me a hunk of code and I will tell you what has to be said to make it happen and why that utterance is totally unacceptable from an operational and physical health perspective.

> I disagree with your assessment that you cannot do much because of your > disability. You have mentioned you need someone to code because you cannot due > to your injuries. Yet, you are able to write these messages. If you can write > an email, then why can you not write code? I recognise it may be slow and/or > it may be difficult, but as you have demonstrated the ability to write > reasonably long emails, you could put that effort into writing code as well. > I'm not saying it is easy, but it would be the best way to get what *you* want > - at least better than waiting for someone else to do it for you. Maybe coding > isn't the best way for you to contribute. Maybe it is design, or lobbying, or > testing, or .......

I'm sorry, I'm having a little bit of trouble with this. How can I be disabled, extremely knowledgeable about speech recognition, and write long e-mail messages like this? It's because I'm using speech recognition! Haven't you noticed the errors? The missing words, the wrong words, the wrong verb tenses? It's not because I'm illiterate; it's because I'm a lousy editor when I'm writing on the fly. Speech recognition is great for writing natural language. Your brain has to go through a process of training itself how to speak written speech but, once you've done that, you can write and think in very different ways.

> You mentioned that you have lots of good ideas and indicate you even know how > to solve some of the issues, but need someone to help you code. Maybe this > would be easier to do if you document, plan and design what you want done. > Maybe someone looking for a project will see it and think your ideas are > interesting. Maybe others will have some suggestions and improvements to make > or maybe someone out there is already working on similar ideas. The point is, > get it down and out of your head and then in front of people and you are > likely to get more real progress than is currently occurring.

That's a good point. One of the problems I have is that graphical tools make me hurt in a very nasty way that is different from normal mouse use. Again, I can sketch things out with paper and pencil, but I need someone to translate them. To illustrate some of these design concepts, animation is useful. Again, not really possible with speech recognition. I've written up some of these ideas and presented them to Nuance. The developers really liked them; the management was noncommittal, probably because I've just done an end run around some of their UI concepts and IP (heh).

> For example, if you had a clearly defined project, maybe it would be possible > to find participants to work on it as part of the next Google Summer of Code? > Maybe someone will pick it up as part of a research or teaching project or > maybe you will write it up in such a way that it inspires someone to > contribute, support or fund it.
>
> I am quite sure there will be other reasons you can point out as to why this still > won't work and maybe many of them are valid. I don't know the precise > circumstances you find yourself in and I'm not trying to be 'nasty' or overly > critical. However, in all your posts I've seen so far, essentially all that > has come across has been very whiny and negative assessments of why it is all > no good.
You have indicated that you know of things that can be done to > improve matters, but not provided anything of any real substance. Write your > ideas up, put them on a web page and then start asking people for input and > feedback. To make things change, you have to generate some interest and some > motivation. Nobody is going to be as motivated to address the limitations you > face as much as you are. If your not able to get motivated enough to change > the situation, it is very unlikely anyone else will. you just haven't seen my writing in other arenas over the past few years. I work best when I have a collaborative team. A team that will challenge my ideas so they can make them better, an editor to catch my language problems, and a graphics person to put pictures, illustrating the concepts. Haven't been able to pay for team like that for quite a few years but that's what I know I work well with. Again, your comment about "if you're not able to get motivated" speaks to a certain lack of understanding. If your hands don't work, you have a hard time working and making money. Sometimes you even have a hard time feeding yourself. how can you write the code necessary (tens of thousands of lines) to make your environment better? How does that physically work? Do you see the disconnect? do you see the need for a bootstrap process that isn't happening? If you're using text-to-speech, I bet you'd make as much progress as I am if I sat in front of a dumb terminal with no audio output. Try writing code without seeing the screen, without getting any feedback on how the compiler operates and see how well you bootstrap yourself. But in your defense, let me point out that text-to-speech is trivially easy for bootstrapping than anything in speech recognition world. How do I know this? You have been bootstrapped and we have not. RSI is generating some 40,000 to 80,000 disabled software developers per year in the US alone and speech recognition for software development is in the same state it was 10 years ago. The fact it hasn't changed means it's a hard problem, that people keep repeating the same mistakes looking for simple solutions because it is a hard problem. I also think there is more the little bit of the "separate but equal" thing going on as well. Maybe after you've been around for a while and see people in your chosen field making the same mistake over and over again, you'll understand the crankiness. Then imagine living in a world where people tell you can't possibly be smart because you can no longer do a mechanical operation. It's not unlike the experience many women have when they get pregnant. The company/boss treats them as if half their IQ adjustable out of their ears solely because of the pregnancy. I've seen time and time again how good high-quality developers are treated as idiots just because their hands do not work and they cannot write code. Amazing how smarts are tied to a low-level physical ability. > > This will probably come across as harsh, I don't mean it to be, but believe it > needs to be said. Much of what you have written is true and it is obvious that > you are frustrated. I'd even go so far as to say there is a strong element of > negativity and some underlying anger in what you have written. There > is also an element of 'hopelessness'. Parts of it even come across a little > bitter and can sound like being resigned to be a victim. 
I know this feeling > and I know how hard it can be to not let the frustrations, lack of change and > feelings of injustice become all encompassing. I sincerely hope this is just a > temporary downswing. Possibly there is just a need to vent a little to reduce > the pressure - I get that and I've been there. An old boss of mine used to say > that on some days, all you can do is hold the line. That's fine. What we need > to do is recognise when things are like this and acknowledge there are times > we probably just need to let things go.

There is some truth in that. I have an idiot doctor who doesn't quite get chronic pain. She put me on statins, which are known to cause upper extremity pain, and after one dose I have been left with the worst physical pain I've had for years and it's not going away. She sent me to a specialist who, if I'm lucky, I will see by the beginning of August. In the meantime, I get to sleep about 4-5 hours a night before the pain wakes me up, and I'm still trying to work, to make money and survive.

Yes, I get angry. I get so *fucking* angry because speech recognition users get little or no love in the disability accommodation world. There's all this attention for blind users and extreme physical disability, but for disabled programmers, people who used to be their peers, who are now totally screwed, their careers shot in the head, the tab geeks go around with their fingers in their ears going "La la la la" as if that will protect them from being injured. I don't get it. I really don't get it. Whenever I have money, I donate something to charity. If I don't have money, I give some time or collect material for the thrift shop. I cannot imagine living without helping somebody else. How can developers turn their backs on their peers in need? I just don't get it. How can they call themselves human beings when they do that? I think that's why rejection from the ideologically pure of the OSS world hurts so much. It's supposed to be about helping people make a better life through freedom of choice. But their actions show that to be a lie. They'll help you make a better life only if you can use ideologically pure choices. That is just wrong.

> At the end of the day, much of what you have written is true and all too > familiar to all of us with a reliance on adaptive technology. It hasn't added > anything new. This is possibly my main issue with what you have posted. From > what you have written, it is apparent you have considerable first-hand > knowledge and experience in the VR field. Unfortunately, there is little of > substance that could be used to either move things forward or assist others in > avoiding some of the pitfalls. This is a pity.

Yes, the conversation did go a way I didn't want it to go, but I wasn't sure how to present ideas without just pushing them through. I am going to try and write a plaintext description of what I think is appropriate, but it's damned hard without pictures and I just can't do pictures.

> Perhaps the question to ask is how we can change things. What can we do as > individuals to improve the situation? If you have ideas I'd strongly recommend > putting them up on a website and then post to the various lists asking others > to read and provide input/feedback. While this will almost certainly not > result in any great fundamental change, it may just provide the inspiration or > prevent/reduce wasted effort. If we don't want history to be repeated, it > needs to be documented and accessible.
Of course, we all also need to > recognise that sometimes our abilities to communicate and motivate also fail, > so try not to be discouraged if initial responses are poor or there appears to > be little interest or acknowledgement. Instead, adapt and try again. The > important things are never easy and we rarely succeed initially. We need to > have confidence and belief in what we are doing and kee pushing forward. Oh, I've often wondered if a webpage is published on the Internet, will anyone notice? that's a semiserious point. If I put the effort into documenting, will it do any good? Or will it just collect dust and get ignored as people go off and reinvent the same miserable wrong path? I know what I want to do is a right way, I don't know if it's right enough. The only way to tell is through experimentation and that means I need a coding buddy to work with me. From esj at harvee.org Sun May 23 14:56:53 2010 From: esj at harvee.org (Eric S. Johansson) Date: Sun, 23 May 2010 10:56:53 -0400 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100523103751.GG14174@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> Message-ID: <4BF94235.1050809@harvee.org> On 5/23/2010 6:37 AM, Kenny Hitt wrote: >> > What the fuck! Why are you rebooting a Linux system after a simpl istall of a > user space app? I wouldn't be surprised if there was some form of kernel module that needed boot time initialization or appropriate sequencing in order to get it to work. Of course, I'd be wondering about the need for kernel module but that's another problem. :-) From kenny at hittsjunk.net Sun May 23 15:26:03 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Sun, 23 May 2010 10:26:03 -0500 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <4BF94235.1050809@harvee.org> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> Message-ID: <20100523152603.GI14174@blackbox.hittsjunk.net> Hi. On Sun, May 23, 2010 at 10:56:53AM -0400, Eric S. Johansson wrote: > On 5/23/2010 6:37 AM, Kenny Hitt wrote: > > >> > > What the fuck! Why are you rebooting a Linux system after a simpl istall of a > > user space app? > > I wouldn't be surprised if there was some form of kernel module that needed boot > time initialization or appropriate sequencing in order to get it to work. Of > course, I'd be wondering about the need for kernel module but that's another > problem. :-) > > -- There isn't a kernel module in this case since they are using sane. I regularly build and install kernel modules without needing to reboot. Maybe these notes were for Windows? That is the only explanation I can come up with to explain this. Fortunately for me, I don't need this app since I already have a functional ocr solution . in Linux. My solution involves a few shell commands. It seems much simpler than this app in any case. Kenny From esj at harvee.org Sun May 23 16:16:12 2010 From: esj at harvee.org (Eric S. Johansson) Date: Sun, 23 May 2010 12:16:12 -0400 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100523152603.GI14174@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> Message-ID: <4BF954CC.90303@harvee.org> On 5/23/2010 11:26 AM, Kenny Hitt wrote: > There isn't a kernel module in this case since they are using sane. > I regularly build and install kernel modules without needing to reboot. 
> Maybe these notes were for Windows? That is the only explanation I can > come up with to explain this.

I went and read the documentation, which reveals that it is a Linux solution. I have observed that scanner interfaces are fragile at best, and I'm not surprised they want to reboot with the device turned on.

> Fortunately for me, I don't need this app since I already have a functional ocr solution . > in Linux. > My solution involves a few shell commands. It seems much simpler than this app in any case.

From reading the documentation, this application looks very simple and it is aimed at visually impaired users. If you can use a keyboard, this shouldn't be a problem.

As for a few shell commands, that's reasonably inaccessible, especially from speech recognition. Shell commands fail accessibility for a couple of reasons. First, discoverability. You have to know that a command exists in order to find out what it does, unless you happen to remember it. I think I know of about 30 commands in the shell environment, and I need to look at the man pages for 28 of them if I do anything more than the basics. Yet there are hundreds of shell commands that will probably do what I need, except I don't know they exist and I don't know what they do.

The second way they fail is presentation. The name of the command, how it's invoked, etc. is not accessible either to speech recognition or text-to-speech. The last one, text-to-speech, may do a more credible job at presenting garbled text (command names, command-line arguments, etc.) than speech recognition will when generating the same.

You are correct, however, that once you have a CLI idiom memorized it does become easier to use, because you associate a concept with a more complicated structure and then just use the concept as shorthand for that structure.

--- eric
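For anyone following the discoverability point above: the shell does ship a few keyword-based lookup tools, and the lines below are only a quick illustration of those standard commands as typed at an interactive bash prompt. They address "which command exists", not the speech-recognition usability problem Eric is describing.

    # Search the man-page index by keyword rather than by command name.
    apropos scanner            # equivalent to: man -k scanner

    # List every command name the current shell could run.
    compgen -c | sort -u | less

    # Remind yourself how something is used once you know its name.
    help cd                    # bash built-ins
    tar --help | head          # most external commands accept --help
    type -a tar                # is it a built-in, an alias or a file on disk?

Whether any of this is pleasant to drive by voice is, of course, exactly the question the thread is arguing about.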
From kenny at hittsjunk.net Sun May 23 16:40:15 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Sun, 23 May 2010 11:40:15 -0500 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <4BF954CC.90303@harvee.org> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> Message-ID: <20100523164015.GJ14174@blackbox.hittsjunk.net>

On Sun, May 23, 2010 at 12:16:12PM -0400, Eric S. Johansson wrote:
> On 5/23/2010 11:26 AM, Kenny Hitt wrote: > > > There isn't a kernel module in this case since they are using sane. > > I regularly build and install kernel modules without needing to reboot. > > Maybe these notes were for Windows? That is the only explanation I can > > come up with to explain this.
> I went and read the documentation, which reveals that it is a Linux solution. I have observed that > scanner interfaces are fragile at best, and I'm not surprised they want to > reboot with the device turned on.

I just switched scanners yesterday with no need to reboot. That idea about scanners doesn't match my experience in Linux.

> > Fortunately for me, I don't need this app since I already have a functional ocr solution . > > in Linux. > > My solution involves a few shell commands. It seems much simpler than this app in any case.
> From reading the documentation, this application looks very simple and it is > aimed at visually impaired users. If you can use a keyboard, this shouldn't be > a problem.

Since I'm totally blind, that means I'm likely supposed to be one of the users of this product. Since I have years of Linux experience, I don't have much confidence in any app that tells me I need to reboot after installing a user space app.

> As for a few shell commands, that's reasonably inaccessible, especially from > speech recognition. Shell commands fail accessibility for a couple of reasons. > First, discoverability. You have to know that a command exists in order to find > out what it does, unless you happen to remember it. I think I know of about 30 > commands in the shell environment, and I need to look at the man pages for 28 of > them if I do anything more than the basics. Yet there are hundreds of shell > commands that will probably do what I need, except I don't know they exist and I > don't know what they do.

I find I'm still faster and more productive in the text console at a bash prompt than I've ever been in a GUI like Gnome. Gnome has never been stable or reliable enough for me to stick with it for more than a few months at a time. I had 4 years of Windows experience and was one of the early adopters of Gnome accessibility, but Gnome hasn't lived up to its marketing.

> The second way they fail is presentation. The name of the command, how it's > invoked, etc. is not accessible either to speech recognition or > text-to-speech. The last one, text-to-speech, may do a more credible job at > presenting garbled text (command names, command-line arguments, etc.) than speech > recognition will when generating the same.

I don't follow this one. "help $command" works for me with a screen reader any time I need a reminder of a built-in command, and "$command --help" works when I need a reminder for an external command.

> You are correct, however, that once you have a CLI idiom memorized it does become > easier to use, because you associate a concept with a more complicated structure > and then just use the concept as shorthand for that structure.

Makes sense. Bash programming is very much like C. I've spent long enough in Linux that I think in Linux terms instead of DOS or Windows. Even my GUI concepts are Gnome-like instead of Windows-like nowadays.

Kenny

From phillw at phillw.net Sun May 23 21:38:01 2010 From: phillw at phillw.net (Phillip Whiteside) Date: Sun, 23 May 2010 22:38:01 +0100 Subject: Life Message-ID:

Hi, I joined this mailing list via the ubuntu forum area. I write web sites and was interested in how much more difficult it is to write the code so that it complies with whatever standard is the standard of the day (the wonderful thing about standards is that everyone can make their own). A bit of my background may be in order. When I was 20 years old I had written a programme that could do what Stephen Hawking still uses, on an 8-bit computer (an Atari 640 XL with an additional memory board soldered in). There was no interest in me taking that forward at about 1/100th of the cost of what was being sold commercially by any of the charities.

My heart drops when the longest emails are about 'failed' projects and people's assertions that future projects are doomed to failure. As a non-disabled person, can I please ask that the bickering over who / what / where / when is at fault stop?

As has been pointed out on this thread, there are young programmers coming on-line. This next bit of news may come as a shock to some of you, but they do not actually care about a disability - it is such a 'non-event' to them - they focus on the person. If that person is a happy person, they see happy.
If that person is somewhat frustrated but articulate and realises that an able-bodied person can never fully understand how it is to be so, then progress can be made. If their first contact is for a major doom and gloom assessment of how they will fail like everyone else has done, it is hardly going to keep them around for long? VEDICS is nice, they have a little funding and would possibly be interested in taking it forward, but they are certainly not going to 1) be interested in so doing with such negativity 2) be able to get any funding based on 'testimonial' emails. This is another young programmer who has written stuff http://bloc.eurion.net/archives/2010/espeak-gui-0-2/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From esj at harvee.org Sun May 23 23:23:51 2010 From: esj at harvee.org (Eric S. Johansson) Date: Sun, 23 May 2010 19:23:51 -0400 Subject: VEDICS Speech Assistant In-Reply-To: References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> <4BF941B9.5010503@harvee.org> Message-ID: <4BF9B907.8010703@harvee.org> On 5/23/2010 6:02 PM, Phillip Whiteside wrote: > > > As has been pointed out on this thread, there are young programmers > coming on-line. This next bit of news may come of a shock to some of > you, but they do not actually care about a disability - it is such a > 'non-event' to them - They focus on the person, if that person is a > happy person they see happy. If that person is somewhat frustrated but > articulate and realises that an able-bodied person can never fully understand > how it is to be so, then progress can be made. If their first contact is > for a major doom and gloom assessment of how they will fail like everyone > else has done, it is hardly going to keep them around for long? When I started, I learned two very important lessons in the first five years. 1) don't fuck up 2) don't be afraid to fail If I hadn't internalized those axioms, I wouldn't have learned as much, done as much, or acquired the respect of people I've worked with over the years. Screwing up, in the beginning of your career, is usually nonfatal unless it is some career-limiting move like sleeping with your boss's partner. But it's also a requirement for learning new stuff. If you work with someone 20 or even 30 years your senior, you will undoubtedly hear tales about "what I did"; you'll learn about what went wrong and how not to do it again. If you get a different result, you have a very cool conversation analyzing what's going on. In any case, you learn. When you get tired of getting hit for screwing up, you start being more cautious and learn more about why you do things and when. Then you get to start paying attention to the second axiom for a variety of reasons. The change happens because you've learned a lot more about the practical world and how to take chances so they are seen as successful failures instead of screwing up. At that point, you move up in your career. The important thing is that you've also learned how to learn from people with more experience. How to make better judgments in terms of what projects should be tackled and when to pay attention to that little tickle in the back of your skull that says "there's something here". > If these younger programmers (and us older ones) only hear of inward > bickering they will just shrug their shoulders say "well, I did have a > look into it" and walk away.
but the existence of the bickering says there's something wrong. Not that people can get together but there's a fundamental disagreement on the usability/suitability/approach. Which, is not a bad description of tthe conversation we've been having. > Maybe it is time some of the minority heard some shocking news. No one > does accessibility with the hope of being the next Bill Gates. I would > wager a bet that the majority of younger programmers that are interested > in the subject is for personal reasons. They may well not wish to even > say why they are interested (let's be honest, it's still not a 'cool' > thing for teenagers to be doing amongst their peers). yes. I agree with this wholeheartedly. > > I do not do coding of things like the kernel, easy-speak, etc. etc. My > interests are in trying to herd cats, that is get the Web browser side > agreed on a standard. My own views on these matters can be found here > http://forum.phillw.net/viewforum.php?f=14 I do keep the ubuntu forum up > to date with what ever news I get. thank you.I took a look and, I'm really sorry to be critical but you only dealt with the easy stuff. We need to come up with some form of standard for working with speech recognition that is testable. I cannot say the number of times I've tried to use JavaScript enabled editors etc. text region and had a misrecognition throw me into some random page on the site and my content is God knows where. That shouldn't happen. Ever. > > I also agree that the Google Summer of Codes are a wonderful thing to be > able to put up a project that is sufficiently challenging but be > reasonably achieved, some of these young projects would flourish if > there were mentors who would help them. I was listening when the other person suggested. Might not be a bad idea but I believe the ideas are sufficiently radical in contrast to the usual accessibility thoughts that they may have a hard time getting traction without someone demonstrating a prototype first. > > So, learn from history, yes, - but these 'kids' think out of the box, > They will sure surprise you. I've known lots of people to think out-of-the-box. It's not as rare a talent as you might think. My reputation among business associates is that I accurately identify technology trends about 2 to 5 years ahead of time. I have peers in their 50s who are far more creative and open to new aspects of doing things than "hotshot" developers in their 20s typically are. you see, arrogance comes from experience. in the beginning, you don't know how little you know, you can feed your ego on a small amount of success to become quite arrogant. If you've been beaten up and acquired lots of scar tissue, you can be arrogant because you have a reasonable amount of experience telling you what you know, what you don't know, and you know you don't know a lot. :-) From esj at harvee.org Sun May 23 23:39:33 2010 From: esj at harvee.org (Eric S. Johansson) Date: Sun, 23 May 2010 19:39:33 -0400 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100523164015.GJ14174@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> Message-ID: <4BF9BCB5.2040400@harvee.org> On 5/23/2010 12:40 PM, Kenny Hitt wrote: > On Sun, May 23, 2010 at 12:16:12PM -0400, Eric S. 
Johansson wrote: >> On 5/23/2010 11:26 AM, Kenny Hitt wrote: >> >>> There isn't a kernel module in this case since they are using sane. I >>> regularly build and install kernel modules without needing to reboot. >>> Maybe these notes were for Windows? That is the only explanation I can >>> come up with to explain this. >> >> I went and read which reveals that is a Linux solution. I have observed >> that scanner interfaces are, fragile at best, and I'm not surprised they >> want to reboot with the device turned on. >> > I just switched scanners yesterday with no need to reboot. That idea about > scanners doesn't match with my experience in Linux. fair enough. I don't use scanners except for one and that's under Windows because I haven't had time to set up on my wife's machine. (Yes, her Facebook workstation is the house linux box unless you count the mini ITX system running virtual machines for my firewall and internal print services. Yes, let's not count that :-) > Since I'm totally blind, that means I'm likely supposed to be one of the > users of this product. Since I have years of Linux experience, I don't have > much confidence in any app that tells me I need to reboot after installing a > user space app. really good point. And I'm glad to hear you talk about your experiences. We need more user stories to help extract a better than the current model for accessibility. This is really great. > I find I'm still faster and more productive in the text console at a bash > prompt than I've ever been in a GUI like Gnome. Gnome has never been stable > or reliable enough for me to stick with it for more than a few months at a > time. I had 4 years of Windows experience and was one of the early adopters > of Gnome accessibility, but Gnome hasn't lived up to it's marketing. right. That makes sense. What I'm hearing from your experience is that you build a mental model of all the commands, you can type them in and get feedback through text-to-speech or a braille output device to confirm that you entered the right data. The unpronounceable nature of the commandline doesn't bother you??? Is that right? I think the big problem with putting accessibility features for blind users on a GUI is that you try to map a two-dimensional shallow but wide user interface into an aural format. similar problem to what we deal with speech recognition. > >> The second way they fail is presentation. The name of the command, how >> it's invoked etc. it is not accessible either to speech recognition or >> text-to-speech. The last one, text-to-speech, may do a more credible job >> at presenting garbled text (command names, commandline arguments etc.) than >> speech recognition will when generating the same. >> > I don't follow this one. help $command works for me with a screen reader any > time I need a reminder of a built in command $command --help works when I > need a reminder for an external command. Okay. I was channeling from too deep inside my head on the theory behind accessibility. Sorry about that cp -al [UcWd]* . How do you pronounce that? In simplest form, its Charlie papa space minus sign space left bracket cap uniform charlie cap whiskey delta close bracket no space asterisk space dot ugly as hell and rife with potential for speech recognition errors which makes it even harder to speak! 
If I was to make a little smarter using some macro capability it might be something like: Copy with links source pattern cap uniform charlie cap whiskey delta close with wildcard destination there (memorized target location) little more verbose but, far more resilient against speech recognition errors. It's also form one could translate command into for a text-to-speech user. The downside with this model is that you need to create special macros for every stupid command and work out the appropriate argument handling grammar. fortunately, I think there's a better way I like to see other ideas if people have them. From themuso at ubuntu.com Mon May 24 00:20:04 2010 From: themuso at ubuntu.com (Luke Yelavich) Date: Mon, 24 May 2010 10:20:04 +1000 Subject: When I booting Lucid final live CD, setting accessibility mode and press a Tab key and choose the install Ubuntu old menu item, Orca is not talking when installer is present In-Reply-To: <4BF62087.6010802@pickup.hu> References: <4BF62087.6010802@pickup.hu> Message-ID: <20100524002004.GB2551@strigy.yelavich.home> On Fri, May 21, 2010 at 03:56:23PM EST, Hammer Attila wrote: > Hy List, > > When I tryed only install ubuntu with following way, Orca is not talking > the installer: > 1. I press tab key when need to get back old menu style. > 2. I choosed the language, and setted the accessibility mode with screen > reader mode. > 3. I choosed the install ubuntu menu item and press Enter key. > When the installer is present the display, Orca is not speak. If I use > the try ubuntu menu item, Orca is wonderful talking and possible install > the system fine with the desktop icon launch. This is a known bug, which I spent quite a while trying to fix, with no luck. There are going to be a few changes in maverick, so maybe as a result of these changes, things will be better for accessibility with the install option, however I won't know till development is well under way. Since I don't know exactly what is causing this bug, I don't know what package it should be reported against. Luke From phillw at phillw.net Mon May 24 01:08:11 2010 From: phillw at phillw.net (Phillip Whiteside) Date: Mon, 24 May 2010 02:08:11 +0100 Subject: VEDICS Speech Assistant In-Reply-To: <4BF9B907.8010703@harvee.org> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> <4BF941B9.5010503@harvee.org> <4BF9B907.8010703@harvee.org> Message-ID: On Mon, May 24, 2010 at 12:23 AM, Eric S. Johansson wrote: > On 5/23/2010 6:02 PM, Phillip Whiteside wrote: > >> >> >> I do not do coding of things like the kernel, easy-speak, etc. etc. My >> interests are in trying to herd cats, that is get the Web browser side >> agreed on a standard. My own views on these matters can be found here >> http://forum.phillw.net/viewforum.php?f=14 I do keep the ubuntu forum up >> to date with what ever news I get. >> > > thank you.I took a look and, I'm really sorry to be critical but you only > dealt with the easy stuff. We need to come up with some form of standard for > working with speech recognition that is testable. I cannot say the number of > times I've tried to use JavaScript enabled editors etc. text region and had > a misrecognition throw me into some random page on the site and my content > is God knows where. That shouldn't happen. Ever. > > And ... guess why I have only dealt with the easy stuff? 
There were when I >> wrote that tabling system two entirely different and mutually exclusive >> 'standards'. > > don't complain to me that I only spend so much time on the matter - get onto the likes of http://www.w3.org/WAI/ it IS about time those with disablilties TOLD these people to stop bickering ...... except that you are all still, erm ..... bickering. Let me repeat what I intimated in my posts on that link, until standards are decided upon, people will not put in the effort to comply with them. I asked on the forum for someone to check and see if my coding was correct - I had exactly zero replies back. How do you expect me to push forward people to include the minor code changes as they are learning when none of "you" are even prepared to see if it is correct? So, I shrug my shoulders and say "well, at least I tried". It is not my loss that you have gotten yet another person do that, it is your loss as a group. Sadly, Phill. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kenny at hittsjunk.net Mon May 24 06:18:37 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Mon, 24 May 2010 01:18:37 -0500 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <4BF9BCB5.2040400@harvee.org> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> Message-ID: <20100524061837.GA2195@blackbox.hittsjunk.net> Hi. On Sun, May 23, 2010 at 07:39:33PM -0400, Eric S. Johansson wrote: > On 5/23/2010 12:40 PM, Kenny Hitt wrote: > > On Sun, May 23, 2010 at 12:16:12PM -0400, Eric S. Johansson wrote: > >> On 5/23/2010 11:26 AM, Kenny Hitt wrote: > >> > >>> There isn't a kernel module in this case since they are using sane. I > >>> regularly build and install kernel modules without needing to reboot. > >>> Maybe these notes were for Windows? That is the only explanation I can > >>> come up with to explain this. > >> > >> I went and read which reveals that is a Linux solution. I have observed > >> that scanner interfaces are, fragile at best, and I'm not surprised they > >> want to reboot with the device turned on. > >> > > I just switched scanners yesterday with no need to reboot. That idea about > > scanners doesn't match with my experience in Linux. > > fair enough. I don't use scanners except for one and that's under Windows > because I haven't had time to set up on my wife's machine. (Yes, her Facebook > workstation is the house linux box unless you count the mini ITX system running > virtual machines for my firewall and internal print services. Yes, let's not > count that :-) > Setting up a scanner in Linux is easy. 1. plug the scanner into the computer and the power outlet. 2. install the sane package if it isn't already installed. 3. decide on what app you want to use for ocr and install if needed. 4. start using it. Note none of these steps require a reboot of the system. > > Since I'm totally blind, that means I'm likely supposed to be one of the > > users of this product. Since I have years of Linux experience, I don't have > > much confidence in any app that tells me I need to reboot after installing a > > user space app. > > really good point. And I'm glad to hear you talk about your experiences. We need > more user stories to help extract a better than the current model for > accessibility. This is really great. 
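To make the "few shell commands" concrete, something like the small helper below is all there is to it. This is only a sketch, not the exact script I use: it assumes the sane-utils and tesseract-ocr packages are installed and that the default SANE backend finds the scanner; the resolution and options will vary by device.

    #!/usr/bin/env python
    # Rough sketch of a scan-and-read helper: scan one page with SANE's
    # scanimage, OCR it with tesseract, and print the recognised text so a
    # screen reader can speak it.
    import subprocess

    def scan_page(image="page.tif", resolution="300"):
        # scanimage writes the page to stdout; capture it into a TIFF file
        with open(image, "wb") as out:
            subprocess.check_call(
                ["scanimage", "--format=tiff", "--resolution", resolution],
                stdout=out)
        return image

    def ocr_page(image, base="page"):
        # "tesseract <image> <base>" writes the recognised text to <base>.txt
        subprocess.check_call(["tesseract", image, base])
        with open(base + ".txt") as text:
            return text.read()

    if __name__ == "__main__":
        print(ocr_page(scan_page()))

None of this needs a reboot either; it is just two user space programs glued together.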
> > > I find I'm still faster and more productive in the text console at a bash > > prompt than I've ever been in a GUI like Gnome. Gnome has never been stable > > or reliable enough for me to stick with it for more than a few months at a > > time. I had 4 years of Windows experience and was one of the early adopters > > of Gnome accessibility, but Gnome hasn't lived up to it's marketing. > > right. That makes sense. What I'm hearing from your experience is that you build > a mental model of all the commands, you can type them in and get feedback > through text-to-speech or a braille output device to confirm that you entered > the right data. The unpronounceable nature of the commandline doesn't bother > you??? Is that right? > actually, I have a visual picture of a GUI desktop. In Gnome, that isn't as important as it was for access to Windows. In Windows, I had to use the screen reader's mouse control to find objects. The same concept isn't needed for Gnome. In the console, there is no need to picture anything. Think of command line as a conversation. A screen reader will try to pronounce anything sent to the synth, so everything will have some word or phrase associated with it. It might not sound like English, but it will be consistant as long as you use the same screen reader and synth. > I think the big problem with putting accessibility features for blind users on a > GUI is that you try to map a two-dimensional shallow but wide user interface > into an aural format. similar problem to what we deal with speech recognition. > > Actually, that isn't my problem with Gnome. My problem is lack of stability and slow response. My time in Gnome usually ends when Orca crashes and nothing I try can get it to restart. At that point, anything in the Gnome session is lost. My only option is to kill the Xserver and clean up the resulting mess in a console. If Orca were a C program, I would just attach to the process with gdb in a console and wait for the crash. Then I would have a good backtrace to attach to a bug report. I don't know how to do the same with a Python app. The debug methods I know about for Orca create large files. Since the crash can take a while to happen, letting Orca write large amounts of data to a file isn't an option. > >> The second way they fail is presentation. The name of the command, how > >> it's invoked etc. it is not accessible either to speech recognition or > >> text-to-speech. The last one, text-to-speech, may do a more credible job > >> at presenting garbled text (command names, commandline arguments etc.) than > >> speech recognition will when generating the same. > >> > > I don't follow this one. help $command works for me with a screen reader any > > time I need a reminder of a built in command $command --help works when I > > need a reminder for an external command. > > Okay. I was channeling from too deep inside my head on the theory behind > accessibility. Sorry about that > > cp -al [UcWd]* . > > How do you pronounce that? In simplest form, its > The answer will depend on screen reader and synth. With my current default punctuation setting in speakup and espeak, it sounds like: c p al up w d Notice I only hear the punctuation and case of the command if I review the screen. This behavior is my default by choice. I could set punctuation to a higher value and hear more, but I only do that when reading code and not mail. In this case, espeak doesn't get the command right since I heard u p w d instead of the actual u c w d. 
When I reviewed the screen, I saw the correct command. > Charlie papa space minus sign space left bracket cap uniform charlie cap whiskey > delta close bracket no space asterisk space dot > > ugly as hell and rife with potential for speech recognition errors which makes > it even harder to speak! > According to the posts I've seen about VEDICS Speech Assistant this problem will be less with their solution. It can recognize commands that aren't English. > If I was to make a little smarter using some macro capability it might be > something like: > > Copy with links source pattern cap uniform charlie cap whiskey delta > close with wildcard > destination there (memorized target location) > > little more verbose but, far more resilient against speech recognition errors. > It's also form one could translate command into for a text-to-speech user. The > downside with this model is that you need to create special macros for every > stupid command and work out the appropriate argument handling grammar. > Actually, your example would be way too verbose for text-to-speech. I would prefer something like c p dash a l bracket u c w d right bracket star dot The u w d would be at a higher pitch to show they are caps, while the rest would be at normal pitch to show they are lower case. Since I already know what star and dot mean in Unix, no need to say that. Kenny From kenny at hittsjunk.net Mon May 24 07:18:58 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Mon, 24 May 2010 02:18:58 -0500 Subject: disabilities In-Reply-To: References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> <4BF941B9.5010503@harvee.org> <4BF9B907.8010703@harvee.org> Message-ID: <20100524071858.GC2195@blackbox.hittsjunk.net> Hi. On Mon, May 24, 2010 at 02:08:11AM +0100, Phillip Whiteside wrote: > don't complain to me that I only spend so much time on the matter - get onto > the likes of http://www.w3.org/WAI/ it IS about time those with disablilties > TOLD these people to stop bickering ...... except that you are all still, > erm ..... bickering. > I can't speak for others, but I have complained with the same result as you. Please don't make the mistake of deciding all disabled people are part of some big group. Even blind Linux users aren't part of the same group. I see at least 2 different groups: people who run Windows and play around with Linux, and people like me who run Linux full time. Our priorities are different. I want access to the web, while the users who just play in Linux want it to behave like Windows. In my case I can't afford Windows, so I do my best to get by with Linux. I do believe open source is better, so I won't switch back to a model that forces me to constantly pay money I don't have to companies who only want to make as much money as possible just to keep access to the computer. I'm not against commercial programs or companies making money. I own several Cepstral voices, but I'm against the price gouging you have in Windows access. > Let me repeat what I intimated in my posts on that link, until standards are > decided upon, people will not put in the effort to comply with them. > I agree. While these people are sitting on their ass, I lose access to more and more web sites each day. I had to switch from elinks to Firefox last week because kgoradio changed their site. All I wanted to do was download an mp3 file.
Apparently, the download area didn't look good enough with the old page, so they updated it to something that won't work with elinks. I'm not suggesting all sites consider elinks as a standard, but for simple things like downloading a file or filling out a simple form, the browser shouldn't make any difference. What makes this worse is the Mozilla project puts their resources into Windows while I run Linux. They made a change to Firefox a few years ago that really made sites less usable. Dialogs no longer get focus in Firefox. This forces you to tab around until you find them. Since the existence of a dialog isn't always obvious, you can visit sites and not be able to use them because you don't know what's actually happening. This problem was brought up with the Mozilla developers with no solution. They just ignored the problem and left it to the Orca developers to try to figure out a solution. So far, no success. > I asked on the forum for someone to check and see if my coding was correct - > I had exactly zero replies back. How do you expect me to push forward > people to include the minor code changes as they are learning when none of > "you" are even prepared to see if it is correct? > I don't know for sure, but there are likely very few disabled people on the standards committee. There is likely a token member, but the real power is with sighted people who consider this as just some cool project and don't really get that their delay causes real problems for the disabled. > So, I shrug my shoulders and say "well, at least I tried". > > It is not my loss that you have gotten yet another person do that, it is > your loss as a group. > Actually, it is my loss since I don't know anything about web design or standards. Once again, I'm not part of the "group" you are talking about. I'm just a user who is losing access to more and more sites because some "educated" sighted people don't get it and don't listen. The "educated" sighted people in this case are the web standards group. BTW, my experiences with Firefox and Gnome are making me do the same as you. I am finding myself lumping all sighted people into the same group of fuckers who don't get it. This is bad for both of us. Kenny From j.orcauser at googlemail.com Mon May 24 08:01:23 2010 From: j.orcauser at googlemail.com (Jon) Date: Mon, 24 May 2010 09:01:23 +0100 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100524061837.GA2195@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> Message-ID: <20100524080123.GA26991@jupiter.uk.to> Hi, On Mon 24/05/2010 at 01:18:37, Kenny Hitt wrote: [snip] > Actually, that isn't my problem with Gnome. My problem is lack of > stability and slow response. My time in Gnome usually ends when Orca > crashes and nothing I try can get it to restart. At that point, > anything in the Gnome session is lost. My only option is to kill the > Xserver and clean up the resulting mess in a console. If Orca were a C > program, I would just attach to the process with gdb in a console and > wait for the crash. Then I would have a good backtrace to attach to a > bug report. I don't know how to do the same with a Python app. The > debug methods I know about for Orca create large files.
Since the > crash can take a while to happen, letting Orca write large amounts of > data to a file isn't an option. Most of these crashes occur due to speech-dispatcher, or the poor sound integration that ubuntu had for a while, while switching between alsa and pulse audio. If you update to Lucid, I believe your experience might be far improved. We try to keep the actual Orca code as clean as possible, performing regression testing on regular basis. But because we dont provide the actual speech, when the tts crashes/hangs, then sadly often Orca gets the blame. Providing the debug file is very helpful, because then we can see exactly what has happend, and at what stage the problem occured, and with what parameters. Only then we can try to work with opentts/sd to fix the issue, or work around it ourselves. In any case, we are not asking you to read the debug file, unless you want to be involved with locating the problem and thinking about possible solutions. Remember Orca is a community project, and we are all volinteers, so every little helps. Thank you. -Jon From oyvind.lode at lyse.net Mon May 24 08:36:35 2010 From: oyvind.lode at lyse.net (=?iso-8859-1?Q?=D8yvind_Lode?=) Date: Mon, 24 May 2010 10:36:35 +0200 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100524061837.GA2195@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> Message-ID: <001e01cafb1c$38420fb0$a8c62f10$@lode@lyse.net> I'm a blind Linux user as well. I don't use GNOME and probably never will. It's to slow (Orca is very slow). I use the console with ease. The commands is not a problem for me at all using both speech and Braille. I'm a reasonable fast Braille reader so reading commands etc is no problem for me. I'm a quite fast touch typist as well he he. So the command line is very accessible for a blind person. Especially if he or she use a Braille display. But I can understand the frustration for a person using text recognition to input commands. -----Original Message----- From: ubuntu-accessibility-bounces at lists.ubuntu.com [mailto:ubuntu-accessibility-bounces at lists.ubuntu.com] On Behalf Of Kenny Hitt Sent: 24. mai 2010 08:19 To: ubuntu-accessibility at lists.ubuntu.com Subject: Re: Ubuntu-accessibility Digest, Vol 54, Issue 23 Hi. On Sun, May 23, 2010 at 07:39:33PM -0400, Eric S. Johansson wrote: > On 5/23/2010 12:40 PM, Kenny Hitt wrote: > > On Sun, May 23, 2010 at 12:16:12PM -0400, Eric S. Johansson wrote: > >> On 5/23/2010 11:26 AM, Kenny Hitt wrote: > >> > >>> There isn't a kernel module in this case since they are using sane. I > >>> regularly build and install kernel modules without needing to reboot. > >>> Maybe these notes were for Windows? That is the only explanation I can > >>> come up with to explain this. > >> > >> I went and read which reveals that is a Linux solution. I have observed > >> that scanner interfaces are, fragile at best, and I'm not surprised they > >> want to reboot with the device turned on. > >> > > I just switched scanners yesterday with no need to reboot. That idea about > > scanners doesn't match with my experience in Linux. > > fair enough. I don't use scanners except for one and that's under Windows > because I haven't had time to set up on my wife's machine. 
(Yes, her Facebook > workstation is the house linux box unless you count the mini ITX system running > virtual machines for my firewall and internal print services. Yes, let's not > count that :-) > Setting up a scanner in Linux is easy. 1. plug the scanner into the computer and the power outlet. 2. install the sane package if it isn't already installed. 3. decide on what app you want to use for ocr and install if needed. 4. start using it. Note none of these steps require a reboot of the system. > > Since I'm totally blind, that means I'm likely supposed to be one of the > > users of this product. Since I have years of Linux experience, I don't have > > much confidence in any app that tells me I need to reboot after installing a > > user space app. > > really good point. And I'm glad to hear you talk about your experiences. We need > more user stories to help extract a better than the current model for > accessibility. This is really great. > > > I find I'm still faster and more productive in the text console at a bash > > prompt than I've ever been in a GUI like Gnome. Gnome has never been stable > > or reliable enough for me to stick with it for more than a few months at a > > time. I had 4 years of Windows experience and was one of the early adopters > > of Gnome accessibility, but Gnome hasn't lived up to it's marketing. > > right. That makes sense. What I'm hearing from your experience is that you build > a mental model of all the commands, you can type them in and get feedback > through text-to-speech or a braille output device to confirm that you entered > the right data. The unpronounceable nature of the commandline doesn't bother > you??? Is that right? > actually, I have a visual picture of a GUI desktop. In Gnome, that isn't as important as it was for access to Windows. In Windows, I had to use the screen reader's mouse control to find objects. The same concept isn't needed for Gnome. In the console, there is no need to picture anything. Think of command line as a conversation. A screen reader will try to pronounce anything sent to the synth, so everything will have some word or phrase associated with it. It might not sound like English, but it will be consistant as long as you use the same screen reader and synth. > I think the big problem with putting accessibility features for blind users on a > GUI is that you try to map a two-dimensional shallow but wide user interface > into an aural format. similar problem to what we deal with speech recognition. > > Actually, that isn't my problem with Gnome. My problem is lack of stability and slow response. My time in Gnome usually ends when Orca crashes and nothing I try can get it to restart. At that point, anything in the Gnome session is lost. My only option is to kill the Xserver and clean up the resulting mess in a console. If Orca were a C program, I would just attach to the process with gdb in a console and wait for the crash. Then I would have a good backtrace to attach to a bug report. I don't know how to do the same with a Python app. The debug methods I know about for Orca create large files. Since the crash can take a while to happen, letting Orca write large amounts of data to a file isn't an option. > >> The second way they fail is presentation. The name of the command, how > >> it's invoked etc. it is not accessible either to speech recognition or > >> text-to-speech. The last one, text-to-speech, may do a more credible job > >> at presenting garbled text (command names, commandline arguments etc.) 
than > >> speech recognition will when generating the same. > >> > > I don't follow this one. help $command works for me with a screen reader any > > time I need a reminder of a built in command $command --help works when I > > need a reminder for an external command. > > Okay. I was channeling from too deep inside my head on the theory behind > accessibility. Sorry about that > > cp -al [UcWd]* . > > How do you pronounce that? In simplest form, its > The answer will depend on screen reader and synth. With my current default punctuation setting in speakup and espeak, it sounds like: c p al up w d Notice I only hear the punctuation and case of the command if I review the screen. This behavior is my default by choice. I could set punctuation to a higher value and hear more, but I only do that when reading code and not mail. In this case, espeak doesn't get the command right since I heard u p w d instead of the actual u c w d. When I reviewed the screen, I saw the correct command. > Charlie papa space minus sign space left bracket cap uniform charlie cap whiskey > delta close bracket no space asterisk space dot > > ugly as hell and rife with potential for speech recognition errors which makes > it even harder to speak! > According to the posts I've seen about VEDICS Speech Assistant this problem will be less with there solution. It can recognize commands that aren't English. > If I was to make a little smarter using some macro capability it might be > something like: > > Copy with links source pattern cap uniform charlie cap whiskey delta > close with wildcard > destination there (memorized target location) > > little more verbose but, far more resilient against speech recognition errors. > It's also form one could translate command into for a text-to-speech user. The > downside with this model is that you need to create special macros for every > stupid command and work out the appropriate argument handling grammar. > Actually, your example would be way to verbos for text to speech. I would prefer something like c p dash a l bracket u c w d right bracket star dot The u w d would be at a higher pitch to show they are caps, while the rest would be at normal pitch to show they are lower case. Since I already know what star and dot mean in Unix, no need to say that. Kenny -- Ubuntu-accessibility mailing list Ubuntu-accessibility at lists.ubuntu.com https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility From brunogirin at gmail.com Mon May 24 09:52:59 2010 From: brunogirin at gmail.com (Bruno Girin) Date: Mon, 24 May 2010 10:52:59 +0100 Subject: disabilities In-Reply-To: <20100524071858.GC2195@blackbox.hittsjunk.net> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> <4BF941B9.5010503@harvee.org> <4BF9B907.8010703@harvee.org> <20100524071858.GC2195@blackbox.hittsjunk.net> Message-ID: <1274694779.1591.69.camel@nuuk> On Mon, 2010-05-24 at 02:18 -0500, Kenny Hitt wrote: > Hi. > On Mon, May 24, 2010 at 02:08:11AM +0100, Phillip Whiteside wrote: [snip] > > I asked on the forum for someone to check and see if my coding was correct - > > I had exactly zero replies back. How do you expect me to push forward > > people to include the minor code changes as they are learning when none of > > "you" are even prepared to see if it is correct? > > > I don't know for sure, but there are likely very few disabled people on the standards committy. 
There > is likely a token member, but the real power is with sighted people who consider this as just > some cool project and don't really get that there delay causes real problems for the disabled. There are more than disabled people on standard committees than you think. In practice, the problem is not with web and accessibility standards themselves, they are with their implementation in browsers and how well (or not) they are followed by web site designers. My experience in the industry is that there are very few designers who are aware of standards and why they should be followed. And even when they are aware of accessibility standards, they don't understand them well enough to argue the case for following them, especially when it is perceived that following the standards will increase the development cost. I constantly face this problem in my day job: every time I need to write specifications for a new web based system, I include accessibility guidelines and invariably I get answers like "that will increase the cost by X" or "that will delay delivery by Y" when it's not an outright "we can't do that". > > > So, I shrug my shoulders and say "well, at least I tried". > > > > It is not my loss that you have gotten yet another person do that, it is > > your loss as a group. > > > Actually, it is my loss since I don't know anything about web design or standards. > Once again, I'm not part of the "group" you are talking about. I'm just a user who is loosing access to more and more > sites because some "educated" sighted people don't get it and don't listen. > The "educated" sighted people in this case are the web standards group. > BTW, my experiences with Firefox and Gnome are making me do the same as you. I am finding myself > lumping all sighted people into the same group of fuckers who don't get it. > This is bad for both of us. It's true, as a person with no disability, it took me a long time to get it. And I don't think I completely get it yet but at least I'm now able to make a judgement call on whether some code uses techniques that are likely to cause accessibility issues. This is to be expected: it is extremely difficult for someone who does not have a given disability to understand what it is like to live with that disability. In fact, I suspect it is difficult for a blind person to understand the challenges faced by people with motor disabilities for instance. What really opened my eyes was attending a talk by Robin Christopherson from AbilityNet [1] at the @media conference [2] a few years ago. What made the difference was not the content of the presentation but the fact that it was delivered by a blind user and got me to see first hand what issues blind people face when using a computer. And that's the problem with accessibility: even with the best will in the world, it's impossible for non-disabled people to understand the challenges faced by disabled people without witnessing them first hand. And very few developers ever see first hand the software they produce used by disabled users. All this to say that to solve accessibility problems, we need to talk to each other and understand that "getting it" is very difficult for able people. Which means that able people need to be ready to listen and see their assumptions and "cool ideas" challenged; while disabled people need to be patient in explaining why a particular design doesn't work for them and suggesting constructive alternatives. 
[1] http://www.abilitynet.org.uk/webteam#robin [2] http://atmedia.webdirections.org/ Bruno From valdis at odo.lv Mon May 24 10:16:36 2010 From: valdis at odo.lv (Valdis) Date: Mon, 24 May 2010 10:16:36 +0000 (UTC) Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> Message-ID: ... > Actually, that isn't my problem with Gnome. My problem is lack of stability and slow response. > My time in Gnome usually ends when Orca crashes and nothing I try can get it to restart. At that > point, anything in the Gnome session is lost. My only option is to kill the Xserver and clean ... can you start orca with wollowing command: orca >~/orca.log 2>&1 & And then check what appears in the log file? Valdis From brunogirin at gmail.com Mon May 24 10:17:13 2010 From: brunogirin at gmail.com (Bruno Girin) Date: Mon, 24 May 2010 11:17:13 +0100 Subject: Life In-Reply-To: References: Message-ID: <1274696233.1591.82.camel@nuuk> On Sun, 2010-05-23 at 22:38 +0100, Phillip Whiteside wrote: > Hi, > > > I joined this mailing list via the ubuntu forum area. I write web > sites and was interested in how much more difficult it is to write the > code so that it complies with whatever standard is standard of the day > (The wonderful thing about standards, is everyone can make their own). It's not that difficult when starting from scratch. In practice, it's the same techniques you would use to make your web site be usable on a wide range of browsers: start with standard HTML and CSS, using HTML tags according to their semantic meaning (h[1-6] for titles, ul or ol for lists, etc) and CSS to handle the look and feel. Then add visual improvements in such a way that they degrade gracefully (e.g., when using Javascript, do it in such a way that your site still works when Javascript is disabled) or that there is an alternative way to do the same thing that uses HTML and CSS only. Some resources I've found very useful in this regard: http://diveintoaccessibility.org/ http://www.alistapart.com/topics/topic/accessibility/ > > > A bit of my background may be in order. When I was 20 years old I had > written a programme that could do what Stephen Hawkins still uses > using an 8 bit computer (an Atari 640 XL with additional memory board > soldered in). There was no interest in me going forward with that for > about 1/100th of the cost of what was being sold commercially by any > of charities. > > > My heart drops when the longest emails are about 'failed' projects, > people's ascertations that future projects are doomed to failure. As a > non-disabled person, can I please ask that the bickering of who / > what / where / when is to fault stop? > > > As has been pointed out on this thread, there are young programmers > coming on-line. This next bit of news may come of a shock to some of > you, but they do not actually care about a disability - it is such a > 'non-event' to them - They focus on the person, if that person is a > happy person they see happy. I that person is some what frustrated but > articulate and realises that an able bodied can never fully understand > how it is to be so then progress can be made. 
If their first contact > is for a major doom and gloom assesment of how they will fail like > everyone else has done, it is hardly going to keep them around for > long? Agreed. Even with the best will in the world, it is extremely difficult for able people (of all ages) to understand the challenges faced by disabled people. > > > VEDICS is nice, they have a little funding and would possibly be > interested in taking it forward, they are certainly not going to > 1) be interested in so doing with such negativity > 2) be able to get any funding based on 'testimonial' emails. > > > This is another young programmer who has written stuff > > > http://bloc.eurion.net/archives/2010/espeak-gui-0-2/ > > > > > > From kenny at hittsjunk.net Mon May 24 10:26:13 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Mon, 24 May 2010 05:26:13 -0500 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> Message-ID: <20100524102613.GA14537@blackbox.hittsjunk.net> Hi. On Mon, May 24, 2010 at 10:16:36AM +0000, Valdis wrote: > ... > > Actually, that isn't my problem with Gnome. My problem is lack of stability > and slow response. > > My time in Gnome usually ends when Orca crashes and nothing I try can get it > to restart. At that > > point, anything in the Gnome session is lost. My only option is to kill the > Xserver and clean > ... > can you start orca with wollowing command: > orca >~/orca.log 2>&1 & > > And then check what appears in the log file? > no, when it crashes, nothing I do can get it to restart. Before you ask, it isn't a tts issue since speech-dispatcher is still up and running. I've been running Linux for 10 years now, so I'm not your normal stupid Windows user. I know how to debug problems. Like I said in my earlier post, if this were a C program I would already have filed the bug report. The fact you can't easily debug a Python app is a big weakness in Orca. I don't have enough disk space to just leav the debug options in Orca enabled either, so this will likely be a bug that won't get resolved any time soon. Kenny From kenny at hittsjunk.net Mon May 24 10:46:27 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Mon, 24 May 2010 05:46:27 -0500 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100524102613.GA14537@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> <20100524102613.GA14537@blackbox.hittsjunk.net> Message-ID: <20100524104627.GA15377@blackbox.hittsjunk.net> Hi. Just to clarify something: my attitude isn't directed at any of the people who have asked me questions about my Orca crash. My attitude comes from the fact I can debug Linux kernel code but can't debug a fucking gnome screen reader. In my opinion, switching to Python from C was a mistake for a screen reader. Kenny On Mon, May 24, 2010 at 05:26:13AM -0500, Kenny Hitt wrote: > Hi. > On Mon, May 24, 2010 at 10:16:36AM +0000, Valdis wrote: > > ... > > > Actually, that isn't my problem with Gnome. 
My problem is lack of stability > > and slow response. > > > My time in Gnome usually ends when Orca crashes and nothing I try can get it > > to restart. At that > > > point, anything in the Gnome session is lost. My only option is to kill the > > Xserver and clean > > ... > > can you start orca with wollowing command: > > orca >~/orca.log 2>&1 & > > > > And then check what appears in the log file? > > > no, when it crashes, nothing I do can get it to restart. Before you ask, it isn't a tts > issue since speech-dispatcher is still up and running. > I've been running Linux for 10 years now, so I'm not your normal stupid Windows user. > I know how to debug problems. Like I said in my earlier post, if this were a C program > I would already have filed the bug report. The fact you can't easily debug a Python > app is a big weakness in Orca. > I don't have enough disk space to just leav the debug options in Orca enabled either, so this > will likely be a bug that won't get resolved any time soon. > > Kenny > From laura at lczajkowski.com Mon May 24 10:48:41 2010 From: laura at lczajkowski.com (Laura Czajkowski) Date: Mon, 24 May 2010 11:48:41 +0100 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100524104627.GA15377@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> <20100524102613.GA14537@blackbox.hittsjunk.net> <20100524104627.GA15377@blackbox.hittsjunk.net> Message-ID: <4BFA5989.1060207@lczajkowski.com> On 24/05/10 11:46, Kenny Hitt wrote: > questions about my Orca crash. My attitude comes from the fact I can debug > Linux kernel code but can't debug a fucking gnome screen reader. > > > Could you please moderate your language on this list. Laura -- https://wiki.ubuntu.com/czajkowski http://www.lczajkowski.com Skype: lauraczajkowski From carlos.mayans at gmail.com Mon May 24 13:22:43 2010 From: carlos.mayans at gmail.com (Carlos Mayans) Date: Mon, 24 May 2010 14:22:43 +0100 Subject: Input method for users who can only operate (press/control/click) 1(or limited number of) key(s) Message-ID: Hey guys, this is my the first time I write to the list, so first of all my name is Carlos, I am from Spain and I have been working with people with different types of disabilities in Kolkata India for almost two years, mainly deaf and kids with cerebrl palsy. I was little surprised I couldn't find any software to work as an only 1 key input methods. This could be easily programmed, I am a software developer myself, but I have absolutely no idea if there are already people working on it, or anything related to this. Sorry if this issue has already been discussed before, and thank you in advance for your advices. Regards, Carlos. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hgs at dmu.ac.uk Mon May 24 15:34:59 2010 From: hgs at dmu.ac.uk (Hugh Sasse) Date: Mon, 24 May 2010 16:34:59 +0100 (BST) Subject: Input method for users who can only operate (press/control/click) 1(or limited number of) key(s) In-Reply-To: References: Message-ID: On Mon, 24 May 2010, Carlos Mayans wrote: > Hey guys, > > this is my the first time I write to the list, so first of all my name is > Carlos, I am from Spain and I have been working with people with different > types of disabilities in Kolkata India for almost two years, mainly deaf and > kids with cerebrl palsy. > > I was little surprised I couldn't find any software to work as an only 1 key > input methods. This could be easily programmed, I am a software developer > myself, but I have absolutely no idea if there are already people working on > it, or anything related to this. I know of Dasher, which I've not tried on Linux: http://www.inference.phy.cam.ac.uk/dasher/ The book Beautiful Code (not here at the moment) has a chapter on the system Stephen Hawking uses, which is open source, but when I went to the site mentioned in the book the code had vanished. Maybe someone else knows whether it has moved or just died. Chapter 30 When a Button Is All That Connects You to the World http://oreilly.com/catalog/9780596510046#toc There is also software out there which allows communication with a text interface using extended morse code, which may or may not be applicable. There is this: http://morseall.org/ and there was the morse 2000 project, but this seems to have only left traces on the net. > > Sorry if this issue has already been discussed before, and thank you in > advance for your advices. > > Regards, > Carlos. > HTH Hugh From j.orcauser at googlemail.com Mon May 24 16:00:45 2010 From: j.orcauser at googlemail.com (Jon) Date: Mon, 24 May 2010 17:00:45 +0100 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100524104627.GA15377@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> <20100524102613.GA14537@blackbox.hittsjunk.net> <20100524104627.GA15377@blackbox.hittsjunk.net> Message-ID: <20100524160045.GA2058@mars.uk.to> Please have a look at what is available before having a burst. if you want something simular to gdb, then you should look into ipython. Yes, Orca could have been written in c or c++, but the speed of development and ease of participation would be diffrent. Please, taking a posative step and helping out is far better than being negative and talking the project down. Most debug files are not more than a few k, I really dont understand what you mean by large files, and not having space for them. Anyway, there are levels and filters in the orca debug module, I would recommend you have a look through and see what is best for your situation. If you dont want to help out, then this is a diffrent matter. Thanks. -Jon On Mon 24/05/2010 at 05:46:27, Kenny Hitt wrote: > Hi. > Just to clarify something: my attitude isn't directed at any of the people who have asked me > questions about my Orca crash. My attitude comes from the fact I can debug > Linux kernel code but can't debug a fucking gnome screen reader. > In my opinion, switching to Python from C was a mistake for a screen reader. 
> > Kenny > > On Mon, May 24, 2010 at 05:26:13AM -0500, Kenny Hitt wrote: > > Hi. > > On Mon, May 24, 2010 at 10:16:36AM +0000, Valdis wrote: > > > ... > > > > Actually, that isn't my problem with Gnome. My problem is lack of stability > > > and slow response. > > > > My time in Gnome usually ends when Orca crashes and nothing I try can get it > > > to restart. At that > > > > point, anything in the Gnome session is lost. My only option is to kill the > > > Xserver and clean > > > ... > > > can you start orca with wollowing command: > > > orca >~/orca.log 2>&1 & > > > > > > And then check what appears in the log file? > > > > > no, when it crashes, nothing I do can get it to restart. Before you ask, it isn't a tts > > issue since speech-dispatcher is still up and running. > > I've been running Linux for 10 years now, so I'm not your normal stupid Windows user. > > I know how to debug problems. Like I said in my earlier post, if this were a C program > > I would already have filed the bug report. The fact you can't easily debug a Python > > app is a big weakness in Orca. > > I don't have enough disk space to just leav the debug options in Orca enabled either, so this > > will likely be a bug that won't get resolved any time soon. > > > > Kenny > > > > -- > Ubuntu-accessibility mailing list > Ubuntu-accessibility at lists.ubuntu.com > https://lists.ubuntu.com/mailman/listinfo/ubuntu-accessibility From pstowe at gmail.com Mon May 24 19:00:18 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Mon, 24 May 2010 15:00:18 -0400 Subject: Fwd: [Ubuntu-Classroom] Call for Ubuntu User Days Instructors In-Reply-To: <1272982896.5645.11.camel@aries> References: <1272982896.5645.11.camel@aries> Message-ID: Hiya, We're still looking for instructors and it might be cool if someone from here could do a class on setting up accessibility features (or just on what accessibility features exist) in Lucid! Thanks, Penelope ---------- Forwarded message ---------- From: Chris Johnston Date: Tue, May 4, 2010 at 10:21 AM Subject: [Ubuntu-Classroom] Call for Ubuntu User Days Instructors To: ubuntu-classroom at lists.ubuntu.com, ubuntu-news-team at lists.ubuntu.com, loco-contacts at lists.ubuntu.com, ubuntu-users at lists.ubuntu.com Greetings! It's time to start planning for the second Ubuntu User Day! This time it will be held on June 5, 2010. We are going to attempt to fill 24 time slots so that everyone around the world has the ability to participate in the User Day! You can find out more information about Ubuntu User Days by visiting the Ubuntu User Day wiki page [1] or the planning wiki page [2]. To sign up to lead a session, visit the Course Suggestions wiki page [3] and look through the course suggestions that we have provided. We are also willing to take your suggestions on other courses to teach, just keep in mind that Ubuntu User Days are geared towards new and newer Ubuntu Users. You can see the logs [4] from the last Ubuntu User Day to see some of the courses that were taught then. Please feel free to email me if you have any questions and I look forward to working with you soon. 
[1] https://wiki.ubuntu.com/UserDays [2] https://wiki.ubuntu.com/UserDaysTeam [3] https://wiki.ubuntu.com/UserDaysTeam/CourseSuggestions [4] https://wiki.ubuntu.com/UserDays/Logs/January2010 On behalf of the Ubuntu User Days Team, -- Chris Johnston - cjohnston Ubuntu Member chrisjohnston at ubuntu.com www.chrisjohnston.org -- Ubuntu-classroom mailing list Ubuntu-classroom at lists.ubuntu.com Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-classroom -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From esj at harvee.org Mon May 24 20:57:24 2010 From: esj at harvee.org (Eric S. Johansson) Date: Mon, 24 May 2010 16:57:24 -0400 Subject: disabilities In-Reply-To: <1274694779.1591.69.camel@nuuk> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> <4BF941B9.5010503@harvee.org> <4BF9B907.8010703@harvee.org> <20100524071858.GC2195@blackbox.hittsjunk.net> <1274694779.1591.69.camel@nuuk> Message-ID: <4BFAE834.1060305@harvee.org> On 5/24/2010 5:52 AM, Bruno Girin wrote: > > There are more than disabled people on standard committees than you > think. In practice, the problem is not with web and accessibility > standards themselves, they are with their implementation in browsers and > how well (or not) they are followed by web site designers... This common experience is why I've come to the conclusion that our accessibility APIs and design models are fundamentally doomed to failure. Why? History. Also because anytime you expect somebody else to change something to accommodate you, they will not do it. Having been in the software biz, having run companies, I will tell you accessibility needs fall dead last both in terms of project and financial expenditures. They fall dead last because they do not add anything to the bottom line. The number of disabled users of software is almost vanishingly small when compared to the larger market. http://www.practicalecommerce.com/articles/1417-Accessibility-How-Many-Disabled-Web-Users-Are-There- Unfortunately, the article above doesn't deal with upper extremity disabilities like mine, so one probably should assume the numbers given are the lower limit on disabled users. They estimate something like 7% of the population is disabled. That's on a par with the number of Linux users, and we see how well the marketplace accommodates TAB (temporarily able-bodied) users who have disposable income, in contrast to disabled users who have trouble finding jobs and have correspondingly less disposable income. I think the current model is also doomed because it puts the administrative load for accessibility on every system the disabled person uses, further increasing cost for little benefit, especially for employers who will probably never see a disabled person cross the threshold to apply for a job, let alone hold one. Remember, 7% disabled in the total population works out to something like one person in 20 to one person in 30 in the actual working population. In my 30-year career, I'm the first, maybe second disabled person I've seen in any of the companies I worked for, and these were not small companies. So, how do we change this? We change this by minimizing the changes necessary to applications and, hopefully, embedding them in libraries so they are used automatically without any work on the part of the developer.
We build clients to handle the disability user interface and talk to the back doors in those libraries to do the disability work. We lower the costs/barrier to entry for employers and application developers alike, and we end up with a greater range of applications that can be used. Cultural and technical challenges can be discussed later if you care. > It's true, as a person with no disability, it took me a long time to get > it. And I don't think I completely get it yet but at least I'm now able > to make a judgement call on whether some code uses techniques that are > likely to cause accessibility issues. This is to be expected: it is > extremely difficult for someone who does not have a given disability to > understand what it is like to live with that disability. In fact, I > suspect it is difficult for a blind person to understand the challenges > faced by people with motor disabilities for instance. I'm puzzled by this. If you're going to work with disability issues, you ought to handicap yourself in the same way. For example, gloves that restrict finger movement or induce pain when you touch something. Blindfolds, or having someone remove your keyboard, or worse, generate random keystrokes when you touch a key? I would think that a couple of days with nothing but speech recognition and the mouse would give you a feel for the panic the disabled user feels, and a week might give you the first glimmers of understanding of how they solve their problems. A month, and you'll be one of us. :-) > All this to say that to solve accessibility problems, we need to talk to > each other and understand that "getting it" is very difficult for able > people. Which means that able people need to be ready to listen and see > their assumptions and "cool ideas" challenged; while disabled people > need to be patient in explaining why a particular design doesn't work > for them and suggesting constructive alternatives. Good point. I will also add that patience runs out somewhere around 10 to 12 years of explaining to yet another generation of clueless programmers what's wrong with their approach and being told "get off of your own fucking lawn grandpa, we know what we are doing" only to see them crash, burn, and walk away saying "that wasn't really an interesting problem after all". From esj at harvee.org Mon May 24 21:39:53 2010 From: esj at harvee.org (Eric S. Johansson) Date: Mon, 24 May 2010 17:39:53 -0400 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100524104627.GA15377@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> <20100524102613.GA14537@blackbox.hittsjunk.net> <20100524104627.GA15377@blackbox.hittsjunk.net> Message-ID: <4BFAF229.7010607@harvee.org> On 5/24/2010 6:46 AM, Kenny Hitt wrote: > Hi. > Just to clarify something: my attitude isn't directed at any of the people who have asked me > questions about my Orca crash. My attitude comes from the fact I can debug > Linux kernel code but can't debug a fucking gnome screen reader. > In my opinion, switching to Python from C was a mistake for a screen reader. Just a side note we may want to talk about in a different thread. I really like Python because, if I control how names are formatted, I can write Python code with a small number of macros.
Editing is a bit of a bitch because I don't have the right feedback from Emacs (fookin OSS purists getting in the way). Unfortunately, most Python code from tabs has pep-8 formatted names which totally screws up speech recognition accessibility. Sometimes I take the code and use a global search and replace to create accessible code. :-) From tcross at rapttech.com.au Mon May 24 23:22:38 2010 From: tcross at rapttech.com.au (Tim Cross) Date: Tue, 25 May 2010 09:22:38 +1000 Subject: disabilities In-Reply-To: <1274694779.1591.69.camel@nuuk> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> <4BF941B9.5010503@harvee.org> <4BF9B907.8010703@harvee.org> <20100524071858.GC2195@blackbox.hittsjunk.net> <1274694779.1591.69.camel@nuuk> Message-ID: <19451.2622.250551.118633@rapttech.com.au> Bruno Girin writes: > On Mon, 2010-05-24 at 02:18 -0500, Kenny Hitt wrote: > > Hi. > > On Mon, May 24, 2010 at 02:08:11AM +0100, Phillip Whiteside wrote: > > [snip] > > > > I asked on the forum for someone to check and see if my coding was correct - > > > I had exactly zero replies back. How do you expect me to push forward > > > people to include the minor code changes as they are learning when none of > > > "you" are even prepared to see if it is correct? > > > > > I don't know for sure, but there are likely very few disabled people on the standards committy. There > > is likely a token member, but the real power is with sighted people who consider this as just > > some cool project and don't really get that there delay causes real problems for the disabled. > > There are more than disabled people on standard committees than you > think. In practice, the problem is not with web and accessibility > standards themselves, they are with their implementation in browsers and > how well (or not) they are followed by web site designers. My experience > in the industry is that there are very few designers who are aware of > standards and why they should be followed. And even when they are aware > of accessibility standards, they don't understand them well enough to > argue the case for following them, especially when it is perceived that > following the standards will increase the development cost. I constantly > face this problem in my day job: every time I need to write > specifications for a new web based system, I include accessibility > guidelines and invariably I get answers like "that will increase the > cost by X" or "that will delay delivery by Y" when it's not an outright > "we can't do that". > > > > > > > So, I shrug my shoulders and say "well, at least I tried". > > > > > > It is not my loss that you have gotten yet another person do that, it is > > > your loss as a group. > > > > > Actually, it is my loss since I don't know anything about web design or standards. > > Once again, I'm not part of the "group" you are talking about. I'm just a user who is loosing access to more and more > > sites because some "educated" sighted people don't get it and don't listen. > > The "educated" sighted people in this case are the web standards group. > > BTW, my experiences with Firefox and Gnome are making me do the same as you. I am finding myself > > lumping all sighted people into the same group of fuckers who don't get it. > > This is bad for both of us. > > It's true, as a person with no disability, it took me a long time to get > it. 
And I don't think I completely get it yet but at least I'm now able > to make a judgement call on whether some code uses techniques that are > likely to cause accessibility issues. This is to be expected: it is > extremely difficult for someone who does not have a given disability to > understand what it is like to live with that disability. In fact, I > suspect it is difficult for a blind person to understand the challenges > faced by people with motor disabilities for instance. > > What really opened my eyes was attending a talk by Robin Christopherson > from AbilityNet [1] at the @media conference [2] a few years ago. What > made the difference was not the content of the presentation but the fact > that it was delivered by a blind user and got me to see first hand what > issues blind people face when using a computer. And that's the problem > with accessibility: even with the best will in the world, it's > impossible for non-disabled people to understand the challenges faced by > disabled people without witnessing them first hand. And very few > developers ever see first hand the software they produce used by > disabled users. > > All this to say that to solve accessibility problems, we need to talk to > each other and understand that "getting it" is very difficult for able > people. Which means that able people need to be ready to listen and see > their assumptions and "cool ideas" challenged; while disabled people > need to be patient in explaining why a particular design doesn't work > for them and suggesting constructive alternatives. > > [1] http://www.abilitynet.org.uk/webteam#robin > [2] http://atmedia.webdirections.org/ > > Bruno > Hi Bruno, I pretty much agree with what you wrote. I've known a number of people on the standards committees, all of which have had some form of disability. A few things I would add...... * Standards are a very very difficult thing to formulate. As you point out, many of the browser implementations either fail to implement the standard correctly or just ignore it because it makes things too complex or too expensive. I think this is largely due to the difficulty in being able to express the standard in a non-ambiguous way. More often than not, failure to comply with a standard is due to misunderstanding or misinterpretation of the standard. * Standards are difficult because ther are so many separate parties with their own agenda they want pushed. Look at the issues that have arisen with HTML 5. At one point, the standard was going to define the format to use for audio and video. In the end, this was dropped because those involved could not come to an agreement. I suspect this is mainly because major players like Adobe, Microsoft and Apple had their own technology they wanted to push. Look at other standards, such as the ANSI standard for Common Lisp. This took years to finish and many argue that it destroyed the language. Many of the issues were due to the fact the language had developed for years without any standard and they wanted to both keep as much backward compatibility as possible and make it as easy as possible for all commercial vendors to become compliant with the standard. * There is no single model that will represent the requirements of someone with a disability. Even within one small disability group, such as blind or deaf or those with impaired motor skills etc, the range and impact is different. Furthermore, the individual's ability to work with their disability varies enormously as do their requirements. 
I am frequently asked, as someone who is blind and is a software developer, to check to see if a new bit of software or website is accessible. Frequently, I have to say that I find it accessible, but many other blind users would not. This is partially because I have better than average technical skill, have developed my own tools and techniques, and possibly because I'm also very stubborn and refuse to let a machine get the better of me. * Different people have different desires and needs. This is also true amongst those with a disability. For example, I don't use Orca or speech-dispatcher. I use one tool, emacspeak. With that tool, I have been able to hold down a senior management position running a large corporate data centre, project manage large development projects with multi-million dollar budgets, and be the senior sys admin for a large ISP. I have now returned to development rather than management because I love technology. My preferred tool has provided me with all I need. Some will ask, but how do you use Facebook, or YouTube or .... and my answer is I don't. I have no interest in these things and my tool provides me with what I need. However, another person with exactly the same level of disability but with different needs would find my tool completely useless. This difference in needs and desires means that it is very difficult to develop one true solution or specify one comprehensive standard to meet all our needs. Part of the reason adaptive technology is not more advanced than it is, is because we are dealing with a very complex issue. There is no one right solution that will satisfy everyone. This is an area that will never be solved - it does get better and improvements do happen, but we need to realise that it is evolving and will need constant and continuous work. Likewise, the importance of education, both for those with disabilities and those without, cannot be overstated. We need to avoid the creation of 'us' and 'them' paradigms. It is a mistake to lump people into groups and become angry or disillusioned because people 'don't get it'. We need to both learn and understand the dynamics of the situation and find ways to educate and inform. We need to be aware of the differences and the fact that just because we have a disability doesn't mean we understand it. We need to recognise that some are better able to deal with their disabilities than others and we need to realise that as a community, we are a reflection of the wider non-disabled society. We have some who are bitter and angry, some who are self-reliant and some who believe society owes them, some who are victims and some who are not, and some who blame the world and some who do not. Likewise, we all have different perspectives on what should and should not happen. I frequently get frustrated with disabled people who constantly find problems and focus on what is wrong rather than on what can be done to make things better. I've come across many with a disability who believe others need to change to meet their needs. While I agree there should be more awareness and recognition of special needs, and while I would hope as a society we see such things as being very important and will argue in support of programs to address the inequalities, I also think we have a responsibility to inform and educate and adapt as much as possible rather than try to make the rest of the world adapt to our needs.
I also try to remember, when I get frustrated, that we all have different abilities and requirements and we all need to deal with our own situation in whatever way we can. In the end, I remind myself that nowhere is it written that life is meant to be fair and its not anyones fault I have a disability and nor is it anyones problem except mine. Tim -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. From tcross at rapttech.com.au Mon May 24 23:30:10 2010 From: tcross at rapttech.com.au (Tim Cross) Date: Tue, 25 May 2010 09:30:10 +1000 Subject: Ubuntu-accessibility Digest, Vol 54, Issue 23 In-Reply-To: <20100524104627.GA15377@blackbox.hittsjunk.net> References: <20100523103751.GG14174@blackbox.hittsjunk.net> <4BF94235.1050809@harvee.org> <20100523152603.GI14174@blackbox.hittsjunk.net> <4BF954CC.90303@harvee.org> <20100523164015.GJ14174@blackbox.hittsjunk.net> <4BF9BCB5.2040400@harvee.org> <20100524061837.GA2195@blackbox.hittsjunk.net> <20100524102613.GA14537@blackbox.hittsjunk.net> <20100524104627.GA15377@blackbox.hittsjunk.net> Message-ID: <19451.3074.537966.413663@rapttech.com.au> Kenny Hitt writes: > Hi. > Just to clarify something: my attitude isn't directed at any of the people who have asked me > questions about my Orca crash. My attitude comes from the fact I can debug > Linux kernel code but can't debug a fucking gnome screen reader. > In my opinion, switching to Python from C was a mistake for a screen reader. > > Kenny > > On Mon, May 24, 2010 at 05:26:13AM -0500, Kenny Hitt wrote: > > Hi. > > On Mon, May 24, 2010 at 10:16:36AM +0000, Valdis wrote: > > > ... > > > > Actually, that isn't my problem with Gnome. My problem is lack of stability > > > and slow response. > > > > My time in Gnome usually ends when Orca crashes and nothing I try can get it > > > to restart. At that > > > > point, anything in the Gnome session is lost. My only option is to kill the > > > Xserver and clean > > > ... > > > can you start orca with wollowing command: > > > orca >~/orca.log 2>&1 & > > > > > > And then check what appears in the log file? > > > > > no, when it crashes, nothing I do can get it to restart. Before you ask, it isn't a tts > > issue since speech-dispatcher is still up and running. > > I've been running Linux for 10 years now, so I'm not your normal stupid Windows user. > > I know how to debug problems. Like I said in my earlier post, if this were a C program > > I would already have filed the bug report. The fact you can't easily debug a Python > > app is a big weakness in Orca. > > I don't have enough disk space to just leav the debug options in Orca enabled either, so this > > will likely be a bug that won't get resolved any time soon. > > > > Kenny > > > I disagree that switching from C to python was a mistake. While I personally don't like python as a language and C was always my favorite language, I wonder if what is really frustrating for you is really that your more familiar with C and its debugging techniques. From a technical perspective, they are really equivalent. In the end, it just comes down to 0 and 1. Having worked with many different languages, I do know that an average python, perl, ruby, java etc programmer will be more productive and produce more reliable code than an average C programmer. C is a wonderful language, but is much easier to shoot yourself in the foot with than something like python. 
Yes, a team of really good C programmers could probably produce a really nice Orca, but do we have such a team and how long would it take? On the other hand, a team of average Python programmers will likely be more productive and the code will likely be more stable. These days, you are more likely to find competent Python programmers than C programmers. An important part of any software project is maintainability. The fact we have more Python programmers available probably means Orca is worked on by more people than it would be if it was all in C. Tim -- Tim Cross tcross at rapttech.com.au There are two types of people in IT - those who do not manage what they understand and those who do not understand what they manage. From kenny at hittsjunk.net Tue May 25 08:25:21 2010 From: kenny at hittsjunk.net (Kenny Hitt) Date: Tue, 25 May 2010 03:25:21 -0500 Subject: disabilities In-Reply-To: <1274694779.1591.69.camel@nuuk> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> <4BF941B9.5010503@harvee.org> <4BF9B907.8010703@harvee.org> <20100524071858.GC2195@blackbox.hittsjunk.net> <1274694779.1591.69.camel@nuuk> Message-ID: <20100525082521.GA13071@blackbox.hittsjunk.net> Hi. On Mon, May 24, 2010 at 10:52:59AM +0100, Bruno Girin wrote: > how well (or not) they are followed by web site designers. My experience > in the industry is that there are very few designers who are aware of > standards and why they should be followed. And even when they are aware > of accessibility standards, they don't understand them well enough to > argue the case for following them, especially when it is perceived that > following the standards will increase the development cost. I constantly > face this problem in my day job: every time I need to write > specifications for a new web based system, I include accessibility > guidelines and invariably I get answers like "that will increase the > cost by X" or "that will delay delivery by Y" when it's not an outright > "we can't do that". > One point you might want to mention when they start talking about cost is that if I can't use the web site I'll be forced to call and talk to a person to get what I want. Which has a lower cost, making the site accessible or paying someone to answer the phone? I don't have any data to know the answer. Hopefully, someone has done such studies. Kenny From brunogirin at gmail.com Tue May 25 09:48:29 2010 From: brunogirin at gmail.com (Bruno Girin) Date: Tue, 25 May 2010 10:48:29 +0100 Subject: disabilities In-Reply-To: <20100525082521.GA13071@blackbox.hittsjunk.net> References: <4BF6EB61.6080500@harvee.org> <20100522070321.GE14174@blackbox.hittsjunk.net> <4BF7D88B.7040304@harvee.org> <19448.53266.142791.77211@rapttech.com.au> <4BF941B9.5010503@harvee.org> <4BF9B907.8010703@harvee.org> <20100524071858.GC2195@blackbox.hittsjunk.net> <1274694779.1591.69.camel@nuuk> <20100525082521.GA13071@blackbox.hittsjunk.net> Message-ID: <1274780909.1549.49.camel@nuuk> On Tue, 2010-05-25 at 03:25 -0500, Kenny Hitt wrote: > Hi. > On Mon, May 24, 2010 at 10:52:59AM +0100, Bruno Girin wrote: > > > > how well (or not) they are followed by web site designers. My experience > > in the industry is that there are very few designers who are aware of > > standards and why they should be followed.
And even when they are aware > > of accessibility standards, they don't understand them well enough to > > argue the case for following them, especially when it is perceived that > > following the standards will increase the development cost. I constantly > > face this problem in my day job: every time I need to write > > specifications for a new web based system, I include accessibility > > guidelines and invariably I get answers like "that will increase the > > cost by X" or "that will delay delivery by Y" when it's not an outright > > "we can't do that". > > > > One point you might want to mention when they start talking about cost is that if I can't use the > web site I'll ve forced to call and talk to a person to get what I want. > Which has a lower cost, making the site accessible or paying someone to answer the phone? > I don't have any data to know the answer. Hopefully, someone has done such studys. Yes, that point is always made. But because nobody ever has any numbers to identify how much business they would lose by not making their web site accessible, that argument generally doesn't work as well as it should. And what generally happens is that accessibility is added to the requirements as a low level priority (which usually means "we'll do it in release 2, 3 or whenever we can"). This is obviously the wrong way to do it, considering that web site accessibility is similar to multi-browser support and multi-language support in the sense that it's not rocket science and it's quite cheap to do if you include it from day 1 in your design. On the other hand, if you try to retrofit it to an existing system, it can be prohibitively expensive because it potentially requires a complete re-factoring of that system. As a result, getting accessibility accepted as an essential requirement is a lot easier on a green field project. The worst situation is when the business users have decided to buy a piece of software from a third party vendor without involving IT in the initial discussions. In the first meeting I have with the vendor I'll always ask about accessibility support. The response tends to be a blank stare, then a statement like "sorry, our product hasn't been designed for this" then some more argument and a statement like "ok, we can do this but it will cost you a lot and you will lose functionality X and Y", which are of course the functionality that sold the product to the business because they looked cool and use so much Javascript wizardry that they are completely un-accessible. That sort of response is usually a symptom of a badly designed product, which will potentially be a support nightmare once it's in production. But of course, that's of no interest to the project's stake holders as it's a potential future cost rather than an immediate one. Sorry about the rant, it's probably completely off topic by now! On the other hand, open source has an advantage here, in the sense that the money considerations that usually become barriers in the corporate world are not (or less of) an issue with open source. Instead they translate to time, effort and knowledge on the part of the application developers. The good thing is that we can all do something in terms of spreading the knowledge and some of us can help with time and effort. 
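Kenny's question about which option is cheaper can at least be framed, even without real data. The Python sketch below is purely illustrative: every number in it (traffic, call-handling cost, remediation cost) is a hypothetical placeholder rather than data from any study; the only point is that the break-even comparison is trivial to run once an organisation plugs in its own figures.

    # Illustrative break-even sketch; all numbers are hypothetical placeholders, not data.
    disabled_share = 0.07         # assumed share of users who would need to phone instead
    visits_per_year = 50000       # hypothetical annual online transactions
    cost_per_call = 6.0           # hypothetical cost of one phone-handled transaction
    accessibility_cost = 20000.0  # hypothetical one-off cost of building the site accessibly

    annual_saving = disabled_share * visits_per_year * cost_per_call
    print("Annual saving: %.2f" % annual_saving)
    print("Break-even after %.1f years" % (accessibility_cost / annual_saving))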
Bruno From pstowe at gmail.com Tue May 25 12:24:48 2010 From: pstowe at gmail.com (Penelope Stowe) Date: Tue, 25 May 2010 08:24:48 -0400 Subject: Reminder: Meeting Today at 21:00 UTC Message-ID: Hiya, Just a reminder that there is a meeting today at 21:00 UTC in #ubuntu-accessibility. We'll be updating everyone on what was discussed at the Ubuntu Developer Summit and start getting some structure into the group for the next 6 months. I look forward to seeing you all there! Thanks, Penelope From saatyan.kfb at gmail.com Fri May 28 12:07:29 2010 From: saatyan.kfb at gmail.com (sathyan) Date: Fri, 28 May 2010 08:07:29 -0400 Subject: LMMS In-Reply-To: References: Message-ID: <1275048449.2074.20.camel@linux-desktop> Hello, as you know there is no accessibility in LMMS, a wonderful tool for producing music. Still, a visually impaired person can play instruments in it. Install LMMS first, then copy and paste the attached file onto your desktop and enter the file. So far, we have the Orca support. Now press shift-tab and press space; the first instrument will appear. Use the keyboard to play different notes. To go to the next instrument, press control w and shift-tab, press space and play. Continue the process for the other instruments. Please let me know if you could play it at all. To quit the programme: alt f4, and then press alt tab and press n to not save. -------------- next part -------------- A non-text attachment was scrubbed... Name: LMMS4UBUNTU10.04.mmpz Type: application/x-lmms-project Size: 2538 bytes Desc: not available URL: From zps098 at gmail.com Sun May 30 16:43:03 2010 From: zps098 at gmail.com (abdul basit) Date: Sun, 30 May 2010 09:43:03 -0700 Subject: vinux Message-ID: <000601cb0017$312b7310$0401a8c0@zps0f0dbb9f6b9> Hello group, today I installed Ubuntu on my PC. I want to install the voxsin TTS but I don't know how it will work. Please tell me how to do it, thanks! abdul basit email: zps098 at hotmail.com skype: zps2121 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony.sales at rncb.ac.uk Mon May 31 21:20:08 2010 From: tony.sales at rncb.ac.uk (Anthony Sales) Date: Mon, 31 May 2010 22:20:08 +0100 Subject: Vinux 3.0 Released! Message-ID: On behalf of the whole Vinux community I am happy to announce the 3rd release of Vinux - Linux for the Visually Impaired, based on Ubuntu 10.04 - Lucid Lynx. This version of Vinux provides three screen readers, two full-screen magnifiers and dynamic font-size/colour-theme changing, as well as support for USB Braille displays. Vinux is now available both as an installable live CD and as a .deb package which will automatically convert an existing installation of Ubuntu Lucid into an accessible Vinux system! In addition, we now have our own Vinux package repository (from which you can install our customised packages with apt-get/synaptic) and a dedicated Vinux IRC channel. In the very near future we will also be launching a Vinux Wiki and releasing special DVD, USB and Virtual Editions of Vinux 3.0! To download Vinux 3.0 or to get more information on the project please visit the Vinux Project Homepage at http://vinux.org.uk or use these direct links: Download: http://sina.fi.ncsu.edu/Vinux-3.0.iso (685MB, MD5: 7cc8ac0ed5eaef45dbf215279da3660f) Mirrors: http://vinux.org.uk/downloads.html drbongo
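For anyone grabbing the ISO, the published MD5 can be checked before burning or writing it to USB. Below is a minimal sketch using Python's standard hashlib module (the filename is simply whatever the download was saved as); the stock md5sum command that ships with Ubuntu does the same job.

    import hashlib

    def md5sum(path, chunk_size=1024 * 1024):
        # Hash the file in chunks so a 685MB ISO never has to fit in memory at once.
        digest = hashlib.md5()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                digest.update(chunk)
        return digest.hexdigest()

    print(md5sum('Vinux-3.0.iso'))  # expect 7cc8ac0ed5eaef45dbf215279da3660f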