[Athen] Response to questions about accessible lecture capture solutions

Pat BROGAN pat at automaticsync.com
Fri Jun 26 13:29:19 PDT 2009


EA and Gerry raised some questions about accessible lecture capture
options and captioning solutions. Having just come from EduComm, it
seems like the main sponsors were all lecture capture companies. I'll
present some information, but let me expose my bias upfront: I now work
for Automatic Sync, the company referenced in the query, and previously
worked for Echo360, where I managed the partnership between Echo360 and
Automatic Sync. Since coming to Automatic Sync I have been working with
Mediasite, Panopto, and TechSmith. In my former role as VP of education
and elearning at Macromedia I worked for years with many universities on
accessibility, and I wrote the standards section of The eLearning
Handbook. Bias out of the way... I wrote a whitepaper, "Making Lectures
Accessible," which is posted at
http://www.automaticsync.com/caption/echo.htm,
along with a research paper for UW Australia on the benefits of lecture
capture for students with disabilities.

Accessibility really means two different things in the context of
acquiring lecture capture systems. In the US, Section 508 has a FAR
(Federal Acquisition Regulation) provision which says that government
agencies and organizations receiving federal funds must buy the most
compliant system. Compliance is documented in the VPAT (Voluntary
Product Accessibility Template), which gets registered with the
government at:
http://www.section508.gov/index.cfm?FuseAction=content&ID=12. The
Automatic Sync VPAT is at:
http://www.automaticsync.com/caption/govtregs.htm . My understanding is
that most of the lecture capture systems are not compliant because of
some database issues, but their output is, or can be made to be,
compliant through captioning.
The consequences of not buying the most compliant system can include
economic penalties. This aspect generally focuses on the use of the tool
itself: can a disabled person operate the lecture capture system?

The second and more important aspect is: is the content the system
generates compliant? For the lecture capture vendors, this means that
users can navigate with tools other than a mouse and can control content
navigation and flow. But the big challenge is really making the content
accessible: the audio and video need synced captions. In the workflow
AST built with our partners, the goal is to schedule a class to be
captured, designate at that time that it will be transcribed and
captioned, and then have the workflow happen automatically (though a
stenographer still does the transcription). Under a Department of
Education grant, we looked at how to automate the workflow, and
unfortunately speech recognition tools alone cannot yet automate the
entire process with good enough quality. I'll be glad to share
information about error rates and comprehension from the research. So
our focus has been on reducing costs by automating the process, and we
offer very competitive prices and very high quality.
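To make the workflow concrete, here is a minimal sketch in Python of the
capture-to-caption pipeline described above. This is purely illustrative
and not any vendor's actual API: the function names, the segment format,
and the word-counting alignment are all my own simplifications. In the
real workflow the transcription step is done by a human stenographer and
the alignment by dedicated sync tools.

```python
# Hypothetical sketch of the capture-to-caption workflow: a recording
# is transcribed, then the transcript is aligned back to timestamps.

def transcribe(audio_segments):
    """Stand-in for the stenographer step: returns plain text."""
    return " ".join(text for _, _, text in audio_segments)

def align(transcript, audio_segments):
    """Stand-in for automatic alignment: attach start/end times
    to runs of transcript words."""
    words = transcript.split()
    captions, i = [], 0
    for start, end, text in audio_segments:
        n = len(text.split())
        captions.append((start, end, " ".join(words[i:i + n])))
        i += n
    return captions

def caption_workflow(audio_segments):
    transcript = transcribe(audio_segments)   # manual today
    return align(transcript, audio_segments)  # automated sync

segments = [(0.0, 2.5, "Welcome to the lecture."),
            (2.5, 5.0, "Today we cover Section 508.")]
print(caption_workflow(segments))
```

The point of the sketch is the shape of the pipeline: only the middle
(transcription) step resists automation today; scheduling and sync can
already happen without human intervention.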

Once the captions are generated, they are synced with the audio, video,
and VGA. Users can turn captions on or off. Transcripts can be uploaded
and searched in most systems, which adds real value for content
reusability.

I could go on and on about how different approaches can work. Does this
help?
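For readers unfamiliar with what "synced captions" look like on disk,
here is an illustrative Python snippet that emits cues in the common SRT
format (timestamps paired with text). The cue data is invented for the
example; real caption files come out of the transcription and sync
process described above.

```python
# Illustrative only: render (start_seconds, end_seconds, text) cues
# as an SRT caption file, the format many players accept.

def srt_time(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    ms = int(round((seconds - int(seconds)) * 1000))
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Welcome to the lecture."),
              (2.5, 5.0, "Today we cover Section 508.")]))
```

Because each cue carries its own timestamps, a player can toggle the
caption track on or off independently of the audio and video, and the
plain-text cues are what make transcript search possible.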





