This demo showcases how the Sign & Speak program uses machine learning (ML) to build a tool that facilitates communication between users of sign language and users of spoken language. By combining artificial intelligence (AI) models trained to transcribe speech and interpret sign language with a camera and a microphone, the tool enables two-way conversation in situations where communication was previously challenging.

Ref Link: https://github.com/aws-samples/aws-builders-fair-projects/blob/master/reinvent-2019/sign-and-speak/README.md

Amazon Transcribe makes it easy for developers to add speech-to-text capabilities to their applications. Raw audio data is difficult for computers to search and analyze, so recorded speech must be converted to text before it can be used in applications. Historically, customers had to work with transcription providers that required expensive contracts and were difficult to integrate into their technology stacks. Many of these providers use outdated technology that adapts poorly to scenarios such as the low-fidelity phone audio common in contact centers, resulting in poor accuracy.

Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. It can be used to transcribe customer service calls, automate subtitling, and generate metadata for media assets to create a fully searchable archive. Amazon Transcribe Medical adds medical speech-to-text capabilities to clinical documentation applications.
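As a rough sketch of how an application like this demo might submit recorded audio to Amazon Transcribe using the AWS SDK for Python (boto3): the bucket name, file name, and job name below are placeholders, and the helper function is illustrative, not part of the Sign & Speak code.

```python
def build_transcription_request(job_name: str, media_uri: str) -> dict:
    """Assemble parameters for a basic Amazon Transcribe job.

    The S3 URI and job name are placeholders; MediaFormat and
    LanguageCode should match the actual audio being submitted.
    """
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": "wav",
        "LanguageCode": "en-US",
    }


def start_job(request: dict) -> None:
    """Submit the job to Amazon Transcribe.

    Requires AWS credentials with transcribe:StartTranscriptionJob
    permission. boto3 is imported here so the request-building helper
    above can be used without the SDK installed.
    """
    import boto3

    transcribe = boto3.client("transcribe")
    transcribe.start_transcription_job(**request)
    # An application would then poll
    # get_transcription_job(TranscriptionJobName=...) until
    # TranscriptionJobStatus is "COMPLETED" and fetch the result
    # from the returned TranscriptFileUri.
```

The job runs asynchronously, which is why the demo-style flow above separates submitting the job from retrieving the transcript.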

Written by admin