Originally Published: 03 October 2018
The world's first machine capable of turning British Sign Language (BSL) into written English is set to be built by the University of Surrey as part of a project funded by the Engineering and Physical Sciences Research Council.
BSL is a language in its own right, with its own grammar, and it is very different from English. BSL uses several parts of the body simultaneously to fully express a range of phrases, ideas and emotions.
The project, worth just under £1 million, will see the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey develop a new system that will look to recognise not only hand motion and shape, but also the facial expression and body posture of the signer. This unique machine will then work out how these aspects can be put together into phrases and how these phrases can then be translated into written and spoken language.
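The multi-channel recognition described above can be pictured as a pipeline: per-frame features for the hands, face and body are fused into individual signs ("glosses"), which are then reordered into an English sentence. The sketch below is purely illustrative, with hypothetical feature names and toy lookup tables; it is not the ExTOL system, which uses deep learning rather than hand-written rules.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: all feature names, glosses and the tiny
# lookup tables below are invented for this example, not taken from ExTOL.

@dataclass
class FrameFeatures:
    hand_shape: str          # e.g. "point-self", "flat-sweep"
    facial_expression: str   # facial channels carry grammar in BSL
    body_posture: str        # e.g. "upright", "lean-forward"

def recognise_gloss(frame: FrameFeatures) -> str:
    """Fuse the three channels into a single sign gloss (toy lookup)."""
    table = {
        ("point-self", "neutral", "upright"): "ME",
        ("flat-sweep", "neutral", "upright"): "NAME",
        ("fingerspell-J", "neutral", "upright"): "J-O",
    }
    key = (frame.hand_shape, frame.facial_expression, frame.body_posture)
    return table.get(key, "<UNK>")

def glosses_to_english(glosses: List[str]) -> str:
    """BSL ordering differs from English, so glosses must be rephrased,
    not transcribed word-for-word (toy rule for one pattern)."""
    if glosses[:2] == ["ME", "NAME"] and len(glosses) == 3:
        return f"My name is {glosses[2].replace('-', '').capitalize()}"
    return " ".join(glosses)

frames = [
    FrameFeatures("point-self", "neutral", "upright"),
    FrameFeatures("flat-sweep", "neutral", "upright"),
    FrameFeatures("fingerspell-J", "neutral", "upright"),
]
glosses = [recognise_gloss(f) for f in frames]
print(glosses_to_english(glosses))  # -> My name is Jo
```

The point of the sketch is the separation of concerns the article describes: recognising signs from several body channels at once is one problem; turning the resulting sign sequence into grammatical English is another.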
The project is being carried out in partnership with linguists from the Deafness Cognition and Language Research Centre at University College London, and the Engineering Science team at the University of Oxford.
Richard Bowden, Professor of Computer Vision at the University of Surrey, said: "We believe that this project will be seen as an important landmark for deaf-hearing communications, allowing the deaf community to fully participate in the digital revolution that we are all currently enjoying.
"We are passionate about sign language at CVSSP, so much so that everyone who works in this area within our lab is asked to learn how to sign."
Professor Adrian Hilton, Director of CVSSP, said: "This project builds on a track record of internationally leading research on visual understanding and translation of sign language in CVSSP, led by Prof. Bowden and his team. ExTOL is an exciting opportunity to realise the first end-to-end system for translation of British Sign Language, leveraging recent advances in machine perception and AI."
For more details, take a look at the ExTOL: End to End Translation of British Sign Language project.
Earlier this year, Professor Bowden's team published a paper detailing the first AI and deep learning system that can perform end-to-end translation directly from sign language to spoken language. The paper was published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, the top conference for computer vision.