Project Update – Improving the Automated Recognition of Bentham’s handwriting
By uczwlse, on 28 November 2018
As our volunteer transcribers know, getting to grips with Bentham’s handwriting can be a steep learning curve. Bentham never wrote particularly neatly and his scrawl became increasingly difficult to comprehend as he grew older. Since 2013, the Bentham Project has been experimenting with advanced machine learning technology via the Transkribus platform in an attempt to train algorithms to automatically decipher Bentham’s handwriting. And we have lately seen vastly improved results!
Handwritten Text Recognition (HTR) technology is open to anyone around the world thanks to Transkribus and the READ project. Once users have installed the platform, they can set about processing images and transcripts as training data for automated text recognition. The software uses computational models for machine learning called neural networks. These networks are trained to recognise a style of writing by being shown images and transcripts of that writing. Anyone can start a test project in Transkribus by uploading around 75 pages of digitised images to the platform and transcribing each page as fully as possible. The software learns from everything it is shown, so the more pages of training data, the better! Find out more about getting started with Transkribus in the Transkribus How to Guides.
When we started working with HTR, it is fair to say that we were somewhat uncertain about the capabilities of the technology. So we decided to focus on training a model to recognise some of the easier papers in the Bentham collection – those written by Bentham’s secretaries, who tend to have neat handwriting. Using around 900 pages of images and transcripts, we trained a model that is now publicly available to all Transkribus users under the name ‘English Writing M1’. This model can produce transcripts of pages from the Bentham collection with a Character Error Rate (CER) of between 5% and 20%. It produces good transcripts of pages written by Bentham’s secretaries but struggles to decipher Bentham’s own hand.
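For readers unfamiliar with the metric: Character Error Rate is the standard edit-distance measure for transcription quality – the minimum number of character insertions, deletions, and substitutions needed to turn the automated transcript into the reference transcript, divided by the length of the reference. The sketch below is a minimal illustration of that definition (not Transkribus’s own implementation); the sample strings are invented for the example.

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Minimum number of single-character edits (insertions, deletions,
    substitutions) needed to turn hyp into ref."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance divided by reference length."""
    return levenshtein(reference, hypothesis) / len(reference)

# One missing character in an 18-character reference -> CER of about 5.6%
print(round(cer("greatest happiness", "greatest hapiness"), 3))
```

A CER of 9% therefore means roughly one character in eleven needs correcting, which is why the 5% threshold mentioned below matters so much for editing speed.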
So our next challenge was to improve the recognition of Bentham’s most difficult handwriting. For the past 18 months we have been continually creating training data in Transkribus based on very complex pages from the Bentham collection, periodically retraining HTR models and then assessing the results. Until recently, our best result was a model trained on 81,000 transcribed words (around 340 pages) which used the ‘English Writing M1’ model as a base model. By using a base model, Transkribus users can give the system a boost and ensure that it builds directly on what it has already learnt from the creation of an earlier model. In this case, our resulting model could produce transcripts with an average CER of 17.75%.
The great thing about working with Transkribus is that the technology is improving all the time, thanks to the efforts of the computer scientists who work on the READ project. The latest innovation is HTR+, a new form of Handwritten Text Recognition technology formulated by the CITlab team at the University of Rostock. HTR+ is based on TensorFlow, a software library developed by Google. It works similarly to the existing HTR but processes data much faster, meaning that the algorithms can learn more quickly and so produce better results. We used HTR+ to train a model on 140,000 transcribed words (or 535 pages) of Bentham’s most difficult handwriting. This model can generate transcripts with a CER of around 9%.
HTR+ is not yet available to all Transkribus users – but users can request access by sending an email to the Transkribus team (firstname.lastname@example.org).
We are getting closer to the reliable recognition of Bentham’s handwriting and this is very exciting! As a scholarly editing project dedicated to producing Bentham’s Collected Works, we require highly accurate transcripts as a basis for our work. The experience of other Transkribus users suggests that transcripts which have a CER of around 5% can be corrected rapidly and easily. So our next priority is to conduct some tests to see how easy Bentham Project researchers find it to correct and edit transcripts generated by this model where the CER is 9%.
We will also continue creating new pages of training data in Transkribus using images and transcripts of Bentham’s most difficult handwriting. As well as retraining our current model with additional pages of data, we want to create smaller models focused on specific hands and languages in the Bentham collection. This new training data could also be used to improve our Keyword Spotting tool, which was set up by the PRHLT research center at the Universitat Politècnica de València.
We are also preparing a large-scale experiment with Text2Img matching technology devised by the CITlab team. This technology allows users to use existing transcripts as training data for HTR, rather than creating transcripts afresh in Transkribus. We hope that this technology will allow us to create a new model based on several thousand pages from the Bentham collection – watch this space!
And of course, we can’t forget Transcribe Bentham. We still plan to integrate HTR technology directly into our crowdsourcing platform over the next few years. The idea is that users will be able to check and correct automated transcripts, or simply transcribe as normal and receive computer-generated suggestions for words that are difficult to decipher. We believe that new users, who tend to be daunted by the complexity of Bentham’s handwriting, are likely to find these transcription options more attractive. Experienced users may also appreciate word suggestions to assist their transcription work.
The Bentham Project is at the cutting-edge of this transformational technology and we hope that these advances will ultimately bring us closer to the complete transcription and publication of Bentham’s Collected Works.
My thanks go to Chris Riley, Transcription Assistant at the Bentham Project, for his assistance with the preparation of training data in Transkribus.