Project Update – teaching a computer to READ Bentham

By Louise Seaward, on 9 June 2017

The difficulty of Bentham’s handwriting is notorious.  At the Bentham Project, we have years of experience transcribing Bentham, but you will still regularly find us hunched over a manuscript with a magnifying glass or staring blankly at a digital image on a computer screen, zooming in and out on a particular word.

One of the Bentham Project’s favourite tools

Over the last few years, we have been working closely with various teams of computer scientists in the hope of making progress on the automated recognition of Bentham’s writing.  This collaboration started under the tranScriptorium project in 2013 and now continues in its successor project READ (Recognition and Enrichment of Archival Documents).

READ’s mission is to make archival collections more accessible through the development and dissemination of Handwritten Text Recognition (HTR) technology.  This technology is freely available through the Transkribus platform.  Using machine learning algorithms, it is possible to teach a computer to read a particular style of writing.  The technology is trained by being shown images of documents alongside their accurate transcriptions.  Anyone can start a test project with around 20,000 words (roughly 100 pages) of transcribed material.

Under the tranScriptorium project, we initially had some success in training a model to process manuscripts from the Bentham collection.  Using around 900 pages of Bentham images and transcripts, researchers from the Pattern Recognition and Human Language Technology (PRHLT) research centre at the Universitat Politècnica de València created an HTR model for us using statistical algorithms called Hidden Markov Models.  This model was able to produce relatively accurate transcriptions of the Bentham papers, with a Character Error Rate of around 18% (meaning that around 82% of the characters in a transcript would be correct).
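For readers curious how a Character Error Rate is arrived at: it is conventionally computed as the edit (Levenshtein) distance between the automatic transcript and a ground-truth transcript, divided by the length of the ground truth.  A minimal sketch in Python, using hypothetical example strings rather than anything from the Bentham corpus:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertions,
    deletions, substitutions) needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def character_error_rate(reference: str, hypothesis: str) -> float:
    """Edit distance divided by the length of the reference transcript."""
    return levenshtein(reference, hypothesis) / len(reference)

# Hypothetical example: one wrong character out of ten
ref = "panopticon"
hyp = "panoptiron"
print(character_error_rate(ref, hyp))  # 0.1, i.e. a CER of 10%
```

So an 18% CER means roughly one character in five or six needs correcting, which is why even a "relatively accurate" model still leaves substantial editing work.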

In the first stage of the READ project, we have already been able to enhance the accuracy of the HTR technology.  The team at the Computational Intelligence Technology Lab (CITlab) at the University of Rostock created a new model using this same dataset.  This model was based on Neural Networks, machine learning models loosely inspired by the structure of the human brain.  It can produce automatic transcripts of the Bentham papers with a Character Error Rate of only 5-10%.

Now it’s time to take things up a notch!  In our first experiments with HTR, we put forward ‘easier’ documents for the computer to process.  These tended to be pages written by Bentham’s secretaries where the layout is clear and the handwriting relatively neat.  Now we want to test how the computer copes with some of the worst examples of Bentham’s writing.  We are producing a new set of training data based on a selection of manuscripts which were written by Bentham himself when the philosopher was in his eighties.  Box xxx of the Bentham Papers in UCL Special Collections contains the Blackstone Familiarized papers.  These were part of Bentham’s lifelong obsession with critiquing the work of William Blackstone, the English jurist who was most famous for his Commentaries on the Laws of England (1765-9).  Bentham first turned against Blackstone as a teenage student when he attended his lectures at the University of Oxford.  In several published works and unpublished papers, Bentham argued that Blackstone was an apologist for the obvious inadequacies in the English legal system and blind to the necessity of reform.

 

Screenshot of page from Blackstone Familiarized in Transkribus. UCL Special Collections, Bentham Papers, Box xxx, fo. 156 [Image: UCL Special Collections]

The Blackstone Familiarized papers have been digitised by UCL Creative Media Services and transcribed by Professor Philip Schofield, the Director of the Bentham Project and General Editor of the Collected Works of Jeremy Bentham.  The images were uploaded to Transkribus and Chris Riley, a PhD student from the Faculty of Laws, has been marking the lines of text on each image and then copying the transcripts into the platform.

We are aiming to produce 200 pages of ‘difficult’ Bentham training data which can be fed into a new version of our latest HTR model.  We are also interested in comparing the accuracy of different models.  How far does this new material enhance the accuracy of the models we already have and would it be worthwhile to have separate models for Bentham himself and his secretaries?

Automated recognition of Bentham’s handwriting would considerably speed up the full transcription of his writings and the publication of his Collected Works.  We also want to experiment with HTR technology in a new version of Transcribe Bentham, where volunteer transcribers could ask the computer to suggest readings of words that they find difficult to decipher.  Until then, we have some more transcribing to do!