What is the difference between OMR, OCR and ICR?
OMR, OCR and ICR are all methods of capturing data from external sources (mostly paper scans) by recognizing marks and symbols. OMR (Optical Mark Recognition) only detects the presence of marks (such as filled bubbles, checks and crosses) in predefined areas. OCR (Optical Character Recognition), on the other hand, is the process of detecting machine-printed character patterns (symbols and letters) in scanned images. ICR (Intelligent Character Recognition) is similar to OCR, but it is a more sophisticated technology that can read handwritten document images and produce textual data.
OMR (Optical Mark Recognition)
OMR hardware and software detect marks such as checks, crosses and filled areas, and gather information from the state of predefined mark areas.
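The core of mark detection can be reduced to a simple question: how dark is a predefined region of the scan? The sketch below illustrates this with a hypothetical grayscale raster; the region coordinates and the 40% fill threshold are illustrative assumptions, not values from any particular OMR product.

```python
# Minimal OMR sketch: decide whether a predefined bubble area is marked.
# The image is a toy grayscale raster (0 = black, 255 = white); the
# coordinates and thresholds below are illustrative assumptions.

def fill_ratio(image, top, left, height, width, dark_threshold=128):
    """Fraction of pixels in the region darker than dark_threshold."""
    dark = sum(
        1
        for row in image[top:top + height]
        for px in row[left:left + width]
        if px < dark_threshold
    )
    return dark / (height * width)

def is_marked(image, region, fill_threshold=0.4):
    """A bubble counts as marked when enough of its area is dark."""
    return fill_ratio(image, *region) >= fill_threshold

# Tiny synthetic scan: a filled bubble on the left, an empty one on the right.
scan = [
    [255, 255, 255, 255, 255, 255, 255, 255],
    [255,  10,  20, 255, 255, 255, 255, 255],
    [255,  15,   5, 255, 255, 255, 230, 255],
    [255, 255, 255, 255, 255, 255, 255, 255],
]
print(is_marked(scan, (1, 1, 2, 2)))  # True  (filled bubble)
print(is_marked(scan, (1, 5, 2, 2)))  # False (empty bubble)
```

Because the regions are known in advance from the form layout, OMR needs no character models at all, which is one reason it can be made so reliable.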
OCR (Optical Character Recognition)
OCR technologies recognize printed characters and symbols and convert paper scans into text. Some OCR engines also capture style information (font, character size, character style) and produce documents in rich-text formats.
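Machine-printed text is feasible to recognize because every instance of a character looks nearly the same, so a recognizer can compare a glyph against stored shapes. The toy example below classifies a binarized glyph by counting mismatching pixels against templates; real OCR engines use far richer features, and the 3x3 bitmaps are illustrative assumptions.

```python
# Toy OCR sketch: classify a binarized glyph by comparing it against
# machine-printed templates. The 3x3 bitmaps are illustrative only;
# production engines use much richer features and language models.

TEMPLATES = {
    "I": ((0, 1, 0),
          (0, 1, 0),
          (0, 1, 0)),
    "L": ((1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)),
    "T": ((1, 1, 1),
          (0, 1, 0),
          (0, 1, 0)),
}

def recognize(glyph):
    """Return the template character with the fewest mismatching pixels."""
    def distance(template):
        return sum(
            t != g
            for trow, grow in zip(template, glyph)
            for t, g in zip(trow, grow)
        )
    return min(TEMPLATES, key=lambda ch: distance(TEMPLATES[ch]))

# A slightly noisy "T": one pixel flipped in the bottom row.
noisy_t = ((1, 1, 1),
           (0, 1, 0),
           (0, 1, 1))
print(recognize(noisy_t))  # T
```

The fixed set of templates is exactly what breaks down for handwriting, where the same letter varies from writer to writer; that gap is what ICR addresses.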
ICR (Intelligent Character Recognition)
ICR algorithms can read handwriting from images. They need to be trained for different languages and character sets. To achieve better accuracy, some hand-filled forms are designed so that each letter is written in a separate box. Optical forms may therefore contain ICR boxes alongside OMR bubbles, with less critical information (such as comments) collected from the ICR boxes.
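The key difference from fixed-template OCR is that an ICR model is trained on labeled handwriting samples and classifies new glyphs by similarity to what it has seen. A minimal sketch of that idea, using a 1-nearest-neighbour rule over hypothetical feature vectors (the vectors and labels below are invented for illustration):

```python
# ICR sketch: classify a glyph by its similarity to labeled handwriting
# samples. The 1-nearest-neighbour rule and the tiny feature vectors
# (e.g. stroke densities) are illustrative assumptions, not a
# production handwriting-recognition algorithm.

# "Training set": feature vectors with their character labels.
TRAINING = [
    ((0.90, 0.10, 0.80), "a"),
    ((0.20, 0.90, 0.30), "b"),
    ((0.85, 0.15, 0.75), "a"),
]

def classify(features):
    """Label of the nearest training sample (squared Euclidean distance)."""
    def sq_dist(sample):
        return sum((f - s) ** 2 for f, s in zip(features, sample))
    _, label = min(TRAINING, key=lambda pair: sq_dist(pair[0]))
    return label

print(classify((0.88, 0.12, 0.80)))  # a
```

Training per language and character set, as mentioned above, corresponds to collecting such labeled samples for each alphabet the system must read.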
Error Margin
OMR applications require almost perfect detection of marks, because the data they produce have critical consequences (such as students' exam scores, the winner of a lottery, or the count of ballot votes). OCR and ICR technologies have an acceptable error margin. ICR's error margin is higher than OCR's, because reading handwriting can be difficult even for human beings.
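Error margins for OCR and ICR are commonly quantified as a character error rate (CER): the edit distance between the recognizer's output and the ground truth, divided by the length of the truth. A sketch, with invented sample strings:

```python
# Quantifying the error margin: character error rate (CER) compares
# recognizer output to ground truth via edit distance. The sample
# strings below are illustrative.

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def char_error_rate(truth, hypothesis):
    return edit_distance(truth, hypothesis) / len(truth)

truth   = "intelligent character recognition"
ocr_out = "intelligent character recognition"  # printed text: exact match
icr_out = "inteligent charocter recogniton"    # handwriting: 3 errors

print(char_error_rate(truth, ocr_out))  # 0.0
print(char_error_rate(truth, icr_out))  # roughly 0.09
```

By this measure, a usable OCR pipeline might sit near zero CER on clean print, while ICR on unconstrained handwriting typically lands noticeably higher, which is why critical answers go into OMR bubbles rather than ICR boxes.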