Study of using hybrid deep neural networks in character extraction from images containing text

Character segmentation from epigraphical images helps the optical character recognizer (OCR) in training and recognition of old regional scripts. The scripts or characters present in the images are illegible and may have complex and noisy background texture. In this paper, we present an automated way of segmenting and extracting characters on digitized inscriptions. To achieve this, machine learning models are employed to discern between correctly segmented characters and partially segmented ones. The proposed method first recursively crops the document by sliding a window across the image from top to bottom to extract the content within the window. This results in a number of small images for classification. The segments are classified into character and non-character classes based on the features within them. The model was tested on a wide range of input images having irregular, inconsistently spaced, handwritten and inscribed characters.
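The segmentation step in the abstract (slide a window down the page, crop each strip, keep only the crops a classifier judges to be characters) can be illustrated with a short sketch. The Python snippet below is a minimal illustration under assumptions of our own, not the authors' implementation: the window height, stride, and the is_character stub are placeholders for the hybrid deep neural network classifiers the paper actually trains, and none of the names come from the authors' code.

```python
# Minimal sketch of the sliding-window segmentation described in the abstract.
# Assumptions: grayscale uint8 input, a fixed window height and stride, and a
# stub classifier standing in for the paper's hybrid deep neural networks.
import numpy as np


def slide_and_segment(image: np.ndarray, win_height: int, stride: int):
    """Crop horizontal strips from top to bottom of the page image."""
    height = image.shape[0]
    for top in range(0, max(height - win_height, 0) + 1, stride):
        yield image[top:top + win_height, :]


def is_character(segment: np.ndarray) -> bool:
    """Placeholder for the character / non-character classifier."""
    # A trained model would score the crop here; this stub simply keeps
    # segments whose dark-pixel density suggests visible ink.
    return (segment < 128).mean() > 0.05


def extract_characters(image: np.ndarray, win_height: int = 64, stride: int = 16):
    """Return only the window crops the classifier accepts as characters."""
    return [seg for seg in slide_and_segment(image, win_height, stride)
            if is_character(seg)]
```

A smaller stride increases overlap between consecutive windows, trading extra classifier calls for a lower chance of cutting a character in half, which mirrors the abstract's distinction between correctly and partially segmented characters.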

Description

Bibliographic Details
Main Authors: P Preethi (Author), HR Mamatha (Author), Hrishikesh Viswanath (Author)
Format: Book
Published: Trends in Computer Science and Information Technology - Peertechz Publications, 2021-08-04.
Subjects:
Online Access: Connect to this object online.

MARC

LEADER 00000 am a22000003u 4500
001 peertech__10_17352_tcsit_000039
042 |a dc 
100 1 0 |a P Preethi  |e author 
700 1 0 |a HR Mamatha  |e author 
700 1 0 |a Hrishikesh Viswanath  |e author 
245 0 0 |a Study of using hybrid deep neural networks in character extraction from images containing text 
260 |b Trends in Computer Science and Information Technology - Peertechz Publications,   |c 2021-08-04. 
520 |a Character segmentation from epigraphical images helps the optical character recognizer (OCR) in training and recognition of old regional scripts. The scripts or characters present in the images are illegible and may have complex and noisy background texture. In this paper, we present an automated way of segmenting and extracting characters on digitized inscriptions. To achieve this, machine learning models are employed to discern between correctly segmented characters and partially segmented ones. The proposed method first recursively crops the document by sliding a window across the image from top to bottom to extract the content within the window. This results in a number of small images for classification. The segments are classified into character and non-character classes based on the features within them. The model was tested on a wide range of input images having irregular, inconsistently spaced, handwritten and inscribed characters. 
540 |a Copyright © P Preethi et al. 
546 |a en 
655 7 |a Research Article  |2 local 
856 4 1 |u https://doi.org/10.17352/tcsit.000039  |z Connect to this object online.