A month after announcing changes to ML Kit, its toolset for developers to infuse apps with AI, Google today launched the Digital Ink Recognition API on Android and iOS to allow developers to create apps where stylus and touch act as inputs. As the name implies, the API — which is powered by the same technology underpinning Google’s Gboard software keyboard, Quick Draw, and AutoDraw — looks at a user’s strokes on the screen and recognizes what they’re writing or drawing.
Google says that with the new Digital Ink Recognition API, developers can enable users to input text and figures with a finger or stylus, or transcribe handwritten notes to make them searchable. Classifiers parse written text into a string of characters, while other classifiers describe drawings, sketches, and emojis by the class to which they belong (e.g., circle, square, happy face).
The Digital Ink Recognition API performs processing in near-real-time and on-device, according to Google, with support for over 300 languages and more than 25 writing systems, including all major Latin-script languages as well as Chinese, Japanese, Korean, Arabic, and Cyrillic. Developers must download one or more classifiers, each weighing in at around 20MB, and Google says recognition takes about 100 milliseconds, depending on device hardware and the length of the input stroke sequence.
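On Android, ML Kit's documented flow has three steps: download a per-language model, assemble the user's timestamped touch points into strokes and an `Ink` object, and pass that to a recognizer. A minimal Kotlin sketch along those lines (the `en-US` language tag and the sample coordinates are illustrative, not from the article):

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.vision.digitalink.DigitalInkRecognition
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModel
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModelIdentifier
import com.google.mlkit.vision.digitalink.DigitalInkRecognizerOptions
import com.google.mlkit.vision.digitalink.Ink

// 1. Pick a language model (roughly 20MB) and download it on demand.
val modelIdentifier = DigitalInkRecognitionModelIdentifier.fromLanguageTag("en-US")!!
val model = DigitalInkRecognitionModel.builder(modelIdentifier).build()
RemoteModelManager.getInstance()
    .download(model, DownloadConditions.Builder().build())

// 2. Build an Ink object from touch/stylus points: x, y, and a timestamp in ms.
//    In a real app these would come from MotionEvent callbacks.
val stroke = Ink.Stroke.builder()
    .addPoint(Ink.Point.create(100f, 50f, 0L))
    .addPoint(Ink.Point.create(120f, 80f, 16L))
    .build()
val ink = Ink.builder().addStroke(stroke).build()

// 3. Run on-device recognition; candidates come back ranked by score.
val recognizer = DigitalInkRecognition.getClient(
    DigitalInkRecognizerOptions.builder(model).build()
)
recognizer.recognize(ink)
    .addOnSuccessListener { result ->
        // Top candidate is the recognizer's best guess at the written text.
        println(result.candidates.first().text)
    }
```

The same pattern applies on iOS via the corresponding Swift/Objective-C classes; only the model download and the stroke capture are platform-specific.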
The new API comes after Google added new natural language processing services for ML Kit, including Smart Reply, last year. (Smart Reply suggests text responses based on the last 10 exchanged messages and runs entirely on-device, and it’s been incorporated into Gmail, Google Chat, and Google Assistant on smart displays and smartphones.) Last year during Google’s I/O 2019 developer conference, three new capabilities came to ML Kit in beta, starting with a translation API supporting 58 languages and a pair of APIs that let apps locate and track objects of interest in a live camera feed in real time. More recently, ML Kit gained support for custom TensorFlow Lite image labeling, object detection, and object tracking models as it transitioned from ML Kit for Firebase’s on-device APIs to a new standalone SDK (ML Kit SDK) that doesn’t require a Firebase project.
Earlier this year, Google noted that more than 25,000 applications on Android and iOS now use ML Kit's features, up from just a handful at its introduction in May 2018. Much like Apple's Core ML, ML Kit is built to tackle challenges in the vision and natural language domains, including text recognition and translation, barcode scanning, and object classification and tracking.