Interactive Ink SDK WebSockets API architecture
MyScript iink SDK web exposes the WebSockets API, enabling remote access to the Interactive Ink technology.
The WebSockets API is the lowest-level API available in client-server mode. This documentation section assumes that you are familiar with interactive ink and WebSocket concepts. To use this API, you have to capture, render and edit strokes and rich content on the client side with your own software. Consider using the client-side libraries instead if you want to integrate handwriting recognition into a web application.
The iink WebSocket API is built for modern browsers and good network conditions. As handwriting recognition is very CPU-intensive, it cannot be deployed client-side. We chose to use the client side only for ink capture and content rendering; all the recognition is done server-side.
Interactive Ink SDK introduces the notions of content package and content part. Currently, only MATH and TEXT content parts can be manipulated through the iink WebSocket API. In brief, you have to create an editor that creates a content package containing a content part. It is not possible to add several content parts to a web content package.
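As a sketch of this constraint, a client could validate the part type before opening a part. The message shape and the name `newContentPartMessage` are assumptions for illustration, not taken from the protocol specification:

```javascript
// Hypothetical helper building the message that opens a content part.
// Only TEXT and MATH parts can be manipulated through this API.
function newContentPartMessage(contentType) {
  if (contentType !== "TEXT" && contentType !== "MATH") {
    throw new Error("Unsupported content type: " + contentType);
  }
  return { type: "newContentPart", contentType: contentType };
}
```

Rejecting unsupported types on the client side avoids a round trip to the server for a request that is bound to fail.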
Before using the WebSockets API, you have to understand the lifecycle of handwriting recognition.
Information such as the type of content you want to recognize (TEXT or MATH), the language, the size of your input zone and other parameters has to be provided.
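The parameters listed above could be gathered in a configuration object along these lines. The field names and values are illustrative assumptions; only the kinds of information (content type, language, input size) come from the text:

```javascript
// A sketch of the configuration a client provides before recognition
// starts. Field names are assumed for illustration.
const configuration = {
  contentType: "TEXT",     // or "MATH"
  lang: "en_US",           // recognition language
  xDpi: 96,                // resolution of the capture surface
  yDpi: 96,
  viewSizeWidth: 800,      // size of the input zone, in pixels
  viewSizeHeight: 600
};
```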
User input is what we call a stroke, i.e. a series of points with the timestamp of their capture. More precisely, user input is a pending stroke, meaning that it has not yet been processed by the server recognition engine. The application capturing strokes generally renders these temporary strokes to give immediate feedback to the user.
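A pending stroke can be modeled as parallel arrays of coordinates and timestamps, filled as pointer events arrive. This is a minimal sketch with assumed names, not the SDK's own data structure:

```javascript
// Create an empty pending stroke: x/y coordinates plus the capture
// timestamp of each point.
function createStroke() {
  return { x: [], y: [], t: [] };
}

// Append one captured point to the stroke (t in milliseconds).
function addPoint(stroke, x, y, t) {
  stroke.x.push(x);
  stroke.y.push(y);
  stroke.t.push(t);
  return stroke;
}
```

In a browser, `addPoint` would typically be called from pointer-event handlers while the application also draws the point for immediate feedback.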
The stroke is sent to the server, which acknowledges its reception. The server then answers with an SVG patch containing the temporary stroke to display to the user. A good practice is to replace your temporary rendering of the stroke with the content of this patch.
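The send-and-render round trip could be sketched as follows. The message names (`addStrokes`, `svgPatch`) and the `renderer` interface are assumptions modeled on the behavior described above:

```javascript
// Send a pending stroke to the server over an open WebSocket.
// "addStrokes" is an assumed message name.
function sendStroke(socket, stroke) {
  socket.send(JSON.stringify({ type: "addStrokes", strokes: [stroke] }));
}

// Handle an incoming server message. On an SVG patch, replace the
// client's temporary rendering with the server's version of the stroke.
function handleMessage(rawMessage, renderer) {
  const message = JSON.parse(rawMessage);
  if (message.type === "svgPatch") {
    renderer.removeTemporaryStroke();  // drop the local, temporary stroke
    renderer.applyPatch(message.updates);
  }
  return message.type;
}
```

Swapping the temporary rendering for the server patch keeps what the user sees consistent with what the recognition engine actually received.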
Gesture detection is performed as soon as the server acknowledges the reception of the stroke. The server then tries to recognize the content. A message sharing the state is sent to the client at each step.
While writing, the user may want to undo, redo or clear the input zone. Each of these actions has to be sent by the client to the server. It is also possible to ask for an export: the server then answers with the last recognized result in the desired format. You can also import content in a format supported by the content part. Finally, the user may want to convert the content from its handwritten form to a typeset, digital one. This action is called conversion; the server answers with a patch containing text and glyphs instead of strokes.
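The editing actions above could be expressed as small message builders. The message and field names are assumptions for illustration; the real protocol may differ:

```javascript
// Hypothetical builders for the editing-action messages described above.
function undoMessage()  { return { type: "undo" }; }
function redoMessage()  { return { type: "redo" }; }
function clearMessage() { return { type: "clear" }; }

// Ask for the last recognized result in the desired formats,
// e.g. ["text/plain"] for TEXT or ["application/x-latex"] for MATH.
function exportMessage(mimeTypes) {
  return { type: "export", mimeTypes: mimeTypes };
}

// Request conversion to typeset: the server answers with a patch
// containing text and glyphs instead of strokes.
function convertMessage() {
  return { type: "convert" };
}
```

Each builder would be serialized with `JSON.stringify` and sent over the open WebSocket, like the stroke messages.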