MyScript iink SDK web exposes the WebSockets API, enabling remote access to the Interactive Ink technology.
The WebSockets API is the lowest-level API available in client-server mode. This documentation section assumes that you are familiar with interactive ink and WebSocket concepts. To use this API, you will have to capture, render and edit strokes and rich content on the client side with your own software. Consider using the client-side libraries if you want to integrate handwriting recognition into a web application.
The iink WebSocket API is built for modern browsers and good network conditions. As handwriting recognition is very CPU-intensive, it cannot be deployed client-side. We have chosen to use the client side only for ink capture and content rendering; all the recognition is done server-side.
To ensure good recognition and accurate gesture detection, stroke positions have to be exactly the same on the client side and on the server side. The following protocol is built assuming that the server knows the exact position of strokes and content as displayed to the user.
Before using the WebSockets API, you have to understand that we offer two different interactivity modes with WebSockets: onscreen interactivity and offscreen interactivity.
With onscreen interactivity, the server maintains an interactive context and also performs the rendering: the WebSockets APIs let you edit the content and dynamically get the updated recognition result. The server computes the rendering updates as well, and notifies them as SVG patches that your application (or iinkTS) uses to update its view. There is a single ink model, and it lives on the server side. Because rendering is performed by the server, the SVG content and patches it sends cannot be re-styled client-side with CSS; you have to use JavaScript exclusively to manipulate the content inside the editor.
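As an illustration, a minimal client-side handler for server notifications could look like the sketch below. The endpoint URL, the message type names (`svgPatch`, `exported`, `error`) and the patch payload shape are assumptions based on the general shape of the onscreen protocol; refer to the onscreen interactivity message reference for the exact format.

```typescript
// Minimal sketch of an onscreen-mode message handler (browser TypeScript).
// Endpoint, message type names and payload shapes are illustrative assumptions.
const ws = new WebSocket("wss://cloud.myscript.com/api/v4.0/iink/document"); // illustrative endpoint
const svgLayer = document.getElementById("editor-layer") as HTMLElement;

ws.onmessage = (event: MessageEvent<string>) => {
  const message = JSON.parse(event.data);
  switch (message.type) {
    case "svgPatch":
      // The server renders the ink and sends SVG updates; the client only
      // injects them into its view. Restyling must be done via JavaScript,
      // not CSS, since the SVG is produced server-side.
      svgLayer.innerHTML = message.svg; // illustrative: real patches are incremental updates
      break;
    case "exported":
      console.log("Recognition result:", message.exports);
      break;
    case "error":
      console.error("Server error:", message.message);
      break;
    default:
      // Other lifecycle messages (acknowledgements, state changes, ...) arrive here.
      break;
  }
};
```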
MyScript iink SDK brings the notions of content package and content part. Currently, MATH and TEXT content parts can be manipulated by the iink WebSocket onscreen interactivity APIs. In brief, you have to create an editor that creates a content package containing a content part. It is not possible to add several content parts in a web content package.
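For instance, once the connection is established, editor initialization could look like the following sketch. The message names (`newContentPackage`, `newContentPart`) and their fields follow the general shape of the protocol but should be treated as illustrative assumptions.

```typescript
// Illustrative sketch of editor initialization: one content package, one part.
// Message names and fields are assumptions; check the protocol reference.
function initEditor(ws: WebSocket, applicationKey: string): void {
  // Create the (single) content package for this editor session.
  ws.send(JSON.stringify({
    type: "newContentPackage",
    applicationKey,
    xDpi: 96,
    yDpi: 96,
    viewSizeWidth: 800,   // must match the client display (see the note on stroke positions above)
    viewSizeHeight: 600,
  }));

  // Create the single content part; onscreen interactivity supports TEXT and MATH.
  ws.send(JSON.stringify({
    type: "newContentPart",
    contentType: "TEXT",  // or "MATH"
  }));
}
```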
Before using the WebSockets API, you have to understand the lifecycle of onscreen interactivity handwriting recognition.
Information such as the type of content you want to recognize (TEXT or MATH), the language, the size of your input and other parameters has to be provided.
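For example, a configuration step could carry parameters like the ones below; the field names mirror common iink configuration keys but are assumptions here, so check the configuration reference for the exact schema.

```typescript
// Illustrative configuration message for onscreen interactivity (TEXT here).
// Field names are assumptions; see the configuration reference for the schema.
function configureOnscreen(ws: WebSocket): void {
  ws.send(JSON.stringify({
    type: "configuration",
    lang: "en_US",                        // recognition language
    text: { guides: { enable: true } },   // TEXT-specific options
    export: { jiix: { strokes: false } }, // preferred export options
  }));
}
```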
User input is what we call a stroke, i.e. a series of points with the timestamp of their capture. To be more precise, user input is a pending stroke, meaning that it has not yet been processed by the server recognition engine. The application capturing strokes generally renders these temporary strokes to give immediate feedback to the user.
The stroke is sent to the server, which acknowledges its reception and then answers with an SVG patch containing the temporary stroke to display to the user. A good practice is to replace your temporary rendering of the stroke with the content of this patch.
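A pending stroke can be represented client-side as a series of points with their capture timestamps, for example as in the sketch below. The wire format expected by the server is documented in the message reference; the shape used here is an assumption.

```typescript
// Illustrative client-side representation of a pending stroke:
// parallel arrays of coordinates, capture timestamps and optional pressure.
interface PendingStroke {
  pointerType: "PEN" | "TOUCH" | "MOUSE";
  x: number[];
  y: number[];
  t: number[];  // capture timestamp of each point, in milliseconds
  p: number[];  // optional pressure, when the device provides it
}

// Accumulate pointer events into a pending stroke while the application
// renders it temporarily to give the user immediate feedback.
function appendPoint(stroke: PendingStroke, evt: PointerEvent): void {
  stroke.x.push(evt.offsetX);
  stroke.y.push(evt.offsetY);
  stroke.t.push(Date.now());
  stroke.p.push(evt.pressure);
}
```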
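Sending the pending stroke and swapping out the temporary rendering once the server's patch arrives could then look like this sketch, reusing the `PendingStroke` shape from above; the `addStrokes` message name is an assumption.

```typescript
// Illustrative sketch: send the pending stroke, then drop the temporary
// rendering once the server's SVG patch for it has been applied.
// The "addStrokes" message name is an assumption; see the message reference.
function sendPendingStroke(
  ws: WebSocket,
  stroke: PendingStroke,
  clearTemporaryRendering: () => void
): void {
  ws.send(JSON.stringify({ type: "addStrokes", strokes: [stroke] }));

  const onPatch = (event: MessageEvent<string>) => {
    const message = JSON.parse(event.data);
    if (message.type === "svgPatch") {
      // The patch now contains the server-rendered temporary stroke:
      // replace the local temporary rendering with its content.
      clearTemporaryRendering();
      ws.removeEventListener("message", onPatch);
    }
  };
  ws.addEventListener("message", onPatch);
}
```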
Gesture detection is performed as soon as the server acknowledges the reception of the stroke; the server then tries to recognize the content. A message sharing the state is sent to the client at each step.
While writing, the user may want to undo, redo or clear the input zone. Each of those actions has to be sent by the client to the server. It is also possible to ask for an export: the server will then answer with the last recognized result in the desired format. You can also import content in a format managed by the content part. Finally, the user may want to convert the content from its handwritten form to a typeset, digital one; this action is called conversion. The server will answer with a patch containing text and glyphs instead of strokes.
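These editing actions map naturally onto small command messages, for instance as sketched below. The command verbs (undo, redo, clear, export, convert) follow the usual naming, but treat the exact payloads, including the export MIME types, as assumptions.

```typescript
// Illustrative command messages for editing actions. Exact payloads
// (especially export MIME types and conversion options) are assumptions.
function undo(ws: WebSocket): void      { ws.send(JSON.stringify({ type: "undo" })); }
function redo(ws: WebSocket): void      { ws.send(JSON.stringify({ type: "redo" })); }
function clearPart(ws: WebSocket): void { ws.send(JSON.stringify({ type: "clear" })); }

// Ask the server for the latest recognition result in a given format;
// it answers with an "exported" message.
function exportContent(ws: WebSocket, mimeType = "application/vnd.myscript.jiix"): void {
  ws.send(JSON.stringify({ type: "export", mimeTypes: [mimeType] }));
}

// Convert handwriting to its typeset form; the server answers with a patch
// containing text and glyphs instead of strokes.
function convert(ws: WebSocket): void {
  ws.send(JSON.stringify({ type: "convert" }));
}
```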
➤ For more details, refer to the onscreen interactivity mode messages
Offscreen, programmatic interactivity is typically useful when you want to integrate editable content into an existing application with its own rendering. In this case, the application has its own ink stroke model that it needs to keep synchronized with the iink stroke model. The server maintains an interactive context: the WebSockets APIs allow you to update the ink content, and your application dynamically gets the updated recognition result. All of these features are provided through APIs, and your application controls all of the implemented behaviors.
The interactivity is based on stroke updates (adding, erasing strokes) that your application sends to the server, which manages ink strokes using their IDs. These stroke IDs can be mapped to strokes in your application’s data model. Based on the server’s gesture recognition notifications and recognition results, the application updates its rendering and strokes model, and also sends corresponding changes to the server.
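In practice this means keeping a mapping between your application's stroke identifiers and the identifiers known to the server, along the lines of the minimal sketch below; the `addStrokes`/`eraseStrokes` message names and fields are assumptions.

```typescript
// Minimal sketch of an offscreen-mode synchronization layer. The application
// owns the ink model and the rendering; the server only needs stroke updates
// identified by stable IDs. Message names and fields are assumptions.
class StrokeSync {
  // Maps the application's own stroke IDs to the IDs known by the server.
  private readonly idMap = new Map<string, string>();

  constructor(private readonly ws: WebSocket) {}

  addStroke(appId: string, x: number[], y: number[], t: number[]): void {
    this.idMap.set(appId, appId); // here the app ID is simply reused server-side
    this.ws.send(JSON.stringify({
      type: "addStrokes",
      strokes: [{ id: appId, pointerType: "PEN", x, y, t }],
    }));
  }

  eraseStroke(appId: string): void {
    const serverId = this.idMap.get(appId);
    if (!serverId) return;
    this.idMap.delete(appId);
    this.ws.send(JSON.stringify({ type: "eraseStrokes", strokeIds: [serverId] }));
  }
}
```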
Currently, the RAW CONTENT content part can be manipulated by the iink WebSocket offscreen interactivity APIs.
Before using the WebSockets API, you have to understand the lifecycle of offscreen interactivity handwriting recognition.
Information such as the type of content you want to recognize (RAW CONTENT), the language, the size of your input and other parameters has to be provided.
User input is what we call a stroke, i.e. a series of points with the timestamp of their capture. To be more precise, user input is a pending stroke, meaning that it has not yet been processed by the server recognition engine. The application capturing strokes renders them to give immediate feedback to the user.
The stroke is sent to the server, which acknowledges its reception.
Gesture detection is performed as soon as the server acknowledges the reception of the stroke; the server then tries to recognize the content. A message sharing the state is sent to the client at each step.
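A configuration for offscreen interactivity could look like the sketch below; the raw-content recognition options shown are assumptions, so check the configuration reference for the actual schema.

```typescript
// Illustrative configuration message for offscreen (RAW CONTENT) interactivity.
// Field names are assumptions; see the configuration reference for the schema.
function configureOffscreen(ws: WebSocket): void {
  ws.send(JSON.stringify({
    type: "configuration",
    lang: "en_US",
    "raw-content": {
      recognition: { text: true, shape: true }, // what the recognizer should detect
    },
  }));
}
```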
While writing, users may want to select, undo, redo or clear the input zone. Or, when a gesture is notified, the application may apply an action. Each of these interactions must be translated by the application into the corresponding stroke updates, which the client communicates to the server by asking it to add or delete the corresponding ink strokes.
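For example, when the server notifies a scratch-out gesture, the application might decide to apply it by removing the affected strokes from its own model and sending the matching erase update, as in the sketch below; the message type and field names are assumptions.

```typescript
// Illustrative handling of a gesture notification in offscreen mode.
// The application decides whether to apply the gesture, updates its own
// model and rendering, then sends the matching stroke updates to the server.
// Message type and field names are assumptions.
function onServerMessage(ws: WebSocket, data: string, appModel: Map<string, unknown>): void {
  const message = JSON.parse(data);
  if (message.type === "gestureDetected" && message.gestureType === "SCRATCH") {
    const strokeIds: string[] = message.strokeIds ?? [];
    // Remove the scratched-out strokes from the application's own ink model...
    strokeIds.forEach((id) => appModel.delete(id));
    // ...and ask the server to erase the same strokes so both models stay in sync.
    ws.send(JSON.stringify({ type: "eraseStrokes", strokeIds }));
  }
}
```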
➤ For more details, refer to the offscreen interactivity mode messages