Overview

This document describes the APIs used to interface with AdHawk’s Eye Tracking module.

Communication with the AdHawk Eye Tracking module can be accomplished via:

  1. AdHawk API Communication protocol

  2. C SDK, which is built on top of the communication protocol and communicates with the device directly over USB or SPI.

     [Figure: C SDK via USB or SPI]

  3. Python SDK, which is built on top of the communication protocol, but proxies commands through the AdHawk Backend Service. The C SDK can also be set up to proxy commands in this fashion. The advantage of running in this mode is that multiple apps can be run at the same time.

     [Figure: Python or C SDK via Backend]

Communication protocol

The communication packets consist of a packet type followed by zero or more data bytes:

Byte | Type  | Description
---- | ----- | -----------
0    | uint8 | The packet type
1…   |       | Payload

  • The length of the payload varies depending on the packet type
  • All values are stored in little endian (least significant byte first)
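
As an illustration, the request for the register calibration point command (0x84, described under Control packets) can be packed, and its response unpacked, with Python's struct module. This is a minimal sketch of the wire format, not SDK code:

import struct

# Request: packet type 0x84 followed by three little-endian float32 values
packet = struct.pack('<Bfff', 0x84, 0.0, -0.179, -0.6)
assert len(packet) == 13  # 1 type byte + 3 * 4 payload bytes

# Response: the packet type echoed back, followed by a 1-byte return code
packet_type, return_code = struct.unpack('<BB', b'\x84\x00')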

Control interface

The control interface provides the ability to interact with AdHawk’s Eye Tracking module. The client/host initiates a command over the control interface by sending a request packet. The Control packets section describes the list of available commands that the client/host can request over the control interface.

A request to the control interface may take up to 8 seconds to complete. All requests to the control interface will receive a response, either containing the requested information or the result of the request. Clients should issue only one request at a time.

The responses that contain the result of the request consist of a 1-byte integer, where 0 indicates successful execution of the requested command, and a non-zero value denotes the nature of the error encountered while servicing the request. See the return codes section for details.
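
A minimal sketch of interpreting such a response (the helper name is hypothetical):

def check_response(response: bytes) -> None:
    '''Raise if a control response carries a non-zero return code'''
    packet_type, return_code = response[0], response[1]
    if return_code != 0:
        raise RuntimeError(f'command 0x{packet_type:02x} failed with return code {return_code}')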

Stream interface

AdHawk’s Eye Tracking module streams status and tracking data. Some streams, such as the tracker ready signal, are always broadcast. Other streams, such as the gaze vector stream, must be subscribed to.

Backend service

The AdHawk Backend Service is the PC host component that communicates with AdHawk’s eye tracking hardware over USB and allows multiple clients to interact with and receive data from the eye tracking module. To support multiple clients simultaneously, all communication between clients and the Backend Service is performed using UDP sockets. There are two sockets: control and data.

The control port is fixed at UDP port 11032. The backend service is always listening on the control socket for client requests, and the responses to those requests are sent back to the source UDP port of the request packet.

The data socket is used to stream periodic eye tracking data to clients. Each client specifies its data port by sending a register endpoint packet to the control socket. The client's data port may be the same as its control port if desired.
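
For clients not using an SDK, the handshake can be performed with a plain UDP socket. The following sketch assumes the Backend Service runs on the local machine and uses an arbitrary data port; the register endpoint packet (0xc0) is described under Backend service communication:

import socket
import struct

DATA_PORT = 12345  # arbitrary local port chosen to receive stream data

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', 0))  # responses return to this source port

# register endpoint: packet type 0xc0 followed by the uint32 data port
sock.sendto(struct.pack('<BI', 0xc0, DATA_PORT), ('127.0.0.1', 11032))

response, _ = sock.recvfrom(1024)
packet_type, return_code = struct.unpack('<BB', response[:2])
print(f'register endpoint result: {return_code}')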

Getting started

Installation

Python SDK

pip install adhawk_py_sdk-<version>.tar.gz

C SDK

Refer to the README.md in the C SDK package

Starting the SDK

If communicating with the AdHawk Backend Service, a communication channel must be established over UDP. Start by registering as an endpoint by sending the register endpoint packet with the UDP port of the peer.

If using the Python SDK, this is handled by the FrontendApi and can be accomplished as follows:

import adhawkapi
import adhawkapi.frontend

def on_connect(error):
    if not error:
        print('Connected to AdHawk Backend Service')

def on_disconnect(error):
    print('Disconnected from AdHawk Backend Service')

api = adhawkapi.frontend.FrontendApi()
api.start(connect_cb=on_connect, disconnect_cb=on_disconnect)
...
# Terminate communication
api.shutdown()

The C SDK only requires the AdHawk Backend Service when running in UDP mode. The USB and SPI modes communicate directly with the device and do not require the AdHawk Backend Service. The API initialization call takes care of all required setup depending on the specified mode. See ah_com.h for details on how to connect in different modes.

ah_api_init(params)

Auto-tune

To tune the device, send the trigger autotune packet while the user is looking straight ahead. The response indicates whether the Auto-Tune procedure was successful.

C

ah_result ah_api_runAutotune(ah_autotuneReferenceGazeVector *referenceGazeVec);

Python

api.trigger_autotune(reference_gaze_vector)

The caller can optionally provide a reference gaze vector to the trigger autotune command for more accurate results. The reference gaze vector is a 3-dimensional vector representing the point in world space the user is looking at during the tuning procedure. It consists of an X, Y, and Z component (all values are expected to be in meters).
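
For example, if the user fixates a point 60 cm straight ahead during tuning (forward corresponds to the negative Z direction in the AdHawk coordinate system), the values below are illustrative:

reference_gaze_vector = (0.0, 0.0, -0.6)  # straight ahead, 60 cm away, in meters
api.trigger_autotune(reference_gaze_vector)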

Calibrate

Once the tracker ready signal is received or get tracker status returns success, the device is ready to be calibrated.

Calibration is the process of mapping the eye tracking signal to an individual’s visual axis, which makes the eye tracking output meaningful for typical applications. To initiate the calibration procedure send a calibration start packet.

The calibration procedure consists of collecting a set of X, Y, Z gaze references. The gaze references are collected by instructing the user to look at a prescribed point and signaling the registration of the calibration data point through a register calibration point packet. When registering a calibration data point, the client must inform AdHawk’s eye-tracking module of the user’s expected gaze during the collection of the calibration point. The expected gaze is specified in the payload of the register calibration point packet. Note that a response of ‘Left Eye Not Found’ or ‘Right Eye Not Found’ can be treated as a warning in binocular operation, but ‘Eyes Not Found’ is an error.

After collecting the data points, send the calibration complete packet to indicate that no further points will be collected and that a new calibration should be generated.

Once the calibration complete packet is received, the eye-tracking module will compute the calibration coefficients based on the correspondence of sensor data and gaze references. AdHawk’s eye-tracking module will respond with a zero (0) return code for successful calibration, and a non-zero value if calibration fails. See the return codes section for details.

If calibration is successful, the device is capable of streaming the gaze vector stream and the pupil position stream.

C

ah_api_calibrationStart();
for (unsigned i = 0; i < num_points; i++)
{
    float x = 0.0f, y = 0.0f, z = 0.0f;
    // As the eye moves and fixates on each point on the screen,
    // register the position of the point
    // get_reference_point(&x, &y, &z);
    ah_api_calibrationRegisterPoint(x, y, z);
}
ah_api_calibrationComplete();

Python


import numpy as np

def grid_points(nrows, ncols, xrange=20, yrange=12.5, xoffset=0, yoffset=2.5):
    '''Generates a grid of points based on range and number of rows/cols'''
    zpos = -0.6  # typically 60cm to screen
    # calculate x and y range for a xrange x yrange degree calibration window
    xmin, xmax = np.tan(np.deg2rad([xoffset - xrange, xoffset + xrange])) * np.abs(zpos)
    ymin, ymax = np.tan(np.deg2rad([yoffset - yrange, yoffset + yrange])) * np.abs(zpos)

    cal_points = []
    for ypos in np.linspace(ymin, ymax, nrows):
        for xpos in np.linspace(xmin, xmax, ncols):
            cal_points.append((xpos, ypos, zpos))

    print(f'grid_points(): generated {nrows}x{ncols} points'
          f' {xrange}x{yrange} deg box at {zpos}: {cal_points}')

    return cal_points

npoints = 9
nrows = int(np.sqrt(npoints))
reference_points = grid_points(nrows, nrows)

api.start_calibration()
for point in reference_points:
    # As the eye moves and fixates on each point on the screen,
    # register the position of the point
    api.register_calibration_point(*point)
api.stop_calibration()

Using backend built-in calibration procedure with GUI

Only supported when using the AdHawk Backend Service

Currently not supported in the C SDK

The calibration of gaze tracking can also be done with a graphical user interface provided by the AdHawk Backend Service that guides the user through the procedure. This API launches the built-in calibration screen provided by the AdHawk Backend Service. The user needs to fixate on the center of a sequence of targets displayed on the screen and register the calibration samples. The GUI will be closed automatically at the end of the calibration. See Marker Sequence Procedures for more details about the packet.

Python


api.start_calibration_gui(mode=adhawkapi.MarkerSequenceMode.FIXED_GAZE.value,
                          n_points=5, marker_size_mm=35,
                          randomize=True,
                          fov=(30, 25, 0, -10),
                          callback=lambda *x: None)

Enable data streams

Once the device is calibrated, to start receiving eye tracking data from the device, send a set stream control packet with the stream you want to enable and the rate at which you want to enable it. See Supported rates.

At this point, the device will start streaming packets to the registered endpoint. The format of the streamed packets is listed in the Data Streams section.

C

static void gazeDataCallback(void *data)
{
    ah_gazeStreamData *gaze = data;
}

ah_api_setupStream(ah_streamType_Gaze, ah_streamRate_60Hz, gazeDataCallback);

Python

def handler(*data):
    timestamp, xpos, ypos, zpos, vergence = data

api.register_stream_handler(adhawkapi.PacketType.GAZE, handler)
api.set_stream_control(adhawkapi.PacketType.GAZE, 60)

Saving calibrations

Once a device has been calibrated, the calibration coefficients can be saved using the blobs framework. The blobs framework allows saving and restoring run-time configuration objects of the eye-tracking module.

Send a save blob packet with the blob type set to 1 (gaze calibration data). If the blob was saved successfully, a unique 32-bit identifier is returned in the response packet. The calibration can later be loaded by sending a load blob packet with this ID in the payload.

C

ah_api_calibrationSave();

Python

Only supported when using the AdHawk Backend Service

errcode, blob_id = api.save_blob(adhawkapi.BlobType.CALIBRATION)

Gaze in image

Currently not supported in the C SDK

While the main output of the gaze tracker is a stream of the user’s gaze ray in 3D, the frontend can request a stream of (x, y) values representing the projection of the gaze ray in the image captured by the front-facing camera. This assumes that the gaze tracker is already calibrated for the user.

To see the gaze projection in the camera frame in real time, first start the camera. Once the camera is started and a success response code is received, you can subscribe to the video stream. Starting the camera is required for calculating the gaze projection based on the chosen image resolution; however, you don’t necessarily need to subscribe to the video stream. Receiving the video stream only allows you to draw the image and overlay the gaze point on top of it.

To get the gaze-in-image stream, subscribe to the stream and enable it. Note that you don’t need to subscribe to the gaze vector stream for this, but the gaze-in-image stream is delivered at the same rate that is set for the gaze vector stream.

Python


import adhawkapi.frontend

def handle_connect(error):
    if not error:
        # trigger camera start
        api.start_camera_capture(camera_device_index, resolution_index, undistort_image, callback=handle_camera_start_response)

        # register for gaze-in-image stream
        api.set_stream_control(adhawkapi.PacketType.GAZE_IN_IMAGE, 1, callback=(lambda *args: None))

def handle_camera_start_response(response):
    if response:
        print(f'camera start response: {response}')
    else:
        start_video_stream()

def start_video_stream():
    ''' Start the video streaming '''
    api.start_video_stream(*video_receiver.address, lambda *x: None)

def frame_handler(timestamp, frame_index, image_buf):
    '''
    display the image and overlay the gaze_xy point

    example decoder 1: decode the image using opencv
        image = cv2.imdecode(np.frombuffer(image_buf, dtype=np.uint8), 1)

    example decoder 2: decode the image in pyqtgraph
        qt_img = QtGui.QPixmap()
        qt_img.loadFromData(image_buf, 'JPEG')

    '''
    return

def handle_gaze_in_image_data(_timestamp, gaze_img_x, gaze_img_y, _deg_to_pix_x, _deg_to_pix_y):
    ''' Simple handler that prints the gaze coordinates '''
    global gaze_xy
    gaze_xy = [gaze_img_x, gaze_img_y]
    print(f'gaze coordinates in the image: {gaze_xy}')

def shutdown():
    api.stop_video_stream(*video_receiver.address, lambda *x: None)
    api.stop_camera_capture(lambda *_args: None)
    api.shutdown()

gaze_xy = (0, 0)

# setup a video receiver
video_receiver = adhawkapi.frontend.VideoReceiver(frame_handler)
video_receiver.start()

# define the camera properties
camera_device_index = 0
resolution_index = 0
undistort_image = False

# start
api = adhawkapi.frontend.FrontendApi()
api.register_stream_handler(adhawkapi.PacketType.GAZE_IN_IMAGE, handle_gaze_in_image_data)
api.start(connect_cb=handle_connect)

# shutdown when you are done
shutdown()

Screen tracking

Currently not supported in the C SDK

The AdHawk head-mounted eye tracker estimates the user’s gaze in 3D (as well as its projection in the image), whereas the majority of table-mounted eye trackers estimate the gaze as a point inside the screen coordinate system. In order to project the gaze data, which is estimated relative to the user’s head in 3D, onto the screen plane, the screen has to be located and tracked relative to the user’s head (a.k.a. screen tracking). This is done by tracking the screen in the image captured by the front-facing camera. While various methods can be used for tracking the screen, the AdHawk eye tracking module does this by tracking one or more ArUco markers displayed in the screen plane, using OpenCV and taking advantage of the IMU sensor integrated into the AdHawk eye tracking glasses to track head movements.

Before using the screen tracking feature, prepare the screen setup as follows:

  • Take a set of ArUco markers from an ArUco dictionary and display them on the same plane as the screen. The markers can have different sizes, and can be physical (printed on a piece of paper and glued to a cardboard to make it flat and rigid) or virtual (generated in software and shown on a screen). The markers can be placed anywhere inside or outside the screen area. Make sure there is a white margin around each marker at least as big as the black border of the marker.
  • Make sure that all the markers are flat and oriented correctly in 3D such that all 4 corners lie in an imaginary plane that passes through the screen plane.
  • Measure the bottom-left corner of each marker relative to the origin of the screen coordinate system (the top-left corner of the screen) in meters.

The figure below shows an example with 4 markers and the measurements to take.

[Figure: screen coordinate system with four markers]

In order to get the gaze-in-screen data, you need to register the ArUco markers. Registering the screen markers is done by sending a register screen command that defines your screen size, the ArUco dictionary, the index of each marker (as defined in the dictionary), the position of the bottom-left corner of each marker in the screen coordinate system, and the size of each marker in meters. Multiple markers can be added to the same packet. After registering the screen, enable screen tracking and subscribe to the gaze-in-screen stream. The AdHawk eye tracking module will stream the estimated gaze position inside your screen coordinate system as (timestamp, x, y) values, where the timestamp matches the timestamp of the corresponding gaze vector stream and (x, y) are normalized float values (0-1) in which (0, 0) is the top-left corner and (1, 1) is the bottom-right corner of the screen.

Python

import cv2
import adhawkapi

aruco_dictionary = cv2.aruco.DICT_5X5_50
screen_size = (0.345, 0.195)  # in meters
marker_size = 0.03  # in meters
marker_ids = [0, 1, 2, 3]
markers_pos = [[0.0, 0.12], [0.27, 0.12], [0.0, 0.0], [0.27, 0.0]]

api.register_screen_board(*screen_size, aruco_dictionary, marker_ids,
                          [[pos[0], pos[1], marker_size] for pos in markers_pos])
api.start_screen_tracking()

def handler(*data):
    timestamp, xpos, ypos = data

api.register_stream_handler(adhawkapi.PacketType.GAZE_IN_SCREEN, handler)

API reference

Control packets

Calibration start [0x81]

A calibration start command packet will instruct AdHawk’s eye-tracking module to reset the calibration table and prepare for a new calibration procedure.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0x81
Response | 0    | uint8 | 0x81
         | 1    | uint8 | Return code

C

ah_result ah_api_calibrationStart(void);

Python

api.start_calibration()

Calibration complete [0x82]

A calibration complete command will instruct AdHawk’s eye-tracking module that no further points will be collected and that a new calibration should be generated. Once the calibration complete packet is received, the eye-tracking module will compute the calibration coefficients based on the correspondence of sensor data and gaze references. AdHawk’s eye-tracking module will respond with a zero (0) return code for successful calibration, and a non-zero value if calibration fails.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0x82
Response | 0    | uint8 | 0x82
         | 1    | uint8 | Return code

C

ah_result ah_api_calibrationComplete(void);

Python

api.stop_calibration()

Calibration abort [0x83]

A calibration abort packet will instruct AdHawk’s eye-tracking module to discard all collected data for the new calibration and restore the previous calibration.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0x83
Response | 0    | uint8 | 0x83
         | 1    | uint8 | Return code

C

ah_result ah_api_calibrationAbort(void);

Python

api.abort_calibration()

Register calibration point [0x84]

Once a target gaze point is established and the user is instructed to direct their gaze at the target, a calibration data point may be registered. Registering a data point requires the client to instruct the eye-tracking module of the intended gaze target such that a correspondence can be established between the user’s gaze and the sensor data.

AdHawk’s eye-tracking module will respond with a zero (0) return code if the calibration point is successfully registered. Otherwise, a non-zero payload will be returned indicating that a correspondence between the user’s gaze and sensor data could not be established.

X, Y, and Z are the position of the point relative to the midpoint of the scanners. See AdHawk Coordinate System for details.

API

Packet   | Byte | Type    | Description
-------- | ---- | ------- | -----------
Request  | 0    | uint8   | 0x84
         | 1-4  | float32 | X position in meters
         | 5-8  | float32 | Y position in meters
         | 9-12 | float32 | Z position in meters
Response | 0    | uint8   | 0x84
         | 1    | uint8   | Return code

C

ah_result ah_api_calibrationRegisterPoint(float xPos, float yPos, float zPos);

Python

api.register_calibration_point(0.0, -0.179, -0.6)

Trigger Auto-tune [0x85]

When a trigger auto-tune is signaled, the eye-tracking module will perform an auto-tune (scanner ranging) on all attached trackers. The eye-tracking module will respond with a zero (0) to indicate that ranging completed successfully on both eyes. Otherwise, a non-zero payload will indicate which eye encountered a failure.

The endpoint also supports an optional reference gaze vector within the payload. This reference gaze vector is an estimate of the world space position (in meters) that the user is looking at during the tuning process. Providing a reference gaze vector will increase the overall accuracy of the tuning process. This reference should be supplied whenever possible.

API

Packet           | Byte | Type    | Description
---------------- | ---- | ------- | -----------
Request          | 0    | uint8   | 0x85
Optional payload | 1-4  | float32 | X component of reference gaze vector
                 | 5-8  | float32 | Y component of reference gaze vector
                 | 9-12 | float32 | Z component of reference gaze vector
Response         | 0    | uint8   | 0x85
                 | 1    | uint8   | Return code

C

ah_result ah_api_runAutotune(ah_autotuneReferenceGazeVector *referenceGazeVec);

Python

api.trigger_autotune(reference_gaze_vector)

Re-center calibration [0x8f]

Updates an existing calibration based on the specified calibration point. The payload of the calibration data point indicates the position of the gaze target.

X, Y, and Z are the position of the point relative to the midpoint of the scanners. See AdHawk Coordinate System for details.

API

Packet   | Byte | Type    | Description
-------- | ---- | ------- | -----------
Request  | 0    | uint8   | 0x8f
         | 1-4  | float32 | X position in meters
         | 5-8  | float32 | Y position in meters
         | 9-12 | float32 | Z position in meters
Response | 0    | uint8   | 0x8f
         | 1    | uint8   | Return code

C

ah_result ah_api_calibrationRecenter(float xPos, float yPos, float zPos);

Python

api.recenter_calibration(0.0, -0.179, -0.6)

Get tracker status [0x90]

Retrieves the current state of the eye-tracking module. A response value of zero (0) indicates that the tracker is calibrated and fully working.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0x90
Response | 0    | uint8 | 0x90
         | 1    | uint8 | Return code

C

ah_result ah_api_getTrackerStatus(void);

Python

try:
    api.get_tracker_status()
except adhawkapi.APIRequestError as exc:
    print(exc)

Note that this API returns the current tracker status, and will raise an exception in synchronous mode if the trackers are not ready.

We recommend registering for the tracker ready stream, which notifies clients of any change in the tracker status.

Start camera [0xd2]

Currently not supported in the C SDK

A command packet that will instruct AdHawk’s eye-tracking module to turn on the front-facing camera.

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0xd2
         | 1-2  | uint16 | Camera device index
         | 3-4  | uint16 | Image resolution index
         | 5    | uint8  | Undistort image (only recommended when a wide-angle lens is used on the camera)
Response | 0    | uint8  | 0xd2
         | 1    | uint8  | Return code

Python

api.start_camera_capture(0, adhawkapi.CameraResolution.MEDIUM)

Stop camera [0xd3]

Currently not supported in the C SDK

A stop command packet that will instruct AdHawk’s eye-tracking module to turn off the front-facing camera.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xd3
Response | 0    | uint8 | 0xd3
         | 1    | uint8 | Return code

Python

api.stop_camera_capture()

Subscribe to video stream [0xd4]

Currently not supported in the C SDK

A command to subscribe to the video stream from the AdHawk front-facing camera. The receiver of the video needs to provide its IPv4 address and port. See the example code in the Gaze in image section for how to set up a video receiver. Multiple receivers can subscribe to the video stream.

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0xd4
         | 1-4  | bytes  | IPv4 address of the receiver
         | 5-6  | uint16 | Port of the receiver
Response | 0    | uint8  | 0xd4
         | 1    | uint8  | Return code

Python

api.start_video_stream("192.168.0.1", 62829)

Unsubscribe from video stream [0xd5]

Currently not supported in the C SDK

A command to unsubscribe from the video stream of the front-facing camera, given the IP and port information of an existing video receiver. See the example code in the Gaze in image section for how to set up a video receiver.

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0xd5
         | 1-4  | bytes  | IPv4 address of the receiver
         | 5-6  | uint16 | Port of the receiver
Response | 0    | uint8  | 0xd5
         | 1    | uint8  | Return code

Python

api.stop_video_stream("192.168.0.1", 62829)

Register screen ArUco board [0xda]

Currently not supported in the C SDK

A command packet that instructs AdHawk’s eye-tracking module to reset any existing screen board defined for screen tracking and define a new screen ArUco board based on the given information. The packet includes the screen dimensions (in meters), the index of the ArUco dictionary, and a set of one or more markers, each defined by three float values indicating the bottom-left corner of the marker in the board plane and the width of the marker. Multiple markers should be defined in the same packet. The coordinate system used to define the screen is located at the top-left corner of the screen, with the positive x axis toward the right and the positive y axis upward. The bottom-left corner of each marker should be measured relative to that origin (in meters). Note that all the ArUco markers must belong to the same ArUco dictionary.

API

Packet   | Byte  | Type    | Description
-------- | ----- | ------- | -----------
Request  | 0     | uint8   | 0xda
         | 1-4   | float32 | Width of the physical screen in meters
         | 5-8   | float32 | Height of the physical screen in meters
         | 9-10  | uint16  | Index of an OpenCV ArUco predefined dictionary
         | 11-12 | uint16  | ID of the ArUco marker in the given dictionary
         | 13-16 | float32 | X position of the bottom-left corner of the marker in meters, relative to the top-left corner of the screen
         | 17-20 | float32 | Y position of the bottom-left corner of the marker in meters, relative to the top-left corner of the screen
         | 21-24 | float32 | Size (width) of the marker in meters
         | …     |         | Repeat bytes 11-24 for each additional marker being registered
Response | 0     | uint8   | 0xda
         | 1     | uint8   | Return code

Python

# registering a 0.345 m x 0.195 m screen with a single 2 cm marker located at (0, 0)
api.register_screen_board(0.345, 0.195, cv2.aruco.DICT_5X5_50, [1], [[0, 0, 0.02]])

Screen tracking start [0xdb]

Currently not supported in the C SDK

A command packet that will instruct AdHawk’s eye-tracking module to enable screen tracking once a screen has been registered.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xdb
Response | 0    | uint8 | 0xdb
         | 1    | uint8 | Return code

Python

api.start_screen_tracking()

Screen tracking stop [0xdc]

Currently not supported in the C SDK

A command packet that will instruct AdHawk’s eye-tracking module to stop screen tracking.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xdc
Response | 0    | uint8 | 0xdc
         | 1    | uint8 | Return code

Python

api.stop_screen_tracking()

Run-time configuration objects (blobs)

The C SDK simplifies the use of blobs. Blobs can be read and written with the following APIs:

C

ah_result ah_api_readBlob(ah_blobType blobType, uint8_t **blobData, uint16_t *blobSize);

ah_result ah_api_writeBlob(ah_blobType blobType, uint8_t *blobData, uint16_t blobSize);

These functions use the appropriate commands detailed below.

Blob size [0x92]

Retrieves a run-time configuration object from the eye tracking module.

The client/host should get the size of the configuration object using the get blob size command and allocate the required buffer size to store the configuration content. Then, the client/host can get the blob data in chunks of at most 25 bytes using the get blob data command until the entire configuration object is retrieved.

Get blob size

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x92
         | 1    | uint8  | Blob type
Response | 0    | uint8  | 0x92
         | 1    | uint8  | Return code
         | 2-3  | uint16 | The size of the blob

Python

errcode, blob_size = api.get_blob_size(adhawkapi.BlobType.CALIBRATION)

Set blob size

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x92
         | 1    | uint8  | Blob type
         | 2-3  | uint16 | The size of the blob being written
Response | 0    | uint8  | 0x92
         | 1    | uint8  | Return code

Python

api.set_blob_size(adhawkapi.BlobType.CALIBRATION, len(blob_data))

Blob data [0x93]

Applies a run-time configuration object to the eye tracking module.

The client/host should set the size of the configuration object using the set blob size command. This will cause the eye tracking module to allocate the required buffer size to store the configuration content. Then the client/host can set the blob data in chunks of at most 25 bytes using the set blob data command until the entire configuration object is written.

Get blob data

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x93
         | 1    | uint8  | Blob type
         | 2-3  | uint16 | The byte offset
Response | 0    | uint8  | 0x93
         | 1    | uint8  | Return code
         | 2…   |        | Blob data (max 25 bytes)

Python

blob_type = adhawkapi.BlobType.CALIBRATION
_, size = api.get_blob_size(blob_type)
data = [None] * size
offset = 0

while offset < size:
    _, chunk = api.get_blob_data(blob_type, offset)
    chunk_size = len(chunk)
    data[offset:offset + chunk_size] = chunk
    offset = offset + chunk_size
print(bytes(data))

Set blob data

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x93
         | 1    | uint8  | Blob type
         | 2-3  | uint16 | The byte offset
         | 4…   |        | Blob data (max 25 bytes)
Response | 0    | uint8  | 0x93
         | 1    | uint8  | Return code

Python

def chunks(data):
    '''Generator that breaks the data into chunks'''
    size = len(data)
    offset = 0
    chunk_len = adhawkapi.defaults.BLOB_CHUNK_LEN
    while (offset + chunk_len) < size:
        yield offset, data[offset:offset + chunk_len]
        offset = offset + chunk_len
    yield offset, data[offset:]


blob_type = adhawkapi.BlobType.CALIBRATION
for offset, chunk in chunks(data):
    api.set_blob_data(blob_type, offset, chunk)

Load blob [0x94]

Applies a run-time configuration object to the eye tracking module. This is similar to writing a blob, except that the data for the blob was previously saved to disk in the AdHawk Backend folder under cache as blob_<id>.bin.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x94
         | 1    | uint8  | Blob type
         | 2-5  | uint32 | ID of the blob to load
Response | 0    | uint8  | 0x94
         | 1    | uint8  | Return code

Python

# Loading a calibration blob that was previously saved under cache/blob_1241123630.bin
api.load_blob(adhawkapi.BlobType.CALIBRATION, 1241123630)

Save blob [0x95]

Retrieves and saves a run-time configuration object from the eye tracking module. This is similar to reading a blob, except that the blob data retrieved from the eye tracking module is stored to disk in the AdHawk Backend folder under cache as blob_<id>.bin, and the 32-bit unique identifier is returned to the client for later use.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x95
         | 1    | uint8  | Blob type
Response | 0    | uint8  | 0x95
         | 1    | uint8  | Return code
         | 2-5  | uint32 | ID of the saved blob

Python

errcode, blob_id = api.save_blob(adhawkapi.BlobType.CALIBRATION)

Properties

Get autotune position [0x9a, 0x01]

Get the center of the scan range. The returned values will be non-zero only if a trigger autotune command completed successfully or if the values were manually set through set autotune position.

In the case of monocular operation, 0xFFFF will be returned for the X mean and Y mean of the inactive eye.

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x9a
         | 1    | uint8  | 0x01
Response | 0    | uint8  | 0x9a
         | 1    | uint8  | Return code
         | 2    | uint8  | 0x01
         | 3-4  | uint16 | X mean (right eye), valid range is 0 - 100
         | 5-6  | uint16 | Y mean (right eye), valid range is 0 - 100
         | 7-8  | uint16 | X mean (left eye), valid range is 0 - 100
         | 9-10 | uint16 | Y mean (left eye), valid range is 0 - 100

C

ah_result ah_api_getAutotunePosition(ah_autotunePosition pos[AH_NUM_EYES]);

Python

right_xmean, right_ymean, left_xmean, left_ymean = api.get_autotune_position()[2:]

Set autotune position [0x9b, 0x01]

Set the center of the scan range.

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x9b
         | 1    | uint8  | 0x01
         | 2-3  | uint16 | X mean (right eye), valid range is 0 - 100
         | 4-5  | uint16 | Y mean (right eye), valid range is 0 - 100
         | 6-7  | uint16 | X mean (left eye), valid range is 0 - 100
         | 8-9  | uint16 | Y mean (left eye), valid range is 0 - 100
Response | 0    | uint8  | 0x9b
         | 1    | uint8  | Return code
         | 2    | uint8  | 0x01

C

ah_result ah_api_setAutotunePosition(const ah_autotunePosition pos[AH_NUM_EYES]);

Python

api.set_autotune_position(right_xmean, right_ymean, left_xmean, left_ymean)

Set stream control [0x9b, 0x02]

Control the rate of a data stream. Setting the rate to 0 disables the stream.

API

Packet   | Byte | Type    | Description
-------- | ---- | ------- | -----------
Request  | 0    | uint8   | 0x9b
         | 1    | uint8   | 0x02
         | 2-5  | uint32  | Stream control bitmask
         | 6-9  | float32 | Stream rate in Hz. A value of 0 will disable the stream. See Supported rates
Response | 0    | uint8   | 0x9b
         | 1    | uint8   | Return code
         | 2    | uint8   | 0x02

Example: Sending 0x9b, 0x02, 1 << 3, 60 enables the gaze stream and sets its rate to 60Hz. Sending 0x9b, 0x02, 1 << 3, 0 disables the gaze stream.
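
A sketch of the raw bytes for that example, assuming the little-endian packet layout above:

import struct

GAZE_BIT = 1 << 3  # bit 3 of the stream control bitmask (see Stream control bitmask)
enable = struct.pack('<BBIf', 0x9b, 0x02, GAZE_BIT, 60.0)   # gaze stream at 60 Hz
disable = struct.pack('<BBIf', 0x9b, 0x02, GAZE_BIT, 0.0)   # rate 0 disables it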

C

ah_result ah_api_setupStream(ah_streamType type, ah_streamRate rate, ah_streamCallback dataCb);

Python

api.set_stream_control(adhawkapi.PacketType.GAZE, 60)

Get stream control [0x9a, 0x02]

Get the current rate of a data stream

API

Packet   | Byte | Type    | Description
-------- | ---- | ------- | -----------
Request  | 0    | uint8   | 0x9a
         | 1    | uint8   | 0x02
         | 2-5  | uint32  | Stream control bitmask (only a single stream is allowed)
Response | 0    | uint8   | 0x9a
         | 1    | uint8   | Return code
         | 2    | uint8   | 0x02
         | 3-6  | float32 | Stream rate in Hz

Example: Sending 0x9a, 0x02, 1 << 3 will return 0x9a, 0x0, 0x2, 60 indicating that the rate for the gaze stream is currently 60 Hz.

C

ah_result ah_api_getStreamRate(ah_streamType type, ah_streamRate *streamRate);

Python

stream_rate = api.get_stream_control(adhawkapi.PacketType.GAZE)[2]

Get component offsets [0x9a, 0x04]

Get the currently configured component offsets. Component offsets can be used if the scanners are not in their nominal position. The offset is the position of the scanner relative to its nominal position. See AdHawk Coordinate System for details.

API

Packet   | Byte  | Type    | Description
-------- | ----- | ------- | -----------
Request  | 0     | uint8   | 0x9a
         | 1     | uint8   | 0x04
Response | 0     | uint8   | 0x9a
         | 1     | uint8   | Return code
         | 2     | uint8   | 0x04
         | 3-6   | float32 | Right eye component offset x in millimeters
         | 7-10  | float32 | Right eye component offset y in millimeters
         | 11-14 | float32 | Right eye component offset z in millimeters
         | 15-18 | float32 | Left eye component offset x in millimeters
         | 19-22 | float32 | Left eye component offset y in millimeters
         | 23-26 | float32 | Left eye component offset z in millimeters

Python

rx, ry, rz, lx, ly, lz = api.get_component_offsets()[2:]

Set component offsets [0x9b, 0x04]

Configure the component offsets. Component offsets can be used if the scanners are not in their nominal position. The offset is the position of the scanner relative to its nominal position. See AdHawk Coordinate System for details.

API

Packet   | Byte  | Type    | Description
-------- | ----- | ------- | -----------
Request  | 0     | uint8   | 0x9b
         | 1     | uint8   | 0x04
         | 2-5   | float32 | Right eye component offset x in millimeters
         | 6-9   | float32 | Right eye component offset y in millimeters
         | 10-13 | float32 | Right eye component offset z in millimeters
         | 14-17 | float32 | Left eye component offset x in millimeters
         | 18-21 | float32 | Left eye component offset y in millimeters
         | 22-25 | float32 | Left eye component offset z in millimeters
Response | 0     | uint8   | 0x9b
         | 1     | uint8   | Return code
         | 2     | uint8   | 0x04

Python

api.set_component_offsets(rx, ry, rz, lx, ly, lz)

Set event control [0x9b, 0x05]

Enable or disable an event stream.

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x9b
         | 1    | uint8  | 0x05
         | 2-5  | uint32 | Event control bitmask
         | 6    | uint8  | 0x01 to enable, 0x00 to disable
Response | 0    | uint8  | 0x9b
         | 1    | uint8  | Return code
         | 2    | uint8  | 0x05

Python

api.set_event_control(event_control, 1)

Get event control [0x9a, 0x05]

Get the currently enabled event streams

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0x9a
         | 1    | uint8  | 0x05
Response | 0    | uint8  | 0x9a
         | 1    | uint8  | Return code
         | 2    | uint8  | 0x05
         | 3-6  | uint32 | Event control bitmask

Python

enabled = api.get_event_control(event_control)

Get normalized eye offsets [0x9a, 0x09]

Get the latest estimate of the eye offsets as measured from the nominal eye. The returned values will be valid only if a trigger autotune command completed successfully.

API

Packet   | Byte  | Type    | Description
-------- | ----- | ------- | -----------
Request  | 0     | uint8   | 0x9a
         | 1     | uint8   | 0x09
Response | 0     | uint8   | 0x9a
         | 1     | uint8   | Return code
         | 2     | uint8   | 0x09
         | 3-6   | float32 | Right eye x offset in millimeters
         | 7-10  | float32 | Right eye y offset in millimeters
         | 11-14 | float32 | Right eye z offset in millimeters
         | 15-18 | float32 | Left eye x offset in millimeters
         | 19-22 | float32 | Left eye y offset in millimeters
         | 23-26 | float32 | Left eye z offset in millimeters

Python

rx, ry, rz, lx, ly, lz = api.get_normalized_eye_offsets()[2:]

System control

Enable / disable Eye Tracking [0x9c, 0x01]

Either enable or disable eye tracking

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0x9c
         | 1    | uint8 | 0x01
         | 2    | uint8 | 0 to disable eye tracking, 1 to enable eye tracking
Response | 0    | uint8 | 0x9c
         | 1    | uint8 | Return code
         | 2    | uint8 | 0x01

C

ah_result ah_api_enableTracking(bool enable);

Python

api.enable_tracking(True)

Device setup procedures

Trigger device calibration [0xb0, 0x01]

Begin a device calibration, which runs on all active trackers and restores tuning configurations to factory values. A response value of zero (0) indicates that the calibration has started successfully.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xb0
         | 1    | uint8 | 0x01 (Device calibration procedure)
Response | 0    | uint8 | 0xb0
         | 1    | uint8 | Return code
         | 2    | uint8 | 0x01

Python

api.trigger_device_calibration()

Get device calibration status [0xb1, 0x01]

Retrieves the state of the current device calibration. If the calibration has not encountered an error, the returned value is an integer which indicates the progress of the current calibration, ranging from [0, 100], where 100 indicates the calibration has completed successfully.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xb1
         | 1    | uint8 | 0x01 (Device calibration procedure)
Response | 0    | uint8 | 0xb1
         | 1    | uint8 | Return code
         | 2    | uint8 | 0x01
         | 3    | uint8 | Progress percentage

Python

errcode, _, status = api.get_device_calibration_status()
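
A simple polling loop built on this call might look as follows. This is a sketch; the polling interval is an arbitrary choice, and in blocking mode a failed request raises adhawkapi.APIRequestError:

import time

api.trigger_device_calibration()
status = 0
while status < 100:
    errcode, _, status = api.get_device_calibration_status()
    time.sleep(0.5)  # arbitrary polling interval
print('device calibration complete')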

Trigger update firmware [0xb0, 0x02]

Begin a firmware update. A response value of zero (0) indicates that the update has started successfully.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xb0
         | 1    | uint8 | 0x02 (Update firmware procedure)
Response | 0    | uint8 | 0xb0
         | 1    | uint8 | Return code
         | 2    | uint8 | 0x02

Python

api.trigger_update_firmware()

Get update firmware status [0xb1, 0x02]

Retrieves the state of the current firmware update. If the update has not encountered an error, the returned value is an integer which indicates the progress of the update, ranging from [0, 100], where 100 indicates the update has completed successfully.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xb1
         | 1    | uint8 | 0x02 (Update firmware procedure)
Response | 0    | uint8 | 0xb1
         | 1    | uint8 | Return code
         | 2    | uint8 | 0x02
         | 3    | uint8 | Progress percentage

Python

errcode, _, status = api.get_update_firmware_status()

Marker sequence procedures

Calibration GUI start [0xb0, 0x03]

Launch a calibration GUI provided by the AdHawk Backend Service to guide the user through a calibration procedure.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte  | Type    | Description
-------- | ----- | ------- | -----------
Request  | 0     | uint8   | 0xb0
         | 1     | uint8   | 0x03 (Calibration procedure)
         | 2     | uint8   | Marker sequence mode
         | 3     | uint8   | Number of calibration points
         | 4-7   | float32 | Size of the marker in millimeters
         | 8     | uint8   | Randomize the sequence
         | 9-12  | float32 | Horizontal grid span in degrees of field of view (only for fixed-gaze modes)
         | 13-16 | float32 | Vertical grid span in degrees of field of view (only for fixed-gaze modes)
         | 17-20 | float32 | Horizontal shift of the grid in the user’s field of view (only for fixed-gaze modes)
         | 21-24 | float32 | Vertical shift of the grid in the user’s field of view (only for fixed-gaze modes)
Response | 0     | uint8   | 0xb0
         | 1     | uint8   | Return code
         | 2     | uint8   | 0x03

Python

api.start_calibration_gui()

Get calibration GUI status [0xb1, 0x03]

Not supported

Validation GUI start [0xb0, 0x04]

Launch a validation GUI provided by the AdHawk Backend Service to guide the user through a validation procedure.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte  | Type    | Description
-------- | ----- | ------- | -----------
Request  | 0     | uint8   | 0xb0
         | 1     | uint8   | 0x04 (Validation procedure)
         | 2     | uint8   | Marker sequence mode
         | 3     | uint8   | Number of rows in the grid
         | 4     | uint8   | Number of columns in the grid
         | 5-8   | float32 | Size of the marker in millimeters
         | 9     | uint8   | Randomize the sequence
         | 10-13 | float32 | Horizontal grid span in degrees of field of view (only for fixed-gaze modes)
         | 14-17 | float32 | Vertical grid span in degrees of field of view (only for fixed-gaze modes)
         | 18-21 | float32 | Horizontal shift of the grid in the user’s field of view (only for fixed-gaze modes)
         | 22-25 | float32 | Vertical shift of the grid in the user’s field of view (only for fixed-gaze modes)
Response | 0     | uint8   | 0xb0
         | 1     | uint8   | Return code
         | 2     | uint8   | 0x04

Python

api.start_validation_gui()

Get validation GUI status [0xb1, 0x04]

Not supported

Auto-tune GUI start [0xb0, 0x05]

Perform the autotune. A GUI is displayed to quickly tune the device.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte | Type    | Description
-------- | ---- | ------- | -----------
Request  | 0    | uint8   | 0xb0
         | 1    | uint8   | 0x05 (Autotune procedure)
         | 2    | uint8   | Marker sequence mode
         | 3-6  | float32 | Size of the marker in millimeters
Response | 0    | uint8   | 0xb0
         | 1    | uint8   | Return code
         | 2    | uint8   | 0x05

Python

api.autotune_gui(mode=adhawkapi.MarkerSequenceMode.FIXED_GAZE.value, marker_size_mm=35)

Get Auto-tune GUI status [0xb1, 0x05]

Not supported

Quick Start GUI start [0xb0, 0x06]

Perform the quick-start procedure. A GUI is displayed to quickly tune and calibrate the device.

Only supported when using the AdHawk Backend Service

API

Packet   | Byte | Type    | Description
-------- | ---- | ------- | -----------
Request  | 0    | uint8   | 0xb0
         | 1    | uint8   | 0x06 (Quick Start procedure)
         | 2    | uint8   | Marker sequence mode
         | 3-6  | float32 | Size of the marker in millimeters
         | 7    | uint8   | Returning user
Response | 0    | uint8   | 0xb0
         | 1    | uint8   | Return code
         | 2    | uint8   | 0x06

Python

api.quick_start_gui()

Get Quick Start GUI Status [0xb1, 0x06]

Not supported

Camera user settings

Currently not supported in the C SDK

A command packet that will instruct AdHawk’s eye-tracking module to set various camera-related settings used when calculating the gaze-in-image or when running the calibration/validation GUI provided by the AdHawk Backend Service, which relies on the front-facing camera.

Set gaze depth [0xd0, 0x01]

Currently not supported in the C SDK

A command packet that will instruct AdHawk’s eye-tracking module to set the default gaze depth used when projecting the user’s gaze into the camera image. If not provided, the system uses a default value of 60 cm.

API

Packet   | Byte | Type    | Description
-------- | ---- | ------- | -----------
Request  | 0    | uint8   | 0xd0
         | 1    | uint8   | 0x01 (Gaze depth setting)
         | 2-5  | float32 | Gaze depth in meters
Response | 0    | uint8   | 0xd0
         | 1    | uint8   | Return code

Python

api.set_camera_user_settings(adhawkapi.CameraUserSettings.GAZE_DEPTH, 0.5)

Enable/disable parallax correction [0xd0, 0x02]

Currently not supported in the C SDK

A command packet that will instruct AdHawk’s eye-tracking module to enable or disable the parallax correction when projecting the user’s gaze into the camera image. If not provided, the system enables the parallax correction by default.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xd0
         | 1    | uint8 | 0x02 (Parallax correction setting)
         | 2    | uint8 | Enable (1) or disable (0) the parallax correction
Response | 0    | uint8 | 0xd0
         | 1    | uint8 | Return code

Python

api.set_camera_user_settings(adhawkapi.CameraUserSettings.PARALLAX_CORRECTION, 1)

Set sampling duration [0xd0, 0x03]

Currently not supported in the C SDK

A command packet that will instruct AdHawk’s eye-tracking module to set the sampling duration used during calibration or validation done by the GUI provided by the AdHawk Backend Service. If not provided, the system uses a default value of 500 ms.

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0xd0
         | 1    | uint8  | 0x03 (Sampling duration setting)
         | 2-3  | uint16 | Sampling duration in milliseconds
Response | 0    | uint8  | 0xd0
         | 1    | uint8  | Return code

Python

api.set_camera_user_settings(adhawkapi.CameraUserSettings.SAMPLING_DURATION, 1000)

Backend service communication

Register endpoint [0xc0]

Register as an endpoint to start communication with the AdHawk Backend Service.

API

Packet   | Byte | Type   | Description
-------- | ---- | ------ | -----------
Request  | 0    | uint8  | 0xc0
         | 1-4  | uint32 | The UDP port of the peer
Response | 0    | uint8  | 0xc0
         | 1    | uint8  | Return code

Python

The packet construction is handled within the start() function of the FrontendApi.

def on_connect(error):
    if not error:
        print('Connected to AdHawk Backend Service')

def on_disconnect(error):
    print('Disconnected from AdHawk Backend Service')

api = adhawkapi.frontend.FrontendApi()
api.start(connect_cb=on_connect, disconnect_cb=on_disconnect)

Deregister endpoint [0xc2]

Deregister as an endpoint to stop communication with the AdHawk Backend Service.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xc2
Response | 0    | uint8 | 0xc2
         | 1    | uint8 | Return code

Python

api.shutdown()

Ping endpoint [0xc5]

Used to ping the AdHawk Backend Service to indicate that the frontend is still alive, and to verify that the backend service is still reachable. It should be sent every 2 seconds. This is handled automatically within the Python and C SDKs.

API

Packet   | Byte | Type  | Description
-------- | ---- | ----- | -----------
Request  | 0    | uint8 | 0xc5
Response | 0    | uint8 | 0xc5
         | 1    | uint8 | Return code

Python

The disconnect_cb passed into start() is executed if the pings aren’t successful.
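
For a raw UDP client that bypasses the SDKs, the keep-alive amounts to sending the 1-byte ping packet on a 2-second interval. A minimal sketch, assuming the backend runs locally:

import socket
import struct
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    sock.sendto(struct.pack('<B', 0xc5), ('127.0.0.1', 11032))  # ping packet
    time.sleep(2)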

Data Streams

Tracker ready [0x02]

The tracker ready packet type is an asynchronous status packet that is transmitted by AdHawk’s eye-tracking module when the system is ready to receive calibration commands. Once the tracker ready signal packet is received, the client/host system may go through a calibration process (send Calibration start, followed by Register calibration data point, and finally Calibration complete).

Packet | Byte | Type  | Description
------ | ---- | ----- | -----------
Stream | 0    | uint8 | 0x02

Python

import threading

tracker_ready_signal = threading.Event()
api.register_stream_handler(adhawkapi.PacketType.TRACKER_READY, lambda *args: tracker_ready_signal.set())
tracker_ready_signal.wait()

Gaze vector stream [0x03]

X, Y and Z are the estimated coordinates of the user’s gaze point relative to the midpoint of the scanners. Vergence is the angle between left and right eye gaze vectors. See AdHawk Coordinate System for details.

API

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Stream | 0     | uint8   | 0x03
       | 1-4   | float32 | Timestamp since system start in seconds
       | 5-8   | float32 | X position in meters
       | 9-12  | float32 | Y position in meters
       | 13-16 | float32 | Z position in meters
       | 17-20 | float32 | Vergence angle in radians

C

typedef struct
{
    float xPos;
    float yPos;
    float zPos;
} ah_gazeData;
typedef struct __attribute__((__packed__))
{
    float timestamp;
    ah_gazeData gaze;
    float vergence;
} ah_gazeStreamData;

Python

def handler(*data):
    timestamp, xpos, ypos, zpos, vergence = data

api.register_stream_handler(adhawkapi.PacketType.GAZE, handler)

Pupil position stream [0x04]

X, Y, and Z are the coordinates of the pupil relative to the midpoint between the centers of the two (left and right) scanners. See AdHawk Coordinate System for details.

API

Binocular

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Stream | 0     | uint8   | 0x04
       | 1-4   | float32 | Timestamp since system start in seconds
       | 5-8   | float32 | X position of right pupil in millimeters
       | 9-12  | float32 | Y position of right pupil in millimeters
       | 13-16 | float32 | Z position of right pupil in millimeters
       | 17-20 | float32 | X position of left pupil in millimeters
       | 21-24 | float32 | Y position of left pupil in millimeters
       | 25-28 | float32 | Z position of left pupil in millimeters

Monocular

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Stream | 0     | uint8   | 0x04
       | 1-4   | float32 | Timestamp since system start in seconds
       | 5-8   | float32 | X position of active pupil in millimeters
       | 9-12  | float32 | Y position of active pupil in millimeters
       | 13-16 | float32 | Z position of active pupil in millimeters

C

typedef struct
{
    float x;
    float y;
    float z;
} ah_pupilPositionData;
typedef struct __attribute__((__packed__))
{
    float timestamp;
    ah_pupilPositionData pos[AH_NUM_EYES];
} ah_pupilPositionStreamData;

Python

# Binocular
def handler(*data):
    timestamp, rx, ry, rz, lx, ly, lz = data

api.register_stream_handler(adhawkapi.PacketType.PUPIL_POSITION, handler)
# Monocular
def handler(*data):
    timestamp, x, y, z = data

api.register_stream_handler(adhawkapi.PacketType.PUPIL_POSITION, handler)

Pupil diameter stream [0x05]

This stream indicates the diameter of the pupil in millimeters.

API

Binocular

Packet | Byte | Type    | Description
------ | ---- | ------- | -----------
Stream | 0    | uint8   | 0x05
       | 1-4  | float32 | Timestamp since system start in seconds
       | 5-8  | float32 | Diameter of right pupil in millimeters
       | 9-12 | float32 | Diameter of left pupil in millimeters

Monocular

Packet | Byte | Type    | Description
------ | ---- | ------- | -----------
Stream | 0    | uint8   | 0x05
       | 1-4  | float32 | Timestamp since system start in seconds
       | 5-8  | float32 | Diameter of active pupil in millimeters

C

struct ah_pupilSizeStreamData
{
    float timestamp;
    float diameter[AH_NUM_EYES];
};

Python

# Binocular
def handler(*data):
    timestamp, right_pupil, left_pupil = data

api.register_stream_handler(adhawkapi.PacketType.PUPIL_DIAMETER, handler)
api.set_stream_control(adhawkapi.PacketType.PUPIL_DIAMETER, rate)
# Monocular
def handler(*data):
    timestamp, pupil_diameter = data

api.register_stream_handler(adhawkapi.PacketType.PUPIL_DIAMETER, handler)
api.set_stream_control(adhawkapi.PacketType.PUPIL_DIAMETER, rate)

Per-eye gaze vector stream [0x06]

X, Y and Z are the coordinates of a unit vector relative to the center of each eye. See AdHawk Coordinate System for details.

API

Binocular

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Stream | 0     | uint8   | 0x06
       | 1-4   | float32 | Timestamp since system start in seconds
       | 5-8   | float32 | X component of right eye gaze unit vector
       | 9-12  | float32 | Y component of right eye gaze unit vector
       | 13-16 | float32 | Z component of right eye gaze unit vector
       | 17-20 | float32 | X component of left eye gaze unit vector
       | 21-24 | float32 | Y component of left eye gaze unit vector
       | 25-28 | float32 | Z component of left eye gaze unit vector

Monocular

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Stream | 0     | uint8   | 0x06
       | 1-4   | float32 | Timestamp since system start in seconds
       | 5-8   | float32 | X component of active eye gaze unit vector
       | 9-12  | float32 | Y component of active eye gaze unit vector
       | 13-16 | float32 | Z component of active eye gaze unit vector

C

typedef struct
{
    float xPos;
    float yPos;
    float zPos;
} ah_gazeData;
typedef struct __attribute__((__packed__))
{
    float timestamp;
    ah_gazeData gaze[AH_NUM_EYES];
} ah_gazeStreamData;

Python

# Binocular
def handler(*data):
    timestamp, rx, ry, rz, lx, ly, lz = data

api.register_stream_handler(adhawkapi.PacketType.PER_EYE_GAZE, handler)
# Monocular
def handler(*data):
    timestamp, x, y, z = data

api.register_stream_handler(adhawkapi.PacketType.PER_EYE_GAZE, handler)

Gaze in image stream [0x07]

This stream provides the (x, y) coordinates of the gaze projection in the camera image (see Image coordinate system). The stream also includes the degree-to-pixel conversion coefficients: two values that indicate the equivalent pixel size (in the image) of 1 degree of visual angle at the current gaze position.

API

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Stream | 0     | uint8   | 0x07
       | 1-4   | float32 | Timestamp since system start
       | 5-8   | float32 | X coordinate in the image
       | 9-12  | float32 | Y coordinate in the image
       | 13-16 | float32 | X degrees-to-pixels coefficient
       | 17-20 | float32 | Y degrees-to-pixels coefficient

C

typedef struct
{
    float x;
    float y;
    float xDegToPix;
    float yDegToPix;
} ah_gazeInImageData;
typedef struct __attribute__((__packed__))
{
    float timestamp;
    ah_gazeInImageData gaze;
} ah_gazeInImageStreamData;

Python

def handler(*data):
    timestamp, xpos, ypos, xdegtopix, ydegtopix = data
    # use degtopix values to draw a gaze marker of a size equivalent to k degrees of visual angle. For example:
    # marker_size_deg = 1
    # marker_width_pixel = marker_size_deg * xdegtopix
    # marker_height_pixel = marker_size_deg * ydegtopix


api.register_stream_handler(adhawkapi.PacketType.GAZE_IN_IMAGE, handler)
api.set_stream_control(adhawkapi.PacketType.GAZE_IN_IMAGE, 1)

Gaze in screen stream [0x08]

This stream provides normalized (x, y) coordinates of the gaze inside the screen that was defined by a set of ArUco markers. x and y are float values (0-1) where (0, 0) is the top-left corner and (1, 1) is the bottom-right corner of the screen. See Register screen ArUco board for more details.

API

Packet | Byte | Type    | Description
------ | ---- | ------- | -----------
Stream | 0    | uint8   | 0x08
       | 1-4  | float32 | Timestamp since system start
       | 5-8  | float32 | Normalized x coordinate in the screen (value 0-1)
       | 9-12 | float32 | Normalized y coordinate in the screen (value 0-1)

C

typedef struct
{
    float x;
    float y;
} ah_gazeInScreenData;
typedef struct __attribute__((__packed__))
{
    float timestamp;
    ah_gazeInScreenData gaze;
} ah_gazeInScreenStreamData;

Python

def handler(*data):
    timestamp, xpos, ypos = data

api.register_stream_handler(adhawkapi.PacketType.GAZE_IN_SCREEN, handler)
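
Since the coordinates are normalized, converting them to screen pixels is a simple scaling; the display resolution below is an assumption for illustration:

def to_pixels(xpos, ypos, width_px=1920, height_px=1080):
    '''Scale normalized gaze-in-screen coordinates to pixel coordinates'''
    return xpos * width_px, ypos * height_px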

IMU data stream [0x17]

API

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Stream | 0     | uint8   | 0x17
       | 1-4   | float32 | Timestamp since system start
       | 5-8   | float32 | Gyro x (mdps)
       | 9-12  | float32 | Gyro y (mdps)
       | 13-16 | float32 | Gyro z (mdps)
       | 17-20 | float32 | Accelerometer x (mg)
       | 21-24 | float32 | Accelerometer y (mg)
       | 25-28 | float32 | Accelerometer z (mg)

Python

def handler(*data):
    timestamp, gx, gy, gz, accel_x, accel_y, accel_z = data

api.register_stream_handler(adhawkapi.PacketType.IMU, handler)
api.set_stream_control(adhawkapi.PacketType.IMU, rate)

IMU rotation data stream [0x19]

API

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Stream | 0     | uint8   | 0x19
       | 1-4   | float32 | Timestamp since system start
       | 5-8   | float32 | X component of the rotation vector
       | 9-12  | float32 | Y component of the rotation vector
       | 13-16 | float32 | Z component of the rotation vector

C

Currently only available in UDP mode with the backend

typedef struct __attribute__((__packed__))
{
    float timestamp;
    float Xrvec;
    float Yrvec;
    float Zrvec;
} ah_imuRotationStreamData;

Python

def handler(*data):
    timestamp, rvec_x, rvec_y, rvec_z = data

api.register_stream_handler(adhawkapi.PacketType.IMU_ROTATION, handler)
api.set_stream_control(adhawkapi.PacketType.IMU_ROTATION, rate)

Events

Trackloss events

Event indicating trackloss of an eye. Prolonged blinks (exceeding the defined maximum blink duration) may be detected as trackloss.

Trackloss start [0x18, 0x4]

Indicates the beginning of a trackloss event.

API

Packet | Byte | Type    | Description
------ | ---- | ------- | -----------
Event  | 0    | uint8   | 0x18
       | 1    | uint8   | 0x4 (Track lost)
       | 2-5  | float32 | Timestamp since system start
       | 6    | uint8   | Eye index (right=0, left=1)

Trackloss end [0x18, 0x5]

Indicates the end of a trackloss event, triggered as soon as the eye features are detected again.

API

Packet | Byte | Type    | Description
------ | ---- | ------- | -----------
Event  | 0    | uint8   | 0x18
       | 1    | uint8   | 0x5 (Eye detected)
       | 2-5  | float32 | Timestamp since system start
       | 6    | uint8   | Eye index (right=0, left=1)

Saccade Events

A combined-eye saccade event is emitted when a saccade is detected on both eyes. More specifically, the saccade event indicates the time window where the left and right saccade events overlap in time. This event is triggered as soon as the saccade in either eye ends.

Saccade [0x18, 0x6]

Packet | Byte  | Type    | Description
------ | ----- | ------- | -----------
Event  | 0     | uint8   | 0x18
       | 1     | uint8   | 0x6 (Saccade)
       | 2-5   | float32 | Timestamp (since system start) of the end of the saccade
       | 6-9   | float32 | Duration (ms)
       | 10-13 | float32 | Amplitude (deg)
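
A sketch of decoding a raw saccade event packet per the layout above, using little-endian unpacking:

import struct

def decode_saccade(packet: bytes):
    '''Decode a saccade event: type, sub-type, timestamp, duration, amplitude'''
    event_type, sub_type, timestamp, duration_ms, amplitude_deg = struct.unpack('<BBfff', packet[:14])
    assert event_type == 0x18 and sub_type == 0x6
    return timestamp, duration_ms, amplitude_deg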

Types reference

Return codes

API

Response | Description
-------- | -----------
0        | Request successful
1        | Internal failure
2        | Invalid argument
3        | Tracker not ready
4        | No eyes detected
5        | Right eye not detected
6        | Left eye not detected
7        | Not calibrated
8        | Not supported
11       | Request timeout
12       | Unexpected response
13       | Hardware fault
14       | Camera fault
15       | System busy
16       | Communication error
17       | Device calibration required

C

typedef enum
{
    ah_result_Success,
    ah_result_Failure,
    ah_result_InvalidArgument,
    ah_result_TrackerNotReady,
    ah_result_EyesNotFound,
    ah_result_RightEyeNotFound,
    ah_result_LeftEyeNotFound,
    ah_result_NotCalibrated,
    ah_result_NotSupported,
    ah_result_SessionAlreadyRunning,
    ah_result_NoCurrentSession,
    ah_result_RequestTimeout,
    ah_result_UnexpectedResponse,
    ah_result_HardwareFault,
    ah_result_CameraFault,
    ah_result_Busy,
    ah_result_CommunicationError,
    ah_resultCode_DeviceCalibrationRequired,
} ah_result;

Python

class AckCodes(enum.IntEnum):
    '''List of acknowledgement payload values'''

    SUCCESS = 0
    FAILURE = 1
    INVALID_ARGUMENT = 2
    TRACKER_NOT_READY = 3
    EYES_NOT_FOUND = 4
    RIGHT_EYE_NOT_FOUND = 5
    LEFT_EYE_NOT_FOUND = 6
    NOT_CALIBRATED = 7
    NOT_SUPPORTED = 8
    SESSION_ALREADY_RUNNING = 9
    NO_CURRENT_SESSION = 10
    REQUEST_TIMEOUT = 11
    UNEXPECTED_RESPONSE = 12
    HARDWARE_FAULT = 13
    CAMERA_FAULT = 14
    BUSY = 15
    COMMUNICATION_ERROR = 16
    DEVICE_CALIBRATION_REQUIRED = 17

Stream control bitmask

API

Bit | Stream type
--- | -----------
1   | Pupil position
2   | Pupil diameter
3   | Gaze
4   | Per-eye gaze
31  | IMU
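
Each bit position maps to a stream, so a bitmask can name several streams at once; a sketch:

PUPIL_POSITION_BIT = 1 << 1
PUPIL_DIAMETER_BIT = 1 << 2
GAZE_BIT = 1 << 3

# e.g. a bitmask selecting both the gaze and pupil diameter streams
mask = GAZE_BIT | PUPIL_DIAMETER_BIT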

C

typedef enum
{
    ah_streamType_Gaze,
    ah_streamType_PupilPosition,

    ah_streamType_Count
} ah_streamType;

Event control bitmask

API

Bit | Event type
--- | ----------
0   | Blink
1   | Eye open / close
2   | Trackloss start / end
3   | Saccade
7   | External trigger

Blob type

API

Blob type | Description
--------- | -----------
1         | Gaze calibration data

Python

class BlobType(enum.IntEnum):
    '''Enum representing set of blob types'''
    CALIBRATION = 1

Supported rates

The currently supported stream rates (in Hz) are: 5, 30, 60, 125, 200, 250, 333, and 500.

C

typedef enum
{
    ah_streamRate_Off,
    ah_streamRate_5Hz,
    ah_streamRate_30Hz,
    ah_streamRate_60Hz,
    ah_streamRate_125Hz,
    ah_streamRate_200Hz,
    ah_streamRate_250Hz,
    ah_streamRate_333Hz,
    ah_streamRate_500Hz,

    ah_streamRate_Count,
} ah_streamRate;

Python

class StreamRates(enum.IntEnum):
    '''Enum representing the set of supported rates'''
    OFF = 0
    RATE_5 = 5
    RATE_30 = 30
    RATE_60 = 60
    RATE_125 = 125
    RATE_200 = 200
    RATE_250 = 250
    RATE_333 = 333
    RATE_500 = 500

Camera resolution

Resolution index | Description
---------------- | -----------
0                | Low
1                | Medium
2                | High

Procedure types

API

Procedure type | Description
-------------- | -----------
1              | Device calibration
2              | Firmware update
3              | Backend service built-in calibration procedure with GUI
4              | Backend service built-in validation procedure with GUI
5              | Backend service built-in autotune procedure with GUI
6              | Backend service built-in quickstart procedure with GUI

Python

class ProcedureType(enum.IntEnum):
    ''' Enum representing set of procedure types '''

    DEVICE_CALIBRATION = 1
    UPDATE_FIRMWARE = 2
    CALIBRATION_GUI = 3
    VALIDATION_GUI = 4
    AUTOTUNE_GUI = 5
    QUICKSTART_GUI = 6

Marker sequence mode

The marker sequence GUI used during the backend gaze calibration procedure (see Marker sequence procedures) can be launched in 4 modes: fixed-head or fixed-gaze, each with either a single marker or 4 markers. All of these modes show a set of visual markers on the screen and a small target for the user to fixate on when taking samples.

Mode index | Name
---------- | ----
0          | Fixed-head
1          | Fixed-gaze
2          | Fixed-head with 4 markers
3          | Fixed-gaze with 4 markers

Fixed-head: A sequence of markers with a fixation target at the center will be shown on the screen. The user needs to look at the center of the target (while keeping their head still) and register a sample by pressing the Enter, Return, or Space key.

Fixed-gaze: A sequence of markers will be shown at the center of the screen as well as an outline shape that moves on the screen as the user rotates their head. The user needs to align the outline shape with the target at the center while fixating at the center of the target. The user can then register the calibration sample by pressing the space key.

4-marker modes: The sampling procedure in these modes is identical to the corresponding modes described above. The only difference is that 4 markers are displayed at the 4 corners of the screen, separated from the fixation target. Use the 4-marker modes if the user finds focusing on the center of the ArUco marker distracting.

Camera user setting types

Various camera-related settings used when calculating the Gaze-in-image or when running the built-in Calibration GUI provided by the AdHawk Backend Service (which relies on the camera).

Setting index | Description
------------- | -----------
1             | Gaze depth
2             | Parallax correction
3             | Sampling duration

Extended usage notes

Python SDK

Synchronous vs asynchronous operation

In the Python SDK, you can choose to make the API calls blocking or non-blocking. To perform a non-blocking operation, provide a callback function as a parameter to be executed when the operation is complete. This mode of operation is useful for GUI applications. If a callback function is not provided, the call blocks until a response is received and returns the response on success or raises an exception on failure.

# blocking operation
try:
    api.trigger_autotune()
except adhawkapi.APIRequestError as err:
    print(err.ackcode)
# non-blocking operation
def on_autotune_complete(errcode):
    if errcode == adhawkapi.AckCodes.SUCCESS:
        print('Autotune complete')
    else:
        print(f'Autotune failed: {adhawkapi.errormsg(errcode)}')

api.trigger_autotune(callback=on_autotune_complete)

AdHawk coordinate system

[Figure: AdHawk coordinate system]

In the AdHawk coordinate system X, Y and Z are coordinates relative to a particular origin where:

  • X is oriented in the positive direction to the right (user’s point of view)
  • Y is oriented in the positive direction going up
  • Z is oriented in the positive direction going behind the user (points in front of the user have negative Z)

The origin is specific to the stream or command and is the same whether operating in monocular or binocular mode. In general the origin is either:

  • The midpoint of the scanners
  • The cyclopean eye
  • The center of the eye
Stream | Origin
------ | ------
Gaze vector | Midpoint of scanners
Pupil position | Midpoint of scanners
Per-eye gaze vector | Center of eye

Image coordinate system

In the image coordinate system, X and Y are coordinates relative to the top-left corner of the image where:

  • X is oriented in the positive direction to the right
  • Y is oriented in the positive direction going down

Screen coordinate system

In the screen coordinate system, X and Y are coordinates relative to the top-left corner of the screen where:

  • X is oriented in the positive direction to the right (user’s point of view)
  • Y is oriented in the positive direction going up

Changelog

v5.8

v5.9

v5.11