Add New Sensor Interface

Overview

First, create a new implementation of IReader to talk to the new hardware. Check out the iPhone IReader or the Kinect IReader for inspiration on how to do it. You may then need to extend FrameStruct for new frame types or data types. You can use parameters in the config.yaml to define your IReader and make it flexible with respect to things like fps, data types, and resolution; your IReader will be constructed from those parameters. Make an example config.yaml to be used by ssp_server.cc, then update ssp_server.cc so it can read the config.yaml and create the IReader. If new data types are used and you want to use libav_encoder.cc, you will need to update it to handle the new data types. Finally, update FrameStructToMat in image_converter.cc for any new data types if you want to see them in Sensor Stream Client. You may need to do some final rescaling to visualize the output in ssp_client_opencv.cc. If you need to add any libraries or deploy to new platforms, you will need to update the CMake configuration.

1. Create new IReader Implementation and example config file

  • Start by creating a new .yaml in /configs

    • Start with null encoders for each of the types

    • A good place to start is the Kinect example config because of how many parameters it has

    • From the parameters the IReader will need to return at least GetTypes() and GetFps()

    • You can create "parameters" under "frame_source" to define what you need to correctly grab from your new frame source

      • Examples:

        • Kinect has the stream rate and which streams to enable

        • VideoReader has a path to the file

  • Then implement a new IReader

    • Requires the IReader interface

    • IReader implementation can depend on the .yaml configuration parameters to alter function

    • Include error checking and graceful handling for hardware interface issues or configuration issues

    • Good example is Kinect IReader
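The config steps above can be sketched as a minimal config.yaml. The frame_source type name, parameter keys, and encoder layout below are hypothetical; adapt the exact structure from the Kinect example config in /configs:

```yaml
general:
  host: "192.168.1.10"   # hypothetical server address
  port: 9999

frame_source:
  type: "my_sensor"       # hypothetical name for the new frame source
  parameters:             # whatever your IReader needs to grab frames
    fps: 30
    streams: ["color", "depth"]
    resolution: "1280x720"

video_encoder:
  # start with null encoders for each frame type
  0:                      # color
    type: "null"
  1:                      # depth
    type: "null"
```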

If you are implementing a frame type that requires local processing, like body detection, you will need a function that can be called to return the detected bodies. This requires opening a connection to a sensor feed that automatically detects bodies each frame, then pulling the detected bodies into the frame data of the FrameStruct.

2. Update FrameStruct (if necessary, and it most likely is)

  • FrameStruct has fields

    • frame_type

    • frame_data_type

  • The new interface will likely have a new frame_type (existing types include color, depth, ir, confidence)

    • For example, object would be a new frame_type

  • The frame_data_type might also be new

    • Maybe a new color space

    • Or confidence level is an int
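A sketch of what extending the type codes might look like. The actual FrameStruct and its type values live in the SSP codebase; the numeric values and enum names below are illustrative assumptions only:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical frame_type codes; values are illustrative, not SSP's real ones.
enum FrameType : uint32_t {
  kColor = 0,
  kDepth = 1,
  kIr = 2,
  kConfidence = 3,
  kObject = 4,  // new frame_type, e.g. detected bodies
};

// Hypothetical frame_data_type codes; a new sensor might add a new color
// space, or store confidence levels as an int.
enum FrameDataType : uint32_t {
  kImageFrame = 0,
  kRawRgba = 1,
  kConfidenceInt = 5,  // new: confidence level as an int
};

struct FrameStruct {
  uint32_t frame_type;
  uint32_t frame_data_type;
  std::vector<uint8_t> data;
};
```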

3. Update encoder to handle frame_type (optional)

  • Can create a new encoder or can update an existing encoder

  • Most likely you will be updating libav if the change involves color frames

  • If you create a new encoder, you will also need to implement decoding of the encoded frames

    • The encoder converts raw frames into an encoded frame (represented as a packet in the code). Multiple raw frames may be needed to produce a packet; the code accounts for that and sends frames to the encoder until the encoder returns a packet.

    • For each frame type, when the first frame is sent, it carries all the information required to decode subsequent frames (codec type and all other codec information needed; remember the data and extra data fields).

    • The client's decoder keeps a hashmap that tracks this decoder information for each received frame stream

    • The decoder transforms the encoded frame back into a raw OpenCV image, with the same number of channels, width, and height as the original.

  • You can just use the null encoder if you do not want to implement a new IEncoder or update an existing one
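The "feed frames until the encoder produces a packet" loop described above can be sketched with a toy encoder. The real encoders (e.g. libav_encoder.cc) follow the same shape; this buffer-three-frames encoder is purely illustrative:

```cpp
#include <cassert>
#include <optional>
#include <vector>

struct Packet { std::vector<unsigned char> data; };

// Toy encoder: emits a packet only after buffering 3 raw frames.
class ToyEncoder {
public:
  std::optional<Packet> AddFrame(const std::vector<unsigned char>& raw) {
    buffer_.insert(buffer_.end(), raw.begin(), raw.end());
    ++frames_seen_;
    if (frames_seen_ % 3 != 0) return std::nullopt;  // not enough frames yet
    Packet p{buffer_};
    buffer_.clear();
    return p;
  }

private:
  std::vector<unsigned char> buffer_;
  int frames_seen_ = 0;
};

// Keep feeding raw frames until the encoder emits a packet (or we run out).
Packet EncodeUntilPacket(ToyEncoder& enc,
                         const std::vector<std::vector<unsigned char>>& frames,
                         size_t& next) {
  while (next < frames.size()) {
    if (auto pkt = enc.AddFrame(frames[next++])) return *pkt;
  }
  return {};
}
```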

4. Update FrameStructToMat in image_converter.cc and ssp_client_opencv

  • If you plan to visualize a new frame_type or frame_data_type, you will need to update image_converter.cc

  • If necessary, you can also update ssp_client_opencv to normalize or convert color data

    • See the example where confidence streamed from the iPhone is mapped to black, grey, and white

    • See the example for depth streamed from the iPhone

5. Update cmake with additional libraries needed to build updated SSP

  • Current approach is pre-built binaries for each platform

    • If you can create a dep.tar.gz for the platform, moetsi will host and support the new platform dependency

    • ffmpeg 4.3 as shared libraries, built without the GPL option. The path in the dylib is changed to use @rpath for easier linking.

    • OpenCV 3.4.13 as a static library, only core, imgproc, imgcodecs and highgui modules are built.

    • Cereal 1.3.0, header only

    • spdlog 1.8.2, header only but built as static library for faster compile

    • Zdepth (commit 9b333d9aec520 which includes a patch to generate zdepthConfig.cmake)

    • yaml-cpp 0.6.3 as a static library

    • libzmq 4.3.4 as a static library

    • cppzmq 4.7.1, header only

    • Interacting with specific hardware may require additional libraries on the platform
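Adding such a hardware library might look like the fragment below. The target and SDK names are hypothetical; follow the existing find_package()/target_link_libraries() pattern in the SSP CMakeLists.txt:

```cmake
# Hypothetical fragment: link an extra vendor SDK needed by the new reader.
find_package(MySensorSDK REQUIRED)   # hypothetical hardware SDK

add_library(my_sensor_reader my_sensor_reader.cc)
target_link_libraries(my_sensor_reader
  PRIVATE
    MySensorSDK::MySensorSDK
    yaml-cpp
)
```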

6. Update ssp_server.cc to be able to read the config.yaml and create an IReader

  • You will need to add to the conditionals that read the config.yaml and implement handling for the new frame source

    • Feed the parameters to the IReader implementation's constructor to create the correct IReader

  • This can require locating the config.yaml file, as is the case for iOS

  • In the case of plug-ins, you will also need to update the code to expose callable functionality as a plugin
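The new conditional can be sketched as a small factory. The real ssp_server.cc reads the parsed config.yaml via yaml-cpp; a string map stands in for the parsed node here, and the reader class names are hypothetical:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Minimal stand-ins (illustrative; the real types live in the SSP codebase).
struct IReader { virtual ~IReader() = default; };
struct KinectReader : IReader {
  explicit KinectReader(const std::map<std::string, std::string>&) {}
};
struct MySensorReader : IReader {
  explicit MySensorReader(const std::map<std::string, std::string>&) {}
};

// Dispatch on the frame_source "type" read from config.yaml and pass the
// parameters to the matching IReader constructor.
std::unique_ptr<IReader> CreateReader(
    const std::string& type,
    const std::map<std::string, std::string>& params) {
  if (type == "kinect")
    return std::make_unique<KinectReader>(params);
  if (type == "my_sensor")  // new conditional for the new frame source
    return std::make_unique<MySensorReader>(params);
  throw std::invalid_argument("Unknown frame_source type: " + type);
}
```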
