Synthetic Data Recording - transform between two objects

I’m working on object pose detection and am creating a dataset in the Linemod format so that I can train EfficientPose. Similar to the post Synthetic data recording for BBox3D, I’ve had to make some changes to get my data. Basically everything is working except for the transform between my camera and my object. I need this to a) visualize the bbox3D (which is where I’m at now), and b) drive the model (it needs the transform to place the CAD object).

I can see how to get the world poses (I think) for my two objects:

        cam_pose,cam_trans,cam_rot = self.getWorld("/World/Camera/BotCamera")
        pal_pose,pal_trans,pal_rot = self.getWorld("/World/Euro_Pallet1")

        rel_pose = pal_pose - cam_pose   # ????

getWorld is defined at the end of this post; it’s adapted from code I found somewhere on this forum.

Forgive my weakness here, but how do I get the relative pose from the camera to the object? The subtraction above works for the translation (row 4 of the pose matrix) but fails for the rotation component. Ideally I want a rotation matrix that cleanly rotates around the x, y, and z axes.

Thanks!

    # Module-level imports this helper relies on:
    #   import numpy as np
    #   import omni.timeline
    #   import omni.usd
    #   from pxr import Gf
    def getWorld(self, prim_path):
        # Sample the stage at the current timeline position
        timeline = omni.timeline.get_timeline_interface()
        timecode = timeline.get_current_time() * timeline.get_time_codes_per_seconds()
        stage = omni.usd.get_context().get_stage()
        curr_prim = stage.GetPrimAtPath(prim_path)
        # 4x4 local-to-world transform (Gf.Matrix4d; translation lives in row 4)
        pose = omni.usd.utils.get_world_transform_matrix(curr_prim, timecode)
        trans = np.array(pose.ExtractTranslation())
        # Decompose the rotation into Euler angles about the X, Y, Z axes (radians)
        abs_rotation = Gf.Rotation.DecomposeRotation3(
            pose, Gf.Vec3d.XAxis(), Gf.Vec3d.YAxis(), Gf.Vec3d.ZAxis(), 1.0
        )
        abs_rotation = np.array([Gf.RadiansToDegrees(a) for a in abs_rotation])
        return pose, trans, abs_rotation
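
(Side note: since what I ultimately want is a rotation matrix rather than Euler angles, it looks like Gf can hand one back directly. A minimal sketch, assuming it runs inside the same class as getWorld above; untested on my end:

    # Pull the 3x3 rotation part straight from the world pose (Gf.Matrix4d),
    # skipping the Euler-angle decomposition entirely.
    import numpy as np

    cam_pose, cam_trans, cam_rot = self.getWorld("/World/Camera/BotCamera")
    rot_3x3 = np.array(cam_pose.ExtractRotationMatrix())  # Gf.Matrix3d -> (3, 3) array

)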

Okay, getting there. Rookie mistake. Should be:

    inv_cam_pose = np.linalg.inv(np.asarray(cam_pose))  # convert Gf.Matrix4d to numpy first
    rel_pose = np.dot(inv_cam_pose, np.asarray(pal_pose))

Closer…
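
A note for anyone landing here later: part of why this was still only "closer" may be the matrix convention. USD’s Gf matrices are row-vector style (translation in row 4, as noted above), so the usual column-vector formula inv(M_cam) @ M_pal comes out transposed. A hedged sketch of the row-vector version, assuming cam_pose and pal_pose from getWorld:

    import numpy as np

    cam_mat = np.asarray(cam_pose)  # Gf.Matrix4d -> 4x4 numpy array
    pal_mat = np.asarray(pal_pose)

    # Row-vector convention: p_world = p_local @ M, translation in row 4.
    # The pallet pose expressed in the camera frame is then M_pal @ inv(M_cam),
    # i.e. the reverse order of the column-vector textbook formula.
    rel_pose = pal_mat @ np.linalg.inv(cam_mat)

    rel_rot = rel_pose[:3, :3]   # 3x3 rotation block (row-vector convention)
    rel_trans = rel_pose[3, :3]  # translation, fourth row

(Transposing rel_rot should give the column-vector form most vision tooling expects.)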

Hi Peter,

Did you get the pose detection working?

Kindly,
Liila

Still working on it. Very close. Using the EfficientPose model.

The key stumbling point right now is getting the data and pose out of Isaac Sim in a form the model accepts. Getting closer; hoping to be retraining the model within a week. (My first training run, even with a bad conversion from Isaac Sim to the model, showed very promising results.)

Great. I will stay tuned :)

Making (great?) progress. The following image shows the machine learning model detecting an object (in this case, a pallet) as well as its pose. Green is ground truth, blue is prediction. A little cherry-picked, as the model is only 5% through training at this point. So all looking good!

Key elements are:

  • Isaac SIM to create synthetic environment
  • Python code to run the SIM and capture the data. The provided code had to be modified a reasonable amount.
  • Conversion code to put the data in the Linemod-style format EfficientPose expects (close to a standard), though this also requires some conversion of the pose/translation (a rough sketch follows this list)
  • Training EfficientPose model.
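
For anyone following along, here is a rough sketch of the ground-truth writing step. The file name (gt.yml), millimetre translations, and cam_R_m2c/cam_t_m2c keys follow the Linemod/BOP layout EfficientPose’s loader reads; the rel_rot/rel_trans variables are the relative pose from earlier in the thread, and the helper name, unit scale, and bounding-box source are my own placeholders:

    # Hedged sketch: accumulate one Linemod-style gt.yml entry per rendered frame.
    # Assumes rel_rot (3x3) and rel_trans (xyz, assumed metres; rescale to match
    # your stage units) from the relative pose above, plus a 2D bounding box
    # (x, y, w, h) from Isaac Sim's bounding-box output.
    import yaml  # PyYAML

    def append_gt_entry(gt, frame_id, rel_rot, rel_trans_m, bbox_xywh, obj_id=1):
        gt[frame_id] = [{
            # If rel_rot came from the row-vector USD matrix, it likely needs a
            # transpose first: cam_R_m2c uses the column-vector convention.
            "cam_R_m2c": [float(v) for row in rel_rot for v in row],  # row-major 3x3
            "cam_t_m2c": [float(v) * 1000.0 for v in rel_trans_m],    # metres -> mm
            "obj_bb": [int(v) for v in bbox_xywh],
            "obj_id": obj_id,
        }]

    gt = {}
    append_gt_entry(gt, 0, rel_rot, rel_trans, (100, 120, 240, 180))
    with open("gt.yml", "w") as f:
        yaml.dump(gt, f)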

Probably worthy of a Medium article - lots of lessons learned in there!

This is great.
Peter, you can publish your pose estimation training code as an extension if you want, and share it with others.
I know many are interested in it.

Darn - I missed this. And now it’s obsolete w/ v2. I’ll publish the new version I’m working on… (I hope/when it works…)

Hi Peter,

Yes, we do have a pose estimation example with the 2022.1.0 release.

Thanks. Unfortunately, it is not in my area. But thanks for monitoring and responding!

@peter.gaston

Would you please be able to share the environment.yml for your EfficientPose setup, the Python version, and the Ubuntu version? And, if possible, a link to your working EfficientPose?

Further, would you be able to share your script for creating synthetic 6D-pose data in Isaac Sim to pass to EfficientPose?