FAQ - Technical Support
Welcome to our FAQ. We are glad to have you as a client, and we have tried to answer the most common questions below.
For further technical support, please contact support@limxdynamics.com.
1. What development interfaces do you provide?
- We provide SDKs and APIs that can be invoked from ROS1, ROS2, and non-ROS environments, as well as Python APIs.
- Based on these SDKs and APIs, we have implemented algorithm deployment frameworks and simulators for ROS1 and ROS2 environments, which help users quickly deploy and verify their algorithms. The related projects are available on our GitHub.
2. Does the upper-level application development interface support C# programming language?
- Yes. We provide a JSON-based WebSocket communication interface for algorithm application development. Through this generic interface, users can develop their own algorithm applications in various programming languages, including but not limited to C#, JavaScript, Java, C/C++, and Python.
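Because the interface is plain JSON over WebSocket, a client in any language only needs to serialize and deserialize JSON strings. The sketch below shows this in Python; the field names (`method`, `params`, `id`) are an illustrative assumption, not the vendor's actual message schema, so consult the official protocol documentation for the real field layout.

```python
import json

def make_request(method, params, request_id):
    # Build a JSON request string for a WebSocket interface.
    # NOTE: the "method"/"params"/"id" field names are hypothetical,
    # chosen only to illustrate the JSON-over-WebSocket pattern.
    return json.dumps({"method": method, "params": params, "id": request_id})

def parse_response(raw):
    # Decode a JSON response string into a Python dict.
    return json.loads(raw)
```

Any WebSocket client library (for example the `websockets` package in Python, or `ClientWebSocket` in C#) can then send and receive these strings over the connection.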
3. How can I perform simulation during development using the upper-layer interface?
- We do not currently provide a simulation environment for the upper-layer development interface.
4. Why is the trained policy's motion performance poor after following the open-source RL training process in the "SDK Development Guide"?
- The open-source RL training is a teaching example built on a simplified training framework; it is meant to familiarize developers with the RL process and tools. To improve motion performance, you need to tune the reward function and optimize the training framework.
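"Tuning the reward function" typically means reweighting or adding reward terms. The sketch below is a generic example of a weighted reward, not the reward actually used in the open-source framework; the terms and weights are illustrative assumptions.

```python
import numpy as np

def reward(lin_vel, cmd_vel, joint_torque, w_track=1.0, w_energy=0.0005):
    # Velocity-tracking term: highest (1.0) when the base velocity
    # exactly matches the commanded velocity, decaying exponentially
    # with the squared error.
    track = np.exp(-np.sum((lin_vel - cmd_vel) ** 2))
    # Energy penalty: discourages large joint torques.
    # NOTE: terms and default weights are illustrative placeholders.
    energy = np.sum(joint_torque ** 2)
    return w_track * track - w_energy * energy
```

In practice, improving motion quality usually means iterating on weights like `w_energy` and adding further terms (e.g. smoothness or foot-contact penalties) while observing the resulting gait.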
5. Why do the training results run stably in Isaac Gym but perform much worse in MuJoCo?
- MuJoCo's physics is closer to the real world than Isaac Gym's. Because the trained policy lacks generalization, it performs much worse in MuJoCo.
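A common remedy for this lack of generalization is domain randomization: varying physics parameters each episode so the policy cannot overfit to one simulator. The parameter names and ranges below are illustrative assumptions, not values from the training framework.

```python
import random

def randomize_physics(rng, base_friction=1.0, base_mass=12.0):
    # Sample randomized physics parameters for one training episode.
    # NOTE: the ranges here are illustrative, not tuned values.
    return {
        "friction": base_friction * rng.uniform(0.5, 1.5),
        "base_mass": base_mass * rng.uniform(0.8, 1.2),
        # Delay actuation by 0-2 control steps to mimic real latency.
        "action_delay_steps": rng.randint(0, 2),
    }
```

Resampling these parameters at every episode reset forces the policy to work across a family of simulated dynamics, which generally transfers better to MuJoCo and to the physical robot.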
6. Can I deploy my own RL algorithm to the robot?
- Yes. You can log in as a guest user via SSH to deploy your algorithm. For the supported deployment methods, please refer to the corresponding section of the "SDK Development Guide."
7. Under what circumstances do I need to perform zero calibration on the robot?
- Normally it is not necessary. Zero calibration is only required after a motor has been replaced or when there is significant position loss, for example, when the front legs do not return to their zero position before the robot stands up.
8. Can I debug the robot with a laptop? Is wireless connection supported?
- Yes, you can debug the robot with a laptop. If the robot is in developer mode and the motion control algorithm runs on the laptop, we recommend a wired connection to keep the control frequency stable.
9. What sensors does the robot feature, and what inputs are supported for policy during RL training?
- The robot features an IMU and joint encoders. The supported inputs during training include IMU data (such as the robot's base orientation, angular velocity, and linear acceleration), joint positions, velocities, and torques, and remote-control command data.
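These inputs are typically concatenated into a single observation vector before being fed to the policy. The ordering and the exact set of inputs below are an illustrative assumption; they must match whatever your training configuration used.

```python
import numpy as np

def build_observation(base_ang, base_ang_vel, base_lin_acc,
                      joint_pos, joint_vel, joint_torque, command):
    # Concatenate sensor readings into one policy input vector.
    # NOTE: the ordering and field set are illustrative assumptions.
    return np.concatenate([
        base_ang,        # base orientation (e.g. roll, pitch, yaw)
        base_ang_vel,    # base angular velocity from the IMU
        base_lin_acc,    # base linear acceleration from the IMU
        joint_pos,       # joint positions from the encoders
        joint_vel,       # joint velocities
        joint_torque,    # joint torques
        command,         # remote-control command (e.g. vx, vy, yaw rate)
    ])
```

The same assembly function should be used in training and deployment so the policy always sees observations in the same layout.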
10. During policy deployment, how can I receive feedback from the robot and implement closed-loop testing? Is there a ready-to-use framework available for direct network deployment?
- The SDK provides interfaces and examples for obtaining IMU and joint state data. The deployment code is open-source on our GitHub: limxdynamics/rl-deploy-ros-cpp.
11. Is position control or torque control employed during RL debugging? Is Python debugging supported?
- Torque control is employed on the physical robot, while position control is employed during training. Python debugging is supported for RL.
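A common way these two control modes are reconciled (an assumption about this setup, not a detail confirmed by the guide) is a PD loop on the robot: the policy outputs joint position targets as in training, and the controller converts them to torques. The gains below are placeholders, not the robot's configured values.

```python
import numpy as np

def pd_torque(q_target, q, dq, kp=60.0, kd=1.5):
    # Convert position targets from the policy into joint torques
    # with a proportional-derivative law.
    # NOTE: kp/kd are placeholder gains for illustration only.
    return kp * (q_target - q) - kd * dq
```

With this scheme the policy is trained against position targets, while the low-level loop on the hardware still commands torques at a high rate.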
12. During deployment, how can I ensure that the sensor input frequency matches the policy's output frequency, and how are sensor frequency fluctuations handled in real-time communication?
- During deployment, the sensor frequency and the policy network's inference frequency are generally inconsistent; inference simply takes the latest sensor data as input. If the sensor frequency fluctuates, the most recent data is still used, so no explicit synchronization is required.
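This "always read the latest sample" pattern can be sketched as a small thread-safe holder: the sensor thread writes at its own rate, and the inference loop reads whatever arrived last, which absorbs both the frequency mismatch and any jitter. This is a generic sketch, not code from the SDK.

```python
import threading

class LatestValue:
    # Thread-safe holder that always returns the most recent sample.
    # The sensor callback calls put() at the sensor's rate; the
    # inference loop calls get() at the policy rate.
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def put(self, value):
        with self._lock:
            self._value = value

    def get(self):
        with self._lock:
            return self._value
```

Older samples are simply overwritten, so the inference loop never blocks waiting for a sensor message and never processes stale queued data.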
13. Where can I find the code example for the features shown in your demo video, such as making the robot jump?
- These features are shown only in the demo video to showcase the potential capabilities of the robot hardware. No open-source code example is available for them.