Hacker News

beau_g · today at 4:04 PM · 0 replies

Consider 2 welding systems: a hungover human on a 3-legged ladder with a scratched-up welding helmet, doing an overhead TIG weld while holding the filler rod a foot away from the weld pool; and a 6-DOF KUKA bot doing a weld in the same position on a completely rigid workpiece, clamped down to a precision-machined fixture table, which is in turn clamped down to a precision-machined floor that the robot is also mounted to.

The human system weighs 250 lbs and can be placed anywhere. Let's ask what it takes to walk the factory robot in that direction. First, let the workpiece be moving, say on a conveyor belt. The old robotics way of thinking would be to introduce this variable into the programming of the bot/station: add simple sensors to either the workpiece or the conveyor itself to tell the control loop where the part is with as little error as possible, and preserve accuracy while maintaining as much precision as possible through rigidity (which equals mass and space). Now the whole system is functionally 7 DOF, and you take on the error and failure modes of that 7th DOF (the conveyor system) and accumulate some error. Now imagine that instead of a conveyor the part is on a rolling table with random Z height, and so is the robot arm, and you can see this will fall apart: you can't fight this battle with deterministic programming, machining precision, and rigidity. Obviously, if you extended this system to a humanoid robot on a 3-legged ladder, which would put 30+ DOF between the weld and the ground, it couldn't possibly work.
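The error-accumulation argument can be sketched numerically. This is my own toy model, not from the comment, and the per-DOF error figures are invented for illustration: treat each DOF as contributing an independent positional error at the weld tip and combine them as a root-sum-square.

```python
import math

def end_effector_error(per_dof_errors_mm):
    """Root-sum-square of assumed-independent per-DOF position errors (mm)."""
    return math.sqrt(sum(e * e for e in per_dof_errors_mm))

# Rigid 6-DOF cell: each joint held to ~0.02 mm (made-up figure)
rigid_cell = end_effector_error([0.02] * 6)

# Add the conveyor as a sloppier 7th DOF (~0.5 mm tracking error, made up)
with_conveyor = end_effector_error([0.02] * 6 + [0.5])

# Humanoid on a ladder: 30 DOF, each only held to ~0.5 mm open-loop
humanoid_open_loop = end_effector_error([0.5] * 30)

print(rigid_cell, with_conveyor, humanoid_open_loop)
```

The point the numbers make: one sloppy DOF dominates the whole chain (the conveyor alone swamps six precise joints), and 30 mediocre DOF run open-loop put you millimeters off the seam, which is why rigidity-and-precision doesn't scale to a humanoid on a ladder.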

But back to the hungover human: why does this system work so well? The human has very good eyes and a very good internal IMU. They are looking at the end of the filler rod and the weld pool, and even though the information coming through the scratched welding helmet isn't that good, they can compensate for all that error and run an internal function that holds the torch and filler rod in the optimum position for a good TIG weld, while ignoring or automatically adjusting for tons of other variables. Now to address your original question about our system:

1. Are current cameras good enough to get an equivalent amount of information about the weld to what the hungover welder has? Yes; in fact they can get more information than a human can.

2. Are IMUs as good as what a hungover human has? Hard to really know, but it seems like it, though if you need many IMUs attached to different limbs on a robot, it's probably not as good as a human yet.

3. Is the power density of actuators and power storage good enough to approximate this 250 lb system of a human on a ladder, with some combination of DOF that reaches a sufficient range of motion to emulate the human's hands (whether the robot looks like a human or not)? Yeah, and in this case the welder is plugged in on the ground for the human anyway, so that system is already attached to mains power.
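The "compensate for all that error" part is essentially closed-loop control, and that idea can be sketched in a few lines. This is my own toy model (the gain, noise, and wobble figures are invented): a proportional feedback loop corrects a fraction of the noisily observed torch-to-seam offset each cycle, the way the welder corrects by eye, while the same sloppy platform with zero gain never recovers.

```python
import random

random.seed(0)

def run(steps=200, gain=0.5, sensor_noise=0.3, wobble=0.05):
    """Proportional visual-servo sketch: observe the offset with noise
    (scratched helmet), suffer a small base disturbance (ladder), and
    remove a fraction of the *observed* error each cycle."""
    offset = 5.0  # torch starts 5 mm off the seam
    for _ in range(steps):
        offset += random.uniform(-wobble, wobble)                # base wobble
        measured = offset + random.uniform(-sensor_noise, sensor_noise)
        offset -= gain * measured                                # correction
    return abs(offset)

closed_loop = run()        # initial error decays; small residual remains
open_loop = run(gain=0.0)  # no correction: still ~5 mm off the seam
print(closed_loop, open_loop)
```

For any gain between 0 and 2 the initial error decays geometrically and the residual is bounded by the sensor noise and wobble, not by the platform's rigidity; that is the sense in which good sensing plus software substitutes for mass and precision machining.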

So given all this, it seems like the limiter is just software, which is the bull case for the prospective robotics revolution.