Robotics on the Tx1

Hoping users here can help point me in the right direction.

What are the best tools/software to start learning to implement the Tx1 in an android-esque robot (locomotion + robotic arms + computer vision)?

Should I start working with ROS on Ubuntu 16.04?
Seems like people are having a lot of problems with it.

Or am I better off trying to use all the native jetpack software?

I’m reading two books on ROS but I’m starting to think they’re not much use for implementing ROS on the Tx1.

I’ve been trying to find more people in the same boat, but I can’t figure out how to search the Tx1 archives other than using Google to try to find hits. If anyone can point me toward things to research and learn, that would be an enormous help.

Thanks

Some people are using a simpler computer or module in combination with the Jetson: basic controls are implemented on the simple device, while vision and data come from the Jetson. I don’t know the pros and cons of that, but it does allow some separation of control and brains.

By simple computer do you mean a shield/cape like an Arduino or BeagleBone, or do you mean just a mini-PC? Or do you mean something like a mini PC on a PCIe interface?

I’m not sure what you mean by “implementing ROS on the TX1”. You can install ROS on the TX1 and explore it. You’ll need L4T 24.2 (installed by JetPack 2.3). Here are instructions for installing ROS on the TX1:

The process there installs ros-base; you can modify the scripts to add desktop or desktop-full if desired.

Thanks for the reply… Correct me if I’m wrong, but from that thread it seemed that people who installed ROS on the Tx1 found it wasn’t worth doing unless you went back to much older versions.

Depends on what you’re trying to do.
With the advent of 64-bit on the TX1 and the changes over the last month or so in the ROS source code, two things happened.
First, you can compile ROS from source on the TX1 fairly easily now. This was difficult in the previous L4T 23.X, which mixed a 32-bit user space with a 64-bit kernel.
Second, installable .deb packages of ROS are now made available by OSRF for 64-bit ARM. There is a trick or two to getting them to install on the TX1 correctly, but those issues have been mostly ironed out.

One thing to be careful of when reading forum posts is that, in general, people are trying to find answers to questions or issues that they have. Sometimes that reflects a very general issue; other times it’s something very much in a niche. It is sometimes difficult to tell the experience level/background/competency of the people asking questions.

You will also notice that a lot of people will say the equivalent of “I have a problem, the program doesn’t work,” which generally indicates that they are not experienced. An experienced forum member knows that they have to give enough information and limit the scope of the question to expect help or information.

For example, in the post you mentioned there’s an issue installing a ROS package called OctoMap. Remembering that there are several thousand ROS packages, it is not safe to assume that trouble with one package means that all ROS packages have issues. Another part of the thread, about the SSL certificate, is one of the workarounds I mentioned above; it had been answered in other threads about building ROS over the last year.

If you’re using the 23.X version of L4T, yes, you should use the older version of ROS. If you’re using the 64-bit version, then it doesn’t make sense to use the older version. The older version is for Ubuntu 14.04, the new one for Ubuntu 16.04. Whether one finds ROS worthwhile in and of itself is another story.

Thank you. That was a very informative read. I’ll start experimenting with Ubuntu 16.04 and JetPack 2.3 with ROS and see how far I can get. Can you comment on what was mentioned above regarding the usefulness of using a separate mini-PC to run servos, sensors, effectors, etc.? Or is making two different Linux machines talk to each other more trouble than it’s worth?

I did see this link on your site where the ROS install has been updated as of October: Robot Operating System (ROS) on NVIDIA Jetson TX1 - JetsonHacks

Thank you! That helps. When I was trying ROS a month or so back I was having all sorts of problems finding the package (“unable to locate… ”), as were so many other people. I’m guessing that’s all resolved now.

I don’t know specifics, but from what I’ve heard tiny Cortex-M type controllers (including Arduino) can work for servo control while the more complicated functions go to a Jetson. There wouldn’t be any advantage to using a mini-PC.

The second question is easier to answer. ROS was invented to solve that problem (having multiple machines “talk” to each other). In essence ROS is a distributed message-passing system: it has a server program (roscore) that lets interested parties advertise and subscribe to topics. If you can get a wired or wireless connection from one machine to another, they can all work together. The rest of the ROS ecosystem has been built on top of that idea. A hardware “standard” called H-ROS (https://h-ros.com) is just being rolled out, which allows “plug and play” types of capabilities for hardware.
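
To make the publish/subscribe idea concrete, here is a minimal sketch of a rospy publisher (assuming ROS Kinetic on Ubuntu 16.04); the node and topic names are made up for illustration. A subscriber on any other machine on the network only needs its ROS_MASTER_URI environment variable pointed at the machine running roscore.

```python
#!/usr/bin/env python
# Minimal ROS publisher sketch (rospy, ROS Kinetic).
# Topic name and message content are placeholders for illustration.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('robot/status', String, queue_size=10)
    rospy.init_node('status_talker')
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data='arm ready'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```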

The first part of your question has more to do with the projects people are working on, their budgets, parts availability, and what they are comfortable with. A mini-PC is conceptually equivalent to a Jetson, but PCs usually don’t have GPIO pins. A small PC-type board, like an UP Board, doesn’t have a comparable GPU. How you put all the parts together and control them tends to be a cost issue.

As linuxdev said, most “smart” sensors, e.g. Dynamixel servos and most motor controllers, have some sort of built-in microcontroller (like an ARM Cortex-M). These devices tend to be driven over a serial bus, easily controllable by a single computer. But the devices tend to be expensive, so people will ‘build their own’. Take the case of a Dynamixel servo versus a hobbyist servo. The hobbyist servo needs a PWM pulse to position it and provides no feedback. So people use something like an Arduino or PWM driver to provide the PWM pulse, and then use some type of feedback sensor (such as a potentiometer, which requires an analog input). Then the main computer talks to the Arduino over serial. They save some bucks with a less robust packaging tradeoff. But everything is like that: in something like a robotic hand where you have 4 or 5 servos, you can imagine the Arduino becoming a little busy, which may mean you need a couple of them as you build out the rest of a robotic arm.
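
As a sketch of that “main computer talks to the Arduino over serial” arrangement, something like the following could run on the Jetson side. The serial port, baud rate, and the ‘S<channel>:<angle>’ command format are assumptions; the matching Arduino sketch would parse the command, generate the PWM pulse, and reply with the potentiometer reading.

```python
# Sketch of commanding a hobby servo through an Arduino over serial.
# Port name, baud rate, and command protocol are made up for illustration.
import serial  # pyserial

def set_servo(port, channel, angle_deg):
    """Send a servo position command and read back a feedback line."""
    cmd = "S{}:{}\n".format(channel, int(angle_deg))
    port.write(cmd.encode('ascii'))
    # The hypothetical Arduino sketch replies with the potentiometer reading.
    return port.readline().decode('ascii').strip()

if __name__ == '__main__':
    with serial.Serial('/dev/ttyACM0', 115200, timeout=1.0) as port:
        print(set_servo(port, 0, 90))  # center the first servo
```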

Typically microcontrollers are used when real-time performance is required, and they tend to run relatively simple applications (get sensor readings, do something). Sometimes they’ll do some math, like implementing PID loops and so on, but they are not considered ‘general computing’ devices. Other times people will use a microcontroller as a convenience, such as an inexpensive way to add a serial interface to a sensor (conceptually a hardware device driver) or to gather a bunch of analog signals.
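
For illustration, here is a minimal, runnable Python sketch of the kind of PID loop a microcontroller would typically run in C at a fixed rate; the gains and the setpoint/measurement sources are arbitrary placeholders.

```python
# Textbook PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID(object):
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example use with placeholder gains, called every 10 ms:
#   pid = PID(1.2, 0.01, 0.05)
#   output = pid.update(target_rpm, measured_rpm, 0.01)
```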

In the robot world, almost everything is engineering challenges and tradeoffs. One of the questions you have to answer is: “There are all the smart sensors, dumb sensors, and electrical bits; how does it all get tied together?” You may want to know how the Jetson fits in. For example, you might want to use the Jetson as a smart sensor. Let’s say that you need high-resolution stereo vision with a disparity map. You can use a ZED stereo camera, or a couple of high-quality, high-frame-rate cameras, and use the Jetson to calculate the disparity map using the CUDA cores, then publish the map, perhaps as a ROS topic. Along those lines, the Jetson may also do object detection on the scene and publish those results. Having the GPU on the device makes the whole thing work.
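
As a rough sketch of that “Jetson as a smart sensor” idea, a node along these lines could compute a disparity map and publish it as a ROS image topic. The camera capture is omitted (placeholder frames stand in for synchronized left/right images), the topic name is an assumption, and OpenCV’s CPU StereoBM stands in for a CUDA-accelerated matcher.

```python
#!/usr/bin/env python
# Compute a stereo disparity map and publish it as a ROS Image topic.
import cv2
import numpy as np
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

def main():
    rospy.init_node('stereo_disparity')
    pub = rospy.Publisher('stereo/disparity', Image, queue_size=1)
    bridge = CvBridge()
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        # Placeholder frames; a real node would grab synchronized camera images.
        left = np.zeros((480, 640), dtype=np.uint8)
        right = np.zeros((480, 640), dtype=np.uint8)
        disparity = matcher.compute(left, right)  # 16-bit fixed-point disparity
        pub.publish(bridge.cv2_to_imgmsg(disparity, encoding='16SC1'))
        rate.sleep()

if __name__ == '__main__':
    main()
```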

On the other hand, let’s say that you want to use a Jetson as the high-level ‘brain’ of a robot. You may have trained a neural network which takes robot sensors as input and then sends output based on those readings. An example might be a self-driving car, much like NVIDIA has shown over the last couple of years. A person drives an instrumented car around for a few thousand miles; that information (steering angle, throttle, brakes, camera information, and so on) is then used to train a network. The trained network is transferred to a Jetson, which then runs inference on the car’s sensor input using the model and sends outputs to ‘drive’ the car. Architecturally, all the top-node Jetson knows is that it is sending/receiving signals; the implementation of each subsystem is up to the designer. Obviously subsystems might be added; things like traffic-signal and pedestrian detection might provide useful additional input. Again, having a GPU on the device allows the inference engine to run smoothly.
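
As a very rough sketch of that “trained network as the brain” loop, the Jetson-side code might look like the following. The model file, input size, and the way steering commands reach the actuators are all assumptions, and a production setup would more likely run the network through an optimized runtime such as TensorRT rather than a generic framework.

```python
# Hypothetical inference loop: camera frame in, steering angle out.
import numpy as np
from tensorflow import keras  # assumes a Keras-format model trained offline

model = keras.models.load_model('steering_model.h5')  # hypothetical file

def drive_step(camera_frame):
    """camera_frame: HxWx3 uint8 image from the car's front camera."""
    x = camera_frame.astype(np.float32)[None] / 255.0   # normalize, add batch dim
    steering_angle = float(model.predict(x)[0, 0])      # inference on the Jetson
    return steering_angle                               # handed to the steering subsystem
```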

In your case, you should spend some time getting a feel for what the correct “distribution of labor” is for your design: how many sensors, detectors, motors, etc. There are several serious hobby android-ish projects to look at on the net, including:

InMoov: http://inmoov.fr
XRobots Project Ultron: https://www.youtube.com/playlist?list=PLpwJoq86vov-v97fBMRfm-A9xv8CzX8Hn
Yale OpenHand: Yale OpenHand Project

If you have a little more budget, then any of the robots from the recent DARPA Robotics Challenge (2012-2015) will provide some good information. The majority of those robots were running ROS.

I was unaware ROS was working on hardware compliance. I’m guessing they won’t be selling the hardware; they’re just making sure vendors have code that works with and is compliant with ROS? Do I have that right?

That definitely helps me understand the difference between Dynamixel and regular servos, as well as how I can implement the Tx1 into my design.

Am I correct in my estimation that you can use Arduino/shield/cape devices wirelessly in components that rotate or that would be difficult to run power/comms through? For example, on a Johnny Five-style robot, could you use shields/capes to run the tracks and assorted locomotion components and then wirelessly communicate back and forth with the Tx1? And would arms/hands work the same way?

In the ancient past (early 2000s) I’ve used slip rings to do this, but that was difficult, expensive, and a pain. I’m guessing that kind of thing is all in the past now, and everything that cannot be physically connected with cables communicates wirelessly with the ROS publishing service.

Thank you for pointing me in many good and informative directions. That helps a great deal!!

Wireless depends on the environment and the component. In a noisy electrical environment, bad things could happen if you lose connectivity. Motors themselves generate a lot of electrical noise; that’s one of the reasons you can’t have an IMU or GPS receiver close to the motors on a rover or a quadcopter, for example. So for non-critical functions you may be able to get away with wireless, but not for something like locomotion. For lack of a better term, mission-critical functions usually need wired connections (and sometimes a backup) in real-world applications.

With that said, wireless is ridiculously cheap now and can be cleverly integrated into a lot of applications. So if you were building an R2-D2, for example, you could imagine the electronics in the head talking wirelessly to a computer placed in the body, so you wouldn’t need the usual slip ring used to wire the spinning head.

I think in our setup we’re going to need two heavy-duty ESCs (Electronic Speed Controllers) to run our motors. Is there any way of shielding the noise so that I can use a wireless connection? Is it really back to the slip ring? :-( Is there any other way around this when you have locomotion in, say, tank tracks, and a turret with the brains in it that has to spin 360 degrees? Or do you really have to put the brains down with the tracks and try to wirelessly communicate with the turret/head?