Using Pose Estimation on the Jetson Nano with alwaysAI

Lila Mullany
3 min read · Jul 10, 2020

Many models, including those for pose estimation, perform much better on a GPU than on a CPU. In this tutorial, we’ll cover how to run pose estimation on the Jetson Nano B01 and some nuances of running starter apps on this edge device.

To complete the tutorial, you must have:

  1. An alwaysAI account (it’s free!)
  2. alwaysAI set up on your machine (also free)
  3. A text editor such as Sublime Text or an IDE such as PyCharm, both of which offer free versions, or whatever else you prefer to code in
  4. A Jetson Nano B01 (+keyboard, mouse, wifi adapter, & monitor for setup)

Please see the alwaysAI blog for more background on computer vision, developing models, how to change models, and more.

Let’s get started!

NOTE: if you already have the alwaysAI CLI installed on your machine, make sure it is up to date (version 0.3.1 or higher).
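The CLI reports its version as a dotted string, and a plain string comparison would rank “0.10.0” below “0.3.1”, so compare the components numerically. The helper below is a hypothetical sketch for illustration, not part of the alwaysAI tooling:

```python
# Hypothetical helper for comparing dotted version strings numerically;
# this is not part of the alwaysAI CLI, just a sketch of the check.
def version_at_least(installed: str, minimum: str = "0.3.1") -> bool:
    """Return True if `installed` is at least `minimum`."""
    def parts(v: str) -> tuple:
        # "0.10.0" -> (0, 10, 0), so components compare as numbers
        return tuple(int(p) for p in v.split("."))
    return parts(installed) >= parts(minimum)

print(version_at_least("0.10.0"))  # True: 10 >= 3 numerically
print(version_at_least("0.2.9"))   # False: 2 < 3
```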

Make sure you have set up your Jetson Nano first: we’ve already developed a tutorial to help you do just that. Work your way through the tutorial at the previous link at least through the ‘Docker’ section. If you need help setting up your development machine, you can find step-by-step tutorials on Mac and Windows installation on the blog page as well.

If you have trouble testing your device connection with ‘ping’ you can also try to ssh. If you are on a Mac, ssh in with

ssh username@device_name.local

And if developing on Windows or Linux, use

ssh username@device

NOTE: your username can be found under ‘Settings -> User Accounts’, and the device name can be found under the main menu (upper right-hand corner) -> ‘About this computer’ -> ‘Device name’. You can edit the username and device name here as well, if you would like. If the ssh connection times out, keep retrying ‘aai app configure’; it should succeed within a few tries. If you get a ‘device not found’ type of error, try restarting the Nano, especially if you just enabled wifi or booted up for the first time.

Once the ssh connection succeeds, you can type ‘exit’ into the command line to close the session.

Finally, as the Jetson Nano setup tutorial mentions, the Dockerfile should use the ‘Nano’ base image. You must use the most current version of edgeIQ for the Nano to perform pose estimation; currently, this is nano-0.14.1.

Make sure the first line of the Dockerfile is

FROM alwaysai/edgeiq:nano-0.14.1
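For reference, a starter app’s Dockerfile is often little more than this base-image line. A minimal sketch for the Nano target might look like the following (the commented lines are placeholders for any extra dependencies your own app needs):

```dockerfile
# Minimal Dockerfile for a Jetson Nano alwaysAI app.
# The base image matches the edgeIQ release discussed above.
FROM alwaysai/edgeiq:nano-0.14.1

# (Optional) install your app's extra Python dependencies, e.g.:
# COPY requirements.txt .
# RUN pip install -r requirements.txt
```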

NOTE: you can find updates on edgeIQ releases at this link.

Once your Jetson Nano is set up and you’ve downloaded the starter applications, we can edit an existing starter app to run pose estimation. Either copy the contents of ‘realtime_pose_estimator’ into a new folder, or modify the existing starter app directly if you prefer.

The other main difference in this tutorial is the engine, accelerator, and model we use. For the Jetson Nano, use the NVIDIA accelerator and the TENSOR_RT backend engine. For more details on swapping out engines and accelerators in general, visit this page. To update the engine and accelerator, replace the following line in app.py

pose_estimator.load(engine=..., accelerator=...)

with

pose_estimator.load(engine=edgeiq.Engine.TENSOR_RT, accelerator=edgeiq.Accelerator.NVIDIA)

Now we need to update the model. In the original example app code we used ‘human-pose’, but when using the Jetson Nano, we need to use ‘human_pose_nano’. To update the model, use the CLI, typing the following into the command line:

aai app models add alwaysai/human_pose_nano
aai app models remove alwaysai/human-pose
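Putting the device steps together, the end-to-end flow from the command line looks roughly like this. The command names follow the alwaysAI CLI as of this writing and are an assumption; verify them against your installed CLI’s help output if they differ:

```shell
# Assumed end-to-end flow with the alwaysAI CLI; verify command
# names against your installed CLI version.
aai app configure                            # target the Jetson Nano over ssh
aai app models add alwaysai/human_pose_nano  # swap in the Nano model
aai app models remove alwaysai/human-pose
aai app install                              # deploy the app and models to the device
aai app start                                # run it, then open localhost:5000
```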

Now, just start your app (visit this page if you need a refresher on how to do this for your current setup), and open your web browser to ‘localhost:5000’ to see pose estimation in action!

See our blog page for example applications using pose estimation, including a YMCA application and a posture corrector.


Lila Mullany

Background in biomedical informatics, software engineering, and a newfound interest in computer vision.