How to get into robotics if you are a CS major

Posted on Oct 29 2024

I was recently asked by a Computer Science (CS) major about a good roadmap to get into robotics. This was my response, in case it helps someone else:

My approach to learning is normally by making. If that’s the case for you too, the best way to learn robotics is to start building your own robot. That project should be the forcing function that pushes you to understand every hardware and software component in depth (especially when they don’t work).

If you have experience with hardware, you could just buy or 3D print the individual parts and build the robot yourself. Otherwise, I’d start with a robot kit. Try RobotShop and Amazon.

A CS major is a big advantage, and you’ll use much of it. Here are some nice-to-have fundamentals:

  • Linear algebra: to express locations and movements of the robot and other things in 3D, e.g. vectors, homogeneous matrices, quaternions… There's a lot of overlap with 3D graphics here (see the short sketch after this list).

  • Synchronization: robots are almost always distributed systems; even when there’s only one processor, there are many independent sensors and actuators that can’t be queried/commanded at once. It’s good to understand the impact of desynchronization, to know ways to ensure synchronization, and to have best-effort techniques to deal with desynchronization when it’s unavoidable.

  • Concurrency: because of the asynchronous nature of the system, you’ll have multiple processes/threads running concurrently on your onboard computer (or even in the cloud!). It’s good to understand the potential issues and how to deal with them.

  • Communications: all of the things above have to communicate asynchronously. It’s useful to know ways to reduce latency (bad for control loops; see below) and to guarantee reception, integrity…

  • Systems integration: again, lots of distributed heterogeneous things, so different protocols, standards, libraries, dependencies… Good to know ways to deal with that complexity: portability, interfaces, containerization…
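
To make the linear algebra point concrete, here's a minimal sketch (assuming Python with numpy) of probably the most common operation in robotics code: using a 4×4 homogeneous matrix to express a point seen by the robot in the world frame. The frames and numbers are made up for illustration.

```python
import numpy as np

# Pose of the robot in the world frame: rotated 90 degrees about Z
# and translated 1 m along X. A 4x4 homogeneous matrix packs the
# rotation and the translation into a single transform.
theta = np.pi / 2
T_world_robot = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 1.0],
    [np.sin(theta),  np.cos(theta), 0.0, 0.0],
    [0.0,            0.0,           1.0, 0.0],
    [0.0,            0.0,           0.0, 1.0],
])

# A point detected 2 m in front of the robot, in the robot's frame,
# written in homogeneous coordinates (trailing 1).
p_robot = np.array([2.0, 0.0, 0.0, 1.0])

# One matrix-vector product expresses the same point in the world frame.
p_world = T_world_robot @ p_robot
print(p_world[:3])  # -> [1. 2. 0.]
```

Chaining transforms is just matrix multiplication, which is why these matrices (and quaternions for the rotation part) show up everywhere from kinematics to SLAM.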

There are disciplines with strong ties to CS that are very specific to robotics: sensor fusion, localization, mapping, motion control, planning… You won’t need to go in depth into all of them, but knowing the general concepts in each one helps. You’ll normally find libraries with implementations of the most popular algorithms and abstractions for many of the fundamentals listed above. Check out the Robot Operating System (ROS), which has an ecosystem of components and libraries ready to use. Make sure to search for “ROS 2” material; if you search for just “ROS”, chances are you’ll end up reading deprecated ROS 1 documentation.
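
If you want a feel for what ROS 2 code looks like, here's a minimal Python (rclpy) node: it subscribes to a lidar topic and publishes velocity commands, stopping when an obstacle gets close. The `scan` and `cmd_vel` topic names are common conventions for mobile bases, but your robot's may differ.

```python
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class StopBeforeWall(Node):
    """Drive forward; stop when the lidar sees something within 0.5 m."""

    def __init__(self):
        super().__init__('stop_before_wall')
        self.cmd_pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.create_subscription(LaserScan, 'scan', self.on_scan, 10)

    def on_scan(self, scan):
        # Ignore invalid readings (inf/nan) before taking the minimum.
        valid = [r for r in scan.ranges if math.isfinite(r)]
        cmd = Twist()
        cmd.linear.x = 0.0 if valid and min(valid) < 0.5 else 0.2
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(StopBeforeWall())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```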

Some of those ROS components are based on complex algorithms and their performance depends on other components and the physical space around the robot. So, reading a bit of the theory while you tinker with the libraries to understand how the algorithms work and their trade-offs will help you configure the system and integrate things the right way. When things don’t work, which is most of the time, that knowledge will help you understand why and how to fix them. 

For instance, you could use a SLAM component to localize the robot while mapping the environment. That component needs to process images from the robot’s camera, and possibly data from the IMU (Inertial Measurement Unit) and/or the wheel encoders. The SLAM algorithm may assume the camera has a global shutter, but your camera might have a rolling shutter. Knowing the internals of the algorithm and the camera would help you either avoid this situation or find appropriate linear and angular speed limits to configure in the control stack given your localization accuracy requirements. 
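
As a taste of what the wheel encoders contribute, here's a sketch of dead reckoning for a differential-drive robot, the kind of odometry that often feeds SLAM. The robot parameters below are hypothetical placeholders.

```python
import math

# Hypothetical robot parameters -- replace with your own hardware's.
TICKS_PER_REV = 2048   # encoder resolution
WHEEL_RADIUS = 0.035   # meters
WHEEL_BASE = 0.20      # distance between the two wheels, meters
METERS_PER_TICK = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV

x, y, heading = 0.0, 0.0, 0.0  # pose estimate in the odometry frame

def update_odometry(d_ticks_left, d_ticks_right):
    """Integrate one pair of encoder tick deltas into the pose estimate."""
    global x, y, heading
    d_left = d_ticks_left * METERS_PER_TICK
    d_right = d_ticks_right * METERS_PER_TICK
    d_center = (d_left + d_right) / 2            # forward motion
    d_heading = (d_right - d_left) / WHEEL_BASE  # rotation
    # Midpoint integration: move along the average heading of the step.
    x += d_center * math.cos(heading + d_heading / 2)
    y += d_center * math.sin(heading + d_heading / 2)
    heading += d_heading
```

Odometry like this drifts as encoder errors accumulate, which is exactly why SLAM fuses it with the camera and, possibly, the IMU.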

Controlling things in the physical world, not just robots, is always based on feedback loops. The world is messy, changing, partially observable and pretty unpredictable, so you need to constantly sense and correct as new information becomes available, faster than the rate of change: from keeping the speed of a wheel constant, to moving from point A to point B, to picking a bottle from the fridge and putting it on the table. Understanding the basics of control theory is a helpful mental framework for tackling automation problems in general.

If I had to teach robotics 101, I’d argue for first controlling the environment as much as you can, as opposed to designing a complex system to deal with it. For instance, if it’s a pick-and-place robot, make sure mechanically that all items will always be in the same position and orientation before picking. Decreasing uncertainty will always increase the chances of your robot succeeding, but there’s a lot of fun in making the robot deal with the mess instead, which takes us to some of the aforementioned algorithms and, as of late, machine learning.
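
To make the feedback-loop idea concrete, here's a sketch of the classic tool for it: a PID controller keeping a wheel at a target speed. The plant is a toy first-order motor model so the sketch runs standalone; the gains and constants are made-up placeholders that would need tuning on real hardware.

```python
import time

class PID:
    """Textbook PID controller. The gains below are placeholders to tune."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.02      # 50 Hz loop: sense and correct faster than things change
target = 1.0   # desired wheel speed, rad/s
speed = 0.0    # toy motor state; on a real robot you'd read an encoder
pid = PID(kp=0.8, ki=0.5, kd=0.0)

for _ in range(200):
    error = target - speed
    command = max(-1.0, min(1.0, pid.update(error, dt)))  # clamp to driver range
    speed += (3.0 * command - speed) * dt  # crude motor + friction dynamics
    time.sleep(dt)  # paces the loop, as it would on real hardware

print(f"wheel speed after 4 s: {speed:.3f} rad/s")
```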

If you’re interested in ML, or AI in general, there’s a lot going on there for robotics. There are models that can decide the next actuator commands (target joint angles, wheel speeds, etc.) directly from the camera image and other sensor inputs. Take a look at LeRobot by the HuggingFace team, for instance. I think they recently added ALOHA or some of these end-to-end models (can’t remember which). They also have other models for perception that could be helpful for your project. 
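
The control loop around such an end-to-end model tends to have this shape. Everything below is a hypothetical stand-in, not LeRobot's actual API, so check their docs for the real interface.

```python
import time

# Hypothetical stand-ins so the sketch runs; on a real robot these talk
# to hardware, and `policy` would be a trained end-to-end model.
def get_camera_frame():    return [[0.0]]     # placeholder image
def get_joint_angles():    return [0.0] * 6   # placeholder proprioception
def send_joint_targets(a): print("targets:", a)
def policy(obs):           return [0.1] * 6   # placeholder "model"

CONTROL_HZ = 30  # end-to-end policies often run near camera frame rate
for _ in range(3):  # a real loop would run until the task is done
    obs = {"image": get_camera_frame(), "joints": get_joint_angles()}
    action = policy(obs)      # sensors in, actuator commands out
    send_joint_targets(action)
    time.sleep(1.0 / CONTROL_HZ)
```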

I’m working on releasing my robot as a companion for the home through an eventual crowdfunding campaign, but I’m also considering starting with a maker version that folks can build themselves if there's enough interest. I’d provide the BOM (bill of materials) with links to parts, the custom parts (neck, battery pocket, connection hub, etc.), a fully functional software stack to build upon, and teaching materials. If that sounds interesting, I can keep you in the loop.
