Welcome to my site

My name is Ignacio (Nacho) Mellado. I build intelligent machines with visual perception and control. On this website you will find a selection of my projects, experiments, and random thoughts.

HF1: End of 2024 Recap

As the year reaches its end, I wanted to share with everyone what the HF1 project accomplished in 2024 and what to expect in 2025.

Every sign of support is one more push for HF1 to become a real product. If you joined the waitlist, talked about HF1, liked or reposted my posts, thank you.

One of my New Year's resolutions is making HF1 a learning experience around a DIY kit. If you want that too, please spread the word. There’s a section at the end with ways in which you can help [...]

A lot has happened since I started working full-time on the project in mid-April. The prototype’s hardware was mostly in place back then, from the time HF1 was just a fun way to pass the time on [...]

How to get into robotics if you are a CS major

I was recently asked by a Computer Science (CS) major about a good roadmap to get into robotics. This was my response, in case it can help someone else:

My approach to learning is normally by making. If that’s the case for you too, the best way to learn to make your own robot is to start making your own robot. That should be the forcing function [...]

Linear algebra: to express locations and movements of the robot and other things in 3D, e.g. vectors, homogeneous matrices, quaternions… There's a lot of overlap with 3D graphics here. 
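To make this concrete, here is a minimal sketch of the kind of math involved (my own illustration using numpy, not code from the post): a robot pose stored as a 4x4 homogeneous matrix built from a unit quaternion and a translation, then used to move a point between frames.

```python
# Minimal sketch: a rigid-body pose as a 4x4 homogeneous matrix built from a
# unit quaternion (rotation) and a translation vector, used to transform a
# point from the robot's frame into the world frame.
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def make_pose(q, t):
    """Build a 4x4 homogeneous transform from quaternion q and translation t."""
    T = np.eye(4)
    T[:3, :3] = quat_to_rot(q)
    T[:3, 3] = t
    return T

# Robot pose in the world: rotated 90 degrees about Z, translated to (1, 2, 0).
q = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])  # (w, x, y, z)
T_world_robot = make_pose(q, [1.0, 2.0, 0.0])

# A point 0.5 m in front of the robot (robot frame), expressed in the world frame.
p_robot = np.array([0.5, 0.0, 0.0, 1.0])  # homogeneous coordinates
p_world = T_world_robot @ p_robot
print(p_world[:3])  # -> approximately [1.0, 2.5, 0.0]
```

The nice property of homogeneous matrices is that chained frames (world to robot, robot to camera, camera to object) compose by plain matrix multiplication, which is exactly the overlap with 3D graphics mentioned above.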

Synchronization: robots are almost always distributed systems; even when there’s only one processor, there are many independent sensors and actuators that can’t be queried/commanded [...]
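As a small illustration of what that means in practice (hypothetical sensor names and rates, not HF1 code): each sensor can be polled by its own task at its own rate, and the control loop reads the latest cached values instead of blocking on every device.

```python
# Illustrative sketch: two independent sensors report at different rates, so
# the control loop cannot simply query them one after the other. A background
# task per sensor keeps a timestamped "latest value" cache that the control
# loop reads without blocking.
import asyncio, random, time

latest = {}  # sensor name -> (timestamp, value)

async def poll_sensor(name, period_s):
    """Simulate an independent sensor producing a new reading every period_s seconds."""
    while True:
        await asyncio.sleep(period_s)
        latest[name] = (time.monotonic(), random.random())

async def control_loop():
    """Run at a fixed rate using whatever readings are currently available."""
    for _ in range(5):
        await asyncio.sleep(0.1)  # 10 Hz control step
        now = time.monotonic()
        ages = {name: now - ts for name, (ts, _) in latest.items()}
        print({name: f"{age * 1000:.0f} ms old" for name, age in ages.items()})

async def main():
    pollers = [
        asyncio.create_task(poll_sensor("imu", 0.01)),    # fast sensor
        asyncio.create_task(poll_sensor("camera", 0.2)),  # slow sensor
    ]
    await control_loop()
    for task in pollers:
        task.cancel()

asyncio.run(main())
```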

HF1: Home companion robot
HF1 is a companion robot for the home. It understands you. It interacts with you. It keeps your data private. Join the waitlist today!
The Perceptive Portable Device
Can a smartphone perceive the environment like a human does? Portable devices are full of sensors, but they are still very limited in understanding what is happening from a human perspective: Where am I inside the building? Is my user healthy? Is the baby crying? This side project is my quest to give portable devices such capabilities.
Autonomous LinkQuad quadcopter with Computer Vision
MAVwork, my open-source framework for visual control of multirotors, now supports a new quadcopter from UAS Technologies Sweden. Everything was tested with a speed control application. Watch a semiautonomous flight of this elegant new drone.
Autonomous Pelican quadcopter with Computer Vision
See how the versatile Pelican from Ascending Technologies acquired basic automatic take-off, hover, and landing capabilities thanks to MAVwork, the open-source framework for drone control. Watch the open MultirotorController4mavwork in action.
Camera localization with visual markers
There are tons of applications where it is key to know the accurate location of things in a workspace. With these cheap and easy-to-build visual markers, you can know the position and attitude of anything with a camera on it. They block less visual space and offer less air resistance than 2D codes of equivalent size.
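For a rough idea of the underlying principle (a generic sketch, not this project's own marker design or detection code): once a marker's known 3D corner layout and its detected 2D image corners are available, a standard PnP solver such as OpenCV's cv2.solvePnP recovers the camera's position and attitude relative to the marker.

```python
# Generic sketch of camera pose estimation from a known planar marker.
# The marker geometry, detected pixels, and camera intrinsics below are
# made-up placeholder values.
import cv2
import numpy as np

# Known marker geometry: a 10 cm square marker, corners in the marker frame (meters).
object_points = np.array([
    [-0.05,  0.05, 0.0],
    [ 0.05,  0.05, 0.0],
    [ 0.05, -0.05, 0.0],
    [-0.05, -0.05, 0.0],
], dtype=np.float32)

# Hypothetical detected corner pixels for one frame (would come from the detector).
image_points = np.array([
    [310.0, 220.0],
    [390.0, 218.0],
    [392.0, 300.0],
    [308.0, 302.0],
], dtype=np.float32)

# Hypothetical pinhole camera intrinsics from a prior calibration.
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)       # marker-to-camera rotation
cam_pos_in_marker = -R.T @ tvec  # camera position expressed in the marker frame
print(ok, cam_pos_in_marker.ravel())
```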
MAVwork released for Parrot AR.Drone
MAVwork is a framework for drone control that was born in 2011 during a short research stay in the Australian Research Centre for Aerospace Automation (ARCAA). Read about the inception of MAVwork and watch a video of the first test controller for a Parrot AR.Drone with a Vicon system.
Laura: Self-driving public transportation. Prototype II.
Discover how this 12-ton truck was automated to drive itself with Computer Vision. This was the second prototype, after a Citroën C3, in a project led by Siemens to develop a self-driving public transportation system.
Laura: Self-driving public transportation. Prototype I.
Tall buildings blocking the GPS signal, lane markings and road signs hidden by traffic... Cities can be a very harsh environment for a driverless bus trying to figure out where it is and where to go. In this project, led by Siemens, I explored a solution with Computer Vision.