Configuring video cropping and resizing with ROS’s image_proc
The existing documentation on this seemed a little sparse, so I figured I’d post an example based on what I worked out for my project.
I have a 1024×768 sensor_msgs/Image coming out of image rectification on the topic /overhead_cam/image_rect, with camera info on the predictably-named topic /overhead_cam/camera_info. What I want is to crop off the top 120 px of that image so that it’s the right aspect ratio to get scaled to 1680×1050 without distortion (technically, scaling is a distortion, but it’s the only distortion I want). Then I want to scale it up to 1680×1050.
<!-- Video cropping -->
<node pkg="nodelet" type="nodelet" args="standalone image_proc/crop_decimate" name="crop_img">
  <param name="x_offset" type="int" value="0" />
  <param name="y_offset" type="int" value="120" />
  <param name="width" type="int" value="1024" />
  <param name="height" type="int" value="648" /> <!-- 768 - 120 -->
  <!-- remap input topics -->
  <remap from="camera/image_raw" to="overhead_cam/image_rect_color"/>
  <remap from="camera/image_info" to="overhead_cam/camera_info"/>
  <!-- remap output topics -->
  <remap from="camera_out/image_raw" to="camera_crop/image_rect_color"/>
  <remap from="camera_out/image_info" to="camera_crop/camera_info"/>
</node>

<!-- Video resizing -->
<node pkg="nodelet" type="nodelet" args="standalone image_proc/resize" name="resize_img">
  <!-- remap input topics -->
  <remap from="image" to="camera_crop/image_rect_color"/>
  <remap from="camera_info" to="camera_crop/camera_info"/>
  <!-- remap output topics -->
  <remap from="resize_image/image" to="camera_resize/image_rect_color"/>
  <remap from="resize_image/camera_info" to="camera_resize/camera_info"/>
</node>

<!-- Dynamic reconfigure the resizing nodelet -->
<node name="$(anon dynparam)" pkg="dynamic_reconfigure" type="dynparam" args="set_from_parameters resize_img">
  <param name="use_scale" type="int" value="0" />
  <param name="width" type="int" value="1680" />
  <param name="height" type="int" value="1050" />
</node>
The first bit runs the crop_decimate nodelet as a standalone node and does the cropping. The nodelet can also decimate (shrink images by discarding rows/columns), but I’m not using that here.
The second bit starts up the video resizing nodelet as standalone, and the third bit fires off a dynamic reconfigure signal to set the image size. If you set use_scale to true, it will scale an image by some percentage, which isn’t quite what I wanted.
I imagine it’s possible to fold these into image rectification as nodelets (image rectification is itself a debayer nodelet and two rectify nodelets, one for the mono image and one for the color image), but I didn’t look into that because this worked for me.
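To sanity-check the pipeline, the usual ROS tools work fine. These commands assume the output topic names from the launch file above:

rostopic hz /camera_resize/image_rect_color
rostopic echo -n 1 /camera_resize/camera_info
rosrun rqt_image_view rqt_image_view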
More Webcams, Fewer Problems
I want to have two applications using the same webcam under Linux: in my case, a Kivy app that renders the webcam view and lets people interact with it, and ROS.
My first hope had been a Kivy app that subscribed to a ROS image topic and displayed the resulting image, but I’ve spent two days beating on it and only found a variety of ways to make a Python program segfault.
The new plan is to use v4l2loopback to create two virtual cameras, and have them both fed from /dev/video0 (the actual camera).
sudo modprobe v4l2loopback devices=2
That gets me the two devices. The thing is, these devices don’t actually output any video; they’re just fake video devices that a program can write to. There are a lot of instructions on feeding them with ffmpeg, but for some reason my computer says it has ffmpeg installed while locate can’t locate it. Instead, I’ve used GStreamer to set up /dev/video1 as a video device that ROS’s usb_cam node can handle, and configured the launch file for usb_cam to read /dev/video1:
gst-launch v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! v4l2sink device=/dev/video1
That solves half of my problem. The other half is having the second video device get fed, and getting Kivy to read /dev/video2.
gst-launch v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! tee name=t ! v4l2sink device=/dev/video1 t. ! queue ! v4l2sink device=/dev/video2
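That caps syntax is GStreamer 0.10, by the way. On a system with GStreamer 1.0 instead, I’d expect the equivalent pipeline to look roughly like this (untested on my setup):

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 ! tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! queue ! v4l2sink device=/dev/video2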
Adding the tee and queue gives me two video devices playing the same video, both fed by /dev/video0. The usb_cam ROS node can read either one of them. The basic Kivy camera app that I bashed together doesn’t work, though. It seems to default to trying to open /dev/video0, and then failing because the gst-launch invocation is using it.
AnchorLayout:
    anchor_x: "center"
    anchor_y: "center"
    Camera:
        id: camera
        resolution: (640, 480)
        play: True
        index: 2
    KeyboardListener:
        id: kbd_lstn
Adding index:2 to my Kivy app’s .kv file gets me a kivy app with my video in it. As long as my ROS nodes are looking at /dev/video1 and my Kivy app is looking at /dev/video2, no one steps on anyone’s toes, and they both can operate at the same time.
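One wrinkle: the loopback devices just take the next free device numbers, so /dev/video1 and /dev/video2 aren’t guaranteed across reboots. v4l2loopback’s video_nr module option pins the numbering explicitly:

sudo modprobe v4l2loopback devices=2 video_nr=1,2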
ROS and OpenCV will fite u, m8
I recently wanted to do some computer vision stuff using OpenFace, which is a collection of face-processing computer vision algorithms and tools to use them. It uses OpenCV 3.0.something, which depends on, among other things, vtk6 and its friends libvtk6-dev and python-vtk6.
Normally, this wouldn’t be a problem, but I use ROS Indigo, as does the lab I work in. ROS Indigo uses some previous version of vtk, and so attempting to install OpenCV 3.0 blows away my ROS install, and makes apt freak out when I try to install it again. The actual error was something like “you have broken held packages”, only I didn’t actually have held packages OR broken packages.
Apt just gives up at this point. Aptitude, on the other hand, proposes removing the offending VTK packages and proceeding with the ROS install. Only time will tell if I’ve trashed my OpenCV install, but if I have, I can just go back to an older OpenCV version.
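For the record, the failure and the fix looked something like this (ros-indigo-desktop-full is the usual metapackage; substitute whatever you had installed):

# apt refuses with "you have broken held packages"
sudo apt-get install ros-indigo-desktop-full
# aptitude proposes removing the conflicting vtk6 packages, then proceeds
sudo aptitude install ros-indigo-desktop-full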
Further Troubles with TinyRobos
The white version of the TinyRobo board that has the missing ground trace also doesn’t have a proper connection for the pullups on the I2C address lines for the motor drivers. The drivers are still there, and scanning the I2C bus with a Bus Pirate (Amazon) showed me that they were at 0x63 and 0x64 on the I2C bus, rather than where I expected them to be (at 0x66 and 0x68). The difference is consistent with a connection that should have been to Vcc being left open.
I’m not wild about the problem, but it did give me an opportunity to set up and use Pulseview/Sigrok, my cheap clone logic analyzer, and my Bus Pirate, so it’s not a total waste.
For my own future reference, as well as anyone else who’s interested, the way to set up the Bus Pirate on Ubuntu is this:
- Plug it into a USB port
- Open up a terminal and type screen /dev/buspirate 115200 8N1, where /dev/buspirate is whatever device your Bus Pirate ended up on. Mine was /dev/ttyUSB0.
- The terminal will go blank. Hit enter, and you should get the “HiZ>” Bus Pirate terminal.
The I2C bus scan is run by hitting “m” to get the menu, “4” to get I2C mode, “3” to set the speed to 100kHz, and then “(1)” to run the scan macro.
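From memory, the tail end of a scan session looks something like this; the found-device lines are in the Bus Pirate’s 8-bit address notation, with the addresses from my board. Exact menu text varies between firmware versions:

I2C> (1)
Searching I2C address space. Found devices at:
0xC6(0x63 W) 0xC7(0x63 R) 0xC8(0x64 W) 0xC9(0x64 R)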
The cheapo logic analyzer is a USBee & Saleae clone, which I got because I’m bad and should feel bad (and also not rich). It has a switch to determine which device it claims to be. In Saleae mode, Sigrok loads an alternate firmware onto it, so I’m not really sure where that falls in the intellectual property/doing the right thing by small businesses framework, but if you can afford one, get a proper USBee or Saleae. They’re much better built (the Saleae Logics in particular are tanks) and have more and better features.
I’m doing all this stuff in Enschede, at U. Twente. I’ve been hanging out with the people in the HMI group, which “does things with stuff”. They do a lot of work with things like proxemics in interaction, socially aware robots and technology, and so forth. There’s something of a distinction here between technical stuff, which is what I do a lot of, and more abstract work with avatars and such. I’m a better fit with the RaM (Robotics and Mechatronics) group, which builds things like pipe-crawling robots and quadcopters.
Bugs in new boards
I seem to have left a ground connection off the PCB, which causes the 3.3v regulator to not work. I’ve fixed it in the PCB design in the repository, but the DirtyPCBs order link goes to a product that doesn’t work, so I still have to fix that.
Since I’m going to have to do a new version of the PCB anyway, I’ve added a blinky light on one of the IO pins so that I can have an additional channel for debug information.
New Swarm Controllers
I’ve ordered the second version of the swarm control boards. If you want some, you can get them here, but I advise against doing so until after a post shows up here saying either that they work, or that they’re busted.
In the meantime, I’ve been realizing that the boards are good for all sorts of stupid tricks. For instance, you can control people using galvanic vestibular stimulation, which uses 1-1.5mA at pretty low voltages (more academic version, more hacking). Since the swarm control boards already use a 3.7v lithium cell, additional voltage regulation isn’t needed (if anything, they may be too weak), and PWM can be used to control the current. A resistor in series might also be good, in case of… errors.
The same board could also be connected to a door latch or electric strike, which would let a user connect to a web page (the ESP8266 can serve web pages and act as an AP) and put in a password to open the door. Lockitron appears to be making a business out of selling this, but the mechanics are cheaper.
Given that there’s also an I2C bus on the device, IO expanders, sensors, and other goofiness could be added to make wearables that respond to the environment, smart dust sensors, IoT nodes for home automation, scales that tweet about how much you weigh, etc. IoT is the new black! It’s a floor wax! It’s a dessert topping!
ESP8266, Serial Adapters, and Resets
As detailed in the previous post, I’ve been having some trouble getting the Arduino development environment to automatically reset my ESP8266 board using the DTR and RTS lines of the serial adapter. Part of my problem may still have been the cheap serial adapter, but today I found a new part of the problem.
The ESP8266 is extremely sensitive to noise on the CH_PD line, and I was using a 9″ long jumper to connect CH_PD to RTS. I confirmed with my O-scope that RTS was pulsing as it should, but the first pulse threw the ESP8266 into some weird state where it spewed noise on a bunch of pins (GPIO0 seemed to be the worst), and uploading would, naturally, fail.
Switching to 3″ jumpers cleared up the problem and let my Arduino IDE reset the ESP8266 as it should.
I’ve changed the schematic in the Github repo for the project to reflect the new reset wiring, but I still have to add a 5V input connection for charging the battery. Once that’s done, I can design a new PCB.
A Word of Warning
My PhD work (TinyRobo) uses a USB-serial converter to talk to the ESP-8266 modules in the tiny robots. Normal FTDI cables end in a 0.1″ 6-pin header with this pinout:
- Black – Ground
- Brown – CTS
- Red – VCC
- Orange – TX
- Yellow – RX
- Green – RTS
It turns out that esptool can manipulate the DTR and RTS lines to reset the chip into bootloader mode, which is great for uploading code to it. It also means I can get away with not having any parts on the TinyRobo boards to handle the reset, which is great because it lets me keep the board small. Unfortunately, the FTDI cable I have doesn’t expose the RTS line, so I got a cheap converter module off Amazon.
I added a red jumper wire and cut a trace on the module so that the pins would be:
- DTR
- RX
- TX
- VCC
- CTS
- Ground
So far, so good, but I can’t upload with it. I threw a scope on the lines, and it looks like instead of swinging from VCC to ground like well-behaved TTL serial lines, they swing from VCC to VCC minus some tiny voltage, less than a volt. Adding pull-downs on the lines doesn’t seem to have helped. It could be that the timing is off, but I suspect that somewhere, some cheapskate saved some fraction of a cent on this board, at the expense of it doing the one thing it was supposed to do (YOU HAD ONE JOB).
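For reference, the upload that relies on that DTR/RTS auto-reset is just the stock esptool invocation; the port and firmware path here are placeholders, not my actual setup:

esptool.py --port /dev/ttyUSB0 --baud 115200 write_flash 0x00000 firmware.bin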
Serious MOSFETs
I’m designing a simple H-bridge for simple but large projects, built around 300A, 40V MOSFETs. The board also has a gate driver for the MOSFETs, though I hope to find a driver that takes I2C or some other interface, rather than PWM.
The board overall is pretty small, but I haven’t figured out a good way to heat sink it. The unpopulated round footprints are for capacitors, and when the caps are installed, they block any easy installation of a heat sink over the MOSFETs. I may design the second iteration of the board around thermal management, and have holes for mounting a commodity CPU heat sink over the FETs.
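As a back-of-the-envelope check on why the thermals matter (assuming an R_DS(on) in the low milliohms, which is typical for FETs in this class): conduction loss is I²R, so at 2mΩ a conducting FET dissipates about 0.8W at 20A, but 180W at 300A. Anywhere near the rated current, a bare board has no chance of shedding that heat.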
The current design of the board is available here.
I’ve tested a prototype of the current design, and it does work, but I didn’t stress it very hard.
Toybrain Returns Again
I found a motor driver chip that looks promising. It doesn’t support easy (solderless) swapping of the motor drivers, but I’ve also had a bit of a shift in my use case. I’m still looking to lobotomise and re-animate children’s toys, but I’m doing it for swarm robotics on the cheap, so staying small outweighs being able to replace the motor driver chips.
The IC is the TI DRV8830. It is a 1A single-channel MOSFET H-bridge with an I2C interface and automatic current limiting. The automatic current limiting makes it very hard to blow the driver by overloading it, so I don’t have to worry about replacing the drivers as much.
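As a sketch of driving it from Linux with i2c-tools (the register map is from the DRV8830 datasheet; the bus number and the 0x60 address are assumptions that depend on your wiring and the chip’s address pins):

# CONTROL register (0x00): VSET in bits 7:2, IN2/IN1 in bits 1:0
# 0x99 = VSET of 0x26 (~3.05V out) with IN1=1, IN2=0 -> forward
i2cset -y 1 0x60 0x00 0x99
# IN1=IN2=0 -> coast
i2cset -y 1 0x60 0x00 0x00
# FAULT register (0x01) holds the overcurrent/overtemperature flags
i2cget -y 1 0x60 0x01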