Category: Programming

PDB for n00bs

PDB is the python debugger, which is very handy for debugging scripts. I use it two ways.

If I’m having a problem with the script, I’ll put in the line

import pdb; pdb.set_trace()

just before where the problem occurs. Once the pdb line is hit, I get the interactive debugger and can start stepping through the program, seeing where it blows up and what values variables are getting set to before that happens.
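
For reference, here’s roughly what that looks like in practice (the script and variable names are just a made-up example):

# buggy_script.py -- made-up example showing where the pdb line goes
def total(values):
    result = 0
    for v in values:
        import pdb; pdb.set_trace()   # execution stops here with a (Pdb) prompt
        result += v
    return result

print(total([1, 2, 3]))

At the (Pdb) prompt, n steps to the next line, s steps into function calls, p result prints a variable, l shows the surrounding source, c continues, and q quits.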

However, I recently found a very handy second way. I was debugging a script with a curses interface, which cleans up when it exits. Unfortunately, that cleanup means my terminal gets wiped when something crashes, so instead of a stack trace I just get dumped back to the shell with no information at all left on the screen.

Invoking the script with

python -m pdb ./my_script.py

gets me the postmortem debugger, so when something goes wrong, the program halts and I get the interactive debugger and some amount of stack trace. It’s messy looking because of curses, but I can at least see what is going on.
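
If invoking the whole thing under -m pdb is awkward for some reason, wrapping the entry point and calling the post-mortem debugger by hand gets roughly the same effect (main() here is a stand-in for whatever the script’s real entry point is):

import pdb
import sys
import traceback

try:
    main()   # stand-in for the script's real entry point
except Exception:
    traceback.print_exc()
    pdb.post_mortem(sys.exc_info()[2])   # interactive debugger at the point of the crash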

Lightstone Repairs

I have a Lightstone, and a copy of the game “Journey to Wild Divine”, despite not being a whifty hippie. My friend Ne0nRa1n gave them to me years ago for biofeedback hacking, but pointed out that the finger sensor connections were broken.

The Lightstone is actually a USB interface to GSR and pulse sensors. Internally, the board uses an MSP430F133 chip from TI to call the shots, along with an ST72F623F2M1. Why there are two microcontrollers in there, I’m not sure. It could be that one handles USB communication and the other deals with the analog signals, which is supported by the ST part being connected directly to the USB port (with a pair of test points along the traces) and the TI part having a lot of traces running to the analog section of the board.

Overall, the design of the hardware is solid. The Lightstone is easy to open up, and the board is well-assembled, with what I’d consider good-looking PCB design. There are lots of test points in the analog and digital sections. Spare GPIOs on the MSP430 are broken out to little pads for possible hackery. There are even populated headers that were probably used for programming the microcontrollers.

However, there’s one point where the hardware falls down. The finger contacts for the GSR and pulse sensors are on thin wires with minimal strain relief. Two nice microcontrollers, sweet board design, and it falls apart because wires break.

The sensors are on a 6-pin DIN connector, with red, black, orange, green, yellow, and white wires. The red and black wires each go to a GSR contact. The GSR contacts are silver or silver-alloy buttons, so I want to keep those. The other four wires go to the pulse sensor, which is a three-pin device. Looking into the front of the device, the left lead gets the yellow wire, the center lead gets the green and orange wires, and the right lead gets the white wire.

[Photo: 2016-01-01 21.31.12]

To repair the fingertip sensors, I had to pull out the hinge pins that hold the sensor case together. I used very fine-tipped pliers for this, starting from the hinge and then grasping the tip once I had pushed the pin out enough. Then I took the sensors apart, and cut out the bad section of wire. In the image below, the spot to grab the hinge pin is just to the right of the small spring.

[Photo: 2016-01-01 21.23.35]

I stripped the original cable, and wrapped the stripped sections in heat shrink. I drilled out the molded strain reliefs so I could thread the wires back through them more easily, threaded the wires, and used more heat shrink to improve the strain relief. My new wires are not as flexible as the old ones, but should be more durable. Finally, I soldered the sensors to the ends of the new wires, and put the sensor cases back together.

[Photo: 2016-01-01 23.05.01]

If you do this repair, be careful soldering to the silver-alloy sensor buttons for the GSR sensor. The silver part is surprisingly easy to soften and distort with heat from a soldering iron. I slightly damaged one of mine, but managed to do the other one with no problems.

[Photo: 2016-01-01 23.59.10]

I’m using liblightstone to get the information from the device. So far, it seems to be working fine.

Web Development with Flask and Python

I haven’t done anything that could properly be called web development since about 2002, when I took a college course in it. There have been a few developments in the field since then, and I’m a little rusty.

I chose Flask, because I like python and because Django seems like overkill for what I’m doing. There are literally dozens of frameworks out there, and I imagine some people know and care about the differences between them. If I had comments enabled, they’d be yelling at me to switch to Django right now. Hence, no comments.

Installing Flask is easy on Ubuntu:

sudo apt-get install python-flask

I copied the Flask “hello world” from the Flask page, ran it with

python snowflake.py

which got me the expected result, a web server running on port 5000 with a “Hello World” message.
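
For reference, the whole thing is only a few lines; this is essentially the example from the Flask site, saved as snowflake.py:

from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    app.run(debug=True)   # debug=True turns on the reloader and in-browser debugger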

My plan for this web app is to have users be able to visit some page, and the page will contain an image of a snowflake, generated from the url they used to visit. That’s it, but over on Facebook, my post about generating snowflakes from people’s names made people go just about nuts asking for them. Rather than generate them myself, I figured I’d write a web app.

Flask is a joy to work with. In debug mode, it detects changes to the file that contains the currently running app, and restarts when that file changes. It also presents a traceback and interactive debugger if something goes wrong in your app (it goes without saying that this needs to NEVER reach production, since it’s an interactive python shell on your server).

At this point, I have the core functionality of the app together, and I’m not even done with my beer. I can visit a URL, and a snowflake image gets generated from that URL. Everything else is details, and then deployment.

A couple of downloads later, the image now gets converted to png, and served back to the user as an image in a page. Soon, deployment!
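
The route itself is about as small as routes get. The snowflake drawing lives in its own code, so generate_snowflake() below is a hypothetical stand-in for whatever actually renders the PNG bytes:

from flask import Flask, Response

app = Flask(__name__)

@app.route('/<name>')
def flake(name):
    png_bytes = generate_snowflake(name)   # hypothetical: returns PNG data for this name
    return Response(png_bytes, mimetype='image/png')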

Shark Interface Still Not Working

Apparently, having figured out how the Shark joystick sends its information isn’t quite enough to get it working with the motor driver. I wrote software to send the same information that the joystick would usually send, but didn’t get a response. Then I guessed that the way both data lines go high before serial signalling commences might be some sort of init signal, so I configured an Arduino to send the same sequence, and I still don’t get a response.

It’s entirely possible that I don’t have the bit timing exactly right for the serial link, so I’m now working on bitbanging the serial in a more adaptable way, so I can test different bit lengths.

I’m going to keep plugging away at it for a bit, but I also have a plan B: lobotomize the motor driver. Assuming it uses an ATmega8 like the controller, I can pull the control IC and replace it with one flashed with the Arduino bootloader, and then use rosserial_arduino to control it from ROS. That does mean I’d want to log what the controller does before pulling it, so I have a rough idea of what signals go where, but it would vastly simplify controlling the system.

Lessons learned by being The Worst Game Developer

I’m writing a video game. It is called Pebble, and in Pebble, there is a pebble. You contemplate the pebble. I haven’t decided if there is going to be music or not, but there will be a pebble, in a featureless grey expanse, and you can contemplate it.

Just thinking about writing this game has brought me some interesting realizations. I doubt I’m the first one to have them, but it was neat to see how they all fit together.

The first realization is just a recap of things I already knew about developing software: “You’re going to throw the first one away” and “Do the simplest thing that could possibly work”.

When I first came up with the idea for Pebble, it was as a tech demo for Tree, which is similar (there is a tree, you contemplate it), but more complicated, in that a tree is larger. I was going to use level-of-detail (LoD) rendering to support real-time generative zoom from bird’s-eye to bug’s-eye views, store seeds so that the generated versions didn’t change between runs, etc. I read a bunch of papers on the topics, and saw that it was all very complex. I also hadn’t written anything, despite having read a lot of papers and learned a bit.

Eventually, I realized that if I had to load everything I needed to know into my head to write this game, first, I wouldn’t get around to writing it, and second, my head would explode.

Instead of either of those things, I’m writing the simplest bit of code that will draw something on my screen. The first version will draw a polygon, the second version will rotate it, and the third version will texture it. I’m going to have two code streams, one written using openFrameworks and one written using Polycode, so I can decide which of those libraries I’d rather use.

Once both libraries are through three versions, I’ll have the simplest thing that could possibly work, and I’ll throw the other one away.

Another revelation I had is that I don’t really know what pebbles look like. I mean, I have a general idea, but to render a pebble, a general idea doesn’t cut it. It doesn’t capture the variety of surface types that different kinds of weathering can cause, the colors of all the different rocks, and so forth. The reality of pebbles is way more complicated than the idea of pebbles.

My girlfriend and I went out on a beach on Cape Cod, Massachusetts, and looked at pebbles. Cape Cod is a terminal moraine, so the rocks there were pushed by glaciers from everywhere north of Cape Cod, and there are loads of different kinds of pebbles there.

This has two effects on my thinking about the design of Pebble, and of video games in general. The first is that the stone surface generation algorithm should be the simplest thing that could possibly work. The second is that AAA games in their current form are doomed.

AAA games have a huge amount of their budget dedicated to resources, such as the textures and designs of the characters. Because the current marketing push in video games is visual, each game is supposed to have better graphics than those before it, or people will mock it and it will lose sales. However, this is an infinite pit. Any game world is a map, a less-detailed representation that conveys an impression of a more detailed real world. With real maps, the real world is assumed to also exist, but in games it doesn’t. You run around in Liberty City in Grand Theft Auto, but “you” don’t “run” “around”. By pressing buttons, you cause the appearance of motion in a simulated person within a simulated, restricted world. The better the simulation gets, the more resources it requires. In real-world NYC, if you go to Battery Park, you can pick up gravel and throw it in the harbor. In the analogous unplaces in GTA, the ground is a perfect solid, smooth and impenetrable. In order to create a more perfect simulation, there would have to be simulated pebbles, and someone would have to create them.

All of these resources, the pebbles, clothes, guns, car tires, trees, buildings, and so forth in a video game, are made by people. These people get paid, so the more detail you want in a game, the more resources you need, and the more people you have to pay. Taking longer to make the game doesn’t work, as the technology is constantly shifting, so “more people” is pretty much the only viable solution at this point. Even licensing IP from other companies is just an abstraction of getting more people to work on the project.

As a result, the drive is now to make games more and more expensive to make, in order to get finer and finer detail that adds nothing to the narrative, but makes the finished package prettier. However, people are not going to pay hundreds of dollars for a game (except possibly that version of MechWarrior that came with a big robot control console), so either the game market has to grow without bound, or the industry has to start putting an upper bound on how much it can invest in making a game.

In a way, I’m hoping Pebble is a signpost on the path of excessive detail, a huge amount of clever rendering algorithms and generative textures in pursuit of the perfect simulation of the experience of contemplating a small stone. Whether the signpost says “Welcome!” or “Abandon Hope All Ye Who Enter Here” is an exercise for the reader.

USB parallel ports under Python on Ubuntu

I have this PCB designed to control four flame effects. Instead of running it on the Arduino, I’m doing an FFT on a laptop and trying to control the solenoid drivers through a USB parallel port adapter on the laptop.

Ubuntu recognizes the USB parallel port adapter, and gives me a port in /dev/usb/lp0. I don’t have permissions to access it, because its user and group are root and lp, and I’m neither of those. The specific error is:

>>> p = parallel.Parallel('/dev/usb/lp0')
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 187, in __init__
    self._fd = os.open(self.device, os.O_RDWR)
OSError: [Errno 13] Permission denied: '/dev/usb/lp0'

sudo chmod o+rw /dev/usb/lp0 doesn’t get me any closer, because whatever python-parallel does under the hood is not a legit operation on that dev entry.

>>> p = parallel.Parallel("/dev/usb/lp0")
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 189, in __init__
    self.PPEXCL()
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 241, in PPEXCL
   fcntl.ioctl(self._fd, PPEXCL)
IOError: [Errno 25] Inappropriate ioctl for device

The /dev/usb/lp0 device entry appears to be created by the usblp module. I have a suspicion that what’s going on here is that the device entry created by usblp isn’t claimable the way one created by ppdev would be.

Using rmmod to get rid of usblp doesn’t work; the module just gets reloaded when I re-insert the USB connector for the adapter. Blacklisting it in /etc/modprobe.d/blacklist.conf just means that the /dev entry doesn’t get created, not that ppdev takes over.

Most reports online also indicate that USB parallel ports don’t really act like parallel ports, but only work for connecting parallel printers. Since I’ve already wasted enough time on this, it’s time to go with plan B. I’m going to fully populate the board, so that it has an Arduino on it, and then interface to that using serial commands and possibly Processing or OpenFrameworks.
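
The laptop side of plan B is at least simple. Assuming the on-board Arduino ends up listening for single-character channel commands over its USB serial link (the port name and protocol here are made up), pyserial covers it:

import serial

# Port name and baud rate are guesses; adjust for the actual board.
ser = serial.Serial('/dev/ttyACM0', 115200, timeout=1)

def fire(channel):
    # Hypothetical protocol: send the channel number to pulse that solenoid driver.
    ser.write(str(channel))

fire(2)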

Naming things and "ImportError: No module named msg"

I’m using ROS at school for a project. Part of the project is to detect someone’s hand with a camera, so I’m just looking for a patch of “skin colored*” pixels. ROS organizes software as packages, with nodes in them, and messages that the nodes use to communicate with each other.

For my system, I had a package called “hand_detector” with a source file called “hand_detector.py” and a message type called “hand”. ROS generates the messages, which I then import into my python code with the line:

from hand_detector.msg import *

This gets me the error message: “ImportError: No module named msg”

The reason for this is that python searches the same directory as the executing script for imports before it goes looking anywhere else. Since the file hand_detector.py is the executing script, and is naturally in the directory with itself, python finds the script instead of the package, imports it, and then tries to find a module called “msg” within hand_detector.py. There’s no .msg in there, so I get the error.

The moral of the story here is don’t name your package and the script in it the same thing. Once I converted the script to just “detector.py”, the problem went away.
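
For the record, the layout that causes the collision looks roughly like this (directory names approximate):

hand_detector/              <- the ROS package
    msg/hand.msg            <- generates the hand_detector.msg Python module
    nodes/hand_detector.py  <- the node script; its name shadows the package

# Inside the renamed detector.py, the import then resolves to the real package:
from hand_detector.msg import hand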

*I’m somewhat concerned that I wrote a “white people detector”, as it’s really just thresholding the H part of the HSV color space and counting pixels. Other color spaces may be better for this, but this doesn’t have to be perfect. I just don’t want the robot to be a dick to black people.

That's Not Helping, Python.

[ERROR] [WallTime: 1384556730.822164] bad callback: >
Traceback (most recent call last):
  File "/opt/ros/groovy/lib/python2.7/dist-packages/rospy/topics.py", line 681, in _invoke_callback
    cb(msg)
  File "./board_finder.py", line 81, in callback
    self.finder.getBoard(data)
  File "./board_finder.py", line 68, in getBoard
    newImg = cv2.warpPerspective(cvImg, perMat)
TypeError: Required argument 'dsize' (pos 3) not found

[ERROR] [WallTime: 1384556763.964408] bad callback: >
Traceback (most recent call last):
  File "/opt/ros/groovy/lib/python2.7/dist-packages/rospy/topics.py", line 681, in _invoke_callback
    cb(msg)
  File "./board_finder.py", line 81, in callback
    self.finder.getBoard(data)
  File "./board_finder.py", line 68, in getBoard
    newImg = cv2.warpPerspective(cvImg, perMat, newImg.shape)
TypeError: function takes exactly 2 arguments (3 given)

Well which is it? Exactly two arguments, or the third positional argument is required?

The real problem is that the shape of newImg is a 3-tuple, and warpPerspective expects a 2-tuple. This StackExchange post hipped me to the bug.
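
The fix is to pass the size explicitly as a 2-tuple; dsize is (width, height), which is the reverse of numpy’s (rows, cols) ordering, so something like:

h, w = cvImg.shape[:2]
newImg = cv2.warpPerspective(cvImg, perMat, (w, h))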

A Framework for Flow-based Coding

Normally, I think visual presentations of programming languages, such as LabVIEW, are more a problem than a solution, but there is one case where I don’t think that’s true: when the program is operating on a stream of data, and the program should be able to be changed without stopping the stream. The canonical use case for this, in my opinion, is realtime operation on a video stream. In this case, you can watch the video output change in realtime as operations are added, removed, and modified.

What I was hoping to do, at some point, is write a set of operations for video that are wrappers around e.g. GStreamer, and put them in a visual programming framework. That would give me a set of VJing operations that can be played with in realtime to do things like chromakeying a live video stream on human skin colors, or dropping swirling masks on all the faces detected in the video stream.

Pyqtgraph has a lot of promise as a framework for this. My main desire for the framework is that I don’t have to deal with things like handling mouse clicks, and can just get a description of the pipeline and shove data through it. It may be that someone already did this, but firtree.org is dead, so maybe it doesn’t matter if they did.
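
As a sanity check of the “just shove data through it” idea, here’s a minimal sketch based on the Flowchart example that ships with pyqtgraph (GaussianFilter is one of its built-in library nodes; a video pipeline would need custom nodes wrapping GStreamer or OpenCV):

import numpy as np
import pyqtgraph as pg
from pyqtgraph.flowchart import Flowchart

app = pg.mkQApp()

# A chart with one input terminal and one output terminal.
fc = Flowchart(terminals={'dataIn': {'io': 'in'},
                          'dataOut': {'io': 'out'}})

# Drop in a built-in processing node and wire it between the terminals.
node = fc.createNode('GaussianFilter', pos=(0, 0))
fc.connectTerminals(fc['dataIn'], node['In'])
fc.connectTerminals(node['Out'], fc['dataOut'])

# Push data through the pipeline; fc.widget() is the editable node-graph GUI.
fc.setInput(dataIn=np.random.normal(size=100))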

Microcontroller fun

The Arduino has a huge hobbyist-level codebase and lots of libraries for talking to various devices.

The 8051 is a venerable old processor that still gets used in lots of stuff because it’s cheap, and has a low gate count.

It’s probably possible to port a lot of the Arduino stuff (everything that doesn’t use specific on-chip features) to the 8051, thus allowing people to use the software environment they are comfortable with on a new chip. The same is likely true of PICs, and other chips.

The general case, then, is to create a translation system that automates, as much as possible, the process of porting the Arduino libraries and environment from one chip to another. This is, at a high level, possible because anything a computer with a Turing-complete instruction set can do, any other computer with a Turing-complete instruction set can also do. The hang-up would be on the limitations of real hardware (there’s a lot of cool stuff in there, but no infinite data/instruction tape).