Category: Programming

The ancient Roman computer

I was recently assembling a slide deck on assistive technology and the sort of gradient that exists between assistive devices and human enhancement. We already have the technology to build exoskeletons that can give their user increased strength, or allow them to work longer with less fatigue (albeit only in specific types of tasks). These are physical enhancements, the same way that e.g. reading glasses on a person with normal vision act as magnifiers and give them better close-up vision. The existence of physical enhancements raises the question of whether we can also have cognitive enhancements.

Donald Norman describes the use of Arabic numerals as a “cognitive artifact”, a tool that we use because it makes mathematical operations easier. One alternative in Western civilization is the Roman numeral system, which uses letters to represent values. The Roman numerals are I (1), V (5), X (10), L (50), C (100), D (500), and M (1000). So far so good, but Roman numerals are not a place-based system. Instead, the value of a number is a summation based on rules:

  1. Read from left to right, summing up values.
  2. If the number you are looking at is bigger than (or equal to) the number to its right, or is the last numeral, add it to the total.
  3. If the number you are looking at is smaller than the number to its right, subtract the number you’re looking at from the one to its right and add that.

So VI is 6 (5 + 1), but IV is 4 (5 - 1), and VC is 95. The current year is MMXVII (1000 + 1000 + 10 + 5 + 1 + 1), which is OK to decode, but 1998 was MCMXCVIII, or 1000 + (1000 - 100) + (100 - 10) + 5 + 1 + 1 + 1. Looking at the Arabic expansion, the -100 and +100 cancel out, so it could be reduced to 1000 + 1000 - 10 + 5 + 1 + 1 + 1, so MXMVIII. There’s a trick to make addition of Roman numerals easier, which Norman describes, but doing multiplication is something of a nightmare.
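Those rules translate almost directly into code. Here’s a minimal sketch of a decoder (the function name is just my own, nothing standard):

def roman_to_arabic(numeral):
    values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    total = 0
    for i, ch in enumerate(numeral):
        # A numeral smaller than its right-hand neighbor counts as negative (rule 3),
        # which works out the same as adding the pair's difference. Otherwise, add it (rules 1 and 2).
        if i + 1 < len(numeral) and values[numeral[i + 1]] > values[ch]:
            total -= values[ch]
        else:
            total += values[ch]
    return total

print(roman_to_arabic('MMXVII'))     # 2017
print(roman_to_arabic('MCMXCVIII'))  # 1998
print(roman_to_arabic('MXMVIII'))    # also 1998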

I recently got to thinking about writing a Roman numeral library for a programming language, so that you could put this sort of thing in a program. This has no doubt been done already, so I’m not going to redo it, but my first thought was that I’d just write a translation from Roman to Arabic numerals (which pretty much all programming languages use already) and back, and then I’d do the math in Arabic numerals. That’s not a Roman numeral library, though. That’s an Arabic library with a Roman presentation layer.
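Going the other way, the “presentation layer” is just a greedy walk down a table of values. A minimal sketch (this produces the conventional subtractive forms like MCMXCVIII, not compact variants like MXMVIII):

def arabic_to_roman(n):
    table = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
             (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
             (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    out = []
    for value, numeral in table:
        count, n = divmod(n, value)  # how many of this numeral fit, and what's left over
        out.append(numeral * count)
    return ''.join(out)

print(arabic_to_roman(1998))  # MCMXCVIII
print(arabic_to_roman(2017))  # MMXVII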

Unfortunately, on most computer architectures, there’s no other way to do it. The binary ALU in processors is a place-value based system. It’s essentially Arabic numerals for people with one finger, where you carry to the next higher place as soon as you have more than one of something. An implementation of Roman numerals on pretty much any CPU will be turned into an Arabic-like representation before it gets operated on.

This got me thinking about what it would be like to attempt to build an ALU, or a simple handheld four-function calculator, that operated in Roman numerals natively. That is to say, the voltages in the various “bits” would represent I, V, X, L, C, D, and M, instead of a place-based binary representation. This immediately runs into a LOT of problems. For starters, the cool thing about binary is that you operate the transistors in switching mode, either on or off, rather than linear mode. Current either flows with minimal resistance, or doesn’t flow. In linear mode, to get multiple voltages, the transistors act as resistors, dissipating some of the power as heat.

After that, you have to deal with context-based values, rather than place-based values. Imagine that we have an adder, with two inputs, A and B, and we feed it A = I, B = V. Clearly, the output has to be, uh… well, it has to be 4, but there isn’t a numeral for that. There’s no single-digit Roman numeral that is 4. But let’s say that we don’t care about not having a representation, and that this is just an analog computer. And so that our heads don’t explode trying to keep things straight, let’s say that the voltage for ‘4’ is a little less than the voltage for V, but 4 times the voltage for I. This is nice because the voltage for IIII is the same as the voltage for IV. So far so good, but now we give our adder A = V, B = I, and it has to put out ‘6’, which we also don’t have a representation for. We’re clever, it’s analog, so the voltage that represents ‘6’ is the voltage for ‘V’ plus the voltage for ‘I’. So if A > B, the output is A + B; if A < B, the output is B - A; and if A = B, the output is A + B (VV is 10, I suppose, but so is X). This is all doable, I’m sure, probably with op-amps, some of which would be acting as comparators, but it’s complex. More complexity is more transistors and components, and so more waste heat and more chances to get the wiring wrong.
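Written as code over plain numeral values instead of op-amps and voltages, the whole rule for that two-input adder is just this (a sketch, obviously, not a circuit):

def roman_adder(a, b):
    # a is the left input, b the right, both as plain values (I = 1, V = 5, ...).
    # Left >= right is an additive pair; left < right is a subtractive pair.
    return a + b if a >= b else b - a

print(roman_adder(5, 1))  # VI -> 6
print(roman_adder(1, 5))  # IV -> 4
print(roman_adder(5, 5))  # VV -> 10, same as X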

Annoyingly, that’s not the end of our problems. In the last couple of paragraphs, there are some ambiguous representations of numbers. VV is 10, but so is X. In Arabic numerals, 9 is 9, and 9 is the only digit that’s 9. Arabic also has the nice property that you can guess about how big the representation of the result is, based on the size of the operands. There’s no way to add a four-digit number and a four-digit number (assuming they’re both positive) and get a two-digit number, or a twelve-digit number. Adding II to MXMVIII gets you MM. Multiply two Arabic numbers, and the result will be about the length of the two numbers stuck end to end, +/- 1. Divide them, and the result will be about the difference in length of the two numbers (I’m handwaving here and ignoring fractions). Multiply two Roman numerals, and your guess is as good as mine. (As Norman points out, the written length of an Arabic numeral is also a rough gauge of its value. 1999 (length 4) is bigger than 322 (length 3), but MXMVIII (length 7) is less than MM (length 2).) There may be some regularities, but I’m not about to sit down and work them out. If anyone does, the word size of the Roman computer would be bounded by the longest representation of any value up to whatever MACHINE_MAX_INT you feel like implementing, in the silicon-hungry, power-hungry representation that the machine uses. As a result, a natively Roman calculator would potentially have very long “words” or “bytes”, much of which would go unused most of the time.

Roman numerals also didn’t have a standard representation for fractions, although the Romans apparently sometimes used a duodecimal system for them (yes, two different number systems, one for integers and one for fractions). Roman numerals also had no formal representation at all for zero, although the Romans clearly understood the idea of zero (You have ten denarii, and you owe me ten denarii. I collect your debt; how many tunics can you buy now?). They sometimes used “nulla” or “nihil”, and represented it with an N, but they wouldn’t have written 500 as VNN. They also didn’t represent negative numbers at all, although there were probably cases where they understood a subtraction to come out to less than zero (You have 10 denarii and owe me 15 denarii. I collect your debt. Clearly you don’t have -5 denarii, but I also clearly don’t consider myself fully repaid).

I’ve heard that the Roman conception of math and numbers was highly geometric, rather than arithmetical, so they did have things like the square root of 2, which is the length of the diagonal of a unit square (you know, a square with sides of length I). Pythagoras figured that one out, at roughly the same time people were using “nulla”. Instead of running on abstractions of the representations of numbers, a properly Native Roman Computer might have been a mechanical device, a calculator that operated by using finely crafted physical representations of circles and triangles to perform its operations. Interestingly, the closest thing we have to a calculator from that time period is exactly that.

PDB for n00bs

PDB is the Python debugger, which is very handy for debugging scripts. I use it in two ways.

If I’m having a problem with the script, I’ll put in the line

import pdb; pdb.set_trace()

just before where the problem occurs. Once the pdb line is hit, I get the interactive debugger and can start stepping through the program, seeing where it blows up and what the variables are set to before that happens.
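As a toy example (the function and variable names here are made up), dropping the trace line into a suspect function looks like this, and the handful of commands after it are the ones I actually use:

def average(prices):
    import pdb; pdb.set_trace()  # execution pauses here and drops into the debugger
    return sum(prices) / len(prices)

# At the (Pdb) prompt:
#   n         run the next line
#   s         step into a function call
#   p prices  print a variable
#   l         list the source around the current line
#   c         continue running until the next breakpoint or the end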

However, I recently found a very handy second way. I was debugging a script with a curses interface, which cleans up when it exits. Unfortunately, that cleanup means that my terminal gets wiped when something crashes, so instead of a stack trace, I just get dumped back to the terminal when something goes wrong, with no information at all left on the screen.

Invoking the script with

python -m pdb ./my_script.py

gets me the postmortem debugger, so when something goes wrong, the program halts and I get the interactive debugger and some amount of stack trace. It’s messy looking because of curses, but I can at least see what is going on.
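The session starts paused at the first line of the script; typing c lets it run until it dies, at which point pdb drops into post-mortem mode. Roughly like this (the variable name is made up):

$ python -m pdb ./my_script.py
> ./my_script.py(1)<module>()
(Pdb) c
Traceback (most recent call last):
  ...
SomeError: whatever curses was unhappy about
Uncaught exception. Entering post mortem debugging
(Pdb) where       # stack at the point of the crash
(Pdb) p some_var  # inspect locals in the crashing frame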

Lightstone Repairs

I have a Lightstone, and a copy of the game “Journey to Wild Divine”, despite not being a whifty hippie. My friend Ne0nRa1n gave them to me years ago for biofeedback hacking, but pointed out that the finger sensor connections were broken.

The Lightstone is actually a USB interface to GSR and pulse sensors. Internally, the board uses an MSP430F133 chip from TI to call the shots, along with an ST72F623F2M1. Why there are two microcontrollers in there, I’m not sure. It could be that one is handling USB communication and the other is dealing with the analog signals, which is supported by the ST part being connected directly to the USB port (with a pair of test points along the traces) and the TI part having a lot of traces running to the analog section of the board.

Overall, the design of the hardware is solid. The Lightstone is easy to open up, the board is well-assembled and has what I’d consider good looking PCB design. There are lots of test points in the analog and digital sections. Spare GPIOs on the MSP430 are broken out to little pads for possible hackery. There are even populated headers that probably were used for programming the microcontrollers.

However, there’s one point where the hardware falls down. The finger contacts for the GSR and pulse sensors are on thin wires with minimal strain relief. Two nice microcontrollers, sweet board design, and it falls apart because wires break.

The sensors are on a 6-pin DIN connector, with red, black, orange, green, yellow, and white wires. The red and black wires each go to a GSR contact. The GSR contacts are silver or silver alloy buttons, so I want to keep those. The other four wires go to the pulse sensor, which is a three pin device. Looking into the front of the device, the left lead gets the yellow wire, the center lead gets the green and orange wires, and the right lead gets the white wire.

[Photo: 2016-01-01 21.31.12]

To repair the fingertip sensors, I had to pull out the hinge pins that hold the sensor case together. I used very fine-tipped pliers for this, starting from the hinge and then grasping the tip once I had pushed the pin out enough. Then I took the sensors apart, and cut out the bad section of wire. In the image below, the spot to grab the hinge pin is just to the right of the small spring.

[Photo: 2016-01-01 21.23.35]

I stripped the original cable, and wrapped the stripped sections in heat shrink. I drilled out the molded strain reliefs so I could thread the wires back through them more easily, threaded the wires, and used more heat shrink to improve the strain relief. My new wires are not as flexible as the old ones, but should be more durable. Finally, I soldered the sensors to the ends of the new wires, and put the sensor cases back together.

[Photo: 2016-01-01 23.05.01]

If you do this repair, be careful soldering to the silver-alloy sensor buttons for the GSR sensor. The silver part is surprisingly easy to soften and distort with heat from a soldering iron. I slightly damaged one of mine, but managed to do the other one with no problems.

[Photo: 2016-01-01 23.59.10]

I’m using liblightstone to get the information from the device. So far, it seems to be working fine.

Web Development with Flask and Python

I haven’t done anything that could properly be called web development since about 2002, when I took a college course in it. There have been a few developments in the field since then, and I’m a little rusty.

I chose Flask, because I like python and because Django seems like overkill for what I’m doing. There are literally dozens of frameworks out there, and I imagine some people know and care about the differences between them. If I had comments enabled, they’d be yelling at me to switch to Django right now. Hence, no comments.

Installing Flask is easy on Ubuntu:

sudo apt-get install python-flask

I copied the Flask “hello world” from the Flask page, ran it with

python snowflake.py

which got me the expected result, a web server running on port 5000 with a “Hello World” message.
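For reference, that hello world is only a handful of lines; mine looked more or less like this:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()  # serves on port 5000 by default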

My plan for this web app is to have users be able to visit some page, and the page will contain an image of a snowflake, generated from the url they used to visit. That’s it, but over on Facebook, my post about generating snowflakes from people’s names made people go just about nuts asking for them. Rather than generate them myself, I figured I’d write a web app.

Flask is a joy to work with. In debug mode, it detects changes to the file that contains the currently running app, and restarts when that file changes. It also presents a traceback and interactive debugger if something goes wrong in your app (it goes without saying that this needs to NEVER reach production, since it’s an interactive python shell on your server).

At this point, I have the core functionality of the app together, and I’m not even done with my beer. I can visit a URL, and a snowflake image gets generated from that URL. Everything else is details, and then deployment.

A couple of downloads later, the image now gets converted to png, and served back to the user as an image in a page. Soon, deployment!
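Put together, the whole thing is roughly the sketch below. The generate_snowflake import is a stand-in for my actual generator (assume it returns a PIL image seeded by the name), and for brevity this serves the PNG directly rather than wrapping it in a page; everything else is stock Flask.

from io import BytesIO

from flask import Flask, send_file
from snowflake_maker import generate_snowflake  # hypothetical module wrapping my generator

app = Flask(__name__)

@app.route('/<name>')
def snowflake(name):
    img = generate_snowflake(name)  # assumed to return a PIL Image built from the name
    buf = BytesIO()
    img.save(buf, 'PNG')            # convert to PNG in memory
    buf.seek(0)
    return send_file(buf, mimetype='image/png')

if __name__ == '__main__':
    app.run(debug=True)  # debug mode is for development only; never expose it in production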

Shark Interface Still Not Working

Apparently, having figured out how the Shark joystick sends its information isn’t quite enough to get it working with the motor driver. I wrote software to send the same information that the joystick would usually send, but didn’t get a response. Then I guessed that the way both data lines go high before serial signalling commences might be some sort of init signal, so I configured an Arduino to send the same information, and I still don’t get a response.

It’s entirely possible that I don’t have the bit timing exactly right for the serial link, so I’m now working on bitbanging the serial in a more adaptable way, so I can test different bit lengths.

I’m going to keep plugging away at it for a bit, but I also have a plan B: lobotomize the motor driver. Assuming it uses an ATMega8 like the controller, I can pull the control IC and replace it with one flashed with the Arduino bootloader, and then use rosserial_arduino to control it from ROS. That does mean I’d want to log what the controller does before pulling it, so I have a rough idea what signals go where, but it would vastly simplify controlling the system.

Lessons learned by being The Worst Game Developer

I’m writing a video game. It is called Pebble, and in Pebble, there is a pebble. You contemplate the pebble. I haven’t decided if there is going to be music or not, but there will be a pebble, in a featureless grey expanse, and you can contemplate it.

Just thinking about writing this game has brought me some interesting realizations. I doubt I’m the first one to have them, but it was neat to see how they all fit together.

The first realization is just a recap of things I already knew about developing software: “You’re going to throw the first one away” and “Do the simplest thing that could possibly work”.

When I first came up with the idea for Pebble, it was as a tech demo for Tree, which is similar (There is a tree, you contemplate it), but more complicated, in that a tree is larger. I was going to use level-of-detail (LoD) rendering to support real-time generative zoom from bird’s-eye to bug’s-eye views, store seeds so that the generated versions didn’t change between runs, etc. I read a bunch of papers on the topics, and saw that it was all very complex. I also hadn’t written anything, despite having read a lot of papers and learned a bit.

Eventually, I realized that if I had to load everything I needed to know into my head to write this game, first, I wouldn’t get around to writing it, and second, my head would explode.

Instead of either of those things, I’m writing the simplest bit of code that will draw something on my screen. The first version will draw a polygon, the second version will rotate it, and the third version will texture it. I’m going to have two code streams, one written using openFrameworks and one written using Polycode, so I can decide which of those libraries I’d rather use.

Once both libraries are through three versions, I’ll have the simplest thing that could possibly work, and I’ll throw the other one away.

Another revelation I had is that I don’t really know what pebbles look like. I mean, I have a general idea, but to render a pebble, a general idea doesn’t cut it. It doesn’t capture the variety of surface types that different kinds of weathering can cause, the colors of all the different rocks, and so forth. The reality of pebbles is way more complicated than the idea of pebbles.

My girlfriend and I went out on a beach on Cape Cod, Massachusetts, and looked at pebbles. Cape Cod is a terminal moraine, so the rocks there were pushed by glaciers from everywhere north of Cape Cod, and there are loads of different kinds of pebbles there.

This has two effects on my thinking about the design of Pebble, and of video games in general. The first is that the stone surface generation algorithm should be the simplest thing that could possibly work. The second is that AAA games in their current form are doomed.

AAA games have a huge amount of their budget dedicated to resources, such as the textures and designs of the characters. Because the current marketing push in video games is visual, each game is supposed to have better graphics than those before it, or people will mock it and it will lose sales. However, this is an infinite pit. Any game world is a map, a less-detailed representation that conveys an impression of a more detailed real world. With real maps, the real world is assumed to also exist, but in games it doesn’t. You run around in Liberty City in Grand Theft Auto, but “you” don’t “run” “around”. By pressing buttons, you cause the appearance of motion in a simulated person within a simulated, restricted world. The better the simulation gets, the more resources it requires. In real-world NYC, if you go to Battery Park, you can pick up gravel and throw it in the harbor. In the analogous unplaces in GTA, the ground is a perfect solid, smooth and impenetrable. In order to create a more perfect simulation, there would have to be simulated pebbles, and someone would have to create them.

All of these resources, the pebbles, clothes, guns, car tires, trees, buildings, and so forth in a video game are made by people. These people get paid, and so the more detail you want in a game, the more resources you need, and so the more people you have to pay. Taking longer to make the game doesn’t work, as the technology is constantly shifting, so “more people” is pretty much the only going solution at this point. Even licensing IP from other companies is just an abstraction of getting more people to work on the project.

As a result, the drive is now to make games more and more expensive to make, in order to get finer and finer detail that adds nothing to the narrative, but makes the finished package prettier. However, people are not going to pay hundreds of dollars for a game (except possibly that version of MechWarrior that came with a big robot control console), so either the game market has to grow without bound, or the industry has to start putting an upper bound on how much it can invest in making a game.

In a way, I’m hoping Pebble is a signpost on the path of excessive detail, a huge amount of clever rendering algorithms and generative textures in pursuit of the perfect simulation of the experience of contemplating a small stone. Whether the signpost says “Welcome!” or “Abandon Hope All Ye Who Enter Here” is an exercise for the reader.

USB parallel ports under Python on Ubuntu

I have this PCB designed to control four flame effects. Instead of running it on the Arduino, I’m doing an FFT on a laptop and trying to control the solenoid drivers through a USB parallel port adapter on the laptop.

Ubuntu recognizes the USB parallel port adapter, and gives me a port in /dev/usb/lp0. I don’t have permissions to access it, because its user and group are root and lp, and I’m neither of those. The specific error is:

>>> p = parallel.Parallel('/dev/usb/lp0')
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 187, in __init__
    self._fd = os.open(self.device, os.O_RDWR)
OSError: [Errno 13] Permission denied: '/dev/usb/lp0'

sudo chmod o+rw /dev/usb/lp0 doesn’t get me any closer, because whatever python-parallel does under the hood is not a legit operation on that dev entry.

>>> p = parallel.Parallel("/dev/usb/lp0")
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 189, in __init__
    self.PPEXCL()
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 241, in PPEXCL
    fcntl.ioctl(self._fd, PPEXCL)
IOError: [Errno 25] Inappropriate ioctl for device

The /dev/usb/lp0 device entry appears to be created by the usblp module. I have a suspicion that what’s going on here is that the device entry created by usblp isn’t claimable the way one created by ppdev would be.

Using rmmod to get rid of usblp doesn’t work; the module just gets reloaded when I re-insert the USB connector for the adapter. Blacklisting it in /etc/modprobe.d/blacklist.conf just means that the /dev entry doesn’t get created, not that ppdev takes over.

Most reports online also indicate that USB parallel ports don’t really act like parallel ports, but only work for connecting parallel printers. Since I’ve already wasted enough time on this, it’s time to go with plan B. I’m going to fully populate the board, so that it has an Arduino on it, and then interface to that using serial commands and possibly Processing or OpenFrameworks.

Naming things and "ImportError: No module named msg"

I’m using ROS at school for a project. Part of the project is to detect someone’s hand with a camera, so I’m just looking for a patch of “skin colored*” pixels. ROS organizes software as packages, with nodes in them, and messages that the nodes use to communicate with each other.

For my system, I had a package called “hand_detector” with a source file called “hand_detector.py” and a message type called “hand”. ROS generates the messages, which I then import into my python code with the line:

from hand_detector.msg import *

This gets me the error message: “ImportError: No module named msg”

The reason for this is that Python searches the same directory as the executing script for imports before it goes looking anywhere else. Since hand_detector.py is the executing script, and is naturally in the directory with itself, Python finds it there, imports the script as the hand_detector module, and then tries to find a module called “msg” inside it. There’s no msg in there, so I get the error.

The moral of the story here is don’t name your package and the script in it the same thing. Once I converted the script to just “detector.py”, the problem went away.
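After the rename, the relevant bits of the package look something like this (the exact directory layout is from memory, so treat it as approximate):

hand_detector/
    msg/
        hand.msg
    nodes/
        detector.py   # was hand_detector.py; renamed so it no longer shadows the package

# and inside detector.py, the original import now finds the generated messages:
from hand_detector.msg import *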

*I’m somewhat concerned that I wrote a “white people detector”, as it’s really just thresholding the H part of the HSV color space and counting pixels. Other color spaces may be better for this, but this doesn’t have to be perfect. I just don’t want the robot to be a dick to black people.

That's Not Helping, Python.

[ERROR] [WallTime: 1384556730.822164] bad callback: >
Traceback (most recent call last):
  File "/opt/ros/groovy/lib/python2.7/dist-packages/rospy/topics.py", line 681, in _invoke_callback
    cb(msg)
  File "./board_finder.py", line 81, in callback
    self.finder.getBoard(data)
  File "./board_finder.py", line 68, in getBoard
    newImg = cv2.warpPerspective(cvImg, perMat)
TypeError: Required argument 'dsize' (pos 3) not found

[ERROR] [WallTime: 1384556763.964408] bad callback: >
Traceback (most recent call last):
  File "/opt/ros/groovy/lib/python2.7/dist-packages/rospy/topics.py", line 681, in _invoke_callback
    cb(msg)
  File "./board_finder.py", line 81, in callback
    self.finder.getBoard(data)
  File "./board_finder.py", line 68, in getBoard
    newImg = cv2.warpPerspective(cvImg, perMat, newImg.shape)
TypeError: function takes exactly 2 arguments (3 given)

Well which is it? Exactly two arguments, or the third positional argument is required?

The real problem is that the shape of the image array is a 3-tuple of (rows, columns, channels), while warpPerspective’s dsize argument wants a 2-tuple of (width, height). This StackExchange post hipped me to the bug.
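So something along these lines keeps warpPerspective happy, assuming the output should be the same size as the input:

newImg = cv2.warpPerspective(cvImg, perMat, (cvImg.shape[1], cvImg.shape[0]))  # dsize is (width, height); ndarray.shape is (rows, cols, channels)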

A Framework for Flow-based Coding

Normally, I think visual presentations of programming languages, such as LabVIEW, are more a problem than a solution, but there is one case where that isn’t true: when the program is operating on a stream of data, and the program should be able to be changed without stopping the stream. The canonical use case for this, in my opinion, is realtime operation on a video stream. In that case, you can watch the video output change in realtime as operations are added, removed, and modified.

What I’m hoping to do, at some point, is write a set of operations for video that are wrappers for e.g. GStreamer, and put them in a visual programming framework. That would give me a set of VJing operations that can be played with in realtime, to do things like chromakeying a live video stream on human skin colors or dropping swirling masks on all the faces detected in the video stream.

Pyqtgraph has a lot of promise as a framework for this. My main desire for the framework is that I don’t have to deal with things like handling mouse clicks, and can just get a description of the pipeline and shove data through it. It may be that someone has already done this, but firtree.org is dead, so maybe it doesn’t matter if they did.
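As a sanity check on what I actually want from such a framework, here’s a minimal sketch of the pipeline idea in plain Python, with no pyqtgraph and no GUI; the operations are stand-ins:

class Pipeline:
    """A list of frame -> frame operations that can be edited while frames keep flowing."""

    def __init__(self):
        self.ops = []

    def add(self, op):
        self.ops.append(op)

    def remove(self, op):
        self.ops.remove(op)

    def process(self, frame):
        for op in list(self.ops):  # iterate over a copy, so ops can be swapped mid-stream
            frame = op(frame)
        return frame

pipe = Pipeline()
pipe.add(lambda frame: frame[::-1])  # stand-in for a real operation like chromakeying

# The video loop just calls pipe.process(frame) on every incoming frame;
# meanwhile, some UI (visual or otherwise) calls pipe.add() and pipe.remove()
# without ever stopping the stream.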