Category: Programming

In Which Vapes May Yet Be Sensed

I’m coming to the “being a narc” thing a little late, since everyone is now having a panic about cutting agents in vapes killing people (Body count stands at ~15. The cops killed about 678 people so far this year, so y’know, keep vaping and avoid cops, it’s 45 times safer). At any rate, a school in my area was sold fantastically expensive devices that monitor areas for vaping, and report to the school administration when someone does it. The devices are on a subscription model, so they’re not just expensive once, they stay expensive.

This got me wondering what it actually takes to detect vape… vapor. Vape juice is mostly propylene glycol and glycerine, plus a dab of flavor and possibly nicotine. I have two theories about ways to detect this. One is that there will be some material produced by vaping that is detectable with a gas sensor. The other way is that phat clouds can be picked up by a particulate sensor, and the fact that it picks up smoke too is just fine, since the kids aren’t supposed to be smoking in the bathroom either.

The sensors, and what their datasheets claim they detect:

MQ-2: Combustible gases (LPG, propane, hydrogen, probably also methane)
MQ-8: Just hydrogen
MQ-4: Methane and natural gas
MQ-3: Alcohol
MQ-6: LPG, iso-butane, propane
MQ-5: LPG, natural gas, “town gas”
MQ-135: Carbon monoxide, carbon dioxide, ammonia, nitrogen oxide, alcohols, “aromatic compounds”, “sulfide”, and smoke
MQ-9: Carbon monoxide, LPG
CMJU-1100: VOCs, toluene, formaldehyde, benzene, and so on

There’s a lot of overlap in these sensors, as well as a lot of ambiguity in their datasheets. “Sulfide” isn’t a thing without whatever it’s a sulfide of. Hydrogen sulfide is a toxic and terrible-smelling gas. Cadmium sulfide is a bright yellow solid. “Town gas” is typically a mix of carbon monoxide and hydrogen. LPG is also a mix, including propane, butane, and isobutane, so having it in a list with any of those is kind of redundant.

I haven’t tested yet, but I suspect that the sensors most likely to detect sick clouds are the MQ-3, the MQ-135, and maaaaybe the CMJU-1100. The MQ-3 claims to detect alcohol, as in a breathalyzer, but the family of alcohols is actually pretty large. There’s ethyl (drinkin’ alcohol) and isopropyl (cleanin’ alcohol) and methyl (killin’ alcohol), in addition to some stuff people don’t typically think of as alcohols, like the sugar alcohols, which include glycerine. Since glycerine is in vape juice, perhaps the sensor will detect it.

The actual mechanism of these sensors is interesting. They appear to the circuit as a resistor that changes resistance in the presence of a gas. The resistor is made of tin dioxide, which has a resistance that drops when exposed to the gasses, but it only responds quickly if the device is hot, so there is also a built-in heater for the sensors.
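
Since the circuit just sees a changing resistance, the usual hookup in the datasheets is the sensing element in series with a load resistor, with the ADC measuring the voltage across the load. Backing the sensor resistance out of a raw reading is just the voltage divider worked in reverse. A sketch, assuming a 5 V supply, a 10 kΩ load resistor, and a 10-bit ADC:

def sensor_resistance(adc_value, v_cc=5.0, r_load=10000.0, adc_max=1023):
  #Assumes the usual MQ-series hookup: sensing element in series with a
  #load resistor, with the ADC measuring the voltage across the load
  v_out = v_cc * adc_value / adc_max
  return r_load * (v_cc - v_out) / v_out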

Because the sensors have little heaters in them, A) they smell weird when they power up for the first time and B) they take a while to stabilize. I powered them up and watched them on the Arduino serial plotter until the outputs more or less leveled out. Then I vaped at them.

Pretty much all of the sensors had some sort of response; the main difference was that some responded faster than others. The next step is going to be logging which ones are outputting what, so I can tell which ones respond fastest and which respond most strongly.

The sensors also have a slight response when I just blow at them with normal breath (I have a control! I’m a scientist, doin’ a science!). Interestingly, for some of the sensors, the reaction to my normal breath was a deviation downwards, towards a lower value, while the vape reactions were uniformly a deviation upwards, towards a higher value. This suggests a cross-check: pair a sensor that dips on normal breath with one that rises on it, since only vaping should drive both sensors up, while normal breath pushes one up and the other down (there’s a quick sketch of that check after the logging code below).

int readings[9] = {0,0,0,0,0,0,0,0,0};
int index = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  //Read all the analog inputs from A0-A8 (I think they're consecutive...)
  index = 0;
  for(int ii = A0; ii <= A8; ii++ ){
    readings[index] = analogRead(ii);
    index++;    
  }

  for(int jj = 0; jj < index-1; jj++){
    Serial.print(readings[jj]);
    Serial.print(",");
  }
  Serial.println(readings[index-1]);
}
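
Once the readings are logged, the sign-based cross-check could be as simple as this (a Python sketch over baseline-subtracted values; the threshold is a placeholder until I have real numbers):

def looks_like_vape(delta_a, delta_b, threshold=10):
  #delta_a and delta_b are readings minus each sensor's settled baseline.
  #Vaping pushed both sensors up; normal breath pushed one up and one down,
  #so requiring both deltas to be positive filters out plain exhaling.
  return delta_a > threshold and delta_b > threshold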

In which Ellipses are Dealt with Handily

I’m making a user interface where people can draw a lasso select on an image, and I want to see if certain points in the image are within that lasso select. Now that I think about it, I could probably have done some form of polygon membership test (e.g. raycasting) for each of the points that I wanted to check, with the points of the lasso forming the polygon, but that has a few problems. One of them is that the ends of the lasso can overlap, but that can be handled by e.g. a convex hull. In fact, I may end up doing that, because the current approach has the problem that it can’t handle an arbitrarily-shaped lasso selection.

What I ended up doing as a first cut, however, has some other useful properties. I had the software find a minimum enclosing ellipse of the points for the lasso selection, and then tested the points I wanted checked to see if they were inside the ellipse.

import math

import numpy as np

def min_bounding_oval(points, tolerance=0.01):
  #Create a numpy 2xN matrix from a list of 2D points
  P = np.matrix(points).T

  # Pseudocode from https://stackoverflow.com/questions/1768197/bounding-ellipse
  # Pythonification by me
  # Input: A 2xN matrix P storing N 2D points 
  #        and tolerance = tolerance for error.
  # Output: The equation of the ellipse in the matrix form, 
  #         i.e. a 2x2 matrix A and a 2x1 vector C representing 
  #         the center of the ellipse.

  # Dimension of the points
  d = 2;   
  # Number of points
  N = len(points);  

  # Add a row of 1s to the 2xN matrix P - so Q is 3xN now.
  Q = np.vstack([P,np.ones((1,N))]) 

  # Initialize
  count = 1;
  err = 1;
  #u is an Nx1 vector where each element is 1/N
  u = (1.0/N) * np.ones((N,1))

  # Khachiyan Algorithm
  while err > tolerance:
    # Matrix multiplication: 
    X = Q * np.diagflat(u) * Q.T

    M = np.diagonal(Q.T * X.I * Q)

    # Find the value and location of the maximum element in the vector M
    maximum = M.max()
    j = np.argmax(M);

    # Calculate the step size for the ascent
    step_size = (maximum - d -1)/((d+1)*(maximum-1));

    # Calculate the new_u:
    # Take the vector u, and multiply all the elements in it by (1-step_size)
    new_u = (1 - step_size) * u ;

    # Increment the jth element of new_u by step_size
    new_u[j] = new_u[j] + step_size;

    # Store the error by taking finding the square root of the SSD 
    # between new_u and u
    err = math.sqrt(((new_u - u)**2).sum());
    
    # Increment count and replace u
    count = count + 1;
    u = new_u;

  # Put the elements of the vector u into the diagonal of a matrix
  # U with the rest of the elements as 0
  U = np.diagflat(u);

  # Compute the A-matrix
  A = (1.0/d) * (P * U * P.T - (P * u)*(P*u).T ).I

  # And the center,
  c = P * u

  return [A, c]

That gets me A, a 2×2 matrix representing the ellipse and c, a 2×1 matrix representing the center of the ellipse.

To check if a point is in the ellipse, some fiddling has to be done to A.

#Convert A to a whitening matrix W = A**(1/2) via its eigendecomposition
w, v = np.linalg.eig(A)
D = np.diagflat(w)
W = v * np.sqrt(D) * v.I

#Subtract the center of the ellipse and use the whitening matrix
#tag_x and tag_y are the x and y positions of the point to check
p = np.matrix([[tag_x],[tag_y]])
p_center = p - c
p_white = W * p_center

#Check if the whitened point is in the ellipse
if np.linalg.norm(p_white) <= 1:
     return True #or whatever you want to do with it

Note that I haven’t proven this code correct or anything, just run it and had it work for me.
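
For what it’s worth, here’s how the two pieces fit together (a usage sketch; the lasso points are made up, and it assumes numpy and math are imported as above):

lasso_points = [(0, 0), (4, 1), (5, 3), (2, 6), (-1, 3)]
A, c = min_bounding_oval(lasso_points)

#Same whitening trick as above, wrapped up for reuse
w, v = np.linalg.eig(A)
W = v * np.sqrt(np.diagflat(w)) * v.I

def in_ellipse(x, y):
  p_white = W * (np.matrix([[x], [y]]) - c)
  return np.linalg.norm(p_white) <= 1

print(in_ellipse(2, 3))    #a point in the middle of the lasso, should be True
print(in_ellipse(20, 20))  #a point well outside it, should be False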

TF-IDF in Python

I am trying to process a bunch of text generated by human users to figure out what they are talking about, in the context of an experiment where there were robots a person could be controlling, a few target areas the robots could move to, and a few things they could move, like a crate. One thing that occurred to me was to use TF-IDF, which, given a text and a collection of texts, tells you what words in the text are relatively unusual to that particular text, compared to the rest of the collection.

It turns out that that’s not really what I wanted, because the words that are “unusual” are not really the ones that the particular text is about.

select(1.836) all(3.628) red(2.529) robots(1.517) ((1.444) small(5.014) drag(2.375) near(3.915) lhs(4.321) of(1.276) screen(2.018) )(1.444)

This is a sentence from the collection, and the value after each word is the TF-IDF score for that word. The things I care about are that it’s about robots, and maybe a little that it’s about the left hand side of the screen. “Robots” actually got a pretty low score (barely more than “of”), but “small” got the highest score in the sentence.

At any rate, this is how I did my TF-IDF calculation. Add documents to the object with add_text, get scores with get_tfidf. It uses NLTK for tokenization, but you could also use t.split(" ") to break strings up on spaces.

import math

import nltk

class TFIDF(object):
  def __init__(self):
    #Count of docs containing a word
    self.doc_counts = {}
    self.docs = 0.0

  def add_text(self, t):
    #We're adding a new doc
    self.docs += 1.0
    #Get all the unique words in this text
    uniques = list(set(nltk.word_tokenize(t)))
    for u in uniques:
      if u in self.doc_counts.keys():
        self.doc_counts[u] += 1
      else:
        self.doc_counts[u] = 1

  def get_tfidf(self, t):
    word_counts = {}
    #Count occurrences of each word in this text
    words = nltk.word_tokenize(t)
    for w in words:
      if w in word_counts.keys():
        word_counts[w] += 1
      else:
        word_counts[w] = 1
    #Calculate the TF-IDF for each word
    tfidfs = []
    for w in words:
      #Doc count for this word is either 0 (it’s in no docs) or the stored count
      w_docs = 0
      if w in self.doc_counts.keys():
        w_docs = self.doc_counts[w]

      #the 1 is to avoid div/zero for previously unseen words
      idf = math.log(self.docs/(1+w_docs))
      tf = word_counts[w]
      tfidfs.append((w, tf * idf))
    return tfidfs
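
A quick usage sketch (the texts here are made-up stand-ins for the experiment logs, and it assumes NLTK’s punkt tokenizer data is installed):

tfidf = TFIDF()
texts = [
  "select all red robots and drag them near the lhs of screen",
  "move the crate to the target area",
  "send the small robot to the target near the crate",
]
for t in texts:
  tfidf.add_text(t)

#Score the first text against the whole collection
for word, score in tfidf.get_tfidf(texts[0]):
  print(word, round(score, 3))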


The ancient Roman computer

I was recently assembling a slide deck on assistive technology and the sort of gradient that exists between assistive devices and human enhancement. We already have the technology to build exoskeletons that can give their user increased strength, or allow them to work longer with less fatigue (albeit only in specific types of tasks). These are physical enhancements, the same way that e.g. reading glasses on a person with normal vision act as magnifiers, and give them better close-in vision. The existence of physical enhancements raises the question of whether we can also have cognitive enhancements.

Donald Norman describes the use of Arabic numerals as a “cognitive artifact”, a tool that we use because it makes mathematical operations easier. One alternative in Western civilization is the Roman numeral system, which uses letters to represent values. The Roman numerals are I (1), V (5), X (10), L (50), C (100), D (500), and M (1000). So far so good, but Roman numerals are not a place-based system. Instead, the value of a number is a summation based on rules:

  1. Read from left to right, summing up values.
  2. If the number you are looking at is bigger than the number to its right, add it to the total.
  3. If the number you are looking at is smaller than the number to its right, subtract the number you’re looking at from the one to its right and add that.

So VI is 6 (5 + 1), but IV is 4 (5 -1), and VC is 95. The current year is MMXVII (1000 + 1000 + 10 + 5 + 1 + 1), which is OK to decode, but 1998 was MCMXCVIII or 1000 + (1000 – 100) + (100 – 10) + 5 + 1 + 1 + 1.  Looking at the Arabic expansion, the -100 and +100 cancel out, so it could be reduced to 1000 + 1000 – 10 + 5 + 1 + 1 + 1, so MXMVIII. There’s a trick to make addition of Roman numerals easier, which Norman describes, but doing multiplication is something of a nightmare.
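
Those rules translate pretty directly into code. A sketch of just the Roman-to-Arabic direction, which (as I get into below) is really a presentation layer rather than Roman arithmetic:

values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_arabic(numeral):
  total = 0
  for ii in range(len(numeral)):
    value = values[numeral[ii]]
    #A smaller numeral to the left of a larger one gets subtracted
    if ii + 1 < len(numeral) and value < values[numeral[ii + 1]]:
      total -= value
    else:
      total += value
  return total

print(roman_to_arabic("MCMXCVIII"))  #1998
print(roman_to_arabic("MXMVIII"))    #also 1998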

I recently got to thinking about writing a Roman numeral library for a programming language, so that you could put this sort of thing in a program. This has no doubt been done already, so I’m not going to redo it, but my first thought was that I’d just write a translation from Roman to Arabic numerals (which pretty much all programming languages use already) and back, and then I’d do the math in Arabic numerals. That’s not a Roman numeral library, though. That’s an Arabic library with a Roman presentation layer.

Unfortunately, on most computer architectures, there’s no other way to do it. The binary ALU in processors is a place-value based system. It’s essentially Arabic numerals for people with one finger, where you carry to the next higher place as soon as you have more than one of something. An implementation of Roman numerals on pretty much any CPU will be turned into an Arabic-like representation before it gets operated on.

This got me thinking about what it would be like to attempt to build an ALU, or a simple handheld four-function calculator, that operated in Roman numerals natively. That is to say, the voltages in the various “bits” would represent I, V, X, L, C, D, and M, instead of a place-based binary representation. This immediately runs into a LOT of problems. For starters, the cool thing about binary is that you operate the transistors in switching mode, either on or off, rather than linear mode. Current either flows with minimal resistance, or doesn’t flow. In linear mode, to get multiple voltages, the transistors act as resistors, dissipating some of the power as heat.

After that, you have to deal with context-based values, rather than place-based values. Imagine that we have an adder, with two inputs, A and B, and we feed it A = I, B = V. Clearly, the output has to be, uh… well, it has to be 4, but there isn’t a numeral for that. There’s no single-digit Roman numeral that is 4. But let’s say that we don’t care about not having a representation, and that this is just an analog computer. And so that our heads don’t explode trying to keep things straight, let’s say that the voltage for ‘4’ is a little less than the voltage for V, but 4 times the voltage for I. This is nice because the voltage for IIII is the same as the voltage for IV.

So far so good, but now we give our adder A = V, B = I, and it has to put out ‘6’, which we also don’t have a representation for. We’re clever, it’s analog, so the voltage that represents ‘6’ is the voltage for ‘V’ plus the voltage for ‘I’. So if A > B, the output is A + B; if A < B, the output is B - A; and if A = B, the output is A + B (VV is 10, I suppose, but so is X). This is all doable, I’m sure, probably with op-amps, some of which would be acting as comparators, but it’s complex. More complexity is more transistors and components, and so more waste heat and more chances to get the wiring wrong.

Annoyingly, that’s not the end of our problems. In the last couple of paragraphs, there are some ambiguous representations of numbers. VV is 10, but so is X. In Arabic numerals, 9 is 9, and 9 is the only digit that’s 9. Arabic also has the nice property that you can guess about how big the representation of the result is, based on the size of the operands. There’s no way to add a four-digit number to a four-digit number (assuming they’re both positive) and get a two-digit number, or a 12-digit number. Adding II to MXMVIII gets you MM. Multiply two Arabic numbers, and the result will be about the length of the two numbers stuck end to end, +/-1. Division will be the difference in length of the numbers (I’m handwaving here and ignoring fractions). Multiply two Roman numerals, and your guess is as good as mine. (As Norman points out, the written length of an Arabic numeral is also a rough gauge of its value: 1999 (length 4) is bigger than 322 (length 3), but MXMVIII (length 7) is less than MM (length 2).) There may be some regularities, but I’m not about to sit down and work them out. If anyone does, the bound on the Roman computer’s word size would be whatever numeral is longest while still less than whatever MACHINE_MAX_INT you feel like implementing in silicon, given the space- and power-hungry representation the machine uses. As a result, a natively Roman calculator would potentially have very long “words” or “bytes”, much of which would go unused most of the time.

Roman numerals also didn’t have a standard representation for fractions, although they apparently sometimes used a duodecimal system for them (yes, two different number systems, one for integers and one for fractions). Roman numerals also had no formal representation at all for zero, although the Romans clearly understood the idea of zero (You have ten dinars, and you owe me ten dinars. I collect your debt, how many tunics can you buy now?). They used “nulla” or “nihil” sometimes, and represented it with an N, but they wouldn’t have written 500 as VNN. They also didn’t represent negative numbers at all, although there were probably cases where they understood a subtraction to result in something that was less than zero (You have 10 dinars and owe me 15 dinars. I collect your debt. Clearly you don’t have -5 dinars, but I also clearly don’t consider myself fully repaid).

I’ve heard that the Roman conception of math and numbers was highly geometric, rather than arithmetical, so they did have things like the square root of 2, which is the length of the diagonal of a unit square (you know, a square with sides of length I). Pythagoras figured that one out, at roughly the same time people were using “nulla”. Instead of running on abstractions of the representations of numbers, a properly Native Roman Computer might have been a mechanical device, a calculator that operated by using finely crafted physical representations of circles and triangles to perform its operations. Interestingly, the closest thing we have to a calculator from that time period is exactly that.

PDB for n00bs

PDB is the Python debugger, which is very handy for debugging scripts. I use it in two ways.

If I’m having a problem with the script, I’ll put in the line

import pdb; pdb.set_trace()

just before where the problem occurs. Once the pdb line is hit, I get the interactive debugger and can start stepping through the program, seeing where it blows up and what the variables are getting set to before that happens.
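
As a toy example (the function and variables here are made up), the line goes right before the suspect code:

def total_price(prices, tax_rate):
  subtotal = sum(prices)
  import pdb; pdb.set_trace()  #execution stops here and drops to a (Pdb) prompt
  return subtotal * (1 + tax_rate)

total_price([1.50, 2.25], 0.06)
#At the (Pdb) prompt, 'p subtotal' prints a variable, 'n' steps to the
#next line, and 'c' continues running normally.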

However, I recently found a very handy second way. I was debugging a script with a curses interface, which cleans up when it exits. Unfortunately, that cleanup means that my terminal gets wiped when something crashes, so instead of a stack trace, I just get dumped back to the terminal when something goes wrong, with no information at all left on the screen.

Invoking the script with

python -m pdb ./my_script.py

gets me the postmortem debugger, so when something goes wrong, the program halts and I get the interactive debugger and some amount of stack trace. It’s messy looking because of curses, but I can at least see what is going on.

Lightstone Repairs

I have a Lightstone, and a copy of the game “Journey to Wild Divine”, despite not being a whifty hippie. My friend Ne0nRa1n gave them to me years ago for biofeedback hacking, but pointed out that the finger sensor connections were broken.

The Lightstone is actually a USB interface to GSR and pulse sensors. Internally, the board uses an MSP430F133 chip from TI to call the shots, along with an ST72F623F2M1. Why there are two microcontrollers in there, I’m not sure. It could be that one is handling USB communication and the other is dealing with the analog signals, which is supported by the ST part being connected directly to the USB port (with a pair of test points along the traces) and the TI part having a lot of traces running to the analog section of the board.

Overall, the design of the hardware is solid. The Lightstone is easy to open up, the board is well-assembled and has what I’d consider good looking PCB design. There are lots of test points in the analog and digital sections. Spare GPIOs on the MSP430 are broken out to little pads for possible hackery. There are even populated headers that probably were used for programming the microcontrollers.

However, there’s one point where the hardware falls down. The finger contacts for the GSR and pulse sensors are on thin wires with minimal strain relief. Two nice microcontrollers, sweet board design, and it falls apart because wires break.

The sensors are on a 6-pin DIN connector, with red, black, orange, green, yellow, and white wires. The red and black wires each go to a GSR contact. The GSR contacts are silver or silver alloy buttons, so I want to keep those. The other four wires go to the pulse sensor, which is a three pin device. Looking into the front of the device, the left lead gets the yellow wire, the center lead gets the green and orange wires, and the right lead gets the white wire.

(Photo: 2016-01-01 21.31.12)

To repair the fingertip sensors, I had to pull out the hinge pins that hold the sensor case together. I used very fine-tipped pliers for this, starting from the hinge and then grasping the tip once I had pushed the pin out enough. Then I took the sensors apart, and cut out the bad section of wire. In the image below, the spot to grab the hinge pin is just to the right of the small spring.

(Photo: 2016-01-01 21.23.35)

I stripped the original cable, and wrapped the stripped sections in heat shrink. I drilled out the molded strain reliefs so I could thread the wires back through them more easily, threaded the wires, and used more heat shrink to improve the strain relief. My new wires are not as flexible as the old ones, but should be more durable. Finally, I soldered the sensors to the ends of the new wires, and put the sensor cases back together.

(Photo: 2016-01-01 23.05.01)

If you do this repair, be careful soldering to the silver-alloy sensor buttons for the GSR sensor. The silver part is surprisingly easy to soften and distort with heat from a soldering iron. I slightly damaged one of mine, but managed to do the other one with no problems.

(Photo: 2016-01-01 23.59.10)

I’m using liblightstone to get the information from the device. So far, it seems to be working fine.

Web Development with Flask and Python

I haven’t done anything that could properly be called web development since about 2002, when I took a college course in it. There have been a few developments in the field since then, and I’m a little rusty.

I chose Flask, because I like python and because Django seems like overkill for what I’m doing. There are literally dozens of frameworks out there, and I imagine some people know and care about the differences between them. If I had comments enabled, they’d be yelling at me to switch to Django right now. Hence, no comments.

Installing Flask is easy on Ubuntu:

sudo apt-get install python-flask

I copied the Flask “hello world” from the Flask page, ran it with

python snowflake.py

which got me the expected result, a web server running on port 5000 with a “Hello World” message.
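
For reference, the hello world is essentially this (a minimal sketch from memory; the version on the Flask site may differ in the details):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
  return "Hello World!"

if __name__ == "__main__":
  #Debug mode reloads on file changes and serves the interactive debugger
  app.run(debug=True)  #listens on port 5000 by default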

My plan for this web app is to have users be able to visit some page, and the page will contain an image of a snowflake, generated from the url they used to visit. That’s it, but over on Facebook, my post about generating snowflakes from people’s names made people go just about nuts asking for them. Rather than generate them myself, I figured I’d write a web app.

Flask is a joy to work with. In debug mode, it detects changes to the file that contains the currently running app, and restarts when that file changes. It also presents a traceback and interactive debugger if something goes wrong in your app (it goes without saying that this needs to NEVER reach production, since it’s an interactive python shell on your server).

At this point, I have the core functionality of the app together, and I’m not even done with my beer. I can visit a URL, and a snowflake image gets generated from that URL. Everything else is details, and then deployment.

A couple of downloads later, the image now gets converted to png, and served back to the user as an image in a page. Soon, deployment!
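
The serving end is roughly this shape (a sketch; generate_snowflake_png is a stand-in for my actual generator, which turns a name into PNG bytes):

import io

from flask import Flask, send_file

app = Flask(__name__)

def generate_snowflake_png(name):
  #Stand-in for the real generator: turn a name into PNG image bytes
  raise NotImplementedError

@app.route("/<name>")
def snowflake(name):
  png_bytes = generate_snowflake_png(name)
  return send_file(io.BytesIO(png_bytes), mimetype="image/png")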

Shark Interface Still Not Working

Apparently, having figured out how the Shark joystick sends its information isn’t quite enough to get it working with the motor driver. I wrote software to send the same information that the joystick would usually send, but didn’t get a response. Then I guessed that both data lines going high before the serial signalling starts might be some sort of init signal, so I have an Arduino configured to send the same information, and I still don’t get a response.

It’s entirely possible that I don’t have the bit timing exactly right for the serial link, so I’m now working on bitbanging the serial in a more adaptable way, so I can test different bit lengths.

I’m going to keep plugging away at it for a bit, but I also have a plan B: lobotomize the motor driver. Assuming it uses an ATMega8 like the controller, I can pull the control IC and replace it with one flashed with the Arduino bootloader, and then use rosserial_arduino to control it from ROS. That does mean I’d want to log what the controller does before pulling it, so I have a rough idea what signals go where, but it would vastly simplify controlling the system.

Lessons learned by being The Worst Game Developer

I’m writing a video game. It is called Pebble, and in Pebble, there is a pebble. You contemplate the pebble. I haven’t decided if there is going to be music or not, but there will be a pebble, in a featureless grey expanse, and you can contemplate it.

Just thinking about writing this game has brought me some interesting realizations. I doubt I’m the first one to have them, but it was neat to see how they all fit together.

The first realization is just a recap of things I already knew about developing software: “You’re going to throw the first one away” and “Do the simplest thing that could possibly work”.

When I first came up with the idea for Pebble, it was as a tech demo for Tree, which is similar (There is a tree, you contemplate it), but more complicated, in that a tree is larger. I was going to use level-of-detail (LoD) rendering to support real-time generative zoom from birds-eye to bugs-eye views, store seeds so that the generated versions didn’t change between runs, etc. I read a bunch of papers on the topics, and saw that it was all very complex. I also hadn’t written anything, despite having read a lot of papers and learned a bit.

Eventually, I realized that if I had to load everything I needed to know into my head to write this game, first, I wouldn’t get around to writing it, and second, my head would explode.

Instead of either of those things, I’m writing the simplest bit of code that will draw something on my screen. The first version will draw a polygon, the second version will rotate it, and the third version will texture it. I’m going to have two code streams, one written using openFrameworks and one written using Polycode, so I can decide which of those libraries I’d rather use.

Once both libraries are through three versions, I’ll have the simplest thing that could possibly work, and I’ll throw the other one away.

Another revelation I had is that I don’t really know what pebbles look like. I mean, I have a general idea, but to render a pebble, a general idea doesn’t cut it. It doesn’t capture the variety of surface types that different kinds of weathering can cause, the colors of all the different rocks, and so forth. The reality of pebbles is way more complicated than the idea of pebbles.

My girlfriend and I went out on a beach on Cape Cod, Massachusetts, and looked at pebbles. Cape Cod is a terminal moraine, so the rocks there were pushed by glaciers from everywhere north of Cape Cod, and there are loads of different kinds of pebbles there.

This has two effects on my thinking about the design of Pebble, and of video games in general. The first is that the stone surface generation algorithm should be the simplest thing that could possibly work. The second is that AAA games in their current form are doomed.

AAA games have a huge amount of their budget dedicated to resources, such as the textures and designs of the characters. Because the current marketing push in video games is visual, each game is supposed to have better graphics than those before it, or people will mock it and it will lose sales. However, this is an infinite pit. Any game world is a map, a less-detailed representation that conveys an impression of a more detailed real world. With real maps, the real world is assumed to also exist, but in games it doesn’t. You run around in Liberty City in Grand Theft Auto, but “you” don’t “run” “around”. By pressing buttons, you cause the appearance of motion in a simulated person within a simulated, restricted world. The better the simulation gets, the more resources it requires. In real-world NYC, if you go to Battery Park, you can pick up gravel and throw it in the harbor. In the analogous unplaces in GTA, the ground is a perfect solid, smooth and impenetrable. In order to create a more perfect simulation, there would have to be simulated pebbles, and someone would have to create them.

All of these resources, the pebbles, clothes, guns, car tires, trees, buildings, and so forth in a video game are made by people. These people get paid, and so the more detail you want in a game, the more resources you need, and so the more people you have to pay. Taking longer to make the game doesn’t work, as the technology is constantly shifting, so “more people” is pretty much the only going solution at this point. Even licensing IP from other companies is just an abstraction of getting more people to work on the project.

As a result, the drive is now to make games more and more expensive to make, in order to get finer and finer detail that adds nothing to the narrative, but makes the finished package prettier. However, people are not going to pay hundreds of dollars for a game (except possibly that version of MechWarrior that came with a big robot control console), so either the game market has to grow without bound, or the industry has to start putting an upper bound on how much it can invest in making a game.

In a way, I’m hoping Pebble is a signpost on the path of excessive detail, a huge amount of clever rendering algorithms and generative textures in pursuit of the perfect simulation of the experience of contemplating a small stone. Whether the signpost says “Welcome!” or “Abandon Hope All Ye Who Enter Here” is an exercise for the reader.

USB parallel ports under Python on Ubuntu

I have this PCB designed to control four flame effects. Instead of running it on the Arduino, I’m doing an FFT on a laptop and trying to control the solenoid drivers through a USB parallel port adapter on the laptop.

Ubuntu recognizes the USB parallel port adapter, and gives me a port in /dev/usb/lp0. I don’t have permissions to access it, because its user and group are root and lp, and I’m neither of those. The specific error is:

>>> p = parallel.Parallel('/dev/usb/lp0')
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 187, in __init__
    self._fd = os.open(self.device, os.O_RDWR)
OSError: [Errno 13] Permission denied: '/dev/usb/lp0'

sudo chmod o+rw /dev/usb/lp0 doesn’t get me any closer, because whatever python-parallel does under the hood is not a legit operation on that dev entry.

>>> p = parallel.Parallel("/dev/usb/lp0")
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 189, in __init__
    self.PPEXCL()
  File "/usr/lib/python2.7/dist-packages/parallel/parallelppdev.py", line 241, in PPEXCL
    fcntl.ioctl(self._fd, PPEXCL)
IOError: [Errno 25] Inappropriate ioctl for device

The /dev/usb/lp0 device entry appears to be created by the usblp module. I have a suspicion that what’s going on here is that the device entry created by usblp isn’t claimable the way one created by ppdev would be.

Using rmmod to get rid of usblp doesn’t work; it just gets reloaded when I re-insert the USB connector for the adapter. Blacklisting it in /etc/modprobe.d/blacklist.conf just means that the /dev entry doesn’t get created, not that ppdev takes over.

Most reports online also indicate that USB parallel ports don’t really act like parallel ports, but only work for connecting parallel printers. Since I’ve already wasted enough time on this, it’s time to go with plan B. I’m going to fully populate the board, so that it has an Arduino on it, and then interface to that using serial commands and possibly Processing or OpenFrameworks.