Category: Making

On Looking into Mouse Sensors

That is to say, the sensors from optical mice, rather than a sensor intended to detect the small rodent.

I have 10 boards from optical mice and three desoldered sensors. Among that bunch, two IC packages are common: a 16-pin staggered DIP (4 units) and an 8-pin staggered DIP (6 units). There is also one 20-pin staggered DIP and two 12-pin DIPs.

Most of the chips were made by Agilent or by Avago, an Agilent spin-off that eventually bought Broadcom and took its name. A couple are from At Lab, or as they style themselves, “@lab”.

The chip interfaces are very heterogeneous. Some output PS/2 clock and data directly, making them essentially a complete single-chip mouse. Others output quadrature signals for X and Y motion.

I had high hopes of using these mouse sensors in a couple of hacks. They are essentially optical flow processors, so if you know how far away the (stationary) scene is, you can recover your own velocity from the observed motion, and so get odometry for a moving robot by watching the ground roll by. The inverse also works: if you know your own speed, you can recover how far away a stationary object is (height-over-ground detection for a drone, for example).
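The geometry behind both hacks is the same small-angle relation: ground speed equals angular flow rate times height, and its inverse. A minimal sketch of both directions, assuming the sensor’s counts have already been converted into an angular rate (that conversion depends on the optics, which is exactly the part these bare ICs leave unsolved):

```cpp
#include <cstdio>

// Ground speed from angular flow rate and known height (robot odometry).
double ground_speed(double flow_rad_per_s, double height_m) {
    return flow_rad_per_s * height_m;
}

// Height from angular flow rate and known speed (drone height-over-ground).
double height_over_ground(double flow_rad_per_s, double speed_m_per_s) {
    return speed_m_per_s / flow_rad_per_s;
}

int main() {
    // Sensor 5 cm over the floor, seeing the ground sweep by at 2 rad/s:
    std::printf("speed: %.2f m/s\n", ground_speed(2.0, 0.05));
    // Drone moving at 1 m/s, seeing the ground sweep by at 0.5 rad/s:
    std::printf("height: %.2f m\n", height_over_ground(0.5, 1.0));
    return 0;
}
```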

Ultimately, though, I don’t think this stash of ICs is going to do the job I want. What I want is something I can drop into projects I’m working on; reverse engineering each of these, hunting down datasheets for ICs old enough to speak the PS/2 protocol, and so forth would be its own hassle of a project. USB optical mice are $7 or so, so I can’t really justify the effort to get these working, sort out optics for them, etc.

On top of that, drone optical flow sensors with the optics already sorted are like $10-20, so for that use case, I can just buy the part. For robot odometry, I can use the same part, or put optics on a USB mouse that can actually plug into a recent computer, instead of decoding quadrature or PS2.

It feels kind of weird to pick up one of my old projects that I had been kind of looking forward to, and realize that it’s simply not useful or interesting, but I guess that’s just how it goes. At least I can free up that parts drawer now!

The Year of Linux on the Desktop is “Ha, Get Fucked”

People talk about usability, and in a lot of ways, Ubuntu is pretty close. It certainly beats Windows 10, at least for me, since you’re allowed to know what has gone wrong and fix it, instead of just rebooting whenever your computer gets infested with ghosts. I do complain about Ubuntu, but it’s usually because most things are decent, so the things that are bad are particularly egregious. And, after all, I did get what I paid for.

This time, though, I’m trying to work with a Raspberry Pi Zero W, and Raspbian. It has a little setup walkthrough that clearly took a serious blow to the head at some point: it asks you to connect to WiFi, and then, if you didn’t connect to a network, still asks if you want to download software. From what, I might ask, since there’s no network connection?

Of course, the reason I didn’t connect to the network is that it’s hidden. Not a problem, I have the SSID on paper here… and no way to tell it to the Pi. There’s a network configurator, but it only lets you pick from visible networks; there’s no way to enter your own SSID. Maybe this was done because “SSID” is one of those worrying “techie” terms, and this is supposed to be for somewhat less technical users? Protip: it’s not. It ships as a bare board in an antistatic bag, for fuck’s sake. Or maybe it was left out because it’s somehow hard to do?

At any rate, the solution appears to be: manually edit wpa_supplicant.conf. Manually edit a text file that only root can edit, in this, the Year of Our Lord 2019. I don’t have a problem with this. I have a PhD in making tiny robots go and have been using Linux for 15 years, because everything else is worse for my use cases. Normal humans, who are perhaps entering college and just want to check out Linux and maybe try writing a little Scratch or Python, are going to have a problem with this.
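For anyone who lands here with the same problem, a stanza roughly like the one below, added to /etc/wpa_supplicant/wpa_supplicant.conf, is what does it. The scan_ssid=1 line is what makes wpa_supplicant probe for networks that don’t broadcast; the SSID and passphrase are placeholders.

```
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourHiddenNetwork"
    # scan_ssid=1 is what makes it find hidden networks
    scan_ssid=1
    psk="YourPassphrase"
}
```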

Also, the same startup script asked me if my screen had black bars around the edges. It did, so I said yes (more fool me!). When I rebooted, the edges of the desktop (you know, where the UI goes) were mostly off the edge of the screen. Setting the hostname with the network config tool on the toolbar caused hostname resolution errors every time I used sudo, apparently because sudo wanted “ouija1” but “ouija_1” got written into /etc/hosts. That’s not actually a valid hostname (my error), but the tool wrote it into /etc/hosts anyway (an error by whoever wrote the alleged network config tool). Again, I know what /etc/hosts is; editing it isn’t an issue. I’m weird. For most people, this is an issue.
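(For reference: hostnames may only contain letters, digits, and hyphens, which is why sudo balked at the underscore. The repair is to put a valid name in /etc/hostname and in the 127.0.1.1 line of /etc/hosts, the line the config tool mangled. Sketch, with my machine’s name:)

```
# /etc/hostname
ouija1

# /etc/hosts
127.0.0.1    localhost
127.0.1.1    ouija1
```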

So in general, my experience with the Raspberry Pi and Raspbian is that it’s not ready for end users who want to use it to do things. If you want it in order to do things to it, rather than with it, and are already an experienced electronics enthusiast and Linux user, you’ll be fine.

Serious MOSFETs


I’m designing a simple H-bridge for simple but large projects. These are 300 A, 40 V MOSFETs. The board also has a gate driver for the MOSFETs. I hope to find a driver that takes I2C or some other interface, rather than PWM.
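In the meantime, the control interface is the usual PWM-plus-direction pair into the gate driver. A sketch of what that looks like from the microcontroller side, Arduino-style; the pin numbers and the driver’s input convention are assumptions, not taken from this board:

```cpp
const int PIN_PWM = 9;   // PWM output into the driver's speed input
const int PIN_DIR = 8;   // selects which diagonal of the bridge conducts

void setup() {
    pinMode(PIN_PWM, OUTPUT);
    pinMode(PIN_DIR, OUTPUT);
}

// speed runs -255..255; sign picks the direction, magnitude the duty cycle
void setMotor(int speed) {
    digitalWrite(PIN_DIR, speed >= 0 ? HIGH : LOW);
    analogWrite(PIN_PWM, abs(speed));
}

void loop() {
    setMotor(128);    // half speed, one direction
    delay(1000);
    setMotor(-128);   // half speed, the other direction
    delay(1000);
}
```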


The board overall is pretty small, but I haven’t figured out a good way to heat sink it. The unpopulated round footprints are for capacitors, and when the caps are installed, they block any easy installation of a heat sink over the MOSFETs. I may design the second iteration of the board around thermal management, and have holes for mounting a commodity CPU heat sink over the FETs.

The current design of the board is available here.

I’ve tested a prototype of the current design, and it does work, but I didn’t stress it very hard.


Direct-to-PCB laser printing

“Also, contrary to popular belief, there’s no issue with laser printing on a conductive surface.” – from here.

I’ve seen this in operation at Worcester Polytechnic Institute. There, the flexible/soft robotics lab makes PCBs from copper tape on a plastic substrate, sticks them to a sheet of paper, and runs them through a totally normal, unhacked laser printer. Copper tape is probably thinner than PCB copper, so it doesn’t sink too much heat away from the fuser. The resulting PCBs can be etched straight out of the printer, with no toner-transfer ironing step in the middle.

I have to hit a hardware store at some point this week, perhaps I’ll be able to get copper tape there.

Accidental Aesthetics

Works from a “school” of art share some common elements. Looking at paintings by Dalí, Magritte, and Ernst, one can say that they share something that is not shared with a Monet. People not trained in the academic study of art might have a hard time naming or articulating that quality, but it is definitely present.

The artists named above are all painters. If one wants to get truly pedantic, it’s possible to claim that their works all share the common quality “flat surface covered by pigments mixed with a binder”. The actual common quality is more a matter of their treatment of form, especially in relation to the expected juxtaposition of forms in the real world; their engagement with representing the unconscious world, that is to say, the realms of dream, delusion, and insanity; and their direct handling of the duality of representation and reality.

From the fact that this common quality does not directly depend on the material used, we can infer that there can exist works that do not use the same material and yet have the same quality. This inference is supported by the existence of surrealist sculpture.

However, some materials and creative processes force a certain common developmental aesthetic. Three cases of a unified aesthetic that is incidental to the product, but nonetheless shared, are: the textures used in 3D modeling, the debug output of computer vision systems, and the appearance of DIY/prototyped electromechanical devices from the current generation of hacker spaces.

These aesthetics are unified within themselves, but they are not of a piece with each other. Textures adopt the form that they do because the technology demands it: the technology is defined, and the aesthetic is fully constrained by it. Computer vision systems develop their aesthetic because they must map the world, through the system’s understanding, into a form understood by the human user. The technology is not fully defined, but the system is confined on three fronts: the input of the real world, the representation available in the system, and what users can “read” in real time. Prototyped devices have the fewest constraints. The technology is incompletely defined, and its form is also undefined, so it is shaped by expedience and available tools. It is the most accidental aesthetic, because it is the one that forms when no other aesthetic is selected.


This is an example of a texture for a human head from here. The distortion would be corrected by remapping onto a model of a human head.

Textures are the most rigidly constrained accidental aesthetic. This description comes from a common modeling file format, but the technology is similar across many modeling processes. The model consists of three files. The first file, the model file, describes the 3D points that make up the surfaces of the model. It also includes a reference to the second file, which is a material file. The material file describes a set of materials that the object is made of, and how light interacts with them. Each material may refer to a third file, which is the texture. A texture is a flat image file. Regions of the flat image file are mapped onto surfaces of the model by a one-to-one (usually) mapping from vertices on the model to vertices on the texture. The vertices on the texture define a shape which is then “cut out” and “applied” to the corresponding shape on the model. Because of the way this works, and the tools used to create this mapping, the texture is frequently a flat representation of the 3D object, in much the way a map of the earth is a flat representation of the 3D world.
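As a concrete example, here is roughly what that three-file arrangement looks like in Wavefront OBJ, one common format of this kind (the filenames and material name are made up). The f lines pair each model vertex with a texture vertex, defining the “cut out and apply” mapping:

```
# model.obj -- geometry, plus a pointer to the material file
mtllib model.mtl
v 0.0 0.0 0.0        # 3D vertices on the model surface
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0           # 2D vertices on the flat texture image
vt 1.0 0.0
vt 0.0 1.0
usemtl skin
f 1/1 2/2 3/3        # face: model vertex / texture vertex pairs

# model.mtl -- materials, plus a pointer to the texture image
newmtl skin
Kd 1.0 1.0 1.0       # diffuse color, modulated by the texture
map_Kd head.png      # the flat texture image itself
```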

Altering the texture would result in changes to its display on the model, so the texture is completely constrained by the model. Because it is a flat image file, the texture is also constrained in the ways that it can be displayed to the user. Because of this complete constraint, the textures display a very strong unity of aesthetic.

Robot readable world from Timo on Vimeo.

Robot Readable World is a compilation of the debugging output of computer vision algorithms. The computer system operates on the video stream to produce data streams that are not visible to humans. These video outputs are intended to let human debuggers determine what the system “sees”, that is, to map the data structures into human-readable form and present them mixed with the incoming images, so that the person can relate real objects to the system’s “perception”. Because these are merely explanations of the state of the system, rather than a key part of its functioning, they can be altered and rearranged to provide the most useful representation for human readers. The data underneath may not change, but the presentation can be altered.
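A hedged sketch of how one such overlay gets made, using OpenCV as a stand-in for whatever each system in the video actually runs: compute the (invisible) optical flow field between frames, then draw it back onto the frame the human is watching.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);            // any camera or video file
    cv::Mat frame, gray, prev, flow;
    cap >> frame;
    cv::cvtColor(frame, prev, cv::COLOR_BGR2GRAY);
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // The data structure humans can't see: dense optical flow.
        cv::calcOpticalFlowFarneback(prev, gray, flow,
                                     0.5, 3, 15, 3, 5, 1.2, 0);
        // The debug view humans can see: one green vector per 16-px cell.
        for (int y = 0; y < frame.rows; y += 16)
            for (int x = 0; x < frame.cols; x += 16) {
                cv::Point2f f = flow.at<cv::Point2f>(y, x);
                cv::line(frame, cv::Point(x, y),
                         cv::Point(cvRound(x + f.x), cvRound(y + f.y)),
                         cv::Scalar(0, 255, 0));
            }
        cv::imshow("what the robot sees", frame);
        if (cv::waitKey(30) >= 0) break;
        prev = gray.clone();
    }
    return 0;
}
```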

As a result, these systems are unconstrained at at least one end: the presentation to the user. However, they are constrained at the other end to operate on images, and the images are, in turn, constrained by the positions and relations of objects in the real world. A computer vision system that operates in a made-up or simulated environment would have no practical use to humans unless they also inhabited that environment. That is not to say it is never done (vision techniques could be used inside video games, for instance), but it is less common.


This dog treat dispenser is an example of the third accidental aesthetic: the design of DIY electronics. Some hallmarks of this aesthetic are the exposed circuit boards, the surface texturing of 3D printed or laser cut (in this case, 3D printed) parts, visible and accessible wiring, and the use of visible, commercially available screws and other connectors.


This project, a controller for a coffee roaster, has the same aesthetic, despite being constructed by a different person, unknown to the maker of the dog treat dispenser.

This is the least constrained of the three accidental aesthetics. The maker can choose the parts used to create the device, and the form of the finished device. However, the tools available to the maker will drive certain decisions in its eventual form. A 3D printer provides a way to quickly create certain forms, but has a distinct material, texture, and color for those forms. Laser cutting allows a form to be built from layers of flat materials, but again, some building techniques work better than others. Off-the-shelf commercial components have to be connected together, which leads to visible wires. All of these decisions (to print or not print, to laser or not, to wire point-to-point or make PCBs) have a bias in them that each artist/creator navigates, and the sequence of those decisions leads to a particular aesthetic for the piece.

Toy Helicopter Hacking

This Christmas, my parents gave me a palm-sized toy helicopter (Avatar Z008), and my girlfriend’s parents gave me a slightly bigger toy helicopter with a video camera (Egofly Spyhawk). I also have one that I bought myself (Syma S107). All of them are gyro-stabilized, coaxial-rotor helicopters, which basically just means that they automatically don’t roll, and are easy to fly.

I had hoped to convert one of them into a tiny drone. I opened up the S107 this morning to take a look at the internal PCB. The IR signal from the remote goes to an unmarked 14-pin IC. The gyro (which I assume to be the little metal can mounted on a daughter board off the main PCB) is marked with “C 146” and “Y2373”. One pin of the gyro is grounded, one, marked “TLY”, is connected to the unmarked IC, and one goes to Vcc. That is pretty clearly power, ground, and a signal pin.

This means any control that the system is doing based on the gyro is done by that unmarked IC. Chances are that re-implementing the gyro control would be amusing, but much harder than simply adding whatever drone control I decided to add “on top of” the existing hardware.

An easier approach would be to take advantage of work that other people have done on reverse-engineering the IR protocol, and add my own control circuit that sends IR control signals to the existing board. That way, the existing board would take care of driving the motors and keeping the helicopter balanced, while my board would add autonomy.
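The shape of that plan, as an Arduino-style sketch. Every constant below is a placeholder: the real carrier frequency, pulse widths, and packet layout have to come from the published reverse-engineering work, and the channel values are invented for illustration.

```cpp
const int PIN_IR_LED = 3;

// Emit a roughly 38 kHz carrier for the given number of microseconds.
void carrier(unsigned long us) {
    unsigned long end = micros() + us;
    while (micros() < end) {
        digitalWrite(PIN_IR_LED, HIGH);
        delayMicroseconds(13);
        digitalWrite(PIN_IR_LED, LOW);
        delayMicroseconds(13);
    }
}

void sendBit(bool one) {
    carrier(300);                        // mark (placeholder width)
    delayMicroseconds(one ? 700 : 300);  // space width encodes the bit (placeholder)
}

void sendPacket(uint8_t yaw, uint8_t pitch, uint8_t throttle) {
    carrier(2000);                       // header burst (placeholder)
    uint8_t bytes[3] = {yaw, pitch, throttle};
    for (uint8_t b = 0; b < 3; b++)
        for (int8_t i = 7; i >= 0; i--)
            sendBit((bytes[b] >> i) & 1);
}

void setup() { pinMode(PIN_IR_LED, OUTPUT); }

void loop() {
    sendPacket(63, 63, 80);  // hover-ish channel values, made up
    delay(120);              // remotes repeat packets at a fixed cadence
}
```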

Downward- and front-facing versions of SpeckleSense could be used to give the helicopter a sense of its movement in the world, which might be good enough for dead-reckoning navigation over small distances.

Mindflex EEG Hacking

I got a Mindflex Duel for Christmas. The Mindflex Duel is a toy that uses a pair of EEG headsets to read signals from the users, and then send those signals to a base unit that contains a blower and a little sliding cart to move the blower. The users try to concentrate to control the cart, moving a little ball suspended in the air jet from the blower into a goal.

Needless to say, I gutted it.

The base unit has a little PCB with a 2.4 GHz radio on it, and a little hardware to control the blower and cart motors. The headsets are the really interesting part: each one has a single-channel EEG and a wireless radio. I took the radios out and replaced them with BlueSMiRF Bluetooth-to-serial links so that I could connect the headsets to my laptop. The hardware part of the replacement is below; the software part will be in another post.

The guts of one of the headsets. The 2.4 GHz radio is the top daughter board; the EEG hardware is the bottom daughter board.


I desoldered the original radio. It operates in the same band as Bluetooth and consumes power, so there was no reason to keep it.


The red and black wires supply power for the BlueSMiRF. It can take up to 5 V or so, but the headset runs on 4.5 V, so it is fine to hook it up like this. The red wire is connected to the power switch, rather than directly to V+, so that the power switch also turns off the Bluetooth radio.


The white wire goes from the pin labeled “T” on the EEG board to the RX pin on the BlueSMiRF. The T pin of the EEG board is a serial transmit line, which carries the EEG data to the BlueSMiRF.
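The framing on that line follows NeuroSky’s published ThinkGear serial protocol: two 0xAA sync bytes, a payload length byte, the payload, then a checksum that is the inverted low byte of the payload sum. A minimal reader, sketched Arduino-style for concreteness (in my setup the stream actually lands on the laptop over Bluetooth); the 9600 baud figure is my assumption about this board’s default rate:

```cpp
void setup() {
    Serial.begin(9600);
}

void loop() {
    if (Serial.read() != 0xAA) return;      // hunt for the first sync byte
    while (!Serial.available()) {}
    if (Serial.read() != 0xAA) return;      // and the second
    while (!Serial.available()) {}
    uint8_t len = Serial.read();
    if (len > 169) return;                  // invalid length byte; resync
    uint8_t sum = 0;
    for (uint8_t i = 0; i < len; i++) {
        while (!Serial.available()) {}
        uint8_t b = Serial.read();
        sum += b;
        // a real parser would decode the payload rows here
    }
    while (!Serial.available()) {}
    if ((uint8_t)~sum == Serial.read()) {
        // checksum passed: one valid EEG packet received
    }
}
```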


I glued the Bluetooth radio into place with hot glue. The LEDs on the BlueSMiRF are covered by black paint on the inside of the Mindflex headset, but I scratched away the paint in little circles so the BlueSMiRF status lights would shine through.


The finished product looks stock, until you turn it on. That red light on the side is not normally there.


Toybrain further improvements

I populated one of the Toybrain V2 boards and gave it a bit of a shakedown. I still have to test the motor driver, but I’ve at least fixed the backwards ICSP header and the reversed TX/RX lines. I did add an LED for debugging, but then hooked it up to an ADC line, so lighting it up means losing an analog pin.

For those as don’t know, you can get the analog pins on an Arduino/ATmega168/ATmega328/whatever to act as digital GPIOs by treating them like digital pins, using the aliases “A0” through “A5”. Full instructions are here.
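So the debug LED on the ADC line still just needs the usual two calls; a minimal blink to prove the point:

```cpp
void setup() {
    pinMode(A0, OUTPUT);      // the "analog" pin, acting as plain GPIO
}

void loop() {
    digitalWrite(A0, HIGH);   // debug LED on
    delay(500);
    digitalWrite(A0, LOW);    // debug LED off
    delay(500);
}
```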

The V2 boards also have a reset button, which is very useful.

I’m already working on V3, which is going to be smaller. The V2 has headers for power, ground, and data for each pin, which I think is a bit much. I want that room back to build a voltage regulator and some filtering onto the board. The microcontroller can run at 1.8 to 5.5 volts, so the filtering is a bit more important than the regulation. However, some toys surely run at 6 volts or more, which would ruin the microcontroller, and since it’s an SMD device, replacing it would be a pain.

So V3 will include filter capacitors, probably SMD, on the power rails, and a 3.3 V regulator (the Micrel MIC5209-3.3 in a SOT-223 package looks good), with the option to short around it. The regulator only supplies power for the microcontroller, so it won’t need to handle much current.

Again with the lasers

There is an Instructable up on using speakers as galvanometers for a laser projector. This looks just about optimal for the Nuiteblaster, as it provides readable text without defocusing or otherwise spreading the laser beam.

I’ve started building one, with a couple of modifications. Instead of resistors, I’m using diodes to snub the back-EMF from the speakers. I’m also using MOSFETs instead of bipolar transistors to switch the power to the speakers. MOSFETs have lower on-resistance, and so transfer more power and waste less energy as heat. They also have VERY low gate current (low enough to treat as nonexistent for my purposes), so there’s no need for current-limiting resistors on the gates, although a small resistor might be good to damp any ringing from slamming 5 V into the gate. Since I’m driving them directly from a 5 V microcontroller, gate drive and switching time hopefully won’t be a concern.
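The drive side then reduces to two PWM pins, one per speaker-galvo, straight into the gates. A sketch with placeholder pins and a placeholder test pattern; the flyback diodes across the speaker coils do the snubbing, so nothing here has to absorb the back-EMF:

```cpp
const int PIN_X = 9;    // gate of the MOSFET driving the X-axis speaker
const int PIN_Y = 10;   // gate of the MOSFET driving the Y-axis speaker

void setup() {
    pinMode(PIN_X, OUTPUT);
    pinMode(PIN_Y, OUTPUT);
}

void loop() {
    // Sweep the axes in opposition: the beam should trace a diagonal.
    for (int i = 0; i < 256; i++) {
        analogWrite(PIN_X, i);
        analogWrite(PIN_Y, 255 - i);
        delayMicroseconds(500);
    }
}
```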

Welding

I’m trying to build a tricycle tallbike, because nothing says “overconfidence” like having your first welding project be something that drops you into traffic if the welds break.

It will have two front wheels and one rear wheel. The rear wheel will provide power, and the two front wheels will provide steering. I’ll post photos as soon as I have any of it together.