Category: Making

Serious MOSFETs

[Photo: 2015-08-26 23.44.43]

I’m designing a simple H-bridge for projects that are simple but large. The MOSFETs are rated at 300A and 40V. The board also has a driver for the MOSFETs. I hope to find a driver that uses I2C or some other interface, rather than PWM.
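
In case it’s not clear what I mean by an I2C interface instead of PWM, here’s a rough sketch of the microcontroller side, assuming a hypothetical driver that accepts a signed speed at a made-up address and register (nothing below corresponds to a real part):

    #include <Wire.h>

    const uint8_t DRIVER_ADDR = 0x30;  // made-up I2C address for the imagined driver
    const uint8_t REG_SPEED   = 0x01;  // made-up "speed" register

    // Send a signed speed (-255..255) to the imagined I2C motor driver.
    void setSpeed(int16_t speed) {
      Wire.beginTransmission(DRIVER_ADDR);
      Wire.write(REG_SPEED);
      Wire.write((uint8_t)(speed >> 8));    // high byte
      Wire.write((uint8_t)(speed & 0xFF));  // low byte
      Wire.endTransmission();
    }

    void setup() {
      Wire.begin();
    }

    void loop() {
      setSpeed(200);    // forward
      delay(2000);
      setSpeed(-200);   // reverse
      delay(2000);
    }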

[Photo: 2015-10-12 22.13.25]

The board overall is pretty small, but I haven’t figured out a good way to heat sink it. The unpopulated round footprints are for capacitors, and when the caps are installed, they block any easy installation of a heat sink over the MOSFETs. I may design the second iteration of the board around thermal management, and have holes for mounting a commodity CPU heat sink over the FETs.

The current design of the board is available here.

I’ve tested a prototype of the current design, and it does work, but I didn’t stress it very hard.

[Photo: 2015-09-13 17.07.01]

Direct-to-PCB laser printing

“Also, contrary to popular belief, there’s no issue with laser printing on a conductive surface.” – from here.

I’ve seen this in operation at Worcester Polytechnic Institute. There, the flexible/soft robotics lab makes PCBs from copper tape on a plastic substrate, sticks them to a sheet of paper, and runs them through a totally normal, unhacked laser printer. Copper tape is probably thinner than PCB copper, so it ends up not sinking too much heat away from the fuser. The resulting PCBs can be etched right out of the printer, with no ironing step in the middle.

I have to hit a hardware store at some point this week; perhaps I’ll be able to get copper tape there.

Accidental Aesthetics

Works from a “school” of art share some common elements. Looking at paintings by Dali, Magritte, and Breton, one can say that they share something that is not shared with a Monet. People not trained in the academic study of art might have a hard time naming or articulating that quality, but it is definitely present.

The artists named above are all painters. If one wants to get truly pedantic, it’s possible to claim that their works all have the common quality “flat surface covered by pigments mixed with a binder”. The actual common quality is more a matter of their treatment of form, especially in relation to the expected juxtaposition of forms in the real world; their engagement with representing the unconscious world, that is, the realms of dream, delusion, and insanity; and their direct handling of the duality of representation and reality.

From the fact that this common quality does not directly relate to the material used, we can infer that there can exist works that do not use the same material, and yet have the same quality. This inference is supported by the existence of surrealist sculpture.

However, some materials and creative processes force a certain common developmental aesthetic. Three cases of a unified aesthetic that is incidental to the product, but nonetheless shared, are: the textures used in 3D modeling, the debug output of computer vision systems, and the appearance of DIY/prototyped electromechanical devices from the current generation of hacker spaces.

These aesthetics are unified within themselves, but they are not of a piece with each other. Textures adopt the form that they do because the technology demands it. The technology is defined, and the aesthetic is fully constrained by it. Computer vision systems develop their aesthetic because they must map the world, through the system’s understanding, into a form that the human user can understand. The technology is not fully defined, but the system is confined on three fronts: the input from the real world, the representation available inside the system, and what users can “read” in real time. Prototyped devices have the fewest constraints. The technology is incompletely defined, and its form is also undefined, so it is shaped by expedience and available tools. It is the most accidental aesthetic, because it is the one that forms when no other aesthetic is selected.

[Image: paladin_head]

This is an example of a texture for a human head from here. The distortion would be corrected by remapping onto a model of a human head.

Textures are the most rigidly constrained accidental aesthetic. This description comes from a common modeling file format, but the technology is similar across many modeling processes. The model consists of three files. The first file, the model file, describes the 3D points that make up the surfaces of the model. It also includes a reference to the second file, the material file. The material file describes the set of materials the object is made of, and how light interacts with them. Each material may refer to a third file, the texture. A texture is a flat image file. Regions of the flat image are mapped onto surfaces of the model by a (usually) one-to-one mapping from vertices on the model to vertices on the texture. The vertices on the texture define a shape which is then “cut out” and “applied” to the corresponding shape on the model. Because of the way this works, and the tools used to create the mapping, the texture is frequently a flat representation of the 3D object, in much the same way that a map of the earth is a flat representation of the 3D world.
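
To make that concrete, here is a minimal C++ sketch of reading that mapping out of a Wavefront-style OBJ file. It assumes faces are written as “f v/vt v/vt v/vt” and ignores normals, materials, and error handling:

    // Minimal sketch of reading the vertex-to-texture mapping from a
    // Wavefront-style OBJ file. Assumes faces are written as "f v/vt v/vt v/vt"
    // and ignores normals, materials, and error handling.
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };

    int main(int argc, char** argv) {
      std::ifstream in(argc > 1 ? argv[1] : "model.obj");
      std::vector<Vec3> verts;  // 3D points on the model
      std::vector<Vec2> uvs;    // 2D points on the flat texture image

      std::string line;
      while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {            // a vertex in model space
          Vec3 v; ss >> v.x >> v.y >> v.z; verts.push_back(v);
        } else if (tag == "vt") {    // a vertex on the texture (UV coordinates)
          Vec2 t; ss >> t.u >> t.v; uvs.push_back(t);
        } else if (tag == "f") {     // a face: each corner is "v_index/vt_index"
          std::string corner;
          while (ss >> corner) {
            auto slash = corner.find('/');
            int vi = std::stoi(corner.substr(0, slash)) - 1;   // OBJ indices start at 1
            int ti = std::stoi(corner.substr(slash + 1)) - 1;
            std::cout << "model point (" << verts[vi].x << ", " << verts[vi].y
                      << ", " << verts[vi].z << ") maps to texture point ("
                      << uvs[ti].u << ", " << uvs[ti].v << ")\n";
          }
        }
      }
    }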

Altering the texture changes how it appears on the model, so the texture is completely constrained by the model. Because it is a flat image file, the texture is also constrained in the ways it can be displayed to the user. Because of this complete constraint, textures show a very strong unity of aesthetic.

Robot readable world from Timo on Vimeo.

Robot Readable World is a compilation of the debugging output of computer vision algorithms. The computer system operates on the video stream to produce data streams which are not visible to humans. These video outputs are intended to let human debuggers determine what the system “sees”; that is, they map the data structures into human-readable form and present them mixed with the incoming images, so that a person can relate real objects to the system’s “perception”. Because these outputs are merely explanations of the state of the system, rather than a key part of its functioning, they can be altered and rearranged to provide the most useful representation for human readers. The data underneath may not change, but the presentation can be altered.
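
As a toy example of the pattern (my choice of detector and colors, not anything from the video), here is a short OpenCV sketch that finds corner features in each camera frame and draws them back over the image so a person can see what the system is tracking:

    // Toy example of a vision debug overlay: find corner features in each
    // frame and draw them back onto the image so a person can see what the
    // system is "looking at". The detector and drawing style are arbitrary.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
      cv::VideoCapture cap(0);             // default camera
      cv::Mat frame, gray;
      std::vector<cv::Point2f> corners;

      while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // The "perception": the features the system actually operates on.
        cv::goodFeaturesToTrack(gray, corners, 100, 0.01, 10);
        // The "explanation": draw each feature over the incoming image.
        for (const auto& c : corners) {
          cv::circle(frame, cv::Point(cvRound(c.x), cvRound(c.y)), 4,
                     cv::Scalar(0, 255, 0), 2);
        }
        cv::imshow("debug view", frame);
        if (cv::waitKey(1) == 27) break;   // Esc to quit
      }
    }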

As a result, these systems are unconstrained at one end, the presentation to the user. However, they are constrained at the other end to operate on images. The images are, in turn, constrained by the positions and relations of objects in the real world. A computer vision system that operates in a made-up or simulated environment would have no practical use to humans unless they also inhabited that environment. That does happen, for example when vision techniques are applied inside video games, but it is less common.

[Photo: dog_treat_dispenser]

This dog treat dispenser is an example of the third accidental aesthetic: the design of DIY electronics. Some hallmarks of this aesthetic are the exposed circuit boards, the surface texturing of 3D printed or laser cut (in this case, 3D printed) parts, visible and accessible wiring, and the use of visible, commercially available screws and other connectors.

[Photo: Nahman_laser_box4]

This project, a controller for a coffee roaster, has the same aesthetic, despite being constructed by a different person, unknown to the maker of the dog treat dispenser.

This is the least constrained of the three accidental aesthetics. The maker can choose the parts used to create the device, and the form of the finished device. However, the tools available to the maker will drive certain decisions in its eventual form. A 3D printer provides a way to quickly create certain forms, but has a distinct material, texture, and color for those forms. Laser cutting allows a form to be built from layers of flat materials, but again, some building techniques work better than others. Off-the-shelf commercial components have to be connected together, which leads to visible wires. All of these decisions (to print or not, to laser cut or not, to wire point-to-point or make PCBs) have a bias in them that each artist/creator navigates, and the sequence of those decisions leads to a particular aesthetic for the piece.

Toy Helicopter Hacking

This Christmas, my parents gave me a palm-sized toy helicopter (Avatar Z008), and my girlfriend’s parents gave me a slightly bigger toy helicopter with a video camera (Egofly Spyhawk). I also have one that I bought myself (Syma S107). All of them are gyro-stabilized, coaxial-rotor helicopters, which basically just means that they automatically don’t roll, and are easy to fly.

I had hoped to convert one of them into a tiny drone. I opened up the S107 this morning to take a look at the internal PCB. The IR signal from the remote goes to an unmarked 14-pin IC. The gyro (which I assume to be the little metal can mounted on a daughter board off the main PCB) is marked with “C 146” and “Y2373”. One pin of the gyro is grounded, one (marked “TLY”) is connected to the unmarked IC, and one goes to Vcc. That is pretty clearly power, ground, and a signal pin.

This means any control that the system is doing based on the gyro is done by that unmarked IC. Chances are that re-implementing the gyro control would be amusing, but much harder than simply adding whatever drone control I decided to add “on top of” the existing hardware.

An easier approach would be to take advantage of work that other people have done on reverse-engineering the IR protocol, and add my own control circuit that sends IR control signals to the existing board. That way, the existing board would take care of driving the motors and keeping the helicopter balanced, while my board would add autonomy.
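
Something like this is what I have in mind for the added board. The packet layout and every timing constant below are placeholders, not the real protocol; those numbers would come from the published reverse-engineering work:

    // Sketch of sending a modulated IR control packet from an added board.
    // The packet layout and every timing constant here are placeholders;
    // the real values would come from the published reverse-engineering work.
    const int IR_PIN = 3;

    // Emit a roughly-38kHz carrier for 'us' microseconds by bit-banging the
    // LED pin. The half-period delays would need tuning on real hardware.
    void mark(unsigned int us) {
      unsigned long start = micros();
      while (micros() - start < us) {
        digitalWrite(IR_PIN, HIGH);
        delayMicroseconds(13);
        digitalWrite(IR_PIN, LOW);
        delayMicroseconds(13);
      }
    }

    void space(unsigned int us) {
      digitalWrite(IR_PIN, LOW);
      delayMicroseconds(us);
    }

    // Send one byte, most-significant bit first, with placeholder
    // mark/space durations for 0 and 1 bits.
    void sendByte(uint8_t b) {
      for (int i = 7; i >= 0; i--) {
        mark(300);                           // placeholder mark length
        space((b >> i) & 1 ? 700 : 300);     // placeholder space lengths
      }
    }

    void setup() {
      pinMode(IR_PIN, OUTPUT);
    }

    void loop() {
      mark(2000); space(2000);               // placeholder header
      sendByte(100);                         // placeholder throttle
      sendByte(63);                          // placeholder yaw
      sendByte(63);                          // placeholder pitch
      sendByte(63);                          // placeholder trim
      delay(120);                            // placeholder inter-packet gap
    }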

Downward- and front-facing versions of SpeckleSense could be used to give the helicopter a sense of its movement in the world, which might be good enough for dead-reckoning navigation over small distances.

Mindflex EEG Hacking

I got a Mindflex Duel for Christmas. The Mindflex Duel is a toy that uses a pair of EEG headsets to read signals from the users, and then send those signals to a base unit that contains a blower and a little sliding cart to move the blower. The users try to concentrate to control the cart, moving a little ball suspended in the air jet from the blower into a goal.

Needless to say, I gutted it.

The base unit has a little PCB with a 2.4 GHz radio on it, and a little hardware to control the blower and cart motors. The headsets are the really interesting part. Each one has a single-channel EEG and a wireless radio. I took the radios out and replaced them with BlueSmiRF Bluetooth-to-serial links so that I could connect them to my laptop. The hardware part of the replacement is below; the software part will be in another post.

The guts of one of the headsets. The 2.4 GHz radio is the top daughter board; the EEG hardware is the bottom daughter board.

[Photo: IMG_20130107_205037.jpg]

I desoldered the original radio. It operates in the same band as Bluetooth and consumes power, so there was no need to keep it.

[Photo: IMG_20130107_210345.jpg]

The red and black wires supply power for the BlueSmiRF. It can take up to 5 or so volts, but the headset runs on 4.5V, so it is fine to hook it up like this. The red wire is connected to the power switch, rather than V+, so that the power switch also turns off the Bluetooth radio.

[Photo: IMG_20130107_221759.jpg]

The white wire goes from the pin labeled “T” on the EEG board to the RX pin on the BlueSmiRF. The T pin of the EEG board is a serial line, which transmits the EEG data to the BlueSmiRF.
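
The software side will get its own post, but the laptop end is roughly this: open the serial port that the Bluetooth link shows up as and read raw bytes. The device path and the 9600 baud rate here are assumptions, and the actual packet parsing is left out:

    // Rough sketch of the laptop side: open the serial port that the
    // Bluetooth link shows up as, and hex-dump the raw bytes from the EEG
    // board. The device path and 9600 baud are assumptions; packet parsing
    // is left for the follow-up post.
    #include <cstdio>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int main() {
      const char* port = "/dev/rfcomm0";       // assumed device path
      int fd = open(port, O_RDONLY | O_NOCTTY);
      if (fd < 0) { perror("open"); return 1; }

      termios tio{};
      tcgetattr(fd, &tio);
      cfmakeraw(&tio);                         // raw bytes, no line handling
      cfsetispeed(&tio, B9600);                // assumed baud rate
      cfsetospeed(&tio, B9600);
      tcsetattr(fd, TCSANOW, &tio);

      unsigned char buf[64];
      while (true) {
        ssize_t n = read(fd, buf, sizeof(buf));
        for (ssize_t i = 0; i < n; i++) {
          printf("%02x ", buf[i]);             // raw EEG byte stream
        }
        fflush(stdout);
      }
    }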

[Photo: IMG_20130107_223120.jpg]

I glued the Bluetooth radio into place with hot glue. The LEDs on the BlueSmiRF are covered by black paint on the inside of the Mindflex headset, but I scratched away the paint in little circles so the BlueSmiRF status lights would shine through.

[Photo: IMG_20130108_074905.jpg]

The finished product looks stock, until you turn it on. That red light on the side is not normally there.

[Photo: IMG_20130108_075434.jpg]

Toybrain further improvements

I populated one of the Toybrain V2 boards and gave it a bit of a shakedown. I still have to test the motor driver, but I’ve at least fixed the backwards ICSP header and the reversed TX/RX lines. I did add an LED for debugging, but then hooked it up to an ADC line, so lighting it up means losing an analog pin.

For those who don’t know, you can get the analog pins on an Arduino/ATMega168/ATMega328/whatever to act as digital GPIOs by treating them like ordinary digital pins, using the aliases “A0” through “A5”. Full instructions are here.
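
For example (A3 here is a placeholder, not necessarily the pin the LED actually ended up on):

    // Treat an "analog" pin as a digital output: address it as A0..A5.
    // A3 is a placeholder; use whichever analog pin the LED actually landed on.
    const int DEBUG_LED = A3;

    void setup() {
      pinMode(DEBUG_LED, OUTPUT);
    }

    void loop() {
      digitalWrite(DEBUG_LED, HIGH);
      delay(500);
      digitalWrite(DEBUG_LED, LOW);
      delay(500);
    }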

The V2 boards also have a reset button, which is very useful.

I’m already working on V3, which is going to be smaller. The V2 has headers for power, ground, and data for each pin, which I think is a bit much. I want that room back to build a voltage regulator and some filtering onto the board. The microcontroller can run at 1.8 to 5.5 volts, so the filtering is a bit more important than the regulation. However, some toys surely run at 6 volts or more, which would ruin the microcontroller, an SMD device.

So V3 will include filter capacitors, probably SMD, on the power rails, and a 3.3V regulator (the Micrel MIC5209-3.3 in a SOT-223 package looks good) or the option to short around it. The regulator only supplies power for the microcontroller, so it won’t need to be very high power.

Again with the lasers

There is an Instructable up on using speakers as galvanometers for a laser projector. This looks just about optimal for the Nuiteblaster, as it provides readable text without defocusing or otherwise spreading the laser beam.

I’ve started building one, with a couple of modifications. Instead of resistors, I’m using diodes to snub the back-EMF from the speakers. I’m also using MOSFETs instead of bipolar transistors to switch the power to the speakers. MOSFETs have lower on-resistance than bipolar transistors, and so transfer more power and waste less energy as heat. They also have VERY low gate current (low enough to treat as non-existent for my purposes), so there’s no need for current-limiting resistors on the gates, although a resistor might be good to limit any ringing from slamming 5V into the gate. Since I’m driving it directly from a 5V microcontroller, gate drive and switching time hopefully won’t be a concern.
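
The driving code should be simple. Here’s a rough sketch of what I expect it to look like, with placeholder pin numbers and a test pattern; the real mapping from beam position to duty cycle will depend on the speakers and the optics:

    // Sketch of driving the two speaker "galvos" through the MOSFET gates
    // with PWM. Pin numbers and the position-to-duty-cycle mapping are
    // placeholders; the real values depend on the speakers and the optics.
    const int X_PIN = 9;    // PWM pin to the gate of the X-axis MOSFET
    const int Y_PIN = 10;   // PWM pin to the gate of the Y-axis MOSFET

    void setup() {
      pinMode(X_PIN, OUTPUT);
      pinMode(Y_PIN, OUTPUT);
    }

    // Deflect the beam to (x, y), each 0..255, by changing the average
    // current through each speaker coil.
    void moveTo(uint8_t x, uint8_t y) {
      analogWrite(X_PIN, x);
      analogWrite(Y_PIN, y);
    }

    void loop() {
      // Trace a simple square as a test pattern.
      moveTo(50, 50);   delay(2);
      moveTo(200, 50);  delay(2);
      moveTo(200, 200); delay(2);
      moveTo(50, 200);  delay(2);
    }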

Welding

I’m trying to build a tricycle tallbike, because nothing says “overconfidence” like having your first welding project be something that drops you into traffic if the welds break.

It will have two front wheels and one rear wheel. The rear wheel will provide power, and the two front wheels will provide steering. I’ll post photos as soon as I have any of it together.

Low Power Electronics

I am building a set of strings of lights to illuminate a labyrinth. As someone walks the labyrinth, the strings of lights will light up ahead of them to show the way, and fade out behind them as they pass. Instead of doing the build from the ground up, I’m starting with solar-powered garden lights that charge during the day, and light a string of lights at night.

My initial thought was that this would be a pretty simple task. I’d rig each light with a Sharp IR ranger, poll the ranger, and light the lights when something got close enough. Once it passed, I’d set a timer based on how long it takes to walk a strand of lights, and then shut the lights off when the timer timed out.

Unfortunately, that idea went away when I got the solar light. The light uses a single 1.2V battery, and runs the LED strand by having a simple boost converter double that to pulses of around 2.5V at a high enough rate that the LEDs don’t look like they are pulsing. I figured I would get around that by rectifying the pulses using a voltage doubler, which would get me 5V for my microcontroller and sensor. Unfortunately, voltage doublers get you voltage at the expense of current. The Sharp IR rangers can eat around 20mA, and the microcontroller is another 15mA or so. With that amount of load, the voltage on the voltage doubler rapidly falls back to ~2V. The Sharp IR rangers don’t work at anything less than about 4 volts, so I couldn’t use them.

I decided that since I don’t need range measurement, just the presence or absence of something in the range of the detector, I could get by with lighting the area up with 38kHz modulated IR, and picking it up with an IR detector module like the ones used in TVs to receive the remote signal. The microcontroller can generate the modulation signal to drive the IR LED. I got the code to do it here, I think, but that site is down now. In practice, this works just fine. I used my Arduino to do a quick sketch of the detection circuitry, and got it to blink an LED.

Unfortunately, the IR detectors I have also don’t work with less than 5V. However, unlike the Sharp IR rangers, there are a bunch of manufacturers that make TV remote receivers, and some of them operate down to 2.4V. I ordered some of those, and set up my microcontroller, IR LED, and remote receiver so I could blink an LED by sending an IR pulse.
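
The test sketch is roughly this (pin numbers are placeholders; the receiver’s output is active-low):

    // Quick test of the approach: light the area with 38kHz IR from the LED
    // and see whether the TV-remote-style receiver reports it. Pin numbers
    // are placeholders; the receiver's output is active-low.
    const int IR_LED     = 3;    // IR LED, through a current-limiting resistor
    const int IR_RECV    = 7;    // output pin of the 38kHz receiver module
    const int STATUS_LED = 13;

    void setup() {
      pinMode(IR_LED, OUTPUT);
      pinMode(IR_RECV, INPUT);
      pinMode(STATUS_LED, OUTPUT);
    }

    void loop() {
      tone(IR_LED, 38000);                              // start the 38kHz carrier
      delay(5);                                         // let the receiver respond
      bool detected = (digitalRead(IR_RECV) == LOW);    // LOW means carrier seen
      noTone(IR_LED);

      digitalWrite(STATUS_LED, detected ? HIGH : LOW);  // blink on detection
      delay(100);
    }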

That worked just fine on battery power, but running from the voltage doubler still drained the caps too fast. Powering the IR LED at reasonable brightness just took too much current. In order to let the capacitors in the voltage doubler recharge, I shortened the IR LED on-time to a tenth of a second, and put the microcontroller in a very low power (i.e. it runs on microamps, rather than milliamps) sleep mode when it is not firing the LED. Since the circuit spends most of its time off, the IR detector is the main draw on the voltage doubler. So far, this seems to work. If I want to save even more power, I can power the IR detector from a pin of the microcontroller, and shut it down when the microcontroller goes to sleep.
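
The duty-cycling part looks roughly like this on an ATmega168/328-class part (the watchdog register names are specific to that family, so anything else needs a datasheet check):

    // Sketch of the duty-cycling scheme: fire a short IR burst, check the
    // receiver, then power down with the watchdog timer as the wake-up
    // source. Watchdog register names are for the ATmega168/328 family.
    #include <avr/sleep.h>
    #include <avr/wdt.h>
    #include <avr/interrupt.h>

    const int IR_LED  = 3;   // placeholder pin numbers
    const int IR_RECV = 7;

    ISR(WDT_vect) {
      // Nothing to do here; waking the chip is the point.
    }

    // Power down until the watchdog fires (about one second later).
    void watchdogSleep() {
      cli();
      MCUSR &= ~(1 << WDRF);                             // clear watchdog reset flag
      WDTCSR |= (1 << WDCE) | (1 << WDE);                // unlock watchdog config
      WDTCSR = (1 << WDIE) | (1 << WDP2) | (1 << WDP1);  // interrupt mode, ~1s timeout
      sei();
      set_sleep_mode(SLEEP_MODE_PWR_DOWN);               // microamp-level sleep
      sleep_mode();                                      // sleep until the watchdog interrupt
      wdt_disable();
    }

    void setup() {
      pinMode(IR_LED, OUTPUT);
      pinMode(IR_RECV, INPUT);
    }

    void loop() {
      tone(IR_LED, 38000);                  // IR burst
      delay(100);                           // a tenth of a second on-time
      if (digitalRead(IR_RECV) == LOW) {
        // ...light or fade the strand here...
      }
      noTone(IR_LED);
      watchdogSleep();                      // spend most of the time asleep
    }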

Soon, I’m going to test the full circuit. I’ll post about it if I have to make any wild and crazy hardware changes.

ToyBrain at the Maker Faire

I’ll be at the Cambridge Mini Maker Faire (details here, here, and here) this Friday, showing off my ToyBrain boards, LED art, and other oddities. Look for the guy with the unnatural red hair.

Chris Connors was kind enough to include my work from 2010 in his post about the Faire.