Category: Linux

Snap is a failure

Ubuntu finally figured out that no one wanted the Unity desktop environment, but now they’re trying to fuck up software installation with “Snap”, which is apparently some kind of misbegotten blend of containerization and static linking.

Here’s what it looks like to the user: you install an application, like a text editor. It’s noticeably slower to launch, and then you go to open a file in the Documents folder in your home directory. Unfortunately, the text editor is a snap, so the Documents folder it sees is in some random number’s home directory, not yours.

Then you delete the snap and install the text editor from a source that doesn’t use snap, and so is FUCKING COMPATIBLE WITH DIRECTORIES.

As a bonus, it runs faster now.

This feels a lot like how Firefox made a working browser, and then realized that once it was working, maintenance coding was boring and not profitable, so now they divide their time between stupid new things no one wants (Pocket) and breaking existing features (Open Image in New Tab).

It seems it’s never enough to do a good job, or once it’s done the project attracts useless hangers-on who have to appear to be doing something to justify their pay.

The Year of Linux on the Desktop is “Ha, Get Fucked”

People talk about usability, and in a lot of ways, Ubuntu is pretty close. It certainly beats Windows 10, at least for me, since you’re allowed to know what has gone wrong and fix it, instead of just rebooting whenever your computer gets infested with ghosts. I do complain about Ubuntu, but it’s usually because most things are decent, so the things that are bad are particularly egregious. And, after all, I did get what I paid for.

This time, though, I’m trying to work with a Raspberry Pi Zero W and Raspbian. It has a little setup walkthrough that clearly took a serious blow to the head at some point, since it asks you to connect to WiFi, and then, if you didn’t connect to a network, still asks if you want to download software. From where, I might ask, since there’s no network connection?

Of course, the reason I didn’t connect to the network is that it’s hidden. Not a problem, I have the SSID on paper here… and no way to tell it to the Pi. There’s a network configurator, but it only lets you deal with unhidden networks, not enter your own SSID. Maybe this was done because “SSID” is one of those worrying “techie” terms, and this is supposed to be for somewhat less technical users? Protip: it’s not. It ships as a bare board in an antistatic bag, for fuck’s sake. Or maybe it wasn’t done because it’s somehow hard?

At any rate, the solution appears to be: manually edit wpa_supplicant.conf. Manually edit a text file that only root can edit, in this, the Year of Our Lord 2019. I don’t have a problem with this. I have a PhD in making tiny robots go and have been using Linux for 15 years, because everything else is worse for my use cases. Normal humans, who are perhaps entering college and just want to check out Linux and maybe try writing a little Scratch or Python, they are going to have a problem with this.
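
For anyone else stuck there, the network block you need looks something like this; the SSID and passphrase are placeholders, and on a stock Raspbian image the file lives at /etc/wpa_supplicant/wpa_supplicant.conf. The scan_ssid=1 line is the part that makes hidden networks work:

country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourHiddenNetwork"
    scan_ssid=1
    psk="your passphrase here"
}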

Also, the same startup script asked me if my screen had black bars around the edges. It did, so I said yes (more fool me!). When I rebooted, the edges of the desktop (you know, where the UI goes) were mostly off the edge of the screen. Setting the hostname with the network config tool on the toolbar caused hostname resolution errors every time I used sudo, apparently because sudo wanted “ouija1” but “ouija_1” got written into /etc/hosts. That’s not actually a valid hostname (my error), but it got written into /etc/hosts anyway (an error by whoever wrote the alleged network config tool). Again, I know what /etc/hosts is, and editing it isn’t an issue. I’m weird. For most people, this is an issue.
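
For what it’s worth, the fix is just making the 127.0.1.1 line in /etc/hosts agree with what’s actually in /etc/hostname; ouija1 here is my hostname, yours will differ:

127.0.0.1       localhost
127.0.1.1       ouija1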

So in general, my experience with the Raspberry Pi and Raspbian is that it’s not ready for end users who want to use it to do things. If you want it in order to do things to it, rather than with it, and are already an experienced electronics enthusiast and Linux user, you’ll be fine.

Let’s make a 3D rendering of a forest!

Let’s do it with a video shot from a GoPro by someone walking around a camp site!

I’m not actually sure this is all that easy, or even possible. The main way to do this sort of rendering from a monocular camera is structure from motion (SfM), which relies on SIFT features in the image, and from what I recall of SIFT features, I’m not 100% convinced that they’re going to track very well from frame to frame, because the frames are mostly full of trees, and leaves look a lot like each other. On the other hand, there’s plenty of texture in the image, so maybe it will work just fine.

I’m planning to use Visual SfM, for which there are instructions online. Those instructions are for Ubuntu 12.04, but I’m on 16.04, so there are some modifications required. I got and ran the latest CUDA installer, using the local install rather than the network one because NVidia broke the network one somehow. I suspect a bad URL in the step where you install the keys.
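
For the record, the local install is the usual dpkg-then-apt dance, something like the below; the .deb and repo-directory names are placeholders that change with every CUDA release, and the key file name is from memory, so check NVidia’s actual instructions rather than trusting me:

sudo dpkg -i cuda-repo-ubuntu1604_<version>_amd64.deb
sudo apt-key add /var/cuda-repo-<version>/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda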

I got what I think are the dependencies with the command

sudo apt-get install libgtk2.0-dev glew-utils libdevil-dev libboost-all-dev libatlas-cpp-0.6-dev libatlas-dev imagemagick libatlas

The build for VSfM was so quick I thought something had gone wrong, but it did build.

The SiftGPU link in the instructions is dead, but this looks legit. The make process threw some warnings at me, but no errors.

Rather than copying the library once it’s built, I linked it with ln -s ../../SiftGPU/bin/libsiftgpu.so ./libsiftgpu.so, we’ll see if this comes back to haunt me later.

I got PBA from the suggested place, and made it without adding the include for stdlib.h, since it’s 1.05 rather than 1.04 now, and perhaps they fixed that. It seems good, as it built (although with some warnings).

I went ahead and applied the mylapack hack to pmvs-2; it seems to have gone well.

cd pmvs-2/program/main/
mv mylapack.o mylapack.o.bak
make clean
mv mylapack.o.bak mylapack.o
make depend
make

Graclus 1.2 built just fine (again with the warnings, though). Read the readme; it explains how to set the number of bits used, and if you’re not on a laughably ancient machine, the value you want is 64.
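
If I’m remembering the readme right, that boils down to one line in Makefile.in before you run make; the variable and flag names here are from memory, so trust the readme over me:

# Makefile.in: build the 64-bit version
COPTIONS = -DNUMBITS=64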

I installed the cmvs from here, you want the one called cmvs-fix2.tar.gz, because non-fix versions seem to have issues with finding lapack.

After that I had to reboot, because everything that built was linked against CUDA and video drivers that I wasn’t running, and so it all crashed when it tried to render anything in X. After rebooting, everything came up fine and I can run VisualSFM, so the next part of this will be getting the video out of the GoPro and feeding it to VisualSFM.

More Webcams, Fewer Problems

I want to have two applications using the same webcam under Linux, in my case an app using Kivy to render the webcam view and let people interact with it, and ROS.

My first hope had been to have a Kivy app that subscribed to a ROS image topic and displayed the resulting image, but I’ve spent two days beating on it and only found a variety of ways to make a Python program segfault.

The new plan is to use v4l2loopback to create two virtual cameras, and have them both fed from /dev/video0 (the actual camera).

sudo modprobe v4l2loopback devices=2

That gets me the two devices. The thing is, these devices don’t actually output any video. They’re just fake video devices that a program can write to. There are a lot of instructions on feeding them with ffmpeg. For some reason, my computer says it has ffmpeg installed, but locate can’t locate it. Instead, I’ve used gstreamer to set up /dev/video1 as a video device that ROS’s usb_cam node can handle, and configured the launch file for usb_cam to read /dev/video1 (see the snippet after the pipeline below).

gst-launch v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! v4l2sink device=/dev/video1
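
The usb_cam side of that is just a parameter change. A minimal launch file pointing it at the loopback device would look something like this; the node name and image size are whatever your setup already uses, nothing special:

<launch>
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
    <param name="video_device" value="/dev/video1" />
    <param name="image_width" value="640" />
    <param name="image_height" value="480" />
    <param name="pixel_format" value="yuyv" />
  </node>
</launch>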

That solves half of my problem. The other half is having the second video device get fed, and getting Kivy to read /dev/video2.

gst-launch v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! tee name=t ! v4l2sink device=/dev/video1 t. ! queue ! v4l2sink device=/dev/video2

Adding the tee and queue gives me two video devices playing the same video, both fed by /dev/video0. The usb_cam ROS node can read either one of them. The basic Kivy camera app that I bashed together doesn’t work, though. It seems to default to trying to open /dev/video0, and then fails because the gst-launch invocation is using it.

AnchorLayout:
    anchor_x: "center"
    anchor_y: "center"
    Camera:
        id: camera
        resolution: (640, 480)
        play: True
        index: 2
    KeyboardListener:
        id: kbd_lstn

Adding index:2 to my Kivy app’s .kv file gets me a kivy app with my video in it. As long as my ROS nodes are looking at /dev/video1 and my Kivy app is looking at /dev/video2, no one steps on anyone’s toes, and they both can operate at the same time.

Lightlocker does not play well with NVidia (or something)

If you have an Ubuntu system with an NVidia video card, and after suspending and resuming you get a login screen, but after you log in you get a black screen, and using Ctrl + Alt + F1 gets you a text console, your problem is probably lightlocker. I’ve seen some sites claim that the problem is fixable with sudo apt-get remove --purge nvidia-* which would be great if you didn’t want video drivers or CUDA. I’d rather hang onto my video drivers, so sudo apt-get remove light-locker makes everything better.

ROS and OpenCV will fite u, m8

I recently wanted to do some computer vision stuff using OpenFace, which is a collection of face-processing computer vision algorithms and tools to use them. It uses OpenCV 3.0.something, which depends on, among other things, vtk6 and its friends libvtk6-dev and python-vtk6.

Normally, this wouldn’t be a problem, but I use ROS Indigo, as does the lab I work in. ROS Indigo uses some previous version of vtk, and so attempting to install OpenCV 3.0 blows away my ROS install, and makes apt freak out when I try to install it again. The actual error was something like “you have broken held packages”, only I didn’t actually have held packages OR broken packages.

Apt just gives up at this point. Aptitude, on the other hand, proposes removing the offending VTK packages and proceeding with the ROS install. Only time will tell if I’ve trashed my OpenCV install, but if I have, I can just go back to an older OpenCV version.
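
For reference, the aptitude invocation is nothing fancier than something like the below; the package name assumes you want the full desktop variant of Indigo, so substitute whatever metapackage you actually use:

sudo aptitude install ros-indigo-desktop-full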

Download ALL The Music

Given a file containing a list of songs, one per line, in the format “Artist – Song Title”, download the audio of the first YouTube video link on a Google search for that song. This is quite useful if you want the MP3 for every song you ever gave a thumbs up on Pandora. On my computer, this averages about 4 songs a minute.

The Requests API and BeautifulSoup make writing screenscrapers and automating the web really clean and easy.

#!/usr/bin/python

# Takes a list of titles of songs, in the format "artist - song" and searches for each
# song on google. The first youtube link is passed off to youtube-dl to download it and 
# get the MP3 out. This doesn't have any throttling because (in theory) the conversion step
# takes enough time to provide throttling. 

import requests
import re
from BeautifulSoup import BeautifulSoup
from subprocess import call

def queryConverter(videoURL):
	call(["youtube-dl", "--extract-audio",  "--audio-format", "mp3", videoURL])

def queryGoogle(songTitle):
	reqPreamble = "https://www.google.nl/search"
	reqData = {'q':songTitle}
	r = requests.get(reqPreamble, params=reqData)
	if r.status_code != 200:
		print "Failed to issue request to {0}".format(r.url)
	else:
		bs = BeautifulSoup(r.text)
		tubelinks = bs.findAll("a", attrs={'href':re.compile("watch")})
		if len(tubelinks) > 0:
			vidUrl = re.search("https[^&]*", tubelinks[0]['href'])
			vidUrl = requests.utils.unquote(vidUrl.group(0))
			return vidUrl
		else:
			print "No video for {0}".format(songTitle)

if __name__=="__main__":
	with open("./all_pandora_likes", 'r') as inFile:
		for line in inFile:
			videoURL = queryGoogle(line.strip())
			if videoURL is not None:
				queryConverter(videoURL)

KiCad is Still a Shitshow

Six months ago, I designed a schematic and circuit board in KiCad. Yesterday, I tried to open it and got a list of errors so long that it didn’t fit on my screen from top to bottom. The reason is that KiCad loads the footprints of the parts you use from Github, so when (not if) the KiCad devs change how they store footprints, your install of KiCad breaks. In other words, installing KiCad gives the KiCad devs permission to break your workflow for no reason.

I fixed my KiCad package list by writing a little python script that goes through the KiCad github repo and generates a new footprint list based on however they happen to be organized and whatever they happen to be called today. Copying that over my previously working footprint list made the error messages go away.
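
I’m not going to reproduce the exact script, but the gist is: ask the GitHub API for the KiCad org’s repos, keep the ones ending in .pretty, and emit fp-lib-table entries pointing at them. A rough sketch of that idea (the org name, entry format, and ${KIGITHUB} substitution assume a KiCad 4-era Github plugin setup):

#!/usr/bin/python
# Rebuild a KiCad fp-lib-table from whatever .pretty footprint repos the
# KiCad GitHub org contains today. Prints the table to stdout; eyeball it,
# then copy it over your existing fp-lib-table.

import requests

def pretty_repos(org="KiCad"):
    # Page through the org's repos and keep the footprint libraries.
    names = []
    page = 1
    while True:
        r = requests.get("https://api.github.com/orgs/{0}/repos".format(org),
                         params={"per_page": 100, "page": page})
        r.raise_for_status()
        batch = r.json()
        if not batch:
            break
        names += [repo["name"] for repo in batch
                  if repo["name"].endswith(".pretty")]
        page += 1
    return names

if __name__ == "__main__":
    print("(fp_lib_table")
    for name in sorted(pretty_repos()):
        lib = name[:-len(".pretty")]
        print('  (lib (name {0})(type Github)'
              '(uri ${{KIGITHUB}}/{1})(options "")(descr ""))'.format(lib, name))
    print(")")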

However, since the last time I used it, my install of KiCad has stopped rendering in CVPCB. There’s no error message or anything; it just doesn’t render the copper layers, parts, etc. I get part of the ratsnest, and that’s it.

Clearly, the thing to do is to install the newest version of KiCad from the developer’s PPA.

Nope, that returns a 404.

Clearly, the thing to do is to install using the script the devs wrote to install and build KiCad.

Nope, the script fails because they use a version of wxWidgets that basically no one else on earth uses.

So let’s recap: No method of installing KiCad works, and just LEAVING YOUR WORKING INSTALL ALONE is not sufficient to ensure that it continues to work.

Since I’m designing a small PCB, CadSoft’s Eagle is looking more and more tempting. Limited PCB sizes aren’t a problem if you’re making something small.

Un-Ubuntuing Ubuntu Redux

Ubuntu ships with the “main” and “universe” repos enabled, and the “multiverse” repo disabled. This is because there are legal or ideological reasons to not use the software in multiverse.

However, there are practical reasons to use the rar/unrar packages from multiverse over the unrar-free package in universe: unrar-free frequently doesn’t work, but unrar-nonfree does.

If you want your compression and decompression software to operate, rather than be ideologically compliant:

sudo apt-add-repository multiverse
sudo apt-get update
sudo apt-get install unrar rar

Playatech started charging for their plans

Unfortunately for burners, you can no longer download Playatech’s plans for their furniture without paying them first. They used to offer the plans as free downloads, and then asked that you donate some small amount if you used them.

Unfortunately for Playatech, they left all the PDFs in a world-readable directory. The command line below gets the index of that directory, finds all the lines with “pdf” in them, gets the file names out using cut, and then downloads each file.

for file in `wget -qO- http://playatech.com/wp-content/uploads/2013/05/ | grep pdf | cut -d '>' -f 2 | cut -d '"' -f 2`; do wget http://playatech.com/wp-content/uploads/2013/05/$file; done
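
If that cut pipeline ever stops matching their index page’s HTML, wget’s recursive mode does the same job with less string surgery (-r to recurse, -np to stay below the upload directory, -nd to skip recreating the directory tree, -A to keep only PDFs):

wget -r -np -nd -A '*.pdf' http://playatech.com/wp-content/uploads/2013/05/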