Category: Python

Color All The FloatCanvas Objects!

I’m drawing stuff over a background image of the ocean, which looks bluish-green. Naturally, bluish-greens and some grays don’t have enough contrast to stand out, so they kind of get lost in the image.

I initially tried calculating luminance from RGB triplets, which does work for readability, but I was using the web readability thresholds for the luminance ratio (4.5 for normal, 7 for high contrast), and it didn’t work very well because my background is sort of middling in luminance, so it punched out most of the middle range of colors, resulting in everything being either dark and similar-looking, or light and similar-looking.
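For reference, those web thresholds are WCAG contrast ratios, which are computed from relative luminance. Here's a minimal sketch of that calculation (the cadetblue RGB values are from the standard X11 list):

def srgb_channel(c):
    # sRGB channel (0-255) to linear-light value, per the WCAG 2.0 definition
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (srgb_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    l1, l2 = sorted([luminance(rgb1), luminance(rgb2)], reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# cadetblue vs. white is only about 3.0, already below the 4.5 threshold,
# which is why a middling background fails against most of the middle range
print(contrast_ratio((95, 158, 160), (255, 255, 255)))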

I switched to RGB color distance (which, yes, isn’t perceptually flat, but this isn’t really a color-sensitive application, beyond not making them too similar). In order to figure out where my threshold should be, I wanted to get a list of all the wxPython named colors, and what they looked like, and their distance from “cadetblue”, which is about the same color as the average of my ocean background.

#!/usr/bin/env python3

# Generate an HTML page that previews all the named colors from WX
import math

import wx
import wx.lib.colourdb as wxColorDB

app = wx.App(False)
wxColorDB.updateColourDB()  # make sure the extended colour names are registered

def get_d(colorname1, colorname2="cadetblue"):
    target_color = wx.Colour(colorname1).Get()
    source_color = wx.Colour(colorname2).Get()
    # Euclidean distance between the RGB triplets (ignoring alpha)
    d = math.sqrt(sum(pow(t - s, 2) for t, s in zip(target_color[:3], source_color[:3])))
    return d
    
print("<html>")
print("  <body>")

colors = list(set(wxColorDB.getColourList()))
colors.sort(key=lambda c: get_d(c, "white"))

for color in colors:
    cRGB = wx.Colour(color).Get()
    print("    <div style=\"white-space: nowrap\">")
    print(f"      <div style=\"display: inline-block; width:300px\">{color}</div>")
    print(f"      <div style=\"display: inline-block;height:30px;overflow:auto;background-color:rgb({cRGB[0]},{cRGB[1]},{cRGB[2]}); width:100px\"> </div>")
    d = get_d(color, "white")
    print(f"      <div style=\"display: inline-block\">{d}</div>")
    print("    </div>")

print("  </body>")
print("</html>")

That script generates an HTML page with the colors in it, ranked by distance from the given color name. I picked white for the version published here, but as you can see, the default is “cadetblue”. If you pick a color name WX doesn’t know, you’re going to have a bad time.

A distance of 80 seemed to work pretty well for me, so as a rule of thumb, 80 units of color distance gets you a distinct color in 8-bit-per-color RGB color space.

There are, of course, some problems to be aware of. For instance, distance folds hue and value together, so getting brighter or darker and remaining the same hue can make up a lot of that 80 units, without necessarily getting good contrast.
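To make that concrete, here's a quick made-up example: two grays 46 steps apart per channel are about 80 units from each other even though they differ only in brightness, while an 80-unit hue shift reads as a genuinely different color:

import math

def rgb_distance(c1, c2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

print(rgb_distance((100, 100, 100), (146, 146, 146)))  # ~79.7, still gray, just lighter
print(rgb_distance((100, 100, 100), (180, 100, 100)))  # 80.0, shifted toward red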

In which Ellipses are Dealt with Handily

I’m making a user interface where people can draw a lasso select on an image, and I want to see if certain points in the image are within that lasso select. Now that I think about it, I could probably have done some form of polygon membership test (e.g. raycasting) for each of the points that I wanted to check, with the points of the lasso forming the polygon, but that has a few problems. One of them is that the ends of the lasso can overlap, but that can be handled by e.g. a convex hull. In fact, I may end up doing that, because the current approach has the problem that it can’t handle an arbitrarily-shaped lasso selection.
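For reference, that membership test is only a few lines; here's a sketch of the even-odd ray-casting version (not the code I actually used):

def point_in_polygon(x, y, poly):
    # poly is a list of (x, y) vertices; cast a ray in +x from the point
    # and count edge crossings (even-odd rule)
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Does this edge straddle the ray's height, and is the crossing
        # to the right of the point?
        if (yi > y) != (yj > y) and x < xi + (y - yi) * (xj - xi) / (yj - yi):
            inside = not inside
        j = i
    return inside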

What I ended up doing as a first cut, however, has some other useful properties. I had the software find a minimum enclosing ellipse of the points for the lasso selection, and then tested the points I wanted checked to see if they were inside the ellipse.


import math

import numpy as np

def min_bounding_oval(points, tolerance=0.01):
  # Create a 2xN matrix from a list of 2D points
  P = np.matrix(points).T

  # Pseudocode from https://stackoverflow.com/questions/1768197/bounding-ellipse
  # Pythonification by me
  # Input: A 2xN matrix P storing N 2D points
  #        and tolerance = tolerance for error.
  # Output: The equation of the ellipse in matrix form,
  #         i.e. a 2x2 matrix A and a 2x1 vector c representing
  #         the center of the ellipse.

  # Dimension of the points
  d = 2
  # Number of points
  N = len(points)

  # Add a row of 1s to the 2xN matrix P, so Q is 3xN now
  Q = np.vstack([P, np.ones((1, N))])

  # Initialize
  count = 1
  err = 1
  # u is an Nx1 vector where each element is 1/N
  u = (1.0 / N) * np.ones((N, 1))

  # Khachiyan algorithm
  while err > tolerance:
    X = Q * np.diagflat(u) * Q.T

    M = np.diagonal(Q.T * X.I * Q)

    # Find the value and location of the maximum element in the vector M
    maximum = M.max()
    j = np.argmax(M)

    # Calculate the step size for the ascent
    step_size = (maximum - d - 1) / ((d + 1) * (maximum - 1))

    # Take the vector u, and multiply all the elements in it by (1 - step_size)
    new_u = (1 - step_size) * u

    # Increment the jth element of new_u by step_size
    new_u[j] = new_u[j] + step_size

    # The error is the square root of the SSD between new_u and u
    err = math.sqrt(((new_u - u) ** 2).sum())

    # Increment count and replace u
    count = count + 1
    u = new_u

  # Put the elements of the vector u into the diagonal of a matrix
  # U with the rest of the elements as 0
  U = np.diagflat(u)

  # Compute the A matrix
  A = (1.0 / d) * (P * U * P.T - (P * u) * (P * u).T).I

  # And the center
  c = P * u

  return [A, c]

That gets me A, a 2×2 matrix representing the ellipse, and c, a 2×1 matrix representing the center of the ellipse. Together they describe the ellipse as the set of points x for which (x − c)ᵀA(x − c) ≤ 1.

To check if a point is in the ellipse, some fiddling has to be done to A.

# Convert A to a whitening matrix (W = A**(1/2)), which maps the
# ellipse to the unit circle
w, v = np.linalg.eig(A)
D = np.diagflat(w)
W = v * np.sqrt(D) * v.I

# Subtract the center of the ellipse and apply the whitening matrix.
# tag_x and tag_y are the x and y positions of the point to check
p = np.matrix([[tag_x], [tag_y]])
p_center = p - c
p_white = W * p_center

# The whitened point is inside the ellipse if its norm is at most 1
if np.linalg.norm(p_white) <= 1:
    return True  # or whatever you want to do with it

Note that I haven’t proven this code correct or anything, just run it and had it work for me.
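For completeness, the two pieces fit together something like this (the lasso points and tag coordinates here are placeholders):

import numpy as np

lasso_points = [(10, 12), (40, 9), (55, 30), (30, 45), (8, 33)]
tag_x, tag_y = 30, 25

A, c = min_bounding_oval(lasso_points)

w, v = np.linalg.eig(A)
W = v * np.sqrt(np.diagflat(w)) * v.I
p_white = W * (np.matrix([[tag_x], [tag_y]]) - c)
print(np.linalg.norm(p_white) <= 1)  # True if the point is inside the ellipse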

TF-IDF in Python

I am trying to process a bunch of text generated by human users to figure out what they are talking about, in the context of an experiment where there were robots a person could be controlling, a few target areas the robots could move to, and a few things they could move, like a crate. One thing that occurred to me was to use TF-IDF, which, given a text and a collection of texts, tells you what words in the text are relatively unusual to that particular text, compared to the rest of the collection.

It turns out that that’s not really what I wanted, because the words that are “unusual” are not really the ones that the particular text is about.

select(1.836) all(3.628) red(2.529) robots(1.517) ((1.444) small(5.014) drag(2.375) near(3.915) lhs(4.321) of(1.276) screen(2.018) )(1.444)

This is a sentence from the collection, and the value after each word is the TF-IDF score for that word. The things I care about are that it’s about robots, and maybe a little that it’s about the left hand side of the screen. “Robots” actually got a pretty low score (barely more than “of”), but “small” got the highest score in the sentence.

At any rate, this is how I did my TF-IDF calculation. Add documents to the object with add_text, and get scores with get_tfidf. It uses NLTK for tokenization; you could also use t.split(" ") to break strings up on spaces.

import math

import nltk

class TFIDF(object):
  def __init__(self):
    # Count of docs containing each word
    self.doc_counts = {}
    self.docs = 0.0

  def add_text(self, t):
    # We're adding a new doc
    self.docs += 1.0
    # Get all the unique words in this text
    uniques = list(set(nltk.word_tokenize(t)))
    for u in uniques:
      if u in self.doc_counts:
        self.doc_counts[u] += 1
      else:
        self.doc_counts[u] = 1

  def get_tfidf(self, t):
    word_counts = {}
    # Count occurrences of each word in this text
    words = nltk.word_tokenize(t)
    for w in words:
      if w in word_counts:
        word_counts[w] += 1
      else:
        word_counts[w] = 1
    # Calculate the TF-IDF for each word
    tfidfs = []
    for w in words:
      # Doc count is either 0 (the word is in no docs) or the stored count
      w_docs = 0
      if w in self.doc_counts:
        w_docs = self.doc_counts[w]

      # The 1 is to avoid div/zero for previously unseen words
      idf = math.log(self.docs / (1 + w_docs))
      tf = word_counts[w]
      tfidfs.append((w, tf * idf))
    return tfidfs
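Usage looks something like this (the documents are made up for illustration, and nltk.word_tokenize assumes NLTK's tokenizer data has been fetched with nltk.download('punkt')):

tfidf = TFIDF()
tfidf.add_text("select all red robots")
tfidf.add_text("drag the crate to the target")
tfidf.add_text("move all robots to the left side of the screen")

for word, score in tfidf.get_tfidf("select all red robots"):
    print(word, score)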


Download ALL The Music

Given a file containing a list of songs, one per line, in the format “Artist – Song Title”, this script downloads the audio of the first YouTube video link on a Google search for that song. This is quite useful if you want the MP3 for every song you ever gave a thumbs up on Pandora. On my computer, it averages about 4 songs a minute.

The Requests library and BeautifulSoup make writing screen scrapers and automating the web really clean and easy.

#!/usr/bin/env python3

# Takes a list of titles of songs, in the format "artist - song", and searches for each
# song on Google. The first YouTube link is passed off to youtube-dl to download it and
# get the MP3 out. This doesn't have any throttling because (in theory) the conversion step
# takes enough time to provide throttling.

import re
from subprocess import call

import requests
from bs4 import BeautifulSoup

def queryConverter(videoURL):
    call(["youtube-dl", "--extract-audio", "--audio-format", "mp3", videoURL])

def queryGoogle(songTitle):
    reqPreamble = "https://www.google.nl/search"
    reqData = {'q': songTitle}
    r = requests.get(reqPreamble, params=reqData)
    if r.status_code != 200:
        print("Failed to issue request to {0}".format(r.url))
    else:
        bs = BeautifulSoup(r.text, "html.parser")
        tubelinks = bs.find_all("a", attrs={'href': re.compile("watch")})
        if len(tubelinks) > 0:
            vidUrl = re.search("https[^&]*", tubelinks[0]['href'])
            return requests.utils.unquote(vidUrl.group(0))
        else:
            print("No video for {0}".format(songTitle))

if __name__ == "__main__":
    with open("./all_pandora_likes", 'r') as inFile:
        for line in inFile:
            videoURL = queryGoogle(line)
            if videoURL is not None:
                queryConverter(videoURL)

PDB for n00bs

PDB is the Python debugger, which is very handy for debugging scripts. I use it in two ways.

If I’m having a problem with the script, I’ll put in the line

import pdb; pdb.set_trace()

just before where the problem occurs. Once that line is hit, I get the interactive debugger and can start stepping through the program, seeing where it blows up and what values the variables have before that happens.
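As a sketch, with a hypothetical buggy function:

def process(records):
    import pdb; pdb.set_trace()  # execution pauses here with a (Pdb) prompt
    # At the prompt: n steps to the next line, s steps into calls,
    # p records prints a variable, and c continues running
    return [r["value"] for r in records]  # the hypothetical line that blows up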

However, I recently found a very handy second way. I was debugging a script with a curses interface, which cleans up when it exits. Unfortunately, that cleanup means that my terminal gets wiped when something crashes, so instead of a stack trace, I just get dumped back to the terminal when something goes wrong, with no information at all left on the screen.

Invoking the script with

python -m pdb ./my_script.py

runs the script under the debugger: it stops at the first line, I type c to continue, and when something goes wrong the program halts in the post-mortem debugger with some amount of stack trace. It’s messy looking because of curses, but I can at least see what is going on.
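If you don’t want to type c yourself every time, newer Pythons (3.2 and up) can do it for you:

python -m pdb -c continue ./my_script.py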