A little something I knocked together in the Processing language (source code here). I used a bit of calculus to determine the trajectories of the sparks — nothing too tricky, just a bit of integration.
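The idea can be sketched in a few lines (this is an illustrative sketch, not the Processing source; the `GRAVITY` constant and launch velocity are made-up values): each spark's position is the integral of its velocity, which is the integral of a constant gravitational acceleration, and per-frame Euler integration approximates exactly that.

```python
GRAVITY = 0.2  # assumed units per frame^2, purely illustrative

def step(x, y, vx, vy):
    """Advance one spark by one frame of Euler integration."""
    vy += GRAVITY   # integrate acceleration into velocity
    x += vx         # integrate velocity into position
    y += vy
    return x, y, vx, vy

# A spark launched up and to the right arcs over and falls back
# (screen coordinates: y increases downwards).
x, y, vx, vy = 0.0, 0.0, 2.0, -3.0
for _ in range(40):
    x, y, vx, vy = step(x, y, vx, vy)
```

Per-frame integration like this is cheap enough to run for hundreds of sparks at once, which is all a particle effect needs.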
I have an idea for an animation (or maybe a game?) which would involve finding the roots of a 4th degree polynomial to determine where a trajectory (two quadratics) will intersect with a circle. I’ll be using NumPy to do this, as once you’ve defined the polynomial NumPy can find the roots automatically.
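As a rough sketch of how that root-finding would go (the trajectory and circle here are hypothetical stand-ins, not the actual game geometry): substituting a parametric trajectory of two quadratics into a circle's equation yields a quartic in t, whose coefficients `np.roots` can solve directly.

```python
import numpy as np

# Hypothetical example: trajectory x(t) = t, y(t) = t**2 and a circle
# of radius 2 centred at the origin. Substituting into x**2 + y**2 = 4
# gives the quartic t**4 + t**2 - 4 = 0.
coeffs = [1, 0, 1, 0, -4]  # t**4 + 0*t**3 + t**2 + 0*t - 4
roots = np.roots(coeffs)

# Only the real roots correspond to actual crossing times.
real_roots = roots[np.isreal(roots)].real
```

NumPy returns all four complex roots; filtering for the real ones (and, in a game, the ones with t ≥ 0) gives the candidate intersection times.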
I’ll also investigate NumPy alternatives for Java so that I can use numerical programming with Processing.
Using the Python Imaging Library I’ve created a faster alternative to the turtle.py module. It takes only a few seconds to draw images that were taking several minutes with turtle.py.
There are a few downsides to this approach I suppose:
- There is no on-screen animation of the image being created — a useful tool when teaching turtle graphics — only a finished image is created.
- Output is a raster image, not a nice svg or eps file.
On the up side:
- Super fast
- All Python Imaging Library tools and methods also available.
- At the moment it is largely compatible with the turtle.py module
Who knows… once I’ve got the code into a more finished state I may share my fast-turtle module with the world.
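The core trick can be sketched roughly like this (a minimal illustration of the approach, not my actual module — the class and method names here are invented): keep the turtle state as a position and heading, record each pen stroke as a line segment, and only at the end rasterise all the segments in one pass with something like PIL’s `ImageDraw.line`, which is what makes it fast.

```python
import math

class FastTurtle:
    """Toy turtle that records strokes instead of animating them."""

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y
        self.heading = 0.0    # degrees, 0 = east
        self.segments = []    # [(x0, y0, x1, y1), ...] for later drawing

    def forward(self, dist):
        rad = math.radians(self.heading)
        nx = self.x + dist * math.cos(rad)
        ny = self.y + dist * math.sin(rad)
        self.segments.append((self.x, self.y, nx, ny))
        self.x, self.y = nx, ny

    def left(self, angle):
        self.heading -= angle  # screen y-axis points down

    def right(self, angle):
        self.heading += angle

t = FastTurtle()
for _ in range(4):  # trace a square, ending back at the start
    t.forward(100)
    t.left(90)
```

Deferring all pixel work to a single batch draw is where the “several minutes down to a few seconds” comes from: there is no per-step screen update to wait on.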
Here’s a nice animation that I created with my aforementioned fast turtle module:
Python comes with a turtle graphics module right there in the standard library. A new and improved version was introduced with Python 2.6, and it is quite good. I think its real strength could be in the initial stages of teaching kids to program in Python.
For my use I found two major drawbacks. First, there is no built-in way to export your drawing to an image file (taking a screenshot is the solution I’ve been using). More importantly, the turtle drawing is intolerably slow for complex images.
I did manage to create this rather nice recursive tree:
(The leaves are dead turtles)
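The recursive structure behind a tree like this can be sketched as follows (a hedged illustration of the general technique, not the code behind the image above — the branch angle, shrink factor, and cut-off are arbitrary choices): each branch draws itself, then spawns two shorter child branches rotated left and right, stopping when the branches get too short.

```python
import math

def tree(x, y, angle, length, segments, min_length=2.0):
    """Recursively record branch segments; stops at min_length."""
    if length < min_length:
        return  # a leaf -- where the "dead turtles" end up
    nx = x + length * math.cos(math.radians(angle))
    ny = y - length * math.sin(math.radians(angle))
    segments.append((x, y, nx, ny))
    # two children: rotated 25 degrees either way, 70% as long
    tree(nx, ny, angle + 25, length * 0.7, segments, min_length)
    tree(nx, ny, angle - 25, length * 0.7, segments, min_length)

segments = []
tree(0.0, 0.0, 90.0, 60.0, segments)  # trunk points straight up
```

The branch count doubles at every level, which is exactly why complex images like this grind turtle.py’s per-step animation to a halt.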
Nevertheless I think I will concentrate on some other turtle graphics implementation for now. After all, I already program in Python so I won’t be learning anything new there. Also did I mention slow? So slow.
I would be interested in learning UCBLogo as the language itself would be interesting for me to learn (A far more turtle-y Lisp dialect). Also from my initial tests it seems to be so much faster than turtle.py. However it also has limited capacity to save drawings as images (only eps output with no filled shapes 😦 ). The output window also appears buggy on my Ubuntu Linux system… The image in the window does not refresh after being obscured or minimised, which pretty much makes it useless to me until this issue has been addressed. I figured out that turning on window compositing fixes this somewhat. The output is no longer deleted after the output window is obscured by another window. If you minimise or resize the window the image disappears, but seems to come back most of the time if you type “REFRESH” into the Logo session.
These Buddhabrot scripts can sure be processor intensive. I would be better off programming them in C or C++, much as I do love Python. They are rather trivial, so it shouldn’t be too difficult.
One purpose of this blog is to openly record my musings of the various things that I think I ought to do—in the hope that I may remember to actually do them. This is but one such open musing.
My previous Buddhabrot images were built up from many thousands of ‘z’ orbits. Over time the trails left by these orbits average out into nice soft symmetrical clouds. Most of what you can see are the trails left by the orbits that survive for relatively few iterations before escaping to infinity (there are so many more of this kind of orbit).
If you filter out these “shallow” iteration orbits and instead render only orbits with deep iterations something very interesting happens. The paths of the individual orbits become discernible. They almost look like temporary strange attractors, spinning and swirling before finally blowing away on the winds of chaos.
Take a look at these fantastic patterns (click to enlarge):
This image consists only of orbits with between 20,000 and 30,000 iterations.
As with my strange attractor images, due to a few very bright pixels, it was necessary to render in 16-bit greyscale and then equalize the images to bring out the details which would otherwise be too dark to see. Normalize or auto-levels won’t do the trick. I used ImageJ for the equalize function.
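For the curious, here is a rough sketch of what equalization does to a 16-bit hit-count array (ImageJ’s implementation differs in its details; this is a naive version for illustration): remap pixel values so the cumulative histogram becomes roughly linear, which stretches out the dark, detail-rich range that a handful of very bright pixels would otherwise compress.

```python
import numpy as np

def equalize16(img):
    """Naive histogram equalization for a uint16 greyscale array."""
    hist, _ = np.histogram(img, bins=65536, range=(0, 65536))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalise cumulative histogram to [0, 1]
    # each pixel is replaced by its rank in the brightness distribution
    return (cdf[img] * 65535).astype(np.uint16)

# A skewed test image: mostly dark values plus one bright outlier,
# the situation that defeats normalize / auto-levels.
img = np.zeros((4, 4), dtype=np.uint16)
img[0, 0] = 60000
out = equalize16(img)
```

A simple normalize only rescales linearly, so one bright pixel pins the top of the range and everything else stays black; equalization redistributes by rank instead.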
If more orbits were calculated, over time a regular and symmetrical pattern should arise.
Here is the same image as above allowed to accumulate roughly 3x the orbits:
Some interesting discussion of deeply iterated Buddhabrots can be found here.
In my previous post on Buddhabrots I demonstrated that different iteration or “bail-out” levels can produce different-looking Buddhabrots. Buddhabrot discoverer Melinda Green also described the method of mapping differently-iterated Buddhabrots to the different channels of an RGB image. This can result in strikingly nebulous images, reminiscent of those famous images from the Hubble Space Telescope.
Obviously this was the next thing I had to try.
This image uses 3000 iterations on the red channel, 1500 on the green channel, and 500 on the blue channel. I have normalized the channels individually which seems to favor the channel with the least iterations.
This image uses the same configuration of iterations per channel as the last. This seems to favor the channel with the most iterations.
It strikes me that both of these approaches are rather arbitrary. The best solution would be to combine the channels in a decent image editing program and adjust the levels of each until the result is visually appealing (These images had their channels combined at creation time in Python). Unfortunately GIMP is not the decent image editing program I’m looking for. CinePaint, however, seems perfect for this job.
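The creation-time merging step looks roughly like this (a minimal sketch under the assumption of three hit-count arrays rendered with different bail-out values; the function name and toy arrays are mine): normalise each channel independently — the step that tends to favour one channel over the others — then stack them into a single RGB image.

```python
import numpy as np

def merge_channels(red, green, blue):
    """Normalise three hit-count arrays independently, stack as RGB."""
    channels = []
    for counts in (red, green, blue):
        scaled = counts.astype(np.float64) / counts.max()
        channels.append((scaled * 255).astype(np.uint8))
    return np.dstack(channels)  # shape (height, width, 3)

# Toy 1x2 "renders": red at 3000 iterations, green 1500, blue 500.
r = np.array([[0, 3000]], dtype=int)
g = np.array([[0, 1500]], dtype=int)
b = np.array([[500, 0]], dtype=int)
rgb = merge_channels(r, g, b)
```

Because each channel is scaled by its own maximum, the relative weighting between channels is decided before you ever see the image — which is exactly why adjusting levels per channel afterwards in an editor gives more control.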
“To infinity… and beyond!”
1000 iterations red, 200 green, 100 blue. Adjusted to Hubbletastic perfection. Thanks CinePaint!
The next step is probably to render some super-big color images.
The Buddhabrot is closely related to the Mandelbrot set. For both, you calculate the orbit of z under the dynamic z → z² + c for various values of the constant c (where z and c are numbers on the complex plane).
When rendering the Mandelbrot set we are chiefly concerned with values of c corresponding to pixels in our final image. We check which values of c cause the path of z to enter an orbit around the origin, and which values of c cause z to fly off to infinity (and how quickly). Pixels are then coloured accordingly.
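That escape-time check can be illustrated in a few lines of plain Python (an unvectorised sketch of the standard test, not the code used for the renders in this post):

```python
def escape_time(c, max_iter=100):
    """Iterations before |z| exceeds 2, or None if c appears to
    stay bounded (i.e. looks like a Mandelbrot-set member)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c  # the Mandelbrot dynamic
        if abs(z) > 2:
            return n   # escaped: c is outside the set
    return None        # never escaped within the budget

escape_time(1 + 1j)  # escapes almost immediately
escape_time(0j)      # stays at the origin forever
```

For a classic Mandelbrot render you would colour each pixel by this iteration count; for a Buddhabrot, as described below, the count only decides whether an orbit gets recorded at all.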
For a Buddhabrot, the value of c is less important. Instead, the frequency with which the orbiting point z visits various pixels is recorded — lighter pixels have received a higher frequency of visits from z. While values of c are chosen at random, the classic Buddhabrot uses only those values of c which will cause z to escape to infinity (non-Mandelbrot-set points). I think of this process as being a cross between a Mandelbrot set and a strange attractor.
Here is my first attempt at rendering a Buddhabrot using a program I wrote in Python:
This is a 100-iteration image. This means that, when checking to see if an orbiting z point is escaping to infinity, we bail out after 100 iterations and assume all remaining points are within the Mandelbrot set (their orbits are not used in the Buddhabrot). More iterations means more of the kind of orbit that takes longer to escape.
It turns out that adjusting this ‘bail-out’ level can have a dramatic impact on the resulting image. For example, here is a 1000 iteration image:
We can see that higher bail-out values result in finer details.
Finally, a 5000 iteration image (click for super-high-res):
Here is the Python code in full, with gratuitous comments and print statements included — remember that you will need PyPNG and NumPy to run this code.
#! /usr/bin/python

import png
import numpy as np
from random import random

def c_set(num_samples, iterations):
    # return a sampling of complex points outside of the mset

    # Allocate an array to store our non-mset points as we find them.
    non_msets = np.zeros(num_samples, dtype=np.complex128)
    non_msets_found = 0

    # create an array of random complex numbers (our 'c' points)
    c = (np.random.random(num_samples)*4-2 + \
         (np.random.random(num_samples)*4-2)*1j)

    # Optimizations: most of the mset points lie within the
    # cardioid or in the period-2 bulb. (The two most
    # prominent shapes in the mandelbrot set.) We can eliminate these
    # from our search straight away and save a lot of time.
    # see: http://en.wikipedia.org/wiki/Mandelbrot_set#Optimizations
    print "%d random c points chosen" % len(c)

    # First eliminate points within the cardioid
    p = (((c.real-0.25)**2) + (c.imag**2))**.5
    c = c[c.real > p - (2*p**2) + 0.25]
    print "%d left after filtering the cardioid" % len(c)

    # Next eliminate points within the period-2 bulb
    c = c[((c.real+1)**2) + (c.imag**2) > 0.0625]
    print "%d left after filtering the period-2 bulb" % len(c)

    # optimizations done... time to do the escape-time algorithm.

    # Use these c-points as the initial 'z' points.
    # (saves one iteration over starting from origin)
    z = np.copy(c)

    for i in range(iterations):
        # apply mandelbrot dynamic
        z = z ** 2 + c

        # collect the c points that have escaped
        mask = abs(z) < 2
        new_non_msets = c[mask == False]
        non_msets[non_msets_found:non_msets_found+len(new_non_msets)]\
                = new_non_msets
        non_msets_found += len(new_non_msets)

        # then shed those points from our test set before continuing.
        c = c[mask]
        z = z[mask]

        print "iteration %d: %d points have escaped!"\
                % (i + 1, len(new_non_msets))

    # return only the points that are not in the mset
    return non_msets[:non_msets_found]

def buddhabrot(c, size):
    # initialise an empty array to store the results
    img_array = np.zeros([size, size], int)

    # use these c-points as the initial 'z' points.
    z = np.copy(c)

    while len(z):
        print "%d orbits in play" % len(z)

        # translate z points into image coordinates
        x = np.array((z.real + 2.) / 4 * size, int)
        y = np.array((z.imag + 2.) / 4 * size, int)

        # add value to all occupied pixels
        img_array[x, y] += 1

        # apply mandelbrot dynamic
        z = z ** 2 + c

        # shed the points that have escaped
        mask = abs(z) < 2
        c = c[mask]
        z = z[mask]

    return img_array

if __name__ == "__main__":

    size = 400          # size of final image
    iterations = 200    # bailout value -- higher means more details
    samples = 10000000  # number of random c points chosen

    img_array = np.zeros([size, size], int)

    i = 0
    while True:
        print "get c set..."
        c = c_set(samples, iterations)
        print "%d non-mset c points found." % len(c)

        print "render buddha..."
        img_array += buddhabrot(c, size)

        print "adjust levels..."
        e_img_array = np.array(img_array/float(img_array.max())
                               *((2**16)-1), int)

        print "saving buddhabrot_n_%di_%03d.png" % (iterations, i)
        # save the final render to a png file
        imgWriter = png.Writer(size, size, greyscale=True,
                               alpha=False, bitdepth=16)
        f = open("buddhabrot_n_%di_%03d.png" % (iterations, i), "wb")
        imgWriter.write(f, e_img_array)
        f.close()
        print "Done."
        i += 1