toxiclibs
Building blocks for computational design

Metropolitan Works workshop: Facade tool
Tue, 20 Dec 2011 12:47:17 +0000

As announced a few months ago, I recently taught two London based workshops and, in the interest of learning, the source code of various examples created at both events is now available in the repository, released under the same license as toxiclibs itself: LGPL v2.1.

The most recent workshop took place at Metropolitan Works, London Metropolitan University’s digital fabrication facility. Under the overarching theme of digital fabrication, the workshop initially focused almost exclusively on geometry and on using various toxiclibs classes to construct shapes & forms and solve problems in this domain. During the 2nd workshop session, I wanted to combine several related topics into a single large exercise, so we started building a little hypothetical facade design tool.

Over the past couple of weeks I squeezed in some extra time to finish cleaning up and adding (lots of!) comments to the source code of that tool, with some further descriptions below…

Facade designs generally need to take into account the varying light conditions & requirements of the spaces inside the building. Using strategically placed particles & attractors, we can model & modulate these different spatial densities on the facade surface…

Step 1: Strategically place some attractors & particles to modulate the spatial density of the facade
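The attraction mechanism in step 1 can be sketched in plain Java. This is not the toxiclibs implementation (in the actual tool the physics package's attraction behaviours handle this), and the falloff function below is just one plausible choice:

```java
// Illustrative sketch of a particle attractor: the pull on a particle
// weakens with squared distance, so particles cluster (and the mesh
// densifies) near attractors. Not part of the toxiclibs API.
public class AttractorSketch {
    // force = direction * strength / (1 + d^2)
    public static double[] attract(double px, double py,
                                   double ax, double ay, double strength) {
        double dx = ax - px, dy = ay - py;
        double d2 = dx * dx + dy * dy;
        double f = strength / (1.0 + d2);
        return new double[] { dx * f, dy * f };
    }

    public static void main(String[] args) {
        double[] near = attract(0, 0, 1, 0, 1.0);   // particle close to attractor
        double[] far  = attract(0, 0, 10, 0, 1.0);  // particle far away
        System.out.printf("near fx=%.3f far fx=%.3f%n", near[0], far[0]);
    }
}
```

Iterating this force over many particles (plus some damping) is what lets the attractor positions modulate the local point density of the facade.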

Step 2: Connect particles into a 2D mesh using Delaunay triangulation...

Step 2a: ...or use Voronoi to create more cell like patterning
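Both options in steps 2 & 2a rest on the same construction: Delaunay triangulation (whose dual is the Voronoi diagram) is defined by the empty-circumcircle property — no point may lie inside the circumcircle of any triangle. A minimal sketch of that predicate, independent of the toxiclibs implementation:

```java
public class InCircleTest {
    // Standard in-circle predicate: for a triangle (a, b, c) in
    // counter-clockwise order, returns true if p lies strictly inside
    // its circumcircle (a Delaunay triangulation contains no such p).
    public static boolean inCircumcircle(double ax, double ay,
                                         double bx, double by,
                                         double cx, double cy,
                                         double px, double py) {
        double adx = ax - px, ady = ay - py;
        double bdx = bx - px, bdy = by - py;
        double cdx = cx - px, cdy = cy - py;
        double det = (adx * adx + ady * ady) * (bdx * cdy - cdx * bdy)
                   - (bdx * bdx + bdy * bdy) * (adx * cdy - cdx * ady)
                   + (cdx * cdx + cdy * cdy) * (adx * bdy - bdx * ady);
        return det > 0;
    }

    public static void main(String[] args) {
        // unit right triangle (CCW); circumcircle centered at (0.5, 0.5)
        System.out.println(inCircumcircle(0, 0, 1, 0, 0, 1, 0.5, 0.5)); // true
        System.out.println(inCircumcircle(0, 0, 1, 0, 0, 1, 2, 2));     // false
    }
}
```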

Step 3: Map the 2D mesh onto the extruded profile of the facade

Step 4: Apply & tweak surface deformation along normal vectors
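The deformation in step 4 boils down to moving each point along its unit surface normal by a noise-modulated amount. A minimal sketch, with a simple sine product standing in for the simplex noise used in the actual tool:

```java
public class NormalDisplacement {
    // Displace point p along its unit normal n by amp * noise(u, v).
    // The sine product is an illustrative stand-in for simplex noise;
    // none of these names are part of the toxiclibs API.
    public static double[] displace(double[] p, double[] n,
                                    double u, double v, double amp) {
        double noise = Math.sin(u * 7.3) * Math.sin(v * 5.1); // in [-1, 1]
        double d = noise * amp;
        return new double[] { p[0] + n[0] * d, p[1] + n[1] * d, p[2] + n[2] * d };
    }

    public static void main(String[] args) {
        // normal points along z, so only the z coordinate moves
        double[] q = displace(new double[] { 1, 2, 3 },
                              new double[] { 0, 0, 1 }, 0.25, 0.75, 0.5);
        System.out.printf("%.3f %.3f %.3f%n", q[0], q[1], q[2]);
    }
}
```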

Step 5: Produce a watertight iso-surface mesh of the wireframe structure and export as STL for 3D printing

Result of using a more complex profile spline

Since we didn’t quite get to finish everything as a group, I have since added several more features (some by popular demand):

  • Selecting, repositioning & adjusting particle attractors
  • Producing a 3D iso-surface of the facade wireframe/lattice mesh and exporting it as STL
  • Selecting & moving points in the spline editor component (incl. realtime updates in the 3D wireframe preview)
  • User-adjustable 3D surface deformation using simplex noise
  • A toggle to switch between Delaunay & Voronoi shapes
  • An arc ball controller to change the 3D view orientation more naturally
  • Mouse wheel support to adjust zooming in 3D
  • Generally improved usability and a more dynamic ControlP5 GUI (some controllers are now context sensitive and only visible in certain display/edit modes)

It’s almost a proper little tool now and should provide a lot of food for thought for the dear students (and maybe give them something to do over the holidays :)

You can download the entire Eclipse project from here:

Please note, this project is a plain Java project (no Processing Eclipse plugin required) and contains all required libraries in the project’s /lib folder:

  • Processing 1.5.1 + JOGL (only with OSX & Win32/64 binaries)
  • ControlP5 0.5.4
  • pre-release of toxiclibs-0021

Below are some quick steps to import the project into your Eclipse workspace:

  • Download via the .zip link in the top toolbar…
  • Unzip to anywhere on your disk
  • Rename the resulting folder to “metworks-2011-facade” (all lower case, use dashes, no quotes)
  • In Eclipse, choose File > Import… > General > Existing projects into workspace…

  • In the import dialog use the “select root directory” option and navigate to the folder in step 3
  • The project should now show up just below, make sure it’s ticked
  • Press “Finish”

  • Back in the Eclipse workbench, open up the project in the project explorer…
  • Right click on the file “FacadeApp.launch” and choose “Run > FacadeApp”… Have fun! That’s all, if you’re on OSX…

Windows users will have to take a few additional steps first and edit the location of the native library components (used by JOGL):

  • Right click on the “metworks-2011-facade” project and choose “Properties > Java Build Path > Libraries”.
  • Open the sub-tree for jogl.jar, select “Native library location” and then press the “Edit…” button

  • Now choose the /lib/windows64 (or 32) folder inside your project folder and then confirm all changes. Then you can have fun too! :)

A brief overview of the different classes in the project’s /src folder:

  • FacadeApp – main application (extends PApplet)
  • ParticleSystem – physics based particle simulation & simple editor features for attractors
  • SplineEditor – simple 2D curve editor for designing the facade profile
  • FacadePoint – extension of Vec3D to include surface normal and normalized 2D position (needed for displacement)
  • DisplacementStrategy – interface definition for defining concrete displacement operators (applied to FacadePoints)
  • NoiseDisplacement – the currently sole available implementation of DisplacementStrategy, using 2d simplex noise
  • ArcBall – re-usable arc ball view component for easier navigation/rotation of the 3D view

Last, but definitely not least: Many, many dear thanks to Arthur Mamou-Mani and Marcus Bowerman for organising, your hospitality, the 3D printing and making everything happen… We all hope to repeat something similar next year. You’ve been (and will be again) warned! :)


Upcoming Processing & toxiclibs workshops in London
Wed, 28 Sep 2011 14:01:58 +0000

Here are some details about two upcoming workshop opportunities in London town this autumn:

The joys of Processing @ onedotzero

Dates: 26 & 27 November 2011 (10am – 5pm)
Location: BFI Southbank
Limit: 20 people (TBC)
Level: Beginner
Price: TBC

This year marks the 10th anniversary of Processing and onedotzero are doing their bit to help celebrate the occasion with a weekend workshop and public panel discussion as part of this year’s festival at the BFI Southbank.

In this 2-day weekend workshop we will explore how we can use Processing to express ourselves in code form and create small audio-visual performance tools, both reactive & interactive. You’ll be introduced to creating 2D & 3D shapes and compositions dynamically, playing back audio samples, creating complex animations by adding behaviours, exporting animations as video, and controlling your code using external inputs (e.g. microphone, webcam, OSC/MIDI). There will also be a panel discussion following the workshop on Sunday evening.

Tickets will go on sale later this week via the BFI box office. Please note that BFI members will get priority!

Digital fabrication with Processing & toxiclibs

Dates: 3 & 10 December 2011 (10am – 6pm)
Location: Metropolitan Works, Whitechapel
Level: Intermediate
Limit: 20 people
Price: £175

Arthur Mamou-Mani, lecturer in architecture at the University of Westminster and London Metropolitan University, recently approached me to run a workshop on using Processing & toxiclibs for digital fabrication. Using the facilities at Metropolitan Works, we will be applying generative & parametric design approaches to work with different physical materials (2D & 3D) and fabrication techniques (laser cutting, CNC and 3D printing), and refining these explorations into a small, custom design tool, complete with graphical user interface. We will also discuss using the different material constraints as guiding factors for our design process, and learn how to use extensible software structures to customize and combine existing solutions for our specific needs.

We will introduce you to all those concepts in a learning-by-doing manner, but since this workshop only lasts 2 days, you’re encouraged to prepare beforehand (in the interest of the whole group) and familiarize yourself (not become proficient!) with the following topics:

  • The Processing syntax & environment
  • Some of the examples bundled with toxiclibs
  • Basic geometry concepts (coordinate systems, vectors etc.)
  • Object-oriented thinking

For more details & booking, please head over to the workshop page on Arthur’s website.

If you have any questions about either workshop, please add a comment below! Thanks & hope to see you soon!

The road ahead
Mon, 26 Sep 2011 19:21:19 +0000

As I’m using any spare moment to continue getting the next release ready for public consumption, I thought it would be useful to give a better overview of the current development tasks (and challenges) for the near future. The diagram below hopefully visualizes the current state quite well and maybe even encourages one or two brave souls to lend a helping hand, as toxiclibs is slowly but steadily breaking out of its initial Java shell and starting to integrate more publicly into other languages and application contexts beyond Processing.

toxiclibs roadmap Q4/2011 - Q2/2012

A bit of philosophy

From the very outset, the creation of design oriented, composable and reusable data structures, and algorithms for manipulating them, has been the main aim of this project and also one of the most obvious counter-approaches to the way people traditionally work with Processing (hitherto the most used environment for the libs). Even though the majority of classes provided by toxiclibs can have a visual representation, there’s a strict exclusion of any rendering related code in the core packages of the library, since such code often carries a vast amount of secondary dependencies, eventually binding the library to a large, rigid environment. Me not likey. Almost all toxiclibs classes are “pure” models which can be queried, transformed, combined and otherwise manipulated as abstract entities. They’re usable as tools for solving (design) problems, not only for drawing. They are the M in MVC. If one of them ever needs to be drawn/rendered, a 3rd party component is required (the V in MVC, e.g. Processing or straight OpenGL), but toxiclibs does not prescribe how this drawing should be done (though it provides optional tools to support that task). This separation of concerns really has been the #1 feature aimed at making it as easy as possible (and encouraging) to enter the next stage of the project: systematically porting to other languages.
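In miniature, the separation described above might look like this. All class and interface names below are illustrative, not part of the toxiclibs API:

```java
import java.util.ArrayList;
import java.util.List;

public class ModelViewSketch {
    // the "V": any consumer capable of receiving geometry
    interface Renderer { void vertex(double x, double y); }

    // the "M": a pure model that can be queried & transformed, never drawn
    static class Poly {
        final List<double[]> points = new ArrayList<>();
        Poly add(double x, double y) { points.add(new double[] { x, y }); return this; }
        Poly translate(double dx, double dy) {
            Poly t = new Poly();
            for (double[] p : points) t.add(p[0] + dx, p[1] + dy);
            return t;
        }
        // rendering is delegated to whatever "V" the caller supplies
        void render(Renderer r) { for (double[] p : points) r.vertex(p[0], p[1]); }
    }

    public static void main(String[] args) {
        Poly p = new Poly().add(0, 0).add(1, 0).add(1, 1).translate(10, 0);
        // the same model could just as easily feed Processing, OpenGL, an
        // SVG writer... here the "view" is simply stdout
        p.render((x, y) -> System.out.println(x + "," + y));
    }
}
```

Because the model never imports a rendering environment, porting it to another language only requires porting the data structures and their operations.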

Polyglot toxiclibs


In the past, other people and I have made half-baked attempts to port selected classes to other languages (ActionScript, JavaScript, C++). However, all of them were just isolated fragments needed for specific projects and never came close to general library status. As most people are surely aware by now, since the beginning of the year Mr. Kyle Philips has been making a stellar effort systematically porting large parts of the toxiclibs core and physics classes to JavaScript, and due to the popularity of that language this port is gaining huge traction. That’s exactly what I’d been hoping would happen as an eventual consequence of the above design points, so I’m super happy to see this effect kicking in. JavaScript, however, does not only mean in-browser usage, even though in combination with WebGL and libraries like three.js & Processing.js the number of potential use cases is huge (I’d say even more so than in the traditional Processing tied context). JavaScript’s reach is massive and it has also deservedly gained traction outside the browser as a general purpose language (largely thanks to V8 based platforms like Node.js, but also Dean McNamee‘s Plask as a more related example). I think toxiclibs can contribute to and actively support new developments on these platforms.

In terms of similarity and porting, JavaScript is one of the closest things to Java there is, but it also has unique deployment issues, and there’s some substantial organizational effort needed to create a better system for splitting the codebase into modules and integrating the JS port with existing coding standards (CommonJS) and module managers/loaders (e.g. NPM, RequireJS). This includes starting to think about adopting Google Closure conventions to harness the optimizations achievable by that compiler (e.g. dead code removal, a huge benefit for large libraries like this).

Good API design requires a clean, consistent ethos, a worldview and opinion which is not only carried through and made visible throughout a project, but also leads users of that API to write their own software in a certain/similar manner. Achieving that goal is often a long, winding road and takes much longer to get right than writing actual code, but I think by now toxiclibs does provide a decent set of consistently used patterns (excluding a few edge cases). Having such a familiar set of classes & APIs available in multiple languages is a serious benefit for users and makes it much easier to experiment & switch between environments, all without forcing users to stay in a sandbox of sorts (e.g. as Processing.js does). I also think a familiar API needs to be complemented & balanced with the unique features, idioms and development practices of the host language to allow both code & coder to live up to their full potential. For JS these differences are still relatively harmless, but even there we should embrace them more.


Ever since college, I’ve had an ongoing, if usually fleeting, fascination with Lisp and its seemingly alien, stripped down approach to syntax, its obsession with brackets and generally doing things “the other way round”, at least compared to common (imperative) languages. I never considered Lisp a serious contender for my own development arsenal until earlier this summer, when I stumbled across Clojure, a modern dialect of Lisp running on the JVM (get a detailed feature overview on their website, it’s worth reading). This time I was immediately struck by its elegance, the resulting concise code and the many other features this language brings to the table, especially for working with collections, data processing & concurrency. Many data munging tasks can be solved in approx. 30-50% of the code needed in Java/Processing, very useful for dataviz. So I made an effort to get into it more seriously. Then I read this article, and when I hit this quote I felt I was in very similar shoes:

“Many extremely intelligent people I knew and had much respect for were praising Lisp with almost religious dedication. There had to be something there, something I couldn’t afford not to get my hands on! Eventually my thirst for knowledge won me over. I took the plunge, bit the bullet, got my hands dirty, and began months of mind bending exercises. It was a journey on an endless lake of frustration. I turned my mind inside out, rinsed it, and put it back in place. I went through seven rings of hell and came back. And then I got it.

The enlightenment came instantaneously. One moment I understood nothing, and the next moment everything clicked into place. I’ve achieved nirvana.”

Similar to Scala, Clojure compiles directly into JVM byte code and therefore provides comparable speed and can easily interoperate with the vast amount of Java libraries available. Unlike Java, Clojure is focused on immutable data and provides a functional approach to computing, an antidote to living in a kingdom of nouns. Common to all Lisps is also the REPL, offering livecoding features as part of the core development process. Lisp/Clojure code is data is code. I don’t need to point out how exciting this is for people in our field. Besides that though, my main excitement is about the forward thinking take on concurrency/multicore support (agents/futures/atoms/refs) and the ability to easily create domain specific languages. Community activity around the language seems to be glowing hot, sporting an impressive ecosystem of libraries and amazing support tools, making open source development true fun. Some noteworthy introductory links, should you feel inclined to give it a spin too:


ClojureScript is a recent addition to the Clojure ecosystem and potentially something to keep watching closely for the purposes of porting: Clojure compiled into JavaScript. I know it sounds like heresy, but in some respects it seems ClojureScript does “better” JavaScript than the original, providing all the intelligent language features of Clojure (e.g. namespaces, destructuring, function overloading, atoms, macros) and generating JavaScript in a format targeted at Google’s Closure compiler, allowing for vastly better optimizations of large applications than handwritten JavaScript (which can only fully utilize the compiler if you stick to the necessary conventions in ALL your code). I’m in no position yet to back any of that with my own experience, but it’s an exciting development for sure. Here’s also a video of the launch event of ClojureScript at Google: Rich Hickey (creator of Clojure) unveils ClojureScript and the official announcement with more links.

Keeping in sync & documentation

The next major set of porting-related tasks is figuring out ways to keep the different ports in sync, or at least to better document which parts of the library are available in which port. For that purpose, I started prototyping a new web app (WebNoir + CouchDB) which will collect metadata from the different codebases and automatically produce a port coverage/sync report. Kyle started manually producing a top-level version of this for his JS port, but its granularity is only at class level, whereas we really need that information per function/method for it to be truly useful. This matters also because there are still some areas of the original Java version which will receive further updates and bug fixes, and currently there’s no system to mark those places in the code as needing to be reflected in the other ports. Serious development of this tool is top priority after v0021 is out.

Related to that is documentation, historically the slowest evolving aspect of the whole project. One of the comments I hear most often is “Javadocs suck”. They suck even more here, since for many library classes they don’t exist at all, or only superficially. So in a way I couldn’t agree more, but then again, for the past four years these libraries have largely evolved around my own needs and client projects, and I’ve relied on users consulting the 95+ examples bundled with each release (or attending my workshops) in order to learn the basic usage patterns. Whilst I still believe the latter (learning by example) is by far the most efficient way of learning (having done so my entire career), I also think things can be vastly improved by offering documentation in several cross-referenced formats: actual running examples with source code, a literate programming style doc system (e.g. Docco/Marginalia based, very good for workshops) and the traditional Javadoc style, for integrating the docs into an IDE setup.

Example output of running an early(ier) Clojure porting effort through Marginalia to produce nice, easy to read documentation next to source code.

IMHO the reason Javadocs are soooo unsuccessful amongst Processing users is largely down to the lack of Javadoc support in the Processing PDE. If people used IDEs like Eclipse, which show Javadocs in-situ within the editor context by simply hovering over class names, I believe they would see them for what they really are: kind of awesome!

Eclipse screenshot with Javadocs in editor context.
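For illustration, here is the kind of documented method that becomes genuinely useful once an IDE surfaces its Javadoc in-situ (the class and method are hypothetical, not part of toxiclibs):

```java
public class DocumentedLerp {
    /**
     * Linearly interpolates between two values.
     *
     * @param a start value, returned when {@code t} is 0
     * @param b end value, returned when {@code t} is 1
     * @param t normalized blend factor, usually in the range 0.0 … 1.0
     * @return the interpolated value {@code a + (b - a) * t}
     */
    public static float lerp(float a, float b, float t) {
        return a + (b - a) * t;
    }

    public static void main(String[] args) {
        System.out.println(lerp(0, 10, 0.5f)); // 5.0
    }
}
```

In Eclipse, hovering over `lerp` anywhere in the codebase renders exactly this description, parameters and all, which is the point of writing it.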

So in conjunction with the task of adding metadata to every method of every public class in the codebase, all methods will also be receiving full documentation. These docs will then also link to existing examples using the particular method. This will be the second main focal point of the v0022 release (the one after next).

Versioning & repositories

Speaking of version numbers: again, due to the organic and isolated growth of this project and my own past development practice, the linear versioning scheme has been quite sufficient until now. As we all learn new things and our development tactics change, so too will the versioning for this project have to change to something more meaningful. Enter semantic versioning. The idea is nothing new and I’ve been using it for most other projects in the past, but I think this time the reasoning behind it is somewhat different:

  1. Currently there’s toxiclibs support for 2.1 languages (the Clojure port is not quite there yet to fully count)
  2. Development outside the Processing IDE is far more dependent on build management tools, open source repositories and module managers (e.g. Maven, NPM, Leiningen)
  3. Different projects are created at different times and might require different versions of the libraries and
  4. I’d really love to get to a point where there’ll be synchronised releases in order to reduce build & documentation complexity and avoid confusion for users so that they can assume v1.0.0 of a module will contain the same features in all (sup)ported languages.

Semantic versioning is the lowest common denominator between all current module managers/repositories and hence will be introduced with the next release. As a result, users will have a much easier way to integrate the libraries into their own (non-Processing) projects, since they will also be available via the major existing open source repositories for the various languages (Sonatype [Java/Maven], NPM registry (JS), Clojars [Clojure/Leiningen]). Apparently, Processing 2.0 will feature its own centralized library management system, but from what I gather it will not offer any integration with any of these existing open source repositories.
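As a minimal sketch of the scheme (MAJOR.MINOR.PATCH compared numerically, field by field; pre-release tags are ignored here, so see semver.org for the full precedence rules):

```java
public class SemVer {
    // Compare two MAJOR.MINOR.PATCH version strings numerically.
    // Returns -1, 0 or 1. Simplified sketch: build/pre-release
    // suffixes are not handled.
    public static int compare(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < 3; i++) {
            int d = Integer.parseInt(pa[i]) - Integer.parseInt(pb[i]);
            if (d != 0) return Integer.signum(d);
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(compare("1.2.0", "1.10.0")); // -1: numeric, not lexical
        System.out.println(compare("2.0.0", "2.0.0"));  //  0
    }
}
```

The numeric comparison is exactly why module managers can resolve "1.10.0 is newer than 1.2.0" correctly, where a plain string sort would get it wrong.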

New website & tutorials

The current WordPress based setup is not the best platform for integrating all the planned new documentation, tutorials and other features like the bundled example & user galleries. I’ve been test driving Confluence on a private dev server and found it more promising (at least for the documentation & tutorial side), so I might adopt it in the near future. The other (more appealing) alternative is to extend the new CouchDB based sync/doc tool into a more generic web app and add wiki & blogging features. The main issue with that will be increased hosting costs, and I’ll need to think more about how to balance them. The new website will also host as many of the bundled examples as possible, effectively deprecating the current gallery, which is impossible to batch update and therefore can’t reflect any API changes in older examples (which causes unnecessary comments). To better integrate community contributions, the new system will also be used to provide a user gallery.

Furthermore, if you’re like the amazing Amnon Owed and feel like creating super useful tutorials for the libraries, please start doing so. Any help in increasing the number of learning resources would be an amazing contribution to this project and will play a major role on the new site. Speaking of tutorials, I really do think there’s also a big need for a general tutorial about how to make the most of open source libraries like this from a user perspective, and how to contribute (even in the most indirect ways). Much of the feedback and comments I receive hints at a large knowledge gap about how to find existing resources in a self-guided manner, keep up with updates, work with the source code, use issue trackers etc. This generally seems to be a far bigger problem with users in the Processing camp than in other environments I know. Food for thought!

Summary & next steps

Well, this is the closest thing to a sharable master plan I could get to for now. Before most of these things can be addressed, other minor tasks need to be completed to get 0021 out of the door. A brief, non-exhaustive overview of its new features:

  • Initial support for NURBS curves and surfaces is the biggest new addition and still requires some more testing and internal restructuring. Supported operations are:
    • Curve builders for circles, arcs and creating curves from a list of points, joining curves
    • Surface builders: extrude curve, revolve curve, construct surface from a grid of points/control mesh
    • Convert surface into TriangleMesh instance (variable resolution & UV coordinate generation)
  • Several important additions to the Polygon2D class:
    • construct regular polygons from a given base line segment (useful for creating tessellations)
    • rotate/scale/translate polygons
    • pick random points within a polygon (useful for color sampling)
    • retrieving edge list
  • Tessellate polygons using Delaunay triangulation with flexible grid resolutions
  • Addition of ConvexPolygonClipper to clip a polygon to the shape of another
  • New BezierCurve2D/3D classes in addition to existing Spline2D/3D
  • Implementing Visitor pattern for PointQuadtree/PointOctree
  • Adding UV coordinate generation to SurfaceMeshBuilder
  • Adding PLYWriter for exporting 3D meshes in Standard Polygon format
  • ToxiclibsSupport line drawing now supports decorators for dashed lines and arrow heads (customizable)
  • Custom DXFWriter for 2D shapes with DXF layer support
  • Improving precision/reducing rounding errors for VolumetricBrush
  • Adding FluidSolver3D

There are also a ton of other smaller additions and bug fixes, plus approx. 10-15 new examples. Let me also point you again to the repository of workshop projects, which contains several projects larger than the examples bundled with the release.


A few people have been asking about sending in material for this year’s showreel. Of course I do intend to produce one again (it’s one of the highlights of the year), though I can realistically only get to it in Nov/Dec, hopefully making it a nice video for the holiday season. I will send out a proper call for submissions next month, but if you have material ready, please send it along already. Specs are the same as previously: video assets at 1280×720 (if possible), still images are fine too, and of course please add a brief description & credits.

Any feedback, suggestions & help offers are highly appreciated! So long…

Tutorials galore
Tue, 10 May 2011 22:08:28 +0000

The ongoing lack of tutorials is still one of the most pressing issues to resolve for me & everyone else using (or trying to use) these libraries. Add to this the recent lack of updates to this blog, and it all might give the illusion that the project itself is stagnating. This couldn’t be further from the truth. In fact, the past few months have seen an incredible uptake of interest as well as development effort (93 revisions since the beginning of the year), but I’m also close to the point where I’ll impose a temporary feature freeze as soon as version 0021 has been released, within the next 6 weeks. 0022 will most likely focus far more on a new, much improved documentation system and a new website…

In the meanwhile, I’ve been doing my best to respond to concrete issues & tasks people were trying to solve on the Processing forums as well as the issue tracker. The list of demos on OpenProcessing has grown too. And thanks to personal heroes of mine, like Golan Levin, Daniel Shiffman and their students, there are also a number of very interesting student projects this year which utilise the libraries and (in some cases) have their code explorations shared (like good citizens tend to do :). A round-up post of these will follow shortly.

Speaking of missing examples and small projects, my teaching workshops have been another badly under-documented effort. For most of them, I’ve created a Mercurial repository on this website and I’d encourage you to download and play with these examples as well. A lot of them are more advanced than the examples bundled with each release, some utilize 3rd party libraries, and all are generally full of comments, not just about library specific topics. Please also send your virtual thanks to all the unis & institutions allowing this material to be shared!

Workshop repositories: source code is LGPL licensed unless stated otherwise.

Before I get to the compiled list of mini-tutorials and discussions from the Processing forum, Amnon Owed has recently produced two excellent tutorials for this project and I sincerely hope his efforts will inspire other users to follow suit:

Working with toxiclibs, part #1 (polygons, voronoi explosions)
Working with toxiclibs & Processing, part #2 (physics, colors, zoomlens)

List of recent forum threads (including lots of source code), sorted by subject:

Interaction, events, multi-threading

iPad, TUIO & particles with dynamic attraction behaviors
Custom Events + Event listener?


Geometry

How to use the Toxiclibs Voronoi class?
2D Collision detection – irregular shapes (computer vision blobs)
Octree Visualization
PerlinNoise to specified target destination
What does Vec2D.heading() in Toxiclibs do exactly?
How to calculate the tangent line of a circle?
Mouse within a certain area
using toxiclibs geomutils to solve the “pulley problem”

Geo location (Twitter & Flickr)

GPS to spherical coordinates with Vec3D & toxilibs
GeoLocation Twitter Search: Twitter4j
Simple mapping of geolocated tweets


Color

Working with toxiclibs & Processing, part #2
Question on toxiclibs colorutils Histogram


Rendering & drawing

GLGraphics + Toxiclibs Volumeutils (also see post on codeanticode)
Drawing a dotted/dashed arc
Library with box(x,y,z) function?


Maths & misc

Can toxi Spline2D be use for more than Vector positions?
convert 0.54321 to 0.54 and convert 2 to 100


XML & JAXB

These two threads are about JAXB, which is indirectly referenced by various toxiclibs geometry types in order to store them as XML. Anyway, many users are interested in data visualization, and JAXB is light years ahead of the default XML library bundled with Processing…

JAXB tutorial: XML parsing with style
JAXB tutorial question

Last, but not least: If you have any similar questions, interest in running workshops at your university/company or any small tutorials to share, please do get in touch! ¡Muchas gracias!

toxiclibs-0020
Mon, 03 Jan 2011 14:50:00 +0000

Six months in the making and at least three months delayed, the anniversary 0020 release of toxiclibs finally went out today. A little Happy-New-Year present for you & me. Before getting into the details of the countless things which have been changed & added with this release, please first go forth and download the bundle of ALL library modules from here: toxiclibs-0020

In the past few days I’ve uploaded 20 new demos (many of them using new functionality) to OpenProcessing. The release now contains 95 demos(!) in total, incl. several complete exercises from workshops I’ve taught throughout the past years. So let’s please also say thank you to the universities & institutions which made these possible!

Here’s also a quick list of some key additions & updates (some were already quietly introduced with the 0019 release in summer, but never documented). A complete & detailed changelog is bundled with each library module.

  • 3D mesh construction & processing:
    • New Winged-Edge mesh class for structured 3D data & enabling complex mesh operations/navigation
    • Adding mesh subdivision strategies, vertex selectors & mesh filter classes (e.g. for modeling, smoothing)
    • New BezierPatch class for constructing 3D surfaces from 4×4 control points
    • Added .toMesh() methods for all 3D geometry primitives: AABB, Cone, Cylinder, Sphere, Plane, Terrain, BezierPatch
  • Revamped architecture of all voxel based modeling classes:
    • VolumetricSpace is now an abstract class with array or HashMap based implementations
    • IsoSurface is now an interface with array or HashMap based implementations
    • HashMap implementations able to produce much higher resolution meshes
  • Major refactoring of core geometry classes:
    • Introduction of immutable vectors (ReadonlyVec2D/3D)
    • Introduction of Shape2D/3D interfaces for more polymorphic client code
    • Support for barycentric coordinates in Triangle2D/3D
    • Added Sutherland-Hodgman polygon clipping algorithm (useful for laser cutting tasks)
  • Behavioral physics:
    • Massive refactoring of internal force handling & introduction of ParticleBehaviour implementations
    • Added Attraction and ConstantForce (Gravity) behaviours
    • Behaviours can be dynamically added globally (for entire sim) or to individual particles
  • More maths:
    • Added bezier, exponential, threshold and decimated interpolation classes
    • SinCosLUT is no longer a static class, but can be instantiated with different precisions
  • More utils:
    • Added FileSequenceDescriptor class for working with file sequences
    • Added FileUtils for file chooser dialogs, stream wrappers etc.
    • Added DateUtils for creating timestamps (with optional timezone support)
    • Added generic pluggable EventDispatcher helper to easily implement Observer pattern in client code
  • Image based color palettes using new Histogram class in toxi.color package
  • More simulations:
    • Added 2D heightfield erosion strategies: TalusAngleErosion, ThermalErosion (still lacking good demos)
    • Integrated optimized Jos Stam 2D fluid solver (from old 2006 demo, also still lacking bundled demo)
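
One of the geometry additions above, the Sutherland-Hodgman clipper, is compact enough to sketch in plain Java. This is an illustrative stand-alone implementation (class and method names are mine, not the toxiclibs API), clipping a subject polygon against each edge of a convex clip polygon in turn:

```java
import java.util.ArrayList;
import java.util.List;

public class SutherlandHodgman {

    // a minimal 2D point type (the real library would use Vec2D)
    public static class Pt {
        public final double x, y;
        public Pt(double x, double y) { this.x = x; this.y = y; }
    }

    // signed area test: >= 0 if p lies on the inner (left) side of edge a->b
    static double side(Pt a, Pt b, Pt p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // intersection of segment p->q with the infinite line through a->b
    static Pt intersect(Pt p, Pt q, Pt a, Pt b) {
        double a1 = b.y - a.y, b1 = a.x - b.x, c1 = a1 * a.x + b1 * a.y;
        double a2 = q.y - p.y, b2 = p.x - q.x, c2 = a2 * p.x + b2 * p.y;
        double det = a1 * b2 - a2 * b1;
        return new Pt((b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det);
    }

    // clip the subject polygon against a convex clip polygon (both wound CCW)
    public static List<Pt> clip(List<Pt> subject, List<Pt> clipPoly) {
        List<Pt> output = new ArrayList<>(subject);
        for (int i = 0; i < clipPoly.size(); i++) {
            Pt a = clipPoly.get(i);
            Pt b = clipPoly.get((i + 1) % clipPoly.size());
            List<Pt> input = output;
            output = new ArrayList<>();
            for (int j = 0; j < input.size(); j++) {
                Pt p = input.get(j);
                Pt q = input.get((j + 1) % input.size());
                boolean pIn = side(a, b, p) >= 0;
                boolean qIn = side(a, b, q) >= 0;
                if (pIn && qIn) {
                    output.add(q);                      // both inside: keep q
                } else if (pIn) {
                    output.add(intersect(p, q, a, b));  // leaving: keep exit point
                } else if (qIn) {
                    output.add(intersect(p, q, a, b));  // entering: keep entry point
                    output.add(q);                      // ...plus q itself
                }                                       // both outside: keep nothing
            }
        }
        return output;
    }
}
```

Clipping a 4×4 square against an overlapping square yields just their rectangular overlap, which is the kind of operation that makes trimming shapes down to sheet size for laser cutting straightforward.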

A lot of the above additions are in dire need of more documentation & tutorials. I will do my best to rectify this situation ASAP, but hope the supplied demos give you a jump start in the meantime. A lot of time went into creating & documenting them.

Likewise, to ensure the ongoing quality of the libraries and examples, please submit a bug report or enhancement request via the issue tracker…

Finally, another kind request, prompted by recent developments in the open source world around creative coding tools like this one: Please give credit where credit is due. If you’re using these libraries with Processing, please also list them among the tools used in your documentation (toxiclibs is a separate project). The same goes for porting parts of these libraries to other languages. This social link back is hugely important and essential for this culture to exist and grow in the future.

Thank you & all the best for 2011!!

CfP: Community showreel 2010 Fri, 27 Aug 2010 02:42:04 +0000 It’s this time of the year again – Showreel time! By now the project has grown to more than 270 classes distributed across 8 sub-libraries, and this past year especially has seen the potential & impact of these libs realised in fields ranging from architecture, education and generative product design to interactive installations, and not just within the Processing-based core community.

So, just like last year, I’d like to produce another showreel of the recent interesting projects & experiments done by YOURSELVES with the various library packages. The aim of this undertaking is simply to create a record, a snapshot, an overview and some inspiration for other (possibly new) users of these libs. To make this happen I really do need your help & generous contributions in the form of footage, both video and still image assets. Finished projects are desirable, but the work-in-progress stages are often highly interesting too, so please include these as well if possible. All work will be clearly credited and the reel will be premiered during my talk at Flash On The Beach on September 27, 2010. Afterwards the video will be hosted on Vimeo.

Like last year, the guidelines remain as follows:

  • only submit projects you’ve worked on/own rights to/have permission to include
  • project name, client (if any), year, author(s), project URL
  • list of toxiclibs package(s) used
  • video resolution 1280×720 (if possible, lower res might be fine too)
  • screenshots/photos (if you have stills only, more than one would be extremely helpful)
  • (optional) your vimeo username for crediting using their system

Please get in touch via email: toxiclibs at postspectacular dot com

I can provide FTP upload space if you don’t have any yourself. Alternatively, you might want to sign up with Amazon S3, Dropbox, or similar services…

UPDATE!!! Entry deadline extended to: Thu 23 Sep 2010, 09:00 GMT (previously Mon 20 Sep 2010, 12:00pm GMT)

Your help is v.appreciated & I shall thank you dearly!!!!

Once more for the record, here’s the previous reel from 2009…

Upcoming: The Winged-Edge mesh class Sat, 07 Aug 2010 12:02:56 +0000 Just earlier this week I finished a project for which I needed to work with quite large 3D meshes (2 million+ triangles). The meshes needed to be stored in such a way that one can efficiently navigate from a given vertex to its various neighbours and so forth (e.g. for use in a steering system)… So a traversal graph was needed and I’ve finally implemented the Winged-Edge structure on top of the existing TriangleMesh class in toxiclibs, something I’ve been meaning to do for a long while.

Having connectivity information for each vertex, edge and face of the mesh opens up a whole range of new, exciting applications, and my first exploration on this front deals with various subdivision and mesh smoothing strategies and their use as generative modeling tools… I’m currently developing an extensible architecture to make this system as flexible as possible: you’ll be able to create your own custom strategies to decide on the location of new vertices, without having to deal with any of the actual subdivision mechanics themselves, like splitting edges & faces. The same interface-based thinking is also applied to mesh smoothing; so far I’ve implemented Laplacian smoothing and am working on other options as well…
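
To make the smoothing idea concrete, here’s a minimal stand-alone sketch of one Laplacian smoothing pass over an explicit vertex/neighbour list (plain Java with hypothetical names, not the WEMesh-based implementation described above):

```java
import java.util.List;

public class LaplacianSmooth {

    // One Laplacian smoothing pass: move each vertex towards the centroid
    // of its neighbours by the given weight (0 = unchanged, 1 = full centroid).
    // verts[i] is an {x,y,z} triple, neighbours.get(i) lists vertex indices
    // connected to vertex i (which a winged-edge mesh can provide directly).
    public static double[][] smooth(double[][] verts, List<int[]> neighbours, double weight) {
        double[][] out = new double[verts.length][3];
        for (int i = 0; i < verts.length; i++) {
            int[] nb = neighbours.get(i);
            double[] c = new double[3];
            for (int j : nb) {
                for (int d = 0; d < 3; d++) c[d] += verts[j][d];
            }
            for (int d = 0; d < 3; d++) {
                c[d] /= nb.length;
                out[i][d] = verts[i][d] + (c[d] - verts[i][d]) * weight;
            }
        }
        return out;
    }
}
```

Each pass pulls every vertex towards the centroid of its neighbours, which is why repeated Laplacian smoothing progressively shrinks and rounds a mesh.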

All this should be very interesting for users with a more architectural background, but IMHO it also has lots of potential for those creating digital fabrication tools. If you have any interest in this and/or some useful pointers to share, please do get in touch!

Pending further testing, this new mesh structure and its support architecture will be available in the next release (0020)…

The mesh in the video and the examples below use normal shading to help verify the correctness of the edge/face splitting algorithm. Each vertex is tinted using its normal vector’s XYZ components interpreted as RGB color intensities.
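
For reference, that tinting boils down to remapping each normal component from the [-1,1] range into a [0,255] channel value (a sketch assuming the common remapping convention; the exact mapping used in the video isn’t spelled out here):

```java
public class NormalShading {
    // Remap a unit normal's XYZ components from [-1,1] into RGB bytes [0,255],
    // so e.g. a normal pointing straight along +Z becomes a bluish tint.
    public static int[] normalToRGB(double nx, double ny, double nz) {
        return new int[] {
            (int) Math.round((nx * 0.5 + 0.5) * 255),
            (int) Math.round((ny * 0.5 + 0.5) * 255),
            (int) Math.round((nz * 0.5 + 0.5) * 255)
        };
    }
}
```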

The following images show a displacement-subdivided cube with (right) & without (left) mesh smoothing applied…

Some slightly older experiments from the early stages of developing the system. These meshes started out as a simple 8-point cube, were subdivided 4-5 levels and then rendered as VBOs with standard Gouraud shading…

Olhares de Processing: Porto workshop Tue, 15 Jun 2010 16:14:04 +0000 Before it gets too quiet here (sorry about that recent work & travel-induced hiatus, there’re loads of updates coming), I’m super happy to announce details of the next workshop related to this project, incl. a preliminary outline/focus topics for us to get our teeth into. This upcoming workshop is entitled Olhares de Processing (Glimpses of Processing) and will take place at the School of Arts @ Universidade Católica Portuguesa Porto in conjunction with the Festival de Artes Digitais Olhares de Outono.

Mark these dates in your calendar: July 12-18th 2010 – it’s going to be a whole 7 long days of code crafting in the north of Portugal and I’m very much looking forward to it! The workshop is limited to 14 participants. Bookings are handled by the university and should be made via their dedicated website. Thank you dearly!

The planned outline is below the poster I made for this unique occasion, so please do read on:

Workshop poster

The general idea is to split our time into 4 days of intense tutorials and hands-on examination of core principles & techniques of the computational design approach in the context of creating “generative identities”, without prescribing too much what shape & form these should take. Part of our workshop’s remit is also to explore the current possibilities. The final 3 days will then be used to build your own project(s) to be used at the Olhares de Outono festival later in November.

The topics listed below are not set in stone and we’ll decide as a group what to focus on (much depends on the skills & interests of the participants). Similarly, if you’d like to experiment with/include external devices in your project (Wiimote, Arduino), please bring them along. The workshop space is equipped with iMacs, but there’re also a couple of spaces for using your own machine… At the end of the workshop we should all have at least one completed (if not polished) project for the festival, and should strive to document it too.

Day 1: Getting ready


  • recap of basics
    • types
    • structures
    • working with libraries
    • exporting
  • scope
    • learning curve
    • use as environment (PDE)
    • use online vs. offline
    • use as library in larger frameworks
    • P5 within the bigger picture
      • JavaScript
      • Java (Android)
      • OpenFrameworks
      • Cinder


toxiclibs

  • Overview
  • Recent updates
  • Philosophy
  • Resources
  • Use cases
  • Exercises
    • Key techniques/classes
    • Layering processes/Combining modules
    • Easier handling in Processing


Object-oriented programming

  • Concepts
    • Interfaces
    • Inheritance
    • Polymorphism
    • Encapsulation
  • Best practices
    • Events
    • Architecture
    • Design patterns
    • Anti patterns
    • Reusability
    • Open source


Eclipse IDE

  • overview
  • project setup
  • using Processing as lib only
  • editor features
    • code completion
    • navigation
    • refactoring

Day 2: Working with data

Data modelling/processing

  • Collections
    • Hashmaps
      • Histograms:
        • Images, FFT
        • Tag clouds
    • Lists
      • Iterators
    • Queues
      • Priority based processing
      • Pipes
      • Stacks
    • Trees
      • recursion
      • sorted sets using comparators
        • sort by custom criteria
        • spatial subdivision (quadtree, octree etc.)
  • XML
    • standard formats
      • Atom
      • RSS
    • code generation from data model
      • XML Schema
      • JAXB
    • Defining your own formats
      • Loading/saving app state
      • Presets
      • Configuration
    • Aggregation
      • merging of sources and/or time samples
      • set theory
        • union
        • intersection
        • difference
        • relationships (1:1, 1:N, N:M)


Data visualisation

  • basic graph theory
  • finding & creating metaphors
  • techniques
    • geometry basics
    • coordinate systems
      • spherical (Geomapping example)
      • polar (color transforms)
      • cartesian
    • vector maths
    • mapping/geometric transformations
      • M->N dimensions
      • time -> space
    • mesh generation
  • animation
    • interpolation curves
    • state transitions
    • viewport changes
      • transformation matrix
      • camera control (e.g. 3rd person camera)
  • exporting data
    • high res bitmap
    • PDF
    • image sequence + automatic FFMPEG assembly
    • 3D data for digital fabrication

Day 3: Interactions

Building on previous day exercises


Human-machine interactions

  • Wiimote
  • Mobile
  • Computer vision
  • TUIO / OSC
      • multitouch
      • reacTIVision
      • external devices
  • QRCodes
  • Location triggers
    • GPS/compass based AR
  • Serial input
    • Firmata

Machine-machine interactions

  • asynchronous event handling
    • twitter updates
    • reacting to Pachube sensor data
  • multi-threading
  • network communications/protocols
    • UDP
    • OSC

Day 4: Generative techniques

Building on previous day exercises

Processes as design tools

  • inputs
    • observation
    • abstraction
    • mental model building
  • behaviour
    • parametrization
    • rules
    • feedback
  • simulation
    • agents
    • automata
    • erosion
    • fluids
    • particles
    • physics
  • randomness
    • balance of control
    • bias
    • chaos vs. determinism
    • role of authorship?
    • techniques & differences

Day 5-7: Work on own projects


  • If possible form pairs/groups
  • 2 reviews/status reports/discussion per day with all
  • Karsten giving help & support to all groups
  • Final review and presentation on Sunday PM
  • Project documentations

And once again, please head over to this site for further organisational things & the signup form…

Processing Paris workshop Tue, 13 Apr 2010 01:44:33 +0000 After several earlier announcements on Twitter & the Processing forums, here’s another (and last) call for people who’d still like to be part of this (at the time of writing, fewer than 5 places are left):

On April 23 & 24, 2010 I’ll be teaching an advanced Processing, Eclipse & toxiclibs workshop as part of the Processing Paris activities organized by the talented Mr. Webster & David Abouna-Tomé from OFFF.

The Memory Tree

During the 2 days of the advanced Processing Paris workshop we will create an interactive installation called The Memory Tree. The installation will consist of a large projection of a generative, slowly growing 3D tree whose leaves are all made up from messages/thoughts left by visitors and workshop participants.

These messages can either be submitted as voice via mobile phones, Skype or IM, but will also be harvested automatically via tagged content from Flickr and Twitter. The tree will grow and become more complex with every new message collected, slowly forming a browsable history of its creation during the workshop while also documenting the reactions of exhibition visitors. Visitors can interact with the installation via a mouse (or Wiimote, if we’re quick…) to change the view of the tree, zoom in, and focus on particular messages/images or play recorded voice messages. There could also be a mode where the user directs a “cursor” freely between the various tree branches and listens to all voice messages associated with leaves in the cursor’s proximity. This playback would use 3D audio, so that when the focal point is moved, the recorded voices move around in space accordingly, creating an immersive audio collage. Voices closer to the cursor will play louder than ones further away.
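
That distance-based loudness could be modelled with a simple inverse-distance gain curve along these lines (a hypothetical sketch, not the workshop code; `refDistance`, the radius within which a voice plays at full volume, is an assumed parameter):

```java
public class SpatialGain {
    // Inverse-distance attenuation: full volume inside refDistance,
    // then gain falls off proportionally to refDistance / distance.
    // The tiny epsilon guards against division by zero at the cursor itself.
    public static double gainFor(double distance, double refDistance) {
        return Math.min(1.0, refDistance / Math.max(distance, 1e-6));
    }
}
```

Applying this per voice, using each leaf’s distance to the cursor, gives the effect described above: nearby messages dominate the mix while distant ones fade into the background.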

The installation nicely combines a number of different concepts, technologies and programming techniques. It’ll also educate participants about the distributed nature of the technologies available and the importance of open standards acting as technological glue between them.


Amongst other things, we will cover:

  • core 3D geometry techniques: vectors, matrices, quaternions, cameras, curves, texture mapping
  • complex mesh creation with volumetric modelling
  • working with OpenGL
  • dealing with parallel processes using multi-threading
  • working with 3rd party libraries (mainly from toxiclibs)
  • multi-channel audio playback
  • working with XML efficiently (using JAXB)
  • parsing RSS/Atom feeds (Flickr, Twitter integration)
  • working with (and creating) REST based web services
  • designing an application data model
  • object oriented architecture as key enabler for flexible designs

The installation will obviously use Processing as its core platform; however, we will use Eclipse as the development environment to make development faster, easier and more efficient. Participants should have a medium/firm grasp of Processing and feel comfortable experimenting with new concepts & techniques under a looming deadline.

If you want to sign up for this, please head over to:

The images above and below are some very early explorations of a deterministic random 3D tree generator. I’m currently working on a proof-of-concept of some of the above ideas, mainly in order to help us be as efficient as possible on these two workshop days…

The images below show the combination of the generated tree structures above with volumeutils to create 3D meshes of the trees…

simutils-0001: Gray-Scott reaction diffusion Wed, 24 Feb 2010 08:05:39 +0000 This is part 2 of the discussion of the classes & processes provided by the recently released simutils package. The first part of this series dealt with Diffusion-limited Aggregation (DLA) and this next process too is related to the simulated diffusion of particles. However, whereas the DLA process dealt with individual particles, this next one only looks at the concentrations of particles of different “substances” in a two-dimensional simulation space.

The Gray-Scott reaction diffusion model is a member of a whole family of RD systems, popular largely due to its ability to produce a wide variety of biological-looking (and behaving) patterns, both static and constantly changing. Some patterns are reminiscent of cell division, gastrulation or the formation of spots & stripes on furry animals. As with all RD models, these patterns are the result of an iterative process evaluating each cell of the simulation space based on the two main parameters of the reaction equation (for Gray-Scott usually named f and K) as well as the concentrations of the 2 substances in neighboring cells. So conceptually the reaction diffusion system lies somewhere between the isolated DLA process working with individual particles and the entirely rule-based evaluation of a cell’s neighborhood in traditional cellular automata, which we will deal with in the next post.
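
For reference, one simulation step of the standard Gray-Scott model can be sketched in a few lines of plain Java (an explicit Euler update of the usual equations; the simutils implementation may differ in details such as boundary handling):

```java
public class GrayScottStep {
    // One explicit Euler update of the Gray-Scott equations on a w*h grid.
    // u, v are the two concentration buffers (row-major), f and k the
    // feed/kill coefficients, du and dv the diffusion rates for u and v.
    public static void update(double[] u, double[] v, int w, int h,
                              double f, double k, double du, double dv, double dt) {
        double[] u0 = u.clone();
        double[] v0 = v.clone();
        // interior cells only; the border acts as a fixed wall here
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int i = y * w + x;
                // discrete Laplacian over the 4-neighbourhood
                double lapU = u0[i - 1] + u0[i + 1] + u0[i - w] + u0[i + w] - 4 * u0[i];
                double lapV = v0[i - 1] + v0[i + 1] + v0[i - w] + v0[i + w] - 4 * v0[i];
                double uvv = u0[i] * v0[i] * v0[i]; // the reaction term u*v^2
                u[i] = u0[i] + (du * lapU - uvv + f * (1 - u0[i])) * dt;
                v[i] = v0[i] + (dv * lapV + uvv - (f + k) * v0[i]) * dt;
            }
        }
    }
}
```

With u initialised to 1 and v to 0 everywhere, the system sits in its trivial steady state; patterns only emerge once some v is seeded, which is exactly what drawing with the mouse does in the basic demo described below.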

The image below, by Robert Munafo, is a fantastically helpful map of possible patterns resulting from various combinations of the f and K coefficients. Click the image to go to his website and explore the parameters in more detail (there’re close-ups and videos of all interesting combinations).

GS parameter map

As with the other classes of the simutils package, the GrayScott implementation comes with several small demos to help you get started. The most basic use case (HelloGrayScott) just sets up a simulation with default parameters and then lets you draw in the simulation space with the mouse. Each frame, the reaction is updated, its new state translated into grayscale pixels and then rendered to the screen. The important thing to know here is that there are 2 separate result states available, normally called the u and v buffers.

GrayScott gs;

// create a new simulation instance
// the "false" refers to a non-tiled space with walls
// set to true to create tiling patterns
gs=new GrayScott(width,height,false);

// configure the simulation params to:
// f=0.023, K=0.074
// diffusion speed for u buffer = 0.095
// diffusion speed for v buffer = 0.03
gs.setCoefficients(0.023,0.074,0.095,0.03);

The main draw() loop then just does this:

void draw() {
  if (mousePressed) {
    // set cells around mouse pos to max saturation
    gs.setRect(mouseX, mouseY,20,20);
  }
  // update simulation by 10 time steps per frame
  for(int i=0; i<10; i++) gs.update(1);
  // read out the v buffer and translate into grayscale colors
  loadPixels();
  for(int i=0; i<gs.v.length; i++) {
    float cellValue=gs.v[i];
    // the cell values in v are usually in the range 0.0 .. 0.33
    int col=255-(int)(min(255,cellValue*768));
    // use the col value for red, green and blue and set alpha to full opacity
    pixels[i]=0xff000000 | (col << 16) | (col << 8) | col;
  }
  updatePixels();
}
To avoid this manual pixel pushing and to make some of the subtler density changes more visible, we can also use the handy (and also new) ToneMap class from the colorutils package. It allows us to map a number range to different positions (colors) on a multi-color gradient and so better visualize the different densities. The basic usage of this class is shown below:

import toxi.color.*;

ToneMap toneMap;

// define a color gradient by adding colors at certain key points
// a gradient is like a 1D array with target colors at certain points
// all in-between values are automatically interpolated (customizable too)
// this gradient here will contain 256 values
ColorGradient gradient=new ColorGradient();
gradient.addColorAt(0, NamedColor.BLACK);
gradient.addColorAt(128, NamedColor.RED);
gradient.addColorAt(192, NamedColor.YELLOW);
gradient.addColorAt(255, NamedColor.WHITE);

// now create a ToneMap instance using this gradient
// this maps the value range 0.0 .. 0.33 across the entire gradient width
// a 0.0 input value will be black, 0.33 white
toneMap=new ToneMap(0, 0.33, gradient);
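
Under the hood, such a tone map is essentially a precomputed lookup table over an interpolated gradient. A stand-alone sketch of the idea (plain Java with hypothetical class names, not the actual colorutils implementation):

```java
public class MiniGradient {
    // Precomputed lookup table of packed RGB colors, built from gradient
    // stops: positions[] (ascending, in 0..width-1) paired with colors[].
    private final int[] lut;

    public MiniGradient(int width, int[] positions, int[] colors) {
        lut = new int[width];
        for (int x = 0; x < width; x++) {
            // find the pair of stops surrounding x
            int seg = 0;
            while (seg < positions.length - 2 && x > positions[seg + 1]) seg++;
            double t = (double) (x - positions[seg]) / (positions[seg + 1] - positions[seg]);
            lut[x] = lerpColor(colors[seg], colors[seg + 1], Math.min(1, Math.max(0, t)));
        }
    }

    // channel-wise linear interpolation between two packed RGB colors
    static int lerpColor(int a, int b, double t) {
        int ar = (a >> 16) & 255, ag = (a >> 8) & 255, ab = a & 255;
        int br = (b >> 16) & 255, bg = (b >> 8) & 255, bb = b & 255;
        int r = (int) Math.round(ar + (br - ar) * t);
        int g = (int) Math.round(ag + (bg - ag) * t);
        int bl = (int) Math.round(ab + (bb - ab) * t);
        return (r << 16) | (g << 8) | bl;
    }

    // map a value range [min,max] onto the gradient (the ToneMap idea)
    public int toneFor(double val, double min, double max) {
        double t = (val - min) / (max - min);
        int idx = (int) Math.round(Math.min(1, Math.max(0, t)) * (lut.length - 1));
        return lut[idx];
    }
}
```

Because the gradient is baked into a table once, mapping a cell value to a color at render time is a single clamped array lookup per pixel.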

Now we can refactor our Gray-Scott rendering code into something even simpler:

loadPixels();
for(int i=0; i<gs.v.length; i++) {
    // take a GS v value and turn it into a packed integer ARGB color value
    pixels[i]=toneMap.getARGBToneFor(gs.v[i]);
}
updatePixels();

Btw. the ToneMap class is a nice example of the whole reusable “building block philosophy” of toxiclibs (and of the object-oriented approach in general). The class is simply a composition of other library classes and under the hood delegates everything to these elements:

ToneMap composition

All other GrayScott demos, as well as all Cellular Automata examples, make use of this ToneMap class, so do have a look at those for more reference…

Since a homogeneous configuration of the entire sim grid will always just produce one particular character/patterning, the GrayScott class has been designed with extension in mind. The CustomGrayScott demo shows how to impose a pattern on the actual simulation parameters themselves, so that cells at different positions are evaluated using different parameters:

class PatternedGrayScott extends GrayScott {

  // our constructor just passes things on to the parent class
  PatternedGrayScott(int w, int h, boolean tiling) {
    super(w, h, tiling);
  }

  // this function is called for each cell
  // to retrieve its f coefficient
  public float getFCoeffAt(int x, int y) {
    // here we only take the x coordinate
    // and choose one of 2 options (even & odd)
    return 0==x%2 ? f : f-0.005f;
  }

  // this function is called for each cell
  // to retrieve its K coefficient
  public float getKCoeffAt(int x, int y) {
    // here we only use the y coordinate
    // and create a gradient falloff for this param
    return k-y*0.00004f;
  }
}

To use our extended version instead of the default GrayScott class, we only need to change one line in the setup() method:

GrayScott gs;

void setup() {
  gs=new PatternedGrayScott(width,height,false);
}

The result of this is shown below… Easy, huh? :)

Having this mechanism in place, it can also be used to create more interesting types of masking. For a commission to produce a cover design for Print Magazine in 2008, I generated a typeface from simple line & arc segments and used it as a mask to manipulate the concentrations of the f & K parameters to achieve two different types of patterning: one for the inside of the letter shapes, the other for the outside…

The frames of that animation were then stacked up along the Z axis in 3D space and, with the help of the volumeutils classes, turned into a 3D mesh, exported in STL format (also see TriangleMesh.saveAsSTL()) and finally fabricated into the physical, 3D printed sculpture shown below. In a way, the sculpture can be seen as a map of its entire creation process…

Type & Form sculpture

Here’re some more rendered detail shots of the sculpture. More information about this project is here and the related flickr set.

Finally, the GrayScottImage demo shows another supported technique of using the library: the seeding of the simulation using a bitmap image.
