
The road ahead

As I’m using any spare moment to continue working on getting the next release ready for public consumption, I thought it would be useful to give a better overview of the current development tasks (and challenges) for the near future. The diagram below hopefully visualizes the current state quite well and maybe even encourages one or two brave souls to lend a helping hand, as toxiclibs is slowly but steadily breaking out of its initial Java shell and starting to integrate more publicly into languages and application contexts other than Processing.

toxiclibs roadmap Q4/2011 - Q2/2012

A bit of philosophy

From the very outset, the creation of design-oriented, composable and reusable data structures and algorithms for manipulating them has been the main aim of this project and also one of the most obvious counter-approaches to the way people traditionally work with Processing (hitherto the most used environment for the libs). Even though the majority of classes provided by toxiclibs can have a visual representation, there’s a strict exclusion of any rendering-related code in the core packages of the library, since such code often carries a vast amount of secondary dependencies, eventually binding the library to a large, rigid environment. Me not likey. Almost all toxiclibs classes are “pure” models which can be queried, transformed, combined and otherwise manipulated as abstract entities. They’re usable as tools for solving (design) problems, not only for drawing. They are the M in MVC. If one of them ever needs to be drawn/rendered, a 3rd party component is required (the V in MVC, e.g. Processing or straight OpenGL), but toxiclibs does not prescribe how this drawing should be done (though it provides optional tools to support that task). This separation of concerns really has been the #1 feature aimed at making it as easy as possible (and encouraging) to enter the next stage of the project: systematically porting to other languages.
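
To make that a bit more concrete, here’s a minimal sketch of the “pure model” idea using the existing Vec2D/Circle classes from toxi.geom (treat the exact method signatures as approximate): the model is created, queried and transformed as plain data, and nothing in it knows how, or even if, it will ever be drawn.

```java
// Minimal sketch: geometry is created, queried and transformed without any
// rendering code. How (or whether) the result gets drawn is left entirely to
// a separate view component, e.g. ToxiclibsSupport in Processing or raw OpenGL.
import toxi.geom.Circle;
import toxi.geom.Vec2D;

public class PureModelSketch {
    public static void main(String[] args) {
        float r = 50;
        Circle c = new Circle(new Vec2D(100, 100), r);
        Vec2D p = new Vec2D(160, 80);
        // query the model as an abstract entity
        System.out.println("point inside circle? " + c.containsPoint(p));
        // transform: project the point onto the circle's outline
        Vec2D onCircle = p.sub(c).normalizeTo(r).add(c);
        System.out.println("projected point: " + onCircle);
        // no draw() call anywhere - that's the V's job, not the M's
    }
}
```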

Polyglot toxiclibs

JavaScript

In the past, I and others have made half-baked attempts to port selected classes to other languages (ActionScript, JavaScript, C++). However, all of them were just isolated fragments needed for specific projects and never came close to general library status. As most people are surely aware by now, since the beginning of the year Mr. Kyle Philips has been making a stellar effort systematically porting large parts of the toxiclibs core and physics classes to JavaScript, and due to the popularity of that language this port is gaining huge traction. That’s exactly what I’ve been hoping would happen as an eventual consequence of the above design points, so I’m super happy to see this effect kicking in. JavaScript, however, does not only mean in-browser usage, even though in combination with WebGL and libraries like three.js & Processing.js the number of potential use cases there is huge (I’d say even more so than in the traditional Processing-tied context). JavaScript’s reach is massive and it has also deservedly gained traction outside the browser as a general purpose language (largely thanks to V8 based platforms like Node.js, but also Dean McNamee‘s Plask as a more closely related example). I think toxiclibs can contribute to and actively support new developments on these platforms.

In terms of similarity and porting, JavaScript is one of the closest things to Java there is, but it also has unique deployment issues, and there’s some substantial organizational effort needed to create a better system for splitting the codebase into modules and integrating the JS port with existing coding standards (CommonJS) and module managers/loaders (e.g. NPM, RequireJS), including starting to think about adopting Google Closure conventions to harness the optimizations achievable by that compiler (e.g. dead code removal, a huge benefit for a large library like this).

Good API design requires a clean, consistent ethos, a worldview and opinion which is not only carried through and made visible throughout a project, but also leads users of that API to write their own software in a certain/similar manner. Achieving that goal is often a long, winding road and takes much longer to get right than writing actual code, but I think by now toxiclibs does provide a decent set of consistently used patterns (excluding a few edge cases). Having such a familiar set of classes & APIs available in multiple languages is a serious benefit for users and makes it much easier to experiment & switch between environments, without forcing users to stay in a sandbox of sorts (e.g. as Processing.js does). I also think a familiar API needs to be complemented & balanced with the unique features, idioms and development practices of the host language to allow both code & coder to live up to their full potential. For JS these differences are still relatively harmless, but even there we should embrace them more.

Clojure

Ever since college, I’ve had an ongoing, if usually fleeting, fascination with Lisp and its seemingly alien, stripped-down approach to syntax, its obsession with brackets and generally doing things “the other way round”, at least compared to common (imperative) languages. I never considered Lisp a serious contender for my own development arsenal until earlier this summer, when I stumbled across Clojure, a modern dialect of Lisp running on the JVM (get a detailed feature overview on their website, it’s worth reading). This time I was immediately struck by its elegance, the resulting concise code and the many other features this language brings to the table, especially for working with collections, data processing & concurrency. Many data munging tasks can be solved in approx. 30-50% of the code needed in Java/Processing, which is very useful for dataviz. So I made an effort to get into it more seriously. Then I read this article and when I hit this quote, I felt I was in very similar shoes:

“Many extremely intelligent people I knew and had much respect for were praising Lisp with almost religious dedication. There had to be something there, something I couldn’t afford not to get my hands on! Eventually my thirst for knowledge won me over. I took the plunge, bit the bullet, got my hands dirty, and began months of mind bending exercises. It was a journey on an endless lake of frustration. I turned my mind inside out, rinsed it, and put it back in place. I went through seven rings of hell and came back. And then I got it.

The enlightenment came instantaneously. One moment I understood nothing, and the next moment everything clicked into place. I’ve achieved nirvana.”

Similar to Scala, Clojure compiles directly into JVM byte code and therefore provides comparable speed and can easily interoperate with the vast amount of Java libraries available. Unlike Java, Clojure is focused on immutable data and provides a functional approach to computing, an antidote to living in a kingdom of nouns. Common to all Lisps also is the REPL, offering livecoding features as part of the core development process. Lisp/Clojure code is data is code. I don’t need to point out how exciting this is for people in our field. Besides that though, my main excitement is about the forward-thinking take on concurrency/multicore support (agents/futures/atoms/refs) and the ability to easily create domain specific languages. Community activity around the language seems to be glowing hot, sporting an impressive ecosystem of libraries and amazing support tools and making open source development truly fun. Some noteworthy introductory links, should you feel inclined to give it a spin too:

ClojureScript

ClojureScript is a recent addition to the Clojure ecosystem and potentially something to keep watching closely for the purposes of porting: Clojure compiled into JavaScript. I know it sounds like heresy, but in some respects it seems ClojureScript does “better” JavaScript than the original, providing all the intelligent language features of Clojure (e.g. namespaces, destructuring, function overloading, atoms, macros) and generating JavaScript in a format targeted at Google’s Closure compiler, allowing for vastly better optimizations of large applications than handwritten JavaScript (which can only fully utilize the compiler if you stick to the necessary conventions in ALL your code). I’m in no position yet to back any of that with my own experience, but it’s an exciting development for sure. Here’s also a video of the launch event of ClojureScript at Google: Rich Hickey (creator of Clojure) unveils ClojureScript, and the official announcement with more links.

Keeping in sync & documentation

The next major set of tasks related to porting is figuring out ways to keep the different ports in sync, or at least to better document which parts of the library are available in which port. For that purpose, I started prototyping a new web app (WebNoir + CouchDB) which will collect metadata from the different codebases and automatically produce a port coverage/sync report. Kyle started manually producing a top-level version of this for his JS port, but its granularity is only at class level, whereas we really do need that information per function/method for it to be truly useful. This is also because there are still some areas of the original Java version which will receive further updates and bug fixes, and currently there’s no system to mark those places in the code as needing to be reflected in the other ports. Serious development of this tool is top priority after v0021 is out.
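
As a purely hypothetical sketch of what such per-method markers could look like on the Java side (none of these names exist in toxiclibs, and the actual tool may well read metadata from comments or an external file instead), a simple annotation harvested by the sync tool might be enough:

```java
// Hypothetical sketch only: a per-method annotation the sync tool could scan
// to build the coverage report. Neither @Ported nor its fields are part of
// toxiclibs; they're stand-ins for whatever metadata format ends up being used.
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Documented
@Retention(RetentionPolicy.RUNTIME)
public @interface Ported {
    /** language IDs this method is already available in, e.g. "js", "clj" */
    String[] value() default {};
    /** release of the Java original the ports were last synced with */
    String syncedWith() default "";
}
```

A method changed in the Java version could then simply update (or drop) its syncedWith value, and the report would flag it as stale in all other ports.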

Related to that is documentation, the historically slowest evolving aspect of the whole project. One of the comments I hear most often is “Javadocs suck”. They suck even more so since for many library classes they don’t even exist, or only superficially. So in a way I couldn’t agree more, but then again, for the past four years these libraries have largely evolved around my own needs and client projects and I’ve relied on users consulting the 95+ examples bundled with each release (or attending my workshops) in order to learn the basic usage patterns. Whilst I still believe learning by example is by far the most efficient way of learning (having done so my entire career), I also think things can be vastly improved by offering documentation in several formats, all cross-referenced with each other: actual running examples with source code, a literate-programming style doc system (e.g. Docco/Marginalia based, very good for workshops) and the traditional Javadoc style, for integrating the docs in an IDE setup.

Example output of running an early(-ier) Clojure porting effort through Marginalia to produce nice, easy-to-read documentation next to the source code.

IMHO the reason Javadocs are soooo unsuccessful amongst Processing users is actually largely down to the lack of Javadoc support in the Processing PDE. If people used IDEs like Eclipse, which show Javadocs in-situ within the editor context simply by hovering over a class name, I believe they would see them for what they really are: kind of awesome!

Eclipse screenshot with Javadocs in editor context.

So in conjunction with the task of adding metadata to every method of every public class in the codebase, all methods will also receive full documentation. These docs will then also link to existing examples using the particular method. This will be the second main focal point of the v0022 release (the one after next).
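
To illustrate the intended Javadoc side of this (the class, method and example names below are made up purely for the sake of the sketch and are not actual toxiclibs API), each public method would carry a standard doc block plus a pointer to a bundled example using it:

```java
// Illustrative sketch of the planned documentation style: a normal Javadoc
// block per method, cross-linked to a bundled example. All names here are
// placeholders.
import toxi.geom.Vec2D;

public class DocStyleSketch {
    /**
     * Computes the point on the curve at the normalized parameter t.
     *
     * @param t curve parameter in the interval 0.0 ... 1.0
     * @return interpolated point on the curve
     * @see "bundled example: geom/CurveSampling"
     */
    public Vec2D pointAt(float t) {
        // placeholder body, only here to keep the sketch compilable
        return new Vec2D(t, t);
    }
}
```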

Versioning & repositories

Speaking of version numbers: again, due to the organic and isolated growth of this project and my own past development practice, the linear versioning scheme has been quite sufficient so far. But as we all learn new things and our development tactics change, so too will the versioning for this project have to change to something more meaningful. Enter semantic versioning. The idea is nothing new and I’ve been using it for most other projects in the past, but I think this time the reasoning behind it is somewhat different:

  1. Currently there’s toxiclibs support for 2.1 languages (the Clojure port is not quite there yet to fully count)
  2. Development outside the Processing IDE is far more dependent on build management tools, open source repositories and module managers (e.g. Maven, NPM, Leiningen)
  3. Different projects are created at different times and might require different versions of the libraries and
  4. I’d really love to get to a point where there’ll be synchronised releases in order to reduce build & documentation complexity and avoid confusion for users so that they can assume v1.0.0 of a module will contain the same features in all (sup)ported languages.

Semantic versioning is the lowest common denominator between all current module managers/repositories and hence will be introduced with the next release. As a result, users will have a much easier way to integrate the libraries into their own (non-Processing) projects, since they will also be available via the major existing open source repositories for the various languages (Sonatype [Java/Maven], NPM registry [JS], Clojars [Clojure/Leiningen]). Apparently, Processing 2.0 will feature its own centralized library management system, but from what I gather it will not offer integration with any of these existing open source repositories.

New website & tutorials

The current WordPress based setup is not the best platform for integrating all the planned new documentation, tutorials and other features like the bundled example & user galleries. I’ve been test driving Confluence on a private dev server and found it more promising (at least for the documentation & tutorial side), so I might adopt it in the near future. The other (more appealing) alternative is to extend the new CouchDB based sync/doc tool into a more generic web app and add wiki & blogging features. The main issue with that will be increased hosting costs, which I’ll need to think about more in terms of how they can be better balanced. The new website will also host as many of the bundled examples as possible, effectively deprecating the current gallery on openprocessing.org, which is impossible to batch update and therefore can’t reflect any API changes in older examples (which causes unnecessary comments). To better integrate community contributions, the new system will also be used to provide a user gallery.

Furthermore, if you’re like the amazing Amnon Owed and feel like creating super useful tutorials for the libraries, please start doing so. Any help in increasing the number of learning resources would be an amazing contribution to this project and will play a major role on the new site. Speaking of tutorials, I really do think there’s also a big need for a general tutorial about how to make the most of open source libraries like this from a user perspective, and how to contribute (even in the most indirect ways). Much of the feedback and comments I receive hints at a large knowledge gap about how to even go about finding existing resources and updates, working with the source code, working with issue trackers etc. in a self-guided manner. This generally seems to be a far bigger problem with users in the Processing camp than I’m aware of from other environments. Food for thought!

Summary & next steps

Well, this is the closest thing to a sharable master plan I could get to for now. Before most of these things can be addressed, other minor tasks need to be completed to get 0021 out of the door first. A brief, non-exhaustive overview of the new features:

  • Initial support for NURBS curves and surfaces is the biggest new addition and still requires some more testing and internal restructuring. Supported operations are:
    • Curve builders for circles and arcs, creating curves from a list of points, joining curves
    • Surface builders: extrude curve, revolve curve, construct surface from a grid of points/control mesh
    • Convert surface into TriangleMesh instance (variable resolution & UV coordinate generation)
  • Several important additions to the Polygon2D class (see the sketch after this list):
    • construct regular polygons from a given base line segment (useful for creating tessellations)
    • rotate/scale/translate polygons
    • pick random points within a polygon (useful for color sampling)
    • retrieve edge list
  • Tessellate polygons using Delaunay triangulation with flexible grid resolutions
  • Addition of ConvexPolygonClipper to clip a polygon to the shape of another
  • New BezierCurve2D/3D classes in addition to existing Spline2D/3D
  • Implementing Visitor pattern for PointQuadtree/PointOctree
  • Adding UV coordinate generation to SurfaceMeshBuilder
  • Adding PLYWriter for exporting 3D meshes in Stanford Polygon format
  • ToxiclibsSupport line drawing now supports decorators for dashed lines and arrow heads (customizable)
  • Custom DXFWriter for 2D shapes with DXF layer support
  • Improving precision/reducing rounding errors for VolumetricBrush
  • Adding FluidSolver3D
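
Regarding the Polygon2D additions mentioned above, here’s a rough sketch of how I expect these operations to read. Since v0021 isn’t out yet, consider every method name below an assumption that may still change in the final API:

```java
// Rough sketch of the new Polygon2D operations listed above. v0021 is not
// released yet, so all method names here are assumptions and may differ in
// the final API.
import java.util.List;
import toxi.geom.Line2D;
import toxi.geom.Polygon2D;
import toxi.geom.Vec2D;

public class PolygonAdditionsSketch {
    public static void main(String[] args) {
        // build a regular hexagon from a base line segment (assumed builder)
        Line2D base = new Line2D(new Vec2D(0, 0), new Vec2D(100, 0));
        Polygon2D hex = Polygon2D.fromBaseEdge(base, 6);
        // rotate/scale/translate the polygon
        hex.rotate((float) Math.PI / 6);
        hex.scale(2);
        hex.translate(new Vec2D(50, 50));
        // pick a random point inside, e.g. for color sampling
        Vec2D sample = hex.getRandomPoint();
        // retrieve the edge list
        List<Line2D> edges = hex.getEdges();
        System.out.println(sample + " / " + edges.size() + " edges");
    }
}
```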

There’re also a ton of other smaller additions and bug fixes plus approx. 10-15 new examples. Let me also point you again to the repository of workshop projects at learn.postspectacular.com, which also contains several projects larger than the examples bundled with the release.

Showreel

A few people have been asking about sending in stuff for this year’s showreel. Of course, I do intend to produce one again (it’s one of the highlights of the year), though I can realistically only get to this in Nov/Dec, hopefully making it a nice video for the holiday season. I will send out a proper call for submissions in the next month, but if you have stuff ready, please send it along already. Specs will be the same as previously: video assets at 1280×720 (if possible), still images are fine too, and of course please add a brief description & credits.

Any feedback, suggestions & help offers are highly appreciated! So long…
