FRAMED 2.0 Arrives

Well, it’s been a long time coming, but it seems Framed 2.0 is finally here! I’ve been following this company since their first digital art screen, Framed 1.0, which was a hefty 55-inch unit coming in at close to $15,000 AUD. Since then the prospect of a more affordable 2.0 version has been looming.

Digital art screens are something I’ve dreamed of for years – you can even look back to an old post I wrote in anticipation of ‘Google TV’ (something that never really materialised). Well, Framed 2.0 may just be the Google TV I’d always hoped for.

The concept is simple – digital art has emerged as a notable field in recent years, but there’s never really been a dedicated way to display it. Enter Framed 2.0.

It’s a nascent time for the digital art screen, and a range of competitors are entering the market. But Framed 2.0 promises something none of the others do: running creative coding frameworks – live software built with Processing, Cinder, openFrameworks, vvvv, and Max/MSP – right out of the box.

While that’s certainly no mean feat, Framed also offers features similar to competitor devices: GIFs, website display, and fullscreen video. But this promise of natively running software, if they can pull it off, gives Framed 2.0 an edge, I think – one that will appeal to software artists in particular.

Like many others, I backed the Framed 2.0 project on Kickstarter a little over a year ago. Communication from the creators has been sporadic since then, and the delivery dates have been pushed way back for everyone, but with three units arriving today (the ‘studio pack’ tier) I have to say the wait has been well worth it.

The rest of this post functions as something of a visual ‘unboxing’ for other backers who may be interested (spoiler alert!), and as a review of the device and the system that supports it at this early beta stage. Because ultimately, that’s where this offering currently sits. It’s clear there’s more to come from the Framed team in terms of an official launch, the native remote control app, artist uploads, and of course native software support – but what exists now is solid, clean, and to be quite honest a joy to use. I was blown away.

The packaging is fantastic – everything is well and safely packed. You get a remote, a power supply, a USB cable for charging the remote, a wall mount, a start-up guide and of course the device itself.


OCD OSC

I’ve spent the last couple of months putting together a basic outline of my thesis, but also trying to build a workable 3D camera system.

In my research I discovered Vezér, a timeline application for sending and receiving Open Sound Control (OSC) messages between various pieces of software. It should really be called Open Signal Control, because at the moment I’m not using it to control any sound.

It’s a great application, allowing me to control animation and camera movement with easing and Bézier-curve acceleration, without writing the code myself. Just set up an address, and Vezér does the rest. Initially I looked into Duration, but its bugginess and lack of active development motivated me to look for something else. Vezér, on the other hand, is actively developed and has great support on the website from its creator.
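
As a rough illustration (and not my actual project code), here’s how a Vezér float track can drive a value in Processing using the oscP5 library – the port number and the /cam/orbit address below are placeholders for whatever you configure on the Vezér track:

```
// Minimal Processing sketch receiving a float track from Vezér over OSC.
// Assumes a Vezér track sending to this machine on port 12000 with the
// (hypothetical) address /cam/orbit – substitute your own track settings.
import oscP5.*;

OscP5 oscP5;
float orbitAngle = 0;  // value animated on the Vezér timeline

void setup() {
  size(800, 600, P3D);
  oscP5 = new OscP5(this, 12000);  // listen on the port Vezér sends to
}

void draw() {
  background(0);
  translate(width / 2, height / 2);
  rotateY(orbitAngle);
  box(150);  // stand-in for the actual scene
}

// Called by oscP5 whenever an OSC message arrives
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/cam/orbit")) {
    orbitAngle = msg.get(0).floatValue();
  }
}
```

Scrub or play the track in Vezér and the value updates live in the sketch, easing curves and all.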

On the camera side of things, I’m using OCD – Obsessive Camera Direction – a workable camera library for Processing. I initially started with PeasyCam, but it proved too limited in its functionality for my needs. OCD provides basic functionality for moving and aiming multiple cameras, and I found a great hack online enabling me to send multiple camera feeds to separate graphics layers in Processing (which in itself was its own hurdle).
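
The hack itself is tied to OCD’s internals, but the general shape of the trick – rendering each camera’s view into its own PGraphics layer and compositing them – looks roughly like this simplified sketch, which uses Processing’s built-in camera() call as a stand-in rather than OCD’s own feed (the camera positions here are just hard-coded examples):

```
// Two virtual cameras rendered to separate PGraphics layers, then composited.
// A simplified stand-in for the OCD multi-feed hack, using built-in camera().
PGraphics viewA, viewB;

void setup() {
  size(800, 400, P3D);
  viewA = createGraphics(400, 400, P3D);
  viewB = createGraphics(400, 400, P3D);
}

void drawScene(PGraphics pg) {
  pg.background(20);
  pg.lights();
  pg.box(100);  // shared scene content
}

void draw() {
  float t = frameCount * 0.01;

  // Camera A orbits the scene
  viewA.beginDraw();
  viewA.camera(300 * cos(t), -150, 300 * sin(t), 0, 0, 0, 0, 1, 0);
  drawScene(viewA);
  viewA.endDraw();

  // Camera B holds a fixed front-on position
  viewB.beginDraw();
  viewB.camera(0, 0, 400, 0, 0, 0, 0, 1, 0);
  drawScene(viewB);
  viewB.endDraw();

  image(viewA, 0, 0);
  image(viewB, 400, 0);
}
```

The OCD version swaps the hard-coded camera parameters for values driven by the library’s cameras, but the per-layer beginDraw()/camera()/endDraw() pattern is the same idea.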

Along with control from TouchOSC, I now have a workable system whereby I can control camera movements wirelessly with an iPad, record those movements in Vezér, play them back, and see the results directly in Processing in real time. It’s fantastic.

Not only that: all of Vezér’s controls are OSC addressable, meaning a bit of code in Processing can have you skipping and moving around multiple chapters and compositions, keeping things well mixed up – which is exactly what I want 🙂
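
Going the other way is just as short – the sketch below fires an OSC message back at Vezér on a mouse press, though the port and address here are only placeholders (check Vezér’s own documentation for the addresses it actually listens to):

```
// Sending OSC from Processing back to Vezér, e.g. to jump around the
// timeline. The port and the /composition/play address are placeholders –
// check Vezér's documentation for the addresses it actually listens to.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress vezer;

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 12001);             // local listening port
  vezer = new NetAddress("127.0.0.1", 8000);  // Vezér's input port (assumed)
}

void draw() {
  background(0);  // nothing to render – this sketch only sends messages
}

void mousePressed() {
  OscMessage msg = new OscMessage("/composition/play");  // hypothetical address
  msg.add(1);
  oscP5.send(msg, vezer);
}
```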

It’s certainly been challenging figuring it all out, and not without some help from friends and members of the Processing community. The task ahead now is to begin setting up multiple scenes and multiple cameras, recording camera paths, and finally creating a whole range of 3D content.

Lots of work ahead >_<

Orbital Systems

Things have moved so fast with my research lately that I haven’t really commented on it. But there’s a focus for my current project now. Some of that is visible on my Tumblr – prgrms.tumblr.com – but here is a bit of an update on where things are headed.

Some of my previous projects used NASA imagery in an animated way – whether looking inward, via satellite imagery of Earth, or outward, with cosmic perspectives brought to us by the Hubble – and I recently realised that this is the space my work has been operating in. To be honest it hadn’t totally hit me; maybe that’s because I was working with the imagery produced by satellites. But on closer thought I saw that I’d also been working with satellites as a subject in and of themselves.

Hubble in orbit

If Airfield used imagery from satellites pointed directly at Earth, and Noumina was directed outward, this new work is about the middle point – the ‘orbital perspective’, as Ron Garan calls it. The project, which I am tentatively titling EO (an abbreviation for elliptical/Earth orbit, or Earth observation), takes up this perspective: the location of satellites and their views of the horizon from Low Earth Orbit.

Airfield arose out of the proliferation of imagery that entered the public consciousness via technologies like Google Earth. And perhaps Noumina came to be following the 2009 servicing mission to the Hubble, which returned it to full functionality and allowed for its highest-quality images since launch.

The EO project responds to the images that have been produced aboard the International Space Station, whether taken by astronauts or by the recently installed HDEV (High Definition Earth Viewing) camera system, which provides a live HD feed of Earth from the point of view of the ISS. This ‘Orbital Perspective’, I feel, belongs in the lineage of images produced via spaceflight such as Earthrise, the Blue Marble, and Pale Blue Dot.

What is the significance of the images produced aboard the ISS, and how have they altered our perception of the planet and ourselves? While my research project attempts to investigate these questions in part, it is also an abstraction of them. In my project I am using software to create orbital virtual cameras and art ‘satellites’ that rotate around a spherical body. This has put me into contact with a wide range of so-called ‘orbital artworks’ – projects that either use an orbital methodology or process, or that involve orbital spaceflight and the images it produces.
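
As a minimal sketch of what I mean by an orbital virtual camera (far simpler than the project itself), a Processing camera can be put on an inclined circular orbit around a sphere like this:

```
// A virtual camera orbiting a spherical body – a minimal sketch of the idea,
// not the project code itself.
float angle = 0;           // position along the orbit, in radians
float orbitRadius = 400;   // distance of the camera 'satellite' from the body
float inclination = 0.4;   // tilt of the orbital plane, in radians

void setup() {
  size(800, 600, P3D);
}

void draw() {
  background(0);
  lights();

  // Camera position on an inclined circular orbit around the origin
  float camX = orbitRadius * cos(angle);
  float camY = orbitRadius * sin(angle) * sin(inclination);
  float camZ = orbitRadius * sin(angle) * cos(inclination);

  // Always look back at the spherical body at the centre of the scene
  camera(camX, camY, camZ, 0, 0, 0, 0, 1, 0);

  noStroke();
  fill(60, 120, 200);
  sphere(150);  // the 'planet'

  angle += 0.005;  // advance along the orbit each frame
}
```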

In the ramp-up to the completion of the project I’ll be posting more bite-sized chunks of research as they relate to EO and its accompanying exegesis.

Art as Problem Solving

It’s interesting to think of art as problem solving.

Art does this in many ways. And maybe it doesn’t actually ‘solve’ any problems so much as look at them from different angles – exposing them, considering them, and starting conversations or critiques.

But when something like programming enters the equation (quite literally), the whole process of making becomes a process of figuring out a problem.

In a sense, making a work becomes like a puzzle: you put pieces together. But I’m not sure anyone, in any discipline that uses computer programming, would find the scope of the work made monotonous by it.

Quite the opposite. In programming one is always trying to optimise, and along the way always learning new techniques and ways of doing things that were previously unknown.

I’m sure there are always more things to learn with programming, because every day I try to code something, I get one step further. It might only be a small step, but it’s a step nonetheless, and it becomes part of a veritable toolbox of techniques.

And this I find incredibly rewarding, personally. It’s fair to say one never hits the limit of any tool, but the thing about programming is getting caught up as you create. I’ve read or heard elsewhere that this is a blight of programming: the creative process halted by a bug or some error, and you get caught in that loop for hours, trying to figure out the tiniest detail.

It’s a strange challenge in the creative process. It would be like trying to write a chorus for a song you’re writing on guitar, and then having to fix the guitar halfway through the writing process. I’m sure this isn’t unique to programming – building as you go, learning as you go, and solving problems as you create – but it’s something I find particularly fascinating about the medium.

Creating art becomes a kind of brain teaser: you know what you want, but how do you get there? It reminds me of John Maeda, in his book Creative Code, talking about this process: “and in a flash of lightning it is suddenly there”.

Visual Research

I’ve created a new visual research blog on the Tumblr platform.

It’s more of an ongoing mood-board: a visual mind-dump of where my current thoughts are at, and of visual stuff I create or find online/IRL.

It’s also a mix of things from Instagram and things that go to Facebook. Facebook is a bit of a strange one, because stuff often gets lost, hidden or deleted. I post there often, and in some ways this new blog is an attempt to capture some of what’s lost there.

Instagram is great for capturing life moments, in your work or your time off, but I don’t really use it to post other people’s stuff, so this Tumblr will be useful for that. Who knows, I may create a second Instagram at some stage. But the Tumblr is a nice archive, I think – a mix of all things. GIFs and videos to come soon, too.

And then there’s just a ream of images I download from the Internet, so a lot of that will go up here.

I thought about creating a physical mood-board, but printing things out and sticking them up on a wall doesn’t really make sense anymore – digital is just so much quicker. Maybe a few life-size posters of the major stuff will go up in the studio.

I also avoided Tumblr for the longest time. But my intention here isn’t to cultivate any kind of network or following; it’s just a digital visual diary that anyone’s welcome to look at.

http://prgrms.tumblr.com

MADA Twin Screens

Recent work in Processing on display in the MADA Foyer, on the Twin Screens at Monash University in Caulfield, Melbourne.

Displayed for a short run across the weekend, to coincide with an in-house symposium taking place the following Monday, these works in progress reflect my work with Processing over the course of this year.

Even though it’s not an official ‘exhibition’, it was nevertheless exciting for me to finally see some work up on a wall, and the two screens had just been replaced with shiny new models.

I love the interplay across twin and multiple-screen environments, where slightly varying the content across each display creates shifting arrays of possible combinations.
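
As a toy example of that idea (nothing to do with the actual works), the same composition can be drawn with a slight phase offset per screen, so the two halves never quite line up:

```
// Same composition drawn twice with a per-screen phase offset – a toy
// illustration of varying content across a twin-screen layout.
int screens = 2;

void setup() {
  size(1000, 500, P2D);
}

void draw() {
  background(0);
  float screenW = width / float(screens);

  for (int s = 0; s < screens; s++) {
    float phase = s * PI / 3;  // each 'screen' runs slightly out of phase
    pushMatrix();
    translate(s * screenW + screenW / 2, height / 2);
    rotate(frameCount * 0.01 + phase);
    noFill();
    stroke(255);
    rect(-100, -100, 200, 200);
    popMatrix();
  }
}
```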

See pics below: