Thursday May 6, 2010

Serious Fun

Our Introduction to Creative Technologies lecture had a guest speaker, an information scientist (I think that was the title; I can't double-check right now as the slideshow online is currently corrupt). We lightly touched upon diverse subjects including quantum entanglement, which I will research further, as I had a theory that such knowledge opens up possibilities in the (distant) future for communications devices that don't require radio and operate over any distance. Much like what you see in any sci-fi show where spaceships communicate even though they are light years apart. But I have no idea what I'm talking about; perhaps I should research before posting...

So on Tuesday we started preparation for the 3rd project, "Serious Fun". A new-to-us tutor with a background in illustration introduced us to character development with examples of work both she and other people have done. Our first exercise was to draw character concepts. Yes, with pencil and paper. I imagine this came as a welcome relief to some members of the class, as we've been hit hard with a lot of technology lately and this called upon different skills in our multidisciplinary degree. Despite having a technical bent, I also used to draw and paint a lot and took art throughout school, and as a kid I drew cartoons (which I still have), so this was quite enjoyable too, and a change. It was also mildly frustrating because I'm fairly unpractised at drawing, and coming up with a character was challenging.

It's different from being a kid though. As a kid, cartoons and fictional anthropomorphic characters were part of some natural interest, but what perhaps came naturally without much consciousness of wider issues now has factors to actively consider attached. For example, the character has a function, a combination of identifiable qualities and social features. My cartoon characters from childhood had all of that, but I didn't sit down and consider them before I made the character. It's all very well to just draw some freaky little fuzzball and say "he/she does this and that", but as an adult there has to be some sort of point to the character. Why does it exist? Yes, I know it exists because we were told to make it, but why else? What's its purpose? What is it trying to say? I like that kind of stuff, but sometimes it can trip a person up trying to be creative. Perhaps I should have just drawn characters and later explored who they were. Or perhaps I could have come from another angle, come up with issues I'd like to express and then designed characters around that. Well, I tried various combinations of the above. I also wanted to create characters that were not Disney/cartoon-like, with big eyes etc. We were going to animate these characters later and it was obvious that we'd be conveying something like emotion with them too. A human-like face is pretty boring really; it's much more fun to convey emotion from objects, or perhaps even lifeforms that arguably don't possess any emotions.

I wasn't really happy with any of my characters because I wanted them to fulfil a purpose, and I was probably being a bit pre-emptive about what we'd be doing with that character.

In the afternoon we were to choose a character, develop it further and make a flipbook of about 2 seconds at 12 frames per second. So 24 pages. I'm not entirely sure how many pages my flipbook ended up being, possibly a few more than 24; I've scanned in the pages and put it back together for your benefit.

It's just a character I made for the purpose of the flipbook. (Edit: I've just noticed the YouTube re-encoding has dropped some frames, making it look like the block that falls on the worm just appeared out of nowhere. UGH!)

On Wednesday we were to bring plasticine and a digital camera. We were asked to make a 3D representation of our chosen character and create a stop-motion animation of it moving. I've never done this before and it's funny; there were times as a kid I'd have loved to be able to do this kind of thing but was never able to. Now the technology is completely available to just about anyone. As a kid I did make a few flip books, except they weren't meant for flipping. They were meant to be turned into real animations at a later date when I had the technology. A few years ago I scanned in two typing pads of a character I drew many years before, mainly so I could throw out the originals. I have a pile of TIF files on this computer as a result.

Anyway, we used our cameras to take a sequence of shots and compile them using QuickTime Pro, which has a menu item to do just that from a folder of sequentially numbered images (so make sure your camera is naming the resulting images sequentially). I discovered my $2-shop plasticine was useless, and an hour later that the city's shops appeared to all be out of stock, but one of my classmates was nice enough to lend me a small amount to do my animation. I still didn't have a character in mind but used a snail that I'd drawn randomly, as it was easy to mould. I didn't really plan what it was going to do; perhaps move to the camera, check it out and move on. So I started doing that, and considered how a snail might move. Its movements probably aren't representative of how a real snail moves, but it has more of a human element to it, bobbing its head as it slides along.

The camera has manual focus, but when you set it, it permanently superimposes a close-up square of the focused portion in the viewfinder over the picture, making it difficult to see the rest of the frame. Surely you can switch this behaviour off, but I don't know how to yet, so I set the camera to macro focus (close up). As the snail got closer it got more out of focus, as the camera was focusing on the background. This did however give me the opportunity to explore animated focusing and make it appear deliberate. When the snail stopped at the camera, over a few frames I slowly brought the camera into focus with the manual setting. Unfortunately at one point the camera auto-powered down and lost the manual focus before I finished the film, so the snail goes out of focus again. Anyway, at the end I was going to make it slide off, but decided it might be amusing to make a rocket appear from somewhere, presumably a door on the shell, so the snail suddenly became turbo-charged, with perhaps a camera pan for good measure. I had some time before the afternoon presentation so I put the result in Adobe Premiere Pro and added some sound effects and titles. One last slight snag was that Premiere was set to 25 frames/second, so it interpolated the frames that were missing, which made it look silly when our tutor did the frame-by-frame analysis at the presentation. Tech issues... always annoying. The solution would have been to set the project up as a 24 fps movie and set the field options to not interpolate (in the menu it's called reduce fuzziness I think). You can't just set a movie to 12 fps as far as I'm aware. Well, it's not immediately obvious how to, anyway.
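
(As an aside: if all I'd wanted was to preview the frames at exactly 12 fps with no interpolation, a few lines of Processing would do it. This is just a sketch of the idea, assuming the frames sit in the sketch's data folder and are named frame_001.jpg onwards; the naming and the frame count are placeholders, not what my camera actually produced.)

    // Preview a stop-motion image sequence at 12 fps, no interpolation.
    // Assumes frames named frame_001.jpg ... frame_024.jpg in the data folder
    // (placeholder names, not what my camera actually produced).
    int numFrames = 24;
    PImage[] frames = new PImage[numFrames];
    int current = 0;

    void setup() {
      size(640, 480);
      frameRate(12);                        // play back at 12 frames per second
      for (int i = 0; i < numFrames; i++) {
        frames[i] = loadImage("frame_" + nf(i + 1, 3) + ".jpg");  // nf() pads: 1 -> "001"
      }
    }

    void draw() {
      image(frames[current], 0, 0, width, height);
      current = (current + 1) % numFrames;  // loop round like a flipbook
    }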

In the afternoon, after viewing the videos, we were asked to get into groups of 3, combine our characters and make a 10-second movie (120 frames), considering how the characters interact along with how they move, etc.

Our group came up with a basic idea: the characters were my snail and another person's blue Yeti and Ninja. The Yeti is eating from a paper bag full of food (which was an inanimate version of the 3rd person's character) when the snail approaches; the Yeti turns around, sees the snail and runs away. Meanwhile the Ninja magically appears every now and then from behind plasticine "trees". The final result has a few things I was personally not happy with; I felt there was a lack of consideration given to how long it takes to react to an environment, and some movements were far too quick and spread out, but we managed to get 128 frames. I took it home and put it together, adding sound effects, and I also slowed the action down to about 80% speed because I felt parts moved far too fast. But as it was already 12 fps I couldn't slow it down further without making the jerkiness totally unacceptable. The end is a bit disjointed but it seemed to be well received by the class.


Saturday May 1, 2010

Processing

Another retrospective post, but I'm glad to say I actually have a home, space, a desk, computer, internet, my belongings... yay! So I'll be more current again. I considered that I'd like to have edited the last post because there was a lot of venting, but this blog is usually part of the submission of projects, so I can't really change it after a hand-in deadline; it's fixed. Plus I looked back and it wasn't quite as bad as I remember it. Everything went wrong, and more time or better management of what time I did have was not the answer; it was resources. It was like being used to working in Photoshop in OS X with a 30" monitor, then suddenly having to do the same work on a 13" 800x600 display on a Pentium 2 running Windows 95. Or something. Anyway...

The Introduction to Creative Technologies lecture was cancelled on the submission day, so once Project 2 was submitted I went home. Well, to a friend's apartment for a few hours while I waited for my ride back out of town, but small details.

For the rest of the week, our 3rd paper for the semester started, and it is to be full-time for 2 weeks (not sure what happens after that). In Programming for Creativity we were introduced to an open-source language called Processing, skewed heavily towards visual communication and a good introduction for visually-oriented people who have never programmed before. Each day we were introduced to elements of the language and expected to complete exercises. I think the pace was probably a little too fast, as a few people struggled to keep up. I've mucked around with a few high-level languages in my time (and dabbled with some low-level ones too), so fortunately I wasn't one of them, but the pace was too fast for me to take time away from what I was doing to help anyone else struggling; I was still learning the language like everyone else, after all.

We started off learning about comments, which are code that does nothing. They're a way of putting your own words into a program so that you, or anyone else reading the code, can understand what it's actually doing at any one point. You see, computer programs are fairly cryptic to look at, even to experienced programmers; you generally can't just read one and understand what's going to happen. You have to sit down, concentrate and follow it, and even then it may not be completely clear. This is because code is usually full of things like coordinates and variables, which are essentially placeholders for numbers or other values (such as true and false, and text). Variables usually change over the course of a program due to equations and conditions (or both). A condition, by the way, is a test to see if something has taken place, such as "if the user clicks the mouse button then do this, otherwise do that".

Without boring you too much (it's not actually boring if you can see results, but it does make dry reading in a blog), the building blocks of pretty much any computer language are as follows. Comments: ignored by the program; they're to help the programmer, or some poor soul who has to read someone else's code, keep on task. They're also useful for "commenting out" sections of real code to see what it does and how the program runs without it (if it runs at all). Variables: the placeholders for information within a program. You don't refer to the actual information because a) you may not know it yet, b) it might change throughout the program and c) you might want to reuse that information multiple times. Conditions: used to test whether an event has occurred or a variable has reached a certain value. It's a way of making decisions; otherwise computer programs would do one thing and never change. How, for example, could you have Photoshop if Photoshop can't test that you have selected a tool? Loops: these contain code that is to be run over and over again, perhaps a certain number of times, or perhaps until a condition on a variable is met. Functions: sub-programs that perform a specific task and can be called upon again and again. In Processing, and lots of other high-level graphic languages, there are also specific instructions for drawing, such as ellipse, line, rect (for rectangle), point and many others.

What is quite interesting to me is how closely it seems to follow Postscript. Postscript is a graphical language and the first technology that made Adobe rich (makers of Photoshop, Illustrator and Acrobat, to name a few). It's also the language that made the Apple Macintosh useful in one industry in particular: it democratised publishing, gave rise to the term Desktop Publishing and made an entire industry redundant in a few short years. Postscript describes pages and what they look like, and if you are familiar with Illustrator, you might not realise it, but you sort of draw Postscript programs in Illustrator. What made Postscript popular is that it's easily generated by machines and also easily read by Postscript printers such as the original Apple LaserWriter, and thereafter every piece of computer equipment associated with churning out offset printing plates. Anyway, in Processing you can set the stroke and fill attributes of objects that you are about to draw, much like Postscript. Or Illustrator. You can also rotate, scale and transform the canvas that you are about to draw on so that every object drawn will follow those coordinates. Which is exactly like Postscript.
Which makes sense for a number of reasons, not least of which is that if it's written for visually-oriented people, and lots of visually-oriented people are probably quite familiar with the concepts from Illustrator, then they'll pick this up fairly quickly.
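
To make that a bit more concrete, here's a tiny sketch of my own (not one of the class exercises) that touches each of those building blocks - a comment, a variable, a condition, a loop, a function - plus the Postscript-like stroke, fill and transform calls:

    // A comment: ignored by the program, purely for the reader.
    int count = 12;                        // a variable: how many petals to draw

    void setup() {
      size(400, 400);
    }

    void draw() {
      background(255);
      translate(width / 2, height / 2);    // move the origin, like Postscript's translate
      if (mousePressed) {                  // a condition: react to the mouse
        rotate(radians(frameCount));       // spin the whole canvas while the button is down
      }
      for (int i = 0; i < count; i++) {    // a loop: repeat without copy-pasting
        drawPetal(i * 360.0 / count);
      }
    }

    // A function: a reusable sub-program.
    void drawPetal(float angleDegrees) {
      pushMatrix();
      rotate(radians(angleDegrees));
      stroke(0);                           // set stroke and fill before drawing,
      fill(200, 100, 100);                 // just like Postscript (or Illustrator)
      ellipse(100, 0, 60, 30);
      popMatrix();
    }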

So our exercises slowly became more complicated, incorporating more of what we learnt.

Day 1 (Tuesday) was mostly about making shapes on the screen and finding ways of condensing complex code into simpler code using loops (there's a miniature example of the idea below). Day 2 was a bit of an oddball: to replicate real-life artworks on the screen using computational methods (programming them). Obviously not attempting to recreate the Mona Lisa with pixel-perfect accuracy, but to make a connection between one's program and the artwork; it had to be recognisable as an attempt at a well-known picture. I got a bit bogged down that day helping people who were completely lost.
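
The Day 1 condensing idea in miniature (a sketch of the idea, not my actual exercise): the long-hand version would be ten ellipse() calls with hand-typed coordinates, while a loop does the same job in a couple of lines.

    void setup() {
      size(500, 100);
      background(255);
      noStroke();
      fill(50, 50, 200);
      for (int i = 0; i < 10; i++) {
        ellipse(25 + i * 50, 50, 40, 40);  // a row of circles, 50 pixels apart
      }
    }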

Day 3 was to make colour transitions and move shapes around the screen, not by plotting every single movement or colour but by using loops and variables to do the work for us (a tiny sketch of the idea follows below). We had Friday, the weekend and Monday to complete homework, which was to combine all we had learnt to make a shooting-gallery type of game. Unfortunately this was also the weekend of the big move to a real home again, and not only did I have to move my belongings, but also help family move theirs. Yes, family. Odd thing: 2 siblings have been in the market for a new place too, one moving back to Auckland after a work relocation and another with a flat breaking up. Er, anyway, this very annoyingly took all of my weekend, then what I was fearing mid-last-week happened, and I got rather sick with some odd cold/cough combo. The result is that I didn't make it in for most of the following week. But I've been reading a prescribed text and I think I managed to keep up. I missed out on learning some maths functions, sine/cosine and applying them to movement, and matrices, which have intrigued me from a young age because I believe they have something to do with projecting 3D space onto 2D coordinates. But I doubt it was covered in that much detail, so I can learn that kind of thing on my own.
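
The Day 3 idea in miniature (again just a sketch of the idea, not the actual exercise): one variable drives both the movement and the colour transition, so nothing has to be plotted by hand.

    float x = 0;                           // the variable doing the work for us

    void setup() {
      size(400, 200);
      noStroke();
    }

    void draw() {
      background(255);
      float t = x / width;                 // 0.0 at the left edge, 1.0 at the right
      fill(lerpColor(color(255, 0, 0), color(0, 0, 255), t));  // red fades to blue
      ellipse(x, height / 2, 40, 40);
      x = x + 2;                           // move a little every frame
      if (x > width) {                     // a condition: wrap back to the start
        x = 0;
      }
    }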

The big presentation on the Friday was a game, either Pong or Tron. I was starting to feel a bit more alive on Thursday night, so I made Pong and decided to go in on Friday. I think it turned out okay. Pong is a 2-player game and I think we were just expected to make it controllable by 2 people. My version is a single-player game where you play against the computer. It has some basic AI and the computer is a decent opponent; it does on occasion try to send the ball to the opposite side of the court from where you are. But you can beat it, and when you do it isn't just because it's decided to let you beat it; it can actually miss (a rough sketch of what I mean by that is below). Once the presentation was over I went back home, still feeling quite sick. We have 2 weeks to present all of our work from the last 2 weeks on a CD. That should be more than enough time for me to finish reading the texts that we have been given and build the programs that I missed out on during the second week. Then again, I don't know what's in store for us next week.
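
This isn't my submitted Pong, just a minimal sketch of the kind of "AI" I mean: the computer's paddle chases the ball's vertical position, but its speed is capped, so it can genuinely miss.

    float ballX, ballY, ballVX = 3, ballVY = 2;
    float playerY, computerY;
    float paddleH = 60;
    float maxAISpeed = 2.5;                // the cap is what makes the computer beatable

    void setup() {
      size(600, 400);
      ballX = width / 2;
      ballY = height / 2;
      playerY = computerY = height / 2;
    }

    void draw() {
      background(0);
      // move the ball and bounce it off the top and bottom walls
      ballX += ballVX;
      ballY += ballVY;
      if (ballY < 0 || ballY > height) ballVY = -ballVY;

      // the player's paddle follows the mouse; the computer's chases the ball
      playerY = mouseY;
      computerY += constrain(ballY - computerY, -maxAISpeed, maxAISpeed);

      // draw paddles (left = player, right = computer) and the ball
      fill(255);
      rect(10, playerY - paddleH / 2, 10, paddleH);
      rect(width - 20, computerY - paddleH / 2, 10, paddleH);
      ellipse(ballX, ballY, 10, 10);

      // very rough collision: bounce the ball back if a paddle is in the way
      if (ballX < 25 && abs(ballY - playerY) < paddleH / 2) ballVX = abs(ballVX);
      if (ballX > width - 25 && abs(ballY - computerY) < paddleH / 2) ballVX = -abs(ballVX);

      // if someone misses, serve again from the centre
      if (ballX < 0 || ballX > width) {
        ballX = width / 2;
        ballY = height / 2;
      }
    }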


Sunday April 18, 2010

In the face of adversity

I just don't believe how this project has gone. Well I sort of do. I knew it was going to be difficult while living out of a suitcase, but just how difficult it's become is unbelievable.

The second week I knuckled down on the project, now that I knew what I was doing. I did a bit more research on Monday and decided to head into AUT that week. Living more than 50km away, that can be a bit expensive with petrol, and then there is parking and just the time it takes to get there and back. I know people who live in the city so I can crash somewhere, although parking costs are completely unbelievable; it can cost $22 a day in the city. On Tuesday I tried to see what I could still do at home, putting off the cost of getting into and staying in the city. I then discovered that I could actually get a lift into the city, the only caveat being I'd be leaving home at 6am.

So why am I telling you all of this? Because all of this life admin has been an overriding theme this holiday and I'm really annoyed.

On Wednesday I came into AUT. I was able to get a lift later in the day through alternative means, first heading to a garage where most of my belongings are to get a few things I'd need later in the week. At AUT in the afternoon, I mucked around with MaxMSP, working out how to load sounds into memory for later playback. This took quite a while. I knew how to load sounds straight from disk as needed, but with 8 sounds I didn't want any lag, so I had to work out, with the help of the documentation, how this was done. Things don't always work the way you expect them to. I also needed some decent drum sounds and first trawled this Mac's hard disk, as it has some music and audio software installed, but that was mildly fruitless; I didn't really like what I found. I went home and searched the internet. It's a little bit annoying; I'm pretty sure I have a decent drum kit or 2 on my desktop computer. The one that is packed away.
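
For the record, the idea itself is simple even if the MaxMSP patching wasn't: load every sample into memory up front so nothing gets read from disk mid-performance, then trigger the samples from key presses. A rough equivalent in Processing using the bundled Minim library (not what I actually built; the file names and key mapping here are placeholders) would be:

    import ddf.minim.*;                    // Minim, the sound library bundled with Processing

    Minim minim;
    AudioSample kick, snare, hat;

    void setup() {
      size(200, 200);
      minim = new Minim(this);
      // loadSample() pulls the whole file into memory here in setup(),
      // so nothing is read from disk while playing
      kick  = minim.loadSample("kick.wav");
      snare = minim.loadSample("snare.wav");
      hat   = minim.loadSample("hat.wav");
    }

    void draw() {
      background(0);
    }

    void keyPressed() {
      // each hacked keyboard contact arrives as an ordinary keystroke
      if (key == 'a') kick.trigger();
      if (key == 's') snare.trigger();
      if (key == 'd') hat.trigger();
    }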

Thursday and Friday were both 5:30am starts, getting into AUT at 7. I was able to put a lot of time into the code, but really, I'm a bit annoyed at how long it took just to get a program that played a sound when you hit a certain key. It's really simple, but I just had a raft of newbie issues. On Friday afternoon I bought a soldering iron, a stand and some solder. I'd quite have liked to get some extra flux, as I'd be soldering to a sanded circuit board and the copper would probably oxidise straight away, making it hard to solder to, but I couldn't find any. I also bought 50m of black wire for $6. Multiple colours would have been handy, but I could get by just tagging the wires as I went. But I felt mostly prepared to build the jeans. Sort of. I'd be improvising with some household materials to make sensors, but I thought it would probably work.

Saturday was the big day of making the jeans with sensors. I found an old pair of jeans and started making the sensors. They were made from polystyrene foam sandwiched between 2 layers of tinfoil. Wires were taped to the foil and it was all taped together with masking tape. Low budget, but as long as they worked. These sensors were then wrapped around a piece of Chux cloth which was then sewn to the jeans. Wires were to be routed down the seams. Oh, it all sounds so easy. It kind of would have been too, but first my multimeter stopped working. Well, more annoyingly, it was no longer a reliable continuity tester; it seemed one of its leads was broken.

When I went to sew in the sensors, the sewing machine started to jam, and after a couple of hours of mucking around trying to work out just why it was jamming, I decided to hand-sew the rest. Now I was starting to get behind schedule, so I opted to get some fabric tape to tape up the wires on the inside. Nearest service station: 8km. Living in the wops was getting on my nerves. I had a birthday dinner to go to, so that was it for Saturday.

On Sunday I taped up the wiring and started soldering up the keyboard hack. I'd opted to connect a parallel printer cable to the keyboard circuit board and make a connector at the jeans. This would allow me to unplug them and not get tangled up when demonstrating the project. It was all going okay, except a few joints at the printer connector end did not want to solder, and a couple of wires at the keyboard end. Wishing I had the flux... Eventually I did manage to get everything except 2 wires at the keyboard end soldered, tested it, and everything worked. But those 2 wires then went on to cost me a lot of time. One in particular would not solder. To cut a long story short, with my constant attempts to solder it, the heat must have spread to the integrated circuit and fried the hack. It stopped working. There was another old keyboard lying around, so I started pulling that to bits and went through the entire process of determining the keyboard mapping again. Except this time I had to make sure it fit the jeans that were already wired. Once again I had trouble soldering. Now, I'm not totally inexperienced with soldering; I've done quite a bit. But this would not work.

When I went to test it, nothing happened. At this point I was about to throw in the towel, very frustrated at my lack of workspace and the fact that half my stuff is 50km away and I can't find anything. It's all turned this project, which would otherwise have been fairly simple, into a complete nightmare.

However, I realised that I was testing it with my MaxMSP program, which is only looking for certain keys, and my key mappings had changed. I tested it in a text editor and it did work after all. So I persevered and finished, set up the shoe part and then moved on to other parts of the project... Like... the video demonstration. Ah yes, I have my video camera but my FireWire cable is in a box in Auckland. So is the tripod. I have a digital camera though, and it takes video, so I used that. It sat on a bench at a crap angle. The videos look like they are 15fps.

I made the video, did a bibliography, wrote a reflective statement and... came here. I've been up all night. It's due at 12. Everything except the visualisation is done, which I thought I'd have been able to do on Thursday. No, Friday. No, Sunday... Oh, and on that: part of the original visualisation was to have video footage in the background (under splotches of colour that blobbed in time to the music), but the internet got wound down to 56kbps so YouTube was unusable. I'd tried to change the plan a few days before it happened but... never mind.

Very annoyed, but I move into a new place next week. But I suppose then I'll be back to working in groups.

Update: I've been able to at least make a mockup of the visualisation in Premiere Pro.

Once I'm moved, I'll decorate this blog with pictures and video.


Tuesday April 13, 2010

Holidays! ... Er.. Holidays?

So I've had my first week off from my course. It started with Easter, but when everyone else went back to work I remained off. Wow, being a student rocks! Well, it wasn't a complete break; we have a project and lots of study to do. This particular break is very difficult for me personally, as I don't have my usual space or my desk or my desktop computer (with a mouse...), and half of my stuff is in a garage 50km away. Fortunately I have a place lined up; unfortunately it's not ready to move into for another 2 weeks. Do you know how hard it is to do a project when you're not really living somewhere, surrounded by all your belongings? Very, very hard.

So my first week off consisted of studying. I've been reading tutorials on Jitter, pdfs on wearable technology (some of it a bit laughable tbh), and various links included in the original brief and slowly formulating ideas for this project.

I've had a few different ideas, but one challenge for me was to create something with an actual purpose, as well as demonstrating my understanding of the subject. It's all very well creating a garment that changes the hues of a video when you move your arm, but why on earth would you want to do that? I had one idea of creating a jacket that sensed various gestures: perhaps you could make a squeeze motion and the video would compress, or a rotate motion and it would rotate on the z axis. Okay, not entirely useful either at first glance, but it has practical applications and, more importantly, it would be cool and impressive. But this demonstrates the second challenge: how on earth does one turn a keyboard hack into gestures? A keyboard is on/off; a gesture is variable. I had two possible ideas. One was to have multiple switches for one gesture to indicate the degree of movement, so it wouldn't be smooth, but there would be a couple of degrees at least (there's a rough sketch of this below). Another was to create sensors that could detect variable movement, fed into some sort of circuit that would then output steps depending on the amount of resistance the sensors were outputting. I'm not sure if either idea is very good, but I can see potential headaches in the execution, particularly if one lacks a workspace. Oh, by the way, I'm also currently living 50km away from AUT... Joy.
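
To sketch that first idea: say three switches sit along the path of one gesture and the keyboard hack presents them as the keys '1', '2' and '3' (a placeholder mapping, and the real project would go through MaxMSP rather than Processing). The more switches closed, the further the gesture has travelled, so you get a few coarse degrees out of purely on/off inputs:

    int level = 0;                         // 0 = no movement, 3 = full movement

    void setup() {
      size(300, 300);
    }

    void draw() {
      background(255);
      // use the level to drive something visible - here, the size of a circle
      fill(100, 100, 250);
      ellipse(width / 2, height / 2, 40 + level * 60, 40 + level * 60);
    }

    void keyPressed() {
      if (key == '1') level = 1;           // first switch along the gesture
      if (key == '2') level = 2;           // second switch: gesture has gone further
      if (key == '3') level = 3;           // third switch: full extension
    }

    void keyReleased() {
      level = 0;                           // crude, but it shows the stepping
    }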

Another idea was given to me by a good friend of mine with a sense of humour: Zaphod Beeblebrox sunglasses. Zaphod Beeblebrox was a character in The Hitchhiker's Guide to the Galaxy, and his sunglasses would go black if confronted with anything scary so that he would maintain his cool composure, which was so important. This was kind of a neat idea, mainly for its humour, and it could possibly have been built upon, but there were a few obstacles, one being that I didn't come up with the idea, which always disturbs me. The other is the need to make electrochromic sunglasses. Electrochromic glass (it took me a while to find out what its name was) might in the future be available at every corner drugstore, but in 2010 it's a little hard to come by (spot the reference). But I did learn a bit about the glass; there are a few different technologies available and you can buy the stuff for your home (although I imagine it's expensive). I only knew it existed because I remember watching Beyond 2000 as a kid, and some story featured glass that had a layer of liquid crystal in the middle; passing current through it made it opaque.

Musical Pants

So on Sunday, after skimming through the last of the Jitter tutorials, I had a look at a website called Instructables and browsed the various tutorials in the tech section. Instructables, by the way, is a website dedicated to showing you how to build or do certain things; users make something cool, document it and put up their own how-to. After looking at various fabric switches and sensors, I saw a picture at the side of the page of someone tapping on his legs. Something I've been prone to do when listening to music. It suddenly occurred to me that I could add sensors to jeans and shoes and have them make actual drum kit sounds. Clicking on the picture took me to a tutorial where someone had done something similar, which was a bit annoying, but fortunately his design isn't as complex as the idea I have: I want to incorporate 6 - 8 different drums, his has 2. Also, mine incorporates an optional shoe attachment. What would be pretty good here is Microsoft's experimental pressure-sensitive keyboard, to alter the volume of the drum pieces, although I have no idea how it works and how easy it would be to hack. Then there's the matter of getting working drivers for a Mac. Yeah...

I could create sensors that have a dual response, doubling the number of keys used in the keyboard hack part, but my aim is to keep this thing fairly simple due to having limited time and limited workspace (AAARGH!!).


Thursday April 8, 2010

Human Interfaces

I'll keep this short so I can catch up with blog entries. I've had a bit of a busy/difficult time, moved out of my own HQ and into temp accommodation. Very glad I have this laptop, but desk space would also be nice.

Monday's Introduction to Creative Technologies: we listened to a producer and saw his show reel. I guess what I got from it was how the democratisation of video production technology hasn't just enabled average people to produce below-average videos (à la YouTube) but also enabled professionals and highly creative individuals to produce great results too. I worked in the studio on the keyboard hacks and decided to abandon my complex animation, as I could do something much simpler that would demonstrate that I understood all we had been taught so far. All we were required to do was build some sort of switch from the keyboard that controlled an animation on the computer in some way. I had a mildly amusing idea, and amusing is always a good start I think.

On Tuesday we demonstrated our keyboard hacks. Mine was a stress ball: when you squeezed it, thanks to a microswitch inside, a picture of a llama on screen blew up (with an accompanying explosion sound). This seemed popular enough, so that was nice. Something had to blow up. I chose a llama because it seemed a little bit random, a little bit funny, like the first time I watched Bad Taste with the sheep blown up by a badly aimed bazooka, and finally because llamas seem to hold some odd place in internet memes.

We were then given our project, due 3 weeks from this date (the first Monday after a 2-week holiday that starts next week... well, it's already started as I'm writing retrospectively). Pretty full-on: it's to create a piece of clothing that in some way controls what is on screen. Interactive clothing. We then had a class teaching the fundamentals of MaxMSP again in more detail, as some students were finding it hard going, and then we moved on to Jitter.

Jitter is a component of Max that handles video and 3D processing. It seems rather powerful; in a relatively short time we were getting it to open video and vision-mix it with our web cameras using a slider (a rough illustration of the idea is below). We then controlled some chroma keying with footage that had a blue-screen background.
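
Jitter is its own visual environment, but the vision-mix idea carries across to other tools. Just as an illustration (this is Processing with its video library, not Jitter, and the clip name is a placeholder), a slider-style mix between a movie and the webcam can be done by drawing one over the other with a varying tint:

    import processing.video.*;

    Movie clip;
    Capture cam;

    void setup() {
      size(640, 480);
      clip = new Movie(this, "clip.mov");  // placeholder file in the data folder
      clip.loop();
      cam = new Capture(this, 640, 480);
      cam.start();
    }

    void movieEvent(Movie m) {
      m.read();                            // grab new movie frames as they arrive
    }

    void draw() {
      if (cam.available()) {
        cam.read();                        // grab new webcam frames
      }
      background(0);
      image(clip, 0, 0, width, height);            // bottom layer: the clip
      float mix = map(mouseX, 0, width, 0, 255);   // the mouse acts as the slider
      tint(255, mix);                              // fade the webcam layer in and out
      image(cam, 0, 0, width, height);
      noTint();
    }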

On Wednesday we were to learn some of the 3D controls of Jitter. Unfortunately I was not able to be there as that was moving house day. I didn't move my items to a new place though, I moved them to storage so I have to do all of this again soon. But it's complicated and not what the blog is about. It's also frustrating because it's a massive distraction. My tutor was satisfied that I am capable of learning the lessons on my own though, I've always been self-taught anyway. But I'd like to have been there all the same.

On Thursday, students were able to present their ideas for approval to the tutors hanging around. I wasn't able to attend due to the move and all things related, but I had a chat with a tutor later on. I did attend an induction to the textiles department at AUT and once again was impressed with the facilities (and expertise, of course) at our disposal. AUT owns a machine that can essentially thread a whole garment together in 3D, eliminating the need to stitch pieces of material together as in most conventional clothing. It also owns a machine that can print directly onto fabrics.

Friday was a public holiday.


Sunday April 4, 2010

Aftermath

This has been written about a week later, it's been a busy time here, not just BCT, but also because I'm moving house.

Monday

The week started with our Introduction to Creative Technologies lecture, and we covered what technology has meant and its role over the centuries. It was a bit of a dry one, really.

Back in the studio, our group discussed the video production, so I pulled out my USB key with a mostly completed edit. It was lacking music and could have done with some better images of the results. The group liked it and decided on music to add, as the music in the footage was predictably muffled. It took a while for the music choices to be made and I was thankful we had the video finished. I took the images and music home and added them, along with titles. I also had to write a reflective statement for the project, which proved to be a bit difficult. Usually I'm full of things to say, as probably evidenced here, but I was ready to move on from this project and I'd put so much time into the video that the reflective statement became somewhat of an afterthought. I did manage to put some ideas to paper though. I authored the movie into a DVD image with Encore, bought some blank DVDs and created the DVD.

Tuesday

Our group had to hand in all materials by 1. Our class was at 2pm instead of the usual 10am; we gathered in the computer lab and were introduced to the next project, one that would be individual-based. I found this rather pleasing. We were also introduced to a program called Animata, an open-source application in which you can attach bones and joints to imported images such as JPGs and PNGs. You can then move the bones and joints and distort the image. Before 10am the next day we were to have an image of ourselves moving in some contorted way.

Also, a group of us were introduced to the 3D labs, that is, the rooms in one of AUT's buildings used for construction in wood, metal and plaster, as well as spray booths, ovens and vacuum moulding, among other things. Rather exciting. Aside from the cost of raw materials, we now have the tools at our disposal to make whatever we can think of from those materials. This, combined with the 3D printer and laser cutter, is nice to be aware of.

Wednesday

In the morning we presented our homework, which involved opening it up on a lab computer or our laptops and walking around to see what everyone else had made. I was slightly stung by a difference between the Mac and PC versions of Animata. My photo was high resolution; on the PC you could zoom out, and on the Mac you apparently can too, but not using the same methods, and there is no documentation with this program. So it was a bit too close up to see what I'd attempted to do. Animata is kind of neat in what it can do (though there are other programs that can do these things) but it's really not very polished. Not surprising though, it's the 004 release, which I assume means 0.04 or something... or at least nowhere near a 1.0 release.

We then started our first introduction to Max MSP, which is a visual programming environment. Behind the graphics is real code in C++ (a low-level language, meaning it's powerful - most real software is created in C - but very hard to understand for a beginner), but the interface is simply icons that one links up into a process. Despite the simplicity, there were a few concepts one had to get familiar with or it would be hard going. One of the main concepts is that in order to make a command start working it needs to be "banged" - basically sending an event saying "do whatever you do now" to an icon. So a Max MSP program looks sort of like a chain reaction. Max MSP is apparently used by communities of people doing audio-visual presentations, such as VJs and music performers, and it's easy to see why. Its simple interface is easy to program once you get used to it, it's pretty powerful and it can easily control other programs and devices, such as Ableton Live (a brilliant music creation program that has found its way in amongst big players such as Cubase, Logic and Pro Tools), and... Animata. Our homework was to use some existing examples of code that we had been given to control our Animata file with an Apple remote.

Thursday

In the morning we presented our work; we'd also been told to bring in an old keyboard to pull apart. The afternoon consisted of pulling the keyboard to bits and tracing the connections for certain keys back to the circuit board in the keyboard. This was so that we could re-purpose the keyboard to be whatever on/off device we wanted. We were also shown the engineering lab with its soldering facilities, and how to solder. Er, well, I was busy in the computer lab and missed it, but luckily I'm rather familiar with soldering and electronics in general.

Friday

There was more keyboard work, and we were given a task for the following Tuesday. Our task was to create an Animata file (or use our existing one) and control it with our keyboard hack.

Also on Friday, my new MacBook Pro turned up. The last few weeks have proven to me how much of a disadvantage it is to not have a laptop on this course, and I was glad that it had turned up. You can get by, but presentations become more unpredictable as you move your working files from one computer to another, you are limited by the size of your USB stick (and 8GB was too small for the video work I was doing earlier in the week), and you are stuck in the computer lab while others are in the studio. One is more likely to go home to do stuff on the home desktop, but then you can't suddenly have a good idea and run it by a tutor because you are at home.

On the weekend I studied Max MSP, as my idea was a bit complicated. I found the keyboard hack rather easy, but I wanted to animate myself from side on walking. This was quite complex as first I had to work out the steps involved in walking, then work out how to program that in Max MSP which was out of the scope of any tutorials we had received.

Mucking around in Animata, the file I'd made of me side-on consisted of several layers, one for each limb that was animated. The process for making multiple layers in Animata is a bit convoluted, and the result isn't always what you expect (you probably can get predictable results with experimentation but the program is so clunky, one doesn't really want to bother experimenting). You cannot rearrange layers in a file once they are imported. So if you don't get it right, you have to start again. Animata has no undo either.

However, I noticed that the file preview doesn't show a picture, but what looks like XML. XML is a markup language that looks very similar to HTML but is customisable to whatever application it's made for - i.e. there aren't really any predefined commands or markup; you make them up for the program you are using. The object of XML is to be both human- and machine-readable, although "readable" might be a slightly rose-coloured description. This was great: all I had to do was change the file extension to .xml and load it into Dreamweaver (or any other text editor, but Dreamweaver has a collapsible code function making it easier to navigate long documents) and I could quickly work out the structure of an Animata document, take out mistake layers and reorder other layers (a scripted version of the same idea is sketched below). I did that and it worked. I had a clean file. But I was concerned that what I was doing was overly complicated and straying away from the brief for Tuesday. Plus half my weekend was taken up with checking out a new place to live...
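
I did the surgery by hand in Dreamweaver, but for the record the same inspection could be scripted. A minimal Processing sketch along these lines would list the file's top-level elements and save a copy (the element handling is generic; I'm not claiming to know Animata's actual schema, and the file names are placeholders):

    void setup() {
      XML doc = loadXML("walkcycle.xml");     // the Animata file, renamed to .xml
      XML[] children = doc.getChildren();     // whatever the top-level elements happen to be
      for (int i = 0; i < children.length; i++) {
        println(i + ": " + children[i].getName());
      }
      // once you know which child is the mistake layer, it could be dropped, e.g.:
      // doc.removeChild(children[2]);
      saveXML(doc, "walkcycle_clean.xml");
      exit();
    }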


Sunday March 21, 2010

Lights! Camera! Action!

I'm going to try the following format this week....

Monday:

In Introduction to Creative Technologies we had an incredibly interesting lecture from an industry insider on 3D printing technologies. Summarising: there are a few different processes, some superior to others, but all with a niche of one kind or another, and in the last couple of years the materials that can be used have come to include metals, plastics, textiles and even bio/organic materials. One company in the States has been building spare body parts (in the example we were shown, new bladders) for patients from cells grown from the patient's own stem cells, so that when the new part is implanted there is no rejection. Experimental technologies included house building and printing at nano scale. It became apparent that there is the possibility of major disruption, as the economics of this method of construction could displace many jobs.

Well, this has happened many times in the past, and sometimes the shift is very dramatic. From my own experience (although not direct, thankfully - I entered the industry after the most major changes) the Apple Mac and Adobe Postscript, page layout software, then CTP (Computer-to-Plate) and digital photography made loads of professions fairly worthless. Some people are scared of such change; I think if you are, then you are more likely to wind up on the lower end of the socio-economic scale when your job gets displaced by new technology. Learn to deal with it.

The guest lecturer was however under the impression that the ability to print in 3D was a step on the road to more leisure time and 30-hour weeks. I'm no economics wiz but I don't think it works like that. If anything we have become busier in the last 30 years with the advent of computers for example. There are other factors involved too of course but what seems to happen is more useless crap gets produced for lower margins and the operators are expected to work faster and smarter to keep up. I don't see that changing until our culture and attitudes change.

Back in the studio, our team had found lasers on Trademe and had made a deal to get 3 for $100, which amounts to about $14 per group member. This was a deal, so when it comes time to sell them we stand to get all our money back. We're poor students. The only concern to me was that they are all green; we could have paid $100 and got one blue and one green, and bought a lower-powered red for $5. But the team believed the lower power of the red would not show up at all, and while I'm not so sure, I have no hard data and we can't test it, so three green lasers it was. Two members headed off to pick them up and the rest of us planned a few details on the robot.

I tried to work out the maths behind the sensors on the sensing robot. As stated before, I want the software to process the input and churn out numbers based on that processing and the interaction of the sensors, rather than just mapping input directly to output. I found it a bit difficult to do in the studio, so I finished it at home. I now have a lovely scribble flow diagram. Tomorrow I will translate this into actual "code" and also see where everyone else is at. I'm still not entirely sure about the actuating robot design, but tomorrow things should become clear for everyone as we will have all of our materials. I'm also a bit worried about how we are going to transmit from one robot to the other. The NXT computers have a Bluetooth radio, but we've never used it. One group member believes we will be able to ask the 2nd-year students; I'm not sure they will be much help or know much more than I can find out myself, but for now I'm going to be optimistic.

Tuesday:

(Written on Wednesday) On Tuesday we reconvened and I attempted to turn the diagram in my book into code. The actuator robot had changed its design, due to one member having similar issues with the previous design too, so it looked more hopeful. But as I've become the coder for this project, I haven't stuck my nose too far into the robot's progress. I took the work home later in the day, but started to consider that the new design didn't look too much like what I thought we'd all agreed on, and that I might wind up coding movements that are not possible for the robot that has been designed. Hmmm...

Wednesday:

I came in this morning and there was a finished-looking actuator robot. It looked more appropriate in some ways, but the new design had not taken into account that its movement was now limited and I'd have to somehow program that in, and Lego Mindstorms is not exactly a great "programming" environment. It's slow and clunky and lacks simple UI elements such as scroll bars, leaving you with a hand tool (and no keyboard shortcut) to move around with. However, I decided not to worry about that yet, as I still had to get the maths done to turn the sensors' inputs into outputs... I'd deal with how the outputs were interpreted at the other end later. I also still had to work out how to send those outputs via Bluetooth to the other robot. We did however get to try out laser light waved at a wall and photographed by a long-exposure camera, and the results looked promising.

Laser through Lens

Laser Scribble

I spent the afternoon "writing" the code for the sensing robot, turning the maths into little programming blocks. It takes a while to get used to. Some of the things I'm trying to do seem convoluted in NXT but would be fairly basic in a real language. It makes me wonder if the compiled code that NXT produces is bloated and slow. An example of a basic operation would be to take the inputs of 2 sensors, add them together, then divide by the input of a third sensor in order to give a number used to control the speed of one of the motors. Basically I'm making up equations based on the inputs to come up with numbers for various parameters of the output instructions. One difficult one was converting distance from the ultrasonic sensor into a number from 1 to 4 depending on certain ranges. I wrote 2 versions of the same code: I liked the 1st version better because it looked cleaner and more skilful (and probably executed faster), but with the second version, which had comparison loops within comparison loops, I could be more sure it was actually doing what I thought I'd told it to do. It's quite difficult to test the code; you can output the result to the screen on the NXT, but I found that difficult to get working properly. So I'm working kind of blind, and hoping my logic is solid.
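
The NXT-G blocks are graphical, so here is the same logic written out as ordinary code (Processing/Java syntax) just to show what the blocks are doing. The sensor readings and the distance thresholds are stand-ins; the real values live in the Mindstorms program.

    // The maths behind two of the blocks, written as plain functions.

    int motorSpeedFrom(int sensorA, int sensorB, int sensorC) {
      // add two sensor inputs together, then divide by the third to get a motor speed
      if (sensorC == 0) sensorC = 1;       // avoid dividing by zero
      return (sensorA + sensorB) / sensorC;
    }

    int bandFromDistance(int distanceCm) {
      // convert the ultrasonic distance into a number from 1 to 4 (the ranges are placeholders)
      if (distanceCm < 20)  return 1;
      if (distanceCm < 50)  return 2;
      if (distanceCm < 100) return 3;
      return 4;
    }

    void setup() {
      // quick check with made-up readings
      println("speed = " + motorSpeedFrom(40, 35, 3));   // (40 + 35) / 3 = 25
      println("band  = " + bandFromDistance(65));        // falls in the 50-100 band, so 3
    }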

I'm a bit concerned that my group sits in the lab with me and mostly slows my concentration. If I ever do this again I'm going to make a flow diagram with their input to the whole thing. It seems a bit silly to have one person know what's going on with the code while everyone else sits there bored. Plus I'm so busy with the coding that I haven't been able to make suggestions about the actuating robot. I think the design is mildly flawed, and not what I thought we originally agreed on.

I stayed until quite late with one member who was stuck there until 9pm. I took the robot home and mused that it would be my chance to fix the design, but there are a few good and obvious reasons not to. It's not my place to change things without group input. This project is not assessed but a learning experience, and the group needs a chance to learn from mistakes and reflect upon improvements. I don't need to be emotionally involved in it, but this does illustrate why I don't usually like working in groups.

I mentioned this in a previous post but took it out later, fearing that it wasn't the most diplomatic thing to say on a blog classmates can read. Put simply, I've never worked in a group, even in a commercial/work situation, where the result was more satisfying than what I could have done on my own. This may sound like "Chris can't work in teams", but I can, and do. That is just my life-long experience of groups. And in BCT I'm somewhat older and more experienced but trying to avoid the "older student drawing on life experiences" stereotype, so it can be frustrating when I see some of the paths we are going down. Right, unprofessional rant over. Another reason why I didn't alter the robot is because I am tired and I don't have any resources available to me right now. This is as frustrating as hell because in past lives I've had money, tools and materials and a space to work in; at the moment I have none of that, plus all the shops are shut anyway. Off to bed.

So it was getting a bit urgent now because a demonstration is meant to happen tomorrow. First, let me explain the design flaw (in my opinion). While it's a better design than the previous one, we seem to have discarded the idea of using prisms and mirrors to direct the laser light onto the wall, instead mounting the lasers directly to the robot actuators. This limits the range of freedom available to the motors, which makes it harder to program, and it sort of also means some of my code is redundant while I have to re-write other parts. Some of it I'm not sure how to write in such a short time. While the robot can detect motor rotation, it only knows the rotation from where it last was; it's not an absolute number. This means that if the starting position is different, the robot could wave the lasers off course or the arms could crash into each other. I also think it reduces the artistic potential. Take a look at any laser in a night club and you will see that the laser is static and it's the mirrors in front of the laser that alter the course of the light.

So I pointed this out at a meeting in the morning and was met with scepticism, so I demonstrated (I found having to demonstrate what seemed obvious to me a bit painful). There was still resistance, but I went in search of reflecting objects in the city. While out, the battery for the test laser failed and I had trouble finding the right objects. Bear in mind the demonstration is tomorrow, and this is stuff I'd hoped the rest of the group would have done earlier. I had also run a test program for the Bluetooth, and while it worked, I'm not sure that it's really all that stable or workable for the project.

While I was gone, the group had started coming up with alternative strategies, which I found very pleasing, as the weight of success being on my shoulders, while given limitations I was unhappy with, was getting burdensome. I was happy to drop the Bluetooth completely as an unnecessary complexity, especially as the new design had 2 receiving robots and I'm not sure how that would work given that there is what's called hand-shaking involved with Bluetooth. That is, 2 Bluetooth devices have to connect to each other and agree to talk; it's not just a case of one machine sending out signals like a radio station and nearby devices picking them up. Dropping the Bluetooth did present the problem of what the robot would be sensing now, though. So it was also decided that the actuating robots would now sense sound and basically dance to music. Another member wrote a basic program while I was gone, tested it, and it worked. This was great, although I offered to submit parts of my code that processed the input to add some dimension to the program, but it was declined to keep things simple. At this point I didn't really mind, but I would like to point out that it was the Bluetooth that failed, not the algorithms, and they could have made the code and possibly the dance more interesting. But I can understand that people do not want to know about anything to do with a certain approach once part of it fails them.

But we now had a working machine, not as grand as the original idea but I think everyone was over it, especially me. I also wanted to distance myself from the project a little bit now.

We waited until later in the night to record on video the team (or part of the team; 3 members had other commitments) setting up the robot and capturing our images. We first tried in the computer lab, then moved to the studio where we recorded our first images. We then went into the city to find a wall to project onto. I think the aim was to get video of people reacting. The problem was that it was far too light for a long-exposure camera. I'm not a photographer, so my call that "it's too light" may not have carried much weight, but I thought this was kind of obvious from the beginning. Nevertheless we spent a couple of hours mucking around capturing shots.

From there we went back into the AUT building and into the stairwell. It was a lot darker, and a member of our group wanted to aim the lasers at nothing in particular, just the staircase. I wasn't certain about this either, but I was quietly getting irritable as it was about 9pm. However, this method proved to produce the best shots of the night, so it was rather inspired. The effect was more three-dimensional than projecting against a wall, and this is what he had planned. It worked. This is the same team member who came up with the sound-controlled alternative to the Bluetooth, so he'd really saved the day a couple of times.

As soon as it was over though I wanted to go home so I did.

Laserbot side view

This is the day of the presentation, and we got to see what other groups had come up with. I was quite impressed with all of the robots, although an awful lot of them had microphones attached and sensed sound, which was a bit of a pain given that we were also doing that now. I think sound was the most obvious choice, as light sensing works best when there is some degree of mobility, touch sensing is just an on-off switch, and ultrasonic sensing also assumes either mobility or lots of movement around the robot. With sound you can crank up a stereo and say it's responding to the music (even if it is only doing so very crudely).

The first group had a robot that moved back and forth above a tank of water, dropping food colouring to music, while a camera focused on the tank and recorded. It was quite nice and tranquil, especially as they weren't playing some terrible, obnoxious commercial pop, which... well... never mind...

Another group had 2 robots, one drew with chalk, charcoal or crayon while the other had an eraser, they sensed each other and the wall that enclosed them. The artwork was kind of appealing.

Another group had a robot sensing strong light and music controlling a radio controlled car that had paint squeezed from it, making tyre tracks on the large canvas.

Other ideas included a robot that sensed the edge of the paper and drew dotted lines. As more lines were drawn the robot could also sense those and react, so it was reacting to its own artwork. Another robot sensed music and... I'm not sure how it worked; it apparently controlled a mouse and was drawing lines in Photoshop. Another drew circles with salt, and the last also used a long-exposure camera but threaded wool from the edges of a box that had wire catchers attached, like some sort of loom. They introduced it with a commentary about people feeling threatened by robots taking over their industries. I thought it interesting that they had done this, and made a machine that threaded wool, yet didn't know to reference the Luddites. The Luddites were people who worked in the textile industry in the 19th century, during the industrial revolution, who protested (sometimes violently) against new automated looms that represented an end to their jobs. These days the term is used disparagingly to refer to people who reject new technology simply because they don't want to learn it. I've met a few in my time. Anyway, an interesting parallel, and even more interesting that it was accidental.

Our group presentation was an utter shambles. I had backed off a little bit because by this time I felt that not much of what I had said had gotten through and I'd sort of washed my hands of it. Perhaps not the best approach but I take a certain amount of pride in my work and my abilities and I don't take kindly to being credited with the work of others, whether it's good or bad.

There were technical difficulties that could have been overcome with preparation, and perhaps some of that preparation could have been made had we not been making our pictures last thing last night. But it was good in the sense that everyone probably learnt how not to present.

After that we were to work out putting the video footage together, which I really wasn't looking forward to. I'm a bit tired of group-think and of taking 10 minutes to make any decision because it needs to be discussed and run by everyone who may or may not object, or someone saying after a decision has finally been made, "I have an idea". I had to pick something up, so I left a USB key containing the source footage with the group and quietly hoped that indecision would mean a lack of progress, which I could then fix when I got back. I'm not sure if anyone else knows how to edit video, and I felt a strong need to salvage the whole project with a semi-decent looking video presentation. I got back and everyone had gone home. I'm not sure what decisions had been made, but it seemed that none had been. So I spent this weekend producing most of the video. Most, because no one gave me the photos that need to be inserted. Which means some painful group/studio time tomorrow adding the bits that need to be added... with a lengthy process of consultation with individuals who have never worked in a commercial environment or developed an eye for what looks professional, but who still have an equal say. Ugh. Not that my movie is completely up to my own standard anyway; I haven't actually used my video camera much nor shot footage for a long time and I need practice - the camera work was terrible on a couple of different levels. But it's probably quite passable for a 1st assignment from a 1st-year student.

So this post probably looks a bit negative, and I am glad it's over to a degree. I find group work tedious and frustrating, but it's a skill I have to learn in order to succeed in BCT. At the moment I see groups as an obstacle to producing exceptional work; I need to learn how to leverage a group's potential instead. On reflection there were things we could have done a lot better. First, testing the Bluetooth thoroughly before we embarked on the mission would have been a very, very good idea, instead of waiting until we got to it. Some more research online about the technology in the NXT would have gone down well too. In fact, in my opinion, research was the one thing lacking throughout the entire project. We are now academics, and research is crucial. It's funny actually: as a worker I researched things all the time before I went out and did them. Why that approach was not taken this time is a bit confounding. We have more resources available to us now, not just Google, which can produce pretty useless or incorrect results sometimes anyway. It was only during downtime on Thursday, at a class on how to use the AUT Library resources, that it suddenly occurred to me that we weren't testing our ideas. Had we done so there would have been less arguing about what we each thought was the correct approach; we could have backed up any thoughts with hard data. The team was friendly and I liked them on a personal level. They were pretty open to new ideas, to a degree, yet not open to the execution of small details that became pretty important. I think that more planning would have meant less work.

Another big thing missing was communication. We should have been able to communicate with each other while at home through social networks, or at the very least through AUT portals such as our AUT email. Although with email, especially the cryptic addresses and webmail logins, it might not have been used properly. Sure, everyone has MSN or Google Talk or something similar. Perhaps if everyone set up an AIM address there would be a separation of our personal social networks and our AUT ones (for those who don't want to bring their work into their social lives - I'd be one of them). Changing the direction of the project without consultation was a major issue, and I'm not sure how everybody missed the significance of a change in robot design to the programming. We should have got the whiteboard out, planned, researched and drawn diagrams. The coding should have been everyone's responsibility. I'm not saying everyone should have coded - there should be one or two dedicated coders - but everyone could have developed the logic. We are all BCT students, so no one can wave their hands in surrender, say "I don't know about this stuff" and switch off, like an office worker who isn't interested in an explanation of why their Outlook mailbox is no longer receiving emails and how to avoid it in the future.

Being close to the project I can see its flaws. But we DID produce artwork and we did get the job done. To an outsider (the presentation aside) we probably did okay.


    Saturday March 13, 2010

We are the Robots

On Mondays we have a different class, a lecture called Introduction to Creative Technologies. The first lecture defined creativity from several perspectives and explored the history of creativity. It's interesting to note that creativity has changed from being regarded as something spiritual and outside of people, to a talent available only to certain people, to something within everyone that can be taught and brought out. My personal view is (put simply) that everyone is capable of creative thought, but for some people it's much more instinctive - just like any other talent really. We also received an assignment designed to increase our awareness of creative thinking and practice by critically thinking about and reflecting upon events - basically, keep a review blog. We can review anything that we can define as an event, be it a concert, an exhibition, a film, a performance, a dining experience... whatever, as long as we can pull it off. At least 8 different things over the next 3 months. You will see the link in the sidebar to this blog/assignment. The blog itself looks incredibly ugly at the moment; I'll be fixing that in due course...

On Tuesday we started to build on knowledge from our first week with 2 short exercises over 2 days, followed by a main project set to continue into the following week. In the morning we were introduced to the concept that all games have rules and every element in the game has a protocol to follow. For example, chess has 6 different pieces, each with their own protocol, such as a pawn only being able to move forward. We were to divide into groups of 4 and come up with a new chess piece based on a type of character, and work out what characteristics that character has (the example we were given was "The Politician", who'd always sidestep and go back on their word). From there we'd design protocols for the piece to follow based on those characteristics (the Politician, for example, would sidestep and move 2 steps back for every step forward). We'd then draw a diagram of its movements and present it later in the morning. Our group devised the Terrorist, a character that sneaks around and then blows itself and everyone around it up. The protocol was simple: it could only move in a forward direction, 2 units straight but only one sideways. I'm not entirely sure why the group decided that it could only move forward, but I consider it interesting given an article I read in New Scientist a few years ago about the psychology of a terrorist and the people surrounding one in the days/months leading up to a planned attack. There basically is no going back because of the social pressure and expectation placed upon them (although it really amounts to nothing but manipulation by their superiors).

When an opponent jumped the Terrorist, a roll of a die would determine how many immediately surrounding pieces (including friendlies) would be taken from the game. I quietly wasn't keen on a terrorist at first (though I didn't speak up), but I thought the translation into an actual game piece was rather well done in some respects, and quite different.
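Purely to make that protocol concrete, here's a minimal sketch in Python. The board indexing, the function names and my reading of "2 units straight but only one sideways" as forward-two-or-sideways-one are all assumptions; this isn't something we actually coded.

import random

BOARD_SIZE = 8

def legal_moves(file, rank):
    # Forward-only piece: up to 2 squares straight ahead, or 1 square sideways, never backwards.
    moves = []
    for step in (1, 2):                       # 2 units straight forward
        if rank + step < BOARD_SIZE:
            moves.append((file, rank + step))
    for side in (-1, 1):                      # 1 unit sideways
        if 0 <= file + side < BOARD_SIZE:
            moves.append((file + side, rank))
    return moves

def detonate():
    # When jumped, a die roll decides how many neighbouring pieces are removed.
    return random.randint(1, 6)

if __name__ == "__main__":
    print(legal_moves(3, 1))   # e.g. [(3, 2), (3, 3), (2, 1), (4, 1)]
    print(detonate())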

Once we presented the piece, the robots came out. Lego Mindstorms is a Lego set (obviously) that also comes with various sensors, 3 actuators (motors), and a controller unit. Hmmm, something familiar about this... It also comes with a visual programming language. When I say visual, I mean that commands are drag-and-drop blocks in the UI. Very visual, very basic (not BASIC). Our next task, in the same groups, was to build a tribot (3 wheels) that replicated the protocols of the chess piece we made. It also had to avoid bumping into the robots the other teams were building, when we placed them all into the same 2m x 2m square at 10am the next morning. The other three members were very intent on building a Lego robot, so I decided to get familiar with the programming.

The next morning (Wednesday) the rules had changed slightly, but the main idea was for the robots not to bump into each other or they were disqualified. I'm not entirely sure what happened with our robot; it seemed to get caught in a pile-up of robots. It had been programmed to stop and manoeuvre around obstacles (at 45-degree angles, like the chess piece), but we could not program it to back up as the Terrorist cannot go backwards, and we could not control other robots running into us. Our next challenge, in the same team, was to rebuild the robot and program it to find its way from the outside entrance of the BCT class through the corridor to the administrator's office, where it would knock on the door and deliver a message. Just how we would achieve this was up to us.
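For what it's worth, the avoidance behaviour boils down to a very small loop. A hedged sketch in Python rather than the NXT's graphical blocks, where read_distance_cm(), drive() and pivot() are hypothetical stand-ins for the real blocks and the safe distance is a guess:

import time

SAFE_DISTANCE_CM = 25   # assumed threshold; the real value would be tuned by testing

def read_distance_cm():
    """Placeholder for the ultrasonic sensor block."""
    raise NotImplementedError

def drive(speed):
    """Placeholder for the move block (forward only: the Terrorist never reverses)."""
    raise NotImplementedError

def pivot(degrees):
    """Placeholder for a turn-on-the-spot block."""
    raise NotImplementedError

def wander():
    # Stop and sidestep when something is close; otherwise keep going forward.
    while True:
        if read_distance_cm() < SAFE_DISTANCE_CM:
            drive(0)      # stop rather than back up
            pivot(45)     # manoeuvre around the obstacle at a 45-degree angle
        else:
            drive(50)
        time.sleep(0.05)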

The Mindstorms robot comes with an ultrasonic sensor, which detects objects and their distance; this can be used as a crude form of vision. There's a microphone that detects sound (obviously) and its volume level, and a light sensor that detects the intensity of light. The light sensor also has a light of its own, shielded from the sensor, which can be used to bounce light off surfaces. It also comes with a push-button switch sensor. There are other sensors available through the Mindstorms website, but that is what comes in the basic package. We could make use of whatever resources were available, and we quickly found a program that used the light sensor to follow a line on the floor. We reversed one comparison in the code so that instead of following a black line on a white floor, it would follow a white line on a dark floor (such as BCT's carpet), and we laid paper masking tape down.
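That single flipped comparison is easier to see in code than to describe. A minimal sketch (Python, with a made-up steer() callback and a hand-tuned threshold, so only the shape of the logic is meant to be accurate):

THRESHOLD = 50            # assumed mid-point between a reading over the tape and over the carpet
FOLLOW_WHITE_LINE = True  # the stock program effectively had this as False (black line, white floor)

def on_line(reading):
    # The single comparison we flipped: is the sensor over the line?
    if FOLLOW_WHITE_LINE:
        return reading > THRESHOLD   # white tape is brighter than the dark carpet
    return reading < THRESHOLD       # original: black line is darker than the white floor

def step(reading, steer):
    # Classic single-sensor edge following: veer one way while on the line and the
    # other way while off it, so the robot zig-zags along the line's edge.
    steer(-1 if on_line(reading) else +1)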

There were 3 versions of the same program available to us, each one more complex but faster than the last. We stuck to the basic program because the UI of Mindstorms frankly needs some work. Basics like scroll bars are missing. There is a hand tool, but as yet I'm unaware of a shortcut key to switch quickly between the hand and the selection tool, making navigating large programs clunky. Reverse-engineering the most complicated program was out of the question, particularly as the group had a working robot and were wary of improving it and potentially breaking it (I'm a bit more gung-ho in my approach to things, which is why I find collaboration challenging at times). However, it was nice to have a working robot fairly early in the day; other teams looked like they were struggling a great deal.

Thursday morning at 10am our groups demonstrated our robots. A number of teams had simply programmed coordinates into their robots, so they didn't really sense anything. That tactic relied heavily on the placement of the robot at the beginning, and it turned out that traction on the carpet was an inconsistent variable, so it was not a successful method. Our team and one other we shared the code with had robots that tracked the line, and they made it to the end (albeit slowly). But the team that impressed me the most had made use of 2 light sensors, one on each side of their robot, which allowed it to run at full speed, slowing only to correct when one sensor hit the line. See, our approach had the robot detecting one edge of the line and making minute adjustments to stick to it; their robot drove in a general direction and only adjusted if it got too far off track. They had also made the program themselves. It was simpler yet more effective, which quietly annoyed me a little bit as I'd probably have done the same myself if left to my own devices. Anyway, at least one member of that team was also responsible for the PowerPoint slideshow last week that looked really good (despite being PowerPoint, to boot), so we have some pretty smart cookies to watch.
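The difference between the two approaches is small in code. Carrying on with the same hypothetical placeholders as above (this is not the other team's actual program), two sensors straddling the tape let the robot run flat out and only correct when one of them crosses the line:

THRESHOLD = 50  # same assumed tape-versus-carpet mid-point as the sketch above

def step_two_sensors(left_reading, right_reading, set_motors):
    on_left = left_reading > THRESHOLD    # left sensor over the tape: the robot has drifted right
    on_right = right_reading > THRESHOLD  # right sensor over the tape: the robot has drifted left
    if on_left and not on_right:
        set_motors(40, 100)   # slow the left wheel to steer left and re-centre on the line
    elif on_right and not on_left:
        set_motors(100, 40)   # slow the right wheel to steer right and re-centre
    else:
        set_motors(100, 100)  # sensors straddle the line: full speed ahead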

We were then given our assignment: in the same groups - possibly merged with another group - conceptualise, design and create a robot that draws based on sensory input. Drawing does not necessarily mean pen and paper, but I had a bit of difficulty at first grasping what else it could mean. Our group merged with another group and we all decided it would be best if we went home and came up with ideas to present to one another the following day. That suited me; I had lots of life-admin to do, and I cannot wait until things have settled for me on a personal basis (a few weeks off). I struggled because I kept thinking about the mechanics and limitations of the robots, and even more about the limitations of the programming language. It has loops and conditions, but as far as I can see it cannot branch out to other parts of the program. I did not have the language installed at home to investigate, though, and could not find a copy of it online (I'm pretty sure it's open source, and I imagine free, as it's really the hardware that Lego is selling). But I considered ideas such as a plotter, and then perhaps connecting it to an Etch-a-Sketch as that has 2 dials with which to draw.

Friday morning I came in to find our group had detached from the other group but reattached to a different group. Oookay... I'm not entirely sure about the events leading up to that, but it was no big deal. Just... odd. A few ideas had been presented, and for the first time I was feeling quite content to be in a group, as I had felt a bit stumped for ideas at first. One member of our group had considered the use of lasers in some sort of light show, which at first I thought was a bit too performance-oriented and too abstract for my liking, but then I saw a Photoshop doodle fall out of another member's visual diary; it was very colourful and there was something inspiring about it for me. Suddenly I realised how one can draw without the use of paint, pencils, felts, pens and 2D media such as paper. In a pretty decent collaboration of thoughts from various members, we decided to use 2 robots: one would do the sensing and transmit instructions based on that input to a second robot that would execute the drawing. The drawing would be the use of laser pointers, prisms, mirrors, lights and whatever else we could think of to project an image onto a wall, which would then be recorded both as a performance and as a single piece of art by a still camera set to a continuous long exposure. We could potentially produce beautiful artwork. It was both creative and technical, and what appealed to me most of all is that it was ambitious. I usually find groups lack ambition - it was very, very pleasant to see a group striving for a great goal without worrying about whether it can really be done. Of course it can be done; anything can be done when people stop questioning every single thing. The rise of the personal computer industry happened precisely because the people who brought it about were not aware that "it could not be done".
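To make the sense-and-transmit split concrete, here's the rough shape I have in mind, sketched in Python. The send_message()/receive_message() calls stand in for whatever messaging the bricks actually support between themselves, and the command names and thresholds are invented for illustration:

import time

def sense_loop(read_sound, read_distance, send_message):
    # Runs on the sensing robot: sample the sensors and turn the readings into
    # simple named commands for the drawing robot.
    while True:
        loudness = read_sound()       # assumed 0-100 scale
        distance = read_distance()    # assumed distance in cm to the nearest object
        if loudness > 50:
            send_message(("sweep_laser", loudness))
        else:
            send_message(("turn_prism", distance))
        time.sleep(0.2)

def draw_loop(receive_message, actuate):
    # Runs on the drawing robot: execute whatever command arrives, no sensing of its own.
    while True:
        name, amount = receive_message()   # waits until the next command arrives
        actuate(name, amount)              # e.g. run the motor assigned to `name`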

We divided into 2 teams, one building the sensing robot, the other building the robot that executes. The sensing robot is of course a lot easier, and our team of 2 people had it done by midday. The next task is to program it. My feeling is that the sensors should interact with each other in some way mathematically to produce more interesting instructions for the other machine. Of course I have no idea how to do this, and while I'm aware of what an algorithm is, I don't have the faintest idea of how to produce one. But I'll learn. The other team have not had as much success yet and their current designs are a little bit unsettling, so once our team has some code under way perhaps we will rejoin them and see if we can also contribute. It is a complex task, especially as we have only 3 motors to control a number of lasers and prisms etc. Perhaps we could try to get hold of a third robot...
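As a first stab at what "interact mathematically" might mean, something like the sketch below: mix the normalised sensor readings rather than wiring each one straight to a motor. The formulas and the sensor ranges are entirely my own guesses at this stage:

def mix(sound_level, distance_cm, light_level):
    # Normalise each reading to 0..1 first (assumed sensor ranges).
    s = min(sound_level / 100.0, 1.0)
    d = min(distance_cm / 255.0, 1.0)
    l = min(light_level / 100.0, 1.0)
    # Example interactions rather than one sensor per motor:
    laser_sweep_speed = s * (1.0 - d)      # loud and close together mean a fast sweep
    prism_angle = abs(s - l) * 90.0        # contrast between sound and light sets the angle
    mirror_offset = 0.6 * d + 0.4 * l      # a slower drift blended from distance and light
    return laser_sweep_speed, prism_angle, mirror_offset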


    Sunday March 7, 2010

Situational Shuffle

On Thursday morning we presented our visualisations, and I was rather impressed with a couple of them; it was as if someone had upped the ante since our last presentation and people had risen to the challenge. Our team, however, did not present, which annoyed me after all the work I put in. The problem was that at the moment I'm a bit short on certain tools, in this case a USB stick (a couple of weeks ago I had 2 of them!), so I copied the swf (the file Flash creates) to my mobile phone. Contrary to the idea that Macs "just work", the Macs at BCT did not recognise my phone, and I guess it would be wildly uncool to pop up and say "I need a driver" like Windows might, so Macs just do... nothing. Oh, by the way, I usually abuse Windows far more than I abuse OSX, but OSX is not perfect. And with Windows 7, there are actually some UI elements that make OSX look a bit... clunky to use. Strange but true. Anyway, it didn't occur to me to burn it to CD; I have a stack of them around here.

In Thursday's lesson we were to get into groups of 4 and follow instructions on a set of cards that had been shuffled into random order. Each member of the group had a role: one was to carry out the task (the Actuator); one was to read the task and direct the Actuator (the Controller); one was to chart the progress of the Actuator on a map (the Tracker); and one was to sit outside the group and observe and record it using whatever methods were available (the Sensor). Our group had 3 people, so the Controller and the Actuator became the same person. I was the Sensor. The instructions included "walk to nearest traffic lights", "follow a man with a tattoo for 2 minutes" and "take something free", as well as instructions like "turn 90 degrees left". So all the groups wound up taking different paths through the city and recorded them in video, pictures and drawings.

The next part was to find a way to present our excursion, leveraging the abilities of individual members of the group. I floated the idea that we make a video using all our material, perhaps with animation for the map, and narrate it. This was met with scepticism by the group as it would not be interactive, so we moved on to Flash. Trouble is, none of us are expert enough with Flash to guarantee that we wouldn't be there until midnight, and quite frankly that did not appeal as I'd already done as much the previous 2 nights and I was starting to get tired. So we moved on to PowerPoint, which I personally cannot use (and, being a bit of a snob about MS Office for various reasons I won't bore you with, have never really had the desire to learn), so I relinquished control of the lab iMac and got lunch. While eating lunch, two things occurred to me:

1. We had ditched making a video in Adobe Premiere because it wasn't interactive, in favour of PowerPoint... which is also not interactive - and pretty rubbish by comparison.

2. If we were going to do a slideshow with video elements, why not do it in InDesign and export a pdf, which mimics a PowerPoint slideshow quite well - one could even have naff transitions if you are that way inclined. Overall it could look 10 times better and be well designed, and of course one of our members was experienced with the application so we could do the job faster. Er, that person is me. I got back and mentioned this. Soon the group was quite dissatisfied with the current progress, so we decided to ditch it for the InDesign method.

The result was finished at about 6:30, although I took it home and tweaked a few small things. We presented it on Friday morning, and a few of the other presentations were very impressive, including one in PowerPoint that looked like a proper slide presentation (rather than like all the terrible ones that that one guy in the office forwards you all the time, made by other bored office workers. You know the ones: mundane subject turned into a ppt file, Comic Sans font, awful and slow transitions, and a degree of pointlessness because it would have been more effective as just a normal email with pictures and text attached). Unfortunately, despite all the videos being encoded with a standard codec inside the .mov (QuickTime) format, and despite them working on this Windows machine (which doesn't even have QuickTime installed) and on the lab Macs, the Mac used for the presentation threw up errors when I tried to play a video. Ugh! The show went on without the video component, and our team realised that although we had little time, we should have budgeted in some practice so we knew who was saying what, rather than looking at each other as if to say "No, you speak". But it was an okay first presentation, by no means a shambles. Roll on the leased laptops, something that will take some of the unpredictability out of our future presentations.

The rest of the day was fairly relaxed: a BBQ with all years, and the first of the laptops arrived for students who had paid the bond. I haven't yet, but will... um... soon.

So that was the first week, fun although tiring. This weekend hasn't been one of total relaxation though: I've mulled over the week's lessons so I could blog about them, done some self-directed learning on Edward Tufte, and prepared for tomorrow's class by nosing around the online resources that have already been put up. Plus preparation for moving soon to a more appropriate living situation (i.e. one that doesn't cost the kind of money that I no longer earn) and trading past-life items for cash on trademe in the process. Exciting!

Next week's known challenges: more design work on this blog and trying to be more succinct in posts.

Oh and working out why comments are not currently working. It will be fixed... - Edit: It IS fixed!!


    Saturday March 6, 2010

Data Visualisation

It's only the first week but it's been quite full-on, possibly because I put just a little bit too much effort in for the allotted time. I should probably be more realistic, but I've always been one to chase a vision without regard to what it will take to produce. I might need to learn to be more pragmatic.

So, building upon the lessons of Network Science/Social Networking, on Wednesday we reflected on the various presentations of Tuesday's data using Gephi, determining whose work was the most successful at conveying the information and why. The use of colour, size and position, along with the general placement of text for readability, varied between different people's presentations, so we could all see clearly what worked and what didn't and learn from it.

We were then introduced to some other examples of the visual display of data, and to a pioneer in this field, Edward Tufte, along with a short video of his critique of the iPhone interface. This caught my interest in particular, as interface design is something I find mildly interesting; I often comment on various elements of the Windows and OSX interfaces, and on gadgets and mobile phones. I'm pretty sure that somewhere in the portfolio that got me into the course in the first place I commented on the iPhone interface. It was a type of interface I'd been sort of waiting for, finding normal mobile phones incredibly clunky to use. To me bad design really shows itself when you are in a rush, and cell phones are one piece of equipment you are likely to use when in a rush or a panic. Getting lost in their screens and menus is, in my opinion, incredibly poor design.

You have to watch the video in the link to know what I'm talking about in this paragraph. I tended to agree a little with the sentiments of his critique of the stock exchange application, although his version was a little ugly. I imagine that if you follow the stock exchange on a daily basis, the crammed information in Tufte's design would not be an issue; it just looks daunting to those (including me) who have no idea what that information is conveying. But his theme (or lack of one - it almost looked like HTML without the CSS*) certainly does not fit in with Apple's.

Our task after this was to get into groups of 3 and observe and record data of some sort so that we could then visualise it. But there were restrictions to this project, mainly that the visualisation should be non-representational and must not contain words, a concept that was a bit lost on me at the time. That's one reason this blog entry was not made on the day we did this project. My aim for this blog is to explain in layman's terms what I've learnt or done on this course, because being able to do so, to me, means you understand it. I didn't quite "get" it. It seemed very abstract. To present data without actually depicting the subject matter, with no words - how on earth is one to know what meaning a visualisation carries? One either had to be highly creative or have a more in-depth understanding of visualising data using abstract methods, particularly considering what we chose to record as data. We sat in the foyer and counted people walking past for an hour, in blocks of 10 minutes. For added interest we counted people wearing mostly black separately from those wearing colours. Interestingly, by the way, colour appears to be the new black.
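In hindsight, a non-representational mapping is simpler than it first sounded: reduce each 10-minute count to nothing but size and tone. A toy sketch in Python, with invented counts purely to show the mapping (not our actual figures):

# (people wearing mostly black, people wearing colours) per 10-minute block - illustrative only
blocks = [
    (12, 18), (9, 22), (15, 14), (11, 25), (8, 19), (13, 21),
]

def radius(count, max_count, max_radius=100):
    # Map a count to a circle radius: proportion is all the viewer gets - no figures, no words.
    return max_radius * count / max_count

max_count = max(max(pair) for pair in blocks)
for black, colour in blocks:
    print(f"dark circle r={radius(black, max_count):5.1f}   bright circle r={radius(colour, max_count):5.1f}")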

A time-consuming challenge was that I decided to do the presentation in Flash. I've used Flash before - several times - and produced work with it. The trouble is, I use it so infrequently that it's almost like I have to re-learn it every time. I find its interface a bit clunky (and too Macromedia-ish - they were the developers before Adobe bought them out), and while I get the basic concepts I find it really hard to actually execute the steps in those concepts. Plus, not knowing the keyboard shortcuts the way I know the InDesign/Illustrator/Photoshop ones is really painful. Yet this didn't deter me, and I produced an animation I was mildly happy with on behalf of our group. I then went straight to bed at 12:30am...

somethig...

Yes, I put a title in there... Anyway, the clock turned and the walking men fluctuated in size over the duration. And yes, the walking was animated. Once I figure out how to host a swf on this blog I'll edit this post.

*HTML (HyperText Markup Language) is what web pages are essentially made of, and what web browsers read. A number of years ago in the evolution of the web, a distinct separation was made between HTML, which is the content, and CSS, which is the presentation and design. CSS stands for Cascading Style Sheets. Well-designed websites make a clear distinction between content and design. Advantages of this method include easy repurposing of content for other devices such as mobile phones and TV sets, better accessibility (for blind people, for example), and the ability to change the design without a single change to the content. Visit https://www.csszengarden.com/ for a dramatic demonstration.