AI- general thoughts

December 2, 2011

Ra-One is a Bollywood movie that came out in October 2011, and it is based on an artificially intelligent game. The characters in the game have the potential of coming alive, with all the feelings a normal human being would have.

This movie caught my attention, and I came to realize this could actually be our future in a few years' time. How would it feel to interact with an artificially intelligent character from a game that is infinitely smarter than you in every aspect? What if, moreover, it had a human-like body, including skin?

AI in art is something of the opposite, where the art itself has the ability to think and act accordingly. Even though there isn't much to compare with general machine intelligence, art can take anything to a different level in its own way. Though I took some art courses in high school, I never realized that art could also be artificially intelligent.

As everyone on this blog has mentioned, we collectively built an application that can create an art piece according to the decisions made by the user. This application opens the door for an art piece to think and react to user input accordingly.

- Sinthu Sathananthan

The final prototype is ready!

December 2, 2011

Click to try it out!

As Diette, Richard and Kevin described, we created a simple application that learned what images and patterns of squares the user wanted to see more of, and added them to the ‘canvas.’ The idea is to let the user create their own masterpiece. It was a neat application. The more I played with it, the more different patterns I wanted to see. Could we add more color? Could we change the shapes? What would it look like if I selected a smaller area vs a bigger area?

I’ll go even further, and ask – could we use our similar concept above and apply it to music? Or, how can one integrate artificial intelligence to allow users to create music easily, without having to understand music theory?

uJam is a “web app that crafts entire songs with precise accompaniment out of whatever the user whistles, hums or sings (or tries to sing) with an auto-tune feature to smooth out the rough spots.”

Real magic lies in creating something from scratch, either with your voice or a musical instrument, in a multitude of styles. Even if you only have a single instrument on hand, or just your own whistling lips, uJam can turn your output into a number of other instruments, as Gorges demonstrates below by turning a simple recorder melody into a full-on guitar jam.

Eliot Van Buskirk

And where does it go from here? Soon, I believe, we’ll be able to tell a program, “mash me up a new beautiful song that sounds like Beethoven’s Fur Elise meets Backstreet Boys’ ‘Quit Playing Games with my Heart’ ” and it’ll craft a unique song, based on mathematical calculations and algorithms found in those two songs. Will we need professional producers and artists anymore, at that point? That’s a philosophical discussion I’ll write about another day.

~ Jean Le

Swarm intelligence & Art

December 2, 2011

When creating a game or a movie that features a large group of people or animals, one of the challenges is making them behave and move naturally, as they would in real life. One might consider animating a small group of objects individually and then distributing copies of that group across a map or screen; however, the result isn't as natural.

Thankfully, there are many programs today that make this much easier. Swarm intelligence looks at the interactions and behaviors of intelligent agents such as ants or birds. It produces an algorithm from the structure, patterns, or rules that dictate how these agents behave.

Tim Burton used swarm intelligence to create the movement of a group of bats in the movie "Batman Returns". The Lord of the Rings films also used swarm intelligence, via software called MASSIVE, to quickly create thousands of individual agents, such as humans, each acting on its own.

“This simulation shows the complex behavior that can be exhibited through collective intelligence. A swarm of boids are shown interacting with each other and predators in the environment. The boids have no intelligence other than trying to stay close to their neighbors and avoid predators.”
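To make the idea concrete, here is a rough ActionScript 3.0 frame-script sketch of boid-style behaviour. It is not the code behind the simulation quoted above, and the weights, counts and sizes are my own guesses: each boid only steers toward the centre of the flock and away from a predator that follows the mouse.

// Frame-script sketch (ActionScript 3.0) of boid-style behaviour.
// The weights and sizes here are illustrative assumptions.
var boids:Array = [];
var vx:Array = [];
var vy:Array = [];

var predator:Sprite = new Sprite();
predator.graphics.beginFill(0xCC0000);
predator.graphics.drawCircle(0, 0, 6);
predator.graphics.endFill();
addChild(predator);

for (var i:int = 0; i < 40; i++) {
    var b:Sprite = new Sprite();
    b.graphics.beginFill(0x3366CC);
    b.graphics.drawCircle(0, 0, 2);
    b.graphics.endFill();
    b.x = Math.random() * 550;
    b.y = Math.random() * 400;
    addChild(b);
    boids.push(b);
    vx.push(Math.random() * 2 - 1);     // random starting velocity
    vy.push(Math.random() * 2 - 1);
}

addEventListener(Event.ENTER_FRAME, step);
function step(e:Event):void {
    predator.x = mouseX;                // the predator follows the mouse
    predator.y = mouseY;

    // centre of the flock -- the cohesion target
    var cx:Number = 0;
    var cy:Number = 0;
    for (var j:int = 0; j < boids.length; j++) {
        cx += boids[j].x;
        cy += boids[j].y;
    }
    cx /= boids.length;
    cy /= boids.length;

    for (j = 0; j < boids.length; j++) {
        // steer gently toward the flock centre
        vx[j] += (cx - boids[j].x) * 0.002;
        vy[j] += (cy - boids[j].y) * 0.002;

        // steer away from the predator when it is close
        var dx:Number = boids[j].x - predator.x;
        var dy:Number = boids[j].y - predator.y;
        var d:Number = Math.sqrt(dx * dx + dy * dy);
        if (d > 0 && d < 80) {
            vx[j] += dx / d;
            vy[j] += dy / d;
        }

        // cap the speed so the motion stays smooth
        var speed:Number = Math.sqrt(vx[j] * vx[j] + vy[j] * vy[j]);
        if (speed > 3) {
            vx[j] *= 3 / speed;
            vy[j] *= 3 / speed;
        }
        boids[j].x += vx[j];
        boids[j].y += vy[j];
    }
}

Even with only those two rules, the dots clump together and scatter around the mouse in a surprisingly lifelike way, which is exactly the point of the quote above.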

Swarm intelligence is simple and easy to apply, making it a very popular tool – even for creating art.

Although we didn’t use swarm technology in our prototype, a group of students tried to apply swarm intelligence, or the algorithms of simple agents interacting with one another, to create art.

Swarms draw a tree.

You can also draw with swarms!

To start drawing with swarms click here.
~ Jean Le.

It’s the thought that counts

December 1, 2011

While computers are able to weigh different options and select the most appropriate response, they have real trouble with pattern recognition (compared to the human brain, anyway). Recognizing patterns and recreating them on a computing platform is truly a noble cause, though. Our minds are hard-wired to take in new information and apply it to learned information to surmise new or more accurate patterns. You can see this every day when people speak using specific language or drink and eat specific foods. We have recognized things that make us feel pleasant or get a positive response from those around us, and we learn to do these things more and more. Writing code that mimics that behaviour is a daunting task, but it is definitely worth undertaking.

Computers do not understand relevance or importance very well either. If we just take the average of all things learned, we all become watered down and boring. Certain interactions must have a more substantial effect on our overall behaviour. We determine these events by the type of emotions we are feeling at the time, or by whether advice is coming from someone we respect. Computers don't have respect or emotion, though, so to mimic intelligence we must also mimic these things. Our “Art”ificial Intelligence project attempts to make the user feel as if the program is responding to their input and learning from past input. However rudimentary, we have achieved a certain amount of this “learning”.
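As a tiny illustration of that weighting idea (this is only a sketch, not the code from our prototype, and the names are made up), a stored preference can be nudged by different amounts depending on how important an interaction is:

// Illustrative ActionScript 3.0 sketch, not the prototype's code:
// a learned preference that weighs some interactions more heavily than others.
var preferredDensity:Number = 0.5;   // hypothetical learned value between 0 and 1

function updatePreference(observedDensity:Number, learningRate:Number):void {
    // learningRate controls how much one interaction shifts the stored preference;
    // "important" events (an explicit selection) get a larger rate than passive ones
    preferredDensity += learningRate * (observedDensity - preferredDensity);
}

updatePreference(0.9, 0.5);    // a deliberate selection moves the preference a lot
updatePreference(0.2, 0.05);   // a passing interaction barely changes it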

 

~ Richard Vieweg

AARON, Artificially Intelligent Painter

December 1, 2011

AARON is an artificially intelligent program created in 1973 that can paint by itself. It has been learning how to paint for nearly 40 years now, and it's getting better at it.

Programmed in C, AARON has no user interaction.  It was given the tools to paint, but AARON decides what it’s going to paint and how it is going to paint it.

While it’s still a ways away from painting the Mona Lisa, the fact that AARON makes its own decisions with regards to style and subject is pretty amazing.  AARON even cleans its own brushes!

Here is a link to the page about AARON, and there is also a video of AARON and some commentary from artists and students in computer science programs.  Pretty cool stuff!

~Kevin Dekker

Kevin’s Attempt at Intelligence

December 1, 2011

Artificial Intelligence Early Test

Here is my first attempt at a program that has some kind of artificial intelligence incorporated into it. Basically, all this is doing is creating a random number of squares in random locations. Then, it runs a hit test on the square itself to see how many of the other squares are hitting it.

The idea is that we can take the data on how many are hitting and use it to gauge which area the viewer likes. So, if the viewer likes an area with more squares in it, they will select it and the piece will regenerate itself with a number of squares more similar to the area that was selected.

This version of the file does not yet recreate the image; it only creates more squares when the blue block is clicked. It also counts the squares, as mentioned before. Not quite Skynet, but hey, it's a start.
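Kevin's source isn't reproduced in this post, but a minimal ActionScript 3.0 frame-script sketch of what he describes might look like the following. The names, counts and sizes are illustrative guesses, not his code.

// Sketch of the idea described above (ActionScript 3.0 frame script).
var squares:Array = [];
var count:uint = 20 + Math.floor(Math.random() * 30);   // a random number of squares

for (var i:int = 0; i < count; i++) {
    var sq:Sprite = new Sprite();
    sq.graphics.beginFill(0x000000);
    sq.graphics.drawRect(0, 0, 10, 10);
    sq.graphics.endFill();
    sq.x = Math.random() * 540;        // random location on a 550x400 stage
    sq.y = Math.random() * 390;
    addChild(sq);
    squares.push(sq);
}

// the blue "selection" block that the hit test is run against
var selector:Sprite = new Sprite();
selector.graphics.beginFill(0x0000FF, 0.3);
selector.graphics.drawRect(0, 0, 150, 150);
selector.graphics.endFill();
selector.x = 200;
selector.y = 120;
addChild(selector);

selector.addEventListener(MouseEvent.CLICK, countHits);
function countHits(e:MouseEvent):void {
    var hits:uint = 0;
    for (var j:int = 0; j < squares.length; j++) {
        if (squares[j].hitTestObject(selector)) {   // count squares overlapping the block
            hits++;
        }
    }
    trace("squares inside the selection: " + hits);
}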

 

~ Kevin Dekker

creating intelligent art experiment

December 1, 2011

Using ActionScript 3.0, we developed a random art generator that allows the user to select an area that they find appealing and submit a “hit test” that will output the objects that are within the selected area.

This idea is the initial phase of an application designed to output an image that appeals to the user based on the information within their selection.

Information such as shape, alpha, color, size, etc. can be stored, and based on these values a new image can be generated that should appeal to the viewer.
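As a hedged sketch of that store-and-regenerate step (the function and variable names below are assumptions, not the prototype's actual code), the averages taken from the selection could drive the next batch of squares like this:

// Illustrative ActionScript 3.0 sketch: "selection" is the user-drawn rectangle
// and "squares" is the Array of randomly generated square sprites.
function regenerateFromSelection(selection:Sprite, squares:Array):void {
    var totalAlpha:Number = 0;
    var totalSize:Number = 0;
    var picked:uint = 0;

    // collect properties of every square inside the selection
    for (var i:int = 0; i < squares.length; i++) {
        if (squares[i].hitTestObject(selection)) {
            totalAlpha += squares[i].alpha;
            totalSize += squares[i].width;
            picked++;
        }
    }
    if (picked == 0) return;

    // clear the old squares and redraw a new set biased toward the averages
    for (i = 0; i < squares.length; i++) removeChild(squares[i]);
    squares.length = 0;

    var avgAlpha:Number = totalAlpha / picked;
    var avgSize:Number = totalSize / picked;
    for (i = 0; i < picked * 3; i++) {    // the more squares selected, the more are drawn
        var sq:Sprite = new Sprite();
        sq.graphics.beginFill(0x000000);
        sq.graphics.drawRect(0, 0, avgSize, avgSize);
        sq.graphics.endFill();
        sq.alpha = avgAlpha;
        sq.x = Math.random() * 540;
        sq.y = Math.random() * 390;
        addChild(sq);
        squares.push(sq);
    }
}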

the challenge

The challenge was to allow the user to create their own selection instead of providing static areas of the image for them to choose from.

I found some sample ActionScript 3.0 code online (flashandmath.com) where a user can input three points of a triangle, which is drawn once the user confirms the final point. The user can also change their selection at any time by dragging any one of the points while the image redraws itself, and for added functionality there is a reset button.

Also from flashandmath.com, I found another sample in which a line segment is drawn after two points are established. Both of these samples were helpful in creating a product that would allow the user to select their own defined area.

I began with the line segment code and, rather than draw a line, I created a rectangle. Working from the triangle sample, I then used a fill instead of the line so the user can view the area that they've “highlighted”. Here is an excerpt of that code (after the event listener for the mouse click), which also includes the submit button that collects info for output:
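The excerpt itself isn't shown here, so as a rough stand-in (my own reconstruction of the idea, not the original code; the submit button is left out and the names are made up), a two-click selection rectangle in ActionScript 3.0 might look like this:

// Two clicks place the corners of a selection rectangle, which is then filled.
// Names such as selectionBox and pointCount are assumptions, not from the original code.
var selectionBox:Sprite = new Sprite();
addChild(selectionBox);

var firstX:Number = 0;
var firstY:Number = 0;
var pointCount:int = 0;

stage.addEventListener(MouseEvent.CLICK, placePoint);
function placePoint(e:MouseEvent):void {
    if (pointCount == 0) {
        // the first click stores one corner
        firstX = e.stageX;
        firstY = e.stageY;
        pointCount = 1;
    } else {
        // the second click defines the opposite corner; draw the filled "highlight"
        selectionBox.graphics.clear();
        selectionBox.graphics.beginFill(0x66CCFF, 0.4);
        selectionBox.graphics.drawRect(
            Math.min(firstX, e.stageX),
            Math.min(firstY, e.stageY),
            Math.abs(e.stageX - firstX),
            Math.abs(e.stageY - firstY));
        selectionBox.graphics.endFill();
        pointCount = 0;
    }
}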

interesting twist

Both points, after placement, can be dragged, but I was uncertain how to go about redrawing the image as it is resized. Within the samples, I found the code (in which, as you can see, I've made a comment to research the update event listener):
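Again as a hedged sketch rather than the flashandmath.com sample itself, the redraw can be driven by a MOUSE_MOVE listener that plays the role of that "update" step while a point is being dragged; all of the names below are my own.

// Two draggable corner points, with the selection box redrawn while either one moves.
var box:Sprite = new Sprite();
addChild(box);

var pointA:Sprite = makeHandle(100, 100);
var pointB:Sprite = makeHandle(250, 200);
var dragging:Sprite = null;

redrawBox(null);                           // draw the initial box

function makeHandle(px:Number, py:Number):Sprite {
    var h:Sprite = new Sprite();
    h.graphics.beginFill(0x333333);
    h.graphics.drawCircle(0, 0, 5);
    h.graphics.endFill();
    h.x = px;
    h.y = py;
    h.addEventListener(MouseEvent.MOUSE_DOWN, startHandleDrag);
    addChild(h);                            // handles sit above the box
    return h;
}

function startHandleDrag(e:MouseEvent):void {
    dragging = Sprite(e.currentTarget);
    dragging.startDrag();
    stage.addEventListener(MouseEvent.MOUSE_MOVE, redrawBox);   // redraw while dragging
    stage.addEventListener(MouseEvent.MOUSE_UP, stopHandleDrag);
}

function stopHandleDrag(e:MouseEvent):void {
    if (dragging != null) {
        dragging.stopDrag();
        dragging = null;
    }
    stage.removeEventListener(MouseEvent.MOUSE_MOVE, redrawBox);
    stage.removeEventListener(MouseEvent.MOUSE_UP, stopHandleDrag);
}

function redrawBox(e:MouseEvent):void {
    box.graphics.clear();
    box.graphics.beginFill(0x66CCFF, 0.4);
    box.graphics.drawRect(
        Math.min(pointA.x, pointB.x),
        Math.min(pointA.y, pointB.y),
        Math.abs(pointB.x - pointA.x),
        Math.abs(pointB.y - pointA.y));
    box.graphics.endFill();
}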

the headache

I had to play with the arrangement of the user-created points, the highlighted box, the objects for the hit test, and the drawing board (above the stage). It was important to be able to place the points on top of objects without them being obstructed. The points had to remain on top of the highlighted box in order to allow them to be dragged later to reshape the selection. The hit test needed to function properly, and the arrangement affected this as well.
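The layering itself comes down to display-list order: children added later sit on top. A minimal illustration (with hypothetical names) is simply to add the hit-test objects first, the highlight box second, and the points last:

// ActionScript 3.0 sketch of the layering described above.
var squaresLayer:Sprite = new Sprite();   // the randomly generated squares (hit-test targets)
var highlight:Sprite = new Sprite();      // the user's selection box
var cornerPoint:Sprite = new Sprite();    // a draggable point

addChild(squaresLayer);                   // added first, so it sits at the bottom
addChild(highlight);                      // above the squares
addChild(cornerPoint);                    // on top, so dragging is never obstructed

// if the order gets disturbed, a child can be pushed back to the top explicitly:
setChildIndex(cornerPoint, numChildren - 1);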

combining code

While the user-created box was being worked on, so was the code for randomly generating objects (squares). We then combined these two samples to produce our mock-up example.

& the next step

I tried adding a “clear” button that would clear all the objects, but when the objects were removed, I would lose functionality in the rest of my options. I've left the code commented out but hope to revisit it for debugging.
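One way to avoid losing the other options (again only a sketch, not the commented-out code from the prototype, and assuming the generated squares are tracked in their own Array named squares as in the sketches above) is to remove just those squares rather than clearing every child on the stage:

// Hedged ActionScript 3.0 sketch of a "clear" handler.
function clearSquares(e:MouseEvent):void {
    while (squares.length > 0) {
        removeChild(squares.pop());   // take the square off the display list and out of the array
    }
}
// clearButton.addEventListener(MouseEvent.CLICK, clearSquares);   // hypothetical button

Because the selection points, highlight box and buttons are never touched, the rest of the interface keeps working after the canvas is cleared.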

The next major step would be to store a few properties of the squares (alpha, color, etc.) and have that info displayed as well.

Finally, our last step would be to generate an image for the user, based on the stored property information.

 

~ diette janssen

 