Art or Artificial?

by dfunction

There are now a number of painting robots. I’m not referring to the programmable arms that spray cars, but to machines that can interpret images and choose their own stylistic elements to create innovative, individualised works of art.
For example, one paint-bot is connected to the internet and chooses images and colour schemes from Google searches of its own devising. It then takes this cache of inspirational material and extrapolates a digital painting from it. The choices it makes are based on programming, of course, but that programming contains avenues for randomisation and algorithms for aesthetic correlation. Whichever way you look at it, this painting robot is making artistic decisions of its own.
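To make that idea concrete, here is a toy sketch of what “randomisation plus an aesthetic-correlation algorithm” could look like. This is purely hypothetical illustration, not the actual paint-bot’s code: the names (`random_palette`, `aesthetic_score`, `choose_palette`) and the contrast-based scoring rule are all my own invented stand-ins.

```python
import random

def random_palette(n=4):
    """Generate n random RGB colours (the 'randomisation' avenue)."""
    return [tuple(random.randint(0, 255) for _ in range(3)) for _ in range(n)]

def aesthetic_score(palette):
    """A crude stand-in for an aesthetic rule: reward palettes whose
    colours are far apart in RGB space, i.e. favour contrast."""
    total = 0
    for i, a in enumerate(palette):
        for b in palette[i + 1:]:
            total += sum(abs(x - y) for x, y in zip(a, b))
    return total

def choose_palette(candidates=50):
    """Random search: generate candidate palettes and keep the one the
    scoring rule rates highest. The decision procedure is entirely
    programmed, yet no two runs need agree on the result."""
    return max((random_palette() for _ in range(candidates)),
               key=aesthetic_score)

print(choose_palette())
```

Even in this trivial form, the point of the essay shows through: the output is determined by a program, but the combination of randomness and a selection criterion produces choices no one specified in advance.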
Is this an example of artificial intelligence? I don’t think so. But I do think that its existence raises an interesting question: at what point do programming and randomisation give way to intelligence? If we replace programming with instinct, and randomisation with trial and error, or the act of learning, then the synthetic parallels the organic, and the existence of artificial intelligence becomes a matter of perspective.
Most would scoff at the idea that a painting robot is anything but a cleverly programmed tool, reactive, unaware. But are we ourselves not cleverly programmed by evolution? If a machine mind were created with enough complexity, with the right programming, is it not possible that it could gain sentience?
Some say that an entity cannot truly be alive unless it has emotions: fear of its own demise, for example. Others argue that living organisms are quintessentially organic and cannot be synthesised, and that the very act of synthesis automatically renders an artificial entity non-intelligent.
But when a human mind has been accurately simulated within a quantum computer, or perfectly replicated within a sophisticated automaton, how can we say that the result is not sentient or intelligent, when it is an exact digital analogue of ourselves?
When we reach this level of technological complexity, organic intelligence and artificial intelligence will no longer be delineated in terms of ability; the question will be a wholly philosophical one. If we then remove any religious or spiritual notions from the equation, we’re left with a simple comparison: if both human and machine pass all applicable tests of cognition and sentience, must we not accept the machine as being intelligent and self-aware?
Should the above circumstances come to pass, I’ve no doubt that humanists would question the validity and rigour of the testing, and the very definition of what it is to be alive.

I’d be most interested to hear your opinions on this subject; by what criteria can we claim to be self-cognisant?

I think, therefore I am. Thanks for reading, Dom Carter

© 2012 – 2014. All rights reserved. Dominic Carter is the sole author/creator of this website/blog. All content, except images displayed with the permission of Christian Grajewski, is the intellectual property of the author (Dominic Carter). All material displayed within is the exclusive property of said author.
Unauthorized use, reproduction, alteration, and/or duplication of this material without express and written permission from this blog’s author/creator is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Dominic Carter, with appropriate and specific direction to the original content.