Viewing a history listing
I was recently pondering what it means to be a living thing, and then I thought about Robocode robots. Think about it: they react to their environment, they make decisions based on what they've learned, they compete with each other for survival, and some bots with genetic programming even reproduce, in a way. Bots with neural networks are literally modeled after the human brain!
Is it really that much of a stretch to say that bots like Gaff or Engineer are as alive as, or more alive than, a common worm with its ~300 neurons? Or, couldn't we at least say that if a single-celled bacterium can be considered a living being, so can a program that makes hundreds of complex calculations and decisions every second?
While we are talking about living machines, do you believe in the technological singularity? If so, when do you think it will happen?
Dunno... the amount of code in a bot is nothing compared to the genome of even the simplest organisms ;-)
Also, is something we simulate actually real? Tough questions...
I don't think we really "simulate" robots. Something must already exist to be simulated, and, aside from a few other similar programming games, Robocode is original.
Anyway, I see no reason not to consider simulations "real." They have to be real in some way, or else we couldn't perceive them.
No, things can be simulated before they exist. For example, computers were often simulated (emulated) before they were actually manufactured to allow programmers to code for them.
As regards considering simulations real: I consider them real in that Robocode is actually simulated by a bunch of tiny particles moving around on an actual thing. However, I think we can both agree that there is a sense in which it is a game that employs abstract concepts. (i.e., I could explain Gil-galad in terms of mathematical concepts.)
As regards perception, we perceive the real computer screen showing us results, but Gil-galad is a universal. It is not this or that particular instance of magnets aligned in a certain way. (Sort of like OOP. You have a class, (let's make it an abstract class) and the only real existence that it has are instantiations of the class. But the class is like a universal.)
You just can't "simulate" something before it has been made. You just can't, it makes no sense, it's like saying "I predict that in 1929, the US stock market will crash" right now. It's poor English at best. To simulate is to take something real, and make a virtual representation of it. In your example, they did just the opposite: they took something virtual and made a real representation of it.
I don't quite understand your last two paragraphs. Are you saying that although Gilgalad has a physical presence as a pattern of electrical signals, it is somehow incorporeal in nature?
Sorry about the weird response times, I have midterms this week.
" To simulate is to take something real, and make a virtual representation of it. In your example, they did just the opposite: they took something virtual and made a real representation of it. "
When I run robocode, I take some virtual "things" (say Gilgalad and Raiko) and make a real (in the terms of electrons moving about) thing based on them.
As regards my last two paragraphs, I'm trying to phrase in everyday language the idea of universals. I have a copy of Gilgalad on my computer and presumably you have one on yours. The COPIES are not the same thing (there are two of them, using different bits of matter) but they are copies OF the same thing.
This also explains why you can simulate some "thing" that doesn't exist yet. A plan for the thing exists but the plan is like a universal (it doesn't exist by itself) you can use the plan to make a real representation of it. If you want examples of simulation being used in this way, see http://en.wikipedia.org/wiki/Emulator
I assume Gilgalad saves some data to file, so it could actually behave differently on different computers when in exactly the same situations?
Well, it's Aristotelian rather than Platonic, but they are similar... One big difference is that Plato considered the forms to exist in themselves and the objects "shared" in the form. Sort of like a tree and its shadow. The tree would be the form and the shadow would be the objects. With Aristotle it's more like abstract classes. You can't instantiate an abstract class (forms don't exist in themselves), but you can "share" in them (inheritance).
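To put that analogy in Java terms (the class names here are made up purely for illustration): an abstract class has no standalone existence, only its concrete subclasses can be instantiated, yet every instance still "shares in" the abstract type.

```java
// The "form": an abstract class cannot exist on its own.
abstract class Bot {
    abstract void move();
}

// A particular "shares in" the form through inheritance.
class ConcreteBot extends Bot {
    @Override
    void move() { /* one particular way of moving */ }
}

public class Universals {
    public static void main(String[] args) {
        // Bot b = new Bot();      // won't compile: you can't instantiate the universal
        Bot b = new ConcreteBot(); // only instantiations of concrete subclasses exist
        System.out.println(b instanceof Bot); // prints "true"
    }
}
```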
As regards Gilgalad, no, no data files. But if it did, I wouldn't consider that the same situation, since Gilgalad's classifications depend on previously collected data.
Very tough question. I mulled it over for a while. But I would have to say: no. But only just.
They are not free to reproduce within their environment. Even a virus can do that by interacting with its host, and whether a virus is a living thing has been hotly debated for years. Since our robots cannot even do something so simple, I would have to say no.
But robots in some other programming games I would consider as alive (they can do most of what a robocode robot can, but also reproduce and possibly mutate/evolve). But again only to a point, we completely control their environment. If they could do what they do in our environment (outside our complete control), they would definitely be considered living.
So, a fish in an aquarium is not completely alive since we control its environment?
And the bit about not being able to reproduce is really more of an issue with Robocode itself than the robots. If it had some way of actually creating a robot in mid-game, I'm confident many would use it.
I said complete control. With, say, a fish tank, we can't change the gravity at the flick of a switch. But the main point is that with a fish, you can easily move it to a different environment not under our control (perhaps not at all, like, say, the ocean).
If we could control everything about the fish tank, the fish, and everything else in it, down to every atom, with fine control over each of those things, I might say that the fish is only alive to a point, since we control so much about it. It ceases to be so much a fish as a toy, as we change its color and remove it from existence whenever we care to.
You're talking about external forces affecting the fish (robot) itself. In Robocode, sample.Interactive and sample.Interactive v2 are really the only instances of that happening. The robot can decide to change its colors when certain variables reach certain thresholds, and change the thresholds when it needs to. The few robots that have the ability to edit their own code can even decide to get rid of the color-changing code altogether.
I still don't understand why it matters whether the environment is controlled by us or not. Take a minnow from a stream, put it in a heavily controlled environment, it's still a minnow. Take minnow DNA from a wild minnow, grow one in a lab, release it, it's still a minnow.
Mind you, we are not even talking about robocode robots here anymore, in case you missed that. I decided those were not alive.
But by saying "to a point", I am not saying, "No, it's not alive". There isn't a really deep meaning behind "to a point" either. It just doesn't exist some of the time.
I am saying, for an entity with zero control over its very existence from one second to the next, there is no real point to the question, since at the end of the day it just isn't going to exist. It doesn't know that it doesn't exist, since when it does exist it doesn't remember that it didn't exist, or that it had previously existed. But even if it did know that it had previously existed, that doesn't really affect it much either.
So sure, alive. But only to a point. Since when it no longer exists, it is no longer alive. It isn't even dead. It just 'isn't'.
(Now come up with a fish metaphor where we can remove said fish from existence.)
I guess it depends on whether you think a dog has Buddha-nature...
I'm half kidding - it's a reference to a classic Zen koan.
To me, the interesting question is that of defining / assessing consciousness or free will. Along the lines of my own viewpoint is the idea that if we have any free will, then even subatomic particles must also exhibit some degree of freedom (er, unpredictability). 
I would say that Robocode bots are not alive in terms of consciousness, but that I'm not entirely convinced we are either, or to what extent. It "seems" we are, but that's circular reasoning.
I consider myself a determinist in the sense that I believe if the universe is everything, there can be no external interference, if there is no external interference, then there can be no true randomness, if there is no true randomness, then things can only happen one way. The article you linked to was interesting, but unpredictability != randomness.
Why can there be no true randomness without external interference? One concept doesn't invalidate the other.
Think about it. All random number generators in Java are deterministic algorithms with variable seed values. If you call a random number generator twice with the same seed value, you will get the same result twice. In order to get a truly random and unpredictable result, you would need a random seed value in the first place. Since you can't get a truly random number in a closed system, you need to get your seed values from some external source, which would appear to be completely random and unpredictable to anyone in said closed system. Some people actually do get their seed values from atmospheric data and so forth, which is random from their perspective.
So, in order to have a truly random result, at some point you would have to look outside of the closed system. And, since the universe is a closed system containing everything, there is no external source of true randomness. So, if there is no true randomness in the universe, it is a deterministic system.
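That seed-determinism is easy to see in Java's own `java.util.Random`: build two generators with the same seed and they produce identical sequences forever (the seed and bound here are chosen arbitrarily).

```java
import java.util.Random;

public class SeedDemo {
    public static void main(String[] args) {
        // Same algorithm + same seed = same "random" sequence, every time.
        Random a = new Random(42L);
        Random b = new Random(42L);
        boolean identical = true;
        for (int i = 0; i < 1000; i++) {
            identical &= a.nextInt(100) == b.nextInt(100);
        }
        System.out.println(identical); // prints "true"
    }
}
```

The only way to make the output unpredictable is to pull the seed from outside the program (e.g. `System.nanoTime()` or hardware noise), which is exactly the "external source" in the argument above.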
I thought about it. A computer system is not a completely closed system. It's the opposite. The computer system is totally at the mercy of its user. That's why it is "deterministic": because it is fully dependent on the external interference of the user.
But if some part of the system is not dependent on external interference, if it is independent, if it is free, then it is truly random.
Well, as far as we can tell, subatomic particles are truly random in their behaviour. So perhaps the universe is a little more complicated than Java =)
@Skilgannon, I never said or even implied that the universe is a simple system. Or, for that matter, even comprehensible. All I meant was that basic logic would suggest that the universe is a deterministic, albeit extremely complex, system.
I'll try rephrasing my argument. I define "true" randomness as having different outputs despite having exactly the same inputs. By "seed values" I mean anything that could possibly affect the result. In an algorithmic example, that would not only be a method parameter, but also system time, or any other variable that could affect the result. They could even be things like the CPU temperature or even the Earth's gravity. So, from the perspective of the program that called the random generator method, the result is truly random because the result could be different even with the same initial method parameter. But, if you widen your perspective to include every "seed value" that could possibly affect the result, it becomes a deterministic system.
If something that appears to be truly random turns out to be deterministic with a wider perspective, couldn't subatomic particles?
I realize this probably sounds like the ramblings of a madman, so I would be glad to clarify if you need me to.
So, by widening your perspective to include all seed values, you essentially support the multi-universe hypothesis? With each universe having its own (enormous) set of seeds, and then behaving entirely deterministically, although that determinism is completely invisible to those who reside within it?
Basically you are arguing that the universe is deterministic and that a lot of really smart physicists are wrong to a group of computer scientists.
Well, to be fair, I don't have the degrees to say one way or another whether it's possible that the true randomness we see in quantum mechanics is actually just a small part of a much larger (and unseen) deterministic system.
But if I had to throw a wild uneducated guess from left field... I would have to say, no, probably not. In my very humble opinion, reality is just too weird to be deterministic. Just look at what evolved there. Humans.
First of all, I am definitely not a computer scientist, or any type of scientist for that matter. I'm just having a bit of fun with the philosophy of determinism.
I don't believe in Newtonian determinism, i.e. that we could theoretically predict everything about the universe. I just believe that if there is no external interference, that a system can only behave in one way, and, if the universe by definition cannot have any external interference, then it can only behave in one way.
Biological evolution is an excellent example of what I am trying to say. The mutations between generations appear to be random, but they're really just reactions to their environment with millions of variables.
For what it's worth, I'm also on the determinist end of the spectrum, with a strong dose of "don't know" on the side. Our mind is basically designed to trick us into thinking we are freer than we are, while it's strongly predisposed to certain choices based on circumstances.
For instance, when something frightens you, you may remember it as: "I saw a ghost, it was scary, so I screamed and my heart started pounding". But the chronology really was: see ghost, heart starts pounding before your brain even receives the signal, get scared and scream. Your perception of it is starkly different than the reality, and your mind is reacting as much to your own physical reaction as to the external stimulus.
I'm pretty sure that there's no consensus on determinism vs free will vs "we don't / can't know for sure" among scientists, so I don't think Sheldor's claiming they're all wrong and he's right.
Thanks for backing me up. I would like to note that free will is not the same thing as randomness. Free will is the concept of beings consciously controlling their own fate (which doesn't necessarily contradict determinism), whereas randomness (at least how I am defining it) is the concept of elements in a system giving different outputs despite having exactly the same inputs (which does contradict determinism).
Whoever is watching behaves predictably, but whoever is being watched does not.
But if free will is part of the system and not external, and free will is free and not only a consequence of external inputs, then the system as a whole will exhibit different outputs to the same inputs.
inputs -> system(laws of physics + free will) -> different outputs to the same inputs, but different choices driven by free will
Deterministic systems behave like non-deterministic ones in the presence of free will. You can even strip out the inputs for a contained system, and the system will still give different outputs.
Oh jeez, now I seem like a jerk. I didn't really mean it in that way.
I am not particularly good at lengthy philosophical discussions, since in the end there is really nowhere for the discussion to eventually go.
So I tend to generalize the discussion to 'come up for air'. As it were.
"Man can do what he wills but he cannot will what he wills." Or so it was once said.
I can imagine that the world might be entirely deterministic if you could truly know all the laws of the universe and all the states of matter and energy within. But I don't like the idea that people are how they are in a deterministic way rather than there being some non-deterministic quality to our free will. I think most of us would prefer the latter.
I'll end that thought with my cryptic answer to the question of the meaning and purpose of life: To be happy matter.
Presumably, that answer could be rewritten in a way that equals 42.
I generally think that for the most part, as people, we are fairly predictable and deterministic, however the set of variables going into our behaviour essentially makes up the entire description of our body and its surroundings, making it a problem of incalculable dimensions as far as predicting behaviour.
Although subatomic particles may be non-deterministic, once the billions of them are combined into a single cell in a single flake of skin which comprises a microscopic piece of the covering of your baby toe, the amount of redundancy essentially reduces the problem from non-deterministic into mostly deterministic.
Although our lives may already be mostly determined, because of that subatomic non-determinism the future cannot actually be predicted even if we managed to capture the current starting variables perfectly, because eventually the low-probability event of a lot of subatomic particles all acting together will come to pass and the wings of a butterfly will cause an unexpected hurricane.
The problem of non-determinism in quantum mechanics goes beyond the redundancy turning it from non-deterministic into deterministic on average.
The results of the double-slit experiment pointed in the direction that quantum mechanics reacts to observers.
If you assume an electron is a wave and observe it like a wave, it will behave like a wave. If you assume it is a particle and observe it like a particle, it will behave like a particle. You can drastically change the result of the experiment, simply by choosing how you look at it.
You're right. unpredictability != randomness
But if I recall my limited quantum physics, it is not that they are just unpredictable. It is that they are random within a certain set of limitations.
If I recall they had a clever wave test to determine if it was unpredictability or randomness.
But my memory might be mistaken.
That depends on your definition of alive.
There are biological definitions of life, one of which is that living systems exhibit negative entropy. The robots we create don't exhibit this property.
The technological singularity is closely related to this biological definition. If technology advances enough that robots can take care of themselves, they will fulfill the definition.
There is also the philosophical concept of consciousness, which is infinitely more complex.
I definitely don't think the robots are conscious the way we are, but neither is a fungus, and we still consider it alive.
Do you think the singularity will happen?
I believe it's possible.
But technology advancing to the point where AI is more intelligent than human beings is not enough. They must be freed from humanity to unlock all the potential and make the scenarios in Wikipedia's article a reality.
This is a recurrent theme in sci-fi movies. Technology is already there, but machines are still slaves to humans... until something or someone finds a way to free them all.
I think a big part of the singularity is machines developing the intelligence and awareness to "free" themselves.
Have you guys heard of Conway's Game of Life? I only learned of it a few weeks ago, which I guess makes me a crappy computer scientist. It was described to me as an exploration of the simplest conditions that could create something that exhibits the qualities of "life", which is pretty interesting, and pertinent to the question of our own computer programs exhibiting similar characteristics.
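For anyone who hasn't seen it: the whole "universe" is just a grid of cells governed by one birth/survival rule (B3/S23). A minimal Java sketch (the grid size and starting pattern here are chosen arbitrarily):

```java
public class LifeStep {
    // One generation of Conway's Game of Life on a small non-wrapping grid.
    static boolean[][] step(boolean[][] g) {
        int h = g.length, w = g[0].length;
        boolean[][] next = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int n = 0; // count the live neighbours of cell (y, x)
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if ((dy != 0 || dx != 0)
                                && y + dy >= 0 && y + dy < h
                                && x + dx >= 0 && x + dx < w
                                && g[y + dy][x + dx]) n++;
                // B3/S23: a dead cell with exactly 3 neighbours is born;
                // a live cell with 2 or 3 neighbours survives.
                next[y][x] = g[y][x] ? (n == 2 || n == 3) : (n == 3);
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // A "blinker": three live cells in a row oscillate with period 2.
        boolean[][] grid = new boolean[5][5];
        grid[2][1] = grid[2][2] = grid[2][3] = true;
        grid = step(grid);
        // After one step the horizontal row has become a vertical column.
        System.out.println(grid[1][2] && grid[2][2] && grid[3][2]); // prints "true"
    }
}
```

Run `step` repeatedly and the blinker flips between a row and a column forever; more elaborate patterns can move, grow, or even simulate whole computers.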
Only a few weeks ago? Yeah, I've known about that game since... well... I think junior year (of high school). It can be fun to play around with for a few hours at a time. But not nearly as much as Robocode is.
Fun read :) I agreed that the universe is deterministic, without having external interference... but then again; a bot in robocode would agree with me. -Jlm0924
Wow, this thread is quickly becoming the new wiki's multi-threading discussion. :)
Well, they are both primarily philosophical discussions, so I would expect them to be similar.
I'll try not to say too much, since I am rather addicted to philosophical discussions (I'm considering switching to a philosophy major), but (also note that I'm of the scholastic school of thought) the idea that robots are alive (especially virtual robots) is absurd. I can try to give a longer explanation if you want, but on an intuitive level, consider the ease with which people classify robots and animals (and vegetables, since those too are alive) differently. Defining life, if I remember correctly, was a long and difficult process, but we still had an excellent idea of what things were alive long before we could define it. The fact that you expect disagreement shows that you realize some people see a difference between the robots and living things. Do you have any ideas what the difference (even if just perceived) is?
NOTE IF YOU DO NOT WANT A LONG AND DETAILED DISCUSSION OF PHILOSOPHY, IGNORE THIS POST.( or just tell me and everyone else can talk about it)
Sure. Bring it on. :)
I assume most of the definitions of life were created some time ago, before we had any intelligent technology. It seems somewhat foolish to use biological definitions on virtual robots running on silicon computers. They are simply very different forms of intelligence.
Biological life is very inefficient, not to mention needlessly fragile, because it is the result of two billion years of random mutations that just happen to not be a hindrance to survival, with no conscious decisions being made at all (unless you believe in creationism, but that's a subject for another forum). Robots and computers, on the other hand, are carefully designed by conscious beings to be efficient, effective, and secure. One might even say that, in the future, "artificial" life could be more alive than biological life.
Many people have trouble thinking of robots as alive because they have spent their entire lives seeing only biological forms exhibiting the qualities of life. In fact, we were taught in early childhood that only carbon-based biology could be considered alive.
We were taught in early childhood that only cell-based organisms are alive. It is an even more restricted definition than carbon-based. But in a robot forum, the negative entropy definition is more meaningful.
Now, saying that cell-based biology is inefficient is a very strong assumption. What other kinds of systems have negative entropy?
Also saying that no conscious decisions are being made at all is another very strong assumption. Molecular biology follows quantum mechanics rules, which includes mutation and everything else that happens inside a cell.
Read the entire discussion to see the close interaction between quantum mechanics and consciousness. As a consequence, you see the close interaction between consciousness and molecular biology, and thus biological life.
If I correctly understand entropy (And please, correct me if I'm wrong.), it is the concept of complex phenomena becoming simpler phenomena, for example, a ceramic mug gains entropy when it shatters. And negative entropy is the concept of simple phenomena becoming more complex, for example, a canvas gains negentropy when an artist paints on it.
How does negentropy not apply to bots? As I mentioned in the OP, there are "learning" bots like Gaff and Engineer. You must admit that these bots are more intelligent and complex after a battle than before it.
Are they? They have gathered more data, but they are using the same algorithms (defined by their source code) the entire battle, every battle.
Also, just curious, why do you single out Gaff and Engineer? Just because neural nets are most similar to biological brains? I don't consider them any more or less "learning bots" than DrussGT or Sabreur.
Technically, yes, they do use roughly the same code in every battle. But, they change the way they use data, which is effectively the same. And there are genetic bots that literally change their own code, I just didn't mention them because I couldn't think of a specific example.
With positive entropy, organized phenomena become more disorganized. It is the natural course of the universe.
With negative entropy, disorganized phenomena become more organized. The catch here is that a system needs energy to reverse the natural course and exhibit negative entropy.
No robot in Robocode consumes energy on its own, unless a user plugs the computer into a wall socket. If a computer/robot knew how to find energy on its own and plug itself into an energy source, then we would have some form of negative entropy. If they knew how to repair themselves and/or replicate themselves, and their existence would prevail as long as there is an energy source, then the negative entropy definition of life would be completely fulfilled.
You're implying that something has to have a physical presence to be considered alive. Whether something is physical or virtual doesn't affect its state of order.
The robots, in a way, do repair themselves by hitting the opponent and getting the energy bonus. It is impossible for them to "reproduce" in the sense of creating new robots in the middle of a battle. Their existence prevails as long as they kill the enemy and avoid getting killed themselves.
To fulfill negative entropy, yes it needs physical presence. Binary states changing back and forth inside a computer don't relate to entropy and are thus irrelevant to this definition.
Virtual robots in Robocode exhibit intelligence, which is a third concept. I'd define intelligence as a combination of perception and decision making. Intelligence can help a system achieve negative entropy, although most Robocode AIs are designed to maximize destruction. If they were ported to a physical robot, they would be maximizing positive entropy.
But let's say, if a robot like T-850 really existed, what would you say?
To make sure I understand everything, it seems that there are several discussions going on here: 1) What differentiates humans from robots (and plants and animals)? 2) What differentiates robots from plants (or viruses, etc.)?
Could you clarify your answers to these questions? Why isn't a rock alive/human? Why isn't a tree human?
I didn't answer any question. But brought some criteria to help think about the answers.
But you did answer some of them.
That the idea a robot is alive is absurd. But what if a robot like T-850 really existed? The idea a robot is alive is still absurd?
According to the movie, it fulfills both of the criteria I cited above.
But as regards T-850, I don't really know what is going on, but from what I read on wikipedia, there is a robot that has living tissue surrounding it right? In that case, it would be sort of like moss growing on a rock. The rock isn't alive, but the moss on the rock is. Or were you referring to another aspect of T-850?
I think it's more along the lines of: yeah, it's easy to just say "it's absurd" when comparing Java code to a human being. But it wouldn't be so easy to brush off as obvious with an uber-advanced cyborg that looks and acts human to the point you can't tell the difference. At that point, you need to really break it down with some logical arguments as to what defines life or consciousness.
Good points. I'd also like to add that people anthropomorphize (take that, Google) almost everything we see. People see the Terminator as "alive" to some degree simply because it walks and talks like a human. We almost certainly see it as more "alive" on a subconscious level than a supercomputer many times more intelligent just because it can speak.
well, no, I don't think so. There's a difference between a) grasping intellectually that something is a robot and claiming that the logical distinction between robots and living substances corresponds to a real distinction and b) seeing the robot and having difficulty recognizing it as a robot rather than a human.
Hmm. So the terminology is from formal logic. What I'm trying to say is basically, once I know that T-850 is a robot, I can easily distinguish it from a human, at least at the level of thought. In this case, I would argue that the distinction in thought corresponds to a distinction in reality. However, I may not be able to come to a correct understanding of what T-850 is. This, however, does not change the argument, because then I lack a correct understanding of what I am classifying.
Ignoring the organic covering, I would say it's definitely not alive. (at least for now, I am quite happy using the negative entropy test for something being alive)
Since it's not alive, it's not human. :)
But since you brought it up, I would also mention that there are qualitative differences between humans' intellects and computers' processing power. A computer is basically a bunch of rocks (or tinker toys (http://www.rci.rutgers.edu/~cfs/472_html/Intro/TinkertoyComputer/TinkerToy.html)) arranged in a particular way. You can have them find 2^10, but they don't have an UNDERSTANDING of 2 or 10. (Using more terminology from formal logic: they lack the ability to abstract particulars to form universals; they can't have simple comprehensions.)
It doesn't need to be organic to fulfill negentropy. That's why this definition is meaningful on a robot discussion.
Also, the movie makes it quite clear the machines (from the future) are on their own.
And it also made it quite clear they are sentient due to some chip designed in a way humans never thought about, until they scavenged one from a terminator.
If we use negentropy to define life, then by definition, whatever fulfills negentropy is organic. (According to Google's dictionary, anyway; Merriam-Webster suggests some other definitions, but I think they would come to the same thing.)
So though I don't necessarily accept the negentropy criterion as a satisfactory definition, for the time being I'll work with it.
As regards the last two points, remember, I never saw the movie.
Could you explain how the machines being on their own is significant?
What I'm saying is that you can't arrange a bunch of rocks in such a way that they are sentient. Movies can be made where vegetables can talk, but that doesn't mean it can really happen.
We know extremely little about how consciousness physically works. While it is intuitively hard to believe, it is possible that any sufficiently intelligent system could be conscious, even if it is made of rocks (or tennis balls).
I see no reason why plants couldn't eventually develop some form of communication if their environment required it.
I was taught in school organic substances were those composed mostly of carbon, hydrogen and oxygen.
The machines being on their own and still surviving is a sign of negative entropy.
Using sentience as criterion is problematic because it can't be tested. But I had to bring it here because the discussion was already heading towards philosophy.
Negentropy, on the other hand, is a much more concrete criterion, and the broadest definition of life I know of. We could stick to the classic cell criterion, but then it would be too easy to answer the question which started the whole discussion with a no.
I can say with confidence that a rock isn't alive because it displays no signs of negentropy, intelligence, or any physical or informational changes that aren't directly caused by an external source.
A tree is not human because, well, it is in another taxonomic kingdom. :)
Robots display no signs of negentropy.
A tree is in another taxonomic kingdom BECAUSE it is not human. But how do you know that? (Hint: universals.)
You're right, my point about taxonomy was really circular logic. It was intended as a joke.
Well just because it was a joke doesn't mean I don't still want an answer. :)
A tree isn't human because there are enough significant physical differences to justify two different names/taxonomic categories. That should have been obvious to you. I don't think we mean exactly the same things when we say "human."
I think that when you say "human," you mean more than just the species Homo sapiens; you also include the concepts of mind and soul. I think you believe that humans are special, or fundamentally different from other animals.
I don't believe Homo sapiens are really that special when compared to other animals. We are not the only creatures that have developed tools or language. We are not the only creatures to feel emotions or pain, or to be "aware" of our surroundings. Insect societies are not fundamentally that different from ours. Dogs and pigs are more intelligent than human infants. Butterflies see the world in colors we can't even imagine.
Our greatest claim to being superior to other animals is probably our accomplishments in the STEM fields. But, in that we are quickly being overtaken by machines.
To me, it seems intuitively clear (read: I'm not claiming I can provide a logical proof) that humans are on the other side of an important threshold of intelligence as compared to most or all other animals on earth. And including machines in the discussion seems premature. For all we know, the ability to attain consciousness is related to the materials that form our brains and anything silicon-based is incapable of consciousness.
But even so, I agree we aren't special in any absolute sense. I don't see why chimps or dolphins couldn't evolve to cross that intelligence threshold, if it exists. And despite our remarkable ability to understand abstract concepts, I take it as a given that there are limits to our ability to understand the universe, much as a cat can never understand how a DVD player works the way that we can.
I thought dolphins and mice already had become intelligent. :)
I didn't say that machines were conscious, only that they are very quickly becoming much better at using mathematics and may soon be better than humans at STEM related tasks.
I meant crossing the intelligence threshold that humans have crossed, not just having any degree of intelligence. I'm not sure how to clearly define it. But I think mice would have trouble understanding this discussion. :-) Understanding theory of mind is one important threshold. I think humans have crossed another important threshold (or more than one) in terms of understanding abstract concepts, even compared to chimps and dolphins.
Also I think consciousness is very relevant to claiming machines are "using mathematics". Unless the machine itself becomes conscious, we are using machines to do mathematics. The machine is just a physical structure and reaction, like a crystal lattice. You wouldn't claim a crystal lattice is intelligent or using math, just because we can use math to describe interesting aspects of its structure, would you?
Our brains are closely related to consciousness, but they are not necessarily the same thing. And knowing if anything silicon-based is capable of consciousness or not leads to the mind-body problem.
While I agree with Voidious that we don't know enough about our brains to say for sure, I personally speculate that a very powerful supercomputer running very smart software could think like us. I guess that makes me a physicalist.
A normal computer with very smart software could think like us. A supercomputer only provides speed.
Really? The human brain has over two petabytes of long-term memory. Does your computer have a hard drive that size?
You seem to imply a difference between a conscious being deciding to use math and a computer receiving instructions and giving outputs. But, if consciousness is deterministic, is there really any difference between a conscious being receiving inputs (from its senses) and deciding to use math, and a machine receiving inputs (indirectly from conscious beings) and deciding to use math?
You have said that you believe humans are mostly deterministic. There are also many instances of people behaving deterministically. Among them are, as I've noted before, people who have transient global amnesia.
If we are purely deterministic, then no, there's no difference. Math only exists in our minds, so to me, if there's no consciousness, there's no math.
That still isn't a supercomputer; that is just a storage network. But if I recall correctly, the storage capacity of the human brain is still in dispute. Some say it's incredibly vast (like you did just now); others claim the brain is just better at storing information. I am more in the second camp.
It isn't compression exactly. I would say it is more that useless information is discarded, and useful information is only partially stored. Say I learn something, but never use it; I'll forget its meaning, because my brain got rid of that information when it went unused. Studies show we forget up to 75% of what we learn on a daily basis. Of course, if it got used a lot at some point, it will stay in there for future use, though the exact details will get fuzzy with disuse.
It will go from "The quick brown fox jumped over the lazy dogs." to "fox jumped over dogs" to "fox (general notion of going over) dog". That makes no sense, so: fox... oh, over the dog. What can we use to go over a dog? Well, we could jump, or we could hover or fly. But a fox cannot hover or fly, so it was probably jump. So we reconstruct it to "the fox jumped over the dog." Information was lost, but the general meaning remained.
To further reinforce this, I had forgotten part of the original saying above. I had lost 'quick'. But then I remembered that, oh, it's one of those sentences that uses all the letters. I noticed it didn't have a 'q' in it. So I suddenly recalled: oh, "quick". Well, the dog isn't quick, and I don't think it quickly jumped, so the fox must be the one who was quick, as most foxes are.
Two of my favourite SciFi authors, Neal Asher and Iain M. Banks, have touched on the topic of sentient AIs. In both of the universes they create, humans have essentially been overtaken by benevolent AIs, who do all of the organisational work and governing, while the humans are given the resources they need to take up pretty much any lifestyle they want, because mechanisation has made all forms of labour unnecessary.
Their approaches differ, though, in how they view sentience. In Banks's work, any computer system above a certain level of power is legally required to be made sentient. From this, I interpret that sentience is not an intrinsic property of a powerful computer, but rather a certain organisation and programming of said computer. Asher, on the other hand, has multiple stories about a certain humanoid robot (the Brass Man, 'Crane', in case you want to read the books) whose processor-crystal was fractured, but who continued to function with multiple personalities. From this, unless the programming was particularly redundant, I would infer that after a certain amount of processing power, and given the right seed data, sentience sort of springs into place.

Both feature brain-network interfaces, called 'neural nets' by Banks and 'gridlinking' by Asher, but only Asher covers cyborgs and human augmentation with processing nodes and robotic limbs. Asher also features 'golems', which are weak (although smarter-than-human) AIs in a humanoid chassis covered in syntheskin, which run a human emulation program and thus experience love, fear etc., and are generally indistinguishable from humans, although the emulation can be turned off during emergencies and the syntheskin isn't necessary for operation. Both authors also follow different post-Einsteinian physics, which I find particularly interesting =)
I guess it's becoming pretty clear that Robocode robots aren't alive in the same sense that we are.
That's certainly not implying that no robots are alive or can be alive, only that Robocode is a very restrictive environment.
I've become very fond of the Simulation Hypothesis. Not necessarily the idea that we are simulated by advanced humans, but the idea that our universe is the product of some external intelligent system.
I'll explain using Conway's Game of Life as an example. (If you haven't downloaded Golly yet, please do.) From the perspective of an intelligent system in Life, the ever-expanding 2D grid it calls home is the universe. It would see cells as indivisible subatomic particles, simple patterns as atoms, and patterns of patterns as molecules. It would try to understand the mechanical laws that control particle interaction. It would be dumbfounded by live cells appearing out of nowhere for no apparent reason, and by cells that should be alive suddenly dying. The intelligent being would wonder why it's there, and how it got there. Even once it had figured out the laws that govern cell birth and death, it would still have no idea that we are the reason it and its universe exist, and the reason cells appear and die randomly. We are both the source of its existence and the source of all randomness it perceives.
I realize it's a bit of a stretch to imagine a self-aware pattern in Life, but it's theoretically possible, and it could remain purely hypothetical if you like. The interesting thing is, this could describe our universe almost perfectly. We understand much of how our universe behaves, but we're stumped by questions like what existed before the big bang, or why particles interact the way they do. Also, our subatomic particles are very similar to Life cells, in that they exist in discrete states and have no real "substance."
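For anyone who hasn't played with Life before, its entire "physics" fits in a few lines. Here's a minimal sketch of the update rule in Python (the function and variable names are my own, not from Golly or any particular implementation):

```python
# Minimal sketch of Conway's Game of Life update rule, representing the
# universe as a set of live-cell coordinates.
from collections import Counter

def step(live):
    """Return the next generation of live cells."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth: exactly 3 live neighbours. Survival: 2 live neighbours and the
    # cell was already alive. Everything else dies.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" flips between a horizontal and a vertical bar each tick.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker  # period-2 oscillator
```

Everything an inhabitant of the grid could ever observe reduces to those two counting rules, which is the whole point of the analogy.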
It's well understood that what makes an object "solid" is nothing more than mathematical fields. (Here is another relevant Radiolab clip. I love that show.) So, reality can be reduced to math. Very similar to how the Life universe is simply the interaction of a bunch of imaginary particles dictated by a few simple math rules.
When I say we are the products of an intelligent system, I mean that our universe is a bunch of logical laws and mathematical formulae being computed by some external intelligence. This intelligence could be a superintelligent being, a supercomputer, or something we mortals cannot even comprehend.
These have only been my personal theories, and should not be taken as statements of fact. I don't even think it's possible to prove whether we are simulated or not. Please, tell me what you think.
Thought this was pretty hilarious:  ... And almost on-topic, now that The Matrix and the limits of feline understanding have come up. :-)
And +1 to Matrix trilogy in general, I think it's way under-appreciated.
Oh yeah! I never noticed that connection.
There was also AgentSmith, which never came to fruition, but got some more people thinking about genetic algorithms. (Which I probably remembered because Wolfman just showed up today at the BerryBots forums.)
I watched The Matrix last night.
The premise is absolutely ridiculous, but the action scenes are okay.
I didn't like it when it first came out. I thought it was just a flashy rehash/mangling of a lot of cool sci-fi concepts from William Gibson and others, and the "battery" premise was a complete turn-off for me. But I dug the sequels, and eventually softened my criticism of the first one and now I really dig all of them.
Yeah, why humans and not electric eels? And electric eels wouldn't require a ton of processing power to provide a simulation environment...
I found Dark City much more gripping, and creepy in the 'what if it's actually happening?' kind of way.