AmoebaMan wrote:
nXIII wrote:
Saying you believe in God is exactly like saying you believe in Flying Purple Bunnies with time-traveling abilities which rule over the universe, just because nobody has proved this wrong.
yeah, except your Flying Time-Travelling Universe Ruling Purple Bunnies don't have a 1000+ page book filled with the testimonies of people who witnessed Christ working miracles. that's the difference. and this is not meant to provoke an argument. you just go on believing your thing, and I will believe mine.
Well, that wasn't really written BY God or whatever, so it doesn't really prove God exists... it COULD be just some extremely unlikely set of coincidences....
I LIKE MY TIME-TRAVELING UNIVERSE-RULING PURPLE BUNNIES (that I made up on the spot...)
Offline
Going back to what sparks said:
sparks wrote:
in a sense, we are in fact very much like an electronic ADC chip (analogue to digital converter), which turns an analogue input value into a binary value that a computer can understand (usually a value from 0 to 255, since 8-bit resolution is the standard for a microchip's input).
aaanyway, humans process analogue information, that's true, but we store it digitally, using synaptic connections in our brain and the electrical currents that pass between them. When a new thing is learnt, a new connection is made (it will only be temporary at first; that is called short-term memory and lasts around 20 seconds), but it can be strengthened by repetition or association, in which case the connection can last forever.
A computer could theoretically "recognize" things that it sees as long as it actually has something in its memory to compare them to, just like a human. If we see something that we have never seen or heard of before, and it is nothing like anything we have ever seen, we would have no idea what it is. But if someone told us what it was, either by giving us a description or by visually showing us what it is, we would then have something to actually compare that thing to in our brain.
So, if you could give a computer a reference to what it is seeing, it could potentially recognize it and know what it is.
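A toy way to picture that "compare it to something in memory" step, assuming each known thing is stored as a few made-up measurements (size, roundness, shininess), would be to label the new thing with whichever stored reference it is closest to:

import math

# Each known thing is stored as a short list of measurements:
# [size, roundness, shininess] - all values here are invented.
references = {
    "ball":  [1.0, 0.9, 0.2],
    "car":   [9.0, 0.3, 0.8],
    "plate": [2.0, 0.8, 0.6],
}

def recognise(features):
    # Pick the stored reference with the smallest distance to what we see.
    best_name, best_dist = None, float("inf")
    for name, ref in references.items():
        dist = math.dist(features, ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

print(recognise([8.5, 0.4, 0.7]))   # closest to the stored "car", so it prints "car"

If nothing is stored, or nothing is even remotely close, the only honest answer is "don't know".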
Offline
ScratchReallyROCKS wrote:
Going back to what sparks said:
sparks wrote:
in a sense, we are in fact very much like an electronic ADC chip (analogue to digital converter), which turns an analogue input value into a binary value that a computer can understand (usually a value from 0 to 255, since 8-bit resolution is the standard for a microchip's input).
aaanyway, humans process analogue information, that's true, but we store it digitally, using synaptic connections in our brain and the electrical currents that pass between them. When a new thing is learnt, a new connection is made (it will only be temporary at first; that is called short-term memory and lasts around 20 seconds), but it can be strengthened by repetition or association, in which case the connection can last forever.
A computer could theoretically "recognize" things that it sees as long as it actually has something in its memory to compare them to, just like a human. If we see something that we have never seen or heard of before, and it is nothing like anything we have ever seen, we would have no idea what it is. But if someone told us what it was, either by giving us a description or by visually showing us what it is, we would then have something to actually compare that thing to in our brain.
So, if you could give a computer a reference to what it is seeing, it could potentially recognize it and know what it is.
True, but we can (and computers will be able to) make inferences about an unknown thing that we encounter.
Offline
We can't make inferences about things we have nothing to compare them to. Like when you are a newborn baby and you see a car: what do you think of it? You have nothing that could help you make an inference, because you pretty much don't have any prior knowledge of anything. This is probably what you could think of a basic AI program as: a computer trying to make sense of everything around it.
BUT I'm not sure if Scratch is powerful enough to run an AI program. If analogue input is the key to AI, then Scratch would need a few new blocks, such as:
(pitch) - along the lines of the (loudness) block, this would measure the pitch of a sound instead of its volume. You could potentially make voice recognition with this (a rough sketch of what such a reporter might compute is shown just after this list).
(color?) - this would report the color that the sprite is touching, which means there would also have to be a (color []) block with a color picker that you could drop into the <[]=[]> block or put in a variable. Each color would also have a corresponding number, so you could put it into number slots in other blocks.
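Here's the sketch mentioned above: a very rough idea of what a (pitch) reporter could compute from raw audio samples, written as plain autocorrelation rather than Scratch blocks (the sample rate, frequency range and test tone are all just assumptions):

import math

def estimate_pitch(samples, sample_rate, min_hz=80, max_hz=1000):
    # Find the lag at which the waveform best lines up with a shifted copy of itself.
    best_lag, best_score = None, 0.0
    for lag in range(int(sample_rate / max_hz), int(sample_rate / min_hz) + 1):
        score = sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag if best_lag else 0

rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(2000)]   # a 440 Hz test tone
print(round(estimate_pitch(tone, rate)))   # prints a value close to 440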
Last edited by ScratchReallyROCKS (2010-05-13 20:45:24)
Offline
ScratchReallyROCKS wrote:
We can't make inferences about things we have nothing to compare them to. Like when you are a newborn baby and you see a car: what do you think of it? You have nothing that could help you make an inference, because you pretty much don't have any prior knowledge of anything. This is probably what you could think of a basic AI program as: a computer trying to make sense of everything around it.
True, but I'm talking about when we DO have things to compare it to.
Last edited by nXIII (2010-05-13 20:44:52)
Offline
ScratchReallyROCKS wrote:
Exactly! If a computer had references to things, it could make inferences too.
Yes, but the power of the human mind is that it can make inferences without references. If we couldn't, we would never mature beyond the aforementioned infancy stage. Our knowledge comes from somewhere, and that somewhere is not a programmer plugging things into our brain (I hope).
Offline
A human mind does need some prior knowledge to make inferences: you can't infer that a bag is paper if you don't know what paper is, and you can't infer that a truck is like a car if you don't know what a car is. So if you saw something unimaginable (if it were imaginable you could make inferences about it) that was literally like nothing you had ever seen before, and indescribable in every way, you couldn't infer anything about it, because you couldn't define anything about it. And yes, we would make it past infancy, because you are constantly learning things (hey, what's that on my floor?), and by doing so we are getting references to things we might have seen/heard/smelled/stubbed a toe on before, so we can define what they are. So, you can't make inferences about things that you can't analyse in any way. Same with computers: if they have nothing in their memory that could define an object, they wouldn't know what it was, like a robot that is programmed to see cubes and only has cubes stored in its memory. If it saw a sphere, it wouldn't know what it was.
Last edited by ScratchReallyROCKS (2010-05-14 00:27:12)
Offline
ScratchReallyROCKS wrote:
A human mind does need some prior knowledge to make inferences: you can't infer that a bag is paper if you don't know what paper is, and you can't infer that a truck is like a car if you don't know what a car is. So if you saw something unimaginable (if it were imaginable you could make inferences about it) that was literally like nothing you had ever seen before, and indescribable in every way, you couldn't infer anything about it, because you couldn't define anything about it. And yes, we would make it past infancy, because you are constantly learning things (hey, what's that on my floor?), and by doing so we are getting references to things we might have seen/heard/smelled/stubbed a toe on before, so we can define what they are. So, you can't make inferences about things that you can't analyse in any way. Same with computers: if they have nothing in their memory that could define an object, they wouldn't know what it was, like a robot that is programmed to see cubes and only has cubes stored in its memory. If it saw a sphere, it wouldn't know what it was.
Neat explanation there; I mostly agree with it.
The one thing I'd like to add is that if we come across something we have never seen before, or never seen the like of before, we can still vaguely relate to it in the sense of how we react to it, because over the course of our childhood we have constantly been presented with such things.
For instance, we will watch an unknown object for quite a while to learn what we can about it. If it were, for example, a car (and we had nothing to compare it to), we would note that it is not moving or making any noise, so it is potentially not a threat; we would note that it is a lot larger than us, so if it moved we should be wary of it; and that it is shiny, hinting that it was designed. If something about the car changed, such as the motor turning on, the radio starting to play or it beginning to gently coast down a hill, we would instantly have our fight-or-flight response triggered, essentially becoming very aware of it and watching what it is doing (learning as we go). If the radio turned on, we would after a while no longer find it threatening, as nothing bad has happened.
I may have gone off track a bit there, but essentially, even if we have no previous experience with an object, we still have built-in methods of dealing with it, and we do all we can to learn about it. Humans hate not knowing things; why else would our world be categorised, catalogued and controlled? Everything has a name, everything is tested.
If any of you have a cat, you'll notice this exact phenomenon: the cat keeps its distance while being very curious about the unknown object; if nothing happens it will slowly get closer to see what it is; the slightest noise from it, or even near it, and it will probably catapult (no pun intended) itself round the corner and cautiously poke its head round to watch.
Last edited by sparks (2010-05-14 03:00:27)
Offline
Nick60 wrote:
What is all this [removed by Forum Moderator]?
AI has been achieved; however, Scratch (without any further programming) cannot achieve AI due to its limitations.
Excuse me? Please don't swear; the star does not mean it isn't a swear word, we can still tell what it says.
I don't understand you... AI has not been achieved yet, only AAI as it were; machines have managed to learn and be intelligent, but only in a narrow field. Nothing created yet can generalise all the information it has learnt and react intelligently to any possible situation.
No, Scratch is not the system that will provide AI; it isn't designed for that.
Last edited by Paddle2See (2010-05-25 19:03:24)
Offline
sparks wrote:
Nick60 wrote:
What is all this [removed by Forum Moderator]?
AI has been achieved; however, Scratch (without any further programming) cannot achieve AI due to its limitations.
Excuse me? Please don't swear; the star does not mean it isn't a swear word, we can still tell what it says.
I don't understand you... AI has not been achieved yet, only AAI as it were; machines have managed to learn and be intelligent, but only in a narrow field. Nothing created yet can generalise all the information it has learnt and react intelligently to any possible situation.
No, Scratch is not the system that will provide AI; it isn't designed for that.
It would make me laugh if it were, but it's very, VERY, VERY unlikely.
ASIMO is quite advanced; I don't know of any other ones as advanced as him. But I'm willing to be proved wrong. Anyone know of a more advanced AI? (Waits for people to go google "advanced AI")
Last edited by Paddle2See (2010-05-25 19:03:42)
Offline
sparks wrote:
For instance, we will watch an unknown object for quite a while to learn what we can about it. If it were, for example, a car (and we had nothing to compare it to), we would note that it is not moving or making any noise, so it is potentially not a threat; we would note that it is a lot larger than us, so if it moved we should be wary of it; and that it is shiny, hinting that it was designed. If something about the car changed, such as the motor turning on, the radio starting to play or it beginning to gently coast down a hill, we would instantly have our fight-or-flight response triggered, essentially becoming very aware of it and watching what it is doing (learning as we go). If the radio turned on, we would after a while no longer find it threatening, as nothing bad has happened.
I agree with that. Humans do have innate behaviors to deal with unknown things, BUT you could program reactions like that into a computer's AI program, things like avoiding heated objects moving at 70 miles per hour. Those would be the things it wouldn't have to learn, so it could survive for a while before it actually learns.
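Those built-in, never-learned reactions could literally be a handful of hard-coded rules that get checked before any learned behaviour; a minimal sketch (every threshold and field name here is invented):

def instinct(obj):
    # obj is a dict of sensor readings about the unknown thing.
    if obj["speed_mph"] > 50 and obj["distance_m"] < 20:
        return "dodge"          # fast and close: get out of the way
    if obj["temp_c"] > 100:
        return "keep distance"  # hot: don't touch it
    return "observe"            # default: watch it and learn

print(instinct({"temp_c": 300, "speed_mph": 70, "distance_m": 5}))   # prints "dodge"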
Offline
I suppose what we're saying is that machines need a "default" reaction to an unknown thing in order to react properly: their equivalent of instinct. Yes, they need to be cautious about an unknown object and try to learn what they can about it, so that anything they have stored as references can help them decide what it is.
That raises the question: how advanced do you think computers will have to be before they can notice things that shouldn't be there, or that should be there and are not?
It's like this: if we walk into a warehouse and there are loads of crates everywhere, we think it's fine, and similarly an empty warehouse is fine too, but as soon as we see a single crate in the middle of the empty warehouse (possibly dramatically lit by a spotlight) we become suspicious. A machine may note that the things around it are crates, but if there were no crates, it would currently not ask where the crates were (unless specifically designed to need crates), and if there was only one crate, it would not count it as suspicious, because crates belong in warehouses.
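One very crude way to give a machine that kind of suspicion would be to have it keep rough expected counts for each kind of place and flag anything that is way off; a toy sketch (the scene types and numbers are invented):

# Flag objects whose count is far from what this kind of scene usually has.
expected = {"warehouse": {"crate": 40, "forklift": 2}}

def suspicious(scene, observed):
    flags = []
    for thing, usual in expected[scene].items():
        seen = observed.get(thing, 0)           # missing things count as zero
        if abs(seen - usual) / usual > 0.9:     # way off the norm
            flags.append(thing)
    return flags

print(suspicious("warehouse", {"crate": 1, "forklift": 2}))   # prints ['crate']
print(suspicious("warehouse", {}))   # also flags ['crate', 'forklift'] - crude, but it does notice absence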
Yes, I'm a big fan of ASIMO too; it is a truly remarkable robot. One demo video I saw that I really liked showed that ASIMO could be told that an object was a chair, then be shown a stool, quite different from a chair in design (no back, possibly more legs, maybe a swivel), and notice key links that could help it decide that this new object was a chair.
EDIT:
oh yes, I meant to say: one of the features of Panther I have been keen to put in, and have managed to get in, is camera input and pixel colour abilities. This is really BRILLIANT; I've been playing with it and I can see that, with hard work, it will allow Panther projects to have object, colour and movement detection and recognition.
I have already made a project that reads "barcodes" by checking brightness values of different parts of the screen, and I am currently working on a project that locks onto and tracks a red ball you place in front of it. It won't be particularly fast, and not as advanced as many existing things, but it will allow a whole new dimension of project interaction, and it has let me explore my fave areas of computer science. I'm sure some of you like the idea of that.
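Not actual Panther code, but the idea behind that red-ball tracker is simple enough to sketch: scan the frame's pixels, keep the ones that look red, and aim at their average position (the frame format, a grid of (r, g, b) values, is just an assumption here):

def find_red_ball(frame):
    # frame is a list of rows, each row a list of (r, g, b) tuples, 0-255.
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if r > 150 and g < 80 and b < 80:   # "looks red"
                xs.append(x)
                ys.append(y)
    if not xs:
        return None                             # no ball in view
    return (sum(xs) / len(xs), sum(ys) / len(ys))

frame = [[(0, 0, 0)] * 5 for _ in range(5)]
frame[2][3] = (200, 20, 20)                     # one red pixel at x=3, y=2
print(find_red_ball(frame))                     # prints (3.0, 2.0)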
Last edited by sparks (2010-05-14 11:05:21)
Offline
sparks wrote:
I suppose what we're saying is that machines need a "default" reaction to an unknown thing in order to react properly: their equivalent of instinct. Yes, they need to be cautious about an unknown object and try to learn what they can about it, so that anything they have stored as references can help them decide what it is.
That raises the question: how advanced do you think computers will have to be before they can notice things that shouldn't be there, or that should be there and are not?
It's like this: if we walk into a warehouse and there are loads of crates everywhere, we think it's fine, and similarly an empty warehouse is fine too, but as soon as we see a single crate in the middle of the empty warehouse (possibly dramatically lit by a spotlight) we become suspicious. A machine may note that the things around it are crates, but if there were no crates, it would currently not ask where the crates were (unless specifically designed to need crates), and if there was only one crate, it would not count it as suspicious, because crates belong in warehouses.
That's where you would have a tricky bit of programming to do. But if it were programmed like a child, the child wouldn't know that there was something suspicious unless it was told so. So basically, computers are like toddlers, except that they are very bright at maths and are programmable (and, for parents everywhere, they have volume buttons!).
sparks wrote:
Yes, I'm a big fan of ASIMO too; it is a truly remarkable robot. One demo video I saw that I really liked showed that ASIMO could be told that an object was a chair, then be shown a stool, quite different from a chair in design (no back, possibly more legs, maybe a swivel), and notice key links that could help it decide that this new object was a chair.
I think they had that in James May's Big Ideas.
Offline
I think one key block, as I mentioned earlier, would be the (pitch) reporter block. You could potentially train speech recognition software in Scratch and, to go one step further, have it recognize patterns in a specific person's speech AND recognize people from their voices.
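The crudest version of that would be to store a short list of (pitch) readings for each word or speaker and match new recordings against them by total difference; a toy sketch with made-up numbers:

templates = {
    "yes": [220, 230, 250, 240],   # invented pitch readings, in Hz
    "no":  [180, 170, 160, 150],
}

def closest_word(pitches):
    # Compare against each stored template, trimmed to the same length.
    best, best_cost = None, float("inf")
    for word, ref in templates.items():
        n = min(len(ref), len(pitches))
        cost = sum(abs(a - b) for a, b in zip(ref[:n], pitches[:n])) / n
        if cost < best_cost:
            best, best_cost = word, cost
    return best

print(closest_word([218, 233, 246, 238]))   # prints "yes"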
Offline
To try to answer the original question:
jps wrote:
...a bot to follow the player through a maze and know the quickest way to the player without going through walls, but this has to be adaptable so that if the player moves, the bot will change to a new quickest route...
This is possible. Look at Coolstuff's pathfinder AI. It would need to run much faster so that it can re-evaluate quickly, but I bet it's possible.
And here's a very simple pathfinding AI I made a while back.
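For anyone curious what a pathfinder like that does underneath (this is not Coolstuff's actual project, just a minimal sketch of the general idea), a breadth-first search finds the shortest route through the maze, and re-running it whenever the player moves gives the "adapts to the new quickest route" behaviour:

from collections import deque

def shortest_path(grid, start, goal):
    # grid: 1 = wall, 0 = open; start/goal are (x, y) cells.
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:      # walk back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in came_from):
                came_from[(nx, ny)] = cur
                queue.append((nx, ny))
    return None                         # no route at all

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_path(maze, (0, 0), (0, 2)))   # bot at (0,0), player at (0,2): route goes round the wall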
For everybody else: read James 1:5 and the book of Revelation AND/OR The Singularity Is Near by Ray Kurzweil.
Offline
sparks wrote:
Well, I'm quite enjoying the current discussion. I'm liking that you've seen James May's Big Ideas; that was a good episode.
Yeah, it was cool how ASIMO had real-time teaching (or whatever its real name is), though he was taught that the wind-up robot was "Grandpa".
Offline
sparks wrote:
haha, I remember. I guess truth-telling isn't part of its capabilities; it takes your word for it. I bet robots will always have trouble with metaphors and sarcasm, as humans find them hard enough sometimes.
Well, that's like kids: the older they get, the more they learn.
Offline
sparks wrote:
p.s. although light does display a few characteristics of a particle, such as the release of electrons from a metal under ultra-violet light (the photoelectric effect), light isn't a particle, it's a wave.
Actually, this is where you need to understand quantum physics. To put it simply, light behaves like both a particle and a wave: if you use an experiment designed to test whether it is a particle, it will behave like a particle, but if you use an experiment designed to test whether it is a wave, it will behave like a wave.
Last edited by calebxy (2010-05-25 12:04:48)
Offline
calebxy wrote:
sparks wrote:
p.s. although light does display a few characteristics of a particle, such as the release of electrons from a metal under ultra-violet light (the photoelectric effect), light isn't a particle, it's a wave.
Actually, this is where you need to understand quantum physics. To put it simply, light behaves like both a particle and a wave: if you use an experiment designed to test whether it is a particle, it will behave like a particle, but if you use an experiment designed to test whether it is a wave, it will behave like a wave.
... I have studied that... the video had a weird floating head explaining it... it creeped the class out :S
anyhoo, you have just said what I said, not disproved it. It displays characteristics of both, because everything, even the particles that make us up, displays properties of both; it is which properties something displays more of that decides what we call it. I would never refer to my particles as waves, because I exhibit more properties of a particle than of a wave. In the same way, I generally refer to light as a wave, because that is the property it exhibits most in my mind.
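For what it's worth, the usual way to put numbers on that is de Broglie's relation: anything with momentum p has a wavelength λ = h / p, where h is Planck's constant, about 6.6 × 10^-34 J s. A 70 kg person walking at 1 m/s has λ = 6.6 × 10^-34 / 70, roughly 10^-35 m, far too small to ever notice, while green light has a wavelength of about 5 × 10^-7 m, big enough to show up in an ordinary diffraction experiment. That's the quantitative reason we habitually call ourselves particles and call light a wave, even though both descriptions apply to both.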
Offline
sparks wrote:
calebxy wrote:
sparks wrote:
p.s. although light does display a few characteristics of a particle, such as the release of electrons from a metal under ultra-violet light (the photoelectric effect), light isn't a particle, it's a wave.
Actually, this is where you need to understand quantum physics. To put it simply, light behaves like both a particle and a wave: if you use an experiment designed to test whether it is a particle, it will behave like a particle, but if you use an experiment designed to test whether it is a wave, it will behave like a wave.
... I have studied that... the video had a weird floating head explaining it... it creeped the class out :S
anyhoo, you have just said what I said, not disproved it. It displays characteristics of both, because everything, even the particles that make us up, displays properties of both; it is which properties something displays more of that decides what we call it. I would never refer to my particles as waves, because I exhibit more properties of a particle than of a wave. In the same way, I generally refer to light as a wave, because that is the property it exhibits most in my mind.
I'd just say that light is a wave because that's how I've always known it, such as with solar waves.
Would I be right in saying here that I'm exhibiting how the mind, if it doesn't know what something is, tries to find something like it?
Last edited by markyparky56 (2010-05-25 14:20:13)
Offline