markyparky56 wrote:
sparks wrote:
calebxy wrote:
Actually, this is where you need to understand quantum physics. To put it simply, physics normally lets light behave like both a particle and a wave. But if you use an experiment designed to detect a particle, it will tell you it is a particle, and if you use one designed to detect a wave, it will tell you it is a wave...
I have studied that... the video had a weird floating head explaining it... it creeped the class out :S
anyhoo, you have just said what I said, not disproved it. It displays characteristics of both, because everything, even the particles that make us up, displays properties of both. It is a case of which properties something displays more of that defines what we call it. I would never refer to my particles as waves because I exhibit more properties of a particle than I do of a wave. In the same way, I generally refer to light as a wave because that is the property it exhibits most in my mind.
I'd just say that light is a wave because that's how I've known it, such as solar waves.
Would I be right in saying here that I'm exhibiting how the mind, if it doesn't know what something is, tries to find something like it?
Not quite sure what you mean...
Yes, at a lower level of science (before you specialise in physics) light is always treated as a wave, because nothing you study at that level will contradict that knowledge. In the same way, you don't need to know what's inside a cell's nucleus (other than the vague idea that it's DNA) at a certain level; then, as you advance, the nucleus reveals itself to be made of all sorts of things, and DNA is made up of things as well (I don't take biology so I apologise if this is wrong, but I think it's made of carbon?)
Offline
sparks wrote:
... yes, at a lower level of science (before you specialise in physics) light is always treated as a wave... and DNA is made up of things as well (I don't take biology so I apologise if this is wrong, but I think it's made of carbon?)
Well, since we all break down to carbon after we die (I think), it's likely that what you're saying is correct. I missed most of the biology section in science in first year.
Offline
sparks wrote:
... DNA is made up of things as well (I don't take biology so I apologise if this is wrong, but I think it's made of carbon?)
Yes... I think they are what are called organic compounds
Offline
AmoebaMan wrote:
The fact is, artificial intelligence is impossible to create. Doing so would be playing God, and you would probably get struck by lightning if you did succeed. Anyway, because computers are only capable of understanding two values (1 and 0) and two functions (add and bit-shift), it is impossible to create a program that could do what the human mind can do.
That being said, I think what you are looking for is a CBR (computer behavioural replicant). Those aren't that difficult to make. You just have to make a @#$%-load of "if" statements to define every possible scenario, either that or find a mathematical formula that could be used to define those if statements with given inputs, or a combination.
^ LOL this guy is just the strongest Catholic I know (of)! ^
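The "load of if statements" CBR described above can be sketched in a language like Python (a hypothetical illustration, not from the thread; the situations and responses are invented):

```python
# A tiny "computer behavioural replicant": a fixed table of
# situation -> response rules standing in for a pile of if statements.
RULES = {
    "greeting": "Hello there!",
    "question": "I'm not sure; let me think about that.",
    "other": "That's not very nice.",
}

def classify(text):
    """Crudely sort an input into one of the known situations.
    (Substring matching is deliberately naive; this is the weakness
    of the approach: every scenario must be anticipated in advance.)"""
    if text.endswith("?"):
        return "question"
    if any(w in text.lower() for w in ("hello", "hi ")):
        return "greeting"
    return "other"

def respond(text):
    return RULES[classify(text)]
```

So `respond("hello friend")` picks the greeting rule, while anything unanticipated falls through to a default, which is exactly why the poster suggests a formula to generate the rules instead of writing them all by hand.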
Offline
sparks wrote:
Before I begin, let me explain that I don't have a god in my life; I believe in things only if they can be satisfactorily proven scientifically. I'm not gonna change my mind, you're not gonna change your mind, which is why we are NOT going to debate the truth of either of our beliefs; I'm only mentioning it.
Our entire consciousness revolves around trillions of if statements that let us react according to everything we experience, from having a bath to driving a car; we use past experience to do them. E.g., during previous baths, the water has always been wet, generally stayed in the tub, and we are supposed to wash ourselves. We assume this will be the case with the bath we are having at the moment, even though there is nothing to say that those things will be the same this time. In a car, we assume the wheel will do exactly what it did every other time we used it. Our previous experiences shape our current and future reactions.
When we come across something we have never seen before, we can use past experience to decide what is best to do in the situation. This is the thing machines find so hard, and it gives us an edge for the time being.
Another example of learnt information is that things look smaller further away, which allows us to make sense of a two-dimensional picture as having depth, even though it is all flat.
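The idea that we handle unseen situations by falling back on the most similar past experience can be sketched like this (hypothetical Python; the features and actions are invented for illustration):

```python
# Past experiences: situation features -> the action that worked.
# Features here are made-up (water_temperature, is_wet) pairs.
experience = {
    (40, True): "get in the bath",
    (90, True): "wait for it to cool",
    (20, False): "put on a jumper",
}

def nearest_action(situation):
    """For a never-seen situation, reuse the action from the most
    similar remembered one (smallest feature distance)."""
    def distance(past):
        # Mismatched boolean features count heavily against a match.
        return abs(past[0] - situation[0]) + (past[1] != situation[1]) * 100
    best = min(experience, key=distance)
    return experience[best]
```

A 45-degree bath was never experienced, but `nearest_action((45, True))` reuses the 40-degree answer; this generalisation from similar past cases is the part the poster says machines find hard.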
I agree with sparks!
Offline
AmoebaMan wrote:
sparks wrote:
... when we come across something we have never seen before, we can use past experience to decide what is best to do in the situation. This is the thing machines find so hard, and it gives us an edge for the time being.
The fact is, on its most basic level, the brain is far more complex than the computer. The signal system used to relay information in the human brain is far more complex than binary; thus, a computer simply isn't capable of doing some things that a brain can, namely, thinking entirely on its own. A true artificial intelligence would be able to respond to events with no prior programming dictating those events, just like a human brain can. An if statement can't do that.
sparks wrote:
another example of learnt information is that things look smaller further away, which allows us to make sense of a two dimensional picture as having depth, even though it is all flat.
Someone doesn't know their physics. Things far away appearing small isn't due to information modification by the brain; it's basic physics. As you get farther and farther away, the cone of light from the object that actually reaches your eyes gets narrower and narrower, which makes your perception of its size decrease.
Ahem, we are talking about ARTIFICIAL intelligence, not ACTUAL intelligence!
Offline
AmoebaMan wrote:
G-factor = God. Humans are much more complex than trillions of little "if"s.
And anyway, it probably would not be good to create a machine with intelligence parallel to that of a human mind. If you have ever read Footprints of God or watched The Matrix, you probably know what I mean.
I haven't read Footprints of God, nor do I want to. I've seen The Matrix, though. The machines in it are truly AI.
AI in Scratch means some kind of robot.

Offline
If you read my previous posts, you will see that artificial intelligence is, as far as anyone in this thread is concerned, possible, but not for Scratch.
Offline
There is a robot called Cog. That, I think, has the best AI ever created. It was actually made before there were computers. It uses chemicals that are extremely similar to real brain cells. It learns just like a real human.
Last edited by calebxy (2010-05-26 04:59:10)
Offline
Cog was developed at MIT, and I doubt any robots were designed before there were computers, as they all rely on computing to function. It does not use chemicals that I know of. You can read about Cog here.
Offline
The reason Cog uses artificial brain cells instead of a computer is that computers weren't good enough or small enough for a robot.
Offline
Soz I haven't been on,
but I took a look at some of the posts.
BTW, if you want to talk to me about the AI or VI, just put jps in the post or send me a message.
(\ /) BTW+ about the religious stuff:
(='.'=)
I'm an atheist but I'm open
(")_(") BTW++ Jesus represents the sun
Get to me and I'll get to you
Last edited by jps (2010-06-11 21:10:09)
Offline
To Scratchreallyrocks:
I think it is impossible to develop a perfect AI through most scripting systems,
because Darwin's theory of evolution describes everything changing, including the way people think, although that's more environmental rather than natural.
Offline
To likegames:
I'm not a 100% master of programming, so I'll speak from what experience I've had so far.
A CBR will only work if you switch between scripts, and I'm making a game that involves speed at the moment;
switching doesn't work in a straightforward way.
But I got a VI to work a week or two ago, using a sprite for every turn.
The turns are on the sprite, but if a player goes through a corner, the setup/variables of the sprites change, leading the bot to the player. It took a week for a simple 8-shaped stage to work to 90% efficiency.
(;
Offline
jps wrote:
To Scratchreallyrocks:
I think it is impossible to develop a perfect AI through most scripting systems,
because Darwin's theory of evolution describes everything changing, including the way people think, although that's more environmental rather than natural.
You can quote people again, you know. And you could program into it everything it needs to know to start out with, then make it learn. Read my earlier posts to see how this works.
Offline
Interesting that MIT-AI is now promoting the idea that physical embodiment is necessary for an AI. When Bert Dreyfus visited MIT-AI to explain that to them about 25 years ago (I forget the exact date and I'm too lazy to search) he got shouted down (sadly, mainly by my otherwise-hero Gerry Sussman).
At least you're no longer arguing about binary. But there's still a bit of analog vs. digital in the discussion. That doesn't matter. We may not know very much about how humans work, but both digital and analog computation are very well understood and they are entirely equivalent in their ability to express a computation. (And no, "computation" doesn't mean just about numbers.)
We know more about human information processing than we did a few years ago, however. For example, we know a fair amount about the way in which visual information is processed in the optic nerve, before it gets to the brain. When I was in college people mostly thought that there was a sort of pixel-by-pixel image of the retina inside the brain, but it turns out that the pixels aren't what gets to the brain; the optic nerve has the circuitry to do various kinds of feature detection, e.g., vertical or horizontal lines fire particular neurons in the brain. This comes from a combination of MRI studies of humans and (ugh) open-brain studies of animals.
There's nothing in there but neurons, basically, and neurons are kind of in between digital and analog; they work with analog voltages, but a neuron makes a yes-or-no decision; either it fires or it doesn't. They are straightforwardly simulable by digital computers, and much fruitful research has been done that way.
In almost every way, computers have a huge edge over humans. Electronic switching happens in nanoseconds; neural switching is more like tenths of a second. Computers have gigabyte memories on tiny chips; people famously have a short-term memory capacity of 7 items and even our long-term memory doesn't come close. But human beings are massively parallel in ways that we are only beginning to understand and can't yet simulate. It's parallelism where you champions of humanity over computers should be arguing.
But we are starting to have massively parallel electronic hardware. My colleagues who work on this topic say that in a decade you'll have 10,000 processors on your desk. What we don't have is much know-how at programming with massive parallelism. We don't have a clue about the human ability to take in the important feature of a complicated scene instantly. And it's not just vision at work; visual processing interacts in complicated ways with higher level intellectual reasoning. So, one of the great pieces of cognitive science research involved putting chess pieces on a board in different positions and showing them to people, then later asking the people to draw the board. Non chess players were hopeless at this, of course. What's really interesting is that chess masters could do it with amazing accuracy -- provided that the position came from an actual chess game. If the pieces were placed on the board randomly, the chess masters were just as bad as the rest of us.
Anyway, the point is, although the humans-beat-computers side of the debate is right that we don't understand much about how people think, we understand a lot more than we once did. I think we'll get there eventually. Not in my lifetime, but perhaps in yours. Or maybe it'll be your great-grandchildren who see human-level AI. But intelligence isn't magic, I'm pretty sure.
PS If there is a god, which I doubt, then she gave people intelligence by building an intelligent mechanism, not by extramechanical magic. Imho.
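The point above, that a neuron "either fires or it doesn't" and is straightforwardly simulable by a digital computer, can be illustrated with a classic threshold unit (a McCulloch-Pitts-style sketch in Python; the weights and threshold are invented for illustration):

```python
def neuron(inputs, weights, threshold):
    """Sum analog-valued inputs through weights, but make a
    digital yes-or-no decision: fire (1) or don't (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Such a unit wired as an AND gate: it fires only when both inputs fire,
# showing that the analog accumulation still yields a binary output.
def and_gate(a, b):
    return neuron([a, b], [1.0, 1.0], threshold=1.5)
```

The interesting part of the brain is not this unit but the massive parallelism of billions of them, which is exactly where the post says the real gap lies.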
Offline
bharvey wrote:
... Anyway, the point is, although the humans-beat-computers side of the debate is right that we don't understand much about how people think, we understand a lot more than we once did. I think we'll get there eventually... But intelligence isn't magic, I'm pretty sure.
Wow, that pretty much sums up the whole thing.
Offline
Well, we are going a lot OFF-TOPIC. BTW, what we need is simply a mechanism that will automatically take in data, process it and use it to reprogram itself. That will be the true AI. This may not be possible in Scratch, but maybe in some distant new programming language which will revolutionize the world...
It will surely need a lot of inspiring brains to do it.
The solution to the real problem (AI to take the shortest route)... well, I'm trying to solve it.
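"Take in data, process it and use it to reprogram itself" is, in miniature, what any learning loop does. A hedged sketch (hypothetical Python, standing in for whatever future language the poster imagines):

```python
# A machine that adjusts its own behaviour from incoming data:
# it keeps a stored estimate and "reprograms" it after each observation.
class Learner:
    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def observe(self, value):
        # Fold the new data point into the stored behaviour
        # (incremental running mean).
        self.count += 1
        self.estimate += (value - self.estimate) / self.count

    def predict(self):
        return self.estimate
```

After observing 2, 4 and 6 the learner predicts 4.0; nothing in its code listed that answer in advance, which is the small kernel of the "reprogram itself" idea.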
Offline
Yes, Scratch most certainly isn't the ideal language for AI. What Subh said above is correct, as his statement "automatically take in data, process it and use it to reprogram itself" can be applied to the human learning method (especially if you are a fan of the behavioural learning model).
I don't think that embodiment is necessary for AI, as bharvey suggested, in the sense that a computer can have a virtually embodied self inside its own program, but I do think that in order for a computer to be considered intelligent by many people it would have to be able to deal with the real world, in which case embodiment IS needed.
@bharvey: I just did that study for my psychology exam and I think the results were 7±2 (between 5 and 9, but with the majority at 7).
What I love about AI is the wonderful mix of biology, psychology, electronics and programming you encounter.
I was thinking that a lovely way of showing how a certain level of image processing could be achieved in Scratch (and a nice challenge) would be to create a project that lets a user hand-draw a shape on the screen and gets the project to recognise it.
This wouldn't be intelligent if you preprogrammed the specifications for each shape (such as four sides of equal length with 90° angles = square), but if the project analysed the shape, was able to ask what the shape was, and then stored that knowledge and MADE COMPARISONS, so that the square you draw next time is not in the same XY position or the same size, THEN you have started to fulfil the definition of AI set out by Subh above. It's not generally intelligent, but it is a learning machine. Let me know if anyone tries this; I might have a go at the project myself.
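A rough outline of that learn-a-shape idea, in Python rather than Scratch (all names hypothetical): describe each drawn shape by position- and size-independent features, ask for a label when the shape is new, and compare against stored examples afterwards.

```python
# Features chosen to be independent of where and how big the shape is:
# the number of sides, and the ratio of longest to shortest side.
known_shapes = {}  # features -> label, built up by asking the user

def features(side_lengths):
    return (len(side_lengths),
            round(max(side_lengths) / min(side_lengths), 1))

def identify(side_lengths, ask):
    """Return a label for the drawn shape; if its features are new,
    call ask() for the name and remember it for next time."""
    f = features(side_lengths)
    if f not in known_shapes:
        known_shapes[f] = ask()  # e.g. input("What shape is this? ")
    return known_shapes[f]
```

Teach it a small square once, and a bigger square drawn somewhere else is recognised without asking, which is exactly the MAKE COMPARISONS step described above.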
Offline
sparks wrote:
... if the project analysed the shape, was able to ask what the shape was, and then stored that knowledge and MADE COMPARISONS, so that the square you draw next time is not in the same XY position or the same size, THEN you have started to fulfil the definition of AI set out by Subh above. It's not generally intelligent, but it is a learning machine. Let me know if anyone tries this; I might have a go at the project myself.
Hmmmmmmmmm... very interesting.
We can certainly use lists to store info and make it so that it never deletes the list.
Also, we need a query-answer system where, if I say it's a rhombus and then say it's a square, it will ask me which one is right.
We can program it, but it will include lots of variables. Good. Let's see.
If anyone's willing to try such a program, I'll definitely help. Just leave a comment on any of my projects.
I myself will try it.
P.S. We also need a system in the program to rectify human errors (if I draw a square and make one of the angles 89 degrees, the program will still recognise a square). This can be done by an approximation method:
[blocks]
<if> <( <abs( (( <{ angle }> <-> 90 )) )> <<> 3 )>
<set{ angle }to( 90 )
[/blocks]
Things like the one above.
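The same approximation in Python terms (a hypothetical equivalent of the blocks above, not part of the Scratch project):

```python
def snap_angle(angle, target=90, tolerance=3):
    """Treat near-misses as the intended angle, so a hand-drawn
    89-degree corner is read as the 90 degrees the user meant."""
    return target if abs(angle - target) < tolerance else angle
```

`snap_angle(89)` gives 90, while `snap_angle(85)` is left alone because it falls outside the tolerance.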
BTW, I'm trying to make a program (work already started) which allows you to draw anything, and then the program finds out the area and the perimeter
(PIXEL BY PIXEL, using the "if touching colour" blocks...)
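On a grid of filled pixels, that pixel-by-pixel idea might look like this (hypothetical Python; Scratch's touching-colour check is replaced by a 2D boolean grid):

```python
def area_and_perimeter(grid):
    """Area = number of filled cells; perimeter = number of filled-cell
    edges that face an empty cell or the border of the grid."""
    rows, cols = len(grid), len(grid[0])
    area = perimeter = 0
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            area += 1
            # Check the four neighbours of this filled pixel.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not grid[nr][nc]:
                    perimeter += 1
    return area, perimeter
```

A 2x2 filled block gives area 4 and perimeter 8, matching what you would measure on paper.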
Offline
After taking a look at the Cog thing, I'd say that it's quite cool, but I personally think Asimo is more advanced, since it has legs and hands, plus it doesn't take 3 hours to teach it something new.
Offline
I made a similar project on Panther. It will teach itself new objects when you press space. It also remembers the object for future scanning and can tell you the object in different light as well, so...
Offline
Offline
Will do
Here: http://www.mediafire.com/?mrzymitiont
Press Space. DO NOT PRESS THE FLAG!!!! If you press the flag, a different feature will start!
Last edited by johnnydean1 (2010-06-12 09:15:34)
Offline
When? Just comment the link on one of my projects.
Offline