This is a read-only archive of the old Scratch 1.x Forums.

#6726 2013-02-02 21:45:40

MathWizz
Scratcher
Registered: 2009-08-31
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Just someone's PC...? 0.o


http://block.site90.net/scratch.mit/text.php?size=30&text=%20A%20signature!&color=333333

Offline

 

#6727 2013-02-02 22:00:27

nathanprocks
Scratcher
Registered: 2011-04-14
Posts: 1000+

Re: BYOB 3 - Discussion Thread

MathWizz wrote:

Just someone's PC...? 0.o

Not just someone's PC. It has a name. The computer name is "james-pc".


http://carrot.cassiedragonandfriends.org/Scratch_Signature/randomsig.php
http://trinary.site40.net/images/scratchrank.php?username=nathanprocks&display=small

Offline

 

#6728 2013-02-02 22:17:44

Hardmath123
Scratcher
Registered: 2010-02-19
Posts: 1000+

Re: BYOB 3 - Discussion Thread

nathanprocks wrote:

Someone's PC always appears in Finder on my MacBook when I'm in my room. I was bored last night, so I was looking through this person's shared files and found a desktop shortcut to BYOB.  tongue

Yeah! Eternal fame!

@bharvey: I woke up this morning and checked my email on my iPad. Somehow I ended up on the Haskell site (link to some math stuff -> someone's math blog -> Haskell code to find Mersenne primes). Anyway, I figured I really didn't like it. There were way too many symbols being used, and it was too strict with types. I did find the pattern-matching stuff cool (kind of like Prolog), but it was just too dense.

What do you think? Should languages be verbose but easy-to-read, or compact but stuffed with symbols? I was leaning towards verbose in my hypothetical language (ideally I'd only have brackets, spaces, newlines, tabspaces, and JSON as special characters, but that's probably not going to happen).

Also, I'm not sure I understand "lazy" evaluation. If you don't evaluate anything until you need it, then you need to remember a chain of transformations on every piece of data you have, and do them really quickly as soon as I call "print". That seems a bit odd; if I were writing a ray tracer, would it do nothing until I called the writeToFile function, and then suddenly backtrack through the layers until it got to the actual ray tracing?


Hardmaths-MacBook-Pro:~ Hardmath$ sudo make $(whoami) a sandwich

Offline

 

#6729 2013-02-02 23:30:40

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Hardmath123 wrote:

What do you think? Should languages be verbose but easy-to-read, or compact but stuffed with symbols?

Well, God's language, Scheme, is sort of in between.  It doesn't have crazy notations like = and == meaning two unrelated things, but it doesn't have noise words like the "steps" in "move 10 steps" either.  The important thing about the notation isn't how verbose it is, I think, but rather the fact that the notation makes it trivially self-reflective; a Scheme program is a Scheme list.

In my youth I was very partial to APL, possibly the world's least verbose language, but what made it a great language was the way scalar operators automatically generalized to operate on arrays, so you didn't need separate hofs like map.  The terse notation wasn't for the sake of saving keystrokes, but rather to mimic the chalkboard (whiteboards hadn't been invented yet) notation of mathematicians.

So I guess I'm going with "both can be good or bad; the language should have an idea behind it and the details should follow from the idea."  Scheme (well, Lisp) programs are self-reflective because John McCarthy wanted to do theorem-proving about programs.  Everything follows from that.

Also, I'm not sure I understand "lazy" evaluation. If you don't evaluate anything until you need it, then you need to remember a chain of transformations on every piece of data you have, and do them really quickly as soon as I call "print". That seems a bit odd; if I were writing a ray tracer, would it do nothing until I called the writeToFile function, and then suddenly backtrack through the layers until it got to the actual ray tracing?

I'm not a graphics person and I have no idea whether ray tracing is best expressed as a composition of functions.  But let's say for the sake of argument that it is.  The starting point of your computation is a list of points? of line segments? something.  And in a non-functional style you'd iterate through the list, handling one element at a time.  Lazy evaluation would let you write your program as if you had a super parallel computer that could operate on the entire list at once, so your program would be conceptually very simple.  And then, when you went to write out the results, yeah, each element of the result list would be generated as needed.

Let me take an example I do understand, a little, namely audio signal processing.  The canonical simple example is that you want to digitize your old vinyl records and you want to get rid of the annoying clicks that come from little scratches on the surface.  Well, we know from information theory that you can't do that perfectly; once you get noise mixed into a signal, you can't get it out again.  But, to a first approximation, a click is a very sudden change in amplitude, and if we differentiate the signal, it gets even more sudden and huge (in principle, infinite).  So here's the click filter:

input signal -> differentiate -> clip -> integrate -> output de-clicked signal

Well, "differentiate" and "clip" and "integrate" are easy to define for lists of sample amplitudes.  Here's differentiate:

Code:

(define (differentiate signal)
   (/ (map - (all-but-first-of signal) signal)
       time-between-samples))

or in other words (Δ signal)/(Δ time).

Now, what makes this so simple is that "signal" isn't a single sample; it's all the samples as one data structure.  And I haven't worried about indexing into the list, or reaching the end of the list.  Lazy evaluation lets me get away with this; I write the code as if the whole signal gets differentiated, and then it all gets clipped, and then it all gets integrated.  But what really happens is that as my digital-to-analog converter wants sample values, each (pair of) sample(s) get(s) subtracted, clipped, added.
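
Just to make the laziness concrete, here is a rough Python sketch of the same pipeline, with generators standing in for lazy lists.  (The function names and the sample spacing DT are made up for illustration; this is not how Snap! or any real audio library does it.)

Code:

from itertools import tee

DT = 1.0 / 44100.0      # hypothetical time between samples

def differentiate(signal):
    now, later = tee(signal)
    next(later, None)                              # "all but first of" signal
    return ((b - a) / DT for a, b in zip(now, later))

def clip(signal, limit=1000.0):
    return (max(-limit, min(limit, s)) for s in signal)

def integrate(signal):
    total = 0.0
    for s in signal:
        total += s * DT
        yield total

# declicked = integrate(clip(differentiate(samples)))
# Nothing is computed yet; samples are pulled through the whole chain,
# one at a time, only when a consumer asks for output values.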

You can view the march of computer science through the years as a greater and greater distancing between how the programmer expresses an algorithm and what's happening down in the computer circuitry.  Part of that, a big part, is freeing programmers from the tyranny of a sequence of events.  With lazy evaluation you can write in terms of an abstract sequence of events, while the actual sequence of events is quite different (and more complicated).

The cost of lazy evaluation is that it only works in a purely functional computation.  Once you do things like assign values to variables, the actual sequence of events matters and you're stuck with it.  This is why multicore programming (in which there is no determinate sequence of events) has led real world programmers to get interested in functional programming, formerly viewed as a toy of boffins.  (Sorry, I've just finished a biography of Turing.)

The example I use as a starting point with my SICP students is determining whether a number is prime.  Definition: A prime is a number with no factors in [2, n-1].  So, the most straightforward way to turn that into a program is to generate the list of numbers [2, n-1], and then filter for factors of n, and then ask whether the result is empty:

Code:

(define (prime? n)
  (null? (filter (lambda (x) (divides? x n))
                 (range 2 (- n 1)))))

But if you really write it this way, and then ask whether 1,000,000 is prime, you first have to generate a list of length 999,998, then test all those numbers for being factors of n, then when you get the list of factors you don't even look at the values, just ask whether it's empty or not.  And so, before lazy evaluation, programmers instead wrote hideous loops with index variables.  Lazy evaluation lets the program above work efficiently for (prime? 1000000), because the call to RANGE returns a promise, the call to FILTER goes as far as the first factor of 1000000, namely 2, and then NULL? immediately returns false without cashing in the rest of the promise.
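
If Python generators are a fair stand-in for promises, the same short-circuiting looks roughly like this (illustrative names only, not anybody's library):

Code:

def lazy_range(lo, hi):
    n = lo                        # produces lo, lo+1, ..., hi only as requested
    while n <= hi:
        yield n
        n += 1

def prime(n):
    # the generator expression is the lazy FILTER; nothing runs until next() asks
    factors = (x for x in lazy_range(2, n - 1) if n % x == 0)
    return next(factors, None) is None            # the NULL? test

print(prime(1000000))    # finds the factor 2 right away and prints False
print(prime(101))        # has to scan the whole range before printing True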

Does that help?

PS:  I don't much like Haskell, but that might be because of the disorganization of the person teaching the workshop where I was supposed to learn it.  But I don't like strongly typed languages in general.  Ones that infer types aren't as awful as Javaesque ones in which you have to declare everything, but it's only the existence of type-valued variables that makes Haskell livable at all.  (Otherwise you'd have to write a separate MAP function for every type of list item, and couldn't handle heterogeneous lists at all.)

But I grew up at the MIT AI Lab, where I was taught to speak with scorn about "protecting the user from himself."  Not only declaration-free languages, but a protection-free filesystem in our homebrew OS.  (No protecting you against other users either!)  This philosophy works great on a project in which your entire programming staff consists of Jens.  But I can see how if you're Microsoft and you're trying to get 5000 programmers to work on a project together, you're in trouble because there probably aren't 5000 great programmers altogether, and none of them want to work at Microsoft anyway, so you feel like you need a language for bad programmers that will (fat chance) stop them from making mistakes.  smile

Last edited by bharvey (2013-02-03 00:04:24)


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6730 2013-02-03 07:15:39

Hardmath123
Scratcher
Registered: 2010-02-19
Posts: 1000+

Re: BYOB 3 - Discussion Thread

bharvey wrote:

Hardmath123 wrote:

What do you think? Should languages be verbose but easy-to-read, or compact but stuffed with symbols?

It doesn't have crazy notations like = and == meaning two unrelated things

I know, right?!  tongue

In my youth I was very partial to APL, possibly the world's least verbose language, but what made it a great language was the way scalar operators automatically generalized to operate on arrays, so you didn't need separate hofs like map.  The terse notation wasn't for the sake of saving keystrokes, but rather to mimic the chalkboard (whiteboards hadn't been invented yet) notation of mathematicians.

Is that the stuff I quoted, that looked like I opened a BYOB project in TextEdit?

So I guess I'm going with "both can be good or bad; the language should have an idea behind it and the details should follow from the idea."  Scheme (well, Lisp) programs are self-reflective because John McCarthy wanted to do theorem-proving about programs.  Everything follows from that.

Oh. I have a pretty simple syntax in mind that is like Scheme with whitespace to reduce bracketiness.

Also, I'm not sure I understand "lazy" evaluation. If you don't evaluate anything until you need it, then you need to remember a chain of transformations on every piece of data you have, and do them really quickly as soon as I call "print". That seems a bit odd; if I were writing a ray tracer, would it do nothing until I called the writeToFile function, and then suddenly backtrack through the layers until it got to the actual ray tracing?

I'm not a graphics person and I have no idea whether ray tracing is best expressed as a composition of functions.  But let's say for the sake of argument that it is.  The starting point of your computation is a list of points? of line segments? something.  And in a non-functional style you'd iterate through the list, handling one element at a time.  Lazy evaluation would let you write your program as if you had a super parallel computer that could operate on the entire list at once, so your program would be conceptually very simple.  And then, when you went to write out the results, yeah, each element of the result list would be generated as needed.

Well, in raytracing you basically need to iterate over a set of angles at which rays are shot and simulated. So it seems counter-intuitive that when I ask for a ray to be simulated, it isn't actually simulated until I tell it to produce an image file.  hmm

Let me take an example I do understand, a little, namely audio signal processing.  The canonical simple example is that you want to digitize your old vinyl records and you want to get rid of the annoying clicks that come from little scratches on the surface.  Well, we know from information theory that you can't do that perfectly; once you get noise mixed into a signal, you can't get it out again.  But, to a first approximation, a click is a very sudden change in amplitude, and if we differentiate the signal, it gets even more sudden and huge (in principle, infinite).  So here's the click filter:

input signal -> differentiate -> clip -> integrate -> output de-clicked signal

Well, "differentiate" and "clip" and "integrate" are easy to define for lists of sample amplitudes.  Here's differentiate:

Code:

(define (differentiate signal)
   (/ (map - (all-but-first-of signal) signal)
       time-between-samples))

or in other words (Δ signal)/(Δ time).

Now, what makes this so simple is that "signal" isn't a single sample; it's all the samples as one data structure.  And I haven't worried about indexing into the list, or reaching the end of the list.  Lazy evaluation lets me get away with this; I write the code as if the whole signal gets differentiated, and then it all gets clipped, and then it all gets integrated.  But what really happens is that as my digital-to-analog converter wants sample values, each (pair of) sample(s) get(s) subtracted, clipped, added.

Ah. The dash after map (presumably a reference to the subtraction function) looks out of place, BTW; are there nicer ways to do that?

You can view the march of computer science through the years as a greater and greater distancing between how the programmer expresses an algorithm and what's happening down in the computer circuitry.  Part of that, a big part, is freeing programmers from the tyranny of a sequence of events.  With lazy evaluation you can write in terms of an abstract sequence of events, while the actual sequence of events is quite different (and more complicated).

The cost of lazy evaluation is that it only works in a purely functional computation.  Once you do things like assign values to variables, the actual sequence of events matters and you're stuck with it.  This is why multicore programming (in which there is no determinate sequence of events) has led real world programmers to get interested in functional programming, formerly viewed as a toy of boffins.  (Sorry, I've just finished a biography of Turing.)

The example I use as a starting point with my SICP students is determining whether a number is prime.  Definition: A prime is a number with no factors in [2, n-1].  So, the most straightforward way to turn that into a program is to generate the list of numbers [2, n-1], and then filter for factors of n, and then ask whether the result is empty:

Code:

(define (prime? n)
  (null? (filter (lambda (x) (divides? x n))
                 (range 2 (- n 1)))))

But if you really write it this way, and then ask whether 1,000,000 is prime, you first have to generate a list of length 999,998, then test all those numbers for being factors of n, then when you get the list of factors you don't even look at the values, just ask whether it's empty or not.  And so, before lazy evaluation, programmers instead wrote hideous loops with index variables.  Lazy evaluation lets the program above work efficiently for (prime? 1000000), because the call to RANGE returns a promise, the call to FILTER goes as far as the first factor of 1000000, namely 2, and then NULL? immediately returns false without cashing in the rest of the promise.

Does that help?

Oh. Kind of like being able to have infinite-length lists in Python without blowing up the computer? I never believed in those…  hmm

PS:  I don't much like Haskell, but that might be because of the disorganization of the person teaching the workshop where I was supposed to learn it.  But I don't like strongly typed languages in general.  Ones that infer types aren't as awful as Javaesque ones in which you have to declare everything, but it's only the existence of type-valued variables that makes Haskell livable at all.  (Otherwise you'd have to write a separate MAP function for every type of list item, and couldn't handle heterogeneous lists at all.)

How about JS? There's a very loose sense of type there, everything's just a dictionary which coincidentally has a set of properties and a constructor attribute.

you need a language for bad programmers that will (fat chance) stop them from making mistakes.  smile

Welcome to my school. Visual Basic. *groans with pain*

You know, we have a "computer science" test tomorrow? We're supposed to learn, among other things, 5 advantages of using MS Access, plus 5 properties, 3 methods, and 2 events each for a list of some 4 VB elements.  sad


Hardmaths-MacBook-Pro:~ Hardmath$ sudo make $(whoami) a sandwich

Offline

 

#6731 2013-02-03 12:24:19

Jens
Scratcher
Registered: 2007-06-04
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Ugh, why learn a programming language when you can always just google it? (Learning about ideas is something else, of course.) You have my sympathy. BASIC was okay when I was young (Dijkstra disagreed); Dan Ingalls even wrote the very first Smalltalk interpreter in it...


Jens Mönig

Offline

 

#6732 2013-02-03 13:33:09

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Hardmath123 wrote:

Is that the stuff I quoted, that looked like I opened a BYOB project in TextEdit?

Hmm, is that how you feel when you read a math book?  Consider that most people wouldn't have a clue how to pronounce

http://cs.berkeley.edu/~bh/expchain.png

(a randomly chosen example from Wikipedia), let alone what it means, but mathematicians would never get any work done if they had to say things like that in ascii/English all the time.

I have a pretty simple syntax in mind that is like Scheme with whitespace to reduce bracketiness.

I know that's in vogue right now, but I consider it one of the two seriously wrong design choices in Python.  (The other, of course, is its half-***ed lambda.)  Ironically it would have been okay back when I started working with computers, but today we have non-monospace fonts and indentation just isn't reliably rendered across different environments.  For example, in the Scheme code examples I used in the message to which you are replying, I didn't discover until after submitting that the spacing as rendered in a code box is different from the spacing in this box where I'm typing right now, so it came out looking wrong and I had to edit it, counting characters instead of making it look right as I was typing.  (Code boxes are displayed in monospace, but this editor isn't smart enough to use monospace while I'm entering the code!)

Well, in raytracing basically you need to iterate over a set of angles at which rays are shot and simulated. So it seems counter-intuitive that when I ask for a ray to be simulated, it's actually not until I tell it to produce an image file.  hmm

But you don't ask for a ray to be simulated.  You ask for them all to be simulated at once!  You can express the computation in terms of operations on a list-of-rays.

Maybe it'll help if, instead of thinking about writing to a file, you think about rendering on the screen.  In that case, the result for each ray is needed one at a time, so the computation happens the same way it would have in your iterative program; it's just that the program you write looks simpler.

Anyway, it's counterintuitive because it's not the programming style you first learned.  Programmers aren't going to be able to write really complicated programs until they develop intuitions that aren't about sequences of low-level hardware operations.  smile

Ah. The dash after map (presumably to reference the subtraction function) looks out-of-place, BTW, are there nicer ways to do that?

Sure, you can say (define subtract -) and then (map subtract ...).

Oh. Kind of like being able to have infinite-length lists in Python without blowing up the computer? I never believed in those…  hmm

Yes, exactly.  But, isn't it great how little mechanism you need to get those infinite-length lists?

http://cs.berkeley.edu/~bh/streams.png

The entire secret is making IN FRONT OF a special form by declaring its second input to be of Unevaluated type.  (It would look better if we had a "List (Unevaluated)" type so that the input slot would look like a List slot.  Maybe when things slow down we should make "unevaluated" one of the buttons at the bottom instead of a set of types, so you could have "Number (Unevaluated)" etc.)  And then to undo the non-evaluation, you put a CALL block in the lazy version of ALL BUT FIRST.

If you replace the primitive versions of those two blocks with these versions, then everything else (MAP, etc.) just works, unchanged, on infinite lists!

(In practice lazy lists are implemented slightly differently, for efficiency: The first time you CALL one of those promises you remember the result, so that you only compute each promise once even if you traverse the list again.)
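
In Python terms, the whole mechanism is roughly this (a sketch only; delay, lazy_cons, and friends are made-up names, not Snap! internals):

Code:

def delay(thunk):
    cache = []                        # a memoized promise: run thunk() on the
    def force():                      # first force, then reuse the cached result
        if not cache:
            cache.append(thunk())
        return cache[0]
    return force

def lazy_cons(first, rest_promise):   # the lazy IN FRONT OF
    return (first, rest_promise)

def lazy_rest(lst):                   # the lazy ALL BUT FIRST: CALL the promise
    return lst[1]()

def integers_from(n):                 # an "infinite list" of the integers from n
    return lazy_cons(n, delay(lambda: integers_from(n + 1)))

nums = integers_from(1)
print(nums[0], lazy_rest(nums)[0], lazy_rest(lazy_rest(nums))[0])   # 1 2 3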

How about JS? There's a very loose sense of type there, everything's just a dictionary which coincidentally has a set of properties and a constructor attribute.

Oh, there's nothing wrong with types.  Every language has types.  The issue is whether a type is associated with a value (good) or with a variable (bad).  In the former case, the language figures out the type of a value as it's computed, and an invisible type tag is attached to the value, and it can be assigned to any variable.

In Snap! we have a funny compromise in which formal parameters can have declared types, but mostly those are just advisory rather than enforced.

You know, we have a "computer science" test tomorrow? We're supposed to learn, among other things, 5 advantages of using MS Access and 5 properties and 3 methods and 2 events each of  a list of some 4 VB elements.  sad

Umm, why are you taking this course?  Easy A?  You should be teaching computer science at your school!


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6733 2013-02-03 18:24:29

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Jens wrote:

You have my sympathy. BASIC was okay when I was young (Dijkstra disagreed), Dan Ingalls even wrote the very first Smalltalk interpreter in it...

He has my sympathy!  Why didn't he use Interlisp (the PARC implementation of Lisp)?

PS:  Not that you don't have my sympathy, too, Hardmath.  smile

Last edited by bharvey (2013-02-03 20:14:11)


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6734 2013-02-03 20:40:11

shadow_7283
Scratcher
Registered: 2007-11-07
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Jens wrote:

Dijkstra disagreed

Kudos to him for making my science fair project possible.

Offline

 

#6735 2013-02-03 21:51:40

OldCodger
New Scratcher
Registered: 2012-05-16
Posts: 54

Re: BYOB 3 - Discussion Thread

Hardmath123 wrote:

@bharvey: I woke up this morning and checked my email on my iPad. Somehow I ended up on the Haskell site (link to some math stuff -> someone's math blog -> Haskell code to find Mersenne primes). Anyway, I figured I really didn't like it. There were way too many symbols being used, and it was too strict with types. I did find the pattern-matching stuff cool (kind of like Prolog), but it was just too dense.

If you want an insight into Haskell, go to Microsoft's Channel 9 and check out the Haskell tag. Then watch the first of Erik Meijer's 13 lectures.

Offline

 

#6736 2013-02-03 22:27:39

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

shadow_7283 wrote:

Kudos to him for making my science fair project possible.

Okay, I'll bite.


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6737 2013-02-04 09:45:55

Hardmath123
Scratcher
Registered: 2010-02-19
Posts: 1000+

Re: BYOB 3 - Discussion Thread

@bharvey (I'm not going to quote you, it would make this post too long)

When I read a math book, it's important to me to see how the math is being done. Solutions that have no verbal accompaniment are often tedious to read. Though in the end equations are a concise way to represent an abstract idea (the product of the numbers from 1 to n, divided by the product of the numbers from 1 to k times the product of the numbers from 1 to n-k, means n!/(k!(n-k)!), or just (n choose k)), it's usually easier to understand something when it is explained in natural language, probably because natural language has evolved over a couple of thousand years to suit our needs perfectly whereas math notation is designed to save ink and hand pain.

So the only problem with whitespace is that nobody really agrees on what it means? That's not too tough to solve: I can suggest adding a line on top of each file:

Code:

//! whitespace mode:4 spaces
//! whitespace mode:tab
//! whitespace mode:1 spaces

Yucky, but better than those cascades of brackets that make my eyes hurt. Besides, once my language takes over as the next JavaScript, there will be plenty of custom-built IDEs that do that stuff automatically.  tongue

I kind of see this whole lazy evaluation thing now, but I'm still not convinced it's better than an actual evaluation. It seems like how I felt about variable references back when I was a JS newbie: it shouldn't be there by default, otherwise it'll confuse people like me. I would have liked something nice like variable.reference() to explicitly make a reference, but clearly that's not happening. Similarly with lazy evaluation: imagine I've written a ray tracer. I then tell it to print "tracing", then trace the image, then print "done", then display the image. What may happen is that it prints "tracing" and "done", then painfully starts fulfilling its promise to trace the rays once I actually ask for a display (raytracing is a long, time-consuming process, so this is clearly wrong).

And the reason I'm taking our school's (excuse for) CS is because I'm still in middle school where the System declares I'm not old enough to pick what I want to learn. Seriously, I would be doing English Lit, advanced math and physics, chemistry and history had the world been a better place.

Actually, had the world been a better place, I would probably be taking a proper CS class (I'm relying on the upcoming Udacity class on HTML5 game development, and endlessly procrastinating about picking up SICP).

Any chance you'll still be in Berkeley by the time I'm in college? Could be an extra reason to apply.  tongue


Hardmaths-MacBook-Pro:~ Hardmath$ sudo make $(whoami) a sandwich

Offline

 

#6738 2013-02-04 11:16:47

blob8108
Scratcher
Registered: 2007-06-25
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Hardmath123 wrote:

Any chance you'll still be in Berkeley by the time I'm in college? Could be an extra reason to apply.  tongue

I would definitely have applied to MIT or Berkeley or someplace had it not been in America...  tongue


Things I've made: kurt | scratchblocks2 | this cake

Offline

 

#6739 2013-02-04 12:30:34

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Hardmath123 wrote:

@bharvey (I'm not going to quote you, it would make this post too long)

The trick is ruthless editing, leaving just enough for the reader to understand what the next paragraph is an answer to.

When I read a math book [...] Solutions that have no verbal accompaniment are often tedious to read.

Yes, and program documentation should, like math documentation, use natural language -- not a verbose programming language or (shudder) "pseudocode," but actual sentences and paragraphs that a non-programmer could read.  But the program itself, like the math itself, is best expressed in an unambiguous formal language, and sometimes it's most useful for that language to be terse.

Bear in mind that you spent years learning to program in a particular way and then I threw a snippet of APL code at you.  That's like throwing that equation I found on Wikipedia at someone who has no idea what an exponential function is.

Bear in mind, too, that my meta-point isn't to convince you that an APL-like notation is the best solution for your purposes.  Rather, it's that first you have to have a purpose, and then the notation should follow from that.  Ken Iverson, who invented APL, was a mathematician.  At first he wasn't even interested in implementing APL on a computer.  Adin Falkoff, who was sort of the Dan Ingalls of APL, did that.  Iverson's purpose was to enable mathematicians, hitherto mostly interested in unchanging, eternal truths, to talk usefully about algorithms, in which what's true changes over time.  He spent his latter years writing high school and college math textbooks using APL notation (and using computer exercises to help teach the math ideas).

When McCarthy invented Lisp, he did it using ASCII characters, because he did want to run it on a computer, but in pretty short order the MIT and Stanford AI labs had special keyboards with λ keys.

whereas math notation is designed to save ink and hand pain.

No, actually, that's not true, or at least not entirely true.  Math notation is designed to be unambiguous.  Natural language doesn't have words for many mathematical ideas, and when mathematicians recycle ordinary words for their ideas, the result is all sorts of confusion.  It took an unnecessary century or so for the idea of extending the domain of the square root function to negative numbers to catch on because someone, I forget who, was silly enough to use the word "imaginary" to name the resulting values, and even other mathematicians were influenced, probably unconsciously, by that into not really believing in them.  When I was a kid I found it incredibly amusing that no rational number is normal.

So the only problem with whitespace is that nobody really agrees on what it means?

Not what it means, what it looks like.  It's not a problem if you never let your code escape from an IDE or a programmer's editor like Emacs.  But if you ever let your code into the hands of an email client or something, that thinks line wrapping is okay, you're doomed.  And if you use software that works in a non-monospace font, you'll get the spacing wrong with your own hands, without the software making any egregious changes to it.  Just learn not to notice the parentheses.  Imho.

I kind of see this whole lazy evaluation thing now, but I'm still not convinced it's better than an actual evaluation.

As with the notation question, there's no universal right answer.  First of all, lazy evaluation makes sense only in the context of a purely functional computation.  You're envisioning your raytracing example with an algorithm that's full of state!  First do this, then add this value to that array, then... etc.  Doing that lazily wouldn't just be pointless; it would likely give the wrong answer.

So, to get the point of lazy evaluation, you have to teach yourself to have functional-programmer instincts.  Take my favorite example, finding the acronym for a phrase.  To me the only possible approach to that problem is

   combine with JOIN
      map FIRST LETTER
         keep IMPORTANT WORDS
            sentence->list (input)

but almost all programmers would immediately think

  for i = 1 to length(phrase)
     ...
     blah blah
     ...
     if phrase[i] == ' ' spaceflag=1
     ...
     and so on.

Once you instinctively think functionally, then you can take the small next step to noticing a slight inefficiency of functional programming and working out how to fix it, namely by memoized lazy evaluation.

And the reason I'm taking our school's (excuse for) CS is because I'm still in middle school where the System declares I'm not old enough to pick what I want to learn.

Ah.  I figured you were a little older than that.  But, listen, I was in middle school once (although it was called junior high school back then), and, although the System is awful, sometimes you luck out and get individual teachers who believe in kids, or at least in particular kids.  The reason I am who I am today is that I had an algebra teacher in 7th grade who, after I read the entire textbook the first weekend of school and then suddenly started being bored in class, noticed, gave me a final exam on the spot, and then handed me an Algebra II book and told me to stop coming to class and to bring it back when I was done.  That year he gave me about a dozen math books to read, including calculus; set theory, my favorite; and Courant and Robbins.

I don't know anything about your teachers, but you can try to talk them into treating you as the special case you are.

Any chance you'll still be in Berkeley by the time I'm in college? Could be an extra reason to apply.  tongue

Thank you!  But unless you skip a lot of grades in the meantime (which is another way to deal with stupefying school), by the time you get here I'll probably be so decrepit you'll have to remind me what an algorithm is.  smile

Seriously, I'll be around Berkeley, but this is my last semester of official teaching; I retire in June.  OTOH,  one of my planned projects for next year is to make an edX course about SICP.


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6740 2013-02-04 12:39:03

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

blob8108 wrote:

I would definitely have applied to MIT or Berkeley or someplace had it not been in America...  tongue

Hmm.  Hard to know how to respond to this without some context... The US is awful in most ways, but universities are one of the two things we're good at.  (The other is toilet paper.)  Of course, in the case of public universities like Berkeley this may not be true forever.  sad

P.S.  By contrast, you Brits are really good at high school, from what I understand.

Last edited by bharvey (2013-02-04 12:41:12)


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6741 2013-02-04 16:53:06

blob8108
Scratcher
Registered: 2007-06-25
Posts: 1000+

Re: BYOB 3 - Discussion Thread

bharvey wrote:

blob8108 wrote:

I would definitely have applied to MIT or Berkeley or someplace had it not been in America...  tongue

Hard to know how to respond to this without some context... The US is awful in most ways, but universities are one of the two things we're good at.

I'm sure you are! But whoops, I'm being unclear — I was referring more to the distance. America's quite far away.  tongue  And having to do SATs (or whatever tests one needs), as well as exams here, would have been too much effort (to put it bluntly).

But I got an offer from Cambridge, so I'm very happy.  smile   (Due in part, I'm sure, to all the Scratching/reading SICP/your comments...)

P.S.  By contrast, you Brits are really good at high school, from what I understand.

Really? I couldn't compare...  tongue


Things I've made: kurt | scratchblocks2 | this cake

Offline

 

#6742 2013-02-04 17:55:23

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

blob8108 wrote:

But I got an offer from Cambridge, so I'm very happy.  smile

Congrats!  Not too dusty.


I couldn't compare...  tongue

Well I only went to high school in one country, too.  But what people say is that US students coast through high school and then suddenly have to work hard at university, whereas in the UK you learn everything in high school and then spend your time at uni sipping sherry and rowing.  (I think this explains the otherwise confusing fact that in the US "baccalaureate" means four years of university but in Europe it seems to mean "a good secondary education."  Or at least that's what the IB people seem to think.)

Stereotype, oversimplified, yeah, I know.  smile

P.S.  I did visit Marlborough once.  The kids were so polite!  I figure either they put something in the water over there or else all the kids secretly torture animals after dark or something.  (I had a great time, though.)

Last edited by bharvey (2013-02-04 18:03:23)


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6743 2013-02-04 19:53:21

shadow_7283
Scratcher
Registered: 2007-11-07
Posts: 1000+

Re: BYOB 3 - Discussion Thread

bharvey wrote:

Okay, I'll bite.

Figured you might.  tongue
A* path-finding modification and dynamic heuristic calculation with applications to real-time traffic flow
A* is just Dijkstra's search algorithm with a heuristic slapped on. I kinda glossed over my project's details in the paper to help my teacher, but you can see a screenshot of the BYOB 3 simulation in there.  big_smile

EDIT: Completely unrelated, Brian, but what do you think of sig figs? Somehow in my computer experience I've never knowingly encountered them, so they're new to me. My dad says they're super important to low-level computations, but they bug me. Why would you try to make assumptions about the circumstances by which a number was obtained? I mean 50 === 50.0 checks, doesn't it? It feels like people are trying to apply rules to numbers when numbers are kind of rules in and of themselves.

Last edited by shadow_7283 (2013-02-04 20:15:38)

Offline

 

#6744 2013-02-04 20:10:19

Hardmath123
Scratcher
Registered: 2010-02-19
Posts: 1000+

Re: BYOB 3 - Discussion Thread

blob8108 wrote:

And having to do SATs (or whatever tests one needs), as well as exams here, would have been too much effort (to put it bluntly).

Agreed. Where I am, I had to spend a couple of months to get our school to host AMC (which is today, BTW, so wish me luck!).

@bharvey: Clearly I'm not that off then, 'cause I thought of it something like this:

Code:

"".join(map(lambda x:x[0], input.split(" ")))

Hardmaths-MacBook-Pro:~ Hardmath$ sudo make $(whoami) a sandwich

Offline

 

#6745 2013-02-04 20:50:49

Hardmath123
Scratcher
Registered: 2010-02-19
Posts: 1000+

Re: BYOB 3 - Discussion Thread

bharvey wrote:

Ah.  I figured you were a little older than that.

Is that a good thing or a bad thing?  tongue

But, listen, I was in middle school once (although it was called junior high school back then), and, although the System is awful, sometimes you luck out and get individual teachers who believe in kids, or at least in particular kids.  The reason I am who I am today is that I had an algebra teacher in 7th grade who, after I read the entire textbook the first weekend of school and then suddenly started being bored in class, noticed, gave me a final exam on the spot, and then handed me an Algebra II book and told me to stop coming to class and to bring it back when I was done.  That year he gave me about a dozen math books to read, including calculus; set theory, my favorite; and Courant and Robbins.

Yeah, back when I was in elementary school I had a great teacher—she's more than partly responsible for me actually liking school. I wish there were more like her.  sad

unless you skip a lot of grades in the meantime (which is another way to deal with stupefying school)

They did offer in 3rd grade, but I figured life's too short—enjoy it while you can.  smile

I love this forum thread—we do everything from riddles about centipedes to bashing the education system!  tongue


Hardmaths-MacBook-Pro:~ Hardmath$ sudo make $(whoami) a sandwich

Offline

 

#6746 2013-02-04 21:37:07

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Whoa!  This from someone who claims not to be able to understand lambda?  (Or am I misremembering?)

Is there, in fact, an available source of traffic light data?  And does "data" in this context mean how the light is facing right now, or the light's duty cycle, or what?

I hope you didn't find "syntax free" in anything I said!  It's not a big deal, except you happened to push one of my hot buttons.  There's no such thing as a syntax free language.  Scratch and its children try to be syntax error free, by not letting you construct the invalid program in the first place rather than catching the error when you run it.

Imho you're trying way too hard to use formal language, to the point where I found some paragraphs really hard to read.  (And then you say "plain old HTML..."  tongue )  In particular, it's okay to say "I" in a paper; that's much better than putting sentences in the passive voice, especially when you use an adjective such as "familiar."  Familiar to whom?  "I picked an algorithm I knew" is much better than circumlocutions.  The passive voice also makes it hard (e.g., for your teacher) to work out how much of this is original with you, especially since you are comparing (if I'm understanding correctly) three things: Dijkstra's algorithm, A*, and your project.

Sorry, I don't suppose you posted the link because you wanted textual criticism; I can't help it -- I'm a teacher.  smile

what do you think of sig figs?

I used to get in trouble by telling my students, "If you want a happy life, stick to writing programs about integers."  smile

There are several issues here, and I'm not sure which one(s) your dad has in mind.  The most important to me is about statistical honesty.  Let's say you measure some value and come up with 1234, give or take 10.  Now you square this number and get 1522756.  You can't just report that as if it's precise!  You have to say "give or take 100."  (Or rather, you should have reported the original number as 1230 and the square as 1522800 plus or minus etc.)

A totally different issue is to design computer hardware and math libraries so that the reported result really is the closest representable value to the mathematically correct answer.  This turns out to be very, very, very, very hard.  (My favorite cautionary tale about this is that for several years the official 32-bit Unix random number generator always gave even numbers!  Hard to imagine how nobody noticed for so long.  The reason was that they reused the code from the 16-bit PDP-11 library unchanged, but it contained constants that made sense only for a 16-bit range.)

The difficulty doesn't come only from significant figures; an even worse problem is poles of functions, such as tan π/2.  If you have an angle that's just slightly different from π/2, with a teeny weeny error in its measurement or its computer representation, and you take its tangent, you're quite likely to get a grossly wrong result.

Speaking of trig functions, most math libraries use angles in radians, but, as you know, Scratch and its children represent angles in degrees.  So, let's say you're trying to implement Scratch in, whatever, JS or Java or C or Lisp or anything.  Your user asks for sin of x degrees, and you compute sin of x*π/180 radians.  This works fine for small angles, it turns out, but if the angle is big, even 300 degrees I think (let alone 10000 degrees, which is perfectly legitimate), you're likely to get a slightly wrong answer.  The official correct algorithm for this problem is to get x degrees into the range [0, 45] and then take ±sin-or-cos(±y*π/180) where y is the [0, 45] value, depending on which octant the original angle was in.  (I know this only because William Kahan, the inventor of IEEE floating point, is a Berkeley prof, and when I was working on Berkeley Logo I asked him how to do it!)  The point of this example is that if computers could represent real numbers exactly, it wouldn't be a problem.
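
A back-of-the-envelope sketch of that octant reduction, in Python (just to show the shape of the idea; Kahan's actual recipe and real math libraries are far more careful):

Code:

import math

def sin_degrees(x):
    r = x % 360.0                       # wrap into [0, 360)
    octant = int(r // 45.0)             # which 45-degree slice, 0..7
    y = r - 45.0 * octant               # leftover within that slice
    z = y if octant % 2 == 0 else 45.0 - y
    use_sin = octant in (0, 3, 4, 7)    # otherwise use cos
    sign = 1.0 if octant < 4 else -1.0
    rad = math.radians(z)               # z is at most 45, so the conversion stays accurate
    return sign * (math.sin(rad) if use_sin else math.cos(rad))

print(sin_degrees(10000))    # compare with math.sin(math.radians(10000 % 360))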

I mean 50 === 50.0 checks, doesn't it?

Only because the floating point spec takes pains to ensure that every 32-bit integer fits completely into a 64-bit double-precision float.  If you use 32-bit floats, then you'll still get the answer you want for 50, but not for a number in the millions or billions.

Every integer is in principle representable in a computer if you have an infinite supply of bits (that is, using a finite but unbounded number of bits for any particular integer), but that's not true about real numbers.  (It's not true for rational numbers if you represent their decimal expansions; even 1/3 can't be exactly represented that way.  But you can represent any rational as a pair of integers, numerator and denominator.)  Do you know any set theory?  The insurmountable problem is that the number of real numbers is larger than the number of integers, way larger, and likewise larger than the number of bit patterns representable in an unbounded-but-finite number of bits.

So, there's no guarantee that, say, (1.0/3.0)*3 === 1, although I think that particular example probably works.  Well, even in the cases that work there's a lot of complexity going on.  Every possible floating point bit pattern represents a range of real values, and software works hard to find the simplest of those values (more or less, fewest printed characters, although that's not really how it's measured) as the canonical real value of that bit pattern.
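
A few concrete checks along these lines, in Python (which uses 64-bit doubles), purely as illustration:

Code:

print(50 == 50.0)                      # True: small integers are exact as doubles
print(float(2**53 + 1) == 2.0**53)     # True: 2**53 + 1 has no exact double, so
                                       # converting it rounds back down to 2**53
print((1.0 / 3.0) * 3 == 1.0)          # True here, but only by the grace of rounding
print(0.1 + 0.2 == 0.3)                # False: none of these is exactly representable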

So, yeah, your dad's right, once you start dabbling in non-integer values you really do have to worry about this business.  Stick to integers if you don't like it.  smile

EDIT:  PS.  APL tried to finesse this whole problem by letting you choose some small number and declaring that any two values less than that far apart would be considered equal.  I guess this proves that Iverson was an algebraist rather than an analyst; numerical analysis people sort of foam at the mouth when they talk about that design decision.  tongue

Last edited by bharvey (2013-02-05 01:50:51)


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6747 2013-02-04 22:25:47

bharvey
Scratcher
Registered: 2008-08-10
Posts: 1000+

Re: BYOB 3 - Discussion Thread

Hardmath123 wrote:

Is that a good thing or a bad thing?  tongue

Umm, well, we all have to go through all ages, so there's nothing good or bad about what age you are (although I do like teaching middle school kids), but I guess it's good that I overestimated your age based on how much you know!

EDIT:

riddles about centipedes

centimeters!

Last edited by bharvey (2013-02-05 01:42:16)


http://cs.berkeley.edu/~bh/sig5.png

Offline

 

#6748 2013-02-05 00:40:59

OldCodger
New Scratcher
Registered: 2012-05-16
Posts: 54

Re: BYOB 3 - Discussion Thread

bharvey wrote:

I used to get in trouble by telling my students, "If you want a happy life, stick to writing programs about integers."  smile

[...]

So, yeah, your dad's right, once you start dabbling in non-integer values you really do have to worry about this business.  Stick to integers if you don't like it.  smile

Forth traditionally uses fixed-point maths. Read Chapter 5 of Starting Forth, which you can find on the Internet. Charles Moore, who invented Forth, has many interesting, albeit somewhat extreme, views about programming which are well worth reading.
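
In case anyone wants to see what fixed-point arithmetic amounts to, here is a tiny Python sketch of the scaled-integer idea (the SCALE value and the names are arbitrary; it's roughly what Forth's */ word does with a double-width intermediate product):

Code:

SCALE = 10000                      # four implied decimal places

def to_fixed(x):                   # e.g. 3.1416 -> 31416
    return round(x * SCALE)

def fixed_mul(a, b):               # divide by SCALE to keep the implied point in place
    return (a * b) // SCALE

def to_float(a):
    return a / SCALE

pi = to_fixed(3.1416)
r = to_fixed(2.5)
area = fixed_mul(fixed_mul(pi, r), r)    # pi * r * r, all in integers
print(to_float(area))                    # 19.635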

Offline

 

#6749 2013-02-05 03:43:59

blob8108
Scratcher
Registered: 2007-06-25
Posts: 1000+

Re: BYOB 3 - Discussion Thread

bharvey wrote:

This turns out to be very, very, very, very hard.

And I was wondering just the other day why Snap!'s float calculations were off...  tongue

bharvey wrote:

combine with JOIN
      map FIRST LETTER
         keep IMPORTANT WORDS
            sentence->list (input)

Hardmath123 wrote:

I thought of it something like this:

Code:

"".join(map(lambda x:x[0], input.split(" ")))

Using "reduce", "map", "filter" were my first thought, too. Clearly I've gotten too much used to Python's "functional" functions.  tongue

@bharvey Can you give an example that doesn't use reduce/map/filter?


Things I've made: kurt | scratchblocks2 | this cake

Offline

 

#6750 2013-02-05 06:17:24

Hardmath123
Scratcher
Registered: 2010-02-19
Posts: 1000+

Re: BYOB 3 - Discussion Thread

You mean like this?

Code:

input = "All that is gold does not glitter, not all who wander are lost"
counter = 0
beforespace = True
result = ""
while counter < len(input):
    if beforespace:
        result = result+input[counter]
    if input[counter]==" ":
        beforespace = True
    else:
        beforespace = False
    counter = counter + 1
print result

Least Pythonic example I could create.


Hardmaths-MacBook-Pro:~ Hardmath$ sudo make $(whoami) a sandwich

Offline

 
