Alright, I've started to implement a small codebase for Blockset, something that will be roughly similar to Scratch and that could convert (non-Scratch) blocks to textual syntax and vice versa, somewhat like Etoys. When it's finished, you could ignore either the block half of the language or the textual half and still get 100% out of it. Right now, it's being built in Ruby. If you want to help me in any way, go right ahead! Or you could sit around and wait for this to be finished, which it might not ever be.
This is just for fun; I'm not planning on making it big and popular. But other people could, if they wanted to.
Also, the beginnings of Blockset are currently here.
Thanks!
BASIC PROGRESS:
BLOCK EDITOR IDE
-- 0%
EVALUATOR
-- basically works (missing code for some sort of self, though)
PARSER
-- being implemented
TOKENIZER
-- done, but subject to major change
Last edited by Jwosty (2012-04-16 19:19:24)
Google it.
Offline
trinary wrote:
Sounds very interesting!
What language are you thinking of having it output?
(I don't know Ruby, though.)
I don't really know; I might actually just implement a simple one. Or maybe it could be Ruby, but that would be tough to translate to blocks in some cases...
As for not knowing Ruby, Why's (Poignant) Guide will fix that. You'd probably like it.
Last edited by Jwosty (2012-03-31 18:06:34)
From what you're saying, would it not be Scratch-compatible? I.e., it wouldn't convert from text syntax to Scratch projects or vice versa. Because if it *did*, that'd be cool.
Offline
More "full"? How do you mean -- what would you change? (:
I've been thinking about building a converter to C#, but not any time soon, too busy, sorry!
I think you may need a modified version of Scratch. I've been thinking I could create a mod that outputs a list of sprite names to a text file and exports all costumes and backgrounds, since just the project summary won't be enough.
Either C# (for the app store) or Java. I'd definitely use this, although it wouldn't be able to fix the ridiculously lousy graphics.
blob8108 wrote:
More "full"? How do you mean -- what would you change? (:
Its syntax would be as consistent as I can get it, like Lisp, so there would be only one type/shape of block. Every block would always return something, even if it's some sort of nil value. All of this is so that it would be pretty simple to translate code in text form to code in block form. I'll make a mockup after I finish writing this post.
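For comparison, Ruby (the implementation language) already works this way: every construct is an expression that yields a value, falling back to nil when there's nothing meaningful to return. A tiny illustration:

```ruby
# In Ruby, as in the proposed Blockset design, everything evaluates to a
# value, even if that value is nil.
x = if false
      1
    end
p x                 # an if whose condition fails still returns a value: nil

y = puts("hello")   # even printing is an expression...
p y                 # ...it just returns nil
```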
sparks wrote:
I've been thinking about building a converter to C#, but not any time soon, too busy, sorry!
Oh, a converter from Scratch projects? Interesting! I've also been thinking about the same thing, but to (yeah, you guessed it) Ruby.
MrScoop wrote:
Either C# (for the app store) or Java. I'd definitely use this, although it wouldn't be able to fix the ridiculously lousy graphics.
Awesome, thanks for your opinion!
EDIT: Oh yeah, do you guys think it should be interpreted or compiled? And if compiled, just-in-time or possibly ahead-of-time?
EDIT AGAIN: Ugh, The GIMP is acting up right now (after I installed an XQuartz update).
Last edited by Jwosty (2012-04-02 19:33:23)
Here's an example of a method call and the Fibonacci sequence. Also, comments will be denoted with double dashes (--).
self.foo: 42 bar: 56
-- Calls the method foo:bar: with the params 42 and 56 on self (a keyword [!!!])
-- Note that this'll raise an error if you don't define the method
-- The classic Fibonacci sequence definition.
-- Also, no keywords to define methods!
self.defName: #fib -- #fib is a symbol. In some other languages, it would be :fib
params: {#n} -- An array of parameters. Arrays are defined with special syntax
body: [ -- The method body. Lambdas are defined with special syntax; can't escape that
(n < 2).ifTrue: [n]
ifFalse: [(self.fib: n-1) + (self.fib: n-2)]
]
self.printLine: self.fib: 8
-- Results in 21
Last edited by Jwosty (2012-04-25 18:16:43)
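For anyone more comfortable reading Ruby than the mockup syntax, here's a plain-Ruby sketch of the same definition (my own hand translation, not anything generated by Blockset):

```ruby
# The Fibonacci definition from the mockup above, translated by hand to Ruby.
# The base case mirrors (n < 2).ifTrue: [n], and the recursive case mirrors
# ifFalse: [(self.fib: n-1) + (self.fib: n-2)].
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

puts fib(8)  # => 21, matching the mockup's result
```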
There's a little bit of a codebase beginning to emerge; I'll create a GitHub repo soon.
EDIT: Oh also, I decided to call it "Blockset"; your "typeset" is blocks (well, half the time). Please post your opinion on this! If you can think of a better name, which you guys probably can, I might change it!
Last edited by Jwosty (2012-04-04 08:50:11)
slinger wrote:
Sounds totally awesome. I'd use it
Awesome! The text half is beginning to take shape. And I'll write the IDE in Blockset itself. That should be interesting... xD
nathanprocks wrote:
Does the tokenizer compile the program with a runtime engine or something?
Short answer: no.
What it actually does:
A tokenizer (also called a lexical analyzer, or lexer for short) splits a stream of input characters into meaningful chunks, called tokens, based on how you classify them. For example, take this small Blockset program:
self.printLine: "Hello world!";
The tokenizer would split it into the following tokens (type first, value second):
keyw      self
special   .
m_chunk   printLine
special   :
str       "Hello world!"
special   ;
These tokens can be more easily parsed by a parser; one doesn't write a parser completely at the character level (that would be far too complex).
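To make the splitting-and-classifying idea concrete, here's a minimal hand-rolled tokenizer sketch using Ruby's standard-library StringScanner. This is just an illustration of the technique (with token names matching the output above), not the actual Blockset lexer, which uses a library:

```ruby
require 'strscan'

# A hand-rolled sketch of a tokenizer: scan the input left to right, trying
# each character class in turn, and emit [type, value] pairs.
def tokenize(source)
  scanner = StringScanner.new(source)
  tokens  = []
  until scanner.eos?
    if scanner.scan(/\s+/) || scanner.scan(/--.*/)
      next                                   # skip whitespace and -- comments
    elsif (text = scanner.scan(/"[^"]*"/))
      tokens << [:str, text]                 # string literal
    elsif (text = scanner.scan(/self\b/))
      tokens << [:keyw, text]                # the `self` keyword
    elsif (text = scanner.scan(/[A-Za-z_]\w*/))
      tokens << [:m_chunk, text]             # a chunk of a method name
    elsif (text = scanner.scan(/[.:;]/))
      tokens << [:special, text]             # punctuation
    else
      raise "unexpected character: #{scanner.peek(1).inspect}"
    end
  end
  tokens
end

p tokenize('self.printLine: "Hello world!";')
```

Running it on the example line produces the same token stream shown above: keyw/self, special/., m_chunk/printLine, special/:, str/"Hello world!", special/;.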
For anyone who wants to know how I built the lexer: I use a library, and this is a bit of my (Ruby) code:
Lexer = Lexr.that {
  #          |||
  # Regexp   vvv
  ignores /\s+/ => :whitespace
  ignores /--.*/ => :comment # Blockset comments begin with double dashes, can contain any character, and run until the end of the line.
  matches /"[^"]*"/ => :string
  matches "self" => :key
}
The rest of it is available at the link I provided to GitHub.
Last edited by Jwosty (2012-04-18 09:06:54)
Jwosty wrote:
nathanprocks wrote:
Does the tokenizer compile the program with a runtime engine or something?
Short answer: no.
Oh, I ask because I used a programming language before that compiles programs into a .tkn file, so you can give it to people with the runtime engine.