

colorForth antecedents / pre-parsing questions

Author: John Passanit

#1 / 69 colorForth antecedents / pre-parsing questions

I've been looking into colorForth more deeply these days. Like some of the

others here, I can't run Chuck's executable, but that's okay. I'm more

interested in the concepts behind colorForth than Chuck's implementation. I

expect that as time goes on, there will probably be folks who build

colorForth on top of Win32 or Linux platforms, using those interfaces

instead of writing directly to the hardware. Chuck might not like such an

idea for his work, but that's the value of "coloring outside the lines."

Just as Chuck doesn't feel bound to stay in the metaphorical lines imposed

by the industry, we shouldn't feel bound to stay within *his* lines.

Chuck's box of crayons is big, but I don't see my favorite colors-- so I'll

add them.

(Put it this way, if and when colorForth becomes such a total environment

that I no longer see Chuck posting messages using Outlook Express under some

flavor of Windows-- then I'll believe in the promise that colorForth holds.

But until then, the operating systems Chuck rails against provide too many

useful services and interfaces to ignore. That's a pragmatic, not an

aesthetic statement. colorForth riding on top of Win32 or Linux sounds like

an excellent start, providing programmers access to an interesting new

language, while still retaining the operating system services and interfaces

we all use. This isn't meant to start the classic debate between "skinny"

and "fat" Forth implementation; there is nothing in colorForth that

absolutely requires writing directly to the hardware other than Chuck's

personal philosophy.)

This message is about a particular aspect of colorForth-- the pre-parsing of

source code. I, as usual, have some comments and questions.

COMMENTS:

colorForth isn't the first language to pre-process source code so that

interpreting it becomes easier. Depending on your definition of

"pre-parsing," Lisp would probably be the first language to do this. But

even if you don't want to count Lisp, there are other languages that have

"tokenized" input so that the execution of subsequent code (interpreter or

compiler) is easier.

Want an example? Try Bill Gates. Look no farther than Microsoft's first

product-- their BASIC interpreter. The encoding schemes changed over time

as their BASIC matured, but the general format was something like this:

Each line started with a 16-bit integer representing the line number. This

was followed by the length of the line and an offset to the next line

number. Each line was composed of a series of tokens mixed with ASCII text.

That is, if you used the PRINT command, it would be stored not as those five

characters, but as a single-byte token. Depending on the version of BASIC,

variable names were stored not as their names, but as an offset into a table

of variables. Numeric constants were also typically stored not in their

character forms, but rather as the default numeric form (typically

single-precision floating point numbers).
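As a rough illustration of that layout, here is a minimal sketch in Python. The token values and the exact line header are invented for illustration; the real Microsoft formats varied between versions.

```python
# Sketch of a BASIC-style line tokenizer. Token values and the line
# header layout are invented for illustration only.
import struct

KEYWORDS = {"PRINT": 0x91, "GOTO": 0x89, "IF": 0x8B}  # hypothetical tokens

def tokenize_line(line_no, text):
    """Encode one line: 16-bit line number, length byte, tokens + ASCII."""
    body = bytearray()
    for word in text.split():
        if word in KEYWORDS:
            body.append(KEYWORDS[word])        # keyword -> one-byte token
        else:
            body.extend(word.encode("ascii"))  # everything else stays ASCII
        body.append(ord(" "))
    return struct.pack("<HB", line_no, len(body)) + bytes(body)

def detokenize(encoded):
    """LIST-style decoding: expand tokens back to readable text."""
    line_no, length = struct.unpack_from("<HB", encoded)
    names = {v: k for k, v in KEYWORDS.items()}
    text = "".join(names.get(b, chr(b)) for b in encoded[3:3 + length])
    return line_no, text.strip()
```

Tokenizing "PRINT HELLO" stores one byte for PRINT instead of five characters, and detokenize plays the role of LIST.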

Microsoft tokenized for a different reason than Chuck does. Their primary

motivation was to reduce size, but it also had the side effect of

making the BASIC interpreter faster.

making the BASIC interpreter faster as well.

And there is another similarity between Microsoft's BASICs and colorForth--

the editor. When you entered source code in Microsoft BASIC, it was

tokenized on the spot and stored internally in tokenized form. Well, that

form isn't editable, so just like colorForth has to "detokenize" source in

order to list it, so did Microsoft's BASICs. The "LIST" command simply

converted the tokenized source code into ASCII. The line-based editor that

Microsoft had in some of their BASICs also did this detokenization step.

Why mention this? In one of the videos on www.ultratechnology.com, Chuck

states he isn't aware of anyone else who pre-parses input (and presumably,

stores only the pre-parsed input). And, he claims pre-parsing as an

innovation of colorForth. I certainly believe the *style* of pre-parsing he

does is interesting and unusual, but pre-parsing itself is not innovative.

colorForth *is* innovative, but only when taken as a whole. (It kind of

reminds me of the modern version of the film "The Fly." In the film, actor

Jeff Goldblum states that the components he uses to build his machine are

from other people-- and in some cases, he doesn't fully understand how they

work-- his role is more of a systems integrator. The machine he makes is

innovative in the *synthesis* of the components. colorForth strikes me as

much the same. The central ideas in colorForth are ideas you can find

elsewhere in computer science. There really is nothing new with the ideas.

Rather, colorForth's strengths and innovation come from the way these ideas

are bound together.)

QUESTIONS:

It is intuitively obvious that pre-parsing source code makes the colorForth

compiler radically simpler than traditional Forth models. But what isn't

obvious-- at least to me-- is understanding the particular choices Chuck

made for his pre-parsed form.

I don't see the value behind pre-parsing source into 32-bit quantities.

Chuck uses four bits of each 32-bit word to encode "function" (number, text,

function, variable, etc.). The other 28 bits are used to represent the

value. Thus, in order to do anything with one of the pre-parsed chunks of

source code, he has to mask off the low four bits to determine the type, and

then shift around the value one or more times to operate on it. And this is

especially noticeable on text, which is encoded with a static Huffman table

geared (presumably) for English. In order for Chuck to deal with each

character of each 32-bit pre-parsed chunk, he has to call "unPACK". Why?
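A minimal sketch of that mask-and-shift handling, assuming a low-nibble tag (the tag numbers here are invented; colorForth's actual assignments differ):

```python
# Sketch of a tagged 32-bit cell: low 4 bits hold the type tag, the
# remaining 28 bits hold the value. Tag numbers are invented here.
TAG_EXEC_NUM, TAG_COMP_NUM, TAG_WORD = 2, 3, 4  # hypothetical tags

def pack_cell(tag, value):
    """Combine a 4-bit tag and a 28-bit value into one cell."""
    assert 0 <= tag < 16 and 0 <= value < (1 << 28)
    return (value << 4) | tag

def unpack_cell(cell):
    """Recover (tag, value): one mask plus one shift, as described above."""
    return cell & 0xF, cell >> 4
```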

This all strikes me as valuable if we were back in the 1970's when memory

was expensive and address spaces were small. And it may even be useful on

very small modern embedded systems. But does it make sense on a desktop

machine?

I usually go by the operating principle of "do the simplest thing that can

possibly work." I don't have a problem with Chuck's attempts to pre-parse

input and categorize it. But isn't the simplest thing that can possibly

work here just to encode source in something like this:

<token> <value> <token> <value> ...

Where <token> is a small value that tells how to interpret the <value> that

follows it. Let's say we divide up tokens into four bits (as Chuck does)

for the function, and have the other four bits left over for other

attributes. For example, Chuck reserves one four-bit pattern to declare an

integer that executes (that is, gets pushed on the data stack) and another

four-bit pattern for integers that are compiled into a word. But that isn't

enough for editing source code-- you also need to know what number base the

number was encoded in. Chuck apparently only uses decimal and hexadecimal

quantities, and this bit selects between them.

The <token> I'm describing can do exactly the same thing, with the advantage

that the 32-bit quantity that follows <token> is already in the proper form

needed to be used. There is no shifting or other decoding-- it's an integer

quantity that is simply fetched by the processor. Or put another way,

here's a decided non-Zen question: Which takes more effort-- to shift a

value before you use it, or to simply use it?
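A sketch of the <token> <value> stream being proposed (token values and layout are hypothetical, chosen only to match the description above):

```python
# Sketch of a <token> <value> stream: a one-byte token followed by a
# 32-bit value stored in its final, directly usable form.
import struct

TOK_EXEC_NUM, TOK_COMP_NUM = 0x12, 0x22  # hypothetical token bytes

def emit(stream, token, value):
    """Append one token/value pair; the value is stored as a plain int32."""
    return stream + struct.pack("<Bi", token, value)

def scan(stream):
    """Yield (token, value) pairs; no shifting or decoding of the value."""
    for off in range(0, len(stream), 5):
        yield struct.unpack_from("<Bi", stream, off)
```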

Related to this, Chuck's handling of text also seems wildly complex relative

to the "simplest thing that could possibly work" ideal. The most obvious

representation of text is simply text itself in some fixed bit-width size

(typically a byte). Instead, Chuck has to represent characters as

variable-width quantities. The *only* advantage I see to this is gaining

some speed during the comparison of strings, such as if one does a linear

search through the Forth dictionary to find the address of a word. But is

an increase in the speed of searching a benefit great enough to deal with

the annoyance of having to pack and unpack characters when you want to

actually use them? Or asked another way, this probably helps to make the

compiler extremely fast. But which is more important to the user of an

application written in colorForth-- the speed of compiling or the speed of

running the application that was compiled?

So please help me understand this-- assuming it isn't just Chuck's personal

preference and aesthetics. Note that I am NOT asking for justification of

pre-parsing. I think that is a simple (and old) idea that has immediate and

obvious value. What I'm asking about is the specific encoding Chuck uses

for his pre-parsed text. Why it instead of something simpler-- as I

suggested above? What is being optimized here? Who is benefited by this?





Sun, 04 Jan 2004 05:39:19 GMT Jeff Fo

#2 / 69 colorForth antecedents / pre-parsing questions Quote:

> It is intuitively obvious that pre-parsing source code

> makes the colorForth compiler radically simpler than

> traditional Forth models. But what isn't obvious-- at

> least to me-- is understanding the particular choices

> Chuck made for his pre-parsed form.



I think the history helps frame the sequence of choices.

Chuck's ideas of Forth are embodied in his chips and the

software tuned for them. His chips have all used word

(cell) addressing.

He also said that we should recognize that as a language

Forth was word oriented. In ColorForth he wanted a

text representation that made sense on his chips. While

a Pentium style implementation could take advantage of

byte addressing he wanted to access text in word sized

chunks, 16, 18, 20, or 32 bits or whatever, rather than 8

bit access. 8 bit access would be slow and inefficient

for Chuck's hardware/software vision.

He went from packing 3 6 bit characters in a 20 bit

word or 5 6 bit characters in a 32 bit word in his

representation of pre-parsed text. Later he moved

to a huffman encoding scheme with up to seven

characters packed into a word.
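A sketch of the earlier fixed-width packing described above: five 6-bit codes in one 32-bit word. The 6-bit alphabet here is invented for illustration; the actual character set was different.

```python
# Sketch: pack up to five 6-bit character codes into one 32-bit word,
# per the pre-Huffman representation described above. The 6-bit
# alphabet is invented for illustration.
ALPHABET = " abcdefghijklmnopqrstuvwxyz0123456789"  # code 0 = pad

def pack5(text):
    """Pack at most 5 characters, first character in the top bits."""
    assert len(text) <= 5
    word = 0
    for ch in text.ljust(5):
        word = (word << 6) | ALPHABET.index(ch)
    return word  # uses 30 of the 32 bits

def unpack5(word):
    """The 'unPACK' step: shift each 6-bit field back out."""
    chars = [ALPHABET[(word >> s) & 0x3F] for s in (24, 18, 12, 6, 0)]
    return "".join(chars).rstrip()
```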

Quote: > I don't see the value behind pre-parsing source into

> 32-bit quantities.



The underlying implementation is a 32 bit word addressing

machineForth. The Pentium implementation has a 32 bit

virtual machine, but the idea has been done on 18, 20,

24, and 32 bit silicon implementations.

Quote: > Chuck uses four bits of each 32-bit word to encode

> "function" (number, text, function, variable, etc.).

> The other 28 bits are used to represent the

> value. Thus, in order to do anything with one of the

> pre-parsed chunks of source code, he has to mask off the

> low four bits to determine the type, and

> then shift around the value one or more times to operate on it.



Right, that is how the colorForth internal flowchart

is different from classical Forth, which just parses

characters for a delimiter and then processes the word.

Quote: > And this is

> especially noticeable on text, which is encoded with a

> static Huffman table geared (presumably) for English.

> In order for Chuck to deal with each character of each

> 32-bit pre-parsed chunk, he has to call "unPACK". Why?



He almost never deals with each character. That is

not what he optimizes for. The design is optimized

for compilation speed and source size.

Quote: > This all strikes me as valuable if we were back in

> the 1970's when memory was expensive and address spaces

> were small. And it may even be useful on

> very small modern embedded systems. But does it make

> sense on a desktop machine?



Chuck's ColorForth is a development environment that

is tuned to both development and coding style suited

to Chuck's chips. It is a desktop machine port of

Chuck's ideas of how software, chips, and Forth

should work.

Some users of desktop environments refer to instantaneous

development cycles, but they still appear noticeable

to me and to Chuck. Non-trivial applications like

compiling chip designs can take considerable cycles.

Rather than waiting for hours Chuck wanted it to

happen in a click. So he did things like use

pre-parsed text rather than reading source character

by character and using the compiler internals that

were good enough 20 years ago.

Quote:

> I usually go by the operating principle of "do the simplest

> thing that can possibly work." I don't have a problem with

> Chuck's attempts to pre-parse input and categorize it. But

> isn't the simplest thing that can possibly

> work here just to encode source in something like this:

> <token> <value> <token> <value> ...





That sounds like some of the internals in Aha which is

another tokenized source approach to Forth tuned to

Chuck's hardware designs.

Quote: > Where <token> is a small value that tells how to

> interpret the <value> that follows it. Let's say we

> divide up tokens into four bits (as Chuck does)

> for the function, and have the other four bits left over

> for other attributes. For example, Chuck reserves one

> four-bit pattern to declare an integer that executes

> (that is, gets pushed on the data stack) and another

> four-bit pattern for integers that are compiled into a word.



I tuned the bits used to represent tokens to optimize

the act of processing the tokenized source in Aha.

Like ColorForth the compiler and editor are factored

differently than in traditional Forth. The view

provided by an editor is somewhat independent of the

compiler internals. But because of the tokenization

an editor can perform many forms of error checking

and correction at edit time that would normally stay hidden

until the compilation cycle.

Quote: > But that isn't enough for editing source code-- you

> also need to know what number base the number was encoded in.

> Chuck apparently only uses decimal and hexadecimal

> quantities, and this bit selects between them.



I think that is the way he has assigned his "color"

tokens.


> with the advantage that the 32-bit quantity that follows

> <token> is already in the proper form needed to be used.

> There is no shifting or other decoding-- it's an integer

> quantity that is simply fetched by the processor. Or put

> another way, here's a decided non-Zen question: Which takes

> more effort-- to shift a value before you use it, or to simply

> use it?



Yes. Tokens can be tuned for the particular type of

processing to be done. That was part of the fun in

the initial Aha design.

Yes, Grasshopper. Just fetching and using is simpler

than fetching and processing then using. Chuck's

packing scheme acts a little like a precomputed hash

table to speed up other things too.

Quote: > Related to this, Chuck's handling of text also seems

> wildly complex relative to the "simplest thing that

> could possibly work" ideal. The most obvious

> representation of text is simply text itself in some fixed

> bit-width size (typically a byte).



Simplest would probably be one character per cell, 32 bit

characters with a count. But other people use hash tables

and things to improve other characteristics. Simple is fine

but not at the expense of what you need to do.

Quote: > Instead of encoding the

> variable-width quantities.



Yeah, isn't that weird. It does seem to provide nice

density but does complicate the ColorForth editor.

Quote: > The *only* advantage I see to this is gaining

> some speed during the comparison of strings,



Considerable speed and compactness are gained.

Quote: > such as if one does a linear

> search through the Forth dictionary to find the

> address of a word. But is

> an increase in the speed of searching a benefit great

> enough to deal with the annoyance of having to pack and

> unpack characters when you want to actually use them?



On Chuck's word addressing designs packing and unpacking

are things that are very expensive compared to simple

access. 1ns vs ... The annoyance is shifted into the

domain of the editor where things happen at human

typing speed. This allows the compiler to be simpler

and faster and lets Chuck do things like compile chip

designs at incredible speed with the software and

in a way that makes sense for his chips too.

Quote: > Or asked another way, this probably helps to make the

> compiler extremely fast. But which is more important

> to the user of an application written in colorForth--



I.e. OKAD II with Chuck Moore as the user.

Quote: > the speed of compiling or the speed of

> running the application that was compiled?



The application "compiles" chips. ;-)

Quote: > So please help me understand this-- assuming it isn't

> just Chuck's personal preference and aesthetics. Note

> that I am NOT asking for justification of

> pre-parsing. I think that is a simple (and old) idea

> that has immediate and obvious value. What I'm asking

> about is the specific encoding Chuck uses for his pre-parsed

> text. Why it instead of something simpler-- as I

> suggested above? What is being optimized here? Who is

> benefited by this?



Chuck answered those questions in different parts of

various presentations. I reviewed them in my presentation

to FIG on Aha. I hope I also was able to answer the

questions above by reminding you about what the real

code on Chuck chips looks like and why Chuck wanted it

to work that way.




Sun, 04 Jan 2004 06:37:09 GMT Stephen J. Bev

#3 / 69 colorForth antecedents / pre-parsing questions Quote:

> Some users of desktop environments refer to instantaneous

> development cycles, but they still appear noticeable

> to me and to Chuck. Non-trivial applications like

> compiling chip designs can take considerable cycles.

> Rather than waiting for hours Chuck wanted it to

> happen in a click.



I'm curious, how much text is being compiled that takes this long and

on what sort of chip is it being run? The reason I ask is that if the

Forth has a tuned IO and uses a hashing dictionary it should be able

to process MB/s on a reasonably spec'd x86. I'm not knocking the

pre-parsed approach I'm just interested in knowing what the

environment is where this is a big win over just parsing at compile time.

I can imagine it is when target compiling on an 8-bit chip but it is

not clear to me that it is if you can cross-compile on a

desktop/workstation.




Sun, 04 Jan 2004 11:17:28 GMT Bernd Paysan

#4 / 69 colorForth antecedents / pre-parsing questions Quote:

> He went from packing 3 6 bit characters in a 20 bit

> word or 5 6 bit characters in a 32 bit word in his

> representation of pre-parsed text. Later he moved

> to a huffman encoding scheme with up to seven

> characters packed into a word.



Wouldn't a symbol table be even more efficient? Instead of encoding the

characters directly, add an entry to the symbol table for each new

symbol (i.e. : <symbol> does that, ":" refers to the "define word"

color), and refer to that entry using an index into the symbol table?

Since all words in Forth are used more than once (at definition time),

this saves a lot of space. Also, except for listing and editing, no

access to the symbol table is ever necessary (it's part of the editor,

not the rest of the Forth system), the index into the symbol table

itself is unique. Source files however are split into two parts: symbols

and the tokenized source itself. For editing, you need to access both,

for compiling, you just have to access the source. Symbols can be stored

in a format that makes sense on a particular machine, since they are

independent of the source; it's not even necessary to compress them to

the limit, since repeated occurrences in the source always refer to the

same symbol (thus compressing the source efficiently). Furthermore, by

wasting a bit of space in the symbol table, the search step can be

eliminated completely: : <symbol>, when compiled, will put the label

address into a spare cell at the symbol's position, and compiling symbol

will just use that label address to compile the call. The job of FIND

then is already performed by the editor (a place where we can accept a

few microseconds per symbol).
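A sketch of that symbol-table scheme (class and method names are invented): the editor interns names once, and the compiler fills a spare cell with each definition's address, so compiling a reference is a single indexed fetch rather than a search.

```python
# Sketch of the symbol-table scheme: the editor interns names once;
# the compiler fills a spare cell per entry with the definition's
# address, so a reference compiles with one indexed fetch, no search.
class SymbolTable:
    def __init__(self):
        self.names = []    # symbol text, needed only for listing/editing
        self.addrs = []    # the "spare cell" per symbol
        self.index = {}    # editor-time lookup; searching here is fine

    def intern(self, name):
        """Editor time: map a name to its index, adding it if new."""
        if name not in self.index:
            self.index[name] = len(self.names)
            self.names.append(name)
            self.addrs.append(None)
        return self.index[name]

    def define(self, sym, address):
        """Compile time ':' -- record where the definition landed."""
        self.addrs[sym] = address

    def resolve(self, sym):
        """Compile time reference -- FIND reduced to one fetch."""
        return self.addrs[sym]
```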

--

Bernd Paysan

"If you want it done right, you have to do it yourself"

http://www.jwdt.com/~paysan/





Sun, 04 Jan 2004 16:39:06 GMT Roelf Toxopeu

#5 / 69 colorForth antecedents / pre-parsing questions Quote:

> I've been looking into colorForth more deeply these days. Like some of the

> others here, I can't run Chuck's executable, but that's okay. I'm more

> interested in the concepts behind colorForth than Chuck's implementation. I

> expect that as time goes on, there will probably be folks who build

> colorForth on top of Win32 or Linux platforms, using those interfaces

> instead of writing directly to the hardware.





There are already colorForth inspired implementations from others,

actually for quite some time.

See the UltraTechnology website for pointers.

(There might be some activity on the OpenFirmware and MacOS front.

Information about hardware components on the Mac is difficult or

sometimes impossible to get, which means system calls have to be used, I guess.

I would be, pleasantly, surprised to learn differently.)

-Roelf





Sun, 04 Jan 2004 17:26:50 GMT Roelf Toxopeu

#6 / 69 colorForth antecedents / pre-parsing questions Quote:

> X-no-archive: yes



> > There are already colorForth inspired implementations from others,

> > actually for quite some time.

> > See the UltraTechnology website for pointers.

> Yes, but they are standalone "operating systems" as well, not systems built

> on top of Windows or Linux, as John was referring to.





Ok, taking him literally, there's none that I know of, for those systems.

I read it as 'on top of -fill in your OS-'.

You might have a closer look at Terry Loveall's DOS colorForth pointed to

at the UT site. I would not consider that a standalone "operating system".

It uses DOS functions for file I/O.

But whatever you like...

-r





Mon, 05 Jan 2004 04:19:36 GMT Jeff Fo

#7 / 69 colorForth antecedents / pre-parsing questions Quote:

> will just use that label address to compile the call. The job of FIND

> then is already performed by the editor (a place where we can accept a

> few microseconds per symbol).



Almost. FIND cannot be completely performed in an editor

because it returns an XT and that only exists after

compilation. But what you describe does shift the construction

of the symbol table to edit time and reduce the associated

compile function from microseconds to nanoseconds. But yes

that is the idea, to shift some things from compile time

back to edit time where things are moving more slowly.

A side benefit is that more error checking can be done at

edit time or simply eliminated altogether by imposing

restrictions in the editor. It is easier to catch and

eliminate errors when they get made in the editor than to

let them sit until caught by the compiler at some later

time when they are not so fresh in the mind of the

programmer.





Sun, 04 Jan 2004 22:21:37 GMT Jeff Fo

#8 / 69 colorForth antecedents / pre-parsing questions Quote:



> > He went from packing 3 6 bit characters in a 20 bit

> > word or 5 6 bit characters in a 32 bit word in his

> > representation of pre-parsed text. Later he moved

> > to a huffman encoding scheme with up to seven

> > characters packed into a word.

> Wouldn't a symbol table be even more efficient?





I think it would be more efficient for some things

like editing. But Chuck was not concerned about

overhead in the editor, the idea was to move the

overhead out of the compiler and into the editor.

Quote: > Instead of encoding the

> characters directly, add an entry to the symbol table for each new

> symbol (i.e. : <symbol> does that, ":" refers to the "define word"

> color), and refer to that entry using an index into the symbol table?



In Aha the editor pre-parsing includes uninitialized CFA links.

So the compiler need only initialize the CFA link or take one

link to get a compilation address. The idea being to minimize

what goes on at compile time. There could be a number of

different ways to avoid the overhead in the traditional Forth

approach.

Quote: > Since all words in Forth are used more than once (at definition time),

> this saves a lot of space. Also, except for listing and editing, no

> access to the symbol table is ever necessary (it's part of the editor,

> not the rest of the Forth system), the index into the symbol table

> itself is unique. Source files however are split into two parts:

> symbols

> and the tokenized source itself. For editing, you need to access

> both, for compiling, you just have to access the source. Symbols

> can be stored in a format that makes sense on a particular

> machine, since they are independent of the source; it's not

> even necessary to compress them to the limit, since repeated

> occurances in the source refer always to the

> same symbol (thus compressing the source efficiently). Furthermore, by

> wasting a bit space in the symbol table, the search step can be

> eliminated completely: : <symbol>, when compiled, will put the label

> address into a spare cell at the symbol's position, and compiling

> symbol

> will just use that label address to compile the call. The job of FIND

> then is already performed by the editor (a place where we can accept a

> few microseconds per symbol).



That is what Aha does.




Sun, 04 Jan 2004 21:29:39 GMT Jeff Fo

#9 / 69 colorForth antecedents / pre-parsing questions Quote:



> > Some users of desktop environments refer to instantaneous

> > development cycles, but they still appear noticeable

> > to me and to Chuck. Non-trivial applications like

> > compiling chip designs can take considerable cycles.

> > Rather than waiting for hours Chuck wanted it to

> > happen in a click.

> I'm curious, how much text is being compiled that takes this long and

> on what sort of chip is it being run?





From megabytes to megalines of text. Pentiums, Alphas, and

other high end chips used in high end machines. In the worst

cases compiles take days. High end, non-trivial stuff,

certainly not 8 bit machines...

Reducing the amount of text to be processed by a few

orders of magnitude is a big factor. Reducing the

processing required by pre-parsing at edit time is

the other factor.

Quote: > The reason I ask is that if the

> Forth has a tuned IO and uses a hashing dictionary it

> should be able to process MB/s on a reasonably spec'd x86.

> I'm not knocking the pre-parsed approach I'm just interested

> in knowing what the environment is where this is a big win over

> just parsing at compile time.



Any environment that takes a long time to boot, a long time

to load up the cache and a long time to load things from

files etc. before it can even begin to parse text, i.e.

popular OS used on desktops and large multiprocessing systems.

Any enviroment that takes a long time to boot, a long timeto load up the cache and a long time to load things fromfiles etc. before it can even begin to parse text, ie.popular OS used on desktops and large multiprocessing systems. Numbers are an excellent example. In traditional Forth

the compiler parses a text string character by character

to begin while in ColorForth a color token directs the

compiler to pick up a binary value. The fact that the

token also directs the editor as to how to display and

edit the field does not slow the ColorForth compiler

at all, and what it does is analogous to a literal.

So picking up a literal is several times faster than

parsing a character string.

But a traditional Forth will then search the dictionary

for the string to see if it is a compiled word. With

many systems containing tens of thousands of words this

is a big hit compared to the zero cycles in ColorForth.

Then after the dictionary search and failure a traditional

Forth will then convert the string character by character

into a number using the current number base, or perhaps

a prefix temporary number base word that is in many

ways analogous to Chuck's prefix tokens. So all in

all the traditional approach is ten to a hundred to

a thousand times slower depending on the internals,

size of the dictionary etc. Best case in Chuck's

approach is a couple of nanoseconds. It is hard

to parse a string, search a dictionary, and convert

a string to a number in a couple of nanoseconds

using the old approach.
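The two compile paths for a number described above can be contrasted in a toy sketch (hypothetical Python, purely illustrative; the word list and token format are assumptions, and no timing claim is made):

```python
# Traditional path: parse the text, search the dictionary, fail,
# then convert character by character in the current number base.
# Pre-parsed path: the token already carries the binary value.

dictionary = ["dup", "drop", "swap", "over", "+", "!"]   # toy word list

def traditional_compile(text, base=10):
    for word in dictionary:            # linear dictionary search ...
        if word == text:
            return ("CALL", word)
    value = 0                          # ... fails, so convert digit by digit
    for ch in text:
        value = value * base + int(ch, base)
    return ("LIT", value)

def preparsed_compile(token):
    kind, value = token                # a number token holds binary already
    if kind == "number":
        return ("LIT", value)          # just pick up the literal
    return ("CALL", value)

print(traditional_compile("1234"))          # parses and converts: ('LIT', 1234)
print(preparsed_compile(("number", 1234)))  # no parsing at all: ('LIT', 1234)
```

Both paths produce the same literal; the difference is that the second one does no search and no per-character work at compile time.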

Quote: > I can imagine it is when target compiling on

> an 8-bit chip but it is not clear to me that it

> is if you can cross-compile on a

> desktop/workstation.



Having seen compiles take days on multi-million

dollar parallel computers, or compiles take hours

on a room full of workstations the idea was to get

it down to milliseconds. The idea was not dealing

with the limitations of 8-bit computers, but dealing

with the limitations of software on high end machines.

I have had this discussion before so please if you

have a high end system where you can load and compile

the OS and GUI and compiler and high end application all

from source and all in a few milliseconds from power

on then let us know about it. I have heard before that

waiting for a few minutes to load boot code and

waiting for hours or days for the compilation is

considered imperceptible to some people but I am not

one of them.

No one has been talking about doing VLSI CAD on

8 bit machines. After Chuck worked with VLSI CAD

for almost a decade the ideas in ColorForth

evolved to solve remaining performance problems.

It has nothing to do with the limitations of 8

bit computers vs desktop machines.





Sun, 04 Jan 2004 21:28:17 GMT Jerry Avin

#10 / 69 colorForth antecedents / pre-parsing questions Quote:



> > X-no-archive: yes



> > > There are allready colorForth inspired implementations from others,

> > > actually for quite some time.

> > > See the UltraTechnology website for pointers.

> > Yes, but they are standalone "operating systems" as well, not systems built

> > on top of Windows or Linux, as John was referring to.

> Ok, taking him literally, there's none that I know of, for those systems.

> I read it as 'on top of -fill in your OS-'.

> You might have a closer look at Terry Loveall's DOS colorForth pointed to

> at the UT site. I would not consider that a standalone "operating system".

> It uses DOS functions for file I/O.

> But whatever you like...

> -r





By DOS functions, do you mean the contents (and dispatch table) of the

BIOS ROM? If so, then with little modification, it could be a CP/M BIOS

instead. I ran Aforth and polyFORTH that way. I considered that the

system ran on "bare iron".

Jerry

--

Engineering is the art of making what you want from things you can get.

-----------------------------------------------------------------------





Mon, 05 Jan 2004 09:46:44 GMT Stephen J. Bev

#11 / 69 colorForth antecedents / pre-parsing questions Quote:

> Having seen compiles take days on multi-million

> dollar parallel computers, or compiles take hours

> on a room full of workstations the idea was to get

> it down to milliseconds.



Is this Forth or another language? I've seen C++ compiles take lots

of time and lots of memory but I've never experienced that with Forth.

If you have seen it in Forth, what size of application was being

compiled i.e. roughly how many bytes/lines/words?

Quote: > I have had this discussion before so please if you

> have a high end system where you can load and compile

> the OS and GUI and compiler and high end application all

> from source and all in a few milliseconds from power

> on then let us know about it.



I don't since all the x86 machines I have access to have a BIOS in

them which takes seconds or 10s of seconds to initialize before it

even sees whatever kernel I'm trying to run. On the slower ones this

actually takes longer than the kernel boot :-<

However, I'm interested in how long it would take a conventional

parsing Forth to compile the volume of code that you have. Perhaps

you could give a ballpark figure or if not an estimate of the number

of bytes and/or lines of Forth that are involved? I don't doubt that

pre-tokenizing is quicker, I'm just interested in how much quicker.





Mon, 05 Jan 2004 11:32:37 GMT John Passanit

#12 / 69 colorForth antecedents / pre-parsing questions

Quote: > I have had this discussion before so please if you

> have a high end system where you can load and

> compile the OS and GUI and compiler and high

> end application all from source and all in a in few

> milliseconds from power on then let us know

> about it.



I can't think of such a platform-- but then again, I don't see it as

valuable.

You keep talking about compiles that take days. Yet I seriously doubt that

*anyone* here works on such systems. Do they exist? Sure-- why not. Are

they the stuff most people deal with? Nope.

I am currently working on a large system written in C. If you compile

everything, it takes around one minute and thirty seconds. Yet 98% of the

time, it takes less than a second to compile and a couple more seconds to

link. That's because the classic C software development model is to compile

code only when it needs to be compiled, using precompiled objects whenever

possible. I spend more time reaching for my mouse in a day than I do

waiting for a compile.

I'm not saying this is superior to the ultra-fast compile times of

colorForth. It's just a different model. Jeff, I have no idea how much

time you spend these days working with real-world projects in languages

other than Forth. But compiles that last for days are not the norm. Hell,

it takes no more than a couple hours-- on slow hardware-- to compile an

entire Linux distribution (binutils, gcc, kernel, X, utilities, games, etc.)

from scratch. The kinds of systems you're talking about are so rare that I

wonder why you bring them up-- except to come up with an extreme boundary

case. Let's talk about the real world, please.

Certainly the 2% of the time that I sit waiting for a complete compile, I

often wish it was faster. But that's such a minor portion of my development

time, that I don't worry about it. I spend more time day-dreaming, surfing

the web, and doing things that ultimately improve my productivity by

relaxing or solving different problems. colorForth would-- for me-- get me

back that 2% of the time. And while some may find that compelling, I don't.

colorForth's ultra-fast compilation times have benefits for other domains.

For example, I can easily imagine a colorForth engine at the heart of a

Java/C#-like system that let people download code over the Internet and

instantly compile. The current generation of Just-In-Time compilers could

probably benefit from the ideas of colorForth.

But as a programmer, the speed of the compiler is the *last* thing I care

about. I care more about if the language lets me express my algorithms and

data structures in a simple, direct, and intuitive way. And I care more

about the run-time speed of the language than I do how fast it compiles.

Of course my priorities aren't yours or Chuck's or necessarily anyone else's.

Quote: > I have heard before that waiting for a few

> minutes to load boot code and waiting for

> hours or days for the compilation is

> considered imperceptable to some people

> but I am not one of them.



The sad part is that colorForth really is a compelling system on a number of

levels. Unfortunately you actually diminish colorForth's value by creating

straw-man arguments and not making sensible comparisons. I look forward to

more people getting their hands on colorForth, kicking the metaphorical

tires, and learning both the pluses and minuses of the system. Such people

will write about it dispassionately as a piece of technology, and won't be

injecting your hyperbole.




Mon, 05 Jan 2004 12:56:14 GMT John Passanit

#13 / 69 colorForth antecedents / pre-parsing questions

Quote: > I think the history helps frame the sequence of choices.

> Chuck's ideas of Forth are embodied in his chip and the

> software tuned for them. His chips have all been word

> (cell) addressing.



I understand this. But it begs some questions.

Somewhere, I read that Chuck isn't a big fan of writing portable code. He

would rather rewrite to target different hardware than build-in generic

interfaces. So why doesn't he do that with colorForth? He's on a Pentium--

so why isn't he exploiting the features of the processor he is on? Why come

up with a pre-parsed encoding format for source code that makes perfect

sense for his chips, but which doesn't make sense for Pentiums?

I guess I see an inconsistency here. colorForth's primary application is a

CAD system to design Chuck's chips-- something he does not on the target

chips, but on desktop systems. Given Chuck's passion for speed, it seems

illogical that he would be adding a layer of indirection here.

But whatever. As I wrote, what I find compelling about colorForth isn't the

implementation, but the ideas behind it.

Quote: > Yes, Grasshopper. Just fetching and using is simpler

> than fetching and processing then using. Chuck's

> packing scheme acts a little like a precomputed hash

> table to speed up other things too.



Not that I can see. The only value I see is that when he does a linear

search through the table of words, he can do fast comparisons. That's not a

hash. Indeed, if it was, Chuck wouldn't need to do a time-consuming linear

search through the table.

I guess this is what bugs me about colorForth. You keep reiterating the

theme that the system was built for compilation speed and compactness. Yet,

there are obvious speed optimizations that could be done to make it even

faster. We're not talking about {*filter*} ideas either-- an encoding format

more suited to the Pentium's byte-addressing capabilities and precalculating

a hash value are obvious ideas that would make colorForth even faster than

it is. And Chuck's not a stupid man-- he obviously knows this. So the

question becomes why doesn't he take the next step if compilation speed is

really that important to him?

Quote: > Simplest would probably be one character per cell, 32 bit

> characters with a count. But other people use hash tables

> and things to improve other characteristics. Simple is fine

> but not at the expense of what you need to do.



But nothing I have said is at the expense of what needs to be done. I have

purposefully focused on a narrow set of optimizations that wouldn't break

anything in colorForth.

Quote:

> > Instead, Chuck has to represent characters as

> > variable-width quantities.

> Yeah, isn't that weird. It does seem to provide nice

> density but does complicate the ColorForth editor.





And I guess that's where we get into different priorities. Chuck wants

compilation speed at the expense of complexity. I want simplicity at the

expense of compilation speed. After all, when I'm done compiling, I should

be able to take a snapshot of memory, dump it to disk, and then be able to

instantly reload it. The end users of my code don't care in the least how

fast the compiler was for me. They care about speed at run-time. And

really, that's what I care about too.

Quote: > Chuck answered those questions in different parts of

> various presentations. I reviewed them in my presentation

> to FIG on Aha. I hope I also was able to answer the

> questions above by reminding you about what the real

> code on Chuck chips looks like and why Chuck wanted it

> to work that way.



Well, the answer apparently boils down to two facts-- colorForth apparently

exists as a code simulator for the eventual environment of his chips. And--

as I suggested-- some of this boils down to aesthetic choices Chuck makes

and his priorities.

I look forward to people with different priorities taking the core concepts

in colorForth and adapt them to domains Chuck either doesn't care about or

doesn't understand.





Mon, 05 Jan 2004 13:16:21 GMT John Passanit

#14 / 69 colorForth antecedents / pre-parsing questions



Quote: > I'm curious, how much text is being compiled that takes this

> long and on what sort of chip is it being run? The reason I

> ask is that if the Forth has a tuned IO and uses a hashing

> dictionary it should be able to process MB/s on a

> reasonably spec'd x86. I'm not knocking the pre-parsed

> approach I'm just interested in knowing what the environment

> is where this is a big win over just parsing at compile time.

> I can imagine it is when target compiling on an 8-bit chip but

> it is not clear to me that it is if you can cross-compile on a

> desktop/workstation.



That's an interesting question too. The last time I did on-target Forth

development on an 8-bit system was a few years ago, and even on that system,

compiles were insanely fast. And that was on an 8051-based system running at

12 MHz! I can run an 8051 simulator on my slowest Pentium-class desktop

machine that runs faster than the original target does-- run a Forth on

that, and have a system that compiles with the same perceptible speed as

colorForth.

And I have to ask the other big question-- why does the end user care about

compilation speed?

I'm just not getting what the priorities are here.





Mon, 05 Jan 2004 13:21:23 GMT Jeff Fo

#15 / 69 colorForth antecedents / pre-parsing questions Quote:

> Is this Forth or another language? I've seen C++ compiles take lots

> of time and lots of memory but I've never experienced that with Forth.

> If you have seen it in Forth, what size of application was being

> compiled i.e. roughly how many bytes/lines/words?



I have not seen any Forth apps that took days to compile.

I haven't seen any apps that big in Forth either. Forth

normally has a faster compiler than multipass compilers

and Forth apps tend to do more with less code. The 20

year old Forth techniques are still faster than most

other things, but still much slower than what Chuck

has moved to in order to tackle bigger problems.

Most people don't use an OS written in Forth so they

must compile the OS with another compiler and that

compiler usually isn't written in Forth so they have

to compile the compiler with something other than

Forth. Their Forth may or may not be written in

Forth so they may have to compile their Forth

with another compiler, then compile their app

with their Forth compiler. This means they have

to deal with a lot more source code than when

it is all Forth and doesn't demand multiple

compilers.

And as you know, when you optimize, whatever bottleneck

is left over becomes the largest bottleneck. After

removing the bottlenecks imposed by a conventional OS,

Chuck went on to reduce the bottlenecks in the

structure of the data and computations involved in the

transistor models by careful scaling. After a decade

this exposed the bottleneck caused by Chuck's optimized

approach and the GDSII constraints. So OKAD II was

redesigned to take advantage of the GDSII constraints

that were left as bottlenecks in the original OKAD

and turn them into an advantage.

That is pretty typical of the history of things like

that. You can't anticipate what problems will appear

in later versions until you solve the initial problems

and see what problems remain. But the legacy of Forth

was to remove the overhead imposed by conventional

compilers or OS by having a custom OS or at least custom

drivers written in Forth. That left the overhead still

in Forth such as the simple way it searches the dictionary

every time at compile time for every number and fails

to find the numbers in most cases.

Quote: > However, I'm interested in how long it would take a conventional

> parsing Forth to compile the volume of code that you have.



Well the first problem is that the volume of code is related

to the type of Forth. If you remove the overhead of the

conventional OS such as its file system from your application

you also remove the OS code from your required code list.

That is the first factor, reduce the required code by a

huge factor.

Systems that have megalines of source for their OS and GUI

and megabytes of source for their Forth compiler need to compile

megalines and megabytes to get a working Forth. So asking how

long it would take one of those systems to compile the few

lines of Forth required to get the OS or GUI functions is

not relevant to much of anything. The relevant question is

how long it takes them to process the code they have to process,

not the amount of code that someone else needs to process.

Since the simpler Forth has less code to compile, that is

the biggest reason that it can compile faster. If it

takes 500 lines instead of 500,000 lines then generally

speaking it will take 1000 times less time, without

taking into account how much faster the compiler is.

But you can't separate the two things. Bigger

more complex compilers that have larger amounts of

code and/or ride on top of even bigger more complex

environments have to process larger amounts of code.

Quote: > Perhaps you could give a ballpark figure or if not an

> estimate of the number of bytes and/or lines of Forth

> that are involved?



On the high end hundreds of files and megabytes of code.

Maybe someone will report thousands of source files in

one of their big applications. I haven't spent the

thousands of dollars required to get and compile the

whole source with some of the popular compilers, so I just

use smaller programs to measure compilation speed.

One problem I sometimes see is that other people who

publish the speed leave out the time the OS used

to load the code from disk and just give the out of

cache figure which looks to me like a very deceptive

representation of how long the system and OS really

needed to load those 100 files from disk and compile them.

In one case I reported that a Forth advertised as the

fastest available had a brain damaged file interface

that combined with anti-virus software to impose an

absurd overhead on compilation.

If one simply accepts that sort of overhead and removes

it from benchmarks because one figures that everyone just

takes all that for granted, then it will appear that that

sort of overhead doesn't exist and that no one cares about

it.

The worst cases I have seen on small programs were times

of 15 minutes on a 400 MHz Pentium II with ... so I can

extrapolate about how long the same software would

take to compile 1000 times more source. I have never

heard anyone report the actual times needed for their

system to reboot and recompile the OS and compiler and

large applications.

The number of lines per second or words per second or

bytes per second compiled from cache are impressive

numbers compared to the old systems we used in the old

days. But the size of the source that many people use

has expanded at a similar rate to the increase in compiler

speed and I prefer to use the actual time the actual

system needs to compile actual source with all the

OS overhead times included.

Quote: > I don't doubt that pre-tokenizing is quicker, I'm just

> interested in how much quicker.



You need to talk about specific compilers and specific

programs for such comparisons to mean much. In Aha

there are several types of tokens and they compile

at different speeds so the balance of the program

can make a big difference. Some are 10x faster

than the previous compiler, some are 100x faster;

it depends on the program and the two compilers

in question.

Even with a traditional system with a hash table

lookup, consider the number example. Parse a

string, perform a hash, search the dictionary (from

a few hash compares to possibly thousands of string

compares), fail, convert character by character

to a number using the current or temporary number

base. When you compare the worst case above to

the pre-parse technique of loading a literal

the speed ratio might be 10,000 to 100,000 to 1.
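A hashed FIND like the one just described can be modeled in a short sketch (hypothetical Python; the bucket count and hash function are arbitrary choices of mine): the hash narrows the search to a few compares per bucket, but an undefined name such as a number still has to fail the search before the conversion step runs.

```python
# Toy hashed dictionary: hashing picks one bucket, leaving a few
# string compares instead of thousands -- but a number still pays
# for a failed search before falling through to character-by-
# character conversion, which the pre-parsed approach skips entirely.

BUCKETS = 8

def hash_name(name):
    h = 0
    for ch in name:                         # simple illustrative hash
        h = (h * 31 + ord(ch)) % BUCKETS
    return h

buckets = [[] for _ in range(BUCKETS)]
for word in ["dup", "drop", "swap", "over", "+", "!"]:
    buckets[hash_name(word)].append(word)

def find(name):
    """Return the name if it is a defined word, else None."""
    for word in buckets[hash_name(name)]:   # a few compares, not thousands
        if word == name:
            return word
    return None                             # the 'fail' case for numbers

print(find("swap"))    # found after a couple of compares in its bucket
print(find("1234"))    # None: only now would the number conversion begin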

By eliminating all file loads, all dictionary searches,

and all string parsing at compile time, a compiler can

be made MUCH faster, especially if it has a simple

optimizer that doesn't take many cycles. Worst case

speedup guess 10x ? Best case speedup guess 100,000x ?

Average speedup guess 100x to 1000x depending on program ?

I know in the Aha project the goal was to provide

maximum compilation speed from symbolic source with

additional goals of compressing the source and

simplifying the compiler. It has features of a

relocatable code linker, but the processor does not

have relocatable code. It has features of JIT.

The compiler could have been made simpler and smaller

but it would also make it slower some of the time. The

idea was to turn second compiles into millisecond compiles

and millisecond compiles into microsecond compiles

while fitting 4 times as many programs into the same system.

That is the compilation speed side of the equation

which is then multiplied by the required code side of

the equation. Again those mileage figures vary. Some of

the fun of writing that kind of code is the detailed

comparisons of exactly how fast each option would be

on some given code on your processor.

It is easy to generate different cases that fall

between the best cases and worst cases. It is not

easy to know where the mix of code in any given

program would fall without detailed benchmarks.

But ask yourself, if the best case is a 1ns literal

and the worst case is a character by character parse

followed by a search of tens of thousands of dictionary

entries followed by a character by character conversion

how many ns will the worst case take? Xns/1ns will

be the ratio you come up with.

In Aha, words that are recognized as compiling to

relocatable and unambiguous opcodes get tokenized

that way and get the biggest compilation speedup.

Next come the words that compile as simple Machine

Forth operations; these require no dictionary

lookup and get the next biggest compilation speedup.

User defined words require going through a dictionary

link and making a few decisions to get compiled and

have a slightly higher overhead than the other types

of tokens, but still require no dictionary search so

still have a substantial speedup compared to a traditional

compiler technique. The token mix for a given

application could result in ~ a 10/1 variation

in speedup depending on the mix. The same things

that result in faster compilation also result in

faster application code in this case.
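The three-tier token dispatch described above might be modeled roughly like this (hypothetical Python; the token kinds, opcode values, and addresses are my assumptions for illustration, not Aha's actual encoding):

```python
# Three pre-parsed token kinds, fastest first: tokens that are already
# opcodes, fixed-table Machine Forth operations, and user words reached
# through a link -- none needs a dictionary search at compile time.

MACHINE_OPS = {"dup": 0x01, "drop": 0x02, "+": 0x03}  # assumed encodings

user_addrs = [0x100]   # addresses of user definitions, filled as they compile
output = []

def dispatch(token):
    kind, value = token
    if kind == "opcode":                   # already an opcode: emit as-is
        output.append(value)
    elif kind == "machine":                # fixed table, one lookup
        output.append(MACHINE_OPS[value])
    elif kind == "user":                   # follow the link: one fetch
        output.append(("CALL", user_addrs[value]))

for t in [("opcode", 0x90), ("machine", "dup"), ("user", 0)]:
    dispatch(t)

print(output)   # [144, 1, ('CALL', 256)]
```

The per-token cost differs by kind, which is why the token mix of a given program would swing the overall speedup, as the post says.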





Mon, 05 Jan 2004 12:33:03 GMT


