#421 From: "JMA" <mail@...>
Date: Sun Apr 7, 2002 9:48 pm
Subject: CVS additions mailjmase
Hi all !
The following has been added to the CVS.
src\include\osfree.h           Used to include OS/2 toolkit
src\include\all_shared.h       Shared code for all source
src\include\all_messageids.h   Message id's
src\cmd\ansi                   ansi.exe source
src\cmd\ansi\makefile
src\cmd\ansi\ansi.c
src\cmd\ansi\ansi.lnk
src\cmd\chkdsk                 chkdsk.exe source
src\cmd\chkdsk\makefile
src\cmd\chkdsk\chkdsk.c
src\cmd\chkdsk\chkdsk.lnk
src\cmd\include                Shared code for all cmdline tools source
src\cmd\include\cmd_ExecFSEntry.h
src\cmd\include\cmd_MessageIDs.h
src\cmd\include\cmd_Messages.h
src\cmd\include\cmd_QueryCurrentDisk.h
src\cmd\include\cmd_QueryFSName.h
src\cmd\include\cmd_ShowVolumeInfo.h
src\cmd\include\cmd_shared.h
src\cmd\shared                 Shared code for all cmdline tools source
src\cmd\shared\cmd_ExecFSEntry.c
src\cmd\shared\cmd_FSEntry16.c
src\cmd\shared\cmd_Messages.c
src\cmd\shared\cmd_QueryCurrentDisk.c
src\cmd\shared\cmd_QueryFSName.c
src\cmd\shared\cmd_ShowVolumeInfo.c
Sincerely
JMA
Development and Consulting
John Martin , jma@...
==================================
Website: http://www.jma.se/
email: mail@...
Phone: 46-(0)70-6278410
==================================
Part 15 - Apr 07 2002
Re: Part 15
#422 From: "Lynn H. Maxson" <lmaxson@...>
Date: Sat Apr 13, 2002 8:33 pm
Subject: Emphasis on list processing lynnmaxson
I just feel the need to further respond to Frank
Griffin's reasonable inquiry relative to SL/I. As
a development effort we are quite few in number for
the size of the project. A project of this size,
whether you are IBM, Apple, Microsoft, or Linux
(open source), runs into the hundreds of people
just for development. When you include maintenance,
i.e. fixpaks, and new versions, both of which
normally involve multiple concurrent tracks, the
numbers quickly climb into the thousands.
However long it would take our group to produce an
initial product, assuming that it is competitive in
function and performance with other offerings, we
could not maintain a competitive pace during the
maintenance and new-version stage, i.e.
post-development. The issue is clearly one of
increased productivity. We need to achieve the
same result in less time with fewer people.
The primary activity we engage in is writing, the
writing of user requirements, the writing of
specifications, the use of CASE tools in the
writing of analysis and design documents
(dataflows, structure charts, UML), the writing of
source code, the writing of test cases (including
test scripts and supporting test data). We have
two approaches. One, we can reduce the number of
different writings. Two, we can reduce the writing
necessary within any given form. Assuming that the
results of such writing are necessary or desirable,
the answer in either case lies in turning such
writing over to software.
Now we cannot avoid the writing of user
requirements. Technically we cannot avoid the
translation of user requirements into formal
specifications, though in practice we often defer
such writing until programming. Logic programming,
which accepts our specification language as a
programming language, goes directly from
specifications to executable(s). If so directed as
in the instance of SQL, logic programming performs
an exhaustive true/false proof, which in essence
covers the test cases.
Thus in theory we have only three forms of writing
necessary; two of them (user requirements on the
input side and user documentation on the output
side) lie outside the formal software development
process. Only one, specification, lies within the
process. In logic programming this is the only
writing necessary before turning analysis, design,
construction, and testing over to the software.
Setting aside for the moment the creation of the
rows in the table, if you accept that SQL works,
then you must accept that logic programming works
directly from specifications (the SQL query) and
that the software, in this instance the database
manager, does the analysis, the design, the
construction, and the testing, which is the result
of the query. If you are familiar with Prolog,
then you know it operates the same.
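As a concrete, if simplified, illustration of that
division of labor, consider the following C sketch.
It uses SQLite's C API purely as an example (any SQL
database manager would do), and the database, table,
and column names are made up: the programmer writes
only the specification, the SQL query, and the
database manager performs the analysis, design,
construction, and exhaustive search behind a single
call.

/* Sketch only: the programmer supplies the specification (an SQL
   query); the database manager decides how to satisfy it.  SQLite's
   C API is used as an example; "parts.db" and its columns are
   invented for illustration. */
#include <stdio.h>
#include <sqlite3.h>

/* Callback invoked once per row of the result set. */
static int print_row(void *unused, int ncols, char **values, char **names)
{
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s=%s  ", names[i], values[i] ? values[i] : "NULL");
    printf("\n");
    return 0;                       /* 0 = keep going */
}

int main(void)
{
    sqlite3 *db;
    char *errmsg = NULL;

    if (sqlite3_open("parts.db", &db) != SQLITE_OK)
        return 1;

    /* The "specification": what we want, not how to get it. */
    const char *spec =
        "SELECT part_no, descr FROM parts WHERE qty_on_hand = 0;";

    /* Behind this one call the database manager does the analysis,
       design, construction, and the exhaustive search (the testing). */
    if (sqlite3_exec(db, spec, print_row, NULL, &errmsg) != SQLITE_OK) {
        fprintf(stderr, "query failed: %s\n", errmsg);
        sqlite3_free(errmsg);
    }

    sqlite3_close(db);
    return 0;
}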
So there can be no argument that the only formal
writing necessary is that of specification, and that
the software, incorporating the two-stage proof
engine of logic programming, suffices for the rest.
So we already have a proven means of reducing the
number of different writings, thus a corresponding
reduction in people required. Now we just need to
look at reducing what we have to write in the one
form of writing remaining (again setting aside
issues relative to test cases, scripts, and data).
As APL has shown since 1962, one such reduction
occurs by allowing operators to have aggregate
operands (arrays and structures). APL has also
shown the advantage of an "operator rich" language,
as well as an economy of expression gained by having
each operator have a monadic (one operand) as well
as a dyadic (two operands) form. In addition APL is
implemented as an interpreter, with the additional
advantages that this offers.
APL, however, is missing one aggregate form, that
of a list, on which logic programming depends. A
list may contain 0 (empty), 1, or more entries. In
logic programming as part of its exhaustive
true/false proof (the automatic testing stage) the
results can be 'false' (no true instances) or one
or more true instances.
The processing of an array, a structure, or a list
involves iteration. A difference between, say, C,
which allows element operands only, and PL/I, which
allows aggregate operands in addition, is that in C
the programmer must write the iterative logic; in
PL/I, the software writes it. The same
software-based iteration occurs with operators that
have list operands.
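A small C sketch of the difference (the names and
the fixed size N are invented for illustration):
with element operands the programmer writes the loop
at every point of use; with an aggregate operand the
iteration is written once, inside the software, just
as a PL/I array assignment hides it inside the
language.

#include <stdio.h>

#define N 8

/* Element operands: the C programmer writes the iterative logic
   at every point of use. */
void add_by_hand(double a[], const double b[], const double c[])
{
    for (int i = 0; i < N; i++)
        a[i] = b[i] + c[i];
}

/* Aggregate operands: the iteration is written once, inside the
   "software"; callers simply name the aggregates. */
void vec_add(double *a, const double *b, const double *c, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = b[i] + c[i];
}

int main(void)
{
    double b[N] = {1,2,3,4,5,6,7,8};
    double c[N] = {8,7,6,5,4,3,2,1};
    double a[N];

    add_by_hand(a, b, c);          /* loop visible to the programmer */
    vec_add(a, b, c, N);           /* one call, no visible loop      */
    printf("a[0] = %g\n", a[0]);   /* prints 9                       */
    return 0;
}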
Most iterations are of the 'do...while...' type,
where the testing of the "while" condition occurs
prior to the first and each subsequent execution of
the loop. Thus a 'do...while...' may execute a loop
0, 1, or more times. Note that in a 'do...until...'
the test for the next execution occurs after the
previous one. Thus a 'do...until...' assumes at
least one true instance exists.
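Mapping the two forms onto C keywords (the mapping
is mine; the standard C forms happen to line up):
C's 'while' tests before each pass, so the body may
run 0, 1, or more times, matching 'do...while...'
above, whereas C's 'do { } while' tests after the
body, so the body runs at least once, matching
'do...until...'.

#include <stdio.h>

int main(void)
{
    int n = 0;

    while (n > 0) {              /* test first: body runs zero times here */
        printf("never printed\n");
        n--;
    }

    do {                         /* test last: body runs at least once,   */
        printf("n = %d\n", n);   /* even though n is already 0            */
        n--;
    } while (n > 0);

    return 0;
}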
While 1st, 2nd, and 3rd generation languages
(including the misnamed Forth) are imperative
languages whose basic processing statement is the
assignment, 4th generation languages, all based on
logic programming, have added the assertion
statement (sometimes, as in the instance of Prolog,
to the exclusion of the assignment statement). We
should note here that all assertions must
ultimately resolve into assignments. Thus a 4th
generation language should, for completeness, allow
both (unlike either Prolog or SQL).
An assertion is either true or false. If false, no
true instances exist. If true, one or more true
instances exist. The "natural" form, then, for
storage of the result of an assertion is the list,
which can have 0, 1, or more entries. For this
reason LISP, which is a 3rd generation language and
thus has an assignment statement, is, with its list
aggregate data type, the implementation language of
choice for many logic programming applications,
including early Prologs and AI expert systems.
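A minimal C sketch of the list as that natural
container (the names and the sample assertion are
invented for illustration): evaluating the assertion
over some candidates yields either the empty list
(false) or a list of one or more true instances.

#include <stdio.h>
#include <stdlib.h>

struct node {                 /* one true instance */
    int value;
    struct node *next;
};

/* The "assertion" being tested: is x an even number? */
static int assertion_holds(int x) { return x % 2 == 0; }

/* Evaluate the assertion over candidates[0..n-1]; return the list of
   true instances.  NULL, the empty list, means the assertion is false. */
struct node *evaluate(const int *candidates, int n)
{
    struct node *head = NULL;
    for (int i = 0; i < n; i++) {
        if (assertion_holds(candidates[i])) {
            struct node *t = malloc(sizeof *t);
            t->value = candidates[i];
            t->next  = head;
            head = t;
        }
    }
    return head;
}

int main(void)
{
    int data[] = { 3, 4, 7, 10 };
    struct node *result = evaluate(data, 4);

    if (result == NULL)
        printf("assertion is false: no true instances\n");
    for (struct node *p = result; p != NULL; p = p->next)
        printf("true instance: %d\n", p->value);
    return 0;
}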
Note that an assertion expression (which in Prolog
is a goal) can in its evaluation invoke another
assertion (in Prolog a sub-goal), which can in turn
invoke another and so on. This is similar (in fact
identical) to a main routine invoking a
sub-routine and so on. In either instance we can
derive the logical hierarchy based on these
"internal" invocations within an assertion or
assignment expression. If we can derive it,
certainly the software can (and in logic
programming does).
This means that we have no need of rules specifying
the order in which invoked blocks must appear. It
also means that we have no need of rules requiring
the nesting of "internal" (internally invoked)
procedures within an "external". The first rule
relative to order is a "compiler rule" used by
"one-pass" compilers, which frankly is intended to
increase the work of application programmers and
lessen that of compiler programmers. The second
rule relative to "nesting" is another compiler
rule, again increasing the effort of application
programmers while lessening that of compiler
programmers. PL/I, which uses a multiple-pass
compiler, one, places no requirement on the order
of procedures and, two, with its 'package' option
eliminates any need for nesting of internal
procedures. The procedures exist unordered in a
package, one of which must be designated in the
meta-code as the external or main procedure.
In truth no such designation is necessary, because
the main procedure is the one invoked by no other.
We can carry this a step further by allowing
multiple main procedures to exist, which only means
the compiler in organizing the hierarchy based on
internal invocations instead of creating only one
can create a list of hierarchies. In true logic
programming fashion compile each hierarchical entry
on the list separately. In this manner you can
submit all the procedures (shared or not)
associated with an application system into a single
compile, producing multiple executables. If you
like, you can submit all the procedures associated
with multiple application systems (in fact all the
application systems within the enterprise),
producing all the associated executables as a
single unit of work.
If you make this change to a compiler, from a single
hierarchical entry to a list of such entries
separately compiled, you eliminate all the people
time needed to synchronize changes across separate
compiles. As it is only one compile, you only need
one person.
The point is that our current compiler paradigms,
which we have shown to be arbitrary (and thus
unnecessary) restrict our productivity.
Considering the increased productivity possible
with the same function in an interpreter, just
having to use a compiler is productivity
restrictive. We could take any 3rd generation
language, eliminate these restrictions, and operate
with an interpreter. This would result in
significantly increased productivity, due, one, to
the less work required, and, two, to the fewer
people required.
If we take a 3rd generation language, add a list as
a native data type, and with it an assertion
statement, we move it to the 4th generation where
now the software does the analysis, design,
construction (creation of logical hierarchies), and
automatic testing. This further reduces the number
of people required and what the remaining people
have to do.
If we have these tools with these capabilities in
place, then our small numbers will suffice to compete
with the much larger organizations. The larger
organizations will have
to adopt our methods in order to compete, which
means among other things that they will cease being
large organizations.
That, my friends, is how you make the real threat
of a software monopoly disappear. You don't need
(nor can you trust) the DOJ to do the job. You
just simply make the job doable within your means.
Re: Part 15
#423 From: "dwgras" <dwgras@...>
Date: Wed Apr 17, 2002 9:35 pm
Subject: Re: Emphasis on list processing dwgras
Lynn
If I understand you better, what is needed is a better compiler. I
noticed Timur Tabi is the only OS/2 representative on the Open Watcom
compiler. This doesn't speak well for the OS2/eCS community. I am
trying to teach myself to program C++ and I noticed that the Windows
compilers out there have a much better interface and are much easier to
use than the OS/2 versions included in Watcom.
Regards,
David Graser
>
> If you make this change to a compiler, from a single
> hierarchical entry to a list of such entries
> separately compiled, you eliminate all the people
> time needed to synchronize changes across separate
> compiles. As it is only one compile, you only need one person.
>
> The point is that our current compiler paradigms,
> which we have shown to be arbitrary (and thus
> unnecessary) restrict our productivity.
> Considering the increased productivity possible
> with the same function in an interpreter, just
> having to use a compiler is productivity
> restrictive. We could take any 3rd generation
> language, eliminate these restrictions, and operate
> with an interpreter. This would result in
> significantly increased productivity, due, one, to
> the less work required, and, two, to the fewer
> people required.
>
> If we take a 3rd generation language, add a list as
> a native data type, and with it an assertion
> statement, we move it to the 4th generation where
> now the software does the analysis, design,
> construction (creation of logical hierarchies), and
> automatic testing. This further reduces the number
> of people required and what the remaining people
> have to do.
>
> If we have these tools with these capabilities in
> place, then our small numbers will suffice to compete
> with the much larger organizations. The larger
> organizations will have
> to adopt our methods in order to compete, which
> means among other things that they will cease being
> large organizations.
>
> That, my friends, is how you make the real threat
> of a software monopoly disappear. You don't need
> (nor can you trust) the DOJ to do the job. You
> just simply make the job doable within your means.
Re: Part 15
#424 From: Cristiano Guadagnino <criguada@...>
Date: Thu Apr 18, 2002 1:43 am
Subject: Re: Re: Emphasis on list processing criguada
Hi David,
** Reply to message from "dwgras" <dwgras@...> on Wed, 17 Apr 2002
17:35:25 -0000
> If I understand you better, what is needed is a better compiler. I
> noticed Timur Tabi is the only OS/2 representative on the Open Watcom
> compiler. This doesn't speak well for the OS2/eCS community. I am
That's not true. There's Michal Necasek also.
Bye
Cris
Re: Part 15
#425 From: "Lynn H. Maxson" <lmaxson@...>
Date: Thu Apr 18, 2002 1:54 am
Subject: Re: Re: Emphasis on list processing lynnmaxson
David Graser writes:
"If I understand you better, what is needed is a
better compiler. I noticed Timur Tabi is the only
OS/2 representative on the Open Watcom compiler.
This doesn't speak well for the OS2/eCS community.
I am trying to teach myself to program C++ ..."
Well, yes, but even more important is understanding
the influence that considerations of compiling have
on language design. C was designed around a
one-pass compiler. That means that data
declarations, in effect data definitions, must
appear in the source prior to their use in any
expression. Internal (or nested) procedures must
appear in the source prior to any procedure which
invokes them. Otherwise the extra writing of a
forward declaration (a prototype giving the
procedure name) must appear.
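A short C sketch of that point (in modern C the
prototype is required; older C merely assumed an
implicit declaration): if 'helper' is defined after
the procedure that uses it, the one extra line below
is the price the one-pass design exacts.

#include <stdio.h>

int helper(int x);              /* the extra line a one-pass design forces */

int main(void)
{
    printf("%d\n", helper(20)); /* 'helper' is used before its definition */
    return 0;
}

int helper(int x)               /* the definition appears only here */
{
    return x + 1;
}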
So to say that the language requires this is a
misnomer. It is not part of the language per se,
but a requirement of a one-pass compiler. If your
compiler is a multi-pass compiler, e.g. PL/I's, the
order of data declarations and internal procedures
is arbitrary, i.e. at the whim of the developer.
So if you convert your C compiler to a multi-pass
version, among other things you eliminate the need
for forward declarations. You can have your data
declarations appear after their first use in an
expression, and you can write your source code in
true top-down manner, with the highest level
routines (procedures) at the top and the lower
level ones following in a logical hierarchical
sequence.
Moreover, as all procedures have a name by which
they are invoked internally by another, you have no
need in the language or in a compiler to "nest" the
procedures. In fact they can simply appear in any
arbitrary, i.e. random, order without causing any
problems for the compiler. The compiler writer only
has to create a procedure list with an entry
for each procedure, followed by a list of the
procedure names it invokes.
The only other addition is a counter associated
with a procedure list entry denoting the number of
times it is invoked by another. A main procedure
is one invoked by no other, i.e. has an invocation
count of zero.
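A sketch of that bookkeeping in C, with invented
procedure names and counts standing in for what the
compiler would have gathered while scanning the
source:

#include <stdio.h>

struct proc {
    const char *name;
    int use_count;        /* how many other procedures invoke this one */
};

int main(void)
{
    /* Pretend the compiler has already read all the source and
       counted the internal invocations. */
    struct proc list[] = {
        { "report",      0 },   /* invoked by no other: a main procedure */
        { "format_line", 2 },
        { "summary",     0 },   /* a second main procedure               */
        { "read_record", 3 },
    };

    for (int i = 0; i < 4; i++)
        if (list[i].use_count == 0)
            printf("main procedure, gets its own executable: %s\n",
                   list[i].name);
    return 0;
}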
A compiler, after reading in all the source
code for the procedures, can then search the
procedure list for those with an invocation (use)
count of zero. As the same syntax and semantic
analysis has occurred for all procedures, the
compiler can do a completeness proof on each
procedure with a zero use count. It will create an
object module for each one, automatically reusing
the common procedure source code.
Thus the current limit of one external procedure
(and thus one object module) per compile (unit of
work) is arbitrary as well. There is no reason,
then, why once global changes to procedure source
code have occurred, the entire set of procedures,
including multiple main procedures, cannot be
handled within a single compile.
Neither of these requires a change to either
language syntax or semantics, i.e. a change to the
language per se. They require only a minor change:
allowing for a list of multiple procedure names
with a zero use count (main procedures). Otherwise
the processing is identical.
I will not argue C++ with you or the value of the
Watcom compiler. I regard, for reasons I have
stated elsewhere, C and its variants (C++, C#, and
JAVA) as "incomplete" if not "crippled" languages.
As PL/I exists I see no reason to regress from the
full functionality that should be available to a
programmer, nor to accept a programming language
whose authors' major emphasis lay in easing the
task of compiler writers at the expense of
increasing that of compiler users. I am more than
willing to have the smaller number of compiler
writers exert greater effort if the net result is
that compiler users (who exceed compiler writers in
number by several orders of magnitude) can exert
less. In short I hold that the productivity of
compiler users has priority over that of compiler
writers.
If the Watcom compiler is truly open source, then
we should easily be able to incorporate these two
changes. As a result we would significantly
increase programmer productivity. As you may also
guess, Timur Tabi and I are far apart on what is
needed in a compiler.
In fact if you have followed my other messages, you
know I am arguing against use of a compiler, which
separates the data entry of source from its
compilation, in favor of an interactive interpreter
in which the data entry and its compilation occur
without a delay as part of a single integrated
process. An interactive interpreter is a "true"
IDE (Integrated Development Environment) which none
of the products you mentioned provide regardless of
their claims to the contrary.
The better interface is one which provides you the
source and the different visual outputs
(abstractions of that source) according to how you
want them, not dictated by the authors' views of
what and how you should see them. You are the
programmer.
PL/I is the only programming language designed for
and by programmers, placing responsibility on
compiler writers to follow the dictates of the
programmer without question and no matter how
absurd they may appear. For example, most systems
support either 16- or 32-bit binary arithmetic.
Note that I did not say binary "integer" arithmetic
as PL/I supports real binary numbers, integer and
non-integer (a fractional portion following a
binary point).
Thus a full-word (31-bit plus sign) binary integer
in PL/I is declared as 'fixed bin (31)'. I can
include a fractional part, e.g. 'fixed bin (31,5)'.
This particular designation gives me binary
accuracy to 1/32 (1/2**5) which is quite useful in
measuring lumber for example.
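A C sketch of what such a declaration buys, emulated
by hand with a scaled 32-bit integer (the lumber
figures are invented): five fraction bits give a
resolution of 1/32, and the scaling code below is
exactly the kind of thing the compiler writer, not
the programmer, should have to produce.

#include <stdio.h>
#include <stdint.h>

#define FRAC_BITS 5                      /* like fixed bin (31,5)      */
#define ONE       (1 << FRAC_BITS)       /* 1.0 == 32 units of 1/32    */

int main(void)
{
    int32_t board  = (int32_t)(72.5  * ONE);   /* 72 1/2 inches  */
    int32_t offcut = (int32_t)( 3.25 * ONE);   /*  3 1/4 inches  */

    int32_t remain = board - offcut;     /* exact fixed-point arithmetic */

    printf("remaining length: %g inches\n", (double)remain / ONE);
    return 0;
}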
However, I could as a programmer get quite nasty
and decide not to use either 16- or 32-bit
designations. I could, for example, declare a
binary real variable as 'fixed bin (17,4)'. PL/I
supports these. Of course it forces the compiler
writer to produce extra code, but that's not my
concern. The compiler writer may want to give me
an informational message that such extra code was
produced to ensure my understanding of what I have
said, but if that's what I want then his job is to
follow my dictates, not me his.
Why programmers allow language and compiler writers
to dictate terms to them, particularly since the
introduction of PL/I in 1964 and its availability
in 1965, is beyond me when it was shown to be
unnecessary. Why programmers would let K&R, the
authors of C, make 'int' implementation defined,
and thus vary by compiler and platform, is also
beyond me. Why C doesn't support variable
precision, fixed-point decimal arithmetic is beyond
me. I know the silly reason K&R offered: that it
was not necessary.
The point is that if I say 'fixed bin (31)' or
'fixed bin (15)' or 'fixed bin (63)' or 'fixed bin
(23)' what it means is independent of
implementation. I don't need a standards
committee. I simply need someone who can write the
necessary compiler code. In terms of readability
these are head and shoulders above 'int', 'short',
or 'long'.<g>
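For comparison, the closest C later came to saying
"this many bits, everywhere" is the exact-width
typedefs of <stdint.h>, added by the C99 committee;
a small sketch:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t halfword   = 0;   /* roughly PL/I's fixed bin (15) */
    int32_t fullword   = 0;   /* roughly PL/I's fixed bin (31) */
    int64_t doubleword = 0;   /* roughly PL/I's fixed bin (63) */

    /* Where these typedefs exist they are exactly 16, 32, and 64 bits,
       independent of compiler and platform, unlike 'short', 'int',
       and 'long'. */
    printf("%zu %zu %zu bytes\n",
           sizeof halfword, sizeof fullword, sizeof doubleword);
    return 0;
}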
In the long run, because IBM, Microsoft, et al. use
these tools you mention, they also employ hundreds
and thousands of programmers to get the miserly
throughput allowed by the productivity constraints
the tools impose. If JMA, Timur Tabi, and others
want to use the same tools, they too will have to
match those people numbers just to stay even.
I don't expect to have those numbers. Therefore I
have to have better tools. If I use their tools, I
have to play their game under their rules. If I
have better tools, then they have to play my game
under my rules. If they use my tools, then their
hundreds and thousands will drop to the dozens.
When that happens, "open source" will have
practical meaning, because the better tools will
allow me as an individual to choose a different
path without concern for support or the number of
others on the same path.
As I said before, if IBM were able to give us the
source (in C, C++, and assembler) for OS/2, we
would be unable to maintain and enhance it in any
reasonable manner without a similar hundreds or
thousands of organized participants. So even if
the source were free, we still couldn't afford to
use it with the tools available to us. We need
look no further than Linux, where no distributor
has yet turned a profit regardless of the millions
of copies.
So open source advocates hurt themselves with
dependencies on "free" compilers like GCC and
Watcom. For any significant project in the long
term you can't afford to use them. However "free"
they are, their associated (people) expense runs in
the other direction. Open source, in order to
compete effectively with closed source, should get
as far away as possible from the tools used by
closed source, including GCC and Watcom. Whether
free or not, their cost contribution is
insignificant compared with the people cost
associated with their use.
If you believe in the future of open source as a
successful business (and thus competitive) model,
then the last thing on earth you want is a "free"
batch compiler and a patchwork IDE. What you want
is an interactive interpreter which produces all
necessary documentation from a single organized
source in real time as changes are introduced. If
you have it the IBMs and Microsofts will disappear
from the software scene. You will have no worries
about anyone obtaining a software monopoly. That
will guarantee the viability of open source, making
any other kind unattractive competitively and
economically.
Re: Part 15
#426 From: "dwgras" <dwgras@...>
Date: Thu Apr 18, 2002 5:58 am
Subject: Re: Emphasis on list processing dwgras
Lynn,
Is there even one available at this time, and if not, why has no one
started a project to develop one? If what you say is true, wouldn't
there be someone with some foresight somewhere developing such a
thing? Maybe someone did and the IBMs or the Microsofts bought the
code to keep it from the masses.
David
> What you want
> is an interactive interpreter which produces all
> necessary documentation from a single organized
> source in real time as changes are introduced. If
> you have it the IBMs and Microsofts will disappear
> from the software scene. You will have no worries
> about anyone obtaining a software monopoly. That
> will guarantee the viability of open source, making
> any other kind unattractive competitively and
> economically.
Re: Part 15
#427 From: "Michal Necasek" <michaln@...>
Date: Thu Apr 18, 2002 7:58 am
Subject: Re: Re: Emphasis on list processing michalnec
On Wed, 17 Apr 2002 17:35:25 -0000, dwgras wrote:
>If I understand you better, what is needed is a better compiler. I
>noticed Timur Tabi is the only OS/2 representative on the Open Watcom
>compiler. This doesn't speak well for the OS2/eCS community.
>
It certainly hints at the fact that the "OS2/eCS community" will
not be able to create and maintain a new compiler.
>I am
>trying to teach myself to program C++ and I noticed that the Windows
>compilers out there have a much better interface and are much easier to
>use than the OS/2 versions included in Watcom.
>
That depends on what you mean by "interface" and "easier to use".
I suspect you are talking about IDEs - you'd be perhaps surprised
to find out how many professional programmers do not use IDEs at
all or use IDEs built into programmers' editors. IDEs look very
appealing to beginners but are horribly inflexible for many uses.
It's basically the old GUI vs. command line thing.
And you can be sure that OS/2 or osFree kernel developers will NOT
be using an IDE.
Michal
Re: Part 15
#428 From: "Lynn H. Maxson" <lmaxson@...>
Date: Thu Apr 18, 2002 10:13 am
Subject: Another voice pipes in lynnmaxson
The following is a response to an inquiry on the
tunes mailing list. You will see that C is not
high on their list of preferable languages. The
point is the different paths we often take to the
same conclusion.
"I highly recommend that you familiarize yourself
with the object systems and ideas of as many
languages as possible, and perhaps you will begin
to realize why what you are proposing is amusing to
us.
And perhaps you should think more on why Fare
recommended you not use C as a prototype language.
Perhaps after learning about the expressivity
of the numerous unhandicapped languages out there,
you will understand this better: C is not used out
of its inherent qualities but rather as the
historical burden that has been placed on us by
Unix and its ilk."
I used to attempt to listen in on the chat sessions
with this group, but gave up when they commented
that the "moron" had joined the group.<g> They
wouldn't even allow me to silently listen.
Re: Part 15
#429 From: "Lynn H. Maxson" <lmaxson@...>
Date: Thu Apr 18, 2002 11:08 am
Subject: Re: Re: Emphasis on list processing lynnmaxson
David Graser writes:
"Is there even one available at this time and if
not why has no one started a project to develop
one? If what you say is true, would not there be
someone with some foresight somewhere developing
such a thing. Maybe someone did and the IBM's or
the Microsoft's bought the code to keep it from the
masses."
No. Nothing sinister here. You have an industry
contented with supporting many separate piece
vendors, each concerned with its ability to
continue to make a profit rather than join in a
venture which in effect would combine the pieces
into a seamless fit. With a single multi-purpose
product, most of those vendors would disappear or
have a tougher time competing than they do
currently, which is tough enough.
As I have said in other responses this quickly
became obvious (though unspoken) in IBM's attempt
with AD/Cycle, which depended entirely on vendors
cooperating on interfaces, the primary substance of
the proposed data repository. The last CASE tool
in my portfolio was Popkin's System Architect whose
last entry was UML support. As I no longer see ads
for the product I assume that Popkin has gone the
way of other CASE vendors, leaving UML support
primarily from Rational Rose for whom the three
different OO design proponents that make up the
"Unified" in UML work.
The problem is that each UML document has its own
source and is used as input to the generation of
the OO program source. As there are seven such
documents, thus seven different visual inputs and
outputs, up from the two (dataflows and structure
charts) of structured design which is up from the
one of flowcharts, there is much more manual labor
in the "up front" effort (specification, analysis,
and design) as well as the "coding" effort of OO.
Thus instead of addressing the problem it was first
proposed to resolve, the increase in software
backlogs and maintenance costs, OO has in effect
made them even worse: more expensive and more time
consuming.
Michal Necasek is right about the inappropriately
named (for marketing purposes only) IDEs. He is
wrong about a true IDE, the interactive
interpreter, that I mentioned. He may be right
about the OS/2 community's ability to support
(create and maintain) a new compiler. If he is,
what does that say about its ability to support
(again create and maintain) an operating system?
After all a "pseudo" IDE composed of an editor and
compiler has only five functions to support:
editing, syntax analysis, semantic analysis, proof
theory, and meta theory. If you can't support
software that does only that, how can you support
applications, including operating systems, with an
ever-expanding amount of required functional
support?
Michal, who differs with or at least questions most
of my assertions, probably doesn't understand that
he is making my case for me. I have said all along
that if you play their game, i.e. use the same
tools, you play by their rules. Their rules,
because of their tools, put their support numbers
into the hundreds and thousands.
You may be willing to spend eight years to develop
an open source version of OS/2. If and when you
do, you will be eight years (or more) behind as
well. C, C++, C#, and JAVA are crippled programming
languages. Otherwise you would not see so much
activity in the standards area or so many
"enhanced" versions appearing so rapidly. You
can't blame "planned obsolescence" or poor planning
or new discoveries. You can blame a poor
foundation becoming more and more expensive and
time consuming to shore up.
Perhaps no one besides me is willing to believe
that increases in productivity measured in orders
of magnitude are possible. No silver bullet thus
far, including OO (which began as such), has lived
up to its promises. My solution is quite simple: let
the software do more and people do less. The more
the software does and the less people do the
greater the productivity. Logic programming,
whether SQL, Prolog, or AI expert systems,
illustrates that you can go directly from
specifications to production, including along the
way software automated analysis, design,
construction, and testing. I don't make this up as
it is achieved millions of times daily on client
processes. We will achieve the same if we apply it
to our software development process. We have every
reason to believe that we will achieve the same
productivity gains for ourselves that we do for our
clients. It's just a matter of "Physician, heal
thyself".<g>
Date: Thu Apr 18, 2002 11:08 am
Subject: Re: Re: Emphasis on list processing
David Graser writes:
"Is there even one available at this time and if
not why has no one started a project to develop
one? If what you say is true, would not there be
someone with some foresight somewhere developing
such a thing. Maybe someone did and the IBM's or
the Microsoft's bought the code to keep it from the
masses."
No. Nothing sinister here. You have an industry
contented with supporting many separate piece
vendors concerned with their ability to continue to
make a profit rather than to join in a venture
which in effect would combine the pieces into a
seamless fit. Were so many vendors to join in
producing a single multi-purpose product, most of
them would disappear or have a tougher time
competing than they do currently, which is tough
enough.
As I have said in other responses this quickly
became obvious (though unspoken) in IBM's attempt
with AD/Cycle, which depended entirely on vendors
cooperating on interfaces, the primary substance of
the proposed data repository.
Re: Part 15
#430 From: "Lynn H. Maxson" <lmaxson@...>
Date: Wed Apr 17, 2002 7:43 pm
Subject: Ambiguity, Indecision, Contradiction, Generic
In previous posts I have said that if you cannot
increase your available people resources to the
size of the project, then you need to resize the
project to the size of your people. If the project
doesn't change, i.e. its scope remains the same,
then you have to increase the productivity of your
people. That productivity in our profession as in
any other lies in the tools available to us.
We want the productivity to reduce the number of
people necessary. We also want it to reduce the
necessary effort of the remaining people. As the
effort primarily consists of people meeting time,
people thinking time, and people writing time we
need to reduce the amount of time necessary for
meetings, the number of things necessary to think
about, and the number of writing activities as well
as the amount of writing per activity.
The software development cycle has as its input
user requirements, either initial or changes to
existing ones; these in turn feed into the stages of
specification, analysis, design, construction, and
testing. Each of these stages has different
written forms, i.e. writing activities and skills.
Logic programming as illustrated by SQL goes
directly from specification through testing. That
means the only people writing necessary is that of
user requirements and their translation into formal
specifications. The only people writing "within"
the software development cycle is that of
specifications in a specification language, which
to be effective must also be a programming
language.
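A minimal sketch of that point, using Python and
its bundled sqlite3 module purely for illustration
(the table and data are hypothetical): the SELECT
below is the specification, and the engine performs
the analysis, access path selection, construction,
and execution on our behalf.

    import sqlite3

    # Hypothetical table and data, only to give the
    # specification something to run against.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, "alpha", 40.0), (2, "beta", 15.0), (3, "alpha", 25.0)])

    # The specification: what we want, not how to get it.
    spec = """
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
        HAVING SUM(amount) > 20
        ORDER BY total DESC
    """

    for customer, total in conn.execute(spec):
        print(customer, total)        # alpha 65.0

Nothing between the specification and the result is
written by hand.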
The argument is that if people do the specification
writing, the software tool used will do the
remainder. Using the two-stage proof engine of
logic programming, once the first stage, the
completeness proof, is complete, i.e. the logical
hierarchy of source code exists, then the software
can produce all the documentation normally
associated with analysis and design. In the
earliest form this would be flowcharts. In the
later form of (Constantine's) structured design
this would be dataflow diagrams (analysis) and
structure charts (design). In the more recent OO
methodology this would be the UML documents of
Activity Diagram, Class Diagram, Collaboration
Diagram, Implementation Diagram, Sequence Diagram,
Statechart Diagram, and Use Case Diagram.
You don't have to have much experience with any of
these three different forms of analysis and design
to understand the time difference between people
engaging in these activities and that of software.
We should note that packages exist to produce any
of these from source code.
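As a minimal sketch of that first-stage
completeness proof, in Python purely for
illustration and with hypothetical specification
names: each specification lists the others it
refers to; the tool reports any unresolved
references and, once none remain, derives the
logical hierarchy from which such documentation can
be generated.

    # Each specification names the specifications it refers to.
    # The names are hypothetical, for illustration only.
    specs = {
        "produce_report": ["read_input", "format_output"],
        "read_input": ["validate"],
        "format_output": [],
        "validate": [],
    }

    def completeness(specs):
        # first stage: every referenced specification must exist
        return {ref for refs in specs.values()
                    for ref in refs if ref not in specs}

    def hierarchy(specs):
        # assemble the logical hierarchy (callees before callers);
        # assumes no circular references, for brevity
        order, seen = [], set()
        def visit(name):
            if name in seen:
                return
            seen.add(name)
            for ref in specs[name]:
                visit(ref)
            order.append(name)
        for name in specs:
            visit(name)
        return order

    print("unresolved:", completeness(specs))   # empty set: proof complete
    print("hierarchy:", hierarchy(specs))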
A tremendous increase in productivity lies in
writing source specifications, using logic
programming on the source, and having software
produce documentation from source. We are reduced
then to a single people writing activity in a
specification language. So we need to consider
only the nature of that language and of the
software tool implementing it.
In dealing with any specification/programming
language we know that it must go through five
stages: (1) data entry (editing), (2) syntax
analysis, (3) semantic analysis, (4) proof theory,
and (5) meta-theory. We have two choices for
implementing this, one of separate edit and compile
steps and one of an integrated, interactive
interpreter.
The real issue is why two forms? Compilers differ
from interpreters only in the proof theory, i.e.
the type of executable code produced. There is no
real reason why we cannot use the meta theory to
designate which form of code to produce in a given
instance.
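A crude sketch of that point, in Python purely for
illustration: one front end performs the syntax
analysis, and the choice between an interpreting
back end and a code-emitting back end is left as an
option rather than frozen into two separate tools.

    import ast, operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    SYM = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

    def front_end(source):
        # shared syntax analysis for both back ends
        return ast.parse(source, mode="eval").body

    def interpret(node, env):
        # back end 1: walk the tree and produce a value directly
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](interpret(node.left, env),
                                      interpret(node.right, env))
        if isinstance(node, ast.Name):
            return env[node.id]
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported construct")

    def emit(node):
        # back end 2: emit equivalent source text instead of a value
        if isinstance(node, ast.BinOp):
            return "(%s %s %s)" % (emit(node.left),
                                   SYM[type(node.op)], emit(node.right))
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Constant):
            return repr(node.value)
        raise ValueError("unsupported construct")

    tree = front_end("a + b * 2")
    print(interpret(tree, {"a": 3, "b": 4}))   # 11  (interpreter path)
    print(emit(tree))                          # (a + (b * 2))  (compiler path)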
Moreover we have two different environments, one
for production and one for development. A
production environment is designed to increase
application productivity, i.e. performance or per
transaction rate. A development environment is
designed to increase application writer
productivity.
A compiler increases application productivity while
an interpreter increases application writer
productivity. Therefore a compiler does less for
writer productivity than an interpreter. So you
know right off that you shouldn't use a compiler
in development, and secondly that compiled output
should be a meta theory option of an interpreter.
In short regardless of whose productivity we want
to optimize, the application's or the application
writer's, our lesser preferred (and most
constrained) tool is the compiler. Using a
compiler keeps us from achieving the full
productivity gains possible.
An interpreter regards a statement, either a data
or processing statement, i.e. one containing an
expression, as the smallest, complete, testable
unit. A compiler regards an (external) procedure
as the smallest, complete, testable unit. An
interpreter will allow the dynamic testing of any
complete segment (an aggregate of one or more
statements) within an otherwise incomplete program.
Thus an interpreter allows execution and testing of
a dynamically chosen subset of source statements
not possible within the source available as input
to a compiler. The net is that the interpreter
wins hands down over the compiler when it comes to
people productivity.
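A rough illustration, with Python standing in here
for the interactive interpreter and a hypothetical
two-statement segment: the segment runs against
trial values even though the procedures that would
normally surround it do not exist yet. A compiler
would demand the complete procedure first.

    # A hypothetical segment of two statements, tested in isolation.
    segment = ("total = sum(amounts)\n"
               "average = total / len(amounts)\n")

    trial = {"amounts": [40.0, 15.0, 25.0]}   # trial data for the segment
    exec(segment, {}, trial)                  # run just these statements
    print(trial["total"], trial["average"])   # 80.0 26.666...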
If you will accept for the moment the increased
productivity possible through logic programming in
the reduction of the different people writing
activities necessary and through use of an
interactive interpreter over a compiler, then that
leaves only the consideration of the specification
language used. While you can use C, C++, C#, or
JAVA, they in general work to decrease not increase
productivity. You need to understand the effect of
language on productivity.
First there is the amount you have to write. Here
APL comes out champ for writing succinctness. That
succinctness is due to two things. One, its
extended set of operators, their symbol set, and
their dual modes of monadic (single operand) and
dyadic (two operands). Two, that the operators
work equally well with data aggregates (arrays and
structures) as they do with data elements (which C,
C++, C#, and JAVA are limited to).
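A small sketch of that difference, in Python rather
than APL so that it stays readable here:

    a = [1, 2, 3, 4]
    b = [10, 20, 30, 40]

    # element-limited style (what C, C++, C#, and JAVA force on you)
    c = []
    for i in range(len(a)):
        c.append(a[i] + b[i])

    # aggregate style (the APL idea: one dyadic '+' over whole arrays),
    # approximated with a comprehension; in APL it is simply  c <- a + b
    c2 = [x + y for x, y in zip(a, b)]

    assert c == c2 == [11, 22, 33, 44]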
PL/I comes out on top for simplicity of syntax
(every program element is a statement; every
statement ends in a semi-colon) and for the widest
variety of data types.
To upgrade either APL or PL/I to logic programming
means the addition of an assertion statement to the
existing assignment statement. It also means the
addition of a list as a native, aggregate data type
usable as operands by the operators.
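To see what an assertion statement adds over an
assignment, here is a deliberately tiny sketch,
again in Python only for illustration: an
assignment computes its left side from its right,
while an assertion states the relation a = b + c as
a goal which the tool may solve for whichever
operand is unknown.

    # assignment: one direction only -- compute a from b and c
    def assign(b, c):
        return b + c

    # assertion: given any two of the three values, derive the third
    # (a toy stand-in for the proof engine of logic programming)
    def assert_sum(a=None, b=None, c=None):
        if [a, b, c].count(None) != 1:
            raise ValueError("exactly one value may be unknown")
        if a is None:
            return ("a", b + c)
        if b is None:
            return ("b", a - c)
        return ("c", a - b)

    print(assign(2, 3))             # 5
    print(assert_sum(b=2, c=3))     # ('a', 5)
    print(assert_sum(a=10, c=4))    # ('b', 6)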
So you end up with a specification language
combining the best features of APL, PL/I, logic
programming, and LISP plus automated output of all
CASE documentation supported within an interactive
interpreter. But it doesn't end there.
Remember the goal is to reduce the amount of manual
writing activities and writing within a remaining
activity without reducing the total set of
documentation results, this latter becoming the
responsibility of the software tool, the
interactive interpreter. Now if you are going to
reduce the amount of manual "writing", you need to
reduce the need for manual "rewriting", relying
once more on the software tool to provide this
service.
Now rewriting occurs due to some change in
circumstances, in logic, etc. which has taken
place since the existing writing occurred.
As such rewriting is a form of maintenance which
occurs in the development stage prior to the
release of the first (temporary) version. Thus the
only difference between development and maintenance
stages in a product's life cycle lies in the
presence or absence of a version. The maintenance
process itself, however, differs not one whit from
the development process: each involves changes to
and processing of specifications. As
specifications are the only manual writing
activity here, the only manual rewriting activity
is that of specifications, with all other
rewriting occurring through the software tool.
Now notice that the rate of change requests need
not change but through the use of a software tool
the response rate does. Every manual writing
activity now done in software occurs millions of
times faster and cheaper. Where change request
backlogs now occur, they suddenly disappear: our
ability to respond to change becomes faster than
the expected rate of change requests.
But the software tool only assumes writing
activities which can be automated, i.e. clerical in
nature and algorithmically describable. We must
still do the creative writing activity, that which
the software cannot do, manually. But even here we
can use a software assist.
We have in software writing a concept of binding
time, the point at which we must bind a variable to
some set value. Consider the expression 'a = b +
c;'. Implicitly a, b, and c must supply numeric
values. As we are using PL/I rules here, which
convert between string and numeric as needed, a,
b, and c can be any combination of string (bit or
character) or numeric (decimal, binary, float,
integer or real) variables.
As this expression ('a = b + c;') is only one of
the expressions in which these variables occur, we
may want to hold off, i.e. use later binding, on
explicitly declaring them
until we know all use cases (different
expressions). By their appearance in any
expression they are implicitly declared (unlike C,
C++, or JAVA) and we have several levels of
explicit declarations possible. We can just
declare the variable name alone, e.g. 'dcl a;'. We
can declare the variable name with a generic data
type, e.g. 'dcl a numeric;'. We can declare the
variable name with a more specific data type, e.g.
'dcl a decimal;'. Or we can declare it quite
specifically, e.g. 'dcl a fixed dec (7,2);'.
We have the option of having the software assign an
implicit declaration based on use cases or of
declaring explicitly, from the most generic ('dcl
a;') to the least ('dcl a fixed dec (7,2);'). The
point is the software tool couldn't care less. The
only time it becomes important due to performance,
not logic, considerations is in production (not
development).
That performance decision need not occur until all
the use cases exist, i.e. sometime prior to release
as a production version. This implies a later or
latest binding of a variable instead of the earlier
or earliest binding (at initial data entry) now
required by some languages (like C, C++, C#, and
JAVA).
Beyond this the software tool as part of its
semantic analysis can inform us of the various
uses, e.g. as string or numeric variables, found in
the use cases (expressions). We can wait then
(late binding) to make a "final" decision, thus
avoiding a common need to "rewrite" a declaration.
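A rough sketch of that kind of semantic assist, in
Python purely for illustration, with hypothetical
use cases and far cruder rules than a real tool
would apply: scan the expressions, record how each
variable is used, and leave the specific
declaration unbound until the uses warrant a
choice.

    import ast

    # Hypothetical use cases (expressions) gathered so far.
    use_cases = ["a = b + c", "b = rate * 100", "label = name + '!'"]

    uses = {}
    for source in use_cases:
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Name):
                uses.setdefault(node.id, set())
            if isinstance(node, ast.BinOp):
                # a literal operand hints at how the other operand is used
                for side, other in ((node.left, node.right),
                                    (node.right, node.left)):
                    if (isinstance(side, ast.Constant)
                            and isinstance(other, ast.Name)):
                        kind = ("string" if isinstance(side.value, str)
                                else "numeric")
                        uses.setdefault(other.id, set()).add(kind)

    for name in sorted(uses):
        hint = "/".join(sorted(uses[name])) or "keep generic for now"
        print("dcl %s;   -- observed uses: %s" % (name, hint))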
Note that a generic declaration is ambiguous. It
has multiple possible choices available. Note also
that in terms of writing effort, i.e. reduction of
rewriting, and the possibility of latest binding,
such ambiguity is desirable, even preferred. Note
also that for production purposes we must resolve
the ambiguity, i.e. make a definitive decision.
Frankly that's the case with all forms of ambiguity
which can occur in development. Ambiguity can
occur in user requirements as well as in data
types. Ambiguity involves multiple possible
choices. Programmers and production versions don't
like ambiguity. In fact neither do most popular
programming languages. The problem is not one of
production, in which all ambiguities must be
resolved, but one of development, in which they
are a natural occurrence.
One form of ambiguity is the contradiction where we
have two choices based on the same set of
conditions which are the opposite of each other.
Another form is the instance of a user who doesn't
want to be more specific without having more
information on which to base a decision.
In short the user has the same basis for generic
descriptions and latest binding as do programmers.
As it is a "natural" condition, neither the
specification language nor the tool which
implements it should deny their use in
development, when by definition the process is
incomplete; they should only be noted as something
which needs resolving prior to production.
So we have an arena of ambiguity resulting from
indecision, contradiction, and generic
possibilities. They are a natural part of
development and should be acceptable to the
specification language and to its software
implementation. We should not be forced to resolve
them before we feel comfortable about our
resolution choice; usually this means later in the
process (late binding) rather than earlier (early
binding).
One of the beauties of logic programming is that it
doesn't give a damn. As long as the completeness
proof has a software means of dealing with
ambiguity it need only generate the set of all
possible source code, i.e. logical organizations,
as well as provide a list of ambiguities, a
recognition of something that eventually needs
resolving, i.e. re-specification.
For example, if there are multiple means (paths) of
getting from here to there, why should you have to
pick (early binding) one prior to knowing all the
paths (late binding)? The optimal machine code
varies by Intel processor. Discovery of that
optimal code is partly intuitive (and thus not
algorithmically describable) and partly trial and
error (and thus algorithmically describable).
There's no reason to have to make an optimal
choice prior to knowing all the choices, all of
which the software tool can provide.
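A minimal sketch of leaving that choice late, in
Python only for illustration and with a
hypothetical graph: generate every path from here
to there first, then let the developer (or a later
performance pass) pick one.

    # generate *all* paths first (late binding of the choice) rather
    # than committing to one route before the alternatives are known
    graph = {
        "here":  ["a", "b"],
        "a":     ["c", "there"],
        "b":     ["c"],
        "c":     ["there"],
        "there": [],
    }

    def all_paths(graph, node, goal, path=()):
        path = path + (node,)
        if node == goal:
            yield path
            return
        for nxt in graph[node]:
            if nxt not in path:            # avoid revisiting a node
                yield from all_paths(graph, nxt, goal, path)

    for option in all_paths(graph, "here", "there"):
        print(" -> ".join(option))
    # here -> a -> c -> there
    # here -> a -> there
    # here -> b -> c -> there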
So we have no reason to fear ambiguity in
development or its form in contradictions or
indecision or generics. They are a natural
occurrence in development which have to be resolved
prior to production. Most importantly they do not
have to be resolved prior to data entry (a
requirement of C, C++, C#, and JAVA). You can
resolve them at a time and in the order that the
programmer deems fit, not that the software
demands.
I called this software tool The Developer's
Assistant and not The Developer's Dictator, because
its role is to assist, not dictate. The object
with an ambiguity in any form is to recognize it,
accept it, and not reject it. The "proper"
assistant will note it, generate all possible
logical organizations, and present them as options
to the developer. The developer will then decide
the order of their resolution prior to production,
not the software.
Our purpose here lies in developer productivity,
how quickly a developer can perform a unit of work,
not in how many statements per second the software
tool can process. We can increase it both by
reducing the number of writing activities to one,
the writing of specifications, and by reducing the
amount of writing (and rewriting) within that
activity.
The key here lies in having a software tool and a
software language that accept the same conditions
in development that the developer faces: ambiguity,
indecision, contradiction, and the generic. In all
cases the choices and the order of choosing should
be developer- not software-based. The purpose of
the software tool is to support (assist) the
developer. It is not the purpose of the developer
to cater to the needs of the tool.