#371 From: "JMA" <mail@...>
Date: Thu Mar 7, 2002 2:15 pm
Subject: Re: Re: My point of view...
On Wed, 06 Mar 2002 20:39:13 -0000, dwgras wrote:
>Does this disqualify you from reviewing any code and offering
>suggestions since you wouldn't actually be writing any code?
>
>Also, is your expertise in kernel coding?
>
Lynn (or any old IBM'er) can only be disqualified if he participated
in writing the OS/2 kernel or other stuff that found its way into
the kernel.
If he worked (for example) as an AIX hardware specialist I
cannot see any reason for him to feel disqualified.
>What about coding for other areas of the project?
>
Or documenting.
>Regards,
>
>David Graser
>
>--- In osFree@y..., "Lynn H. Maxson" <lmaxson@p...> wrote:
>>
>> As a retired IBM employee I find myself disqualified from
>> participating in the coding of the kernel. That leaves me with
>> participating in the documentation only. However, we may as well
>> put this portable CPAPI issue to rest. If you haven't the
>> resources to produce a single kernel, you have even less to
>> produce multiple.
>>
Sincerely
JMA
Development and Consulting
John Martin , jma@...
==================================
Website: http://www.jma.se/
email: mail@...
Phone: 46-(0)70-6278410
==================================
Part 13 - Mar 07 2002
Re: Part 13
#372 From: "Lynn H. Maxson" <lmaxson@...>
Date: Thu Mar 7, 2002 8:28 pm
Subject: Re: Re: My point of view...
David Graser (dwgras) writes:
"Does this disqualify you from reviewing any code and offering
suggestions since you wouldn't actually be writing any code.
Also is your expertise in kernal coding?
What about coding for other areas of the project?"
You've had another response by John Martin (JMA) in this area
which requalifies me.<g> My expertise lies in specification,
analysis, and design, those things which should (and more often
do not) precede coding. I have a manufacturing view of software
development which says that if the engineering--the up front work
of specification, analysis, and design--is done correctly, then
the assembly (the coding) is a relatively simple clerical task,
requiring no more skill than reading a detailed blueprint.
Too much is being made of the differences between a kernel
programmer and a non-kernel (application) one. In either instance
a well-defined set of specifications, proper analysis, and
detailed design reduces the writing, as I stated earlier, to a
straightforward clerical process. I should also
state that this is a process designed for use of procedural
(imperative) languages. If you use logic programming, thus
declarative languages, which goes directly from specifications to
construction (source code production), performing analysis and
design automatically in software and not manually, it gets even
easier. In this instance the only writing, i.e. coding, which
occurs is that of specifications.
You must understand that the classical five stages of software
development--specification, analysis, design, construction, and
testing--are basically manual processes using procedural
languages, but in declarative languages only specification remains
as a manual process, the remainder being done in software. You
must understand the significance of this shift of effort relative
to the amount of human effort, the major software cost in money
and time.
The only reason to bore you with this is to say that I agree with
Timur Tabi and others about the human resources required to write
and maintain an OS kernel or with Michal Necasek that ODIN suffers
only from "ridiculously small" human resources. Organization and
infrastructure aside there aren't enough of us to do the work
necessary within a timeframe that allows a competitive product.
Some want only to develop an identical OS/2 clone, leaving the
remainder, the ongoing maintenance, up to some
unspecified others. Yet the ongoing maintenance cost and absence
of third-party software offerings (drivers and applications)
forced IBM to retreat from OS/2.
To me the answer lies in finding some way to make the
"ridiculously small", the necessary and sufficient. That means
automating more of the software development process, shifting as
much of the now-manual effort as possible to software. I proposed a
guideline: let people do what software cannot and software, what
people need not.
In my mind we have had the means to do just that for over thirty
years in logic programming which lets people write specifications
(which the software cannot) and lets the software do the remainder
(which the people need not). Understand that this works. It
works in AI expert systems. It works in neural nets. It works in
Prolog, Trilogy, and other languages. It works in SQL.
In point of fact it can work in C. It's not so much a language
thing as it is an implementation, the software which supports the
language. Keep the language, eliminate its implementation
restrictions, and replace the software. All programming languages
are specification languages and all specification languages are
programming languages. All have four core processes: (1) syntax
analysis, (2) semantic analysis, (3) proof theory, and (4) meta
theory. The difference between an imperative (procedural)
language and a declarative (logic programming) lies in what occurs
within the proof theory.
Knowledgeable people will jump in to say that the basic difference
lies in their "processing" statements, the "assignment" statement
of imperative languages and the "assertion" statement of
declarative languages. What gets lost in all this is that all
assertion statements must ultimately translate (in the software)
into assignment statements, because the atomic processing
statements, the instruction set of the computer, reflect an IPO
model based on assignment.
What gets lost in this is that the proof theory of an assignment
differs from that of an assertion. An assignment is "true" if and
only if one possible code generation form exists. It is "false",
if no possible code generation form exists, i.e. the compilation
fails with a "severe" error. An assertion, on the other hand,
cannot fail as it accepts as "normal output" the possibility of
zero (false) or one or more "true" instances.
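To make the contrast concrete, here is a minimal sketch in C; the
names square_of and roots_of are invented for illustration. The
assignment form evaluates to exactly one value, while the assertion
form enumerates zero, one, or more instances satisfying a relation.

#include <stdio.h>

/* Assignment: exactly one result per evaluation ("true" iff exactly
   one code generation form exists; otherwise a compile-time failure). */
static int square_of(int x)
{
    return x * x;                     /* always one value */
}

/* Assertion: zero, one, or more instances may satisfy the relation
   roots_of(n, x) <=> x * x == n.  Enumeration, not evaluation. */
static int roots_of(int n, int out[], int max)
{
    int count = 0;
    for (int x = -n; x <= n && count < max; x++)
        if (x * x == n)
            out[count++] = x;
    return count;                     /* 0 means "false", >0 "true" */
}

int main(void)
{
    int roots[8];
    printf("square_of(4) = %d\n", square_of(4));   /* one answer   */
    int n = roots_of(4, roots, 8);                 /* two answers  */
    for (int i = 0; i < n; i++)
        printf("roots_of(4): x = %d\n", roots[i]);
    n = roots_of(3, roots, 8);                     /* zero answers */
    printf("roots_of(3): %d instances (\"false\")\n", n);
    return 0;
}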
Thus an assignment proof is a subset of an assertion one. The
intent lies in the statement form. To allow both assignment and
assertion processing statements means using two recognizable
(distinct) syntactical forms. This means to me that the major
failure of Prolog, for example, lies in its exclusive support of
assertions, excluding assignments altogether. This means
basically that you cannot write Prolog entirely in Prolog.
The same difficulty exists in imperative languages which support
only a high-level form of an assignment statement and not the
lower-level form used by the instruction set. This means
that HLLs cannot be written entirely in the HLL, i.e. they require the
use of assembly language. What makes this seem strange is that
Intel, for example, in its Pentium reference manual offers a HLL
form for every instruction.
Now I've been accused of trying to offer a "silver bullet"
solution, when in my mind all I've done is separate out the
available ammunition. The key lies in implementing the
functionality of the proof theory of logic programming, up to now
restricted in use to declarative languages, into imperative
(procedural) language compilers.
The proof theory of logic programming lies in a two-stage proof
process embedded in a logic engine. The process, i.e.
functionality, is the same in all of logic programming from AI
expert systems to SQL: (1) a completeness proof and (2) an
exhaustive true/false proof.
Key here is the completeness proof, that of an assignment
statement (true only if one instance found; false otherwise) and
that of an assertion (true if one or more instances found; false
otherwise). The other difference lies in the need to manually
"pre-order" the source code in imperative language implementations
and the "unordered" source allowed by declarative languages. This
difference allows the implementation (the software) to impose an
order, i.e. logical organization, on the unordered input. This
means essentially that the software "rewrites" the source in each
instance. Thus rewriting which affects the logical organization
occurs through software, not manual, means.
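One way to picture this software "rewrite" is as a dependency
ordering: given specifications in any input order, the software
derives an order in which every definition precedes its uses. A
minimal sketch in C, with an invented three-entry table (a full
completeness proof would also flag anything referenced but never
defined; this sketch assumes all references resolve):

#include <stdio.h>

/* Each "specification" names the specifications it references.
   The input order is deliberately scrambled; the software, not the
   programmer, derives a usable order. */
struct spec { const char *name; int deps[2]; int ndeps; int emitted; };

static struct spec specs[] = {
    { "DosOpen (CPAPI)",      {1, 0}, 1, 0 },   /* refers to spec 1 */
    { "fs_lookup (internal)", {2, 0}, 1, 0 },   /* refers to spec 2 */
    { "mk_map (MKAPI)",       {0, 0}, 0, 0 },   /* leaf: no refs    */
};

static void emit(int i)                 /* depth-first: deps first */
{
    if (specs[i].emitted) return;
    specs[i].emitted = 1;
    for (int d = 0; d < specs[i].ndeps; d++)
        emit(specs[i].deps[d]);
    printf("%s\n", specs[i].name);
}

int main(void)
{
    for (int i = 0; i < 3; i++)         /* any starting order works */
        emit(i);
    return 0;                           /* prints leaves-first order */
}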
Now we have a given set of CPAPIs, which in decomposition down
through internal API must ultimately come down to a basic set of
MKAPIs. The shortest path between the two (CPAPI and MKAPI) would
be if no internal APIs were necessary. If they weren't, the two
would be identical. However, internal APIs are necessary. The
challenge lies in reducing them to a minimum, one to reduce the
coding effort (thus the cost and time involved) and two reduce the
instruction path lengths to obtain maximum performance.
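As a sketch of that layering in C (names hypothetical, signatures
simplified), every internal API inserted between a CPAPI entry and
an MKAPI leaf adds a call, and thus instruction-path length, to
every request:

#include <stdio.h>
#include <string.h>

static long mk_read(int h, char *buf, long n)    /* MKAPI (leaf)  */
{
    (void)h;
    strncpy(buf, "data", (size_t)n);             /* stand-in work */
    return n < 4 ? n : 4;
}

static long io_read(int h, char *buf, long n)    /* internal API  */
{
    /* validation, buffering, locking would live here */
    return mk_read(h, buf, n);
}

long DosRead(int h, char *buf, long n)           /* CPAPI (root)  */
{
    return io_read(h, buf, n);
}

int main(void)
{
    char buf[16] = {0};
    printf("read %ld bytes: %s\n", DosRead(0, buf, 15), buf);
    return 0;
}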
Now understand that a flow, represented by a dataflow diagram,
exists from the CPAPI down through the internal APIs to the
MKAPIs. This dataflow represents a verifiable architecture on
which to base a design, nominally represented by a structure
chart. Now notice that the dataflow assumes a continuity for the
data corresponding to one or more levels of path segments
occurring between the CPAPI and the MKAPI. In truth we don't give
a damn at this point about "how" the processes, i.e. the coding,
in the APIs work, only "what" they must do to conform to the rules
of the dataflow.
That's why analysis focuses on linear dataflow diagrams and design
maintains them hierarchically in structure charts. You have the I
and the O of your IPO model (verified, minimized, optimized) with
only the P, the coding, remaining. So if you have done your
analysis and design sufficiently, you can turn the coding over to
experienced coders (assembly line producers of code) without
making any distinction between kernel and non-kernel coders, as
truly no significant difference exists.
Now if you have tracked me thus far, you will realize that I
assume the specifications for the MKAPIs and CPAPIs exist. The
CPAPIs define the highest level "goals" of the OS while the MKAPIs
define the lowest level "means" of getting there. All that
remains are the sub-goals, the internal APIs, that connect
the two.
There's no reason that the MKAPIs, the CPAPIs, and the internal
APIs cannot appear in the input in any order, i.e. unordered,
relying upon the completeness proof to impose the necessary
logical organization. Now each API has an IPO form, an implied
process (P) and well-defined inputs (I) and outputs (O). These
represent nodes: root (CPAPI), leafs (MKAPI), and internal path
segments (internal API). Their proper interconnection constitutes
an implementable architecture.
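A sketch of that node view in C (field contents invented): each
API is an IPO node, classified as root, internal, or leaf, and an
architecture is then a consistent interconnection of such nodes,
every input satisfiable by some node's output.

#include <stdio.h>

enum kind { ROOT_CPAPI, INTERNAL_API, LEAF_MKAPI };

struct node {
    const char *name;
    enum kind   kind;
    const char *inputs;    /* the I of the IPO form */
    const char *outputs;   /* the O of the IPO form */
};

static const struct node arch[] = {
    { "DosOpen",   ROOT_CPAPI,   "path,mode",  "handle" },
    { "fs_lookup", INTERNAL_API, "path",       "inode"  },
    { "mk_map",    LEAF_MKAPI,   "inode,mode", "handle" },
};

int main(void)
{
    static const char *label[] = { "root", "internal", "leaf" };
    for (unsigned i = 0; i < sizeof arch / sizeof arch[0]; i++)
        printf("%-9s (%-8s) I=%-10s O=%s\n", arch[i].name,
               label[arch[i].kind], arch[i].inputs, arch[i].outputs);
    return 0;
}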
Now we can do little to reduce the existing CPAPI, unless we find
an example which has never been or never will be used. Chances are
we can choose a reduced MKAPI set from existing models. The challenge
then lies in finding a reduced internal API set. The processing
within each API function establishes its internal reference
pattern to other (same- or lower-level) APIs.
Thus we can present them in an unordered manner to our logic
programming software which will logically organize, i.e. rewrite,
them. If we are clever enough when it has completed this process
as far as it can (the completeness proof), we can ask it to
furnish us with dataflow diagrams and structure charts logically
equivalent to the "generated" logical organization of the source
code. Though a reversal of the "initial" purpose of dataflows and
structure charts, the same reversal has been universally accepted
in current flowchart programs, which take source code as input
instead of manually drawn charts.
To reiterate (and end this discourse) the point is to make our
"ridiculously small" set of human resources all that is necessary
and sufficient to complete and compete. Succeeding in this will
make Microsoft's period of monopoly power (or anyone else's)
transitory.
A better OS/2 than OS/2, Linux than Linux, and Windows than
Windows. Sounds competitive to me.<g>
Re: Part 13
#373 From: "Lynn H. Maxson" <lmaxson@...>
Date: Thu Mar 7, 2002 8:48 pm
Subject: Re: Re: My point of view...
John Martin (JMA) writes:
"Lynn (or any old IBM'er) can only be disqualified if he
participated in writing the OS/2 kernel or other stuff that found
its ways into the kernel."
I've been granted a reprieve, though I certainly qualify as an
"old" IBMer.<g> I make no bones in my response to dwgras that I
have an interest in "specifying" the MKAPIs, the CPAPIs, and the
internal APIs (IAPIs(?)). If each is complete as a specification
group, i.e. a functional definition, with internal references in
either assignment or assertion statements, I am more than willing
to allow the software to logically organize them into an optimal
whole, producing all the logically equivalent outputs of
source, dataflow diagrams, and structure charts. I imagine that
would go a long way towards reducing the documentation effort.
I have a greater interest in modifying the GCC compiler to support
this along with elimination of current unnecessary restrictions on
the C language, e.g. the need to "nest" procedures, making an
unnecessary distinction between internal and external procedures.
Doing so means also eliminating the unnecessary restriction of the
scope of compilation to a single external procedure, allowing the
compilation of multiple external procedures as a single unit of
work. This means that we can offer a set of unordered procedures
(APIs) as input allowing the compiler to logically organize them
into the requisite hierarchy of paths. We can then use the meta
theory, e.g. compiler options, to control the organization of the
resulting set of modules, i.e. .sys, .dll, .exe, etc.
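Standard C already gestures at this much: given prototypes, the
definitions within one unit of compilation may appear in any order
and the compiler resolves the references. A minimal sketch (the
proposal above extends the same freedom to whole sets of external
procedures compiled as one unit of work):

#include <stdio.h>

/* With prototypes in scope, the definitions below can be listed in
   any order -- the compiler, not the author, resolves the call
   graph. */
static int leaf(int x);
static int middle(int x);
static int top(int x);

static int top(int x)    { return middle(x) + 1; }  /* listed first */
static int leaf(int x)   { return x * 2; }          /* order is     */
static int middle(int x) { return leaf(x) + 1; }    /* irrelevant   */

int main(void)
{
    printf("%d\n", top(5));    /* prints 12 */
    return 0;
}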
Our "ridiculously small" number will quickly turn into a surplus.
Re: Part 13
#374 From: Gennady Kudryashoff <genka@...>
Date: Thu Mar 7, 2002 11:17 pm
Subject: Re[2]: Microwindows gui?
One, two, three, four, one, two...
Let me tell you how it will be, JMA.
6 March 2002, 13:12. JMA -> osFree@yahoogroups.com:
J> Sure, but if we don't want to go for PM compatibility then
J> xFree/86 is a much better choice. It's mature, has lots of
J> drivers, and porting Linux apps is relatively easy.
Errr... So, really, do you want to build an OS/2 clone, build an
OS/2 clone kernel, or build a Linux distribution with FreeBSD
principles of distribution and OS/2 principles of GUI/window-manager
design?
If the first, you really need PM. If the second, you can get PM from
OS/2 (for legal OS/2 users only) and then go on and rewrite some
pieces of software. If the third, what is the purpose of this list
and project?
IMHO, the target should be OS/2 PM and console program execution,
maybe driver support, and, in the end, an OS distro built with some
useful freeware included (maybe not only open source, but freeware).
And, of course, we should all remember that while OS/2 is quite
close to Unix, it is not a Unix system, and that is maybe one of the
positive things I love OS/2 for.
Gennady Kudryashoff. [Team The Beatles]
STC MCC "Energetics" / MSIEM Fac. of Appl. math. / FidoNet: 2:5020/1159
Re: Part 13
#375 From: "JMA" <mail@...>
Date: Fri Mar 8, 2002 12:16 am
Subject: Re: Re[2]: Microwindows gui?
On Thu, 7 Mar 2002 23:17:14 +0300, Gennady Kudryashoff wrote:
>One, two, three, four, one, two...
> Let me tell you how it will be, JMA.
>
>6 March 2002, 13:12. JMA -> osFree@yahoogroups.com:
>
>J> Sure, but if we don't want to go for PM compatibility then
>J> xFree/86 is a much better choice. It's mature, has lots of
>J> drivers, and porting Linux apps is relatively easy.
>
This was just a reply to someone who sought a GUI for
osFree. I tried to tell him that if he thinks osFree needs another
GUI (than PM), there is already one (xFree) available.
How will it be?
I want a PM clone and a CPAPI clone and a WPS clone.
But I cannot dictate how this project goes further. It's a team effort
and not up to me. All I can say is what I recommend.
Sincerely
JMA
Development and Consulting
John Martin , jma@...
==================================
Website: http://www.jma.se/
email: mail@...
Phone: 46-(0)70-6278410
==================================
Re: Part 13
#376 From: "tomleem7659" <jersey@...>
Date: Fri Mar 8, 2002 6:06 pm
Subject: Re: Microwindows gui?
--- In osFree@y..., "JMA" <mail@j...> wrote:
> On Thu, 7 Mar 2002 23:17:14 +0300, Gennady Kudryashoff wrote:
>
> >One, two, three, four, one, two...
> > Let me tell you how it will be, JMA.
> >
> >6 March 2002, 13:12. JMA -> osFree@y...:
> >
> >J> Sure, but if we don't want to go for PM compatibility then
> >J> xFree/86 is a much better choice. It's mature, has lots of
> >J> drivers, and porting Linux apps is relatively easy.
> >
> This was just a reply to someone who sought a GUI for
> osFree. I tried to tell him that if he thinks osFree needs another
> GUI (than PM), there is already one (xFree) available.
>
> How will it be?
>
> I want a PM clone and a CPAPI clone and a WPS clone.
>
> But I cannot dictate how this project goes further. It's a team effort
> and not up to me. All I can say is what I recommend.
>
>
>
>
> Sincerely
>
> JMA
>
I only mentioned Microwindows and MiniGUI to see if they
could be used in such a project. If they cannot, that
is okay. I found them while searching for a simple GUI
for FreeDOS on my notebook computer (an older one that
I thought would be neat to try FreeDOS on; I found
FreeDOS itself while searching for DOS programs to create a
DOS bootable disk).
TomLeeM
Re: Part 13
#377 From: Gennady Kudryashoff <genka@...>
Date: Mon Mar 11, 2002 11:37 am
Subject: Kernel/mikrokernel architecture
One, two, three, four, one, two...
Let me tell you how it will be, osfree@yahoogroups.com.
The question is: what status does the project have now? Is there
any code written by the developers so far, or not?
Will the developers simply rewrite the IBM OS/2 kernel from scratch,
simulating its work for the underlying software levels, keeping the
OS/2 driver model, and then continue to write other code (free parts
of the system), or...?
What file systems will be supported in the kernel: HPFS/FAT, maybe
FAT32? Will the system use some microkernel (GNU Mach is not a
choice due to the license restrictions of the GNU license), or will
it be... just a kernel? So?
What executable formats will be supported by the system?
Maybe the developers will provide a FAQ discussing those questions.
Gennady Kudryashoff. [Team The Beatles]
STC MCC "Energetics" / MSIEM Fac. of Appl. math. / FidoNet: 2:5020/1159
Re: Part 13
#378 From: "Lynn H. Maxson" <lmaxson@...>
Date: Mon Mar 11, 2002 7:51 pm
Subject: Re: Kernel/mikrokernel architecture
Gennady Kudryashoff writes:
"...Will developers simply rewrite IBM OS/2 kernel from scratch,
simulating its work for the underlying software levels, leaving
the OS/2 driver model, and then continue to write other code (free
parts of the system), or...? ..."
Yes to the writing of the OS/2 kernel as we will not simply
rewrite, read, or otherwise use the "inappropriately released OEM
source". The same is true for the microkernel as open source
should be open source, i.e. unrestricted in terms of use, which
means no licensing restrictions period.
I don't know what's available from the www.os2.cz site announced
by Stefan Zigulec in another posting since, beyond requiring
encryption, you need a userid and password. For the moment, then, I
don't know what purpose it serves.
To replace OS/2, the whole package, as open source, thus making it
an unrestricted product available without a requirement for an IBM
license, means more than writing a kernel. It means writing
all the utility functions, the Presentation Manager, WPS, REXX,
etc. As I understand it some coding is underway for some
non-kernel pieces.
In my personal opinion you don't simply replace the OS/2 kernel
without enhancing it functionally to minimize its resource usage,
to maximize its performance, and to increase its usefulness. This
means making it a better OS/2 than OS/2. Then, while you are at it,
you incorporate the kernels, i.e. OS personalities, of Linux and
Windows to have a better Linux than Linux and a better Windows than
Windows. There's no sense in entering a competitive arena, one,
without being competitive, and, two, forcing your competition to
react accordingly.
In terms of file systems you support at a minimum what the various
OS personalities support. Furthermore you make that support
cross-systems, available to each OS personality. The emphasis
here as it should be in all areas is one of no compromise, simply
providing the best support in terms of function, quality, resource
usage, and performance. The major obstacle to this in current
offerings is the hierarchical rewriting of production source code,
i.e. the source behind the executable form. The solution lies in
borrowing from logic programming, in which such rewriting belongs
to the software instead of being done manually. In software it
occurs tens of millions of times faster, thus the time and effort
required no longer exist as an impediment to change.
I'm always amused when someone asks first what coding has occurred
without concern for what thinking occurred first. It usually
indicates a non-structured (multiple steps at a time) approach,
one of thinking while in the process of coding. I don't subscribe
to it, preferring to think free of the restrictions of source code
mass and to code once the necessary mass of thinking is done. So
the argument of what code exists, how something is implemented,
has little bearing prior to deciding on what to implement.
At the very least for those eager to think while coding you should
have a detailed design, an architected, hierarchical organization
of the APIs from the lowest level, the core- or micro-kernel, to
the highest. This at least allows for an initial decomposition
into reasonable (and working) divisions of labor. The point being
to maximize the coding time by minimizing the communication time,
i.e. the number of meetings resolving issues. Ask any programmer.
Those are much easier to resolve in a thought process prior to
coding than after a body of coding exists.
As offered in another response a kernel is a microkernel plus one
or more OS personalities so that no conflict exists in choosing an
approach. The only differences possible lie in functional
differences, i.e. the interfaces, of microkernel choices. The
option for any OS personality (the remainder of the kernel) is to
choose one which at a minimum supports its highest level APIs. If
you are to support multiple OS personalities, then you aggregate
their highest level APIs prior to making the choice.
You may take this as my commitment to participate in the kernel
development, first in its documentation and design and then in its
coding. Others may begin their coding as their impatience
demands.<g>
Re: Part 13
#379 From: "JMA" <mail@...>
Date: Mon Mar 11, 2002 8:08 pm
Subject: Re: Kernel/mikrokernel architecture
On Mon, 11 Mar 2002 11:37:03 +0300, Gennady Kudryashoff wrote:
>One, two, three, four, one, two...
> Let me tell you how it will be, osfree@yahoogroups.com.
>
>The question is: what status does the project have now? Is there
>any code written by the developers so far, or not?
>
Kernel source - no.
Other things are being written, with some cmd-line tools almost finished.
** Please remember, the project started last month. **
>Will the developers simply rewrite the IBM OS/2 kernel from scratch,
>simulating its work for the underlying software levels, keeping the
>OS/2 driver model, and then continue to write other code (free parts
>of the system), or...?
>
>What file systems will be supported in the kernel: HPFS/FAT, maybe
>FAT32? Will the system use some microkernel (GNU Mach is not a
>choice due to the license restrictions of the GNU license), or will
>it be... just a kernel? So?
>
Who knows. I have suggested that we don't do a kernel. My suggestion
is to build a generic CPAPI layer and make it possible to graft it
onto different kernels.
I doubt we will get a large enough team to build YAK (yet another
kernel) and there are lots of teams out there that are building
kernels. Why not reuse what they are doing?
>What executable formats will be supported by the system?
>
LX and possibly NE.
When we make it run on existing kernels it will probably support the
native exe formats too.
>Maybe the developers will provide a FAQ discussing those questions.
>
We welcome more developers!
Sincerely
JMA
Development and Consulting
John Martin , jma@...
==================================
Website: http://www.jma.se/
email: mail@...
Phone: 46-(0)70-6278410
==================================
Re: Part 13
#380 From: "Lynn H. Maxson" <lmaxson@...>
Date: Tue Mar 12, 2002 12:19 am
Subject: Re: Kernel/mikrokernel architecture
John Martin (JMA) writes:
"... Who knows. I have suggested that we dont do a kernel. My
suggestion is to build a generic CPAPI layer and make it possible
to graft it on different kernels. ..."
If you "graft" it on different kernels, you do so on top of their
CPAPI layers. In effect you seek to vertically layer one OS
personality on top of one or more host CPAPI layers. This means
supporting a different "graft" for each. At a minimum it means
supporting not only what additional functions you want within a
"graft", but limiting those functions to what is supported in the
underlying host. Thus if different hosts support different
functions, some of which are either incompatible with or
unsupported for the guest (graft), you end up with incompatible or
incomplete versions of the graft.
Let's stop making out that this kernel is a big deal. We already
have the open source kernel for Linux. We have most of what is
needed for Windows in open source with ODIN. We need only to
provide the open source for OS/2. If we use parallel or
horizontal layering via something like the microkernel, then we
need only produce a single OS/2 "graft" regardless of what other
kernels (microkernel plus OS personalities) are present. Moreover
it is not a graft except to the microkernel as are the other OS
personalities.
We should operate under the KISS principle. If the need exists to
support multiple OS personalities concurrently, the easiest way in
terms of initial development and ongoing support lies in having
them loosely coupled, i.e. independent. Thus changes in the one
will not cause changes in the other. The major problem with a
vertical layering approach, aside from the different (incomplete or
incompatible) functionality that may exist, is the tight coupling
between the OS personalities.
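A minimal sketch of that loose coupling in C (all names
hypothetical): each OS personality binds only to the microkernel's
interface, never to another personality, so adding or changing one
leaves the rest untouched.

#include <stdio.h>

/* The hypothetical microkernel interface: the only thing a
   personality may depend on. */
struct microkernel {
    void *(*alloc_pages)(unsigned n);
    int   (*send)(int port, const void *msg, unsigned len);
};

/* An OS personality is a table of entry points bound to the
   microkernel alone; none references another personality. */
struct personality {
    const char *name;
    int (*boot)(const struct microkernel *mk);
};

static int os2_boot(const struct microkernel *mk)
{
    (void)mk;                  /* would start the CPAPI servers here */
    printf("OS/2 personality up\n");
    return 0;
}

static int linux_boot(const struct microkernel *mk)
{
    (void)mk;
    printf("Linux personality up\n");
    return 0;
}

int main(void)
{
    struct microkernel mk = { 0, 0 };   /* stubbed for the sketch */
    struct personality ps[] = { { "OS/2",  os2_boot  },
                                { "Linux", linux_boot } };
    for (unsigned i = 0; i < 2; i++)
        ps[i].boot(&mk);
    return 0;
}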
We should note that the Virtual PC approach is effectively an
overly elaborate microkernel approach. The only advantage it has
is that it does not require the source, only the executable
binaries, of any guest OS personality. However, it's not having
the open source for all the OS personalities that leaves us
dependent upon vendors like IBM and Microsoft and distributors
like Red Hat and SUSE to have the operating system environment of
our choice.
We've already voiced our objections to Linux and Windows. Even
those users should appreciate having a better future choice while
protecting their existing investment. So why compromise? Why
shouldn't the using community take control of OS development? Why
should not vendors have to conform to user desires and priorities
instead of dictating them? Why should we not completely destroy
the Microsoft monopoly, something apparently the government after
having gotten all the way to the Supreme Court with respect to
Microsoft's guilt seems unwilling to do? Is it not time that we,
the beneficiaries of an open, competitive marketplace, ensure that
it returns to such a condition and prevent anyone from ever
assuming monopoly power?
Writing a kernel is no big deal intellectually, only physically.
Once the intellectual part, the specification, analysis, and
design, is complete, the clerical part of writing the source code
is straightforward. Once you have the design, then you can engage
in a division of labor through decomposition where multiple
loosely-coupled teams can proceed in parallel.
If Linus Torvalds can produce something as poor as Linux, then
certainly we can produce something head and shoulders better than
that. After all Linus had an OS/2 model he could have copied, but
decided to take the low road instead of the higher. We don't have
to get stuck with traveling down the same path. We have perhaps
too many options to choose from. It is up to us to decide which
to take and which to reject. We don't have to live with the
mistakes of others.