PEIRCE-L Digest 1295 -- February 13-14, 1998

CITATION and QUOTATION from messages on PEIRCE-L is permissible if
the individual message is identified by use of the information on
   From PEIRCE-L Forum, Jan 5, 1998, [name of author of message],
   "re: Peirce on Teleology"   

If the type is too large and the message runs off the screen on the 
right you can shrink the size of the typeface by use of the option
on your browser.
Since it is mostly in ASCII format you can download the
whole document easily by using the SELECT ALL and COPY commands, then
PASTE-ing it into a blank page in your word processor; or you can
SELECT, COPY, and PASTE individual messages using your mouse.  

Topics covered in this issue include:

  1) Re: a question about LISP and recursion
	by Tom Burke 
  2) Re: Cohen and Hook
	by Howard Callaway 
  3) Does Language Determine Our Scientific Ideas?
	by Howard Callaway 
  4) Re: a question about LISP and recursion
	by BugDaddy[…] (BugDaddy)
  5) Re: a question about LISP and recursion
	by joseph.ransdell[…] (ransdell, joseph m.)
  6) Seconding Joe's suggestion on Max Fisch
	by Gary Shank 
  7) Re: a question about LISP and recursion
	by Douglas Moore 
  8) Re: a question about LISP and recursion
	by Thomas.Riese[…] (Thomas Riese)
  9) Re: Porphyry: On Aristotle's Categories/The New List (4)
	by BugDaddy[…] (BugDaddy)


Date: Fri, 13 Feb 1998 02:34:06 -0500
From: Tom Burke 
To: peirce-l[…]
Subject: Re: a question about LISP and recursion

>joseph.ransdell[…] (ransdell, joseph m.) wrote:
> ...  The question I have been working toward on this -- going
>back to some earlier dialogue with Thomas Riese a few days ago -- has to
>do with Peirce's definitions of the representation relation, which I
>take to be in some important sense recursive, and specifically so in the
>sense of what mathematicians call a "recursive definition".   Am I right
>on this?  And if so what exactly is happening in the case of a recursive
>definition, in the mathematician's sense?

In reply...

At 11:24 PM -0500 2/12/98, BugDaddy wrote:
>>I would be interested,
>>in any case, in what others besides Doug who have expertise on the topic
>>of recursion could say that might make that conception as clear as
>It seems to me that recursion is a generalization of the idea of
>induction.  You have an initial state S1 and a procedure P(S1) =
>S2; P(S2) = S3; P(S3) = S4.. such that  (1) each step takes you
>closer to what you are looking for and (2) you know that after a
>certain point you can stop with a satisfactory result.
>My favorite example is a simple procedure that allows one to
>calculate the square root of a positive number X. Take an initial
>estimate S1 for the square root of X.  [S1 might as well be X,
>itself.]  Then either S1 is (1) less than the square root of X,
>(2) equal to the square root of X, or (3) greater than the square
>root of X.  If (1) holds then X/S1 is greater than the square root.
>If (2) holds then X/S1 = S1.  If (3) holds then X/S1 is less than
>the square root.
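The procedure described above can be sketched in Python; the averaging rule below (Heron's method) is one common way to pick the next estimate between S1 and X/S1, though the post leaves that choice open:

```python
def heron_sqrt(x, tolerance=1e-10):
    """Approximate the square root of a positive number x by
    successive approximation: each estimate s is replaced by the
    average of s and x/s, which always lies between them."""
    s = x  # the initial estimate may as well be x itself, as above
    while abs(s * s - x) > tolerance:
        s = (s + x / s) / 2.0
    return s
```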

I don't know about Peirce's definitions of the representation relation,
specifically in what sense it is recursive, ... but we should distinguish

1. definition by recursion
2. proof by (mathematical) induction
3. calculation by successive approximation
4. inductive inference (based on assumptions about repeated sampling)
5. etc?

The square-root procedure is a "successive approximation" procedure.
Definition by recursion has a wholly different aim in view, namely, to
specify a few simple formation rules that, when repeated indefinitely,
produce all the members of a given (infinite) class.  E.g., the set of
well-formed formulas in the language of propositional logic can be defined
recursively [-- propositional variables are wffs; and if P,Q are wffs, so
are P&Q, PvQ, ~P, P->Q, P<->Q --].  Or the hierarchy of sets in elementary
set theory can be defined recursively [-- Let "{}" be a level-0 set, and
let level-n be the power set of the set of all sets at lower levels.  So
we get "{{}}" as a set.  So is "{{},{{}}}".  So are "{{},{{},{{}}}}",
"{{{}},{{},{{}}}}", "{{},{{}},{{{}},{{},{{}}}}}", etc etc etc. --]
Ultimately we get the set of all sets.  But is that a set?  What about
ITS power set?  oops.  etc etc.
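The level-by-level construction can be sketched in Python (frozenset stands in for a set here, since only immutable sets can be nested):

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of a frozenset, each again as a frozenset."""
    items = list(s)
    return frozenset(
        frozenset(c) for c in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1)))

def V(n):
    """Stage n of the cumulative hierarchy: V(0) is the empty set,
    and each later stage is the power set of the one before."""
    stage = frozenset()          # V(0) = {}
    for _ in range(n):
        stage = powerset(stage)  # V(n+1) = power set of V(n)
    return stage
```

The sizes grow as 0, 1, 2, 4, 16, ...; there is never a stage containing "all sets," which is the point of the worry above.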

I'm not sure I have the definition of the set hierarchy exactly right, but
that's the basic idea.  I also don't know a lot about computational
paradigms, but there the differences in various paradigms come down to how
the programming "languages" are specified.  All of them involve some kind
of recursive definition, just like the language of propositional logic, but
with different twists here and there.

"Functional programming" works with a kind of language built up by
so-called lambda-recursion.  It is just a calculus of some set of basic
elements (possibly empty) plus "functions" of those elements.  [-- starting
with basic variables x and basic function symbols f as lambda expressions
L, then "lambda-x . L" is also a lambda expression, as is "(L L')" (viz., L
applied to L'). --]  It's just a syntax of functions and function
applications.  There are lots of other complications involved (various
abstraction and reduction rules), but what you have in the end is a syntax
capable of generating any computable function whatsoever.  One can do "any
kind of computation" with this method.
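A taste of that function-only style can be sketched in Python with Church numerals (a standard encoding of the numbers as pure functions, chosen here only for illustration):

```python
# Church numerals: the number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Read a Church numeral back as an ordinary integer."""
    return n(lambda k: k + 1)(0)
```

Everything here is abstraction and application; there are no "facts," only terms computed from terms.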

Programming languages like PROLOG are rather different in what they take as
primitives.  There you have (expressions for) basic objects, basic
relations between objects, and declarations of facts and rules about
objects and relations.  The syntax is essentially a computational form of
the predicate calculus (hence the name PRO*LOG*).

Notice that a lambda calculus doesn't have anything like "facts".  Just
"terms" computed from other "terms".  Essentially, all you have is "words"
and more "words", whereas with PROLOG you have "words" plus "sentences"
built recursively from "words" and "sentences".  There is a grammatical
categorial distinction here, yet either is completely adequate by itself to
explain what your Mac does (in the abstract).
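A toy Python sketch of the contrast (the names and the rule are invented for illustration, and this is of course not real PROLOG): facts are just tuples, and a rule derives new facts from old ones.

```python
# Facts: relation name plus arguments, in the declarative style.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive_grandparents(facts):
    """Apply the rule grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    derived = set(facts)
    for (rel1, x, y1) in facts:
        for (rel2, y2, z) in facts:
            if rel1 == rel2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived
```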

Can anyone see any Peircean triadic patterns in any of this?  One
hypothesis would be that if you think about either of these programming
paradigms in the right way, or any other paradigm that works just as well,
you'll be able to find in each the full complement of 1-nesses, 2-nesses,
and 3-nesses, which would explain why it is adequate by itself to do
everything computational.  The differences in these paradigms lie only in
what each can do easily and simply versus what it can do only in a rather
arcane and/or baroque manner.

I'm not sure where any of this is headed, or how it relates to your
concerns, Joe.



Date: Fri, 13 Feb 1998 08:55:55 +0100 (MET)
From: Howard Callaway 
To: Multiple recipients of list 
Subject: Re: Cohen and Hook

On Wed, 11 Feb 1998 RBTalisse[…] wrote:

> Christopher Phelps' new intellectual biography of Hook (YOUNG SIDNEY HOOK.
> Cornell UP) sheds some light on Hook / Cohen / Dewey relations.  There are a
> few references to places where Hook comments on Cohen which might be worth
> checking.


Thanks for the note. The book sounds interesting. If you get hold of it,
let me know what you find. For those readers who might have some further
interest in the book, see the advertisement in the current issue of
_The New York Review of Books_. Apparently there is already a review
of the volume by Jim Gilbert, in _In These Times_. 



H.G. Callaway
Seminar for Philosophy
University of Mainz


Date: Fri, 13 Feb 1998 09:26:52 +0100 (MET)
From: Howard Callaway 
To: peirce-l[…]
Subject: Does Language Determine Our Scientific Ideas?


I would like to offer copies of one of my papers for possible
discussion. The paper with the above title appeared in the Swiss
journal _Dialectica_ in 1992, after I had delivered it at a
conference held in Bern. While I have offered this paper before,
I believe, this was a few years back, and others might like to
have a look.

The basic issue addressed concerns sociological determinism and
the sociology of knowledge. I address "determinism" as a practical
question, and my argument is that anything which we might want
to call "sociological determinism" arises from restrictive 
patterns of communication (of a kind which tend to arise from
social and political conflicts).

Another way to view the issue is in terms of our need to set 
social conditions for the growth of knowledge. In more Peircean 
terms, we might say that social conditions of inquiry help to 
determine the degree to which inquiry is self-correcting, there 
being no guarantee of this which operates in ways completely 
independent of social conditions.  

Since there are already various discussions going on, you may
be reluctant to start another one. If so, this is certainly
understandable. Still, if you would like to see an electronic
off-print of the paper, please let me know.


H.G. Callaway
Seminar for Philosophy
University of Mainz


Date: Fri, 13 Feb 1998 12:35:17 GMT
From: BugDaddy[…] (BugDaddy)
To: peirce-l[…]
Subject: Re: a question about LISP and recursion
Message-ID: <34e4390a.668969[…]>

Tom Burke  wrote:

>Ultimately we get the set of all sets.

No, we don't.  There is no set of all sets.  If there were then
we would have to consider the set of all sets that do not contain
themselves as elements, and that *set* is the root of all evil in set theory.

The whole point of having rules to allow one to *construct* sets
is to avoid such anomalies as result from the uncontrolled use of
the word *set.*

Really, I do not think the word *set* can have a definition.  We
do not *define* sets, we *construct* them.  The *only* way to
determine whether something is a set is to go through such a
construction starting at the primitive terms.  The methods for
constructing sets are recursive as noted.

>Can anyone see any Peircean triadic patterns in any of this?  One
>hypothesis would be that if you think about either of these programming
>paradigms in the right way, or any other paradigm that works just as well,
>you'll be able to find in each the full complement of 1-nesses, 2-nesses,
>and 3-nesses, which would explain why it is adequate by itself to do
>everything computational.  The differences in these paradigms lie only in
>what each can do easily and simply versus what it can do only in a rather
>arcane and/or baroque manner.

Yes.  It seems to me that the *atoms* of LISP are First;
functional operators such as CAR, CDR, CONS... are Second [They
are also Firsts, being atoms.] and lists (or Lambda expressions)
are Third.  [They may also be First as when a Lambda expression
is constructed recursively from other Lambda expressions or
Second if they represent functions in their own right.]  

LISP seems to me to be the simplest real language ever
constructed and yes, it is Peircean.

"In essentials unity, in nonessentials diversity, 
         in all things charity"

 Life is a miracle waiting to happen.
         William  Overcamp


Date: Fri, 13 Feb 1998 08:20:03 -0600
From: joseph.ransdell[…] (ransdell, joseph m.)
Subject: Re: a question about LISP and recursion
Message-ID: <007a01bd388a$7af41220$1aa432ce[…]>

To Tom Burke:

Thanks, Tom, for the response.  You say, as regards Doug Moore's
computer language paradigms:

>Can anyone see any Peircean triadic patterns in any of this?  One
>hypothesis would be that if you think about either of these programming
>paradigms in the right way, or any other paradigm that works just as well,
>you'll be able to find in each the full complement of 1-nesses, 2-nesses,
>and 3-nesses, which would explain why it is adequate by itself to do
>everything computational.  The differences in these paradigms lie only in
>what each can do easily and simply versus what it can do only in a rather
>arcane and/or baroque manner.
>I'm not sure where any of this is headed, or how it relates to your
>concerns, Joe.

This is all just exploratory thinking, Tom, but it runs along the
following line.  Computer languages are not, I think, languages proper
but rather representation notations -- notations for representing
representation -- and can be divided into those that are developed for
special purposes without attempting all-purpose adequacy and those that
aim at omnicompetence in principle in representing representation,
regardless of whether they are or are not developed chiefly for special
purpose application. As presently conceived the use of these notations
is to write programs, and programs are plans and essentially mentalistic
in that sense, even when their use is strictly in the control of
nonmental processes, as in "embedded" applications, i.e. they represent
the process as conceived. Hence Peirce's beginnings, from about 1904 on,
at developing a systematic way of representing representation in as
articulate a form as might possibly be required, through use of the
innumerably many sign divisions that can be systematically developed --
as with the sign divisions which start being systematically developed in
1904 (or shortly before), when he develops the three basic trichotomies
as systematically coordinate in such a way as to yield the "tetraktys"
matrix -- should correspond to the principles underlying development of
omnicompetent-in-principle computer languages such as Doug is concerned
with.  What the three paradigms might correspond to, then, would perhaps
be computer representation systems which are maximized, respectively,
for predominantly symbolic, predominantly indexical, and predominantly
iconic semiosis processes, i.e. processes which are such that, when we
represent them, we will be concerned primarily though not exclusively
with their symbolic or indexical or iconic aspects.  (Or if not that
trichotomy then perhaps to some other trichotomic distinction Peirce draws.)

Thus FORTH, as I understand, is most widely used in embedded
applications where the process is primarily what we would think of as
mechanistic, which will naturally be regarded by the engineer primarily
in terms of indexical relationships of various sorts.  Some version of
LISP, on the other hand, might conceivably be adaptable to such a use,
but no one would in practice want to do that because it would require so
many special programming compensations at every step of the way that it
would be in practice undoable for any complex application, though it
does seem to have some special affinities with the representation of
human intelligence in particular, especially where the semiosis is
primarily symbolic because verbal.  It is difficult, though, to put
PROLOG into this scenario.

In any case, the special interest I have in this is as a possible source
for understanding the problematics of Peircean thought via the
problematics of computer programming as representation of
representation.  As a computer scientist as well as philosopher, Doug
thinks of it the other way around, too, but I don't think of it that way
myself.   I count anything I learn from it as helpful, but I don't
expect it to result in an "Open sesame!" or anything like that.


 Joseph Ransdell            or  <>
 Department of Philosophy, Texas Tech University, Lubbock TX 79409
 Area Code  806:  742-3158 office    797-2592 home    742-0730 fax
 ARISBE: Peirce Telecommunity website -


Date: Fri, 13 Feb 1998 11:39:00 +0300
From: Gary Shank 
To: peirce-l[…]
Cc: joseph.ransdell[…]
Subject: Seconding Joe's suggestion on Max Fisch

Just a quick note seconding Joe's suggestion to hunt down Max Fisch's book
and read it!  In particular, his essay on Peirce's journey from nominalism
to realism is one of the most important essays on Peirce's thought ever.  I
had the pleasure of having a course team taught by Max and Ed Moore, and I
learned more there about Peirce than at any other source -- except talking
and writing with and to, and listening to, Joe  :-)

gary shank

ps the comment comparing Max to Jerry Garcia was particularly apt, and
ironically enough, i was listening to Jer rip up 'Jack Straw' from the
Hundred Year Hall CD at the very moment i was reading joe's words  :-)


Date: Fri, 13 Feb 1998 18:52:59 +0200
From: Douglas Moore 
To: peirce-l[…]
Subject: Re: a question about LISP and recursion
Message-ID: <34E47A6B.6438EA9[…]>

I've got a bit of mail to respond to. I'll start off on this one from Joseph
as it involves a nice elegant concept. I've tried to make this accessible to the layperson.

ransdell, joseph m. wrote:

> Doug:
> I had a further question I forgot to include in the earlier batch.  I
> wondered if you could say more exactly what is meant in speaking of LISP
> as involving "naturally recursive control flow"?  Mathematicians,
> logicians, and computer scientists sometimes seem to have somewhat
> different ideas of what recursion is, and I have never been clear on
> exactly what everyone agrees on as fundamental in it.

First of all, I provide the following technical summary and then go on to a
more layperson's explanation with some philosophical overtones.

For mathematicians, the theory of recursion is expressed in the form of the
theory of what are called Lambda functions. This might sound rather
formidable but is quite simple, as I will explain later.

As for the computer scientist, the same thing applies except instead of
involving the lambda function in existential theorems, they actually
implement the Lambda function in software. This is what McCarthy did back in
the late 1950s. Implementing the Lambda function in a general way led to the LISP
programming language.  This is tantamount to implementing the most general
form of recursive control flow possible. As a consequence, the resulting
LISP "virtual machine" has been proved to be "Turing Machine complete." The
Turing Machine can compute any computational process. This is equivalent to
saying that a TM and a LISP machine can both compute any recursively
enumerable function and hence anything computable.

Now let's start with simple recursion, which can be expressed even in many
simple procedural languages. In programming a procedure in, say, PASCAL,
you would first give it a name - say MYPROC.  This MYPROC procedure may have
arguments. The procedure can be used to carry out some computation, for
example. The "action semantics" of the computation (or action) can then be
programmed as the code "body" of the procedure. The procedure thus has a
name and a code body.

The procedure body may contain procedural calls to other procedures or
"subroutines." Procedures are built of procedures. In the case where the
procedure body for MYPROC has a call to the procedure named MYPROC, then we
see that the procedure is "calling itself." This is a case of simple
recursion.
A simple example would be a procedure FACT calculating the factorial of some
number N. In the body of FACT would be the simple calculation N*FACT(N-1),
which will result in repeated calls to itself with the value of N decreased
by one each time. For N=10, FACT(10) will calculate factorial 10 and
repeatedly call FACT ten times (until N=0).
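In Python rather than PASCAL, the FACT procedure might look like this, with the base case at N=0 that stops the chain of self-calls:

```python
def fact(n):
    """Factorial by simple recursion: the body of fact calls fact,
    and the base case at n = 0 makes the calls bottom out."""
    if n == 0:
        return 1
    return n * fact(n - 1)
```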

Very simple and so what's the catch? This form of recursion is not
completely general. This is only recursion for objects called procedures.
Not all objects in the computer world are procedures. For example,
procedures are named semantic units. Why should the semantic unit have to
have a name? In addition, the notion of a semantic unit in procedural
languages is very restricted. They are nothing more than chunks of hard
wired logic. Why does logic have to be hard wired? Why not construct or
compute the logic on the fly, depending on some as yet to be defined context?

In short, from the point of view of recursion, simple recursion only
considers named procedural objects as "first class" objects. All other
objects are second class and can't be treated recursively at all.

In walks the Lambda function. The Lambda function can be considered as some
kind of higher level procedure. The Lambda function is sometimes called an
"anonomous" function. This is because in an absolutely pure LISP, all
functions have this very same name. This is tantamount to saying that all
functions (general procedures) have no proper name at all! They are all
called LAMBDA no matter what their semantics! You don't even have to use the
LAMBDA name!

In fact, the semantics for a Lambda function are passed to it as an argument
of the function.

This is an important and necessary step towards "first class" recursion - a
mechanism where any object whatsoever can be treated recursively.

Classical LISP does admit named functions, but this is not necessary. A
pure LISP doesn't need any named functions at all and becomes a simple
Lambda function interpreter in this case.
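The flavor of nameless recursion can be sketched in Python with a fixed-point combinator (the Z combinator, the strict-evaluation cousin of the Y combinator; the sketch is an illustration, not anything from a particular LISP):

```python
# The Z combinator: it lets an anonymous function call itself
# without that function ever being given a proper name.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(
    lambda x: f(lambda v: x(x)(v)))

# An anonymous factorial: no def, no name, anywhere in the lambda.
anon_fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
```

Binding the result to `anon_fact` is only a convenience for calling it; the recursion itself never uses a name.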

The other important innovation in the Functional Programming LISP paradigm
is that all objects must be treated recursively, not just explicit
functions. Thus the basic objects in LISP are not functions but lists.
That's where it gets its name, the LISt Processing language. A simple list is
a mere ordered collection of symbols. A more complex list may be a list of
symbols and other lists. The list is also defined recursively.

A simple list is written as
    (    )
A complex list might look like
    (    (  ( ) ) )
In addition there might be an object consisting of a single symbol.
This symbol, or "atom", could be considered as the name of a list.
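A recursive function over such lists has the same shape as the recursive definition of the list itself; a small Python sketch, using Python lists in place of LISP lists:

```python
def count_atoms(sexp):
    """Count the atoms in a (possibly nested) list, by the same
    recursion as the list definition: an atom counts as one, and a
    list contributes the counts of its elements."""
    if not isinstance(sexp, list):
        return 1  # an atom
    return sum(count_atoms(element) for element in sexp)
```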

In LISP, data presents itself in the form of a general list. In addition, a
function also presents itself as a general list. Data is a list. A function
is a list. This eliminates the dichotomy between data and "program". They
are all first class objects. They are all lists.
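A miniature evaluator, sketched in Python, makes the point concrete: the very same nested list is data when quoted and code when evaluated. (The three forms handled here, quote, lambda, and application, are chosen for illustration only.)

```python
def evaluate(expr, env):
    """A miniature LISP-style evaluator over nested Python lists."""
    if isinstance(expr, str):               # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):          # a number: self-evaluating
        return expr
    head = expr[0]
    if head == "quote":                     # ["quote", x]: x as data
        return expr[1]
    if head == "lambda":                    # ["lambda", params, body]
        params, body = expr[1], expr[2]
        return lambda *args: evaluate(
            body, {**env, **dict(zip(params, args))})
    fn = evaluate(head, env)                # otherwise: apply fn to args
    return fn(*[evaluate(arg, env) for arg in expr[1:]])
```

Quoted, `["+", 1, 2]` is just a three-element list; evaluated, the same list is a function application.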

Given a list in the absence of any context, it is impossible to say whether
the list is data or a function. It all depends on context. The list can be
used in the context as a list of arguments for a nameless Lambda function.
In this case the list is "a lambda function with arguments." Alternatively a
list can be used as an argument interpreted as data. It all depends on
context.

This turned out to be more verbose than I intended; however, two fundamental
principles are illustrated here in order to arrive at the most general form
of recursion computationally possible. One can refer to the final result as
"generic recursion." This is the form of recursion applicable to
_anything_whatsoever in the symbolic, computational world.

The following principles illustrated by the LISP paradigm are equally
applicable to any system of signs, that is to say, any semiology. Any
fundamental "Theory of Signs" must respect these principles.

The Requirement of First Classness
There must be no "second class" objects which can't be treated by the
paradigm.

The paradigm must be applicable to "anything" (anything that makes sense
within its denotational domain, at least). There must be no "edges" or
exceptions to the rule.

The Elimination of Static Fixed Dichotomies
First Classness is always violated when confronted with a fixed static
dichotomy. The general mechanism for eliminating such dichotomies is to
replace the fixed, static difference between the objects on the two sides of
the dichotomy by a difference_that_uniquely_depends_on_context.

All major advances in Computer Science can be attributed to extending this
principle of First Classness - removing the static and replacing it with the
context dependent. The same applies to philosophy and, in many cases but not
always, to the historical eclipse of one religion by another religion (more
generic, more first classness).

In the dichotomy A|B, x might be in A in one context, but there must always
exist a context in which x is in B.

Taking this further, to the limit, if we have an entity X with a context
C, then there must exist a context B where the context C of X becomes an
entity in its own right with context B.

The ultimate aim (for me) is the First Class System where everything depends
on context and is free of any fixed, static dichotomies. Every entity is
first class.

In Conclusion
LISP is one of the simplest of all programming languages, with only a
handful of basic primitives, and perhaps the easiest to learn.

If one wants to get a hands-on feel for "Firstness" (Firstness in the
context of the other two paradigms ) and the marvelous world of the cat
chasing its tail, you can't do better than play with LISP and muse a bit on
what is happening.

An example of a problem to which the paradigm is perfectly suited is the
maze problem. All you do is to define (recursively) in LISP what a general
maze (any maze) and maze problem actually is. This definition can be
expressed as a lambda function. The argument to the function will be a
specific description (in the form of a structured list) of the maze out of
which you want to find the solution - the path to the exit.

Once you have, once and for all, this specification of the problem then all
you do is execute (evaluate) the function and out pops the answer (a
structured list describing the path out of the maze in this case).

The solution to any maze problem whatsoever is simply stated in LISP and
is a standard textbook exercise.
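A Python sketch of that recursive shape (the grid encoding, 0 for open and 1 for wall, is an assumption for illustration; the textbook LISP versions use lists):

```python
def solve_maze(maze, position, exit_pos, path=()):
    """Depth-first recursive maze search.  The answer, if any, is the
    list of cells leading from the start to the exit."""
    r, c = position
    if (not (0 <= r < len(maze) and 0 <= c < len(maze[0]))
            or maze[r][c] == 1 or position in path):
        return None           # off the grid, a wall, or a loop
    path = path + (position,)
    if position == exit_pos:
        return list(path)
    for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        solution = solve_maze(maze, step, exit_pos, path)
        if solution:
            return solution
    return None
```

Stating what a solved maze is (a path of open, adjacent cells ending at the exit) and evaluating that statement are here one and the same act.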

In general, if you can actually state your problem in (pure) LISP then you
automatically also have the answer! Always.

Thus finding the answer to a problem means finding and expressing the right
question to ask. That's all.

Perhaps this expresses the essence of recursive problem solving.



Date: Fri, 13 Feb 1998 18:47:23 +0100
From: Thomas.Riese[…] (Thomas Riese)
To: peirce-l[…]
Subject: Re: a question about LISP and recursion


A good standard reference on recursion as an extension and 
complication of the idea of mathematical induction (Peirce would have 
said 'Fermatian inference'), is

Hartley Rogers, Theory of Recursive Functions and Effective 
Computability, MIT Press 1987

It also contains an overview of the historical background.



Date: Sat, 14 Feb 1998 04:33:08 GMT
From: BugDaddy[…] (BugDaddy)
To: peirce-l[…]
Subject: Re: Porphyry: On Aristotle's Categories/The New List (4)
Message-ID: <34e91119.4149615[…]>

Porphyry wrote:

"Q.  How many genera are there of expressions said without

"A.  The ten already mentioned.

"Q.  What is the definition of each of them?

"A.  It is impossible to give definitions for any of them, for
every definition contains a genus, and there is no genus of
these:  they are the highest genera.

"Q.  What can one give in this case?

"A.  Only examples and propria, which is what Aristotle, himself

For Aristotle, a definition was something special.  It was not a
matter of stating how people actually use words.  Rather it was a
statement of how a thing was similar to other beings (its genus)
and how it differed from others.  Aristotle was, of course, a
biologist.  I think that shows in the way he looked at definitions.

So we are reduced to giving examples and propria.  For example,
when we consider substance, Porphyry gives the examples: Socrates
and Plato, man and animal.  With regard to the propria for
substance, he says "Hence it is a common property of every
substance *qua* being a substance to receive contraries in turn.
This would thus be a proprium of substance alone, which alone can
undergo a change, and does not include in its essence the
unchangeability of its qualities."

So I would suggest that in our reading of the New List we try to
find examples and propria for the categories.

"In essentials unity, in nonessentials diversity, 
         in all things charity"

 Life is a miracle waiting to happen.
         William  Overcamp



This page is part of the website ARISBE
Page last modified by B.U. July 7, 2012 — B.U.
Last modified February 13-14, 1998 — J.R.
