continuation of mefi tangent on email clients December 4, 2001 3:22 PM

This is a continuation of the tangents raised while discussing e-mail clients. Up for discussion:
* whether there is any "naturally occurring phenomenon" in cyberspace
* whether Microsoft understands the environment it is in
* whether Outlook is a secure product or not.
posted by Neale to MetaFilter-Related at 3:22 PM (21 comments total)

The one I would most like to discuss is, obviously, the first: whether the "Internet" or "software", or, broadly, the digital environment has naturally occurring phenomena.

I don't have a position on this yet, but, as the digital "realm" is an entirely man-made construct, I lean towards "no" at the moment.

A case for "yes" could be made if (a) maths is a product of nature (prove that!), (b) external "meatspace" factors can influence an "innerspace" world, or (c) just because something is bound by entirely man-made strucutures & laws doesn't mean chaos-theory won't play havoc with it.

Anyone want to throw the first pitch?
posted by Neale at 3:26 PM on December 4, 2001


Because Neale found saying "Fuck You" in the original thread didn't stop the people who "bitch and bitch" about these issues.
posted by Catch at 3:29 PM on December 4, 2001


Two posts before a MetaTalk thread went OT. A new record?
posted by Neale at 3:35 PM on December 4, 2001


neale:

entropy is one naturally occurring phenomenon i would say. software and, if not that, hardware will eventually fail somewhere.

i don't consider outlook a secure product, but i think microsoft understands very well the environment that they are in. they have consistently produced very poor designs for their software: they give too much access to the client system via javascript, for example. their word macro system is fairly powerful, and has had several viruses designed for it.

that's not to say that emacs-lisp isn't just as powerful, or indeed more so; it's just that microsoft IS a monopoly, but they don't seem to account for that in software design, producing junk that "trusts" the source (word macros processed by default, etc.), which you cannot do when you're such a large target. emacs merely gets away with being both an underdog and a sympathetic entity.
posted by moz at 3:36 PM on December 4, 2001


i just read the thread you're referring to, by the way. interesting that the fpp text is one huge opinion.
posted by moz at 3:38 PM on December 4, 2001


entropy is one naturally occurring phenomenon i would say. software and, if not that, hardware will eventually fail somewhere.

I agree that hardware may eventually fail, but must software? Hardware is bound by real-world wear & tear, power requirements, etc, etc... software, if thought of in a "pure maths" sense, need never fail if written correctly. I place my bets on there being software in this great, wide world that has never "crashed" or "stopped".

Is the atomic clock a good example of this, or a good counter-example (ie, is the loss of seconds due to the software's reliance on the half-life level, or due to software cycles alone)?
posted by Neale at 3:44 PM on December 4, 2001


I think that software reliability is purely a hardware issue in the real world -- hard drives can fail, programs can get corrupted, etc.
In a more abstract sense, perfectly written software executing on an ideal Turing machine would have no reason not to run forever -- no wear and tear on the code, of course, and in an abstract run-time environment it wouldn't fail.
Could the internet as a whole be considered such an environment? Odds are it'll never fail all at once, and if one were to set a program loose it could theoretically run forever.
Atomic clocks will, eventually, run down, but this is purely a hardware issue. In an abstract mathematical sense, perfectly written software shouldn't have to.

Note also that in an idealized run-time environment error-handling wouldn't be much of a problem.
posted by j.edwards at 3:53 PM on December 4, 2001


hardware would never fail if maintained properly either, neale. but it is false to assume that software, even if it could be written to perfection, will actually be written that way more than a few times, let alone often.

it's possible to design a programming language in which programs can be proven to always act in a "correct" manner (some have probably been written, and i'd like to suggest ML or Haskell as candidates, but i am not sure); the reality is that most programmers use something quick and dirty like C, where most mistakes are your responsibility.

there's software that has never "crashed" or abnormally stopped. let's write some right now in LISP:

;; i am super cool.
(print "Hello World!")

as systems grow in size, the chance of our writing software that is fool-proof in a non-fault-tolerant system falls dramatically -- to negligible amounts, i would say. there are some fairly fault-tolerant systems (such as Erlang, a language written by programmers at Ericsson for their telecom work) but even those are open to design errors as opposed to simple typos.
posted by moz at 3:55 PM on December 4, 2001
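
A rough sketch of moz's fault-tolerance point, in the same LISP as the example above (in the spirit of restart-on-failure supervision as found in Erlang, not Erlang's actual API): run a task in a loop, and if it signals an error, note it and restart instead of letting the whole program die. The task named in the usage comment is hypothetical.

;; keep-alive: restart the task whenever it fails, forever
(defun keep-alive (task)
  (loop
    (handler-case (funcall task)
      (error (e)
        (format t "task failed (~a); restarting~%" e)))))
;; hypothetical usage: (keep-alive #'process-next-message)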

The one I would most like to discuss is, obviously, the first: whether the "Internet" or "software", or, broadly, the digital environment has naturally occurring phenomena.
Viruses are a computer phenomenon that will forever exploit holes and spread. That they wouldn't occur without humans doesn't make them "unnatural".
posted by holloway at 4:01 PM on December 4, 2001


holloway: That they wouldn't occur without humans doesn't make them "unnatural".

Is this a "GM Food isn't unnatural" kind of argument or a "Man-made bushfire isn't unnatural" kind of argument?

Viruses are bound by what their human writers have given them the ability to do. Admittedly, we hear a lot of de-machinising descriptors for viruses... like the name itself, "virus", which is a natural being; they live "in the wild", even though the "wild" is entirely digital; people "catch" viruses, even though technically they're being caught by viruses.

Surely as soon as something exists only due to man-made structures (the electricity running the hardware, the server using the hardware, the OS on the server, the app on the OS) it cannot be seen as "natural" any more. It's too many levels away from naturally occurring phenomena.

j.edwards: Could the internet as a whole be considered such an environment? Odds are it'll never fail all at once, and if one were to set a program loose it could theoretically run forever.

Perhaps this would be the ultimate virus, and instinctively we sense the perils of computer AI (or, in bad sci-fi, evil computer AI). Could a virus written in "perfect code" live forever, and how would it combat the "anti-virus" eventually sent to stop it?
posted by Neale at 4:16 PM on December 4, 2001


I haven't gotten any emails from my friends at all today. It has been one long and boring day at work. I think, though, that it would be interesting to see whether everyone here is downed by the virus, and whether that has contributed to the exceedingly long and lively discussion that has gone on all over the place (well, here) today.
posted by goneill at 4:18 PM on December 4, 2001


When reading this I see a switch from specific examples of poor design to vague higher principles that excuse poor design, Microsoft's popularity used as an excuse for poor design, and quibbles over the definition of natural (as if it matters).

It's my fault, and quite sad, really.
posted by holloway at 4:28 PM on December 4, 2001


Does a brain have naturally occurring phenomena? It's just a more complicated machine than a computer. I would say that the architecture of Von Neumann machines is the limiting factor.

The interesting question here is "Does hyperlink space, with its conscious (human) mechanisms for connectivity and comparison, start to approach the complexity of a brain?"

Can't be long now, unless we all do have immortal souls.
posted by walrus at 3:52 AM on December 5, 2001


Oops I'm way, waaaaaay offtopic, without a paddle.
posted by walrus at 3:54 AM on December 5, 2001


Well, walrus raises an interesting point, but I think it ends up back at the complexity argument. Anyone else read that old sci-fi story (maybe called "Dial F for Frankenstein") where they plug in that massive communications network and it achieves consciousness? It was a terrible story.

I see a switch from specific examples of poor design
Well, the 'hello world' example doesn't work in this situation, since it doesn't (we hope) run indefinitely. I think a program such as the one we are hypothesizing would be considered a virus at this point, merely by virtue of not terminating and using processor time, but I suppose unless it replicates, no worries.
posted by j.edwards at 12:03 PM on December 5, 2001
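
By way of contrast with the 'hello world' example, here is a minimal sketch (not from the thread) of the kind of program being hypothesized: one that never terminates but whose state never grows, so in an idealized run-time environment there is nothing in it to wear out. On real hardware it would still, of course, be hostage to power and failing parts.

;; runs forever with constant state; nothing here accumulates or leaks
(loop
  (sleep 1)                 ; idle rather than burn processor time
  (print "still running"))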


A couple of interesting side notes on the "complexity argument" (this will be long, so use your scrollbar judiciously if you aren't interested).

When Rumelhart & McClelland proposed their model of neural learning, they made three fundamental errors (IMO ... I'm not going to cite, since it's been a few years since I was involved in AI research and I no longer have access to a decent library):

* that the neuron is the basic unit of computation of the brain (capable of and/or/not type computation)

* that a heavily connected network was analogous to the brain

* that connectivity is feed-forward

Research since has tended to show that:

* the synaptic connection onto an axon is capable of and/or/not computation (look into synaptic plasticity). Neurons are now thought to be of similar complexity to a PC (or perhaps a website)

* learning in a heavily connected network is NP-complete (it does, however, work in a sparsely connected network ... and synaptic networks have a level of connectivity similar to that which would work. In my opinion, so does the internet).

* the brain uses feedback extensively

Remember also that learning is all about strengthening "good" connections and weakening "bad" ones, which is what the human users of the web do (analogy: humans are the consciousness of the web).

I'm not saying intelligent networks of people and cyberspace will necessarily emerge: I'm postulating that it's possible. The web is analogous to a brain at certain levels. Neural learning algorithms work well in an environment where the network starts with random connections and is built up through directed learning (strengthening and weakening connections in a goal-based way).

In any case, a poor science fiction novel neither strengthens nor weakens the theory. I'm not sure what a testable hypothesis might be. That users of certain web communities might pull together facts "out of the air" to solve given problems which they could not solve in isolation? In that case it's the connectedness which is providing the emergent intelligence.
posted by walrus at 2:19 PM on December 5, 2001
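
As an aside on the learning rule walrus describes, here is a toy "strengthen good connections, weaken bad ones" step in the same LISP as the earlier example: a crude delta-rule update for illustration only, not Rumelhart & McClelland's actual model. The weights, inputs, error and learning rate in the usage comment are made up.

;; nudge each weight in the direction that reduces the error (err) on
;; one example: active inputs paired with a positive error get stronger
;; weights, those paired with a negative error get weaker ones
(defun update-weights (weights inputs err rate)
  (mapcar (lambda (w x) (+ w (* rate err x)))
          weights inputs))
;; e.g. (update-weights '(0.1 -0.3 0.5) '(1 0 1) 0.8 0.05)
;; => roughly (0.14 -0.3 0.54)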


ps I think I remember that the brain has in the order of 2 billion neurons. Dunno how many websites there are ...
posted by walrus at 2:26 PM on December 5, 2001


Post of the day, walrus.
posted by rodii at 5:11 PM on December 5, 2001


Thanks rodii. I went on to write a lot more on the subject, in case anyone is interested.
posted by walrus at 6:24 AM on December 6, 2001


Thanks, walrus. (offtopic: this is now my favorite thread ever)
posted by j.edwards at 1:56 PM on December 6, 2001


Does anyone know any good ongoing forums for these topics of conversation?
posted by Neale at 9:16 AM on December 7, 2001


