ftp vs http vs scp

From: Sean 'Captain Napalm' Conner <spc_at_conman.org>
Date: Fri May 28 13:34:43 2004

It was thus said that the Great Jules Richardson once stated:
>
> Bottom line to me is that HTTP is a pretty heavyweight and bloated
> protocol, whereas FTP is a lot cleaner. So for raw data transfer I'd
> always prefer an FTP server.

  I consider FTP to be more convoluted than HTTP (having helped write a
webserver and only briefly looked over the FTP specs) since under FTP there
is support for differing file modes (stream, record, conversions between
text formats, etc.) and for setting up transfers between multiple machines.
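
  Even a simple anonymous download shows the moving parts. A rough
(untested) sketch using Python's ftplib, with the host and file names
made up for the example:

        from ftplib import FTP

        ftp = FTP('ftp.example.com')   # control connection (port 21)
        ftp.login()                    # anonymous login
        ftp.cwd('/pub')
        # retrbinary() switches the transfer type to binary ("TYPE I")
        # and opens a second TCP connection just to carry the data.
        with open('file.bin', 'wb') as f:
            ftp.retrbinary('RETR file.bin', f.write)
        ftp.quit()

  Two TCP connections, plus a transfer type to negotiate, just to fetch
one file.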

  HTTP is simple enough to use via telnet. Given

        http://boston.conman.org/2000/06/04.1

  you can do:

        %telnet boston.conman.org 80
        Trying 216.82.116.251...
        Connected to swift.conman.org.
        Escape character is '^]'.
        GET /2000/06/04.1 HTTP/1.1
        Host: boston.conman.org
                (blank line, type nothing here)

        (server starts spewing data out at you)

  The complications arise when you start using more of the features in
HTTP/1.1, such as byte ranges, conditional requests (if the "file" hasn't
changed since such-n-such a date, don't send anything), caching, etc.
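
  For instance, a conditional, partial GET of the same URL, sketched
(untested) with Python's http.client; the date here is just an example,
and the library adds the Host header on its own:

        import http.client

        conn = http.client.HTTPConnection('boston.conman.org')
        conn.request('GET', '/2000/06/04.1', headers={
            'Range': 'bytes=0-499',    # only the first 500 bytes
            'If-Modified-Since': 'Sat, 01 May 2004 00:00:00 GMT',
        })
        resp = conn.getresponse()
        # 206 Partial Content, 304 Not Modified, or a plain 200 if the
        # server ignores the headers; all are legal responses.
        print(resp.status, resp.reason)
        conn.close()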
  
> Password security is an issue because it's perhaps not as good as HTTPS
> - but then with HTTPS aren't we getting into pay-through-the-nose server
> certificate territory?

  Not necessarily. You can sign your own certificates, but browsers will
give a warning about an unknown certificate authority (someone other than
Verisign & Co.). The money you pay is to repay the bribes the security
companies paid the browser makers to include their root certificates.
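
  The encryption with a self-signed certificate is just as strong; only
the trust chain differs. A sketch (untested) in Python, where the host
and certificate file are placeholders and the client has been handed
your certificate out-of-band:

        import socket, ssl

        # Trust our own self-signed certificate instead of the usual CA
        # bundle; verification then succeeds with no warning at all.
        ctx = ssl.create_default_context(cafile='my-cert.pem')
        with socket.create_connection(('www.example.com', 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname='www.example.com') as tls:
                print(tls.version(), tls.cipher())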

> > it's just common for a request to be
> > satisfied by serving up a copy of a file, and this is (ab)used as a
> > file-transfer mechanism.
>
> Upload's even more of a mess from what I remember, requiring something
> at the server end (be it Perl, compiled CGI, Java or whatever) to handle
> and save the incoming data stream - i.e. there's no standard for
> actually saving an upload to the filestore. Similarly, if you do want to
> administer files behind a web server then the solution is going to be
> localised as there's no standard for that either.

  There's a reason for that---what a webserver serves up may not even *be* a
file---it really depends upon the website in question, or even what part of
the site you're uploading a "file" to. That's why most webservers farm out
the uploading to an external process, although one could always write a
module (say, for Apache) to handle it in-server.
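
  Such an external process can be tiny. A sketch (untested) of a
CGI-style upload handler in Python; the spool path, and the whole
question of where things really belong, are invented for the example:

        #!/usr/bin/env python3
        import os, sys

        # Read however many bytes the webserver says the client sent.
        length = int(os.environ.get('CONTENT_LENGTH', '0'))
        body = sys.stdin.buffer.read(length)

        # What the client calls a "file" may land anywhere on this end:
        # a real file, a database row, a fragment of a larger page.
        with open('/var/spool/uploads/incoming', 'wb') as f:
            f.write(body)

        sys.stdout.write('Status: 201 Created\r\n')
        sys.stdout.write('Content-Type: text/plain\r\n\r\n')
        sys.stdout.write('stored\r\n')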

  The intent, really, is that the HTTP PUT method "puts" a resource at the
given URL. So I could (although I've yet to do this) write a client that
connects to boston.conman.org and issues:

        PUT /2004/05/28.2 HTTP/1.1
        Host: boston.conman.org
        Content-type: application/x-mod-blog; charset="US-ASCII"
        Content-length: nnnn

        title: title of this post
        categories: various categories, blah blah

        <p>Here is a post I'm making today ... </p>

  And some piece of code on my server (as an Apache module, or an external
program that Apache calls) will take the content and store it in the
appropriate locations. The actual storage of entries in my online journal
isn't a one-to-one file relationship, and the structure I'd use to upload an
entry for a URL of

        http://boston.conman.org/2004/05/28.2

can't be the same as

        http://boston.conman.org/2004/05

as the latter contains entries for an entire month, and each entry (for the
site to work) needs to be stored separately.

  That's why webservers require something on their end to handle and save
the incoming data stream.
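
  The client side of that PUT would be equally small; a sketch
(untested) in Python of the very request shown above:

        import http.client

        entry = ('title: title of this post\r\n'
                 'categories: various categories, blah blah\r\n'
                 '\r\n'
                 "<p>Here is a post I'm making today ... </p>\r\n")

        conn = http.client.HTTPConnection('boston.conman.org')
        conn.request('PUT', '/2004/05/28.2', body=entry.encode('ascii'),
                     headers={'Content-Type':
                              'application/x-mod-blog; charset="US-ASCII"'})
        # http.client fills in Content-Length; the code on the server
        # decides where the entry actually gets stored.
        print(conn.getresponse().status)
        conn.close()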

  -spc (I actually use email to update my site 8-)