mini-MeFi (November 21, 2001, 1:50 AM)
Y'know what? If this had the feel, functionality and all-around hoopiness of MeFi, that'd be really cool. [more]
ummm, what about ben brown's discuss?
...
i guess the ultimate distributed computing community site wet dream would be when *every* member of mefi hosted some component of the site - their own profile and threads for example. users with more powerful nodes would be able to mirror content from others and host the archives
it's not going to happen anytime soon, but working with xml has given me an idea of how it all might be possible - if you have a bunch of users with permanent net connections and web servers (something to keep in mind is that os x is shipping with the open source apache web server and php), all you need is a layer of web server intelligence to tie it all together and authenticate users and data
posted by sawks at 3:40 AM on November 21, 2001
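A very rough sketch of the kind of "web server intelligence" sawks is describing, assuming PHP on each member's node; the feed layout, field names, and the shared-secret check are all invented here for illustration and aren't taken from php-mefi or any real code:

    <?php
    // threads.xml.php -- hypothetical per-member feed for a distributed MeFi node.
    // An aggregator with more bandwidth could pull this, verify it, and mirror it.

    $sharedSecret = 'change-me';   // agreed out of band with whoever mirrors this node
    $threads = array(              // in reality, read from this node's own storage
        array('id' => 1, 'title' => 'mini-MeFi', 'posted' => '2001-11-21T01:50:00'),
    );

    // Only hand the feed to callers who present the right token.
    if (!isset($_GET['token']) || $_GET['token'] !== $sharedSecret) {
        header('HTTP/1.0 403 Forbidden');
        exit;
    }

    header('Content-Type: text/xml');
    echo "<?xml version=\"1.0\"?>\n<member node=\"example-node\">\n";
    foreach ($threads as $t) {
        printf("  <thread id=\"%d\" posted=\"%s\">%s</thread>\n",
               $t['id'], $t['posted'], htmlspecialchars($t['title']));
    }
    echo "</member>\n";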
It depends on what the bottleneck is when Metafilter starts straining under a heavy load. Is it the database, the ColdFusion server, or the network bandwidth? As elegant as a distributed software solution sounds in theory, often the cheapest solution in practice is to throw more hardware at the problem. (I suppose that's easy to say in theory as well; in practice there's the small matter of cash...)
I worked at a company where we used to do compute-intensive simulations on Sun boxes. As the simulations got more complex, they became intolerably slow. There were many ideas on how to redesign the simulator's architecture, make it distributed across multiple boxes, and so on. In the end we just said sod it and went down to Fry's to buy a dirt-cheap super-GHz Linux box with a big fast disk. That increased performance fourfold and saved us a whole load of hassle.
posted by dlewis at 4:17 AM on November 21, 2001
I would guess that the bottleneck will be the DB, and as the number of comments and threads increases it will only get worse. One idea to keep the size of the DB relatively constant would be to shuffle old threads (with the obvious exceptions) out of the DB and into static pages, but this would require a fair bit of work on Matt's part.
posted by gi_wrighty at 5:46 AM on November 21, 2001
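For what it's worth, that shuffle could be a nightly job along these lines; the table names, the 'closed' flag, and the 30-day cutoff are all made up here, and the real site runs ColdFusion rather than PHP:

    <?php
    // archive_threads.php -- back-of-the-envelope sketch of moving old threads
    // out of the database and into flat files (comment markup omitted for brevity).
    $db = new PDO('mysql:host=localhost;dbname=mefi', 'user', 'pass');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $old = $db->query(
        "SELECT id, title, body FROM threads
         WHERE closed = 1 AND posted < DATE_SUB(NOW(), INTERVAL 30 DAY)");

    foreach ($old as $t) {
        // Write the thread out once as static HTML the web server can serve directly...
        $page = '<html><body><h1>' . htmlspecialchars($t['title']) . '</h1>'
              . $t['body'] . '</body></html>';
        file_put_contents('archive/' . $t['id'] . '.html', $page);

        // ...then its rows can leave the live database, keeping the DB size roughly constant.
        $db->prepare('DELETE FROM comments WHERE thread_id = ?')->execute(array($t['id']));
        $db->prepare('DELETE FROM threads WHERE id = ?')->execute(array($t['id']));
    }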
I don't think DB size affects performance much; contention on the new threads would be the problem. If the DB were indeed the bottleneck, the software solution would be to render a new static page every time a comment is posted. That would offload the bulk of the read-only requests to the web server. There is of course the issue of dealing with user preferences, but a linked style sheet might take care of that.
Having said all that, new hardware is still a better solution.
posted by dlewis at 6:10 AM on November 21, 2001
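Roughly what that render-on-write idea could look like, with hypothetical table and file names (and again sketched in PHP rather than ColdFusion): the database is touched only when someone posts, and every read after that is a flat file plus a linked stylesheet for per-user preferences.

    <?php
    // post_comment.php -- sketch of "render a static page on every comment".
    function render_thread_to_static($db, $threadId) {
        $stmt = $db->prepare('SELECT author, comment FROM comments WHERE thread_id = ? ORDER BY posted');
        $stmt->execute(array($threadId));

        // User preferences live in a linked stylesheet, so the markup itself can stay static.
        $html = "<html><head><link rel=\"stylesheet\" href=\"/prefs.css\"></head><body>\n";
        foreach ($stmt as $row) {
            $html .= '<p>' . $row['comment'] . '<br><i>posted by '
                   . htmlspecialchars($row['author']) . "</i></p>\n";
        }
        file_put_contents("static/thread-$threadId.html", $html . '</body></html>');
    }

    // One INSERT, one re-render; subsequent reads never touch the database.
    $db = new PDO('mysql:host=localhost;dbname=mefi', 'user', 'pass');
    $db->prepare('INSERT INTO comments (thread_id, author, comment, posted) VALUES (?, ?, ?, NOW())')
       ->execute(array($_POST['thread_id'], $_POST['author'], $_POST['comment']));
    render_thread_to_static($db, (int) $_POST['thread_id']);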
I've just had a somewhat wacky idea for improving bandwidth, as well. If static pages were compressed using some standard text compression method, their size would probably be reduced by about 75%. You could put the compressed encoding into a comment or something. Then place a link in the page to a JavaScript program (which would subsequently be cached by the browser), which, on page load, reads the encoded comment and uncompresses it, using the DOM to append it to the page. Combined with the static page approach, that would reduce the load on the database, the web server, and the T1 link.
Having said all that, new hardware is still a better solution.
posted by dlewis at 6:36 AM on November 21, 2001
i guess the ultimate distributed computing community site wet dream would be when *every* member of mefi hosted some component of the site - their own profile and threads for example. users with more powerful nodes would be able to mirror content from others and host the archives
Sexy.
posted by rushmc at 7:15 AM on November 21, 2001
The only problem with that is that it escapes from the Scylla of database overload into the Charybdis of link rot.
posted by Steven Den Beste at 7:29 AM on November 21, 2001
Hmm.. on second thoughts, scrub that Javascript decompression engine idea. I've been looking around and it seems to have been done before.
(apologies for posting 4 times to this thread, too. I deserve a complaint on Metatalk for that.)
posted by dlewis at 7:47 AM on November 21, 2001
(apologies for posting 4 times to this thread, too. I deserve a complaint on Metatalk for that.)
Hey, take it to MetaMetaTalk, d.
...
More seriously, don't recent versions of HTTP do compression anyway?
posted by rodii at 9:30 AM on November 21, 2001
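They do, in the sense that HTTP/1.1 lets the browser advertise which encodings it can handle and lets the server label what it sent back; whether anything actually gets compressed still depends on the server end. Roughly (the URL is just an example):

    GET /comments/12345 HTTP/1.1
    Host: metatalk.metafilter.com
    Accept-Encoding: gzip

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Encoding: gzip
    Vary: Accept-Encoding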
It'd be a lot easier to send Matt money or buy a TextAd.
posted by gleemax at 10:06 AM on November 21, 2001
If you're interested in running your own MeFi-style site, you should check out Verbamanent. Here's a message from the php-mefi board that gives some pointers:
http://groups.yahoo.com/group/php-mefi/message/170
posted by daver at 11:31 AM on November 21, 2001
Wasn't the php-mefi project the answer to this post's question? That was a really interesting thing, and I was sad to see its enthusiasm die down to almost nothing... twice. I've been developing php-mefi code on my own since then.
posted by tomorama at 3:04 PM on November 21, 2001
Re: compression of text -- check out mod_gzip for Apache -- I think it does what you want.
The server compresses the HTML page using gzip and sends that out to the end user. Good for situations where CPU power is plentiful and bandwidth is scarce (i.e. not necessarily MetaFilter). IE and Netscape versions above 4.0 can accept gzip-encoded content.
Matt could also set up an Apache front end to the site using ProxyPass and mod_gzip, which would handle the compression for him and free the main web server from serving the pages up to end users.
Just thinking of novel new ways to spend Matt's time and money...
posted by kaefer at 4:31 PM on November 21, 2001
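A sketch of that front end for Apache 1.3 with mod_proxy and mod_gzip; the module paths, host name, and thresholds are placeholders, and the exact directives should be double-checked against the mod_gzip documentation:

    # httpd.conf on the proxy box (paths and host names are placeholders)
    LoadModule proxy_module  libexec/libproxy.so
    LoadModule gzip_module   libexec/mod_gzip.so

    # Hand every request through to the box that really runs MetaFilter...
    ProxyPass        /  http://backend.metafilter.com/
    ProxyPassReverse /  http://backend.metafilter.com/

    # ...and gzip the HTML on its way back out to browsers that ask for it.
    mod_gzip_on                 Yes
    mod_gzip_dechunk            Yes
    mod_gzip_item_include       mime ^text/html
    mod_gzip_minimum_file_size  1024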
<ditto>I've been developing php-mefi code on my own since then</ditto>
Mine's called PHPilfer, because open source is theft.
How far have you got along the trail?
posted by holloway at 8:05 PM on November 21, 2001
(apologies for posting 4 times to this thread, too. I deserve a complaint on Metatalk for that.)
Well, I was just about to post a thread of my own, a "What's the maximum number of comments per user to a MeTa thread before re-routing the guy to a new MeTa thread" thread, when an angel descended and reminded me I'd exceeded my daily quota of the word "thread" already. Shucks.
posted by MiguelCardoso at 8:41 PM on November 21, 2001
I don't know anything about ColdFusion, but I have faced the content management problem before. The best solution for overloaded db-backed pages is usually 'caching' the page, by generating static snapshots of the page every X seconds, instead of doing so on every request. I bet that if you track how many requests you are receiving per minute/second/whatever, you can work out a function to adjust caching accordingly so that the DB/CF engine is hit only around a specific number of times/sec.
posted by costas at 7:39 AM on November 22, 2001
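A minimal sketch of that kind of snapshot cache in PHP; the cache path, the 30-second interval, and build_front_page() are all hypothetical, and costas's refinement would be to shrink or stretch that interval based on the measured request rate:

    <?php
    // cached_front_page.php -- serve a snapshot, rebuilding it at most every $ttl seconds.
    $cacheFile = '/tmp/frontpage.html';
    $ttl       = 30;   // tune this (or compute it from requests/sec) to cap DB hits

    function build_front_page() {
        // The only code path that touches the database.
        $db = new PDO('mysql:host=localhost;dbname=mefi', 'user', 'pass');
        $out = "<html><body>\n";
        foreach ($db->query('SELECT title FROM threads ORDER BY posted DESC LIMIT 30') as $row) {
            $out .= '<p>' . htmlspecialchars($row['title']) . "</p>\n";
        }
        return $out . '</body></html>';
    }

    // Fresh enough?  Serve the file.  Stale?  Rebuild it once, then serve it.
    if (!file_exists($cacheFile) || time() - filemtime($cacheFile) > $ttl) {
        file_put_contents($cacheFile, build_front_page());
    }
    readfile($cacheFile);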
Instead of just one MeFi that's getting top-heavy with users (though it's proven able to withstand thousands whereas server.com doesn't), there could be a sea of little MeFis. MeFi would succeed where server.com has failed. I guess that's what I'm trying to say.
Okay. I just wanted to imagine for a minute. Back to reality.
posted by ZachsMind at 2:00 AM on November 21, 2001