What's That Noise?! [Ian Kallen's Weblog]


20060205 Sunday February 05, 2006

Web Two Dot Overload

The past few weeks have seen such a saturation of new services, it's mind-boggling. The flickr and delicious acquisitions clearly stoked a yodeling frenzy (y!a think?). Ya know, it seemed like in months past there were new services every few weeks or so: digg, reddit, memeorandum, personalbee, topix, tailrank, squidoo and on and on and on. But lately, it's hourly.

Now it's megtite, 30boxes, dabble, newroo, wink, tinfinger, podserve and chuquet. Goddammit, TechCrunch has a form to submit your very own web 2.0 venture for coverage. And of course, everyone's variously buzzword-compliant with tags, feeds and ajax. Bubble 1.0 was a lot of fun, but the sudden thud at the end kinda smarted, didn't it?

pours a glass of two buck chuck
  OK, I'm back.

I'm not dissing any of these services; there are a lot of really good ideas out there (nor, OTOH, am I endorsing any). But can they just take a few days off so we can get some work done? How about a moratorium? No more web 2.0 service launches for 48 hours, please! The funny thing is how self-referential the content has become: the services are capturing our artifacts, and our artifacts are all talking about the services. The top story on megtite right now is coComment; on chuquet it's there too, along with dabble. It's what happens when you point the mic at the P.A. system.

Please, take a few days off. Go skiing or whale watching or something. Or go old school: content is king! e-commerce! woohoo!

( Feb 05 2006, 09:44:39 PM PST )

20060201 Wednesday February 01, 2006

Large Heap Sizes and OOM

Note to self: if you're getting OutOfMemoryErrors, bumping up the heap size may actually make the problem worse. Usually, OOM means you've exceeded the JVM's capacity... so you set -Xms and -Xmx to a higher stratum of memory allocation. Well, at least I thought that was the conventional wisdom. Having cranked it up to 1850M to open very large data structures, OOMs were still bringing down the house. OK, spread the work around in smaller chunks across multiple JVMs. But it still bombed out. It turns out that you have to be very particular about giving the JVM a lot of heap up front. This set of posts seems to peg it. I'd figured that nailing down a big heap allocation was how I'd prevent OOMing. Looks like it's time for me to bone up on JVM tuning. I should probably dig into 64-bit Java while I'm at it.
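One way to sanity-check what the JVM actually granted is to ask the Runtime at startup. This is a minimal sketch (the class name and the 1850m figure in the comment are just illustrative, not anything from a real deployment):

```java
// Minimal sketch: report the heap the running JVM actually has.
// To reserve the full heap up front, launch with matching initial
// and max sizes, e.g.:  java -Xms1850m -Xmx1850m HeapInfo
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap (MB):   " + rt.maxMemory() / mb);   // the -Xmx ceiling
        System.out.println("total heap (MB): " + rt.totalMemory() / mb); // currently committed
        System.out.println("free heap (MB):  " + rt.freeMemory() / mb);  // free within committed
    }
}
```

If total heap starts well below max heap, the JVM is growing the heap lazily rather than grabbing it all at launch; setting -Xms equal to -Xmx sidesteps growth-time surprises, though (as the posts above suggest) it won't save you if the OS can't hand over that much contiguous address space in the first place.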

( Feb 01 2006, 11:57:15 PM PST )