Let's take a look at the HTML specification:
A URL is a string used to identify a resource.
There's a cleaner explanation on SEOmoz:
URL, or Uniform Resource Locator, is a subset of the Uniform Resource Identifier (URI) that specifies where an identified resource is available and the mechanism for retrieving it.
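To make that definition concrete, here's a quick sketch using Python's `urllib.parse` (the post path in the URL is made up for illustration) showing how a URL encodes both the retrieval mechanism and the location of a resource:

```python
from urllib.parse import urlparse

# A hypothetical Daring Fireball URL, split into the parts
# the SEOmoz definition names:
url = "https://daringfireball.net/2010/01/example_post"
parts = urlparse(url)

print(parts.scheme)  # the retrieval mechanism: "https"
print(parts.netloc)  # where the resource is available: "daringfireball.net"
print(parts.path)    # which resource: "/2010/01/example_post"
```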
Here's one of Daring Fireball's endpoints and another on Scripting News. The pages are nice and clean but don't lead you anywhere (which is totally on spec). Now take a look at a Mashable post; there's a lot more going on: related topics, related articles, top related stories, paid content, and even a "see also". In order to create a stickier web, we've invented related posts, tag clouds, the dreaded target="_blank", and the like. Mind you, neither Gruber at DF nor Winer at SN is fishing for clicks - in fact Gruber drives most of his clicks offsite - and that's what I've based Kripy on. I think what I'm trying to do here is talk myself into having barebones endpoints.
So why the failed "timeline" experiment? It completely borked my SEO. And herein lies the problem: duplicate content across my URL endpoints was confusing the spiders. Search queries were resolving to the wrong pages: the term daniel lee seungmin cho was returning "Programming Is Easy" and "Cut Your Hair" - both articles on Kripy - as the site's top hits, when it should have been resolving to "Reverse Racism".
I've since thought of a few fixes, and I have further thoughts on spiders and SEO, but that's for another article. I've ranted enough for the moment. I've reversed the changes and resubmitted my sitemap to the search engines. I'll let you know how it goes.
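For reference, the sitemap I resubmitted is just an XML file listing the URLs the spiders should treat as the real endpoints. A minimal sketch per the sitemaps.org protocol (the kripy.com URLs below are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per canonical endpoint -->
  <url>
    <loc>http://kripy.com/reverse-racism</loc>
  </url>
  <url>
    <loc>http://kripy.com/programming-is-easy</loc>
  </url>
</urlset>
```

Listing only the canonical endpoints here - and not the duplicate "timeline" variants - is part of what tells the spiders which page a query should resolve to.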