I’ve opted to clean up the front page, along with a few other things, locally. Some of these things I have decided should be deprecated. The front page had become rather busy and overfull: there is no reason for a display of the last five songs I’ve listened to, and with my enhanced calendar in the upper left corner, there is no longer any feasible reason to display my older-style archive search in monthly increments.

I’ve kicked the music display down to the last song played. I happen to like having an archive of what I’ve been listening to since the creation of the applet, so I’ll continue to store and display it, albeit a bit less ostentatiously.

My “Latest Entries” list has been squelched to five entries, and I’ve been weighing a ‘hard wrap’ of the ‘QuickURLs’ blogging functionality I introduced late last year.

The file manager has been retooled, cutting the code by nearly a third since its implementation. It is in an interim state, but it is already quite fast, and it will be faster still with the next set of modifications I have queued to roll out.

It’s nice to shed a bit of weight, after all.

Shawn’s Blue Cheez Burger

· 4 pounds ground sirloin
· salt and freshly ground pepper
· 16 oz crumbled blue cheese
· 4 tbsp Worcestershire sauce
· 4 scallions, finely chopped
· 4 tbsp olive oil
· a few sprigs of cilantro
· 8 hamburger buns

Preheat the grill. Make it nice and hot. Divide the meat into sixteen portions, forming each portion into a round patty. With a large wooden spoon, mix the Worcestershire and blue cheese until nice and gooey. Peel the cilantro leaves from the stems and discard the stems. Fold the cilantro leaves and scallions into the cheese mixture. Place an eighth of the blue cheese mixture between two patties and squeeze the edges firmly together to seal the burger. Dip one side of each burger in the oil. Arrange the burgers, oil side down, on the hot grate and grill until nicely browned, usually 3-4 minutes. Brush the other side lightly with olive oil and season with salt and pepper. Turn with a spatula and continue grilling until cooked to taste. Place on buns and serve. I suggest a nice soft drink that is light in sugar, and a side of browned fries.

Enjoy your meal!

Still in its infancy, my RSS Feed Reader has grown from a buggy concept into a fully realized utility. Previously modeled upon Keith Devens’ XML-RPC work, it has been rewritten around a customized version of Onyx RSS.

Rarely is it required to “throw the baby out with the bathwater”, but here it was warranted: my former implementation has been rewritten from a 200k+ monolith down to roughly 40k.

It will read, parse, and cache a feed, then spew out a brief synopsis of the articles available. I will be expanding it as I absorb it into my Rollator CMS, but feel free to test it out. I will reimplement “Titles Only”, along with various other tweaks, in the near future.

Update: Heck, I don’t really care about Onyx; it’s a bit bloated for my needs. I’ve got all I need right here:

function startElement($parser, $tagName, $attrs) {
    global $insideitem, $tag;
    if ($insideitem) {
        $tag = $tagName;
    // Note: PHP's expat parser case-folds tag names to uppercase by default
    } elseif (($tagName == "ITEM") || ($tagName == "IMAGE")) {
        $insideitem = true;
    }
}

function endElement($parser, $tagName) {
    global $insideitem, $tag, $title, $description, $link, $iurl, $i, $n;
    if ($i >= $n) { return; }
    if ($tagName == "ITEM") {
        // Print one item as a bulleted, linked title followed by its synopsis
        printf("• <a href=\"%s\">%s</a> – %s<br />\n",
               trim($link),
               htmlspecialchars(trim($title)),
               htmlspecialchars(trim($description)));
        $title = "";
        $description = "";
        $link = "";
        $iurl = "";
        $insideitem = false;
        $i += 1;
    }
}

function characterData($parser, $data) {
    global $insideitem, $tag, $title, $description, $link, $iurl;
    if ($insideitem) {
        switch ($tag) {
            case "TITLE":       $title       .= $data; break;
            case "DESCRIPTION": $description .= $data; break;
            case "LINK":        $link        .= $data; break;
            case "URL":         $iurl        .= $data; break;
        }
    }
}

function parse_feed($url, $num) {
    global $insideitem, $tag, $title, $description, $link, $iurl, $i, $n;
    $insideitem = false;
    $tag = "";
    $title = "";
    $description = "";
    $link = "";
    $iurl = "";
    $i = 0;
    $n = $num;
    // Create an XML parser
    $xml_parser = xml_parser_create();
    // Set the functions to handle opening and closing tags
    xml_set_element_handler($xml_parser, "startElement", "endElement");
    // Set the function to handle blocks of character data
    xml_set_character_data_handler($xml_parser, "characterData");
    // Open the XML file for reading
    $fp = fopen($url, "r") or die("Error reading RSS data.");
    // Read the XML file 4KB at a time
    while ($data = fread($fp, 4096)) {
        // Parse each 4KB chunk, handling errors as they arise
        xml_parse($xml_parser, $data, feof($fp))
            or die(sprintf("XML error: %s at line %d",
                xml_error_string(xml_get_error_code($xml_parser)),
                xml_get_current_line_number($xml_parser)));
    }
    // Close the XML file
    fclose($fp);
    // Free up memory used by the XML parser
    xml_parser_free($xml_parser);
    $i = 0;
}
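For what it’s worth, invoking it is a one-liner (the feed URL below is just a placeholder):

// Fetch, parse, and print the five most recent items from a feed
parse_feed("http://www.example.com/index.rss", 5);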

As a follow-up to my initial search engine array post, I’ve implemented a rather useless little utility that builds slightly upon it.

For one of my test sites, rather than having a generic ‘nothing here’ page, I decided to redirect the user to search for a given string based upon their forage into my directory:

header("Location: " . randomSearchString($_SERVER["REDIRECT_URL"]));

function randomSearchString($findString) {
    $randNum = mt_rand(1, 8);
    $searchengine = array(
        '1' => 'http://www.altavista.com/cgi-bin/query?pg=q&what=web&fmt=&q=',
        '2' => 'http://www.everything2.com/index.pl?node=',
        '3' => 'http://msxml.excite.com/info.xcite/search/web/',
        '4' => 'http://www.google.com/search?q=',
        '5' => 'http://hotbot.lycos.com/?SW=web&SM=MC&DC=10&DE=2&RG=NA&_v=2&MT=',
        '6' => 'http://search.lycos.com/?npl=&query=',
        '7' => 'http://dpxml.webcrawler.com/info.wbcrwl/search/web/',
        '8' => 'http://search.yahoo.com/bin/search?p='
    );
    return $searchengine[$randNum] . $findString;
}

What this does is take the path returned by the webserver and run it through my simple subroutine, which has a choice of eight different search engines. It pseudo-randomly chooses one, appends the requested path to that engine’s query URL, then blind-forwards the user to that search engine with a properly formatted query for what was given (in this case, my directory), in hopes that they find whatever they think they are looking for. Simple, and kind of cute.
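For completeness, here is the wiring that populates that variable. The script is installed as Apache’s 404 handler via ErrorDocument, and when Apache invokes it, it fills $_SERVER["REDIRECT_URL"] with the path the visitor originally requested. A sketch, with an illustrative filename:

<?php
// Installed as the 404 handler in .htaccess, e.g.:
//   ErrorDocument 404 /missing.php       (filename illustrative)
// Apache populates REDIRECT_URL with the originally requested
// path, which becomes the search string.
header("Location: " . randomSearchString($_SERVER["REDIRECT_URL"]));
exit;
?>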

Since I’ve broken my file manager out of the main Rollator system, it has lost a few integral pieces that I feel should be there. Every item I write has a small ‘comment’ function associated with it, allowing end users to offer their insights, suggestions, or even just a simple smiley if they find some of my ramblings useful.

As I rewrote the file manager separately, I opted not to carry over the ability for end users to comment in the initial revision. I’ve since re-implemented my universal comment function for the file manager. In doing so, I restored a bit of functionality to my administrative interface that I had written before my Apache mod_rewrite rulesets came along to beautify things.

This function utilizes a variable cryptically called RURI, which stands for ‘Return URI’. It contains the localized server path of the page from whence the comment form was called. This trivial little function makes data modifications nearly re-entrant, returning the user to the same place they were before they entered their comment.
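A minimal sketch of the idea, with hypothetical names standing in for the Rollator internals (save_comment() is invented, and I use REQUEST_URI for illustration where the real RURI holds a localized server path):

<?php
// On the page displaying the comment form: stash the current
// path in a hidden field so the handler knows where to return.
printf('<input type="hidden" name="RURI" value="%s" />',
    htmlspecialchars($_SERVER['REQUEST_URI']));

// In the comment handler: store the comment, then bounce the
// user straight back to the page they came from.
save_comment($_POST);                  // hypothetical Rollator call
header('Location: ' . $_POST['RURI']);
exit;
?>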

Aside from the elegance of sparing the end user any ‘beneath the hood’ sections of the software, they are able to instantly see their entry. Usually.

Due to the volatile nature of my software section and my front page, I have to do what I can to minimize system impact. These two sections of my site are the ‘hardest hit’, and thus I have opted to implement caching for both of them.

The practical upshot is that these pages load almost instantly, and the server does not get bogged down with the handful of SQL calls used to populate my variables. Of course, it does not make sense to ‘break cache’ every time a user enters a comment, so there is a small delay between a user entering a comment on one of these pseudo-static pages and when it actually shows up on a rendered page. I’ll investigate a means to update the ‘static’ section of the pages whilst leaving the comment section dynamic, as that is its nature.

This is certainly going to be an interesting task, as the way I cache the page is horribly simple. Feel free to skip the rest, as what follows is horribly geeky commentary, unless you enjoy that sort of thing.

What I do is create a hook before the page renders, populate and recurse through my data, and save all of it to an overloaded variable. Then I compress this data with zlib and store it in its own table, with a hash as the reference. I pass this hash to the browser as an ETag, which the browser holds onto. When the browser presents its ETag on a later request, Rollator checks it prior to rendering, and if it matches one in the cache, it serves the cached copy, decompressing on the fly as need be.
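A minimal sketch of that flow, assuming hypothetical helpers (render_page(), cache_fetch(), and cache_store() stand in for the real Rollator hook and the SQL against the cache table):

<?php
// If the browser presents an ETag we have cached, skip rendering
// entirely: decompress the stored copy and serve it.
if (isset($_SERVER['HTTP_IF_NONE_MATCH'])) {
    $etag = trim($_SERVER['HTTP_IF_NONE_MATCH'], '"');
    $blob = cache_fetch($etag);        // hypothetical: SELECT from the cache table
    if ($blob !== false) {
        header('ETag: "' . $etag . '"');
        echo gzuncompress($blob);
        exit;
    }
}

// Otherwise render as usual, hash the output, and stash it
// compressed under that hash before sending it along.
$page = render_page();                 // hypothetical Rollator hook
$etag = md5($page);
cache_store($etag, gzcompress($page)); // hypothetical: INSERT into the cache table
header('ETag: "' . $etag . '"');
echo $page;
?>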

So, I need to discern either a more appropriate way of caching data than creating the whole page pseudo-statically, or a more optimal way of parsing and proffering comments. I know which will be more challenging, and hopefully more rewarding.