New Feature: Deep Linking

Yesterday I did a bit of coding and added a new feature to Sunsetter: deep linking. When you make a query to find a sunrise or sunset forecast, the address bar updates with a link you can copy and share, or send to friends so they see exactly the same page you were on.

For example, here’s a link to the alignment between the Tokyo Skytree and Fujisan:

http://www.sunset.io/#pov=35.71,139.810744&poi=35.363976,138.732217

Fujisan seen from the Tokyo Skytree
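
Not the app’s actual code, but as a rough sketch of what such a link encodes, the fragment can be pulled apart like this (assuming pov is the camera position and poi is the landmark you want the sun to set behind):

from urllib.parse import parse_qs

def parse_deep_link(fragment):
    """Split a '#pov=lat,lng&poi=lat,lng' fragment into two coordinate pairs."""
    params = parse_qs(fragment.lstrip('#'))
    pov = tuple(float(x) for x in params['pov'][0].split(','))
    poi = tuple(float(x) for x in params['poi'][0].split(','))
    return pov, poi

print(parse_deep_link('pov=35.71,139.810744&poi=35.363976,138.732217'))
# ((35.71, 139.810744), (35.363976, 138.732217))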

Early November or February should make for some nice pictures!

Feel free to share some nice alignments with your friends.

Predicting Manhattanhenge

There has been a lot of talk lately about Manhattanhenge, the two days a year when the sun sets in alignment with New York’s street grid (thank you, city grid design).

Manhattanhenge By @NYCphotos-flickr

It’s awesome to see so many pictures like this popping up on Flickr and Instagram, because it also lets me confirm that my little app Sunsetter is actually giving correct results:

Manhattanhenge prediction on Sunsetter

Note: the app is configured to predict the moment the sun’s lower limb touches the horizon, rather than the civil sunset, when the sun has completely disappeared behind it, as the former makes for a better picture.
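
For the curious, the difference boils down to the altitude threshold used for the sun’s center. Here is a rough sketch of that idea with the pysolar library (an illustration only, not the app’s actual code, and refraction is ignored):

from pysolar.solar import get_altitude

SUN_SEMIDIAMETER_DEG = 0.27  # apparent radius of the solar disc, about 16 arc-minutes

def lower_limb_on_horizon(lat, lon, when):
    """True once the sun's center has sunk to one semi-diameter above the
    horizon, i.e. the lower limb is touching it; the usual sunset tables
    instead wait until the whole disc has dropped below the horizon."""
    return get_altitude(lat, lon, when) <= SUN_SEMIDIAMETER_DEG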

Tokyo is not an easy city for taking such pictures, but many cities in the US have the potential. For example, I’m hoping to see plenty of pics from San Francisco on September 24th.

SF-henge

Pet Project: Sunsetter

At home I have a nice view of Fujisan to the south-west. I often take pictures of it in winter, when the skies are so clear.

Many times I’ve told myself it would be nice to take a picture with the sun setting right behind the mountain. I searched the internet for an app that would tell me when this happens, but all I could find were apps that tell you where the sun sets on a particular day, not the other way around. So I decided to build it myself…

Sunsetter is a simple Python web app running on Heroku. It’s based on the brilliant pysolar library for the hardcore astronomical calculations and binds it all to Google Maps with a dash of Ajax and JavaScript.
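
The core idea, very roughly sketched below (an illustration with pysolar, not Sunsetter’s actual code), is to compute the bearing from the viewpoint to the target and then scan the calendar for evenings when the sun sets on that bearing:

import math
from datetime import datetime, timedelta, timezone
from pysolar.solar import get_altitude, get_azimuth

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def sunset_azimuth_deg(lat, lon, day):
    """Azimuth of the sun at the first minute its altitude drops below zero."""
    t = datetime(day.year, day.month, day.day, tzinfo=timezone.utc)
    previous = get_altitude(lat, lon, t)
    for _ in range(24 * 60):
        t += timedelta(minutes=1)
        current = get_altitude(lat, lon, t)
        if previous >= 0 > current:
            return get_azimuth(lat, lon, t)
        previous = current
    return None

# Does the sun set behind Fujisan, as seen from the Skytree, on a given day?
pov, poi = (35.71, 139.810744), (35.363976, 138.732217)
target = bearing_deg(pov[0], pov[1], poi[0], poi[1])
azimuth = sunset_azimuth_deg(pov[0], pov[1], datetime(2012, 2, 10))
if azimuth is not None and abs(azimuth - target) < 0.5:
    print('alignment!')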

The app still needs polish, but the data it gives out should be pretty accurate and reliable at normal altitudes (standing on top of a very tall mountain overlooking a wide plain will change the distance to the horizon and throw the calculations off a bit).
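
To give a feel for the size of that effect, here is a hypothetical helper (not part of the app) computing the geometric dip of the horizon for an observer above flat surrounding terrain, refraction ignored:

import math

EARTH_RADIUS_M = 6_371_000

def horizon_dip_deg(height_m):
    """Angle by which the visible horizon sits below the astronomical one;
    the sun sets a little later than a sea-level prediction as this grows."""
    return math.degrees(math.sqrt(2 * height_m / EARTH_RADIUS_M))

print(round(horizon_dip_deg(634), 2))  # from the top of the 634 m Skytree: about 0.81 degrees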

If this app is useful to you, or if you have suggestions, I’d love to hear from you in the comments.

Update (2012-06-03): I’ve open-sourced the code on GitHub.

Foreman and Procfile tips & tricks

Lately I’ve been playing around with the Heroku stack, and I’d like to share a few little tricks that might be common knowledge but that I haven’t seen mentioned in the standard Heroku documentation.

So the docs tell you to set up your Procfile like this (for a Python app):

web: gunicorn app:app -b 0.0.0.0:$PORT -w 3

You can then run the web server on your machine for development with the command:

foreman start

But if your app also uses memcached, Redis, Postgres or other services, you have to open as many additional terminal tabs to run each of them (I don’t want those daemons running all the time on my all-purpose MacBook Air).

What you can do to make your life easier is create a new file, Procfile.dev, add it to .gitignore (so that it is not uploaded to Heroku and does not affect your production environment), and list all those services in it:

web: python app.py
memcached: memcached -v
coffee: coffee --watch --output static/js/ --compile lib/

And run it with:

foreman start -f Procfile.dev

This launches my app with plain Python (easier for quick debugging than gunicorn), starts my memcached instance and even compiles my CoffeeScript on the fly, so I can edit freely while testing my changes.
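
For reference, a minimal app.py that works with both Procfiles could look like the sketch below. I’m assuming a Flask-style app purely for illustration; the point is simply that gunicorn imports app while python app.py starts the built-in development server:

# app.py -- `gunicorn app:app` in production, plain `python app.py` in development
import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'hello'

if __name__ == '__main__':
    # Only reached by `python app.py`; gunicorn imports `app` directly.
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 5000)), debug=True)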

Looking at your terminal, you’ll even see each service’s logs, all pretty and color-coded.

WWDC Keynote on MacBidouille.com with App Engine

No bandwidth, no servers, no infrastructure, no money required. Just a bit of Python and a tad of JavaScript, and you can live-stream an event to 10,000 people concurrently (a theoretical figure; Analytics said the live-blog site had 30,000 visits in all) within Google App Engine’s free quotas.

keynote requests per second

This is the graph from my App Engine dashboard the morning after the WWDC ’09 keynote, which MacBidouille.com live-blogged in French through my application. We always had scaling problems while Google’s infrastructure was in beta and we were bound by smallish quotas, but since they fully opened the service a couple of months ago, the sky is the limit.
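
For a sense of the kind of handler that keeps a poll-heavy page inside the quotas, here is a rough sketch with the webapp framework of the time (an illustration only, not the actual MacBidouille code; load_entries_as_json stands in for a hypothetical datastore query). The Ajax poll is answered from memcache, so almost no request touches the datastore:

from google.appengine.api import memcache
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class UpdatesHandler(webapp.RequestHandler):
    def get(self):
        payload = memcache.get('liveblog_json')
        if payload is None:
            payload = load_entries_as_json()                 # hypothetical datastore query
            memcache.set('liveblog_json', payload, time=10)  # rebuild at most every 10 s
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(payload)

application = webapp.WSGIApplication([('/updates', UpdatesHandler)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()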

Blacklisting words in Twitter Tools

There’s a new game trending on Twitter these days, Spymaster, and it likes to write stuff out to your Twitter feed. There’s a lively controversy running on the web over whether these tweets are spam or not. I’m playing, and I’ve set it up to tweet only level-ups, which is pretty minimalist.

However, I am also running the Twitter Tools plugin to copy my tweets from Twitter back to my blog. And while I’m fine with exposing my Twitter followers to #spymaster notifications, I’d rather not show them to my blog readers.

There is currently no way in the plugin to exclude tweets based on words, so I made a patch for it:

blacklist in the twitter tools options menu

You can download the patch for the current 1.6 version and apply it with the following command:

patch twitter-tools.php < twittertools-blacklist.patch

I hope this feature will make it into the next version of the plugin.

Meta-tags proposal for the new DiggBar

Many think the DiggBar is evil. I don’t. I find it ingenious, especially the digg.com prepending, which automatically generates a shortened URL for you, as well as a “Submit to Digg” button if the page’s URL has not been submitted yet.

prepending digg.com for the digg bar

unsubmitted diggbar

However, when you submit a link to Digg this way, the title and description of the item are empty by default, placing the burden of filling in these fields on the submitter. They need to go back to the page, copy the title, copy some text from the article or make up a better description, which is all a pain and a big hurdle…

Digg submission - all empty

Digg offers webmasters a way to create a link that pre-fills these fields with the data you want your readers to use. It works by simply setting some parameters in the URL used as the link’s target:

http://digg.com/submit?url=example.com&title=TITLE&bodytext=DESCRIPTION&media=MEDIA&topic=TOPIC

But this process is not compatible with the DiggBar and its URL prepending feature. What we webmasters need is a way to define these values that works every time.

Why not meta tags? Step 1 of the two steps in the screenshot above is Digg downloading the page to check that it really exists and to provide potential thumbnails for the submission. At this stage, they could read a couple of meta tags in the <head> of the page and use them to pre-fill these fields.

<meta name="digg-title"  content="My title here" />
<meta name="digg-description" content="My 350 characters excerpt." />

It would then be trivial to write a WordPress plugin that generates these meta tags from your post title and excerpt (or similar concepts in other CMS platforms).
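
As a tiny illustration (a hypothetical Python helper, not an existing plugin), generating the tags from a title and an excerpt is indeed only a few lines:

import html

def digg_meta_tags(title, excerpt):
    """Render the two proposed meta tags, trimming the description to 350 characters."""
    return (
        '<meta name="digg-title" content="%s" />\n'
        '<meta name="digg-description" content="%s" />'
    ) % (html.escape(title, quote=True), html.escape(excerpt[:350], quote=True))

print(digg_meta_tags('My title here', 'My 350 characters excerpt.'))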

If this is a feature you would like to see, I invite you to digg this blog post: http://digg.com/d1p1YV

Twitter integration

As you might have noticed, over the past weeks I have integrated my Twitter messages more tightly into the blog. Where they used to just show up in the sidebar, they are now posted here simultaneously as full blog posts, albeit with a special minimalist styling.

tweetshot

You can clickity-click on the cute blue birdie to go to the post page and comment, as with every other post, on the inane stuff I post there. This wonder of technology is brought to you by the Twitter Tools WordPress plugin and my awesome coding skills.

Alas, I know some of you already follow my tweets on your Twitter account and might find the double-punch effect of reading these messages twice, in your Twitter timeline and in your RSS feed reader, a bit overwhelming.

feedsanstwitter

Which is why I created an extra RSS feed you can subscribe to in order to get only the fat, fleshy blog posts, free of the 140-character tweets. You can switch to that; I won’t begrudge you…

WWDC Keynote

Tonight (at least in Japan time) is Steve Jobs’s WWDC keynote. It is widely expected that he will announce the new 3G-enabled iPhone, and I am secretly hoping he will give us a release date for Japan live from Moscone West.

These past weeks, I’ve been developing a live-blogging system for my other website to cover the event minute by minute. It’s a challenge to devise a system, both software- and hardware-wise, capable of handling the huge load involved in such a big event. We are expecting more than 50,000 people to follow the keynote via MacBidouille.

Until now, all our attempts have failed. But this time I’m trying something new: an Ajax-based interface running on the new Google App Engine platform. It’s been really fun using a technology that is new to me, Python, on Google’s own infrastructure.

I have no idea whether this new system will withstand the load this year, especially with the tight quotas in place during App Engine’s beta phase. But I sure hope Steve announces that the iPhone launches in Japan starting tomorrow, in which case I’ll take the day off and rush to the Apple Store in Ginza as soon as it opens…

Playing with Twitter

Twitterific for Mac

I started using Twitter today and implemented it in my blog’s sidebar with the cool Twitter Tools plugin. You can see my latest entries to the right, under the search box, on the main page of the blog.

I’ll use it to post all the one-liner updates that I never bother to post here for fear of breaking my pretty blogging structure. It’s also usable from my keitai (mobile phone), so I can post on the go.

You can see my Twitter profile and add me if you’d like. If you plan on signing up, be careful: I can see it becoming really addictive.