
I’ve been using Evernote for a little over a year. No, it’s not a new IDE or a text editor. It’s a note-taking software alternative to Microsoft’s OneNote, and it’s free (as in “free beer”, but proprietary).

And no, Evernote isn’t a revolutionary new way to deal with software; it just makes a few routine things easier to do, like

  • I take tons of screenshots when I need to, either with Evernote’s screen clipper or the old-fashioned way, pasting them into a note. Evernote does OCR on them, so I can search for text in the screenshots. I don’t have to maintain an elaborate directory structure or think up descriptive filenames.
  • I park test code that I probably don’t want but don’t want to delete. I put URLs, test plans and other notes there and give all the notes a single title, for example the Jira issue number. Search for the issue, and voilà, all my notes and screenshots are there.
  • Almost an afterthought, but notes can have checkboxes, which makes for awesome TODO lists.
  • Did I say I love the search?
  • It’s in the cloud, so I can access it from anywhere.
  • And yeah, work is only part of life, so it’s a much nicer way to jot down quick thoughts, TODOs and links than e-mailing them to myself. Notes can be tagged too, so I can keep track of ideas, financial things, work and personal notes easily.

I just cannot imagine a life without a diff tool, but this comes pretty close too 🙂


My grad school lets me access ACM publications whenever I access them from an internal lab computer. When I’m home and want to read something, though, I’ve had two options so far: hope that the authors have copies on their public home pages and search via Google Scholar, or grab the link, use wget to fetch the PDFs over an SSH terminal and then SCP them across. Neither was fun, and the solution turned out to be simpler than I thought.

The easiest approach I could think of is to tunnel HTTP requests over SSH so that the other party (ACM etc.) sees the requests as coming from a lab computer and gives me access. In PuTTY this was pretty straightforward: just add a forwarded port from a local port (I picked 3129) with a blank destination and “Dynamic” as the connection type.

[Screenshot: configuring PuTTY to proxy web traffic]

The only gotcha I noted in configuring Firefox was that I had to enter localhost and port 3129 as the SOCKS host, not as the HTTP proxy, and ensure that the SOCKS version was 5 (the default choice).

[Screenshot: proxy configuration in Firefox]
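Incidentally, if you’re on a machine with OpenSSH instead of PuTTY, the same dynamic forwarding is a one-liner (the user and host names below are just placeholders, obviously):

    ssh -N -D 3129 me@lab-machine.university.edu

Point Firefox at localhost:3129 as a SOCKS v5 host exactly as above, and the requests come out the other end looking like they originate from the lab machine.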

Speaking of tunnels, I also used Stunnel recently for a project and found it to be quite nice as well; it offers some features PuTTY doesn’t, like logging the individual connections. Firefox remembers the last proxy settings, so enabling and disabling the proxy is not difficult. For those not so fortunate, there are numerous proxy manager plugins for Firefox, like SwitchProxy and FoxyProxy. I can’t vouch for either, but I think I used SwitchProxy some time back and thought it was neat.

I found this article to be very helpful in figuring things out. Another article shows a slightly more complicated approach with proxy auto configuration scripts and using Netscape Navigator’s profiles for managing proxies.

Workplace colleague Sukitha showed me a couple of neat tools. Nothing fancy, but just adding a little more productivity to an otherwise boring life.

First up is Console, a wrapper for the Windows command prompt (or any other shell you might be running), featuring multiple tabs, a resizable console area (I really missed this in cmd.exe), configurable copy/paste options (I’m on Shift+Select to copy and scroll button to paste), transparency, and the ability to hook up any shell or command-line tool as the shell.

Second was Launchy, a cool tool that’s like a totally revamped Run dialog box. It features a skinnable UI, plugins (it does Google and Wikipedia searches, among others), and the ability to show a list of matching commands, folder locations and so on once you’ve typed the first few letters.

I’ve been bored at work, and figured I could search for a way to perform CVS operations from inside VS.Net. Nothing fancy like the integration VS does with VSS, just a way to, say, do a CVS diff from inside VS without having to browse to the directory and do it there.

Since we use TortoiseCVS at work, these things are already a lot easier, but the part about locating a file inside a huge directory structure is still a real time killer. I started by looking at what cvs.exe in the TortoiseCVS directory could do, but it required the server and CVS root to be specified, didn’t like absolute paths and wanted pathnames the Unix way. Even if you hardcoded those and called cvs diff, it would still show the diff in the console, not in my diff tool. Getting that done would involve writing a macro to check out the file to a temporary location and then calling the diff.

Then I happened to come across TortoiseAct, first here and then in a mailing-list thread. And guess what: all we need to do is pass the required action and the filename to TortoiseAct, and it takes care of everything: the CVS server and root, authentication, checking out to a temporary location and opening my favourite diff tool.

The next step is to add it as an External Tool in VS.Net. For example, to invoke CVS Diff, create a new External Tool with Command set to the path of the TortoiseAct executable (which for me is “C:\Program Files\TortoiseCVS\TortoiseAct.exe”) and Arguments set to “CVSDiff -l $(ItemPath)”. The file TortoiseMenus.config lists the other commands TortoiseAct can run. Now it’s just a matter of opening a file in the VS.Net editor and starting the CVS Diff external tool!
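For reference, the finished External Tool entry ends up looking something like this (the title is just whatever you want to call it; the Command and Arguments are exactly as above):

    Title:      CVS Diff
    Command:    C:\Program Files\TortoiseCVS\TortoiseAct.exe
    Arguments:  CVSDiff -l $(ItemPath)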

Debugging itself is not too bad, but when you have to attach your debugger to a running task, it can become a pain. Faced with this problem, I found a neat little macro (probably from here, but I can’t be sure) that attaches the debugger to a known process. I managed to add that to my custom toolbar, along with another custom toolbar action that launches my application’s EXE (I work on the DLLs, not the app itself). Now launch and debug are right there in VS.

A workplace colleague also pointed out how to search Google for any bit of text (e.g. code, an error message) from within VS.Net. The comments there had a few other nifty ideas too.

Looking to send an audio clip to a friend, and not having a decent MP3 encoder around, I came across Media-Convert, which promises to convert files between various audio, video, mobile phone ringtone, image, document and archive formats, and some more that I’m sure I missed.

Instead of making me download the WAV file in question (who keeps uncompressed audio around, anyway?), Media-Convert let me simply point it to the URL of the source file and specify my output format, and it presented me with a link to the MP3 file, active for 24 hours.

I personally found the MP3 file size (at 44kHz / 128kbps / CBR) to be too big, but I didn’t really get much of a chance to test the encoder. It did seem to work, though, and I’ll probably need to test it again some day. More details will have to wait till then.


Engadget ran an article on Hitachi’s new 1TB hard disk drive. According to the comments, in spite of its supposed $400 price tag, it’s still the same price per byte as any other drive out there, just in a single unit. For those wondering what they can do with 1TB, the CEO of Seagate has some ideas.

Yahoo! Mail presented me with this when I logged in today:

[Screenshot: chat inside Yahoo! Mail]

Not too surprising, really, but Yahoo!, once a leader in web applications, seems to have ended up playing second fiddle to Google.

I was one of the first to get the 100MB upgrade (in 2003, if I remember right), then the 1GB upgrade, then the upgrade to the new beta interface, and it looks like my early-preview luck still holds. Came across some reviews too.

SharePoint Portal Server 2003 is my first memory of an HTTP application (that sounds more appropriate than “web application” 🙂 ) that allowed web content to be dragged and dropped around. Considering that SPS 2003 only worked in IE, Google IG is definitely a good step ahead, at least in my books.

Then again, I came across this post on CodeProject, and the guy was right: his site leaves you speechless! It just has to be seen, so head over there right now. Be warned though: it ran OK on IE7, but failed on both IE6 and Firefox.

Hunting around for a disposable e-mail address for a site registration, I came across Mailinator, a no-registration, no-login, easy-to-use service. At any crappy site that demands an e-mail address to ‘verify’ you, simply enter <anything>@mailinator.com, and a mailbox is automatically created that holds e-mail for ‘3-4 hours’. Checking it is a snap: no login, just enter the e-mail address.

Alternatively, each visit to the home page suggests a brand-new, pseudorandom, 14-character mail alias. I thought the idea was really neat. I have come across SpamGourmet before, but for quick-and-dirty, do-it-and-forget-it things I’d much rather use Mailinator. Of course, SpamGourmet is more feature-laden and configurable, so it’s no doubt a very nice tool too.

The guy who created Mailinator blogs about his creation, and it makes for a very interesting read. About.com mentioned both of these in an article too.

I came across a post about searching Google for MP3 files. Even though it didn’t do a whole lot, it seemed kind of interesting, so I checked out how Firefox’s search plugins are done and created a new search plugin (adding support for OGG files too 🙂 ).

It was actually very easy to do: all I did was copy the google.xml file from <FIREFOX_INSTALL_DIR>\searchplugins\ and modify the search Param value. It didn’t feel like “enough work”, so I also extracted the image from the XML file (courtesy of this online Base64 encoder/decoder) and replaced it with my own. Once the file is placed in the searchplugins directory, restarting Firefox loads it up.
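For the curious, the edited plugin ends up looking roughly like this. I’m reconstructing the Firefox 2 search plugin format from memory, and the Google query operators here are only an illustration of the idea, so treat it as a sketch rather than the actual contents of my file:

    <SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/">
      <ShortName>Google MP3/OGG</ShortName>
      <Description>Google search restricted to open directories of MP3/OGG files</Description>
      <InputEncoding>UTF-8</InputEncoding>
      <Image width="16" height="16">data:image/x-icon;base64,...your own icon goes here...</Image>
      <Url type="text/html" method="GET" template="http://www.google.com/search">
        <Param name="q" value="{searchTerms} intitle:&quot;index.of&quot; (mp3|ogg)"/>
      </Url>
    </SearchPlugin>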

Give it a shot. Alternatively, download my file.

Since sometime around June last year, I’ve been working with ANTLR at work. Even though I haven’t had any formal education in compiler theory, this certainly pushed me into the deep end, and there was a lot to learn.

Everything went fine till we started to write a parser for PL/SQL somewhere in August, and that brings me to my rant: in PL/SQL, apparently, records can have columns named “type”. Variables can be named “ref”. Cursors can be named “count”. And the list goes on. I personally think that
(a) any language that allows keywords to be used as variable names is fundamentally flawed and leads to confusing code, and
(b) any programmer who re-uses keywords as variable names violates basic software engineering principles.

Getting the actual parser running was not too difficult. ANTLR’s grammar page had a few PL/SQL grammars, but none worked with my files. BNF Web was very interesting, and I spent a couple of days visiting all of their pages and copying their BNF, but that didn’t work either. Then I actually started going through my workplace files and created a grammar from scratch. That turned out to be highly ambiguous, so I re-did it, left-factoring symbols. This final one worked.

The real headache came when I started parsing files to test the grammar and saw all of the above “unconventional” uses. In the end, I gave up trying to fix the grammar piecemeal and decided to try to make the parser context-sensitive.

Formal definitions aside, my idea of a context-sensitive language is one where, as I pointed out above, “ref” is a keyword only when used as part of “REF CURSOR”, and so on. I still think that languages which aren’t context-sensitive (and I like my languages strongly typed, too) are easier to develop with. As you can imagine, I do not like JavaScript, and I loathe PL/SQL.

I came across some articles that suggested a few approaches, but I either failed to understand them properly or simply could not get the results they promised.

I ended up trying at least five different approaches to getting the parser to recognize context. Syntactic predicates to override testLiteralsTable() required tight integration between the lexer and the parser. Overriding testLiteralsTable() itself didn’t work, since it required lookahead, which advanced the lexer and overwrote the text to be resolved. Parsing optimistically as a keyword, rewinding on a parser exception and trying again as an identifier felt promising, but there was no way to re-invoke the entire parser chain, and such a re-invocation could be needed at any point of the call stack.

Finally, something worked in a limited situation. I wrote an override for match() that sets a flag and clears it immediately before returning. Now, when the grammar expects “BULK COLLECT”, the generated parser calls match(LITERAL_collect) immediately after matching BULK; the flag is set, the lexer checks the flag to know it’s a keyword, and the flag is cleared after matching. At any other location in the code, when a COLLECT is met the flag is off, so it must be an identifier.
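Here’s a rough sketch of that override, written from memory: the class names (PlSqlParser stands in for the generated parser) and the exact ANTLR 2.x signatures are illustrative rather than the code I actually used:

    // Sketch only: PlSqlParser stands in for the ANTLR-generated parser.
    public class ContextAwareParser extends PlSqlParser {

        // Shared with the lexer, which checks it before promoting an
        // identifier into a keyword via the literals table.
        public static boolean expectingKeyword = false;

        public ContextAwareParser(antlr.TokenStream lexer) {
            super(lexer);
        }

        public void match(int tokenType)
                throws antlr.MismatchedTokenException, antlr.TokenStreamException {
            expectingKeyword = true;       // e.g. match(LITERAL_collect) right after BULK
            try {
                super.match(tokenType);
            } finally {
                expectingKeyword = false;  // cleared immediately, as described above
            }
        }
    }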

That sounds nice, but it didn’t work very well either, because many keywords are optional and the majority of matches happen after a lookahead. So the problem became distinguishing a keyword from an identifier during lookahead and buffer fills.

Overriding the filter (a TokenStreamHiddenTokenFilter, used to preserve whitespace) to look ahead once again resulted in the lexer advancing. Buffering tokens was also very tricky.

Finally, I took the easy way out. I wrote a second, very simple lexer/parser combination that just recognizes a stream of keywords, identifiers and literals, without trying to find any higher-level structure in it. The token types are routed to a list and then to an array, and only then does the real parsing begin. The real parser can now look ahead as well as back, all the way to the boundaries of the file, and whenever it determines that a keyword token is actually an identifier, the array is updated at the corresponding location.
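Stripped of the ANTLR plumbing, the buffer the second pass works over is conceptually just this (a simplified sketch, not the actual classes):

    import java.util.ArrayList;
    import java.util.List;

    // Simplified sketch of the token buffer produced by the first pass.
    class ParsedTokenBuffer {
        final List<String> text = new ArrayList<String>();
        final List<Integer> type = new ArrayList<Integer>(); // token types from the simple first-pass lexer/parser

        void add(String tokenText, int tokenType) {
            text.add(tokenText);
            type.add(tokenType);
        }

        // The real parser can look ahead and behind over the whole file;
        // when it decides that, say, the COLLECT at position i is really
        // being used as an identifier, it rewrites the buffered type in place.
        void retagAsIdentifier(int i, int identifierTokenType) {
            type.set(i, identifierTokenType);
        }
    }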

I have just scheduled a complete bulk parse of 5023 PL/SQL files, totalling 180MB. After only one day of testing and fixes, the parser has chewed through 4892 of them (169MB). The run took a little over an hour, compared to the roughly two and a half hours my initial “keywords everywhere” parser needed. The overheads include the repeated invocation of the parser executable and the two-pass parsing mechanism, but the results are certainly very promising.
