Category Archives: Tools

Perl debug with strace

Short “FTR”

Not really specific to Perl, but handy anyway.

You can use the strace utility to inspect the syscalls (filesystem and network operations are usually of most interest) that a process is making.

Here's, for example, how you can see all network activity for a given process:

strace -p $PID -f -e trace=network -s 10000
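
In the same way, here's a sketch for the filesystem side mentioned above ("file" is a standard strace trace class, just like "network"):

strace -p $PID -f -e trace=file -s 10000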

Also, if you have a stuck process, you can check whether it is waiting on a read from some filehandle, and then see what that filehandle points to using

lsof -p $PID

Filehandles can also be found in /proc/$PID/fd/ – so if you run strace on your process and see e.g.

write(3, "foo\n", 4)

you can check the aforementioned lsof output (lsof -p $PID or lsof | grep $PID) and see this:

perl       9014 bturchik    3w      REG  253,3        4     26758 /tmp/hung

or

ls -l /proc/9014/fd/3

and see this:

l-wx------ 1 bturchik users 64 Aug  6 09:43 /proc/9014/fd/3 -> /tmp/hung

Distributed development keynotes

Attended UX Lausanne for the second time this year – and while it was rather small (which is actually good, as you're better connected to the speakers in terms of Q&A and follow-up), it was not solely about UX but also a lot about workflow and project development – something that seems to be called Product Experience.

One of the talks was given by a guy from Automattic (Davide Casali) – at first I thought they were the WordPress authors, but it turned out they're (large) contributors to WordPress and the maintainers of WordPress.com. The company is distributed by nature, i.e. everyone works remotely. Like, everybody – a few colleagues might share the same workspace, but even then they're not on the same team. Now that is definitely not the case for everyone, but since many companies nowadays allow at least occasional remote work, I thought of sharing the key points here – I find them quite interesting, and some of them could be applied in other settings.

So here are their principles:

  • before creating a new team (or starting a new project), meet in person for a “Minimum Viable Discussion” to quickly understand the key points, sort out initial discrepancies in domain knowledge and prepare a brief plan for the next few steps
  • transparently share all changes, specs and discussions – “if it's not transparently shared, it doesn't exist” (a more office-bound, sticky-note-flavored version of this proverb was “if it's not on the wall, it doesn't exist”)
  • have a few communication spaces for different needs:
    • Real-time channel (like Slack or Jabber or, sigh, Facebook) for immediate personal and team communication
    • Team space on some shared documents resource – as a WordPress-focused company, they use the P2 theme for WordPress, which allows posting comments on posts and comments on comments – basically hosting a discussion on a subject. The point is to have something that any employee can subscribe to or view, and that team members check daily to see/discuss updates or refer to while developing
    • “Stable” documentation storage for more permanent things like articles or specs of deployed products etc.
  • each team (BTW they also keep teams small – 4-5 people with different skills) focuses and collaborates on one thing (a project or task)
  • Independent individuals, i.e. everyone maintains their own priorities. Tasks are managed with any suitable tracker (they use Trello)
  • “zero waste” in terms of no bureaucracy regarding e.g. permissions or access
  • standup-style updates are posted to the team channel each time a team member starts their day – along with an overview of progress, this gives everyone an indication of when a person is available
  • live meetups ~4 times a year – 3-5 days of work, discussions and some off-work time together
  • teammates (but not projects) are usually picked from within a few time zones of one another to make live communication easier
  • everyone can follow any other team by subscribing to their shared documentation channel

This is all “JFTR”, but some might really consider adopting a few of these practices to improve daily work and communication.

Another reference worth keeping: https://www.helpscout.net/blog/agile-remote-teams/

Git goodies

Oh, Git! The thing that makes our developers' lives so much easier, yet – as, perhaps, any involved system – has so much that escapes most people's attention. It's far from original to post this XKCD gag here, but it's just too true:

Git by XKCD

Me, myself – I don't really understand Git. I mean sure, I've read a few articles on its structure, what acyclic graphs are, how it works etc., but when things go awry – I'm still puzzled.

To help that a little, I started to collect a set of shortcuts and tricks to make frequent problems less of a hassle. These come in the form of scripts – while I could make aliases, I somehow prefer separate scripts, as they let you use bash syntax when a task is a little more complex than a simple shortcut. What's also nice is that if you name your file (or alias, I suppose) “git-kill-all-humans”, you can then run it as “git kill-all-humans” and even see it in the tab completion for Git commands!
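
For illustration, here's a minimal sketch of such a script – it assumes ~/bin is on your PATH, and the command itself is obviously just a demo:

cat > ~/bin/git-kill-all-humans <<'EOF'
#!/usr/bin/env bash
# demo subcommand body – put something actually useful here
echo "Exterminate!"
EOF
chmod +x ~/bin/git-kill-all-humans
git kill-all-humans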

The full set can be found in the “git-tools” directory at https://bitbucket.org/hydralien/tools; below are just a couple of the most used ones.

  • git-forget – use as “git forget .” to lose all the uncommitted changes, or “git forget filename” to revert just a specific file
git checkout -- "$@"
  • git-origin – to get the remote URL of the repository, useful to share or to clone other repository that resides at the same server (so I just need to change the name)
git config --get remote.origin.url
  • git-out – to see what changes are scheduled to be pushed to origin
git diff origin/`git rev-parse --abbrev-ref HEAD` -- "$@"
  • git-import – get a changeset from a different host; useful when development happens on the same repository cloned to many instances – sometimes changes end up on the wrong instance and need to be moved without getting them into the repository
curdir=`pwd`; ssh $1 "cd $curdir ; git diff"|git apply 
  • git-rush – probably the most used command – when the repository is large and there are many people pushing to it, getting your changes into the origin can be a daunting process. So this one just retries until it succeeds – it's a little overcomplicated for stats reasons (and it uses another shortcut, so there are two of them here), but here it is:
attempt=1 ; time until git repush; do let "attempt++"; echo "No luck, once again..."; done ; echo "Finished in $attempt tries" ; date

and the git-repush:

git pull --rebase && git push

Bottom line here… Git is good – it just takes a few shortcuts to fully appreciate it =)

And of course there's a hell of a lot more to automate if needed – hooks, configuration etc. etc. etc.
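
For instance, a pre-push hook is just an executable file under .git/hooks – here's a minimal sketch (the test command is an assumption, use whatever your project has):

cat > .git/hooks/pre-push <<'EOF'
#!/bin/sh
# runs before every push; a non-zero exit aborts the push
make test
EOF
chmod +x .git/hooks/pre-push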

AnotherTab Chrome extension FTW!

OK, the Google Chrome new tab page extension I've been (extremely lazily) developing for quite some time is live now! It's not much, really – it just displays one bookmarks folder (I use “Bookmarks Bar”) and launchable extensions, but that's what I use the most myself. Oh, and some Chrome shortcuts – like cookies, passwords etc. – too:

AnotherTab screenshot

It’s on Chrome Web Store – but there’s also a separate page to send around: http://hydralien.net/anothertab/

Oh, and the code is public, too: https://bitbucket.org/hydralien/anothertab

Git grep for Emacs

“git grep” is incredibly useful on large repositories – where regular grep (or awk or whatever) takes minutes to finish, git grep does the job in mere seconds. Very, very useful. The only thing is, I would really like it to work as an Emacs search – with file selection, highlighting etc.
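
(For reference, a plain command-line invocation looks something like this – the pattern and the '*.pm' pathspec are just made-up examples:

git grep -n 'process_order' -- '*.pm'

– it's that, wired into Emacs, that I was after.)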

It didn't take too long to find a suitable solution – but it took some time to tailor it a little. It's basically all taken from https://www.ogre.com/node/447 with some minor adjustments – for instance, I do a lot of development in “remote” mode, i.e. I open files from remote hosts via SSH (hence the replace-regexp-in-string part). Enjoy!

(defun git-grep (search path)
  "git-grep the entire current repo"
  (interactive (list (completing-read (concat "Search for (" (current-word) "): ") nil nil nil nil (current-word))
                     (read-file-name "in directory: " nil "" t)))
  (grep-find (concat "git --no-pager grep -P -n --no-color "
                     (shell-quote-argument (if (> (length search) 0) search (or (current-word) "")))
                     " "
                     (replace-regexp-in-string "/ssh.?:.+:" "" path))))

(provide 'git-grep)

Now if you add this to your .emacs, you can have it launched on some key combination (Ctrl-c g in my case):

(load-file "~/.emacs.d/git-grep.el")
(global-set-key (kbd "C-c g") 'git-grep)

Space Slaves released!

Well, at least these folks:

work_in_progress

are no longer smashing away at tripgang.com – not that they did much, but enough to let them off to whatever they fancy under those opaque helmets.

So… it’s been quite a while. Now the aforementioned TripGang hosts these:

  • KML2GPX (convert Google KML maps into GPX or OSMAND-friendly GPX)
  • MapMarks (search and bookmark travel pinpoints)
  • WikiVert (auto-search for all points from WikiTravel-like text/attractions list)

It’s quite curious how things turn out – the initial idea for the resource was completely different (well, who knows, I might get to it some day after all) – but hey, “whatever works”, right? Hopefully these tools (however immature and weak they are) might be useful to someone (and most hopefully to myself).

Well… bon voyage, there’s not much else to say really.


Exporting Apple Mail filters to Sieve format

What’s this?

It's a script (in AppleScript) that goes through all Apple Mail filters and converts them to Sieve filters, so that you can do your email filtering server-side.

Why?

The answer is, as usual, “because I'm lazy” – I've accumulated a fair bunch of Apple Mail filters along the way, and converting them manually wasn't much fun. And I found no working option on the Internet to convert them.

Code and disclaimer

Code is available at:

https://bitbucket.org/hydralien/tools/src/23bca8d016ef88085c98cb1278174be86dfbba4e/apple/Mail2Sieve.applescript?at=master

Feel free to clone it, submit your patches etc.

NOTE that this has been validated against quite a limited use case – in my filters, I mostly match against Subject and sometimes against CC/To, so it definitely has some issues with other fields. Please review / try loading the results first, and don't disable all your Mail filters right away.

Exporting Mail filters

Just run the script – it will ask whether you want to treat disabled filters as active (useful for re-exporting after you have already disabled the Mail filters), then whether you want to disable the Mail filters (useful when you're confident in your Sieve filter set), and then where to save the results.
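
If you prefer the terminal over Script Editor, the script can also be run with osascript (assuming you're in the directory holding the file from the repository above):

osascript Mail2Sieve.applescript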

Was it fun?

Well… the answer is “look at the code”. On the one hand, writing in AppleScript is quite unlike writing in most “conventional” languages – some constructs are very different, some feel more natural, others more awkward. On the other hand, loop management reminded me of programming a Turing machine – I mean, using THIS as “continue”, really?!

So to conclude – I think it was fun, as any unusual experience is fun in its own (however peculiar) way. And it's the “proper way” for this case – you deal with the official API instead of parsing the XML (which I could, since Mail rules are stored in XML files), because there's no way to foretell where that source would move or how its structure might change eventually. The Mail API is far less likely to do so.

Resources

Some Sieve-related resources FTR:

What’s next?

Have a beer!

Under Siege

This one is another FTR: stress-testing with Siege is a breeze! Here are two quotes from http://blog.remarkablelabs.com/2012/11/benchmarking-and-load-testing-with-siege:

You can have this benchmark run forever, or you can specify a specific period of time. To set the duration of the test, use the -t NUMm option. The NUMm format is a representation of time. Here are some time format examples:

  • -t60S (60 seconds)
  • -t1H (1 hour)
  • -t120M (120 minutes)

To demonstrate running a benchmark, let’s run Siege against this blog:

siege -b -t60S  http://blog.remarkablelabs.com 

and then also “user emulation”:

When creating a load test, you should set the -d NUM and -c NUM options. The -d NUM option sets a random interval between 0 and NUM that each “user” will sleep for between requests, while the -c NUM option sets the number of concurrent simulated users.

Here is an example of a load test of a Heroku application:

$ siege -c50 -d10 -t3M http://some.application.com

And you can pass custom headers (think cookies) with -H.
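
For example (the cookie value and URL are obviously made up):

siege -c25 -d5 -t2M -H "Cookie: session=abc123" http://some.application.com/dashboard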

And what's even better, it's widely available – I've installed it on my Mac through MacPorts (although it's on brew as well) and then unpacked it from an RPM on my GoDaddy SSH shell account (because I couldn't just go and install it there). It worked well in both cases.
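
For reference, the two Mac install routes mentioned above:

sudo port install siege
brew install siege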


Neat little thing, or bash tab-completion for your tools

You know that thing – the magic of having all the options listed in front of you when you [double-]press Tab after typing something in the console? Or a unique option completing itself if there's a match? Of course you do. One thing that bothered me is the frustration when it's suddenly not there.

For general tools it's already fine – they either come bundled with tab completion or you can easily set it up; for instance, there's a setup tutorial for Mac that comes with a Git bundle. One important note on that one: in iTerm, you have to go to Settings -> Profiles and change Command to /opt/local/bin/bash -l for your/the default profile to run the proper bash version.

But then there are your own little tools that start as a one-parameter two-liner but eventually grow into a 30-parameter fire-breathing hydra. And that's when you start missing that tab-completion thing.

But that's easy (for simple cases – see the note below) – you just create a script named, say, mycomplete.bash, containing something like this:

_completecmd()
{
  local complist=`fdisk 2>&1|grep -Eo '^ +[a-z]+'|tr '\n' ' '`
  local cur=${COMP_WORDS[COMP_CWORD]}
  COMPREPLY=( $(compgen -W "$complist" -- $cur) )
}
complete -F _completecmd yourcmd

where _completecmd is a unique function name, yourcmd is the command this completion should apply to, and complist is constructed from fdisk output just to illustrate the approach – it should be the output of yourcmd that gets parsed there. Note: try your parser before you wire it in; I ran into weird differences between platforms.
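
A quick way to do that sanity check (yourcmd and the regexp here are placeholders – adjust them to whatever your tool actually prints):

yourcmd --help 2>&1 | grep -Eo '^ +[a-z-]+' | tr '\n' ' '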

Then you need to add this to your ~/.bashrc:

source /path/to/mycomplete.bash

and you’re done. To have it right away, you can also run source /path/to/mycomplete.bash directly in your bash prompt.

Mind that this approach wouldn't work for intricate cases where you have deep parameter-sequence dependencies – have a look at the Git approach, it's a bloody burning hell there.

WordPress plugins etc.

I've been (quite subconsciously) using WordPress for quite some time now, mostly for my alcoholic beverages blog (it's in Russian, sorry). Subconsciously, because it was the first “automated install” blogging platform GoDaddy offered me – and also because I'd heard the name a number of moons back, so it should've been well documented and supported by that point. It's in PHP, but who cares – I've spent years writing PHP code.

So I had this problem: my articles all have a rating (I use the Author Post Ratings plugin by Philip Newcomer), but it's not possible to see all the high-rated articles, nor is it possible to order articles by rating within a category – and this feature made a lot of sense, because when you visit a site with a bunch of reviews, you usually look for the best stuff within some category.

So I gave it a thought and just went and added the required functionality – it's now on Bitbucket: https://bitbucket.org/hydralien/author-post-ratings/src

Turns out writing WordPress plugins is a no-brainer if you need something simple (I started with a posts-by-rating list) – you just add a directory, create a PHP file with a proper header, and you're done. Well, after you add your functionality, that is. WordPress has some lovely documentation on that.

It gets trickier if you need to change “internal behavior” – such as the category sort order – but the documentation helps there as well; there are filter hooks for that.

I guess this is worth a slogan – something like “Better drinking with no hassle” or “Drinking better just got easier”. Or whatever.