Category Archives: configuration

Web server in OSX (Apache2, dammit!)

There’s an Apache2 in OSX!!!

Apache2! It’s been a while since I touched httpd.conf… so many memories… ;)

It’s all described in this tutorial:

It’s dead simple – see/edit /etc/apache2/httpd.conf to find or set DocumentRoot (mind directory access rights) and run “sudo apachectl start” (likewise stop, restart).
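For reference, the relevant httpd.conf fragment looks roughly like this (the paths are the stock OSX defaults; this is a sketch, not a drop-in config):

```
DocumentRoot "/Library/WebServer/Documents"
<Directory "/Library/WebServer/Documents">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
```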

To enable user websites, add the following to /etc/apache2/users/USERNAME.conf :

<Directory "/Users/USERNAME/Sites/">
Options Indexes MultiViews
AllowOverride AuthConfig Limit
Order allow,deny
Allow from all
</Directory>

and then it’d be served as

Also, generating self-signed certificates:
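A minimal sketch with openssl (the filenames and the CN are placeholders – adjust for your host):

```shell
# Generate a 2048-bit RSA key and a self-signed certificate for it,
# valid for one year, without interactive prompts.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt \
    -days 365 -subj "/CN=localhost"
```

Adding the certificate to the keychain can then be done via Keychain Access, or with the security command-line tool (security add-trusted-cert).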

And adding them to OSX keychain:

And you’d need to put them into /private/etc/apache2/ssl/ and (potentially) edit /private/etc/apache2/extra/httpd-ssl.conf and enable (uncomment)

Include /private/etc/apache2/extra/httpd-ssl.conf


LoadModule ssl_module libexec/apache2/

in /etc/apache2/httpd.conf

and run “apachectl restart” as root.

Champagne for all!

/   \
||  |
||  |    .    ' .
|'--|  '     \~~~/
'-=-' \~~~/   \_/
       \_/     Y
        Y     _|_

Exporting Apple Mail filters to Sieve format

What’s this?

It’s a script (in AppleScript) that goes through all Apple Mail filters and converts them to Sieve filters, so you could go server-side on email filtering.


Why?

The answer is, as usual, “because I’m lazy” – I’ve accumulated a fair bunch of Apple Mail filters along the way, and converting them manually wasn’t much fun. And I found no working converter on the Internet.

Code and disclaimer

Code is available at:

feel free to clone, submit your patches etc.

NOTE that this is validated against a quite limited use case – in my filters, I mostly match against Subject and sometimes against CC/To, so it definitely has some issues with other fields. Please review / try loading the results first, and don’t disable all your Mail filters right away.

Exporting Mail filters

Just run the script – it will ask you whether you want to treat disabled filters as active (useful to re-export after you have already disabled the Mail filters), then whether you want to disable the Mail filters (useful when you’re certain about your Sieve filterset), and then where to save the results.

Was it fun?

Well… the answer is “look at the code”. On the one hand, writing in AppleScript is quite unlike writing in most “conventional” languages – some constructs are very different, some seem more natural, others more awkward. On the other, loop management reminded me of programming a Turing machine – I mean, using THIS as “continue”, really?!

So to conclude – I think it was; any unusual experience is fun in its own (however peculiar) way. And it’s the “proper way” for the case – you deal with the official API instead of parsing the XML (which I could, because Mail rules are stored in XML files), because there’s no way to foretell where the source files would move or how their structure would change eventually. The Mail API is way less likely to do so.


Some Sieve-related resources FTR:

What’s next?

Have a beer!

Latencies and sizes

This was hanging around in my Keep for quite a while – right till I realized that I need it somewhere for quick reference, while it just messes up my Keep flow. Thus posting it here in a “JFTR” fashion.

Data access latencies:

L1 cache reference ……………………. 0.5 ns
Branch mispredict ………………………. 5 ns
L2 cache reference ……………………… 7 ns
Mutex lock/unlock ……………………… 25 ns
Main memory reference …………………. 100 ns
Compress 1K bytes with Zippy …………. 3,000 ns = 3 µs
SSD random read …………………… 150,000 ns = 150 µs
Read 1 MB sequentially from memory ….. 250,000 ns = 250 µs
Round trip within same datacenter …… 500,000 ns = 0.5 ms
Read 1 MB sequentially from SSD ….. 1,000,000 ns = 1 ms
Send 1MB over 1 Gbps network ……. 10,000,000 ns = 10 ms
Disk seek ……………………… 10,000,000 ns = 10 ms
Read 1 MB sequentially from disk …. 20,000,000 ns = 20 ms
Send packet CA->Netherlands->CA …. 150,000,000 ns = 150 ms

MySQL Storage Requirement per Data Type:

TINYINT – 1 byte (BOOL is an alias for this)
SMALLINT – 2 bytes
MEDIUMINT – 3 bytes
INT, INTEGER – 4 bytes
BIGINT – 8 bytes
FLOAT(p) – 4 bytes if 0 <= p <= 24, 8 bytes if 25 <= p <= 53
FLOAT – 4 bytes
BIT(M) – approximately (M+7)/8 bytes
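The BIT(M) row is just ceiling division by 8; a quick shell sanity check of that arithmetic:

```shell
# Storage for BIT(M) is ceiling(M/8), i.e. (M+7)/8 with integer division.
for m in 1 8 9 17 64; do
  echo "BIT($m) -> $(( (m + 7) / 8 )) byte(s)"
done
# BIT(1) -> 1, BIT(8) -> 1, BIT(9) -> 2, BIT(17) -> 3, BIT(64) -> 8
```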

Neat little thing, or bash tab-completion for your tools

You know that thing, the magic of having all the options listed in front of you when you [double-]press Tab after typing something on the console? Or the unique option completing itself if there’s a match? Of course you do. One thing that bothered me is the frustration when it’s suddenly not there.

For general tools it’s already alright – they either come bundled with tab-completion or you can easily set it up; for instance, there’s a setup tutorial for Mac that comes with a Git bundle. One important note on that one: in iTerm, you have to go to Settings -> Profiles and change Command to /opt/local/bin/bash -l for your/default profile to run the proper bash version.

But then there are your own little tools that start as a one-parameter two-liner but eventually grow to 30-params fire-breathing hydra. And that’s when you start missing that tab-completion thing.

But that’s easy (for simple cases – see a note below) – you just create a script named, say, mycomplete.bash, containing something like this:

_completecmd() {
  local complist=`fdisk 2>&1 | grep -Eo '^ +[a-z]+' | tr '\n' ' '`
  local cur=${COMP_WORDS[COMP_CWORD]}
  COMPREPLY=( $(compgen -W "$complist" -- $cur) )
}
complete -F _completecmd yourcmd

where _completecmd is a unique function name, yourcmd is the command this should be applied to, and complist is constructed from fdisk output just to illustrate the approach – in reality it should be the parsed output of yourcmd. Note: try your parser before you set it up – I faced weird differences between platforms.
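To get a feel for what COMPREPLY ends up holding, you can call the compgen builtin directly in a bash prompt:

```shell
# compgen -W filters the word list down to entries matching the current
# (partial) word -- here "st" matches start and stop, but not restart.
compgen -W "start stop restart" -- st
# prints:
# start
# stop
```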

Then you need to add this to your ~/.bashrc:

source /path/to/mycomplete.bash

and you’re done. To have it right away, you can also run¬†source /path/to/mycomplete.bash directly in your bash prompt.

Mind that this approach wouldn’t work for intricate cases where you have a deep parameter sequence dependency – have a look at the Git approach, it’s a bloody burning hell there.

Git hooks

Git hooks are lovely. Consider automated syntax check before committing changes:

  • git config --global init.templatedir '~/.git-templates'
  • mkdir -p ~/.git-templates/hooks
  • vi ~/ and put, for instance, following:


  #!/bin/bash
  retcode=0
  for f in $( git diff --cached --name-status | awk '$1 != "R" { print $2 }' ); do
    echo "Verifying $f..."
    filename=$(basename "$f")
    extension="${filename##*.}"
    if [ "$extension" = "pl" ] || [ "$extension" = "pm" ]; then
      perl -c "$f"
      lineretcode=$?
      if [ $lineretcode -ne 0 ]; then
        retcode=$lineretcode
      fi
    fi
  done

  if [ $retcode -ne 0 ]; then
    echo "Pre-commit validation failed. Please fix issues or run 'git commit --no-verify' if you're certain about skipping the validation step."
    exit 1
  fi

  exit 0

  • chmod a+x ~/
  • ln -sn ~/ ~/.git-templates/hooks/pre-commit (the point of this is to keep it centralised – in the next step the global template is copied into the repository, so if it’s a symlink, it’s easier to adjust for all repositories later)
  • cd to your repository and run git init there – it’ll re-instantiate the repository, copying the global hook templates in
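The hook needs the file extension, and the idiomatic way to get it in shell is parameter expansion (shown here on a made-up filename):

```shell
# "${f##*.}" deletes the longest prefix matching "*.", leaving the text
# after the last dot -- i.e. the extension.
f="lib/Foo/Bar.pm"
echo "${f##*.}"   # -> pm
```

Mind that a name with no dot at all comes back unchanged, so guard for that if it matters.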

That’s it – now you have yourself a neat little safeguard. There are better techniques, of course – flymake, for one, or some IDE build/check configuration. That works, too – but it gets too tricky when you have to patch code on remote instances. In my case, I have to edit files remotely – and while Emacs has no problem with that (Sublime has a plugin for that, too, and you can always add some FUSE SSH mount), my flymake settings suck, and I’m not keen to dive into the bloodthirsty piranha pool of its remote execution configuration.

So… git hooks it is.

Basic DNS records list

This is a “you learn better when you write it down” sort of post. Never actually got into DNS record types – as a lot of things I’ve missed, there was just no need and I wasn’t curious enough. Although curiosity without regular application of that knowledge is rather pointless – “you soon will forget the tune that you play”, if you play it just once or twice.

That said, I’m gonna be needing this knowledge soon (I presume), so I thought I better do me a hint page (a “crib sheet”, as the dictionary suggests).

  • A record – “Address”, a connection of a name to an IP address like, for instance, “ IN A” – where IN is for the Internet, i.e. “Internet Address…” Wildcards could be used for “all subdomains”.
  • AAAA – “four times the size”, A-address for IPV6 addresses (see a note on IPV6 below)
  • CNAME – Canonical Name, specifies an alias for an existing A record, like “ CNAME“. Useful to make sure you only have one IP address in an A record while others rely on the A name – so if the IP changes, there’s only one place to change it. Note: do not use CNAME aliases in MX records.
  • MX – Mail eXchange, specifies which server serves the zone’s mail exchange purposes – for instance, “ IN MX 0“; the final dot is important, and 0 is the priority: there could be multiple MX records for the zone, and they are processed in priority order (the lower the number, the higher the priority). Same-priority records are processed in random order. The right-side name should be an A record.
  • PTR – specifies a pointer for a reverse DNS lookup, required to validate hostname identity in some cases – “ IN PTR” (note that the IP of is )
  • NS – Name Server, specifies a (list of) authoritative DNS server for the domain, for instance: “ IN NS“. This should be specified at authoritative server as well.
  • SOA – Start Of Authority, an important record with the zone’s name server details – “authoritative information about an Internet domain, the email of the domain administrator, the domain serial number, and several timers relating to refreshing the zone“. Example: 14400 IN SOA (
    2004123001 ; Serial number
    86000 ; Refresh rate in seconds
    7200 ; Update Retry in seconds
    3600000 ; Expiry in seconds
    600 ; minimum in seconds )
  • SRV – an option to specify a server for a Service, like “ IN SRV 0 5 80” – here’s the service name (_http), priority (0), weight (5) for services with the same priority, and port (80) for the service.
  • NAPTR – a recent and complex regexp-based name resolution mechanism I’m not keen to dig into.
  • There’s MUCH MORE of this crap, hope I won’t need to ever dig that deep
  • There’s also a number of decentralized DNS initiatives
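Most of the records above come together in an ordinary zone file; a made-up fragment (example.com and the 192.0.2.x documentation addresses are placeholders) could look like:

```
example.com.      IN  SOA  ns1.example.com. admin.example.com. (
                      2004123001 ; serial
                      86400      ; refresh
                      7200       ; retry
                      3600000    ; expire
                      600 )      ; minimum
example.com.      IN  NS    ns1.example.com.
example.com.      IN  A     192.0.2.10
www.example.com.  IN  CNAME example.com.
example.com.      IN  MX    0 mail.example.com.
mail.example.com. IN  A     192.0.2.20
```

Note that the MX target points at an A record (mail.example.com), not a CNAME.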

Oh, and on IPV6:

  • it’s 128-bit (IPV4 is 32)
  • it’s written as 8 groups of 4 hex digits
  • it has the following structure: global prefix, subnet, interface ID
  • the local (loopback) address is 0000:0000:0000:0000:0000:0000:0000:0001
  • and an IPV4 record in that case would look like 0000:0000:0000:0000:0000:0000:
  • zeroes could be omitted: ::1 or ::
  • to make sure an address is shortened correctly, use the ipv6calc util: ipv6calc --in ipv6addr --out ipv6addr --printuncompressed ::1

Retrieving multi-line sequences from text files

Had some free time and a need to parse out a number of similar (but not identical) blocks from a log file. There are tools for that – it could be done with a mixture of grep, sed, bash and some arcane magic – but I’m afraid that finding the right toolset, learning the required keys for each and experimenting with their values and combinations would take me longer than just writing myself a tool. Plus it was another opportunity to write a few lines in Python, which I don’t do often enough. And I do love text processing.

So I made me a neat little tool that essentially does one simple thing – it starts printing the input stream when some (start) trigger is found there, and stops when another (end) trigger occurs. There are some additions – like printing some lines before/after the block, printing counts, unique blocks only etc. – but those are mostly glitter.
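For the record, the no-extras version of that trigger-to-trigger printing is exactly what sed’s address ranges do (BEGIN/END are placeholder patterns):

```shell
# Print every block from a line matching BEGIN through the next line
# matching END, inclusive.
printf 'a\nBEGIN\nx\nEND\nb\nBEGIN\ny\nEND\n' | sed -n '/BEGIN/,/END/p'
# prints:
# BEGIN
# x
# END
# BEGIN
# y
# END
```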

Available in a BitBucket repository.

SECR-2013 notes at the end of 2013

Haven’t updated this place for a while – need to get back to this new habit, really. Well, at least
I hope I do so in the upcoming year (should be a big one anyways). So a quick update just before I tear the old calendar down from the wall (and I pity that – it’s a genuine Futurama 2013 calendar) – some records that date back as far as late October, some points I squeezed from that SECR conference I attended back then. These are quick and without any verbose explanation – rather “FTR” stuff to keep.


  • monitor user delivery/connection time, detect slow ones (compared to others or to expectations), direct them to a closer node/instance by default. Meaning, you have logs, so you have the request processing times. So if some requests of one kind take longer under certain circumstances (be it another location or another browser or something) than requests of the same kind under other circumstances, you might need to adjust delivery rules – redirect traffic, test under a different environment etc.
  • config/(code?) deployment through the Dropbox – as simple as it sounds, “simple parts” (I’m thinking configs, but might be updates as well) distribution via Dropbox.

Initial project stage:

  • “visual brief” – ask the basic associations, like “fast”, “pretty”, … – give some appropriate images to pick from, iterate. Like, ask customer what impression his product is aiming to project. Say the answer is “paramount”. Then suggest few images – mountain, the Sun, the Earth and the Moon, big teacher and small student – and ask customer to pick one (or few). And then iterate with styles and colors and other ideas.
  • moodboards (MoodShare…) – pick lots of stuff on the screen, let customer select, SCAMPER (


  • try test using DB/ssh connect (separate config) to run local changes against test instance remotely
  • togglers: big-feature switches that allow disabling/enabling particular features separately, in a config- (or environment-) controlled way (triggers in cookies, ENV, config, URL etc.)
  • same tests on CI and local, local ran for feature-related validations (branches), CI for default
  • think performance tests


  • measure what you want to adjust (meaning the parameters selected for KPI estimation)

Dev points:

  • make user feedback easy (put feedback interface at a clearly visible place, make it simple)
  • get a cool team (people that fit together (this is important), motivated (not just money, also interested in what they’re doing), productive)
  • make internal communication easy (chat, video, whatever – to let teams or team members connect without any hassle)


  • ask testers to write test cases descriptions right after ticket is created (confirmed, put to backlog) and discuss/adjust along the way. I think this one is really beneficial if used the right way, although no personal experience here yet.

Well, that’s it for now – and probably for this year, too. See me (for I think I’m the only reader – or at least a writer – of this blog) in 2014!

Bluetooth sound quality issues in OSX

I love wireless technologies, I truly adore them. One thing that bothers me about Bluetooth, though, is sound glitches and jumps – it works pretty well for some time, and then all of a sudden it starts twitching, with small fractions of the sound missing. Minor, but very annoying.

This is actually a broader issue than OSX – for instance, no smartphone works with a Bluetooth headset perfectly if I keep the phone in my jeans pocket. It all goes well and nice till it gets jumpy out of the blue – and then back to normal again.

Now with the Mac it’s a bit different – at some point the sound starts twitching, and it goes on till I re-initiate the Bluetooth connection. And it was kinda alright till it got me. One speculation I found over the Internet was that the connection bitrate is too low and needs to be raised with this command line:

defaults write "Apple Bitpool Min (editable)" 60

because the default is 40. So I tried that, it didn’t help much, so I assumed that wasn’t the right solution and forgot about it. And after a while I did what I should have done right away – I restarted Bluetooth on the Mac. And… the speaker stopped working altogether – it connected, but failed with something like “compressed stream is unacceptable”. Since I’d forgotten I had mangled the Bluetooth settings, my first thought was that the speaker broke. And it was kinda alright (it has a wired input, it’s quite old and I don’t use it every day) till it got me.

So I googled up the error message and the results struck me with remembrance. I changed the value in the aforementioned command to 53 (the highest suggested working rate) and… it’s all good now! Can’t say about the twitches – they were sporadic and never occurred right away – but at least I got it back working.

One note: that command is for user space, so don’t run it with sudo (no harm there, it just wouldn’t work).

Move a directory (or a few) to a new repository, with history, in Mercurial

This popped up on my schedule once again today, so I thought I better save it here – JIC.

The goal is simple: the project grew big, and some parts got a job and moved to LA, er, got big enough to be developed separately. However, it’d be better to move them along with their history. Luckily, hg can get you there with little to no sweat. Here’s the deal.

First, you need the convert extension enabled. The tutorial (that’s where the extra details are) says it should be


in your ~/.hgrc (or at whatever level you keep your Mercurial configuration), but mine has


So… go figure yourself.

Then, you need to get a list of what you would like relocated and save it to some file, each line starting with “include” (you can also use “exclude” if you need some inner parts excluded). Like this (I’m using some real names from the project so I could share it with colleagues. The names are pretty vague though, so no confidentiality breach involved):

airhydra:uslicer halien$ cat /tmp/targets
include API/REST
include Scheduler
include UI/lib

If you need to rename some directories in the new repository, you can do that by adding “rename olddir newdir” lines (those should go after the “include” list) – for instance,

airhydra:uslicer halien$ cat /tmp/targets
include API/REST
include Scheduler
include UI/lib
rename Scheduler Backend/Scheduler

Now, run the convert command, pointing it to the file with targets/actions, the source and the target (temporary) directory. In my case, the source is the current directory. Remember: depending on the repository size, it could go on for some time and crap all over your console like a bloody deranged elephant. Example (crap wiped off):

airhydra:uslicer halien$ hg convert --filemap /tmp/targets ./ /tmp/TempRepo
initializing destination /tmp/TempRepo repository
scanning source...
15855 add data validation
15854 update data validation
15853 add API placeholder
15852 add test data
15851 Initial API skeleton
(...loads of crap...)

Now go to the repository you need those changes at. Here starts the artificial part – I initialized a new repo called NewWorld, but you should create and clone (or, well, just init, if that’s the case) yours in advance. In that repo, pull from the temporary repository you’ve dumped changes into:

airhydra:NewWorld halien$ hg pull -f /tmp/TempRepo
pulling from /tmp/TempRepo
requesting all changes
adding changesets
adding manifests
adding file changes
added 5809 changesets with 7074 changes to 364 files (+166 heads)
(run 'hg heads' to see heads)

Big one, eh? That’s just a fraction. Deserved a move long ago.

If your repository is new, just update to the default branch (or whatever you fancy). If you’re pulling changes into an existing one, do a merge. And if the existing repository had some branches named similarly to those in the pulled changesets, you’ll need to resolve conflicts. Which is ugly, so I hope you don’t have to. Or it might be one of the rare cases that justify push -f (given you don’t care much about those branches).

Aaaand… that should be it:

airhydra:NewWorld halien$ ls
API Backend UI
airhydra:NewWorld halien$ hg branches|wc -l
airhydra:NewWorld halien$ hg hi|grep changeset:|wc -l

It probably could be done in some more, ehm, (cruelly) sophisticated way through pipes – but I don’t want to bother. This one’s good enough for the case.