It’s a short one, but quite important – the common password security patterns and antipatterns: https://www.troyhunt.com/passwords-evolved-authentication-guidance-for-the-modern-era/
Not really specific to Perl, but handy anyway.
You can use the strace utility to inspect the syscalls (filesystem and network operations are usually of most interest) that a process is making. For instance, you can filter its output down to just the network activity of a given process.
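A minimal sketch of that invocation (assuming strace is installed; for illustration it attaches to the current shell – substitute the PID of the process you're actually after):

```shell
# print only the network-related syscalls; -f follows forked children too
# (attaching to another user's process requires ptrace permission)
PID=$$    # illustration only - use the PID of the process you care about
timeout 2 strace -f -e trace=network -p "$PID" 2>&1 | head -20
```

Other handy filter classes are trace=file, trace=desc and trace=process.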
Also, if you have a stuck process, you can check whether it is waiting on a read from some filehandle, and then find out what that filehandle points to. Filehandles can be found in /proc/$PID/fd/ – so if you run strace on your process and see it blocked on a read() of some descriptor, you can check the aforementioned lsof | grep $PID (or the /proc listing) to see what that descriptor actually refers to.
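As a sketch (Linux-specific; here the current shell's descriptors stand in for those of your stuck process):

```shell
# every open filehandle of a process is a symlink under /proc/<pid>/fd/
ls -l /proc/$$/fd/
# resolve one specific descriptor - say the one a blocked read() was sitting on
readlink /proc/$$/fd/0
```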
Attended UX Lausanne for the second time this year – and while it was rather small (which is actually good as you’re better connected to speakers in terms of Q&A and follow-up), it was not solely on UX but a lot on workflow and project development – something that seems to be called Product Experience.
One of the talks was given by Davide Casali from Automattic – at first I thought they were the WordPress authors, but it turned out they're (large) contributors to WordPress and the maintainers of WordPress.com. The company is fully distributed, i.e. everyone works remotely. Like, everybody – a few colleagues might share the same space, but even then they're not on the same team. That's definitely not the case for every company, but since many nowadays allow at least temporary remote work, I thought I'd share the key points here – they're quite interesting, and some of them could be applied in other setups too.
So here are their principles:
- before creating a new team (or starting a new project), meet in person for a “Minimum Viable Discussion” – to quickly align on the key points, sort out initial discrepancies in domain knowledge and prepare a brief plan for the next steps
- transparently share all changes, specs and discussions – “if it’s not transparently shared, it doesn’t exist” (more office-bound sticky-note-flavor version of this proverb was “if it’s not on the wall, it doesn’t exist”)
- have a few communication spaces for different needs:
- Real-time channel (like Slack or Jabber or, sigh, Facebook) for immediate personal and team communication
- Team space on some shared-documents resource – being a WordPress-focused company, they use the P2 theme for WordPress, which allows commenting on posts and replying to comments – basically hosting a discussion on the subject. The point is to have something that any employee can subscribe to or view, and that team members check daily to see and discuss updates, or refer to while developing
- “Stable” documentation storage for more permanent things like articles or specs of deployed products etc.
- each team (BTW their teams are small – 4-5 people with different skills) focuses and collaborates on one thing (a project or task)
- Independent individuals, i.e. everyone maintains their own priorities. Tasks are managed with any suitable tracker (they use Trello)
- “zero waste” in terms of no bureaucracy regarding e.g. permissions or access
- standup-style updates are posted to the team channel each time a team member comes online – besides giving an overview of progress, this signals to the others that the person is now available
- live meetups ~4 times a year – 3-5 days of work, discussions and some off-work time together
- teammates (but not projects) are usually picked from within a few timezones of one another to ease live communication
- everyone can follow any other team by subscribing to their shared documentation channel
This is all “JFTR”, but some of these practices could really be worth adopting to improve daily work and communication.
Also another reference to keep, https://www.helpscout.net/blog/agile-remote-teams/
Oh, Git! The thing that makes our developers’ lives so much easier, yet – like perhaps any involved system – largely escaping most people’s attention. It’s far from original to post this XKCD gag here, but it’s just too true:
Me myself, I don’t really understand Git. I mean sure, I’ve read a few articles on its structure, on what acyclic graphs are, on how it all works etc. – but when things go awry, I’m still puzzled.
To help with that a little, I started to collect a set of shortcuts and tricks to make frequent problems less of a hassle. These come in the form of scripts – while I could make aliases, I somehow prefer separate scripts, as they let you use bash syntax when the task is a little more complex than a simple shortcut. What’s also nice: if you name your file (or alias, I suppose) “git-kill-all-humans“, you can then run it as “git kill-all-humans” and even see it in the tab completion for Git commands!
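A tiny sketch of that mechanism (“git-hello” is a made-up name – any executable called git-<name> on your PATH will do):

```shell
# any executable named git-<name> on $PATH becomes runnable as "git <name>"
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho "hello from a custom git subcommand"\n' > "$HOME/bin/git-hello"
chmod +x "$HOME/bin/git-hello"
PATH="$HOME/bin:$PATH" git hello   # prints "hello from a custom git subcommand"
```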
The full set could be found under “git-tools” directory at https://bitbucket.org/hydralien/tools, below are just a couple of the most used ones.
- git-forget – use as “git forget .” to lose all uncommitted changes, or “git forget filename” to revert just a specific file
git checkout -- "$@"
- git-origin – to get the remote URL of the repository, useful to share or to clone other repository that resides at the same server (so I just need to change the name)
git config --get remote.origin.url
- git-out – to see what changes are scheduled to be pushed to origin
git diff origin/`git rev-parse --abbrev-ref HEAD` -- "$@"
- git-import – get changeset from a different host, useful when development happens on same repository cloned on many instances – sometimes changes end up on a wrong instance and need to be moved without getting them into the repository
curdir=`pwd`; ssh "$1" "cd $curdir && git diff" | git apply
- git-rush – probably the most used command – when the repository is large and there’s many people pushing to it, getting your changes into the origin might be a daunting process. So this one just tries till it’s done – it’s a little overcomplicated for stats reasons (and uses another shortcut, so there’s two of them here), but here it is:
attempt=1 ; time until git repush; do let "attempt++"; echo "No luck, once again..."; done ; echo "Finished in $attempt tries" ; date
and the git-repush:
git pull --rebase && git push
Bottom line here… Git is good – it just takes a few shortcuts to fully appreciate it =)
And of course there’s a hell lot more to automate if required – hooks, configuration etc. etc. etc.
OK, the Google Chrome new tab page extension I’ve been (extremely lazily) developing for quite some time is live now! It’s not much, really – just displaying one bookmarks folder (I use “Bookmarks Bar”) and launchable extensions, but that’s what I use the most myself. Oh, and some Chrome shortcuts – like cookies, passwords etc. – too:
It’s on Chrome Web Store – but there’s also a separate page to send around: https://www.hydralien.net/anothertab/
Oh, and the code is public, too: https://bitbucket.org/hydralien/anothertab
“git grep” is incredibly useful on large repositories – where regular grep (or awk or whatever) takes minutes to finish, git grep does the job in mere seconds. Very, very useful. The only thing is, I would really like it to work like Emacs search – with file selection, highlighting etc.
It didn’t take too long to find a suitable solution – but it took some time to tailor it a little. Basically it’s all taken from https://www.ogre.com/node/447 with some minor adjustments – for instance, I do a lot of development in “remote” mode, i.e. I open files from remote hosts via SSH (hence the replace-regexp-in-string part). Enjoy!
(defun git-grep (search path)
  "git-grep the entire current repo"
  (interactive
   (list (completing-read (concat "Search for (" (current-word) "): ")
                          nil nil nil nil (current-word))
         (read-file-name "in directory: " nil "" t)))
  (grep-find (concat "git --no-pager grep -P -n --no-color "
                     (shell-quote-argument
                      (if (> (length search) 0) search (or (current-word) "")))
                     " "
                     (replace-regexp-in-string "/ssh.?:.+:" "" path))))
Now if you add this to your .emacs, you can have it launched on some key combination (like C-c g in my case):
(global-set-key (kbd "C-c g") 'git-grep)
Well, at least these folks:
are no longer hammering tripgang.com – not that they did much, but enough to send them off to whatever they fancy under those opaque helmets.
So… it’s been quite a while. Now the aforementioned TripGang hosts these:
- KML2GPX (convert Google Maps KML into regular or OSMAND-friendly GPX)
- MapMarks (search and bookmark travel pinpoints)
- WikiVert (auto-search for all points from WikiTravel-like text/attractions list)
It’s quite curious how things turn out – the initial idea for the resource was completely different (well, who knows, I might get to it some day after all) – but hey, “whatever works”, right? Hopefully these tools (however immature and weak they are) might be useful to someone (and most hopefully to myself).
Well… bon voyage, there’s not much else to say really.
It’s a script (in AppleScript) that goes through all Apple Mail filters and converts them to Sieve filters, so you can move your email filtering server-side.
The answer is, as usual, “because I’m lazy” – I’ve accumulated a fair bunch of Apple Mail filters along the way, and converting them manually wasn’t much fun. And I found no working option to convert it on the Internet.
Code and disclaimer
Code is available at:
feel free to clone, submit your patches etc.
NOTE that this has been validated against a rather limited use case – in my filters, I mostly match against subject and sometimes against CC/To, so it may well have issues with other fields. Please review / try loading the results first, and don’t disable all your Mail filters right away.
Exporting Mail filters
Just run the script – it will ask whether you want to treat disabled filters as active (useful for re-exporting after you have already disabled the Mail filters), then whether you want to disable the Mail filters (useful once you’re certain of your Sieve filterset), and then where to save the results.
Was it fun?
Well… the answer is “look at the code”. On the one hand, writing in AppleScript is quite unlike writing in most “conventional” languages – some constructs are very different, some feel more natural, others more awkward. On the other hand, loop management reminded me of programming a Turing machine – I mean, using THIS as “continue”, really?!
So to conclude – I think it was fun, as any unusual experience is in its own (however peculiar) way. And it’s the “proper way” for this case – you deal with the official API instead of parsing the XML (which I could, since Mail rules are stored in XML files), because there’s no way to foretell where the source files might move or how their structure might change over time. The Mail API is far less likely to do so.
Some Sieve-related resources FTR:
Have a beer!
Booking.com guys thought a sparkling thought: “if the only purpose of master is to serve binlogs… why should it be a full-featured DB instance?”. So they got themselves a different approach: http://blog.booking.com/mysql_slave_scaling_and_more.html
A neat approach with many benefits. Two things to mention:
- this is for when the common replication setup starts being a PITA (with GTID, promoting a new master in MySQL shouldn’t be a problem) – so it’s not something you should rush into for your 2-slave setup
- this approach applies to pretty much any replication task – not necessarily MySQL, not even necessarily DB replication at all
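FWIW, stock mysqlbinlog (MySQL 5.6+) can already act as a poor man’s binlog relay, mirroring a master’s binlogs to disk without a full slave instance – a rough sketch, with host, user and file names as placeholders:

```shell
# connect as a replication client and mirror binlog files verbatim, forever
# (--raw writes the binary format as-is, --stop-never keeps streaming)
mysqlbinlog --read-from-remote-server --raw --stop-never \
    --host=master.example.com --user=repl --password \
    mysql-bin.000001
```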
Anyways, nice idea to remember.
This one is another FTR: stress-testing with Siege is a breeze! Here are two quotes from http://blog.remarkablelabs.com/2012/11/benchmarking-and-load-testing-with-siege:
You can have this benchmark run forever, or you can specify a specific period of time. To set the duration of the test, use the -t NUMm option. The NUMm format is a representation of time. Here are some time format examples:
- -t60S (60 seconds)
- -t1H (1 hour)
- -t120M (120 minutes)
To demonstrate running a benchmark, let’s run Siege against this blog:
siege -b -t60S http://blog.remarkablelabs.com
and then also “user emulation”:
When creating a load test, you should set the -c NUM option. The -d NUM option sets a random interval between 0 and NUM that each “user” will sleep for between requests, while the -c NUM option sets the concurrent number of simulated users.
Here is an example of a load test of a Heroku application:
$ siege -c50 -d10 -t3M http://some.application.com
and you could use custom headers (think cookies) with -H.
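For instance (cookie name, value and host are all made up):

```shell
# hammer the site while sending a session cookie with every request
siege -b -t30S -H "Cookie: session=abc123" "https://example.com/"
```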
And what’s even better, it’s widely available – I’ve installed it on Mac through MacPorts (though it’s in Homebrew as well), and then unpacked it from an RPM on my GoDaddy SSH shell account (because I couldn’t just go and install it there). It worked well in both cases.