Server Commander

12 Jul 2017 04:46

Here's a problem I wish were already solved:

I want to be able to manage and run "tasks" on a Linux server. More specifically:

  • There should be a WEB interface where:
    • I could define commands or tasks that the server is capable of doing
    • I could schedule tasks to run at specific times (like cron)
  • The tasks/commands should be (optionally) parametrized
  • The tasks/commands should be (optionally) stored in GitHub and/or a database
  • The task runs should not overlap with each other (i.e. when the previous run is still in progress don't start the new one) — configurable
  • The task configuration should include:
    • the command to run
    • task timeout — the task should be killed after the specified time
    • environment variables to set for the task
    • max CPU/memory/disk to use (optional)
    • number of times to retry on failure
  • what to do on success and failure: e-mail, webhook, or triggering another task
    • directory to run the command in: I can specify one, or a temporary one is created for each run and cleared after the run
    • user to run the command as: I can specify one, or a temporary user is created for each run and cleared after the run
  • The task run view should include
    • colored command output
    • command exit code: OK/FAIL
    • resources used by the task (optional)
    • time the task took, time the task started and ended
    • environment variables
    • link to next/previous run and a button to re-run
  • The code should be free and open and extremely easy to install
  • There should be an error log in the UI to track all job failures
  • Misconfigured tasks should not take the machine down — there should be some level of monitoring: don't start new jobs when the load is high or the disk is almost full, etc.
  • One should be able to have a nice dashboard with selected tasks for easy launching

What I describe here is to some extent already covered by some of the following: Jenkins, Travis-CI, Rundeck and Minicron.

Here's what's wrong with them:

  • Jenkins' UI is too complicated for simple tasks, and it also feels very heavy
  • in Travis-CI you specify the tasks in your code, not in Travis; also, it's not software you run locally
  • my experience with Rundeck is that it doesn't work out of the box; I could have debugged it, but I didn't want to, and the UI is a bit too complex
  • Minicron is way too simple and (currently) only allows you to run jobs through ssh (not locally)

To reiterate I'd like:

  • A web UI for cron, that's also
  • a bit like Jenkins, in that you can see the output of each command run,
  • able to sandbox the processes it runs,
  • nice-looking, easy to install and maintenance-free


How To Disable Kinetic Scrolling On Linux

14 Feb 2017 04:44

I spent more time than I wished searching for how to disable kinetic scrolling on Linux. Here's the spell:

$ synclient CoastingSpeed=0
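The synclient setting doesn't survive an X restart. Assuming the synaptics driver, the same option can presumably be made permanent with an xorg.conf snippet along these lines (the Identifier string is arbitrary):

```
Section "InputClass"
    Identifier "touchpad: no coasting"
    MatchIsTouchpad "on"
    Driver "synaptics"
    Option "CoastingSpeed" "0"
EndSection
```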


Monitoring Freezer Temperature

25 Sep 2016 22:11

For some time we'd had suspicions that the freezer in our apartment wasn't working correctly. Recently my wife reached into it to get some ice cubes but found water in the tray instead. That clearly meant the temperature had been above the freezing point for some time. She put a thermometer on the fridge, put its sensor into the freezer and started watching the readings. The temperature reached around -20°C and stayed there, fluctuating between -15°C and -20°C.

The webcam reader

I decided it was time to record the temperature, so I brought a laptop, adjusted the screen angle, set the brightness to max and took a shot from the webcam to see if the thermometer was in the camera's view:

mplayer tv:// -vo png -frames 3

This takes 3 frames from the webcam and saves them as PNG files: 00000001.png, 00000002.png, and 00000003.png. If you wonder why I need 3 frames: the first one is always under- or overexposed and out of focus, the second is usually OK and the third is almost always good (as webcams go). Long story short, I'm giving the camera some time to auto-adjust its settings.

Here's a script that does just a bit more than that:


#!/bin/sh
set -e

export DISPLAY=:0

curdate=`date +%x-%X | sed 's/[\.:]//g'`

mkdir -p /home/quake/git/thermonitor/out/ /home/quake/git/thermonitor/tmp
cd /home/quake/git/thermonitor/tmp

# White background
ristretto ../white.png &

# Wake screen
xte "mousemove 100 100"
sleep 0.2
xte "mousemove 99 99"

# Dump screen
mplayer tv:// -vo png -frames 5 -noconsolecontrols
# The 5th frame is the good one; the output file name carries the date and time
tmppng=00000005.png
outpng=../out/$curdate.png
cp "$tmppng" "$outpng"
echo "$outpng"

kill $!

It's very hacky, but here are the big parts:

  • set -e makes the script stop if any of the commands in it fail
  • export DISPLAY=:0 selects the default X display, so I can run the command from an SSH shell
  • Then we have some hard-coded paths to tmp and out directories
  • ristretto ../white.png & starts an image viewer showing white.png, which is a big all-white image. This makes the screen display enough white so that the LCD thermometer is properly lit
  • I use xte to move the mouse around: this blocks the automatic screen dimming, so the screen continues to be 100% bright
  • Then we have the mplayer command from before, just changed to 5 frames (just to be sure) and -noconsolecontrols. mplayer kind of hangs when you start it without a proper terminal on stdin, unless you pass this option
  • Then I copy the 5th frame to the output file, which has the current date and the time in its path
  • Finally I close the ristretto process I opened before

I run this script in a while loop like this:

while sleep 28 ; do  ./ ; done

./ takes around 2 seconds, so I get about 2 photos a minute.

I run that inside a screen session.

Now, in the out directory, I serve the files using Python's built-in HTTP server:

python -m SimpleHTTPServer

This starts an HTTP server on port 8000, serves all the files and generates an index of them when / is requested.

The processing

What's easier for copying HTTP-exposed files than a plain old wget command?

wget -r http://dell:8000/

This creates directory dell:8000 and downloads all the PNG files to it.

Once you have that directory you may later want to update it with only the newer files:

rm dell\:8000/index.html
wget -nc -r http://dell:8000/

The -nc switch makes wget ignore files that are already there. We explicitly remove the index.html so it downloads a new list (that includes newer files).

I wanted to write a program that takes a photo of the 7-segment display and reads it, producing a number that, along with the date, could be used to graph the temperature over time.

First, let's use ImageMagick to:

  • crop the picture to only the interesting part: the OUT reading
  • shear the picture to make the LCD segments vertical and horizontal, not skewed
  • bump contrast/gamma to remove the noise
  • auto-adjust the visual properties of the picture so it's more consistent across different times (like night vs day)
  • make it black and white: it turned out this works best using only the green channel, as R and B were noisier than G
  • resize it to as small a size as needed, for easier and faster processing

Here's the actual command:

convert $file_in -gamma 1.5 -auto-level -brightness-contrast 35x80 -shear -14x0 -crop 260x70+320+227 -channel Green -separate -resize 40x20 $file_out

Input image:


Output image:


Zoomed in:


You can see the bottom segment (D) is not really visible, but that's OK: all the digits can be read correctly without it, so we'll only consider the 6 segments we can easily read.
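The decoding idea is simple: sample a few pixels per segment, decide lit/unlit, and look the resulting set of segments up in a table. As a toy illustration (segment names as in the full script below; the table is just the standard 7-segment patterns minus the invisible D segment):

```python
# which segments (of A, B, C, E, F, G; D is ignored) are lit for each digit
DIGITS = {
    'ABCEF': 0, 'BC': 1, 'ABEG': 2, 'ABCG': 3, 'BCFG': 4,
    'ACFG': 5, 'ACEFG': 6, 'ABC': 7, 'ABCEFG': 8, 'ABCFG': 9,
}

def decode(segments):
    # ''.join(sorted(...)) normalizes the order the segments were detected in
    return DIGITS.get(''.join(sorted(segments)))

print(decode('BCFG'))  # 4
print(decode('BC'))    # 1
```

Anything that maps to no known pattern comes back as None, which is what lets bad frames be filtered out later.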

Now comes the meat, the Python program reading the file and returning the temperature:

#!/usr/bin/env python
#encoding: utf-8
from PIL import Image
# Points that hold each segment:
# d1 is the first digit
d1_sA = [(12,1), (13,1), (14,1), (15,1)]
d1_sB = [(18,2), (18,3), (18,4), (18,5)]
d1_sC = [(18,8), (18,9)]
d1_sE = [(11,8), (12,9), (11,9), (12,8)]
# Don't need this, as the first digit is only ever 1 or 2
d1_sF = []
d1_sG = [(13,7), (14,7), (15,7), (16,7)]
# d2 is the second digit
d2_sA = [(24,1), (25,1), (26,1), (27,1)]
d2_sB = [(29,2), (29,3), (29,4), (29,5)]
d2_sC = [(29,8), (29,9)]
d2_sE = [(22,8), (22,9)]
d2_sF = [(22,2), (22,3), (22,4), (22,5)]
d2_sG = [(24,6), (25,6), (26,6), (24,7), (25,7), (26,7)]
# d3 is the small digit (first after the decimal point)
d3_sA = [(35,4), (36,4), (37,4)]
d3_sB = [(38,5), (38,6), (38,7)]
d3_sC = [(38,9)]
d3_sE = [(34,9)]
d3_sF = [(34,5), (34,6), (34,7)]
d3_sG = [(35,8), (36,8)]
# Now the tricky part, for each segment I define a threshold below which I consider it "lit"
# 0 means completely black, 255 is white.
# Because of uneven lighting, for each segment (and digit, but we ignore that) the value is different.
tA = 200
tB = 170
tC = 120
tE = 115
tF = 140
tG = 170
# A threshold for the "-" sign
tSIGN = 200
# All of those were obviously updated on the go to match the files
# A nice debugging function that prints which segments the code considers lit
# Also if you wondered what A, B, C, E, F, G meant, here's the schematic:
def print_digit(segs):
    # Only print this if there's a second argument to the script passed
    if len(sys.argv) == 2:
        print '''
     {A}{A}
 {F}      {B}
 {F}      {B}
 {F}      {B}
     {G}{G}
 {E}      {C}
 {E}      {C}
 {E}      {C}
'''.format(
            A='###' if 'A' in segs else '   ',
            B='###' if 'B' in segs else '   ',
            C='###' if 'C' in segs else '   ',
            E='###' if 'E' in segs else '   ',
            F='###' if 'F' in segs else '   ',
            G='###' if 'G' in segs else '   ',
        )
# This doesn't do anything spectacular, just passes the coordinates for each of the
# segments and the average value of the first 9 pixels of the image: (0,0) - (2,2)
def read_digit(im, pointsA, pointsB, pointsC, pointsE, pointsF, pointsG, avg9px):
    segs = ''
    segs += 'A' if read_segment(im, pointsA, tA-avg9px) else ''
    segs += 'B' if read_segment(im, pointsB, tB-avg9px) else ''
    segs += 'C' if read_segment(im, pointsC, tC-avg9px) else ''
    segs += 'E' if read_segment(im, pointsE, tE-avg9px) else ''
    segs += 'F' if read_segment(im, pointsF, tF-avg9px) else ''
    segs += 'G' if read_segment(im, pointsG, tG-avg9px) else ''
    print_digit(segs)
    # A list of all digits and their representation on the 7-segment display:
    if segs == 'ABCEF':
        return 0
    if segs == 'BC':
        return 1
    if segs == 'ABEG':
        return 2
    if segs == 'ABCG':
        return 3
    if segs == 'BCFG':
        return 4
    if segs == 'ACFG':
        return 5
    if segs == 'ACEFG':
        return 6
    if segs == 'ABC':
        return 7
    if segs == 'ABCEFG':
        return 8
    if segs == 'ABCFG':
        return 9
    # A special case for the first digit: it doesn't display "0", it just
    # doesn't light any segments
    if segs == '':
        return 0
    # No match: the digit was not recognized; the implicit None marks the reading as bad
# The function that takes the PIL image object, gets a list of (x,y) coordinates
# and checks if the average value of them is smaller than the threshold passed: segment "on"
# For 0 points passed it returns False: segment "off"
def read_segment(im, points, threshold=128):
    val = 0
    for point in points:
        val += im.getpixel(point)
    return val < threshold * len(points)
# Nothing interesting in here, just for printing the date from the file name
def get_date(file_name):
    time_str = file_name.split('-')[-1].replace('.png', '')
    d1, d2, m1, m2, y1, y2, y3, y4 = file_name.split('/')[-1].split('-')[0]
    return '{}{}/{}{}/{}{}{}{} '.format(m1, m2, d1, d2, y1, y2, y3, y4) + '{}{}:{}{}:{}{}'.format(*list(time_str))
# A bunch of imports in the middle of the file
# Don't do that at home ;-)
import sys
import subprocess
# This will be for example: pngs/dell\:8000/24092016-101055.png
file_in = sys.argv[1]
# And this: processed-pngs/dell\:8000/24092016-101055.png
file_out = 'processed-' + file_in
# Calling the ImageMagick as discussed in the article
subprocess.check_call(['convert', file_in, '-gamma', '1.5', '-auto-level', '-brightness-contrast', '35x80', '-shear', '-14x0', '-crop', '260x70+320+227', '-channel', 'Green', '-separate', '-resize', '40x20', file_out])
# Reading what it created
im = Image.open(file_out)
# Now a thing I added at some point later.
# Because of different lighting throughout the day and because the ImageMagick command
# above was not good compensating for it (in spite of auto-level and high contrast)
# some of the images were darker than the others. In most images (the ideal scenario for the code)
# the first 9 pixels of the image, (0,0) to (2,2), were just white (or very close), but in those darker
# images the whole image was darker, and I used the first 9 pixels to detect how much darker
first9px = im.getpixel((0,0)) + im.getpixel((0,1)) + im.getpixel((0,2)) \
         + im.getpixel((1,0)) + im.getpixel((1,1)) + im.getpixel((1,2)) \
         + im.getpixel((2,0)) + im.getpixel((2,1)) + im.getpixel((2,2))
# This is the compensation: for most images it's 0 or very small, but for darker images it's larger
avg9px = 255-first9px/9
num = '{}{}{}.{}'.format(
    '-' if read_segment(im, [(2,6), (3,6), (4,6)], tSIGN-avg9px) else '+',
    read_digit(im, d1_sA, d1_sB, d1_sC, d1_sE, d1_sF, d1_sG, avg9px),
    read_digit(im, d2_sA, d2_sB, d2_sC, d2_sE, d2_sF, d2_sG, avg9px),
    read_digit(im, d3_sA, d3_sB, d3_sC, d3_sE, d3_sF, d3_sG, avg9px),
)
if 'None' not in num:
    print get_date(file_in) + '\t' + num
# If there's a second argument to the script passed, show the original image for comparison
if len(sys.argv) > 2:
    Image.open(file_in).show()

Even though this script is so simple, after tweaking the thresholds most of the files were recognized correctly; those that weren't had one or more unrecognized digits, so they were easy to filter out. For over a day's worth of images, only 2 or 3 minutes lacked a reading.

I loaded the data to LibreOffice and generated this pretty graph:


Timelapse video

Another approach to visualizing the data was to create a video.

The plan:

  • Annotate the images with the recorded time
  • Compose the video from the single frames, putting 60 frames into each second of the resulting video

60 frames a second with roughly 2 frames captured a minute means a day of recording is compressed to:

2 frames a minute * 60 minutes an hour * 24 hours a day = 2880 frames
2880 frames / 60 frames a second = 48 seconds
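The back-of-the-envelope math above, checked in a couple of lines:

```python
frames_per_day = 2 * 60 * 24          # 2 frames/minute, 60 minutes, 24 hours
video_seconds = frames_per_day // 60  # played back at 60 fps
print(frames_per_day, video_seconds)  # 2880 48
```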

This makes it "viewable". 60 FPS (versus 30 FPS at a higher speed) means you can pause at any time and read the crisp time and temperature.

Here's the annotate part:

#!/usr/bin/env python
import sys
import subprocess
path_in = sys.argv[1]
path_out = sys.argv[2]
date, time = path_in.replace('.png', '').split('/')[-1].split('-')
D1, D2, M1, M2, Y1, Y2, Y3, Y4 = date
h1, h2, m1, m2, s1, s2 = time
label = '{}{}/{}{}/{}{}{}{} {}{}:{}{}:{}{}\\n'.format(M1, M2, D1, D2, Y1, Y2, Y3, Y4, h1, h2, m1, m2, s1, s2)
subprocess.check_call([
    'convert', path_in,
    '-gravity', 'south',
    '-pointsize', '45',
    '-font', 'FreeMono',
    '-fill', 'black',
    '-annotate', '0', label,
    path_out,
])

Here's the result:


Running this in a loop, 16 images at a time:


i=1
for file_in in dell*/*.png; do
  file_out="`printf "labeled/%06d.png" $i`"
  echo ./ "$file_in" "$file_out"
  i=$((i+1))
done | parallel -j 16

In addition to annotating the images, it also names them 000001.png, 000002.png, etc., which makes it easy for avconv to convert them to a video:

avconv -fflags +genpts -r 60 -i labeled/%06d.png -r 60 temperature.mkv

And here's the video:


Stupid Navigation in the Mazda 6

13 Feb 2016 21:14

When we were buying our Mazda, the dealer's employee couldn't stop praising the "infotainment" system the Mazda has. In my opinion the system is fine, but nothing to get excited about.

My general opinion about cars and their multimedia screens is that everyone would be far better off if car manufacturers stopped bothering with them altogether and instead provided good integration with a tablet you could slide into a suitable slot in the car.

But to the point. Why do I think the system in the car we bought is so weak?

First, the way it starts. The navigation and the rest of the toys (radio, MP3 player, etc.) only start up after you "turn the key". Booting the system takes about a minute, so there's no chance of quickly setting up the navigation or picking a station before setting off.

The system could start booting as soon as we open the door, or even when the key approaches the car (there's a proximity sensor); then the long boot time (if it can't be meaningfully shortened) wouldn't be so bad, because it would happen before we even got into the car.

Second, the inability to operate the navigation while driving. Above roughly 5 miles per hour the navigation switches into a restricted mode: typing in an address or the name of a destination is impossible, and if you typed a name in while the car was still rolling, confirming it is impossible.

Only some options remain available, chosen seemingly at random, because it's hard to find any logic here. The car has no interest in who is operating the navigation. Whether it's the driver or the passenger who wants to enter the next destination, "for safety reasons" it's impossible.

So when we're on a highway and plans change (say, someone feels unwell and we need to stop at a pharmacy on the way), we have to stop the car (or slow down to about 5 mph), and only then can the next destination be set.

If Mazda calls this safety (stopping the car on a highway), then it's simply stupid (just like their navigation).

After asking on the Polish Mazda6 forum I learned that the restriction I'm describing doesn't exist in Polish cars at all. That surprised me a lot, because if it's a safety matter, shouldn't it work the same in every country? For the curious: it's not a matter of US regulations either, because other car brands don't make their drivers' lives miserable this way.

As usual in such situations, I contacted Mazda asking how to fix this.

Unfortunately I didn't save my question, but it was about why it's impossible to operate the navigation (even by a passenger) while the car is moving. Here's the reply:

Dear Piotr Gabryjeluk,

Good Morning

Apologies that you are unhappy with our navigation system , to answer your question about operating the navigation while the vehicle is in motion , for safety precautions it could be a distraction to operate while in the vehicle is in motion. For the other issues with the navigation if it is not working as normal I suggest taking the vehicle into a local Mazda dealer to inspect your concern.

Thank you for contacting Mazda Customer Experience Center.

If you have any questions in the future, you can reach me directly using the number and extension below.



Representative, Customer Experience

Such a reply couldn't be left unanswered, so I asked a follow-up question referring to it:

Dear William,

Could you please tell me how exactly operating the navigation system by a passenger is a distraction?

Thank you very much,
Piotr Gabryjeluk

After eight days with no reply whatsoever, I asked:

Hi there,

Am I going to get the response from William?

My issue is definitely NOT SOLVED!

Piotr Gabryjeluk


Dear Piotr Gabryjeluk,

Good Morning Apologies but it looks like I did supply the answer for you in regards why your not able to use the navigation while vehicle is in motion.

Thank you for contacting Mazda Customer Experience Center.

If you have any questions in the future, you can reach me directly using the number and extension below.

Representative, Customer Experience

My reply:

Dear William,

You claimed that my navigation system cannot be operated by the passenger, because it is a security threat. I asked you why is that a security issue. You did not answer to that. Please answer that question.

Thank you,
Piotr Gabryjeluk

No answer. Nobody cares. After searching the Internet I get the impression that I'm not the only one with this problem, and that everyone gets equally ignored by Mazda.

How to fix it?

Since Mazda doesn't give a damn about its customers' convenience, maybe there's a way to fix the problem yourself? It looks like there is. On the Mazda6 forum I found information that the problem can be solved by logging into the car over ssh and switching off a flag.

But how do you log in? It turns out it's enough to plug an Ethernet adapter into the car's USB port; the car will obtain an IP address over DHCP. Then, at that address on the standard SSH port, you log in as root (password jci), remount the file system read-write with mount -o rw,remount /, and then edit the /jci/scripts/ script, commenting out the line containing enable_speed_restriction.
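If the recipe described above is accurate, the whole procedure boils down to a few commands. Everything below is untested and just transcribes those steps; CAR_IP and the exact script name are unknown here, so the ssh/mount steps are shown as comments and only the sed edit is demonstrated on a throwaway file:

```shell
# On the laptop, after plugging a USB Ethernet adapter into the car's USB port,
# the car reportedly picks up an IP over DHCP; then (password: jci):
#   ssh root@CAR_IP            # CAR_IP is a placeholder
# Inside the car's system, remount the root filesystem read-write:
#   mount -o rw,remount /
# ...then comment out the offending line in the script under /jci/scripts/.
# The sed below is the only step we can safely demo outside the car:
printf 'setup\nenable_speed_restriction\ncleanup\n' > /tmp/demo_script
sed -i 's/^\(.*enable_speed_restriction.*\)$/#\1/' /tmp/demo_script
cat /tmp/demo_script
```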

What does this tell us?

If it's true (I haven't verified it yet), it means that Mazda, despite having the technical ability to remove the "safety lock" (which is only enabled in some countries anyway), refuses to do so for its customers, citing completely nonsensical arguments (that it's supposedly a safety matter). On top of that, Mazda's customer service doesn't answer questions and ignores its customers.

At the same time, judging by the opinions of Mazda fans, it seems Mazda went with a proven solution (apparently under the "hood" of the infotainment system sits a fairly standard Linux), which should translate into reliability.

I'll leave the matter of enabling root logins with a three-letter password without comment. The car isn't normally connected to the Internet, and getting into the system requires physical access to the car. I assume such a "standard" password and direct root login make the service technicians' work easier, so I don't see a big threat here.



Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License