Share screen over HTTP and M-JPEG

13 Oct 2014 20:14

Here's a script that shares your screen so that it can be watched from a plain web browser:

#!/bin/bash

fps=5
bitrate=15000
port=8080

transcode="transcode{vcodec=MJPG,vb=$bitrate}"
http='http{mime=multipart/x-mixed-replace}'
std="standard{access=$http,mux=mpjpeg,dst=:$port}"
sout="#$transcode:$std"

cvlc -q screen:// ":screen-fps=$fps" --sout "$sout"

Connect using a browser pointed to http://localhost:8080/

Opera seems to handle that stream best. If you don't set the zoom to 100%, it will flicker a bit, though.
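
If a browser doesn't cooperate, the stream should also open in another VLC instance. This is an untested sketch: from another machine replace localhost with the streaming host's address, and you may need to force the demuxer:

vlc --demux=mjpeg http://localhost:8080/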


Useful oneliner to set your DNS to Google

12 Feb 2014 18:11

This one-liner eases things a lot when you're dealing with a crappy internet connection (airport, AirBNB etc.).

#!/bin/bash

echo nameserver 8.8.8.8 | sudo tee /etc/resolv.conf > /dev/null
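
To confirm the change took effect, you can run a quick lookup (assuming the dig tool from your distro's dnsutils/bind-utils package is installed):

dig example.com

The ";; SERVER:" line near the bottom of the output should now show 8.8.8.8.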


BASH HTTP server evolves

31 Jan 2012 18:34

Some time ago, mainly for fun, I created an HTTP server in just BASH and netcat. The aim was to instantly and simply share files between computers in a local network with a one-line command:

quake@vaio /home/quake/files $ http_server.sh 8000

And voila, the files in the directory /home/quake/files are accessible via a web browser (or a wget command) on every computer in the local network.

Later I learned that I can achieve the same effect using a simple Python one-liner:

python -m SimpleHTTPServer

This works with standard Python 2.x installations; for Python 3.x it's even simpler:

python -m http.server
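
Both modules accept an optional port number, so the equivalent of the 8000 example above is:

python -m http.server 8000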

No need for custom scripts, netcat or other fancy stuff; all you need is a standard Python installation and it works ;-).

But recently I faced the challenge of copying many gigabytes of files over the network. Copying files over SSH was too slow (the data is copied from an ARM machine, which is not really blazing fast at encryption). I tried to copy the files over FTP, but failed at configuring a read-write FTP server in the limited time I had. I wanted to avoid configuring other fancy file servers like Samba/CIFS. I could have used NFS, which is both simple to configure and fast enough, but I decided to go fancier.

I took my old http_server.sh, tweaked it a bit (including replacing lame cat "$file" | wc -c - with stat -c %s to determine file size and replacing gawk with awk in the script) and then created a specialised version of it: tar_server.sh.

tar_server.sh is an HTTP server based on http_server.sh: it shows a list of the directories inside the directory it was run from and lets you download each of them as a tar file. It does the tarring on the fly, so you don't waste any disk space.

It's as simple as:

quake@vaio /home/quake/files $ tar_server.sh 8000

Then you can see the list of tar files to download at http://your_ip:8000/ . Suppose you have a directory /home/quake/files/backup. You can download it on some other machine using:

quake@other-machine /home/someuser/files $ wget http://your_ip:8000/backup.tar

Or to unpack on the fly:

quake@other-machine /home/someuser/files $ wget http://your_ip:8000/backup.tar -O - | tar -x

This way you can mirror part of your filesystem with almost no dedicated tools. The script is quite OS-independent and requires only the netcat, awk, tar and stat commands, which are likely to be found on any Unix-like system.
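
The core trick can be sketched in a few lines (a simplified, hypothetical illustration, not the actual tar_server.sh): write a minimal HTTP response header and pipe tar's output straight into netcat, so nothing is ever written to disk:

#!/bin/bash

# Simplified sketch: serve ONE directory to ONE client as a tar stream.
# Assumes a traditional netcat that accepts -l -p; option names vary between variants.
dir=backup
port=8000

{
    printf 'HTTP/1.0 200 OK\r\n'
    printf 'Content-Type: application/x-tar\r\n'
    printf '\r\n'
    tar -cf - "$dir"
} | nc -l -p "$port"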

The script also proves that BASH is still a very useful tool and that adapting simple scripts is easy and FUN :-).

Remember, the scripts (just like the original version) can handle only one client at a time, so if you want to do things in parallel, you need to launch several of them, each on a different port.
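
For example, a few independent instances on consecutive ports (a quick sketch, assuming tar_server.sh is in your PATH):

for port in 8000 8001 8002; do
    tar_server.sh "$port" &
done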

Here are the scripts: http_server.sh and tar_server.sh.


Running things in parallel in BASH

09 Mar 2009 00:21

Suppose you have a nice script that does its job pretty well, but you've figured out that running certain parts of it in parallel would speed things up.

This can be the case when you send a bunch of files to an Internet service that is generally fast, but where establishing each connection is quite slow, so uploading 100 files one after another makes the script wait through the slow connection setup 100 times just to quickly upload each file.

Another situation is a multi-core machine: you have, say, eight processing units but use only one of them in your script, and a bunch of files to compile or to process in some CPU-expensive manner.

We'll use only BASH to smartly parallelize the tasks and speed up the slow part of your script.

First of all, you need to decide how many parallel jobs you want (if you have 8 cores and a CPU-expensive part of the script, having more than 8 jobs does not help; a number between 4 and 8 will probably do best in this case).

#!/bin/bash

PROC_NUM=4

Generally, we'll ensure that no more than PROC_NUM processes are forked into the background at once. If there are already PROC_NUM processes running in the background, we'll wait a (fraction of a) second and check again.

#!/bin/bash

PROC_NUM=4

function run_task() {
    # task to run
    # can be more than one line
    # can take parameters $1, $2, ...
    :   # no-op placeholder so the empty skeleton is valid bash; replace it with real work
}

function run_parallel() {
    while [ `jobs | grep Running | wc -l` -ge $PROC_NUM ]; do
        sleep 0.25
    done

    run_task "$@" &
}

run_task "$@" passes all the parameters passed to run_parallel to run_task. You can use "$@" in run_task to pass all the parameters to external command! The "$@" is the best choice when you have spaces, dollars and other special characters in parameters. It doesn't transform anything, it's completely safe (probably the only short way to pass all the parameters).

There are only two things left: invoking run_parallel and synchronizing the tasks. You need to know when ALL the tasks have ended, right?

#!/bin/bash

PROC_NUM=4

function run_task() {
    # task to run
    # can be more than one line
    # can take parameters $1, $2, ...
    :   # no-op placeholder so the empty skeleton is valid bash; replace it with real work
}

function run_parallel() {
    while [ `jobs | grep Running | wc -l` -ge $PROC_NUM ]; do
        sleep 0.25
    done

    run_task "$@" &
}

function end_parallel() {
    while [ `jobs | grep Running | wc -l` -gt 0 ]; do
        sleep 0.25
    done
}

# script content

cd /some/where/you/want

# now the parallel operations
# for example in some while

# read file names in the current shell (not in a pipe's subshell),
# so the background jobs stay visible to end_parallel below
while IFS= read -r file; do
    run_parallel "$file"
done < <(find .)

# now you want to continue when ALL parallel tasks ended

end_parallel

# the linear script code again

cd /some/where/else
make something

That's all! Though, there is a different approach to this:

#!/bin/bash

function parallel() {
    local PROC_NUM="$1"
    local SLEEP_TIME="$2"
    shift; shift
    while [ `jobs | grep Running | wc -l` -ge $PROC_NUM ]; do
        sleep $SLEEP_TIME
    done
    "$@" &
}

This function acts as a wrapper around a non-parallel command and runs it in the background, ensuring that no more than PROC_NUM processes run at once. If there are already PROC_NUM processes running in the background, the wrapper waits SLEEP_TIME seconds and then re-checks the number of background jobs.

Invoking:

parallel PROC_NUM SLEEP_TIME /usr/bin/some-command arguments ...

so

parallel 4 0.5 ls -R /tmp

means: run ls -R /tmp in the background if there are no more than 3 processes already running in the background. Otherwise wait 0.5 seconds and try again. Then run ls -R /tmp if there are no more than 3 processes already running in the background. Otherwise wait 0.5 seconds and try again. Then run ls -R /tmp if …
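
A hypothetical usage sketch: compress every log file, at most 4 at a time, then block until all the background jobs have finished:

for f in /var/log/*.log; do
    parallel 4 0.5 gzip -k "$f"
done
wait   # bash builtin: returns once all background jobs are done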

Quite nice, isn't it?

