Japanese input method in KDE 3.5 (Hardy Heron)

The easiest (and recommended) way to enable Japanese input in KDE is through System Settings -> Regional & Accessibility -> Country/Region & Language. Installing support for Japanese also installs the scim-anthy input method. If you need to use Japanese in a terminal, the easiest way is to set Japanese as the system language (but that of course changes the default language everywhere, e.g. most man pages and system commands like apt-get will be in Japanese).

However, this will not enable Japanese input in non-KDE programs like Firefox. For this you need to install scim-bridge-client-gtk and im-switch. Then run

im-switch -s scim-bridge

to set scim-bridge as the default input method for all applications.

scim-bridge is a new socket-based input method module that fixes many of the annoying problems present in plain scim, like various crashes in Firefox and Thunderbird and whitespace-mapping bugs in KDE. The only thing that doesn’t work is Japanese input in Skype, which is a pity. Apparently this is fixed in KDE 4.0.

All in all, this is a very painless procedure compared to what it used to be in Dapper or Edgy.


A script for running processes in parallel in Bash

In Bash you can start new processes in the background simply by running a command with an ampersand &. The wait command can be used to wait until all background processes have finished (to wait for a certain process, do wait PID, where PID is a process ID). So here’s a simple piece of pseudocode for parallel processing:

NPROC=0
for ARG in $*; do
    command $ARG &
    NPROC=$(($NPROC+1))
    if [ "$NPROC" -ge 4 ]; then
        wait
        NPROC=0
    fi
done

I.e. you run four processes at a time and wait until all of them have finished before starting the next four. This is a sufficient solution if all of the processes take equally long to finish, but it is suboptimal if the running times of the processes vary a lot.

A better solution is to track the process IDs and poll whether all of them are still running. In Bash, $! returns the ID of the last initiated background process, and as long as a process is running, its PID is found as a directory under /proc/.
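A minimal sketch of this polling idea (Linux-specific, since it relies on the /proc filesystem; sleep stands in for a real job):

```shell
#!/bin/bash
sleep 2 &                      # start some background job
PID=$!                         # $! holds the PID of the last background process
while [ -d /proc/$PID ]; do    # the directory exists while the process runs
    sleep 0.5                  # poll twice a second
done
echo "process $PID finished"
```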

Based on the ideas given in an Ubuntu forum thread and a template on command-line parsing, I wrote a simple script “parallel” that lets you run virtually any simple command concurrently.

Assume that you have a program proc and you want to run something like proc *.jpg using three concurrent processes. Then simply run

parallel -j 3 proc *.jpg

The script takes care of dividing the task. Obviously, -j 3 stands for three simultaneous jobs.
If you need command-line options, use quotes to separate the command from the variable arguments, e.g.

parallel -j 3 "proc -r -A=40" *.jpg

Furthermore, -r allows even more sophisticated commands by replacing the asterisks in the command string with the argument:

parallel -j 6 -r "convert -scale 50% * small/small_*" *.jpg

I.e. this executes convert -scale 50% file1.jpg small/small_file1.jpg for each of the jpg files. This is a real-life example of scaling down images by 50% (requires ImageMagick).
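Under the hood this is just Bash pattern substitution, the same one-liner the script uses; a quick sketch with a hypothetical file name:

```shell
#!/bin/bash
COMMAND="convert -scale 50% * small/small_*"
INS="file1.jpg"
# ${COMMAND//"*"/$INS} replaces every literal asterisk with the argument
# (the quotes around * keep it from being treated as a glob pattern)
CMD=${COMMAND//"*"/$INS}
echo "$CMD"
# convert -scale 50% file1.jpg small/small_file1.jpg
```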

Finally, here’s the script. It can easily be adapted to different jobs, too. Just write your command between # DEFINE COMMAND and # DEFINE COMMAND END.

#!/bin/bash
NUM=0
QUEUE=""
MAX_NPROC=2 # default
REPLACE_CMD=0 # no replacement by default
USAGE="A simple wrapper for running processes in parallel.
Usage: `basename $0` [-h] [-r] [-j nb_jobs] command arg_list
 	-h		Shows this help
	-r		Replace asterisk * in the command string with argument
	-j nb_jobs 	Set number of simultaneous jobs [2]
 Examples:
 	`basename $0` somecommand arg1 arg2 arg3
 	`basename $0` -j 3 \"somecommand -r -p\" arg1 arg2 arg3
 	`basename $0` -j 6 -r \"convert -scale 50% * small/small_*\" *.jpg"

function queue {
	QUEUE="$QUEUE $1"
	NUM=$(($NUM+1))
}

function regeneratequeue {
	OLDREQUEUE=$QUEUE
	QUEUE=""
	NUM=0
	for PID in $OLDREQUEUE
	do
		if [ -d /proc/$PID  ] ; then
			QUEUE="$QUEUE $PID"
			NUM=$(($NUM+1))
		fi
	done
}

function checkqueue {
	OLDCHQUEUE=$QUEUE
	for PID in $OLDCHQUEUE
	do
		if [ ! -d /proc/$PID ] ; then
			regeneratequeue # at least one PID has finished
			break
		fi
	done
}

# parse command line
if [ $# -eq 0 ]; then #  must be at least one arg
	echo "$USAGE" >&2
	exit 1
fi

while getopts j:rh OPT; do # "j:" expects an argument, "h" doesn't
    case $OPT in
	h)	echo "$USAGE"
		exit 0 ;;
	j)	MAX_NPROC=$OPTARG ;;
	r)	REPLACE_CMD=1 ;;
	\?)	# getopts issues an error message
		echo "$USAGE" >&2
		exit 1 ;;
    esac
done

# Main program
echo Using $MAX_NPROC parallel threads
shift `expr $OPTIND - 1` # shift input args, ignore processed args
COMMAND=$1
shift

for INS in $* # for the rest of the arguments
do
	# DEFINE COMMAND
	if [ $REPLACE_CMD -eq 1 ]; then
		CMD=${COMMAND//"*"/$INS}
	else
		CMD="$COMMAND $INS" #append args
	fi
	echo "Running $CMD" 

	$CMD &
	# DEFINE COMMAND END

	PID=$!
	queue $PID

	while [ $NUM -ge $MAX_NPROC ]; do
		checkqueue
		sleep 0.4
	done
done
wait # wait for all processes to finish before exit

Print directory tree disk usage on the command line

Suppose you have a large tree of directories containing lots of data (such as the source code of a big project or the numerical output of your simulations) and you need to estimate the total size of the whole tree. In a graphical user interface this can be done by examining the directory properties, but as usual things can be done faster on the command line, where the suitable command is du (short for disk usage).

An example (assuming that the root of your directory tree is called data)

du -h --max-depth=1 data
1.1G data/soln
3.0M data/binary
1.1G data

The -h option tells du to use a human-readable format, i.e. K, M, G suffixes instead of raw byte counts. The option --max-depth=1 means that only the first level of subdirectories is listed. For more info on the options, run man du.

    It is convenient to create an alias to shorten the long command, such as
    alias disku='du -h --max-depth=1'
    For the alias to be available in all future sessions, add the line to your shell initialization file (~/.bashrc for the bash shell, for example).
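If you also want the subdirectories ordered by size, GNU sort can compare the human-readable numbers directly with its own -h flag (assumes GNU coreutils):

```shell
# List first-level subdirectories of data, smallest first, largest last;
# sort -h understands the K/M/G suffixes produced by du -h
du -h --max-depth=1 data | sort -h
```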

Keep processes running even if you close terminal / log out

You left the office in a hurry on Friday and forgot to run one process. Of course you can ssh to your desktop from home and start it. But the problem is that the process takes several hours (or days) to complete. Do you need to keep your terminal and ssh connection open for all that time?

The answer is no. There are several ways of detaching a process from any display, so that killing the connection will not kill the process. The easiest method that I have encountered is screen. It is usually in the Linux repositories, so for example in Ubuntu you can install it through apt. Note that you need to install it on the machine where you want to run the executable.

Here's how it works.
1) Log in to the remote machine.
2) Run screen on the command line. A new "virtual" shell will open (after you press space or enter). This new shell is the one that will be detached.
3) Run your process.
4) Detach the display by pressing Ctrl+a followed by d. The original shell (where you typed screen) is displayed again. You can now freely log out without affecting the detached process.
5) To resume the virtual shell, log in again and run screen -r on the remote machine.
The detached shell is displayed again just as you left it. As if you had never logged out at all!
6) When you're finally done, kill the virtual shell by running exit as usual.

screen also has other, more advanced features. To be able to scroll the terminal, press
Ctrl+a [ to enter copy mode. In this mode scrolling is possible with the up/down keys;
Esc exits copy mode.

It is also possible to use the command nohup for this task, but it's not as advanced as screen, and it redirects all output to log files.
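As a sketch of that alternative (the names long_job.sh and job.log are hypothetical):

```shell
# Start the job immune to hangups; stdout and stderr are collected in job.log
nohup ./long_job.sh > job.log 2>&1 &
# Optionally also remove it from the shell's job table (bash builtin),
# so the shell won't send it SIGHUP on exit
disown
```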

Using kompare to view Subversion differences

When you’re using Subversion (svn) repositories for code development, it is sometimes useful to check the differences between code revisions (e.g. to find out when things first went wrong). If you are using KDE, Kompare is a graphical difference viewer that handles this task easily. Simply run

svn diff -r 1020:1047 | kompare -o -

in a directory that belongs to the svn tree. A Kompare window will open, showing a comparison of all the changed files in that directory tree. The numbers refer to revision numbers, i.e. in this case revisions 1020 and 1047 are being compared. The revision switch -r accepts other forms, too:

'{' DATE '}' revision at start of the date
'HEAD' latest in repository
'BASE' base rev of item's working copy
'COMMITTED' last commit at or before BASE
'PREV' revision just before COMMITTED

For more information on the svn diff command, run svn help diff.

nVidia 8600 GT direct rendering on Hardy Heron

I made a clean install of the new Kubuntu 8.04 on a Shuttle SP35P2. Everything went reasonably smoothly; I could even install the nVidia drivers through KDE->System->Hardware Drivers Manager.

However, setting up the compiz desktop effects (KDE->System->Desktop Effects) didn’t quite work out. After a reboot I logged in and got the white screen of death. I had to switch to a terminal with Ctrl+Alt+F1, restart the X server with

sudo /etc/init.d/kdm restart

and log in in failsafe mode. The desktop effects setup can also be accessed from a terminal by running

desktop-effects-kde4

After disabling all the effects I was able to log in in normal mode again.

It turned out that although the driver was installed, it wasn’t working properly. Running glxinfo printed out


...
direct rendering: No (If you want to ...
server glx vendor string: SGI
...

Clearly there’s something wrong here…
I installed the newest drivers from the repositories and also installed drivers using EnvyNG. I even went to the nVidia homepage and got the latest driver installer, and when even that didn’t work, I downloaded the nVidia beta drivers from the same site. No remedy.

In the end it turned out that the culprit was the package xserver-xgl, which prevents direct rendering (DRI) in all cases. Removing the package and rebooting did the trick for me. I’m now using the nVidia beta drivers (173.08) and the compiz effects seem to work OK. The drawback is that I noticed some decrease in performance once xserver-xgl was no longer installed.

To make a long story short:
If you are using nVidia drivers but don’t get direct rendering working, check whether you have the latest (beta?) drivers and whether xserver-xgl is installed (remove it if it is).

— Edit 2008-05-28 —
Upgrading to kernel 2.6.24-17 broke the nVidia driver: the nvidia kernel module would not load correctly anymore, which was probably due to the manual beta driver installation (see the Ubuntu forums). I decided to go back to nvidia-glx-new from the Ubuntu repositories.

I had to remove the new kernel 2.6.24-17 and purge all nVidia-related packages. After rebooting I reinstalled the new kernel and rebooted again, then installed the nVidia driver from the Hardware Drivers Manager and rebooted once more. Everything was OK, except that the 3D desktop effects still don’t work with xserver-xgl.

To summarize: it’s better to stick with the Ubuntu drivers for future compatibility.