Sun Sep 21 13:59:16 CEST 2014

bcache

My "sacrificial box", a machine reserved for any experimentation that can break stuff, has had annoyingly slow IO for a while now. I've had 3 old SATA harddisks (250GB) in a RAID5 (because I don't trust them to survive), and recently I got a cheap 64GB SSD that has become the new rootfs initially.

The performance difference between the SATA disks and the SSD is quite amazing, and the difference between that and a proper SSD is amazing again. Just for fun: the 3-disk RAID5 writes random data at about 1.5MB/s, the crap SSD manages ~60MB/s, and a proper SSD (e.g. Intel) easily hits over 200MB/s. So while this is not great hardware, it's excellent for demonstrating performance hacks.

Recent-ish kernels finally have bcache included, so I decided to see if I can make use of it. Since creating new bcache devices is destructive, I copied all data away, reformatted the relevant partitions and then set up bcache. The SSD is now a 20GB rootfs plus a 40GB cache partition. The RAID5 stays as it is, but gets reformatted as a bcache backing device.
In code:
wipefs -a /dev/md0 # remove old signatures to unconfuse bcache (plain wipefs only lists them, -a actually wipes)
make-bcache -C /dev/sda2 -B /dev/md0 --writeback --cache_replacement_policy=lru
mkfs.xfs /dev/bcache0 # no longer using md0 directly!
Now performance is still quite meh - what's the problem? Oh ... we need to attach the SSD cache device to the backing device!
ls /sys/fs/bcache/
45088921-4709-4d30-a54d-d5a963edf018  register  register_quiet
That's the UUID we need, so:
echo 45088921-4709-4d30-a54d-d5a963edf018 > /sys/block/bcache0/bcache/attach
and dmesg says:
[  549.076506] bcache: bch_cached_dev_attach() Caching md0 as bcache0 on set 45088921-4709-4d30-a54d-d5a963edf018
Tadaah!
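To double-check that the cache is actually in use, a quick sanity check via sysfs (paths as exposed by bcache):
cat /sys/block/bcache0/bcache/state       # "clean" or "dirty" means attached, "no cache" means it isn't
cat /sys/block/bcache0/bcache/cache_mode  # the active mode is shown in brackets, e.g. [writeback]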

So what about performance? Well ... no proper benchmarks yet, but just copying the data back I see very different behaviour. iotop shows writes happening at ~40MB/s, but as the network isn't that fast (100Mbit switch) it only writes for about one second every ~5 seconds.
Unpacking chromium is now CPU-limited and doesn't cause a minute-long IO storm. Responsiveness while copying data is quite excellent.

The write speed for random IO is a lot higher, reaching maybe 2/3rds of the SSD's native speed, but now I have 1TB of storage at that speed - for a $25 upgrade that's quite amazing.

Another interesting thing is that bcache chunks up IO, so the hard disks no longer make an angry purring noise with random IO; instead there's a strange chirping as they write just a few larger chunks every second. It even reduces the noise level?! Neato.

First impression: this is definitely worth setting up for new machines that require good IO performance. The only downside for me is that you need more hardware and thus a slightly bigger budget. But the speedup is "very large" even with a cheap-crap SSD that doesn't even go that fast ...

Edit: ioping, for comparison:
native SATA disks:
32 requests completed in 32.8 s, 34 iops, 136.5 KiB/s
min/avg/max/mdev = 194 us / 29.3 ms / 225.6 ms / 46.4 ms

bcache-enhanced, while writing quite a bit of data:
36 requests completed in 35.9 s, 488 iops, 1.9 MiB/s
min/avg/max/mdev = 193 us / 2.0 ms / 4.4 ms / 1.2 ms


Definitely awesome!

Posted by Patrick | Permalink

Fri Sep 5 08:41:43 CEST 2014

32bit Madness

This week I ran into a funny issue doing backups with rsync:
rsnapshot/weekly.3/server/storage/lost/of/subdirectories/some-stupid.file => rsnapshot/daily.0/server/storage/lost/of/subdirectories/some-stupid.file
ERROR: out of memory in make_file [generator]
rsync error: error allocating core memory buffers (code 22) at util.c(117) [generator=3.0.9]
rsync error: received SIGUSR1 (code 19) at main.c(1298) [receiver=3.0.9]
rsync: connection unexpectedly closed (2168136360 bytes received so far) [sender]
rsync error: error allocating core memory buffers (code 22) at io.c(605) [sender=3.0.9]
Oopsiedaisy, rsync ran out of memory. But ... this machine has 8GB RAM, plus 32GB swap?!
So I re-ran it and kept watching, and BAM, it failed again - with ~4GB RAM still free.

4GB you say, eh? That smells of ... 2^32 ...
For the copying I was using sysrescuecd, and then it became obvious to me: all its binaries are of course 32bit!

So now I'm using the horrible hack of "linux64 chroot /mnt/server" to get a 64bit environment that doesn't randomly run out of address space. Plus I filed 3 new bugs against the Gentoo livecd, which fails to appreciate USB and other things.
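For reference, the chroot hack in full - a sketch, assuming the server's rootfs is already mounted at /mnt/server:
mount --bind /proc /mnt/server/proc
mount --bind /sys /mnt/server/sys
mount --bind /dev /mnt/server/dev
linux64 chroot /mnt/server /bin/bash  # linux64 (a setarch alias from util-linux) switches the personality to x86_64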
Who would have thought that a 16TB partition can make rsync stumble over address space limits ...

Posted by Patrick | Permalink

Wed Sep 3 08:25:27 CEST 2014

AMD HSA

With the release of the "Kaveri" APUs AMD has released some quite intriguing technology. The "APU" is a blend of CPU and GPU; AMD calls the overall concept "HSA" - Heterogeneous System Architecture.
What does this mean for us? In theory, once software catches up, it'll be a lot easier to use GPU acceleration (e.g. OpenCL) within normal applications.

One big advantage seems to be that CPU and GPU share the system memory, so with the right drivers you should be able to do zero-copy GPU processing. No more host-to-GPU copies and other waste of time.

So far there hasn't been any driver support to take advantage of that. Here's the good news: As of a week or two ago there is driver support. Still very alpha, but ... at last, drivers!

On the kernel side there's the kfd driver, which piggybacks on radeon. It's available in a slightly very patched kernel from AMD. During bootup it looks like this:
[    1.651992] [drm] radeon kernel modesetting enabled.
[    1.657248] kfd kfd: Initialized module
[    1.657254] Found CRAT image with size=1440
[    1.657257] Parsing CRAT table with 1 nodes
[    1.657258] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657260] CU CPU: cores=4 id_base=16
[    1.657261] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657262] CU GPU: simds=32 id_base=-2147483648
[    1.657263] Found memory entry in CRAT table with proximity_domain=0
[    1.657264] Found memory entry in CRAT table with proximity_domain=0
[    1.657265] Found memory entry in CRAT table with proximity_domain=0
[    1.657266] Found memory entry in CRAT table with proximity_domain=0
[    1.657267] Found cache entry in CRAT table with processor_id=16
[    1.657268] Found cache entry in CRAT table with processor_id=16
[    1.657269] Found cache entry in CRAT table with processor_id=16
[    1.657270] Found cache entry in CRAT table with processor_id=17
[    1.657271] Found cache entry in CRAT table with processor_id=18
[    1.657272] Found cache entry in CRAT table with processor_id=18
[    1.657273] Found cache entry in CRAT table with processor_id=18
[    1.657274] Found cache entry in CRAT table with processor_id=19
[    1.657274] Found TLB entry in CRAT table (not processing)
[    1.657275] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657277] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657280] Found TLB entry in CRAT table (not processing)
[    1.657286] Creating topology SYSFS entries
[    1.657316] Finished initializing topology ret=0
[    1.663173] [drm] initializing kernel modesetting (KAVERI 0x1002:0x1313 0x1002:0x0123).
[    1.663204] [drm] register mmio base: 0xFEB00000
[    1.663206] [drm] register mmio size: 262144
[    1.663210] [drm] doorbell mmio base: 0xD0000000
[    1.663211] [drm] doorbell mmio size: 8388608
[    1.663280] ATOM BIOS: 113
[    1.663357] radeon 0000:00:01.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
[    1.663359] radeon 0000:00:01.0: GTT: 1024M 0x0000000040000000 - 0x000000007FFFFFFF
[    1.663360] [drm] Detected VRAM RAM=1024M, BAR=256M
[    1.663361] [drm] RAM width 128bits DDR
[    1.663471] [TTM] Zone  kernel: Available graphics memory: 7671900 kiB
[    1.663472] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
[    1.663473] [TTM] Initializing pool allocator
[    1.663477] [TTM] Initializing DMA pool allocator
[    1.663496] [drm] radeon: 1024M of VRAM memory ready
[    1.663497] [drm] radeon: 1024M of GTT memory ready.
[    1.663516] [drm] Loading KAVERI Microcode
[    1.667303] [drm] Internal thermal controller without fan control
[    1.668401] [drm] radeon: dpm initialized
[    1.669403] [drm] GART: num cpu pages 262144, num gpu pages 262144
[    1.685757] [drm] PCIE GART of 1024M enabled (table at 0x0000000000277000).
[    1.685894] radeon 0000:00:01.0: WB enabled
[    1.685905] radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff880429c5bc00
[    1.685908] radeon 0000:00:01.0: fence driver on ring 1 use gpu addr 0x0000000040000c04 and cpu addr 0xffff880429c5bc04
[    1.685910] radeon 0000:00:01.0: fence driver on ring 2 use gpu addr 0x0000000040000c08 and cpu addr 0xffff880429c5bc08
[    1.685912] radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000040000c0c and cpu addr 0xffff880429c5bc0c
[    1.685914] radeon 0000:00:01.0: fence driver on ring 4 use gpu addr 0x0000000040000c10 and cpu addr 0xffff880429c5bc10
[    1.686373] radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000076c98 and cpu addr 0xffffc90012236c98
[    1.686375] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    1.686376] [drm] Driver supports precise vblank timestamp query.
[    1.686406] radeon 0000:00:01.0: irq 83 for MSI/MSI-X
[    1.686418] radeon 0000:00:01.0: radeon: using MSI.
[    1.686441] [drm] radeon: irq initialized.
[    1.689611] [drm] ring test on 0 succeeded in 3 usecs
[    1.689699] [drm] ring test on 1 succeeded in 2 usecs
[    1.689712] [drm] ring test on 2 succeeded in 2 usecs
[    1.689849] [drm] ring test on 3 succeeded in 2 usecs
[    1.689856] [drm] ring test on 4 succeeded in 2 usecs
[    1.711523] tsc: Refined TSC clocksource calibration: 3393.828 MHz
[    1.746010] [drm] ring test on 5 succeeded in 1 usecs
[    1.766115] [drm] UVD initialized successfully.
[    1.767829] [drm] ib test on ring 0 succeeded in 0 usecs
[    2.268252] [drm] ib test on ring 1 succeeded in 0 usecs
[    2.712891] Switched to clocksource tsc
[    2.768698] [drm] ib test on ring 2 succeeded in 0 usecs
[    2.768819] [drm] ib test on ring 3 succeeded in 0 usecs
[    2.768870] [drm] ib test on ring 4 succeeded in 0 usecs
[    2.791599] [drm] ib test on ring 5 succeeded
[    2.812675] [drm] Radeon Display Connectors
[    2.812677] [drm] Connector 0:
[    2.812679] [drm]   DVI-D-1
[    2.812680] [drm]   HPD3
[    2.812682] [drm]   DDC: 0x6550 0x6550 0x6554 0x6554 0x6558 0x6558 0x655c 0x655c
[    2.812683] [drm]   Encoders:
[    2.812684] [drm]     DFP2: INTERNAL_UNIPHY2
[    2.812685] [drm] Connector 1:
[    2.812686] [drm]   HDMI-A-1
[    2.812687] [drm]   HPD1
[    2.812688] [drm]   DDC: 0x6530 0x6530 0x6534 0x6534 0x6538 0x6538 0x653c 0x653c
[    2.812689] [drm]   Encoders:
[    2.812690] [drm]     DFP1: INTERNAL_UNIPHY
[    2.812691] [drm] Connector 2:
[    2.812692] [drm]   VGA-1
[    2.812693] [drm]   HPD2
[    2.812695] [drm]   DDC: 0x6540 0x6540 0x6544 0x6544 0x6548 0x6548 0x654c 0x654c
[    2.812695] [drm]   Encoders:
[    2.812696] [drm]     CRT1: INTERNAL_UNIPHY3
[    2.812697] [drm]     CRT1: NUTMEG
[    2.924144] [drm] fb mappable at 0xC1488000
[    2.924147] [drm] vram apper at 0xC0000000
[    2.924149] [drm] size 9216000
[    2.924150] [drm] fb depth is 24
[    2.924151] [drm]    pitch is 7680
[    2.924428] fbcon: radeondrmfb (fb0) is primary device
[    2.994293] Console: switching to colour frame buffer device 240x75
[    2.999979] radeon 0000:00:01.0: fb0: radeondrmfb frame buffer device
[    2.999981] radeon 0000:00:01.0: registered panic notifier
[    3.008270] ACPI Error: [\_SB_.ALIB] Namespace lookup failure, AE_NOT_FOUND (20131218/psargs-359)
[    3.008275] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATC0] (Node ffff88042f04f028), AE_NOT_FOUND (20131218/psparse-536)
[    3.008282] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATCS] (Node ffff88042f04f000), AE_NOT_FOUND (20131218/psparse-536)
[    3.509149] kfd: kernel_queue sync_with_hw timeout expired 500
[    3.509151] kfd: wptr: 8 rptr: 0
[    3.509243] kfd kfd: added device (1002:1313)
[    3.509248] [drm] Initialized radeon 2.37.0 20080528 for 0000:00:01.0 on minor 0
It is recommended to add udev rules:
# cat /etc/udev/rules.d/kfd.rules 
KERNEL=="kfd", MODE="0666"
(this might not be the best way to do it, but we're just here to test if things work at all ...)
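A slightly saner variant would be to restrict access to a group instead of making the device world-writable - a sketch, the group name being an arbitrary choice:
# /etc/udev/rules.d/kfd.rules
KERNEL=="kfd", GROUP="video", MODE="0660"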

AMD has provided a small shell script to test if things work:
# ./kfd_check_installation.sh 

Kaveri detected:............................Yes
Kaveri type supported:......................Yes
Radeon module is loaded:....................Yes
KFD module is loaded:.......................Yes
AMD IOMMU V2 module is loaded:..............Yes
KFD device exists:..........................Yes
KFD device has correct permissions:.........Yes
Valid GPU ID is detected:...................Yes

Can run HSA.................................YES
So that's a good start. Then you need some support libs ... which I've ebuildized in the most horrible ways.
These ebuilds can be found here.

Since there's at least one binary file with an undeclared license, and some other inconsistencies, I cannot recommend installing these packages right now.
And of course I hope that AMD will release the source code of these libraries ...

There's an example "vector_copy" program included; it mostly works, but appears to go into an infinite loop. Output looks like this:
# ./vector_copy 
Initializing the hsa runtime succeeded.
Calling hsa_iterate_agents succeeded.
Checking if the GPU device is non-zero succeeded.
Querying the device name succeeded.
The device name is Spectre.
Querying the device maximum queue size succeeded.
The maximum queue size is 131072.
Creating the queue succeeded.
Creating the brig module from vector_copy.brig succeeded.
Creating the hsa program succeeded.
Adding the brig module to the program succeeded.
Finding the symbol offset for the kernel succeeded.
Finalizing the program succeeded.
Querying the kernel descriptor address succeeded.
Creating a HSA signal succeeded.
Registering argument memory for input parameter succeeded.
Registering argument memory for output parameter succeeded.
Finding a kernarg memory region succeeded.
Allocating kernel argument memory buffer succeeded.
Registering the argument buffer succeeded.
Dispatching the kernel succeeded.
^C
Big thanks to AMD for giving us geeks some new toys to work with, and I hope it becomes a reliable and efficient platform for some epic number crunching :)

Posted by Patrick | Permalink

Thu Aug 7 04:45:16 CEST 2014

googlecode.com, or no tarballs for you

I'm almost amused, see this bug

So when I fetched it earlier, the tarball had size 207200 bytes.
Most of Europe apparently gets a tarball of size 207135 bytes.
When I download again now, I get a tarball of size 206989 bytes.

So I have to assume that googlecode now follows githerp in its tradition of being useless for code hosting. Is it really that hard to generate a consistent tarball once and then mirror it?
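For the record, producing a byte-identical tarball is a solved problem - a sketch, assuming GNU tar >= 1.28 for --sort:
tar --sort=name --owner=0 --group=0 --numeric-owner \
    --mtime='2014-01-01 00:00Z' -cf snapshot.tar src/
gzip -n snapshot.tar  # -n keeps the timestamp out of the gzip header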
Maybe I should build my own codehosting just to understand why this is apparently impossible ...

Posted by Patrick | Permalink

Mon Jul 14 08:39:32 CEST 2014

Biggest ebuilds in-tree

Random datapoint: there are only about 10 packages with ebuilds over 600 lines.

Sorted by lines, duplicate entries per-package removed, these are the biggest ones:
828 dev-lang/ghc/ghc-7.6.3-r1.ebuild
817 dev-lang/php/php-5.3.28-r3.ebuild
750 net-nds/openldap/openldap-2.4.38-r2.ebuild
664 www-client/chromium/chromium-36.0.1985.67.ebuild
658 games-rpg/nwn-data/nwn-data-1.29-r5.ebuild
654 www-servers/nginx/nginx-1.4.7.ebuild
654 media-video/mplayer/mplayer-1.1.1-r1.ebuild
644 dev-vcs/git/git-9999-r3.ebuild
621 x11-drivers/ati-drivers/ati-drivers-13.4.ebuild
617 sys-freebsd/freebsd-lib/freebsd-lib-9.1-r11.ebuild
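Roughly how such a list can be generated - a sketch, without the per-package deduplication:
find /usr/portage -name '*.ebuild' | xargs wc -l \
    | grep -v ' total$' | sort -rn | head -20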

Posted by Patrick | Permalink

Fri Jun 27 10:01:01 CEST 2014

Build times

Just for fun, over about 8500 packages built, the slowest three:
     Fri Jun 13 19:40:13 2014 >>> dev-python/pypy-2.2.1
       merge time: 2 hours, 7 minutes and 23 seconds.

     Fri Jun 20 09:58:38 2014 >>> app-office/libreoffice-4.2.4.2
       merge time: 1 hour, 37 minutes and 22 seconds.

     Fri Jun 27 12:52:19 2014 >>> sci-libs/openfoam-2.3.0
       merge time: 1 hour, 5 minutes and 8 seconds.
(Quadcore AMD64, 3.4GHz, 8GB RAM)

These are also the only packages above 1h build time.
The average seems to be near 5 minutes (it's hard to filter out all the binpkg merges, which are silly-fast).
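In case you want to reproduce this: merge times can be pulled straight out of the emerge log - a rough sketch, log format assumed (app-portage/genlop does the same thing more comfortably):
awk '$2 == ">>>" && $3 == "emerge"    { start[$7] = $1 + 0 }
     $2 == ":::" && $3 == "completed" { if ($8 in start)
                                          print ($1 + 0) - start[$8], $8 }' \
    /var/log/emerge.log | sort -rn | head -3  # seconds, slowest first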

Edit: New highscore!
     Sun Jun 29 20:36:09 2014 >>> sci-mathematics/nusmv-2.5.4
       merge time: 2 hours, 58 minutes.

Posted by Patrick | Permalink

Wed Jun 25 08:27:30 CEST 2014

Building Everything

Preparation:
  • Take recent stage3 and unpack to a temporary location
  • Set up things: make.conf, resolv.conf, keywords, ...
  • Update @system, check gcc version etc.
  • Clone this snapshot to 4 locations (4 because of CPU cores)
  • bindmount /usr/portage and friends (see the sketch below)
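The bind mounts, roughly - a sketch, with /mnt/clone1 standing in for one of the clones:
for d in /usr/portage /proc /sys /dev; do
    mount --bind "$d" "/mnt/clone1${d}"
done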
Run:
Start a screen session for each clone. Chroot in. Apply magic oneliner:
for i in $( qsearch -NC --all | sort -R ); do
    # pretend-merge from binpkgs first, to skip packages that can't be resolved at all
    if emerge --nodeps -pk "$i" > /dev/null; then
        emerge --depclean
        echo "$i"
        emerge -uNDk1 "$i"
    fi
done
Wait 4-5 days, get >10k binary packages, lots of logfiles.

Space usage:
~2.5G logfiles
~35G distfiles
~20G binary packages
~100G temp space (/var/tmp has lots of cruft unless FEATURES="fail-clean")


Triage of these logfiles yields about 1% build failures, on average.
It's not hard to do, just tedious!

make.conf additions:
FEATURES="buildpkg split-log -news"
PORT_LOGDIR="/var/log/portage/"
MAKEOPTS="-j4"
EMERGE_DEFAULT_OPTS="--jobs 4"

CLEAN_DELAY="0"
EMERGE_WARNING_DELAY="0"
ACCEPT_PROPERTIES="* -interactive"

Posted by Patrick | Permalink

Tue Jun 17 09:13:34 CEST 2014

EAPI statistics, again

Start: Thu Jan 16 08:18:45 UTC 2014
End:   Mon Jun 16 00:00:01 UTC 2014

EAPI 0:   5966 ebuilds (15.78 percent) ->  5477 ebuilds (14.40 percent)
EAPI 1:    370 ebuilds (0.98 percent)  ->   215 ebuilds ( 0.57 percent)
EAPI 2:   3335 ebuilds (8.82 percent)  ->  2938 ebuilds ( 7.72 percent)
EAPI 3:   3005 ebuilds (7.95 percent)  ->  2585 ebuilds ( 6.79 percent)
EAPI 4:  12385 ebuilds (32.76 percent) -> 10375 ebuilds (27.27 percent)
EAPI 5:  12742 ebuilds (33.71 percent) -> 16455 ebuilds (43.25 percent)
Total    37803 -> 38045

EAPI 0 change:  -8.2%
EAPI 1 change: -41.9%
EAPI 2 change: -11.9%
EAPI 3 change: -14.0%
EAPI 4 change: -16.2%
EAPI 5 change: +29.1%
So over the last 5 months the total number of ebuilds grew by less than 1%. The only growing class is EAPI 5, which is quite excellent.

EAPI 0 is the slowest to decrease; as long as there's no coordinated effort to get rid of it, it'll be there forever. EAPI 1 is now very close to extinction.

EAPI 2, 3 and 4 are slowly shrinking away, but at this rate it'll still take years.
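For reference, roughly how such counts can be gathered - a sketch; note that EAPI 0 is usually implicit (no EAPI= line at all), so it is best counted as the remainder:
cd /usr/portage
for e in 1 2 3 4 5; do
    printf 'EAPI %s: %s ebuilds\n' "$e" \
        "$(grep -rlE --include='*.ebuild' "^EAPI=[\"']?${e}" . | wc -l)"
done
find . -name '*.ebuild' | wc -l  # total; EAPI 0 = total minus the sum above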

Posted by Patrick | Permalink

Fri Jun 13 10:33:22 CEST 2014

A one-line Tinderbox

Needs portage-utils; best run in a chroot:
for i in $( qsearch --all -CN | sort -R ); do emerge -1 $i; emerge --depclean; done

Posted by Patrick | Permalink

Sat May 3 03:40:52 CEST 2014

KDE's Baloo Indexer: Constructive Criticism

KDE 4.13 was released with a new indexer, named "Baloo". It mostly replaces the 'old' Nepomuk indexer, which at first glance appears to be a good idea. It seems to work, so that's quite swell. There's only one problem. Or rather, some little problems - and upstream is one of them, as they don't want to acknowledge that these issues exist. So let me try to explain ...
  • There are times when I just need the indexer to not run: for example when I'm watching a movie (IO activity -> stutter) or doing a presentation (random lag?!). And there are times (e.g. at night) when the indexer can run as much as it wants.
  • There are times when the indexer interferes with normal operation - e.g. when using Firefox the added IO activity makes the UI lag severely, as if the machine were swapping. That's partially because the indexer's IO evicts the filesystem cache, which is quite funny. And fsync plus lots of reads means the latency of a single IO operation goes up to multiple seconds, or even tens of seconds.
  • The indexer claims not to interfere with normal operation. It limits itself to 10% CPU usage - which is the wrong metric, since I have lots of CPU and very little IO bandwidth, relatively speaking. Thus it takes 100% of the available IO bandwidth. Akonadi used up to 4 CPUs for long stretches of time, but as it didn't hurt IO much I could just ignore it.
  • The indexer takes a LONG time. On boot it needs about 20 minutes of walltime just to figure out whether anything has changed. During that time service quality is severely degraded.
  • The indexer takes a long time. The initial scan of my home directory takes about, hmm, 36-48h I think, during which service quality is severely degraded.
  • The indexer isn't polite: it auto-respawns if you just kill the baloo_file_indexer process. You have to kill its parent too, otherwise it'll just respawn and bother you some more.
  • [Fixed in next release] Removing a directory from the index causes an index cleaner to run, which is even more severe than the indexer itself
So, to summarize: as much as I like the indexer, it prevents me from working normally, so I have to insist that it gets a simple "off" button. A lesson that akonadi learned, that gnome's tracker learned: you need to nice yourself down. It would be very much appreciated if baloo were to nice and ionice itself down to idle, which usually avoids the severe lag that foreground tasks otherwise experience.
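Until then this can be done from the outside - a sketch; the process names are what I see on my system:
renice -n 19 -p $(pgrep baloo_file)  # CPU: lowest priority
ionice -c 3 -p $(pgrep baloo_file)   # IO: idle class, only runs when the disk is otherwise unused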

An extra bonus would be this: the indexer should run a microbenchmark on startup (or let the user provide a guesstimate) to figure out the IO capacity in IO/s, and then limit itself to a configurable fraction of that. If it took 1/10th of my IO bandwidth (about 10-15 IO/s with a single SATA disk) it wouldn't bother me any more than, say, Firefox running in the background.

Another interesting glitch is that most indexers use inotify watches to see if anything in a directory changes. This has the funny effect that it only works on small data sets - on my desktop I get random popups saying that an application wants to change system limits. Well, /proc/sys/fs/inotify/max_user_watches is already set to 262144 by default, and that's still not enough? The watches also cost memory, so this can't scale up forever. And I "only" have a few million files; that's not even a lot.
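Raising the limit is possible, at the cost of some kernel memory per watch - the usual sysctl dance:
sysctl fs.inotify.max_user_watches=1048576
echo fs.inotify.max_user_watches=1048576 >> /etc/sysctl.conf  # make it stick across reboots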

So, to summarize:
Simple fixes:
  • Nice and ionice the indexer on startup
  • Provide users with a simple on/off mechanism
Advanced fixes:
  • Throttle on IO instead of CPU
  • Delay indexer startup for a little while on boot. Maybe 120sec grace period
  • Figure out system limits and fail gracefully instead of annoying users with popups
Well, dear upstream, don't accuse me of not being constructive ...
<DrEeevil> people complain about user-hostile behaviour, and you tell them to ... be nicer and not complain so loud?
<unormal> DrEeevil: To be honest. The only things I see from you in here since hours and days is hostile behaviour. I really would like to ask you to stop this and be constructive or otherwise leave
<DrEeevil> unormal: well, if I didn't have to remove binaries and kill processes I'd be a lot happier
<DrEeevil> since upstream hasn't shown any understanding I'll rather escalate until the bugs are resolved
<DrEeevil> constructive: give me an off button so I can stop the indexer when it hurts me, give me a rate limit so I can run it while using the computer
<DrEeevil> (using 99% of available IO bandwidth for up to 72h is just not acceptable in normal use)
<DrEeevil> I don't want to remove the indexer, but I want to control how much resource usage it has
<DrEeevil> (bonus: ionice + nice it down to lowest/idle, then it doesn't bother that much)
<DrEeevil> it's not THAT hard to figure that out ...
<unormal> DrEeevil: Ok and now explicitely: Please do us a favor and leave the channel.
<DrEeevil> unormal: once I can have baloo installed and working as described above you'll never hear from me again
<DrEeevil> just every time I get a local DoS you get a complaint, so that you don't lose the motivation to fix the bugs
<unormal> That's not how you motivate people, please leave.
<DrEeevil> that's not how you write software, please fix
<DrEeevil> I'll be the stone in your shoe until you stop being the one in mine
<DrEeevil> heck, I'll even test patches once they are provided!
<krop> and ultimately, you'll roll on the floor crying until something happens ? how can gentoo accept immature people in their staff ?
--> seaLne (~seaLne@kde/kenny) has joined #kde-baloo
*** Mode #kde-baloo +o seaLne by ChanServ
<DrEeevil> krop: how can kde have releases with such serious regressions?
<DrEeevil> sorry, I don't deal with C++, in this case I'm just a QA tool
<krop> no, you just behave like a stubborn child
<DrEeevil> because I actually would like to USE kde
<DrEeevil> not sure how you see that, but it's kinda nice usually, except when someone staples in a DoS and then tells me that's all fine and dandy
<DrEeevil> maybe I should use git HEAD again to catch regressions earlier
*** Mode #kde-baloo +b DrEeevil!*@* by seaLne
*** Mode #kde-baloo +b not!*@* by seaLne
*** Mode #kde-baloo +b being!*@* by seaLne
*** Mode #kde-baloo +b constructive!*@* by seaLne
<DrEeevil> heh
<-* seaLne has kicked DrEeevil from #kde-baloo (DrEeevil)

Posted by Patrick | Permalink