#ceph IRC Log


IRC Log for 2012-11-23

Timestamps are in GMT/BST.

[0:03] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:07] * BManojlovic (~steki@212.69.24.38) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:12] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:26] <madkiss> it's thanksgiving, obviously!
[0:27] <lurbs> I'm currently giving thanks that my awful workarounds to prevent scrubbing have kept my test cluster stable for the last day.
[0:28] * ssedov (stas@ssh.deglitch.com) Quit (Ping timeout: 480 seconds)
[0:34] <joao> I'm thankful that either my email account or vger is working properly again
[0:35] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:35] * loicd (~loic@magenta.dachary.org) has joined #ceph
[0:36] * stass (stas@ssh.deglitch.com) has joined #ceph
[0:41] <madkiss> then that can go on a trip, hm?
[0:41] <madkiss> oops-
[0:41] <madkiss> -EWIN
[0:42] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[0:48] * tnt (~tnt@162.63-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[0:53] * ssedov (stas@ssh.deglitch.com) has joined #ceph
[0:57] * tnt (~tnt@162.63-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[0:59] * stass (stas@ssh.deglitch.com) Quit (Ping timeout: 480 seconds)
[1:07] <madkiss> ahum.
[1:07] <madkiss> Setting up ceph-deploy (0.54+git20121119-1) ...
[1:07] <madkiss> hehe.
[1:15] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) has joined #ceph
[1:15] * ChanServ sets mode +o scuttlemonkey
[1:16] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[1:19] <madkiss> http://people.debian.org/~madkiss/ceph-deploy/
[1:19] <madkiss> if somebody is interested in any way
[1:23] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[1:24] * maxiz_ (~pfliu@111.192.245.239) has joined #ceph
[1:28] * maxiz_ (~pfliu@111.192.245.239) Quit ()
[1:30] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has joined #ceph
[1:31] * tnt (~tnt@162.63-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:38] * ssedov (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[1:39] * stass (stas@ssh.deglitch.com) has joined #ceph
[1:42] <xiaoxi> why do my mails to ceph-devel@vger.kernel.org still fail?
[1:42] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[1:48] <phantomcircuit> ah much better
[1:49] <phantomcircuit> joao, replaced my insane lvm mirror setup with one osd per disk with xfs
[1:49] * calebamiles (~caleb@c-24-128-194-192.hsd1.vt.comcast.net) has left #ceph
[1:49] <phantomcircuit> massive reduction in IOPS
[1:49] <phantomcircuit> went from being disk limited on downloads to network limited :)
[1:52] <joao> I bet the noise levels were also reduced ;)
[1:58] <phantomcircuit> considerably
[1:59] <phantomcircuit> i should probably spring for a real server lol
[2:09] <plut0> phantomcircuit: you went from raid to jbod and got better performance?
[2:10] <phantomcircuit> i had two osd's on top of an lvm mirrored volume with mirrored mirror logs
[2:10] * xiaoxi (~xiaoxiche@134.134.137.71) Quit (Remote host closed the connection)
[2:10] <phantomcircuit> so each write resulted in at least 8 actual disk writes
[2:10] <phantomcircuit> shockingly performance was terrible
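(The "at least 8" arithmetic, roughly: each logical write means a journal write plus an object-file write; each of those lands on both mirror legs and, with mirrored mirror logs, also dirties both log legs in the worst case, i.e. 2 x (2 + 2) = 8 physical writes.)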
[2:11] <plut0> so each disk as osd was better?
[2:11] * scalability-junk (~stp@188-193-202-99-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[2:11] <phantomcircuit> plut0, considerably
[2:12] <plut0> what kind of disks?
[2:17] <phantomcircuit> crappy consumer ones
[2:20] <plut0> i see
[2:23] * andreask1 (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[2:30] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[2:36] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[2:50] <phantomcircuit> Cpu(s): 1.7%us, 0.5%sy, 0.0%ni, 27.4%id, 70.0%wa, 0.0%hi, 0.4%si, 0.0%st
[2:50] <phantomcircuit> sigh
[3:11] * timmclaughlin (~timmclaug@173-25-192-164.client.mchsi.com) has joined #ceph
[3:20] * yoshi (~yoshi@p11251-ipngn4301marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[3:58] * _renzhi_away_ (~renzhi@116.226.37.139) Quit (Quit: Leaving)
[4:02] * xiaoxi (~xiaoxiche@134.134.137.75) has joined #ceph
[4:05] * timmclaughlin (~timmclaug@173-25-192-164.client.mchsi.com) Quit (Remote host closed the connection)
[4:05] * xiaoxi (~xiaoxiche@134.134.137.75) Quit ()
[4:06] * xiaoxi (~xiaoxiche@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[4:19] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[4:24] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has left #ceph
[4:53] * renzhi (~renzhi@116.226.37.139) has joined #ceph
[5:14] * s_parlane (~scott@202.49.72.37) Quit (Ping timeout: 480 seconds)
[5:19] <via> i've been having ceph-mds crash after a fairly short time under fairly light load, running argonaut 0.48.2
[5:19] <via> i have stacktraces and symbol dumps
[5:20] <via> but i figured i'd check to see if this is a known issue
[5:46] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[6:10] * gaveen (~gaveen@112.134.113.129) has joined #ceph
[6:14] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:23] * s_parlane (~scott@121.75.150.140) has joined #ceph
[6:24] * yoshi (~yoshi@p11251-ipngn4301marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[6:43] * yoshi (~yoshi@p11251-ipngn4301marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[6:52] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[6:53] * loicd (~loic@magenta.dachary.org) has joined #ceph
[6:55] * maxiz (~pfliu@202.108.130.138) has joined #ceph
[7:02] * deepsa (~deepsa@122.172.23.135) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[7:05] * SkyEye (~gaveen@112.134.113.113) has joined #ceph
[7:11] * gaveen (~gaveen@112.134.113.129) Quit (Ping timeout: 480 seconds)
[7:34] * s_parlane (~scott@121.75.150.140) Quit (Ping timeout: 480 seconds)
[7:51] * tnt (~tnt@48.29-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[7:59] * deepsa (~deepsa@122.172.23.135) has joined #ceph
[8:12] * maxiz (~pfliu@202.108.130.138) Quit (Ping timeout: 480 seconds)
[8:15] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:15] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:21] * xiaoxi (~xiaoxiche@jfdmzpr06-ext.jf.intel.com) Quit (Remote host closed the connection)
[8:22] * xiaoxi (~xiaoxiche@134.134.137.75) has joined #ceph
[8:31] * maxiz (~pfliu@202.108.130.138) has joined #ceph
[8:35] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:36] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:40] * maxiz (~pfliu@202.108.130.138) Quit (Quit: Ex-Chat)
[8:52] * The_Bishop (~bishop@2001:470:50b6:0:d863:2ddf:b91f:ba88) Quit (Ping timeout: 480 seconds)
[8:53] * brambles (~xymox@shellspk.ftp.sh) Quit (Read error: Connection reset by peer)
[8:53] * brambles_ (~xymox@shellspk.ftp.sh) has joined #ceph
[9:00] * The_Bishop (~bishop@2001:470:50b6:0:9f4:9719:5121:9e0a) has joined #ceph
[9:02] * nosebleedkt (~kostas@kotama.dataways.gr) has joined #ceph
[9:16] * benner (~benner@193.200.124.63) has joined #ceph
[9:20] * benner_ (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[9:21] * tnt (~tnt@48.29-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:21] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[9:34] <SIN> Hello
[9:35] <SIN> can anyone tell me what this means: mon.1 [INF] pgmap v25624: 576 pgs: 576 active+clean; 643 MB data, 2174 MB used, 4520 GB / 4522 GB avail
[9:36] <SIN> why is the "used" value so big, and is the data value wrong?
[9:39] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[9:42] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:45] * xiaoxi (~xiaoxiche@134.134.137.75) Quit (Ping timeout: 480 seconds)
[9:47] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:51] * renzhi (~renzhi@116.226.37.139) Quit (Quit: Leaving)
[10:04] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) Quit (Quit: Leaving)
[10:17] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[10:24] * The_Bishop (~bishop@2001:470:50b6:0:9f4:9719:5121:9e0a) Quit (Ping timeout: 480 seconds)
[10:24] * loicd1 (~loic@magenta.dachary.org) has joined #ceph
[10:24] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[10:29] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[10:30] <phantomcircuit> SIN, used includes the journal
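(Rough arithmetic, assuming the then-default 2x replication: 643 MB of data is already about 1.3 GB on disk once replicated, and preallocated osd journals plus filesystem overhead can plausibly account for the rest of the 2174 MB reported as used.)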
[10:32] * The_Bishop (~bishop@2001:470:50b6:0:d863:2ddf:b91f:ba88) has joined #ceph
[10:35] * brambles_ (~xymox@shellspk.ftp.sh) Quit (Read error: Connection reset by peer)
[10:38] * ajoian (53a6c968@ircip2.mibbit.com) has joined #ceph
[10:42] <ajoian> hello, I have a strange error with 0.54, as I'm unable to delete mons: ceph mon delete node0 - unknown command delete
[10:46] <SIN> http://ceph.com/docs/master/rados/operations/add-or-rm-mons/?highlight=mon
[10:48] <SIN> remove section
[10:49] <ajoian> yep I saw that thanks
[10:50] <ajoian> strangely enough, if you issue the command ceph -h there is no mention of mon remove, only mon stat, add, delete
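(For reference, a minimal sketch of the removal procedure from the add-or-rm-mons page linked above, using ajoian's monitor name:)

    service ceph -a stop mon.node0   # stop the monitor daemon first
    ceph mon remove node0            # remove it from the monmap
    # then delete the [mon.node0] section from ceph.conf on all hosts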
[11:13] * alexxy[home] (~alexxy@2001:470:1f14:106::2) has joined #ceph
[11:13] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Read error: Connection reset by peer)
[11:14] <SIN> can anyone tell me why I get a "D" state "bash" process when using bash-completion to look up a dir tree on ceph storage. Even "mc" drops to "D" state when I copy from ceph storage
[11:14] <SIN> And all that began when I added a new mds to the cluster
[11:15] <SIN> What is wrong with that action?
[11:19] <SIN> Any idea?
[11:29] * SIN (~SIN@78.107.155.77) Quit (Read error: Connection reset by peer)
[11:37] * ajoian (53a6c968@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[11:39] * loicd1 (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[11:57] * loicd (~loic@90.84.146.214) has joined #ceph
[12:16] * loicd (~loic@90.84.146.214) Quit (Ping timeout: 480 seconds)
[12:32] * loicd (~loic@90.84.146.214) has joined #ceph
[12:52] * The_Bishop (~bishop@2001:470:50b6:0:d863:2ddf:b91f:ba88) Quit (Ping timeout: 480 seconds)
[12:53] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[12:54] * gucki (~smuxi@80-218-32-183.dclient.hispeed.ch) has joined #ceph
[12:54] <gucki> good morning
[13:00] * The_Bishop (~bishop@2001:470:50b6:0:9f4:9719:5121:9e0a) has joined #ceph
[13:01] <gucki> when trying to run a windows guest on a rbd volume kvm crashes with "floating point exception". when i export the rbd volume to a file and run kvm against this file all works well. so there must be some bug in qemu-rbd..? :(
[13:02] <ctrl> hi
[13:02] <ctrl> do you have any vm on rbd?
[13:03] <gucki> ctrl: hey. it's a test server. there's nothing else running
[13:03] <ctrl> ok, which os?
[13:05] <gucki> ctrl: host is ubuntu 12.10, guest is windows 2008
[13:06] <gucki> ctrl: here is how i start kvm: http://pastie.org/5422820
[13:07] <ctrl> gucki: is this a first run? or run after install ?
[13:07] <gucki> ctrl: first run after install (so when install said, reboot now). it's 100% reproducible... windows crashes when booting after a few seconds.
[13:08] <gucki> ctrl: when starting kvm using the file it works fine, windows boots up and continues setting up its environment..
[13:11] <ctrl> gucki: can u show the full log of the kvm crash?
[13:12] <gucki> ctrl: how can i get it? it just displays "floating point exception" in the shell and exits. i'll now try to run it using gdb...
[13:13] <ctrl> gucki: or full output of kvm
[13:13] <gucki> ctrl: which output of kvm?
[13:14] <gucki> ctrl: here's the backtrace http://pastie.org/5422842
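(For reference, a backtrace like that can be captured by running kvm under gdb; gucki's original kvm arguments stand in as a placeholder here:)

    gdb --args kvm <original kvm arguments>
    (gdb) handle SIGPIPE nostop noprint pass   # qemu uses SIGPIPE internally
    (gdb) run
    (gdb) bt    # after the crash, print the backtrace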
[13:14] <gucki> ctrl: now it's clear that it's in librbd :)
[13:14] <ctrl> gucki: forget about it, try starting kvm with fewer parameters
[13:15] <ctrl> gucki: i think it's a kvm error, not rbd :)
[13:15] <gucki> ctrl: sure? why does it work without ceph then?
[13:15] <gucki> ctrl: i mean when using a raw file instead of a rbd image?
[13:15] <gucki> ctrl: and the backtrace now shows the exception occurs in 0x00007ffff74a6e2e in librbd::AioCompletion::complete() () from /usr/lib/librbd.so.1
[13:16] <ctrl> gucki: wait, i will see )
[13:16] <gucki> ctrl: btw i'm using latest argonaut stable (0.48.2, the one from the ubuntu repos)
[13:19] <ctrl> gucki: yeah, i saw
[13:20] <ctrl> gucki: i don't have any ideas (
[13:20] * loicd (~loic@90.84.146.214) Quit (Ping timeout: 480 seconds)
[13:22] <gucki> ctrl: ok, i think it's a bug in the caching layer of rbd. when using cache=none instead of cache=writeback it works. i'll file a bug report
[13:25] <ctrl> gucki: thanks for information! )
[13:26] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[13:29] <gucki> ctrl: welcome :). here's the bug report: http://tracker.newdream.net/issues/3521
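(The difference between the failing and working invocations comes down to one -drive flag; a sketch with a hypothetical pool/image name:)

    # crashed for gucki on 0.48.2 (enables the librbd writeback cache):
    kvm -drive format=raw,file=rbd:rbd/win2008,if=virtio,cache=writeback ...
    # workaround (cache disabled):
    kvm -drive format=raw,file=rbd:rbd/win2008,if=virtio,cache=none ...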
[13:29] * TheBigB (~TheBigB@145.33.225.243) has joined #ceph
[13:34] * TheBigB (~TheBigB@145.33.225.243) Quit ()
[13:41] * loicd (~loic@magenta.dachary.org) has joined #ceph
[13:43] * deepsa (~deepsa@122.172.23.135) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[13:48] * SIN (~Dead@78.107.155.77) has joined #ceph
[13:48] <SIN> Hello!
[13:49] <SIN> Does anyone know what these params mean: http://ceph.com/docs/master/cephfs/mds-config-ref/ ?
[13:50] <SIN> And how can I make one of my mds force standby and "out"
[13:50] <SIN> ?
[13:51] <SIN> Please some one
[13:51] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[13:52] * loicd (~loic@2a01:e35:2eba:db10:ecfc:5795:a1de:9b71) has joined #ceph
[13:54] <SIN> Is anyone able to answer me?
[13:56] <SIN> anyone?
[14:15] * xiaoxi (~xiaoxiche@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[14:26] * SkyEye (~gaveen@112.134.113.113) Quit (Quit: Leaving)
[14:27] * maxiz (~pfliu@221.223.237.201) has joined #ceph
[14:31] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[14:32] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[14:33] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has joined #ceph
[14:37] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[14:59] * weber (~he@219.85.196.82) has joined #ceph
[15:05] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[15:28] * gaveen (~gaveen@112.134.113.113) has joined #ceph
[15:31] * CristianDM (~CristianD@186.153.254.23) has joined #ceph
[15:31] <CristianDM> Hi.
[15:31] <plut0> hi
[15:32] <CristianDM> Can I use qemu without mounting rbd as a disk, and instead load the rbd module inside the VM and map the rbd device?
[15:32] <CristianDM> So I don't depend on qemu
[15:32] <andreask> yes
[15:32] <CristianDM> And don't depend on the ceph version, for example inside proxmox
[15:32] * Dr_O (~owen@heppc049.ph.qmul.ac.uk) has joined #ceph
[15:33] <plut0> i don't see why not
[15:33] * Dr_O_ (~owen@heppc049.ph.qmul.ac.uk) has joined #ceph
[15:33] <CristianDM> I depend on the ceph version inside proxmox
[15:33] <CristianDM> And the performance is the same?
[15:33] <andreask> 42
[15:34] <CristianDM> Is it possible to map a rbd with writeback?
[15:34] <andreask> ;-)
[15:34] <plut0> i would think presenting the raw disk to the VM through the hypervisor would have better performance
[15:35] <andreask> I'd expect the same, yes ... using rbd via librados should be faster
[15:36] <CristianDM> I never used librados; all the mounts have been via qemu inside proxmox
[15:37] <andreask> qemu-rbd uses librados
[15:37] <CristianDM> ahh thanks
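(A minimal sketch of the kernel-client approach CristianDM is asking about, with hypothetical image and mount point names; note the guest then needs direct network access to the monitors and OSDs:)

    modprobe rbd                  # load the kernel rbd module inside the VM
    rbd map myimage --pool rbd    # exposes the image as a /dev/rbd* block device
    mkfs.xfs /dev/rbd0            # first use only
    mount /dev/rbd0 /mnt/myimage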
[15:38] <CristianDM> Today I will change WD Black disks for Samsung 830 SSDs
[15:38] <plut0> you know the 840 is out now?
[15:38] <CristianDM> I have very bad performance with small files
[15:38] <CristianDM> I am from Argentina and don't have the 840 yet
[15:38] <plut0> ahh
[15:39] <CristianDM> Initially I will set up 3 nodes with 3 SSDs for OS / journaling
[15:39] <CristianDM> And I will test another SSD in one node to check the performance
[15:39] <CristianDM> If all is fine, I will remove 100% of the WD Black disks and go all SSD
[15:40] <CristianDM> I don't know if this will really give good performance on small files
[15:40] * Dr_O_ (~owen@heppc049.ph.qmul.ac.uk) Quit (Quit: Ex-Chat)
[15:40] <CristianDM> I use NFS to export RBD volumes
[15:40] <CristianDM> But I use web servers and they have a lot of small files
[15:41] <andreask> CristianDM: and writeback mounting is possible
[15:41] <CristianDM> andreask: Thanks.
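(For the librbd/qemu path, writeback caching is a client-side setting; a sketch, assuming argonaut-era option names, combined with cache=writeback on the qemu -drive line:)

    ; ceph.conf on the client
    [client]
        rbd cache = true

(A kernel-mapped /dev/rbd device goes through the normal block layer instead, so ordinary page-cache writeback applies to whatever filesystem sits on top.)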
[15:42] <plut0> wish i had my lab environment so i can start playing with ceph
[15:43] * guigouz1 (~guigouz@201-87-100-166.static-corp.ajato.com.br) has joined #ceph
[15:45] <CristianDM> Another question: is btrfs stable? Currently I use XFS but I need more performance
[15:45] <plut0> i wouldn't say its stable
[15:45] <plut0> i believe ext4 is faster than xfs and btrfs
[15:46] <andreask> ah, latest xfs is really fast
[15:46] <plut0> did you tune xfs properly?
[15:46] <CristianDM> latest? I use xfs from Ubuntu 12.04
[15:47] <andreask> should be fine, yes
[15:47] <CristianDM> plut0: What needs tuning? I formatted without any special options
[15:47] <andreask> you should at least use bigger inode-size
[15:48] <CristianDM> For ceph? Remember I store a lot of small files
[15:48] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[15:48] <plut0> attr=2, log version=2, lazy-count=1
[15:48] <plut0> make sure you don't have any extents
[15:49] <plut0> or your inode size is too small like andreask said
[15:49] <CristianDM> How can I see the current options?
[15:49] <plut0> xfs_info
[15:50] <CristianDM> log internal version 2
[15:50] <CristianDM> isize=256
[15:50] <CristianDM> I have lazy-count=1
[15:50] <plut0> attr?
[15:51] <CristianDM> yes attr=2
[15:51] <plut0> thats good
[15:51] <andreask> these are all defaults on modern xfs
[15:51] <CristianDM> And the block size?
[15:51] <plut0> pick a random file and run xfs_bmap -v /path/to/file
[15:51] <plut0> that will show if you have extents
[15:52] * nosebleedkt (~kostas@kotama.dataways.gr) Quit (Quit: Leaving)
[15:52] <CristianDM> no extents
[15:52] <CristianDM> what are extents?
[15:53] <plut0> andreask: thats good its the defaults now, i used to have to format that correctly by hand
[15:53] <CristianDM> andreask: I need extents on?
[15:53] <andreask> ?
[15:53] <CristianDM> What is extent?
[15:54] <andreask> xfs is an extents based filesystem ... no fixed block size
[15:54] <plut0> extents are contiguous areas of a file; being fragmented into many extents causes a performance hit
[15:55] <CristianDM> whoops, I don't have extents on
[15:55] <CristianDM> How can I enable this?
[15:55] <plut0> you don't want extents
[15:55] <CristianDM> Ahh sorry, I have bad english
[15:55] <CristianDM> :P
[15:56] <plut0> whats your fragmentation look like?
[15:56] <CristianDM> So all is fine. But I get very bad performance: 0.20MB/s in a 4096-byte rados bench
[15:56] <CristianDM> Is it possible that the issue is the WD Blacks?
[15:57] <plut0> xfs_db -c frag -r /dev/devicename
[15:57] <CristianDM> I need to unmount the device?
[15:57] <plut0> no
[15:57] * yanzheng (~zhyan@134.134.139.72) has joined #ceph
[15:58] <CristianDM> actual 44992, ideal 35797, fragmentation factor 20.44%
[15:58] <plut0> not terrible
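(The factor xfs_db reports is (actual - ideal) / actual: (44992 - 35797) / 44992 ≈ 20.44%, matching the output above.)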
[15:59] <CristianDM> Is it possible that the bad performance is due to the WD Black disks?
[15:59] <plut0> you can check with iostat
[15:59] <plut0> see if util is 100%
[15:59] <plut0> hdd's have very low iops
[16:01] <CristianDM> 50% iowait
[16:02] <plut0> whats idle?
[16:02] <plut0> er util
[16:02] <andreask> if, like with ceph, extended attributes are used to store all the metadata for an object, the default inode size is too small to hold all the extended attributes for an object within one inode .. might be worth testing with 1k or 2k inode size
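(Inode size is fixed at mkfs time, which is why a reformat is needed; a sketch of andreask's suggestion, device name hypothetical:)

    mkfs.xfs -i size=2048 /dev/sdb1   # 2k inodes leave room for ceph's xattrs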
[16:02] <plut0> iostat -x
[16:02] <CristianDM> 1%
[16:03] <andreask> nicer .. iostat -dkx
[16:03] <CristianDM> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
[16:03] <CristianDM> sda 0.00 53.00 0.00 245.00 0.00 3221.00 26.29 71.33 310.24 0.00 310.24 4.08 100.00
[16:03] <CristianDM> sdb 0.00 0.00 0.00 27.00 0.00 108.00 8.00 0.01 0.44 0.00 0.44 0.44 1.20
[16:03] <plut0> yeah i'd check some more files for extents
[16:04] <CristianDM> andreask: is it possible to change the inode size without reformatting?
[16:04] <andreask> no
[16:04] <plut0> find /path/to/xfs -type f -exec xfs_bmap -v {} \;
[16:05] <CristianDM> plut0: what is this command?
[16:05] <plut0> checks for extents
[16:06] <andreask> CristianDM: where is your journal?
[16:07] <andreask> osd journal I mean
[16:07] <CristianDM> On the same WD Black disk as the OSD
[16:07] <andreask> baaaaad idea
[16:07] <CristianDM> sda
[16:07] <CristianDM> Yes, for exactly this reason i will put the journal on ssd today
[16:08] <andreask> what is sdb here?
[16:08] <CristianDM> osd
[16:08] <CristianDM> sda journal osd.0
[16:08] <CristianDM> sdb osd.1
[16:08] <CristianDM> 3 nodes with the same setup
[16:09] <andreask> if you're only testing and have some ram, put the journal on tmpfs
[16:09] <CristianDM> I will check
[16:10] <andreask> .. but if you already have an extra ssd as a journal device ...
[16:11] <CristianDM> I will put OS and journal on one SSD per node
[16:11] <CristianDM> And on one node I will test one OSD on an SSD
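(A sketch of both journal placements discussed above; paths and sizes are illustrative:)

    # testing only, per andreask -- a tmpfs journal is lost on reboot:
    mount -t tmpfs -o size=1G tmpfs /srv/journal

    ; ceph.conf -- point the osd at the tmpfs file, or at an SSD partition for real use
    [osd.1]
        osd journal = /srv/journal/osd.1.journal
        osd journal size = 512    ; MB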
[16:29] <Leseb> hi guys, inside my client logs I see a lot of "libceph: osd4 172.20.11.32:6801 socket closed" can I consider this as an error or not?
[16:30] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) has joined #ceph
[16:30] * ChanServ sets mode +o scuttlemonkey
[16:30] <Leseb> I don't see any good reason to close the socket since the rbd device is still mapped
[16:30] <Leseb> any idea? thanks in advance :)
[16:31] * pentabular (~sean@adsl-70-231-131-112.dsl.snfc21.sbcglobal.net) has joined #ceph
[16:34] * gaveen (~gaveen@112.134.113.113) Quit (Remote host closed the connection)
[16:34] <andreask> Leseb: http://tracker.newdream.net/issues/2573
[16:35] * andreask needs to run ... nice weekend!
[16:35] <Leseb> andreask: thanks! nice weekend!
[16:36] <Leseb> hum tracker down?
[16:43] <joao> Leseb, certainly looks that way
[16:43] <Leseb> hopefully, there is the google cache
[16:43] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[16:44] * sukiyaki (~Tecca@114.91.101.85) Quit (Ping timeout: 480 seconds)
[16:44] <joao> google cache is no good to update issues :p
[16:44] <Leseb> but it helps to see the first content :)
[16:56] * sukiyaki (~Tecca@114.91.103.114) has joined #ceph
[17:10] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:14] * weber (~he@219.85.196.82) Quit (Remote host closed the connection)
[17:18] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[17:19] * tnt (~tnt@48.29-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[17:19] * vata (~vata@208.88.110.46) has joined #ceph
[17:24] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[17:27] * SIN (~Dead@78.107.155.77) Quit (Remote host closed the connection)
[17:31] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[17:31] * xiaoxi (~xiaoxiche@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[17:31] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[17:32] * guigouz1 (~guigouz@201-87-100-166.static-corp.ajato.com.br) Quit (Ping timeout: 480 seconds)
[17:39] * fmarchand (~fmarchand@212.51.173.12) Quit (Quit: Leaving)
[17:43] * loicd (~loic@2a01:e35:2eba:db10:ecfc:5795:a1de:9b71) Quit (Quit: Leaving.)
[17:47] * maxiz (~pfliu@221.223.237.201) Quit (Ping timeout: 480 seconds)
[17:51] * pentabular (~sean@adsl-70-231-131-112.dsl.snfc21.sbcglobal.net) has left #ceph
[18:05] * Dr_O (~owen@heppc049.ph.qmul.ac.uk) Quit (Quit: Ex-Chat)
[18:22] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[18:23] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[18:29] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:32] * yanzheng (~zhyan@134.134.139.72) Quit (Remote host closed the connection)
[18:41] * yanzheng (~zhyan@134.134.139.76) has joined #ceph
[18:44] * wer (~wer@wer.youfarted.net) Quit (Quit: Leaving)
[18:45] * wer (~wer@wer.youfarted.net) has joined #ceph
[18:51] * gaveen (~gaveen@112.134.112.149) has joined #ceph
[19:01] * noob2 (a5a00214@ircip1.mibbit.com) has joined #ceph
[19:27] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[19:30] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[19:37] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Quit: Leaving.)
[19:40] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[19:41] * gaveen (~gaveen@112.134.112.149) Quit (Remote host closed the connection)
[19:49] * noob2 (a5a00214@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[19:56] * BManojlovic (~steki@212.69.24.38) has joined #ceph
[20:01] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:16] * stxShadow (~Jens@ip-178-203-169-190.unitymediagroup.de) has joined #ceph
[20:22] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:23] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[20:25] * yanzheng (~zhyan@134.134.139.76) Quit (Remote host closed the connection)
[20:58] * The_Bishop (~bishop@2001:470:50b6:0:9f4:9719:5121:9e0a) Quit (Ping timeout: 480 seconds)
[21:05] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[21:07] * The_Bishop (~bishop@2001:470:50b6:0:c853:26b5:8bac:d3b3) has joined #ceph
[21:13] * sukiyaki (~Tecca@114.91.103.114) Quit (Ping timeout: 480 seconds)
[21:23] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[21:23] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[21:46] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[21:47] * gucki (~smuxi@80-218-32-183.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:52] * noob2 (a5a00214@ircip3.mibbit.com) has joined #ceph
[21:59] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit (Quit: Leaving.)
[22:07] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[22:08] * stxShadow (~Jens@ip-178-203-169-190.unitymediagroup.de) Quit (Read error: Connection reset by peer)
[22:10] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:12] * loicd (~loic@magenta.dachary.org) Quit ()
[22:36] * Qten (Q@qten.qnet.net.au) Quit (Remote host closed the connection)
[22:40] * alexxy[home] (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[22:40] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[22:41] * Qten (~qgrasso@qten.qnet.net.au) has joined #ceph
[22:45] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:48] * noob2 (a5a00214@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[22:53] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:25] * CristianDM (~CristianD@186.153.254.23) Quit (Ping timeout: 480 seconds)
[23:29] * vata (~vata@208.88.110.46) has joined #ceph
[23:37] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[23:47] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:47] * loicd (~loic@2a01:e35:2eba:db10:ecfc:5795:a1de:9b71) has joined #ceph
[23:52] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:56] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) Quit (Remote host closed the connection)
[23:58] * CristianDM (~CristianD@186.153.251.60) has joined #ceph
[23:59] <CristianDM> Hi. Does Ceph 0.54 work fine on Ubuntu 12.10?

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.