#ceph IRC Log

IRC Log for 2012-12-10

Timestamps are in GMT/BST.

[0:09] * ebo^ (~ebo@233.195.116.85.in-addr.arpa.manitu.net) Quit (Ping timeout: 480 seconds)
[0:34] * Steki (~steki@242-174-222-85.adsl.verat.net) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:52] * maxiz (~pfliu@114.245.251.113) has joined #ceph
[0:58] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[1:03] * SkyEye (~gaveen@112.134.113.212) has joined #ceph
[1:06] * SkyEye (~gaveen@112.134.113.212) Quit ()
[1:14] * gaveen (~gaveen@112.134.113.212) has joined #ceph
[1:14] * mib_5dd5vq (52e7d4bf@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[1:19] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has left #ceph
[1:23] * LeaChim (~LeaChim@b0fafb7d.bb.sky.com) Quit (Remote host closed the connection)
[1:26] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:26] * loicd (~loic@magenta.dachary.org) has joined #ceph
[1:31] * maxiz (~pfliu@114.245.251.113) Quit (Quit: Ex-Chat)
[1:31] <via> has anyone had issues with rsync'ing to cephfs? things like rsync re-copying files it already has, even if prior to the re-copy they were verified equal with checksums
[1:35] <via> i figured there might be something about ceph's directory size magic causing it, maybe rsync thinks things have changed
[1:40] * gaveen (~gaveen@112.134.113.212) Quit (Quit: leaving)
[1:48] * gaveen (~gaveen@112.134.113.5) has joined #ceph
[2:07] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[2:07] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[2:07] * Leseb_ is now known as Leseb
[2:13] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[2:13] * loicd (~loic@magenta.dachary.org) has joined #ceph
[2:13] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (Ping timeout: 480 seconds)
[2:14] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[2:17] * joao (~JL@89.181.145.42) has joined #ceph
[2:17] * ChanServ sets mode +o joao
[2:25] * joao (~JL@89.181.145.42) Quit (Ping timeout: 480 seconds)
[2:36] * Marshie (~Admin@124-169-53-226.dyn.iinet.net.au) has joined #ceph
[2:54] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:58] <via> i just realized my root directory is getting littered with core dumps
[3:11] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:20] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:22] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[3:25] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:29] * Marshie (~Admin@124-169-53-226.dyn.iinet.net.au) has left #ceph
[3:32] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[3:48] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[4:16] * plut0 (~cory@pool-96-236-43-69.albyny.fios.verizon.net) has left #ceph
[4:35] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[4:59] * gregaf (~Adium@2607:f298:a:607:4c22:6146:25c9:b624) Quit (Read error: Connection reset by peer)
[4:59] * gregaf (~Adium@2607:f298:a:607:a9c1:25b7:a54e:1cfe) has joined #ceph
[5:13] * SkyEye (~gaveen@112.134.113.5) has joined #ceph
[5:17] * SkyEye (~gaveen@112.134.113.5) Quit ()
[5:33] * gaveen (~gaveen@112.134.113.5) Quit (Quit: leaving)
[5:35] * gaveen (~gaveen@112.134.113.5) has joined #ceph
[5:55] * BillK (~billk@124.169.79.36) Quit (Read error: Connection reset by peer)
[5:59] <tore_> anything new today?
[5:59] <tore_> you guys seem pretty quiet
[6:08] * BillK (~billk@124-169-198-193.dyn.iinet.net.au) has joined #ceph
[6:20] <iggy> it's sunday... it's usually quiet in here on weekends
[7:16] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:17] * loicd (~loic@magenta.dachary.org) has joined #ceph
[7:25] * gaveen (~gaveen@112.134.113.5) Quit (Quit: leaving)
[7:30] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[7:58] <tore_> lol i always forget about the time difference with Tokyo
[7:58] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:11] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:18] * jlogan1 (~Thunderbi@2600:c00:3010:1:14a3:ca45:3136:669a) has joined #ceph
[8:23] * nosebleedkt (~kostas@213.140.128.74) has joined #ceph
[8:34] * andret (~andre@pcandre.nine.ch) has joined #ceph
[8:38] <nosebleedkt> hi
[8:38] <nosebleedkt> #ceph -w
[8:38] <nosebleedkt> shows
[8:38] <nosebleedkt> 113/254 degraded (44.488%)
[8:38] <nosebleedkt> what are those numbers?
[8:38] <nosebleedkt> 113/254
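[note: the question above goes unanswered in the log. A hedged reading, not confirmed by anyone in the channel: in "ceph -w" output of this era the degraded line counts degraded object replicas out of the total replicas the cluster expects, and the percentage is just their ratio, i.e. 113 / 254 = 0.44488... = 44.488%.]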
[8:41] * ebo^ (~ebo@233.195.116.85.in-addr.arpa.manitu.net) has joined #ceph
[8:53] * gaveen (~gaveen@112.134.113.5) has joined #ceph
[8:53] * SkyEye (~gaveen@112.134.113.5) has joined #ceph
[8:56] * gaveen (~gaveen@112.134.113.5) Quit ()
[9:07] <nosebleedkt> where can i find a link with osd info? Like what it means when an osd is up and in?
[9:13] * ebo^ (~ebo@233.195.116.85.in-addr.arpa.manitu.net) Quit (Ping timeout: 480 seconds)
[9:21] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[9:31] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:35] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:35] * loicd (~loic@LPuteaux-156-16-100-112.w80-12.abo.wanadoo.fr) has joined #ceph
[9:36] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:44] * gucki (~smuxi@HSI-KBW-082-212-034-021.hsi.kabelbw.de) has joined #ceph
[9:44] <gucki> good morning
[9:46] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[9:47] * ebo^ (~ebo@icg1104.icg.kfa-juelich.de) has joined #ceph
[9:57] * low (~low@188.165.111.2) has joined #ceph
[10:04] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:07] <ebo^> i tried installing ceph on debian squeeze (with a custom kernel). ceph-osd crashes in tcmalloc during mkcephfs
[10:07] <ebo^> any ideas?
[10:08] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[10:12] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[10:16] * LeaChim (~LeaChim@b0fafb7d.bb.sky.com) has joined #ceph
[10:25] * jlogan1 (~Thunderbi@2600:c00:3010:1:14a3:ca45:3136:669a) Quit (Ping timeout: 480 seconds)
[10:53] * jtangwk (~Adium@2001:770:10:500:78ec:1d17:6e18:4157) has joined #ceph
[10:57] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[10:59] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) has joined #ceph
[11:00] <Kioob`Taff1> Hi
[11:00] * Kioob`Taff1 is now known as Kiooby
[11:02] * Kiooby (~plug-oliv@local.plusdinfo.com) Quit ()
[11:03] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[11:04] <Kioob`Taff> in case of that kind of error on an OSD «monclient(hunting): failed to open keyring: (2) No such file or directory», how can I verify which file(s) it's trying to open ?
[11:04] <Kioob`Taff> I have that error while doing "/etc/init.d/ceph start osd"
[11:06] <Kioob`Taff> here is how I tried to find it:
[11:06] <Kioob`Taff> strace /usr/bin/ceph-osd -i 17 --pid-file /var/run/ceph/osd.17.pid -c /etc/ceph/ceph.conf
[11:07] <Kioob`Taff> nop..
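[note: one possible way to answer the question above, sketched here rather than taken from the log — restrict strace to file-related syscalls and follow child processes so the keyring paths the daemon probes become visible; the exact flags this ceph version accepts are an assumption:
    strace -f -e trace=file /usr/bin/ceph-osd -i 17 -c /etc/ceph/ceph.conf 2>&1 | grep keyring
Raising auth logging (e.g. adding --debug-auth 20) should also make the daemon log which keyring file it looked for.]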
[11:10] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[11:10] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[11:28] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[11:36] * guigouz (~guigouz@177.33.216.27) has joined #ceph
[11:41] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[11:48] * SkyEye (~gaveen@112.134.113.5) Quit (Ping timeout: 480 seconds)
[11:49] * guigouz (~guigouz@177.33.216.27) Quit (Ping timeout: 480 seconds)
[11:52] <jtang> good to see some osx stuff coming down the tube
[11:52] <jtang> i didnt think that libaio was available on osx
[11:56] * SkyEye (~gaveen@112.134.113.232) has joined #ceph
[11:57] * SkyEye (~gaveen@112.134.113.232) Quit ()
[11:58] <Kioob`Taff> cephx is going to drive me crazy...
[11:59] <Kioob`Taff> after a *fresh* mkcephfs, I have a file called «adminkeyring». But, if I launch «ceph osd lspools» with strace, I can see that the file «adminkeyring» is not used at all
[12:00] <Kioob`Taff> I tried to rename it to «ceph.client.admin.keyring» (one of the files looked up by «ceph osd lspools»), but I still have the same error :
[12:00] <Kioob`Taff> 2012-12-10 11:58:01.864455 7f1f39672760 -1 unable to authenticate as client.admin
[12:10] <Kioob`Taff> ok... I had to convert my «auth supported cephx» line into 3 lines with «auth (cluster|service|client) required = cephx»...
[12:11] <lerrie2> ah, you're using v0.55?
[12:13] <Kioob`Taff> yes
[12:14] <Kioob`Taff> the stable version was very slow... so I'm trying the new one
[12:16] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[12:20] <lxo> ceph-fuse crashes in the destructor of a non-empty xlist of a snap_realm upon mds rejoin. no quick reproducer known by me, but I get it at nearly every mds switch on one of my clients. known problem?
[12:22] <lerrie2> Kioob`Taff, I think this has changed somewhere between 0.48 and 0.55. (http://ceph.com/releases/v0-55-released/#more-1981)
[12:24] <Kioob`Taff> yes of course. cephx was already enabled, so I didn't look at that kind of difference
[12:25] <Kioob`Taff> for the «bobtail» release, I suppose there will be a «migration» document which summarizes that
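[note: for anyone hitting the same failure, the conversion Kioob`Taff describes would look roughly like this in ceph.conf — a sketch based on the v0.55 release notes linked above, not a copy of his actual file:
    [global]
        ; pre-0.55 style:
        ; auth supported = cephx
        ; 0.55 style:
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx
]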
[12:28] <Kioob`Taff> so
[12:28] <Kioob`Taff> now I have :
[12:28] <Kioob`Taff> # rbd map blacksad-hdd --pool hdd3copies --id tolriq
[12:28] <Kioob`Taff> rbd: add failed: (5) Input/output error
[12:28] <Kioob`Taff> is it an auth problem, once again ?
[12:30] <Kioob`Taff> (it's a problem while writing on the socket of the MON)
[12:42] <lxo> ceph-fuse also recently started denying root access to files that shouldn't be readable except for root superpowers. not sure whether that started with 0.55 or a fuse update though
[12:47] * match (~mrichar1@pcw3047.see.ed.ac.uk) has joined #ceph
[13:01] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[13:04] * mark (mark@tilia.nedworks.org) has joined #ceph
[13:13] <ebo^> ls
[13:13] <ebo^> :-)
[13:22] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[13:24] * loicd (~loic@LPuteaux-156-16-100-112.w80-12.abo.wanadoo.fr) Quit (Quit: Leaving.)
[13:24] <Kioob`Taff> how can I know which privileges are required for an RBD client?
[13:25] <Kioob`Taff> the RBD is rw, so I have rwx on OSD, but what about on MON?
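[note: the question is not answered directly in the log. A hedged sketch of the capabilities an RBD client of this era typically needs — read access on the monitors plus rwx on the OSDs, optionally limited to the pool; the client and pool names are the ones from the log, and the cap syntax varies between versions:
    ceph auth get-or-create client.tolriq mon 'allow r' osd 'allow rwx pool=hdd3copies'
]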
[13:26] <ebo^> builds for debian squeeze are not tested, are they?
[13:29] <Kioob`Taff> not really tested ebo, see : http://ceph.com/docs/master/install/os-recommendations/#platforms
[13:30] <Kioob`Taff> only Ubuntu 12.04 and CentOS 6.3 are really tested
[13:34] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has left #ceph
[13:44] * calebmiles (~caleb@65-183-137-95-dhcp.burlingtontelecom.net) has joined #ceph
[13:51] <ebo^> seems like the squeeze packages may not work at all. there is a bug in google-perftools preventing ceph-osd and ceph-mds from starting
[13:52] <nosebleedkt> crazy bug?
[13:52] <nosebleedkt> root@cephfs:~# umount /dev/rbd1
[13:52] <nosebleedkt> root@cephfs:~# rbd unmap /dev/rbd/rbd/foobar1
[13:52] <nosebleedkt> root@cephfs:~# rbd rm foobar1
[13:52] <nosebleedkt> Removing image: 98% complete...2012-12-10 14:51:18.401585 b2059b70 0 -- 192.168.7.189:0/1002704 >> 192.168.7.196:6804/12055 pipe(0xa3eb008 sd=4 :0 pgs=0 cs=0 l=1).fault
[13:52] <nosebleedkt> 2012-12-10 14:51:53.926846 b2059b70 0 -- 192.168.7.189:0/1002704 >> 192.168.7.196:6807/12177 pipe(0xa3ed8e0 sd=4 :0 pgs=0 cs=0 l=1).fault
[13:53] <nosebleedkt> Why can't I remove the image 'foobar1'?
[13:53] <nosebleedkt> Also it crashes 2 OSDs :(
[13:53] * jtangwk (~Adium@2001:770:10:500:78ec:1d17:6e18:4157) Quit (Ping timeout: 480 seconds)
[13:54] <nosebleedkt> osd daemons
[13:54] * jtangwk (~Adium@2001:770:10:500:8176:da07:8c8c:d8fd) has joined #ceph
[14:19] <Kioob`Taff> with a «dd» I'm writing at a rate of 35MB/s on RBD (kernel module), with a ceph cluster running on 8 OSDs (on one single host for now). It seems very slow, no?
[14:22] <jamespage> Kioob`Taff, what speed network?
[14:27] <Kioob`Taff> between OSD, 10Gbps, but between the rbd and osd, only 1Gbps (during testing only)
[14:28] <Kioob`Taff> but 35MB/s is 280Mbps
[14:28] <Kioob`Taff> I will retry with 10Gbps
[14:29] <jamespage> Kioob`Taff, might be worth benchmarking your network as well using something like iperf (runs client server)
[14:29] <jamespage> just to make sure everything is works as designed :-)
[14:30] <jamespage> Kioob`Taff, *working
[14:30] <Kioob`Taff> [ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
[14:30] <Kioob`Taff> with iperf
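[note: for completeness, the two-ended iperf test jamespage suggested looks roughly like this — host names are placeholders:
    iperf -s              # on the OSD host
    iperf -c <osd-host>   # on the client
939 Mbit/s is essentially line rate for the 1 Gbps client link, so the raw network is not the bottleneck here.]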
[14:31] <jamespage> Kioob`Taff, hmm
[14:31] <jamespage> Kioob`Taff, do you see a difference between rbd (kernel) and other tooling?
[14:32] <ebo^> my osds always crash with: ** ERROR: osd init failed: (1) Operation not permitted
[14:32] <Kioob`Taff> dd directly over disks ?
[14:33] <jamespage> ebo^, is that with what's in unstable or testing?
[14:35] <ebo^> it's whatever there currently is at the top of the git repo
[14:35] <ebo^> i had to recompile
[14:36] <Kioob`Taff> jamespage: with that : for X in ceph* ; do dd if=/dev/zero of=$X/test-dd bs=1M count=10240 & done ; wait
[14:36] <Kioob`Taff> I have at least 95 MB/s *per OSD*
[14:36] <jamespage> Kioob`Taff, ack
[14:37] <Kioob`Taff> so maybe because of the journal ?
[14:37] <Kioob`Taff> (journal is on 4 SSD)
[14:38] <ebo^> cephx problem ... disabled
[14:38] <ebo^> yay HEALTH_OK finally
[14:39] <jamespage> ebo^: good!
[14:39] <ebo^> i have a question concerning objects
[14:40] <jamespage> Kioob`Taff, can you try writing objects rather than blocks; i.e. use the rados cli tool to generate some load
[14:40] <jamespage> it has a load-gen command
[14:40] <ebo^> if i save an object with say 1 gb, does the primary for the whole object land on the same osd?
[14:40] <Kioob`Taff> yes, I will look in the doc
[14:41] <ebo^> or does it get striped on several osds
[14:41] <ebo^> same for a file in a cephfs
[14:41] <Kioob`Taff> jamespage: rados bench ?
[14:41] <jamespage> Kioob`Taff, bench and load-gen
[14:45] <jamespage> ebo^, the answer is it depends on how the object got pushed into ceph
[14:45] <jamespage> ebo^, http://ceph.com/docs/master/architecture/ has some information on how striping works in ceph
[14:47] <jamespage> ebo^, rbd and cephfs should strip automatically
[14:47] <jamespage> ebo^, *stripe even
[14:47] <Kioob`Taff> jamespage: on local, without any network access, I have : Bandwidth (MB/sec): 89.661
[14:47] <ebo^> that's what i want to hear. thx :-)
[14:47] <Kioob`Taff> (Max bandwidth (MB/sec): 184 :: Min bandwidth (MB/sec): 0 )
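[note: a minimal sketch of the object-level tests being discussed — the pool name is illustrative:
    rados -p hdd3copies bench 30 write   # 30-second write benchmark
    rados -p hdd3copies load-gen         # synthetic load generator mentioned above
"rados bench" prints average, min and max bandwidth, which is where the 89.661 / 184 / 0 MB/s figures above come from.]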
[14:47] * guigouz (~guigouz@177.135.158.234) has joined #ceph
[14:55] * guigouz (~guigouz@177.135.158.234) Quit (Ping timeout: 480 seconds)
[14:56] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) has joined #ceph
[14:57] <jamespage> Kioob`Taff, OK; can you try 'ceph osd tell N bench' replacing N with the ID of each of your OSD's
[14:57] <jamespage> that might tell us a bit more
[14:57] * guigouz (~guigouz@177.135.158.234) has joined #ceph
[15:00] * loicd (~loic@magenta.dachary.org) has joined #ceph
[15:00] <Kioob`Taff> one at a time jamespage ?
[15:01] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:01] <jamespage> Kioob`Taff, I'd recommend that approach
[15:10] <Kioob`Taff> all my OSD are doing 63-65 MB/s with that ceph bench command
[15:11] <ebo^> i guess copious amounts of slow requests are normal if i run 2x osd, mds and mon on one hdd?
[15:12] <Kioob`Taff> but, if I start all the benches at the same time, I obtain only 22MB/s
[15:13] * guigouz (~guigouz@177.135.158.234) Quit (Ping timeout: 480 seconds)
[15:14] <Kioob`Taff> is there a way to disable the journal?
[15:14] <Kioob`Taff> or to bench only the journal?
[15:24] <Kioob`Taff> if I disable the writecache of the RAID controller, I obtain 32MB/s...
[15:24] <Kioob`Taff> great card...
[15:33] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[15:34] * drokita (~drokita@199.255.228.10) has joined #ceph
[15:39] <jamespage> Kioob`Taff, what does your disk topology look like? you are running all 8 OSDs on a single host right?
[15:40] <jamespage> what does (journal is on 4 SSD) mean? is it striped using server RAID of some sort? or are the journals spread over 4 SSDs?
[15:41] <Kioob`Taff> each SSD has 2 partitions of 20GB, and each partition is a journal for one OSD
[15:41] <Kioob`Taff> I don't use RAID at all
[15:42] <Kioob`Taff> but HDD and SSD are connected on the same (RAID) controller
[15:42] <Kioob`Taff> and yes, for now I only have one host for OSD
[15:43] <Kioob`Taff> If my tests are successful, I will have 6 hosts like that one
[15:47] <jamespage> Kioob`Taff, I would suspect the controller may be choking in some way; my deployment is more horizontally spread than that (i.e. each host only has ~4 OSDs)
[15:51] * joao (~JL@89-181-151-182.net.novis.pt) has joined #ceph
[15:51] * ChanServ sets mode +o joao
[15:52] <joao> hello #ceph
[15:55] <jamespage> Kioob`Taff, I can't believe it sucks that badly though
[15:56] <Kioob`Taff> I'm testing the SSD...
[15:57] <Kioob`Taff> maybe it's the problem
[15:57] <nosebleedkt> --OSD::tracker-- reqid: unknown.0.0:0, seq: 752, time: 2012-12-10 16:46:44.462744, event: done, request: pg_info(1 pgs e198:2.23) v3
[15:57] <nosebleedkt> is that unknown.0.0:0 thing
[15:57] <nosebleedkt> a bad thing?
[15:59] * hhoover (~hhoover@of2-nat1.sat6.rackspace.com) has joined #ceph
[15:59] * hhoover (~hhoover@of2-nat1.sat6.rackspace.com) has left #ceph
[16:00] <Kioob`Taff> jamespage: my SSDs are only writing at 52MB/s...
[16:01] <Kioob`Taff> so, with 2 partitions/journals per SSD, I can't really obtain more than 25MB/s per OSD
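[note: a common way to check what a journal SSD can sustain independently of ceph, sketched here — O_DIRECT + O_DSYNC writes roughly mimic the journal's write pattern; point it only at a scratch partition or file, since it overwrites the target:
    dd if=/dev/zero of=/dev/sdX1 bs=1M count=1024 oflag=direct,dsync
]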
[16:01] * ninkotech (~duplo@89.177.137.231) Quit (Quit: Konversation terminated!)
[16:03] <Kioob`Taff> they are all "old" INTEL SSD 80GB Postville
[16:03] <Kioob`Taff> which models do you recommend?
[16:13] * calebmiles (~caleb@65-183-137-95-dhcp.burlingtontelecom.net) Quit (Ping timeout: 480 seconds)
[16:14] * noob2 (~noob2@ext.cscinfo.com) has joined #ceph
[16:14] * calebmiles (~caleb@65-183-137-95-dhcp.burlingtontelecom.net) has joined #ceph
[16:15] * nosebleedkt (~kostas@213.140.128.74) Quit (Quit: Leaving)
[16:23] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[16:28] <Kioob`Taff> mmm
[16:29] <Kioob`Taff> because of the RAID controller, I can't send TRIM/DISCARD command
[16:29] <Kioob`Taff> maybe it's the problem
[16:39] * l0nk (~alex@173.231.115.58) has joined #ceph
[16:39] <l0nk> hello :)
[16:42] * Leseb (~Leseb@193.172.124.196) Quit (Read error: Connection reset by peer)
[16:43] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[16:43] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[16:45] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[16:46] * loicd (~loic@magenta.dachary.org) has joined #ceph
[16:47] * loicd (~loic@magenta.dachary.org) Quit ()
[16:48] * vata (~vata@208.88.110.46) has joined #ceph
[16:50] * match (~mrichar1@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[16:53] * flakrat (~flakrat@eng-bec264la.eng.uab.edu) Quit (Quit: Leaving)
[16:55] * low (~low@188.165.111.2) Quit (Quit: bbl)
[17:01] <elder> nhm, are you online?
[17:06] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:17] <jamespage> Kioob`Taff, have you read these - http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
[17:17] <jamespage> and http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
[17:18] <jamespage> they are a good read and relevant
[17:18] * yeled (~yeled@spodder.com) Quit (Quit: meh..)
[17:19] * yeled (~yeled@spodder.com) has joined #ceph
[17:21] * trhoden (~trhoden@pool-108-28-184-124.washdc.fios.verizon.net) has joined #ceph
[17:21] * yehudasa (~yehudasa@2607:f298:a:607:65d2:38a:861e:163e) Quit (Ping timeout: 480 seconds)
[17:23] <slang> lxo: I don't think those are known issues on ceph-fuse
[17:26] <ron-slc> quick question (hopefully.) I have created a new testing pool for RBD/QEMU use "kvm". Imported a single RAW image, when I execute: 'rados ls -p kvm' I see over 2000 files named similar to: rb.0.1601.6e6b12c8.0000000####, and my one image file: winxp64.img.rbd.. I do not remember seeing these "rb.0.xxxx" files on another ceph cluster, I used. I see no doc-references for these 1000's of rb.0.xxxx files. Does anybody know
[17:26] <ron-slc> what they are?
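[note: a hedged answer, since none is given in the log — the rb.0.* objects are almost certainly the fixed-size data objects (4 MB by default) into which the RBD image is striped, while winxp64.img.rbd is the image header; the prefix can be checked against the image:
    rbd info -p kvm <image-name>    # block_name_prefix should match rb.0.1601.6e6b12c8
]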
[17:26] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) has joined #ceph
[17:26] * jlogan1 (~Thunderbi@2600:c00:3010:1:14a3:ca45:3136:669a) has joined #ceph
[17:30] * yehudasa (~yehudasa@2607:f298:a:607:6105:88d2:5e1f:8abd) has joined #ceph
[17:32] <ebo^> how bad would it be to run the monitors on the osd-nodes
[17:33] <ron-slc> for small clusters, not a problem, just have plenty of RAM to support your Mon, and also provide a reasonable amount of RAM for use by OSD and caching.
[17:34] <joao> and don't stick all the monitors together in the same place ;)
[17:34] <ron-slc> In the future, when you grow your cluster greatly, you would want to eventually separate them when CPU/RAM needs become too large to co-exist.
[17:35] <ron-slc> Correct. Running all 3 mons in VM's on the same host, is almost pointless.
[17:35] <jtang> tbh, im pretty interested in running ceph in as small of a system as possible
[17:35] <ron-slc> But even for development/testing, you are fine with just 1 mon.
[17:35] <jtang> i might spend some time over christmas to make at least the kernel compile with the ceph modules on my raspi
[17:36] <ebo^> i'll have 6 dedicated osd nodes and probably one extra for the mds
[17:36] <ron-slc> cool, then just put mons on your 3-best osd nodes.
[17:38] <ebo^> if i put journals on one ssd per node, how bad would it be to lose one of those?
[17:38] <joao> jtang, the thing is, if you are going to stick all the monitors together in the same server, you're probably better off with a single monitor
[17:39] <l0nk> hello all, i'd like to know if it is possible to run a cluster with some latency (30ms max) between monitor nodes? if yes, how to manage it?
[17:39] <jtang> joao: huh?
[17:39] <jtang> i think that last remark should be for ebo^
[17:39] <joao> jtang, nevermind, yeah
[17:39] <jtang> :)
[17:40] * jtang is considering a move to sanfrancisco right now
[17:40] <jtang> *hmm* my wife just got tempted to move to the US
[17:42] <ron-slc> ebo: I have no experience (yet) on the loss of a journal file.. I've been keeping journals on same disk as OSD for ease of config and admin. Also my bandwidth needs haven't exceeded 100Mb/s yet.
[17:44] <jtang> btw, is the wip-osx branch a community thing or is it backed by inktank?
[17:45] <jtang> it seems like the changes that were made were pretty much the same changes i had made privately, except i didnt get as far as compiling it successfully, is libaio absolutely required to build ceph-fuse ?
[17:50] <slang> lxo: I've submitted bugs for those two ceph-fuse issues you are seeing: #3596,#3597
[17:56] * yasu` (~yasu`@soenat3.cse.ucsc.edu) has joined #ceph
[17:56] * gregorg (~Greg@78.155.152.6) has joined #ceph
[17:57] * BManojlovic (~steki@242-174-222-85.adsl.verat.net) has joined #ceph
[18:07] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[18:11] * danieagle (~Daniel@186.214.58.112) has joined #ceph
[18:14] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[18:16] * yasu` (~yasu`@soenat3.cse.ucsc.edu) Quit (Remote host closed the connection)
[18:20] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:21] * gaveen (~gaveen@112.134.113.232) has joined #ceph
[18:22] * gaveen (~gaveen@112.134.113.232) Quit ()
[18:22] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:22] * gaveen (~gaveen@112.134.113.232) has joined #ceph
[18:23] * rweeks (~rweeks@50-76-48-109-ip-static.hfc.comcastbusiness.net) has joined #ceph
[18:31] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[18:31] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) Quit (Remote host closed the connection)
[18:32] <elder> nhm, still shoveling?
[18:34] * ebo^ (~ebo@icg1104.icg.kfa-juelich.de) Quit (Ping timeout: 480 seconds)
[18:35] * gregaf (~Adium@2607:f298:a:607:a9c1:25b7:a54e:1cfe) Quit (Quit: Leaving.)
[18:36] * gregaf (~Adium@2607:f298:a:607:a9c1:25b7:a54e:1cfe) has joined #ceph
[18:40] * verwilst_ (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[18:49] * verwilst_ (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[18:49] * sagewk (~sage@2607:f298:a:607:bdbd:b82b:32ec:ace8) Quit (Ping timeout: 480 seconds)
[18:49] * sstan (~chatzilla@dmzgw2.cbnco.com) has joined #ceph
[18:49] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Ping timeout: 480 seconds)
[18:50] * sagewk (~sage@2607:f298:a:607:64a1:288d:93ad:96c1) has joined #ceph
[18:50] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[18:51] * yehudasa (~yehudasa@2607:f298:a:607:6105:88d2:5e1f:8abd) Quit (Ping timeout: 480 seconds)
[18:51] * yehudasa (~yehudasa@2607:f298:a:607:6105:88d2:5e1f:8abd) has joined #ceph
[18:53] <nhm> elder: got most of it done last night. Laura had a Dr. Appointment this morning so I had to make sure the car could get through.
[18:54] <rweeks> you actually use a shovel?
[18:54] * Cube (~Cube@12.248.40.138) has joined #ceph
[18:56] <nhm> rweeks: snowblower first, then shovel if there is less than 3" more.
[18:56] <rweeks> ok
[18:56] <nhm> rweeks: we got probably a foot of really dense heavy snow.
[18:56] <rweeks> I was going to say: there are these motorized things
[18:56] <nhm> Yeah, sadly mine seems to be burning oil and stalling this year. ;(
[18:59] <ircolle> nhm - Who needs snowblowers? You're young, put your back into it :-)
[18:59] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:59] <rweeks> pff
[18:59] <rweeks> that's how you ruin your back
[19:00] * ebo^ (~ebo@233.195.116.85.in-addr.arpa.manitu.net) has joined #ceph
[19:02] * Ryan_Lane (~Adium@216.38.130.167) has joined #ceph
[19:07] * danieagle (~Daniel@186.214.58.112) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[19:08] <elder> We got 15".
[19:08] <elder> It meant I could only ski Saturday, not Sunday. I guess I'm done for this year.
[19:08] <rweeks> where are you located, elder?
[19:09] <elder> MInnesota.
[19:09] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) has joined #ceph
[19:09] <elder> North East suburbs of the Twin Cities.
[19:09] * rweeks tries to remember where nhm is
[19:09] <elder> Mark is something like 15 miles away,.
[19:09] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:10] <elder> He's South of Minneapolis, I"m North of St. Paul.
[19:10] <rweeks> as far out as White Bear Lake?
[19:10] <elder> Near there.
[19:10] <rweeks> ok
[19:10] <nhm> ircolle: I actually like to shovel, but this stuff was half water.
[19:10] <elder> You familiar with the area?
[19:10] <rweeks> my wife lived in St Paul for 5 years before I met her
[19:10] <elder> Ahh. Where?
[19:10] <elder> White4 Bear?
[19:10] <nhm> ircolle: the nasty slush at the end of the drive way is a challenge.
[19:10] <rweeks> so I know it from maps and talking to her, but never have been there
[19:11] <elder> nhm, mine was sticky but not slushy.
[19:11] <rweeks> no she was in St Paul itself
[19:11] * houkouonchi-work (~linux@12.248.40.138) Quit (Remote host closed the connection)
[19:11] <rweeks> somewhere near Frogtown
[19:11] <elder> My brother lives near there.
[19:11] <rweeks> (from what I remember)
[19:11] <elder> Frogtown is Hmongtown now. (But the name hasn't changed.)
[19:15] <rweeks> yeah she has lots of Laotian and Hmong friends back there
[19:15] <rweeks> mostly on facebook these days
[19:16] <elder> I love going over there for lunch, lots of good restaurants.
[19:16] <rweeks> I bet.
[19:16] <elder> Vietnamese Pho and Eggrolls, the best.
[19:17] <nhm> I miss dinkytown.
[19:17] <nhm> Used to eat there all the time.
[19:17] <elder> Dinkytown is even better for restaurants (variety anyway)
[19:17] <elder> Although the chains are taking over.
[19:18] <nhm> yeah, starting to. Hopefully the small places will survive.
[19:20] * drokita (~drokita@199.255.228.10) Quit (Read error: Connection reset by peer)
[19:28] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:30] * yasu` (~yasu`@dhcp-59-224.cse.ucsc.edu) has joined #ceph
[19:31] <elder> Isn't there a link to the "old" ceph wiki content?
[19:33] <elder> I'm looking for some notes Dan put together about debugging with gdb on a VM kernel.
[19:34] <joshd> ceph.com/deprecated
[19:35] <elder> Thanks.
[19:35] <gregaf> elder: talk to Zafman too; he's been doing that a lot recently
[19:35] <elder> OK. I got it going before, I just am firing up another one.
[19:36] <elder> Is he on IRC?
[19:36] <elder> (I can use Pidgin)
[19:37] <gregaf> I thought so, but don't see him
[19:38] <elder> joshd, that link is a pain, I have to manually change "wiki" to "deprecated" for linked-to pages.
[19:38] <elder> (But I can do that...)
[19:42] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:42] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[19:42] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:44] * guigouz (~guigouz@177.33.216.27) has joined #ceph
[19:54] <nhm> ugh, I shouldn't have been playing warcraftII over the weekend. Now my wrist hurts.
[19:54] <ircolle> TMI
[19:54] <nhm> ircolle: :P
[20:16] <elder> Wow indeed.
[20:31] <nhm> no, WarcraftII, not Wow.
[20:33] * gaveen (~gaveen@112.134.113.232) Quit (Quit: leaving)
[20:35] <Psi-jack> OMG! People! Only during the week, it seems, not the weekend. hehe
[20:36] <rweeks> office hours, you know
[20:36] <Psi-jack> What's "office?" :)
[20:36] <Psi-jack> Is that where the TPS report is? ;}
[20:37] <Psi-jack> Anyway, figured out my logdev for XFS seems nicely sized at 128mb, so that's set.. As for CephFS's journal, that I haven't yet determined.
[20:38] <rweeks> I think dmick is in charge of the TPS reports
[20:38] <rweeks> but he's not here.
[20:38] <Psi-jack> haha
[20:38] <ircolle> Working on a new coversheet
[20:39] * gaveen (~gaveen@112.134.113.232) has joined #ceph
[20:39] <Psi-jack> Hmm, I had a question.. Looking it back up since it's beyond my scrollback. (but still in my ZNC log)
[20:42] <Psi-jack> Curious as to how CephFS would handle multiple mixed size drives. I'm considering balancing out all my HDD's.. NAS1 has 3x320 GB, NAS2 has 4x500 GB, and NAS3 currently has only the SSD, but I'm buying 3x1 TB. But, considering spreading the 3x320 between all three servers, 3 of the 500's across, and then the new 1 TB's, leaving only 1 500 GB drive to figure out what to do with.
[20:42] <Psi-jack> With the "balanced" approach, NAS1, NAS2, and NAS3 would have a 320GB, 500GB, and 1TB HDD used as OSDs
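[note: the usual way to handle mixed drive sizes — a sketch, not advice given in the channel — is to give each OSD a CRUSH weight proportional to its capacity (conventionally about 1.0 per TB), so data lands in proportion; in a decompiled crush map the host buckets would look roughly like:
    host nas1 {
        id -2
        alg straw
        hash 0
        item osd.0 weight 0.320   # 320 GB drive
        item osd.1 weight 0.500   # 500 GB drive
        item osd.2 weight 1.000   # 1 TB drive
    }
]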
[20:47] <sstan> Has anyone tried compiling ceph?
[20:47] <Psi-jack> Ahh, 1GB someone reported on, for Ceph journals.
[20:52] <slang> sstan: which version? current master branch of repo?
[20:53] <sstan> slang: http://ceph.com/download/ceph-0.55.tar.gz
[20:53] <Psi-jack> Hmmm, cool.
[20:54] <Psi-jack> This http://learnitwithme.com/?p=303 site, looks pretty encouraging. Since, I'm currently using NFSv4, and switching to using RBD, very interested to see the future. ;)
[20:54] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:54] <Psi-jack> Can you mount rbd devices?
[20:54] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:54] <slang> sstan: it builds as part of the automated build setup: http://ceph.com/gitbuilder.cgi
[20:55] <slang> sstan: scroll over for v0.55
[20:55] <slang> sstan: is it a specific platform you're having trouble with?
[20:55] <sstan> slang: cool I'll read that right now
[20:55] <rweeks> Psi-jack: you can mount an RBD device using the kernel client or the QEMU client
[20:55] <Psi-jack> Nice. :)
[20:56] <sstan> slang: well .. I was trying to see if I can get it installed on SLES 11sp2
[20:56] <Psi-jack> From a lot of what I've read, it sounds like using rbd is faster than using the cephfs fuse-based mount
[20:56] <Psi-jack> Likewise, You can access rbd disks via S3 protocol, too?
[20:57] <Psi-jack> Or, a less specific term, an rbd provision. ;)
[20:57] <rweeks> Psi-jack: The ceph gateway doesn't use RBD
[20:57] <rweeks> it writes directly to the object store from front-end requests it receives via S3 or Swift APIs
[20:58] <Psi-jack> I see.
[20:58] <rweeks> same with CephFS - it takes POSIX requests and writes them to the object store, but presents a filesystem to its clients
[20:59] <rweeks> same with RBD
[20:59] <rweeks> RBD presents a block device, but is the Ceph object store underneath
[20:59] <Psi-jack> Well, foo.. Not important. S3's just an added bonus I might or might not use. Mostly I'm going to use CephFS as a thin-provisionable, highly available alternative to iSCSI/iSCSI-md and NFSv4
[21:00] <sstan> slang: I see that the autobuilder can build it (with warnings). Is there a way to see how it is done?
[21:00] <rweeks> I would suggest you use RBD as a thin provisionable highly available alternative to iSCSI
[21:00] <Psi-jack> hehe
[21:00] <rweeks> and CephFS as an alternative to NFS
[21:00] <rweeks> since those are two different use cases
[21:00] <Psi-jack> Hmm, true.
[21:00] * Kioob (~kioob@luuna.daevel.fr) has joined #ceph
[21:01] <Psi-jack> Currently I use NFSv4 only, qcow2 disks for VM OS disks, and NFSv4 within some VM's for shared storage that multiple servers need. Web Servers and Mail Servers mostly.
[21:01] <rweeks> RBD is a good option for the VM disks
[21:02] <Psi-jack> Yep. :)
[21:02] <rweeks> now: do you use NFSv4 ACLs?
[21:03] <Kioob> I'm trying DRBD over RBD, it works, but it's not really fast
[21:03] <Psi-jack> This site I pointed out earlier, has an .ods spreadsheet that, in their test cases, showed CephFS as being painfully slow in terms of create/delete/read tests, in both sequential and random file tests. But blew the competition away with copy /usr and read /usr
[21:03] <Kioob> (no it's not for production, it's just to try migration from DRBD to RBD)
[21:03] <Psi-jack> but... They never detailed what version of CephFS was in use at all, so...
[21:04] <Psi-jack> Oh wait.
[21:04] <Psi-jack> Yes, they detailed it. 0.32
[21:04] <rweeks> oh
[21:04] <rweeks> that's way old
[21:04] <Psi-jack> Yes, I know! :)
[21:04] <rweeks> also: there are many design factors that would affect those tests aside from version
[21:04] <slang> sstan: if you click on v0.55 you can go to the page for that tag/branch, and then click on the Warnings/Errors at the top to see the complete log
[21:04] <slang> sstan: http://gitbuilder.sepia.ceph.com/gitbuilder-sles-11sp2-amd64/log.cgi?log=690f8175606edf37a3177c27a3949c78fd37099f
[21:06] <sstan> I've been there :) I wish it was working as well as on the build server. I'm trying to resolve the NSS dependency
[21:06] <Psi-jack> It was 3 servers, osd+mds + mon on each, 32GB RAM, 24 CPU Cores, and 10x2TB disks (No mention of SAS or SATA) with both btrfs and xfs.
[21:06] <Psi-jack> And 2 copies of data, 3 copies of metadata.
[21:07] <Psi-jack> Speaking of... 2 copies of data would allow the data to be available on 2 physical hosts.. Why 3 copies of metadata?
[21:09] * Ryan_Lane1 (~Adium@216.38.130.167) has joined #ceph
[21:09] <Psi-jack> Heh, my mesely little CephFS setup will only have 2.7TB after considering replica. heh
[21:09] * noob21 (~noob2@ext.cscinfo.com) has joined #ceph
[21:10] <Psi-jack> But.... That's up from the 1.7TB spread out between 2 servers. ;)
[21:11] <sstan> ./configure --without-tcmalloc worked for sles11 sp2
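[note: a sketch of the source build sstan describes, for a platform without a usable tcmalloc/gperftools — assuming the ceph-0.55 tarball linked earlier:
    tar xzf ceph-0.55.tar.gz
    cd ceph-0.55
    ./configure --without-tcmalloc
    make
]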
[21:11] * fc (~fc@home.ploup.net) Quit (Quit: leaving)
[21:12] <slang> sstan: ah ok. you're asking about what rpms that build server already has installed?
[21:13] <sstan> slang: I'm just trying to install it .. with a RPM or otherwise
[21:14] * noob2 (~noob2@ext.cscinfo.com) Quit (Ping timeout: 480 seconds)
[21:14] * Ryan_Lane (~Adium@216.38.130.167) Quit (Ping timeout: 480 seconds)
[21:17] <Psi-jack> mds is not the same as the cephfs journal, or is it?
[21:18] <slang> slang: you could try the rpms from the testing branch: http://gitbuilder.ceph.com/ceph-rpm-sles-11sp2-x86_64-basic/ref/testing/RPMS/x86_64/
[21:18] <slang> errr
[21:19] <slang> sstan: see my message to myself
[21:19] * slang eyes himself warily
[21:19] <sstan> haha thanks : )
[21:19] <slang> sstan: that has more recent commits than what's in v0.55, but its similar
[21:22] * CristianDM (~CristianD@host152.186-109-1.telecom.net.ar) has joined #ceph
[21:22] <CristianDM> Hi-
[21:22] <CristianDM> I Can´t start mds
[21:22] <CristianDM> monclient(hunting): no handler for protocol 0
[21:22] <CristianDM> monclient(hunting): none of our auth protocols are supported by the server
[21:23] <CristianDM> After 0.55 upgrade
[21:25] <slang> CristianDM: upgrade from 0.48?
[21:25] <CristianDM> nop
[21:25] <sstan> slang : make: *** [all-recursive] Error 1 .... compile-time error. I'll now try the RPMs you mentioned earlier
[21:25] <CristianDM> From 0.54
[21:26] <CristianDM> And 0.55 have random crash of osd
[21:26] <slang> CristianDM: can you pastebin the log from the crashed osd?
[21:27] <noob21> will ceph have a fit if the nodes are separated across the internet?
[21:27] <noob21> just a curious question
[21:27] <rweeks> not today, noob21
[21:27] <rweeks> since replication is synchronous
[21:28] <rweeks> async replication is planned
[21:28] <slang> CristianDM: also, can you pastebin your ceph.conf?
[21:28] <noob21> ok
[21:28] * rweeks (~rweeks@50-76-48-109-ip-static.hfc.comcastbusiness.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[21:29] <CristianDM> Yes.
[21:29] <CristianDM> This start with slow request.... and show down. Only return with ceph restart
[21:30] * gucki (~smuxi@HSI-KBW-082-212-034-021.hsi.kabelbw.de) Quit (Ping timeout: 480 seconds)
[21:32] * dmick (~dmick@2607:f298:a:607:e07b:b1cf:23d:d5a0) has joined #ceph
[21:32] <CristianDM> slang: log too big
[21:32] <CristianDM> slang: I will put some logs
[21:32] <slang> slang: you can compress it and email it to me (sam.lang@inktank.com)
[21:33] <CristianDM> slang: Yes
[21:33] * ChanServ sets mode +o dmick
[21:33] <CristianDM> slang: rar is fine?
[21:35] <slang> CristianDM: yes, although gzip or lzma will probably compress better
[21:35] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[21:38] <CristianDM> slang: send you ceph-osd.40.log.1.gz now
[21:45] * gaveen (~gaveen@112.134.113.232) Quit (Remote host closed the connection)
[21:45] * gaveen (~gaveen@112.134.113.25) has joined #ceph
[21:47] * wer (~wer@dsl081-246-084.sfo1.dsl.speakeasy.net) has joined #ceph
[21:48] <CristianDM> slang: any idea?
[21:49] <wer> mon.a@0(leader).osd e34 prepare_failure osd.28 10.5.0.190:6812/26739 from osd.24 10.5.0.190:6800/31745 is reporting failure:1
[21:49] <wer> My ceph instance appears not happy.
[21:51] <wer> running any ceph command yields mon.a@0(leader).osd e34 prepare_failure osd.28 10.5.0.190:6812/26739 from osd.24 10.5.0.190:6800/31745 is reporting failure:1 :( One whole node (out of two) decided that all the osd's were dead…. now they are actually running. Really random/scary. Was in super awesome condition friday :)
[21:52] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[21:53] <slang> CristianDM: I don't see any crash output at the end of that log
[21:53] <slang> CristianDM: is that the osd that crashed?
[21:55] <wer> I restarted the mon and now it is logging again. Still giving lots of these…. mon.a@0(leader).osd e276 prepare_failure osd.32 10.5.0.190:6824/17661 from osd.36 10.5.0.190:6836/18990 is reporting failure:0 but the health is OK now.
[22:02] * verwilst_ (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[22:19] * l0nk (~alex@173.231.115.58) Quit (Quit: Leaving.)
[22:22] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:33] * brambles (lechuck@s0.barwen.ch) Quit (Remote host closed the connection)
[22:40] * brambles (~xymox@s0.barwen.ch) has joined #ceph
[22:41] * calebmiles (~caleb@65-183-137-95-dhcp.burlingtontelecom.net) Quit (Quit: Leaving.)
[22:45] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[22:48] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[22:49] <dpippenger> hello folks, I'm having a bit of an issue removing an object from my storage pool. I'm running 0.52 on FC17 w/btrfs and I keep getting these errors on one of my objects http://pastebin.com/F6VKX2SR
[22:50] <dpippenger> any suggestions would be appreciated
[22:51] <gregaf> dmick: joshd: what versions of rbd rm have what issues? ;)
[22:51] <dpippenger> is there a way to delete it using the ceph cli tool?
[22:51] <dpippenger> the help is a bit sparse on that tool :)
[22:53] <dpippenger> or if you think maybe I should just go upgrade to solve it, that would work also..
[22:54] <joshd> dpippenger: 0.52 didn't have snap protection implemented properly, 0.54 does. to delete this image, you'll have to use the rados tool to remove the header and id object (rbd_id.$imagename, and rbd_header.169d38bd92d7)
[22:55] <joshd> i.e. rados -p cinder rm rbd_id.volume-3e68a46d-9c14-4256-bbae-cd981a2fc0bd; rados -p cinder rm rbd_header.169d38bd92d7
[22:55] <dpippenger> ahh, ok... that kinda makes sense. The error seemed to be indicating the parent/child mapping got broken somehow
[22:56] <joshd> then 'rbd rm -p cinder volume-3e68a46d-9c14-4256-bbae-cd981a2fc0bd' can clean up the rest
[22:56] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[22:56] <dpippenger> error removing cinder/rbd_header.169d38bd92d7: Device or resource busy
[22:57] <joshd> that should go away in 30s
[22:57] <dpippenger> well doing the rbd rm seemed to work now
[22:57] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Quit: Leaving)
[22:57] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[22:57] <joshd> yeah, just removing the id object will let the rm continue
[22:57] <dpippenger> thanks for your help
[22:57] <joshd> you're welcome
[22:57] <dpippenger> I'll bump up to .55 to avoid getting wedged again
[22:58] <Kioob> what model of SSD should I use for RBD ? Mine are too sllooowww :(
[22:59] <terje> is there a trick to doing something like this: ceph osd getcrushmap | crushtool -d -
[22:59] <terje> to get my crushmap on stdout
[23:00] * brambles (~xymox@s0.barwen.ch) Quit (Remote host closed the connection)
[23:01] * brambles (lechuck@s0.barwen.ch) has joined #ceph
[23:01] <dmick> gregaf: mixing OSD versions with rbd versions can be...tricky
[23:02] <dmick> otherwise, unless objects have disappeared out from under, I don't remember any issues specifically (we've been making rm work better when some objects/info is/are missing lately)
[23:02] <dmick> (and improving backward compatibility)
[23:05] <joshd> terje: I don't think crushtool reads from stdin right now
[23:06] <terje> ok, I'll just write to tmp files, thanks.
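[note: the tmp-file workflow terje settles for, collected here as a sketch — crushtool of this era cannot decompile from stdin:
    ceph osd getcrushmap -o /tmp/crush.bin
    crushtool -d /tmp/crush.bin -o /tmp/crush.txt
    # edit /tmp/crush.txt, then recompile and inject it:
    crushtool -c /tmp/crush.txt -o /tmp/crush.new
    ceph osd setcrushmap -i /tmp/crush.new
]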
[23:11] * verwilst_ (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[23:12] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[23:13] <mikedawson> When an OSD process dies, where should I look for a core dump?
[23:15] * rread (~rread@c-98-234-218-55.hsd1.ca.comcast.net) has joined #ceph
[23:15] <joshd> often / by default
[23:17] * noob21 (~noob2@ext.cscinfo.com) has left #ceph
[23:19] <mikedawson> joshd: I have a file /core, but its a week old.
[23:20] <mikedawson> Moved it, then tried to start OSD, and now I have a new /core. Perhaps new core dumps don't overwrite old core dumps.
[23:23] <joshd> you can change it so /proc/sys/kernel/core_pattern includes pid and executable name
[23:23] <joshd> man 5 core
[23:23] <mikedawson> thx
[23:24] * jjgalvez1 (~jjgalvez@12.248.40.138) has joined #ceph
[23:25] <mikedawson> Could you point me at the process to examine the backtrace? I think I remember reading of a debug package to install for symbols or something.
[23:26] <joshd> yeah, install ceph-dbg, and run 'gdb /usr/bin/ceph-osd /path/to/core', and use the 'bt' command to show the backtrace
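[note: the debugging recipe joshd gives, gathered in one place as a sketch — the core_pattern value and paths are only examples:
    echo '/var/crash/core.%e.%p' > /proc/sys/kernel/core_pattern   # name cores by executable and pid
    apt-get install ceph-dbg                                       # debug symbols
    gdb /usr/bin/ceph-osd /path/to/core
    (gdb) bt
]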
[23:29] <mikedawson> Could you look at this backtrace? http://pastebin.com/7gPbEFRA
[23:30] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Read error: Operation timed out)
[23:30] <joshd> that's not good
[23:31] <joshd> it's failing to decode the on-disk structure describing a pg
[23:31] <joshd> sjust may be able to dig deeper
[23:31] <mikedawson> this is 0.55 (and NOT production)
[23:32] <mikedawson> If it's useful to you, I'll keep it around for bug reporting
[23:32] <sjust> mikedawson: looks like it might be the 0-length pg info nonsense
[23:32] <sjust> can you ls -lah the meta directory on the crashed osd?
[23:33] <mikedawson> sjust: ls -lah /var/lib/ceph/osd/ceph-17/ looks roughly normal
[23:33] <sjust> ./current/meta
[23:34] <mikedawson> got it
[23:34] <mikedawson> http://pastebin.com/tw9jUtAM
[23:35] <joshd> is it normal for pginfo\u4.df__0_28CA2073__none to be 35 bytes less than the rest?
[23:37] <mikedawson> joshd: I'm guessing you are asking sjust
[23:37] * CristianDM (~CristianD@host152.186-109-1.telecom.net.ar) Quit ()
[23:37] <joshd> yeah
[23:38] * Ryan_Lane1 (~Adium@216.38.130.167) Quit (Quit: Leaving.)
[23:38] <sjust> joshd: yeah, that's not a problem
[23:38] <sjust> mikedawson: sorry, I apparently need a recursive ls
[23:39] <mikedawson> backstory: the physical SATA disk dropped offline for some reason right before this. Bounced server, it came back.
[23:41] <mikedawson> Recursive: http://pastebin.com/MAQ77KGM
[23:42] <sage> yehudasa, joshd: wip-conf complains about config parsing unconditionally.. see any problems?
[23:42] <sage> we still don't error out on e.g. bad syntax, which is odd, but it's a larger change to make this late in teh game. i'd rather do that in master.
[23:42] <joshd> does it complain about anything in our default/sample config in the packages?
[23:44] <sjust> mikedawson: hmm, still seems like it's likely to be a corrupted disk
[23:44] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[23:44] <sjust> did the cluster recover without it?
[23:45] <mikedawson> sjust: HEALTH_WARN 55 pgs down; 55 pgs peering; 55 pgs stuck inactive; 55 pgs stuck unclean
[23:45] <sjust> how many osds?
[23:45] <mikedawson> 22, then lost #18, then lost #17 right after
[23:45] <sjust> oh
[23:46] <sjust> replication 2?
[23:46] <mikedawson> yeah
[23:46] <yehudasa> sage: will it error out when running, e.g., ceph --version?
[23:46] <mikedawson> something odd happened. This is in a 2U / four node configuration that shares some backplane wiring. I think flaky hardware is to blame
[23:47] <sage> no, that happens during config parsing
[23:47] <sage> but e.g. 'ceph health' will warn about the syntax (then continue)
[23:47] <yehudasa> seems ok then
[23:47] <sage> joshd: no complaints from sample.ceph.conf
[23:48] <sage> k thanks
[23:48] <sage> we should make it error out properly later :)
[23:48] <mikedawson> sjust: I think I need to adjust the ceph osd tree to account for this type of failure of a shared chassis
[23:48] <sage> or solve that with that guy's config refactor
[23:48] <joshd> sage: does that always complain when you have no config file
[23:48] <terje> are there any utilities to manipulate the crush map other than writing out a new crush map file?
[23:49] <sjust> mikedawson: yeah, it's pretty straightforward to ensure that no pg has both replicas in the same chassis
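[note: a hedged sketch of what mikedawson and sjust describe — declare a chassis bucket type in the crush map, place the host buckets inside chassis buckets, and use a rule that picks leaves across chassis so no pg keeps both replicas in one enclosure:
    rule per_chassis {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type chassis
        step emit
    }
]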
[23:49] <terje> that is, I wish to create a new pool and put two osd's in it.
[23:49] <sage> joshd: ooh, yeah it does. crap.
[23:49] * jefferai (~quassel@quassel.jefferai.org) Quit (Ping timeout: 480 seconds)
[23:50] <mikedawson> sjust: Thanks. I'll rebuild tomorrow.
[23:52] <mikedawson> sjust: Next time, should I try ceph-deploy or mkcephfs?
[23:52] <sjust> ah, not sure
[23:52] <mikedawson> ha
[23:52] <sjust> greg,sage: cephdeploy or mkcephfs?
[23:53] <sage> mkcephfs, unless you want to test
[23:53] <mikedawson> roger
[23:53] <sage> and are using ubuntu precise, and a few other limitations
[23:54] <joao> is there any way of changing a locked plana's ownership?
[23:55] <joshd> joao: not in one operation. you can only unlock and then lock with a different owner
[23:55] <dmick> but you can mark it 'down' first so it doesn't get stolen
[23:57] <joao> I'm assuming that only the owner can unlock?
[23:57] <joao> nope, the --owner thingy did it :)
[23:57] <dmick> no, you can specify
[23:57] <dmick> vidyo time
[23:58] <joao> yay!
[23:58] <joao> thanks
[23:58] <joao> apparently teuthology will complain if you change your user@host on a new install ;)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.