#ceph IRC Log


IRC Log for 2013-01-02

Timestamps are in GMT/BST.

[0:03] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) has joined #ceph
[0:03] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[0:12] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[0:13] * sleinen2 (~Adium@2001:620:0:25:2168:26b5:8eda:d7fe) Quit (Quit: Leaving.)
[0:15] * gaveen (~gaveen@112.135.135.240) Quit (Quit: Leaving)
[0:17] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[0:24] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) has joined #ceph
[0:25] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) Quit (Quit: Leseb)
[0:30] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[0:32] * dwm37 (~dwm@northrend.tastycake.net) has joined #ceph
[0:33] * `10 (~10@juke.fm) has joined #ceph
[0:33] <dwm37> I'm being shown the wrong filesystem size in a newly-minted, kernel-mounted cephfs -- "ceph -s" reports 1050GB/1080GB available, but df -h shows only 4.2GB total capacity.
[0:34] <dwm37> This seems to be a relatively recent regression. Is this a known issue?
[0:35] <dwm37> (Entertainingly, writing an 8GB file with 'dd' results in the reported FS utilisation climbing to 81MB.)
[0:35] <dwm37> For reference, ceph --version returns: ceph version 0.56-103-gf1196c7 (f1196c7e93af83405ea5082030a7899e256ded7a)
[1:06] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[1:14] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[1:33] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) has joined #ceph
[1:34] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) Quit (Read error: Connection reset by peer)
[1:42] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) has joined #ceph
[1:59] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:59] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:18] * dpippenger (~riven@cpe-76-166-221-185.socal.res.rr.com) Quit (Remote host closed the connection)
[2:41] * Leseb (~Leseb@5ED17881.cm-7-2b.dynamic.ziggo.nl) Quit (Quit: Leseb)
[3:48] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[3:51] * Cube (~Cube@174-154-172-88.pools.spcsdns.net) has joined #ceph
[3:55] * Cube1 (~Cube@pool-71-108-128-153.lsanca.dsl-w.verizon.net) has joined #ceph
[4:00] * Cube (~Cube@174-154-172-88.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[4:21] * darkfaded (~floh@188.40.175.2) has joined #ceph
[4:26] * darkfader (~floh@188.40.175.2) Quit (Ping timeout: 480 seconds)
[4:32] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[4:32] * loicd (~loic@magenta.dachary.org) has joined #ceph
[4:35] * themgt (~themgt@71-90-234-152.dhcp.gnvl.sc.charter.com) has joined #ceph
[4:41] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[5:06] * Cube1 (~Cube@pool-71-108-128-153.lsanca.dsl-w.verizon.net) Quit (Quit: Leaving.)
[5:19] * benner_ (~benner@193.200.124.63) has joined #ceph
[5:19] * benner (~benner@193.200.124.63) Quit (Read error: Connection reset by peer)
[5:51] * Cube (~Cube@12.248.40.138) has joined #ceph
[6:25] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:52] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:10] * gaveen (~gaveen@112.135.159.97) has joined #ceph
[7:11] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[7:11] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:07] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:07] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:36] * low (~low@188.165.111.2) has joined #ceph
[8:37] * ninkotech (~duplo@89.177.137.231) Quit (Ping timeout: 480 seconds)
[8:50] * agh (~agh@www.nowhere-else.org) has joined #ceph
[8:50] <agh> Hello to all
[8:51] * Cube (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[8:53] * verwilst (~verwilst@d5152D6B9.static.telenet.be) has joined #ceph
[8:54] <Vjarjadian> hi
[8:59] <agh> Maybe you will be able to help me
[8:59] <agh> (i hope so) :)
[9:00] <Vjarjadian> uh oh.... lol
[9:00] <agh> I'm testing CephFS quite a bit: I've 35 OSDs on 7 different hosts
[9:00] <agh> So, I mount my CephFS cloud on 2 different clients
[9:00] <agh> and here, i do a stupid thing:
[9:01] <agh> for i in `seq 1 1000` ; do cp debian.iso $i.iso ;done
[9:01] <agh> So it works... until it does not work anymore.
[9:02] <agh> In logs, I have a lot a lot of : 2012-12-31 18:09:32.204425 mon.0 172.17.200.102:6789/0 670110 : [INF] osd.19 172.17.200.111:6804/18630 failed (by osd.1 172.17.200.105:6802/8685)
[9:02] <agh> (a lot of lines of this type, involving every OSDs)
[9:02] <agh> And, of course, the whole cluster becomes stuck
[9:02] <agh> Any idea to help me ?
[9:03] * illuminatis (~illuminat@0001adba.user.oftc.net) has joined #ceph
[9:07] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[9:08] * agh (~agh@www.nowhere-else.org) has joined #ceph
[9:09] <iggy> agh: might try back in about 8-10 hours when more of the devs are around
[9:11] <iggy> fwiw, I don't think a lot of people are testing cephfs atm (but it should at least behave correctly)
[9:11] * Leseb (~Leseb@2001:980:759b:1:78b7:a4b9:2485:4eb2) has joined #ceph
[9:12] <agh> iggy: ok. Thanks
[9:12] * Leseb_ (~Leseb@193.172.124.196) has joined #ceph
[9:12] <agh> iggy: I know that CephFS is not considered stable... But I need CephFS :(
[9:13] <iggy> you and me both
[9:13] <agh> iggy: is it working in your case ?
[9:14] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:15] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[9:16] <iggy> I haven't actually tested cephfs in quite some time, but I don't think at present I could even talk management into testing it
[9:16] <iggy> maybe if there's a big push on cephfs sometime this year I can get it in here
[9:18] <iggy> but we already have enough problems with our current setup
[9:18] <agh> What is your setup ?
[9:19] * Leseb (~Leseb@2001:980:759b:1:78b7:a4b9:2485:4eb2) Quit (Ping timeout: 480 seconds)
[9:19] * Leseb_ is now known as Leseb
[9:20] <iggy> we are currently using nfs (along with duct tape, bailing wire, and a lot of crossed fingers)
[9:23] <agh> with Ceph ? or not at all ? Because, We use NFS too, with NetApp... But the idea is to migrate to CephFS...
[9:24] <iggy> no ceph at all currently
[9:26] <agh> mm... my problem probably comes from a misconfigured journal size
[9:30] * tziOm (~bjornar@ti0099a340-dhcp0628.bb.online.no) has joined #ceph
[9:35] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[9:36] * loicd (~loic@178.20.50.225) has joined #ceph
[9:38] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:45] * dosaboy (~user1@host86-161-240-84.range86-161.btcentralplus.com) has joined #ceph
[9:57] * dosaboy (~user1@host86-161-240-84.range86-161.btcentralplus.com) Quit (Quit: Leaving.)
[9:57] * dosaboy (~user1@host86-161-240-84.range86-161.btcentralplus.com) has joined #ceph
[10:01] * dosaboy (~user1@host86-161-240-84.range86-161.btcentralplus.com) has left #ceph
[10:02] * joshd1 (~jdurgin@2602:306:c5db:310:a5e0:68a1:e60a:3db) has left #ceph
[10:07] * dosaboy1 (~gizmo@host86-161-240-84.range86-161.btcentralplus.com) has joined #ceph
[10:28] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) has joined #ceph
[10:33] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[10:33] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:46] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[10:47] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[10:55] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[10:55] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[10:55] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[10:55] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[11:09] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[11:53] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[11:53] * gaveen (~gaveen@112.135.159.97) Quit (Remote host closed the connection)
[11:53] * agh (~agh@www.nowhere-else.org) has joined #ceph
[12:35] <agh> hello to all
[12:39] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[12:43] <wido> hi
[12:43] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[12:45] <agh> I need help :(
[12:46] <agh> I'm testing CephFS quite a bit: I've 35 OSDs on 7 different hosts
[12:46] <agh> So, I mount my CephFS cloud on 2 different clients
[12:46] <agh> and here, i do a stupid thing:
[12:46] <agh> for i in `seq 1 1000` ; do cp debian.iso $i.iso ;done
[12:47] <agh> So it works... until it does not work anymore.
[12:47] <agh> In logs, I have a lot a lot of : 2012-12-31 18:09:32.204425 mon.0 172.17.200.102:6789/0 670110 : [INF] osd.19 172.17.200.111:6804/18630 failed (by osd.1 172.17.200.105:6802/8685)
[12:47] <agh> (a lot of lines of this type, involving every OSDs)
[12:47] <agh> And, of course, the whole cluster becomes stuck
[12:47] <agh> Any idea to help me ?
[12:48] <wido> which version of Ceph?
[12:48] <wido> Do note that the POSIX filesystem is still in beta and doesn't work that great yet
[12:49] <agh> 0.52
[12:50] <agh> Yes, but I need a distributed POSIX fs... Do you have any clues ? If not CephFS, what ?
[12:51] <dwm37> agh: You almost certainly want to upgrade to the latest Bobtail release.
[12:51] <dwm37> Also, as CephFS is not particularly stable yet, you may instead prefer to try something more established, such as GlusterFS.
[12:51] <agh> dwm37: so the 0.55 version ? (testing repo ?)
[12:52] <dwm37> (Well, I say that, I was playing with the new release yesterday, and seem to have run into a regression regarding reported FS sizes with 'df'..)
[12:53] <jluis> agh, if after upgrading to 0.56 you still hit that, you might also want to gather the logs of the osds that were marked down and make them available somewhere so someone can take a look
[12:53] <jluis> emailing the list is an option; or maybe checking with sjust later today on IRC
[12:53] <agh> jluis: ok. So i'm gonna upgrade and test
[12:54] <agh> jluis: ok. Thanks.
[12:56] <jluis> dwm37, is that regression the one regarding some 80-ish MB being reported on 'df' after using 'dd' to write an 8GB or so file?
[12:56] <dwm37> jluis: Yes.
[12:56] <jluis> have a faint recollection about seeing that somewhere (can't recall where)
[12:56] <jluis> dwm37, just checking
[12:56] <dwm37> Cheers.
[12:56] <jluis> well, fwiw, the fs metadata is also kept on the osds
[12:57] <dwm37> I suspect it's a structure-update change confusing the client.
[12:57] <jluis> don't know if that has anything to do with it
[12:57] <dwm37> i.e. Ceph clearly knows the correct details, and is reporting them, but they're being parsed incorrectly.
[12:57] <dwm37> Happens regardless of whether I use the kernel or FUSE client when mounting.
[12:58] <dwm37> (I note that the 4.2GB total-size being reported is, plus or minus, the exact total capacity of the *journals* on each OSD.)
[12:59] <dwm37> And the 8GB being reported as 80MB could be a somewhat odd units mismatch...
[12:59] <dwm37> But that's just a guess.
[13:06] <agh> do you have any recommendations on journal size? My OSD nodes are filled with 7.2K SATA disks. Some have an SSD for the journal.
[13:09] <dwm37> agh: I'd just take the SSD capacity and split it equally between the OSDs in that host.
[13:10] <agh> dwm37: ok. Thanks
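For reference, journal settings live in ceph.conf. A minimal sketch of the advice above, assuming a hypothetical 60GB SSD shared by five OSDs on one host, i.e. roughly 12GB per journal; 'osd journal size' is expressed in MB, and the partition paths are invented for illustration:

    [osd]
        osd journal size = 12000     # per-OSD journal, in MB

    [osd.0]
        osd journal = /dev/sda1      # hypothetical SSD partition
    [osd.1]
        osd journal = /dev/sda2
    # ...and so on for the remaining OSDs on this host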
[13:15] <agh> is there any way to use RBD in VMWare ?
[13:15] <Gugge-47527> share the blockdevice over iscsi maybe
[13:16] <agh> Gugge-47527 : ok it was my idea... But i was hoping a native rbd plugin for vmware :)
[13:17] <Gugge-47527> I would use KVM and not vmware :)
[13:17] <agh> Gugge-47527: we do too.
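One hypothetical shape for the iSCSI route Gugge-47527 suggests: map an RBD image on a Linux gateway (kernel rbd module), then export the resulting block device with a target such as tgt. Pool, image, and IQN names here are invented, and this sketch skips ACLs and HA entirely:

    rbd create iscsi/vmware-lun0 --size 102400            # 100 GB image
    rbd map iscsi/vmware-lun0                             # appears under /dev/rbd/iscsi/vmware-lun0
    tgtadm --lld iscsi --op new --mode target --tid 1 \
           -T iqn.2013-01.com.example:rbd.vmware-lun0
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 \
           --lun 1 -b /dev/rbd/iscsi/vmware-lun0
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL   # open to all initiators; lab use only

VMware's software iSCSI initiator can then log in to the gateway like any other target.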
[13:24] * dosaboy3 (~gizmo@host86-163-33-19.range86-163.btcentralplus.com) has joined #ceph
[13:27] * dosaboy1 (~gizmo@host86-161-240-84.range86-161.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[13:29] * LeaChim (~LeaChim@5ad684ae.bb.sky.com) Quit (Ping timeout: 480 seconds)
[13:32] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[13:33] * foxhunt (~richard@office2.argeweb.nl) has joined #ceph
[13:37] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[13:44] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[13:58] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[14:10] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[14:15] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[14:17] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:18] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:21] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[14:22] * agh (~agh@www.nowhere-else.org) has joined #ceph
[14:30] * ScOut3R_ (~ScOut3R@212.96.47.215) has joined #ceph
[14:36] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[14:38] * aliguori (~anthony@cpe-70-113-5-4.austin.res.rr.com) has joined #ceph
[14:42] * ScOut3R_ (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[14:43] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[15:00] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:08] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[15:18] * sstan (~chatzilla@dmzgw2.cbnco.com) Quit (Ping timeout: 480 seconds)
[15:23] * noob2 (~noob2@ext.cscinfo.com) has joined #ceph
[15:47] <agh> Hello to all, I'm using CephFS (0.56). I've 35 OSDs on 7 hosts, for 54 TB.
[15:48] <agh> It seems to work fine, but now I'm doing a big rsync onto it, and I see the MDS (I have 3 MDSs, active/passive) cycling through active...laggy...replay...reconnect...rejoin...active. Why?
[15:52] * loicd1 (~loic@178.20.50.225) has joined #ceph
[15:52] * loicd (~loic@178.20.50.225) Quit (Read error: Connection reset by peer)
[15:56] <dwm37> Sounds like the MDSs might be resource-starved?
[15:56] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[15:57] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[15:58] <paravoid> I upgraded from 0.55.1 to 0.56 and I see OSDs dying all across the board
[16:00] <nhm> paravoid: Anything useful in the logs?
[16:01] <nhm> agh: were you running a previous version of ceph before? If so, was it happening then as well?
[16:01] <nhm> agh: Unfortunately RBD and RGW have been getting a lot more attention since that's what most of our customers are using these days.
[16:02] <paravoid> backtraces
[16:02] <agh> nhm: yes, but no :) I did a total wipe out of the cluster (mkcephfs)
[16:02] <paravoid> the 0.55.1 daemons consistently die
[16:02] <paravoid> the 0.56 seem to just flap once
[16:02] <agh> nhm: i know :( but i neeeed CephFS :(
[16:03] * sstan (~chatzilla@dmzgw2.cbnco.com) has joined #ceph
[16:03] <nhm> agh: yeah, a lot of us want to work on it more. Gotta pay the bills though!
[16:04] <paravoid> 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1df) [0x81bbbf]
[16:04] <paravoid> 10: (OSD::handle_op(std::tr1::shared_ptr<OpRequest>)+0x12aa) [0x6174da]
[16:04] <paravoid> is the 0.55.1 crash
[16:04] <paravoid> 11: (OSD::dispatch_op(std::tr1::shared_ptr<OpRequest>)+0xe9) [0x61ef69]
[16:04] <paravoid> 12: (OSD::_dispatch(Message*)+0x26e) [0x626fbe]
[16:04] <paravoid> when peering with 0.56 OSDs or radosgw I guess
[16:05] <paravoid> the 0.56 crashes had something with heartbeat _check
[16:05] <paravoid> I'm going to get rid of 0.55.1 completely for now
[16:05] <paravoid> move to 0.56 entirely
[16:06] <nhm> paravoid: hrm, looks like the filoo guys may have seen something similar.
[16:07] <paravoid> 32 ceph-osd (ceph/43)
[16:07] <paravoid> 35 ceph-osd (ceph/36)
[16:07] <paravoid> 39 ceph-osd (ceph/47)
[16:07] <paravoid> 41 ceph-osd (ceph/46)
[16:07] <paravoid> that's number of crashes
[16:07] <paravoid> in like 5' or so
[16:07] <paravoid> I *think* it happened after I upgraded radosgw
[16:08] <nhm> paravoid: you may want to look at this thread: http://www.spinics.net/lists/ceph-devel/msg10862.html
[16:09] <paravoid> can't see how that's related
[16:09] <paravoid> this is definitely the result of 0.55.1->0.56
[16:09] <paravoid> plus, the backtrace is different
[16:13] <paravoid> that was... interesting
[16:14] <paravoid> it's still trying to recover
[16:15] <nhm> paravoid: hrm, I seem to be seeing things.
[16:17] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[16:17] <nhm> paravoid: ok, ignore that thread. I thought I saw what's in the snippet you provided but didn't.
[16:19] <nhm> paravoid: might want to turn debug OSD 20 on (possibly along with some other debugging) and get some logs ready for Sam.
[16:19] <paravoid> I haven't had a crash yet since I upgraded all OSDs to 0.56
[16:19] <paravoid> however the cluster is still in recovery and all radosgw traffic has ceased
[16:20] <paravoid> so, I'll wait until it recovers and see if it persists
[16:20] <paravoid> definitely an upgrade problem for others though :)
[16:21] <nhm> paravoid: would you mind creating a bug with some of the data you have now? It'd be good to capture the problem at least.
[16:22] <paravoid> as soon as this is over:
[16:22] <paravoid> 2013-01-02 15:22:12.515873 mon.0 [INF] pgmap v552695: 16952 pgs: 277 inactive, 96 active, 4046 active+clean, 6953 active+remapped+wait_backfill, 620 active+degraded+wait_backfill, 22 active+recovery_wait, 5 active+recovering+remapped, 71 stale+active+remapped+wait_backfill, 1 active+recovering+degraded, 556 peering, 1 stale+active+degraded+wait_backfill, 332 remapped, 173 active+remapped, 59 active+remapped+backfilling, 1 down+peering, 8 active+degraded, 3
[16:22] <paravoid> :-)
[16:22] <nhm> :)
[16:29] * nyeates (~nyeates@pool-173-59-239-231.bltmmd.fios.verizon.net) has joined #ceph
[16:30] * agh (~agh@www.nowhere-else.org) has joined #ceph
[16:31] * ircolle (~ircolle@c-67-172-132-164.hsd1.co.comcast.net) has joined #ceph
[16:54] * low (~low@188.165.111.2) Quit (Quit: bbl)
[17:13] <foxhunt> hello, what is the best place to start for creating a config with multiple osds per host, thinking about the crush map etc?
[17:16] <sstan> vim ?
[17:16] <sstan> I think one can add OSDs without compiling/decompiling the CRUSH map
[17:18] * jlogan (~Thunderbi@2600:c00:3010:1:c1cc:53b1:28d6:4bc2) has joined #ceph
[17:18] <foxhunt> ok, but does the default crush map handle more than 1 osd on a single host, so that data is never written 2 times to the same host?
[17:24] <sstan> if you set ceph.conf to have one OSD per host, data will not be written 2 times to the same host by default (provided you have 2 or more hosts)
[17:25] * sagelap (~sage@132.sub-70-197-145.myvzw.com) has joined #ceph
[17:26] * verwilst (~verwilst@d5152D6B9.static.telenet.be) Quit (Quit: Ex-Chat)
[17:26] * sagelap1 (~sage@76.89.177.113) Quit (Ping timeout: 481 seconds)
[17:36] <paravoid> I don't think that's right
[17:36] <paravoid> the default CRUSH map has "choose firstn 0 type osd"
[17:37] <paravoid> so it only replicates across OSDs, not caring about other parts of the tree as I see it
[17:37] <paravoid> but I'm no crush expert
[17:38] <janos> thankfully either way crushmaps can be altered as needed pretty easily
[17:39] <janos> i don't know how it defaults for different scenarios, but a cluster i set up on two hosts this weekend defaulted to firstn 0 type host
[17:40] <janos> it's possible the default adapts to what resrources are provided
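The distinction under discussion is visible in the rule syntax itself. A sketch of a replicated rule that puts each replica on a distinct host (the 'type host' variant janos saw; with 'type osd' two replicas could land on the same host):

    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

'firstn 0' means 'as many as the pool's replica count requires'.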
[17:40] * foxhunt (~richard@office2.argeweb.nl) Quit (Remote host closed the connection)
[17:42] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[17:44] * sagelap (~sage@132.sub-70-197-145.myvzw.com) Quit (Ping timeout: 480 seconds)
[17:56] * jlogan (~Thunderbi@2600:c00:3010:1:c1cc:53b1:28d6:4bc2) Quit (Quit: jlogan)
[17:57] * sagelap (~sage@2607:f298:a:607:60d2:4e79:ba9:158c) has joined #ceph
[17:57] <sstan> By default, RADOS includes a rule that two replicas of each object must exist per pool.
[17:58] <sstan> source : http://www.admin-magazine.com/HPC/Articles/RADOS-and-Ceph-Part-2
[17:58] <paravoid> that's irrelevant
[17:58] <noob2> hooray about 0.56 :D
[17:59] <sstan> foxhunt is talking about crush maps ... that link also talks about crush maps ... how is it irrelevant ?
[18:03] * jlogan (~Thunderbi@2600:c00:3010:1:c1cc:53b1:28d6:4bc2) has joined #ceph
[18:04] <paravoid> the number of replicas is not part of crush afaik
[18:04] <paravoid> it's an input for crush, to pick the rule out of the ruleset
[18:04] * jlogan (~Thunderbi@2600:c00:3010:1:c1cc:53b1:28d6:4bc2) Quit ()
[18:05] <paravoid> I'm by no means authoritative, just a user
[18:05] <paravoid> but that's my interpretation of what I've seen so far
[18:06] <sstan> ok, same here, I'm no expert yet either
[18:07] <sstan> hmm ...the number of replicas is a pool attribute
[18:08] <sstan> then, thanks to the crush MAP, an algorithm can distribute objects to the best OSDs
[18:08] <paravoid> yes
[18:09] <paravoid> a pool has a number of replicas and a selected ruleset
[18:09] <paravoid> crush has rulesets, usually with a single rule but sometimes with more
[18:09] <paravoid> so you can have a different rule for 2 copies and a different one for 10, on the same ruleset
[18:10] <paravoid> and pool A can be ruleset=0 replicas=2, while pool B can be ruleset=0 replicas=10
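In command terms, that split looks roughly like this (pool names invented; 'crush_ruleset' was the property name in this era):

    ceph osd pool create poolA 256             # 256 placement groups
    ceph osd pool set poolA size 2             # 2 replicas
    ceph osd pool set poolA crush_ruleset 0

    ceph osd pool create poolB 256
    ceph osd pool set poolB size 10            # 10 replicas, same ruleset
    ceph osd pool set poolB crush_ruleset 0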
[18:10] <noob2> is 0.56 in the stable repo yet or do we manually download the packages and deploy them?
[18:10] <sstan> now it's more clear, thanks paravoid
[18:10] <paravoid> noob2: I got it from ceph.com/debian-testing/
[18:10] <noob2> ok that's what i thought
[18:11] <paravoid> so the rule then can say how to pick devices per replica
[18:17] <sstan> i.e. how to pick devices for each "replication level" ??
[18:19] <paravoid> there's a nice example in the docs on how you can have the primary replica in SSDs and the rest in platter storage
[18:19] * jlogan (~Thunderbi@2600:c00:3010:1:c1cc:53b1:28d6:4bc2) has joined #ceph
[18:20] <sstan> ok I'll look for that
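From memory, the docs example paravoid mentions is shaped something like this: the first replica is drawn from an 'ssd' root and the remainder from a 'platter' root, both of which assume a crush map organised into those roots:

    rule ssd-primary {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 1 type host
        step emit
        step take platter
        step chooseleaf firstn -1 type host
        step emit
    }

Here 'firstn 1' picks the primary and 'firstn -1' fills in all remaining replicas.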
[18:21] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:23] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[18:23] * agh (~agh@www.nowhere-else.org) has joined #ceph
[18:33] <sstan> what rule does a new pool use if the rule isn't specified?
[18:33] <janos> 0
[18:34] <janos> #0 in the crushmap, last i read anyway
[18:34] <paravoid> yes, 0 is my understanding as well
[18:35] <sstan> ruleset 0 , which is named "data" out of the box
[18:35] <janos> yes
[18:35] * richardshaw (~richardsh@katya.aggress.net) has left #ceph
[18:35] <sstan> thanks
[18:37] * slang (~slang@c-71-239-8-58.hsd1.il.comcast.net) Quit (Quit: slang)
[18:37] <sstan> hmm .. there is a "pool" bucket in the crush map... Has it anything to do with actual pools?
[18:38] <sstan> or a pool type should I say, named default.
[18:39] * danieagle (~Daniel@177.133.172.16) has joined #ceph
[18:39] <janos> sstan: that i do not know
[18:40] <paravoid> no
[18:40] <paravoid> they renamed it too
[18:40] <paravoid> it's not called pool anymore, to avoid confusion
[18:41] <paravoid> iirc
[18:41] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[18:41] * jlogan (~Thunderbi@2600:c00:3010:1:c1cc:53b1:28d6:4bc2) Quit (Quit: jlogan)
[18:42] <paravoid> 79b30543473423b9b36601e231e7b4b3c1bec134
[18:42] * jlogan (~Thunderbi@2600:c00:3010:1:c1cc:53b1:28d6:4bc2) has joined #ceph
[18:42] <paravoid> crush: change default type from 'pool' to 'root'
[18:42] <paravoid>
[18:42] <paravoid> The 'pool=default' in the default crush maps is confusing wrt rados pools.
[18:42] <paravoid> 'root' makes more sense given that we are talking about hierarchies/trees.
[18:42] <sstan> good stuff
[18:43] <sstan> I figured that out... but wasn't 100% sure, it bothered me : )
[18:43] <paravoid> yeah, it was confusing
[18:43] <paravoid> good thing they renamed it
[18:48] * NashTrash1 (~Adium@166.205.66.177) has joined #ceph
[18:48] * NashTrash1 (~Adium@166.205.66.177) Quit (Remote host closed the connection)
[18:49] * NashTrash1 (~Adium@166.205.66.177) has joined #ceph
[18:49] <NashTrash1> Good morning Ceph'ers
[18:50] <NashTrash1> I am running into a problem that I hope you all can help me with.
[18:50] <NashTrash1> I seem to have the same issue as http://tracker.newdream.net/issues/3597
[18:50] <NashTrash1> I can reproduce this issue
[18:53] <ircolle> What version of ceph and version of fuse are you using?
[18:57] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:59] * loicd1 (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[18:59] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:00] <NashTrash1> The latest of both 0.56
[19:01] <sstan> in the case mkcephfs is never used, does one have to write a crush map from the beginning?
[19:01] <ircolle> Can you add your specific information and the exact permission problem you're hitting to the ticket in redmine?
[19:01] <NashTrash1> ircolle: I don't have a valid user account in the redmine. I try to register but it never sends me the confirmation email.
[19:02] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[19:02] <ircolle> strange - sorry about that
[19:03] <NashTrash1> ircolle: No problem. I would like to get an account in good standing because we are starting to be very active Ceph users and have run into a couple of issues.
[19:04] <ircolle> Have you tried with an OpenID?
[19:05] <NashTrash1> Nope
[19:05] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[19:05] <NashTrash1> Don't really use OpenID
[19:06] <ircolle> OK - thought that might be work around if you're never getting the account registration e-mails
[19:06] <NashTrash1> Is there one person that oversees the Redmine that I could shoot an email to?
[19:08] * joao (~JL@89-181-159-175.net.novis.pt) has joined #ceph
[19:08] * ChanServ sets mode +o joao
[19:09] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[19:11] * slang (~slang@c-71-239-8-58.hsd1.il.comcast.net) has joined #ceph
[19:13] * jluis (~JL@89.181.148.232) Quit (Ping timeout: 480 seconds)
[19:25] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:27] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[19:33] * NashTrash1 (~Adium@166.205.66.177) has left #ceph
[19:42] * LeaChim (~LeaChim@b01bde88.bb.sky.com) has joined #ceph
[19:44] * dpippenger (~riven@216.103.134.250) has joined #ceph
[20:00] <paravoid> can I use injectargs to alter osd-recovery-max-active, osd-recovery-threads, osd-op-threads?
[20:01] <paravoid> I'm trying to see what settings work best and restarting OSDs all the time isn't really going to help me handle the load
[20:03] <paravoid> (suggestions for more knobs to optimize this are also welcome)
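For what it's worth, the runtime-injection syntax of this era looks like the following; whether a given option actually takes effect without a restart is worth verifying per option:

    ceph osd tell \* injectargs '--osd-recovery-max-active 1 --osd-recovery-threads 1'   # all OSDs
    ceph osd tell 12 injectargs '--osd-op-threads 4'                                     # just osd.12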
[20:04] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[20:05] * buck (~buck@bender.soe.ucsc.edu) Quit (Remote host closed the connection)
[20:05] * agh (~agh@www.nowhere-else.org) has joined #ceph
[20:06] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:06] * Ryan_Lane (~Adium@216.38.130.165) has joined #ceph
[20:06] <wido> sage: Feeling better again! Not a 100% yet, but that will just be a matter of time
[20:07] <Vjarjadian> people still having trouble with rebalancing taking days and days?
[20:08] <paravoid> yes
[20:08] <nhm> wido: were you ill?
[20:09] <wido> nhm: Jep. Went on vacation to Indonesia and when I got back I became sick. Caught some tropical virus and spent some time in the hospital
[20:09] <wido> High fever and lung issues :( But feeling better again!
[20:10] <nhm> wido: yikes! Glad you are recovering!
[20:10] <wido> Me 2 :)
[20:10] <wido> hospitals are boring
[20:10] <nhm> wido: and their wireless networks are often plagued with virus infected computers.
[20:10] <janos> that wireless-reflective glass really stinks
[20:11] <janos> when trying to get your own signal
[20:11] <janos> ;)
[20:11] <wido> well, the WiFi worked really well. Just couldn't focus my eyes to read anything on a screen
[20:12] * dmick (~dmick@2607:f298:a:607:a808:7ad6:87fc:bbed) has joined #ceph
[20:12] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[20:13] * buck (~buck@bender.soe.ucsc.edu) Quit (Remote host closed the connection)
[20:13] <wido> but something else. I just saw sage mention that URLs for the downloads are going to change
[20:13] <wido> anything I need to know for eu.ceph.com?
[20:14] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[20:21] <nhm> I didn't even know that eu.ceph.com existed. :)
[20:25] * dosaboy3 (~gizmo@host86-163-33-19.range86-163.btcentralplus.com) Quit (Quit: Leaving.)
[20:27] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) has joined #ceph
[20:42] <wido> nhm: Shame on you! ;)
[20:42] <wido> I set it up with rturk some time ago since packets going from CA to the EU are slow
[20:43] <rturk> Hi, Wido :)
[20:43] <wido> hi!
[20:43] <wido> nhm: http://ceph.com/docs/master/install/debian/
[20:43] <wido> for example
[20:44] <rturk> URLs for the downloads are likely to change, yes
[20:44] <rturk> but it's possible that they'll all stay in the same top-level directory
[20:44] <rturk> I'll have a chat with Sage when it all settles down and drop you a line if that's not the case
[20:45] <wido> rturk: Thanks! Just want to make sure stuff keeps working
[20:45] <rturk> yep. glad you mentioned it
[20:46] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:46] <rturk> We could probably do more on the website to make people aware of the mirror too
[20:46] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:46] <nhm> wido: lol, I'm lucky if I can keep track of when releases are going to get made. My head is constantly swimming with performance data, much to my wife's annoyance. :)
[20:47] <wido> nhm: Hehe, I get the feeling
[20:47] <nhm> Her: "What do you think about switching swimming lessons to 8:30am for the kids?" Me: "er what? Swimming lessons? 8:30am? what?"
[20:48] * rturk steps out to get lunch, back in 20 :)
[20:50] <darkfaded> nhm: thats really too hard for kids
[20:50] <darkfaded> ;)
[20:51] <nhm> darkfaded: aha, our kids are super kids and get up at like 5:30am.
[20:51] <darkfaded> eeeeeek
[20:51] <nhm> darkfaded: actually that's a lie, they are starting to sleep in to like 6:30am these days.
[20:52] <nhm> though 5:30am was the norm for quite a while.
[21:06] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:06] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:15] * jtangwk (~Adium@2001:770:10:500:1cae:c4c8:f588:7d89) Quit (Ping timeout: 480 seconds)
[21:16] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:16] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:18] * fzylogic (~fzylogic@69.170.166.146) has joined #ceph
[21:19] * jtangwk (~Adium@2001:770:10:500:1cdd:1a59:92ca:c0f3) has joined #ceph
[21:28] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[21:29] * danieagle (~Daniel@177.133.172.16) Quit (Ping timeout: 480 seconds)
[21:32] * jskinner_ (~jskinner@69.170.148.179) has joined #ceph
[21:34] * jskinner (~jskinner@69.170.148.179) Quit (Read error: Operation timed out)
[21:42] <noob2> hey guys i have some crush map questions
[21:42] <noob2> i'm a little confused about the choosing of hosts in the crush map
[21:42] * janos answers using Conan voice
[21:42] <janos> Clush your enemies
[21:43] <dmick> lol
[21:43] <noob2> in the default if i make a pool with 3 replicas does it choose 3 hosts if i have 6? or does it choose all 6 and stripe across that?
[21:44] <janos> noob2: unfortunately i don't know the answer, though i would assume it does hinge somewhat on the rules set - like firstn 0 type (host|rack|etc)
[21:44] <noob2> it looks like it chooses 3 in the default ruleset
[21:45] <noob2> yeah the firstn 0 type host is what i'm confused about
[21:45] <janos> yeah, i'd be interested in hearing clarification
[21:45] * janos goes back to lurker mode
[21:48] <noob2> i think maybe the crush pdf paper has the answers
[21:50] <sstan> a pool has many PGs. Each PG can designate any 3 (in this case) OSDs from 6 available osds. So my understanding is that it makes 3 replicas, but OSDs are chosen on a PG basis.
[21:51] <sstan> now idk how it works with striping
[21:51] <noob2> ok i think that was what we were leaning towards here as the understanding
[21:53] <dmick> noob2: presumably you've read http://ceph.com/docs/master/rados/operations/crush-map/
[21:53] <noob2> yeah
[21:53] <noob2> i'm working my way through that with my coworker here
[21:54] <noob2> if i say replica=3 and put an object into rados it should choose 3 OSDs from 3 hosts correct? presumably not on the same host
[21:54] <sstan> noob2, think about a 1000-node cluster. It would make no more sense to stripe across 6 than 1000
[21:54] <noob2> right
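One way to check empirically is to ask the cluster where it would place a given object; the 'acting' set at the end is the list of OSDs holding the replicas (object name invented, output paraphrased):

    $ ceph osd map rbd some-object
    osdmap e123 pool 'rbd' (2) object 'some-object' -> pg 2.5f3a... (2.3a) -> up [4,11,27] acting [4,11,27]

With the default host-level rule, those three OSDs should sit on three different hosts.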
[22:06] * agh (~agh@www.nowhere-else.org) Quit (Remote host closed the connection)
[22:07] * agh (~agh@www.nowhere-else.org) has joined #ceph
[22:08] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[22:08] * Cube (~Cube@12.248.40.138) has joined #ceph
[22:22] <sjust> sagewk: wip-journal-aio looks fine, left two comments on the main commit
[22:23] <dmick> noob2: found a couple bugs in the page but not related to what you're asking about
[22:23] <dmick> I need to be better on crushmaps
[22:24] <janos> is there anything like "ceph osd tree" that show how full the osd's are?
[22:27] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[22:31] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:37] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[22:37] <dmick> janos: not sure, looking around
[22:38] <janos> i was thinking it would be a nice glancing view
[22:39] <janos> like getting a health_warn about near-full osd's
[22:39] <joshd> the end of ceph pg dump has stats for each osd
[22:39] <joshd> also --format=json
[22:39] * janos looks
[22:39] <dmick> there it is. tnx joshd
[22:39] <dmick> one might imagine that was on a command under osd
[22:40] <janos> ahhh nice
[22:40] <janos> thanks joshd
[22:40] <janos> yeah, i was poking around in osd commands
[22:42] <janos> i may have to teach myself some python and make a "pretty" cli view
[22:42] <janos> i'm going on the assumption python has some decent json-parsing libs ;)
[22:43] * noob25 (~jimmywong@ext.cscinfo.com) has joined #ceph
[22:43] <noob25> problem with creating new folder in .55
[22:44] <noob25> over the radosgw... was working prior to upgrading from .48
[22:44] <noob25> Internal Server Error
[22:44] <noob25> it is also not writing to logs
[22:44] <noob25> i work with noob2
[22:45] <joshd> janos: it does. it looks like ceph pg dump --format=json has an extra line to stdout instead of stderr, which should be fixed
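A minimal sketch of the script janos has in mind, written against the 0.56-era JSON layout (the 'osd_stats' array and its 'kb'/'kb_used' fields are an assumption to verify against real output); it also skips anything before the opening brace, which sidesteps the stray stdout line joshd just mentioned:

    #!/usr/bin/env python
    # Summarize per-OSD fullness from 'ceph pg dump --format=json'.
    import json
    import subprocess

    out = subprocess.check_output(['ceph', 'pg', 'dump', '--format=json'])
    dump = json.loads(out[out.find('{'):])    # drop any leading non-JSON line

    for osd in sorted(dump.get('osd_stats', []), key=lambda o: o['osd']):
        pct = 100.0 * osd['kb_used'] / osd['kb'] if osd['kb'] else 0.0
        print 'osd.%-3d %5.1f%% full' % (osd['osd'], pct)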
[22:47] <noob25> how can i get the radosgw to increase the debug output? it seems to write nothing to the logs
[22:49] * NashTrash (~Adium@166.205.66.177) has joined #ceph
[22:50] <mikedawson> joshd: COW is working like a champ in 0.56 without the permissions hack! Thanks!
[22:50] <dmick> janos: yes you can parse JSON in Python :)
[22:50] <dmick> noob25: do you mean new bucket?
[22:50] <joshd> mikedawson: great! I updated the docs and fixed it for 0.56 over the weekend
[22:51] * ScOut3R (~ScOut3R@2E6BADF9.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[22:52] <joshd> mikedawson: you'll still need slighter changed permissions from what was there before to be able to unprotect snapshots: http://ceph.com/docs/master/rbd/rbd-openstack/#setup-ceph-client-authentication
[22:53] <mikedawson> joshd: thanks for the tip
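For anyone following along, the caps on that page have roughly this shape; the client and pool names are whatever your OpenStack deployment uses, and the linked page is the authoritative version:

    ceph auth get-or-create client.volumes \
        mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'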
[22:54] <noob25> dmick: yes...a new bucket
[22:54] <mikedawson> joshd: I've moved to specifying <uuid> in the secret.xml. It makes everything *much* easier for multi-node deployments. May want to add that to the docs
[22:54] <noob25> dmick: also when i upgraded from .48 to .56. All my previous files disappeared
[22:55] <dmick> well that certainly sounds like something's seriously wrong with the gateway, yes (assuming by 'files' you mean 'buckets' and/or 'objects' again)
[22:55] <joshd> mikedawson: yeah, that's probably a good idea. ubuntu packages and grizzly let you set the option for nova-compute instead too
[22:56] <noob25> dmick: yes... files meaning buckets and objects
[22:56] <mikedawson> joshd: is that work James put in?
[22:56] <joshd> mikedawson: yeah
[22:59] <dmick> noob25: first things first: is radosgw running?
[22:59] <noob25> yes
[22:59] <dmick> do you see any activity on it when you strace and try making a request?
[22:59] <noob25> dmick: yes, but only when we manually start it as root
[22:59] <dmick> how do you normally start it?
[23:01] <noob25> dmick: radosgw --rgw-socket-path=/tmp/radosgw.sock
[23:02] <dmick> that sounds manual; what I meant was, what are you using "manually start it as root" as an alternative to? Are you saying that if you start it with /etc/init.d/radosgw it doesn't start, or starts but doesn't appear to receive traffic from apache?
[23:02] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[23:02] <mikedawson> joshd: I just pick a uuid, set it in secret.xml, do the virsh stuff, then add it to /etc/cinder/cinder.conf.
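Sketched end to end, with an example uuid (any fixed uuid works; the point is that every compute node can share the same one):

    # secret.xml
    <secret ephemeral='no' private='no'>
      <uuid>457eb676-33da-42ec-9194-0f1dca2d25ef</uuid>
      <usage type='ceph'>
        <name>client.volumes secret</name>
      </usage>
    </secret>

    virsh secret-define --file secret.xml
    virsh secret-set-value --secret 457eb676-33da-42ec-9194-0f1dca2d25ef \
          --base64 $(ceph auth get-key client.volumes)

    # and in /etc/cinder/cinder.conf:
    rbd_secret_uuid=457eb676-33da-42ec-9194-0f1dca2d25ef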
[23:02] <noob25> correct. if i do /etc/init.d/radosgw start it immediately fails to start
[23:04] <noob25> i think our config file may be the problem why it's not starting.
[23:05] <noob25> in ceph.conf our client is called client.radosgw.1. init.d's script calls that and then exits but i'm not sure why yet
[23:06] <dmick> can you pastebin your ceph.conf?
[23:07] <noob25> fpaste.org/y3mr
[23:08] <dmick> one thing I see is 'raw socket path'; that should be rgw
[23:09] <noob25> you da man :D
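For reference, a radosgw client section of this era looks roughly like the following; the host and paths are placeholders, and 'rgw socket path' (not 'raw') is the key dmick spotted:

    [client.radosgw.1]
        host = gateway01
        keyring = /etc/ceph/keyring.radosgw.1
        rgw socket path = /tmp/radosgw.sock
        log file = /var/log/ceph/radosgw.log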
[23:09] <buck> what is the command to manually kick of test suits from ceph-qa-suite?
[23:09] <dmick> another set of eyes == magic
[23:09] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[23:09] <dmick> buck: schedule_suite.sh
[23:09] <buck> dmick: danke
[23:12] <dmick> noob25: did that fix it?
[23:13] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) Quit (Remote host closed the connection)
[23:14] <noob25> yes..that resolved the startup... so it is running and logging now... but still cant create a bucket
[23:14] <noob25> set_re_state_err err _no 95
[23:15] <dmick> that's in the log?
[23:15] <noob25> /s3test/:create_bucket:http status=500
[23:16] <noob25> yes ...in the log
[23:16] <dmick> is that an apache error.log?
[23:16] <noob25> no radosgw log
[23:16] <dmick> hm
[23:16] <dmick> $ git grep set_re_state_err
[23:16] <dmick> $
[23:17] <noob25> fpaste.org/ptgb
[23:17] <dmick> maybe fastcgi or apache are managing to log there somehow
[23:17] <dmick> ho
[23:17] <dmick> re*q*
[23:17] <dmick> yeah, that matters :)
[23:17] <noob25> fsorry
[23:17] <noob25> my bad
[23:20] <dmick> 95 is EOPNOTSUPP
[23:21] <dmick> that looks like it might be coming from the lower layers
[23:21] <dmick> anything interesting in the osd log?
[23:21] <dmick> this could also be an auth issue
[23:22] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[23:22] <dmick> so mon log could be relevant too
[23:23] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:30] <noob25> thanks for the help. we're going to head out for the day :)
[23:30] <noob25> we are going to rebuild from scratch
[23:30] <noob25> just the gateway
[23:31] <noob25> it may be easier and we didnt really have any data yet that we cared about
[23:31] * noob2 (~noob2@ext.cscinfo.com) has left #ceph
[23:31] * noob25 (~jimmywong@ext.cscinfo.com) has left #ceph
[23:32] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[23:33] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit ()
[23:43] <dmick> ok
[23:45] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[23:46] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:50] * kylehutson (~kylehutso@dhcp231-11.user.cis.ksu.edu) has joined #ceph
[23:53] * jefferai (~quassel@quassel.jefferai.org) Quit (Ping timeout: 480 seconds)
[23:55] * NashTrash (~Adium@166.205.66.177) Quit (Quit: Leaving.)
[23:57] * jskinner_ (~jskinner@69.170.148.179) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.