#ceph IRC Log

IRC Log for 2012-08-19

Timestamps are in GMT/BST.

[0:03] <mjblw> I have a question regarding: http://ceph.com/wiki/QEMU-RBD . How are we supposed to specify authentication parameters to this driver?
[0:13] <Tobarja> mjblw: look about half way down this page, see if it applies (just guessing): http://ceph.com/wiki/Rbd
[0:18] <Tobarja> or, i might try just jamming it in that format= line, no spaces, put another comma and tack it on... worth a shot...
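For reference, the QEMU RBD driver takes cephx parameters as colon-separated key=value pairs appended to the rbd: filename string, which is roughly what Tobarja is suggesting above. A hedged sketch (pool, image and user names are placeholders, and the 0.48-era driver may accept only a subset of these options):

    qemu -drive format=rbd,file=rbd:mypool/myimage:id=admin:conf=/etc/ceph/ceph.conf,if=virtio,cache=writeback
    # or pass the key inline; colons inside the base64 key must be escaped with '\:'
    qemu -drive format=rbd,file=rbd:mypool/myimage:id=admin:key=<base64 key>:auth_supported=cephx,if=virtio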
[0:33] * tnt_ (~tnt@89.40-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[0:42] * loicd (~loic@brln-4db8110a.pool.mediaWays.net) Quit (Quit: Leaving.)
[1:30] <mjblw> I haven't had any trouble using the kernel driver for rbd. It's the QEMU-KVM driver I'm having the trouble with.
[1:44] * steki-BLAH (~steki@212.200.243.134) Quit (Quit: Ja odoh a vi sta 'ocete...)
[2:39] * nhm (~nhm@184-97-251-210.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[2:58] * nhm (~nhm@184-97-251-210.mpls.qwest.net) has joined #ceph
[3:00] * mjblw1 (~Adium@ip68-0-137-233.tc.ph.cox.net) has joined #ceph
[3:01] * mjblw1 (~Adium@ip68-0-137-233.tc.ph.cox.net) Quit ()
[3:06] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) Quit (Ping timeout: 480 seconds)
[3:07] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) has joined #ceph
[3:09] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) Quit ()
[3:20] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:23] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:37] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) has joined #ceph
[3:49] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) Quit (Quit: Leaving.)
[5:01] * maelfius (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[5:03] * ryann (~chatzilla@216.81.130.180) has joined #ceph
[5:05] <ryann> Say one lost all of his monitors (and their data), yet still had solid osd's. Could he restore his cluster?
[5:07] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:33] * maelfius1 (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[5:53] <sage> ryann: yes, but it'd be a bit painful
[5:53] <sage> ryann: is this a hypothetical or leading question? :)
[5:54] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has left #ceph
[5:54] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[5:59] * maelfius1 (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[6:00] * maelfius (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[6:00] * maelfius (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) Quit ()
[6:05] <ryann> sage: No, I actually did dump all of my monitors...
[6:05] <ryann> All of my OSDs are intact. I do have data on them. Not entirely important, but here's a challenge for me to see if I can repair my damage. :-/
[6:06] <sage> hmm ok, it'll be a bit tedious, but it can be done.
[6:06] <ryann> I tend to think that restoring my keyring for the monitors is the leading goal, correct?
[6:07] <sage> that's one part of it
[6:07] <sage> the harder bit will be assimilating the osdmaps on the osds and getting those onto the monitor, with the right fsids.
[6:07] <sage> getting the pgmap right may be a challenge too.
[6:08] <sage> but that's probably not needed just to get your data
[6:08] <sage> first create a keyring file with all the daemon keys, and do ceph-mon --mkfs with that.
[6:08] <sage> then manually adjust the fsid in the generated osdmap, monmap files in mon data
[6:08] <sage> then gather all the osdmap and inc osdmaps and populate the osdmap and osdmap_full dirs
[6:08] <ryann> That's the step I'm on right now... I assimilated the mkcephfs script to get an understanding of the order.
[6:08] <sage> and adjust the paxos state files (last_committed, etc.)
[6:09] <ryann> ...the keyring step, sorry.
[6:09] <ryann> Yeah. I really screwed the pooch on this one.
[6:11] <ryann> sage: Thanks!
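A rough sketch of the sequence sage outlines above, with placeholder names, addresses and paths; the exact mon data layout varies between releases, so treat this as an outline rather than a recipe:

    # 1. build a keyring holding the mon. key plus every daemon/client key
    ceph-authtool --create-keyring /tmp/keyring --gen-key -n mon.
    ceph-authtool /tmp/keyring --import-keyring /etc/ceph/keyring.admin
    ceph-authtool /tmp/keyring --import-keyring /etc/ceph/keyring.osd.0   # repeat per osd
    # 2. recreate the monitor store from that keyring and a fresh monmap
    monmaptool --create --add a 192.168.0.1:6789 /tmp/monmap
    ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /tmp/keyring
    # 3. then, by hand: fix the fsid in the generated osdmap/monmap files, copy the
    #    osdmap and incremental osdmaps from the osds into the mon's osdmap and
    #    osdmap_full dirs, and adjust the paxos state files (last_committed etc.)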
[6:29] <ryann> sage: ok, my keyring is ready. normally I would recover the osdmap via the ceph tool, however since the cluster is down, is there a back door way to access that? in the osd, perhaps?
[6:33] <ryann> found it: osd.?/current/meta/osdmap## - matches the rados design I'm using.
[6:52] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) Quit (Read error: Operation timed out)
[6:56] <ryann> sage: my cluster is online with one monitor. the number of PGs matches what it was before (3080), all stuck unclean. rados lspools returns my custom names.
[7:03] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[7:15] * nhm (~nhm@184-97-251-210.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[7:39] * bshah_ (~bshah@sproxy2.fna.fujitsu.com) has joined #ceph
[7:41] * bshah (~bshah@sproxy2.fna.fujitsu.com) Quit (Remote host closed the connection)
[7:46] * bshah (~bshah@sproxy2.fna.fujitsu.com) has joined #ceph
[7:46] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[7:48] * bshah_ (~bshah@sproxy2.fna.fujitsu.com) Quit (Ping timeout: 480 seconds)
[7:55] * bshah (~bshah@sproxy2.fna.fujitsu.com) Quit (Remote host closed the connection)
[8:04] * bshah (~bshah@sproxy2.fna.fujitsu.com) has joined #ceph
[8:21] * bchrisman1 (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[8:26] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:32] * bshah_ (~bshah@sproxy2.fna.fujitsu.com) has joined #ceph
[8:34] * bshah (~bshah@sproxy2.fna.fujitsu.com) Quit (Ping timeout: 480 seconds)
[8:40] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[8:45] * bchrisman1 (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:49] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[8:51] * tnt (~tnt@89.40-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[9:12] * bshah_ (~bshah@sproxy2.fna.fujitsu.com) Quit (Ping timeout: 480 seconds)
[9:31] * loicd (~loic@brln-4db8110a.pool.mediaWays.net) has joined #ceph
[9:31] * bshah (~bshah@sproxy2.fna.fujitsu.com) has joined #ceph
[9:33] <Tobarja> ryann: you're documenting all of this, right? at least take notes for the eventuality that it'll happen to someone else. you can email it to me if you want, tobarja@outlook.com
[9:48] * bshah (~bshah@sproxy2.fna.fujitsu.com) Quit (Remote host closed the connection)
[9:58] * bshah (~bshah@sproxy2.fna.fujitsu.com) has joined #ceph
[10:33] * bshah_ (~bshah@sproxy2.fna.fujitsu.com) has joined #ceph
[10:35] * bshah (~bshah@sproxy2.fna.fujitsu.com) Quit (Ping timeout: 480 seconds)
[10:46] * bshah_ (~bshah@sproxy2.fna.fujitsu.com) Quit (Remote host closed the connection)
[10:52] <tnt> Mmm, somehow my mon keeps taking a huge amount of memory and ends up being killed by the OOM killer ...
[11:12] * bshah (~bshah@sproxy2.fna.fujitsu.com) has joined #ceph
[11:24] <tnt> Oh FFS ! Wireshark modified the GMR dissector AGAIN without consulting me ... that does it, I'll just maintain my own branch and won't bother with upstream again.
[11:31] <tnt> and ... -ECHAN sorry about that :p
[13:29] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[13:36] * lofejndif (~lsqavnbok@09GAAHOHC.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:51] * BManojlovic (~steki@212.200.243.134) has joined #ceph
[13:56] * Meths_ is now known as Meths
[14:24] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Quit: Leaving...)
[14:31] * The_Bishop (~bishop@e179007082.adsl.alicedsl.de) has joined #ceph
[14:34] * nhm (~nhm@184-97-251-210.mpls.qwest.net) has joined #ceph
[15:01] * BManojlovic (~steki@212.200.243.134) Quit (Ping timeout: 480 seconds)
[16:08] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) has joined #ceph
[16:08] * nhm (~nhm@184-97-251-210.mpls.qwest.net) Quit (Read error: Operation timed out)
[16:09] * nhm (~nhm@184-97-251-210.mpls.qwest.net) has joined #ceph
[16:20] * lofejndif (~lsqavnbok@09GAAHOHC.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[16:44] * The_Bishop_ (~bishop@e179002196.adsl.alicedsl.de) has joined #ceph
[16:49] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[16:52] * The_Bishop (~bishop@e179007082.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[17:27] * loicd1 (~loic@brln-4d0ce39f.pool.mediaWays.net) has joined #ceph
[17:32] * loicd (~loic@brln-4db8110a.pool.mediaWays.net) Quit (Ping timeout: 480 seconds)
[17:50] * The_Bishop_ (~bishop@e179002196.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[18:06] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) Quit (Quit: Leaving.)
[18:06] * nhm (~nhm@184-97-251-210.mpls.qwest.net) Quit (Read error: Operation timed out)
[18:11] * lofejndif (~lsqavnbok@19NAABX34.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:26] * lofejndif (~lsqavnbok@19NAABX34.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[18:26] * lofejndif (~lsqavnbok@82VAAFOPW.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:45] * darkfader (~floh@188.40.175.2) Quit (Read error: Operation timed out)
[18:48] * The_Bishop (~bishop@2a01:198:2ee:0:edbc:1f8b:86e7:b914) has joined #ceph
[19:45] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[20:03] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[20:06] <lightspeed> hello there, I'm hoping someone can help me out...
[20:06] <lightspeed> I had a perfectly happy ceph "cluster" with just a single host (running 1 OSD, 1 MON, 1 MDS and replication level set to 1... hence the quotes around "cluster")
[20:06] <lightspeed> today I added 2 new hosts, each running 1 OSD and 1 MON, and then increased the replication level to 2
[20:06] <lightspeed> but I noticed that the new OSDs remained empty - it didn't seem to be making any attempt to replicate/move any of the existing data to them
[20:06] <lightspeed> I later restarted the daemons on the first host, after which it looked like a bunch of things sprang into life...
[20:06] <lightspeed> some data then made its way onto the other OSDs, but not all of it
[20:06] <lightspeed> but now I'm seeing this as the health of the cluster (and it's not recovering any further by itself):
[20:06] <lightspeed> HEALTH_WARN 84 pgs peering; 81 pgs stale; 84 pgs stuck inactive; 84 pgs stuck stale; 111 pgs stuck unclean
[20:06] <lightspeed> any advice on what I should do?
[20:06] <lightspeed> btw I'm running 0.48, and also note that no data has been written to the cluster since I added the extra hosts (there aren't any clients running at the moment)
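For context, the replication level lightspeed mentions is set per pool, typically along these lines (pool names assumed to be the 0.48 defaults; run it for each pool in use):

    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    ceph osd pool set rbd size 2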
[20:13] * mdrnstm (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[20:14] * mdrnstm (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[20:14] * maelfius (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[20:15] * maelfius (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) Quit ()
[20:16] * maelfius (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[20:16] * maelfius is now known as mdrnstm
[20:18] * mdrnstm (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) has left #ceph
[20:19] * mdrnstm (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[20:20] * mdrnstm (~Adium@pool-71-160-33-115.lsanca.fios.verizon.net) Quit ()
[20:20] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[20:22] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) has joined #ceph
[20:22] * BManojlovic (~steki@212.200.243.134) has joined #ceph
[20:48] * BManojlovic (~steki@212.200.243.134) Quit (Quit: Ja odoh a vi sta 'ocete...)
[20:58] * BManojlovic (~steki@212.200.243.134) has joined #ceph
[20:59] * lofejndif (~lsqavnbok@82VAAFOPW.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[21:02] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[21:04] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[21:04] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[21:16] * mjblw (~Adium@ip68-0-137-233.tc.ph.cox.net) Quit (Quit: Leaving.)
[21:27] * deepsa (~deepsa@117.203.16.182) Quit (Ping timeout: 480 seconds)
[21:37] * darkfader (~floh@188.40.175.2) has joined #ceph
[21:40] * bitsweat (~bitsweat@ip68-106-243-245.ph.ph.cox.net) Quit (Quit: Leaving...)
[21:44] * fghaas (~florian@91-119-204-193.dynamic.xdsl-line.inode.at) has joined #ceph
[21:44] * fghaas (~florian@91-119-204-193.dynamic.xdsl-line.inode.at) has left #ceph
[21:53] * darkfader (~floh@188.40.175.2) Quit (Ping timeout: 480 seconds)
[21:53] * darkfaded (~floh@188.40.175.2) has joined #ceph
[21:55] * danieagle (~Daniel@177.43.213.15) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[21:56] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[21:59] <NaioN> lightspeed: you should update the crushmap
[22:00] <NaioN> after adding osds you also need to update the crushmap, otherwise no data will be placed on the new osds
[22:00] <NaioN> lightspeed: http://ceph.com/docs/master/ops/manage/grow/osd/
[22:01] <NaioN> and: http://ceph.com/docs/master/ops/manage/crush/#adjusting-crush
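The crushmap change NaioN refers to usually amounts to adding the new osds to the crush hierarchy with a non-zero weight, roughly like this (ids, weights and host names are placeholders, and the exact syntax in the 0.48 docs linked above differs slightly from later releases):

    ceph osd crush set 1 osd.1 1.0 pool=default host=host-b
    ceph osd crush set 2 osd.2 1.0 pool=default host=host-c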
[22:01] * The_Bishop (~bishop@2a01:198:2ee:0:edbc:1f8b:86e7:b914) Quit (Ping timeout: 480 seconds)
[22:02] <lightspeed> yeah I believe I managed to do that... at least it changed the output from "ceph osd tree" to something that looks sensible
[22:03] <lightspeed> ie all the OSDs have a weight of 1
[22:03] <lightspeed> (whereas the new ones had a weight of 0 initially, I think)
[22:07] * fghaas (~florian@91-119-204-193.dynamic.xdsl-line.inode.at) has joined #ceph
[22:07] * fghaas (~florian@91-119-204-193.dynamic.xdsl-line.inode.at) has left #ceph
[22:08] <NaioN> are those pgs still stuck?
[22:10] <lightspeed> yes they are
[22:10] * The_Bishop (~bishop@2a01:198:2ee:0:2c2e:766f:d684:56d2) has joined #ceph
[22:11] <NaioN> and restarting the osds doesn't help?
[22:12] <NaioN> with "ceph health detail" you'll see which pgs are stuck
[22:12] <NaioN> and with "ceph pg ID query" you'll see why the pg is stuck
[22:13] <NaioN> at the bottom of the output it states something for the recovery state
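A minimal example of the two commands NaioN mentions (the pg id is a placeholder taken from the health output):

    ceph health detail
    ceph pg 0.1a query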
[22:13] <lightspeed> yeah I've restarted all the OSDs (one at a time) since I initially mentioned the problems, and it didn't help, in fact even more PGs ended up in one of the problem states
[22:14] <lightspeed> I tried that ceph pg ... query on one of them earlier, and the recovery state said "Reset"
[22:14] * darkfaded (~floh@188.40.175.2) Quit (Ping timeout: 480 seconds)
[22:14] * Meths_ (rift@2.27.72.157) has joined #ceph
[22:14] <lightspeed> but I didn't find any reference in the docs to what that might mean
[22:14] <NaioN> no further information?
[22:14] <lightspeed> I'll check some of the others as well...
[22:15] <NaioN> lightspeed: have you looked here: http://ceph.com/docs/master/ops/manage/failures/
[22:15] <lightspeed> no, that was all there was in the Recovery state section (plus a timestamp of some kind)
[22:15] <NaioN> and ceph pg dump_stuck unclean
[22:17] * lofejndif (~lsqavnbok@28IAAGY8M.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:17] * nhm (~nhm@184-97-251-210.mpls.qwest.net) has joined #ceph
[22:17] <lightspeed> all the PGs listed by "ceph pg dump_stuck unclean" are in state "peering"
[22:19] <NaioN> what does ceph -s say?
[22:19] <NaioN> are all osds up and in?
[22:20] * Meths (rift@2.25.214.99) Quit (Ping timeout: 480 seconds)
[22:20] <lightspeed> yes, 3 osds up and in
[22:20] <lightspeed> shall I paste the rest of the output?
[22:21] <NaioN> yeah
[22:21] <lightspeed> health HEALTH_WARN 111 pgs peering; 81 pgs stale; 111 pgs stuck inactive; 81 pgs stuck stale; 111 pgs stuck unclean
[22:21] <lightspeed> monmap e4: 3 mons at {audi=172.29.203.1:6789/0,bentley=172.29.203.2:6789/0,chrysler=172.29.203.3:6789/0}, election epoch 12, quorum 0,1,2 audi,bentley,chrysler
[22:21] <lightspeed> osdmap e245: 3 osds: 3 up, 3 in
[22:21] <lightspeed> pgmap v30650: 192 pgs: 81 stale+active+clean, 111 peering; 9250 MB data, 15709 MB used, 554 GB / 600 GB avail
[22:21] <lightspeed> mdsmap e180: 1/1/1 up {0=audi=up:replay}
[22:23] <NaioN> can all osds reach each other?
[22:23] <NaioN> did you check the logs of the different osds?
[22:24] <lightspeed> yes they can reach each other... however does the "peering" state indicate connectivity problems of some kind? because perhaps my network setup is slightly unusual
[22:25] <lightspeed> each of the 3 hosts has a direct point-to-point link to the other two (ie a separate little IP subnet on each link)
[22:26] <lightspeed> their monitor daemons listen on a /32 loopback address (ie which is reachable over any interface)
[22:26] <NaioN> yeah that's a problem
[22:26] <NaioN> all osds have to contact each other
[22:27] <lightspeed> I actually hoped I could make them bind to those loopback IPs for outbound connections, but didn't find a way of doing that
[22:27] <NaioN> that's because when they have to replicate a pg to another osd, they peer with each other
[22:27] <lightspeed> well with this setup they can all contact any of each others' addresses
[22:27] <lightspeed> however the source IP they use will be different depending on which other OSD they're talking to
[22:28] <lightspeed> is that an issue?
[22:28] <NaioN> under the osd section you can use the "cluster network" and "public network" clauses
[22:29] <NaioN> with the cluster network you can define the network over which they can reach the other osds and monitors
[22:29] <NaioN> and over the public network they can reach the clients
[22:30] <NaioN> but as far as I know the cluster network has to be a single network
[22:30] <NaioN> so your setup wouldn't work
[22:31] <NaioN> but I'm not one of the developers you could ask them for sure
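For reference, those clauses each name a single subnet and normally look something like this in ceph.conf (under [global], or the [osd] section as NaioN says; subnets invented for illustration):

    [global]
        public network  = 192.168.1.0/24
        cluster network = 10.0.0.0/24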
[22:31] <lightspeed> yeah I read about the "cluster/public" network stuff, and concluded that indeed it limits the cluster network to a single subnet
[22:31] <lightspeed> so I thought using loopback addresses would avoid that limitation altogether
[22:31] <NaioN> hmmm you're more of a networking guy :)
[22:32] <NaioN> it's not like bgp or ospf or something :)
[22:32] <lightspeed> yeah, well spotted :)
[22:33] <lightspeed> I bet it'd work if I could force them to use the loopback as the source IP though
[22:33] <NaioN> I don't know if you have that control
[22:33] <lightspeed> oh by the way, I didn't see anything interesting in the OSD logs... do you think there might be more evidence pointing to this as the problem if I up the logging level?
[22:34] <NaioN> yeps
[22:34] <NaioN> but I'm pretty sure the problem is the network layout
[22:34] <NaioN> just try it with a plain subnet and ask the developers if it's possible and how
[22:35] <NaioN> then you're sure it's a network layout problem
[22:36] <NaioN> btw why would you want to originate from the loopback?
[22:37] <lightspeed> well that way they're presenting the same source IP to both the other nodes (rather than a different one to each), so perhaps less likely to cause confusion
[22:38] <lightspeed> and using a loopback rather than arbitrarily choosing one of the physical interface IPs means it's not affected by any interface going down
[22:39] <lightspeed> although I'm not 100% sure whether linux would refuse to source connections from an IP associated with a downed interface or not... some OSes certainly would
[22:40] <NaioN> hmmm the loopback won't go down
[22:40] <lightspeed> indeed
[22:40] <NaioN> so you could build a robust setup with multiple subnets
[22:40] <NaioN> you want to avoid bonding/trunking :)
[22:41] <NaioN> So you want to build the redundancy on layer 3 :)
[22:41] <lightspeed> yeah pretty much
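A sketch of the loopback-plus-point-to-point scheme being described, with made-up addresses and IPoIB interface names; reachability of the /32s is then handled by static routes (or an IGP) over the two direct links:

    ip addr add 10.0.0.1/32 dev lo          # this host's stable address
    ip route add 10.0.0.2/32 dev ib0        # peer reached over the first p2p link
    ip route add 10.0.0.3/32 dev ib1        # peer reached over the second p2p link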
[22:41] <lightspeed> and the reason for avoiding bonding is due to the physical layer...
[22:42] <NaioN> what does the physical layer have to do with bonding?
[22:42] <lightspeed> I'm doing poor-man's 10G ethernet (ie second-hand SDR dual-port infiniband adapters from Ebay using IPoIB)
[22:42] <NaioN> aha :)
[22:42] <NaioN> we too
[22:42] <NaioN> well I've 20G :)
[22:42] <lightspeed> haha
[22:42] <NaioN> and some 10G
[22:43] <lightspeed> of course I also don't have an IB switch, so it's point-to-point links
[22:43] <NaioN> hmmm 10G IB switches are cheap
[22:43] <lightspeed> and with 3 hosts I have a ring... each host connects to each other one
[22:44] <lightspeed> really? I guess I didn't put much effort into looking for one
[22:44] <lightspeed> and it depends on the definition of cheap
[22:44] <NaioN> I've seen 24p 10G IB switches for as low as 70-80 dollars
[22:44] <lightspeed> oh ok
[22:44] <NaioN> cisco 7000p
[22:44] <lightspeed> yeah that's affordable
[22:45] <NaioN> I couldn't ebay them, because they only delivered to the USA
[22:45] <NaioN> so I bought one for about $400
[22:45] <NaioN> but that's still dirt cheap for 24p 10G
[22:46] <lightspeed> yeah
[22:46] <NaioN> search for the cisco 7000p or Topspin (cisco bought Topspin a while ago)
[22:46] <lightspeed> although the IB switch is then a single point of failure, which is avoided with the ring setup I've been trying to go for
[22:47] <NaioN> lightspeed: http://www.ebay.com/itm/Topspin-120-24-Port-Switch-CISCO-/360479408792?pt=COMP_EN_Hubs&hash=item53ee3f4298
[22:47] <NaioN> well buy two switches :)
[22:48] <lightspeed> thanks
[22:48] <lightspeed> yeah true
[22:48] <lightspeed> I have to be careful though, as this is a little lab of mine at home, and I have limited space in my rack :)
[22:48] <NaioN> hehe ok
[22:49] <NaioN> well I've a cluster in production now with IB
[22:49] <NaioN> works like a charm
[22:49] <NaioN> and IB under linux is really really good
[22:49] <NaioN> well have to go now. getting late here...
[22:49] <NaioN> bye
[22:49] <lightspeed> ok bye
[22:50] <lightspeed> thanks for your advice
[23:03] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[23:07] * darkfader (~floh@188.40.175.2) has joined #ceph
[23:25] <lightspeed> NaioN: maybe I shouldn't have been so quick to dismiss the "cluster addr" option... I added that for all the OSDs, and now half the problematic PGs are healthy again (all the ones that had been "peering")
[23:26] <lightspeed> I found some documentation suggesting that the purpose of "cluster addr" is in fact specifying a chosen source address, so I gave it a go
[23:26] <lightspeed> now I'm left with: health HEALTH_WARN 6 pgs degraded; 81 pgs stale; 81 pgs stuck stale; 6 pgs stuck unclean; recovery 79/3736 degraded (2.115%)
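The per-daemon setting lightspeed ended up using would look roughly like this in ceph.conf (section names, hostnames and addresses are illustrative, loosely based on the monmap shown earlier):

    [osd.0]
        host = audi
        cluster addr = 172.29.203.1
    [osd.1]
        host = bentley
        cluster addr = 172.29.203.2
    [osd.2]
        host = chrysler
        cluster addr = 172.29.203.3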
[23:55] * pentabular (~sean@adsl-71-141-229-185.dsl.snfc21.pacbell.net) has joined #ceph
[23:58] <pentabular> Ken Franklin rooolz

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.