#ceph IRC Log

IRC Log for 2013-03-28

Timestamps are in GMT/BST.

[0:02] * rustam (~rustam@5e0f5b1e.bb.sky.com) Quit (Remote host closed the connection)
[0:04] <pioto> hi, so... let's say i already built a ceph cluster, and now i wanna manage it with chef... is there an easy way to tell chef about the existing one, without having it totally break everything?
[0:05] <pioto> fortunately, i'm still dealing with small test clusters, so if i have to rebuild from scratch, it's not a deal breaker. but, if there's a way to save effort, i'd be all for it
[0:05] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Ping timeout: 480 seconds)
[0:06] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Remote host closed the connection)
[0:10] * jlogan (~Thunderbi@72.5.59.176) has joined #ceph
[0:15] * maxiz (~pfliu@222.128.145.154) Quit (Ping timeout: 480 seconds)
[0:19] <alram> pioto: chef uses ceph-disk-prepare for OSDs, and re-running ceph-disk-prepare on an osd device should be safe
[0:20] <alram> and for mon it should be too
[0:21] <alram> although there are some assumptions in the cookbooks. Everything uses default paths and a mon name is its hostname
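For reference, a rough sketch of what the cookbook's OSD handling boils down to; /dev/sdb and /dev/sdc are hypothetical devices and exact flags vary between releases:

    # prepare a data disk (GPT-partitions it and writes the OSD metadata)
    ceph-disk-prepare /dev/sdb
    # or with a separate journal device
    ceph-disk-prepare /dev/sdb /dev/sdc
    # the matching activate step registers and starts the OSD
    ceph-disk-activate /dev/sdb1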
[0:22] * houkouonchi-work (~linux@12.248.40.138) Quit (Read error: Connection reset by peer)
[0:22] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[0:25] * jmlowe1 (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has left #ceph
[0:38] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[0:38] * loicd (~loic@209.117.47.248) has joined #ceph
[0:38] <pioto> alram: ok, thanks for the info. i'll export my test data "just in case" first
[0:40] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:41] <alram> pioto: how did you deploy in the first time?
[0:46] <pioto> well, one of them was basically 'manual', following the online docs, the other is currently just a single node running it all, built with mkcephfs, i think
[0:48] <pioto> something i haven't really figured out yet with chef is... when does it decide to "do stuff" on a node? like, as soon as you tweak the settings for it? or, whenever it decides to 'check in' next?
[0:50] * ivotron (~ivotron@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) Quit (Quit: ivotron)
[0:57] <alram> whenever it decides to check in (every 30 min by default if i recall), or you can run chef-client manually.
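Both options sketched with the usual chef-client flags; the interval/splay values are just examples:

    # apply pending changes to this node immediately
    sudo chef-client
    # or run it as a daemon with an explicit check-in interval and splay, in seconds
    sudo chef-client --daemonize --interval 1800 --splay 300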
[1:00] * tnt (~tnt@109.130.89.104) Quit (Ping timeout: 480 seconds)
[1:00] * xmltok (~xmltok@y216182.ppp.asahi-net.or.jp) has joined #ceph
[1:01] <alram> pioto: if you're on ubuntu, the recipes use upstart. I'm not sure mkcephfs creates the file for it. So you need to check that also.
[1:03] <alram> and if you're not on Ubuntu, the recipe will only work starting with Cuttlefish
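The "file" in question is the init-system marker in each daemon's data directory; a hedged sketch for an mkcephfs-built node on Ubuntu, assuming default paths:

    # tell the ceph upstart jobs to manage this OSD, then start it
    sudo touch /var/lib/ceph/osd/ceph-0/upstart
    sudo start ceph-osd id=0
    # monitors work the same way
    sudo touch /var/lib/ceph/mon/ceph-$(hostname)/upstart
    sudo start ceph-mon id=$(hostname)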
[1:10] * loicd1 (~loic@209.117.47.248) has joined #ceph
[1:10] * loicd (~loic@209.117.47.248) Quit (Read error: No route to host)
[1:11] * loicd1 (~loic@209.117.47.248) Quit ()
[1:16] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[1:19] * xmltok (~xmltok@y216182.ppp.asahi-net.or.jp) Quit (Quit: Leaving...)
[1:19] * buck (~buck@bender.soe.ucsc.edu) Quit (Quit: Leaving.)
[1:22] * maxiz (~pfliu@222.128.134.167) has joined #ceph
[1:23] <pioto> alram: hm. ok
[1:24] <pioto> an unrelated question... if I had OSDs using differnet sided disk drives... will i have problems? mostly thinking in terms of a gradual buildout kind of scenario
[1:25] <pioto> where nodes added later on are using larger drives because they're cheaper, or whatever
[1:25] <dmick> presumably you mean 'sized'
[1:25] <pioto> will the capacity be limited by the smallest drive?
[1:25] <pioto> yes, sorry
[1:25] <pioto> different sized disk drives
[1:25] <dmick> and you can handle that by tuning the OSD weights
[1:26] <pioto> ok
[1:26] <pioto> so, say, you'd give the bigger one more weight?
[1:26] <pioto> and it'll get a large proportion of the pgs?
[1:26] <pioto> *larger
[1:28] <iggy> i don't know if it affects the # of PGs, or if the PGs just hold more, but the same end result
[1:28] <pioto> ok. thanks
[1:29] <pioto> and i think the weight can be a float, so i guess a cheap answer is just to make the weight, say, the number of TB of the data drive for the OSD?
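That convention (CRUSH weight roughly equal to the drive's size in TB) is common; a minimal sketch with hypothetical OSD ids:

    # 1 TB drive
    ceph osd crush reweight osd.0 1.0
    # 3 TB drive added later gets proportionally more data
    ceph osd crush reweight osd.7 3.0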
[1:30] <lurbs> The downside is that (larger) disks with a higher weight will get more reads/writes hitting them though, potentially causing performance issues, isn't it?
[1:30] <iggy> that's pretty common, yeah
[1:30] <iggy> that was for pioto
[1:31] <pioto> lurbs: hm. maybe so
[1:32] <pioto> i assume it's similarish to the tradeoff that i seem to see between using fewer nodes with more drives in each, or more nodes with fewer drives in each
[1:32] <lurbs> Not necessarily going to cause major problems, but it's something to be aware of.
[1:32] <pioto> fewer nodes w/ more drives is probably cheaper to get going (fewer motherboards/cpus to purchase), but probably won't perform as well, or be as resilient to failure
[1:32] <pioto> (right?)
[1:32] <iggy> re: more IOs.... it's possible... in practice, i doubt it makes a lot of difference unless it's horribly unbalanced
[1:33] <pioto> like, some 3TB drives mixed in with 80GB ones? :)
[1:33] <dmick> See http://ceph.com/docs/master/rados/operations/crush-map/#crush-map-bucket-hierarchy for a discussion
[1:34] <dmick> (of how weights work and recommendations for assignment, not for how more IOPs leads to other issues)
[1:34] <pioto> k, thanks. i had skimmed that before
[1:34] <pioto> i guess that, basically, it chooses things as "far apart" as possible?
[1:34] <pioto> like, it'll prefer things in different racks/datacenters over two hosts in the same rack?
[1:35] <pioto> and it'd only choose 2 OSDs on the same host if that's the only host?
[1:35] <iggy> right
[1:35] <pioto> great. thanks.
[1:36] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[1:39] <pioto> so, right now, the docs imply that chef won't update the crush map for new nodes... is that something planned for cuttlefish?
[1:41] <pioto> or, just not desirable to have chef take care of that?
[1:45] <pioto> and, totally unrelated question again... is there any plan for complete encryption of the network traffic? as far as i can judge from the docs on cephx, it shouldn't be possible for someone to request some arbitrary data, or manipulate anything, but they could still sniff all your active traffic (and, say, snoop on your /etc/shadow or whatever from your rbd)
[1:47] <dmick> pioto: it chooses based on your crushmap. That's the way you'd probably want to structure it, yes, but you can write a single-failure-domain crushmap if you like
[1:48] <dmick> don't know about chef and crushmap updates. I do know we've very-recently added the ability to add new things to the crushmap via ceph CLI invocations, so that could be used in future mods to the chef recipes
[1:49] <pioto> yeah, i was able to just 'set' things from the command line, w/o having to decompile/recompile the whole crush map. is that what you mean?
[1:49] <dmick> there are more coming
[1:49] <dmick> osd crush add-bucket
[1:49] <dmick> osd crush link
[1:49] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[1:50] <dmick> osd crush unlink
[1:50] <pioto> neat
[1:50] <dmick> http://www.inktank.com/about-inktank/roadmap/ may be interesting
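A hedged sketch of the CLI-only workflow dmick is describing; the bucket and host names are made up, and the exact subcommands and argument order depend on the release:

    # create a rack bucket and hang it off the default root
    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    # place a new OSD under a host in that rack with weight 1.0, no crushmap decompile needed
    ceph osd crush set osd.12 1.0 root=default rack=rack1 host=node3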
[1:51] <dmick> I haven't heard much talk about network encryption, so I don't know about that one
[1:54] <pioto> yeah, my googling mostly pointed to on-disk encryption
[1:54] <pioto> and i guess maybe also using lmcrypt or something in the rbd itself, which would kinda have the same effect
[1:55] <pioto> but that doesn't help for cephfs, i think
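One way to get that effect for an RBD volume on the client side is plain dm-crypt/LUKS on top of the mapped device; a rough sketch with a made-up image name (it protects that volume's contents on the wire and on disk, but does nothing for CephFS traffic):

    rbd map rbd/secret-vol            # shows up as /dev/rbd0 (and under /dev/rbd/rbd/)
    cryptsetup luksFormat /dev/rbd0
    cryptsetup luksOpen /dev/rbd0 secret-vol
    mkfs.xfs /dev/mapper/secret-vol
    mount /dev/mapper/secret-vol /mnt/secret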
[1:55] * SvenPHX1 (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[1:56] * mjblw1 (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[1:56] <lurbs> I've been meaning to do some tests just using transport mode IPsec, for the cluster itself and librbd clients.
[1:57] <pioto> hm, so, doing it outside of ceph itself?
[1:57] <pioto> interesting.
[1:58] <lurbs> If Ceph doesn't yet do, I'm going to have to.
[1:58] <tchmnkyz> ok guys
[1:58] <lurbs> s/do/do it/
[1:58] <tchmnkyz> i am looking for a way to do DR inside of ceph
[1:58] <tchmnkyz> is there a easy way to replicate one cluster to another cluster?
[1:59] <dmick> certainly not easy.
[1:59] <tchmnkyz> figured that was the case
[2:00] * maxiz (~pfliu@222.128.134.167) Quit (Remote host closed the connection)
[2:00] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[2:00] <tchmnkyz> is there anything documentation wise on it?
[2:00] * mjblw (~mbaysek@wsip-174-79-34-244.ph.ph.cox.net) Quit (Ping timeout: 480 seconds)
[2:01] <iggy> tchmnkyz: i think there was some noise recently about x-datacenter clusters
[2:01] <iggy> maybe on the blog
[2:01] * LeaChim (~LeaChim@b0fae63d.bb.sky.com) Quit (Read error: Connection reset by peer)
[2:01] <tchmnkyz> nice
[2:01] <tchmnkyz> ok
[2:03] <dmick> there are plans for rgw georeplication, because that's easier
[2:03] <tchmnkyz> k
[2:03] <dmick> and we know georeplication is interesting
[2:04] <tchmnkyz> yeah that is kinda what i want to set up
[2:04] <tchmnkyz> i have a DC in downtown chi town
[2:04] <tchmnkyz> and one in the subs
[2:04] <dmick> as I understand it, the sorta fundamental idea is "have enough OSDs in different failure domains so you can handle many failures". A natural desire is "different data centers", but Ceph is pretty latency-dependent right now.
[2:05] <dmick> if you have a fast link, you can just have the cluster span the DCs, but not many people have links that fast
[2:05] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[2:05] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[2:05] <dmick> search ceph-devel archives for discussion with more meat
[2:05] <tchmnkyz> i have a 10gbps fiber link between the two but it has other traffic on there
[2:06] <tchmnkyz> so it is not 10gbps dedicated to ceph
[2:06] <dmick> but the sorta rule of thumb is "write latency is more-or-less the worst replica latency", so all write performance depends on the remoteness of the most-remote replica
[2:06] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[2:06] <iggy> depending on the workload....
[2:06] * esammy (~esamuels@host-2-103-103-175.as13285.net) Quit (Quit: esammy)
[2:07] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[2:07] <iggy> if it's just to the burbs...
[2:08] <iggy> and bandwidth wise, most people are still gigabit'ing it up
[2:08] <tchmnkyz> oh i kinda have DDR infiniband over ip running for ceph
[2:08] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Read error: Connection reset by peer)
[2:08] <tchmnkyz> and i see close to 10gbps spikes
[2:09] <iggy> well, i'm sure it'll use every bit it can... i guess it depends on your expectations
[2:10] <iggy> what's the rtt look like on that?
[2:10] <tchmnkyz> rtt?
[2:10] <iggy> round trip
[2:11] * wer_ (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[2:11] <tchmnkyz> ok on my ib?
[2:11] <tchmnkyz> .001ms avg
[2:11] <iggy> seems like you should be fine latency wise
[2:11] <tchmnkyz> i have a Voltair ISR-9288
[2:12] <tchmnkyz> so far everything has been flawless with the IPoIB
[2:12] <tchmnkyz> i was worried about its overhead
[2:12] <tchmnkyz> but realistically it had been good
[2:13] <dmick> well that's not x-dc, right?
[2:13] <tchmnkyz> nop
[2:13] <dmick> (maybe iggy was thinking of that)
[2:13] * darkfader (~floh@88.79.251.60) Quit (Read error: Operation timed out)
[2:14] <tchmnkyz> maybe
[2:14] <tchmnkyz> the latency on the 10g link x-dc is like 1 - 3 ms depending on load
[2:15] <iggy> oh, yeah, i was asking about x-dc
[2:16] <iggy> still... 1-3ms... not terrible
[2:17] * jlogan (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[2:17] <dmick> http://thread.gmane.org/gmane.comp.file-systems.ceph.devel/728 is one of many threads
[2:18] <tchmnkyz> thnx
[2:19] * rustam (~rustam@5e0f5b1e.bb.sky.com) has joined #ceph
[2:22] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[2:25] * darkfader (~floh@88.79.251.60) has joined #ceph
[2:27] <houkouonchi-work> tchmnkyz: how much distance is that?
[2:28] <houkouonchi-work> tchmnkyz: or is it 1-3ms when its being saturated?
[2:30] * rektide (~rektide@deneb.eldergods.com) Quit (Ping timeout: 480 seconds)
[2:35] * xiaoxi (~xiaoxiche@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[2:36] * chftosf (uid7988@id-7988.hillingdon.irccloud.com) Quit ()
[2:40] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:41] * rturk is now known as rturk-away
[2:42] * dpippenger (~riven@216.103.134.250) Quit (Remote host closed the connection)
[2:45] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[2:45] <tchmnkyz> distance is ~36 miles
[2:45] <tchmnkyz> maybe 45 cable miles
[2:46] <tchmnkyz> at close to full saturation it can see like 10ms
[2:49] * rustam (~rustam@5e0f5b1e.bb.sky.com) Quit (Remote host closed the connection)
[2:51] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Read error: Connection reset by peer)
[2:51] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[2:55] * rustam (~rustam@5e0f5b1e.bb.sky.com) has joined #ceph
[2:55] * rustam (~rustam@5e0f5b1e.bb.sky.com) Quit (Remote host closed the connection)
[2:56] * maxiz (~pfliu@202.108.130.138) has joined #ceph
[3:07] <houkouonchi-work> tchmnkyz: ah ok.. yeah that latency is pretty decent for 36 miles
[3:07] <houkouonchi-work> i have gotten as good as 2.4ms from home to downtown L.A. which is 40 miles
[3:08] <houkouonchi-work> i think 1.6ms is the best I have ever seen for that distance
[3:16] <joao> lucky you; I'm still on 200ms rtt to the sepia lab
[3:16] <joao> apparently I should be able to blame it all on the ddos that is compromising the whole internet
[3:17] <houkouonchi-work> heh, it's like 15ms to irvine cause it's going through san jose or something; i get much better routing to downtown
[3:17] <houkouonchi-work> joao: where are you located at?
[3:17] <joao> Lisbon
[3:17] <houkouonchi-work> I used to live in Japan and that was 120ms. I know laggy ssh connections get old really quick with latency over 100ms
[3:17] <joao> there's a whole continent and an ocean between me and the sepia lab fwiw :p
[3:18] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[3:18] <houkouonchi-work> i would think you could do it in more like 150ms-ish for that distance
[3:18] <joao> I used to
[3:18] <joao> been bad for the last week or so
[3:19] <houkouonchi-work> what do you ping to 208.97.141.21 ? (that is in down-town)
[3:20] <joao> usually, whichever plana I'm logged into
[3:20] <joao> just now I was pinging plana26
[3:20] <joao> ah, get 170ms to that one
[3:21] <houkouonchi-work> ah not that much better then
[3:21] <dmick> stunned that I get <3ms to sepia
[3:22] <joao> dmick, from aon?
[3:22] <dmick> yep
[3:23] <houkouonchi-work> i get like 3.5ms to sepia from brea
[3:23] <houkouonchi-work> the brea office
[3:43] * xiaoxi (~xiaoxiche@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[3:48] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[4:00] <nz_monkey_> We are getting 0.3ms on our 10km links
[4:00] <nz_monkey_> using CWDM
[4:13] <Qten> mmm cwdm :)
[4:13] * ivotron (~ivotron@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) has joined #ceph
[4:18] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[4:26] * KevinPerks1 (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[4:26] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[4:26] <nz_monkey_> Yep, if only we could get 8 wavelengths of 40gbit down a single pair
[4:33] * xiaoxi (~xiaoxiche@134.134.137.73) has joined #ceph
[4:44] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90 [Firefox 19.0.2/20130307023931])
[4:48] * ivotron (~ivotron@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) Quit (Remote host closed the connection)
[5:12] * ez (~Android@184.78.103.213) has joined #ceph
[5:19] * ez (~Android@184.78.103.213) Quit (Quit: -a-)
[5:20] * ezconsulting (~ezcon@184.78.103.213) has joined #ceph
[5:27] * KevinPerks1 (~Adium@cpe-066-026-239-136.triad.res.rr.com) Quit (Quit: Leaving.)
[5:32] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) has joined #ceph
[5:50] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[5:51] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[6:07] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) Quit (Remote host closed the connection)
[6:15] * Rocky_ (~r.nap@188.205.52.204) has joined #ceph
[6:21] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) has joined #ceph
[6:23] * b1tbkt (~Peekaboo@68-184-193-142.dhcp.stls.mo.charter.com) Quit (Ping timeout: 480 seconds)
[6:34] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) Quit (Quit: leaving)
[6:42] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) has joined #ceph
[7:04] * esammy (~esamuels@host-2-103-103-175.as13285.net) has joined #ceph
[7:06] * l0nk (~alex@87-231-111-125.rev.numericable.fr) has joined #ceph
[7:20] * sleinen1 (~Adium@2001:620:0:25:542d:c86c:1a50:fe93) has joined #ceph
[7:25] * sleinen1 (~Adium@2001:620:0:25:542d:c86c:1a50:fe93) Quit (Quit: Leaving.)
[7:25] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[7:27] * tziOm (~bjornar@ip-202-1-149-91.dialup.ice.net) Quit (Ping timeout: 480 seconds)
[7:57] * scheuk (~scheuk@204.246.67.78) Quit (Server closed connection)
[7:57] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[7:59] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Quit: Leaving.)
[8:06] * phantomcircuit (~phantomci@covertinferno.org) Quit (Server closed connection)
[8:06] * sileht (~sileht@sileht.net) Quit (Server closed connection)
[8:07] * sileht (~sileht@sileht.net) has joined #ceph
[8:07] * tnt (~tnt@109.130.89.104) has joined #ceph
[8:08] * phantomcircuit (~phantomci@covertinferno.org) has joined #ceph
[8:10] * tziOm (~bjornar@ip-166-151-230-46.dialup.ice.net) has joined #ceph
[8:12] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[8:13] * Anticimex (anticimex@netforce.csbnet.se) Quit (Server closed connection)
[8:13] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[8:14] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Server closed connection)
[8:14] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:14] * raso (~raso@deb-multimedia.org) Quit (Server closed connection)
[8:14] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[8:15] * raso (~raso@deb-multimedia.org) has joined #ceph
[8:15] * l0nk (~alex@87-231-111-125.rev.numericable.fr) Quit (Quit: Leaving.)
[8:18] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) Quit (Server closed connection)
[8:18] * ninkotech (~duplo@ip-89-102-24-167.net.upcbroadband.cz) has joined #ceph
[8:19] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) Quit (Server closed connection)
[8:19] * houkouonchi-home (~linux@pool-71-177-96-171.lsanca.fios.verizon.net) has joined #ceph
[8:20] * gregorg_taf (~Greg@78.155.152.6) Quit (Server closed connection)
[8:20] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[8:24] * nhm (~nh@184-97-180-204.mpls.qwest.net) Quit (Server closed connection)
[8:24] * nhm (~nh@184-97-180-204.mpls.qwest.net) has joined #ceph
[8:24] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) has joined #ceph
[8:26] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (Server closed connection)
[8:26] * tchmnkyz (~jeremy@ip23.67-202-99.static.steadfastdns.net) has joined #ceph
[8:26] * tchmnkyz is now known as Guest481
[8:27] * NuxRo (~nux@85.13.211.140) Quit (Server closed connection)
[8:27] * NuxRo (~nux@85.13.211.140) has joined #ceph
[8:28] * markl (~mark@tpsit.com) Quit (Server closed connection)
[8:28] * markl (~mark@tpsit.com) has joined #ceph
[8:30] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (Server closed connection)
[8:30] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[8:34] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[8:34] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[8:35] * stass (stas@ssh.deglitch.com) Quit (Server closed connection)
[8:35] * stass (stas@ssh.deglitch.com) has joined #ceph
[8:35] * l0nk (~alex@83.167.43.235) has joined #ceph
[8:42] * sleinen (~Adium@130.59.94.118) has joined #ceph
[8:43] * __jt__ (~james@rhyolite.bx.mathcs.emory.edu) Quit (Server closed connection)
[8:43] * __jt__ (~james@rhyolite.bx.mathcs.emory.edu) has joined #ceph
[8:44] * sleinen1 (~Adium@2001:620:0:26:fd1d:841a:ba15:ff0a) has joined #ceph
[8:49] * KindTwo (~KindOne@h152.211.89.75.dynamic.ip.windstream.net) has joined #ceph
[8:50] * sleinen (~Adium@130.59.94.118) Quit (Ping timeout: 480 seconds)
[8:51] * loicd (~loic@90.84.144.224) has joined #ceph
[8:53] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:53] * KindTwo is now known as KindOne
[8:57] * maxiz (~pfliu@202.108.130.138) Quit (Server closed connection)
[8:57] * sleinen1 (~Adium@2001:620:0:26:fd1d:841a:ba15:ff0a) Quit (Quit: Leaving.)
[8:58] * maxiz (~pfliu@202.108.130.138) has joined #ceph
[9:00] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) has joined #ceph
[9:01] * Rocky_ (~r.nap@188.205.52.204) Quit (Server closed connection)
[9:01] * Rocky_ (~r.nap@188.205.52.204) has joined #ceph
[9:01] * loicd (~loic@90.84.144.224) Quit (Quit: Leaving.)
[9:02] * esammy (~esamuels@host-2-103-103-175.as13285.net) Quit (Server closed connection)
[9:02] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: Copywight 2007 Elmer Fudd. All wights wesewved.)
[9:09] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Read error: Connection reset by peer)
[9:09] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[9:13] * sleinen (~Adium@130.59.94.118) has joined #ceph
[9:14] * sleinen1 (~Adium@2001:620:0:26:e8c0:b0a7:6da8:3565) has joined #ceph
[9:16] * tnt (~tnt@109.130.89.104) Quit (Read error: Operation timed out)
[9:19] * sleinen1 (~Adium@2001:620:0:26:e8c0:b0a7:6da8:3565) Quit ()
[9:20] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Read error: Connection reset by peer)
[9:20] * loicd (~loic@90.84.144.178) has joined #ceph
[9:20] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[9:21] * sleinen (~Adium@130.59.94.118) Quit (Ping timeout: 480 seconds)
[9:22] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[9:28] * dosaboy (~gizmo@faun.canonical.com) has joined #ceph
[9:29] * loicd (~loic@90.84.144.178) Quit (Ping timeout: 480 seconds)
[9:37] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Quit: Leaving)
[9:37] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:38] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[9:39] * jtang1 (~jtang@79.97.135.214) Quit (Quit: Leaving.)
[9:44] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[9:46] * baz_ (~baz@2001:610:110:6e1:986f:a364:188a:e0ce) has left #ceph
[9:47] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[9:49] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:50] * dosaboy (~gizmo@faun.canonical.com) Quit (Read error: No route to host)
[9:52] * dosaboy (~gizmo@faun.canonical.com) has joined #ceph
[9:54] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit ()
[9:56] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[10:11] * LeaChim (~LeaChim@b0fae63d.bb.sky.com) has joined #ceph
[10:26] * loicd (~loic@magenta.dachary.org) has joined #ceph
[10:34] * barryo (~borourke@cumberdale.ph.ed.ac.uk) Quit (Ping timeout: 480 seconds)
[10:37] * barryo (~borourke@cumberdale.ph.ed.ac.uk) has joined #ceph
[10:37] * leseb (~Adium@83.167.43.235) has joined #ceph
[10:42] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:48] * sleinen (~Adium@130.59.94.118) has joined #ceph
[10:49] * sleinen1 (~Adium@2001:620:0:26:f4b9:a078:9bb0:7526) has joined #ceph
[10:55] * bithin (~bithin@115.249.1.61) has joined #ceph
[10:56] * sleinen (~Adium@130.59.94.118) Quit (Ping timeout: 480 seconds)
[11:01] * maxiz (~pfliu@202.108.130.138) Quit (Ping timeout: 480 seconds)
[11:08] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[11:09] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[11:15] * slang (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) Quit (Remote host closed the connection)
[11:16] * slang1 (~slang@207-229-177-80.c3-0.drb-ubr1.chi-drb.il.cable.rcn.com) has joined #ceph
[11:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[11:21] <lxo> writes to my 0.59-running server have been incredibly slow. there seems to be a lot of dir splits going on, causing a lot of metadata traffic, and each split takes long enough that, with a few of them in the queue, the whole server hits a suicide timeout and dies
[11:22] <absynth> yikes
[11:22] <lxo> I'm sure this is not intended, but I'm wondering if it makes sense to report this as a bug
[11:22] <lxo> presumably some threshold was changed in the default config, causing most dirs to require splitting
[11:22] <lxo> so I hope this is transient, and if I endure it for long enough it will fix itself
[11:23] <lxo> but you guys might want to rethink the splitting logic so that it can be done in background instead of holding up individual writes
[11:25] <slang1> lxo: what's your workload?
[11:25] <lxo> at the moment, none whatsoever. it's been like this for a couple of days, getting as little as 2GB of new data in files of various sizes onto the 64 data PGs
[11:26] <slang1> lxo: how many active mds servers do you have?
[11:26] <lxo> a ceph.ko mount is still trying to flush the data written since a reboot yesterday
[11:26] <lxo> one
[11:27] <barryo> If I have 3 servers in separate buildings each running a number of OSD's and a MON, what would happen if a fire destroyed two of the buildings? replication is set to 3. Am I right in thinking that the remaining server would freeze until I removed the others from the CRUSH map?
[11:27] <slang1> lxo: how are you seeing dir splits?
[11:27] <lxo> when this one osd restarts, it gets a few hundred messages from this client, and it makes ridiculously slow progress through them. then it dies. then I restart it, and it takes forever to process the journal. rinse and repeat ;-)
[11:28] <lxo> strace
[11:28] <lxo> then I attached gdb and got a stack trace, that showed there was a write in the stack trace
[11:29] <lxo> I got as far as re-creating one of the osds, figuring there was something wrong with it because of this slow down. btrfsck actually showed some corruption on that one, so I wanted to re-create it anyway. but it's taking too long to replicate data back to it because one of the other osds keeps dying
[11:29] <slang1> lxo: ok -- doesn't sound like that has anything to do with metadata
[11:29] <lxo> (the one that's dying is the one I'm speaking of now)
[11:30] <lxo> no, metadata operations are fine. they're all on other disks, and I just went through a bunch of cp -lR, and those were all right
[11:30] <joao> slang1, insomnia? isn't it like 5am over there?
[11:30] * slang1 nods
[11:31] <joao> eh
[11:31] <lxo> slang1, oh, sorry; it's btrfs metadata that I was talking about
[11:31] <slang1> lxo: I figured it must be the local fs
[11:31] <lxo> tons of links and unlinks that have to be committed to disk
[11:32] <lxo> that makes for lots of commits when snapshots are being constantly taken
[11:32] <slang1> barryo: everything stops until you bring back at least one monitor in that case
[11:33] <lxo> I've seen some syncs issued by ceph take several tens of seconds
[11:34] <slang1> barryo: so yeah - at least one monitor, and create two new osds
[11:34] <lxo> so it's not hard to see why suicide timeouts are hit. what I don't quite get is why is it the op tp that hits it; why the (slow) progress doesn't prevent that, and why new messages seem to pop up as delayed in the queue
[11:35] <lxo> as in, at one moment, the logs display 6 out of 6 delayed operations on that osd. a few minutes later, it shows another 6 out of 6 delayed operations, all of them with timestamps that differ from the ones that appeared before by a few milliseconds
[11:36] <lxo> it's like those other messages were being held up somewhere that the delayed message processing can't see them, but where they already got a (reception?) timestamp
[11:36] <barryo> slang1: thanks, that's what I suspected.
[11:37] <barryo> slang1: would it not be possible to drop replication to 1 and edit the crush map to reflect the fact two of the servers no longer exist?
[11:37] <lxo> this makes it very hard to guess how long it will take before I can restart the server that had uploaded data to the cluster
[11:37] <slang1> barryo: you would need to bring up another monitor first so they can do quorum
[11:38] <slang1> barryo: but then you should be able to do that, yeah
[11:40] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[11:43] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[11:45] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:51] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) Quit (Quit: Leaving.)
[11:57] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[12:01] * diegows (~diegows@190.190.2.126) has joined #ceph
[12:05] * BillK (~BillK@58-7-223-82.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[12:10] * madkiss (~madkiss@089144192173.atnat0001.highway.a1.net) has joined #ceph
[12:12] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[12:14] * leseb (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[12:14] * BillK (~BillK@124-169-229-198.dyn.iinet.net.au) has joined #ceph
[12:14] * lxndrp (~papaspyro@212-29-41-179.ip.dokom21.de) has joined #ceph
[12:14] * lxndrp (~papaspyro@212-29-41-179.ip.dokom21.de) has left #ceph
[12:14] * lxndrp (~papaspyro@212-29-41-179.ip.dokom21.de) has joined #ceph
[12:14] <lxndrp> Hey folks.
[12:15] * leseb (~Adium@83.167.43.235) has joined #ceph
[12:15] * leseb (~Adium@83.167.43.235) Quit ()
[12:16] <lxndrp> I have a question re: ceph.conf.
[12:16] <absynth> just shoot, maybe someone can help
[12:16] <lxndrp> All docs seem to say that I have to roll identical ceph.conf files on all hosts.
[12:17] <lxndrp> Actually, I want to do the rollout with chef; it would make more sense to only have the [osd.x] sections on the host where the osd is actually running...
[12:17] <lxndrp> And the mon.y configs on the mons only.
[12:19] <lxndrp> It would make my chef recipes much more complicated if the configuration files had to be the same on all hosts...
[12:33] <janos> lxndrp: i thought i read long ago that the conf does not need to include everything else
[12:33] <janos> lxndrp: though i would think each osd would need the mon addresses
[12:33] <lxndrp> janos: yeah, I guessed so...
[12:34] <janos> lxndrp: i'd wait for a more expert assessment than mine, but i could have sworn i've read that before
[12:34] <lxndrp> :)
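For what it's worth, a minimal per-host layout along those lines; hostnames and addresses are placeholders, and the key point is that every host still needs the common [global]/[mon.*] pieces so daemons and clients can find the monitors:

    [global]
        auth supported = cephx

    [mon.a]
        host = mon1
        mon addr = 10.0.0.1:6789
    [mon.b]
        host = mon2
        mon addr = 10.0.0.2:6789
    [mon.c]
        host = mon3
        mon addr = 10.0.0.3:6789

    # only the [osd.N] sections for OSDs actually hosted on this machine need to be present here
    [osd.0]
        host = osdhost1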
[12:38] * madkiss (~madkiss@089144192173.atnat0001.highway.a1.net) Quit (Ping timeout: 480 seconds)
[12:41] * rzerres (~ralf@pd95b5253.dip0.t-ipconnect.de) has joined #ceph
[12:43] * madkiss (~madkiss@089144192173.atnat0001.highway.a1.net) has joined #ceph
[12:43] * rzerres1 (~ralf@pd95b5253.dip0.t-ipconnect.de) has joined #ceph
[12:45] * leseb (~Adium@83.167.43.235) has joined #ceph
[12:46] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[12:49] * rzerres (~ralf@pd95b5253.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[12:52] * leseb (~Adium@83.167.43.235) Quit (Read error: Operation timed out)
[12:56] <lxndrp> The other thing I was wondering about: the OSD ids need to be integers, starting at zero, no gaps, right?
[13:18] * madkiss (~madkiss@089144192173.atnat0001.highway.a1.net) Quit (Ping timeout: 480 seconds)
[13:19] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[13:19] * leseb (~Adium@83.167.43.235) has joined #ceph
[13:20] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[13:24] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[13:24] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Read error: Connection reset by peer)
[13:26] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Remote host closed the connection)
[13:28] * diegows (~diegows@190.190.2.126) Quit (Read error: Operation timed out)
[13:37] * timmclaughlin (~timmclaug@69.170.148.179) has joined #ceph
[13:46] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[13:54] <lxo> oh, I didn't mention before (indeed, I hadn't confirmed yet), but the osd slowdowns started with the upgrade to 0.59
[14:15] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:18] * fghaas (~florian@91-119-65-118.dynamic.xdsl-line.inode.at) has joined #ceph
[14:18] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) Quit (Ping timeout: 480 seconds)
[14:24] * Lennie`away (~leen@lennie-1-pt.tunnel.tserv11.ams1.ipv6.he.net) has joined #ceph
[14:25] * BManojlovic (~steki@91.195.39.5) Quit (Read error: Operation timed out)
[14:26] * BManojlovic (~steki@197-166-222-85.adsl.verat.net) has joined #ceph
[14:37] * sivanov (~sivanov@gw2.maxtelecom.bg) has joined #ceph
[14:38] <sivanov> scuttlemonkey I managed to fix the problem from yesterday
[14:38] <sivanov> The problem is max devices in osdmap
[14:39] <sivanov> when i built the cluster my max osd in conf was 122
[14:39] * maxiz (~pfliu@111.194.213.46) has joined #ceph
[14:40] <sivanov> Because of this, all osds with higher ids are out of range
[14:40] <sivanov> now my max osds are 1000
[14:40] <sivanov> and everything works fine
[14:42] <scuttlemonkey> sivanov: ah hah
[14:43] <scuttlemonkey> so it's a < not <=
[14:43] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[14:43] <sivanov> i don't understand you
[14:43] <scuttlemonkey> no worries
[14:43] <scuttlemonkey> glad you got it sorted
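For anyone who hits the same thing: the limit lives in the osdmap and can be raised at runtime; a quick sketch:

    # raise the osdmap's max_osd so higher-numbered OSD ids become valid
    ceph osd setmaxosd 1000
    ceph osd dump | grep max_osd    # verify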
[14:44] <xiaoxi> hi, I have created a lot of pools (also a lot of pgs), and ceph went weird; then I deleted all these pools, but the OSDs are still flapping
[14:44] <xiaoxi> I would say it seems the monitor side is flapping; I can see a lot of "wrongly mark me down" and "xxx boot" in the log
[14:46] * sivanov (~sivanov@gw2.maxtelecom.bg) Quit (Quit: Leaving)
[14:46] * steki (~steki@91.195.39.5) has joined #ceph
[14:48] <scuttlemonkey> xiaoxi: you can try changing the timeout settings, or manually setting 'nodown' while you troubleshoot the network
[14:48] <scuttlemonkey> http://ceph.com/docs/master/rados/operations/troubleshooting-osd/#flapping-osds
[14:48] <scuttlemonkey> and: http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/
[14:48] <scuttlemonkey> may help
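Concretely, the sort of knobs those pages cover, sketched with illustrative values only:

    # keep the mons from marking OSDs down while you investigate, then clear the flag
    ceph osd set nodown
    ceph osd unset nodown
    # ceph.conf examples (tune to taste):
    #   osd heartbeat grace = 35
    #   mon osd min down reporters = 3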
[14:51] * BManojlovic (~steki@197-166-222-85.adsl.verat.net) Quit (Ping timeout: 480 seconds)
[14:51] <xiaoxi> thanks, but I think my network is good; the problem may be caused by my having wrongly created 48 pools, each with 8192 PGs....
[15:00] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[15:04] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[15:07] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[15:07] <scuttlemonkey> that's quite a lot
[15:08] <scuttlemonkey> but that shouldn't be having an adverse effect on your mons...
[15:14] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[15:15] <nhm> scuttlemonkey: I've actually started seeing some bad behavior up around 100k PGs.
[15:15] <nhm> xiaoxi: when I did something like that I was seeing mon problems too.
[15:15] <nhm> xiaoxi: High CPU usage, and timeouts
[15:18] * jtang1 (~jtang@2001:770:10:500:68d2:6952:9c82:7c99) has joined #ceph
[15:18] * ezconsu34 (~ezcon@166.137.98.10) has joined #ceph
[15:20] <scuttlemonkey> nhm: that have to do w/ pg size and running out of space on a small cluster w/ huge pg numbers?
[15:20] * jtang2 (~jtang@2001:770:10:500:b503:79ec:5d2:3f2d) has joined #ceph
[15:24] <nhm> scuttlemonkey: not running out of space, but having say 100k PGs on a cluster with 24 OSDs and 1 mon.
[15:25] <scuttlemonkey> ah
[15:25] <nhm> scuttlemonkey: the mon started using like 70-80% CPU and was failing to respond to requests in a timely manner.
[15:25] <scuttlemonkey> just not enough beef to go around
[15:25] <nhm> More mons may have helped.
[15:26] * ezconsulting (~ezcon@184.78.103.213) Quit (Ping timeout: 480 seconds)
[15:26] * jtang1 (~jtang@2001:770:10:500:68d2:6952:9c82:7c99) Quit (Ping timeout: 480 seconds)
[15:26] <xiaoxi> nhm: but how can I recover from such a situation?
[15:26] <xiaoxi> restarting the cluster doesn't seem to help
[15:37] * doubleg (~doubleg@69.167.130.11) Quit (Quit: Lost terminal)
[15:41] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[15:41] * diegows (~diegows@190.190.2.126) has joined #ceph
[15:42] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[15:46] * xiaoxi (~xiaoxiche@134.134.137.73) Quit (Remote host closed the connection)
[15:50] * xiaoxi (~xiaoxiche@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[15:52] * terje (~Adium@c-50-134-173-158.hsd1.co.comcast.net) has joined #ceph
[15:52] * timmclaughlin (~timmclaug@69.170.148.179) Quit (Remote host closed the connection)
[15:53] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[16:00] * timmclaughlin (~timmclaug@69.170.148.179) has joined #ceph
[16:02] * drokita (~drokita@199.255.228.128) has joined #ceph
[16:04] * aliguori (~anthony@32.97.110.51) has joined #ceph
[16:05] * jlogan (~Thunderbi@2600:c00:3010:1:fc52:a0e0:824c:3a1d) has joined #ceph
[16:06] * joshd1 (~joshd@2602:306:c5db:310:75ab:a73f:744:36ef) has joined #ceph
[16:10] * ezconsulting (~ezcon@166.137.98.141) has joined #ceph
[16:12] * ezconsu82 (~ezcon@ip-64-134-160-191.public.wayport.net) has joined #ceph
[16:12] * ezconsulting (~ezcon@166.137.98.141) Quit (Read error: Connection reset by peer)
[16:16] * ezconsu34 (~ezcon@166.137.98.10) Quit (Ping timeout: 480 seconds)
[16:26] * ezconsulting (~ezcon@166.137.98.141) has joined #ceph
[16:29] * dosaboy (~gizmo@faun.canonical.com) Quit (Ping timeout: 480 seconds)
[16:31] * portante|afk is now known as portante
[16:33] * ezconsu82 (~ezcon@ip-64-134-160-191.public.wayport.net) Quit (Ping timeout: 480 seconds)
[16:35] * ezconsu29 (~ezcon@208.73.128.34) has joined #ceph
[16:35] * ezconsulting (~ezcon@166.137.98.141) Quit (Read error: Connection reset by peer)
[16:37] <sstan> still having monitors fail at runtime :S http://pastebin.com/j0BnfrHR
[16:39] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[16:41] * dosaboy (~gizmo@faun.canonical.com) has joined #ceph
[16:44] * sleinen1 (~Adium@2001:620:0:26:f4b9:a078:9bb0:7526) Quit (Quit: Leaving.)
[16:44] * sleinen (~Adium@130.59.94.118) has joined #ceph
[16:47] * lxndrp_ (~papaspyro@85-22-138-202.ip.dokom21.de) has joined #ceph
[16:47] * sleinen1 (~Adium@2001:620:0:26:d595:81dd:bb4c:601d) has joined #ceph
[16:48] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[16:51] * lxndrp (~papaspyro@212-29-41-179.ip.dokom21.de) Quit (Ping timeout: 480 seconds)
[16:51] * lxndrp_ is now known as lxndrp
[16:54] * sleinen (~Adium@130.59.94.118) Quit (Ping timeout: 480 seconds)
[16:58] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[17:05] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) Quit (Quit: gerard_dethier)
[17:07] <joao> sstan, mind filing a ticket for that, attaching logs and output from the following commands?
[17:07] <joao> ceph_test_store_tool /var/lib/ceph/mon/FOO/store.db list auth
[17:08] <joao> ceph_test_store_tool /var/lib/ceph/mon/FOO/store.db get auth first_committed
[17:08] <joao> ceph_test_store_tool /var/lib/ceph/mon/FOO/store.db get auth last_committed
[17:08] * BillK (~BillK@124-169-229-198.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[17:08] <joao> ceph_test_store_tool /var/lib/ceph/mon/FOO/store.db get auth full_latest
[17:09] <joao> FOO being your mon id, or rather /var/lib/ceph/mon/FOO being your monitor's data directory
[17:10] <matt_> joao, have you had any love finding the cause of the monitor crashing on osd starts? No pressure, just checking : )
[17:10] * alram (~alram@38.122.20.226) has joined #ceph
[17:10] <joao> matt_, not yet, and it will have to wait until next week
[17:10] <joao> thus why I'm asking sstan to file the bug; I'm stockpiling on bugs for the next week :)
[17:15] * xiaoxi (~xiaoxiche@jfdmzpr02-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[17:15] * ezconsu29 (~ezcon@208.73.128.34) Quit (Read error: Connection reset by peer)
[17:15] * ezconsulting (~ezcon@166.137.98.141) has joined #ceph
[17:15] * ezconsu35 (~ezcon@208.73.128.34) has joined #ceph
[17:15] * ezconsulting (~ezcon@166.137.98.141) Quit (Read error: Connection reset by peer)
[17:16] * sleinen1 (~Adium@2001:620:0:26:d595:81dd:bb4c:601d) Quit (Quit: Leaving.)
[17:17] <matt_> fair enough :)
[17:22] * bithin (~bithin@115.249.1.61) Quit (Quit: Leaving)
[17:29] <pioto> hm. just doing an 'umount /mnt/somethingcephfs' caused a kernel panic, it seems... is it still basically "don't use this in production?"
[17:30] <pioto> i should be on the latest version of bobtail on both the cluster and the client, fwiw
[17:30] <pioto> and 'whatever the latest ubuntu 12.04 kernel is'
[17:32] <Kioob`Taff> mmm "ceph status" report near 15MB/s of write activity, but on one (of five) OSD I see 254MB/s of write activity
[17:33] <Kioob`Taff> what's happening ?
[17:34] <Kioob`Taff> mmm it's because of snapshots, each write request has to duplicate the 4MB block, right ?
[17:45] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[17:50] <Kioob`Taff> so, I suppose I need to change the «--order» when I create an image
[17:50] <Kioob`Taff> which of course I can't change
[17:52] <gregaf1> Kioob`Taff: no, it only COWs the changed portion
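For the record, an image's object size is fixed at creation time via --order (object size = 2^order bytes, default 22 = 4 MiB); a quick sketch with a made-up image name:

    # 8 MiB objects instead of the default 4 MiB; --size is in MB
    rbd create --size 102400 --order 23 rbd/bigobj-image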
[17:57] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:57] * tziOm (~bjornar@ip-166-151-230-46.dialup.ice.net) Quit (Read error: Connection reset by peer)
[17:57] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Ping timeout: 480 seconds)
[17:58] <Kioob`Taff> gregaf1 : ok, thanks !
[17:58] <Kioob`Taff> so... any idea why there are so many writes on the OSDs ?
[18:00] * terje (~Adium@c-50-134-173-158.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[18:00] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[18:00] <sjustlaptop> Kioob`Taff: you will have to explain how you measured the activity
[18:01] <Kioob`Taff> iostat 10 -kx
[18:01] <sjustlaptop> which devices?
[18:01] <Kioob`Taff> on all OSD
[18:02] <sjustlaptop> how many ceph-osd daemons are running on the machine?
[18:02] <Kioob`Taff> 8
[18:02] <sjustlaptop> you only saw these writes on one machine?
[18:03] <Kioob`Taff> no
[18:03] <Kioob`Taff> but I measured only on one
[18:03] <sjustlaptop> ok, and all osd devices saw IO?
[18:03] <Kioob`Taff> nearly all
[18:03] <sjustlaptop> filesystem?
[18:03] <Kioob`Taff> xfs
[18:04] <sjustlaptop> can you identify the process doing the writes?
[18:05] <Kioob`Taff> on OSD ?
[18:05] <sjustlaptop> is it the ceph-osd daemon performing the writes?
[18:05] <Kioob`Taff> yes
[18:06] <sjustlaptop> is there recovery?
[18:06] <Kioob`Taff> no
[18:07] <sjustlaptop> is iostat returning Mb/s instead of MB/s?
[18:07] <dmick> elder: so for whatever it's worth, that's explicitly-coded behavior
[18:07] <dmick> << " --yes-i-really-really-mean-it" << std::endl;
[18:07] <dmick> cout << nargs << std::endl;
[18:07] <elder> It doesn't look very nice.
[18:07] <dmick> doesn't seem like it adds much
[18:08] <Kioob`Taff> sjustlaptop: no, iostat reports KB/s
[18:08] <Kioob`Taff> (with the -k option)
[18:08] <Kioob`Taff> and iotop confirms the difference
[18:08] * tnt (~tnt@109.130.89.104) has joined #ceph
[18:11] * fghaas (~florian@91-119-65-118.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[18:12] <Kioob`Taff> mmm I have to go :/ (train...) are there some logs I should enable which could help with that ?
[18:12] <Kioob`Taff> thanks
[18:13] <dmick> elder: https://github.com/ceph/ceph/pull/167
[18:13] <elder> dmick, looks good to me, thanks a lot.
[18:13] <elder> Reviewed-by:
[18:15] * lxndrp (~papaspyro@85-22-138-202.ip.dokom21.de) Quit (Quit: lxndrp)
[18:15] <dmick> there should be a way for you to do that and cause the merge. I confess I am not clear on the interface.
[18:16] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[18:28] <elder> Oh, I'm not used to doing that. I can commit it the way I know how though (manually, not using the web interface).
[18:28] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:29] <elder> Another question dmick, does the rbd CLI accept all options (e.g., --size) in any order?
[18:29] <elder> And can all options use either "--size=<sz>" or "--size <sz>" format?
[18:29] <dmick> 1) reviewing the pull request help, and will find out. hold on that one.
[18:30] <dmick> 2) cli: I think mostly, but I've never plumbed that space exhaustively
[18:30] <dmick> I tend to use options before image/snap names and don't look back
[18:30] <elder> OK, I'm looking at the code now.
[18:30] <dmick> and I tend not to use any of the options for the names; I use pool/image@snap exclusively because that's how I roll
[18:31] <dmick> pull reqs: apparently when you "merge" you can enter a commit msg
[18:31] * calebamiles1 (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) has joined #ceph
[18:31] <dmick> I will do, and make sure your reviewed-by gets added
[18:32] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:33] * BillK (~BillK@58-7-174-95.dyn.iinet.net.au) has joined #ceph
[18:34] <dmick> overkill for that tiny commit, but at least now I know how it works
[18:34] * calebamiles (~caleb@c-107-3-1-145.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[18:34] <elder> Looks to me like all options allow either "--size <sz>" or "--size=<sz>" equivalently.
[18:34] <sagewk> ye
[18:34] <sagewk> p
[18:35] <elder> And in any order.
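i.e. these invocations should all be equivalent (hypothetical image name):

    rbd create --size 1024 --pool rbd testimg
    rbd create testimg --pool=rbd --size=1024
    rbd create --pool rbd --size=1024 testimg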
[18:38] * jtang2 (~jtang@2001:770:10:500:b503:79ec:5d2:3f2d) Quit (Quit: Leaving.)
[18:38] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[18:39] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:44] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:45] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[18:47] * timmclau_ (~timmclaug@69.170.148.179) has joined #ceph
[18:47] * timmclaughlin (~timmclaug@69.170.148.179) Quit (Read error: Connection reset by peer)
[18:48] * Meths (rift@2.25.193.124) Quit (Read error: Connection reset by peer)
[18:48] * Meths (rift@2.25.193.124) has joined #ceph
[18:48] * nebrera (~pablo@90.173.22.83) has joined #ceph
[18:49] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[18:49] <nebrera> helllo
[18:50] <nebrera> I am using the new version and when I modprobe rbd
[18:50] <nebrera> the module is not found
[18:50] * l0nk (~alex@83.167.43.235) Quit (Quit: Leaving.)
[18:50] <nebrera> I have even compiled the source code and no ko module is found
[18:50] <nebrera> is the documentation updated
[18:50] <dmick> the kernel module is in the kernel source tree
[18:51] <dmick> what distro are you using?
[18:51] <nebrera> centos 6.2
[18:51] <dmick> that's an ancient kernel by default
[18:51] <nebrera> there is no rbd.ko
[18:52] * davidzlap (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[18:52] <dmick> the easiest way to a more-recent kernel is the elrepo
[18:52] <dmick> our kernel fork is github.com/ceph/ceph-client
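Roughly what the elrepo route looks like on CentOS 6; the release RPM version below is illustrative, so check elrepo.org for the current one:

    # add the elrepo repository and install a mainline kernel that ships rbd.ko and ceph.ko
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml
    # reboot into the new kernel, then:
    modprobe rbd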
[18:56] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) Quit (Remote host closed the connection)
[19:03] <nebrera> is there any ceph-client package ?
[19:03] * leseb (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[19:04] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[19:08] <dmick> nebrera: we don't build and distribute kernel .debs to my knowledge, if that's what you mean. (that repo is a full kernel repo)
[19:08] * rzerres1 (~ralf@pd95b5253.dip0.t-ipconnect.de) has left #ceph
[19:09] <nebrera> which repo in elrepo has the ceph-client package ?
[19:09] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: When the chips are down, well, the buffalo is empty)
[19:09] <dmick> it's not a ceph-client package
[19:09] <dmick> it's a kernel package
[19:09] <dmick> kernel
[19:10] <dmick> the repo is somewhat-confusingly named; it refers to the original CephFS kernel client. But really it's just a "latest kernel" now that ceph and rbd modules are in the mainline
[19:13] * dpippenger (~riven@216.103.134.250) has joined #ceph
[19:15] * jmlowe1 (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[19:17] <nebrera> ok
[19:18] <nebrera> but normally when I download your ceph package from your webpage the ko module is not included
[19:18] <nebrera> I will git clone the ceph-client repo and I will generate the module on my machine
[19:18] <dmick> (11:08:25 AM) dmick: nebrera: we don't build and distribute kernel .debs
[19:18] <nebrera> this is not included in the quick start :-D
[19:19] <dmick> what I meant by that is that we don't build and distribute kernel .debs
[19:19] <nebrera> ok
[19:19] <dmick> the quick start says
[19:19] <dmick> Install a recent release of Debian or Ubuntu (e.g., 12.04 precise) on your Ceph server machine and your client machine.
[19:20] <nebrera> yes
[19:20] <nebrera> but my base is centos
[19:20] <nebrera> and I cannot change it
[19:21] <dmick> that means the quick start does not apply to you
[19:21] <dmick> and you must do extra work
[19:21] <nebrera> :-D
[19:21] <nebrera> I will
[19:21] <nebrera> don't worry
[19:21] <nebrera> it looks like it's working but I only needed the module
[19:22] <nebrera> server and services are running
[19:22] <nebrera> I can even execute rbd list and I get the partitions I created
[19:23] <dmick> yes. and you don't need the kernel module to use rbd with qemu-kvm, for instance
[19:23] <nebrera> but when I try to map I get that last error because the module is not found
[19:23] <dmick> it just depends on how you want to use the cluster.
[19:23] <nebrera> I see
[19:23] <dmick> what is your goal? maybe there's a better way?
[19:24] <nebrera> I would like to use shared storage for several servers
[19:24] <jmlowe1> does anybody know how to attach a rbd snapshot to a domain with libvirt?
[19:25] <dmick> jmlowe1: same way you attach an image, just name it 'image@snap', I would think; did you try that and something went wrong?
[19:26] <dmick> nebrera: there are many ways to use shared storage; not all of them involve the kernel module. If you have some details we may be able to save you some time with the kernel stuff
[19:26] <nebrera> what do you need
[19:26] <nebrera> I am going to test ceph for druid
[19:26] * Tribaal (uid3081@id-3081.richmond.irccloud.com) Quit (Ping timeout: 480 seconds)
[19:26] <nebrera> instead of S3
[19:26] <dmick> What's druid?
[19:27] <jmlowe1> <source protocol='rbd' name='rbd/gw10@Thu_Mar_28_13:35:43_2013'/> yields disk not found with attach-device, drop the snap spec and it works
[19:27] <dmick> https://github.com/metamx/druid/wiki ?
[19:27] <nebrera> yes
[19:27] * BillK (~BillK@58-7-174-95.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[19:27] <dmick> jmlowe1: hm, the colons might be an issue
[19:27] <jmlowe1> ah, right, escape them maybe?
[19:27] <nebrera> dmick you are so fast :-D
[19:28] <dmick> I'm not sure. can you try with a snap named without colons?
[19:28] <nebrera> dmick, which way do you recommend me ?
[19:29] <dmick> nebrera: so if you will be accessing the cluster through S3, the kernel rbd module is not necessary
[19:29] <dmick> the rgw gateway works with Apache to provide S3 access to the cluster
[19:30] <jmlowe1> dmick: yep, colons, works when I s/:/./
[19:30] <dmick> jmlowe1: that's annoying, but not surprising.
[19:30] <dmick> that parser is....unfriendly.
[19:30] <dmick> up to filing an issue?
[19:30] <jmlowe1> is the parser part of libvirt?
[19:30] <nebrera> dmick: do you have url with doc about how to mount it ?
[19:30] <dmick> S3 doesn't do "mount"
[19:31] <dmick> it's an object store, accessed through a RESTful interface over HTTP
[19:31] <nebrera> so ceph will be s3-compatible via a restful interface over http ?
[19:31] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) Quit (Quit: Leaving.)
[19:31] <nebrera> which service do I have to start in this case ?
[19:32] <dmick> (11:29:56 AM) dmick: the rgw gateway works with Apache to provide S3 access to the cluster
[19:34] <nebrera> cool
[19:34] * ezconsulting (~ezcon@208.73.128.34) has joined #ceph
[19:34] <nebrera> docs to mount this ?
[19:35] * ezconsu69 (~ezcon@166.137.98.141) has joined #ceph
[19:35] <dmick> nebrera: you should spend some time browsing the presentations on inktank.com and the docs at ceph.com/docs
[19:35] <dmick> there's a *lot* of information there
[19:37] <nebrera> I will read but I have spent some days only
[19:37] <nebrera> my first days
[19:37] <nebrera> and it is complicated :-D
[19:37] <dmick> yes
[19:37] <dmick> once again:
[19:37] <dmick> (11:30:59 AM) dmick: S3 doesn't do "mount"
[19:37] <dmick> (11:31:15 AM) dmick: it's an object store, accessed through a RESTful interface over HTTP
[19:38] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) has joined #ceph
[19:38] <dmick> maybe http://ceph.com/docs/master/radosgw/ will make sense, but you may need some more basic info from the presentations
[19:39] <nebrera> ok
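Once radosgw is running behind Apache, any ordinary S3 client can talk to it; a hedged sketch using s3cmd, where the user, bucket, and endpoint are placeholders:

    # create a gateway user and note the access/secret keys it prints
    radosgw-admin user create --uid=druid --display-name="druid storage"
    # point s3cmd at the rgw endpoint instead of Amazon, entering those keys
    s3cmd --configure
    s3cmd mb s3://segments
    s3cmd put deep-storage-file s3://segments/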
[19:42] <nebrera> dmick, normally in a cluster with 30 servers with ceph
[19:42] <nebrera> where is the apache ?
[19:42] * ezconsu35 (~ezcon@208.73.128.34) Quit (Ping timeout: 480 seconds)
[19:42] <nebrera> 30 is an example :-D
[19:42] <janos> that sounds like "what is the sound of one hand clapping?"
[19:42] * ezconsulting (~ezcon@208.73.128.34) Quit (Ping timeout: 480 seconds)
[19:44] <nebrera> eh
[19:46] * drokita (~drokita@199.255.228.128) Quit (Read error: Operation timed out)
[19:48] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Quit: Leaving)
[19:48] <dmick> what do you mean "where is the Apache"?
[19:48] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[19:52] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[19:54] * rturk-away is now known as rturk
[19:56] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) Quit (Quit: Leaving.)
[19:59] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) has joined #ceph
[20:00] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[20:02] * ezconsulting (~ezcon@208.73.128.34) has joined #ceph
[20:02] * ezconsu69 (~ezcon@166.137.98.141) Quit (Read error: Connection reset by peer)
[20:03] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[20:03] * nebrera (~pablo@90.173.22.83) Quit (Quit: nebrera)
[20:07] <jmlowe1> dmick: Hey, you remember how libvirt was picky about input when attaching? it's quite liberal when detaching; this detached the device even though the snap doesn't exist: <source protocol='rbd' name='rbd/gw10@testThu_Mar_28_14.53.12_2013'/>
[20:09] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[20:09] * rustam (~rustam@5e0f5b1e.bb.sky.com) has joined #ceph
[20:13] * gregaf1 (~Adium@cpe-76-174-249-52.socal.res.rr.com) has joined #ceph
[20:15] * dosaboy (~gizmo@faun.canonical.com) Quit (Quit: Leaving.)
[20:16] * drokita (~drokita@199.255.228.128) has joined #ceph
[20:21] * rustam (~rustam@5e0f5b1e.bb.sky.com) Quit (Remote host closed the connection)
[20:29] * ivotron (~ivo@dhcp-59-168.cse.ucsc.edu) has joined #ceph
[20:33] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[20:36] * rustam (~rustam@5e0f5b1e.bb.sky.com) has joined #ceph
[20:38] * jjgalvez (~jjgalvez@cpe-76-175-30-67.socal.res.rr.com) Quit (Quit: Leaving.)
[20:46] <dmick> jmlowe1: not sure I get you
[20:48] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[20:49] <jmlowe1> It's picky about the inputs, but doesn't seem to care what snap name you give it, it will detach devices even when the snap name doesn't match what was attached
[20:49] * sleinen1 (~Adium@2001:620:0:25:b015:a22a:eec:1aa2) has joined #ceph
[20:50] <jmlowe1> i.e. gw10@Thu_Mar_28_14.53.12_2013 was attached, detaching gw10@testThu_Mar_28_14.53.12_2013 actually detaches gw10@tThu_Mar_28_14.53.12_2013
[20:52] <dmick> it....elided the 'tes'?
[20:52] <dmick> tf?
[20:52] <joshd1> jmlowe: it's probably detaching based on device id rather than file name
[20:53] <jmlowe1> yeah but the file name is required, why make me use it if you are just going to throw it away?
[20:54] <joshd1> yeah, it'd be nice if there was a sanity check there
[20:54] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[20:56] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:58] <dmick> how do you mechanize the attach/detach operation? Is this a virsh command?
[20:58] <jmlowe1> virsh attach-device or virsh detach-device
[20:58] * rustam (~rustam@5e0f5b1e.bb.sky.com) Quit (Remote host closed the connection)
[20:59] <dmick> I see. I've only set them in the static config, haven't done dynamic. I know that there's a lot of virsh I don't know :)
[21:00] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:01] <dmick> it seems like detach-device only takes the path as an arg; I guess I don't understand how the path could not match and it still work
[21:06] <jmlowe1> takes path to xml describing the device as an arg
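(The attach/detach flow being described passes virsh a small XML file describing the disk. The rough Python sketch below shows the general shape of that, driving virsh through subprocess; the domain name, target device, and driver settings are assumptions for illustration rather than jmlowe1's actual setup, and a cephx-enabled cluster would also need an <auth> element in the XML.)

    # Rough sketch of virsh attach-device / detach-device with an XML
    # description of an RBD-backed disk. Domain name and target device
    # are made-up placeholders.
    import os
    import subprocess
    import tempfile

    DISK_XML = """<disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/gw10@Thu_Mar_28_14.53.12_2013'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    def run_virsh(action, domain, xml):
        """Write the device XML to a temp file and hand its path to virsh."""
        with tempfile.NamedTemporaryFile('w', suffix='.xml', delete=False) as f:
            f.write(xml)
            path = f.name
        try:
            subprocess.check_call(['virsh', action, domain, path])
        finally:
            os.unlink(path)

    # Attach, then detach, the same device description. Per the discussion
    # above, detach appears to match on the device rather than on the exact
    # snapshot name given in <source>.
    run_virsh('attach-device', 'gw10-vm', DISK_XML)
    run_virsh('detach-device', 'gw10-vm', DISK_XML)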
[21:10] * steki (~steki@91.195.39.5) Quit (Remote host closed the connection)
[21:12] <elder> gregaf, I accidentally clicked "force rebuild" on your branch wip-flush-error-checking
[21:12] <gregaf1> okay
[21:12] * nwat (~Adium@eduroam-233-33.ucsc.edu) has joined #ceph
[21:12] <gregaf1> I wonder what that branch is
[21:12] <dmick> jmlowe1: ah. ok.
[21:12] <gregaf1> ;)
[21:12] <elder> Well, if you don't need it, please delete it. *Maybe* that will stop the build and mine will get done faster.
[21:13] <nwat> is there an OSD setting that will cause reads to use O_DIRECT and bypass the OS cache?
[21:13] <gregaf1> ah, in the kernel — that was the one to look at the flush return values when calling the filesystem's sync
[21:14] <gregaf1> I didn't want to toss out the code in case it became useful as a reference or something someday, but you can if you like
[21:14] <elder> Are you sure?
[21:14] <gregaf1> yeah
[21:15] <elder> Hmmm. I think you have to.
[21:15] <elder> In fact, I'm pretty sure that's the case because I also happened to create my own branch with the same name as one you had (wip-4550-1) and they're both there.
[21:15] <elder> Wait!
[21:15] <gregaf1> umm, that's not how git works…?
[21:15] <gregaf1> *confuzzlement*
[21:16] <elder> NVM
[21:16] <gregaf1> okay :)
[21:16] <elder> Wrong git tree.
[21:16] <elder> Glad you didn't have a branch by that name on ceph.git
[21:16] <elder> It's gitbuilder that allows multiple branches with the same owner. Not sure how that gets resolved...
[21:16] <dmick> elder: check the magazine *and* the chamber :)
[21:17] <elder> OK with you if I delete "wip-4450-1"?
[21:17] <elder> (Because I'm not sure what's going to happen to your built copy when I delete mine)
[21:17] * Cotolez (~aroldi@81.88.224.110) has joined #ceph
[21:18] <gregaf1> I don't have a wip-4450-1...
[21:18] <elder> Then it is by definition fine.
[21:18] <elder> Or something like that.
[21:18] <gregaf1> branch names need to be distinct and there's not a real concept of ownership as far as git knows, so I'm really confused by the conversation we're having
[21:19] <elder> Look at branch(es) wip-4550-1 on http://gitbuilder.sepia.ceph.com/gitbuilder-precise-kernel-amd64/
[21:19] <elder> And nevermind, my eyes were deceiving me.
[21:19] <dmick> there's a 4450-1 and a 4550-1
[21:19] <elder> Yours is 4450 mine is 4550
[21:19] <elder> Carry on.
[21:19] <dmick> whew. was worrying about crossing the streams again.
[21:20] <Cotolez> Hi, I would like to test the Samba vfs for ceph, but I need some help because I don't understand how to set up smb.conf correctly
[21:20] <elder> (It puzzled me too, but I quickly assumed gitbuilder somehow distinguished between people)
[21:21] <Cotolez> I added the line "vfs object = ceph" in my share section
[21:22] * fghaas (~florian@91-119-65-118.dynamic.xdsl-line.inode.at) has joined #ceph
[21:22] <Cotolez> what other steps do I need to take?
[21:23] * Cube (~Cube@12.248.40.138) has joined #ceph
[21:23] * Tribaal (uid3081@id-3081.richmond.irccloud.com) has joined #ceph
[21:26] <gregaf1> the samba stuff really isn't packaged up at all; slang1 might know the answers but I doubt anybody else does
[21:26] <gregaf1> (if he's around/available)
[21:27] <Cotolez> He seems to be away
[21:27] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Read error: Connection reset by peer)
[21:27] <scheuk> question about radosgw: I am trying to upload a 5GB file, and radosgw is logging an HTTP 400 EntityTooLarge. Is there a size limit in rados/radosgw? (I am running ceph 0.48.3)
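(The S3 protocol caps a single PUT at 5 GB, and a 5 GB upload through the gateway can trip that limit; the usual workaround is a multipart upload. The boto sketch below illustrates that pattern; the endpoint, credentials, bucket, and file names are placeholders, filechunkio is a third-party helper library, and whether a given radosgw version accepts multipart uploads should be checked against its own documentation.)

    # Sketch of uploading a large file as an S3 multipart upload (boto2 API)
    # instead of one oversized PUT. All names and sizes are placeholders.
    import math
    import os
    from filechunkio import FileChunkIO   # helper commonly paired with boto multipart
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='gateway.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.get_bucket('my-bucket')

    source_path = 'big-file.bin'
    source_size = os.stat(source_path).st_size
    chunk_size = 512 * 1024 * 1024                # 512 MB parts
    chunk_count = int(math.ceil(source_size / float(chunk_size)))

    mp = bucket.initiate_multipart_upload(os.path.basename(source_path))
    for i in range(chunk_count):
        offset = chunk_size * i
        nbytes = min(chunk_size, source_size - offset)
        with FileChunkIO(source_path, 'r', offset=offset, bytes=nbytes) as fp:
            mp.upload_part_from_file(fp, part_num=i + 1)   # part numbers start at 1
    mp.complete_upload()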
[21:29] * ezconsu53 (~ezcon@208.73.128.34) has joined #ceph
[21:29] * ezconsulting (~ezcon@208.73.128.34) Quit (Read error: Connection reset by peer)
[21:30] * jtang1 (~jtang@79.97.135.214) has joined #ceph
[21:41] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) Quit (Remote host closed the connection)
[21:41] * ezconsulting (~ezcon@166.137.98.141) has joined #ceph
[21:41] * ezconsu53 (~ezcon@208.73.128.34) Quit (Read error: Connection reset by peer)
[21:41] * ezconsulting (~ezcon@166.137.98.141) Quit (Read error: Connection reset by peer)
[21:41] * ezconsulting (~ezcon@208.73.128.34) has joined #ceph
[21:43] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[21:43] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[21:48] <sagewk> sjust, davidz: that wip-4490 patch reminds me.. did we fix the thing where each PG's copy of PGPool isn't getting updated on, say, pool rename?
[21:48] <sagewk> (or, in this case, pg_pool_t flag update?)
[21:48] <sjustlaptop> sagewk: I didn't fix it
[21:48] <sjustlaptop> what is the bug number?
[21:48] <sagewk> i forget
[21:49] <sjustlaptop> 4471
[21:49] <sjustlaptop> looking
[21:49] * nwat (~Adium@eduroam-233-33.ucsc.edu) Quit (Quit: Leaving.)
[21:52] * ezconsu53 (~ezcon@166.137.98.141) has joined #ceph
[21:52] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) Quit (Remote host closed the connection)
[21:52] * ezconsu53 (~ezcon@166.137.98.141) Quit (Read error: Connection reset by peer)
[21:52] * ezconsu88 (~ezcon@166.137.98.141) has joined #ceph
[21:53] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[21:55] * jjgalvez (~jjgalvez@12.248.40.138) has joined #ceph
[21:55] <sjustlaptop> sagewk, joao: the osd doesn't really ever see the pool name
[21:56] <sagewk> hmm
[21:56] <sjustlaptop> it deals in terms of pool id
[21:56] <sjustlaptop> I think this may be an issue with the caps?
[21:56] <sagewk> PGPool::name
[21:56] <sagewk> which is matched against the caps
[21:56] <sagewk> and is not updated when the name changes in an inc osdmap
[21:56] <sjustlaptop> ah
[21:56] <sjustlaptop> there it is
[21:56] <sjustlaptop> I was looking at pg_pool_t
[21:57] <joao> btw, unrelated matter: is a pool removal done synchronously?
[21:57] <sagewk> same is going to be true of PGPool::info in this case probably, with the full bit
[21:57] <sjustlaptop> I see it now
[21:57] <sjustlaptop> that does get updated
[21:57] <sjustlaptop> PGPool::update
[21:58] <sagewk> oh, just not the name :)
[21:58] <sjustlaptop> yeah, fixing now
[21:58] <sagewk> cool
[21:58] * ezconsulting (~ezcon@208.73.128.34) Quit (Ping timeout: 480 seconds)
[21:59] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:59] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:00] * ezconsulting (~ezcon@208.73.128.34) has joined #ceph
[22:00] * ezconsu88 (~ezcon@166.137.98.141) Quit (Read error: Connection reset by peer)
[22:10] <sjustlaptop> pull request submitted for 4471
[22:15] <sjustlaptop> joao: no, not really
[22:15] <sagewk> sjustlaptop: merged.. don't forget to close the bug
[22:15] <sjustlaptop> yep
[22:15] <sagewk> hmm should backport that to bobtail too
[22:16] <sagewk> at least the name patch
[22:16] <joao> sjust, yeah, thought so
[22:16] <sjustlaptop> sagewk: yeah, forgot about that
[22:16] <sjustlaptop> I'll backport
[22:16] <sagewk> joshd: pushed updated wip-rbd-diff, now with striping
[22:16] <joao> so, is there a chance the osds will still send pool related messages to the monitors after a pool has been removed by the user but not by the osds?
[22:17] <sjustlaptop> probably, but they will be tagged with a prior epoch
[22:17] <sjustlaptop> anyway, what pool messages do the osds send to the mon?
[22:17] <sjustlaptop> osd stats?
[22:19] <joshd1> sagewk: cool, I'll take a look
[22:20] <sagewk> sjustlaptop: the final version of the SNAPDIR osd changes are in there too, if you want to look
[22:20] <sjustlaptop> looking
[22:27] <joao> sjustlaptop, not really sure; just wondering about that given an issue xiaoxi had today with his monitors after deleting 48 pools, which appeared to have all the symptoms of overloaded monitors
[22:36] <sjustlaptop> sagewk: that looks ok, though either a read or a write to head might reorder past a list_snaps of head
[22:36] <sagewk> i think that's already possible with read vs write reordering. i guess not with read vs read.
[22:37] <sjustlaptop> I think currently a read may reorder past a write, but not a write past a read
[22:38] * eschnou (~eschnou@227.159-201-80.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[22:38] * jtang1 (~jtang@79.97.135.214) Quit (Quit: Leaving.)
[22:41] * nwat (~Adium@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[22:59] * Cotolez (~aroldi@81.88.224.110) Quit (Quit: Sto andando via)
[22:59] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) has joined #ceph
[23:03] * drokita1 (~drokita@199.255.228.128) has joined #ceph
[23:03] * aliguori (~anthony@32.97.110.51) Quit (Remote host closed the connection)
[23:04] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[23:04] * rustam (~rustam@5e0f5b1e.bb.sky.com) has joined #ceph
[23:09] * drokita (~drokita@199.255.228.128) Quit (Ping timeout: 480 seconds)
[23:09] * timmclau_ (~timmclaug@69.170.148.179) Quit (Ping timeout: 480 seconds)
[23:16] * drokita1 (~drokita@199.255.228.128) Quit (Ping timeout: 480 seconds)
[23:19] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:23] * BillK (~BillK@124-149-76-183.dyn.iinet.net.au) has joined #ceph
[23:26] * fghaas (~florian@91-119-65-118.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[23:26] <sagewk> sjust: pushed fix for the pg num thing
[23:30] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:31] <sjustlaptop> oh, in teuth?
[23:31] <sjustlaptop> I also pushed one to a branch, couldn't test it since I can't get into slider
[23:36] * portante (~user@66.187.233.206) Quit (Quit: home)
[23:42] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[23:53] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[23:53] <PerlStalker> Does 0.56.4 fix the osd memory leak?
[23:55] <sjustlaptop> PerlStalker: there is a change to reduce memory usage due to pg logs
[23:55] <sjustlaptop> when you observed memory leaking, were all of your pgs active+clean?
[23:57] <PerlStalker> I saw it when suddenly a bunch of things got stuck peering.
[23:59] * ezconsulting (~ezcon@208.73.128.34) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.