#ceph IRC Log

Index

IRC Log for 2013-07-25

Timestamps are in GMT/BST.

[0:00] <joshd> loicd: I think we'd need a librgw anyway before we could plug anything in there, which is a lot of work by itself
[0:00] <loicd> ok
[0:00] <loicd> thanks for the update :-)
[0:01] <joshd> no problem
[0:02] <loicd> The conclusion is that a swift checklist to show what's implemented and what's missing in rgw won't be a waste of time. The current implementation of the API is not going to change any time soon.
[0:03] * janisg (~troll@85.254.50.23) Quit (Ping timeout: 480 seconds)
[0:03] <loicd> s/change/be replaced/ :-)
[0:04] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[0:05] <joshd> yes, certainly. even if it were replaced, some features would need backend support, so the list is still useful
[0:06] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:10] * Guest926 (~zack@65-36-76-12.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[0:12] * yehudasa__ (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[0:17] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[0:17] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[0:18] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[0:18] <infernix> so I seem to have two different methods of keyfiles here and I'm not sure if one is just older/obsoleted
[0:19] <infernix> on the one hand, for admin, I have only /etc/ceph/ceph.client.admin.keyring which contains [client.admin] key=$key
[0:20] <infernix> on the other, i have a "[client.someuser] keyfile=/etc/ceph/someuser.keyfile" in ceph.conf, and i have "[client.someuser] key=$key" in /etc/ceph/someuser.keyfile
[0:20] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[0:21] <infernix> can i just drop the definition from ceph.conf and rename that someuser.keyfile to ceph.client.someuser.keyfile in more recent versions, with all the clients (e.g. librbd) automatically finding the key?
[0:22] <infernix> hm, no, seems that doesn't work
[0:23] <joshd> infernix: .keyring instead of .keyfile
[0:23] <joshd> keyfile = /path/to/file is for a file containing just the base64 key, no [client.x] or key= in it
[0:24] <infernix> documentation states it's $cluster.client.admin.keyring
[0:24] <infernix> so is it also $cluster.client.$id.keyring then?
[0:24] * infernix tries
[0:25] <joshd> yeah
[0:25] <infernix> ha
[0:25] <infernix> great
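
As a side note, a minimal sketch of the two layouts being compared here (the key is a placeholder, and the paths assume the default cluster name "ceph"):

    # variant 1: a keyring file that clients such as librbd pick up automatically
    # file: /etc/ceph/ceph.client.someuser.keyring
    [client.someuser]
        key = AQD...placeholder...

    # variant 2: an explicit keyfile reference placed in ceph.conf; per joshd
    # above, /etc/ceph/someuser.keyfile then holds only the base64 key string,
    # with no [client.x] section and no key= in it
    [client.someuser]
        keyfile = /etc/ceph/someuser.keyfile
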
[0:25] * janisg (~troll@85.254.50.23) has joined #ceph
[0:26] <infernix> but how is $cluster not always 'ceph'?
[0:28] <infernix> ah i see
[0:30] <infernix> why would I want to run multiple clusters on the same hardware? I mean, I can't assign one OSD to two clusters, can I?
[0:30] <infernix> are we talking about client hardware here?
[0:38] <loicd> yehudasa: joshd I went to the swiftstack booth at OSCON and found John Dickinson to get an update
[0:38] <joshd> loicd: great, what'd he say?
[0:39] * loicd trying to summarize
[0:41] <loicd> redhat is working to use glusterfs as a storage backend for swift. It's low level : it is not just a layer under the API, swift uses it as a backend to do replication. I.e. most of the logic of swift is still there including what Ceph already implements.
[0:41] <loicd> joshd: is it more or less what he discussed with you back in April ?
[0:41] <joshd> loicd: yes, that was my understanding of the lfs work
[0:42] <loicd> regarding the API itself he says that he would welcome something that provides an abstract API but confirmed that there is no incentive to work in this direction for any of the swift developers at the moment. They are happy with the way things are and the fact that other software such as Ceph lag behind is not enough of a concern for them to act on it. Which makes sense ;-)
[0:43] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[0:43] * grepory1 (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Read error: Connection reset by peer)
[0:44] <loicd> he mentioned that proprietary software vendors (EMC and others) have swift-compatible implementations of their own. And of course it would make sense for all the projects re-implementing the swift APIs to get together and share the code. But again, the swift developers won't be a driving force to make that happen.
[0:44] <lxo> infernix, you can have multiple osds on the same host. even on the same filesystems. I've used multiple clusters on the same hardware to migrate from one corrupted ceph filesystem to a fresh one
[0:45] <loicd> Except for Ceph most (all ?) implementations are from proprietary software vendors and getting them to cooperate in this direction is likely to be a challenge to say the least.
[0:45] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[0:45] <sjustlaptop> loicd: I've updated 5433 with the direction I think we should go next
[0:46] * loicd looking 5433
[0:46] <sjustlaptop> the more I think about it, the more it became clear that factoring out the PG RecoveryState logic was going to be a nightmare
[0:46] <sjustlaptop> instead, we should essentially factor out everything else
[0:46] <joshd> loicd: thanks, that's good to know. that's pretty much what he told me before, but it's good to have it confirmed
[0:47] <sjustlaptop> nearly all of what we've done so far still applies
[0:48] <loicd> sjustlaptop: that makes sense to me. So PG.{cc,h} as it is would essentially become the RecoveryState & supporting functions. And what does not belong gets factored out. Is this what you mean ?
[0:49] * loicd slightly paraphrasing http://tracker.ceph.com/issues/5433#note-6 :-)
[0:49] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[0:50] <sjustlaptop> loicd: yeah, that's the gist of it
[0:50] <sjustlaptop> also, backfill, scrub stay where they are
[0:50] <sjustlaptop> some replication logic (e.g., what to replicate and when) will move from ReplicatedPG to PG
[0:50] <sjustlaptop> how to replicate will remain in ReplicatedPG
[0:50] * mschiff (~mschiff@85.182.236.82) Quit (Remote host closed the connection)
[0:51] <sjustlaptop> correction: backfill is in ReplicatedPG, some of that logic will float up to PG
[0:52] <loicd> sjustlaptop: so ReplicatedPG would no longer exist as a derived class of PG. It would become a PGBackend from which PGReplicatedBackend and PGErasureCodeBackend are derived. And PG would use the PGBackend interface ?
[0:52] <sjustlaptop> yeah
[0:52] <loicd> I like that :-)
[0:52] <sjustlaptop> and hopefully, the PGBackends won't need a PG interface to work with at all
[0:53] <sjustlaptop> that's my main beef with the current ReplicatedPG/PG setup, ReplicatedPG knows way too much about the contents of PG
[0:53] <loicd> I'll go in this direction sjustlaptop , thanks :-)
[0:54] <loicd> :-)
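
A rough C++ sketch of the split loicd and sjustlaptop are converging on above; every name other than PG, ReplicatedPG and PGBackend is made up for illustration, and this is not actual Ceph source:

    // Illustrative only: PG keeps RecoveryState/peering and decides what to
    // replicate and when; how to replicate hides behind an abstract backend.
    #include <memory>
    #include <string>

    struct ObjectRef { std::string name; };  // stand-in for Ceph's object id type

    class PGBackend {
    public:
      virtual ~PGBackend() = default;
      virtual void submit_write(const ObjectRef& obj) = 0;    // "how" to replicate
      virtual void recover_object(const ObjectRef& obj) = 0;
    };

    class PGReplicatedBackend : public PGBackend {
    public:
      void submit_write(const ObjectRef&) override {}    // full-copy replication
      void recover_object(const ObjectRef&) override {}  // pull a whole copy
    };

    class PGErasureCodeBackend : public PGBackend {
    public:
      void submit_write(const ObjectRef&) override {}    // encode and scatter chunks
      void recover_object(const ObjectRef&) override {}  // gather chunks and decode
    };

    class PG {
      std::unique_ptr<PGBackend> backend_;  // PG drives the backend, never the reverse
    public:
      explicit PG(std::unique_ptr<PGBackend> b) : backend_(std::move(b)) {}
      void replicate(const ObjectRef& obj) { backend_->submit_write(obj); }
    };
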
[1:00] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Ping timeout: 480 seconds)
[1:04] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Ping timeout: 480 seconds)
[1:06] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[1:08] * pras (~prasanna@c-67-163-128-131.hsd1.pa.comcast.net) has joined #ceph
[1:08] * pras (~prasanna@c-67-163-128-131.hsd1.pa.comcast.net) Quit ()
[1:09] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[1:09] * prasr (~prasanna@c-67-163-128-131.hsd1.pa.comcast.net) has joined #ceph
[1:09] * jmlowe1 (~Adium@2601:d:a800:97:c5bd:db07:ec9a:3a90) has left #ceph
[1:09] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[1:10] * prasr (~prasanna@c-67-163-128-131.hsd1.pa.comcast.net) has left #ceph
[1:12] * LeaChim (~LeaChim@0540adc6.skybroadband.com) Quit (Ping timeout: 480 seconds)
[1:15] * Enigmagic (~nathan@c-98-234-189-23.hsd1.ca.comcast.net) has joined #ceph
[1:15] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[1:19] <Enigmagic> Is anyone familiar with the RADOS clone_range call? I've been trying to use it to copy an object but it keeps failing with "No such file or directory" whether or not I create the destination object first.
[1:22] <sagewk> Enigmagic: there are some caveats to using it
[1:23] <sagewk> the src and dest objects need to be stored with the same locator key
[1:23] <sagewk> it is probably complaining because the src object isn't stored on the node with the destination object
[1:24] <Enigmagic> sagewk: hum, is there a reasonable way to rename an object without reading/writing the full thing?
[1:24] <sagewk> in general, no, because the name of the object determines where in the cluster it is stored
[1:25] <sagewk> you can clone it to a different name but use the old name as the key, but then the object doesn't move, and all readers/writers have to know to look under the old key+new name combo
[1:26] <Enigmagic> can't do that... i'm just trying to clean up some of our RADOS backend for LevelDB.
[1:27] <Enigmagic> the files it renames are generally small so i'll just copy the bits around
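
A hedged sketch of what sagewk describes, using the librados C API as it existed around this time (pool name, object names and the copy length are placeholders, and the rados_clone_range signature is an assumption about this era's API; the call was deprecated later):

    // Clone "oldname" to "newname" inside the same PG by reusing the old
    // name as the locator key for both objects, per sagewk's caveat above.
    #include <rados/librados.h>
    #include <stdio.h>

    int main() {
      rados_t cluster;
      rados_ioctx_t io;
      if (rados_create(&cluster, "admin") < 0) return 1;
      rados_conf_read_file(cluster, NULL);            // default ceph.conf search path
      if (rados_connect(cluster) < 0) return 1;
      if (rados_ioctx_create(cluster, "mypool", &io) < 0) return 1;

      rados_ioctx_locator_set_key(io, "oldname");     // both objects share this key
      int r = rados_clone_range(io, "newname", 0, "oldname", 0, 4194304 /* placeholder length */);
      if (r < 0)
        fprintf(stderr, "clone_range failed: %d\n", r);

      rados_ioctx_destroy(io);
      rados_shutdown(cluster);
      return r < 0 ? 1 : 0;
    }
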
[1:31] * janisg (~troll@85.254.50.23) Quit (Ping timeout: 480 seconds)
[1:33] * janisg (~troll@85.254.50.23) has joined #ceph
[1:40] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[1:50] * alram (~alram@38.122.20.226) Quit (Quit: leaving)
[1:53] <loicd> joshd: yehudasa talked to chmouel and he brings an interesting new view on LFS :-)
[1:54] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Remote host closed the connection)
[1:56] <loicd> in a nutshell, he thinks LFS is a perfect fit for Ceph and is not as low level as John Dickinson suggested.
[1:56] <loicd> he says that the LFS driver API is being discussed at the moment. He will send the URL of the mail thread ( it's somewhere in the openstack-swift area )
[1:57] <loicd> he also says that the driver API should not be too complicated to implement.
[1:59] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[2:01] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[2:04] <joshd> loicd: hmm, very interesting. maybe the repo linked earlier isn't the most current then
[2:16] * yehudasa__ (~yehudasa@2602:306:330b:1410:ea03:9aff:fe98:e8ff) has joined #ceph
[2:31] * chamings (~jchaming@jfdmzpr01-ext.jf.intel.com) Quit (Quit: Lost terminal)
[2:31] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[2:31] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[2:37] <_robbat2|irssi> yehudasa__, loicd: do you have a ballpark idea how much work (manhours for yehudasa) to add IAM policies? (specifically after per-folder read/write granularity); my boss is asking re bounty funding
[2:38] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[2:40] * sagelap1 (~sage@2600:1012:b00a:5524:883a:a042:a9ec:31d3) has joined #ceph
[2:43] <yehudasa__> _robbat2|irssi: I don't really know, not a trivial amount of time.
[2:44] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:44] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[2:44] <yehudasa__> will probably require defining a subset of IAM
[2:45] <yehudasa__> if I had to guestimate, I'd say that it's ~3 sprints to do something useful enough. However, there's more to features than just the functionality. There's QA, docs, etc.
[2:59] * dpippenger1 (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:01] * huangjun (~kvirc@111.175.164.32) has joined #ceph
[3:08] * yy-nm (~chatzilla@218.74.32.66) has joined #ceph
[3:11] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[3:11] <- *dpippenger* s -la
[3:11] <- *dpippenger* whee
[3:28] * sagelap1 (~sage@2600:1012:b00a:5524:883a:a042:a9ec:31d3) Quit (Read error: No route to host)
[3:45] * sagelap (~sage@2600:1012:b00a:5524:f8d2:767e:3963:f98d) has joined #ceph
[3:48] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[3:52] * markbby (~Adium@168.94.245.4) has joined #ceph
[3:56] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[3:59] * markbby1 (~Adium@168.94.245.2) has joined #ceph
[3:59] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[4:01] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:07] * scuttlemonkey (~scuttlemo@75-150-32-73-Oregon.hfc.comcastbusiness.net) has joined #ceph
[4:07] * ChanServ sets mode +o scuttlemonkey
[4:12] * Vjarjadian (~IceChat77@90.214.208.5) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Kdecherf (~kdecherf@shaolan.kdecherf.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * nigwil (~idontknow@174.143.209.84) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * cjh_ (~cjh@ps123903.dreamhost.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * lmb (lmb@212.8.204.10) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * terje- (~root@135.109.216.239) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jeroenmoors (~quassel@193.104.8.40) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * guppy (~quassel@guppy.xxx) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * AaronSchulz (~chatzilla@216.38.130.164) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * s2r2 (uid322@id-322.ealing.irccloud.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * WarrenTheAardvarkUsui (~WarrenUsu@38.122.20.226) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jnq (~jon@0001b7cc.user.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * X3NQ (~X3NQ@195.191.107.205) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * maswan (maswan@kennedy.acc.umu.se) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * fireD (~fireD@93-142-246-152.adsl.net.t-com.hr) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * soren (~soren@hydrogen.linux2go.dk) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * sbadia (~sbadia@yasaw.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * chutz (~chutz@rygel.linuxfreak.ca) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Azrael (~azrael@terra.negativeblue.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * markl (~mark@tpsit.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Fetch_ (fetch@gimel.cepheid.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * mxmln (~maximilia@212.79.49.65) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Tamil (~tamil@38.122.20.226) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Psi-Jack_ (~psi-jack@psi-jack.user.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * tdb (~tdb@willow.kent.ac.uk) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * NaioN_ (stefan@andor.naion.nl) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * iggy (~iggy@theiggy.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jochen (~jochen@laevar.de) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * \ask (~ask@oz.develooper.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * `10__ (~10@juke.fm) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * JM (~oftc-webi@193.252.138.241) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * fridudad (~oftc-webi@fw-office.allied-internet.ag) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * markbby1 (~Adium@168.94.245.2) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * huangjun (~kvirc@111.175.164.32) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jjgalvez1 (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * DarkAceZ (~BillyMays@50.107.55.36) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * lxo (~aoliva@lxo.user.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Meths (rift@2.25.189.113) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * via (~via@smtp2.matthewvia.info) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jamespage (~jamespage@culvain.gromper.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * gregaf1 (~Adium@38.122.20.226) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Zethrok (~martin@95.154.26.34) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * todin_ (tuxadero@kudu.in-berlin.de) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * sagelap (~sage@2600:1012:b00a:5524:f8d2:767e:3963:f98d) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * yy-nm (~chatzilla@218.74.32.66) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * yehudasa__ (~yehudasa@2602:306:330b:1410:ea03:9aff:fe98:e8ff) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Enigmagic (~nathan@c-98-234-189-23.hsd1.ca.comcast.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * sagewk (~sage@38.122.20.226) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * yehudasa (~yehudasa@38.122.20.226) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * lupine (~lupine@lupine.me.uk) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * morse (~morse@supercomputing.univpm.it) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * PerlStalker (~PerlStalk@72.166.192.70) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * ismell_ (~ismell@host-24-56-171-198.beyondbb.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * houkouonchi-work (~linux@12.248.40.138) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * ntranger (~ntranger@proxy2.wolfram.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * zynzel (zynzel@spof.pl) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * loicd (~loicd@bouncer.dachary.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * MooingLemur (~troy@phx-pnap.pinchaser.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * xdeller (~xdeller@91.218.144.129) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * _Tass4da1 (~tassadar@tassadar.xs4all.nl) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Svedrin (svedrin@ketos.funzt-halt.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * sig_wall (~adjkru@185.14.185.91) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jtang (~jtang@sgenomics.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Anticimex (anticimex@95.80.32.80) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * lurbs (user@uber.geek.nz) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * MapspaM (~clint@xencbyrum2.srihosting.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * dalegaar1 (~dalegaard@vps.devrandom.dk) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * hflai_ (~hflai@alumni.cs.nctu.edu.tw) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * ofu_ (ofu@dedi3.fuckner.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * nhm (~nhm@184-97-255-87.mpls.qwest.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * sjust (~sam@38.122.20.226) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * joshd (~joshd@38.122.20.226) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jerrad (~jerrad@dhcp-63-251-67-70.acs.internap.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * jcfischer (~fischer@peta-dhcp-13.switch.ch) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * off_rhoden (~anonymous@pool-173-79-66-35.washdc.fios.verizon.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * eternaleye (~eternaley@2002:3284:29cb::1) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * ShaunR (~ShaunR@staff.ndchost.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * [fred] (fred@konfuzi.us) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Psi-Jack (~Psi-Jack@yggdrasil.hostdruids.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * baffle_ (baffle@jump.stenstad.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * [cave] (~quassel@boxacle.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Ormod (~valtha@ohmu.fi) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * nwf (~nwf@67.62.51.95) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * saaby (~as@mail.saaby.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * codice (~toodles@75-140-71-24.dhcp.lnbh.ca.charter.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * rennu_ (sakari@turn.ip.fi) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * liiwi (liiwi@idle.fi) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Meyer^ (meyer@c64.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * iggy__ (~iggy@theiggy.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * madd (~m@workstation.sauer.ms) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * nwl (~levine@atticus.yoyo.org) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Sargun_ (~sargun@208-106-98-2.static.sonic.net) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * scheuk (~scheuk@204.246.67.78) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * Esmil (esmil@horus.0x90.dk) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * thelan (~thelan@paris.servme.fr) Quit (reticulum.oftc.net synthon.oftc.net)
[4:12] * wonko_be_ (bernard@november.openminds.be) Quit (reticulum.oftc.net synthon.oftc.net)
[4:13] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[4:13] * sjust (~sam@38.122.20.226) has joined #ceph
[4:13] * joshd (~joshd@38.122.20.226) has joined #ceph
[4:13] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[4:13] * markl (~mark@tpsit.com) has joined #ceph
[4:13] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[4:13] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[4:13] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[4:13] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[4:13] * tchmnkyz (~jeremy@0001638b.user.oftc.net) has joined #ceph
[4:13] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[4:13] * jerrad (~jerrad@dhcp-63-251-67-70.acs.internap.com) has joined #ceph
[4:13] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[4:13] * Kdecherf (~kdecherf@shaolan.kdecherf.com) has joined #ceph
[4:13] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[4:13] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[4:13] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[4:13] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) has joined #ceph
[4:13] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[4:13] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[4:13] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) has joined #ceph
[4:13] * nigwil (~idontknow@174.143.209.84) has joined #ceph
[4:13] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[4:13] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[4:13] * lmb (lmb@212.8.204.10) has joined #ceph
[4:13] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[4:13] * sbadia (~sbadia@yasaw.net) has joined #ceph
[4:13] * terje- (~root@135.109.216.239) has joined #ceph
[4:13] * maswan (maswan@kennedy.acc.umu.se) has joined #ceph
[4:13] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[4:13] * mjeanson (~mjeanson@00012705.user.oftc.net) has joined #ceph
[4:13] * jnq (~jon@0001b7cc.user.oftc.net) has joined #ceph
[4:13] * jeroenmoors (~quassel@193.104.8.40) has joined #ceph
[4:13] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[4:13] * guppy (~quassel@guppy.xxx) has joined #ceph
[4:13] * AaronSchulz (~chatzilla@216.38.130.164) has joined #ceph
[4:13] * s2r2 (uid322@id-322.ealing.irccloud.com) has joined #ceph
[4:13] * WarrenTheAardvarkUsui (~WarrenUsu@38.122.20.226) has joined #ceph
[4:13] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[4:13] * X3NQ (~X3NQ@195.191.107.205) has joined #ceph
[4:13] * soren (~soren@hydrogen.linux2go.dk) has joined #ceph
[4:13] * fireD (~fireD@93-142-246-152.adsl.net.t-com.hr) has joined #ceph
[4:13] * Fetch_ (fetch@gimel.cepheid.org) has joined #ceph
[4:13] * jcfischer (~fischer@peta-dhcp-13.switch.ch) has joined #ceph
[4:13] * mxmln (~maximilia@212.79.49.65) has joined #ceph
[4:13] * Tamil (~tamil@38.122.20.226) has joined #ceph
[4:13] * off_rhoden (~anonymous@pool-173-79-66-35.washdc.fios.verizon.net) has joined #ceph
[4:13] * eternaleye (~eternaley@2002:3284:29cb::1) has joined #ceph
[4:13] * Psi-Jack_ (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[4:13] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[4:13] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) has joined #ceph
[4:13] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[4:13] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[4:13] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[4:13] * NaioN_ (stefan@andor.naion.nl) has joined #ceph
[4:13] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) has joined #ceph
[4:13] * iggy (~iggy@theiggy.com) has joined #ceph
[4:13] * jochen (~jochen@laevar.de) has joined #ceph
[4:13] * \ask (~ask@oz.develooper.com) has joined #ceph
[4:13] * `10__ (~10@juke.fm) has joined #ceph
[4:13] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[4:13] * [fred] (fred@konfuzi.us) has joined #ceph
[4:13] * Psi-Jack (~Psi-Jack@yggdrasil.hostdruids.com) has joined #ceph
[4:13] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[4:13] * Esmil (esmil@horus.0x90.dk) has joined #ceph
[4:13] * baffle_ (baffle@jump.stenstad.net) has joined #ceph
[4:13] * thelan (~thelan@paris.servme.fr) has joined #ceph
[4:13] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[4:13] * madd (~m@workstation.sauer.ms) has joined #ceph
[4:13] * rennu_ (sakari@turn.ip.fi) has joined #ceph
[4:13] * liiwi (liiwi@idle.fi) has joined #ceph
[4:13] * iggy__ (~iggy@theiggy.com) has joined #ceph
[4:13] * Sargun_ (~sargun@208-106-98-2.static.sonic.net) has joined #ceph
[4:13] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[4:13] * codice (~toodles@75-140-71-24.dhcp.lnbh.ca.charter.com) has joined #ceph
[4:13] * saaby (~as@mail.saaby.com) has joined #ceph
[4:13] * nwf (~nwf@67.62.51.95) has joined #ceph
[4:13] * Meyer^ (meyer@c64.org) has joined #ceph
[4:13] * Ormod (~valtha@ohmu.fi) has joined #ceph
[4:13] * [cave] (~quassel@boxacle.net) has joined #ceph
[4:13] * wonko_be_ (bernard@november.openminds.be) has joined #ceph
[4:14] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[4:14] * markbby1 (~Adium@168.94.245.2) has joined #ceph
[4:14] * sagelap (~sage@2600:1012:b00a:5524:f8d2:767e:3963:f98d) has joined #ceph
[4:14] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[4:14] * yy-nm (~chatzilla@218.74.32.66) has joined #ceph
[4:14] * huangjun (~kvirc@111.175.164.32) has joined #ceph
[4:14] * yehudasa__ (~yehudasa@2602:306:330b:1410:ea03:9aff:fe98:e8ff) has joined #ceph
[4:14] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[4:14] * Enigmagic (~nathan@c-98-234-189-23.hsd1.ca.comcast.net) has joined #ceph
[4:14] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[4:14] * janos (~janos@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[4:14] * jjgalvez1 (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) has joined #ceph
[4:14] * sagewk (~sage@38.122.20.226) has joined #ceph
[4:14] * yehudasa (~yehudasa@38.122.20.226) has joined #ceph
[4:14] * lupine (~lupine@lupine.me.uk) has joined #ceph
[4:14] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[4:14] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[4:14] * JM (~oftc-webi@193.252.138.241) has joined #ceph
[4:14] * fridudad (~oftc-webi@fw-office.allied-internet.ag) has joined #ceph
[4:14] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:14] * ismell_ (~ismell@host-24-56-171-198.beyondbb.com) has joined #ceph
[4:14] * Meths (rift@2.25.189.113) has joined #ceph
[4:14] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[4:14] * ntranger (~ntranger@proxy2.wolfram.com) has joined #ceph
[4:14] * loicd (~loicd@bouncer.dachary.org) has joined #ceph
[4:14] * zynzel (zynzel@spof.pl) has joined #ceph
[4:14] * via (~via@smtp2.matthewvia.info) has joined #ceph
[4:14] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[4:14] * gregaf1 (~Adium@38.122.20.226) has joined #ceph
[4:14] * MooingLemur (~troy@phx-pnap.pinchaser.com) has joined #ceph
[4:14] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[4:14] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[4:14] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[4:14] * houkouonchi-home (~linux@pool-108-38-63-48.lsanca.fios.verizon.net) has joined #ceph
[4:14] * _Tass4da1 (~tassadar@tassadar.xs4all.nl) has joined #ceph
[4:14] * Svedrin (svedrin@ketos.funzt-halt.net) has joined #ceph
[4:14] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[4:14] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) has joined #ceph
[4:14] * sig_wall (~adjkru@185.14.185.91) has joined #ceph
[4:14] * jtang (~jtang@sgenomics.org) has joined #ceph
[4:14] * Zethrok (~martin@95.154.26.34) has joined #ceph
[4:14] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[4:14] * lurbs (user@uber.geek.nz) has joined #ceph
[4:14] * MapspaM (~clint@xencbyrum2.srihosting.com) has joined #ceph
[4:14] * dalegaar1 (~dalegaard@vps.devrandom.dk) has joined #ceph
[4:14] * todin_ (tuxadero@kudu.in-berlin.de) has joined #ceph
[4:14] * hflai_ (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[4:14] * ofu_ (ofu@dedi3.fuckner.net) has joined #ceph
[4:14] * nhm (~nhm@184-97-255-87.mpls.qwest.net) has joined #ceph
[4:14] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Max SendQ exceeded)
[4:15] * ChanServ sets mode +o sagewk
[4:16] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[4:23] <phantomcircuit> pgmap v6844865: 576 pgs: 1 stale+active+clean+scrubbing+deep, 29 active+clean, 540 stale+active+clean, 2 active+recovering+degraded, 2 stale+active+degraded+wait_backfill, 2 stale+active+degraded+backfilling; 725 GB data, 403 GB used, 6465 GB / 6869 GB avail; 5736/384288 degraded (1.493%); 5/192144 unfound (0.003%)
[4:23] <phantomcircuit> so apparently im screwed
[4:23] <phantomcircuit> :/
[4:23] <phantomcircuit> rbd volume data is on those 5 unfound pgs
[4:29] <dmick> do you know what happened to them?
[4:30] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:31] <phantomcircuit> dmick, im trying to remove 2 of 4 osds
[4:31] <phantomcircuit> i set them to out and waited for about 4 hours
[4:32] <dmick> and the cluster was healthy beforehand?
[4:32] <phantomcircuit> yeah
[4:32] <phantomcircuit> well for now i'm going to turn them back on so they can be found
[4:33] <yy-nm> Can the 2 remaining OSDs hold all the data?
[4:33] <phantomcircuit> yeah easily
[4:34] <phantomcircuit> hmm
[4:35] <phantomcircuit> osd.3 heartbeat_map is_healthy 'OSD::op_tp thread 0x7fdae5d73700' had timed out after 15
[4:35] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[4:36] <phantomcircuit> that makes zero sense
[4:38] <phantomcircuit> nothing is more than 1 ms away from anything else
[4:38] <dmick> that message probably doesn't mean what you think it means
[4:38] <phantomcircuit> it's the only thing in osd.3's logs
[4:44] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[4:45] <dmick> sjustlaptop: what exactly does "osd.3 heartbeat_map is_healthy 'OSD::op_tp thread 0x7fdae5d73700' had timed out after 15" mean?
[4:53] * markbby (~Adium@168.94.245.3) has joined #ceph
[4:53] * markbby1 (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[4:55] <phantomcircuit> dmick, im going to try this a different way
[4:56] <phantomcircuit> i've turned on the two osd's that i removed and set them to up/in
[4:56] <phantomcircuit> im going to wait for everything to be active+clean
[4:56] * yanzheng (~zhyan@134.134.139.74) has joined #ceph
[4:56] * sagelap1 (~sage@76.89.177.113) has joined #ceph
[4:56] <phantomcircuit> and then im just going to stop one of the osd's that i want to kill
[4:56] <phantomcircuit> wait for it to be marked as out automatically
[4:56] <phantomcircuit> and wait for everything to be marked active+clean again
[4:57] <dmick> one at a time is probably good, if the cluster will respond sensibly to your plan
[4:57] <phantomcircuit> im guessing it wont but it's worth a try
[5:00] <phantomcircuit> dmick, a number of pg's are listed as active+remapped+backfilling
[5:00] <phantomcircuit> why would they be listed as remapped for extended periods
[5:02] * sagelap (~sage@2600:1012:b00a:5524:f8d2:767e:3963:f98d) Quit (Ping timeout: 480 seconds)
[5:04] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[5:06] * fireD_ (~fireD@93-136-12-230.adsl.net.t-com.hr) has joined #ceph
[5:07] <dmick> um
[5:07] * fireD (~fireD@93-142-246-152.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:09] <dmick> say a pg was set to go to 0,2
[5:09] <dmick> (osd 0 primary, osd 2 secondary)
[5:09] <dmick> you take 2 down
[5:09] <dmick> osd 0 will have to replicate it to some other up osd, say, 3
[5:10] <dmick> in that state, it's remapped: it's on a different acting set ([0,3]) than what CRUSH says it should be ([0,2])
[5:10] <dmick> it'll stay that way until 2 comes back and gets its current copy
[5:10] <dmick> or until you change the crushmap
[5:10] <phantomcircuit> ah
[5:11] <phantomcircuit> so setting the osd to out would be changing the crushmap and they should stop being listed as remapped
[5:11] <dmick> no, setting to out doesn't affect the crushmap
[5:12] <phantomcircuit> iirc it changes the weight to 0 also
[5:12] <dmick> the osd weight, yeah. I think the crushmap weight stays what it was
[5:13] <phantomcircuit> oh
[5:13] <dmick> but osd weight == 0 is the same as "out" and osd weight == 1 is the same as "in"
[5:13] <phantomcircuit> item osd.0 weight -0.000
[5:13] <phantomcircuit> item osd.1 weight -0.000
[5:13] <phantomcircuit> also negative zero
[5:13] <phantomcircuit> wat
[5:13] <dmick> what's that from? a crushmap decompile?
[5:13] <phantomcircuit> yeah
[5:14] <dmick> hm. well, maybe the crush weight is also affected; I didn't think so
[5:14] <dmick> let me see
[5:17] <dmick> no, at least not in the current version. You can see both weights in 'ceph osd tree'
[5:18] * jianpeng (~majianpen@218.242.10.181) has joined #ceph
[5:19] <dmick> (for example, I have a test cluster with 2 osds; I just took one out, and all my pgs are now remapped because they map to the one remaining OSD, but crush still says they should go to both)
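
For reference, the commands involved in the distinction dmick is drawing (cuttlefish-era syntax; osd.2 is just an example id):

    ceph osd tree                      # shows the crush weight and the in/out reweight side by side
    ceph osd out 2                     # marks osd.2 "out" (reweight 0); the crush weight is untouched
    ceph osd in 2                      # back to "in" (reweight 1)
    ceph osd crush reweight osd.2 0    # this is the command that actually changes the crush weight
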
[5:19] <acaos> so has anyone noticed that chooseleaf rules with large numbers of OSDs have poor performance?
[5:20] <dmick> not to my knowledge, and I'd be somewhat surprised. How are you measuring crush rule performance?
[5:20] <acaos> by attempting to build a fresh ceph cluster
[5:20] <acaos> 480 OSDs across 30 hosts
[5:21] <phantomcircuit> huh that's interesting
[5:21] <phantomcircuit> osd weight is 1, crushmap weight is uh -3.052e-05
[5:22] <acaos> when I use 'step choose firstn 1 type rack / step choose firstn 0 type host / step choose firstn 1 type device', it's VERY fast to boot up new OSDs
[5:22] <acaos> when I use 'step choose firstn 1 type rack / step chooseleaf firstn 0 type host' it's VERY slow
[5:22] <acaos> that's the only change
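
Written out as full crushmap rules, the two variants acaos is comparing look roughly like this (rule names, ruleset numbers and the root bucket "default" are placeholders):

    rule fast-choose {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step choose firstn 1 type rack
            step choose firstn 0 type host
            step choose firstn 1 type device
            step emit
    }

    rule slow-chooseleaf {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default
            step choose firstn 1 type rack
            step chooseleaf firstn 0 type host
            step emit
    }
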
[5:22] <dmick> phantomcircuit: try looking at ceph osd crush dump
[5:22] <dmick> (the actual storage is integers, used as fixed-point floats)
[5:23] <acaos> (and in the second scenario mon CPU usage hits 100% each time an OSD is added, which takes ~5-10 seconds)
[5:23] <dmick> acaos: hm
[5:23] <acaos> I've already fiddled with the crush tunables too
[5:23] <phantomcircuit> dmick, the host entry for the 2 osds which im trying to remove is set to 0
[5:23] <phantomcircuit> and now the osd entries are also
[5:23] <acaos> (though I will admit there may be some I have not tried)
[5:23] <phantomcircuit> the rule is chooseleaf osd
[5:26] * zackc (~zack@65-36-76-12.dyn.grandenetworks.net) has joined #ceph
[5:27] * zackc is now known as Guest988
[5:28] <dmick> acaos: I'm not solid enough on crush rules to know for sure what those configs imply, I guess, but regardless, I would not expect crushmaps to really affect new OSD add time except as it affects data movement for rebalancing with a new set
[5:31] <acaos> it definitely does, I suspect because the chooseleaf is continuously colliding and restarting from scratch
[5:31] <acaos> it's not just add time though, it's pretty much any operation which might touch the crush map
[5:32] <lurbs> It shouldn't make a huge difference, but which type of algorithm is being used at each level?
[5:33] <acaos> I've tried tree at all but the bottom (which is straw), and straw at all levels
[5:33] <acaos> by bottom I mean the host level
[5:34] <acaos> so it's tree/tree/straw and straw/straw/straw
[5:34] <acaos> straw/straw/straw is actually even worse than tree/tree/straw
[5:37] <dmick> trying to experiment with crushtool --test, and I keep making maps which break crushtool
[5:37] <dmick> acaos: are you using repl size 2?
[5:38] <acaos> I've used both 2 and 3
[5:38] * eternaleye (~eternaley@2002:3284:29cb::1) Quit (Ping timeout: 480 seconds)
[5:38] <acaos> currently using 2
[5:39] * matt__ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[5:39] <dmick> so the non-chooseleaf version is intended to pick two hosts, one dev on each, as is the chooseleaf version, then?
[5:39] * julian (~julianwa@125.70.135.241) has joined #ceph
[5:39] <acaos> one group, then two hosts within that group, then one device on each, yes
[5:40] <dmick> do all the hosts in the map have at least one osd up?
[5:40] <acaos> none have any OSDs up yet because the cluster is still being built
[5:40] <acaos> this is trying to do the initial setup of the OSDs
[5:41] <dmick> but....if there's no data, then crush is irrelevant. I don't get it.
[5:41] * eternaleye (~eternaley@2002:3284:29cb::1) has joined #ceph
[5:41] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:41] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[5:41] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[5:41] <acaos> it's still used to determine which OSDs get which PGs
[5:42] <acaos> so when the OSDs boot up, the monitors do a crush map calculation to generate the pg map
[5:42] <dmick> ok. I guess that's true
[5:43] <acaos> (and of course, as each one boots, the pg map is recalculated)
[5:46] <dmick> ok. so "chooseleaf with a large crushmap but few OSDs up slows OSD adds and makes the mons work hard". I wonder what happens if you use chooseleaf, but leave the map mostly empty until the OSDs are actually running, and then inject the new map
[5:47] <acaos> hmm, that's not a bad idea
[5:47] <acaos> let me try that, give me a few
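
A hedged sketch of the bring-up-first, inject-later flow dmick is suggesting (file names are placeholders; crushtool --test flags as documented for this era):

    ceph osd getcrushmap -o crushmap.bin        # grab the current (minimal) map
    crushtool -d crushmap.bin -o crushmap.txt   # decompile, then edit in the full hierarchy
    crushtool -c crushmap.txt -o crushmap.new   # recompile
    crushtool -i crushmap.new --test --num-rep 2 --show-bad-mappings   # sanity-check mappings offline
    ceph osd setcrushmap -i crushmap.new        # inject once the OSDs are up and in
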
[5:48] <dmick> so without chooseleaf, I assume the pgmap just ends up having a bunch of unmappable pgs? because I think choose doesn't backtrack, so if it chooses a rack/host that has no OSDs, you're just done, right?
[5:52] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[5:52] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[5:52] <acaos> correct
[5:53] <acaos> (of course, the configuration is written to preclude that scenario in most cases)
[5:53] <acaos> that definitely seems to have done the trick, though
[5:53] * markbby (~Adium@168.94.245.2) has joined #ceph
[5:53] <acaos> at least, I can now build the cluster
[5:54] <acaos> let me see about updating the map
[5:55] <dmick> not sure what you mean by "configuration is written to preclude.."
[5:55] * markbby (~Adium@168.94.245.2) Quit ()
[5:55] * markbby (~Adium@168.94.245.2) has joined #ceph
[5:56] * scuttlemonkey (~scuttlemo@75-150-32-73-Oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[5:56] <acaos> oh, there are no hosts without OSDs defined
[5:56] <dmick> yes, but you're saying there are hosts without OSDs up
[5:57] <acaos> correct - actually, for our use case, that's desired behavior
[5:57] <acaos> that said, we now have a different use case for this built which required chooseleaf
[5:57] <acaos> er, this build
[5:58] <dmick> but if you're first choosing n-1 racks, and then n hosts, if you choose n hosts that don't both have at least one OSD, that can't result in a valid PG mapping, is what I'm trying to say
[5:58] <dmick> using chooseleaf works a lot harder to try to satisfy, is my theory, because it retries, whereas choose/choose/choose simply fails
[5:59] <acaos> oh, I know
[5:59] <acaos> but in this case, for initial cluster setup
[5:59] <acaos> there could not be a valid pg mapping
[5:59] <acaos> since the OSDs hadn't even been brought up yet
[6:00] <acaos> and it made bringing them up take much longer than it should have
[6:00] <acaos> since it was thrashing every time one came in
[6:02] <dmick> ah. so if you knew all that you could have saved me some time :)
[6:02] <acaos> well, you came up with the idea of using an empty crush map
[6:02] <acaos> which I hadn't thought of
[6:02] <acaos> so, thank you =)
[6:02] <dmick> but, yes, I believe you're right, with that sort of cluster bringup procedure, things are gonna be thrashy
[6:02] <dmick> heh, yw
[6:03] <acaos> I guess most people bring up their cluster before setting their crush map, then
[6:03] <dmick> I wonder what, say, the chef cookbooks do wrt the crushmap
[6:03] <dmick> I don't rightly know
[6:04] <dmick> I *suspect* that most 'industrial strength' deployment strategies treat every osd as "you may be adding this to a live cluster, so, do it all: init, bring up, add to crush map'
[6:04] <dmick> but I don't know
[6:05] <acaos> yeah, we've had .. less than stellar success with adding things to running clusters like that
[6:05] <acaos> the problem being that when you add something to a crushmap, the container weights change, causing a massive cluster rebalance .. so we prefer to adjust the internal weights within a container so the entire container maintains the same total weight
[6:08] <dmick> in this case, however (a fresh cluster), the rebalance should be mostly a noop
[6:09] <acaos> yep
[6:10] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[6:10] <dmick> I would think that for adding a new OSD to a running cluster, you're in control of the weight you use when adding, first of all, but second, I would also think that changed weight would mostly affect new write placement. Maybe I misunderstand just what happens on a crushmap update.
[6:10] <acaos> crushmap update = all PGs get rebalanced to the new crushmap
[6:11] <acaos> if you have containers (racks, etc) that change weights
[6:11] <acaos> that means you may well have significant rebalance between racks
[6:11] <dmick> that shouldn't be the case. that's the 'stable' part of the hash
[6:11] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit ()
[6:12] <dmick> but I'm obviously stretching here. It's a topic I'd like to discuss with Sage tomorrow
[6:14] <acaos> it's definitely at least somewhat stable, but we've had cases where half or more of our PGs move around with even a small change to one weight
[6:15] <acaos> (whether we use tree/tree/straw or straw/straw/straw)
[6:16] <acaos> anyway, thanks for your help, starting with a much smaller crushmap really made it work much faster
[6:16] <dmick> cool
[6:16] <dmick> and I'll try to at least educate myself more on the effects of reweighting, because that doesn't match what I thought I knew, so one or the other are wrong :)
[6:16] <dmick> (or maybe it is/was a bug. what version, out of curiosity?)
[6:17] <acaos> currently using 0.61.4
[6:17] <acaos> most of our reweighting experience has been from argonaut and earlier, so we know there have been some fixes since then
[6:17] <dmick> ok
[6:18] <acaos> but I actually have a test reweighting scenario on our lab
[6:18] <acaos> so I'll probably try and run it against 0.61
[6:18] <acaos> and see how that handles it
[6:18] <acaos> we know under argonaut it brought the entire lab to its knees in a few minutes
[6:18] <dmick> if you could, bring that up on the mailing list
[6:18] <acaos> I will verify it still exists under 0.61
[6:18] <acaos> before wasting anyone's time
[6:19] <dmick> thanks!
[6:19] <acaos> the crushmap issue for building fresh clusters though, I will bring that up
[6:19] <dmick> and yes, by all means, that too
[6:19] <acaos> at least a warning to not set your crushmap before your cluster is up
[6:27] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[6:32] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[6:32] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[7:02] * julian (~julianwa@125.70.135.241) Quit (Quit: afk)
[7:17] * capri (~capri@212.218.127.222) has joined #ceph
[7:24] * lx0 is now known as lxo
[7:27] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[7:30] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Quit: erice)
[7:49] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: Clap on! , Clap off! Clap@#&$NO CARRIER)
[7:57] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:58] * Guest988 (~zack@65-36-76-12.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[8:00] * sagelap1 (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[8:07] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[8:08] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[8:10] * sleinen (~Adium@2001:620:0:2d:1f3:bfcd:1a62:ce77) has joined #ceph
[8:11] * sleinen1 (~Adium@2001:620:0:26:41dd:fe8:85d2:a1e9) has joined #ceph
[8:18] * sleinen (~Adium@2001:620:0:2d:1f3:bfcd:1a62:ce77) Quit (Ping timeout: 480 seconds)
[8:27] * AfC (~andrew@2001:44b8:31cb:d400:997e:78b7:e195:37cd) has joined #ceph
[8:34] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[8:43] * capri (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[8:44] * capri (~capri@212.218.127.222) has joined #ceph
[8:47] * toabctl (~toabctl@toabctl.de) Quit (Quit: WeeChat 0.3.7)
[9:06] * madkiss (~madkiss@2001:6f8:12c3:f00f:390c:5c2b:7291:22c4) Quit (Read error: Network is unreachable)
[9:06] * madkiss (~madkiss@2001:6f8:12c3:f00f:390c:5c2b:7291:22c4) has joined #ceph
[9:07] * Tamil1 (~tamil@38.122.20.226) has joined #ceph
[9:12] * Tamil (~tamil@38.122.20.226) Quit (Ping timeout: 480 seconds)
[9:13] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[9:16] * mschiff (~mschiff@p4FD7D94C.dip0.t-ipconnect.de) has joined #ceph
[9:16] * mxmln_ (~mxmln@212.79.49.65) has joined #ceph
[9:19] * scuttlemonkey (~scuttlemo@75-150-32-73-Oregon.hfc.comcastbusiness.net) has joined #ceph
[9:19] * ChanServ sets mode +o scuttlemonkey
[9:27] * waxzce (~waxzce@2a01:e35:2e1e:260:69e4:b92f:54f7:c99c) has joined #ceph
[9:29] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[9:34] * jianpeng (~majianpen@218.242.10.181) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[9:36] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:36] * leseb (~Adium@83.167.43.235) has joined #ceph
[9:39] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit ()
[9:41] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:47] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[9:48] * waxzce (~waxzce@2a01:e35:2e1e:260:69e4:b92f:54f7:c99c) Quit (Remote host closed the connection)
[9:54] * sleinen (~Adium@2001:620:0:2d:54e5:6b3f:98df:99b3) has joined #ceph
[9:58] * sleinen2 (~Adium@2001:620:0:26:85fd:19c5:3040:568) has joined #ceph
[10:00] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:00] * sleinen1 (~Adium@2001:620:0:26:41dd:fe8:85d2:a1e9) Quit (Ping timeout: 480 seconds)
[10:04] * sleinen (~Adium@2001:620:0:2d:54e5:6b3f:98df:99b3) Quit (Ping timeout: 480 seconds)
[10:07] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:08] * LeaChim (~LeaChim@0540adc6.skybroadband.com) has joined #ceph
[10:16] * scuttlemonkey (~scuttlemo@75-150-32-73-Oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[10:17] * mschiff (~mschiff@p4FD7D94C.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[10:17] * mschiff (~mschiff@p4FD7D94C.dip0.t-ipconnect.de) has joined #ceph
[10:19] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:20] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[10:25] <paravoid> sage: upgraded everything to 0.67-rc2; seems to work so far
[10:25] * lx0 is now known as lxo
[10:29] * sleinen2 (~Adium@2001:620:0:26:85fd:19c5:3040:568) Quit (Quit: Leaving.)
[10:29] * sleinen (~Adium@130.59.94.169) has joined #ceph
[10:33] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:390c:5c2b:7291:22c4) has joined #ceph
[10:33] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[10:35] * sleinen1 (~Adium@2001:620:0:25:c14a:cfcf:92a5:fb9c) has joined #ceph
[10:35] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[10:37] * sleinen (~Adium@130.59.94.169) Quit (Ping timeout: 480 seconds)
[10:38] * kenneth (~kenneth@202.60.8.252) has joined #ceph
[10:38] * madkiss (~madkiss@2001:6f8:12c3:f00f:390c:5c2b:7291:22c4) Quit (Ping timeout: 480 seconds)
[10:39] <kenneth> hi all!
[10:39] <kenneth> i have 3 ceph nodes with 2 ethernet interfaces each
[10:40] <kenneth> how do I configure it so that the ceph nodes talk on eth0 and the clients talk on the eth1 network?
[10:40] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[10:47] <Gugge-47527> kenneth: that is what the cluster network is for
[10:48] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[10:48] <kenneth> yup.. i got a cluster working with only 1 NIC, now that I got a new gigabit switch, I can use the 2 NICs and separate the cluster network from the public network
[10:48] <kenneth> i don't know where to start
[10:49] * xdeller (~xdeller@91.218.144.129) Quit (Quit: Leaving)
[10:50] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[10:52] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) Quit (Quit: Bye)
[10:52] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) has joined #ceph
[10:59] <Gugge-47527> kenneth: i havent tried, but i would stop the osd's and set the cluster network in the conf, and start the osds again
[11:00] <kenneth> i'll try it.. public network = 10.2.0.0/24 cluster network = 10.1.0.0/24 in the ceph.conf file right?
[11:00] <Gugge-47527> yes
[11:01] <Gugge-47527> if 10.2.0.0/24 is your current network :)
[11:01] <Gugge-47527> and 10.1.0.0/24 is the new network
[11:02] <kenneth> and what is the ceph conf to be used for the public clients? the same as the ceph nodes conf?
[11:05] <Gugge-47527> the clients only really needs the mon addresses
[11:05] <Gugge-47527> everything else they get from the mons
[11:05] <Gugge-47527> but nothing wrong with using the same conf everywhere, if that is easier :)
[11:05] <kenneth> so i'll give them the address 10.2.0.0/24 for their mons?
[11:05] <Gugge-47527> huh?
[11:05] <Gugge-47527> mon_host = ip, ip, ip
[11:05] <Gugge-47527> you cant put a network in mon_host
[11:08] <kenneth> my setup: 3 nodes, 2 osd each, 1 monitor each, 2 NIC each. i configured 10.1.0.0/24 network for cluster network (eth0) and 10.2.0.0/24 (eth1)
[11:08] <kenneth> then I add a client, connecting to the 10.2.0.0/24 network. I can't seem to get a ceph-OK
[11:09] <Gugge-47527> what about ceph -s from one of your 3 nodes?
[11:09] <Gugge-47527> does that show ok?
[11:09] <kenneth> yes
[11:10] <Gugge-47527> paste the client ceph.conf somewhere
[11:10] <kenneth> before i added the public and cluster? or when they are all running in the same network?
[11:11] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[11:11] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[11:11] <kenneth> I previously had a running ceph cluster with clients when they were all on the 10.1.0.0 network
[11:11] <Gugge-47527> the one you are trying to use now :)
[11:11] <Gugge-47527> the nonworking one you use right now
[11:12] <kenneth> here, hope you can decipher it :)
[11:12] <kenneth> [global]
[11:12] <kenneth> public network = 10.2.0.0/24
[11:12] <kenneth> cluster network = 10.1.0.0/24
[11:12] <kenneth> fsid = 6e40bd1f-acd6-4e26-8bbb-f2382c666914
[11:12] <kenneth> mon_initial_members = ceph-node1, ceph-node2, ceph-node3
[11:12] <kenneth> mon_host = 10.2.0.11,10.2.0.12,10.2.0.13
[11:12] <Gugge-47527> with (hopefully) the "mon_host = mon1ip, mon2ip, mon3ip" in it
[11:12] <kenneth> auth_supported = cephx
[11:12] <kenneth> osd_journal_size = 1024
[11:12] <kenneth> filestore_xattr_use_omap = true
[11:12] <Gugge-47527> noooo, dont paste in the channel
[11:12] <kenneth> sorry, but that's the whole of it..
[11:12] <kenneth> (and the channel is not very busy)
[11:12] <Gugge-47527> what error does ceph -s give you?
[11:13] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[11:13] <kenneth> on ceph nodes they are Health OK
[11:13] <kenneth> on clients
[11:13] <kenneth> 2013-07-25 17:12:58.363508 7f225c817700 0 -- :/2309 >> 10.2.0.11:6789/0 pipe(0x2979560 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
[11:13] <kenneth> 2013-07-25 17:13:01.363626 7f2262f46700 0 -- :/2309 >> 10.2.0.13:6789/0 pipe(0x7f2254000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1).fault
[11:13] <kenneth> 2013-07-25 17:13:04.363821 7f225c817700 0 -- :/2309 >> 10.2.0.12:6789/0 pipe(0x7f2254003010 sd=4 :0 s=1 pgs=0 cs=0 l=1).fault
[11:13] <Gugge-47527> noooo, dont paste in the channel
[11:14] <Gugge-47527> and the monitors are listening to those 3 addresses?
[11:14] <paravoid> three lines doesn't really count as paste does it
[11:15] <Gugge-47527> and what ip does the client have?
[11:19] <kenneth> sorry, the errors are just repeating...anyway, the ceph nodes are listening to 10.1.0.0/24 network
[11:19] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit ()
[11:19] <kenneth> that is the ceph cluster network.. the public network is 10.2.0.0/24
[11:19] <Gugge-47527> the monitors should only listen on the public network
[11:19] <Gugge-47527> if you changed the mon ip's i guess you have to remove/add each monitor one at a time, to get them to use the new addresses
[11:19] <Gugge-47527> (you should have kept the public network, and only added the cluster, not change both)
[11:19] <kenneth> how about the OSDs? will they transfer the data replication in the 10.1.0.0 network even though the monitors are in the 10.2.0.0 network?
[11:19] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[11:19] <Gugge-47527> osd's will do crosstraffic on the cluster network
[11:19] <Gugge-47527> and talk with the monitor and clients on the public network
[11:19] <kenneth> ahhh ok i'll try it again...
[11:19] <kenneth> I need to remove a monitor and add it again with a new IP???
[11:19] <kenneth> can't that be done on the conf file?
[11:19] <Gugge-47527> it can be done by changing the monmap as far as i know
[11:21] <kenneth> thanks for the info...so the monitor should be the one to talk to the public...i may have a lot of work to do
[11:21] <Gugge-47527> yes, clients need access to monitors and osd's on the public network
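For reference, re-addressing an existing monitor by editing the monmap is roughly the sequence sketched below; the mon name, temp path and 10.2.0.x address are placeholders, and the mon has to be stopped while its map is swapped (the add/remove-monitor docs cover the details).

    # dump the current monmap from a stopped monitor's data dir
    ceph-mon -i ceph-node1 --extract-monmap /tmp/monmap
    # drop the old entry and re-add the same mon with its public-network address
    monmaptool --rm ceph-node1 /tmp/monmap
    monmaptool --add ceph-node1 10.2.0.11:6789 /tmp/monmap
    # push the edited map back in and restart the monitor
    ceph-mon -i ceph-node1 --inject-monmap /tmp/monmap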
[11:31] <kenneth> how can i be sure that ceph replication is on the cluster network? just a simple cluster_network = 10.1.0.0/24 line in the conf?
[11:32] * jjgalvez1 (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) Quit (Quit: Leaving.)
[11:38] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:42] <Gugge-47527> you can check the traffic on the network
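Two quick ways to check, sketched with the interface layout mentioned earlier in this conversation (eth0 = cluster, eth1 = public); iftop is only one example of a traffic viewer.

    # the osd map records the public and cluster address each osd bound to
    ceph osd dump | grep ^osd
    # or watch the two NICs while a client writes and replication runs
    iftop -i eth0   # cluster network, 10.1.0.0/24
    iftop -i eth1   # public network, 10.2.0.0/24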
[11:44] * bwesemann_ (~bwesemann@2001:1b30:0:6:edc3:28ca:34e8:47d7) Quit (Remote host closed the connection)
[11:44] <kenneth> ok..thanks for the advise
[11:45] <kenneth> advice
[11:53] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[11:54] * yanzheng (~zhyan@134.134.139.74) Quit (Remote host closed the connection)
[11:55] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[12:01] * erwan_taf is wondering something about ceph
[12:02] <erwan_taf> I'm running 35 OSD on my server and I found that while stressing them, they feel very "polite" and "organized" regarding the cpu load like : https://paste.ring.enovance.com/view/dd431b53
[12:02] <erwan_taf> looks like a pyramid of cpu load from 14 to 100%
[12:03] <erwan_taf> very few %wa
[12:03] <erwan_taf> so the cpus are not stuck waiting for data
[12:03] <erwan_taf> has anyone already seen such behavior?
[12:06] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[12:06] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit ()
[12:07] <Anticimex> erwan_taf: your pastie required login
[12:07] <erwan_taf> arg
[12:07] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:08] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (Ping timeout: 480 seconds)
[12:08] <erwan_taf> http://pastebin.com/iL4bkYCB
[12:08] <erwan_taf> that should be better
[12:10] * zynzel (zynzel@spof.pl) Quit (Read error: Connection reset by peer)
[12:10] * zynzel (zynzel@spof.pl) has joined #ceph
[12:12] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[12:13] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[12:18] * bwesemann (~bwesemann@2001:1b30:0:6:bc92:1101:ba5e:bd27) has joined #ceph
[12:25] <Gugge-47527> erwan_taf: what is wrong with using little cpu? :)
[12:25] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Read error: Connection reset by peer)
[12:41] * huangjun (~kvirc@111.175.164.32) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[12:46] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[12:48] * yy-nm (~chatzilla@218.74.32.66) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212])
[12:59] * kenneth (~kenneth@202.60.8.252) Quit (Ping timeout: 480 seconds)
[13:12] * skatteola (~david@c-0784e455.16-0154-74657210.cust.bredbandsbolaget.se) has joined #ceph
[13:33] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[13:33] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:42] * markbby (~Adium@168.94.245.2) has joined #ceph
[13:43] <niklas> How does librados communicate to RADOS?
[13:44] <niklas> Obviously using the network, but is the protocol documented?
[13:44] * yanzheng (~zhyan@134.134.137.73) has joined #ceph
[13:49] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[13:49] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[13:51] * dutchie (~josh@2001:ba8:1f1:f092::2) has joined #ceph
[13:51] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[13:52] <dutchie> hi, i'm having issues that look similar to http://tracker.ceph.com/issues/5205 and http://tracker.ceph.com/issues/5195 even though they should have been fixed in 0.61.5 and I have .6
[13:53] <dutchie> i did "ceph-deploy new host{1..3}; ceph-deploy install host{1..3}; ceph mon create host{1..3}" and then it fails to start
[13:53] <dutchie> that same assertion failure appears in the logs
[13:54] <dutchie> i've tried both with and without having public network set in ceph.conf (assuming changing the one that ceph-deploy new made before doing the rest is the Right Thing)
[13:59] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:01] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[14:04] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:18] <janos> if i have a host whose osd's are getting full
[14:19] <janos> can i add an osd to another host to alleviate the fullness?
[14:19] <janos> or do i really need to add to that host/failed domain
[14:19] <janos> failure not failed
[14:24] <joelio> janos: afaik if you add more OSDs, there should be some rebalancing that happens transparently across all OSDs (best to check though)
[14:24] <joelio> shouldn't need to add to any specific host, but not sure of setup/crushmap details etc.
[14:24] <janos> the crush is pretty stock
[14:24] <janos> 3 hosts
[14:24] <joelio> yea, should 'just work' then
[14:25] <janos> i'll give that a shot
[14:25] <janos> thanks!
[14:25] <joelio> n/p
[14:26] * yanzheng (~zhyan@134.134.137.73) Quit (Remote host closed the connection)
[14:28] * markbby (~Adium@168.94.245.2) has joined #ceph
[14:32] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[14:32] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[14:37] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit ()
[14:37] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[14:38] * Vulture (~kvirc@office.meganet.ru) has joined #ceph
[14:39] * yanzheng (~zhyan@134.134.137.73) has joined #ceph
[14:39] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit ()
[14:40] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[14:46] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:51] * waxzce (~waxzce@2a01:e34:ee97:c5c0:4446:12fd:605:29c9) has joined #ceph
[14:53] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[14:53] * yanzheng (~zhyan@134.134.137.73) Quit (Remote host closed the connection)
[14:54] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[15:03] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[15:06] * diegows (~diegows@190.190.2.126) has joined #ceph
[15:06] * waxzce_ (~waxzce@office.clever-cloud.com) has joined #ceph
[15:11] * jerrad (~jerrad@dhcp-63-251-67-70.acs.internap.com) has left #ceph
[15:12] * john_barbee (~jbarbee@173-16-234-208.client.mchsi.com) has joined #ceph
[15:14] * waxzce (~waxzce@2a01:e34:ee97:c5c0:4446:12fd:605:29c9) Quit (Ping timeout: 480 seconds)
[15:16] <iggy> janos: make sure if it's a different size, you give it a different weight
[15:18] <janos> iggy: will do
[15:18] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[15:18] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:18] <janos> this one will be only slightly diff. the ssd journal disk is max'd, so i'll be making a small journal partition on this disk
[15:18] <janos> temp to alleviate some strain
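For a disk of a different size, the CRUSH weight is the knob to adjust once the osd is in; a rough sketch, where osd.6 and the 0.5 weight are made-up values (by the common convention, weight roughly tracks size in TB):

    # check where the new osd landed in the tree and what weight it got
    ceph osd tree
    # give the smaller disk a proportionally smaller weight, e.g. ~500 GB -> 0.5
    ceph osd crush reweight osd.6 0.5
    # then watch the rebalance progress
    ceph -w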
[15:20] * huangjun (~kvirc@221.234.36.134) has joined #ceph
[15:26] * agh (~oftc-webi@gw-to-666.outscale.net) has joined #ceph
[15:26] * matt__ (~matt@220-245-1-152.static.tpgi.com.au) Quit (Quit: Leaving)
[15:26] <agh> question: is it possible to deploy monitors that are not in the "public network" defined in ceph.conf, but accessible via routing?
[15:28] <agh> example, my "public network" is 192.168.0.0/24, can i have a monitor in 172.18.0.0/24 ?
[15:30] <alfredodeza> agh: when you say "deploy monitors" you mean via ceph-deploy or some other means?
[15:30] <agh> via ceph-deploy, or not.
[15:30] <alfredodeza> the requirement for that to work with ceph-deploy is that it can reach the host
[15:30] <alfredodeza> so if you can route to it, ceph-deploy should
[15:30] <agh> "deploy" was not linked to "ceph-delpoy"
[15:30] <alfredodeza> right, just mentioning it because I was not sure
[15:30] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[15:34] <agh> alfredodeza: ok. thanks. so i'll try to do so. I was afraid that monitors HAVE TO be in the same subnet
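In that setup mon_host just lists whatever addresses the clients can reach, even across subnets; a hypothetical sketch using the networks mentioned above (names and addresses are placeholders):

    [global]
        # two monitors on the local public network, one reachable only via a router
        mon_initial_members = mon-a, mon-b, mon-c
        mon_host = 192.168.0.11, 192.168.0.12, 172.18.0.11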
[15:40] * Vulture (~kvirc@office.meganet.ru) Quit (Quit: KVIrc 4.3.1 Aria http://www.kvirc.net/)
[15:42] * Vulture (~kvirc@office.meganet.ru) has joined #ceph
[15:44] * BillK (~BillK-OFT@124-169-67-32.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:45] * Vulture (~kvirc@office.meganet.ru) Quit ()
[15:52] * fridudad (~oftc-webi@fw-office.allied-internet.ag) Quit (Remote host closed the connection)
[15:57] <loicd> ccourtaut: hi
[15:58] * mxmln (~maximilia@212.79.49.65) Quit (Remote host closed the connection)
[15:58] * mxmln_ is now known as mxmln
[15:58] * mxmln3 (~maximilia@212.79.49.65) has joined #ceph
[16:11] * bergerx_ (~bekir@78.188.101.175) Quit (Ping timeout: 480 seconds)
[16:11] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[16:12] * skm (~smiley@205.153.36.170) Quit (Quit: Leaving.)
[16:15] * zackc (~zack@65-36-76-12.dyn.grandenetworks.net) has joined #ceph
[16:15] * zackc is now known as Guest1058
[16:18] * aliguori (~anthony@32.97.110.51) has joined #ceph
[16:28] * bergerx_ (~bekir@212.57.23.98) has joined #ceph
[16:30] * danieagle (~Daniel@177.97.248.238) has joined #ceph
[16:30] <jtang> haproxy + radosgw's == cool
[16:36] <nhm> jtang: that's what DH does to. Are you having good success?
[16:36] <nhm> s/to/too
[16:37] <jtang> nhm: its in a test environment but yea
[16:37] <jtang> im using the latest devel version of haproxy so i can terminate ssl at the loadbalancer
[16:37] <jtang> i just got my radosgw's deploying in a pretty automated way, and templated up my haproxy config
[16:38] <jtang> though i am having some fun issues with radosgw itself in relation to acl's and bucket naming conventions
[16:38] <jtang> s3cmd seems to want to create capitalised bucket names
[16:38] <jtang> and duplicity expects a wildcard dns entry for the gateway
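A stripped-down sketch of that kind of haproxy setup, assuming two radosgw instances behind it and the ssl termination from the 1.5 development builds; hostnames, addresses, ports and the cert path are invented.

    frontend rgw_https
        # terminate ssl here (requires an haproxy 1.5-dev build with ssl support)
        bind *:443 ssl crt /etc/haproxy/rgw.pem
        default_backend rgw_nodes

    backend rgw_nodes
        balance roundrobin
        option httpchk GET /
        server rgw1 10.0.0.11:80 check
        server rgw2 10.0.0.12:80 check

For the wildcard-style bucket hostnames that tools like duplicity expect, the usual combination is a wildcard DNS record pointing at the frontend plus radosgw's rgw dns name option set to the same domain.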
[16:39] <nhm> jtang: are you seeing good scaling with more RGW processes?
[16:39] <jtang> nhm: im not doing it for performance, rather im doing haproxy for availability
[16:40] <nhm> ah, ok
[16:40] <jtang> im running them on a private cloud so i can float machines between 'datacenters'
[16:40] <jtang> and for rolling updates
[16:40] <jtang> i guess im planning ahead
[16:40] <nhm> nice
[16:40] <jtang> if i can share the load over two machines and get all the benefits of haproxy im going to run with it
[16:41] <jtang> mind you i can do live migrations so its not a big deal
[16:42] * scuttlemonkey (~scuttlemo@75-150-32-73-Oregon.hfc.comcastbusiness.net) has joined #ceph
[16:42] * ChanServ sets mode +o scuttlemonkey
[16:42] <jtang> the guys here are developing against radosgw for our app
[16:42] <jtang> i think we're pretty happy with it so far
[16:43] <jtang> we're gonna migrate more vm's to using rbd (via opennebula) soonish
[16:44] <nhm> jtang: I've been curious about what's going on with opennebula these days. Is cern still using it?
[16:45] <off_rhoden> nhm: I feel like I always see references to CERN and OpenStack. I didn't know they played with OpenNebula too.
[16:46] <nhm> off_rhoden: apparently CERN is not still using opennebula: http://gigaom.com/2013/05/31/heres-why-cern-ditched-opennebula-for-openstack/
[16:47] * sprachgenerator (~sprachgen@130.202.135.191) has joined #ceph
[16:47] <off_rhoden> oh, excellent find. I'll give it a read.
[16:53] * dobber (~dobber@213.169.45.222) has joined #ceph
[16:59] <nhm> ceph health: Bus error (core dumped)
[16:59] <nhm> ruhroh
[17:01] * jjgalvez (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) has joined #ceph
[17:02] * john_barbee (~jbarbee@173-16-234-208.client.mchsi.com) Quit (Ping timeout: 480 seconds)
[17:05] * Guest1058 (~zack@65-36-76-12.dyn.grandenetworks.net) Quit (Quit: leaving)
[17:05] * bergerx_ (~bekir@212.57.23.98) Quit (Remote host closed the connection)
[17:05] * john_barbee (~jbarbee@173-16-234-208.client.mchsi.com) has joined #ceph
[17:06] * zackc (~zack@0001ba60.user.oftc.net) has joined #ceph
[17:12] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[17:12] * markbby (~Adium@168.94.245.2) has joined #ceph
[17:16] * yeled (~yeled@spodder.com) Quit (Ping timeout: 480 seconds)
[17:17] * john_barbee (~jbarbee@173-16-234-208.client.mchsi.com) Quit (Ping timeout: 480 seconds)
[17:23] * yeled (~yeled@spodder.com) has joined #ceph
[17:31] * mschiff_ (~mschiff@p4FD7D94C.dip0.t-ipconnect.de) has joined #ceph
[17:31] * mschiff (~mschiff@p4FD7D94C.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[17:32] * godog (~filo@0001309c.user.oftc.net) Quit (Quit: Reconnecting)
[17:32] * godog (~filo@esaurito.net) has joined #ceph
[17:33] * scuttlemonkey (~scuttlemo@75-150-32-73-Oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[17:34] * sagelap (~sage@2600:1012:b004:4448:6dfe:4226:98e4:18b4) has joined #ceph
[17:35] * dobber (~dobber@213.169.45.222) Quit (Remote host closed the connection)
[17:39] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:46] <jtang> nhm: dunno tbh
[17:46] <jtang> we have a few ex opennebula/stratuslab peeps here
[17:46] <jtang> i could ask
[17:47] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[17:48] * john_barbee (~jbarbee@173-16-234-208.client.mchsi.com) has joined #ceph
[17:56] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[17:56] <loicd> ccourtaut: regarding the S3 feature list, what would be great is to have a link to the code implementing it as well as the associated unit tests
[17:57] <ccourtaut> loicd: yes, good suggestion
[17:57] <loicd> and the output of a run of the unit tests as well as the code coverage
[17:57] * yehuda_hm (~yehuda@2602:306:330b:1410:baac:6fff:fec5:2aad) has joined #ceph
[17:57] * loicd dreaming ...
[17:57] <ccourtaut> XD
[17:58] <loicd> ahaha
[17:58] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Quit: erice)
[17:58] <loicd> well, when reading such lists, I always wonder how accurate it is.
[17:58] * danieagle (~Daniel@177.97.248.238) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[17:59] * huangjun (~kvirc@221.234.36.134) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[17:59] * sagelap (~sage@2600:1012:b004:4448:6dfe:4226:98e4:18b4) Quit (Read error: Connection reset by peer)
[17:59] <ccourtaut> providing as much information as possible is always a good thing
[18:00] <loicd> For instance http://ceph.com/docs/next/radosgw/swift/#api says CORS Not Supported although https://github.com/ceph/ceph/blob/master/src/rgw/rgw_cors.h
[18:01] * JM (~oftc-webi@193.252.138.241) Quit (Quit: Page closed)
[18:01] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[18:02] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:06] <loicd> huh, looks like Babu isn't committing anymore
[18:06] <loicd> git log --all --author='Shanmugam' -1 => July 9
[18:06] <loicd> and before that, June 28
[18:06] <loicd> too bad :-(
[18:06] <loicd> ahum... wrong language ;-)
[18:07] <ccourtaut> XD
[18:07] <ccourtaut> in portland it's still early, we can understand :D
[18:07] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[18:07] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[18:08] <paravoid> loicd: what was this LFS talk about?
[18:08] <loicd> paravoid: you mean with Chmouel & Dickinson ?
[18:08] <paravoid> yes
[18:09] <paravoid> I saw your mail on ceph-dev
[18:09] <paravoid> but it's a bit cryptic :)
[18:09] <paravoid> what was it about and how does it relate to ceph?
[18:09] <paravoid> (just curious)
[18:10] <ccourtaut> paravoid: afaik, lfs is a project to separate the swift api from its implementation
[18:11] <loicd> paravoid: it is completely cryptic and hopefully chmouel will respond with links to clarify. In a nutshell it's an effort to separate the swift api from the swift implementation. And they are at the stage where they define the driver API. A good time to figure out if ceph could be used as a backend or if it is better to keep chasing the API evolution.
[18:11] <ccourtaut> so the idea might be to use lfs for the swift implementation in ceph
[18:11] <loicd> :-D
[18:11] <ccourtaut> :D
[18:11] <ccourtaut> redundancy sry
[18:11] <paravoid> when you say api, you mean internal api
[18:11] <paravoid> I read something here yesterday and I thought the swift api *specification*
[18:12] <nwl> loicd: morning
[18:12] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:12] <paravoid> which is also something that they're trying to do
[18:12] <loicd> nwl: \o
[18:12] <paravoid> interesting
[18:14] <loicd> paravoid: I did not know about that ( API specs ). And yes, it's an internal API : a backend driver will have to implement this API and would then be accessible thru the swift API layer.
[18:14] <paravoid> yeah, got it now
[18:14] * wer (~wer@55.sub-70-208-144.myvzw.com) has joined #ceph
[18:15] <loicd> yehuda_hm: thinks ( and I tend to trust his judgement ;-) that adapting Ceph to such a drive API will be *more* work than re-implementing the API itself.
[18:15] <loicd> s/drive/driver/
[18:17] * john_barbee (~jbarbee@173-16-234-208.client.mchsi.com) Quit (Ping timeout: 480 seconds)
[18:18] * sprachgenerator (~sprachgen@130.202.135.191) Quit (Read error: Connection reset by peer)
[18:18] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[18:19] * sprachgenerator (~sprachgen@130.202.135.191) has joined #ceph
[18:21] <ntranger> hey all! I got ceph installed, and the config made, but when I try to start ceph, I get "no filesystem type defined". I'm at a loss as to what direction to take from here
[18:23] <sagewk> what do you mean by "start ceph" ?
[18:24] <ntranger> start the service
[18:25] <ntranger> I run "service ceph start" and it starts mon.0 and mds.0 and when it gets to osd.0 it says no filesystem type defined.
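If memory serves, that message comes from the init script when an [osd] section lists devs but no filesystem type to mkfs/mount them with; a hedged sketch of the kind of stanza it expects, with placeholder host and device names:

    [osd]
        osd mkfs type = xfs
        osd mount options xfs = rw,noatime

    [osd.0]
        host = node1
        devs = /dev/sdb1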
[18:25] * sleinen1 (~Adium@2001:620:0:25:c14a:cfcf:92a5:fb9c) Quit (Quit: Leaving.)
[18:26] * sleinen (~Adium@130.59.94.169) has joined #ceph
[18:26] <paravoid> sagewk: not sure if you saw my earlier message; 0.67-rc looks good so far.
[18:26] <sagewk> awesome
[18:27] <paravoid> this shouldn't need an explicit report, but I tend to be... unlucky enough :)
[18:28] * scuttlemonkey (~scuttlemo@67.23.204.2) has joined #ceph
[18:28] * ChanServ sets mode +o scuttlemonkey
[18:28] <yehuda_hm> paravoid: did you look at rgw yet?
[18:28] <paravoid> what about it?
[18:28] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[18:28] <yehuda_hm> with 0.67-rc
[18:28] <paravoid> yeah
[18:28] <paravoid> I don't use ceph for anything else but rgw atm
[18:28] <paravoid> and I upgraded mons/rgw/osds
[18:29] <paravoid> seems to work
[18:29] <ccourtaut> yehuda_hm: hi
[18:29] <yehuda_hm> oh, right, you went through the next branch before
[18:29] <paravoid> yep
[18:29] <yehuda_hm> ccourtaut: hi!
[18:29] <paravoid> multiple times :)
[18:29] <yehuda_hm> heh
[18:29] <paravoid> I haven't tested any of the region stuff though
[18:29] <paravoid> nor did I set any new ceph.conf options
[18:30] <yehuda_hm> paravoid: baby steps
[18:30] <ccourtaut> i don't know if you had time to take a look at this, https://github.com/ceph/ceph/pull/455 ?
[18:30] <paravoid> heh, I guess
[18:30] * mxmln (~mxmln@212.79.49.65) Quit (Ping timeout: 480 seconds)
[18:30] <paravoid> I have no other zone yet anyway
[18:30] <yehuda_hm> ccourtaut: it's on my list
[18:30] <paravoid> do we have docs about regions/zones yet?
[18:31] <ccourtaut> yehuda_hm: ok no worries, i know there is oscon this week, and that dumpling requires all attention :)
[18:31] <yehuda_hm> yeah, on my end it's dumpling
[18:31] <yehuda_hm> paravoid: we're working on it
[18:31] <paravoid> okay
[18:31] * leseb (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[18:32] <paravoid> I'm not in any hurry
[18:33] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[18:34] * sleinen (~Adium@130.59.94.169) Quit (Ping timeout: 480 seconds)
[18:41] * mschiff_ (~mschiff@p4FD7D94C.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[18:44] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[18:44] <ntranger> I'm sure it's something simple i'm overlooking, but I can't put my finger on it
[18:47] * john_barbee (~jbarbee@173-16-234-208.client.mchsi.com) has joined #ceph
[18:55] * chamings (~jchaming@134.134.139.70) has joined #ceph
[18:56] * ntranger (~ntranger@proxy2.wolfram.com) has left #ceph
[18:56] * ntranger (~ntranger@proxy2.wolfram.com) has joined #ceph
[19:00] <janos> urg
[19:00] <janos> i had one osd getting full
[19:00] <janos> and on that host 3 osd's. one 65% full, 75% and 85%
[19:00] <janos> after attempting to change weights
[19:01] <janos> i'm at 74, 85, 85
[19:01] <janos> :O
[19:01] <janos> i've added an osd on another host
[19:01] <janos> but it does not seem to want to let it in due to being a fraction of a percent degraded
[19:01] <janos> though i cannot confirm that. it's a suspicion
[19:02] <janos> i currently see 5 pgs stuck unclean - should i attempt to remedy that first?
[19:02] <janos> (and if so how)
[19:04] * alram (~alram@38.122.20.226) has joined #ceph
[19:06] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Remote host closed the connection)
[19:07] * scuttlemonkey (~scuttlemo@67.23.204.2) Quit (Ping timeout: 480 seconds)
[19:08] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[19:11] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[19:13] * yasu` (~yasu`@adsl-99-30-224-94.dsl.pltn13.sbcglobal.net) has joined #ceph
[19:13] * yasu` (~yasu`@adsl-99-30-224-94.dsl.pltn13.sbcglobal.net) Quit (Remote host closed the connection)
[19:14] <gregaf1> lxo: those aren't file xattrs, they're rados object xattrs, and the information is maintained lazily
[19:14] <gregaf1> in the case of a directory rename it really doesn't matter since the parent inode hasn't changed, and we don't want to go around touching each of 100,000 (or however many) file objects before a directory rename completes!
[19:14] * gregaf1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[19:16] * tocasz (~pty2@bl9-250-156.dsl.telepac.pt) has joined #ceph
[19:16] * gregaf (~Adium@38.122.20.226) has joined #ceph
[19:24] * davidzlap (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[19:26] <loicd> sage: ccourtaut agreed to submit a blueprint to propose the creation of a feature list for S3 / swift APIs and make it so it helps tracking its evolution and is appealing to contributors willing to participate in fixing / improving it. He will propose the blueprint tomorrow.
[19:28] * iggy__ (~iggy@theiggy.com) Quit (Quit: leaving)
[19:30] * tocasz (~pty2@bl9-250-156.dsl.telepac.pt) Quit (autokilled: Do not spam. Mail support@oftc.net with questions. (2013-07-25 17:30:09))
[19:35] * lautriv (~lautriv@f050081055.adsl.alicedsl.de) has joined #ceph
[19:36] * iggy (~iggy@theiggy.com) Quit (Remote host closed the connection)
[19:38] * haomaiwa_ (~haomaiwan@117.79.232.243) Quit (Ping timeout: 480 seconds)
[19:40] * haomaiwa_ (~haomaiwan@117.79.232.196) has joined #ceph
[19:40] <Psi-Jack_> Heh, upgrade to 0.61.5 from 0.61.4 was a bit bumpier than I expected. mons didn't even talk to each other.
[19:41] <Psi-Jack_> Well, weren't accepting the newer version anyway.
[19:42] * iggy (~iggy@theiggy.com) has joined #ceph
[19:46] * iggy__ (~iggy@theiggy.com) has joined #ceph
[19:47] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[19:51] <sjustlaptop> sagewk: is max_open_files the correct config? seems to default to 0
[19:51] <sagewk> # Increase max_open_files, if the configuration calls for it.
[19:51] <sagewk> get_conf max_open_files "8192" "max open files"
[19:51] <sagewk> in ceph.in
[19:52] <sjustlaptop> I see
[19:53] <sjustlaptop> ah, init-ceph.in
[19:54] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[19:55] <sagewk> yeah sorry
[19:56] <sjustlaptop> oh, so the get_conf part ignores the default in config_opts.h?
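For reference, the value the init script reads is just a ceph.conf setting; raising it is roughly:

    [global]
        # read by the init script and applied with ulimit -n before the
        # daemons start; 131072 is only an example value
        max open files = 131072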
[20:07] <sjustlaptop> sagewk: review on wip-wb-xfs-defaults?
[20:07] <sagewk> k
[20:08] <sagewk> sjustlaptop: looks good
[20:08] <sjustlaptop> k
[20:09] <sjustlaptop> we don't have perf tests for the btrfs values, but I assume they shouldn't be lower than the xfs ones
[20:12] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[20:12] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[20:19] * houkouonchi-work (~linux@12.248.40.138) Quit (Remote host closed the connection)
[20:23] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[20:25] * houkouonchi-work (~linux@12.248.40.138) Quit (Remote host closed the connection)
[20:25] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[20:29] <sagewk> yehudasa__: https://github.com/dalgaaf/ceph/commit/ebff1ebd1011968bfdf1d88208d3b95ec1c2e476
[20:34] * ishkabob (~c7a82cc0@webuser.thegrebs.com) has joined #ceph
[20:34] * josef (~seven@li70-116.members.linode.com) has joined #ceph
[20:34] <josef> is gary in here?
[20:34] <nhm> sjustlaptop: we have some data on btrfs. I didn't run all of the tests, but we have some. I don't expect that the new values will hurt btrfs performance.
[20:35] <glowell> hi josef
[20:35] <josef> hey
[20:35] <josef> so you want to keep the stable stuff in fedora and not put any of the devel stuff in there?
[20:35] <glowell> Right, just the current stable release
[20:36] <josef> k
[20:36] <ishkabob> hello again ceph devs
[20:36] <josef> i'm doing a mockbuild now, hopefully that just works and i can push and forget about you guys for another couple of months :)
[20:37] <ishkabob> does anyone know if I can use ceph-deploy to format my drives using the ceph.conf as a starting point? sort of the same way mkcephfs used to?
[20:38] <ishkabob> i'm generating my ceph.conf based on facter facts from puppet, so it should be a complete configuration for use with my ceph cluster. I would like to take that conf file and push it, along with the generated keys, where it needs to go, as well as format and prepare the osds
[20:41] <sjustlaptop> nhm: cool
[20:41] <sagewk> josef: hi!
[20:42] <josef> sagewk: hey~
[20:42] <gregaf> ishkabob: I've gotta run right now, but if you're using Puppet you probably…want to use Puppet. No need for ceph-deploy, just use the same hooks it does to config the Ceph daemons. :)
[20:42] <josef> sigh its not building because it cant find tcmalloc
[20:43] <ishkabob> cool, i'll dig deeper into the documentation and try to figure out what it's doing for each step
[20:43] <ishkabob> thanks gregaf
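For the record, the ceph-deploy route for formatting and bringing up the osds is roughly the sequence below (host and device names are placeholders; check ceph-deploy osd --help for the exact forms in your version):

    # wipe the disk and its partition table
    ceph-deploy disk zap node1:sdb
    # partition, mkfs and mark the disk as a ceph osd
    ceph-deploy osd prepare node1:sdb
    # mount it, register the osd with the cluster and start the daemon
    ceph-deploy osd activate node1:sdb1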
[20:48] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:49] <glowell> josef: tcmalloc is in the epel repository in the gperftools package, I think fedora19 has it as well
[20:49] * john_barbee (~jbarbee@173-16-234-208.client.mchsi.com) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212])
[20:51] * waxzce_ (~waxzce@office.clever-cloud.com) Quit (Remote host closed the connection)
[20:52] * waxzce (~waxzce@office.clever-cloud.com) has joined #ceph
[20:53] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[20:53] <josef> glowell: yeah i'm not sure why its complaining, it definitely pulls it in
[20:53] * waxzce (~waxzce@office.clever-cloud.com) Quit (Remote host closed the connection)
[20:53] <josef> configure doesnt find it tho
[20:53] <josef> trying again with fedora to see if its just a mock+epel problem
[20:53] * waxzce (~waxzce@office.clever-cloud.com) has joined #ceph
[20:55] * waxzce (~waxzce@office.clever-cloud.com) Quit (Remote host closed the connection)
[20:55] * waxzce (~waxzce@2a01:e34:ee97:c5c0:b483:d016:4139:e58e) has joined #ceph
[20:57] * waxzce (~waxzce@2a01:e34:ee97:c5c0:b483:d016:4139:e58e) Quit (Remote host closed the connection)
[20:57] * waxzce (~waxzce@2a01:e34:ee97:c5c0:b483:d016:4139:e58e) has joined #ceph
[21:08] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:09] * KindTwo (~KindOne@h176.56.186.173.dynamic.ip.windstream.net) has joined #ceph
[21:09] * KindTwo is now known as KindOne
[21:10] * wer (~wer@55.sub-70-208-144.myvzw.com) Quit (Ping timeout: 480 seconds)
[21:11] * sleinen (~Adium@2001:620:0:26:d051:d75:1cf:35fb) has joined #ceph
[21:11] * waxzce (~waxzce@2a01:e34:ee97:c5c0:b483:d016:4139:e58e) Quit (Remote host closed the connection)
[21:18] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[21:27] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:28] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[21:35] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[21:40] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:42] <dmick> sagewk: docs for adding OSD by hand say "mon rwx"; do we want to change that to 'mon allow profile osd'?
[21:43] * mschiff (~mschiff@port-16072.pppoe.wtnet.de) has joined #ceph
[21:43] <sagewk> yes
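So a hand-added osd would get its key with the restricted mon profile instead of blanket rwx, roughly like the sketch below; the osd id and keyring path are placeholders.

    # create (or fetch) the osd's key with the osd profile on the mon side
    ceph auth get-or-create osd.12 \
        mon 'allow profile osd' osd 'allow *' \
        -o /var/lib/ceph/osd/ceph-12/keyring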
[21:44] <Gugge-47527> is it still problematic going from one mon to three?
[21:48] <sagewk> alfredodeza: can you look at wip-ceph-disk in ceph.git too? similar fix
[21:49] * alfredodeza was looking
[21:49] <alfredodeza> do we know what ceph version we have installed so we can special case this?
[21:49] * ishkabob (~c7a82cc0@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC)
[21:49] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[21:49] <sagewk> the osds (and ceph-disk) might be a different version than the running monitors, depending on whether and in what order they upgrade
[21:50] <sagewk> nhm: not that i can think of
[21:50] * waxzce (~waxzce@561591080.ipsat.francetelecom.net) has joined #ceph
[21:51] <alfredodeza> so is it just 'attempt to do this and if you fail, try again with a different flag' ?
[21:52] <sagewk> yeah
[21:52] <sagewk> it would be better to only retry if we specifically get EACCES
[21:52] <sagewk> i always forget the syntax there
[21:54] <sagewk> and i guess that should be check_call in the try
[21:55] <alfredodeza> I will push my changes to that branch
[21:56] * alfredodeza is on it
[21:57] <sagewk> thanks!
[22:00] * sleinen (~Adium@2001:620:0:26:d051:d75:1cf:35fb) Quit (Quit: Leaving.)
[22:00] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[22:02] * sleinen1 (~Adium@2001:620:0:26:f952:34f7:8c2e:3dc4) has joined #ceph
[22:02] <sagewk> sjusthm: yehudasa__: any thoughts on http://tracker.ceph.com/issues/5752 ?
[22:03] <sjusthm> sagewk: seems like there would still be a race even if we loaded all classes on startup
[22:04] <sjusthm> albeit a much shorter one
[22:04] <sagewk> yeah, just vanishingly small (vs the lifetime of the ceph-osd process)
[22:04] <sjusthm> true
[22:04] * waxzce (~waxzce@561591080.ipsat.francetelecom.net) Quit (Remote host closed the connection)
[22:05] <gregaf> so this is just that a mismatch between the class and the osd
[22:05] <sagewk> yeah
[22:05] <gregaf> *can* we load all classes on startup? (do we necessarily know where they are?)
[22:05] <sagewk> 2013-07-25 11:13:48.000738 7f4e05c3e700 0 _load_class could not open class /usr/lib/rados-classes/libcls_rgw.so (dlopen failed): /usr/lib/rados-classes/libcls_rgw.so: undefined symbol: _Z21cls_current_subop_numPv
[22:05] <sjusthm> is there a disadvantage to loading on startup?
[22:05] <sjusthm> you wouldn't be able to inject a class without restarting an osd
[22:06] <sagewk> they can also load on demand in case a new one appears..
[22:06] <sjusthm> true
[22:06] <gregaf> yeah, it would be easy to add an admin socket command for that
[22:06] <sagewk> s/also/still/
[22:06] <sjusthm> seems like doing it on startup would be easiest
[22:06] <sjusthm> and add a reload_all_admin to the admin socket
[22:06] <sjusthm> *reload_all_classses
[22:06] <sjusthm> s/sss/ss
[22:06] <gregaf> forgive my lack of linker-fu, but does loading it on startup actually prevent issues if it gets upgraded out from under us?
[22:06] <sagewk> yeah
[22:07] <sagewk> it has an fd open for the file
[22:07] <sagewk> and upgrade creates a new file
[22:07] <gregaf> so we won't have page-out problems like we used to see on NFS?
[22:07] <sagewk> the same problem will exist
[22:07] <gregaf> I didn't think dynamic linking involved keeping an fd open to it
[22:08] <sagewk> that's nfs's fault
[22:08] <dmick> gregaf: it must handle it, or you'd never be able to update libc
[22:08] <sagewk> i think it just mmaps the file the same way, not actually different than the executable
[22:08] <gregaf> ah, use-based proofs
[22:08] <gregaf> k
[22:08] <gregaf> I was just worried that if we didn't use it then the vm would close the file and re-open it when we tried to actually access the symbol
[22:09] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:09] <sagewk> nope
[22:09] <dmick> yeah, it's a reasonable worry, but I think the world would have already fallen apart if that were the case
[22:10] <sagewk> heh
[22:11] <gregaf> so what was NFS doing that was naughty, then?
[22:11] <gregaf> (and different?)
[22:12] <sagewk> it had an open handle on the old executable (that got renamed over on the server), but bc that happened server side there was no silly rename, and if it later faulted the ino didn't exist, and the resulting ESTALE -> SIGBUS
[22:13] <dmick> i.e. the server didn't keep an open fd, I think, right?...
[22:13] <dmick> just the client
[22:13] <sagewk> so upgrades are normally safe because we have a handle to the binary/.so files, but that all falls apart with shared root nfs if you delete the old stuff
[22:13] <sagewk> right
[22:13] <gregaf> ah
[22:13] <yehudasa__> sagewk: I remember us chasing that issue ...
[22:14] <gregaf> yehudasa_: yeah, that's why i'm asking about it now :)
[22:14] <gregaf> (in relation to the linking)
[22:14] <dmick> I broke my Solaris desktop once installing a bad libc.so. it was....painful.
[22:14] <dmick> because not only couldn't I start anything new, all the old stuff started shooting itself in the head
[22:15] * sagewk recalls less-than-fond memories of the great libc5->libc6 transition from ~2000
[22:17] <gregaf> fyi, sagewk, http://tracker.ceph.com/issues/5753
[22:17] <sagewk> k
[22:17] <gregaf> traceless rename reply caused a segfault; that might be a new regression?
[22:17] <sjusthm> sagewk: I think it may be harmless to have the wrong same_*_since values since the pg history will have been trimmed to the same point and (I think) the osd which missed the map hole must not be active for that interval
[22:18] <sagewk> maybe, more likely a race reconnect bug.. that was with the mds thrasher i assume?
[22:18] <sjusthm> so generate_past_intervals will end up with the same answer
[22:18] <sjusthm> and same_interval_since and friends will be fixed when the next interval happens
[22:19] <gregaf> sagewk: looks like no — it's running with the traceless-replies option on, and the client should be able to handle that
[22:22] <gregaf> gregf@kai:~/src/teuthology [master]$ ./bootstrap
[22:22] <gregaf> Usage: virtualenv [OPTIONS] DEST_DIR
[22:22] <gregaf> virtualenv: error: no such option: --system-site-packages
[22:22] <gregaf> umm?
[22:22] <dmick> is that the one it's been warning about in past releases?...
[22:23] <gregaf> I have no idea, I haven't used teuthology in a while so I ran bootstrap after a pull, then installed the new packages it asked for, then again and got this
[22:23] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:23] <gregaf> looks like Warren added it at the beginning of June
[22:24] <dmick> maybe that was --no-site-packages
[22:24] <gregaf> I guess I can try upgrading python-virtualenv?
[22:24] <gregaf> it looks like one does exist
[22:24] <dmick> I have 1.7.1.2.2 fwiw
[22:25] <dmick> er, 2-2
[22:28] <gregaf> I have 1.4.9, it's what's in debian squeeze, and it doesn't support that option
[22:28] <gregaf> dammit
[22:29] <gregaf> well, I have been meaning to move over to pudgy from kai
[22:29] * zack_ (~zack@formosa.juno.dreamhost.com) has joined #ceph
[22:29] * zackc (~zack@0001ba60.user.oftc.net) Quit (Quit: leaving)
[22:30] * zack_ is now known as zackc
[22:30] <dmick> well I think you can just remove it and it should still work
[22:30] <dmick> you'd have to remember to hack that much of teuthology
[22:30] <dmick> but it's a small bit
[22:30] <gregaf> yeah...
[22:30] <dmick> it seems to just control how much stuff virtualenv makes a private copy of
[22:31] <gregaf> I'll just move and put a bug in the tracker
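One way around a squeeze-era virtualenv, assuming pip is available, is to pull a newer copy into the user's own site before re-running bootstrap; just a sketch:

    # install a recent virtualenv for this user only (1.7+ knows --system-site-packages)
    pip install --user 'virtualenv>=1.7'
    # make sure the user copy wins over the system one, then retry
    export PATH=$HOME/.local/bin:$PATH
    ./bootstrap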
[22:34] * Psi-Jack (~Psi-Jack@yggdrasil.hostdruids.com) Quit (Ping timeout: 480 seconds)
[22:34] * Psi-Jack_ is now known as Psi-jack
[22:39] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Quit: Leaving.)
[22:42] * markl (~mark@tpsit.com) Quit (Ping timeout: 480 seconds)
[22:57] * todin_ (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[23:11] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[23:16] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[23:17] * dxd828 (~dxd828@host-2-97-79-23.as13285.net) has joined #ceph
[23:19] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[23:19] * jjgalvez1 (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) has joined #ceph
[23:22] * BillK (~BillK-OFT@124-169-67-32.dyn.iinet.net.au) has joined #ceph
[23:23] * Guest257 (~coyo@thinks.outside.theb0x.org) Quit (Quit: om nom nom delicious bitcoins...)
[23:26] * jjgalvez (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[23:30] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[23:30] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[23:35] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[23:44] * vipr_ (~vipr@78-21-227-195.access.telenet.be) Quit (Remote host closed the connection)
[23:45] <sagewk> nwat: ping!
[23:47] * erwan_taf (~erwan@lns-bzn-48f-62-147-157-222.adsl.proxad.net) Quit (Ping timeout: 480 seconds)
[23:48] * Coyo (~coyo@thinks.outside.theb0x.org) has joined #ceph
[23:48] * Coyo is now known as Guest1109
[23:52] <sagewk> sjusthm: on same_*_since: hmm, yeah. we should at least note in the changelog that same_*_since may not be 100% correct in that case. and/or in osd_types.h
[23:52] <sjustlaptop> sagewk: yeah... testing seems to suggest that it works, but I'm going to puzzle over it a bit more
[23:53] <sjustlaptop> making the values correct would involve actually asking someone, which isn't a very attractive option
[23:53] <sagewk> it's really only the stray case that we need to worry about, i suspect, since if the new osd comes back into acting a new map will reflect the change.
[23:53] <sagewk> yeah
[23:53] <sjustlaptop> sagewk: that's what I think as well
[23:54] <sjustlaptop> sagewk: so if there is a discontinuity it proves that there is a clean interval between the first map of the current run of maps and now and that we are not active in the current interval
[23:54] <sagewk> like, we ignore messages older than same_acting_since... but didn't we make something set those epochs to the interval start at one point (instead of current epoch)?
[23:55] <sagewk> right
[23:56] <sjustlaptop> sagewk: thats the last_peering_reset epoch, and it's actually not part of history
[23:56] <sagewk> ok cool.
[23:56] <sagewk> i think it's ok then
[23:56] <sjustlaptop> so we would actually ignore a message originating prior to the first map, but that's probably ok
[23:57] <sagewk> yeah, shouldn't even be possible since the recovering osd wasn't up in the first map
[23:57] <sagewk> (in its current instantiation)
[23:57] <sagewk> ok i'm convinced
[23:59] * PerlStalker (~PerlStalk@72.166.192.70) Quit (Quit: ...)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.