#ceph IRC Log

IRC Log for 2012-09-27

Timestamps are in GMT/BST.

[0:03] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has left #ceph
[0:13] * BManojlovic (~steki@195.13.166.253) Quit (Ping timeout: 480 seconds)
[0:17] * The_Bishop (~bishop@f052102057.adsl.alicedsl.de) has joined #ceph
[0:26] * cblack101 (c0373727@ircip1.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[0:31] * tren (~Adium@184.69.73.122) Quit (Quit: Leaving.)
[0:40] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has joined #ceph
[0:41] * KevinPerks (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[0:43] <pentabular> Watching MESS meetup. 1 view. :)
[0:46] <gregaf1> and now suddenly it's at 9
[0:47] <gregaf1> assuming you mean "Ceph at Media Entertainment and Scientific Storage Meetup"
[0:51] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[0:51] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Quit: Ex-Chat)
[0:53] <pentabular> yes, apparently from yesterday
[0:59] <nhm_> gregaf1: I seem to remember going to that like a month or two ago.
[0:59] <gregaf1> yeah, it's from back in August
[0:59] <nhm_> ok, glad I'm not in a time warp
[1:00] <nhm_> though with the way my headcold is going I might as well be in one.
[1:00] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[1:05] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[1:14] <pentabular> oh yeah.. that must be what Aug means in large letters
[1:15] <pentabular> was just *posted* yesterday.
[1:15] * pentabular was in a timewarp
[1:16] <pentabular> "The year is 1987, and NASA launches the last of its deep space missions. Captain William "Buck" Rogers......"
[1:16] <pentabular> gotta love netflix
[1:21] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:22] <pentabular> can/should I use ceph-deploy with Argonaut, or does that mate better with ~0.51/master?
[1:22] <Tv_> pentabular: using 0.48.2 here just fine
[1:23] <pentabular> thanks. mkcephfs from the new ubuntu pkgs was looking troublesome
[1:24] <Tv_> pentabular: just to be clear: ceph-deploy still has a ways to go, as far as features and stability go
[1:24] <Tv_> like, if you don't have passphraseless ssh & sudo, i expect some really funky error messages, etc
[1:24] <pentabular> right, SSH is pretty clearly the conduit there
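The passphraseless ssh & sudo setup Tv_ mentions can be sketched roughly as below; the user and host names (`user`, `node1`) are hypothetical, and the exact requirements may vary between ceph-deploy versions:

```shell
# On the admin host: generate a key with no passphrase (-N '') and
# push the public key to each target node.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id user@node1

# On each node: allow the deploy user to sudo without a password prompt,
# via a sudoers drop-in (check syntax afterwards with 'visudo -c').
echo 'user ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ceph-deploy
sudo chmod 0440 /etc/sudoers.d/ceph-deploy
```

With that in place, ceph-deploy can reach each node non-interactively, which avoids the "really funky error messages" described above.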
[1:25] * sagelap (~sage@74.sub-70-199-195.myvzw.com) has joined #ceph
[1:26] * Tv_ (~tv@38.122.20.226) Quit (Quit: Tv_)
[1:30] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[1:32] * jlogan1 (~Thunderbi@2600:c00:3010:1:2431:489a:70ae:7aa0) Quit (Ping timeout: 480 seconds)
[1:59] * jjgalvez1 (~jjgalvez@12.248.40.138) has joined #ceph
[2:00] * sagelap (~sage@74.sub-70-199-195.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:03] * jjgalvez (~jjgalvez@12.248.40.138) Quit (Ping timeout: 480 seconds)
[2:03] * LarsFronius (~LarsFroni@95-91-242-169-dynip.superkabel.de) has joined #ceph
[2:04] * sagelap (~sage@207.sub-70-199-194.myvzw.com) has joined #ceph
[2:06] * danieagle (~Daniel@177.97.248.180) has joined #ceph
[2:10] * jjgalvez1 (~jjgalvez@12.248.40.138) Quit (Ping timeout: 480 seconds)
[2:16] * sagelap (~sage@207.sub-70-199-194.myvzw.com) Quit (Ping timeout: 480 seconds)
[2:16] * LarsFronius (~LarsFroni@95-91-242-169-dynip.superkabel.de) Quit (Quit: LarsFronius)
[2:17] * LarsFronius (~LarsFroni@95-91-242-169-dynip.superkabel.de) has joined #ceph
[2:18] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[2:20] * KevinPerks (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[2:24] * nhm_ (~nhm@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[2:26] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has left #ceph
[2:39] * danieagle (~Daniel@177.97.248.180) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[2:44] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[2:45] * LarsFronius (~LarsFroni@95-91-242-169-dynip.superkabel.de) Quit (Quit: LarsFronius)
[2:52] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:56] * nhm (~nhm@174-20-32-79.mpls.qwest.net) has joined #ceph
[3:31] * deepsa (~deepsa@122.172.18.250) has joined #ceph
[3:38] * sagelap (~sage@63.sub-70-197-143.myvzw.com) has joined #ceph
[3:48] * slang (~slang@38.122.20.226) Quit (Quit: slang)
[3:50] * slang (~slang@38.122.20.226) has joined #ceph
[3:51] * deepsa (~deepsa@122.172.18.250) Quit (Ping timeout: 480 seconds)
[3:52] * sagelap (~sage@63.sub-70-197-143.myvzw.com) Quit (Quit: Leaving.)
[3:54] * maelfius (~mdrnstm@66.209.104.107) Quit (Quit: Leaving.)
[4:14] * slang (~slang@38.122.20.226) Quit (Quit: slang)
[4:17] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[4:17] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Remote host closed the connection)
[4:22] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[4:25] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[4:27] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[4:34] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[4:37] * deepsa (~deepsa@115.184.17.126) has joined #ceph
[4:37] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit ()
[4:42] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[4:45] * deepsa_ (~deepsa@115.184.76.54) has joined #ceph
[4:49] * deepsa (~deepsa@115.184.17.126) Quit (Ping timeout: 480 seconds)
[4:49] * deepsa_ is now known as deepsa
[4:50] * Cube (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[5:32] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[7:02] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[7:36] * dmick is now known as dmick_away
[7:45] * glowell (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[8:00] * glowell (~Adium@68.170.71.123) has joined #ceph
[8:08] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:08] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[8:12] * loicd (~loic@magenta.dachary.org) Quit ()
[8:13] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[8:15] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[8:24] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[8:45] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[8:57] * deepsa_ (~deepsa@122.172.0.225) has joined #ceph
[9:02] * deepsa (~deepsa@115.184.76.54) Quit (Ping timeout: 480 seconds)
[9:02] * deepsa_ is now known as deepsa
[9:19] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:22] * loicd (~loic@178.20.50.225) has joined #ceph
[9:34] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[9:34] * BManojlovic (~steki@87.110.183.173) has joined #ceph
[9:47] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) has joined #ceph
[9:50] <loicd> Hi, what's the current status of the OpenStack keystone integration with radosgw?
[9:52] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[9:54] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[10:23] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[10:40] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[10:58] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[11:02] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:43] * loicd (~loic@178.20.50.225) Quit (Quit: Leaving.)
[11:43] * loicd1 (~loic@178.20.50.225) has joined #ceph
[11:52] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[11:56] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit ()
[11:57] * MaxPruts (~17891@www.hethooghuis.nl) has joined #ceph
[11:57] <MaxPruts> hey
[11:57] * MaxPruts (~17891@www.hethooghuis.nl) has left #ceph
[12:00] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[12:08] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[12:08] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:09] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[12:47] * mguru (ca3c3e64@ircip1.mibbit.com) has joined #ceph
[12:47] * mguru (ca3c3e64@ircip1.mibbit.com) Quit ()
[13:14] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[13:15] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[14:02] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[14:02] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Read error: Connection reset by peer)
[14:02] * LarsFronius_ is now known as LarsFronius
[14:04] * sage1 (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[14:07] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[14:14] * sage1 (~sage@76.89.177.113) has joined #ceph
[14:15] * loicd (~loic@90.84.144.61) has joined #ceph
[14:21] * loicd1 (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[14:23] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[14:34] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[14:34] * The_Bishop (~bishop@f052102057.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[14:39] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: zzzzzzzzzzzzzzzzzzzz)
[14:43] * The_Bishop (~bishop@f052102057.adsl.alicedsl.de) has joined #ceph
[15:00] * mrjack_ (mrjack@office.smart-weblications.net) Quit ()
[15:09] * loicd1 (~loic@178.20.50.225) has joined #ceph
[15:16] * loicd (~loic@90.84.144.61) Quit (Ping timeout: 480 seconds)
[15:33] * BManojlovic (~steki@87.110.183.173) Quit (Quit: Ja odoh a vi sta 'ocete...)
[15:39] * scuttlemonkey (~scuttlemo@173-14-58-198-Michigan.hfc.comcastbusiness.net) has joined #ceph
[15:42] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:46] * deepsa (~deepsa@122.172.0.225) Quit (Ping timeout: 480 seconds)
[15:48] * gaveen (~gaveen@112.135.136.228) has joined #ceph
[15:49] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[15:49] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[15:49] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Read error: No route to host)
[15:50] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[15:52] * gaveen (~gaveen@112.135.136.228) Quit ()
[15:55] * gaveen (~gaveen@112.135.156.122) has joined #ceph
[15:57] * deepsa (~deepsa@122.172.16.106) has joined #ceph
[16:08] * loicd1 (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[16:13] * nhm_ (~nhm@67-220-20-222.usiwireless.com) has joined #ceph
[16:15] * nhm (~nhm@174-20-32-79.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[16:20] * cblack101 (c0373729@ircip3.mibbit.com) has joined #ceph
[16:42] * slang (~slang@38.122.20.226) has joined #ceph
[16:44] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[16:47] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[16:49] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[16:57] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[16:59] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:07] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[17:07] * jjgalvez1 (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[17:08] * sagelap (~sage@90.sub-70-197-147.myvzw.com) has joined #ceph
[17:08] * jjgalvez2 (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[17:08] * glowell (~Adium@68.170.71.123) Quit (Quit: Leaving.)
[17:10] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:14] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:15] * jjgalvez1 (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:17] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:18] * glowell (~Adium@38.122.20.226) has joined #ceph
[17:19] * jjgalvez2 (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[17:29] * KevinPerks (~Adium@2607:f298:a:607:894:fdf:9614:9645) has joined #ceph
[17:38] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Remote host closed the connection)
[17:38] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:43] * sagelap1 (~sage@91.sub-70-197-150.myvzw.com) has joined #ceph
[17:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:45] * sagelap (~sage@90.sub-70-197-147.myvzw.com) Quit (Ping timeout: 480 seconds)
[17:49] * sagelap (~sage@38.122.20.226) has joined #ceph
[17:50] * Tv_ (~tv@2607:f298:a:607:b899:20f7:e1bb:234c) has joined #ceph
[17:51] * sagelap1 (~sage@91.sub-70-197-150.myvzw.com) Quit (Ping timeout: 480 seconds)
[17:56] * slang (~slang@38.122.20.226) Quit (Quit: slang)
[17:57] * KevinPerks1 (~Adium@38.122.20.226) has joined #ceph
[17:59] * KevinPerks1 (~Adium@38.122.20.226) Quit ()
[18:00] * KevinPerks1 (~Adium@38.122.20.226) has joined #ceph
[18:01] * KevinPerks (~Adium@2607:f298:a:607:894:fdf:9614:9645) Quit (Ping timeout: 480 seconds)
[18:02] * slang (~slang@38.122.20.226) has joined #ceph
[18:02] * dilemma (~dilemma@2607:fad0:32:a02:1e6f:65ff:feac:7f2a) has joined #ceph
[18:02] <sagewk> elder: can you look at the layout branch briefly?
[18:02] <sagewk> wondering if the divide by zero and ioctl patches need to go upstream now
[18:02] <elder> I have a couple of minutes...
[18:03] <dilemma> I'm running into a problem that looks a lot like this one: http://tracker.newdream.net/issues/2476 where I can't remove an RBD volume due to stuck watchers, even though I have no clients connected to that volume.
[18:04] <dilemma> Assuming that I'm running into that bug, is anyone aware of a work-around, where I can remove the watcher, or otherwise delete the volume?
[18:05] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[18:05] <dilemma> Even a way to query for information on a volume's watchers would help me out.
[18:05] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:07] <elder> sagewk, how likely is it we'll hit an invalid object mapping? Would that be due to corruption?
[18:07] <elder> Is there any way a user could somehow force that to occur?
[18:08] * cblack101 (c0373729@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[18:09] <elder> With my fairly quick scan of these changes I think they look OK but it was a superficial review.
[18:09] <elder> It's *awfully* late to send in a bug for this release... If it can't be triggered by someone on purpose maybe it can wait and be sent as a stable update, first thing.
[18:10] <elder> I have to leave now though, sagewk. I'll talk to you in a few hours when I'm back from lunch.
[18:13] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[18:20] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:24] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[18:27] * KevinPerks1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[18:33] <gregaf1> loicd: no keystone integration; it's got auth v1.0 only
[18:35] <loicd> gregaf1: thanks :-) Is there an ongoing effort to implement that? Or is it not on the roadmap yet?
[18:35] <gregaf1> dilemma: I believe you'll find that the watch disappears after 30 seconds or something, but joshd can tell you more when he gets in
[18:35] <gregaf1> loicd: I don't think anybody's too concerned about it right now…yehudasa?
[18:36] <dilemma> gregaf1: it sat overnight with no clients connected
[18:37] <gregaf1> you'll have to wait for Josh to get in then, sorry
[18:38] <yehudasa> loicd: we're still evaluating what it actually means in keystone integration
[18:39] <yehudasa> loicd: if it's just having keystone handling the auth tokens the same way as swift auth does, while still keeping rgw's user management then there's not much to it
[18:41] <yehudasa> loicd: how do you think keystone integration should work?
[18:42] <loicd> As an OpenStack admin I would like ceph to rely on keystone for authentication, so that replacing swift with ceph for the object storage part is transparent from the user's point of view.
[18:43] <loicd> Maybe there is a workaround, such as a converter that can be invoked to inject changes into the rgw user management each time keystone is modified?
[18:45] <yehudasa> loicd: I'm not too familiar with the keystone internals, that would definitely work
[18:47] <yehudasa> loicd: it may be best if there was some translation layer within the gateway so that it would be able to read the user info from keystone instead
[18:47] <loicd> ok
[18:48] <yehudasa> loicd: but that will require more work
[18:55] * cblack101 (86868949@ircip2.mibbit.com) has joined #ceph
[18:58] * slang (~slang@38.122.20.226) Quit (Quit: slang)
[19:01] <dmick_away> dilemma: yeah, they should go away after 30s if that's the same bug; sounds like something else may be going on
[19:01] * dmick_away is now known as dmick
[19:02] <dilemma> any way to list watchers?
[19:06] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:07] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[19:07] <dilemma> actually, dmick, the bug I linked to describes a scenario where there is "an unbounded delay for the watch timeout"
[19:08] <dilemma> which is why I believed I might be affected by it
[19:09] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[19:10] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[19:10] * KevinPerks1 (~Adium@38.122.20.226) has joined #ceph
[19:10] * KevinPerks (~Adium@38.122.20.226) Quit (Read error: Connection reset by peer)
[19:13] <dmick> dilemma: ah, sorry, I just assumed it was the one fixed recently
[19:13] <joao> sagewk, gregaf1, on-start conversion is working :)
[19:18] * MikeMcClurg (~mike@firewall.ctxuk.citrix.com) Quit (Ping timeout: 480 seconds)
[19:22] <joshd> dilemma: so the unbounded delay happens only when the primary changes, and the object is not used
[19:23] <joshd> dilemma: if you try removing the image, that tries to access the header, which triggers the watch timeout. so if you try to remove it, you should be able to wait 30 seconds and the watchers will have timed out
[19:23] <dilemma> yeah, that failed me as well
[19:24] <joshd> so the remaining possibilities are: there's a client you're not aware of that still has it open, or there's a new bug in the watch handling code
[19:25] <dilemma> I'd be willing to believe that there's a client connected that I'm not aware of. From the cluster side, how do I identify this?
[19:25] <joshd> you can see if there are watchers in the osd log when you have 'debug osd = 10' on the primary responsible for the image's header
[19:26] <dilemma> can I inject that config into a running osd process?
[19:27] <joshd> yeah, http://ceph.com/docs/master/config-cluster/ceph-conf/?highlight=injectargs#runtime-changes
[19:28] <dilemma> I'll give that a shot, thanks
[19:28] <joshd> look for dump_watchers in the log
[19:28] <joshd> if you do 'rbd info' on the image, it should show up
[19:28] <dilemma> hmm... aside from searching the OSD data dirs, how do I identify what PG or OSD an object is associated with?
[19:29] <joshd> ceph osd map <pool> <object>
[19:30] <dilemma> perfect, thanks
[19:35] * Cube (~Adium@38.122.20.226) has joined #ceph
[19:36] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[19:38] <mikeryan> sagewk: review/merge on wip_backfill_full2 please
[19:38] <mikeryan> this includes sjust's wip_backfill_reservation
[19:38] <mikeryan> both branches passed regression with flying colors
[19:38] <sjust> yep
[19:38] * Cube (~Adium@38.122.20.226) Quit ()
[19:39] * slang (~slang@38.122.20.226) has joined #ceph
[19:40] * KevinPerks1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[19:40] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[19:41] * Cube1 (~Adium@38.122.20.226) has joined #ceph
[19:42] * slang (~slang@38.122.20.226) Quit ()
[19:43] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[19:43] <joshd> dilemma: fixing now, but the doc syntax for that runtime injection is wrong - you need to pass the args to osd as a single argument, i.e. ceph osd tell 0 injectargs '--debug-osd 10 --debug-ms 1'
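The debugging recipe joshd lays out above can be strung together as follows; the pool/image names, OSD id, and log path are hypothetical, and the exact log output depends on the debug level:

```shell
# 1. Find which PG/OSD holds the image's header object
#    (format-1 RBD images use a "<name>.rbd" header object).
ceph osd map rbd myimage.rbd

# 2. Raise the debug level on the primary OSD for that PG. Per joshd's
#    correction, all args must be passed to injectargs as ONE quoted argument.
ceph osd tell 0 injectargs '--debug-osd 10 --debug-ms 1'

# 3. Touch the header so the OSD logs its watchers, then grep the log.
rbd info myimage
grep dump_watchers /var/log/ceph/osd.0.log
```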
[19:43] * slang (~slang@38.122.20.226) has joined #ceph
[19:46] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) Quit (Quit: Leseb)
[19:50] * The_Bishop (~bishop@f052102057.adsl.alicedsl.de) Quit (Remote host closed the connection)
[19:54] * KevinPerks (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[19:57] * zodiak (~stef@CPE2cb05d3ebdcb-CM602ad07b9954.cpe.net.cable.rogers.com) has joined #ceph
[19:58] <zodiak> hey everyone :)
[19:59] <zodiak> does anyone know of the auth calls required for keystone integration ?
[20:01] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[20:04] <loicd> zodiak: yehudasa & gregaf1 said earlier that it's not supported yet. yehudasa was under the impression that it might not be very complex work, IIRC.
[20:04] <zodiak> whelp, I have done keystone integration for a billing system at <major company who can not be named>
[20:05] <zodiak> if someone can point me to auth calls needed/req'd I should be able to knock something up
[20:05] * adjohn is now known as Guest8418
[20:05] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[20:07] <zodiak> so, I guess I have to wait until yehudasa comes back alive :)
[20:08] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[20:08] <Tv_> as far as i know, yehudasa hasn't been bitten by a zombie
[20:08] <yehudasa> Tv_: not yet
[20:08] <Tv_> as i sit next to him, i would like to know of these things
[20:08] <zodiak> *laughs*
[20:09] <dmick> although I have seen his eyes get wide at the prospect of brains for lunch, so..you never know
[20:09] <zodiak> well, you never know when the zombie apocalypse will happen ;)
[20:09] <Tv_> zodiak: so.. radosgw currently doesn't really talk to keystone at all
[20:10] <Tv_> zodiak: we'd *love* help on all that, but i don't know if we have any simple answers
[20:10] <zodiak> gotcha
[20:10] <Tv_> zodiak: perhaps you could expand on what you want to achieve?
[20:10] <yehudasa> zodiak: I'm not sure I even have simple questions
[20:10] <Tv_> well, at least i *am* simple, so there's that
[20:10] <zodiak> well, I guess the question is, what does radosgw need ?
[20:10] <Tv_> zodiak: for what?-)
[20:11] <yehudasa> zodiak: how do you see keystone integration?
[20:11] <zodiak> hoping to get ceph into an openstack instance (well, devstack, you get the idea)
[20:11] * Guest8418 (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:11] <Tv_> zodiak: can you explain how you *think* the final solution should work?
[20:11] <zodiak> yehudasa, Tv_ well, I don't know how ceph does tenants/roles/users .. is there some docs on that ?
[20:12] <zodiak> I am coming at this from the keystone/openstack side of things
[20:12] <zodiak> my knowledge of ceph is pretty woeful
[20:12] <yehudasa> zodiak: this is more of a radosgw issue, rather than ceph specific
[20:13] <yehudasa> zodiak: internally radosgw manages its own users
[20:13] <zodiak> yehudasa, aaahh.. yup.. reading about radosgw even as we speak
[20:13] <yehudasa> currently there's a single gateway tenant, though in the future that's not going to be true
[20:13] <zodiak> so, no support for multi tenants .. okay
[20:14] * KevinPerks (~Adium@2607:f298:a:607:19a:5dfe:5443:9fcc) has joined #ceph
[20:14] <yehudasa> zodiak, at the moment.. have done some ground work to change that, and it's not that far away
[20:14] <Tv_> yehudasa: be careful with your definition of "gateway tenant"
[20:14] <Tv_> multiple tenants of a gateway; only one "gateway tenant" toward RADOS -- right?
[20:14] <Tv_> (multiple radosgw processes form one logical gateway)
[20:15] <yehudasa> Tv_: yes and yes
[20:15] <zodiak> ah. so. "gateway tenant" == region ?
[20:15] <zodiak> trying to map the constructs to openstack ideas :)
[20:18] <Tv_> yeah something like
[20:19] <Tv_> we're talking about terminology at the office right now ;)
[20:19] <zodiak> aaahh
[20:20] <zodiak> sorry :)
[20:20] <zodiak> if you have the swift api stuff there, keystone shouldn't be too hard at all (since swift normally passes along the auth stuff to the keystone backend anyway)
[20:21] <yehudasa> zodiak: currently with the swift implementation, we have a user defined in the gateway that maps into a different user that has been defined on the swift auth
[20:21] <yehudasa> zodiak: so there needs to be a dual configuration .. unless you use the radosgw itself for managing the authentication
[20:22] <yehudasa> zodiak: in which case you take swift out of the equation
[20:23] <zodiak> hhrrmmm
[20:27] <zodiak> I mean, there is an auth_token for exactly doing this sort of 'backend request' type of thing
[20:27] <zodiak> in keystone anyway
[20:28] <yehudasa> zodiak: yeah, but we need a way to get the user info from the keystone service and map it into a gateway user
[20:28] <yehudasa> .. or at least into a data type that the gateway understands
[20:29] <zodiak> sorry.. on a work conf call (ugh :)
[20:32] <zodiak> okay.. back
[20:33] <zodiak> so.. someway to map a gateway user eh
[20:33] <zodiak> is there any docs/DbC stuff on radosgw ?
[20:34] <zodiak> I mean, rather, any published contracts of 'I will give you foo, I want back bar' ?
[20:37] <zodiak> I see what you mean about dual config .. since the rados would have to create it on the keystone side as well
[20:37] <zodiak> does the radosgw have any callbacks or publish events ?
[20:37] * jmlowe (~Adium@2001:18e8:2:28a2:c46b:b2bc:a972:fae1) has joined #ceph
[20:39] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:a090:5b58:9437:490c) has joined #ceph
[20:42] * KevinPerks (~Adium@2607:f298:a:607:19a:5dfe:5443:9fcc) Quit (Quit: Leaving.)
[20:42] * KevinPerks (~Adium@2607:f298:a:607:19a:5dfe:5443:9fcc) has joined #ceph
[20:47] * Cube1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[20:47] * Cube (~Adium@38.122.20.226) has joined #ceph
[20:48] <dmick> zodiak: everyone just went to lunch; they'll likely be back before 1PM PDT
[20:48] <zodiak> *LAUGHS*
[20:48] <zodiak> jst my luck :D
[20:48] <zodiak> I should be here until 2pm PDT .. but since I am on EST .. yeah ;)
[20:49] <dmick> wage slaves :)
[20:49] <zodiak> that's it in one :)
[20:51] <maelfius> I'm a little scared. RBD with ocfs2 layered on top of it (until cephfs is "primetime" ready)
[20:53] <maelfius> seems to work beautifully
[20:53] <maelfius> but, of course, has some limitations
[20:53] <maelfius> (or well… a lot)
[20:54] <joshd> how's the performance?
[21:03] * slang (~slang@38.122.20.226) Quit (Quit: slang)
[21:13] * KevinPerks (~Adium@2607:f298:a:607:19a:5dfe:5443:9fcc) Quit (Quit: Leaving.)
[21:16] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[21:19] <maelfius> joshd: well, this is in a POC environment, 1G interfaces. I was seeing (unloaded) ~30MB/s (2 nodes in OCFS cluster, 11 OSDs)
[21:19] <maelfius> for sequential writes concurrent on both nodes
[21:19] <maelfius> drives are SATA2 iirc
[21:19] <joshd> how does that compare with plain RBD in that setup?
[21:19] <maelfius> that was my next test ;)
[21:20] <joshd> cool
[21:23] <maelfius> looks like ~46MB/s with plain RBD
[21:24] <maelfius> so, not a huge loss, but again, only 2 ocfs2 nodes atm. the overhead will probably be higher with more nodes.
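The overhead maelfius observes can be put in rough numbers (using the ~30 MB/s and ~46 MB/s figures reported above; these are single unloaded measurements, not a proper benchmark):

```shell
# Sequential-write throughput reported in the channel:
ocfs2_mbs=30   # ocfs2 layered on RBD
rbd_mbs=46     # plain RBD, same cluster

# Fraction of plain-RBD throughput retained, as a rounded percentage.
pct=$(awk -v a="$ocfs2_mbs" -v b="$rbd_mbs" 'BEGIN { printf "%.0f", a / b * 100 }')
echo "ocfs2 over RBD retains ~${pct}% of plain-RBD throughput"
```

That is roughly a one-third slowdown for the shared-filesystem layer, consistent with the "not a huge loss" assessment.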
[21:24] * jmlowe (~Adium@2001:18e8:2:28a2:c46b:b2bc:a972:fae1) Quit (Quit: Leaving.)
[21:27] * slang (~slang@38.122.20.226) has joined #ceph
[21:28] * jmlowe (~Adium@2001:18e8:2:28a2:8a0:2ef8:67c0:8a3d) has joined #ceph
[21:28] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[21:29] * KevinPerks (~Adium@38.122.20.226) has joined #ceph
[21:29] * KevinPerks1 (~Adium@2607:f298:a:607:fdcc:55a8:c0c4:82ac) has joined #ceph
[21:29] * KevinPerks (~Adium@38.122.20.226) Quit (Read error: Connection reset by peer)
[21:30] <joshd> yeah, probably
[21:30] * jjgalvez (~jjgalvez@38.122.20.226) has joined #ceph
[21:30] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[21:49] <loicd> How good is test coverage on radosgw?
[21:51] <gregaf1> should be pretty good; you can look at s3tests.git iirc
[21:51] <joshd> see https://github.com/ceph/s3-tests and https://github.com/ceph/teuthology/blob/master/teuthology/task/radosgw-admin.py
[21:52] * aliguori (~anthony@32.97.110.59) has joined #ceph
[21:52] <cblack101> Question: on the radosgw, can I set up multiple gatewars and load balance them?
[21:52] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[21:52] <cblack101> *gateways
[21:53] <joshd> yes
[21:53] <cblack101> cool, Wanted to make sure I had that right before I told the working group yes... :-)
[21:53] <joshd> they only cache acls, and they manage invalidation of that cache themselves
[21:57] * jmlowe (~Adium@2001:18e8:2:28a2:8a0:2ef8:67c0:8a3d) Quit (Quit: Leaving.)
[22:04] * jmlowe (~Adium@2001:18e8:2:28a2:40c3:a56a:fcbd:fee2) has joined #ceph
[22:17] * glowell (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[22:20] * glowell (~Adium@38.122.20.226) has joined #ceph
[22:25] <loicd> joshd: I'm told running teuthology is difficult because it has a number of dependencies that are not easy to replicate (i.e. it depends on inktank-internal "things")
[22:26] <loicd> joshd: if you tell me there is no reason to fear this, I'll give it a shot ;-)
[22:27] <loicd> However, my question was more about code coverage from unit tests internal to ceph itself.
[22:31] <stan_theman> is anyone from inktank here?
[22:32] <nhm_> stan_theman: there's lots of us. :)
[22:32] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[22:35] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit ()
[22:35] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[22:35] * psandin (psandin@staff.linode.com) has joined #ceph
[22:36] <sagewk> joshd: what's the status of wip-watch-header-race?
[22:37] <joshd> loicd: yeah, at this point there's no good way to setup the machines exactly the same way... you can see the chef scripts that configure them at https://github.com/ceph/ceph-qa-chef/blob/master/cookbooks/ceph-qa/recipes/default.rb, but there's a bunch of custom stuff
[22:37] <sagewk> mikeryan: what's the status of wip_backfill_full_stats_nuke?
[22:37] <joshd> sagewk: ready to be merged
[22:37] <loicd> joshd: thanks
[22:37] <sagewk> sjust: wip_backfill_peering?
[22:38] <mikeryan> sagewk: i never successfully reproduced the scrub error
[22:39] <sagewk> sjust,mikeryan: wip_cur_perf_journal, wip_filestore_perf, wip_filestore_perf_master?
[22:39] <sjust> oops, wip_backfill_peering can die
[22:39] <sjust> all of the *perf should stick around
[22:39] <sjust> not sure about wip_cur_perf_journal
[22:39] <mikeryan> wip_cur_perf_journal is based on wip_filestore_perf_master
[22:39] <mikeryan> it has all the benchers from _master plus a journal bencher
[22:39] <mikeryan> so it supersedes the branch completely
[22:40] <mikeryan> imo _master can go away
[22:41] <sagewk> yehudasa: wip-2504 (multiple notify objects), wip-3225, wip-atomic-small, wip-post-object, wip-admin-rest, wip-rest-cleanup, wip-rgw-refcount
[22:41] <sagewk> are some of those redundant? all awaiting review?
[22:42] * Cube (~Adium@38.122.20.226) Quit (Ping timeout: 480 seconds)
[22:42] <yehudasa> sagewk: checking
[22:43] * Cube (~Adium@38.122.20.226) has joined #ceph
[22:43] <sjust> mikeryan: I'd like to keep wip_filestore_perf_master
[22:44] <yehudasa> sagewk: wip-2504 can be removed, wip-3225 waiting review, wip-atomic-small can be removed, wip-post-object is caleb's work on post, wip-admin-rest waits for review, wip-rest-cleanup, wip-rgw-refcount can be removed
[22:45] <yehudasa> to sum up: all of the following can be removed: wip-2504, wip-atomic-small, wip-rest-cleanup, wip-rgw-refcount
[22:45] <gregaf1> what's wip-3225? is that waiting on me?
[22:46] <yehudasa> gregaf1: it's waiting for a review, yeah, if you want to, a single commit I think
[22:46] <yehudasa> gregaf1: anyway, it should be done after wip-admin-rest
[22:46] <yehudasa> it builds on top of it
[22:46] <mikeryan> sagewk: wip_backfill_full can go away
[22:46] <gregaf1> trying to get some crowbar stuff done before we lose those machines, but I'll queue it up
[22:48] * jmlowe (~Adium@2001:18e8:2:28a2:40c3:a56a:fcbd:fee2) Quit (Quit: Leaving.)
[22:50] <sagewk> sjust, joao: what is leveldbstore-init-refactor about?
[22:50] <sjust> sagewk: joao wanted a way to init a leveldbstore without creating it
[22:50] <sagewk> k
[22:51] <sagewk> psandin: what kind of issues?
[22:52] <psandin> getting multiple active MDSes working
[22:53] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[22:53] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[22:54] <sagewk> psandin: ah. yeah, that's not something we're working on right now... we've been focusing on rados stability and rbd/radosgw. just starting to pivot back to fs work now.
[22:54] <sagewk> would be great to get some testing going, but don't run it in production just yet!
[22:56] <psandin> we'd like to help contribute to getting it production ready, but there's a lot of existing code that we're unfamiliar with
[22:58] <sagewk> yeah :)
[22:58] <sagewk> one of the easiest ways to contribute at this point is helping us build of the test suite for the file system.
[22:58] <sagewk> s/build of/build up/
[22:59] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Quit: Leaving)
[23:02] * scuttlemonkey (~scuttlemo@173-14-58-198-Michigan.hfc.comcastbusiness.net) Quit (Quit: zzzzzzzzzzzzzzzzzzzz)
[23:04] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[23:06] * benpol (~benp@garage.reed.edu) has joined #ceph
[23:13] <dmick> sagewk: thanks for being on the spot about the rados manpage :)
[23:13] <sagewk> np
[23:13] <dmick> I was just writing up a .txt file to put in admin/ explaining how to update a manpage
[23:14] <dmick> after noticing that rados.8 was just about exactly a year out of date :)
[23:14] <sagewk> yeah, i edited in place instead of rebuilding. you should rebuild them all for good measure, most liekly
[23:18] * Cube (~Adium@38.122.20.226) Quit (Ping timeout: 480 seconds)
[23:18] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:a090:5b58:9437:490c) Quit (Quit: LarsFronius)
[23:20] * Cube (~Adium@2607:f298:a:697:ca2a:14ff:fe16:4a67) has joined #ceph
[23:22] <dmick> oh dear yes
[23:33] * aliguori (~anthony@32.97.110.59) Quit (Read error: Operation timed out)
[23:35] * jmlowe (~Adium@adsl-99-124-128-85.dsl.ipltin.sbcglobal.net) has joined #ceph
[23:36] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:38] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:48] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[23:54] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[23:56] * cblack101 (86868949@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.