#ceph IRC Log


IRC Log for 2012-08-20

Timestamps are in GMT/BST.

[0:16] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[0:20] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[0:26] * ninkotech (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[0:28] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[0:30] * BManojlovic (~steki@212.200.243.134) Quit (Quit: I'm off, you lot do whatever you want...)
[0:35] * loicd1 (~loic@brln-4d0ce39f.pool.mediaWays.net) Quit (Quit: Leaving.)
[0:55] * sjust (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[1:06] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:27] * tnt (~tnt@89.40-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:28] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[2:02] * lofejndif (~lsqavnbok@28IAAGY8M.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[2:12] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Remote host closed the connection)
[2:58] * tightwork (~tightwork@142.196.239.240) Quit (Read error: Operation timed out)
[3:35] <ryann> I have one mds running, and its log indicates that several OSDs are "wrong node". Can I tell the mds to "start over", or drop its assumed map and generate a new one?
[4:08] * The_Bishop (~bishop@2a01:198:2ee:0:2c2e:766f:d684:56d2) Quit (Quit: Who the hell is this Peer? If I catch him I'll reset his connection!)
[4:08] <Tobarja> ryann: it doesn't scale, but you can just dump it, decode it, fix it, and push it back in
[4:10] <ryann> Tobarja: Thanks! And yes, I'm trying to document all of my mon failures. Still working with keyring issues. I'll have those fixed in about 15 (just got back on this...)
[4:17] * tightwork (~didders@142.196.239.240) has joined #ceph
[4:26] <ryann> Tobarja: What do you mean by "decode it"?
[4:26] * tightwork (~didders@142.196.239.240) Quit (Ping timeout: 480 seconds)
[4:50] * deepsa (~deepsa@117.203.2.61) has joined #ceph
[4:55] * yoshi_ (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[4:55] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Read error: Connection reset by peer)
[4:56] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[4:56] * yoshi_ (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Read error: Connection reset by peer)
[4:57] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[5:18] * nhm (~nhm@184-97-251-210.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[5:53] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[6:17] * ryann (~chatzilla@216.81.130.180) has left #ceph
[6:37] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[7:08] * Cube1 (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[7:15] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:36] <Tobarja> ryann: http://ceph.com/wiki/Custom_data_placement_with_CRUSH
[7:37] <Tobarja> aww dang... missed him
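
(For reference, a minimal sketch of the dump/decode/fix/push-back cycle Tobarja describes above; filenames are placeholders.)

    ceph osd getcrushmap -o crushmap.bin       # dump the compiled crush map from the cluster
    crushtool -d crushmap.bin -o crushmap.txt  # decode it into editable text
    $EDITOR crushmap.txt                       # fix the devices/buckets/rules by hand
    crushtool -c crushmap.txt -o crushmap.new  # recompile the edited map
    ceph osd setcrushmap -i crushmap.new       # push it back into the cluster
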
[8:06] * tnt (~tnt@89.40-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:38] * Meths_ is now known as Meths
[8:45] <NaioN> lightspeed: nice! I'll keep that setup in mind, it's really nice to have the redundancy
[8:45] <NaioN> lightspeed: well no, you have to find out why they're still stuck; you can use "ceph health detail" to see which pgs are affected, and with "ceph pg ID query" you can query a single pg
[8:46] <NaioN> at the bottom of the output it states something about the recovery_state.
[8:46] <NaioN> hopefully it will tell you what's wrong
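
(A sketch of the checks NaioN mentions; the pg id 2.1f is just a placeholder.)

    ceph health detail   # lists the stuck/degraded pgs by id
    ceph pg 2.1f query   # full state of one pg; recovery_state is near the bottom of the output
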
[8:49] <Tobarja> NaioN: is there generally a way to get things back in order?
[8:52] <Tobarja> In this ceph-devel post from May: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/6757 Sage described the use of "public network" and "cluster network" in an [osd] block. Do the other services obey those fields, or do they do anything that cares?
[9:00] <NaioN> as far as I know the others don't obey the clauses
[9:00] <NaioN> so you can only use them in the osd block
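
(For reference, a hypothetical ceph.conf fragment along the lines of the post linked above; the subnets are made up, and per NaioN only the osds honour these clauses.)

    [osd]
        public network  = 192.168.0.0/24   # client-facing traffic
        cluster network = 10.0.0.0/24      # replication/heartbeat traffic between osds
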
[9:00] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:08] * tnt (~tnt@89.40-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:09] <lightspeed> NaioN: most of the remaining issues cleared up by restarting the OSDs a second time after the config change
[9:10] <NaioN> ok
[9:10] <NaioN> so now everything healthy?
[9:10] <lightspeed> I now just have 9 active+remapped (what does "remapped" mean?) and 6 active+degraded (these ones simply haven't replicated yet, don't really understand why)
[9:11] <NaioN> yeah that's no problem
[9:11] <NaioN> remapped means the pgs are replicated, but they reside on different osds than they should according to the crushmap
[9:12] <NaioN> and degraded means there's no replica
[9:13] <NaioN> and those remapped are normal when you add osds, because then the crushmap gets updated and some pgs have to move to different osds
[9:13] <NaioN> but in the meantime they are still accessible in the old place
[9:13] <lightspeed> shouldn't it fix those up by itself pretty quickly though? I mean it's not like I have much data to move around, so I'd have thought it'd sort it out in a couple of minutes
[9:13] <NaioN> but if it's correct it says more than just active+remapped and active+degraded
[9:14] <lightspeed> but yeah, I can certainly access all the data again now
[9:14] <NaioN> if it's correct they also state +recovering?
[9:15] <lightspeed> no mention of recovering
[9:15] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:15] <NaioN> hmmm do you see traffic flowing?
[9:16] <NaioN> lightspeed: http://ceph.com/docs/master/dev/placement-group/#user-visible-pg-states
[9:16] <lightspeed> there is some traffic, according to tcpdump
[9:16] <NaioN> hmmm they should state +recovering and/or backfill
[9:17] <NaioN> so you could still query the pgs to see why they don't want to recover
[9:22] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:23] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:24] <lightspeed> here's a query for one of the active+degraded pgs: http://pastebin.com/24rgwMR7 and one of the active+remapped pgs: http://pastebin.com/aA6yMaY1
[9:26] <lightspeed> only difference between the recovery section in each is the degraded one has a value for "scrub_epoch_start"
[9:27] <lightspeed> doesn't seem to tell me much though
[9:27] * pentabular (~sean@adsl-71-141-229-185.dsl.snfc21.pacbell.net) has left #ceph
[9:31] * fghaas (~florian@91-119-204-193.dynamic.xdsl-line.inode.at) has joined #ceph
[9:31] <lightspeed> hmm, I wonder whether it's the same as this (at least for the degraded pgs): http://tracker.newdream.net/issues/2874
[9:33] <NaioN> lightspeed: you can see if you have the same issue
[9:33] <NaioN> just look on which osds the pgs are mapped
[9:34] * loicd (~loic@brln-4d0ce39f.pool.mediaWays.net) has joined #ceph
[9:37] <NaioN> lightspeed: the degraded only have 1 osd
[9:37] <NaioN> at up and acting you only see osd 0
[9:38] <NaioN> with the remapped you see acting 2 and 0 (2 primary)
[9:38] <NaioN> but it states only 2 is up
[9:38] <lightspeed> yeah all the degraded ones are on osd 0 only
[9:38] <lightspeed> a couple of the remapped ones say acting 1 and 0
[9:38] <NaioN> or 2 and 0
[9:39] <lightspeed> yeah 6 of them are 2 and 0
[9:39] <lightspeed> 3 are 1 and 0
[9:41] <NaioN> seems you have the same problem
[9:41] <NaioN> and with "ceph osd tree" all looks fine?
[9:41] <lightspeed> also, if I run that same osdmaptool command as shown in the issue notes, referencing any of my degraded pgs, then they all map only to osd 0
[9:42] <NaioN> does the log of osd 0 tell you something?
[9:42] <lightspeed> yes the tree looks fine to me
[9:44] <lightspeed> not really
[9:44] <lightspeed> hey thanks for all your help
[9:44] <lightspeed> but I need to go into work now
[9:45] <lightspeed> I'll continue trying to figure this out when I'm back in 10 hours or so
[9:53] <NaioN> ok...
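
(A quick way to compare where CRUSH wants a pg against where it currently is, as in the exchange above; the pg id is a placeholder.)

    ceph pg map 2.1f   # prints the up and acting osd sets for that pg
    ceph osd tree      # shows the osds, their weights and up/down state in the crush hierarchy
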
[9:56] * EmilienM (~EmilienM@78.251.191.177) has joined #ceph
[9:59] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (Remote host closed the connection)
[10:05] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[10:14] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (Ping timeout: 480 seconds)
[10:16] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:31] * EmilienM (~EmilienM@78.251.191.177) Quit (Ping timeout: 480 seconds)
[10:31] * EmilienM (~EmilienM@vau75-1-81-57-77-50.fbx.proxad.net) has joined #ceph
[11:05] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:59] * fghaas (~florian@91-119-204-193.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[12:00] * fghaas (~florian@91-119-204-193.dynamic.xdsl-line.inode.at) has joined #ceph
[12:03] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[12:08] * fghaas (~florian@91-119-204-193.dynamic.xdsl-line.inode.at) has left #ceph
[12:11] * gregorg (~Greg@78.155.152.6) has joined #ceph
[12:11] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[12:33] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:40] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[12:42] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:44] * The_Bishop (~bishop@2a01:198:2ee:0:edbc:1f8b:86e7:b914) has joined #ceph
[12:54] * nhm (~nhm@184-97-251-210.mpls.qwest.net) has joined #ceph
[13:15] * Cube1 (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[13:19] <tnt> What memory size are people using for osd and mon? I keep getting OOM killed despite having 4G of ram in the machine for running 2 osd processes. And my mon process is currently using 2G... (on another machine).
[13:22] <NaioN> a bit more :)
[13:22] <NaioN> i've about 1G per osd
[13:22] <NaioN> and for the mons I have 16G per mon
[13:23] * tightwork (~didders@142.196.239.240) has joined #ceph
[13:23] <NaioN> but the mons don't use that much mem
[13:23] <NaioN> in my cluster a couple of hundred megabytes
[13:24] <NaioN> the osds use about 300MB each
[13:24] <NaioN> (looking at RES)
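
(One simple way to check the resident memory NaioN is quoting, assuming the daemons run under their usual process names.)

    ps -C ceph-osd,ceph-mon -o pid,rss,vsz,comm   # rss is the RES figure, in KiB
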
[13:26] <tnt> 1G per OSD is what I was targeting. The machine is currently running 2 processes, so I thought 2G for the processes and then a couple G free for the OS to cache/buffer stuff.
[13:27] <NaioN> well yeah that should be enough
[13:27] <tnt> but here I have 2.4G res for the mon and this basically keeps climbing until it crashes and restarts.
[13:27] <tnt> Must have a memory leak somewhere.
[13:27] <NaioN> hmmm looks more like a mem-leak...
[13:27] <NaioN> which version?
[13:28] <tnt> 0.48.1argonaut-1precise
[13:29] <NaioN> we run the same version (well, at least also 0.48.1, but we built it ourselves)
[14:06] * ninkotech (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[14:06] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[14:08] * tightwork (~didders@142.196.239.240) Quit (Ping timeout: 480 seconds)
[14:50] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:04] * The_Bishop (~bishop@2a01:198:2ee:0:edbc:1f8b:86e7:b914) Quit (Quit: Who the hell is this Peer? If I catch him I'll reset his connection!)
[15:11] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[15:20] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[15:20] <tjpatter> Is there any documentation out there on the cephx authentication capabilities? Specifically, what capabilities exist and when should they be used?
[15:28] * dilemma (~dilemma@2607:fad0:32:a02:21b:21ff:feb7:82c2) has joined #ceph
[15:28] <dilemma> I'm looking for some docs on cephx caps, if there are any
[15:29] <dilemma> I'm trying to understand what caps are needed for what operations
[15:29] <dilemma> and what kind of limits can be placed on users
[16:07] <dilemma> all I can find is the manpage of ceph-authtool (http://manpages.ubuntu.com/manpages/precise/man8/ceph-authtool.8.html) which doesn't go into much detail about the syntax of capabilities, and what the available options are in the clauses
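
(For what it's worth, caps at this point are granted per service -- mon, osd, mds -- with allow r/w/x/* flags. A hedged example using only the ceph-authtool syntax from the manpage above; the key name is made up.)

    ceph-authtool --create-keyring keyring -n client.foo --gen-key \
        --cap mon 'allow r' --cap osd 'allow rwx' --cap mds 'allow'
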
[16:37] <elder> sage, sagewk I'm interested in a simple summary of the state of the ceph-client tree, and how it might have changed since about 10 days ago. I'm working with it now, but if there's anything special I should know about please let me know.
[16:48] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:49] * mgalkiewicz (~mgalkiewi@staticline58611.toya.net.pl) has joined #ceph
[16:50] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has joined #ceph
[16:58] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:59] * NaioN (~stefan@andor.naion.nl) Quit (Quit: leaving)
[16:59] * NaioN (stefan@andor.naion.nl) has joined #ceph
[17:03] * ninkotech (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[17:04] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[17:12] * flakrat (~flakrat@eng-bec264la.eng.uab.edu) Quit (Read error: Connection reset by peer)
[17:13] * lx0 is now known as lxo
[17:18] <elder> nhm, know anything about this, or its subject matter? http://ampcamp.berkeley.edu/
[17:26] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has left #ceph
[17:28] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:38] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[17:43] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:44] * Tv_ (~tv@2607:f298:a:607:38b3:897f:20fd:72b9) has joined #ceph
[17:48] * nhmhome (~nh@184-97-251-210.mpls.qwest.net) has joined #ceph
[17:52] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:56] * senner (~Wildcard@68-113-228-89.dhcp.stpt.wi.charter.com) has joined #ceph
[17:57] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[17:59] * tnt (~tnt@212-166-48-236.win.be) Quit (Read error: Operation timed out)
[18:00] <tjpatter> Is there any way that we can mirror the Ceph repositories? We would like to keep a local sync of this going...
[18:00] * tjpatter (~tjpatter@69.167.130.11) has left #ceph
[18:01] * tjpatter (~tjpatter@69.167.130.11) has joined #ceph
[18:03] <senner> Just use wget and you can sync them, if they don't mind the spidering/load, or use an apt cache/proxy/forward-proxy manager.
[18:04] <elder> I'm getting a build error on current ceph/master
[18:04] <elder> Should I expect that?
[18:04] <tjpatter> No rsync service on Ceph's side we could pull from?
[18:06] <senner> Well, they just set up a European location for packages, http://eu.ceph.com/debian/, but no idea about rsync/ssh for private access.
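
(A rough sketch of the wget approach senner mentions; the path and options are guesses, and it simply re-fetches whatever changed upstream.)

    wget --mirror --no-parent --no-host-directories http://eu.ceph.com/debian/
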
[18:07] <sagewk> elder: it's a bit of a mess due to the networking regression in -rc1. finally tracked that down and am testing a fix now.
[18:07] <elder> You're talking about ceph-client, or ceph?
[18:07] <sagewk> testing-next is where new stuff should go.. hopefully we'll be able to switch back to actually using it once this is resolved
[18:07] <sagewk> ceph-client
[18:07] <elder> OK.
[18:08] <elder> I had just decided to adjust what I had to work against current testing.
[18:08] <elder> I'll take a look at what you have in testing-next.
[18:12] * tnt (~tnt@89.40-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:16] <dilemma> most projects that run a debian repo prefer opening rsync access so that heavy users can sync twice a day for a local mirror
[18:17] <dilemma> seems odd that ceph doesn't run an rsync service for their repo
[18:22] <Tv_> dilemma: most open source projects are starved for bandwidth; we're a spin-off of a hosting company..
[18:23] <elder> Doesn't mean bandwidth should be wasted...
[18:23] <Tv_> yeah though we might want to explore CDN options before rsync
[18:24] <Tv_> just because maintaining a good mirror network is a lot of work
[18:24] <dilemma> also, many ceph users (such as tjpatter and me) would prefer a local mirror
[18:24] <Tv_> dilemma: individual users shouldn't hit a central rsync daemon anyway; that's really wasteful
[18:24] <Tv_> dilemma: run apt-cacher-ng
[18:24] <dilemma> tjpatter and I are working on a half-petabyte ceph deployment, and we'll need to run a local mirror
[18:25] <Tv_> just saying, rsync will transfer a *lot* even when you're normally not installing all those packages
[18:25] <dilemma> no, rsync will only grab what has changed since the last check-in
[18:25] <Tv_> and we have a huge amount of churn in the apt repos
[18:26] <Tv_> with care, we could probably isolate releases and master into separate rsync targets, but that's work that we don't have anyone available for right now..
[18:26] <Tv_> s/master/autobuilt/
[18:27] <Tv_> and then you end up with the question, how do you deal with out of date mirrors, etc?
[18:27] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[18:27] <Tv_> as i said, i think it'd make sense to explore cdn options first
[18:28] <Tv_> manual configuration and management in modern days is just not a nice idea, imho
[18:30] <dilemma> well, a cdn won't work for us. Our ceph cluster won't be able to reach your cdn. A local mirror will be required for our use case
[18:31] <dilemma> We can certainly use apt-cacher, or a similar solution
[18:31] <dilemma> but we're used to just plugging in yet another rsync target to our existing local mirror setup
[18:32] <dilemma> if a mirror network was something you'd be interested in, we'd be happy to provide an official mirror as well
[18:33] <dilemma> apt-cacher or similar is just an annoyance we were hoping to avoid
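
(For completeness, the apt-cacher-ng route Tv_ suggests is mostly client-side configuration; a hypothetical example, with 3142 being apt-cacher-ng's default port and the hostname made up.)

    # /etc/apt/apt.conf.d/01proxy on each node, pointing at the box running apt-cacher-ng
    Acquire::http::Proxy "http://aptcache.internal:3142";
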
[18:37] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[18:39] * mgalkiewicz (~mgalkiewi@staticline58611.toya.net.pl) has left #ceph
[18:41] <sagewk> elder: okay, pushed updated testing branch with proper fix
[18:42] <elder> So I should base everything on the updated testing, right?
[18:46] <sagewk> actually, i'm about to rebase on current linus/master, since the fuse fixes are now upstream
[18:46] <sagewk> 5 min
[18:49] <sagewk> elder: pushed
[18:50] <sagewk> er, actually, let me reorder that with the stuff that needs to go back upstream for 3.6
[18:51] <sagewk> k
[18:53] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[18:57] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[19:00] <elder> sagewk, yes that looks much cleaner.
[19:06] * BManojlovic (~steki@212.200.243.134) has joined #ceph
[19:08] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[19:12] * rosco (~r.nap@188.205.52.204) Quit (Remote host closed the connection)
[19:12] * rosco (~r.nap@188.205.52.204) has joined #ceph
[19:15] * dmick (~dmick@38.122.20.226) has joined #ceph
[19:15] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[19:15] * EmilienM (~EmilienM@vau75-1-81-57-77-50.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[19:16] <nhmhome> ugh, phone locked up. I'll be on the call in a bit.
[19:19] <glowell> Is gotomeeting working this morning?
[19:20] <Cube> still waiting as well
[19:20] <Cube> There we go!
[19:20] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[19:20] <nhmhome> phone is still having trouble. Looks like it's time to upgrade.
[19:23] * EmilienM (~EmilienM@vau75-1-81-57-77-50.fbx.proxad.net) has joined #ceph
[19:24] * adjohn (~adjohn@adsl-75-36-203-144.dsl.pltn13.sbcglobal.net) has joined #ceph
[19:27] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[19:27] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[19:29] * senner (~Wildcard@68-113-228-89.dhcp.stpt.wi.charter.com) Quit (Quit: Leaving.)
[19:36] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[19:36] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[19:40] * adjohn is now known as Guest3656
[19:40] * adjohn (~adjohn@67.21.1.58) has joined #ceph
[19:46] * Guest3656 (~adjohn@adsl-75-36-203-144.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[19:51] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[19:51] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit ()
[20:01] * adjohn (~adjohn@67.21.1.58) Quit (Quit: adjohn)
[20:02] * The_Bishop (~bishop@2a01:198:2ee:0:6871:48f8:3cc2:f1a8) has joined #ceph
[20:04] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:18] * EmilienM (~EmilienM@vau75-1-81-57-77-50.fbx.proxad.net) Quit (Remote host closed the connection)
[20:25] * The_Bishop (~bishop@2a01:198:2ee:0:6871:48f8:3cc2:f1a8) Quit (Quit: Who the hell is this Peer? If I catch him I'll reset his connection!)
[20:30] * Ryan_Lane (~Adium@216.38.130.164) has joined #ceph
[20:54] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[21:02] <elder> FYI my graphics driver is causing regular hangs. I'm about to try to update. At this point I predict problems, based on past experience. Wish me luck.
[21:13] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[21:58] * trhoden (~trhoden@pool-108-28-184-160.washdc.fios.verizon.net) has joined #ceph
[22:00] <trhoden> anybody know a way to get the device created by "rbd map" as output to stdout? I'm writing a script that maps an RBD then turns around and mounts it. But right now I have to call "rbd map" then "rbd showmapped" (and parse the output) to get the device.
[22:00] <trhoden> it'd be nice if there was an option to "rbd map" to have it echo that device on success
[22:04] <Tv_> trhoden: with a typical udev setup, it's /dev/rbd/POOL/IMAGE
[22:05] * dilemma (~dilemma@2607:fad0:32:a02:21b:21ff:feb7:82c2) Quit (Quit: Leaving)
[22:05] <trhoden> Tv_: oh good point. I was only thinking of /dev/rbd<x>, not that there were symlinks involved. That'll work!
[22:06] <dmick> trhoden: but it would indeed be nice to see that as output, and I don't see any option for it
[22:06] <Tv_> dmick: well it's really that it's not up to "rbd map"
[22:06] <Tv_> it'd literally need to do showmapped after an add
[22:06] <dmick> ask the kernel somehow, yeah
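
(A sketch of the map-then-mount flow trhoden describes, leaning on the udev symlink Tv_ points out; pool, image and mountpoint are placeholders.)

    rbd map myimage --pool mypool
    # with the standard udev rules the device is also reachable at a stable path:
    mount /dev/rbd/mypool/myimage /mnt/myimage
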
[22:07] <trhoden> Tv_: thanks again. While I have you -- I sent a couple patches recently about using SSH for "service ceph -a status" and a fix to the man page for "mkcephfs". Never tried sending a patch through a mailing list before. I was hoping they were formatted correctly.
[22:08] <Tv_> trhoden: that's more sagewk's territory -- i'll go on record as hating the 520-line shell script ;)
[22:08] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:08] <Tv_> trhoden: it's a pain to qa sufficiently, and that makes me very wary of it
[22:08] <trhoden> Tv_: haha. so I've noticed. =) I merely mentioned you because you had helped me by responding to my hostname issues and cluster/private network stuff
[22:11] * Meths_ (rift@2.25.191.72) has joined #ceph
[22:12] <sagewk> trhoden: i'll take a look
[22:16] <sagewk> trhoden: looks like your email program mangled the whitespace enough for it to not apply cleanly, but the patch is good. fixing it up
[22:16] * Meths (rift@2.27.72.157) Quit (Ping timeout: 480 seconds)
[22:16] <trhoden> sagewk: darn... but thanks for looking at it!
[22:19] <sagewk> trhoden: np, applied and pushed. thanks!
[22:19] <trhoden> sagewk: the ssh one in mkcephfs was one I wasn't sure about. if it does what I think it does -- it never worked for remote machines. So I was surprised. =)
[22:19] <trhoden> it certainly never worked for me, but does now
[22:21] <sagewk> you mean the status command, right? (not mkcephfs?)
[22:21] <trhoden> correct. "service ceph -a status"
[22:22] <trhoden> sagewk: about that typo. didn't mean to scare you. haha
[22:22] <sagewk> :)
[22:24] * EmilienM (~EmilienM@vau75-1-81-57-77-50.fbx.proxad.net) has joined #ceph
[22:24] * EmilienM (~EmilienM@vau75-1-81-57-77-50.fbx.proxad.net) has left #ceph
[22:26] <Tv_> sagewk: funky commit message
[22:27] <sagewk> the charset crap? yeah
[22:27] <Tv_> yeah
[22:27] <Tv_> ohhh ceph auth get
[22:27] <Tv_> i didn't even know of such a creature
[22:28] <trhoden> I hope it wasn't me re charset. It was a cut and paste straight into GMail -- using plaintext!
[22:28] <Tv_> trhoden: oh that was what messed up your whitespace, then... gmail ain't safe
[22:29] <trhoden> Tv_: I'll try to find a better alternative next time
[22:31] <Tv_> trhoden: https://git.wiki.kernel.org/index.php/GitTips#Using_gmail_to_send_your_patches
[22:32] <Tv_> trhoden: for background, http://www.kernel.org/pub/software/scm/git/docs/git-format-patch.html and the things pointed there; search for "gmail"
[22:32] <trhoden> Tv_: thanks! sorry for the newbie mistakes
[22:33] <Tv_> trhoden: not really a newbie mistake, just.. gmail ain't built for programmers ;)
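
(The GitTips recipe linked above boils down to roughly this; the address is a placeholder and gmail may require an application-specific password.)

    git config --global sendemail.smtpserver smtp.gmail.com
    git config --global sendemail.smtpserverport 587
    git config --global sendemail.smtpencryption tls
    git config --global sendemail.smtpuser you@gmail.com
    git format-patch -o patches/ origin/master   # whitespace-safe patch files
    git send-email patches/*.patch
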
[22:34] * deepsa_ (~deepsa@117.203.2.13) has joined #ceph
[22:35] <lightspeed> NaioN: I now have HEALTH_OK :)
[22:35] <lightspeed> I followed the suggestion from sage on http://tracker.newdream.net/issues/2874 to set the tunables to 0 (after upgrading sufficiently), and all remaining problem PGs then recovered
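
(The tunables change lightspeed refers to is made by editing the decompiled crush map, much like the earlier crushtool sketch; a rough outline only -- the exact tunable names and values are whatever the tracker issue specifies and vary by version.)

    ceph osd getcrushmap -o cm && crushtool -d cm -o cm.txt
    # add/adjust lines near the top of cm.txt, e.g.:
    #   tunable choose_local_tries 0
    #   tunable choose_local_fallback_tries 0
    crushtool -c cm.txt -o cm.new && ceph osd setcrushmap -i cm.new
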
[22:36] * deepsa (~deepsa@117.203.2.61) Quit (Ping timeout: 480 seconds)
[22:36] * deepsa_ is now known as deepsa
[22:41] * tightwork (~didders@rrcs-71-43-128-65.se.biz.rr.com) has joined #ceph
[22:52] <joao> gregaf, are you around?
[22:55] * pentabular (~sean@adsl-71-141-229-185.dsl.snfc21.pacbell.net) has joined #ceph
[23:00] * jluis (~JL@89.181.156.250) has joined #ceph
[23:05] * joao (~JL@89-181-148-52.net.novis.pt) Quit (Ping timeout: 480 seconds)
[23:05] * danieagle (~Daniel@177.43.213.15) Quit (Quit: See you :-) and Thanks a Lot for Everything!!! ^^)
[23:06] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[23:21] * tightwork (~didders@rrcs-71-43-128-65.se.biz.rr.com) Quit (Ping timeout: 480 seconds)
[23:30] * EmilienM (~EmilienM@vau75-1-81-57-77-50.fbx.proxad.net) has joined #ceph
[23:49] <elder> Does anyone know why ceph/master gives me this error with a fresh build?
[23:49] <elder> make[3]: *** No rule to make target `librbd.cc', needed by `librbd_la-librbd.lo'. Stop.
[23:53] * prometheanfire (~promethea@rrcs-24-173-105-83.sw.biz.rr.com) has joined #ceph
[23:53] <prometheanfire> what filesystem is preferred? xfs, ext4, btrfs?
[23:53] <prometheanfire> zfsonlinux :P
[23:54] <Fruit> when your storage just isn't experimental enough :P
[23:54] <Tv_> elder: try re-running ./autogen.sh, automake, ./configure etc
[23:55] <elder> I did.
[23:55] <elder> I do that every time.
[23:55] <elder> #!/bin/bash
[23:55] <elder> cd /home/elder/ceph/ceph
[23:55] <elder> make distclean
[23:55] <elder> ./do_autogen.sh -d3
[23:55] <elder> make -j 4
[23:55] <elder> I'll try without the "-j 4"
[23:55] <Fruit> no autoreconf?
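
(One way to rule out stale autotools output when chasing an error like the librbd.cc one above; note that git clean is destructive to untracked files.)

    git clean -fdx      # wipe all untracked and ignored build products
    ./autogen.sh
    ./configure         # or ./do_autogen.sh with the usual flags
    make -j4
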
[23:56] <prometheanfire> zfsonlinux is working great for me as a rootfs so far, going on a couple of months now

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.