#ceph IRC Log


IRC Log for 2011-08-17

Timestamps are in GMT/BST.

[0:02] * The_Bishop (~bishop@port-92-206-21-65.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[0:27] * mgalkiewicz (~mgalkiewi@84-10-109-25.dynamic.chello.pl) Quit (Quit: Ex-Chat)
[0:29] * The_Bishop (~bishop@port-92-206-21-65.dynamic.qsc.de) has joined #ceph
[0:31] * Juul (~Juul@3408ds2-vbr.4.fullrate.dk) Quit (Quit: Leaving)
[0:51] <bchrisman> can we control the mds logging to perhaps only output locking requests?
[0:52] <bchrisman> or rather.. any lock activity?
[0:52] <bchrisman> trying to debug an issue where it looks like we have a bum lock… but putting mds=20 spams our logfiles
[1:01] <cmccabe> bchrisman: I don't see any configuration setting that directly corresponds to that
[1:02] <bchrisman> I guess I'll check whether locking has a specific debug level, or if 20 is the only way to get it.
[1:03] <cmccabe> I see some log output in Locker.cc
[1:03] <cmccabe> perhaps you will want to change the levels of that
[1:11] * cmccabe (~cmccabe@ Quit (synthon.oftc.net charm.oftc.net)
[1:11] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) Quit (synthon.oftc.net charm.oftc.net)
[1:11] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) Quit (synthon.oftc.net charm.oftc.net)
[1:11] * pruby (~tim@leibniz.catalyst.net.nz) Quit (synthon.oftc.net charm.oftc.net)
[1:11] * Meths (rift@ Quit (synthon.oftc.net charm.oftc.net)
[1:11] * tjikkun (~tjikkun@195-240-187-63.ip.telfort.nl) Quit (synthon.oftc.net charm.oftc.net)
[1:12] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[1:12] * iggy (~iggy@theiggy.com) Quit (Ping timeout: 480 seconds)
[1:14] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) has joined #ceph
[1:16] * Meths (rift@ has joined #ceph
[1:21] * cmccabe (~cmccabe@ has joined #ceph
[1:22] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[1:22] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[1:23] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[1:23] * bchrisman (~Adium@ Quit (Quit: Leaving.)
[1:24] * MarkN (~nathan@ has joined #ceph
[2:04] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Ping timeout: 480 seconds)
[2:25] * iggy (~iggy@theiggy.com) has joined #ceph
[2:58] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[3:08] * cmccabe (~cmccabe@ has left #ceph
[3:15] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[3:28] * huangjun (~root@ has joined #ceph
[3:36] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) Quit (Quit: jojy)
[4:26] * jojy (~jojyvargh@75-54-231-2.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[4:27] * jojy (~jojyvargh@75-54-231-2.lightspeed.sntcca.sbcglobal.net) Quit ()
[5:25] * lx0 (~aoliva@19NAAC3JZ.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[5:25] * lx0 (~aoliva@09GAAF6EK.tor-irc.dnsbl.oftc.net) has joined #ceph
[5:37] * amichel (~amichel@salty.uits.arizona.edu) has left #ceph
[7:21] * lx0 (~aoliva@09GAAF6EK.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[7:26] * lxo (~aoliva@659AADOWU.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:18] * huangjun (~root@ Quit (Quit: Lost terminal)
[8:25] * leonardo_ (~leonardo@ has joined #ceph
[8:25] * leonardo_ (~leonardo@ Quit ()
[9:13] * gregorg (~Greg@ has joined #ceph
[9:58] * The_Bishop (~bishop@port-92-206-21-65.dynamic.qsc.de) Quit (Quit: Who the devil is this peer? If I catch him, I'll reset his connection!)
[10:00] * The_Bishop (~bishop@port-92-206-21-65.dynamic.qsc.de) has joined #ceph
[11:30] * huangjun (~root@ has joined #ceph
[13:03] * huangjun (~root@ Quit (Quit: Lost terminal)
[14:21] * linus (~linus@ has joined #ceph
[14:22] * linus (~linus@ Quit ()
[15:25] * Juul (~Juul@ has joined #ceph
[15:40] * morse (~morse@supercomputing.univpm.it) Quit (Quit: Bye, see you soon)
[15:41] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[16:30] * greglap (~Adium@ has joined #ceph
[16:47] * Juul (~Juul@ Quit (Ping timeout: 480 seconds)
[16:54] * NeonLicht (~NeonLicht@darwin.ugr.es) has left #ceph
[17:21] * greglap (~Adium@ Quit (Read error: Connection reset by peer)
[17:39] * greglap (~Adium@aon.hq.newdream.net) has joined #ceph
[17:40] * greglap (~Adium@aon.hq.newdream.net) Quit ()
[18:34] <Tv> sagewk: regarding #1401, soo if the mds enforcement isn't there, but you can't access the underlying pool... that means you can still run find / and see other customers' data?
[18:34] <sagewk> you could still mount / and see the file names, but not read them
[18:34] <Tv> ok
[18:35] <sagewk> and if the uids aren't distinct you could delete them, create new files, rename, etc.
[18:35] <sagewk> or you're root
[18:35] <gregaf> sagewk: aren't normal *nix perms enforced on the MDS?
[18:36] <sagewk> gregaf: nope
[18:36] <gregaf> or do our MDS caps not support only allowing specific user IDs right now
[18:36] <gregaf> ah
[18:36] <Tv> err durr so root@customerA gets to rm -rf /customerB
[18:36] <Tv> that's sad
[18:36] <sagewk> right. that's why we need the mds caps piece to lock a client into a subdir
[18:36] <Tv> so what's the ceph-level "uid" thing that i've seen somewhere?
[18:37] <sagewk> the auid in the cephx caps?
[18:37] <Tv> yeah auid
[18:37] <gregaf> that's a rados-level authenticated user id
[18:37] <gregaf> it's how we can do the pool caps, but it's not related to *nix users
[18:37] <sagewk> just an id on the cephx user. i think it's just used for pool ownership so far.
[18:38] <Tv> i'm not clear on the difference between having a different auid vs having a different key with different caps
[18:38] <sagewk> you can write the cap in terms of the auid, to say 'read and write any pool with owner=my_auid'
[18:38] <Tv> ahhh
[18:39] <sagewk> instead of explicitly listing the pools you own/created
[18:39] <Tv> i need to finagle my way out of dho duties and write this all down (better)
[18:39] <sagewk> yeah
[18:41] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) has joined #ceph
[18:41] <Tv> i wish unix used strings not numbers for users.. *sigh plan9*
[18:44] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[18:57] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) has joined #ceph
[18:58] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[19:56] <sagewk> tv: how difficult do you think it'd be to make teuthology run hadoop?
[19:56] <Tv> well
[19:56] <Tv> the only version of hadoop i know to work well is cloudera
[19:56] <Tv> which implies installing debs
[19:57] <Tv> so install/remove get ugly
[19:57] <Tv> let me put it this way
[19:57] <Tv> how long does a full reinstall take these days, and could we perhaps use that as the cleanup...
[19:57] <sagewk> we can just leave them installed? it's the config sandboxing that'll be the challenge
[19:58] <Tv> it's the config changes that are a pain
[19:58] <Tv> so i always considered the /tmp/cephtest thing a kludge
[19:59] <Tv> but it was necessary, because i don't trust deb purging for sanity, and reinstall was too laborious
[19:59] <Tv> if reinstall gets easier, then i think we'd even get value from really installing the ceph debs
[19:59] <Tv> and at *that* point, the hadoop install would be easy too
[19:59] <sagewk> yeah
[20:00] <Tv> we could leave all hadoop components installed on all sepia nodes, and then configure & start the ones relevant for the test
[20:00] <sagewk> yeah
[20:00] <darkfaded> Tv: http://deranfangvomende.wordpress.com/2011/01/06/todays-best-debian-command/ (thats as close to reinstalling debs as i ever got)
[20:00] <Tv> i don't remember how ugly their init scripts are, will they get in the way or not
[20:01] <Tv> but we could basically disable the init scripts, write config files to /etc/hadoop, and then start the daemons directly under teuthology
[20:01] <Tv> and every run just wipes out the previous /etc/hadoop config
[20:02] <Tv> they have that update-alternatives config switching thing; just make that subdir a symlink to /tmp/cephtest
[20:02] <sagewk> yeah that seems simplest.
[20:03] <Tv> that means we end up re-implementing bits of their init scripts, but that's just about unavoidable
[20:03] <sagewk> or we host the rc?.d links and leave the init scripts intact in /etc/init.d
[20:03] <Tv> the teuth controller can generate the right xml configs and write them to remote "tee /tmp/cephtest/...", that part is fairly easy
[20:03] <sagewk> s/host/hose/
[20:04] <Tv> oh yeah that's how you kill init script properly
[20:04] <Tv> but i kinda don't want to use "sudo /etc/init.d/hadoop-namenode start" under teuthology, because that means losing control of it
[20:04] <Tv> so i'd rather just have teuth run the underlying command directly
[20:05] <Tv> in a non-forking mode
[20:05] <Tv> just like the ceph daemons
[20:05] <Tv> dunno if that'll really matter
[20:05] <Tv> and the rc?.d link manipulation is easy via update-rc.d
[20:06] <Tv> we can put all that in ceph-qa-deploy fabfile
[20:06] <Tv> so all nodes have all cloudera debs installed but inactive
[20:08] <Tv> so about the RBD email: "As we don't have a lot of disks (only 16 at the moment), this adds up to a high number of write IOPS on the OSD disks with a negligible throughput." <-- that makes me think it is about performance, not just neatness
[20:08] <yehudasa> sagewk: fuse_lowlevel_notify_inval_inode() breaks compilation on skinny
[20:09] <Tv> i think the next paragraph is basically asking to only do the journal->disk work in batches
[20:09] <sagewk> yehudasa: which version of libfuse2?
[20:09] <Tv> on the assumption of journal being faster than disk
[20:09] <yehudasa> sagewk: 2.6.5
[20:09] <sagewk> tv: yeah. the thing is the fs writes are all buffered anyway, so there's no point in delaying them.
[20:09] <sagewk> yehudasa: ah it's a fuse 2.8 thing i think
[20:09] <Tv> not sure if that would actually help; it sounds like it'd need operation coalescing before it'd help
[20:10] <yehudasa> sagewk: so we should conditionally build it
[20:10] <sagewk> yehudasa: i think there's a way to build conditionally based on the fuse version, yeah
[20:10] <sagewk> using the macros in the fuse header
[20:11] <gregaf> hmm, ceph-qa-deploy is still located on cephbooter, right?
[20:11] <gregaf> I'm getting a bizarre git error when trying to push to it to add valgrind to the packages list
[20:11] <gregaf> gregf@kai:~/src/ceph-qa-deploy$ git push
[20:11] <gregaf> Counting objects: 5, done.
[20:11] <gregaf> Delta compression using up to 8 threads.
[20:11] <gregaf> Compressing objects: 100% (3/3), done.
[20:11] <gregaf> Writing objects: 100% (3/3), 322 bytes, done.
[20:11] <gregaf> Total 3 (delta 2), reused 0 (delta 0)
[20:11] <gregaf> error: unable to create temporary sha1 filename ./objects/08: File exists
[20:11] <gregaf> fatal: failed to write object
[20:11] <gregaf> error: unpack failed: unpacker exited with error code
[20:11] <gregaf> To cephbooter.ceph.dreamhost.com:/git/ceph-qa-deploy.git
[20:11] <gregaf> ! [remote rejected] master -> master (n/a (unpacker error))
[20:11] <gregaf> error: failed to push some refs to 'cephbooter.ceph.dreamhost.com:/git/ceph-qa-deploy.git'
[20:12] <Tv> gregaf: sounds like filesystem access control
[20:12] <Tv> gregaf: you're not in the right group
[20:12] <yehudasa> sagewk: there's FUSE_VERSION
[20:12] <gregaf> Tv: how are you pulling that out of those errors?
[20:13] <Tv> gregaf: drwxrwsr-x 2 sage ceph 4096 2011-06-17 14:15 objects/08
[20:13] <Tv> uid=1002(greg) gid=1002(greg) groups=1002(greg)
[20:13] <yehudasa> so it'll basically be #if FUSE_VERSION >= FUSE_MAKE_VERSION(2, 8)
[20:13] <gregaf> oh, separate channels, okay
[20:13] <gregaf> heh
[20:13] <Tv> gregaf: and some chicken bones, shaken on a drum, while mumbling
[20:14] <yehudasa> sagewk: the question would be whether it's ok to drop that call?
[20:21] <yehudasa> cmccabe: I'm getting make[2]: *** No rule to make target `../src/gtest/lib/libgtest.la', needed by `test_rados_api_io' following your merge
[20:23] <cmccabe> yehudasa: you need to make in the top level directory
[20:23] <cmccabe> yehudasa: at least once, to make gtest
[20:23] <cmccabe> yehudasa: similar to how make check has always operated
[20:23] <yehudasa> cmccabe: can you make it compile conditionally please?
[20:24] <yehudasa> --with-tests, e.g.
[20:24] <cmccabe> yehudasa: it does compile conditionally... it's only on for debug builds
[20:24] <yehudasa> oh, ok
[20:24] <cmccabe> same as the other test stuff
[20:25] <cmccabe> the way gtest is built is annoying, but I can't really suggest a way to make it easier
[20:25] <cmccabe> except for moving gtest into the src dir, which I don't think we want to do
[20:25] <cmccabe> or not using automake, which would be good, but probably not on the agenda in the short term
[20:26] <sagewk> cmccabe: oh i had a question for you.. did you see the wip-uninline branch?
[20:26] <cmccabe> sagewk: I took a quick look, seemed pretty straightforward
[20:26] <sagewk> cmccabe: basically just uninlines a bunch of stuff in buffer.h and osd_types.h. the weird thing is the build with symbols gets bigger. stripped it's smaller.
[20:26] <cmccabe> sagewk: in general the compiler is better at making inlining decisions these days
[20:27] <cmccabe> sagewk: I think inline functions don't get entries in the symbol table
[20:27] <sagewk> and it can still inline at the link stage? cuz this moves them into the .o file
[20:27] <cmccabe> sagewk: which would explain why uninlining would increase the symbol table size
[20:28] <cmccabe> sagewk: well, it can't do cross-module optimizations unless you use that flag
[20:28] <cmccabe> sagewk: where module is .o
[20:28] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[20:28] <cmccabe> sagewk: I last encountered the symbol table issue when dealing with backtrace_symbols_fd, btw
[20:29] <cmccabe> sagewk: the GNU backtrace function can't resolve symbols for inlined functions without -g, if memory serves
[20:29] <sagewk> yeah
[20:29] <sagewk> so... wip-uninline basically means none of those calls can get inlined by the compiler, bc we're not using that option, right? or are we?
[20:29] <sagewk> /should we?
[20:30] <cmccabe> I'm not sure how feasible it is to use it
[20:30] <cmccabe> http://gcc.gnu.org/wiki/LinkTimeOptimization
[20:32] <cmccabe> I guess some people argue that C++ accessors and other trivial functions should be put in the header file, since otherwise they can't get inlined most of the time
[20:33] <cmccabe> I am ok with that, but the function ought to be truly trivial... like just a return statement
[20:33] <cmccabe> also you almost never want to inline destructors because of the bad effect on compilation time
[20:33] <sagewk> in this case, moving methods to the .cc means they never get optimized (except for intra-class calls)
[20:34] <gregaf> probably not worth worrying about until we have performance benchmarks and profiling data?
[20:34] <gregaf> for all we know shrinking the executables by not inlining them is more efficient anyway
[20:34] <cmccabe> gregaf: good point
[20:34] <cmccabe> gregaf: the i-cache hit rate is important
[20:35] <cmccabe> sagewk: yeah, I don't really have all the answers. nobody does
[20:36] <cmccabe> sagewk: I do know this: rarely-used functions like buffer::list::encode_base64 are not good candidates for inlining
[20:36] <sagewk> so should we merge that? i'm inclined to err on the side of not inlining
[20:36] <cmccabe> sagewk: it seems reasonable to merge based on what I've read of it so far
[20:36] <sagewk> k thanks
[20:36] <cmccabe> sagewk: the pg_pool_t stuff is another thing that I really don't think inlining would help with
[20:37] <cmccabe> sagewk: which I can see you uninlined in this change
[20:37] <sagewk> yehudasa: grr, so all my teuth tests are hanging because the fuse invalidate callback deadlocks.
[20:37] <yehudasa> heh
[20:37] <cmccabe> sagewk: the only things I would even consider for inlining are maybe the really highly used bufferlist functions... and even then, it ought to be profiled
[20:37] <sagewk> k
[20:37] <yehudasa> sagewk: downgrade your libfuse2 ;)
[20:37] <sagewk> hehe
[20:46] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Remote host closed the connection)
[20:47] * mtk (L9XVrWELdl@panix2.panix.com) has joined #ceph
[20:53] * monrad (~mmk@domitian.tdx.dk) Quit (Quit: bla)
[20:54] * DLange (~DLange@dlange.user.oftc.net) Quit (Remote host closed the connection)
[20:53] <sagewk> gregaf: this also explains why cfuse was becoming a zombie
[21:00] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[21:23] * mtk (L9XVrWELdl@panix2.panix.com) Quit (Remote host closed the connection)
[21:23] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[21:25] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit ()
[21:25] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[21:26] * monrad-51468 (~mmk@domitian.tdx.dk) has joined #ceph
[21:27] * mtk0 (g24spPQFdn@panix2.panix.com) has joined #ceph
[21:28] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Remote host closed the connection)
[21:28] * mtk0 (g24spPQFdn@panix2.panix.com) Quit ()
[21:28] * mtk (KkDCxodNrp@panix2.panix.com) has joined #ceph
[21:33] * The_Bishop (~bishop@port-92-206-21-65.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[21:36] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[21:40] * The_Bishop (~bishop@port-92-206-21-65.dynamic.qsc.de) has joined #ceph
[21:53] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[22:15] <lxo> so, I bumped up the number of PGs in a pool, following the instructions in the wiki. now pg_num and pgp_num are different. will pgp_num be increased on its own once the splits are complete, or must I do something else on my own?
[22:16] <lxo> PGs were empty, in a newly-created filesystem, then; now I'm filling them up and concerned about the files undergoing the not-so-well-tested splits (per the wiki)
[22:17] <gregaf> lxo: I think it's increased on its own, but sjust would know better than I
[22:17] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) Quit (Quit: jojy)
[22:18] * jojy (~jojyvargh@70-35-37-146.static.wiline.com) has joined #ceph
[22:18] <sjust> lxo: splitting pretty much doesn't work at the moment
[22:18] <lxo> thanks, I'll keep watching it as I move the data from an older (possibly corrupt) filesystem to the new one
[22:18] <sjust> ok
[22:18] <lxo> err, that was for gregaf :-)
[22:19] <sjust> pgp_num I believe is for the preferred osd pgs
[22:19] <sjust> it must be increased independently
[22:19] <sjust> are you using the preferred pg mechanism?
[22:19] <lxo> so... what now? start over, drop pg_num back down, or what?
[22:19] <lxo> I don't even know what you're talking about
[22:19] <sjust> lxo: sorry, it's somewhat confusing
[22:19] <gregaf> sjust: I thought that pgp_num was num bits used for PG-to-OSD placement calculations
[22:20] <sjust> gregaf: one sec
[22:20] <lxo> I read some code and got the idea that pgp_num was the effective number of PGs
[22:20] <gregaf> which is initially held steady as you increase pg_num so that the split PGs remain on the same OSD
[22:20] <sjust> oops, I'm wrong
[22:20] <lxo> yep, that's what the comments in rados.h IIRC say
[22:20] <sjust> I was thinking of lpg_num
[22:21] <lxo> aah, no, that's something else
[22:21] <lxo> that I haven't delved into
[22:21] <sjust> yes, sorry about that
[22:21] <lxo> np
[22:21] <lxo> in what sense does "split" not work? won't do anything whatsoever? will mess things up? will mess other things up in the process? :-)
[22:22] <sjust> lxo: last I checked, it tends to trigger some crashes
[22:22] <sagewk> sjust, lxo: pgp_num doesn't increase automagic, nope.
[22:23] <lxo> so, if I bump it up now, after loading some 500k small files into the cluster...
[22:23] <sagewk> lxo: if the split worked then you're all good
[22:23] <sagewk> changing pgp_num just reshuffles the pg locations on disk so they are actually distributed
[22:24] <sagewk> (bumping pg_num just splits the pg contents in place)
[22:24] <sagewk> so you can safely increase (or decrease) pgp_num and all will be well
[22:24] <gregaf> so right now you need to somehow figure out manually when the splits are done?
[22:24] <sagewk> ceph pg stat will tell you
[22:25] <lxo> hmm, it never did to me. do degraded PGs prevent splitting?
[22:26] <lxo> (what I really wanted was smaller PG directories, for rsyncing them up between replicas was taking too long, so maybe I already got what I wanted with pg_num < pgp_num)
[22:27] <gregaf> sagewk: looks like I was just checking the wrong gitbuilder, sorry about that
[22:27] <sagewk> gregaf: np
[22:27] <lxo> as far as ceph is really concerned, I could have a single PG per pool, or maybe per OSD for load-balancing, for I replicate everything onto all 3 nodes
[22:28] <sagewk> lxo: yeah
[22:28] <sagewk> lxo: pg_num==1 just means the entire pool will be in one dir, on one set of (3) osds. ok for small pools, not for big ones.
[22:28] <lxo> now, I'm thinking of reducing the replication level for some directories, and I'm pretty sure I saw something somewhere about assigning certain directories to different pools, but I can't seem to find that now. any pointers?
[22:28] <sagewk> man cephfs
[22:30] <lxo> THANK YOU! :-)
[22:39] <gregaf> sagewk: Tv: anything likely to break if I set the cfuse teuthology task to chown the mountpoint to the ubuntu user?
[22:40] <sagewk> dunno. why do you need that though?
[22:41] <gregaf> just seems more practical than setting all the tasks which need to write to the mountpoint to sudo or change it themselves
[22:42] <sagewk> works for me. you'd need to change the kclient one too. check with tv tho
[22:42] <gregaf> will do
[22:46] <gregaf> hmmm, he likes having all the tasks make their own so we can check the behavior on mount
[22:48] <Tv> i like having the test behavior be realistic; i want to be able to write [ "$(stat --printf='%A %u %g' /tmp/cephtest/mnt.42)" = "drwxr-xr-x 0 0" ]
[22:48] <Tv> because how else will you test what the initial values are
[22:49] <Tv> and, basically, most mountpoints in real use are owner by root:root, and then the subdirs get chowned/chgrped; that's just more typical
[22:50] <Tv> *owned
[22:50] <Tv> and well, now i see we should do install --owner=1042 --group=1034 --mode=0123 /tmp/cephtest/mnt.0/foo and then mount ceph:/foo in another dir, and check *that*, etc
[23:05] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Remote host closed the connection)
[23:15] * darkfader (~floh@ has joined #ceph
[23:15] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[23:17] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[23:20] * darkfaded (~floh@ Quit (Ping timeout: 480 seconds)
[23:42] <lxo> is there any way to assign a crush ruleset to a newly-created pool? or to specify the ruleset at the time of its creation?
[23:52] <slang> sagewk: for the second mds crash (in Locker::issue_caps), I was able to get a log with debug mds = 20
[23:52] <sagewk> slang: oh great
[23:52] <sagewk> can you post/attach that somewhere?
[23:52] <slang> sagewk: but its very large, and has a lot of filenames that I'm not able to share publicly, unfortunately
[23:52] <slang> sagewk: and it would take a while to scrub it
[23:53] <slang> sagewk: do you need the whole replay log, or just the last 1000 lines or so?
[23:53] <sagewk> slang: this is showing the 2011-08-16 16:46:51.113638 7f88566ea700 log [WRN] : bad client_range {4209=0-0@0} on ino 1000000656f
[23:53] <sagewk> assert(in->is_head()) one?
[23:53] <slang> (I know that sounds lame, but those are the constraints I'm under)
[23:53] <sagewk> slang: np
[23:54] <slang> sagewk: yes that one
[23:55] <sagewk> slang: we can do it iteratively, and start with a grep 'inode 1000000656f', if you have the patience to go back and forth :)
[23:56] <slang> sure

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.