#ceph IRC Log


IRC Log for 2011-03-30

Timestamps are in GMT/BST.

[0:07] <Tv> what's the deal with "public addr" vs "cluster addr"?
[0:07] <Tv> i think i'm seeing a bug, but would like to understand..
[0:08] <Tv> what's the intended use, what's the difference between those two?
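
For context: "public addr" is the address a daemon advertises for client and monitor traffic, while "cluster addr" lets an OSD bind a second address used for inter-OSD replication and heartbeat. A minimal ceph.conf sketch (the addresses and section are illustrative):

    [osd.0]
        ; frontside: clients and monitors reach this OSD here
        public addr = 192.168.1.10:6800
        ; backside: OSD-to-OSD replication and heartbeats go here
        cluster addr = 10.0.0.10:6800
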
[0:12] * allsystemsarego (~allsystem@188.27.164.67) Quit (Quit: Leaving)
[0:32] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[0:32] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[1:30] * DJLee (82d8d198@ircip2.mibbit.com) has joined #ceph
[1:37] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[2:02] * samsung (~samsung@61.184.205.41) has joined #ceph
[2:30] * Dantman (~dantman@S0106001eec4a8147.vs.shawcable.net) Quit (Remote host closed the connection)
[3:06] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Remote host closed the connection)
[3:07] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[3:31] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[4:08] <samsung> hi,all
[4:09] <samsung> i compiled kclient on kernel 2.6.38, and got error: inode.c:848: error: ‘dcache_lock’ undeclared (first use in this function)
[4:09] <samsung> and inode.c:1795: error: too few arguments to function ‘generic_permission’
[4:09] <samsung> my ceph version is 0.24.3
[4:10] <samsung> so what should i do to resolve this problem?
[4:14] <iggy> samsung: use the backports branch or whatever it's called
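
For context, these errors match two VFS changes that landed in 2.6.38, which the 0.24.3 kclient predates. A rough sketch of the API difference (signatures from memory, so treat them as approximate):

    // Before 2.6.38 (what this kclient was written against):
    //   extern spinlock_t dcache_lock;   /* global dcache lock */
    //   int generic_permission(struct inode *inode, int mask,
    //                          int (*check_acl)(struct inode *, int));
    //
    // In 2.6.38, dcache_lock was removed (per-dentry d_lock instead)
    // and generic_permission() gained a flags argument:
    //   int generic_permission(struct inode *inode, int mask, unsigned int flags,
    //                          int (*check_acl)(struct inode *, int, unsigned int));

Hence "‘dcache_lock’ undeclared" and "too few arguments": the code needs the compatibility fixes carried in the backport branch iggy mentions.
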
[4:18] <samsung> i am not sure
[4:21] * Dantman (~dantman@S0106001eec4a8147.vs.shawcable.net) has joined #ceph
[5:48] * Guest123 (quasselcor@bas11-montreal02-1128535815.dsl.bell.ca) Quit (Remote host closed the connection)
[5:56] * bbigras (quasselcor@bas11-montreal02-1128535815.dsl.bell.ca) has joined #ceph
[5:56] * bbigras is now known as Guest542
[5:58] * f4m8_ (~f4m8@lug-owl.de) has joined #ceph
[6:00] * f4m8 (~f4m8@lug-owl.de) Quit (Ping timeout: 480 seconds)
[6:01] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Ping timeout: 480 seconds)
[6:03] * Hugh (~hughmacdo@soho-94-143-249-50.sohonet.co.uk) Quit (Ping timeout: 480 seconds)
[6:10] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[6:12] * Hugh (~hughmacdo@soho-94-143-249-50.sohonet.co.uk) has joined #ceph
[6:51] * lidongyang (~lidongyan@222.126.194.154) Quit (Remote host closed the connection)
[7:00] * lidongyang (~lidongyan@222.126.194.154) has joined #ceph
[7:37] * samsung (~samsung@61.184.205.41) Quit (Ping timeout: 480 seconds)
[9:04] * allsystemsarego (~allsystem@188.27.164.67) has joined #ceph
[9:49] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[12:40] * Guest542 (quasselcor@bas11-montreal02-1128535815.dsl.bell.ca) Quit (Remote host closed the connection)
[12:43] * bbigras (quasselcor@bas11-montreal02-1128535815.dsl.bell.ca) has joined #ceph
[12:44] * bbigras is now known as Guest576
[13:43] * underdark (~innerheig@underdark.nl) has joined #ceph
[13:43] <underdark> mornin folks
[14:18] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[14:59] * lxo (~aoliva@201.82.32.113) has joined #ceph
[15:45] * neurodrone (~neurodron@dhcp205-136.wireless.buffalo.edu) has joined #ceph
[16:42] * allsystemsarego_ (~allsystem@188.27.164.67) has joined #ceph
[16:42] * allsystemsarego_ (~allsystem@188.27.164.67) Quit ()
[16:46] * darkfaded (~floh@188.40.175.2) has joined #ceph
[16:49] * cclien (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) has joined #ceph
[16:51] * cclien_ (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) Quit (Ping timeout: 480 seconds)
[16:51] * darkfader (~floh@188.40.175.2) Quit (Ping timeout: 480 seconds)
[16:51] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (Ping timeout: 480 seconds)
[16:51] * monrad-51468 (~mmk@domitian.tdx.dk) has joined #ceph
[17:18] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) has joined #ceph
[17:37] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[17:39] * alexxy[home] (~alexxy@79.173.81.171) Quit (Ping timeout: 480 seconds)
[17:39] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[17:41] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:41] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) Quit (Quit: Leaving.)
[17:53] * greglap (~Adium@166.205.138.223) has joined #ceph
[17:56] * athinkingmeat (~athinking@changeme.ebuddy.com) has joined #ceph
[18:33] * athinkingmeat (~athinking@changeme.ebuddy.com) Quit (Quit: athinkingmeat)
[18:37] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[18:40] * greglap (~Adium@166.205.138.223) Quit (Quit: Leaving.)
[18:49] * cmccabe (~cmccabe@208.80.64.121) has joined #ceph
[18:59] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[19:04] * st-8622 (~st-8622@a89-154-147-132.cpe.netcabo.pt) has joined #ceph
[19:07] * st-8622 (~st-8622@a89-154-147-132.cpe.netcabo.pt) Quit (Remote host closed the connection)
[19:11] * st-9028 (~st-9028@a89-154-147-132.cpe.netcabo.pt) has joined #ceph
[19:13] * st-9028 (~st-9028@a89-154-147-132.cpe.netcabo.pt) Quit (Remote host closed the connection)
[19:14] * st-9200 (~st-9200@a89-154-147-132.cpe.netcabo.pt) has joined #ceph
[19:20] * st-9200 (~st-9200@a89-154-147-132.cpe.netcabo.pt) Quit (Quit: Quiting...)
[19:23] * st-9398 (~st-9398@a89-154-147-132.cpe.netcabo.pt) has joined #ceph
[19:26] * st-9398 (~st-9398@a89-154-147-132.cpe.netcabo.pt) Quit ()
[19:40] * st-9666 (~st-9666@a89-154-147-132.cpe.netcabo.pt) has joined #ceph
[19:41] * st-9666 (~st-9666@a89-154-147-132.cpe.netcabo.pt) Quit ()
[19:41] * st-9783 (~st-9783@a89-154-147-132.cpe.netcabo.pt) has joined #ceph
[19:44] * st-9783 (~st-9783@a89-154-147-132.cpe.netcabo.pt) Quit ()
[19:55] * neurodrone (~neurodron@dhcp205-136.wireless.buffalo.edu) Quit (Quit: neurodrone)
[19:57] <bchrisman> what applications are currently using the libceph client?
[19:59] <lxo> amazing! it looks like dropping the external USB disks from my ceph/btrfs cluster made it far more stable! something to do with frequent syncing, lower disk throughput and/or inability to control the disk's write cache, I gather. one more tip to note in the docs, I suppose
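
For reference, the write-cache control lxo alludes to is usually done with hdparm; many USB-to-SATA bridges silently ignore these commands, which fits the instability described (device name illustrative):

    hdparm -W /dev/sdb     # query the drive's write-cache state
    hdparm -W0 /dev/sdb    # attempt to disable the write cache
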
[20:00] <cmccabe> bchrisman: I think there are hadoop bindings using libceph
[20:00] * athinkingmeat (~athinking@ip565e7fcc.direct-adsl.nl) has joined #ceph
[20:00] <bchrisman> cmccabe: cool.. would help to look at an existing implementation's usage… thx
[20:01] <cmccabe> bchrisman: java bindings are maybe not the easiest example code to follow
[20:01] <cmccabe> bchrisman: I think synclient might be a better starting point
[20:01] <bchrisman> cmccabe: is there anything else using?
[20:01] <lxo> oh, another thing I found out was that ext4 required the user_xattr mount option, but even the initial creation of the filesystem was intolerably slow because of multiple disks, some usb, and frequent all-system syncing by ceph. btrfs was much faster with single-FS syncing, but the slowness hinted that I should look into sync performance and write caches, and that led to a solution
[20:01] <bchrisman> cmccabe: is that a testing tool?
[20:01] <cmccabe> yes
[20:02] <cmccabe> bchrisman: client/SyntheticClient.cc
[20:02] <bchrisman> cmccabe: thx
[20:02] <cmccabe> bchrisman: actually, wait, I don't think that uses libceph per se
[20:02] <lxo> the need for user_xattr for ext4 (and <4?) is something that should probably be in the docs somewhere, if it isn't yet
[20:03] <bchrisman> yeah, I'd guess a testing client for internal use would exercise the C++ interface/objects
[20:04] <cmccabe> bchrisman: well, there is client/testceph.cc
[20:05] <bchrisman> cmccabe: heh… that's… true… :)
[20:05] <cmccabe> bchrisman: I think the header files and the hadoop bindings might be your best bet for now
[20:06] <gregaf> lxo: I think that it's written down somewhere, but if it's not clear you can edit the wiki :)
[20:06] <cmccabe> bchrisman: I thought there were more users in-tree but maybe not
[20:06] <gregaf> you need xattrs to work on whatever filesystem you're using, it's just that btrfs never turns them off
[20:06] <bchrisman> cmccabe: yeah.. cool.. will check that out.
[20:06] <gregaf> cmccabe: we don't really have in-tree users of libceph because any in-tree users will have a simple enough time just using Client :)
[20:07] <cmccabe> gregaf: yeah
[20:07] <gregaf> AFAIK the only libceph users are the Hadoop bits and the (out-of-tree? I think?) Hypertable client
[20:08] <cmccabe> gregaf: it looks like at least some of the hypertable stuff made it in
[20:10] <lxo> gregaf, oh, nice, I didn't realize the wiki was open for editing. I'll add info as I figure it out
[20:10] <gregaf> If it's not open let us know — we locked it down a lot to deal with spammers but once you've confirmed your email with a new account you should be good
[20:11] <lxo> the thing with xattrs is that AFAICT ext4 does support xattrs in general, but it requires user_xattr to accept xattrs in the user. namespace
[20:11] <lxo> ... which ceph uses
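
For reference, a quick way to enable and verify user.* xattrs on an ext4 volume (device, mountpoint, and file names are illustrative):

    mount -o remount,user_xattr /srv/osd0
    touch /srv/osd0/probe
    setfattr -n user.test -v 1 /srv/osd0/probe   # fails with "Operation not supported" without user_xattr
    getfattr -n user.test /srv/osd0/probe
    # to persist it, add user_xattr to the options in /etc/fstab:
    # /dev/sdb1  /srv/osd0  ext4  defaults,user_xattr  0  2
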
[20:17] * Juul (~Juul@slim.dhcp.lbl.gov) has joined #ceph
[20:18] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: Leaving)
[20:28] * ghaskins (~ghaskins@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[20:54] * neurodrone (~neurodron@dhcp205-136.wireless.buffalo.edu) has joined #ceph
[21:07] * Juul_ (~Juul@131.243.46.59) has joined #ceph
[21:13] * Juul (~Juul@slim.dhcp.lbl.gov) Quit (Ping timeout: 480 seconds)
[21:34] <gregaf> bchrisman: do you still have all the logs from your last run of #917?
[21:35] <gregaf> we need osd4 too
[21:37] <bchrisman> I have osd4… don't have 8+… so we're in luck
[21:37] <bchrisman> gregaf: I'll pull the logs from that same time forward
[21:38] <gregaf> excellent
[21:41] <sagewk> anyone care to comment on the sanity of ceph-client.git dentry_unhash before i send to -fsdevel?
[21:42] <Tv> sagewk: reading..
[21:42] <sagewk> what's the url for autotest?
[21:43] <sjust1> autotest.ceph.newdream.net
[21:45] <bchrisman> gregaf: that log is up..
[21:48] <Tv> sagewk: a6204f makes vfs_rmdir not dput on e.g. EBUSY, that doesn't sound like it'd be safe to change that behavior
[21:48] <Tv> oh i guess it only dput'ed because it unhashed it
[21:48] <sagewk> there is a dget in dentry_unhash()
[21:48] <Tv> avoiding one means avoiding the other, yeah
[21:50] <sagewk> tv: is there a ssh key i should use to ssh into sepia*?
[21:51] <Tv> sagewk: i guess i should add yours...
[21:52] <sagewk> there are 3 i use
[22:00] <Tv> sagewk: ssh keys are in
[22:00] <sagewk> thanks
[22:00] * Juul_ (~Juul@131.243.46.59) Quit (Quit: Leaving)
[22:00] <Tv> it seems sepia13 has gone down :(
[22:01] <Tv> oh it's attempting a reinstall currently
[22:01] <Tv> sjust1: there's some debootstrap warning on the console
[22:02] <sjust1> hmm
[22:03] <sagewk> tv: some have the key and some don't.. does it have to do with whether they're locked?
[22:04] <sagewk> 20-22 are the ones i grabbed
[22:04] <Tv> sagewk: shouldn't.. can you name a box without the key?
[22:04] <sagewk> those 3. also 19
[22:05] <Tv> i don't see why those would have failed, checking..
[22:05] <Tv> the keys are in
[22:05] <Tv> sagewk: ohh it's ubuntu@, not individual user accounts
[22:05] <Tv> sagewk: perhaps that explains it?
[22:05] <gregaf> bchrisman: ahah! pushed a fix to next branch: 493e2d952ad24d8c8cab372e942ea3e18169ab4e
[22:05] <sagewk> tv: oh i was going in as root
[22:06] * Tv is a big believer in sudo
[22:06] <sagewk> tv: the kernel doesn't have rbd compiled in (seems to have libceph and ceph tho)
[22:08] <Tv> sagewk: so sjust fumbled that one ;)
[22:09] <Tv> sagewk: i've been thinking of setting up a gitbuilder for our kernel
[22:09] <sagewk> tv: oh, yes please!
[22:11] <sjust1> sagewk: will rebuild now with rbd
[22:12] <sagewk> sjust1: thanks
[22:13] <sjust1> just CONFIG_BLK_DEV_RBD ?
[22:15] <Tv> sagewk: 6371b has a few instances of + if (new_inode && S_ISDIR(new_dentry->d_inode->i_mode))
[22:15] <Tv> sagewk: i expected s/new_dentry->d_inode/new_inode/
[22:15] <sagewk> sjust1: yup
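
For reference, the kernel options being discussed are separate config symbols; a quick check against a build tree might look like:

    grep -E 'CONFIG_(CEPH_LIB|CEPH_FS|BLK_DEV_RBD)=' .config
    # CONFIG_CEPH_LIB=m     -> libceph, shared messenger/OSD client code
    # CONFIG_CEPH_FS=m      -> the ceph filesystem client
    # CONFIG_BLK_DEV_RBD=m  -> the rbd block device driver
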
[22:16] <sagewk> ah tnx
[22:18] <Tv> sagewk: not sure of implications of this, but e.g. hfsplus/dir.c hfsplus_rename now has dentry_unhash(); hfsplus_rmdir(), and hfsplus_rmdir() has dentry_unhash() in it
[22:19] <Tv> sagewk: ahh so when you moved unhash into ->rmdir, now all non-vfs callers of rmdir have an extra dget?
[22:20] <Tv> that sounds bad, right?
[22:20] <sagewk> that's why the earlier patch removes the dentry_unhash dget
[22:20] <Tv> ahh
[22:20] <sagewk> b3d5cf39e81277e5214c2c4bdca19beb91accf34
[22:20] <Tv> teaches me to read orig source & gitk, not the patched source ;)
[22:21] <sagewk> ah yeah the hfsplus_rename one is unnecessary now
[22:23] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[22:23] <Tv> sagewk: shouldn't the dentry_unhash comment now say "count of 1"?
[22:23] <sagewk> yeah
[22:30] <bchrisman> gregaf: nice catch...:)
[22:31] <gregaf> thx
[22:31] <gregaf> was confused for a sec when I saw it got sent out from osd4 in the right order :)
[22:31] <gregaf> but the messenger logging was lower than I realized on osd3
[22:31] <bchrisman> yeah… makes sense… good to see it wasn't a more fundamental issue
[22:33] <Tv> sagewk: I have no clue what Al's going to say but i can't find more nits to pick in the commits..
[22:33] <sagewk> cool thanks
[23:10] <cmccabe> why check S_ISDIR(inode...) in ocfs2's unlink, but not elsewhere?
[23:10] <cmccabe> there is probably a good reason, I'm just curious
[23:11] <DJLee> guys, when a FS reads and writes some largish file, say, 500mb, how many blocks does it read at a time? i see it as 4KB; is that normal?
[23:12] <cmccabe> I thought that you couldn't unlink a directory, you had to use the rmdir syscall
[23:12] <Tv> cmccabe: this is not about syscalls directly though
[23:13] <cmccabe> tv: yeah, I assume that the code is somehow funnelling dirs and regular files into the same function or something
[23:13] <cmccabe> tv: just curious
[23:13] <Tv> cmccabe: looking up if i have a decent answer..
[23:14] <Tv> .unlink = ocfs2_unlink,
[23:14] <Tv> .rmdir = ocfs2_unlink,
[23:14] <Tv> basically, it uses the same handler for both
[23:14] <cmccabe> tv: ah, exactly
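
To illustrate the pattern, a minimal sketch (not the actual ocfs2 code) of one handler wired to both .unlink and .rmdir, which is why it needs an S_ISDIR test internally:

    #include <linux/fs.h>

    /* sketch: one removal handler shared by files and directories */
    static int myfs_unlink(struct inode *dir, struct dentry *dentry)
    {
            struct inode *inode = dentry->d_inode;

            if (S_ISDIR(inode->i_mode)) {
                    /* directory-only checks, e.g. that it is empty */
            }
            /* common removal path for both cases */
            return 0;
    }

    static const struct inode_operations myfs_dir_iops = {
            .unlink = myfs_unlink,
            .rmdir  = myfs_unlink,  /* same handler, hence the S_ISDIR branch */
    };
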
[23:14] <cmccabe> DJLee: I'm not really sure how many blocks are requested at a time. I think it depends on the FS to a certain degree
[23:14] <Tv> now why shouldn't a non-dir be unhashed.... ;)
[23:15] <cmccabe> DJLee: I know that one big feature of ext4 is support for extents, which essentially means that the FS can request a bunch of contiguous blocks
[23:15] <cmccabe> DJLee: on spinning platters this is a big win because they end up being close together on the disk
[23:16] <Tv> DJLee: readahead is controlled by what is basically a heuristic algorithm, there's no real guarantees on what it does
[23:16] <Tv> DJLee: are you seeing an actual problem?
[23:16] <DJLee> yes, let me just quickly explain, :p
[23:16] <Tv> cmccabe: extents are more so that a big file doesn't need a huge allocation bitmap, and the overhead of managing that
[23:17] <cmccabe> tv: yeah, that's another big benefit
[23:17] <Tv> cmccabe: contiguousness is more about the allocation algorithm, which got pretty good for ext3 before the extent work
[23:17] <cmccabe> DJLee: I think there are some readahead tunables lurking somewhere in proc or sys
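
For reference, the usual readahead knobs (device names illustrative):

    blockdev --getra /dev/sda                 # current readahead, in 512-byte sectors
    blockdev --setra 1024 /dev/sda            # set it to 512 KB
    cat /sys/block/sda/queue/read_ahead_kb    # the same setting, expressed in KB
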
[23:17] <Tv> cmccabe: s/bitmap/blockmap/
[23:18] <cmccabe> tv: hmm
[23:18] <DJLee> i mean, suppose i have real file sizes of, say, 1kb x 9 and 1GB x 1 (10 files); when I tell it to write these files to a disk, how does the OS/FS actually decide how many blocks to write at a time?
[23:19] <DJLee> if it defaults to 4KB, it's a waste for a 2KB file, and it's inefficient for 1GB (1GB will maybe work best with 1MB chunks)
[23:19] <cmccabe> tv: I don't quite understand how the FS tells the block allocator that x,y, and z should be close together
[23:19] <Tv> DJLee: ahh *that's* what you're asking
[23:19] <cmccabe> tv: I mean, if I'm writing 3 files at once
[23:19] <cmccabe> tv: and there isn't enough page cache to just buffer them all in memory
[23:19] <Tv> DJLee: basically, a big write request will remain as big as possible, and then when e.g. the ATA layer enforces a maximum blocksize, that's when it's split up, into as big chunks as possible
[23:20] <cmccabe> DJLee: I'm pretty sure that the blocksize is always 512 bytes
[23:20] <cmccabe> DJLee: at least as far as SATA is concerned
[23:20] <Tv> cmccabe: not really true anymore, btw
[23:21] <Tv> oh wait
[23:21] <cmccabe> tv: you're talking about 4k sector drives?
[23:21] <DJLee> yeah, sorry for how newb this may sound; i'm just trying to benchmark realistic files on the ceph mount, and i've specified a bunch of file distributions with sizes, but i still need to 'set up' the per-block read
[23:21] <Tv> now i see that DJLee asked about the *final* storage
[23:21] <cmccabe> tv: those are experimental and not well supported yet I think
[23:21] <Tv> DJLee: that's completely up to the filesystem in question
[23:21] <DJLee> which is set to 4KB
[23:21] <Tv> DJLee: ext3 etc have historically picked a blocksize at mkfs time, based on the size of the disk
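
For reference, the block size is fixed when the filesystem is created and can be inspected afterwards (device name illustrative):

    mkfs.ext3 -b 4096 /dev/sdb1                # pick 4 KB blocks explicitly
    tune2fs -l /dev/sdb1 | grep 'Block size'   # show what mkfs chose
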
[23:22] <cmccabe> ah, the old confusion between sector size (on the hard disk) and block size (on the FS). Sorry for misunderstanding which you were referring to.
[23:22] <Tv> DJLee: some fs'es did what they called "tail packing", putting the final non-full blocks together; that's fairly rare & problematic
[23:22] <Tv> DJLee: some fs'es these days do extents, where file chunks are just byte ranges, not strict blocks
[23:22] <cmccabe> tv: hey, I'm pretty sure btrfs packs like that!
[23:22] <cmccabe> tv: problematic, maybe, but not rare any more :)
[23:22] <Tv> cmccabe: i just hope Mr. Btr won't kill *his* wife
[23:23] <cmccabe> ...
[23:23] <Tv> >:->
[23:23] <DJLee> cmccabe, right, sorry; yeah those 512-byte sectors get merged to 4kb at minimum i think, and the new ones are already 4kb, so i'm just happy with the per-IO block being 4kb
[23:23] <cmccabe> DJLee: I guess you should ask sage what he thinks a realistic workload would be
[23:23] <Tv> DJLee: i'm still unclear what you mean by per-IO-block
[23:24] <cmccabe> DJLee: i have a vague idea that small files are probably a stressful workload for any FS, but not sure how ceph in particular does
[23:24] <Tv> oh yes, but that's more because of the number of metadata operations than really the filesizes themselves
[23:24] <DJLee> copying a bunch of large files, say, movie files (GBs); it's gotta read and write at X blocks..
[23:25] * neurodrone (~neurodron@dhcp205-136.wireless.buffalo.edu) Quit (Quit: neurodrone)
[23:25] <DJLee> unlike those 'dd' tests where we used to fix bs=1MB, etc.; I believe this dd approach is not right.
[23:25] <cmccabe> DJLee: yeah, benchmarking is a black art.
[23:26] <cmccabe> DJLee: probably best to identify something similar to what you think your users will do, and do that
[23:26] * allsystemsarego (~allsystem@188.27.164.67) Quit (Quit: Leaving)
[23:26] <DJLee> so if I drag and drop 2GB movie files to the ceph mount, will it be the same as 'dd bs=1M'? or 'dd bs=4k', etc.. hehe;;
[23:26] <cmccabe> DJLee: I think dd just calls write(2), so it would be pretty similar
[23:27] <cmccabe> DJLee: I mean cp is just going to call write(2) as well.
[23:27] <cmccabe> DJLee: unless you're using some weird dd options like "direct" or something
[23:28] <Tv> DJLee: i don't think you'll see much difference between those, as long as your client machine isn't an actual 386..
[23:28] <Tv> DJLee: client machine writes get buffered by its local kernel first, it should coalesce the writes
[23:28] <cmccabe> I think finding an actual 386 would be challenging these days
[23:29] <cmccabe> but yeah, I think you're write... I would expect similar behavior out of those two
[23:29] <cmccabe> er right
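
A quick way to check the claim that cp and dd behave similarly here, since both go through write(2) and the page cache (paths illustrative; dropping caches between runs keeps the numbers comparable):

    sync; echo 3 > /proc/sys/vm/drop_caches
    time cp movie.bin /mnt/ceph/movie-cp.bin
    time dd if=movie.bin of=/mnt/ceph/movie-1m.bin bs=1M
    time dd if=movie.bin of=/mnt/ceph/movie-4k.bin bs=4k
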
[23:31] <DJLee> argh, about ffsb: when it reports the results at the end, there are actually two MB/s numbers reported;
[23:31] <DJLee> the first one is in the 'thread group' section, and the other is in 'total result', and surprisingly they are different,
[23:32] <DJLee> the first one reported is always faster by ~20%, e.g., 120mb/s vs the total result of 100mb/s
[23:32] <DJLee> anyone noticed that in ffsb?
[23:33] <gregaf> haven't used ffsb much, but it might be that the first is the time it takes to run the functions, and the second is based on adding how long a final fsync takes
[23:33] <DJLee> so i've captured the flows going in/out and i see it matches the first 'thread group' mb/s
[23:33] * athinkingmeat (~athinking@ip565e7fcc.direct-adsl.nl) Quit (Quit: athinkingmeat)
[23:34] <DJLee> right
[23:34] <DJLee> but fsync isn't enabled in the config; will check this
[23:36] <Tv> if you have sepia machines locked that you're not using, please unlock; i'm only upgrading kernels when i can lock the machines
[23:37] <DJLee> also about pgs: is it ok to assume that more pgs make for better-balanced distributions, but perhaps more also means more processing workload (or else, what's the real disadvantage of having too many)?
[23:37] <DJLee> and perhaps later, having access to change the object size (which defaults to 4mb) to smaller/larger could also benefit performance?
[23:39] <Tv> DJLee: more pgs = finer-grained load spreading = more even performance, until you have too many pgs per osd
[23:40] <Tv> DJLee: too many pgs will make you run out of memory, spend too much cpu in administrative overhead, etc
[23:41] <Tv> DJLee: there's probably a *lot* more interesting things to optimize than the ceph block size
[23:41] <DJLee> right, thanks!
[23:41] <DJLee> yeah, lots heh;
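
A commonly cited sizing heuristic, not stated in this conversation but consistent with Tv's tradeoff: aim for on the order of 100 PGs per OSD divided by the replication factor, rounded up to a power of two:

    # e.g. 8 OSDs with 2x replication:
    echo $(( 8 * 100 / 2 ))    # 400, round up to 512 PGs
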
[23:54] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[23:58] <DJLee> i think some massive meta ops can stress the mds, but any normal case (file writes/removes, creating a bunch of dirs, etc) still doesn't make the mds busy at all
[23:59] <DJLee> so i'm not sure when and how as much as 50% of ops could be meta ops

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.