#ceph IRC Log


IRC Log for 2010-11-01

Timestamps are in GMT/BST.

[18:27] -magnet.oftc.net- *** Looking up your hostname...
[18:27] -magnet.oftc.net- *** Checking Ident
[18:27] -magnet.oftc.net- *** No Ident response
[18:27] -magnet.oftc.net- *** Found your hostname
[18:27] * CephLogBot (~PircBot@fubar.widodh.nl) has joined #ceph
[18:28] * wido (~wido@fubar.widodh.nl) has joined #ceph
[18:28] <wido> hi
[18:28] <cmccabe> hi
[18:29] <wido> i've got some issues with my fresh filesystem, even though it reports a normal state
[18:29] <wido> but mounting gives me "Input/Output Error"
[18:29] <wido> running 2.6.36-rc8, latest master branch on the client
[18:30] <wido> in my dmesg I see that libceph makes a connection with a monitor, but that's it..
[18:30] <wido> MDSes are running fine too
[18:31] <sagewk> looks like mds isn't up: 2010-11-01 18:30:59.358085 mds e6: 1/1/1 up {0=up:creating}, 1 up:standby
[18:31] <wido> sagewk: but they are running?
[18:32] <sagewk> yeah, but apparently it wasn't able to create its root inode, directory, etc.
[18:32] <sagewk> restarting with logging enabled to see what the deal is
[18:32] <sagewk> actually, it looks like the osds aren't doing any io..
[18:32] <wido> could this be related to #462? I had to restart the OSDs and MDSes
[18:33] <wido> to get rid of those messages
[18:33] <sagewk> oh, could be
[18:38] <sagewk> hmm, restarting the osds fixed it.
[18:40] <wido> hmm, ok?
[18:41] <sagewk> looks ok?
[18:41] <sagewk> the logs weren't cranked up, so it's not clear exactly what happened
[18:42] <wido> yes, I can mount it now. And indeed I didn't crank up the logs; I still don't have enough free disk space for that
[18:42] <wido> but #462 keeps coming back in my env
[18:42] <sagewk> maybe you can put 'debug auth = 20' on the osds and monitor so that if/when you see this again we'll have all the info
[18:42] <wido> will do
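
For reference, a minimal sketch of how that setting would look in ceph.conf; the [osd]/[mon] section layout is assumed here, only the 'debug auth = 20' line itself comes from the conversation above:

    ; hypothetical ceph.conf fragment -- section names assumed
    [osd]
            debug auth = 20

    [mon]
            debug auth = 20
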
[18:45] <leander_yu> Hi Sage, we are still suffering the getattr lockup issue
[18:45] <sagewk> hey
[18:46] <sagewk> were you able to figure out why it doesn't finish ceph_check_caps?
[18:46] <leander_yu> a quick question about mapping_is_empty()
[18:47] <leander_yu> it calls find_get_page(mapping, 0);
[18:47] <leander_yu> with offset = 0, can it make sure that nrpages is 0?
[18:49] <leander_yu> I mean, if find_get_page(mapping, 0) == NULL, does it mean nrpages will be 0?
[18:51] <sagewk> the old version did that.. the new version should just look at i_data.nrpages
[18:52] <sagewk> actually, i think the helper is gone now
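
To make the find_get_page() point concrete, here is a sketch of the two styles of check being discussed (names and exact shape are assumed; this is not the actual fs/ceph source). Probing only page index 0 cannot prove the mapping is empty, which is why the newer code reads nrpages directly:

    /* Old style: find_get_page() only looks up the page at index 0,
     * so a NULL result does NOT imply nrpages == 0 -- a page cached
     * at any other index goes unnoticed. */
    static int mapping_is_empty_old(struct address_space *mapping)
    {
            struct page *page = find_get_page(mapping, 0);

            if (!page)
                    return 1;
            page_cache_release(page);  /* drop the ref find_get_page took */
            return 0;
    }

    /* The replacement sagewk describes: just read the page count. */
    static int mapping_is_empty_new(struct address_space *mapping)
    {
            return mapping->nrpages == 0;
    }
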
[18:52] <leander_yu> we found there's a chance that i_rdcache_gen == 0 but nrpages == 1
[18:53] <sagewk> hmm.. any idea how that's happening? maybe mmap?
[18:54] <leander_yu> hmmm.... we tried modifying this if condition to check nrpages instead of rdcache_gen, and we found that your unstable branch has the same modification.
[18:54] <sagewk> the current unstable, btw, is what was merged for 2.6.37-rc1.
[18:56] <leander_yu> however, in our testing, although the getattr hang is gone, it triggers a BUG_ON in inode.c when we unmount
[18:56] <sagewk> which BUG_ON?
[18:57] <leander_yu> in iput() BUG_ON(inode->i_state & I_CLEAR);
[18:57] <leander_yu> fs/inode.c
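
For context, the check being hit looks roughly like this in fs/inode.c of that kernel generation (paraphrased; the 2.6.32 + Xen tree they are running may differ in detail):

    void iput(struct inode *inode)
    {
            if (inode) {
                    /* an inode must never reach iput() after it was cleared */
                    BUG_ON(inode->i_state & I_CLEAR);

                    if (atomic_dec_and_lock(&inode->i_count, &inode_lock))
                            iput_final(inode);
            }
    }
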
[18:59] <sagewk> this is with master + check_caps changes? have you tried with unstable?
[19:00] <sagewk> i'm not sure that's the problem, but it'll simplify things if we're working with the same code
[19:00] <sagewk> also, what base kernel version are you building the module for? the backports aren't well tested.
[19:03] <leander_yu> http://github.com/tcloud/ceph-client-standalone/tree/Elaster-1.5
[19:03] <leander_yu> this is the code we use
[19:07] * Meths_ (rift@91.106.146.21) has joined #ceph
[19:08] <leander_yu> it's pretty much in sync with the master code + the check_caps revert
[19:09] <leander_yu> we use kernel version 2.6.32 + xen patches
[19:11] <leander_yu> however, making the following change in __ceph_caps_used() seems to work around the issue:
[19:11] <leander_yu> - if (ci->i_rdcache_ref || ci->vfs_inode.i_data.nrpages)
[19:12] <leander_yu> + if (ci->i_rdcache_ref || ci->i_rdcache_gen/*ci->vfs_inode.i_data.nrpages*/)
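
For readers without the tree at hand: the condition being flipped sits in __ceph_caps_used() in fs/ceph/caps.c, which reports which capabilities the client is actively using. A simplified sketch (some cap bits elided; shape from memory, not a verbatim copy):

    int __ceph_caps_used(struct ceph_inode_info *ci)
    {
            int used = 0;

            if (ci->i_rd_ref)
                    used |= CEPH_CAP_FILE_RD;
            /* the line under discussion: is "the read cache is in use"
             * best expressed via rdcache refs/gen, or via nrpages != 0? */
            if (ci->i_rdcache_ref || ci->vfs_inode.i_data.nrpages)
                    used |= CEPH_CAP_FILE_CACHE;
            if (ci->i_wr_ref)
                    used |= CEPH_CAP_FILE_WR;
            if (ci->i_wrbuffer_ref)
                    used |= CEPH_CAP_FILE_BUFFER;
            return used;
    }
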
[19:13] * Meths (rift@91.106.192.121) Quit (Ping timeout: 480 seconds)
[19:16] <sagewk> my question is how is data getting into the cache? does a process have an open file and call read(2), or is it mmapped?
[19:21] <leander_yu> I am not sure; it's a VM image used by qemu. We use losetup to map the VM image file to a loopback device.
[19:21] <sagewk> hmm.. yeah, it sounds like mmap.
[19:22] <sagewk> you can verify by turning on debug output for file.c and addr.c. you should see readpage/readpages called either way (addr.c), but in the read(2) case it'll be triggered by aio_read in file.c
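
Assuming a mainline-style build where ceph's dout() goes through pr_debug() and the kernel has CONFIG_DYNAMIC_DEBUG enabled (the standalone backport may gate its debug output differently), turning that output on would look something like:

    mount -t debugfs none /sys/kernel/debug   # if not already mounted
    echo 'file fs/ceph/file.c +p' > /sys/kernel/debug/dynamic_debug/control
    echo 'file fs/ceph/addr.c +p' > /sys/kernel/debug/dynamic_debug/control
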
[19:47] * terang (~me@pool-173-55-24-140.lsanca.fios.verizon.net) has joined #ceph
[19:48] <leander_yu> doesn't look like mmap from checking loop.c
[19:48] <leander_yu> loop.c calls file->f_op->write(file, buf, len, &pos); to write data to image file
[19:52] <yehudasa> leander_yu: this is only called if the underlying fs does not support aops
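
What yehudasa is pointing at: in drivers/block/loop.c of that era, the write path chooses between an address_space_operations based path and a plain f_op->write fallback. Schematically (simplified from memory, not verbatim):

    /* inside loop's write path: */
    if (lo->lo_flags & LO_FLAGS_USE_AOPS)
            /* backing fs exposes write_begin/write_end: write through
             * its page cache via address_space_operations */
            ret = do_lo_send_aops(lo, bvec, pos, page);
    else
            /* the fallback yehudasa mentions: plain file->f_op->write() */
            ret = do_lo_send_direct_write(lo, bvec, pos, page);
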
[20:07] <leander_yu> seems so; I'd need to turn on the debug log to double-check
[20:08] <yehudasa> leander_yu: you're running over some loopback device?
[20:09] <leander_yu> ya, we use losetup to map a VM file to a loop device
[20:09] <leander_yu> VM file is stored on ceph
[20:10] <leander_yu> qemu will create a VM with a block device that points to the loop device
[20:11] <yehudasa> yeah, well.. in theory you could use either qemu-rbd or just the rbd module for that, but that's a different issue
[20:16] <leander_yu> we would love to use rbd, but we have existing code developed before rbd was introduced.
[20:16] <leander_yu> that's why we still use losetup
[20:21] <leander_yu> any suggestions on how I should move forward? should I change the i_rdcache_gen check to nrpages like the unstable branch and figure out why the BUG_ON is triggered, or should I check why i_rdcache_gen == 0 but nrpages == 1?
[20:23] <yehudasa> hmm.. both are probably two symptoms of the same issue
[20:24] <yehudasa> it'd be best if you could test it with the unstable version of the code, so that we know whether the problem is still there
[20:29] <leander_yu> we tried pretty much the same modification as the unstable version, except that in the ceph_invalidate_work(struct work_struct *work) function we check nrpages instead of rdcache_gen. it ends up triggering the BUG_ON in iput() in fs/inode.c when we unmount
[20:30] <leander_yu> I will do the same test using the unstable version to double-check
[20:30] <leander_yu> however, what's the side effect if I change __ceph_caps_used() to use if (ci->i_rdcache_ref || ci->i_rdcache_gen) instead of if (ci->i_rdcache_ref || ci->vfs_inode.i_data.nrpages)?
[20:31] <leander_yu> I found that the code back in September was using rdcache_gen, and it worked OK
[20:36] <leander_yu> I guess it will release the cap while it still has cached pages
[20:50] * leander_yu (7220fd5e@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[20:56] <wido> "libceph: tid 25 timed out on osd10, will reset osd" does that mean that the OSD is responding too slowly?
[21:02] * Meths_ is now known as Meths
[21:13] <yehudasa> wido: either that, or the osd is not responding at all
[21:15] <wido> ok, that's weird. It's giving those errors about all my OSDs right now, but their load is low
[21:15] <wido> network seems fine too
[21:16] <sagewk> i bet it's the auth issue again.. do you see those errors in the logs?
[21:18] <wido> sagewk: no, but i'm seeing "accept we reset (peer sent cseq 4), sending RESETSESSION"
[21:18] <wido> the FS is really slow, trying to rsync some data, seems to be stalling
[21:21] * terang (~me@pool-173-55-24-140.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[21:34] <wido> i'm going afk, i'll test it some more tomorrow
[21:58] * terang (~me@ip-66-33-206-8.dreamhost.com) has joined #ceph
[22:02] * MarkN1 (~nathan@118.107.146.51) has joined #ceph
[22:10] * Meths_ (rift@91.106.242.9) has joined #ceph
[22:10] * MarkN1 (~nathan@118.107.146.51) Quit (Ping timeout: 480 seconds)
[22:10] * MarkN (~nathan@59.167.240.178) has joined #ceph
[22:17] * Meths (rift@91.106.146.21) Quit (Ping timeout: 480 seconds)
[23:01] * Meths_ is now known as Meths
[23:17] * henrycc (~henry_c_c@219.86.164.64) Quit (Quit: HydraIRC -> http://www.hydrairc.com <- The professional IRC Client :D)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.