#ceph IRC Log

IRC Log for 2011-06-11

Timestamps are in GMT/BST.

[0:07] * Juul_ (~Juul@slim.dhcp.lbl.gov) has joined #ceph
[0:16] * Juul_ (~Juul@slim.dhcp.lbl.gov) Quit (Quit: Leaving)
[0:24] <gregaf> is anybody else having odd issues when they test master?
[0:24] <gregaf> I've been running multi-mds fsstress runs all day
[0:25] <gregaf> after yesterday I thought it was stable but testing today revealed unending issues
[0:25] <gregaf> but when I revert back to my mds_rename branch… all is well
[0:26] <gregaf> I'm going to have to start adding commits from master to see what breaks it but the only stuff I see that touches the MDS code is the deglobalization pieces
[0:26] <cmccabe> gregaf: what kind of issues are you seeing
[0:26] <gregaf> that's the odd part
[0:27] <gregaf> most of them look like legitimate MDS bugs on the surface
[0:27] <gregaf> although I think I've seen a few issues with contexts on ceph tool startup
[0:27] <gregaf> but I can't reproduce any of them on mds_rename
[0:28] <gregaf> which makes me think it might be the threading
[0:28] <cmccabe> the threading didn't change
[0:28] <gregaf> once I establish a baseline clean I'm going to start cherry-picking commits in to see if I can narrow it down
[0:28] <gregaf> but I was curious if anybody else was having odd issues
[0:28] <Tv> i've seen startup-time issues but that might be already fixed, i used stuff from yesterday for a while
[0:28] <cmccabe> when you say you've "seen a few issues with contexts on ceph tool startup"
[0:28] <cmccabe> do you mean crashes or what
[0:28] <gregaf> yeah, segfaults I think
[0:29] <gregaf> not sure if they were on latest or not though
[0:29] <cmccabe> I fixed one bug like that yesterday
[0:29] <gregaf> yeah, I think maybe there's another one but I haven't been paying enough attention to be sure :)
[0:29] <cmccabe> I feel a little frustrated when people tell me they have a problem, but don't have any actual backtrace
[0:29] <gregaf> I'll have real bug reports for you eventually but in the meantime I wondered if I was the only one having odd issues with things that weren't obviously related to that
[0:30] <cmccabe> or bad test they can point to, etc
[0:31] <cmccabe> so you say you "can't reproduce any of them on mds_rename"
[0:32] <cmccabe> that means you have some actual concrete problem?
[0:32] <cmccabe> is this something I can try to reproduce?
[0:32] <gregaf> no
[0:32] <gregaf> that's why I was asking if anybody else had "odd issues"
[0:32] <gregaf> I dumped 6 new issues into the tracker today that are related to cross-mds renames
[0:32] <gregaf> at least superficially, based on where they failed and stuff
[0:33] <gregaf> but I can't generate any of them on the mds_rename branch, which includes all the MDS changes not related to deglobalizing
[0:33] <gregaf> so I'm trying to figure out what could be causing that
[0:33] <gregaf> I am going to have to start cherry-picking commits :(
[0:33] <cmccabe> can you explain the way the issues manifest?
[0:34] <cmccabe> assert, error message, call to grandma?
[0:34] <gregaf> they're failed asserts in the MDS code
[0:34] <gregaf> you don't want to dig into them
[0:34] <Tv> gregaf: just checking: you know git bisect, right?
[0:34] <gregaf> I am still trying to determine what could have caused them
[0:34] <gregaf> YES!J)(J":KLvjGKL:jhgfxz;d
[0:35] <gregaf> ;)
[0:35] <gregaf> the state of master doesn't make it very helpful at the moment, though, since there are broken sequences in the middle of the relevant areas
[0:36] <cmccabe> broken in what sense?
[0:36] <gregaf> and the issues I'm seeing aren't very friendly to git-bisect if they even do have the same root cause
[0:36] <gregaf> like the stretch there where ./vstart.sh -n crashed all the monitors
[0:36] <cmccabe> oh, yeah, that.
[0:37] <cmccabe> I'd just like to point out that a branch named mds_rename was merged on Wednesday
[0:37] <cmccabe> perhaps that caused bugs in... mds rename?
[0:37] <rsharpe> This appears to be where the problem with Client::chdir is:
[0:37] <cmccabe> I know, I know. It's a crazy idea.
[0:37] <rsharpe> int Client::path_walk(const filepath& origpath, Inode **final, bool followsym)
[0:37] <rsharpe> {
[0:37] <rsharpe> filepath path = origpath;
[0:37] <rsharpe> Inode *cur = cwd; // **** THIS IS WRONG
[0:37] <rsharpe> assert(cur);
[0:38] <rsharpe> dout(10) << "path_walk " << path << dendl;
[0:38] <rsharpe> for (unsigned i=0; i<path.depth() && cur; i++) {
[0:38] <rsharpe> const string &dname = path[i];
[0:38] <rsharpe> dout(10) << " " << i << " " << *cur << " " << dname << dendl;
[0:38] <rsharpe> Inode *next;
[0:38] <rsharpe> int r = _lookup(cur, dname.c_str(), &next);
[0:38] <rsharpe> since filepath encodes whether it is an absolute or relative path, the indicated line above should be a little different
[0:39] <gregaf> that's the branch I'm working on, Colin. That's why I kept referring to the mds_rename branch...
[0:39] <rsharpe> ie, Inode *cur = (path->ino) ? root : cwd;
[0:39] <cmccabe> gregaf: well, it was merged into master on 8cd949f67590aab432a15d1f3c6bb3b74c85fed9
[0:40] <gregaf> cmccabe, I am at this point intimately familiar with the commits involving mds rename over the last week
[0:40] <Tv> sagewk1: ignore earlier, found typo
[0:40] <gregaf> if you haven't been seeing any weird issues, then that answers my question and is really all I needed
[0:40] <gregaf> rsharpe: hmm, I'll check it out
[0:41] <rsharpe> Should be path.get_ino() ...
[0:42] <cmccabe> tv: do we have any tests we can run on master to try to resolve these questions?
[0:43] <Tv> cmccabe: the autotest stuff has never gone away; teuthology is also able to run those things, these days
[0:44] <cmccabe> it's just that I'm going to want to merge my branch into master on monday, and I can't do that if there's a lot of fear, uncertainty, and doubt
[0:45] <cmccabe> so maybe we should run some tests today to try to identify any potential problems
[0:46] <cmccabe> anyway, is the readme up to date in teuthology?
[0:46] <Tv> what do you mean we, Kemosabe?
[0:46] <gregaf> rsharpe: well, only if it actually is an absolute path
[0:47] <gregaf> if it's relative then there's not a valid inode there
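
A minimal sketch of the fix under discussion, for reference. It uses the filepath::get_ino() accessor rsharpe cites below; the stand-in types and the walk_start() helper are hypothetical simplifications for illustration, not Ceph's actual code.

    #include <cassert>

    struct Inode {};

    // Stand-ins for Client's root and cwd members (hypothetical).
    static Inode root_inode, cwd_inode;
    static Inode *root = &root_inode;
    static Inode *cwd  = &cwd_inode;

    // filepath records the inode the path is anchored at: nonzero for an
    // absolute path, zero for a purely relative one.
    struct filepath {
      unsigned long ino;
      unsigned long get_ino() const { return ino; }
    };

    // The corrected starting point for path_walk: absolute paths begin
    // the walk at root, relative paths at the current working directory.
    Inode *walk_start(const filepath &path) {
      Inode *cur = path.get_ino() ? root : cwd;
      assert(cur);
      return cur;
    }
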
[0:48] <Tv> cmccabe: i am not aware of the teuthology readme having any horrible issues
[0:48] <cmccabe> well, first of all, are we still using the autotest web site to launch tests?
[0:49] <cmccabe> I know you've been talking about an alternate system, is that purely backend?
[0:49] <Tv> teuthology-the-new-thing doesn't use the autotest web user interface in any way
[0:50] <Tv> you as the human using teuthology need to use the web ui to lock machines, for now
[0:50] <Tv> or just use autotest, to avoid the learning curve right now
[0:50] <cmccabe> autotest to avoid the learning curve?
[0:50] <Tv> the devil you know..
[0:50] <cmccabe> I never really knew autotest that well. But I will give it a try.
[0:51] <cmccabe> so any test can be a multi-mds test if I supply the right ceph.conf, right?
[0:52] <gregaf> rsharpe: http://pastebin.com/HiGrA4Gs look good to you?
[0:52] <Tv> cmccabe: roles not ceph.conf
[0:53] <cmccabe> it looks like there's only 4 machines ready, the rest are in repair failed?
[0:53] <Tv> and a bunch locked by me, sam and josh, i guess
[0:53] <Tv> welcome to the life of the sepia cluster
[0:54] <joshd> I can unlock 3
[0:55] <cmccabe> I'm not sure if this is actually going to be practical
[0:55] <cmccabe> how many more steps do I have to do to get to test running
[0:55] <Tv> frankly, i don't think cmccabe needs more than 4 machines
[0:56] <rsharpe> Looks good to me ...
[0:56] <rsharpe> gregaf: Yeah, looks good. Thanks.
[0:56] <gregaf> rsharpe: can you test that and let us know if it works?
[0:56] <gregaf> (or equivalent of your own)
[0:57] <cmccabe> tv: I guess what I'm asking, and I understand if this doesn't exist yet, is whether there's a simple test I can run for multi-mds
[0:57] <gregaf> we're not doing a lot of our own uclient tests atm compared to you guys :)
[0:57] <rsharpe> Shall do ...
[0:57] <Tv> cmccabe: ANY ONE OF THEM
[0:57] <cmccabe> tv: ok
[0:57] <gregaf> I've been using fsstress, the problem being that it's hard to know if the errors that get hit are actual MDS bugs or not
[0:57] <gregaf> I've already dumped some time into this and am going to have to continue doing so
[0:58] <cmccabe> gregaf: is it possible that me doing the same thing would just be duplicated effort?
[0:58] <gregaf> I dunno, probably
[0:58] <cmccabe> gregaf: ok.
[0:58] <gregaf> if you really want to look at the logs you can check out the bugs I created today and look at the logs for them
[0:58] <gregaf> and see if you can trace any issues
[0:58] <gregaf> I really wouldn't bother though, your stuff's already in master so...
[0:59] <gregaf> I mean, let's be clear: it could just be the odds working out horrifically against me
[1:00] <gregaf> that's why I asked if anybody else had problems
[1:00] <cmccabe> well, it could also be a change in the way data structures are laid out causing issues
[1:00] <gregaf> anyway, I'm leaving, I will hopefully have more on Monday
[1:00] <cmccabe> do you ever run the MDS with valgrind?
[1:00] <gregaf> lol
[1:01] <cmccabe> seriously
[1:01] <gregaf> …seriously, lol
[1:01] <cmccabe> yeah, I know it would be slow. Horribly slow.
[1:01] <gregaf> I think we have in the past, it's been a long while now
[1:01] <cmccabe> but that slowness itself would probably expose more bugs :)
[1:01] <cmccabe> anyway, have a good weekend, and let me know on monday if there's anything I can do to help
[1:01] <gregaf> the kinds of tests that are exposing issues could not practically be run under valgrind
[1:14] <rsharpe> gregaf: That seems to fix it ...
[1:23] <Tv> should librgw be in librados.deb?
[1:24] <Tv> or a separate deb?
[1:24] <cmccabe> tv: it seems reasonable to put it in librados, don't you think?
[1:24] <Tv> guessing separate but it's kinda small
[1:24] <Tv> yeah i'm having second thoughts
[1:24] <Tv> mostly on the packaging complexity, it's been a while since i did multi-shared lib packages
[1:25] * Tv digs into policy manual
[1:25] <cmccabe> tv: yeah, I don't really know what the best approach is there
[1:25] <cmccabe> tv: I mean logically it's part of rgw right?
[1:26] <cmccabe> tv: what .deb is rgw in these days?
[1:26] <Tv> well you don't need the webby bits to use librgw
[1:26] <Tv> e.g. your backup taking machine might not be the machine running the fastcgi
[1:26] <Tv> and as far as i recall bundling shared libs with anything else leads to bad things
[1:27] <Tv> on upgrades, etc
[1:27] <Tv> oh right it's the soname
[1:27] <Tv> ok that's it librgw lives in its own package
[1:27] <Tv> because otherwise upgrades are messy
[1:28] <Tv> "If you have several shared libraries built from the same source tree, you may lump them all together into a single shared library package provided that all of their SONAMEs will always change together. Be aware that this is not normally the case, and if the SONAMEs do not change together, upgrading such a merged shared library package will be unnecessarily difficult because of file conflicts with the old version of the package. When in doubt,
[1:28] <Tv> always split shared library packages so that each binary package installs a single shared library."
[1:28] <Tv> <3 debian policy
[1:29] <cmccabe> splitting them is ok as long as yehuda and those guys remember to upgrade the two debs in tandem
[1:29] <Tv> oh that'll be enforced by the deps
[1:29] <cmccabe> great...
[1:30] <Tv> though i think nothing in ceph uses ceph's shared libs
[1:30] <Tv> it's all bundled, several times over..
[1:31] <cmccabe> we really do need to revisit the library strategy
[1:31] <cmccabe> using the .so would probably be a big performance win for users who have multiple daemons on the same machine
[1:31] <Tv> actually no
[1:32] <cmccabe> you think the effect of -fPIC would dominate?
[1:32] <Tv> there aren't that many *types* of daemons
[1:32] <Tv> and all instances of a type of daemon already share pages
[1:32] <Tv> shared libs are slower
[1:32] <cmccabe> well, libcommon is 38M
[1:32] <Tv> but anyway, talking about 5% improvements when there's 200% to be had elsewhere
[1:33] <cmccabe> so if you have cosd, cmon, cmds, that's 38M * 2 of extra text you don't need
[1:33] <Tv> that's kinda huge
[1:33] <cmccabe> heh
[1:34] <cmccabe> templates
[1:34] <cmccabe> I thought I heard somewhere that gcc was doing template folding tricks now
[1:34] <cmccabe> but apparently it's not tricky enough, at least on gcc 4.4.5
[1:34] <Tv> srsly
[1:34] <Tv> ls -lhS /usr/lib/*.so.*|grep -v ^l|head -5
[1:34] <Tv> -rw-r--r-- 1 root root 18M 2010-10-13 11:18 /usr/lib/libwebkit-1.0.so.2.17.7
[1:34] <Tv> -rw-r--r-- 1 root root 16M 2009-11-23 15:05 /usr/lib/libicudata.so.42.1
[1:34] <Tv> -rw-r--r-- 1 root root 9.7M 2010-09-21 17:50 /usr/lib/libgs.so.8.71
[1:34] <Tv> 38MB is insanity
[1:35] <cmccabe> wow... bigger than webkit
[1:35] <cmccabe> that's... special
[1:35] <Tv> yeah i'm gonna blame ceph not templates
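
One conventional way to attack template-driven text bloat of the kind being blamed here, sketched under the assumption that a few common instantiations dominate: explicit instantiation declarations (extern template, a long-standing gcc extension standardized in C++11) stop each translation unit from emitting its own weak copy of an instantiation. The file names below are hypothetical, not Ceph's.

    // common_inst.h (hypothetical): every including translation unit now
    // references these instantiations instead of emitting duplicates.
    #include <map>
    #include <string>
    #include <vector>

    extern template class std::vector<std::string>;
    extern template class std::map<std::string, std::string>;

    // common_inst.cc (hypothetical): the single place where the
    // instantiations are actually emitted for the linker to resolve.
    //   template class std::vector<std::string>;
    //   template class std::map<std::string, std::string>;
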
[1:35] <cmccabe> well, there's still that 20% reduction in program text size if we ever decide to use -fno-exceptions
[1:35] <cmccabe> which btw, webkit does
[1:36] <Tv> the linking is a huge mess as is, at this point i wouldn't be surprised to find the binaries having the same symbols over and over again
[1:37] <cmccabe> surely that is impossible...
[1:51] <cmccabe> I'm going to try with -Os to see if gcc can do any better than this
[1:57] <cmccabe> wow!
[1:57] <cmccabe> with -Os and stripped, I get 3.5M
[1:57] <cmccabe> for ./src/libcommon.a
[2:03] * verwilst (~verwilst@d51A5B088.access.telenet.be) Quit (Quit: Ex-Chat)
[2:07] <cmccabe> with -O3, I get 3.2M
[2:07] <cmccabe> which, strangely, is smaller than -Os (optimized for space)
[2:07] <cmccabe> well, if we can learn anything from this, it's that gcc's unoptimized output is VERY unoptimized.
[2:08] <cmccabe> it's pretty important to use -O3
[2:17] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[2:26] * cmccabe (~cmccabe@208.80.64.174) has left #ceph
[2:27] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[2:33] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[2:34] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) has joined #ceph
[2:35] * sjusthm (~sam@adsl-76-208-165-254.dsl.lsan03.sbcglobal.net) Quit (Remote host closed the connection)
[2:56] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[3:16] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[3:38] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[3:41] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[3:47] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[4:45] * votz (~votz@dhcp0020.grt.resnet.group.UPENN.EDU) Quit (Quit: Leaving)
[4:50] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[5:11] * alexxy (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[5:15] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[7:39] * gregorg_taf (~Greg@pom74-2-82-247-190-184.fbx.proxad.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.