#ceph IRC Log


IRC Log for 2011-12-06

Timestamps are in GMT/BST.

[0:00] <mgalkiewicz> ok I have found it thx for info
[0:00] * mgalkiewicz (~maciej.ga@85.89.186.247) Quit (Quit: Ex-Chat)
[0:04] * NightDog_ (~karl@52.84-48-58.nextgentel.com) has joined #ceph
[0:04] * NightDog (~karl@52.84-48-58.nextgentel.com) Quit (Read error: Connection reset by peer)
[0:31] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)
[0:40] * The_Bishop_ (~bishop@p4FCDF3AF.dip.t-dialin.net) has joined #ceph
[0:42] * The_Bishop (~bishop@p4FCDF3AF.dip.t-dialin.net) Quit (Read error: Operation timed out)
[0:59] * The_Bishop_ (~bishop@p4FCDF3AF.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[1:09] * The_Bishop_ (~bishop@p5DC11C1E.dip.t-dialin.net) has joined #ceph
[1:28] * Tv (~Tv|work@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[1:45] * cp (~cp@c-67-180-21-185.hsd1.ca.comcast.net) Quit (Quit: cp)
[1:46] * buck (~buck@bender.soe.ucsc.edu) Quit (Quit: Leaving)
[1:48] * The_Bishop_ (~bishop@p5DC11C1E.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[1:52] * The_Bishop_ (~bishop@p5DC11C1E.dip.t-dialin.net) has joined #ceph
[2:15] * cp (~cp@adsl-75-6-243-75.dsl.pltn13.sbcglobal.net) has joined #ceph
[2:15] * cp (~cp@adsl-75-6-243-75.dsl.pltn13.sbcglobal.net) Quit ()
[2:27] * The_Bishop_ (~bishop@p5DC11C1E.dip.t-dialin.net) Quit (Quit: Who the hell is this "peer"? If I ever catch him, I'll reset his connection!)
[2:41] <lxo> sagewk, ceph -w fix confirmed, thanks
[2:41] <sagewk> lxo great, thanks
[2:45] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:59] * The_Bishop (~bishop@port-92-206-76-165.dynamic.qsc.de) has joined #ceph
[3:06] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[3:08] * adjohn is now known as Guest19453
[3:08] * Guest19453 (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[3:08] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[3:26] * adjohn is now known as Guest19456
[3:26] * Guest19456 (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[3:26] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[3:37] <ajm-> http://pastebin.com/Sv1hjvzZ
[3:37] <ajm-> does that look familiar to anyone or should I get a full debug log?
[3:41] <joshd> ajm-: that's similar to a bug sjust fixed in 0.39 (#1530)
[3:42] <ajm-> hrm, ok
[3:42] * ajm- is now known as ajm
[3:42] <joshd> I'm not sure if that's the same bug, or a different one causing similar symptoms though
[3:42] <ajm> full log then?
[3:42] <ajm> i'd rather not upgrade right now if I can avoid it
[3:42] <joshd> sure, attach it to #1530
[3:43] <ajm> recognize this one while you're here: http://pastebin.com/6Mc0y5UF ?
[3:44] <joshd> no, sjust is the one to ask about that
[3:45] <ajm> ok, thanks joshd
[3:45] <joshd> np
[3:49] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[3:58] * aa (~aa@r186-52-207-94.dialup.adsl.anteldata.net.uy) has joined #ceph
[4:00] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[4:10] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[4:45] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[5:02] * aa (~aa@r186-52-207-94.dialup.adsl.anteldata.net.uy) Quit (Remote host closed the connection)
[5:42] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[6:17] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[7:45] * adjohn is now known as Guest19483
[7:45] * Guest19483 (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[7:45] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[7:46] * adjohn is now known as Guest19484
[7:46] * Guest19484 (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Read error: Connection reset by peer)
[7:46] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[8:03] <chaos_> sagewk, thanks for updating doc ;)
[9:51] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[9:56] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[10:06] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[10:16] * gregorg (~Greg@78.155.152.6) Quit (Ping timeout: 480 seconds)
[10:26] * gregorg (~Greg@78.155.152.6) has joined #ceph
[11:47] * NightDog_ (~karl@52.84-48-58.nextgentel.com) Quit (Read error: Connection reset by peer)
[11:47] * NightDog_ (~karl@52.84-48-58.nextgentel.com) has joined #ceph
[12:43] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[12:43] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[13:12] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:51] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[13:51] * NightDog_ (~karl@52.84-48-58.nextgentel.com) Quit (Read error: Connection reset by peer)
[13:52] * NightDog_ (~karl@52.84-48-58.nextgentel.com) has joined #ceph
[14:01] * aa (~aa@r190-135-24-39.dialup.adsl.anteldata.net.uy) has joined #ceph
[14:45] * NightDog_ (~karl@52.84-48-58.nextgentel.com) Quit (Read error: Connection reset by peer)
[14:45] * NightDog_ (~karl@52.84-48-58.nextgentel.com) has joined #ceph
[14:54] * NightDog (~karl@dhcp-025020.wlan.ntnu.no) has joined #ceph
[15:19] * NightDog (~karl@dhcp-025020.wlan.ntnu.no) Quit (Quit: Leaving)
[15:25] * NightDog (~karl@dhcp-025020.wlan.ntnu.no) has joined #ceph
[15:58] * aa (~aa@r190-135-24-39.dialup.adsl.anteldata.net.uy) Quit (Remote host closed the connection)
[16:49] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[17:00] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) has joined #ceph
[17:24] * NightDog (~karl@dhcp-025020.wlan.ntnu.no) Quit (Quit: This computer has gone to sleep)
[17:46] * MK_FG (~MK_FG@188.226.51.71) Quit (Read error: Operation timed out)
[17:49] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[17:58] * NightDog__ (~karl@52.84-48-58.nextgentel.com) has joined #ceph
[17:58] * NightDog_ (~karl@52.84-48-58.nextgentel.com) Quit (Read error: Connection reset by peer)
[18:35] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Quit: Ex-Chat)
[19:07] * adjohn (~adjohn@70-36-139-247.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[19:10] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[19:11] * _Shiva_ (shiva@whatcha.looking.at) Quit (Quit: Operator halted - Coffee not found)
[19:18] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[19:28] <ajm> anyone around who could take a look at these two osd issues? http://adam.gs/osd.5.log.bz2 http://adam.gs/osd.7.log.bz2
[19:33] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[19:35] * MarkDude (~MT@64.134.236.67) has joined #ceph
[19:37] <sagewk> ajm: getting a 404 on those
[19:38] <ajm> oops, fixed
[19:46] <chaos_> good morning, california ;-)
[19:46] <chaos_> sagewk, new day, new problems ;) Do you have a minute?
[19:51] <chaos_> I'm sure you have. I've got two problems; I don't know if the first is caused by the second or the second by the first - maybe they aren't related at all. Last night one of my mds crashed - http://wklej.org/id/642452/ - monit restarted it and it's been working till now, but why did it crash? :( That's the first thing. Second: as you noticed a few days ago, I've got hundreds of messages screaming about "we are laggy" - why does this happen? What is the response time for an mds? Only thing that is running
[19:53] <sagewk> chaos_: for #1 we need to add a dout print before that assert so we can see what the error code actually is
[19:54] <sagewk> #2 is harder w/o looking closely at your system. you can increase the mds beacon interval..
[19:54] <chaos_> it's configurable from ceph.conf?
[19:54] <sagewk> yeah
[19:54] <sagewk> grep beacon in common/config_opts.h to see the option(s)
[19:54] <chaos_> sagewk, #1, I have to do this myself? and rebuild my ceph?
[19:54] <chaos_> ok, grepping now
[19:55] <sagewk> chaos_: yeah
[19:55] <chaos_> :/
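
(An aside on the second problem: the option being grepped for is the mds beacon interval. A minimal ceph.conf sketch, using the option name from common/config_opts.h; the value 8 is simply what chaos_ reports trying later in this log, not a recommendation:)

    [mds]
        ; how often the mds sends its beacon to the monitors, in seconds
        mds beacon interval = 8
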
[19:55] <sagewk> ajm: for osd.5 pushed something that will print more info to the log so we can see what the garbage op is
[19:55] <sagewk> osd.7 is the zeroed pginfo file. are you running on extN or btrfs?
[19:56] <ajm> xfs
[19:56] <chaos_> ok what should be printed to dout there?
[19:56] <sagewk> r
[19:56] <chaos_> oh.. just r
[19:56] <chaos_> ok ;-)
[19:56] <chaos_> thanks
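
(For context, a sketch of the kind of one-line patch sagewk is asking for - illustrative only, not the actual commit; the variable r is the error code the assert is guarding:)

    // print the error code before the assert fires, so the crash log shows it
    dout(0) << "unexpected error code r = " << r << dendl;
    assert(r == 0);
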
[19:57] <ajm> sagewk: is it safe to run head in place of 0.39 at the moment - no irreversible changes - or should I apply that patch to 0.39?
[19:57] <sagewk> ajm: master is safe
[19:57] <ajm> k
[19:59] <ajm> the zeroed pginfo thing - is it a known issue, or are you saying there's a zeroed pginfo and you're not sure why :)
[20:01] <gregaf> ajm: we've seen it happen under a couple different scenarios that sjust can talk about
[20:01] <gregaf> and if you had run a restart on the box in question, it might also be an unidentified xfs bug...
[20:02] <sjust> ajm: looking
[20:02] <ajm> it's possible; I'm more interested in how to fix it :)
[20:02] <ajm> and get that osd back up
[20:09] <sjust> ajm for osd.5, you just upgraded to 0.39?
[20:09] <ajm> yes, from 0.37
[20:09] <sjust> ok
[20:10] <ajm> i was having weird issues with 0.37 where nodes would die after some period, 0.39 definitely fixed that though
[20:10] <todin> sjust: any idea when bug 1738 is fixed?
[20:11] <Tv> sagewk: fyi for rbd related sprint planning: http://tracker.newdream.net/issues/1790
[20:12] <sjust> todin: not sure, you should be able to work around it fairly easily though by adjusting your crushmap
[20:13] <sagewk> tv: looks good to me.
[20:13] <sagewk> tv: i wonder if we should revisit how to enumerate daemons (osds etc.) at the same time.. may be some common ground here
[20:14] <todin> sjust: I know the workaround, but I wanted to test failure domains, so in the meantime I need osds with more disks?
[20:14] <Tv> sagewk: so i have some plans on that already, because i needed it for osds
[20:14] <sjust> no, it's just that the bug is really only triggered when the nodes at the bottom of the hierarchy contain only one osd
[20:14] <sjust> in which case you might as well just move the osds to the next higher level
[20:14] <Tv> sagewk: but yeah, i went mostly after "i *need* this", not after "this would be nice"
[20:15] <sjust> resulting in a basically equivalent, but shorter hierarchy
[20:17] <sjust> root - rack - node - osd
[20:17] <sjust>             - node - osd
[20:17] <sjust>      - rack - node - osd
[20:17] <sjust>             - node - osd
[20:17] <sjust> becomes
[20:17] <todin> sjust: how should I change the crushmap to do that? http://85.214.49.87/ceph/bug-1738/crushmap.txt
[20:17] <sjust> root - rack - osd
[20:17] <sjust>             - osd
[20:17] <sjust>      - rack - osd
[20:17] <sjust>             - osd
[20:18] * pr-23393 (winter@bmw.isprime.com) has joined #ceph
[20:18] <todin> sjust: hmm, your ascii art doesn't explain it well to me :-(
[20:18] <sjust> yeah, sorry, looking at your crushmap now :)
[20:18] <ajm> sagewk: sjust: http://pastebin.com/3D6e5TnW <osd.5 with that patch
[20:19] <todin> sjust: I want to have two machines in one rack and the other two in the other rack; each rack should be a failure domain, so that I could power down a whole rack
[20:20] <sjust> todin: yeah, so you remove the host level and place the osds under the rack level
[20:20] <sjust> one sec, I'll fix it up and pastebin it
[20:24] <sjust> todin: http://pastebin.com/E1FGtd1j
[20:25] <todin> sjust: ahh, I see, thanks, I will try it
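
(The pastebin links here have long since expired. A hedged sketch of the flattened layout in CRUSH's text format - the ids, names, and weights are made up, and the bucket types must match the type list declared in your own map:)

    # osds placed directly under each rack; the single-osd host level is removed
    rack rack0 {
            id -2
            alg straw
            hash 0  # rjenkins1
            item osd.0 weight 1.000
            item osd.1 weight 1.000
    }
    rack rack1 {
            id -3
            alg straw
            hash 0  # rjenkins1
            item osd.2 weight 1.000
            item osd.3 weight 1.000
    }
    pool default {
            id -1
            alg straw
            hash 0  # rjenkins1
            item rack0 weight 2.000
            item rack1 weight 2.000
    }
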
[20:26] * aliguori (~anthony@32.97.110.59) has joined #ceph
[20:27] <sagewk> sjust: so this'll continue to be a problem for people now that ceph is generating a rack/host hierarchy by default..
[20:27] <sjust> sagewk: ah...
[20:34] <todin> and what is the timeline for trim support in the rbd layer? issue #1692?
[20:37] <darkfaded> hehe, redmine time tracking: spent time 410hrs
[20:37] <sagewk> todin: probably january
[20:38] <todin> sagewk: ok, that would be quite nice
[20:39] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[20:39] <sagewk> todin: are you using librbd or the kernel client?
[20:39] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:40] <todin> sagewk: librbd
[20:40] <sagewk> todin: k
[20:41] <todin> sagewk: the patches from yesterday are very stable; atm I cannot crash the cluster
[20:41] <chaos_> sagewk, it looks like increasing beacon interval to 8 seconds helped a lot ;-) at least for now
[20:41] * todin needs new test cases
[20:42] <sagewk> todin: great
[20:45] <todin> is there an explanation of ceph pg dump somewhere?
[20:47] <todin> and btw btrfs in rc4 is quite good, no increase in the load over time
[20:50] <ajm> sagewk: sjust: lmk if you need the full log from that osd.5
[20:51] <sagewk> todin: not currently, sorry
[20:52] <sagewk> ajm: yeah, full log would be good. there is garbage in the journal it looks like
[20:52] <ajm> sagewk: ok, i'll grab a full log, any idea how to get that out?
[20:52] <todin> sagewk: hmm, crushmap changes aren't well tested - two crashes in a minute
[20:53] <sagewk> ajm: need to see the log first to tell whether the reader is incorrectly seeing bad entries as good, or whether the writer actually wrote crap
[20:53] <ajm> ok
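
(A sketch of how a "full log" would typically be captured - turning up the relevant debug levels for that osd in ceph.conf before restarting the daemon; the exact levels here are illustrative:)

    [osd.5]
        debug osd = 20
        debug journal = 20
        debug filestore = 20
        debug ms = 1
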
[20:55] <todin> sagewk: ./include/interval_set.h: 295: FAILED assert(!empty()) what info do you need?
[21:16] <ajm> sagewk: http://adam.gs/osd.5.log.1323202564.bz2
[21:44] <sjust> todin: was that crash a result of loading in the crushmap I pastebin'd?
[21:47] <todin> sjust: I am not sure about that; between loading the map and the crash there were a few minutes.
[21:47] <sjust> backtrace?
[21:47] <todin> sjust: I have a core, the whole bt?
[21:47] <sjust> yeah
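
(The usual recipe for pulling a full backtrace out of a core file; the binary path and core file name below are assumptions:)

    gdb /usr/bin/ceph-osd core.12345
    (gdb) thread apply all bt
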
[21:52] <todin> sjust: http://pastebin.com/Ex5KQVFC
[21:59] <Tv> gregaf: http://gitbuilder.ceph.newdream.net/output/sha1/019597e6f480bc10d14adcde2aba54a0b6021ca0/ceph.x86_64.tgz etc
[22:01] <grape> I am running mkcephfs without having created /srv/mon.* and mkcephfs complains that it can't find the mon store. When I add the missing mon dir and run mkcephfs again, it complains that it can't read magic from mon data. I had this set up and running before the upgrade to 0.39, and I went back to make sure it was repeatable, but I must have monkeyed around with it too much. Any idea what I might be overlooking in this case?
[22:01] <Tv> grape: you're not supposed to create the mon.* directories, those are created by ceph-mon mkfs
[22:01] <grape> Tv: exactly
[22:02] <Tv> grape: if you wipe them out and re-run, please share exact error message
[22:02] <grape> Tv: will do
[22:05] <grape> Tv: the output from when it tries to start mon.a (as well as mon.b and mon.c) is:
[22:05] <grape> problem opening monitor store in /srv/mon.a: error 2: No such file or directory
[22:05] <grape> failed: ' /usr/bin/ceph-mon -i a -c /etc/ceph/ceph.conf '
[22:05] <Tv> grape: can you pastebin your conf file please
[22:05] <grape> Tv: sure thing
[22:06] <sjust> todin: thanks, looking
[22:14] <grape> Tv: Here's that config: http://pastebin.com/vtHNp0p1
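
(That config pastebin is also gone. Judging from the "problem opening monitor store in /srv/mon.a" error earlier, the mon section presumably looked something like this - the $id-style path is inferred from the error, not confirmed:)

    [mon]
        mon data = /srv/mon.$id
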
[22:15] <grape> Tv: I thought I had found a problem, but it didn't fix the error.
[22:16] <Tv> grape: and /srv exists on all the nodes, right?
[22:16] <grape> Tv: yes
[22:18] <grape> the osds are all mounted on their respective drives as well, though that doesn't seem to affect the error
[22:18] <Tv> grape: just to make this complete, can you pastebin the command you ran and the full output; i'll set up 3 vms here to reproduce
[22:19] <grape> sure
[22:19] <grape> I should just update it on github
[22:23] * MarkDude (~MT@64.134.236.67) Quit (Quit: Leaving)
[22:34] <grape> Tv: http://pastebin.com/vRizDjX5
[22:34] <grape> Tv: sorry for the delay - wanted to make sure you had something accurate
[22:40] <grape> Tv: here
[22:40] <grape> Tv: here is the clean-up script that goes along with the setup script: http://pastebin.com/UD3fWVwP
[22:40] * NightDog__ (~karl@52.84-48-58.nextgentel.com) Quit (Read error: Connection reset by peer)
[22:40] * NightDog__ (~karl@52.84-48-58.nextgentel.com) has joined #ceph
[22:44] <Tv> grape: my turn to be sorry about the delay -- we just had a fire drill here
[22:45] <grape> Tv: lol that's what you get with those fancy big-city offices ;-)
[22:45] <Tv> grape: it's a trade-off for 50th floor views ;)
[22:45] <grape> Tv: nice!
[22:45] <Tv> http://www.flickr.com/photos/tv42/5332021114/
[22:46] <Tv> we have this floor on 3 sides of the building now
[22:46] <Tv> pretty darn sweet
[22:47] <grape> Tv: That's a great shot!
[22:47] <Tv> another direction: http://www.flickr.com/photos/tv42/5988929797/
[22:49] <grape> Tv: amazing
[22:49] <grape> Tv: I was just kidding about the big-city offices, but I wasn't far off.
[22:51] <grape> Tv: California is on its own partition in my mind. You could show me George Jetson flying past your window and I wouldn't be surprised.
[22:52] <nwatkins> sagewk: any good place to stick a 10 GB log file?
[22:55] <Tv> grape: can't quite live up to that, but... 1) helicopters often fly *below* our window 2) a few weeks ago, we had a job interview get badly distracted because LAPD was training outside; 6 people hanging on the outside of a helicopter, standing on the landing pads
[23:00] <grape> Tv: Crazy
[23:08] * MattBenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[23:08] * NightDog__ (~karl@52.84-48-58.nextgentel.com) Quit (Read error: Connection reset by peer)
[23:08] * NightDog__ (~karl@52.84-48-58.nextgentel.com) has joined #ceph
[23:33] <ajm> sagewk/sjust: lmk if there is other info you guys need or if you have a chance to look at either of those bugs
[23:46] <Tv> grape: it started up just fine for me :-/
[23:47] <sjust> ajm: do you have logs for osd5 from before the upgrade?
[23:48] <ajm> sjust: I don't unfortunately :/
[23:48] <sjust> ajm: worth a shot, still looking
[23:50] <grape> Tv: are you mounting the osds prior to mkcephfs
[23:50] <Tv> grape: i'm running your scripts
[23:50] <grape> lol
[23:50] <Tv> grape: after a whitespace-cleanup, pastebin introduced ^Ms
[23:50] <Tv> grape: and i did a s/node/tv-mkceph-/g
[23:51] <grape> Tv: https://github.com/nugoat/ceph/tree/master/doc/ops/install/ceph-ubuntu-howto-scripts
[23:51] <Tv> grape: one thing i did notice is you have no error checking in the script, so perhaps something fails early on and you just missed it in the output
[23:51] <Tv> grape: "set -e" would do wonders
[23:51] <Tv> grape: but next up, can you just post the *full* output
[23:52] <grape> Tv: thanks. Just a minute ago I did find an error preceding mkcephfs
[23:54] <Tv> grape: really, the setup scripts are almost identical, i just have different hostnames etc
[23:54] <grape> Tv: set -e doesn't play around ;-)
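
(The fail-fast idiom Tv is recommending, in its minimal form:)

    #!/bin/sh
    set -e   # abort on the first command that exits non-zero
    # ... setup steps follow; any failure now stops the script immediately
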
[23:54] <Tv> grape: please post your full output and i'll diff *that*
[23:55] <Tv> same thing with mycluster.conf, i have different hostnames & ips, otherwise identical
[23:56] * Tv hugs /usr/bin/script
[23:59] <Tv> grape: http://pastebin.com/rHaTA7cb
[23:59] <Tv> grape: this is ubuntu 10.10 because that's what i had lying around, but i don't expect that to make much of a difference

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.