#ceph IRC Log


IRC Log for 2012-04-24

Timestamps are in GMT/BST.

[0:22] <joao> just checking, the meeting should start in ~8 minutes, right?
[0:28] <joshd> joao: 2 minutes according to my clock
[0:29] <joao> we're in sync then, aside from the 8h time difference :p
[0:32] * aa__ (~aa@217.115.112.241) Quit (Quit: Konversation terminated!)
[0:32] * aa__ (~aa@217.115.112.241) has joined #ceph
[0:34] <joshd> joao, elder, nhm: 5 more minutes or so
[0:47] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) Quit (Quit: LarsFronius)
[0:50] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[0:59] * loicd (~loic@173.231.115.58) Quit (Quit: Leaving.)
[1:08] * votz (~votz@c-67-188-115-159.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[1:09] * BManojlovic (~steki@212.200.243.246) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:19] * loicd (~loic@modemcable075.145-176-173.mc.videotron.ca) has joined #ceph
[1:25] * aa__ (~aa@217.115.112.241) Quit (Quit: Konversation terminated!)
[1:25] * aa__ (~aa@217.115.112.241) has joined #ceph
[1:35] * buck (~buck@bender.soe.ucsc.edu) Quit (Quit: Leaving)
[2:00] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:03] * lofejndif (~lsqavnbok@82VAADB10.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:19] <yehudasa> gregaf: nhm has some issues with the full_ratio on one of his clusters
[2:21] * Tv_ (~tv@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[2:27] <nhm> I just commented on 2286
[2:28] <nhm> gregaf: this is on the patched BTRFS cluster. I shared a google doc with the cluster layout if you need to poke around.
[2:30] * joao (~JL@89-181-153-140.net.novis.pt) Quit (Ping timeout: 480 seconds)
[2:38] * yoshi (~yoshi@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:40] * aa__ (~aa@217.115.112.241) Quit (Quit: Konversation terminated!)
[2:40] * aa (~aa@217.115.112.241) has joined #ceph
[2:42] * The_Bishop (~bishop@cable-89-16-138-109.cust.telecolumbus.net) has joined #ceph
[2:43] <gregaf> nhm: sorry, was running around talking to people
[2:46] * joao (~JL@89-181-154-158.net.novis.pt) has joined #ceph
[2:48] <gregaf> nhm: do these clusters have monitor logging enabled at all? and did you follow the same procedure in updating them?
[2:55] * jefferai (~quassel@quassel.jefferai.org) Quit (Remote host closed the connection)
[2:56] * jefferai (~quassel@quassel.jefferai.org) has joined #ceph
[3:20] * joao (~JL@89-181-154-158.net.novis.pt) Quit (Ping timeout: 480 seconds)
[3:33] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[3:40] * alexxy (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[3:40] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[4:10] * aa (~aa@217.115.112.241) Quit (Quit: Konversation terminated!)
[4:10] * andresambrois (~aa@217.115.112.241) has joined #ceph
[4:10] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[4:42] <nhm> gregaf: Yeah, all three had ceph updated the same way.
[4:43] <nhm> gregaf: Sorry, I didn't have any monitor debugging on, just osd and radosgw. Let me know what you'd like it set to.
[5:17] * grape (~grape@216.24.166.226) Quit (Remote host closed the connection)
[5:40] * renzhi (~renzhi@180.169.73.90) has joined #ceph
[5:57] <renzhi> NaioN: to continue from yesterday, does your setup support multiple copies of the same file? Just curious
[6:01] <The_Bishop> arrrr! my cluster was so damaged that i lost all the data within :(
[6:02] <The_Bishop> it was no important data though, but this is a show stopper
[6:02] <The_Bishop> reformat, reinit, retry ;(
[6:03] <The_Bishop> too often the OSD journal gets bad and leads to assertion failures (=abort)
[6:04] <The_Bishop> hmm, next time i will keep more logs
[6:11] <renzhi> The_Bishop: that is painful...
[6:11] <renzhi> I'm setting up a test cluster to see how it goes
[6:13] <The_Bishop> the cephfs-subsystem is quite unstable so far
[6:13] <The_Bishop> have no experience with rados, though
[6:15] <renzhi> The_Bishop: oh, I thought you were using rados, I am trying to set up rados, this was scary for a minute
[6:15] <renzhi> rados is supposed to be quite stable
[6:16] <The_Bishop> yep
[6:16] <The_Bishop> is RADOS resizable without reformatting? this would be an alternative to the posix-layer
[6:17] <renzhi> no idea, first timer here
[6:17] <dmick> The_Bishop: you can add storage to a rados cluster, yes
[6:18] <The_Bishop> i would put the "interesting" data to RADOS, if i can resize the devices without copying
[6:19] <renzhi> what I'm looking for is: dynamic resizing (without shutting down any node), multiple copies of the same file (i.e. if one osd goes down, I should still be able to retrieve any data)
[6:20] <renzhi> I can't find any info on my second requirement though
[6:20] <iggy> that's basically the point of ceph
[6:21] <dmick> renzhi: yeah, pretty much what iggy says. R is for Reliable
[6:22] <iggy> the default number of copies of objects is 2... that should say it all
[6:23] <renzhi> thanks guys, and that's why I'm setting up the cluster for testing :)
[6:23] <renzhi> iggy: where do I change the default number?
[6:24] <The_Bishop> renzhi: well; umount+resize+mount would be ok for me if i don't need to copy all the data
[6:24] <iggy> i don't know the exact option, but it can be set in the conf file
[6:24] <iggy> it can also be set per pool with the ceph tool
[6:25] <renzhi> hmm, that's interesting. Need to find that out
[6:26] <renzhi> The_Bishop: have you found a way to do that?
[6:26] <The_Bishop> renzhi: no, just considering so far; as the posix layer comes out that flakey
[6:27] <iggy> The_Bishop: i'm pretty sure rbd devices can be resized... they would be significantly less useful otherwise
[6:27] <The_Bishop> sure :)
[6:28] * renzhi running out for a sandwich...
[6:28] <The_Bishop> i'm waiting for the recompile and then...
[6:29] <iggy> http://ceph.newdream.net/docs/master/man/8/rbd/
[6:29] <iggy> resize is one of the verbs, so you should be good
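A minimal sketch of the resize flow discussed above, assuming a hypothetical image named "foo" in the default "rbd" pool (the rbd tool of this era takes --size in megabytes):

    rbd create foo --size 10240     # 10 GB image
    rbd resize foo --size 20480     # grow it in place to 20 GB
    rbd info foo                    # confirm the new size
    # a filesystem inside the image still needs its own grow step (e.g. resize2fs)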
[6:30] <The_Bishop> as a side note: i try out ceph on rotten hardware with small disks... maybe not the primary concern of the project
[6:31] <iggy> they seem to be optimizing for the journal on ssd, medium amount of sata disk config
[6:31] <The_Bishop> well, but if it works out it seems better than using the disks independently
[6:31] <The_Bishop> i had this struggle before...
[6:35] <The_Bishop> cluster speed comes as a benefit when distributing on several disks, but my point is error resilience (with old hardware) and unified storage
[6:36] <The_Bishop> only have 100mbit interconnect between nodes here
[6:37] <iggy> that's going to be rough
[6:38] <iggy> your speed is going to be effectively limited to 6ish MB/s
[6:39] <The_Bishop> as said, getting along with failing storage is more important for me than bleeding-edge speeds
[6:39] <The_Bishop> for now, i had more problems with failing ceph-osd processes than failing disks
[6:42] <iggy> :/ i haven't been actively testing it too much lately, but things seem to have been getting a lot better in the past 6 months or so
[6:42] <The_Bishop> i am on ceph for a month or so, don't know about the earlier times
[6:44] <The_Bishop> especially the journal content leads to program aborts from my experience
[6:44] <iggy> i've been following the project for quite some time
[6:44] <The_Bishop> had that several times
[6:45] <The_Bishop> "ceph-osd --mkjournal" did not work out for me, had to issue "ceph-osd --mkfs" to get an OSD working again
[6:46] <iggy> july 2008 apparently
[6:46] <iggy> yeesh
[6:47] <iggy> i think i was one of the first people aside from devs to put stuff in the wiki
[6:48] * wido (~wido@rockbox.widodh.nl) Quit (Remote host closed the connection)
[7:00] -kilo.oftc.net- *** Looking up your hostname...
[7:00] -kilo.oftc.net- *** Checking Ident
[7:00] -kilo.oftc.net- *** No Ident response
[7:00] -kilo.oftc.net- *** Found your hostname
[7:00] * CephLogBot (~PircBot@rockbox.widodh.nl) has joined #ceph
[7:00] <The_Bishop> hello CephLogBot :)
[7:00] <iggy> can't compile elsewhere?
[7:01] <The_Bishop> well, i prefer native compiles because i prefer optimized builds
[7:02] * iggy gentoo faps
[7:02] <The_Bishop> two ubuntu boxes and two gentoo boxes, all different hardware
[7:03] <The_Bishop> and different library versions plus different gcc versions...
[7:03] <iggy> there's your problem...
[7:03] <The_Bishop> not to forget different optimizer flags
[7:03] <The_Bishop> well, yes
[7:06] <The_Bishop> my testbed is P3-800, dual P3-1000, VIA-C3-1000 and amd-barcelona-2700
[7:14] <The_Bishop> ccache is a good tool to help with frequent recompiles ;)
[7:15] <The_Bishop> this saved me many hours of compile time
[7:21] * cattelan is now known as cattelan_away
[7:28] * andresambrois (~aa@217.115.112.241) Quit (Quit: Konversation terminated!)
[7:28] * andresambrois (~aa@217.115.112.241) has joined #ceph
[7:42] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[8:00] * alexxy (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[8:06] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[8:12] * andresambrois (~aa@217.115.112.241) Quit (Quit: Konversation terminated!)
[8:12] * andresambrois (~aa@217.115.112.241) has joined #ceph
[8:27] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[8:35] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[8:37] * renzhi starts to love ceph, or is it too early to fall in love???
[8:48] <NaioN> renzhi: if you mean replication of objects, yes I have the default of 2
[8:49] <renzhi> NaioN: thanks, you know where to change the default value?
[8:49] <NaioN> you can change it with the ceph command
[8:49] <NaioN> per pool
[8:50] <renzhi> ok
[8:50] <NaioN> have to search for the exact command
[8:50] <NaioN> mom
[8:50] <renzhi> I'll take a look
[8:51] <NaioN> http://ceph.newdream.net/wiki/Adjusting_replication_level
[8:51] * pmjdebruijn just checked the 3.2.x kernel changelogs
[8:51] <pmjdebruijn> I don't see any of the ceph fixes from 3.3.x in there
[8:51] <NaioN> ceph osd pool set $poolname size $replicationlevel
[8:52] <pmjdebruijn> has anybody considered pushing them to GregKH or now to benhutchingsuk?
[8:52] <NaioN> renzhi: http://ceph.newdream.net/docs/master/control/#osd-subsystem
[8:52] <NaioN> some more info about the commands
[8:53] <renzhi> NaioN: thx a lot for the pointer
[8:54] <NaioN> the second link points to the new documentation of ceph
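A minimal sketch of the per-pool replication change NaioN points to, assuming a pool named "data" and a target of 3 copies; the verification step is an assumption about where the setting shows up:

    ceph osd pool set data size 3    # keep 3 replicas of every object in "data"
    ceph osd dump | grep pool        # the pool listing should reflect the new size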
[8:56] <renzhi> NaioN: quick question regarding your setup. I'm assuming that your osd are on normal ethernet, do you see busy data exchange between the nodes, when they are idle?
[8:56] <NaioN> I tested with normal ethernet (gigabit)
[8:56] <NaioN> but the production cluster uses infiniband
[8:56] <renzhi> i.e. do they generate a lot of messages to sync with each other?
[8:56] <renzhi> oh, nice
[8:57] <NaioN> yes they generate a fair amount of traffic
[8:58] <renzhi> ok, just trying to make sure I'm not doing something stupid :)
[9:01] * martin (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[9:02] * martin is now known as Guest1520
[9:04] * Madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[9:05] * s[X]_ (~sX]@ppp59-167-154-113.static.internode.on.net) has joined #ceph
[9:09] * s[X]__ (~sX]@ppp59-167-154-113.static.internode.on.net) has joined #ceph
[9:09] * s[X]_ (~sX]@ppp59-167-154-113.static.internode.on.net) Quit (Read error: Connection reset by peer)
[9:12] * s[X] (~sX]@ppp59-167-154-113.static.internode.on.net) Quit (Ping timeout: 480 seconds)
[9:14] * Theuni (~Theuni@195.62.106.91) has joined #ceph
[9:16] * f4m8_ is now known as f4m8
[9:32] * s[X]__ (~sX]@ppp59-167-154-113.static.internode.on.net) Quit (Ping timeout: 480 seconds)
[9:36] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[9:36] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[9:56] * eightyeight (~atoponce@pthree.org) Quit (Ping timeout: 480 seconds)
[10:07] * The_Bishop_ (~bishop@cable-89-16-138-109.cust.telecolumbus.net) has joined #ceph
[10:07] * The_Bishop (~bishop@cable-89-16-138-109.cust.telecolumbus.net) Quit (Read error: Connection reset by peer)
[10:26] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[10:39] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:47] * johnl_ (~johnl@2a02:1348:14c:1720:24:19ff:fef0:5c82) Quit (Remote host closed the connection)
[11:20] * Guest1520 (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Lost terminal)
[11:21] <renzhi> I'm trying to mount with the kernel module, but always got the error:
[11:21] <renzhi> sudo mount -t ceph test1:6789:/ /mnt/ceph -vv -o name=admin,secret=AQC3PpZPmDupChAAWjgyeUxXnRjgYQcm51BSRg==
[11:21] <renzhi> parsing options: rw,name=admin,secret=AQC3PpZPmDupChAAWjgyeUxXnRjgYQcm51BSRg==
[11:21] <renzhi> mount: error writing /etc/mtab: Invalid argument
[11:22] <renzhi> ceph module is loaded, lsmod shows ceph and libceph
[11:22] <renzhi> ceph-authtool -l xpcluster.keyring
[11:22] <renzhi> [client.admin]
[11:22] <renzhi> key = AQC3PpZPmDupChAAWjgyeUxXnRjgYQcm51BSRg==
[11:24] * andresambrois (~aa@217.115.112.241) Quit (Read error: Connection reset by peer)
[11:24] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[11:25] * andresambrois (~aa@217.115.112.241) has joined #ceph
[11:26] <renzhi> any idea why? the mount seems to work though
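A minimal sketch of the same mount with the key kept off the command line, assuming the admin key is first extracted to a local file (the /etc/ceph/admin.secret path is an assumption); mount.ceph's secretfile= option reads the key from that file:

    ceph-authtool -p -n client.admin xpcluster.keyring > /etc/ceph/admin.secret
    mount -t ceph test1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret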
[11:39] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[11:41] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[12:02] * lofejndif (~lsqavnbok@04ZAACUQ1.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:02] * lofejndif (~lsqavnbok@04ZAACUQ1.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[12:03] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[12:18] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[12:27] <renzhi> I saw the following warning in the wiki:
[12:27] <renzhi> Warning: Don't mount ceph using kernel driver on the osd server. Perhaps it will freeze the ceph client and your osd server.
[12:28] <renzhi> what's the reason behind this?
[12:29] * yoshi (~yoshi@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:31] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[12:50] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:54] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[12:58] * andresambrois (~aa@217.115.112.241) Quit (Quit: Konversation terminated!)
[12:58] * andresambrois (~aa@217.115.112.241) has joined #ceph
[13:04] * renzhi (~renzhi@180.169.73.90) Quit (Quit: Leaving)
[13:08] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[13:17] <nhm> renzhi: The kernel client can deadlock if it's trying to flush pages when the OSD is also trying to flush pages at the same time.
[13:20] <nhm> oh, I guess you left.
[13:20] <nhm> well, for historical reasons, here's a good explanation: http://www.spinics.net/lists/ceph-devel/msg01425.html
[13:21] <ceph-test> Xnj&
[13:29] * joao (~JL@89-181-154-158.net.novis.pt) has joined #ceph
[13:30] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[13:34] <joao> howdy #ceph
[13:44] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[13:59] <nhm> good morning joao
[14:08] * The_Bishop_ (~bishop@cable-89-16-138-109.cust.telecolumbus.net) Quit (Ping timeout: 480 seconds)
[14:30] <elder> nhm, joao, I am interested to collect info about peoples' flights and where they're staying. I don't know if anyone else cares though. If you send me your info I'll provide a summary.
[14:32] <joao> elder, I was going to ask you and mark for the same thing, so sure
[14:32] <joao> as soon as I finish my lunch I'll speak with you :)
[14:37] <nhm> elder: sure, let me take a look
[14:57] * eightyeight (~atoponce@pthree.org) has joined #ceph
[15:10] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[15:19] * loicd (~loic@modemcable075.145-176-173.mc.videotron.ca) Quit (Quit: Leaving.)
[15:23] <joao> elder, email or pvt?
[15:23] <elder> Either is fine.
[15:29] * The_Bishop (~bishop@158.181.82.102) has joined #ceph
[15:33] * loicd (~loic@173.231.115.58) has joined #ceph
[15:35] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[15:44] * f4m8 is now known as f4m8_
[15:48] * pmjdebruijn (~pascal@62.133.201.16) Quit (Remote host closed the connection)
[15:57] * Mareo (~mareo@mareo.fr) has joined #ceph
[16:00] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[16:08] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[16:22] * cattelan_away (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[16:25] * wido (~wido@rockbox.widodh.nl) has joined #ceph
[16:28] <elder> sage, I'm looking at parse_reply_info_extra() in mds_client.c and trying to see whether what the MDS does will ever cause it to be called.
[16:28] <elder> I think it will. But it's broken because of byteorder (sparse to the rescue!)
[16:32] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has left #ceph
[16:32] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[16:35] <dwm_> Hmm, with the newly simplified heartbeat code, should I expect 'hb in' at the bottom of 'ceph pg dump' to contain the full set of OSDs?
[16:40] * cattelan_away is now known as cattelan
[16:40] <elder> sage, in fact the filelock stuff seems to have quite a few basic problems with byte order.
[16:41] <elder> I think I'm going to ignore them for now; it ought to get a more comprehensive look.
[16:45] * Theuni (~Theuni@195.62.106.91) Quit (Ping timeout: 480 seconds)
[16:48] * MoXx (~Spooky@fb.rognant.fr) Quit (Ping timeout: 480 seconds)
[16:50] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[16:56] * andresambrois (~aa@217.115.112.241) Quit (Ping timeout: 480 seconds)
[17:16] * BManojlovic (~steki@91.195.39.5) Quit (Remote host closed the connection)
[17:21] * MoXx (~Spooky@fb.rognant.fr) has joined #ceph
[17:35] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[17:48] * Theuni (~Theuni@46.253.59.219) has joined #ceph
[18:08] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[18:08] * alo (~alo@194.244.1.93) has joined #ceph
[18:09] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Quit: Ex-Chat)
[18:16] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[18:28] <gregaf> dwm_: yeah, hb_in is on all the OSDs; it's the address they receive heartbeat pings at
[18:31] * aliguori (~anthony@32.97.110.59) has joined #ceph
[18:32] <joao> sagewk, sjust, poke me whenever you're available :)
[18:35] <sjust> joao: I'm here
[18:36] <gregaf> nhm: I pulled the monitor pgmaps off your patched btrfs cluster and that's all the useful information we can pull off of it, so feel free to do whatever you want without worrying preserving information for us
[18:36] <nhm> gregaf: ok, I'll get it going again.
[18:37] <nhm> gregaf: would you like me to set debugging options on the mons?
[18:38] <gregaf> nhm: I doubt it would hurt anything, and they're generally useful if there's a problem
[18:38] <sagewk> joao: hey
[18:38] <sagewk> gregaf: did you grab the osdmaps too?
[18:38] <nhm> gregaf: sure, what would you like them set to?
[18:38] <sagewk> i'd take the whole mon dir, bc then you get the logs too
[18:38] <gregaf> oh, no... don't think there'll be anything in them?
[18:39] <gregaf> sagewk: no logs at all :(
[18:39] <sagewk> k
[18:39] <gregaf> but hey, look, I got the osdmaps too ;)
[18:39] <nhm> yeah, sadly we just had logging on for the OSDs and radosgw
[18:39] <yehudasa> sjust: client.4697.0:3246564
[18:39] <gregaf> nhm: debug ms 1 and debug mon 10, probably
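A minimal sketch of turning those monitor debug levels on via ceph.conf, assuming the stock /etc/ceph/ceph.conf path; the monitors pick the values up on restart:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [mon]
        debug ms = 1
        debug mon = 10
    EOF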
[18:40] <yehudasa> sjust: under /var/log/ceph/a/osd.1.log.1
[18:41] <nhm> gregaf: Is there any kind of admin socket or anything we want?
[18:42] <gregaf> I don't normally think about those, are they off by default?
[18:42] <gregaf> and... I guess?
[18:42] <gregaf> but I think you'd want that more than I would :p
[18:45] <nhm> gregaf: honestly I've barely thought about the monitors.
[18:46] <nhm> gregaf: Until yesterday they were just kind of a black box that I hoped were working right. ;)
[18:48] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:49] * Theuni (~Theuni@46.253.59.219) Quit (Quit: Leaving.)
[18:53] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Quit: LarsFronius)
[18:59] <nhm> gregaf: should injecting mon_osd_full_ratio change the full_ratio value returned when doing a pg dump?
[18:59] <gregaf> not any more
[19:00] <gregaf> it used to
[19:00] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[19:02] <nhm> gregaf: ok, I basically want to get the third cluster into the same state as the other two.
[19:02] <gregaf> oh, I get you, sorry
[19:03] <gregaf> you want to run "ceph pg set_full_ratio 0.95"
[19:03] <gregaf> and "ceph pg set_nearfull_ratio 0.85"
[19:03] <nhm> ah, nice
[19:03] <nhm> gregaf: "could not convert 0.95to a float"
[19:04] <gregaf> uh
[19:06] <gregaf> (looking)
[19:06] <nhm> same thing for set_nearfull_ratio btw
[19:09] * Theuni (~Theuni@46.253.59.219) has joined #ceph
[19:09] <gregaf> shoot, I think maybe the parsing is wrong
[19:10] <gregaf> stupid conversion functions and their stupid non-uniform interfaces
[19:11] <nhm> gregaf: is that just at the user interface layer? Could it be related to why it originally got set to 0?
[19:11] <gregaf> just the ui, and no
[19:11] <nhm> ok
[19:11] <gregaf> I have a pretty good idea of how it got set to 0, but this isn't it
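A minimal sketch of the full-ratio adjustment discussed above, once a build with the parsing fix is in place; the ratios can be checked in the pg dump output as mentioned earlier:

    ceph pg set_full_ratio 0.95
    ceph pg set_nearfull_ratio 0.85
    ceph pg dump | grep full_ratio    # confirm the new ratios took effect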
[19:22] * Theuni (~Theuni@46.253.59.219) Quit (Remote host closed the connection)
[19:23] * andresambrois (~aa@217.115.112.241) has joined #ceph
[19:27] * yehudasa (~yehudasa@aon.hq.newdream.net) Quit (Remote host closed the connection)
[19:29] <gregaf> nhm: do you know which branch you're running on those nodes?
[19:31] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[19:32] <nhm> gregaf: this as master from yesterday. 0.45-281-g0777613
[19:35] <gregaf> okay, I got the problem
[19:35] <gregaf> durr
[19:37] <nhm> gregaf: which one?
[19:37] <gregaf> nhm: hrm, what's the head commit?
[19:37] <gregaf> I can't find that hash
[19:38] <gregaf> for the version of ceph you're running
[19:38] <gregaf> what's the last commit
[19:40] <nhm> gregaf: no, I meant which problem did you figure out? ;) As for the commit, I don't know. I'm just going based on what gitbuilder named the packages. Let me see if I can find out.
[19:40] <gregaf> I figured out the parsing problem
[19:40] <gregaf> ah, you're using the packages, aren't you
[19:41] <gregaf> so it's not a useful hash (*sigh*)
[19:42] * josef (~seven@nat-pool-rdu.redhat.com) has joined #ceph
[19:42] * Tv_ (~tv@aon.hq.newdream.net) has joined #ceph
[19:42] <nhm> gregaf: Yep. So gitbuilder doesn't keep track of that when it builds the packages?
[19:42] <josef> trying to build ceph and its saying i dont have a crypto library installed when i run ./configure even tho i have cryptopp-devel installed
[19:43] <gregaf> nhm: it doesn't look like anything else changed the monitors yesterday, so probably your best bet is checking out master branch (I just pushed a commit to it), building it, and then replacing your ceph executables with your newly-built ones
[19:43] <Tv_> nhm: can you repeat the gitbuilder thing, i'm missing context?
[19:44] <gregaf> Tv_: nhm: the package gitbuilder has different hash IDs than the git tree proper does
[19:44] <Tv_> gregaf: that doesn't sound right
[19:44] * andresambrois (~aa@217.115.112.241) Quit (Quit: Konversation terminated!)
[19:44] * andresambrois (~aa@217.115.112.241) has joined #ceph
[19:44] <gregaf> hrmm, I could be misremembering but I thought I'd run into this before
[19:45] <gregaf> josef: what's the actual error?
[19:45] <josef> configure: error: no suitable crypto library found
[19:46] <josef> checking for CRYPTOPP... no
[19:47] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) has joined #ceph
[19:47] <sagewk> josef: you can do --with-nss if nss-devel (or whatever) is installed
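A minimal sketch of the fallback sagewk suggests, assuming the NSS development headers are installed (nss-devel is the Fedora package name):

    yum install nss-devel
    ./configure --with-nss
    make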
[19:47] <nhm> Tv_: gregaf: let me know if you guys figure out how the mapping works, I wouldn't mind knowing.
[19:48] <Tv_> nhm: describe your problem please
[19:48] <gregaf> Tv: nhm is giving me 0.45-281-g0777613 as the version out of gitbuilder, and I can't find 777613 as a hash in my tree... I had this idea it's because the hash gets replaced during the packaging steps but I could be wrong
[19:48] <gregaf> oh, wait, the 0 isn't superfluous
[19:48] <Tv_> hah
[19:48] <gregaf> you know what, I think I just hadn't pulled in a fresh enough tree
[19:48] <gregaf> because I've got it now
[19:49] <gregaf> nhm: so the mapping is perfectly fine as long as your head isn't too thick :)
[19:49] <nhm> gregaf: good to know. :)
[19:49] <sagewk> gregaf: when you have a minute can you look at wip-2341?
[19:49] <gregaf> josef: I assume you also have libcryptopp installed, not just the dev stuff?
[19:49] <gregaf> sagewk: that's the osd timeout fix?
[19:50] <sagewk> yeah
[19:50] * jmlowe (~Adium@129-79-195-139.dhcp-bl.indiana.edu) has joined #ceph
[19:50] <gregaf> well the first version you pushed looked fine to me
[19:50] <gregaf> what did you change when you redid them?
[19:51] <gregaf> the const thing?
[19:51] <nhm> gregaf: Let me know if you need anything else, happy to oblige.
[19:51] <gregaf> nhm: nothing for me, just build and replace and it should be good
[19:52] <gregaf> sagewk: yep, 2341 looks good to me
[19:52] <nhm> gregaf: Ok. I'll probably try to get whatever the next version being deployed to congress is.
[19:52] <gregaf> nhm: if you want it up faster it's just a binary replacement for the monitors; shouldn't impact anything else and you can do it while the cluster's running :)
[19:52] <jmlowe> Quick question, if I needed to take an osd down for a few minutes is it better to kill it or ceph osd down N then kill?
[19:52] <sagewk> gregaf: thanks
[19:54] <nhm> gregaf: ok, I'd like to keep all three clusters at the same version of ceph if possible. Looks like I might be able to restart all three soonish.
[19:54] <gregaf> whatever works for you, just letting you know what it takes to make it work again ;)
[19:55] <nhm> gregaf: yes, the more info the better. :)
[19:55] <josef> gregaf: yeah
[19:55] <josef> --with-nss worked
[19:56] <josef> weird that it doesn't pick up cryptopp
[19:56] <josef> that's how it gets built in fedora proper
[19:57] <gregaf> josef: are you on RHEL? I thought they didn't like libcryptopp because it's not certified
[19:57] <josef> gregaf: no i'm on fedora
[19:57] <josef> who the hell runs rhel?
[19:57] <josef> ;)
[19:57] <sagewk> :)
[19:58] <Tv_> jmlowe: explicit down means clients don't wait for a timeout
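A minimal sketch of the graceful take-down Tv_ describes, assuming osd.3 is the daemon being serviced (the init-script invocation is an assumption); marking it down first lets clients fail over immediately instead of waiting out the heartbeat timeout:

    ceph osd down 3               # tell the cluster osd.3 is going away
    /etc/init.d/ceph stop osd.3   # then stop the daemon
    # ... do the maintenance ...
    /etc/init.d/ceph start osd.3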
[19:59] <Tv_> rhell
[20:00] <jmlowe> people with vendors who won't talk to you if you have anything other than rhel installed run rhel, *cough*ibm*cough*
[20:02] <josef> wtf, now it says boost spirit isnt installed
[20:02] <josef> if thats the case stat is lying to me
[20:02] <nhm> josef: Make sure the version is new enough
[20:03] <nhm> josef: I seem to remember someone else having that problem (with boost)
[20:03] <Tv_> jmlowe: that's like un-jailbreaking an iphone before taking it in for service.. ;)
[20:03] <josef> boost-devel-1.47.0-6.fc16.x86_64
[20:05] <jmlowe> which reminds me, good news everybody hp will officially support precise http://www.extremetech.com/computing/126563-hp-certify-ubuntu-12-04-for-its-proliant-servers
[20:06] <nhm> jmlowe: yeah, saw that the other day.
[20:06] <nhm> jmlowe: An interesting move.
[20:06] <jmlowe> now all they have to do is fix their raid cli to run on linux kernels >=3.0 so I don't have to use a wrapper that lies to it and tells it that it's running on a 2.6 series kernel
[20:07] <gregaf> well precise is a new kernel, so presumably that goes along with official support ;)
[20:08] <Tv_> you underestimate the disconnect between PR and execution..
[20:08] <gregaf> maybe I'm just optimistic!
[20:08] <Tv_> e.g. Dan's recent discovery of Dell IPMI inputting gibberish if you type very fast
[20:08] <Tv_> Serial over LAN that is
[20:09] <Tv_> it's one thing to claim a feature, another to have it work
[20:14] <joshd> josef: that's definitely new enough boost
[20:14] <josef> yeah i dont get wtf is wrong
[20:16] <joshd> configure is just checking if boost/spirit/include/classic_core.hpp or boost/spirit.hpp are in your include path
[20:17] <josef> its in /usr/include, not entirely sure why it cant find it there
[20:17] * BManojlovic (~steki@212.200.243.246) has joined #ceph
[20:22] * johnl (~johnl@ipv6.srv-98rr6.gb1.brightbox.com) has joined #ceph
[20:25] * johnl (~johnl@ipv6.srv-98rr6.gb1.brightbox.com) Quit (Remote host closed the connection)
[20:27] * alo (~alo@194.244.1.93) Quit (Quit: Sto andando via)
[20:32] * yehudasa (~yehudasa@aon.hq.newdream.net) has joined #ceph
[20:32] * johnl (~johnl@2a02:1348:14c:1720:29c6:1136:ca1a:b083) has joined #ceph
[20:32] <yehudasa> nhm: can you install the -dbg debs on burnupi01? I'm getting an error
[20:35] <nhm> yehudasa: sure
[20:41] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[20:55] * LarsFronius (~LarsFroni@95-91-243-252-dynip.superkabel.de) Quit (Quit: LarsFronius)
[20:59] <josef> argh ok i give up, back to other things
[21:03] * jmlowe (~Adium@129-79-195-139.dhcp-bl.indiana.edu) Quit (Ping timeout: 480 seconds)
[21:09] * mgalkiewicz (~mgalkiewi@85.89.186.247) has joined #ceph
[21:11] <mgalkiewicz> hi I have reported http://tracker.newdream.net/issues/2267 but no-one is assigned. Can I speed up fixing this bug somehow?
[21:12] <elder> mgalkiewicz, that one likely goes to me. I am occupied with a couple of other problems that have vaguely similar symptoms.
[21:12] <elder> My hope is I'll find one underlying cause that might help explain this whole family of problems.
[21:13] <elder> I will assign this one to myself however. I don't know that it will "speed up" anything but you can be assured it's got my attention.
[21:14] <elder> mgalkiewicz, are you able to reproduce this problem?
[21:15] <mgalkiewicz> yes but only in my production environment
[21:15] <mgalkiewicz> two rbd volumes just crashed once again
[21:15] <mgalkiewicz> when my osd.0 crash
[21:16] <mgalkiewicz> still have osd.1
[21:16] <elder> Well the osd.0 ought not to be crashing I suppose, but the client needs to tolerate it even if it does.
[21:18] <mgalkiewicz> the whole server crashed it is probably not ceph related however like u said client shouldnt crash like this
[21:19] <mgalkiewicz> how can I help u with fixing this?
[21:19] * lofejndif (~lsqavnbok@82VAADC1H.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:19] <mgalkiewicz> it is really annoying
[21:19] <elder> The best thing would be if you can somehow narrow down a simple way to reproduce the problem, so I can reproduce it myself. But I don't expect you to sacrifice your data or anything to do so.
[21:20] <nhm> elder: would death throes from a serial console on an OSD node be at all interesting to you?
[21:21] <elder> I don't think so.
[21:21] <nhm> this was on an XFS cluster, though I have no evidence it is XFS related.
[21:21] <elder> So far I'm pretty focused on the client side.
[21:21] <mgalkiewicz> well I am not able to do this. I have a simple ceph config with 2 osd, 2 mds and 3 mon, and if I restart or stop one of the osds the clients just crash, but only 2 of them
[21:22] <elder> nhm, I could glance at it I suppose for some triage-level diagnosis (*maybe*).
[21:23] <elder> So mgalkiewicz, with the config you listed here: http://tracker.newdream.net/issues/2267
[21:23] <elder> You are saying that restart one OSD may reliably reproduce the problem?
[21:23] <elder> (restart meaning reboot the node)
[21:23] * detaos|cloud (~cd9b41e9@webuser.thegrebs.com) has joined #ceph
[21:23] <detaos|cloud> hello everybody
[21:24] <mgalkiewicz> just restarting osd process
[21:24] <elder> nhm OOM
[21:24] <mgalkiewicz> ?
[21:25] <elder> mgalkiewicz, so restarting one osd process is enough to reproduce the problem?
[21:25] <mgalkiewicz> elder: yes, but
[21:25] <mgalkiewicz> elder: I dont think that you will reproduce the problem this way because my staging env with almost the same configuration is not affected by this bug
[21:26] <nhm> elder: yeah, that much I see. I'm trying to figure out why.
[21:27] <elder> mgalkiewicz, I've made a note of that in the bug. Just to be exact about it, is it always osd.0 that you restart?
[21:27] <mgalkiewicz> no osd.1 as well. Doesnt really matter which osd is restarted
[21:27] <elder> nhm, I will have to research what the log you sent is telling us. I can do that, but just hold off a little bit.
[21:28] <elder> Can you tell me how the staging configuration differs from your production config?
[21:28] <mgalkiewicz> 1 mon instead of 3
[21:28] <elder> That's the only difference?
[21:28] <mgalkiewicz> yes
[21:28] <mgalkiewicz> ok if you dont find anything let me know. I can try to prepare my production for reproducing the problem.
[21:28] <elder> Staging has 1 mon, production has 3?
[21:29] <mgalkiewicz> yes
[21:29] <elder> OK. Noted. Thanks.
[21:29] <mgalkiewicz> elder: there is one more thing but I think that I did it after the bug revealed.
[21:30] <elder> and that is...
[21:30] <mgalkiewicz> Staging has ceph 0.45 on the node with one osd and one mds. The other node has 0.44 osd, mds, mon.
[21:31] <elder> OK, I'll add that to the bug also. That's important.
[21:31] <mgalkiewicz> but like I said it was probably upgraded after the bug has revealed on production.
[21:32] <elder> But it's still possible that the staging config might exhibit the problem if it were still running 0.44, right?
[21:32] <mgalkiewicz> dont think so
[21:32] <mgalkiewicz> I can try if you can
[21:32] <mgalkiewicz> if you want*
[21:32] <elder> I'm just trying to understand your point about "upgraded after the bug was revealed"
[21:33] <elder> The bug occurs in production, running 0.45
[21:33] <elder> NO
[21:33] <mgalkiewicz> no
[21:33] <elder> The bug occurs in production, running 0.44
[21:33] <elder> It has not occurred in staging, running 0.45
[21:33] <elder> Correct?
[21:33] <mgalkiewicz> yes
[21:34] <mgalkiewicz> The bug occurred in production, running 0.44, before staging was upgraded to 0.45
[21:34] <elder> Right.
[21:34] <elder> OK. So there's still a possibility that the staging config (with 1 mon) could exhibit the problem if it were still running 0.44.
[21:34] <elder> I'm not asking you to do this, just making sure I'm clear on the situation.
[21:35] <mgalkiewicz> yes
[21:35] <nhm> elder: When you get a chance, take a look at bug 752. Maybe the stack trace I sent you is related?
[21:37] <elder> mgalkiewicz, everything on staging is running 0.45 (now). And everything on production is running 0.44, right?
[21:38] <mgalkiewicz> you are right with production
[21:39] <mgalkiewicz> on staging only node with mds and osd (osd.1) have 0.45
[21:39] <elder> and on staging the others are running 0.44?
[21:39] <mgalkiewicz> yes
[21:39] <elder> OK
[21:42] <nhm> actually, maybe not 752.
[21:44] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[21:45] <elder> mgalkiewicz, sorry, trying to deduce your staging cluster config. you say only one mon. Are there only two hosts?
[21:46] <elder> Here is what I have for production, in summary:
[21:46] <elder> - running 0.44 on node n3c1: mon.n3c1, mds.n3c1, osd.1
[21:46] <elder> - running 0.44 on node n4c1: mon.n4c1, mds.n4c1, osd.0
[21:46] <elder> - running 0.44 on node n8c1: mon.n8c1
[21:47] <mgalkiewicz> thats right and staging has two hosts
[21:49] <elder> So staging is something like:
[21:49] <elder> - running 0.45 on node host0: mon.n3c1, mds.host0, osd.1
[21:49] <elder> - running 0.44 on node host1: mds.host1, osd.0
[21:50] <mgalkiewicz> - running 0.45 on node n2cc: mds.n2cc, osd.1
[21:51] <mgalkiewicz> - running 0.44 on node cc: mds.cc, osd.0
[21:51] <mgalkiewicz> ehh my mistake
[21:51] <mgalkiewicz> - running 0.44 on node cc: mon.cc, mds.cc, osd.0
[21:52] <elder> OK, got it.
[21:52] <elder> Whew
[21:53] <elder> Now I'm less sure it's related to the other problems, but it's still a client problem if it's dying when an osd gets restarted.
[21:53] <mgalkiewicz> and one more thing
[21:53] <mgalkiewicz> about previous bugs w8 a sec
[21:54] <mgalkiewicz> I was also affected by this http://tracker.newdream.net/issues/1868
[21:57] <mgalkiewicz> it was quite similar from my perspective
[22:00] <elder> Having only briefly looked at that problem it doesn't look related to me.
[22:05] <gregaf> hey detaos|cloud, did you need something or just being friendly? :)
[22:06] <detaos|cloud> gregaf: i'm in the middle of setting up a ceph cluster ... just being friendly for now :)
[22:06] <gregaf> coolio, welcome and let us know if you run into trouble
[22:06] <detaos|cloud> will do, thanks :)
[22:13] * lofejndif (~lsqavnbok@82VAADC1H.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[22:19] <sagewk> joshd: wip-rbd-snapid looks good to me
[22:21] <sagewk> joshd: can you look at wip-rbd-removal when you have a chance?
[22:21] <joshd> sagewk: sure
[22:23] <nhm> sagewk: IMed you a stacktrace from one of the btrfs clusters. That's the only btrfs trace I saw on any of the nodes.
[22:38] <sagewk> nhm: do you have the first few lines of the dump?
[22:39] <nhm> sagewk: yeah, one sec
[22:41] * andresambrois (~aa@217.115.112.241) Quit (Ping timeout: 480 seconds)
[22:41] <nhm> sagewk: sorry about that
[22:42] <sagewk> thanks.
[22:43] <sagewk> nhm: not sure what we should make of that. in any case, not the thing christian reported on btrfs-devel (he's running an rc kernel)
[22:44] <nhm> sagewk: yeah. Btw, found another one on burnupi26 that I missed earlier.
[22:45] <mgalkiewicz> elder: another hint for you
[22:46] <mgalkiewicz> elder: I have finally restored the server which crashed, and after a few minutes the same client crashed, so it looks like it is affected by changes in the ceph cluster no matter whether it is an osd stop or start
[22:47] <elder> You never stopped the osd on this latest client crash?
[22:47] <elder> Back in about 30 minutes.
[22:48] <mgalkiewicz> elder: not sure what u mean
[22:51] <detaos|cloud> HEALTH_OK :)
[23:01] <joshd> sagewk: should wip-rbd-snapid go into next?
[23:02] <sagewk> yeah probably
[23:03] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has left #ceph
[23:05] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[23:14] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[23:40] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[23:43] * detaos|cloud (~cd9b41e9@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC)
[23:43] <mgalkiewicz> elder: any progress?
[23:54] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.