#ceph IRC Log


IRC Log for 2012-07-10

Timestamps are in GMT/BST.

[0:16] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:17] * danieagle (~Daniel@177.43.213.15) Quit (Quit: See you! :-) and Thank You So Much For Everything!!! ^^)
[0:26] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[0:27] * dmick (~dmick@38.122.20.226) Quit (Quit: Leaving.)
[0:30] * ninkotech (~duplo@89.177.137.231) Quit (Remote host closed the connection)
[0:33] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[0:37] * dmick (~dmick@38.122.20.226) has joined #ceph
[0:52] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[0:56] * anonymous (~anonymous@cpe-98-155-79-65.san.res.rr.com) has joined #ceph
[0:56] * anonymous (~anonymous@cpe-98-155-79-65.san.res.rr.com) Quit ()
[1:00] * danieagle (~Daniel@177.43.213.15) Quit (Quit: See you! :-) and Thank You So Much For Everything!!! ^^)
[1:11] <iggy> what's the biggest rbd device you guys have heard of?
[1:14] * goedi (goedi@195.26.5.166) Quit (Read error: Connection reset by peer)
[1:14] <joshd> I accidentally made an exabyte one the other day
[1:14] * goedi (goedi@195.26.5.166) has joined #ceph
[1:14] <gregaf> excellent!
[1:14] <gregaf> we're already exabyte-scale computing, on a single machine!
[1:18] <gregaf> seriously though: given how it currently works (no per-block state) I really wouldn't expect RBD to have any size limits that aren't intrinsic to QEMU, the kernel, or RADOS itself
[1:18] <iggy> specifically I'd be using the kernel driver probably
[1:19] <iggy> and we'd be talking in the 60T range
[1:25] * goedi (goedi@195.26.5.166) Quit (Read error: Connection reset by peer)
[1:25] * goedi (goedi@195.26.5.166) has joined #ceph
[1:26] <iggy> ideally once the filesystem gets into shape we'd migrate data from the RBD device to cephfs
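Since RBD keeps no per-block state, creating an image in that size range is essentially a metadata operation; here is a minimal sketch with the python-rbd bindings (the pool name, image name, and conffile path are assumptions for illustration):

    # Sketch: create a 60 TB RBD image with the python-rbd bindings.
    # RBD allocates objects lazily, so the image consumes no space
    # until blocks are actually written.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')
        try:
            rbd.RBD().create(ioctx, 'bigimage', 60 * 2**40)  # size in bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()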
[1:27] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) Quit (Remote host closed the connection)
[1:35] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) has joined #ceph
[1:58] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[1:59] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) Quit (Quit: LarsFronius)
[2:16] <elder> joshd, is there a fixed limit on the name of an RBD snapshot object on the user-space side?
[2:16] <elder> The kernel implements one and I want to know if that's necessary, or if I can remove that limitation.
[2:17] <elder> Same question, but for the name of the image itself. Is that subject to a fixed limit?
[2:17] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[2:18] <elder> sagewk, sage, dmick, I'm not picky about who knows the answer...
[2:18] <gregaf> elder: if you mean the size limits, no, I don't think there are any in userspace
[2:18] <gregaf> the kernel one has resulted in bugs in the past, actually
[2:18] <elder> I'm talking about the length of the object name.
[2:19] <elder> (names)
[2:19] <elder> I'm going to take your answer as authoritative, gregaf. Thank you.
[2:19] <gregaf> yes, name lengths is what I meant
[2:20] <joshd> elder: yeah, there are no limits, except in the kernel in the osd client, but presumably that can be removed
[2:21] <elder> It will be removed in the kernel.
[2:21] <joshd> excellent
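A quick way to verify the userspace side of this claim, as a sketch with the python-rbd bindings (the 512-character name is an arbitrary choice, not a documented bound):

    # Sketch: userspace librbd accepts long image names, so any name
    # length cap lives in the kernel client, not in librbd/RADOS.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    long_name = 'x' * 512
    rbd.RBD().create(ioctx, long_name, 2**20)   # 1 MB test image
    rbd.RBD().remove(ioctx, long_name)
    ioctx.close()
    cluster.shutdown()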
[2:32] * Tv_ (~tv@38.122.20.226) Quit (Quit: Tv_)
[3:00] * joshd (~joshd@38.122.20.226) Quit (Quit: Leaving.)
[3:55] * nhmlap (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[3:57] <renzhi> morning
[3:58] <dmick> hi renzhi
[3:59] <renzhi> dmick: how are you doing?
[3:59] <dmick> I am well, and you?
[4:00] <renzhi> good, just busy testing ceph :)
[4:01] <dmick> good man! :)
[4:01] <renzhi> btw, what kind of management tools do you use for managing ceph cluster?
[4:02] <dmick> you mean for setting them up, or dealing with them once they're running?
[4:02] <renzhi> yeah
[4:02] <dmick> yeah to the last one?
[4:02] <renzhi> I'm looking at my ceph.conf, it's error-prone to do it manually.
[4:03] <dmick> so for setting up, chef is becoming a better and better choice, although I haven't personally done anything with it for clusters yet
[4:03] <renzhi> Basically, I'm learning to do it in a better way. Like initial setup, crush map management, then add/remove osd/mon/mds, etc
[4:04] <dmick> I don't know of much directed toward modifying a running cluster
[4:04] <renzhi> I'm looking at Chef too, but didn't have much time to dig deeper.
[4:04] <dmick> that's not to say there isn't anything, but I haven't heard of much personally
[4:04] <iggy> yeah, a pointy clicky kind of web interface for at least templating a rough ceph.conf would be handy
[4:05] <renzhi> I'm wondering if the Inktank folks are working on something, for commercial
[4:06] <iggy> it's called: you pay them, they make it for you
[4:06] <dmick> iggy, yeah, that would be handy
[4:06] <renzhi> LOL
[4:08] <renzhi> we are planning for 20 servers, with 10 disks each. That's 200 osds. And that's just the initial phase. I can't imagine when we have 200 servers :)
[4:09] <dmick> certainly no one wants to set those up by hand, that's clear. Chef is the medium-term answer, along with 'better defaults'
[4:09] <renzhi> either we have to roll up our sleeves and work on something, or we get help from outside :)
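For the "200 OSDs" case above, a throwaway sketch of templating the osd sections of a ceph.conf in Python (host names and device paths are invented for illustration):

    # Sketch: generate [osd.N] sections for N hosts x M disks, so a
    # 20-host, 10-disk cluster's ceph.conf need not be typed by hand.
    def gen_osd_sections(num_hosts=20, disks_per_host=10):
        out = []
        osd_id = 0
        for h in range(num_hosts):
            for d in range(disks_per_host):
                out.append('[osd.%d]' % osd_id)
                out.append('    host = ceph-node%02d' % h)
                out.append('    devs = /dev/sd%c' % (ord('b') + d))
                out.append('')
                osd_id += 1
        return '\n'.join(out)

    print(gen_osd_sections())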
[4:09] <iggy> I'm trying to convince the powers that be to ditch 2 aging satabeasts with a ceph setup
[4:10] <renzhi> hehe
[4:11] <iggy> would be 50-70T useable to start
[4:11] <dmick> ditch and abandon to you for a Ceph cluster, you mean?
[4:11] <iggy> one of the downsides at this point is no single source for hardware and software support
[4:12] <renzhi> same here
[4:12] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[4:12] <renzhi> we are taking the risk now :)
[4:12] <iggy> yeah, replace with 2U 12disk boxes
[4:13] <iggy> the satabeasts are out of warranty
[4:13] <iggy> would eventually like to move some of our other storage to ceph too
[4:14] <iggy> we also have 8 isilon nodes and 24 panasas shelves
[4:36] * nhmlap (~Adium@12.238.188.253) has joined #ceph
[4:41] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[5:08] * nhmlap bored
[5:09] <dmick> nhmlap: http://movies.netflix.com/WiMovie/Rocky_Bullwinkle/70140431?trkid=2361637
[5:10] <nhmlap> you still at work?
[5:10] <dmick> yeah
[5:10] <nhmlap> dmick = workaholic
[5:12] <elder> nhmlap, you not working?
[5:14] <elder> I never would have thought to look that one up, dmick but it sounds like a great idea.
[5:15] <dmick> I just saw a note about it recently; just added
[5:15] <dmick> I'm excited
[5:15] <dmick> Fractured Fairy Tales are there as well
[5:38] <nhmlap> elder: nope, I took a break to eat dinner. :D
[5:39] <dmick> slacker
[5:39] <nhmlap> elder: excited to work on this though: http://tracker.newdream.net/issues/2765
[5:43] <elder> Is it reproducible?
[5:45] <lurbs> Is having each disk as its own OSD still the recommended setup, instead of having, for example, a RAID 5 or 6 array per machine and therefore one OSD?
[5:51] <nhmlap> elder: yeah, I've seen it before, and Sam said he's seen it too.
[5:51] <dmick> lurbs: it's still considered a good guideline, yes.
[5:52] <sage> dmick: did you say that the vpn was working for you?
[5:52] <sage> i can't get to the aon machines still
[5:52] <nhmlap> elder: luckily we've got objecter, filestore, and journal logs (along with blktrace, perf, and collectl data), so hopefully we can figure it out. Sam thinks there is a pathological case in the journal where if it fills up no data can be written for a while.
[5:52] <dmick> sage: no, as of the switchover routing is still messed up, last I heard
[5:53] <sage> dmick: i think i hallucinated an email that it was working
[5:53] <nhmlap> lurbs: for machines with many drives, it may be worth trying out both.
[5:53] <dmick> I sent email saying it was not, today
[5:53] <sage> not sure which i prefer
[5:54] <sage> ah.. i just misread
[5:54] <dmick> because Henry's email didn't make that clear
[5:55] <lurbs> dmick: What's the reasoning behind having an OSD per disk? I'm not seeing the advantage, just the configuration overhead.
[5:56] <dmick> well the RADOS recovery model can only take effect when a disk actually fails, so you sorta lose a lot of the advantages RADOS has that way
[5:57] <dmick> and it doesn't get rid of the traditional HW RAID disadvantage that a failure stresses the remaining disks in the pack hard (and can make them fail too), plus makes access to the pack slow
[5:58] <lurbs> Yeah, never been a big fan of RAID 5 or 6 for that reason - we tend to run RAID 10. But that doesn't make sense in the RADOS/Ceph world.
[5:58] <dmick> plus it's fairly easy to run several OSD processes on one box.
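In the static-config style of the time, several OSDs on one box, one per disk, looks roughly like the ceph.conf fragment below (host and device names are examples):

    [osd.0]
        host = storage1
        devs = /dev/sdb

    [osd.1]
        host = storage1
        devs = /dev/sdc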
[5:59] <sage> dmick: can you set me up another vpn key?
[5:59] <dmick> on sepia? sure
[6:00] <nhmlap> sage: anything else I should do to help get things moving on 2765?
[6:01] <dmick> sage: do you have the tarball handy?
[6:01] <sage> yeah
[6:01] <nhmlap> the interesting bit is between 20:09:04 and 20:09:29
[6:01] <nhmlap> in the OSD Log.
[6:02] <sage> lookin
[6:03] <nhmlap> specifically regarding TID 312322
[6:03] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[6:04] <nhmlap> Sam said he thinks it may be due to a pathological case where if the journal fills up nothing can be written to btrfs for a while. (Maybe until the next sync?)
[6:04] <sage> could be
[6:18] <elder> I'm pretty sure joshd confirmed this for me the other day. But an old format rbd image has no concept of an "id" right?
[6:18] <dmick> no separate id
[6:18] <elder> OK.
[6:18] <dmick> some of the code calls an "object name" an "object ID"
[6:19] <dmick> but the separation from "internal ID" to "externally-visible name" is strictly new
[6:19] <elder> It has a name, but in the new format the name can change, and the numeric id is the only thing that's fixed.
[6:19] <dmick> righ
[6:19] <dmick> t
[6:19] <elder> o
[6:19] <elder> k
[6:21] <sage> nhmlap: hrm, sort of looks like there was just a burst of writes around that time
[6:28] * Meths_ (rift@2.25.193.127) has joined #ceph
[6:32] <elder> sage, how does the client determine whether a given image getting mapped is valid? Does it just assume it is, and whether it exists or not, it will just start being used?
[6:32] <sage> just by opening the header
[6:33] <elder> Oh yeah.
[6:33] <sage> if it gets valid metadata, and can register the watch, we're all set
[6:33] <elder> OK, got it.
[6:34] <elder> In rbd_init_disk.
[6:34] <elder> It's a little later than I want to do it now.
[6:34] <elder> I'll have to think about how to rearrange that in the morning.
[6:34] <elder> Later.
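The userspace analogue of that check, as a sketch with the python-rbd bindings (pool and image names are placeholders): opening the image reads its header, so a bad name fails immediately.

    # Sketch: rbd.Image() opens the header on construction, so a
    # nonexistent image raises ImageNotFound right away.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        image = rbd.Image(ioctx, 'someimage')
        image.close()
    except rbd.ImageNotFound:
        print('no such image')
    finally:
        ioctx.close()
        cluster.shutdown()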
[6:34] * Meths (rift@2.25.193.55) Quit (Ping timeout: 480 seconds)
[6:39] * deepsa (~deepsa@122.172.0.114) Quit (Ping timeout: 480 seconds)
[6:40] <nhmlap> sage: that was the oldest of a bunch of laggy tids on OSD 19 when there were so many outstanding ops that the client basically shut down.
[6:40] <sage> elder: ok!
[6:41] * deepsa (~deepsa@122.172.22.153) has joined #ceph
[6:41] <sage> nhmlap: it looked like the journal was regularly committing small bits, then suddenly had a whole bunch of data to chew on, but worked its way through it without going too slowly
[6:43] <nhmlap> sage: It looks to me like what Sam hypothesized earlier happened. A laggy TID on OSD 19 caused a bunch of ops to back up until a limit was reached (in this case 800 ops, which is about the 100MB limit) and then things stalled.
[6:43] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[6:44] <nhmlap> hrm, I didn't upload the rados bench output.
[6:45] <nhmlap> ok, rados bench output is at http://nhm.ceph.com/object_latency/output.plana83
[6:46] <nhmlap> interesting bit is around second 162
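A back-of-envelope reading of those numbers, as a sketch (treating the 100MB in-flight-data limit mentioned above as exact is an assumption):

    # Sketch: 800 outstanding ops against a ~100 MB in-flight byte
    # limit works out to roughly 128 KB per op on average.
    inflight_limit = 100 * 2**20     # in-flight op data limit, bytes
    outstanding_ops = 800
    print(inflight_limit // outstanding_ops)   # 131072 bytes ~= 128 KB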
[6:47] * alexxy (~alexxy@79.173.81.171) Quit (Read error: Connection reset by peer)
[6:47] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[6:58] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[7:02] * dmick (~dmick@38.122.20.226) Quit (Quit: Leaving.)
[7:04] * nhmlap (~Adium@12.238.188.253) Quit (Quit: Leaving.)
[7:55] * macan (~macan@2400:dd01:1001:0:7d54:4a6:2412:2515) has joined #ceph
[7:56] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:12] * gregaf1 (~Adium@38.122.20.226) has joined #ceph
[8:14] * mkampe (~markk@38.122.20.226) Quit (Ping timeout: 480 seconds)
[8:14] * sjust (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[8:14] * gregaf (~Adium@38.122.20.226) Quit (Ping timeout: 480 seconds)
[8:14] * yehudasa (~yehudasa@38.122.20.226) Quit (Ping timeout: 480 seconds)
[8:15] * sagewk (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[8:15] * mgalkiewicz (~mgalkiewi@toya.hederanetworks.net) has left #ceph
[8:25] * yehudasa (~yehudasa@38.122.20.226) has joined #ceph
[8:26] * sjust (~sam@38.122.20.226) has joined #ceph
[8:26] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[8:36] * loicd (~loic@2001:67c:28dc:850:1d6e:df2f:f13e:5e62) has joined #ceph
[8:41] * gregaf (~Adium@38.122.20.226) has joined #ceph
[8:43] * gregaf1 (~Adium@38.122.20.226) Quit (Ping timeout: 480 seconds)
[8:43] * sjust (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[8:44] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[8:44] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) Quit (Ping timeout: 480 seconds)
[8:44] * yehudasa (~yehudasa@38.122.20.226) Quit (Ping timeout: 480 seconds)
[8:46] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit ()
[8:54] * sjust (~sam@38.122.20.226) has joined #ceph
[8:54] * adjohn (~adjohn@50-0-133-101.dsl.static.sonic.net) has joined #ceph
[8:54] * yehudasa (~yehudasa@38.122.20.226) has joined #ceph
[8:55] * sagewk (~sage@38.122.20.226) has joined #ceph
[8:55] * adjohn (~adjohn@50-0-133-101.dsl.static.sonic.net) Quit ()
[9:05] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[9:21] * mkampe (~markk@38.122.20.226) has joined #ceph
[9:28] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:33] <pmjdebruijn> hi guys, mornin
[9:33] <pmjdebruijn> Something funny is happening for me
[9:34] <pmjdebruijn> I have a /dev/rbd/pool/rbdname symlinked to ../../rbd3p1 (while another rbdname is symlinked to ../../rbd3), so that can't be right
[9:34] <pmjdebruijn> when I invoke ceph-rbdnamer on my 0/1/2/3/4/5 it gives sensible output
[9:37] * pmjdebruijn should dig a bit deeper into udev
[9:37] * loicd (~loic@2001:67c:28dc:850:1d6e:df2f:f13e:5e62) Quit (Quit: Leaving.)
[9:54] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[9:56] * loicd (~loic@2001:67c:28dc:850:1d6e:df2f:f13e:5e62) has joined #ceph
[9:57] <pmjdebruijn> it seems in the ceph udev rules %n for rbd3p1 resolves to 1 instead of 3
[9:57] <pmjdebruijn> :(
[9:57] * pmjdebruijn still has to doublecheck that, but that's the working theory
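For context, the rbd udev rule at the time looked roughly like the line below (an approximation, not a verbatim quote). udev's %n substitutes the "kernel number", which for a partition device like rbd3p1 is the trailing 1, not the 3 of the parent device, which would explain the crossed symlinks:

    KERNEL=="rbd[0-9]*", PROGRAM="/usr/bin/ceph-rbdnamer %n", SYMLINK+="rbd/%c"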
[10:00] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:07] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[10:13] * gregaf1 (~Adium@38.122.20.226) has joined #ceph
[10:16] * gregaf (~Adium@38.122.20.226) Quit (Ping timeout: 480 seconds)
[10:16] * mkampe (~markk@38.122.20.226) Quit (Ping timeout: 480 seconds)
[10:16] * sjust (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[10:17] * yehudasa (~yehudasa@38.122.20.226) Quit (Ping timeout: 480 seconds)
[10:17] * sagewk (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[10:17] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[10:21] * andreask (~andreas@2001:6f8:12c3:f00f:213:2ff:fe52:6b80) has joined #ceph
[10:24] * andreask (~andreas@2001:6f8:12c3:f00f:213:2ff:fe52:6b80) has left #ceph
[10:26] * yehudasa (~yehudasa@38.122.20.226) has joined #ceph
[10:27] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[10:28] * sjust (~sam@38.122.20.226) has joined #ceph
[10:46] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:51] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[10:56] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[10:56] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[10:56] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:57] * loicd (~loic@2001:67c:28dc:850:1d6e:df2f:f13e:5e62) Quit (Quit: Leaving.)
[10:58] * loicd (~loic@151.216.22.26) has joined #ceph
[11:16] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[11:25] * brambles (brambles@79.133.200.49) Quit (Remote host closed the connection)
[11:40] * loicd (~loic@151.216.22.26) Quit (Quit: Leaving.)
[11:44] * loicd (~loic@151.216.22.26) has joined #ceph
[11:44] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) has joined #ceph
[11:45] * andreask (~andreas@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[11:58] * ninkotech (~duplo@89.177.137.231) Quit (Remote host closed the connection)
[12:03] * gregaf (~Adium@38.122.20.226) has joined #ceph
[12:05] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) Quit (Ping timeout: 480 seconds)
[12:05] * sjust (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[12:06] * gregaf1 (~Adium@38.122.20.226) Quit (Ping timeout: 480 seconds)
[12:06] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) Quit (Ping timeout: 480 seconds)
[12:06] * yehudasa (~yehudasa@38.122.20.226) Quit (Ping timeout: 480 seconds)
[12:09] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[12:14] * yehudasa (~yehudasa@38.122.20.226) has joined #ceph
[12:15] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[12:15] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[12:16] * sjust (~sam@38.122.20.226) has joined #ceph
[12:20] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) has joined #ceph
[12:30] * brambles (brambles@79.133.200.49) has joined #ceph
[12:36] * loicd (~loic@151.216.22.26) Quit (Ping timeout: 480 seconds)
[12:36] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[12:37] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:40] * andreask (~andreas@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[12:41] * renzhi (~renzhi@69.163.36.54) Quit (Quit: Leaving)
[12:46] * macan (~macan@2400:dd01:1001:0:7d54:4a6:2412:2515) Quit (Ping timeout: 480 seconds)
[13:07] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[13:35] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[13:35] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[13:44] * andreask (~andreas@2001:6f8:12c3:f00f:213:2ff:fe52:6b80) has joined #ceph
[14:06] * loicd (~loic@2001:67c:28dc:850:412e:1426:df7a:be35) has joined #ceph
[14:10] * nhmlap (~Adium@12.238.188.253) has joined #ceph
[14:15] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Ping timeout: 480 seconds)
[14:16] * Dieter_b1 (~Dieterbe@dieter2.plaetinck.be) has left #ceph
[14:28] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[14:35] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:43] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[14:45] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[14:53] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[14:58] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[15:03] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Ping timeout: 480 seconds)
[15:16] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:20] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Remote host closed the connection)
[15:26] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[15:40] * deepsa (~deepsa@122.172.22.153) Quit (Remote host closed the connection)
[15:41] * deepsa (~deepsa@122.172.39.24) has joined #ceph
[15:44] * andreask (~andreas@2001:6f8:12c3:f00f:213:2ff:fe52:6b80) Quit (Ping timeout: 480 seconds)
[16:00] * The_Bishop (~bishop@2a01:198:2ee:0:5855:d51d:11e8:430a) has joined #ceph
[16:11] * andreask (~andreas@2001:6f8:12c3:f00f:213:2ff:fe52:6b80) has joined #ceph
[16:13] * loicd (~loic@2001:67c:28dc:850:412e:1426:df7a:be35) Quit (Quit: Leaving.)
[16:27] * Ryan_Lane (~Adium@128.164.7.124) has joined #ceph
[16:32] * Ryan_Lane1 (~Adium@128.164.17.107) has joined #ceph
[16:36] * Ryan_Lane (~Adium@128.164.7.124) Quit (Ping timeout: 480 seconds)
[16:59] * themgt (~themgt@24-181-215-214.dhcp.hckr.nc.charter.com) has joined #ceph
[17:11] * andreask (~andreas@2001:6f8:12c3:f00f:213:2ff:fe52:6b80) Quit (Ping timeout: 480 seconds)
[17:14] * nhmlap (~Adium@12.238.188.253) Quit (Quit: Leaving.)
[17:28] * loicd (~loic@2001:67c:28dc:850:412e:1426:df7a:be35) has joined #ceph
[17:34] * Tv_ (~tv@2607:f298:a:607:c4fb:49d5:841d:f90) has joined #ceph
[17:34] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:35] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) Quit (Quit: Ex-Chat)
[17:50] * nhmlap (~Adium@2607:f298:a:607:6d53:e1b:eb1c:6ec3) has joined #ceph
[17:56] * Ryan_Lane1 (~Adium@128.164.17.107) Quit (Quit: Leaving.)
[18:02] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[18:05] * loicd (~loic@2001:67c:28dc:850:412e:1426:df7a:be35) has left #ceph
[18:06] * aliguori (~anthony@32.97.110.59) has joined #ceph
[18:11] <Tv_> nhmlap: i sent an email to ceph-devel just now, but you especially might want to look at http://code.google.com/p/ioping/
[18:13] <elder> Can we agree on a maximum size for an rbd image id? If I query it, it will come back as a string, and I'm trying to size my buffer accordingly.
[18:14] <elder> For now I'm defining RBD_IMAGE_ID_LEN_MAX
[18:14] <elder> I'll use 64, which ought to be PLENTY.
[18:14] <Tv_> 64 bytes should be enough for everyone?
[18:15] <elder> What would you ever need more for? DOS?
[18:16] <Tv_> I think there is a world market for maybe five RBD installations.
[18:17] <elder> Actually, we should plan on just one. The Internet implemented as RBD storage.
[18:27] <joao> I'm okay with that if I get to be responsible for that one installation
[18:28] <joao> I promise to take good care of the data and be totally respectful of others' privacy
[18:30] * ninkotech (~duplo@89.177.137.231) Quit (Remote host closed the connection)
[18:30] * loicd (~loic@151.216.22.26) has joined #ceph
[18:31] * loicd (~loic@151.216.22.26) Quit ()
[18:33] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Ping timeout: 480 seconds)
[18:35] * mgalkiewicz (~mgalkiewi@toya.hederanetworks.net) has joined #ceph
[18:36] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:37] * Ryan_Lane (~Adium@128.164.7.124) has joined #ceph
[18:51] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[18:52] * pentabular (~sean@adsl-71-141-233-37.dsl.snfc21.pacbell.net) has joined #ceph
[18:52] <pentabular> echo (..echo ..echo)
[18:53] * pentabular (~sean@adsl-71-141-233-37.dsl.snfc21.pacbell.net) has left #ceph
[18:53] <joao> lol?
[18:55] * loicd (~loic@2001:67c:28dc:850:412e:1426:df7a:be35) has joined #ceph
[18:56] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:00] * loicd (~loic@2001:67c:28dc:850:412e:1426:df7a:be35) has left #ceph
[19:00] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Quit: Leaving)
[19:01] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[19:01] * LarsFronius (~LarsFroni@95-91-243-243-dynip.superkabel.de) has joined #ceph
[19:04] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[19:06] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:17] * jluis (~JL@89.181.146.50) has joined #ceph
[19:22] * joao (~JL@89-181-154-67.net.novis.pt) Quit (Ping timeout: 480 seconds)
[19:24] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[19:25] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[19:27] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[19:35] * jluis is now known as joao
[20:05] <themgt> whats the recommended filesystem currently? XFS?
[20:05] * Meths_ is now known as Meths
[20:09] <gregaf> themgt: yeah, it seems to be the most stable and is generally the best performer
[20:11] <themgt> gregaf: thanks. been testing on ext4 and it's having performance issues when working on moderate #s of files
[20:17] <Tv_> themgt: that sounds a little bit weird; ext4 should handle tens of thousands of files even in a single directory without real problems
[20:19] <themgt> well, this is inside all sorts of VMs on top of LVM volumes, fuse mounted, etc... so probably not an ideal setup in the first place. just trying to get the hang of how the whole thing works
[20:19] * Dr_O__ (~owen@host-2-96-191-87.as13285.net) has joined #ceph
[20:24] * Dr_O_ (~owen@host-78-145-29-241.as13285.net) Quit (Ping timeout: 480 seconds)
[20:27] * The_Bishop (~bishop@2a01:198:2ee:0:5855:d51d:11e8:430a) Quit (Quit: Who the hell is this Peer? If I catch him I'll reset his connection!)
[20:27] * The_Bishop (~bishop@2a01:198:2ee:0:5855:d51d:11e8:430a) has joined #ceph
[20:35] <elder> joshd, is there anything we can say about an rbd format 2 image id that restricts what form it will take? In particular, I'm interested in defining a distinct value that can be used for format 1 images. Maybe a zero-length string?
[20:36] <joshd> zero-length string would work, if you don't want to check whether the image is format 2
[20:36] <elder> So an image id for format 2 must have non-zero length?
[20:37] * yehudasa (~yehudasa@38.122.20.226) Quit (Quit: Ex-Chat)
[20:37] <joshd> yes
[20:37] <elder> It might just be convenient in some code to have it defined for old-format images, but I want to be sure it's not a valid value.
[20:37] <elder> That's what I'll use then. zero-length string.
[20:38] <joshd> sounds good
[20:38] <elder> I'll also disallow zero-length as invalid for format 2.
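A sketch of the convention just agreed on, in Python for brevity (the constant name mirrors elder's proposed RBD_IMAGE_ID_LEN_MAX; the helper is illustrative, not actual Ceph code):

    # Convention from the discussion: format 2 image ids are always
    # non-empty, so the empty string can stand in for "format 1 image",
    # and 64 bytes is a comfortable upper bound on id length.
    RBD_IMAGE_ID_LEN_MAX = 64

    def image_format(image_id):
        if len(image_id) > RBD_IMAGE_ID_LEN_MAX:
            raise ValueError('image id too long')
        return 1 if image_id == '' else 2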
[20:51] <mgalkiewicz> hi I have a problem with adding osd https://gist.github.com/3078131
[20:59] * lofejndif (~lsqavnbok@04ZAAEFI8.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:20] * dmick (~dmick@38.122.20.226) has joined #ceph
[21:29] * biebian (~shahi@04ZAAEFJA.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:36] * biebian (~shahi@04ZAAEFJA.tor-irc.dnsbl.oftc.net) Quit (Killed (tjfontaine (Take your spam elsewhere)))
[21:43] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:45] * biebian (~shahi@82VAAE1DV.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:47] * biebian (~shahi@82VAAE1DV.tor-irc.dnsbl.oftc.net) Quit ()
[21:53] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[21:53] * aliguori (~anthony@32.97.110.59) has joined #ceph
[21:58] * deepsa_ (~deepsa@122.172.6.109) has joined #ceph
[21:58] * deepsa (~deepsa@122.172.39.24) Quit (Ping timeout: 480 seconds)
[21:58] * deepsa_ is now known as deepsa
[22:30] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Quit: Leaving)
[22:38] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[22:56] <sagewk> gregaf: wanna look at the new patch in wip-msgr?
[22:56] <gregaf> sure
[22:57] <gregaf> looks like you pushed more wip-msgr-cleanup stuff too, is that ready for review?
[23:00] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[23:11] <gregaf> sagewk: I don't think that will cooperate with how the MDS and the OSD track sessions; they've already created a new one for that Connection by this point (thanks to SimpleMessenger::verify_authorizer and [OSD|MDS]::ms_verify_authorizer)
[23:12] <sagewk> gregaf: der, need to check osd/mon. wip-msgr-cleanup fixes the mds race
[23:12] <gregaf> okay, that'll also need to be backported to argonaut, then
[23:13] <gregaf> I'm not sure if we want them to be separate patches for that stuff
[23:13] <gregaf> (by this I mean, pushing them individually to branches is probably not the right way to do it)
[23:14] * Dr_O__ is now known as Dr_O
[23:15] * lofejndif (~lsqavnbok@04ZAAEFI8.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[23:21] * lofejndif (~lsqavnbok@04ZAAEFMZ.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:21] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has joined #ceph
[23:23] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has left #ceph
[23:29] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[23:39] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[23:41] * Ryan_Lane (~Adium@128.164.7.124) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.