#ceph IRC Log


IRC Log for 2012-05-29

Timestamps are in GMT/BST.

[0:16] * The_Bishop (~bishop@p5DC112FB.dip.t-dialin.net) has joined #ceph
[0:50] * joao (~JL@aon.hq.newdream.net) has joined #ceph
[1:53] * BManojlovic (~steki@212.200.243.232) Quit (Quit: I'm off, you do whatever you want...)
[1:55] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[2:23] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)
[2:23] <elder> Anyone around who can kick the gitbuilder kernel amd64 machine?
[2:25] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[2:47] <joao> nope
[2:47] <joao> at least, I can't :p
[3:01] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[3:04] * henrycc (~henrycc@219-86-164-64.dynamic.tfn.net.tw) Quit (Quit: Bersirc 2.2: Looks, feels and sounds (?!) different! [ http://www.bersirc.org/ - Open Source IRC ])
[3:06] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[3:08] <elder> I've been wanting to test something since Saturday morning but gitbuilder hasn't been building anything for me to test.
[3:12] * Meths (rift@2.25.214.19) Quit (Ping timeout: 480 seconds)
[3:40] * Meths (rift@2.25.189.76) has joined #ceph
[3:42] * danieagle (~Daniel@177.43.213.15) Quit (Quit: See you later :-) and Thank You Very Much for Everything!!! ^^)
[3:48] <joao> elder, the long weekend drove everyone god knows where :)
[4:24] * The_Bishop (~bishop@p5DC112FB.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[4:33] * The_Bishop (~bishop@p4FCDE8D3.dip.t-dialin.net) has joined #ceph
[4:43] * The_Bishop (~bishop@p4FCDE8D3.dip.t-dialin.net) Quit (Quit: Who the hell is this Peer? If I ever catch him I'll reset his connection!)
[6:17] * johnl (~johnl@2a02:1348:14c:1720:4c85:c827:3c1d:b094) Quit (Remote host closed the connection)
[6:17] * johnl (~johnl@2a02:1348:14c:1720:90c2:4d90:e349:1a9c) has joined #ceph
[6:43] * joao (~JL@aon.hq.newdream.net) Quit (Quit: Leaving)
[7:10] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) has joined #ceph
[7:17] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:36] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[8:10] * dwin (~dwin@83TAAGB4B.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:11] * dwin (~dwin@83TAAGB4B.tor-irc.dnsbl.oftc.net) has left #ceph
[8:32] * Ryan_Lane (~Adium@c-98-210-205-93.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:52] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) Quit (Quit: Who the hell is this Peer? If I ever catch him I'll reset his connection!)
[8:53] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:02] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:07] * guido_ (~guido@mx1.hannover.ccc.de) Quit (Read error: Operation timed out)
[9:08] * guido (~guido@mx1.hannover.ccc.de) has joined #ceph
[9:14] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:20] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[9:28] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:30] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:50] * ceph-test (~Runner@mail.lexinter-sa.COM) Quit (Ping timeout: 480 seconds)
[9:51] * ceph-test (~Runner@mail.lexinter-sa.COM) has joined #ceph
[9:56] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[10:12] * gregorg (~Greg@78.155.152.6) has joined #ceph
[11:05] * aliguori (~anthony@202.108.130.138) has joined #ceph
[11:07] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[11:53] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[12:00] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[12:05] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:06] * aliguori (~anthony@202.108.130.138) Quit (Quit: Ex-Chat)
[12:19] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[13:05] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[13:12] * hijacker (~hijacker@213.91.163.5) Quit (Quit: Leaving)
[13:23] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[13:32] * mkampe (~markk@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[13:32] * mkampe (~markk@aon.hq.newdream.net) has joined #ceph
[13:41] <guido> gregaf: I was getting permission denied errors when creating new files on a cephfs mounted via in-kernel client, but not if it was mounted via fuse or when writing into existing files
[13:42] <guido> gregaf: Apparently, the root problem lay with SELinux on the client side. I haven't quite figured out all details, but after setting SELinux to permissive, things work again
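
For anyone hitting the same symptom, a rough sketch (assuming the stock SELinux tooling and the usual audit log location; 'setenforce 0' is the switch to permissive mode described above) of how to check on the client whether SELinux is the culprit:

    #!/usr/bin/env python
    # check whether SELinux is enforcing and show recent AVC denials
    # from the audit log; commands and paths are the standard SELinux tooling
    import subprocess

    mode = subprocess.check_output(['getenforce']).decode().strip()
    print('SELinux mode:', mode)

    try:
        with open('/var/log/audit/audit.log') as log:
            denials = [l for l in log if 'avc' in l and 'denied' in l]
        for line in denials[-5:]:        # show the most recent denials
            print(line.rstrip())
    except IOError:
        print('audit log not readable; try ausearch or audit2why as root')
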
[13:43] * aliguori (~anthony@222.128.202.191) has joined #ceph
[13:43] <elder> Yip skip!!! My branch is built!!!
[13:58] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[14:40] * hijacker (~hijacker@213.91.163.5) Quit (Quit: Leaving)
[14:40] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[15:39] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[16:58] * joao (~JL@aon.hq.newdream.net) has joined #ceph
[17:01] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[17:10] * nhm (~nh@184-97-241-32.mpls.qwest.net) has joined #ceph
[17:13] <joao> hello nhm
[17:13] <nhm> joao: good morning!
[17:13] <joao> so, is today another holiday or smth?
[17:13] <nhm> joao: that'd be nice if it was. ;)
[17:14] <joao> the city is awfully quiet
[17:14] <nhm> Maybe people taking extra vacation or something.
[17:15] * verwilst (~verwilst@d5152FEFB.static.telenet.be) Quit (Quit: Ex-Chat)
[17:15] <jmlowe> why have a 3 day weekend when you can use a vacation day and make it a 4 day weekend
[17:15] <joao> true that
[17:20] <ninkotech> hi, do you think i should use rbd for my VPS machines for production usage? people here are scared and they would prolly like to see something like (x)NBD.... they are afraid that they might lose control over RBD and end up with many blobs of unreadable data...
[17:21] <ninkotech> i like how RBD looks, but i really never tested it enough
[17:24] <ninkotech> if the answer is no, when do you think RBD will be stable enough?
[17:24] <ninkotech> (months? years? never?:)
[17:26] <nhm> ninkotech: Elder is working on rbd quite a bit right now, he might have some insight.
[17:26] <nhm> ninkotech: I'm going to be doing some performance testing work later this month.
[17:26] <ninkotech> i need highly available VPS
[17:27] <ninkotech> this month ends soon
[17:27] <nhm> ninkotech: Sorry, next month. :)
[17:27] <ninkotech> imho rados must be much faster than using *nbd
[17:27] <ninkotech> (didnt try yet)
[17:27] <ninkotech> but striping makes it possibly much faster
[17:28] <ninkotech> but is it safe enough? what if something goes wrong?
[17:28] <nhm> ninkotech: it's always important to have backups. :D
[17:28] <ninkotech> nhm: it is, but i am talking about HA wannabe solution
[17:29] <nhm> ninkotech: Yeah. I'm not sure where things are at exactly right now. I can only say rbd is a pretty high priority for us and we've got a couple of people working on it.
[17:29] <jmlowe> I'm doing rbd in a production environment
[17:29] <ninkotech> jmlowe: no fears?
[17:29] <ninkotech> jmlowe: is it hard to maintain?
[17:30] * BManojlovic (~steki@91.195.39.5) Quit (Quit: I'm off, you do whatever you want...)
[17:30] <ninkotech> what if a few machines go wrong and it all FAILS?
[17:30] <jmlowe> always fears, but so far so good, my major problems have been with flaky disks that don't go offline but keep writing and reading bad data
[17:30] <elder> rbd is stable, but still under development.
[17:30] <ninkotech> ah, i know that...
[17:30] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[17:30] <ninkotech> elder: :) that didn't say much, you talk like a lawyer
[17:31] <elder> How about "we are firmly committed to rbd"
[17:31] <ninkotech> :)
[17:31] <ninkotech> sounds good
[17:31] <jmlowe> let me put it this way, I tried some new btrfs options last week on one of my osd's and it went belly up Sunday, rebooted due to watchdog timer, vm's didn't skip a beat
[17:31] <ninkotech> elder, jmlowe: can i ask why rbd?
[17:31] <elder> It is stable but there are some known problems (and always will be, for a bit)
[17:32] <ninkotech> i know, software always fails
[17:32] <elder> I guess it's solid but as people begin to use it in the near future they are almost by definition early adopters.
[17:32] <jmlowe> I've also successfully upgraded piecemeal from 0.45 to 0.47.2 without downtime
[17:32] <ninkotech> even my software does :)
[17:32] <nhm> elder: "always will be, for a bit". ;)
[17:32] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:32] <elder> Was that wishy washy?
[17:32] <elder> Forever, for now.
[17:33] <ninkotech> elder: what we are worried about is the worst case scenario: everything crashes and no vps will be able to access rbd
[17:33] <ninkotech> --> how to repair.. when to commit seppuku etc
[17:34] <ninkotech> how big a cluster is rbd best suited for?
[17:34] <elder> We worry about that too, of course. I can't give you a good solid "you have nothing to worry about right now," I can just tell you that we're satisfied with its stability, but there is still room to improve it.
[17:34] <jmlowe> why rbd: I have two data centers 50 miles apart with a 100Gig sub 1 ms latency link, I can migrate around power and networking outages and shutdown half my infrastructure with rbd without service interruption, I didn't see any other solution that really let me do that kind of thing
[17:34] <ninkotech> elder: thanks a lot for your words
[17:35] <ninkotech> jmlowe: lovely
[17:35] <elder> I wish I could tell you things with a lot more certainty but it really is relatively new technology and we'll basically gain greater and greater confidence with time and use.
[17:35] <nhm> elder: btw, have you been following recent kernel releases much? I'm concerned with the report from the mailing list that performance has tanked between 3.0 and 3.4 with XFS. I'm going to try to replicate it today.
[17:36] <elder> I have not been following that aspect of things nhm.
[17:36] <jmlowe> I'm dealing with kind of a small scale, 4 osd's with 6 disks each backing 8 vm hosting machines
[17:36] <jmlowe> I have every confidence I could go up by an order of magnitude
[17:36] <nhm> elder: actually, he said btrfs too, so it's probably not xfs specific.
[17:36] <ninkotech> jmlowe: i am talking about ~30-400 vm hosting machines
[17:37] <jmlowe> ninkotech: I'll probably double in size in the next year
[17:38] <jmlowe> ninkotech: how many disks? what's really more important is the vm to disk ratio
[17:38] * Tv_ (~tv@aon.hq.newdream.net) has joined #ceph
[17:39] <ninkotech> hard to tell now. but you can guess 8-24 vm on 1 server having ~ 1-2 disks --> 240-9600 disks -- by my numbers
[17:40] <ninkotech> the other solution is to use something really down-to-earth and rock stable :)
[17:40] <ninkotech> but i guess i am an early adopter :)
[17:41] <ninkotech> still, i do not wish to be hanged for that decision
[17:42] <jmlowe> ninkotech: completely without basis here but I'd estimate that if you are using local disks to back your vms currently you could sum the vms and disks you have now and get by with 20% fewer disks to vm's
[17:43] <jmlowe> ninkotech: with rbd caching on (things aren't really usable without it), I can untar a stock 2.6.29 linux kernel gzipped tarball in under 30 seconds
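
As a side note on the caching mentioned above: with the librbd Python bindings, client-side caching can be switched on through the config API before any image is opened. A minimal sketch, assuming python-rados is installed and a ceph.conf at the default path (qemu users would normally set the same rbd_cache option in the [client] section of ceph.conf instead):

    import rados

    # rbd caching is a client-side librbd option, so it has to be set on
    # the cluster handle before any images are opened through it
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.conf_set('rbd_cache', 'true')
    cluster.connect()
    print(cluster.conf_get('rbd_cache'))    # should echo back 'true'
    cluster.shutdown()
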
[17:45] <jmlowe> ninkotech: my original plans were to snapshot and back up the snapshot images, but things aren't really at that point yet, so I export via nfs to a management machine and back up to tape from there
[17:46] <ninkotech> jmlowe: snapshots are not working?
[17:47] <ninkotech> ... interesting... i roughly agree with you
[17:47] <ninkotech> i mean, i see RBD and ceph having really great future...
[17:47] <ninkotech> but i have to have it stable
[17:47] <ninkotech> reliable
[17:50] <jmlowe> ninkotech: I believe right now with a stock ubuntu 12.04 libvirt, qemu-kvm, ceph 0.47.2, if you do a virsh snapshot qemu isn't notified in the same way that it would be for, say, qcow2, and therefore the snapshot isn't clean and often isn't mountable; snapshots work as expected for an offline machine
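
To illustrate the offline path that does work, a minimal sketch using the librbd Python bindings (python-rados/python-rbd assumed available; the pool and image names below are made up) that takes and lists a snapshot of an image that is not in use:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')            # default rbd pool
        try:
            image = rbd.Image(ioctx, 'vm-disk-0')    # hypothetical image name
            try:
                image.create_snap('pre-backup')      # point-in-time snapshot
                print([s['name'] for s in image.list_snaps()])
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
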
[17:51] <jmlowe> ninkotech: as long as your osd dies when you have problems and you have a sufficient number of replicas, in my experience you are going to find things very stable
[18:13] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[18:15] <sagewk> elder: there?
[18:26] <elder> I am now
[18:27] <elder> sagewk,
[18:28] <sagewk> elder: did you see the comment on the null deref bug? looks like its in the bio struct.. is that new info?
[18:29] <elder> Remind me which bug number.
[18:30] <sagewk> elder: 2267
[18:30] <elder> Well I guess that is new information.
[18:31] <elder> I don't have a ton of confidence in the bio stuff in the messenger... But I haven't really gone through it.
[18:31] <elder> I had the impression it wasn't even being used by RBD a couple months ago, but I must be mistaken.
[18:32] <elder> Anyway, I'll follow up on that addition to 2267 a little later today.
[18:32] <sagewk> ok!
[18:32] <elder> Trying to get to a decent stopping point on the messenger state machine stuff.
[18:32] <sagewk> btw i kicked the gitbuilder last night.. i think the 60 second timeout just wasn't enough to pull down the latest from linus' tree.
[18:32] <elder> I'm really converging on it now, but it's very slow, methodical, thoughtful work.
[18:33] <sagewk> sounds good. i'll try to take a look later today
[18:33] <elder> We need to figure out how to make that gitbuilder stoppage not be a problem.
[18:33] <sagewk> yeah
[18:33] <elder> It really held up some possible progress this weekend. I wouldn't have had to put in much time, but I did want to run tests and was unable to do that.
[18:42] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:47] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:55] * gregaf1 (~Adium@aon.hq.newdream.net) has joined #ceph
[18:55] * gregaf (~Adium@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[18:59] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[19:00] * gregorg (~Greg@78.155.152.6) Quit (Ping timeout: 480 seconds)
[19:14] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[19:29] * Ryan_Lane (~Adium@207.239.114.206) has joined #ceph
[19:30] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:35] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[19:39] * BManojlovic (~steki@212.200.243.232) has joined #ceph
[19:41] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[19:58] * sbohrer (~sbohrer@173.227.92.65) has joined #ceph
[19:58] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[20:01] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[20:12] <sbohrer> I've got a hopefully simple configuration question. Is there a way to make the monitors listen on multiple interfaces?
[20:14] <sbohrer> My problem is I've got an IB subnet, and a 1g subnet and I'd like my ceph servers to serve them both. The servers have both an IB card and 1g.
[20:15] <gregaf1> sbohrer: hrm, right now there's unfortunately no way to make the ceph daemons themselves handle multiple interfaces like that
[20:15] <gregaf1> nor is there any support for direct use of IB
[20:15] <sbohrer> Yeah, I'm using IPoIB at the moment
[20:15] <Tv_> sbohrer: what Greg said.. then again, on Linux IP addresses aren't really specific to interfaces; if you can make traffic to the IP address come in via IB, it'll be served
[20:16] <sbohrer> can I run a second ceph-mon instance on each host? One for IPoIB and one for 1g?
[20:17] <Tv_> sbohrer: clients won't know which to talk to etc.. don't do that
[20:18] <ninkotech> sbohrer: you just need to configure routing..
[20:18] * Hugh (~hughmacdo@soho-94-143-249-50.sohonet.co.uk) Quit (Quit: Ex-Chat)
[20:29] <nhm> sbohrer: if this is something important to you, please submit a feature request!
[20:31] * Ryan_Lane1 (~Adium@211.sub-166-250-45.myvzw.com) has joined #ceph
[20:35] * Ryan_Lane (~Adium@207.239.114.206) Quit (Ping timeout: 480 seconds)
[20:39] * adjohn is now known as Guest1880
[20:39] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[20:45] * Guest1880 (~adjohn@69.170.166.146) Quit (Ping timeout: 480 seconds)
[20:52] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has joined #ceph
[20:57] * Ryan_Lane1 (~Adium@211.sub-166-250-45.myvzw.com) Quit (Quit: Leaving.)
[21:01] <elder> gregaf1, sagewk, who uses LOSSYTX communication again?
[21:17] <nhm> Were librgw-dev and librgw1 removed from the 0.47.* releases?
[21:18] <joshd> nhm: yeah, I think so - they were incomplete, but might be finished later
[21:19] * szaydel (~szaydel@c-67-169-107-121.hsd1.ca.comcast.net) has joined #ceph
[21:33] <sagewk> elder: client/osd and client/mon
[21:33] <sagewk> elder: client/mds is the only non-lossy one for the kernel client
[21:33] <elder> OK.
[21:34] <elder> I think I found a possible reason for a bug that shows up when running iozone then.
[21:35] <elder> The symptom looks like this:
[21:35] <elder> [ 9711.771984] libceph: osd1 10.214.131.13:6800 socket closed
[21:35] <elder> And then it hangs.
[21:36] <elder> I think for lossy connections, if the other end causes a disconnect, the connection needs to be closed. But right now it doesn't actually get closed, and there's nothing to get the communication going again. Something like that.
[21:36] <elder> In any case it'll be fixed as I work through this stuff...
[21:37] <sagewk> elder: in that case the reset callback should be called, and osd_client should then close+open+resend..?
[21:41] <elder> You mean ceph_connection->ops->fault, which is osd_reset()?
[21:41] <sagewk> yeah
[21:41] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[21:42] <elder> osd_reset() only does this, basically:
[21:42] <elder> kick_osd_requests(osdc, osd);
[21:42] <elder> send_queued(osdc);
[21:42] <elder> But meanwhile the socket never got closed.
[21:42] <elder> I think.
[21:43] <elder> I haven't really nailed it down, it's just a hunch about the whereabouts of the problem at this point.
[21:43] <sagewk> k
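
A toy Python model (names made up, not the actual net/ceph messenger or osd_client code) of the failure mode being described: on a lossy connection a remote close should tear the socket down and reconnect before queued requests are resent; if the socket is never closed and reopened, the resend goes nowhere and the client appears to hang.

    class LossyConnection(object):
        def __init__(self):
            self.socket_open = True
            self.queued = []              # requests waiting to be (re)sent

        def remote_closed(self):
            """Peer dropped the TCP connection (the 'socket closed' message)."""
            self.socket_open = False
            self.fault()

        def fault(self):
            # what the reset callback is said to do today: kick + resend,
            # without first closing and reopening the socket
            self.resend_queued()

        def reconnect(self):              # the step suspected to be missing
            self.socket_open = True
            self.resend_queued()

        def resend_queued(self):
            if not self.socket_open:
                print('socket still closed -- requests go nowhere, client hangs')
                return
            for req in self.queued:
                print('resending', req)

    conn = LossyConnection()
    conn.queued.append('osd_op 1')
    conn.remote_closed()                  # prints the 'hangs' message
    conn.reconnect()                      # with the missing step added, it resends
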
[22:03] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[22:08] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[22:17] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[22:24] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[22:29] <Tv_> hahaha "BTW, I noticed OSD usings XFS are much much slower than OSD with btrfs right now, particulary in rbd tests."
[22:30] <Tv_> there just won't be a single winner for a while, it seems
[22:32] <nhm> Tv_: Yeah, there's a lot of corner cases. XFS seems to be faster on SSDs where seeks don't kill it. btrfs is faster on spinning rust, but seems to degrade faster than XFS (even with -l 64k -n 64k) and ends up being the same speed or slower over time depending on the request size.
[22:33] <nhm> Tv_: at least that's what seems to be happening so far for me. I need to test xfs on precise with spinning drives to see how much syncfs helps.
[22:36] <gregaf1> did that guy have a sane journal size?
[22:36] <gregaf1> I can't remember what the unit is and I noticed he was specifying it so I thought maybe that was why *shrug*
[22:38] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) has joined #ceph
[22:46] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has left #ceph
[22:49] * pmdz (~pmdz@afaq76.neoplus.adsl.tpnet.pl) has joined #ceph
[22:53] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) Quit (Quit: Who the hell is this Peer? If I ever catch him I'll reset his connection!)
[22:54] <joao> can anyone think of a reason for this to happen?
[22:54] <joao> INFO:teuthology.orchestra.run.err:bash: /tmp/cephtest/enable-coredump: No such file or directory
[23:01] <dmick> either enable-coredump doesn't exist, or its arguments didn't?
[23:04] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) has joined #ceph
[23:15] <Tv_> joao: unmodified teuthology.git? what yaml file as input?
[23:15] <Tv_> joao: possible explanation: somebody was naughty and nuked the host you were using
[23:15] <Tv_> joao: share full log to get more help..
[23:16] <joao> Tv_, it keeps happening; just cloned teuthology git and bootstrapped it; the yaml is one of the qa configs that failed
[23:16] <joao> give me a second on that log
[23:17] <joao> Tv_, http://pastebin.com/sgj4pea0
[23:18] <joao> Tv_, nevermind
[23:18] <joao> sage just solved it :x
[23:18] <joao> silly me
[23:19] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[23:20] * aliguori_ (~anthony@222.128.202.2) has joined #ceph
[23:25] * aliguori (~anthony@222.128.202.191) Quit (Ping timeout: 480 seconds)
[23:26] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[23:27] * szaydel (~szaydel@c-67-169-107-121.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[23:28] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[23:37] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[23:41] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[23:46] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[23:47] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[23:48] * adjohn (~adjohn@69.170.166.146) Quit ()
[23:48] <darkfader> sbohrer: will you open that feature request? it's similar to some open questions of mine
[23:49] <darkfader> i'd like to keep an eye on it :)
[23:49] <nhm> darkfader: if he doesn't, you should! :)
[23:49] <darkfader> nhm: yes :)
[23:50] <darkfader> i'd still rather see two people work on it together because then the request will be more understandable
[23:50] <darkfader> but i didn't know there was some place for feature requests, now i do
[23:51] <darkfader> so i'll do it if he doesnt
[23:51] <nhm> darkfader: cool
[23:52] <nhm> darkfader: enduser interaction++
[23:52] <darkfader> hehe
[23:54] <darkfader> i had promised to make a sketch of what i think the ceph setup will look like for me. you asked for that because i have uneven-sized nodes
[23:54] <darkfader> and i just verified it didn't get wiped in that whole "cleanup.sh" disaster i recently had :)
[23:55] <darkfader> http://www.deranfangvomen.de/floh/infiniband.png
[23:55] <darkfader> beware, the whole thing is turned OFF.
[23:56] <darkfader> but i can test specifics when i'm on holiday.
[23:57] <nhm> darkfader: ah, interesting. I'll look at this more closely later on. I'm trying to get some results out to the mailing list and then have company coming for dinner.
[23:58] <darkfader> np, i think it took me 3 weeks to get back on it
[23:58] <darkfader> so take your time too
[23:58] * s[X] (~sX]@ppp59-167-154-113.static.internode.on.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.