#ceph IRC Log

IRC Log for 2012-08-24

Timestamps are in GMT/BST.

[0:08] * dpemmons (~dpemmons@204.11.135.146) Quit (Read error: Operation timed out)
[0:11] <Tobarja> "Clos network" == mind blown
[0:13] <dspano> Tobarja: I'll second that.
[0:14] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:17] <darkfader> hehe, you'd love a meshed fibrechannel core
[0:19] * dpemmons (~dpemmons@204.11.135.146) has joined #ceph
[0:45] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[0:48] <Tv_> here, let me explain vxlan briefly... ;)
[0:48] <Tv_> there's a bunch of networking stuff people largely gloss over
[0:54] * cattelan (~cattelan@2001:4978:267:0:21c:c0ff:febf:814b) Quit (Quit: Terminated with extreme prejudice - dircproxy 1.2.0)
[1:16] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:16] * bitsweat (~bitsweat@ip68-106-243-245.ph.ph.cox.net) has joined #ceph
[1:21] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[1:23] * BManojlovic (~steki@212.200.243.134) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:32] * cattelan (~cattelan@2001:4978:267:0:21c:c0ff:febf:814b) has joined #ceph
[1:42] * Tv_ (~tv@2607:f298:a:607:24:9854:b7ba:106e) Quit (Quit: Tv_)
[1:46] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[1:52] <iggy> are there actually clos ethernet networks?
[2:02] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:05] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[2:12] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[2:17] * Tv (~tv@cpe-24-24-131-250.socal.res.rr.com) has joined #ceph
[2:35] * tightwork (~tightwork@142.196.239.240) has joined #ceph
[2:37] * deepsa (~deepsa@115.241.132.79) Quit (Ping timeout: 480 seconds)
[2:41] * Ryan_Lane (~Adium@216.38.130.164) Quit (Quit: Leaving.)
[2:58] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Quit: Ex-Chat)
[3:00] * maelfius (~mdrnstm@66.209.104.107) Quit (Quit: Leaving.)
[3:19] * ryann (~chatzilla@216.81.130.180) has joined #ceph
[3:33] <ryann> say one has a pg that is active+remapped and has remained in that state for 3 days. Can, or should, one issue some command that would cause it to complete its transition?
[3:39] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[3:42] <gregaf> ryann: it's probably remapped because there was a CRUSH failure and it's not mapping to the correct number of OSDs, so a previous OSD is being kept in the acting set
[4:15] <ryann> gregaf: We fixed it. ceph reweight # 1 immediately took care of all my problems. Thanks for your help today!
[4:23] <gregaf> ryann: that was you on the mailing list?
[4:29] <ryann> Yes. it was. :)
[4:30] <gregaf> so you set the very small weights back to 1?
[4:31] <ryann> Yes. Once I'm off a support call, I'll try to read/write to an image to make sure that it works. I do, however, have 3080 PGs total and 3080 PGs active+clean
[4:32] * dmick (~dmick@38.122.20.226) Quit (Quit: Leaving.)
[4:38] <gregaf> cool
[4:39] <gregaf> I sent an email on the list too... if you do need to adjust weights because the nodes are actually different sizes, you should set the CRUSH weights, not the monitor overrides
[4:39] <gregaf> (which I realize is confusing)
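
To illustrate the distinction gregaf is drawing: the CRUSH weight describes a device's relative capacity for placement, while the monitor override is a 0-to-1 factor applied on top of it. A minimal sketch of the two commands, assuming a hypothetical osd.3 and the CLI syntax of this era:

    # CRUSH weight: set this when nodes genuinely differ in size
    ceph osd crush reweight osd.3 1.0
    # monitor override: the 0-1 factor ryann reset above
    ceph osd reweight 3 1.0
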
[4:40] <ryann> I understand. However, I'm trying to treat the 2 different storage areas like 2 different clusters (with the exception of metadata, which is stored on the SCSI side).
[4:41] <ryann> At least, that was the goal. Perhaps I need to read up on Crush some more. While drinking an Orange Crush. :P
[4:42] <gregaf> ah, you'll probably want to split them apart into two different CRUSH hierarchies then (which you can do) and create some custom rules for the RADOS pools to put them in the appropriate CRUSH hierarchy
[4:42] <gregaf> ...at least, if I'm understanding you correctly
[4:46] <ryann> Hmmm. So far I thought I had done so. I've produced a Crushmap (in that pastebin) that keeps the Large Nodes separate from the Small ones, and I've created the SCSI pool so that it stores only on the smaller, faster nodes. Hopefully.
[4:47] <ryann> So far, when I send data to the different rados pools, I do see (judging by the lights on the drives, lol) that data is going to the correct storage drives.
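
A rough sketch of the kind of split hierarchy being described here, in decompiled-crushmap syntax; the bucket name, id, hosts, and ruleset number are all hypothetical:

    root scsi {
            id -10                  # bucket ids are negative and unique
            alg straw
            hash 0                  # rjenkins1
            item small-node-1 weight 1.000
            item small-node-2 weight 1.000
    }

    rule scsi {
            ruleset 3
            type replicated
            min_size 1
            max_size 10
            step take scsi          # restrict placement to this root
            step chooseleaf firstn 0 type host
            step emit
    }
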
[4:49] <gregaf> oh, so they are
[4:50] <gregaf> should have read more carefully
[4:50] <gregaf> (sorry, trying to get something done for tomorrow as I type this)
[4:50] <gregaf> so why were you trying to reweight-by-utilization again?
[4:54] <ryann> HAHA! I knew you would ask... I was seeing some osds being filled with data while others were not (within the SCSI pool). After some research, I found a mailing list entry that mentioned this is natural behavior for the cluster until you get real data on it.
[4:57] <gregaf> oh....
[4:57] <gregaf> hah
[4:57] <gregaf> your SCSI pool only has 8 PGs in it
[4:57] <gregaf> that's why
[4:58] <gregaf> when you create a pool you should use "ceph osd pool create SCSI 1024" to set 1024 PGs
[5:04] <ryann> AAAaaaaaaaahhhhh That's why. I saw that, but wasn't sure what to do there....
[5:05] <ryann> Help: why exactly 1024? Is that a generic value? Should I consider some math and determine a perfect amount?
[5:13] <gregaf> 1024 is the number that was auto-generated for your other pools so I assume it's good
[5:13] <gregaf> :)
[5:13] <gregaf> there's a writeup about the implications of different PG sizes in the docs somewhere
[5:14] <ryann> somewhere. :P hehe
[5:19] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[5:20] <Tobarja> this perhaps? http://ceph.com/docs/master/dev/placement-group/
[5:21] <Tobarja> (I've had 6 ceph specific tabs open in chrome for a couple of days now)
[5:23] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[5:24] <gregaf> that's a little out of date I think... I was thinking of http://ceph.com/docs/master/ops/manage/grow/placement-groups/#optimal-total-pg-count
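
The rule of thumb from those docs, roughly: on the order of 100 PGs per OSD, divided by the replica count, rounded up to a power of two. A worked example with assumed numbers (20 OSDs, 2x replication):

    (20 OSDs x 100) / 2 replicas = 1000  ->  round up to 1024 PGs
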
[5:26] <ryann> I was thinking http://www.arbys.com/
[5:27] <ryann> Sorry, I had to. I was put up to it...
[5:28] <ryann> Thanks for the link, seriously. I think I'll read up some more before I continue.
[5:34] * deepsa (~deepsa@115.242.109.91) has joined #ceph
[5:39] <ryann> gregaf: when executing ...osd create SCSI 1024, is there more to that command to indicate which Crush rule you would like it to adhere to?
[5:39] <ryann> found it. nvrmnd.
[5:40] <gregaf> ryann: I don't believe there is; you need to run a separate command "ceph osd pool set SCSI crush_ruleset <x>"
[5:40] <gregaf> I think that's the syntax
[5:40] <ryann> Thx
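
Putting the two commands from this exchange together into one sequence; the ruleset number 3 is hypothetical, and the real one comes from the decompiled crushmap or from "ceph osd dump":

    ceph osd pool create SCSI 1024            # create the pool with 1024 PGs
    ceph osd pool set SCSI crush_ruleset 3    # pin it to the rule for the fast nodes
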
[5:53] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[5:53] * Tv (~tv@cpe-24-24-131-250.socal.res.rr.com) Quit (Quit: Tv)
[6:13] <ryann> gregaf: Thanks for your time tonight. I've restored the SCSI pool with 1024 PGs. I have a 20G image loaded and configured as an iSCSI target, which I've mounted on a Windows 7 box and formatted NTFS. Getting around 120MB/s.
[6:13] <ryann> Sigh...It's nice when stuff works.
[6:51] * deepsa (~deepsa@115.242.109.91) Quit (Ping timeout: 480 seconds)
[7:15] * tightwork (~tightwork@142.196.239.240) Quit (Ping timeout: 480 seconds)
[7:17] * gregaf (~Adium@2607:f298:a:607:6198:95e2:4274:ad1d) Quit (Read error: Connection reset by peer)
[7:17] * gregaf (~Adium@2607:f298:a:607:4990:c1e3:3fe9:b77f) has joined #ceph
[7:44] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:46] * ryann (~chatzilla@216.81.130.180) has left #ceph
[8:02] * deepsa (~deepsa@115.241.154.192) has joined #ceph
[8:45] * verwilst (~verwilst@d5152D6B9.static.telenet.be) has joined #ceph
[8:59] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[9:03] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit ()
[9:03] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[9:05] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:07] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[9:13] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[9:15] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:16] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:21] <kblin> morning folks
[9:58] <kblin> hm, looks like overnight my osds died of clock drift and refused to come up while using cephx auth..
[9:58] <kblin> they start fine if I remove the "auth supported = cephx" line (after making sure ntpd is running on all the systems so my clocks don't drift again)
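
Context for kblin's symptom: cephx tickets carry timestamps, so enough clock skew between daemons makes authentication fail. A minimal ceph.conf sketch of the knobs involved; the drift value shown is the documented default of the era and should be treated as an assumption:

    [global]
            auth supported = cephx
    [mon]
            # monitors tolerate only a small skew before rejecting peers
            mon clock drift allowed = 0.05
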
[10:01] * loicd1 is now known as loicd
[10:14] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Quit: Leaving...)
[10:20] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[10:29] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Quit: Leaving...)
[10:29] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[10:30] * BManojlovic (~steki@91.195.39.5) Quit (Read error: Operation timed out)
[10:30] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:46] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:09] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[11:09] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[11:39] * renzhi (~renzhi@180.169.73.90) Quit (Read error: Connection reset by peer)
[11:42] * renzhi (~renzhi@180.169.73.90) has joined #ceph
[12:20] * jantje_ (~jan@paranoid.nl) has joined #ceph
[12:23] * _are__ (~quassel@2a01:238:4325:ca02::42:4242) has joined #ceph
[12:24] * ninkotech_ (~duplo@89.177.137.231) has joined #ceph
[12:24] * ferai (~quassel@quassel.jefferai.org) has joined #ceph
[12:25] * Ludo_ (~Ludo@falbala.zoxx.net) has joined #ceph
[12:25] * wido (~wido@2a00:f10:104:206:9afd:45af:ae52:80) Quit (Remote host closed the connection)
[12:25] * Ludo (~Ludo@falbala.zoxx.net) Quit (Read error: Connection reset by peer)
[12:25] * wido (~wido@2a00:f10:104:206:9afd:45af:ae52:80) has joined #ceph
[12:25] * jefferai (~quassel@quassel.jefferai.org) Quit (Read error: Connection reset by peer)
[12:26] * _are_ (~quassel@2a01:238:4325:ca02::42:4242) Quit (Read error: No route to host)
[12:27] * jantje (~jan@paranoid.nl) Quit (Ping timeout: 480 seconds)
[12:27] * ninkotech (~duplo@89.177.137.231) Quit (Read error: Connection reset by peer)
[12:30] * Leseb_ (~Leseb@193.172.124.196) has joined #ceph
[12:31] * Leseb (~Leseb@193.172.124.196) Quit (Read error: Connection reset by peer)
[12:31] * Leseb_ is now known as Leseb
[12:34] * deepsa_ (~deepsa@117.203.19.67) has joined #ceph
[12:36] * deepsa (~deepsa@115.241.154.192) Quit (Ping timeout: 480 seconds)
[12:36] * deepsa_ is now known as deepsa
[12:57] * deepsa (~deepsa@117.203.19.67) Quit (Quit: Computer has gone to sleep.)
[13:14] * tightwork (~tightwork@142.196.239.240) has joined #ceph
[13:16] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[13:43] * tightwork (~tightwork@142.196.239.240) Quit (Ping timeout: 480 seconds)
[13:59] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Ping timeout: 480 seconds)
[14:27] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[14:53] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[15:03] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:41] * hr (~hr@odm-mucoffice-02.odmedia.net) has joined #ceph
[15:48] * ihwtl (~ihwtl@odm-mucoffice-02.odmedia.net) has joined #ceph
[15:51] <ihwtl> ?
[15:51] <joao> !
[16:00] * gregorg (~Greg@78.155.152.6) Quit (Quit: Quitte)
[16:09] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Quit: Leaving...)
[16:09] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[16:12] * Tv (~tv@cpe-24-24-131-250.socal.res.rr.com) has joined #ceph
[16:46] * verwilst (~verwilst@d5152D6B9.static.telenet.be) Quit (Quit: Ex-Chat)
[16:47] <gregaf> anybody in here know anything about the qemu-kvm and rbd integration?
[16:48] <gregaf> I can't seem to get the right command to start up a kvm instance with rbd... openstack is generating one that looks like:
[16:48] <gregaf> kvm -drive file=rbd:rbd/volume-00000001:id=client.d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx none,if=none,id=drive-virtio-disk0,format=raw,cache=none:
[16:48] <gregaf> but that doesn't work... when I strace it I can see it attempting opens on a literal "rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx:conf=/etc/ceph/ceph.conf"
[16:49] <gregaf> (ie, it's not noticing that I'm asking it to talk to RBD)
[16:49] <Tv> gregaf: that's what an unpatched qemu used to do, but that really shouldn't be the cause here
[16:49] <gregaf> well, I'm staring at the libvirtd log with the command and the response
[16:50] <gregaf> and copying the command and running it under strace produces (in part):
[16:50] <gregaf> open("rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx:conf=/etc/ceph/ceph.conf", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)
[16:51] <gregaf> but yes, I agree that it *shouldn't* be the cause here; and indeed kvm --version says QEMU version 1.0.50, which best I can tell is sufficiently new
[16:52] <Tv> i can confirm the -drive file=rbd:$POOL/$IMAGE:blahblah part looks completely right
[16:52] <gregaf> so... image built with it disabled just to screw with me? or any other possibilities?
[16:52] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[16:53] <Tv> oh here's something
[16:53] <gregaf> the results are the same on one of our dev boxes, btw
[16:54] <Tv> josh said that "newer qemu" doesn't like spaces in there
[16:54] <Tv> instead of auth_supported=cephx none, he said to use cephx;none
[16:55] <Tv> and that would match the strace cutting it exactly at the space
[16:56] <gregaf> doesn't appear to change anything
[16:56] <gregaf> strace kvm -drive file=rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key="AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==":auth_supported=cephx:none,if=none,id=drive-virtio-disk0,format=raw,cache=none 2>&1 | less
[16:56] <gregaf> c-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx:none", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)
[16:56] <gregaf> open("rbd:rbd/volume-00000001:id=d52-54-00-
[16:56] <Tv> that's a colon not semicolon
[16:56] <gregaf> wow, that didn't copy correctly
[16:57] <gregaf> anyway, same with a semi-colon
[16:57] <Tv> what exactly does it try to open, then?
[16:58] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:58] <gregaf> strace kvm -drive file=rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key="AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==":auth_supported=cephx;none,if=none,id=drive-virtio-disk0,format=raw,cache=none
[16:59] <gregaf> timer_create(CLOCK_REALTIME, {(nil), 14, SIGEV_THREAD_ID, {28478}}, {0x9d42582000000001}) = 0
[16:59] <gregaf> open("rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)
[16:59] <gregaf> open("rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)
[16:59] <gregaf> stat("rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx", 0x7fffba4760b0) = -1 ENOENT (No such file or directory)
[16:59] <gregaf> write(2, "kvm:", 4kvm:) = 4
[16:59] <gregaf> write(2, " -drive", 7 -drive) = 7
[16:59] <gregaf> write(2, " file=rbd:rbd/volume-00000001:id"..., 122 file=rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx) = 122
[16:59] <gregaf> write(2, ": ", 2: ) = 2
[16:59] <gregaf> write(2, "could not open disk image rbd:rb"..., 169could not open disk image rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx: No such file or directory) = 169
[16:59] <gregaf> write(2, "\n", 1
[16:59] <gregaf> ) = 1
[16:59] <Tv> it's still missing the ;none part
[16:59] <Tv> that's a bit worrying
[17:00] <Tv> i think it's supposed to parse as file=foo,if=none,id=...,format=raw,cache=none for qemu, and the "foo" part should be passed to the block device emulation subsystem, which should separate it into scheme:stuff_to_pass_to_scheme
[17:02] <gregaf> a bit worrying... *snorts*
[17:02] <gregaf> ;)
[17:03] <Tv> and it has ways of enumerating built-in hardware device simulators, but not disk types..
[17:03] <Tv> so i don't know how to ask it what it supports, either
[17:04] <Tv> but you might need to install our patched debs
[17:04] <gregaf> doesn't 12.04 have the right ones by default?
[17:04] <gregaf> this command-line isn't doing the right thing on pudgy either
[17:04] <Cube> gregaf: can you try cephx\;none
[17:04] <Cube> all my processes show like that.
[17:04] <Tv> gregaf: i thought it did
[17:05] <gregaf> well, escaping the semicolon got the none included as part of the filename...
[17:05] <gregaf> open("rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx;none", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)
[17:06] <Tv> gregaf: you know, i see those opens in strace, but then it starts looking for ceph.conf and does the right thing
[17:06] <Tv> gregaf: that might be a red herring
[17:06] <gregaf> hmmm
[17:06] <gregaf> I have no such lookups following on
[17:06] <Tv> with my not-ceph-configured desktop, it ends with "unable to find any monitors in conf. please specify monitors via -m monaddr or -c ceph.conf" etc
[17:06] <gregaf> let me try sticking the ceph.conf in there too, I guess
[17:07] <Tv> so it's clearly running librbd
[17:07] <Tv> i did a strace -e open kvm -drive file=rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key="AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==":auth_supported=cephx;none,if=none,id=drive-virtio-disk0,format=raw,cache=none
[17:07] <Tv> on a 12.04 box that has never had any ceph thing configured on it
[17:07] * deepsa (~deepsa@117.203.14.172) has joined #ceph
[17:07] <Tv> oh heh that ; went unparsed on the command line
[17:07] <Tv> i mean, bash split on it
[17:08] <Tv> same thing if i do -drive 'file=...'
[17:10] <gregaf> thus the escaping from Cube :)
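
For the record, the escaping problem in play: an unquoted ";" ends the shell command, so qemu never sees the "none" fallback. A sketch of the same drive spec with the backslash escape Cube suggests (values are the ones from the log):

    kvm -drive file=rbd:rbd/volume-00000001:id=d52-54-00-ac-43-05.nova:key=AQA/jjdQECmZHhAAao97tSc3Zlqfilz2bWeL2g==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,cache=none

If kvm still treats the whole rbd: string as a literal filename after this, the binary itself likely lacks rbd support, which is where the conversation heads next.
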
[17:14] <gregaf> Tv: you coming in now?
[17:15] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:24] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[17:28] * loicd1 (~loic@brln-4d0ce5a9.pool.mediaWays.net) has joined #ceph
[17:30] * loicd (~loic@brln-d9bad602.pool.mediaWays.net) Quit (Ping timeout: 480 seconds)
[17:31] * ihwtl (~ihwtl@odm-mucoffice-02.odmedia.net) Quit (Ping timeout: 480 seconds)
[17:32] * hr (~hr@odm-mucoffice-02.odmedia.net) Quit (Ping timeout: 480 seconds)
[17:34] <Tv> gregaf: just packing up
[17:35] * Tv (~tv@cpe-24-24-131-250.socal.res.rr.com) Quit (Quit: Tv)
[17:35] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Quit: Leaving...)
[17:36] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[17:36] * BManojlovic (~steki@212.200.243.134) has joined #ceph
[17:36] <gregaf> Tv: hmm, does ldd show librbd linked against your kvm?
[17:37] <gregaf> it's not on either this VM or pudgy
[17:37] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[17:51] * bitsweat (~bitsweat@ip68-106-243-245.ph.ph.cox.net) Quit (Quit: Linkinus - http://linkinus.com)
[17:55] * bitsweat (~bitsweat@ip68-106-243-245.ph.ph.cox.net) has joined #ceph
[17:56] * Tv_ (~tv@2607:f298:a:607:24:9854:b7ba:106e) has joined #ceph
[17:59] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[18:04] * nhm (~nhm@174-20-15-49.mpls.qwest.net) has joined #ceph
[18:05] <Tv_> gregaf: s.sub(/^client\./, '')
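
Tv_'s one-liner points at the remaining fix: librbd expects the bare client name, without the "client." prefix that openstack was emitting. Illustrated against the strings from earlier in the log:

    # generated by openstack (rejected):
    ...:id=client.d52-54-00-ac-43-05.nova:...
    # what librbd expects:
    ...:id=d52-54-00-ac-43-05.nova:...
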
[18:23] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Ping timeout: 480 seconds)
[18:32] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[18:40] * deepsa_ (~deepsa@117.203.22.45) has joined #ceph
[18:41] * deepsa (~deepsa@117.203.14.172) Quit (Ping timeout: 480 seconds)
[18:41] * deepsa_ is now known as deepsa
[18:52] * ihwtl (~ihwtl@p549F71D0.dip.t-dialin.net) has joined #ceph
[18:54] * ihwtl (~ihwtl@p549F71D0.dip.t-dialin.net) Quit ()
[18:58] * deepsa_ (~deepsa@115.241.228.179) has joined #ceph
[19:01] * deepsa (~deepsa@117.203.22.45) Quit (Ping timeout: 480 seconds)
[19:01] * deepsa_ is now known as deepsa
[19:03] * ihwtl (~ihwtl@p549F71D0.dip.t-dialin.net) has joined #ceph
[19:09] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[19:15] * dmick (~dmick@2607:f298:a:607:1a03:73ff:fedd:c856) has joined #ceph
[19:17] <dmick> gotomeeting hates us; slight delay while it times out for password flooding
[19:20] * ihwtl (~ihwtl@p549F71D0.dip.t-dialin.net) Quit (Remote host closed the connection)
[19:20] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:20] <Tv_> ahaha
[19:21] <elder> Is that why we have silence?
[19:23] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[19:23] * BManojlovic (~steki@212.200.243.134) Quit (Quit: Ja odoh a vi sta 'ocete...)
[19:27] * thingee (~thingee@ps91741.dreamhost.com) has joined #ceph
[19:28] * BManojlovic (~steki@212.200.243.134) has joined #ceph
[19:29] * ihwtl (~ihwtl@p549F71D0.dip.t-dialin.net) has joined #ceph
[19:34] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Read error: Connection reset by peer)
[19:38] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[19:47] * ihwtl (~ihwtl@p549F71D0.dip.t-dialin.net) Quit (Quit: ihwtl)
[19:49] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Read error: Connection reset by peer)
[19:50] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[19:55] * Ryan_Lane (~Adium@216.38.130.164) has joined #ceph
[19:58] * ihwtl (~ihwtl@p549F71D0.dip.t-dialin.net) has joined #ceph
[19:58] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[19:58] * ihwtl (~ihwtl@p549F71D0.dip.t-dialin.net) has left #ceph
[19:59] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit ()
[19:59] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[20:04] * pentabular (~sean@adsl-71-141-232-146.dsl.snfc21.pacbell.net) has joined #ceph
[20:24] * tightwork (~tightwork@rrcs-71-43-128-65.se.biz.rr.com) has joined #ceph
[20:25] * deepsa (~deepsa@115.241.228.179) Quit (Quit: bye)
[20:36] <pentabular> Good luck in SD. Stop by the #salt booth while you're there. :)
[20:38] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[20:48] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[20:51] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) Quit (Read error: Connection reset by peer)
[20:54] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[21:01] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[21:06] * tightwork (~tightwork@rrcs-71-43-128-65.se.biz.rr.com) Quit (Ping timeout: 480 seconds)
[21:10] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[21:14] * nhm (~nhm@174-20-15-49.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[21:15] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[21:17] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[21:17] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:18] * nhm (~nhm@174-20-15-49.mpls.qwest.net) has joined #ceph
[21:19] * pentabular (~sean@adsl-71-141-232-146.dsl.snfc21.pacbell.net) has left #ceph
[21:20] <nhm> yay, wl driver is working much better than brcmsmac. No more disconnects hopefully.
[21:41] <nhm> alright, have a good weekend guys, I'm heading out early.
[21:46] * tightwork (~tightwork@rrcs-71-43-128-65.se.biz.rr.com) has joined #ceph
[21:49] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has left #ceph
[21:51] * nhm (~nhm@174-20-15-49.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[22:01] * EmilienM (~EmilienM@98.49.119.80.rev.sfr.net) has joined #ceph
[22:21] * tightwork (~tightwork@rrcs-71-43-128-65.se.biz.rr.com) Quit (Ping timeout: 480 seconds)
[22:31] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:43] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[22:49] <elder> I'm headed out now too. Back on line a bit later on, and again over the weekend.
[22:53] <dmick> have a good one elder
[23:17] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Remote host closed the connection)
[23:56] * bitsweat_ (~bitsweat@ip68-106-243-245.ph.ph.cox.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.