#ceph IRC Log


IRC Log for 2012-09-29

Timestamps are in GMT/BST.

[0:00] <John> For "journal align min size", I scrubbed this from the comments "Align data payloads greater than the specified minimum.". Is that saying that aligning writes only occurs if it's bigger than some size?
[0:00] * sagelap1 (~sage@38.122.20.226) has joined #ceph
[0:00] <sjust> it's a different kind of alignment, iirc
[0:00] <sjust> it shouldn't need to be changed
[0:01] <John> How about "journal replay from"?
[0:01] <sjust> journal replay from set to N causes us to start replay no earlier than N+1
[0:01] <sjust> it's useful if there is a bug in the FileStore code and we need to skip a journal entry on replay
[0:01] <sjust> it's not something a user should need to mess with except in dire circumstances
[0:02] <gregaf1> probably doesn't need to be documented, unless you want to stick a "DO NOT SET unless instructed by a developer" flag on it
[0:02] * BManojlovic (~steki@195.13.166.253) Quit (Ping timeout: 480 seconds)
[0:02] <John> Ok. I'll remove it from the reference.
[0:03] <sjust> journal zero on create causes the Filestore to overwrite the entire journal with zeros during mkfs
[0:03] * sagelap (~sage@2607:f298:a:607:4050:be7d:df57:b639) Quit (Ping timeout: 480 seconds)
[0:03] <John> journal zero on create? Should that be removed too?
[0:03] <sjust> not necessarily
[0:03] <sjust> if you create a large file without writing to it, the actual blocks won't have been created, and the initial writes to the journal will be slower than they should be
[0:04] <sjust> might be worthwhile if using a file for a journal rather than using a block device
[0:04] <gregaf1> doesn't it fallocate the journal so it should be good to go in a standard setup?
[0:04] * slang (~slang@216.3.101.62) Quit (Quit: slang)
[0:05] <sjust> not always enough, fallocate just ensures you won't hit an ENOSPC
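
A minimal sketch of enabling this for a file-backed journal, per sjust's suggestion (section placement illustrative; the option takes effect at mkfs time):

    # append to ceph.conf before running the OSD mkfs
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
            journal zero on create = true   # pre-write zeros so initial journal writes aren't slowed by block allocation
    EOF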
[0:07] <John> For "filestore" config_opts.h said "IGNORE FOR NOW" and it was default to false. Should I keep that in the reference?
[0:08] <sjust> where?
[0:08] <sjust> oh
[0:09] <sjust> that should be ignorable
[0:10] <John> line 359... so I can just remove it from the ref?
[0:10] <John> The comment is gone now, so I'm not sure.
[0:10] <sjust> yeah, I think
[0:11] <John> How about "filestore max inline xattr size" and "filestore max inline xattrs"?
[0:12] <sjust> ah, those are interesting to users
[0:12] <sjust> filestore max inline xattr size defines the largest xattr (at the rados level, where we expose xattrs) which will be stored in a filesystem xattr
[0:12] <sjust> anything larger will be stored elsewhere
[0:12] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has joined #ceph
[0:13] <sjust> filestore max inline xattrs defines the max number of xattrs that will be stored directly in the fs rather than elsewhere
[0:13] <sjust> some filesystems (ext4, xfs) have limits on either the size of a single xattr or on the total xattr size
[0:13] <sjust> but rados doesn't, so we need to work around it
[0:14] <sjust> also, in some cases, the filesystem xattrs might not be super fast, so storing them elsewhere might be faster
[0:14] * danieagle (~Daniel@186.214.56.10) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[0:15] <John> such as omap / leveldb?
[0:16] <John> Also, in this case when you say directly in the fs, you mean XFS, btrfs, ext4, not cephfs, right?
[0:18] <gregaf1> yes, that's what he means
[0:20] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:21] <sjust> yeah
[0:24] <John> The filestore max inline xattrs default is 2. I'm assuming we'd allow more than two xattrs in a file system. Is this on a per-object basis?
[0:24] <sjust> yes
[0:25] <sjust> basically, it's a heuristic to encourage the object_info data to be stored next to the inode for the object
[0:25] <sjust> it's not normally very large and most objects don't have any other xattrs
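
A sketch of the two options in ceph.conf form (the max inline xattrs default of 2 is stated above; the size value here is illustrative, not a recommendation):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
            filestore max inline xattr size = 512   # larger rados xattrs are stored elsewhere (e.g. omap/leveldb)
            filestore max inline xattrs = 2         # per-object cap on xattrs kept in the underlying fs
    EOF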
[0:26] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:28] <John> I don't have much of a general description for "filestore queue" settings. Any general comments on the queue?
[0:28] <sjust> limits on how large the filestore queue is allowed to get
[0:29] * loicd (~loic@magenta.dachary.org) has joined #ceph
[0:29] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has left #ceph
[0:30] <John> Is there a performance tradeoff... smaller isn't as good, but bigger isn't necessarily better?
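
For reference, a sketch of the queue limits in question (option names from config_opts.h; values are illustrative, not tuning advice):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
            filestore queue max ops = 50            # max queued filestore operations
            filestore queue max bytes = 104857600   # max queued bytes (100 MB)
    EOF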
[0:31] <John> Also, any general comments on extents? filestore fiemap, filestore fiemap threshold?
[0:32] <joshd> filestore fiemap should always be off (and is by default). they don't really need to be there right now
[0:32] <John> ok... will remove.
[0:33] * cblack101 (86868949@ircip4.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[0:33] <John> I don't have descriptions for "filestore journal parallel", "filestore journal writeahead" and "filestore journal trailing"...
[0:34] <John> this is one of those areas where the filestore and journal settings might benefit from a cross-reference... or so it seems.
[0:43] <John> Ok... just got the crowbar stuff back from tamil... so I'm going to set this aside for a bit...
[0:44] * John (~john@astound-69-42-3-170.ca.astound.net) Quit (Quit: Leaving)
[0:45] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) Quit (Quit: kill -9 EmilienM)
[0:45] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[1:12] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[1:12] * maelfius (~mdrnstm@66.209.104.107) Quit ()
[1:12] * maelfius (~mdrnstm@66.209.104.107) has joined #ceph
[1:25] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) Quit (Remote host closed the connection)
[1:26] * slang (~slang@216.3.101.62) has joined #ceph
[1:29] * jlogan1 (~Thunderbi@72.5.59.176) Quit (Ping timeout: 480 seconds)
[1:31] * slang (~slang@216.3.101.62) Quit (Read error: Connection reset by peer)
[1:42] * amatter (~amatter@209.63.136.130) has joined #ceph
[1:43] * yehudasa (~yehudasa@2607:f298:a:607:e895:24db:46:69e3) has joined #ceph
[1:44] <amatter> good evening. I have an active mds and two standbys. I want to set the second standby as the active and the other two to standby. Can I do this by adjusting the mds rank?
[1:44] * dmick (~dmick@2607:f298:a:607:2c4b:49ec:38c1:e567) has joined #ceph
[1:46] * gregaf (~Adium@38.122.20.226) has joined #ceph
[1:48] <gregaf> amatter: right now the best docs for what you're after are at http://ceph.com/wiki/Standby-replay_modes
[1:50] * houkouonchi-work (~linux@12.248.40.138) Quit (Ping timeout: 480 seconds)
[1:52] * gregaf2 (~Adium@2607:f298:a:607:ac37:7738:ddc0:b4b0) has joined #ceph
[1:52] * gregaf (~Adium@38.122.20.226) Quit (Read error: Connection reset by peer)
[1:54] * sagewk (~sage@38.122.20.226) has joined #ceph
[1:55] <amatter> gregaf: thanks. saw that before but was hoping there was something more straightforward. got it working now how I wanted using those config options
[1:58] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[2:01] <gregaf2> unfortunately nothing else, but glad you got it!
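
For reference, a minimal sketch of the standby-replay options described on that wiki page (daemon name and rank illustrative):

    # make mds.b a hot standby that continuously replays the journal of rank 0
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [mds.b]
            mds standby replay = true
            mds standby for rank = 0
    EOF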
[2:12] <amatter> I'm trying to troubleshoot a performance issue. Using rados bench the cluster writes for a few seconds then there's a long delay. rados bench: http://pastebin.com/AS6b8L1G
[2:13] <gregaf2> what are you running on?
[2:14] <amatter> ubuntu 12.04 / 0.48.2argonaut
[2:15] <amatter> 8 osds all identical mdadm dual drive raid0
[2:15] * sagelap1 (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:16] <amatter> also there's a three or four second delay when I do a "ceph -s" from any machine in the cluster. not sure if that's related to the same issue
[2:16] <gregaf2> so the pattern looks like you're getting fast commits to your journal and then taking a while to flush them out to your backing store
[2:16] <gregaf2> but a 4-5MB/s backing store off of 4MB writes is…really slow
[2:17] <amatter> yes, that's what I thought, but the journal is on the same disks as the data, not a separate partition or sdd
[2:17] <amatter> *ssd
[2:17] <gregaf2> so you should check your disks using dd with conv=fdatasync, or some equivalent
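
A concrete form of that dd test (path and sizes illustrative; conv=fdatasync forces the data to disk before dd reports a rate):

    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=4M count=256 conv=fdatasync
    rm /var/lib/ceph/osd/ceph-0/ddtest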
[2:19] <gregaf2> 3-4 seconds for a ceph -s is also fairly slow, but I dunno what else could do that
[2:26] * yehudasa_ (~yehudasa@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:28] * sagelap (~sage@152.sub-70-197-140.myvzw.com) has joined #ceph
[2:29] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[2:31] <amatter> hmm. I think the performance issue is a problem with btrfs. If I rebuild my osds using xfs rather than btrfs, what features do I lose?
[2:31] <gregaf2> nothing from the top level
[2:33] <gregaf2> the OSDs will all use writeahead rather than parallel journaling
[2:33] <gregaf2> I don't think you care about any of these details; nobody else has so far :)
[2:34] <amatter> I am, in fact, very interested in performance. just bought about 100 ssds to build a cluster entirely out of ssds
[2:34] <amatter> if the entire osd is on a ssd do I even need a journal?
[2:34] <sjust> very yes
[2:34] <gregaf2> the journal is a correctness-under-failures thing as well as a performance thing
[2:35] * maelfius (~mdrnstm@66.209.104.107) Quit (Quit: Leaving.)
[2:36] <amatter> ok, I am also looking at an upgrade to my current configuration using ssd for journal plus raid0 of spinning disks for osd data
[2:36] <amatter> but in that case probably parallel write isn't so important because the disks will always lag far behind the ssd journal
[2:37] <gregaf2> so there's the change in journaling, which you are most likely to care about, but is only an issue if you are doing reads to an object *immediately* following writes
[2:39] <gregaf2> and with SSDs it's even less of an issue
[2:39] <gregaf2> all the other costs I can think of are things like snapshots being moderately less efficient because the OSD's FileStore has to handle the COWing instead of letting the FS do it internally
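
For reference, the journaling mode can also be forced in ceph.conf rather than left to the filesystem-based default (a sketch; rarely needed in practice):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
            filestore journal writeahead = true   # the default on xfs/ext4
            # filestore journal parallel = true   # the default on btrfs
    EOF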
[2:41] <gregaf2> I'm off for the evening but I'll look later if you've got more questions
[2:41] * gregaf2 (~Adium@2607:f298:a:607:ac37:7738:ddc0:b4b0) Quit (Quit: Leaving.)
[2:42] <amatter> ok. so far each of my osds has around a terabyte of data and the performance gets progressively worse on btrfs as the filesystem grows. On the same machine and the same disks, I have a boot partition running ext4 and the dd w/fdatasync gives me about 40MB/s. Running it on the btrfs osd partition I get 1MB/s or less, this is with no load on the cluster and no pgs moving around
[2:43] <joshd> are you using kernel 3.4 or 3.5?
[2:43] <joshd> there were a bunch of improvements in those for btrfs
[2:43] <amatter> 3.2
[2:44] <amatter> hmm. sounds like I should upgrade the kernel first and see if things improve
[2:45] <amatter> thanks all for your help.
[2:52] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[2:53] * chutzpah (~chutz@100.42.98.5) Quit (Ping timeout: 480 seconds)
[2:54] * loicd (~loic@magenta.dachary.org) has joined #ceph
[2:54] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[2:58] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has left #ceph
[3:05] * sagelap (~sage@152.sub-70-197-140.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:10] * sagelap (~sage@91.sub-70-197-139.myvzw.com) has joined #ceph
[3:18] * ChanServ sets mode +o sagelap
[3:18] * sagelap changes topic to 'v0.52 is out with rbd cloning and locking support'
[3:35] * Cube (~Adium@12.248.40.138) Quit (Ping timeout: 480 seconds)
[3:39] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has joined #ceph
[3:41] * amatter (~amatter@209.63.136.130) Quit (Ping timeout: 480 seconds)
[3:46] * sagelap (~sage@91.sub-70-197-139.myvzw.com) Quit (Ping timeout: 480 seconds)
[4:10] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[4:12] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:22] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:23] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:47] * amatter (~amatter@c-174-52-137-136.hsd1.ut.comcast.net) has joined #ceph
[4:50] * amatter_ (~amatter@209.63.136.130) has joined #ceph
[4:55] * amatter (~amatter@c-174-52-137-136.hsd1.ut.comcast.net) Quit (Ping timeout: 480 seconds)
[5:46] * Sergey (~Sergey@12.248.40.138) has joined #ceph
[5:47] <Sergey> so I just upgraded from 0.48 to 0.52 .. everything seems to start okay, but I get flooded with "warning: line 48: 'host' in section 'osd.4' redefined" type messages
[5:47] <Sergey> maybe the config format changed, but I don't see anything about that in the docs
[5:57] <Sergey> oops, nevermind, I had the same config section in the file twice!
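
The redefinition warning generally means the same section header appears more than once; a quick way to check (illustrative):

    grep -n '^\[osd\.4\]' /etc/ceph/ceph.conf   # more than one match means the section is duplicated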
[5:57] * Sergey (~Sergey@12.248.40.138) Quit ()
[6:06] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[6:26] * houkouonchi-work (~linux@12.248.40.138) Quit (Read error: Connection reset by peer)
[7:00] * gaveen (~gaveen@112.135.146.151) has joined #ceph
[7:04] * dmick (~dmick@2607:f298:a:607:2c4b:49ec:38c1:e567) Quit (Quit: Leaving.)
[7:07] * deepsa (~deepsa@122.172.2.82) Quit (Read error: Connection reset by peer)
[7:08] * deepsa (~deepsa@122.172.157.106) has joined #ceph
[7:31] * glowell (~Adium@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[7:38] * deepsa (~deepsa@122.172.157.106) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[7:39] * deepsa (~deepsa@122.172.157.106) has joined #ceph
[8:06] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has joined #ceph
[8:27] * glowell (~Adium@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:27] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:46] * Ryan_Lane1 (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:46] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[8:55] * amatter (~amatter@209.63.136.130) has joined #ceph
[8:58] * amatter_ (~amatter@209.63.136.130) Quit (Ping timeout: 480 seconds)
[9:02] * Cube1 (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[9:05] * Cube1 (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit ()
[9:08] * amatter (~amatter@209.63.136.130) Quit (Ping timeout: 480 seconds)
[9:40] * sagelap (~sage@94.sub-70-197-140.myvzw.com) has joined #ceph
[10:01] * loicd (~loic@magenta.dachary.org) has joined #ceph
[10:24] * The_Bishop_ (~bishop@e179021006.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[10:34] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[10:34] * loicd (~loic@magenta.dachary.org) has joined #ceph
[11:07] * Ryan_Lane1 (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[11:15] * The_Bishop (~bishop@2001:470:50b6:0:6d2c:7b18:867f:63cd) has joined #ceph
[11:38] * pentabular (~sean@adsl-70-231-141-17.dsl.snfc21.sbcglobal.net) has left #ceph
[11:39] * mistur (~yoann@kewl.mistur.org) Quit (Read error: Operation timed out)
[11:53] * mistur (~yoann@kewl.mistur.org) has joined #ceph
[11:57] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[12:02] * SkyEye (~gaveen@112.135.140.203) has joined #ceph
[12:04] * sagelap1 (~sage@76.89.177.113) has joined #ceph
[12:07] * gaveen (~gaveen@112.135.146.151) Quit (Ping timeout: 480 seconds)
[12:07] * sagelap (~sage@94.sub-70-197-140.myvzw.com) Quit (Ping timeout: 480 seconds)
[12:11] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[12:12] * mistur (~yoann@kewl.mistur.org) Quit (Read error: Operation timed out)
[12:36] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[12:37] * mistur (~yoann@kewl.mistur.org) has joined #ceph
[13:06] * cattelan (~cattelan@2001:4978:267:0:21c:c0ff:febf:814b) Quit (Read error: Operation timed out)
[13:17] * cattelan (~cattelan@2001:4978:267:0:21c:c0ff:febf:814b) has joined #ceph
[13:38] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[13:45] * SkyEye is now known as gaveen
[14:13] * antsygeek (~antsygeek@gw.ptr-62-65-141-67.customer.ch.netstream.com) has joined #ceph
[14:14] <antsygeek> hmm. what do i need on client side? i'm getting an error saying that the ceph kernel module isn't loaded: http://pastie.org/4860303
[14:14] <antsygeek> "modprobe ceph" fails with: "FATAL: Module ceph not found"
[14:16] <vhasi> that usually means you don't have the ceph module built and in the proper place for modprobe to find it
[14:16] <antsygeek> vhasi: i've added the debian repos and installed ceph
[14:16] <antsygeek> shouldn't that also install the kernel module?
[14:16] <antsygeek> because the ceph servers are running
[14:17] <antsygeek> or is the server in userspace only?
[14:17] <vhasi> the deb packages might expect you to make sure the kernel module is available on your own, i don't really know since i don't use Debian myself
[14:19] <antsygeek> hm and where would i get the kernel module? do i need to compile it by myself?
[14:20] <vhasi> you could check if the repos contain a pre-built module, otherwise you'll probably have to build it yourself
[14:20] <antsygeek> uh
[14:35] <antsygeek> i must be the only one who wants to mount a ceph volume
[14:36] <antsygeek> or i just can't find the information
[14:36] <antsygeek> some tutorials just say mount -t ceph and you're fine...
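
For reference, the kernel-client mount those tutorials gesture at looks roughly like this (monitor address and auth details are illustrative):

    mkdir -p /mnt/ceph
    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret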
[14:39] * MikeMcClurg (~mike@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[14:48] <Fruit> antsygeek: which debian version?
[14:49] <antsygeek> Fruit: squeeze
[14:49] <antsygeek> 6.0.5
[14:49] <antsygeek> amd64
[14:49] <Fruit> ah yes, the ceph module is not in that version
[14:49] <antsygeek> uh
[14:50] <Fruit> it's in wheezy though
[14:50] <antsygeek> only in debian unstable?
[14:50] <antsygeek> hmm
[14:50] <Fruit> which repository did you say you added?
[14:51] <antsygeek> these here: http://ceph.com/docs/master/install/debian/ (stable release)
[14:54] <Fruit> right, that repo does not seem to contain the kernel module. you can use your squeeze machine as a server, but not as a client
[14:54] <antsygeek> Fruit: is there a repo for production clients? :-)
[14:55] <Fruit> you could get your kernel from backports.org
[14:55] * The_Bishop (~bishop@2001:470:50b6:0:6d2c:7b18:867f:63cd) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[14:57] * gaveen (~gaveen@112.135.140.203) Quit (Remote host closed the connection)
[14:57] <antsygeek> is this temporary? what is the reason why there is no kernel module for stable?
[14:58] <Fruit> squeeze has 2.6.32, which simply didn't have ceph yet
[14:58] <antsygeek> so ceph is in the mainstream kernel?
[14:58] <Fruit> yes
[14:58] <antsygeek> ok
[14:59] <Fruit> (I should note that I'm a ceph newbie myself)
[14:59] <antsygeek> so for a quick test, i could also install debian unstable and should be able to mount
[15:02] <antsygeek> thanks Fruit
[15:02] <Fruit> well for a quick test I'd recommend that bpo kernel :)
[15:05] <antsygeek> ok, i'll try that. never played with backports so far
[15:10] <Fruit> it's fairly easy and all you'll get is an extra kernel in grub
[15:11] <Fruit> you'll be able to switch with a single reboot and remove the kernel when done
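
A sketch of the backports route on squeeze (repository line as used for squeeze-backports at the time):

    echo 'deb http://backports.debian.org/debian-backports squeeze-backports main' \
        >> /etc/apt/sources.list
    apt-get update
    apt-get install -t squeeze-backports linux-image-amd64
    # reboot into the new kernel, then: modprobe ceph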
[15:12] <antsygeek> cool
[15:27] <antsygeek> thanks Fruit, it worked :-)
[15:28] <antsygeek> i have three ceph servers with 20gb each, and i see 60gb on the client mount
[15:28] <antsygeek> have to find out how to use replicated mode though
[15:28] <antsygeek> but thanks
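
Replication is configured per pool; on argonaut something like the following applies (pool name 'data' as an example):

    ceph osd dump | grep 'rep size'   # show the current replica count for each pool
    ceph osd pool set data size 2     # keep two copies of each object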
[15:30] <Fruit> cool :)
[15:32] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:35] * loicd (~loic@magenta.dachary.org) has joined #ceph
[15:41] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:12] * Leseb_ (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[16:19] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) Quit (Read error: Operation timed out)
[16:19] * Leseb_ is now known as Leseb
[16:42] * amatter (~amatter@209.63.136.130) has joined #ceph
[17:18] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:19] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:55] * glowell (~Adium@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[18:19] * guerby (~guerby@nc10d-ipv6.tetaneutral.net) has joined #ceph
[19:01] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:01] * antsygeek (~antsygeek@gw.ptr-62-65-141-67.customer.ch.netstream.com) Quit (Read error: Connection reset by peer)
[19:02] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:24] * sagelap1 (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[19:28] * LarsFronius (~LarsFroni@2a02:8108:3c0:79:f0db:62:5d47:bc9e) has joined #ceph
[19:40] * deepsa_ (~deepsa@122.172.16.64) has joined #ceph
[19:41] * deepsa (~deepsa@122.172.157.106) Quit (Ping timeout: 480 seconds)
[19:41] * deepsa_ is now known as deepsa
[20:11] * deepsa (~deepsa@122.172.16.64) Quit (Quit: Computer has gone to sleep.)
[20:12] * deepsa (~deepsa@122.172.16.64) has joined #ceph
[20:25] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:27] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:36] * glowell1 (~Adium@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[20:36] * glowell (~Adium@c-98-210-224-250.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[20:52] * eternaleye_ (~eternaley@tchaikovsky.exherbo.org) has joined #ceph
[20:52] * gregaf (~Adium@2607:f298:a:607:25ce:c8f2:6135:761c) has joined #ceph
[20:54] * cephalobot` (~ceph@ps94005.dreamhost.com) has joined #ceph
[20:55] * dabeowul1 (dabeowulf@free.blinkenshell.org) has joined #ceph
[20:56] * spaceman139642 (l@89.184.139.88) has joined #ceph
[20:56] * jmcdice_ (~root@135.13.255.151) has joined #ceph
[20:56] * yehudasa (~yehudasa@2607:f298:a:607:e895:24db:46:69e3) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * sage1 (~sage@76.89.177.113) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * gregaf1 (~Adium@2607:f298:a:607:610d:c78a:75d2:9ef8) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * jmcdice (~root@135.13.255.151) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * cephalobot (~ceph@ps94005.dreamhost.com) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * spaceman-39642 (l@89.184.139.88) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * Enigmagic (enigmo@c-24-6-51-229.hsd1.ca.comcast.net) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * hijacker_ (~hijacker@213.91.163.5) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * dabeowulf (dabeowulf@free.blinkenshell.org) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * morse (~morse@supercomputing.univpm.it) Quit (synthon.oftc.net oxygen.oftc.net)
[20:56] * eternaleye (~eternaley@tchaikovsky.exherbo.org) Quit (synthon.oftc.net oxygen.oftc.net)
[21:07] * yehudasa (~yehudasa@2607:f298:a:607:a441:8a7d:2bd6:4130) has joined #ceph
[21:07] * hijacker_ (~hijacker@213.91.163.5) has joined #ceph
[21:08] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:08] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[21:08] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:09] * sage1 (~sage@76.89.177.113) has joined #ceph
[21:10] * danieagle (~Daniel@177.97.249.235) has joined #ceph
[21:24] * BManojlovic (~steki@212.200.241.157) has joined #ceph
[21:33] * joao (~JL@89.181.146.210) has joined #ceph
[21:34] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[22:49] * danieagle (~Daniel@177.97.249.235) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[23:25] * guerby (~guerby@nc10d-ipv6.tetaneutral.net) Quit (Ping timeout: 480 seconds)
[23:55] * The_Bishop (~bishop@2001:470:50b6:0:1536:2959:8832:280) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.