#ceph IRC Log


IRC Log for 2014-08-02

Timestamps are in GMT/BST.

[0:00] * sigsegv (~sigsegv@188.25.123.201) Quit (Quit: sigsegv)
[0:06] * sz0 (~sz0@94.55.197.185) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[0:07] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[0:11] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[0:13] * Eco (~eco@adsl-99-105-55-80.dsl.pltn13.sbcglobal.net) has joined #ceph
[0:13] * madkiss (~madkiss@178.188.60.118) Quit ()
[0:15] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[0:15] <Eco> wondering if anyone has some advice for an issue i am getting trying to set up ceph for the first time. trying to follow along with http://ceph.com/docs/master/start/quick-ceph-deploy/ but keep getting an error when running ceph health: HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
[0:16] <Eco> my google fu has not turned up any good troubleshooting that gets around this, is it a false positive or a real issue?
[0:21] <Vacum_> Eco: how many osd nodes do you have?
[0:21] <Eco> two per the initial tutorial, and a third waiting to join in
[0:22] <Vacum_> Eco: default pool size is now 3. and failure domain is set to host level. so you need at least 3 osd nodes to get those pgs clean
[0:23] <Eco> ok, in the tutorial it mentions to edit the conf file to set the default to two nodes
[0:23] <Eco> i can take that out and deploy the third
[0:23] <Vacum_> Eco: that would then only help for new pools. you can change the replication of the existing pools too
[0:24] <Vacum_> Eco: you can simply deploy the third node, join it in - and the pgs will become clean too
[0:27] <Eco> error is still there. are there any services that need to be bounced after editing the conf file?
[0:27] <Vacum_> Eco: you change the conf file how?
[0:28] <Eco> removed the entry the tutorial said to add: [default]
[0:28] <Eco> osd pool default size = 2
[0:28] <Vacum_> Eco: yes. this will only affect new pools. not existing ones
[0:28] <Eco> technically this would be a new pool correct?
[0:29] <Eco> since i have never deployed before
[0:29] <Vacum_> Eco: if you never deployed before, where do you get 192 pgs from?
[0:29] <Eco> from following the tutorial ;)
[0:29] <Vacum_> Eco: so you did already deploy
[0:29] <Eco> yes
[0:30] <Eco> conf file was edited before the deploy however
[0:30] <Eco> if that makes a difference
[0:30] <Vacum_> ah!
[0:30] <Vacum_> Eco: run ceph osd dump and pastebin the first 10 lines please
[0:31] <Eco> http://fpaste.org/122662/69322851/
[0:32] <Vacum_> Eco: see line 6: pool 0 'data' replicated size 3
[0:32] <Vacum_> Eco: this pool has a replication size of 3
[0:32] <Vacum_> Eco: as the other 2 pools too btw
[0:32] <Eco> there is a third node now
[0:32] <Eco> im assuming it went back to defaults after editing the conf?
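For readers following along: changing the replication size of the already-created pools, as Vacum_ suggests above, is a one-line command per pool. A minimal sketch, assuming the three default pools (data, metadata, rbd) shown in the osd dump:

    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    ceph osd pool set rbd size 2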
[0:33] * Praba (~oftc-webi@zccy01cs106.houston.hp.com) has joined #ceph
[0:33] <Vacum_> Eco: I have no idea why those default pools were created with repl size of 3
[0:33] * Lotus907efi (~sad@cpe-24-210-236-113.neo.res.rr.com) has joined #ceph
[0:33] <Eco> not sure either, followed the tutorial step by step
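A likely explanation, though never confirmed in the log: the quick start guide puts this setting under the [global] section of ceph.conf, and Ceph does not recognize a [default] section, so the setting would have been silently ignored and the pools created with the default size of 3. The intended snippet looks like:

    [global]
    osd pool default size = 2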
[0:33] * oro (~oro@77-59-135-139.dclient.hispeed.ch) has joined #ceph
[0:34] <Eco> i know next to nothing about ceph, is there a better beginners guide to follow than the one i mentioned?
[0:35] <Sysadmin88> just add another host
[0:35] * Eco added another host
[0:35] <Sysadmin88> how many total?
[0:35] <Eco> 3 total
[0:36] * rendar (~I@host30-181-dynamic.20-87-r.retail.telecomitalia.it) Quit ()
[0:36] <Eco> sorry
[0:36] <Eco> 1 admin node, three storage nodes
[0:37] <Praba> Do we have Ceph firefly 0.80.5 package for Debian Wheezy? I don't see the package from this link - http://ceph.com/debian-firefly/pool/main/c/ceph/
[0:37] * oblu- (~o@62.109.134.112) has joined #ceph
[0:37] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[0:38] * garphy is now known as garphy`aw
[0:41] <Lotus907efi> I do not see wheezy packages there either
[0:41] <Lotus907efi> I do not know what the bpo60+1 or bpo70+1 packages are for though
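For context: the ~bpoNN+1 suffix is Debian's backports build convention — bpo70+1 marks a build for Debian 7 (wheezy) and bpo60+1 one for Debian 6 (squeeze) — so the bpo70 packages are the wheezy builds. The repository is normally configured with an apt line along these lines (codename assumed to be wheezy):

    deb http://ceph.com/debian-firefly/ wheezy main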
[0:46] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: leaving)
[0:52] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[0:54] * oblu- (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[0:54] * oblu (~o@62.109.134.112) has joined #ceph
[0:55] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[0:56] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[0:59] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:02] * Lotus907efi (~sad@cpe-24-210-236-113.neo.res.rr.com) Quit (Quit: Leaving.)
[1:03] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[1:04] * DV (~veillard@veillard.com) has joined #ceph
[1:05] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[1:05] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[1:08] * eternaleye (~eternaley@50.245.141.73) Quit (Ping timeout: 480 seconds)
[1:10] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[1:12] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[1:14] * eternaleye (~eternaley@50.245.141.73) has joined #ceph
[1:16] <tcos> If I'm using cache tiering is it worth splitting the journal onto SSD for my spinning disks? Or should I just not bother with that
[1:17] * oms101 (~oms101@p20030057EA4DB300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:19] <tcos> I'm also running some tests on a single node. I've created a pool purely using SSD and I'm seeing performance that is 5% of the performance of a single SSD natively. Is this the expected level of overhead in Ceph? Or would this indicate a misconfiguration somewhere
[1:21] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[1:25] <mongo> single nodes typically don't do well, but it also depends on how you have your network/disks etc. configured.
[1:25] * oms101 (~oms101@p20030057EA3A8100EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:26] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) Quit (Quit: Leaving)
[1:26] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[1:27] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[1:30] <tcos> mongo, I had 2 SSDs in a pool with a single OSD node and created an RBD (RBD mapped on a separate node from the OSD node). I then mapped the rbd to iSCSI to have a test system consume over a 10gbe network.
[1:30] <tcos> I got 800 iops and 70ms average latency. However as a comparison, I mapped 1 SSD directly with the same iSCSI (LIO) software and got 30,000 iops and 1ms average latency.
[1:32] <tcos> er, let me rephrase that first part, still learning the terms. I had 2 OSDs, 1 per SSD on a single host in a pool.
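One way to separate Ceph's own overhead from the iSCSI/LIO layer (an editorial sketch; the pool name ssd-pool is assumed) is to benchmark the pool directly from a client node with rados bench:

    rados bench -p ssd-pool 30 write -t 16 --no-cleanup
    rados bench -p ssd-pool 30 rand -t 16

The --no-cleanup flag keeps the objects written by the first pass so the rand pass has something to read back.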
[1:33] * oro (~oro@77-59-135-139.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:34] * vbellur (~vijay@122.171.86.97) has joined #ceph
[1:34] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:41] * WhIteSidE (~chatzilla@wsip-70-184-76-157.tc.ph.cox.net) has joined #ceph
[1:42] <WhIteSidE> Hello all
[1:42] <WhIteSidE> I'm tearing my hair out over a problem with rbd/ceph with some missing dependency
[1:42] <WhIteSidE> When I run rbd on a couple of nodes I get the message
[1:42] <WhIteSidE> rbd: symbol lookup error: rbd: undefined symbol: _Z18common_init_finishP11CephContexti
[1:43] <WhIteSidE> There must be some library I've failed to install, but for the life of me, I cannot figure out what it is
[1:43] <WhIteSidE> On the machines where it works, I have (as far as I can tell), the exact same version of ceph
[1:44] <mongo> tcos, with the default crush map it would be slow in that config.
[1:44] <WhIteSidE> # ceph --version
[1:44] <WhIteSidE> ceph version 0.81 (8de9501df275a5fe29f2c64cb44f195130e4a8fc)
[1:44] <WhIteSidE> # ceph --version
[1:44] <WhIteSidE> ceph version 0.81 (8de9501df275a5fe29f2c64cb44f195130e4a8fc)
[1:47] <tcos> mongo, okay, good to know
[1:50] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[1:54] <joshd1> WhIteSidE: make sure you have matching versions of librbd1 and librados2 installed
[1:54] <WhIteSidE> joshd1: Hmm, checking
[1:54] <tcos> Is it worth splitting the journals still when using an SSD tier?
[1:54] <WhIteSidE> joshd1: Dang. They're not matching, I wonder how that happened
[1:55] <joshd1> WhIteSidE: strange, the package requirements should prevent that, unless they're both quite old versions
[1:55] <WhIteSidE> No
[1:55] <WhIteSidE> That's what's weird about it
[1:55] <WhIteSidE> yum be trippin'
[1:56] <WhIteSidE> Yep
[1:56] <WhIteSidE> Fixed
[1:56] <WhIteSidE> I had to do it as a yum transaction though
[1:56] <WhIteSidE> Weird
[1:56] <WhIteSidE> And this affected another system as well
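For anyone hitting the same undefined-symbol error: checking and realigning the packages on an RPM-based system looks roughly like this:

    rpm -q ceph librbd1 librados2        # the three versions should match
    yum update ceph librbd1 librados2    # update them together, in one transaction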
[1:56] * sjustwork (~sam@2607:f298:a:607:b547:4c7d:784e:a50f) Quit (Quit: Leaving.)
[1:57] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[1:57] <WhIteSidE> Now I'm getting failed to load rbd kernel module
[1:57] <WhIteSidE> Yeah, modprobe rbd says it's not found
[1:57] <WhIteSidE> Is it in the ceph package?
[2:02] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[2:03] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Read error: Operation timed out)
[2:05] * gregsfortytwo (~Adium@2607:f298:a:607:a9cb:324b:75bf:b0f2) Quit (Quit: Leaving.)
[2:06] * gregsfortytwo (~Adium@38.122.20.226) has joined #ceph
[2:06] * bandrus1 (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[2:09] * Praba (~oftc-webi@zccy01cs106.houston.hp.com) Quit (Quit: Page closed)
[2:11] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[2:19] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) has joined #ceph
[2:19] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[2:20] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:20] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[2:24] * gregsfortytwo1 (~Adium@126-206-207-216.dsl.mi.winntel.net) Quit (Quit: Leaving.)
[2:24] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:28] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[2:29] <dmick> WhIteSidE: no, it's part of the kernel
[2:30] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:30] <WhIteSidE> Yeah, I guess upgraded ceph needs an upgraded kernel
[2:30] <dmick> no; rbd was either there all along or it wasn't
[2:30] <dmick> you don't necessarily need the kernel module
[2:31] <WhIteSidE> I don't know what happened with this upgrade then, since it went forward in kernel version, but it's not working
[2:31] <WhIteSidE> Or apparently, some package set grub to boot a 2.x kernel instead of the 3.x kernel
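A quick way to verify what dmick describes — whether the running kernel ships the rbd module at all — is:

    uname -r       # which kernel actually booted
    modinfo rbd    # errors out if this kernel has no rbd module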
[2:32] * vbellur (~vijay@122.171.86.97) Quit (Quit: Leaving.)
[2:34] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:35] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[2:36] <WhIteSidE> Thanks for all the help guys
[2:36] * WhIteSidE (~chatzilla@wsip-70-184-76-157.tc.ph.cox.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 30.0/20140605174243])
[2:39] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) has joined #ceph
[2:45] * aknapp (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Read error: Operation timed out)
[2:46] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[2:47] * aknapp_ (~aknapp@fw125-01-outside-active.ent.mgmt.glbt1.secureserver.net) Quit (Ping timeout: 480 seconds)
[3:00] * capri_on (~capri@212.218.127.222) has joined #ceph
[3:07] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[3:14] * oblu (~o@62.109.134.112) has joined #ceph
[3:15] * vbellur (~vijay@42.104.62.187) has joined #ceph
[3:15] * fsimonce (~simon@host133-25-dynamic.250-95-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[3:20] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:28] * joshd1 (~jdurgin@2602:306:c5db:310:88dd:b962:f959:2aec) Quit (Quit: Leaving.)
[3:28] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) Quit (Read error: Connection reset by peer)
[3:28] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) has joined #ceph
[3:51] * vbellur (~vijay@42.104.62.187) Quit (Quit: Leaving.)
[4:12] * LeaChim (~LeaChim@host86-161-89-237.range86-161.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[4:15] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[4:19] * AfC (~andrew@b4B81.static.pacific.net.au) has joined #ceph
[4:57] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) Quit (Read error: No route to host)
[4:57] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) has joined #ceph
[5:05] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) Quit (Read error: No route to host)
[5:05] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) has joined #ceph
[5:06] * AfC (~andrew@b4B81.static.pacific.net.au) Quit (Ping timeout: 480 seconds)
[5:09] * KevinPerks1 (~Adium@cpe-098-025-128-231.sc.res.rr.com) has joined #ceph
[5:09] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) Quit (Read error: Connection reset by peer)
[5:15] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[5:21] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[5:22] * oblu (~o@62.109.134.112) has joined #ceph
[5:23] * KevinPerks1 (~Adium@cpe-098-025-128-231.sc.res.rr.com) Quit (Quit: Leaving.)
[5:28] * vbellur (~vijay@122.171.86.97) has joined #ceph
[5:29] * Vacum (~vovo@i59F7947F.versanet.de) has joined #ceph
[5:36] * Vacum_ (~vovo@88.130.216.9) Quit (Ping timeout: 480 seconds)
[6:16] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:27] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Remote host closed the connection)
[6:27] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[6:37] * scuttlemonkey is now known as scuttle|afk
[6:58] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[7:08] * vbellur (~vijay@122.171.86.97) Quit (Quit: Leaving.)
[7:09] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[7:26] * bandrus (~Adium@216.57.72.205) has joined #ceph
[7:28] * bandrus (~Adium@216.57.72.205) Quit ()
[7:48] * Cube (~Cube@66.87.130.180) Quit (Quit: Leaving.)
[8:18] * vbellur (~vijay@122.172.107.83) has joined #ceph
[9:10] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[9:11] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[9:41] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) has joined #ceph
[9:58] * rotbeard (~redbeard@dslb-188-103-200-006.188.103.pools.vodafone-ip.de) has joined #ceph
[10:00] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[10:04] * madkiss (~madkiss@178.188.60.118) Quit (Read error: Connection reset by peer)
[10:14] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) Quit (Quit: Leaving)
[10:17] * flaf (~flaf@2001:41d0:1:7044::1) has joined #ceph
[10:20] <flaf> Hi, is there a ".pdf" version of the online documentation of ceph?
[10:21] * LeaChim (~LeaChim@host86-161-89-237.range86-161.btcentralplus.com) has joined #ceph
[10:21] <flaf> (to read on my touchpad)
[10:37] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) has joined #ceph
[10:40] * oro (~oro@84-75-253-80.dclient.hispeed.ch) has joined #ceph
[10:45] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) has joined #ceph
[10:45] * steki (~steki@212.200.65.136) has joined #ceph
[11:04] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:07] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) Quit (Remote host closed the connection)
[11:14] * d3fault (~user@ip70-171-243-167.tc.ph.cox.net) has joined #ceph
[11:15] <d3fault> couldn't find anything about this in documentation: how does ceph handle two concurrent writes to the same object/key? is there a way to write-but-fail-if-the-object-exists?
[11:27] <d3fault> or, similarly, making sure that the object you're writing/modifying is the same one you read a few moments earlier. ex: 2 web browsers open to a "user profile" page. in the first browser the user updates their age from 17 to 21 and submits. the second browser now has a cached/stale age (17), so if they were to change their email address or any other field, the age would be erroneously changed back to its original value of 17 (what i want to happen
[11:27] <d3fault> is for the second browser submit to fail). couchbase has a thing called CAS for this purpose
[11:52] <iggy> d3fault: that's something your application should handle
[11:54] <d3fault> hmm, darn. thanks for your answer
[11:59] <d3fault> rados_write_op_assert_exists sounds like the exact opposite of what i want, and it looks like it's something that is batched (i'm new to ceph so really going by what i've read in the docs). wouldn't it be trivial, and provide a lot of functionality, to provide a rados_write_op_assert_doesnt_exist?
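An editorial aside: librados does in fact expose both halves of what d3fault is asking for, assuming a build recent enough to have rados_write_op_assert_version(). rados_write_op_create() with LIBRADOS_CREATE_EXCLUSIVE fails with -EEXIST if the object already exists, and rados_write_op_assert_version() gives a CAS-style guard much like Couchbase's. A minimal C sketch (pool name "rbd" and object key "profile:42" are hypothetical; error handling trimmed):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;

        rados_create(&cluster, NULL);            /* default client identity */
        rados_conf_read_file(cluster, NULL);     /* default ceph.conf search path */
        rados_connect(cluster);
        rados_ioctx_create(cluster, "rbd", &io); /* pool name assumed */

        /* 1. Write, but fail if the object already exists. */
        rados_write_op_t op = rados_create_write_op();
        rados_write_op_create(op, LIBRADOS_CREATE_EXCLUSIVE, NULL);
        rados_write_op_write_full(op, "hello", 5);
        int r = rados_write_op_operate(op, io, "profile:42", NULL, 0);
        rados_release_write_op(op);
        if (r == -EEXIST)
            printf("create refused: object already exists\n");

        /* 2. CAS-style update: read the object first, then guard the write
         * with the version that read observed. */
        char buf[64];
        rados_read(io, "profile:42", buf, sizeof(buf), 0);
        uint64_t ver = rados_get_last_version(io);

        op = rados_create_write_op();
        rados_write_op_assert_version(op, ver);  /* fail if object changed since */
        rados_write_op_write_full(op, "howdy", 5);
        r = rados_write_op_operate(op, io, "profile:42", NULL, 0);
        rados_release_write_op(op);
        if (r < 0)
            printf("lost the race, retry: %s\n", strerror(-r));

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }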
[12:00] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) has joined #ceph
[12:04] * rendar (~I@87.19.176.94) has joined #ceph
[12:09] * nljmo_ (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[12:11] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[12:21] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) Quit (Remote host closed the connection)
[12:24] * joao|lap (~JL@bl8-144-15.dsl.telepac.pt) has joined #ceph
[12:24] * ChanServ sets mode +o joao|lap
[12:26] * Nacer_ (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) has joined #ceph
[12:49] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:57] * joao|lap (~JL@bl8-144-15.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[12:59] * Nacer_ (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) Quit (Remote host closed the connection)
[12:59] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) has joined #ceph
[12:59] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) Quit (Remote host closed the connection)
[13:02] * rotbeard (~redbeard@dslb-188-103-200-006.188.103.pools.vodafone-ip.de) Quit (Quit: Leaving)
[13:06] * cok (~chk@46.30.211.29) has joined #ceph
[13:06] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) has joined #ceph
[13:16] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) Quit (Remote host closed the connection)
[13:23] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (Read error: Operation timed out)
[13:25] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[13:27] * cok (~chk@46.30.211.29) Quit (Quit: Leaving.)
[13:38] * DV (~veillard@veillard.com) Quit (Ping timeout: 480 seconds)
[13:56] * theanalyst (theanalyst@0001c1e3.user.oftc.net) has joined #ceph
[14:10] * beardo_ (~sma310@216-15-72-201.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) Quit (Read error: Operation timed out)
[14:23] * vbellur (~vijay@122.172.107.83) Quit (Ping timeout: 480 seconds)
[14:31] * diegows (~diegows@190.190.5.238) has joined #ceph
[14:45] * steki (~steki@212.200.65.136) Quit (Ping timeout: 480 seconds)
[14:49] * joao|lap (~JL@78.29.191.247) has joined #ceph
[14:49] * ChanServ sets mode +o joao|lap
[15:01] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[15:34] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[15:37] * oro (~oro@84-75-253-80.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[15:48] * oro (~oro@77-59-135-139.dclient.hispeed.ch) has joined #ceph
[15:49] * joao|lap (~JL@78.29.191.247) Quit (Ping timeout: 480 seconds)
[15:51] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:20] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:20] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) has joined #ceph
[16:25] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[16:34] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[16:44] * KevinPerks (~Adium@cpe-098-025-128-231.sc.res.rr.com) Quit (Read error: Connection reset by peer)
[16:44] * dmsimard_away is now known as dmsimard
[16:51] * jtaguinerd (~Adium@203.215.116.76) has joined #ceph
[16:54] * DV__ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[17:16] * jtaguinerd (~Adium@203.215.116.76) Quit (Quit: Leaving.)
[17:20] * dmsimard is now known as dmsimard_away
[17:24] * Sysadmin88_ (~IceChat77@94.4.6.195) has joined #ceph
[17:28] * Sysadmin88 (~IceChat77@054287fa.skybroadband.com) Quit (Ping timeout: 480 seconds)
[17:30] * mtl2 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[17:30] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (Read error: Connection reset by peer)
[17:36] * Sysadmin88 (~IceChat77@176.250.163.7) has joined #ceph
[17:40] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[17:40] * Sysadmin88_ (~IceChat77@94.4.6.195) Quit (Ping timeout: 480 seconds)
[17:49] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Quit: Leaving.)
[18:03] * Sysadmin88_ (~IceChat77@176.250.167.32) has joined #ceph
[18:08] * Sysadmin88 (~IceChat77@176.250.163.7) Quit (Ping timeout: 480 seconds)
[18:28] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[18:28] * ChanServ sets mode +v andreask
[18:29] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit ()
[18:30] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[18:30] * ChanServ sets mode +v andreask
[18:31] * Sysadmin88_ (~IceChat77@176.250.167.32) Quit (Ping timeout: 480 seconds)
[18:32] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit ()
[18:32] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[18:32] * ChanServ sets mode +v andreask
[18:39] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[18:42] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[18:42] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:57] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[18:58] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:10] * joao|lap (~JL@78.29.191.247) has joined #ceph
[19:10] * ChanServ sets mode +o joao|lap
[19:19] * sjm (~sjm@108.53.250.33) Quit (Read error: Operation timed out)
[19:21] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[19:25] * D-Spair (~dphillips@cpe-74-130-79-134.swo.res.rr.com) Quit (Quit: ZNC - http://znc.in)
[19:29] * thomnico (~thomnico@2a01:e35:8b41:120:d49c:34a7:fc4d:6c6) has joined #ceph
[19:30] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:34] * thomnico (~thomnico@2a01:e35:8b41:120:d49c:34a7:fc4d:6c6) Quit ()
[19:38] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (Remote host closed the connection)
[19:40] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[19:45] * ghost1 (~pablodelg@c-174-61-25-255.hsd1.fl.comcast.net) has joined #ceph
[19:57] * Sysadmin88 (~IceChat77@176.250.173.243) has joined #ceph
[19:58] * ghost1 (~pablodelg@c-174-61-25-255.hsd1.fl.comcast.net) Quit (Quit: ghost1)
[20:05] * Sysadmin88 (~IceChat77@176.250.173.243) Quit (Ping timeout: 480 seconds)
[20:07] * vbellur (~vijay@122.172.243.14) has joined #ceph
[20:10] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[20:11] * oro (~oro@77-59-135-139.dclient.hispeed.ch) Quit (Remote host closed the connection)
[20:17] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[20:18] * rendar (~I@87.19.176.94) Quit (Read error: Operation timed out)
[20:22] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[20:24] * rendar (~I@87.19.176.94) has joined #ceph
[20:39] * finster (~finster@cmdline.guru) Quit (Ping timeout: 480 seconds)
[20:49] * joao|lap (~JL@78.29.191.247) Quit (Ping timeout: 480 seconds)
[20:55] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:26] * ahmett (~horasan@88.244.182.84) has joined #ceph
[21:26] * ahmett (~horasan@88.244.182.84) Quit (autokilled: Do not spam other people. Mail support@oftc.net if you feel this is in error. (2014-08-02 19:25:07))
[21:32] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[21:37] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[21:42] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[21:43] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Remote host closed the connection)
[21:51] * rotbeard (~redbeard@dslb-188-103-200-006.188.103.pools.vodafone-ip.de) has joined #ceph
[22:06] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[22:11] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[22:17] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[22:25] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[22:39] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[22:41] * rotbeard (~redbeard@dslb-188-103-200-006.188.103.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[22:43] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[22:44] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[22:56] * rotbeard (~redbeard@dslb-188-103-200-006.188.103.pools.vodafone-ip.de) has joined #ceph
[22:59] * steki (~steki@212.200.65.129) has joined #ceph
[22:59] * rotbeard (~redbeard@dslb-188-103-200-006.188.103.pools.vodafone-ip.de) Quit ()
[23:19] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[23:32] * diegows (~diegows@190.190.5.238) has joined #ceph
[23:36] * bandrus (~Adium@216.57.72.205) has joined #ceph
[23:37] * markl (~mark@knm.org) Quit (Remote host closed the connection)
[23:44] * ismell (~ismell@host-24-56-188-10.beyondbb.com) Quit (Ping timeout: 480 seconds)
[23:45] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[23:48] * ismell (~ismell@host-24-52-35-110.beyondbb.com) has joined #ceph
[23:54] * sputnik13 (~sputnik13@99.166.16.162) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.