#ceph IRC Log

IRC Log for 2014-08-29

Timestamps are in GMT/BST.

[0:00] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[0:00] <bens> there are about 90 people anxiously waiting for you to ask your question.
[0:00] <bens> so make it a good one.
[0:00] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[0:00] <bens> (actually, there are 331 people here)
[0:00] <bens> no pressure.
[0:05] <ron-slc> Stage fright. :)
[0:05] <seapasul1i> hahaha
[0:05] * dmsimard is now known as dmsimard_away
[0:05] <seapasul1i> kentoj: ^^
[0:05] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Read error: Connection reset by peer)
[0:05] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[0:05] <bens> Anyone ever run into max open fds in ceph?
[0:05] <bens> i am digging into it and it looks like ceph sets it up to the hard limit
[0:06] * sjusthm (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[0:08] <bens> i ask because they are hovering right around 1k
[0:09] <seapasul1i> the max open files?
[0:09] <bens> yes
[0:09] <bens> i feel like something in my cluster is running out of resources
[0:09] <seapasul1i> I think I recently increased that on my 36-disk servers while a Ceph training guy was here, but it was only out of suspicion. No actual direct error message led me to do so
[0:09] <bens> the intermittent failures are driving me bonkers
[0:10] <seapasul1i> yeah i was having intermittent failures (Still am sort of but not as bad)
[0:10] <bens> seapasul1i: there is a thread going around on openstack-operators
[0:10] <bens> http://tracker.ceph.com/issues/6142
[0:10] <bens> instead of max open files, maybe it's max pid
[0:10] <bens> (for you)
[0:11] <bens> http://lists.openstack.org/pipermail/openstack-operators/2014-August/005015.html
[0:11] <seapasul1i> yup this is exactly what we did. I upped pids and open files
[0:11] <bens> is that you?
[0:11] <seapasul1i> nope not I
[0:11] * jobewan (~jobewan@snapp.centurylink.net) Quit (Quit: Leaving)
[0:12] <seapasul1i> We just did this this week.
[0:12] <bandrus> https://github.com/ceph/ceph/blob/firefly/src/common/config_opts.h#L32
[0:12] <kentoj> sorry about that
[0:12] <kentoj> For some reason there was a huge chat delay at the start
[0:12] * steki (~steki@net249-134-245-109.mbb.telenor.rs) Quit (Quit: I'm off, you do what you want...)
[0:12] <bens> thanks bandrus
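For reference, the knobs under discussion above are the per-process file descriptor limit, the kernel-wide PID limit, and ceph.conf's own max open files option (the one in the config_opts.h link). A rough sketch, with purely illustrative values:

    # check the fd limit an OSD is actually running with
    grep 'Max open files' /proc/$(pidof ceph-osd | awk '{print $1}')/limits

    # raise the kernel PID limit (persist it in /etc/sysctl.conf)
    sysctl -w kernel.pid_max=4194303

    # ceph.conf: have the daemons raise their own fd limit at startup
    [global]
        max open files = 131072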
[0:13] <kentoj> my question is this: do we know the approximate date range that Ceph's object store and block storage became production ready?
[0:13] <bens> i don't think we do
[0:13] <bens> because you don't
[0:13] <kentoj> I don't
[0:13] <bens> and I think we is inclusive.
[0:14] <bens> why are you asking?
[0:14] <kentoj> Jude Fitzgerald from Inktank told me that the object store and block storage are production ready today and they hope to have the FS production ready in 12 months.
[0:14] <bens> and what do you mean by production ready?
[0:14] <iggy> there are people using the fs in production now
[0:14] <kentoj> I am evaluating different distributed filesystems to pick the one I want to integrate with my web app.
[0:14] <bens> On July 3, 2012, the Ceph development team released Argonaut, the first major "stable" release of Ceph.
[0:14] <iggy> that doesn't necessarly mean it's a good choice for everybody
[0:15] <bens> http://en.wikipedia.org/wiki/Ceph_(software)
[0:15] <kentoj> I am primarily concerned with the stability and maturity of the core part of the technology.
[0:15] <kentoj> which I thought was the object store
[0:15] <lurbs> What they mean by 'production ready' is generally 'able to get a support contract from Inktank/RedHat'.
[0:15] <kentoj> I don't know much about filesystems or distributed filesystems yet. I am on the beginning of the learning curve for sure
[0:16] * sjustlaptop (~sam@mb10436d0.tmodns.net) has joined #ceph
[0:16] <iggy> kentoj: what part of ceph are you planning on using?
[0:16] * linuxkidd_ (~linuxkidd@rtp-isp-nat-pool1-1.cisco.com) Quit (Remote host closed the connection)
[0:16] <kentoj> I am hoping to store documents like .docx, .pdf, .png, .xlsx, and the likes in the object store through Ceph's ReST gateway.
[0:17] <iggy> then you should be fine
[0:17] <seapasul1i> like swift/s3? if so I can say it works fine.
[0:17] <seapasul1i> attest*
[0:18] <bens> kentoj: some people would say s3 isn't production ready
[0:18] <bens> it all depends on you.
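A minimal sketch of the sort of usage kentoj describes, going through radosgw's Swift-compatible API with the standard swift client (the endpoint, user and key below are made up for illustration):

    # upload a document into a 'documents' container via radosgw
    swift -A http://gw.example.com/auth/1.0 -U appuser:swift -K 'SECRETKEY' upload documents report.pdf

    # confirm it landed
    swift -A http://gw.example.com/auth/1.0 -U appuser:swift -K 'SECRETKEY' list documents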
[0:19] <kentoj> Do I have anything to worry about with the FS not being production ready?
[0:19] <kentoj> I thought the FS was built on top of Ceph's object store so that if the Object store was production ready then I wouldn't need to worry about the FS. Is that right?
[0:21] <seapasul1i> http://ceph.com/docs/master/cephfs/ -- Important Ceph FS is currently not recommended for production data.
[0:21] <bens> so the FS sits on top of the object store.
[0:22] <bens> if you use the object store without the FS, you can consider it production ready.
[0:22] <seapasul1i> https://wiki.ceph.com/FAQs/Is_Ceph_Production-Quality%3F
[0:23] <bens> seapasul1i: are you cheating? using google or something
[0:23] <seapasul1i> hahaha
[0:23] <seapasul1i> what?
[0:23] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[0:24] <kentoj> I did read that resource and have been reading a few articles though none seem to address the topic in a way that answers my questions.
[0:25] <kentoj> Thanks for your help though
[0:25] <kentoj> It does explicitly state that Ceph's object store is production ready
[0:26] <kentoj> However, it does not say when that happened.
[0:27] <seapasul1i> Can you clarify? perhaps we can help. What do you mean by "production ready"
[0:28] <iggy> you're unlikely to find that kind of information in any project
[0:28] <kentoj> I mean ready to integrate into a small-scale (3000 user) content management system web app. I need it to be reliable (avoid data loss) and be able to scale up for the future.
[0:29] <kentoj> Right, I might just have to see if I can contact one of the development team.
[0:29] <iggy> it's more a matter of "okay, we haven't had severe issues with this in a while, it's good to go"
[0:29] <kentoj> or one of Inktank's customers that uses it
[0:29] <kentoj> Thanks for your input iggy, bens, and seapasul1i
[0:30] <iggy> if you want an absolute date, I'd say bens answer was the best "18:14 < bens> On July 3, 2012, the Ceph development team released Argonaut, the first major "stable" release of Ceph."
[0:30] * julian (~julian@221.237.148.132) Quit (Ping timeout: 480 seconds)
[0:31] * julian (~julian@221.237.148.132) has joined #ceph
[0:39] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:41] * rendar (~I@host108-9-dynamic.2-79-r.retail.telecomitalia.it) Quit ()
[0:42] * kentoj (~oftc-webi@173-165-128-130-utah.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[0:45] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:48] * sjustlaptop (~sam@mb10436d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[0:52] * mib_e538wk (bca75587@78.129.202.38) has joined #ceph
[0:53] * mib_e538wk (bca75587@78.129.202.38) Quit ()
[1:00] * cookednoodles_ (~eoin@eoin.clanslots.com) has joined #ceph
[1:02] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[1:03] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Ping timeout: 480 seconds)
[1:04] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[1:04] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:06] * JC1 (~JC@46.189.28.89) Quit (Quit: Leaving.)
[1:09] * lofejndif (~lsqavnbok@tor.h4ck.me) has joined #ceph
[1:11] * nwat (~textual@eduroam-238-17.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:13] * sjustlaptop (~sam@172.56.17.10) has joined #ceph
[1:22] * oms101 (~oms101@p20030057EA4B4600C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:23] * dmsimard_away is now known as dmsimard
[1:24] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[1:28] * Nacer (~Nacer@2001:41d0:fe82:7200:6d4c:6227:2ba4:74f1) Quit (Remote host closed the connection)
[1:31] * oms101 (~oms101@p20030057EA4CC400C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:36] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) has joined #ceph
[1:45] * sjustlaptop1 (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) has joined #ceph
[1:46] * sjustlaptop1 (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) Quit ()
[1:46] * sjustlaptop1 (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) has joined #ceph
[1:48] * linjan (~linjan@83.149.9.64) Quit (Ping timeout: 480 seconds)
[1:48] * sjustlaptop (~sam@172.56.17.10) Quit (Ping timeout: 480 seconds)
[1:49] * angdraug (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[1:50] * DP (~oftc-webi@zccy01cs105.houston.hp.com) has joined #ceph
[1:54] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:54] * cookednoodles__ (~eoin@eoin.clanslots.com) has joined #ceph
[1:56] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[1:57] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[1:57] * cookednoodles_ (~eoin@eoin.clanslots.com) Quit (Ping timeout: 480 seconds)
[2:00] * bitserker (~toni@169.38.79.188.dynamic.jazztel.es) Quit (Quit: Leaving.)
[2:02] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[2:02] * lofejndif (~lsqavnbok@37PAABGSR.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:04] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) has joined #ceph
[2:06] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[2:06] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:07] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[2:08] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) Quit (Read error: Connection reset by peer)
[2:10] * Vacuum (~vovo@i59F79217.versanet.de) Quit (Ping timeout: 480 seconds)
[2:10] * xarses (~andreww@12.164.168.117) Quit (Remote host closed the connection)
[2:10] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) has joined #ceph
[2:16] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:18] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[2:20] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:20] * Pedras (~Adium@216.207.42.140) Quit (Quit: Leaving.)
[2:29] * joef1 (~Adium@2601:9:280:f2e:c9d5:9a88:d911:1295) has joined #ceph
[2:29] * sjustlaptop1 (~sam@24-205-54-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[2:51] * reed (~reed@rackspacesf2.static.monkeybrains.net) Quit (Ping timeout: 480 seconds)
[2:57] * dmsimard is now known as dmsimard_away
[2:58] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[3:07] * cookednoodles__ (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[3:14] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:15] * capri (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[3:18] * bgardner_ (~bgardner@fw.oremut02.us.wh.verio.net) has joined #ceph
[3:20] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[3:22] * millsu2 (~bgardner@fw.oremut02.us.wh.verio.net) Quit (Read error: Operation timed out)
[3:22] * LeaChim (~LeaChim@host86-159-115-162.range86-159.btcentralplus.com) Quit (Read error: Operation timed out)
[3:29] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[3:34] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Thanks for everything! :-) see you later :-))
[3:35] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[3:35] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[3:36] * LeaChim (~LeaChim@host86-174-29-56.range86-174.btcentralplus.com) has joined #ceph
[3:36] * LeaChim (~LeaChim@host86-174-29-56.range86-174.btcentralplus.com) Quit (Remote host closed the connection)
[3:44] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[3:47] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:49] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:53] * DP (~oftc-webi@zccy01cs105.houston.hp.com) Quit (Remote host closed the connection)
[3:59] * DP (~oftc-webi@zccy01cs102.houston.hp.com) has joined #ceph
[4:02] * angdraug (~angdraug@131.252.204.134) has joined #ceph
[4:07] * apolloJess (~Thunderbi@202.60.8.252) has joined #ceph
[4:09] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[4:09] * sjm (~sjm@pool-108-53-147-245.nwrknj.fios.verizon.net) has joined #ceph
[4:11] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[4:12] * zack_dol_ (~textual@e0109-114-22-3-49.uqwimax.jp) has joined #ceph
[4:12] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) Quit (Read error: Connection reset by peer)
[4:20] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[4:21] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:21] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[4:22] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit ()
[4:33] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[4:35] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit ()
[4:35] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[4:46] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[4:46] * joef1 (~Adium@2601:9:280:f2e:c9d5:9a88:d911:1295) Quit (Quit: Leaving.)
[4:47] * joef (~Adium@2601:9:280:f2e:d4e3:da21:5830:1ffa) has joined #ceph
[4:47] * joef (~Adium@2601:9:280:f2e:d4e3:da21:5830:1ffa) Quit ()
[4:58] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Read error: Operation timed out)
[5:00] * angdraug (~angdraug@131.252.204.134) Quit (Quit: Leaving)
[5:01] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[5:06] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[5:13] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[5:13] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[5:17] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[5:18] * smiley_ (~smiley@pool-173-66-4-176.washdc.fios.verizon.net) Quit (Read error: Connection reset by peer)
[5:18] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:23] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[5:23] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[5:23] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[5:23] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[5:24] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[5:25] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[5:32] * bgardner (~bgardner@fw.oremut02.us.wh.verio.net) has joined #ceph
[5:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[5:36] * Vacuum (~vovo@i59F79D9C.versanet.de) has joined #ceph
[5:37] * bgardner_ (~bgardner@fw.oremut02.us.wh.verio.net) Quit (Ping timeout: 480 seconds)
[5:37] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[5:38] * bgardner_ (~bgardner@fw.oremut02.us.wh.verio.net) has joined #ceph
[5:40] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[5:40] * KevinPerks (~Adium@2606:a000:80a1:1b00:5035:13ab:ba2c:7c44) Quit (Quit: Leaving.)
[5:44] * bgardner (~bgardner@fw.oremut02.us.wh.verio.net) Quit (Read error: Operation timed out)
[5:44] * bgardner (~bgardner@fw.oremut02.us.wh.verio.net) has joined #ceph
[5:46] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[5:47] * adamcrume (~quassel@2601:9:6680:47:dc:c46c:26f0:23ed) Quit (Remote host closed the connection)
[5:47] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[5:49] * DP (~oftc-webi@zccy01cs102.houston.hp.com) Quit (Remote host closed the connection)
[5:49] * bgardner_ (~bgardner@fw.oremut02.us.wh.verio.net) Quit (Ping timeout: 480 seconds)
[5:56] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[6:10] * bgardner_ (~bgardner@fw.oremut02.us.wh.verio.net) has joined #ceph
[6:11] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[6:11] * sjm (~sjm@pool-108-53-147-245.nwrknj.fios.verizon.net) has left #ceph
[6:16] * bgardner (~bgardner@fw.oremut02.us.wh.verio.net) Quit (Ping timeout: 480 seconds)
[6:34] * KevinPerks (~Adium@2606:a000:80a1:1b00:1103:9d6e:8be0:a0cb) has joined #ceph
[6:41] * sleinen (~Adium@2001:620:1000:3:7ed1:c3ff:fedc:3223) has joined #ceph
[6:42] * sleinen (~Adium@2001:620:1000:3:7ed1:c3ff:fedc:3223) Quit ()
[6:45] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[7:09] * ircolle (~Adium@2601:1:a580:145a:2435:8c61:ed62:6620) Quit (Quit: Leaving.)
[7:11] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:12] * shang (~ShangWu@175.41.48.77) has joined #ceph
[7:18] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[7:18] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[7:18] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[7:20] * haomaiwa_ (~haomaiwan@203.69.59.199) has joined #ceph
[7:20] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Read error: Connection reset by peer)
[7:23] * michalefty (~micha@p20030071CF0E750014A1E354E762F7C8.dip0.t-ipconnect.de) has joined #ceph
[7:23] * michalefty (~micha@p20030071CF0E750014A1E354E762F7C8.dip0.t-ipconnect.de) has left #ceph
[7:27] * capri (~capri@212.218.127.222) has joined #ceph
[7:28] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[7:36] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[7:37] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[7:42] * haomaiwa_ (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[7:55] * Concubidated (~Adium@66.87.66.184) Quit (Quit: Leaving.)
[7:57] * lucas1 (~Thunderbi@222.247.57.50) Quit (Remote host closed the connection)
[8:01] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[8:01] * Tamil (~Adium@cpe-108-184-74-11.socal.res.rr.com) has left #ceph
[8:01] * purpleidea is now known as Guest831
[8:01] * purpleidea (~james@216.252.87.141) has joined #ceph
[8:03] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:03] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[8:03] * Guest831 (~james@216.252.94.224) Quit (Ping timeout: 480 seconds)
[8:05] * bandrus (~Adium@66-87-66-184.pools.spcsdns.net) Quit (Quit: Leaving.)
[8:08] * vbellur (~vijay@117.201.203.108) has joined #ceph
[8:12] * joef1 (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[8:12] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[8:21] * michalefty (~micha@p20030071CE066B521A3DA2FFFE07E324.dip0.t-ipconnect.de) has joined #ceph
[8:22] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:23] * joef1 (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:24] * michalefty (~micha@p20030071CE066B521A3DA2FFFE07E324.dip0.t-ipconnect.de) has left #ceph
[8:40] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[8:41] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:43] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:46] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Remote host closed the connection)
[8:52] * rendar (~I@host230-19-dynamic.3-79-r.retail.telecomitalia.it) has joined #ceph
[8:55] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[8:55] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: Connection reset by peer)
[8:56] * Qu310 (~Qu310@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[8:58] * JC (~JC@195.127.188.220) has joined #ceph
[9:00] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[9:01] * thomnico (~thomnico@2a01:e35:8b41:120:380d:f60d:2d69:84ba) has joined #ceph
[9:04] * Cybert1nus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[9:04] * Cybertinus (~Cybertinu@2a00:6960:1:1:0:24:107:1) Quit (Ping timeout: 480 seconds)
[9:04] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:06] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[9:06] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[9:08] * Cybert1nus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Read error: Connection reset by peer)
[9:09] * Cybertinus (~Cybertinu@2a00:6960:1:1:0:24:107:1) has joined #ceph
[9:10] * blackmen (~Ajit@121.244.87.115) has joined #ceph
[9:15] * coreping (~Michael_G@hugin.coreping.org) Quit (Quit: WeeChat 0.4.3)
[9:15] * Nacer (~Nacer@37.160.13.96) has joined #ceph
[9:15] * coreping (~Michael_G@hugin.coreping.org) has joined #ceph
[9:19] * davidz1 (~Adium@cpe-23-242-12-23.socal.res.rr.com) has joined #ceph
[9:20] * davidz (~Adium@cpe-23-242-12-23.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[9:22] * analbeard (~shw@support.memset.com) has joined #ceph
[9:23] * Cybertinus (~Cybertinu@2a00:6960:1:1:0:24:107:1) Quit (Remote host closed the connection)
[9:24] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[9:25] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[9:25] * masta (~masta@190.7.213.210) Quit (Quit: Leaving...)
[9:29] * Nacer (~Nacer@37.160.13.96) Quit (Remote host closed the connection)
[9:32] * fsimonce (~simon@host135-17-dynamic.8-79-r.retail.telecomitalia.it) has joined #ceph
[9:36] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Remote host closed the connection)
[9:37] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[9:39] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:40] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Remote host closed the connection)
[9:44] * zack_dol_ (~textual@e0109-114-22-3-49.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:45] * KevinPerks (~Adium@2606:a000:80a1:1b00:1103:9d6e:8be0:a0cb) Quit (Quit: Leaving.)
[9:48] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[9:49] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[9:49] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) has joined #ceph
[9:52] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Remote host closed the connection)
[9:54] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:55] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[9:56] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:57] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[9:57] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Remote host closed the connection)
[9:58] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[10:03] * yanzheng1 (~zhyan@171.221.143.132) has joined #ceph
[10:05] * yanzheng (~zhyan@171.221.143.132) Quit (Ping timeout: 480 seconds)
[10:07] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[10:07] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[10:10] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) has joined #ceph
[10:19] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[10:26] * nhm_ (~nhm@65-128-166-198.mpls.qwest.net) Quit (Read error: Operation timed out)
[10:28] * yanzheng1 (~zhyan@171.221.143.132) Quit (Ping timeout: 480 seconds)
[10:36] * tomaw_ (tom@manuel.tomaw.net) has joined #ceph
[10:36] * _saschaw (~oftc-webi@88.134.28.62) has joined #ceph
[10:39] * evanjfraser (~quassel@122.252.188.1) Quit (Quit: No Ping reply in 180 seconds.)
[10:39] * evanjfraser (~quassel@122.252.188.1) has joined #ceph
[10:39] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[10:42] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[10:43] * rmoe_ (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[10:45] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[10:46] <_saschaw> good morning, I am interested in installing a ceph cluster to host our vm images. My question is whether ceph storage can sync the images between several datacenters which are connected by vpn
[10:46] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[10:50] <tnt> _saschaw: not really.
[10:51] <tnt> _saschaw: technically you could span a cluster across several RC, but it would be sync replication, which implies an insanely fast and low-latency link between the two or it will be slowed to a crawl. And if one of the DCs is down, things would be bad.
[10:51] <tnt> definitely not recommended.
[10:51] <tnt> s/RC/DC/
[10:52] <tnt> if you want another async mechanism, you need to build it yourself over ceph.
[10:52] <tnt> async multi-dc aware replication is only supported for radosgw (S3 / Swift stuff)
[10:53] <_saschaw> thanks for the info. that is really a pity
[10:54] <tnt> Well, I'm not sure there are any good solutions for this problem.
[10:55] <tnt> (short of having insanely fast and redundant links between your DCs such that you can consider it to be one giant DC).
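For the radosgw-only async replication tnt refers to: in the firefly-era federated setup you describe regions and zones in JSON, load them with radosgw-admin, and run radosgw-agent to sync from the master zone to a secondary. Very roughly (all file, region and zone names here are placeholders):

    radosgw-admin region set --infile us.json
    radosgw-admin zone set --rgw-zone=us-east --infile us-east.json
    radosgw-admin regionmap update

    # agent pulling metadata and data into the secondary zone
    radosgw-agent -c /etc/ceph/radosgw-agent/default.conf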
[10:57] <absynth_> did anyone ever play with that chinese ARM hardware that's marketed as "Ceph ready"?
[10:58] <_saschaw> we were offered a storage solution by a company called nimblestorage but it is expensive and documentation consists mostly of marketing blah-blah
[10:59] <_saschaw> tnt: thanks for your answer
[11:02] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[11:03] * lucas1 (~Thunderbi@218.76.25.66) Quit ()
[11:08] <absynth_> _saschaw: i was referring to this stuff - http://www.ambedded.com.tw/solutions.php
[11:08] <absynth_> i'm quite certain that these ARM boards, combined with a spinner and a single 1gbps uplink per spinner make for some shitty OSDs
[11:08] <absynth_> but maybe someone looked at those in more detail yet
[11:08] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) has joined #ceph
[11:09] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[11:09] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[11:09] * garphy`aw is now known as garphy
[11:11] <tnt> absynth_: Mm, if you don't need ultra low latency (so more like S3 workload than RBD), that might not be a bad way to do it.
[11:11] <s3an2> Hey, I have had an unfound object in my ceph cluster for the last 3 days - managed to track down the RBD that it relates to, but trying to delete the RBD hangs at 99%. I noticed that the related pg thinks a copy of the object is within an OSD that has been removed from the crush map. Is there some way to get the PG to notice that OSD is no more?
[11:13] <_saschaw> absynth_: sounds nice nevertheless.
[11:14] <absynth_> depends all on the price per unit though
[11:14] <absynth_> i'd imagine those solutions to be WAY cheaper than x86 hardware, both in purchase and running cost, otherwise why bother?
[11:15] <tnt> absynth_: yeah, I guess that would be the point. Also much lower power, so you can densely pack the rack without it getting too hot or drawing too much power.
[11:15] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[11:16] <tnt> They also use "cheap" WD RED drives so it's really "throw many cheap nodes at the problem and consider failure as part of normal operation".
[11:16] <absynth_> i have a datasheet here that gives a wattage of 7.04 kW for a rack populated with 1.844 tb of gross storage
[11:16] <kraken> http://i.imgur.com/XEEI0Rn.gif
[11:17] <absynth_> jeez, kraken, your string parser is really crappy
[11:17] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[11:17] <tnt> lol
[11:19] * rendar (~I@host230-19-dynamic.3-79-r.retail.telecomitalia.it) Quit (Read error: Connection reset by peer)
[11:23] * _saschaw (~oftc-webi@88.134.28.62) Quit (Quit: Page closed)
[11:24] * haomaiwa_ (~haomaiwan@223.223.183.114) has joined #ceph
[11:28] <kfei> A cluster with only 2 OSDs and a pool with replication factor 3: after one of the OSDs has been shut down (i.e. 2 up, 1 in), `ceph -s` reports 192 PGs are incomplete, why?
[11:28] <kfei> Document says incomplete means "Ceph detects that a placement group is missing a necessary period of history from its log. If you see this state, report a bug, and try to start any failed OSDs that may contain the needed information."
[11:28] * mourgaya (~kvirc@80.124.164.139) has joined #ceph
[11:30] <kfei> But the history (journal?) should be on both OSDs, so how does it miss that?
[11:30] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[11:30] <tnt> replication of 3 with 2 OSDs can't ever produce any good results.
[11:33] <absynth_> how do you replicate data three times if you only have two places to store copies?
[11:33] <absynth_> that is really, really pointless, kfei, isn't it? ;)
[11:33] <kfei> tnt, yes but how to explain the "missing history from its log" issue?
[11:34] <kfei> absynth_, 2/3 degraded was fine, but why does 1/3 go "incomplete"?
[11:35] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[11:36] <tnt> kfei: well, if the PG was never fully created, I wouldn't expect stuff to behave.
[11:37] <tnt> try to create 3 OSD and let all the PG reach their good state, then take 2 OSD out and see if you have the same issue.
[11:37] <absynth_> http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#fewer-osds-than-replicas
[11:37] * zack_dolby (~textual@e0109-114-22-3-49.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:37] <kfei> tnt, OK now trying :p
[11:37] <absynth_> the PGs that are now incomplete were not in active+clean state before, but in active+degraded, because of what tnt said
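A few commands that make this visible on a small test cluster like kfei's (the pool name and PG id below are just examples):

    ceph osd pool get rbd size       # replica count the pool is asking for
    ceph pg dump_stuck unclean       # PGs that never reached active+clean
    ceph pg 2.3f query               # peering details for one PG (why it is stuck)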
[11:38] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[11:39] * lucas1 (~Thunderbi@218.76.25.66) Quit ()
[11:41] * thomnico (~thomnico@2a01:e35:8b41:120:380d:f60d:2d69:84ba) Quit (Ping timeout: 480 seconds)
[11:43] * thomnico (~thomnico@2a01:e35:8b41:120:ed91:f990:4dac:573d) has joined #ceph
[11:44] <kfei> tnt, still got incomplete
[11:45] <tnt> kfei: mm, then I'm not really sure.
[11:45] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[11:46] * fghaas1 (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[11:46] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[11:48] * haomaiwa_ (~haomaiwan@223.223.183.114) Quit (Ping timeout: 480 seconds)
[11:50] * haomaiwang (~haomaiwan@223.223.183.114) has joined #ceph
[11:54] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[11:57] <kfei> tnt, absynth_, now it seems that even if I have one working copy of the 3 replicas, the PG is still not usable
[11:58] * analbeard (~shw@support.memset.com) Quit (Ping timeout: 480 seconds)
[11:59] <kfei> Can't find more details about the "incomplete" state; the RADOS paper didn't mention the "necessary period of history from its log" thing
[12:04] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[12:19] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:29] * haomaiwang (~haomaiwan@223.223.183.114) Quit (Remote host closed the connection)
[12:31] * thomnico (~thomnico@2a01:e35:8b41:120:ed91:f990:4dac:573d) Quit (Remote host closed the connection)
[12:32] * vbellur1 (~vijay@61.3.62.27) has joined #ceph
[12:36] * vbellur (~vijay@117.201.203.108) Quit (Read error: Operation timed out)
[12:40] * mancdaz (~mancdaz@2a00:1a48:7807:102:94f4:6b56:ff08:886c) has joined #ceph
[12:45] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[12:48] * thomnico (~thomnico@2a01:e35:8b41:120:2135:9410:bc07:ad4a) has joined #ceph
[12:49] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[12:50] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[12:55] * monsterzz (~monsterzz@93.158.191.31-spb.dhcp.yndx.net) has joined #ceph
[13:04] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:05] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[13:12] * KevinPerks (~Adium@2606:a000:80a1:1b00:8d07:4585:7d9e:d071) has joined #ceph
[13:26] * dignus (~jkooijman@t-x.dignus.nl) Quit (Ping timeout: 480 seconds)
[13:27] * analbeard (~shw@support.memset.com) has joined #ceph
[13:38] * andreask (~andreask@zid-vpnn061.uibk.ac.at) has joined #ceph
[13:38] * ChanServ sets mode +v andreask
[13:49] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:59] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Remote host closed the connection)
[14:00] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:01] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[14:01] * madkiss (~madkiss@chello080108052132.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[14:02] * darkling (~hrm@00012bd0.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:05] * steki (~steki@178-222-78-201.dynamic.isp.telekom.rs) has joined #ceph
[14:06] * jordanP (~jordan@185.23.92.11) has joined #ceph
[14:06] <BranchPredictor> greetings from Inktank training in Munich!
[14:06] <BranchPredictor> :)
[14:08] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[14:08] * ChanServ sets mode +o elder
[14:09] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[14:09] * apolloJess (~Thunderbi@202.60.8.252) Quit (Quit: apolloJess)
[14:13] * swizgard (~swizgard@port-87-193-133-18.static.qsc.de) Quit (Remote host closed the connection)
[14:14] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:15] * monster__ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:15] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:16] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:16] * monster__ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:17] * monster__ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:17] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:17] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[14:18] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:18] * monster__ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:19] * monster__ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:19] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:20] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:20] * monster__ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:20] * monsterzz (~monsterzz@93.158.191.31-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[14:20] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:21] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:22] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:22] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:23] * mathias (~mathias@p54AFE75C.dip0.t-ipconnect.de) has joined #ceph
[14:23] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:23] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:24] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:24] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:24] <mathias> My cluster is not coming out of this state: http://pastebin.com/vJcDqiVC What happend and how to resolve?
[14:24] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:25] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:26] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:26] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:26] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit ()
[14:26] * monsterz_ (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:26] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:27] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[14:27] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Read error: Connection reset by peer)
[14:27] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[14:27] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) has joined #ceph
[14:28] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[14:30] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[14:30] * jordanP (~jordan@185.23.92.11) has joined #ceph
[14:32] <absynth_> replicacount 3 and only 2 osds?
[14:33] * andreask (~andreask@zid-vpnn061.uibk.ac.at) has left #ceph
[14:34] <mathias> absynth_: where do you see this?
[14:35] <absynth_> i was guessing
[14:36] <absynth_> you only have two OSDs up, and about 1/3 of your PGs are degraded
[14:37] <absynth_> you also only have one mon
[14:37] <absynth_> sure that is a test cluster
[14:37] <absynth_> and i also presume you are either using defaults or what martin wrote in his ceph how-to article in iX, so probably replica count 3
[14:39] <mathias> I have osd pool default size = 2 in ceph.conf - that should set it to 2 replicas, shouldn't it?
[14:42] <ganders> mathias: you could issue a "ceph osd dump | grep 'replicated'" and see the replicated size attribute
[14:43] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[14:43] <mathias> it shows 2 for all three pools: http://pastebin.com/jZ9WNpiW
[14:44] * Sysadmin88 (~IceChat77@054533bc.skybroadband.com) Quit (Ping timeout: 480 seconds)
[14:44] <ganders> and the min_size?
[14:44] <mathias> 1
[14:44] <absynth_> sounds good enough
[14:44] <ganders> yeah
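For reference, the two settings being checked above live in ceph.conf and only apply to pools created after they are set; a minimal sketch:

    [global]
        osd pool default size = 2       # replicas per object
        osd pool default min size = 1   # replicas needed to keep serving I/O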
[14:48] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[14:48] <mgarcesMZ> hi everyone!
[14:48] <mathias> I set up two nodes as OSDs but when I used ceph-deploy prepare I didn't realize the 2nd HDD (/dev/vdb) was already formatted and mounted. So ceph-deploy probably used the root file system for storage. I realized what happened and added vdb properly. As a result I had 4 OSDs running on 2 nodes. Then I tried to remove the "broken" OSDs - since then the cluster shows this state. The OSD tree looks like this: http://pastebin.com/16Kf55bq
[14:59] * darkling (~hrm@00012bd0.user.oftc.net) has joined #ceph
[15:05] * nhm (~nhm@65-128-184-37.mpls.qwest.net) has joined #ceph
[15:05] * ChanServ sets mode +o nhm
[15:06] * blackmen (~Ajit@121.244.87.115) Quit (Quit: Leaving)
[15:08] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:6c15:26b4:f404:566a) has joined #ceph
[15:10] * steki (~steki@178-222-78-201.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[15:10] * steki (~steki@91.195.39.5) has joined #ceph
[15:13] * KevinPerks (~Adium@2606:a000:80a1:1b00:8d07:4585:7d9e:d071) Quit (Ping timeout: 480 seconds)
[15:14] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:19] * gregmark (~Adium@68.87.42.115) has joined #ceph
[15:25] <mathias> no ideas? should I start from scratch?
[15:26] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:27] <tnt> mathias: you didn't remove them from CRUSH
[15:38] <mathias> tnt: ok just for the record (I already purged my nodes) - how would I remove them from CRUSH?
[15:40] <tnt> not sure of the exact syntax ... something like ceph crush remove ....
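The usual sequence for fully retiring an OSD, roughly (N is the OSD id, and the daemon should be stopped first):

    ceph osd out N
    ceph osd crush remove osd.N   # this is the step that was missing above
    ceph auth del osd.N
    ceph osd rm N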
[15:46] * ircolle (~Adium@2601:1:a580:145a:90f2:e77:7428:686e) has joined #ceph
[15:53] * JC (~JC@195.127.188.220) Quit (Quit: Leaving.)
[15:54] * kanagaraj (~kanagaraj@115.241.47.105) has joined #ceph
[15:55] * dmsimard_away is now known as dmsimard
[15:57] * kanagaraj (~kanagaraj@115.241.47.105) Quit ()
[15:57] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[16:01] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[16:01] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:02] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[16:03] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has left #ceph
[16:03] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[16:05] * andreask (~andreask@zid-vpnn061.uibk.ac.at) has joined #ceph
[16:05] * ChanServ sets mode +v andreask
[16:05] * rendar (~I@host230-19-dynamic.3-79-r.retail.telecomitalia.it) has joined #ceph
[16:09] * ircolle is now known as ircolle-afk
[16:13] * mourgaya (~kvirc@80.124.164.139) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[16:14] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Quit: Leaving.)
[16:14] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[16:18] * bandrus (~Adium@66.87.66.184) has joined #ceph
[16:20] * sprachgenerator (~sprachgen@130.202.135.20) Quit (Quit: sprachgenerator)
[16:21] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[16:22] * sprachgenerator (~sprachgen@130.202.135.20) has joined #ceph
[16:22] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[16:23] * mathias_ (~mathias@p54AFF789.dip0.t-ipconnect.de) has joined #ceph
[16:23] * monsterzz (~monsterzz@77.88.2.43-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[16:23] * sprachgenerator (~sprachgen@130.202.135.20) Quit ()
[16:26] * andreask (~andreask@zid-vpnn061.uibk.ac.at) has left #ceph
[16:30] * mathias (~mathias@p54AFE75C.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:30] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[16:31] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[16:35] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[16:35] * mathias_ (~mathias@p54AFF789.dip0.t-ipconnect.de) Quit (Quit: Lost terminal)
[16:38] * KevinPerks (~Adium@2606:a000:80a1:1b00:51cb:ba43:b9bd:df73) has joined #ceph
[16:38] * sleinen1 (~Adium@2001:620:0:68::100) has joined #ceph
[16:38] * jtang_ (~jtang@80.111.83.231) Quit (Remote host closed the connection)
[16:41] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Read error: Connection reset by peer)
[16:43] * bandrus (~Adium@66.87.66.184) Quit (Quit: Leaving.)
[16:43] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:6c15:26b4:f404:566a) Quit (Ping timeout: 480 seconds)
[16:43] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[16:45] * fghaas1 (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[16:46] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[16:47] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[16:52] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[17:05] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Quit: mgarcesMZ)
[17:05] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:06] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[17:06] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[17:07] * mgarcesMZ (~mgarces@5.206.228.5) has joined #ceph
[17:10] * masta (~masta@190.7.213.210) has joined #ceph
[17:11] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[17:13] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:14] * markbby (~Adium@168.94.245.3) Quit ()
[17:14] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:19] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[17:26] * haomaiwang (~haomaiwan@203.69.59.199) has joined #ceph
[17:29] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[17:34] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:34] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) Quit (Quit: ZNC - http://znc.in)
[17:35] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) has joined #ceph
[17:40] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[17:40] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[17:40] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:40] * haomaiwang (~haomaiwan@203.69.59.199) Quit (Ping timeout: 480 seconds)
[17:40] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[17:41] * schmee (~quassel@phobos.isoho.st) Quit (Remote host closed the connection)
[17:43] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[17:47] * mgarcesMZ (~mgarces@5.206.228.5) Quit (Ping timeout: 480 seconds)
[17:49] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[17:52] * rmoe_ (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:53] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:53] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:54] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[18:01] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[18:02] * Sysadmin88 (~IceChat77@176.250.164.108) has joined #ceph
[18:02] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:03] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[18:05] * ircolle-afk is now known as ircolle
[18:15] * schmee (~quassel@41.78.129.253) has joined #ceph
[18:15] * garphy is now known as garphy`aw
[18:17] * steki (~steki@91.195.39.5) Quit (Quit: I'm off, you do what you want...)
[18:22] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[18:24] * DP (~oftc-webi@zccy01cs104.houston.hp.com) has joined #ceph
[18:24] * thomnico (~thomnico@2a01:e35:8b41:120:2135:9410:bc07:ad4a) Quit (Quit: Ex-Chat)
[18:25] * KevinPerks1 (~Adium@2606:a000:80a1:1b00:287e:9962:8e48:a193) has joined #ceph
[18:28] * KevinPerks (~Adium@2606:a000:80a1:1b00:51cb:ba43:b9bd:df73) Quit (Ping timeout: 480 seconds)
[18:28] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[18:30] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[18:31] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[18:31] * darkling (~hrm@00012bd0.user.oftc.net) Quit (Quit: Absinthe makes the heart grow fonder.)
[18:41] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[18:45] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:48] * neurodrone (~neurodron@107.107.58.75) has joined #ceph
[18:53] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[18:57] * zack_dolby (~textual@p843a3d.tokynt01.ap.so-net.ne.jp) has joined #ceph
[19:00] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:03] * sjustwork (~sam@2607:f298:a:607:25fe:4f82:e03b:52f) has joined #ceph
[19:05] * reed (~reed@rackspacesf2.static.monkeybrains.net) has joined #ceph
[19:07] * sprachgenerator (~sprachgen@130.202.135.20) has joined #ceph
[19:09] * steveeJ (~junky@HSI-KBW-085-216-022-246.hsi.kabelbw.de) has joined #ceph
[19:16] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[19:18] * sleinen1 (~Adium@2001:620:0:68::100) Quit (Ping timeout: 480 seconds)
[19:18] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[19:18] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:19] * sprachgenerator (~sprachgen@130.202.135.20) Quit (Quit: sprachgenerator)
[19:23] * adamcrume (~quassel@2601:9:6680:47:cc3c:790b:4b7f:6f50) has joined #ceph
[19:24] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:26] * Concubidated (~Adium@66.87.64.101) has joined #ceph
[19:27] <Anticimex> so, DDR3-NVDIMM is appearing now
[19:28] <Anticimex> apparently lots better support with DDR4, but some avail. for DDR3
[19:28] <Anticimex> 1.5M IOPS / module - yes i'd like to put journal there
[19:29] <Anticimex> or if entire host becomes persistent, skip journal?
[19:29] <diegows> hi
[19:29] <diegows> I have a two-node cluster that isn't being activated... I remember that there was a config parameter to allow two-node clusters
[19:30] <diegows> but I don't remember :(
[19:30] <Anticimex> your monitors
[19:30] <diegows> osd pool default size = 2
[19:30] <Anticimex> paxos requires an odd number
[19:30] <Anticimex> (guessing)
[19:30] <diegows> I have one monitor
[19:30] <diegows> it's just a simple test
[19:30] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:30] <Anticimex> dont you need to tell it to believe it's ok to be just 1?
[19:31] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[19:31] <diegows> no, that's not the issue
[19:31] <steveeJ> diegows: what is "activated" for you?
[19:31] <diegows> it's something with osd
[19:31] <Anticimex> diegows: oki
[19:31] <diegows> I fixed it in the past
[19:32] <diegows> but I don't remember now :P
[19:32] <Anticimex> min replicas unresolvable with only 2 osd?
[19:32] <diegows> I mean, not degraded :)
[19:32] <diegows> everything is up, but the pgs are degraded
[19:32] <steveeJ> well, default size is not changing any existing pools
[19:32] <steveeJ> make sure you have size = 2 for them too
[19:33] <diegows> oh, right...
[19:33] <diegows> that's it :)
[19:33] <diegows> thanks
[19:33] <steveeJ> you're welcome
[19:34] <diegows> min size is 2 for all the pools
[19:34] <diegows> but you are close, there was a command to run :P
[19:35] <steveeJ> not min_size, but size
[19:35] <steveeJ> if they're degraded, they can't be replicated to the set size
[19:36] <diegows> looks better now :)
[19:36] <steveeJ> possibly because you have fewer OSDs than the size setting, or your crush map is not valid for your OSD setup.
[19:36] <diegows> health HEALTH_WARN 192 pgs stuck unclean
[19:37] <steveeJ> can you show "ceph health detail" and your decompiled crush map and "ceph osd tree" ?
[19:38] <diegows> no, I've just installed the cluster using ceph-deploy
[19:38] <diegows> ran activate on each osd
[19:38] <diegows> and I had the degraded issue
[19:38] <diegows> degraded issue fixed
[19:39] <diegows> now I have this new one :)
[19:39] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[19:39] <diegows> http://paste.ubuntu.com/8180526/
[19:39] <diegows> health detail
[19:40] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[19:41] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[19:43] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit ()
[19:44] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[19:46] * Pedras (~Adium@216.207.42.129) has joined #ceph
[19:57] * xarses (~andreww@12.164.168.117) has joined #ceph
[20:05] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has joined #ceph
[20:07] * t0rn (~ssullivan@c-68-62-1-186.hsd1.mi.comcast.net) has left #ceph
[20:08] <steveeJ> diegows: you still didn't show your crushmap and the osd tree
[20:08] <diegows> sorry
[20:08] <diegows> cleanup everything
[20:08] <diegows> and I've started from scratch with osd pool default size = 2 and it worked :)
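The in-place alternative to rebuilding, since the default size only applies to pools created afterwards, is to set the size on each existing pool (the pool names below are that era's defaults, used as an example):

    for pool in data metadata rbd; do
        ceph osd pool set $pool size 2
        ceph osd pool set $pool min_size 1
    done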
[20:10] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Quit: Ex-Chat)
[20:12] * vbellur1 (~vijay@61.3.62.27) Quit (Quit: Leaving.)
[20:16] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Read error: Connection reset by peer)
[20:17] * hyperbaba (~hyperbaba@private.neobee.net) Quit (Ping timeout: 480 seconds)
[20:19] * neurodrone_ (~neurodron@mobile-198-228-196-046.mycingular.net) has joined #ceph
[20:21] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[20:22] * angdraug (~angdraug@host-200-119.pubnet.pdx.edu) has joined #ceph
[20:24] * neurodrone (~neurodron@107.107.58.75) Quit (Ping timeout: 480 seconds)
[20:24] * neurodrone_ is now known as neurodrone
[20:29] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[20:30] * Concubidated (~Adium@66.87.64.101) Quit (Quit: Leaving.)
[20:34] <dmsimard> Woot, got my 10 years Ceph shirt :) https://twitter.com/dmsimard/status/505422950531862528
[20:36] * jcsp finds his name on it :-)
[20:36] <jcsp> that's cool, I didn't actually know what the little writing was from the other pics of it
[20:37] * Nacer (~Nacer@2001:41d0:fe82:7200:4df0:5bab:c4db:6c7f) has joined #ceph
[20:41] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[20:45] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[20:46] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[20:49] * rendar (~I@host230-19-dynamic.3-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:51] * rendar (~I@host230-19-dynamic.3-79-r.retail.telecomitalia.it) has joined #ceph
[20:52] * BManojlovic (~steki@95.180.4.243) has joined #ceph
[20:54] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[20:54] * dignus (~jkooijman@t-x.dignus.nl) Quit (Ping timeout: 480 seconds)
[20:56] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:57] * yanzheng (~zhyan@171.221.143.132) has joined #ceph
[20:57] * monsterzz (~monsterzz@94.19.146.224) Quit (Ping timeout: 480 seconds)
[20:57] * sleinen1 (~Adium@2001:620:0:68::100) has joined #ceph
[21:04] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[21:04] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:06] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:12] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Read error: Connection reset by peer)
[21:12] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[21:19] * kevinc (~kevinc__@client64-180.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:28] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 31.0/20140716183446])
[21:30] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[21:35] * mnaser (~textual@MTRLPQ5401W-LP130-02-1178024983.dsl.bell.ca) has joined #ceph
[21:36] * yanzheng (~zhyan@171.221.143.132) Quit (Quit: This computer has gone to sleep)
[21:40] <mnaser> What's the "weapon of choice" for deploying ceph in production.. is ceph-deploy designed to create production-rated installs? I already have an existing puppet infrastructure..
[21:44] * scuttlemonkey is now known as scuttle|afk
[21:45] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[21:47] <ircolle> mnaser - ceph-deploy is what you're looking for
[21:47] <mnaser> okay good, i was *a bit* worried that it wasn't the right tool for a prod. deployment
[21:48] <mnaser> i'll have to do a bit of reading on how to tweak things, i need stuff like cache tiering and erasure coding, but thanks ircolle
[21:48] <ircolle> mnaser - you're welcome
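A very condensed ceph-deploy sequence of the kind mnaser is considering (hostnames and the sdb data disk are placeholders):

    ceph-deploy new mon1
    ceph-deploy install mon1 osd1 osd2 osd3
    ceph-deploy mon create-initial
    ceph-deploy osd create osd1:sdb osd2:sdb osd3:sdb
    ceph-deploy admin mon1 osd1 osd2 osd3

Cache tiering and erasure-coded pools are then set up afterwards with plain ceph commands (ceph osd pool create ... erasure, ceph osd tier add ...), not through ceph-deploy itself.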
[21:50] * kevinc (~kevinc__@client64-180.sdsc.edu) has joined #ceph
[21:56] * monsterzz (~monsterzz@94.19.146.224) has joined #ceph
[22:09] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[22:10] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[22:12] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[22:14] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[22:19] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[22:21] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) has joined #ceph
[22:21] * neurodrone (~neurodron@mobile-198-228-196-046.mycingular.net) Quit (Ping timeout: 480 seconds)
[22:24] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[22:24] * ikrstic (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Read error: Connection reset by peer)
[22:30] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[22:30] * garphy`aw is now known as garphy
[22:33] * BManojlovic (~steki@95.180.4.243) Quit (Ping timeout: 480 seconds)
[22:34] * tinklebear (~tinklebea@cpe-066-057-253-171.nc.res.rr.com) has joined #ceph
[22:41] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:41] * dgbaley27 (~matt@ucb-np1-206.colorado.edu) has joined #ceph
[22:46] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[22:48] * Nacer (~Nacer@2001:41d0:fe82:7200:4df0:5bab:c4db:6c7f) Quit (Remote host closed the connection)
[22:52] <dmsimard> mnaser: If you're interested in puppet there's https://github.com/enovance/puppet-ceph and https://github.com/ceph/puppet-ceph (mirrored from https://github.com/stackforge/puppet-ceph)
[22:53] <dmsimard> Don't believe cache tiering or erasure coding is implemented in either
[22:53] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[22:54] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[22:56] * BManojlovic (~steki@93-86-44-204.dynamic.isp.telekom.rs) has joined #ceph
[23:00] <mnaser> dmsimard: bummer, while my puppet-fu is up to par.. my ceph-fu is not strong enough to help bring that in there :-P
[23:01] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (Quit: WeeChat 0.4.3)
[23:01] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[23:03] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[23:05] * BManojlovic (~steki@93-86-44-204.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[23:06] * BManojlovic (~steki@93-86-44-204.dynamic.isp.telekom.rs) has joined #ceph
[23:06] * garphy is now known as garphy`aw
[23:06] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[23:07] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[23:08] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[23:12] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[23:14] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (Quit: WeeChat 1.1-dev)
[23:14] <dmsimard> mnaser: Aw. If you ever want to contribute, let me know :)
[23:15] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[23:15] <mnaser> dmsimard: i think i'll start with ceph-deploy and i'll feel it out, once i'm confident enough i'll def. contrib
[23:15] <dmsimard> No worries !
[23:15] <mnaser> also
[23:15] <mnaser> local montrealer too?
[23:15] <mnaser> nice :)
[23:15] <dmsimard> Uh oh, I'm spotted.
[23:16] <mnaser> :-P
[23:16] <dmsimard> *runs away*
[23:16] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) has joined #ceph
[23:16] * ikrstic_ (~ikrstic@93-86-222-245.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[23:19] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit ()
[23:20] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[23:21] * fghaas (~florian@85-127-80-104.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[23:21] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[23:26] * monsterz_ (~monsterzz@94.19.146.224) has joined #ceph
[23:26] * monsterzz (~monsterzz@94.19.146.224) Quit (Read error: Connection reset by peer)
[23:26] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (Quit: WeeChat 1.1-dev)
[23:27] * dgbaley27 (~matt@ucb-np1-206.colorado.edu) Quit (Quit: Leaving.)
[23:27] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[23:28] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[23:36] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[23:37] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:38] * joef1 (~Adium@2601:9:280:f2e:dc2:d771:f1af:ff04) has joined #ceph
[23:40] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) has joined #ceph
[23:45] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[23:51] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) Quit (Quit: leaving)
[23:51] * dgurtner (~dgurtner@241-236.197-178.cust.bluewin.ch) has joined #ceph
[23:56] * jobewan (~jobewan@snapp.centurylink.net) Quit (Quit: Leaving)
[23:58] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.