#ceph IRC Log


IRC Log for 2016-08-10

Timestamps are in GMT/BST.

[0:03] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[0:04] * MACscr (~Adium@c-73-9-230-5.hsd1.il.comcast.net) has joined #ceph
[0:05] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[0:05] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[0:06] <MACscr> So i just setup ceph-dash on one of my monitors and im typically only seeing about 200k writes and 300k reads with about 30 iops. I have 10 kvm virtual machines being served through rbd. Is the load really that low? lol
[0:08] <rkeene> blizzow, Ceph is smart -- the default CRUSH map is node-wise
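For context on "node-wise": the stock replicated CRUSH rule in Jewel-era releases separates replicas at the host level. A sketch of how to inspect it and what the decompiled default rule typically looks like (the rule name matches the replicated_ruleset used later in this log; exact ids may differ on a given cluster):

    # dump and decompile the CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # typical default rule in the decompiled map
    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   # each replica lands on a different host
        step emit
    }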
[0:08] * sebastian-w_ (~quassel@212.218.8.138) Quit (Ping timeout: 480 seconds)
[0:09] * rendar (~I@host222-180-dynamic.12-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:09] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[0:11] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:11] * Sigma (~AG_Clinto@tsn109-201-154-176.dyn.nltelcom.net) has joined #ceph
[0:12] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:15] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4d98:7dea:2462:19d7) has joined #ceph
[0:15] <blizzow> rkeene: thanks.
[0:19] * tsg (~tgohad@192.55.54.44) has joined #ceph
[0:22] * [0x4A6F]_ (~ident@p508CD79D.dip0.t-ipconnect.de) has joined #ceph
[0:23] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:23] * [0x4A6F]_ is now known as [0x4A6F]
[0:23] * srk (~Siva@32.97.110.51) Quit (Ping timeout: 480 seconds)
[0:27] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[0:35] * t4nk852 (~oftc-webi@pubip.ny.tower-research.com) Quit (Ping timeout: 480 seconds)
[0:41] * Sigma (~AG_Clinto@tsn109-201-154-176.dyn.nltelcom.net) Quit ()
[0:45] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[0:46] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[0:46] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[0:50] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:54] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:56] * icarroll (~icarroll@c-73-25-61-32.hsd1.wa.comcast.net) has joined #ceph
[0:56] <icarroll> #join ceph-devel
[0:57] <icarroll> oops
[0:59] * wak-work (~wak-work@2620:15c:202:0:85ce:c4bc:c124:86cb) Quit (Remote host closed the connection)
[0:59] * wak-work (~wak-work@2620:15c:202:0:4d68:fd7b:61d9:5e9a) has joined #ceph
[0:59] * TGF (~osuka_@162.251.167.74) has joined #ceph
[1:00] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[1:07] * wak-work (~wak-work@2620:15c:202:0:4d68:fd7b:61d9:5e9a) Quit (Remote host closed the connection)
[1:08] * wak-work (~wak-work@2620:15c:202:0:4d68:fd7b:61d9:5e9a) has joined #ceph
[1:10] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[1:12] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4d98:7dea:2462:19d7) Quit (Ping timeout: 480 seconds)
[1:12] * icarroll (~icarroll@c-73-25-61-32.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[1:18] * andreww (~xarses@64.124.158.192) Quit (Ping timeout: 480 seconds)
[1:20] * ircolle (~Adium@2601:285:201:633a:4df6:620d:b2a0:5bed) Quit (Quit: Leaving.)
[1:21] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) has joined #ceph
[1:22] * oarra (~rorr@45.73.146.238) Quit (Quit: oarra)
[1:26] * salwasser (~Adium@2601:197:101:5cc1:3095:37eb:69a4:2524) has joined #ceph
[1:28] * goberle (~goberle@mid.ygg.tf) Quit (Remote host closed the connection)
[1:29] * Unai (~Adium@50-115-70-150.static-ip.telepacific.net) Quit (Ping timeout: 480 seconds)
[1:29] * TGF (~osuka_@26XAAAX94.tor-irc.dnsbl.oftc.net) Quit ()
[1:32] * danieagle (~Daniel@201-69-183-143.dial-up.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[1:32] * salwasser1 (~Adium@2601:197:101:5cc1:f198:564f:3a61:d0d9) has joined #ceph
[1:32] * wak-work (~wak-work@2620:15c:202:0:4d68:fd7b:61d9:5e9a) Quit (Remote host closed the connection)
[1:33] * wak-work (~wak-work@2620:15c:202:0:4d68:fd7b:61d9:5e9a) has joined #ceph
[1:38] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[1:39] * salwasser (~Adium@2601:197:101:5cc1:3095:37eb:69a4:2524) Quit (Ping timeout: 480 seconds)
[1:41] * truan-wang (~truanwang@58.247.8.186) has joined #ceph
[1:44] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[1:46] * oms101 (~oms101@p20030057EA020500C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:55] * oms101 (~oms101@p20030057EA01F800C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:56] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:57] * goberle (~goberle@mid.ygg.tf) has joined #ceph
[2:02] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) Quit (Ping timeout: 480 seconds)
[2:04] * goberle_ (~goberle@jot.ygg.tf) has joined #ceph
[2:04] * goberle (~goberle@mid.ygg.tf) Quit (Remote host closed the connection)
[2:05] * goberle_ is now known as goberle
[2:07] * goberle (~goberle@jot.ygg.tf) Quit (Remote host closed the connection)
[2:07] * goberle (~goberle@jot.ygg.tf) has joined #ceph
[2:09] * goberle (~goberle@jot.ygg.tf) Quit (Read error: Connection reset by peer)
[2:09] * goberle (~goberle@jot.ygg.tf) has joined #ceph
[2:13] * whatevsz (~quassel@b9168edc.cgn.dg-w.de) has joined #ceph
[2:16] <Tene> Hey, any chance anyone near SF bay area wants a couple hundred ancient (5-7 years old) 1U rack-mount servers?
[2:16] <wak-work> oh god
[2:16] <wak-work> got more specifics on those?
[2:17] <Tene> Dell poweredge 1950s, in a data center in oakland right now.
[2:17] <Tene> They're terribly power-inefficient compared to anything remotely modern.
[2:17] <wak-work> oh yeah, thanks but nvm
[2:17] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[2:17] <wak-work> yeah exactly
[2:17] <Tene> I can't imagine anyone could have any use for them if they've got any budget at all and are paying for power.
[2:18] <wak-work> especially in the bay area
[2:18] <wak-work> i threw out anything older than SB for my homelab
[2:18] <Tene> But, it seemed possible that someone working on ceph might want a huge pile of chassis to power on occasionally for testing or something.
[2:19] <wak-work> the problem is that these are soo cheap
[2:19] <wak-work> http://www.ebay.com/itm/CM8062101082713-INTEL-XEON-E5-2670-8-CORE-2-60GHz-20M-8GT-s-115W-PROCESSOR-/281457118681?hash=item418826cdd9:g:tvcAAOSwU-pXqLfM
[2:19] * Silentspy (~Aal@tor.exit.relay.dedicatedpi.com) has joined #ceph
[2:19] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[2:20] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:20] <Tene> Yeah, if you want remotely sane performance per watt, these are terrible and useless.
[2:20] <Tene> Hence, us planning to dump them on an electronics recycler.
[2:20] * Nacer (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) has joined #ceph
[2:29] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[2:31] <Kingrat_> I currently work for an electronics recycler, will confirm 1950/2950 are scrapped now most of the time...
[2:33] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:34] * Nacer (~Nacer@vir78-1-82-232-38-190.fbx.proxad.net) Quit (Remote host closed the connection)
[2:37] <iggy> we are scrapping all of our 610s like they are going out of style (mostly been rid of 1950/2950 for a while)
[2:43] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[2:48] * tsg (~tgohad@192.55.54.44) Quit (Remote host closed the connection)
[2:49] * Silentspy (~Aal@5AEAAAVS2.tor-irc.dnsbl.oftc.net) Quit ()
[2:53] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[2:54] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[3:10] * ron-slc (~Ron@173-165-129-118-utah.hfc.comcastbusiness.net) Quit (Quit: Ex-Chat)
[3:11] * Zeis (~Teddybare@edwardsnowden0.torservers.net) has joined #ceph
[3:18] * yanzheng (~zhyan@125.70.22.133) has joined #ceph
[3:26] * yanzheng1 (~zhyan@125.70.22.133) has joined #ceph
[3:27] * yanzheng (~zhyan@125.70.22.133) Quit (Ping timeout: 480 seconds)
[3:28] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[3:34] * kefu (~kefu@114.92.96.253) has joined #ceph
[3:34] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[3:40] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:41] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[3:41] * sebastian-w_ (~quassel@212.218.8.139) has joined #ceph
[3:41] * Zeis (~Teddybare@5AEAAAVUE.tor-irc.dnsbl.oftc.net) Quit ()
[3:43] * kefu (~kefu@114.92.96.253) has joined #ceph
[3:43] * sebastian-w (~quassel@212.218.8.138) Quit (Ping timeout: 480 seconds)
[3:44] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[3:45] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:46] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[3:46] * kefu (~kefu@114.92.96.253) has joined #ceph
[3:53] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[3:59] * xENO_ (~isaxi@tor-exit.gansta93.com) has joined #ceph
[4:07] * truan-wang (~truanwang@58.247.8.186) Quit (Ping timeout: 480 seconds)
[4:14] * johnavp19891 (~jpetrini@166.170.20.218) has joined #ceph
[4:14] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[4:15] * georgem (~Adium@76-10-180-154.dsl.teksavvy.com) has joined #ceph
[4:16] * truan-wang (~truanwang@220.248.17.34) has joined #ceph
[4:20] * jfaj (~jan@p20030084AF337B005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) has joined #ceph
[4:21] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:26] * jfaj__ (~jan@p4FE4ED78.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:29] * xENO_ (~isaxi@5AEAAAVVC.tor-irc.dnsbl.oftc.net) Quit ()
[4:31] * georgem (~Adium@76-10-180-154.dsl.teksavvy.com) Quit (Quit: Leaving.)
[4:31] * georgem (~Adium@206.108.127.16) has joined #ceph
[4:33] * salwasser1 (~Adium@2601:197:101:5cc1:f198:564f:3a61:d0d9) Quit (Quit: Leaving.)
[4:35] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[4:37] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:48] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[4:48] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[4:49] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (Remote host closed the connection)
[4:53] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:54] * johnavp19891 (~jpetrini@166.170.20.218) Quit (Ping timeout: 480 seconds)
[4:54] * kefu (~kefu@114.92.96.253) has joined #ceph
[5:01] * Shesh (~xolotl@tor2r.ins.tor.net.eu.org) has joined #ceph
[5:04] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[5:09] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[5:09] * flisky (~Thunderbi@210.12.157.85) has joined #ceph
[5:11] * flisky (~Thunderbi@210.12.157.85) Quit ()
[5:18] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[5:23] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) has joined #ceph
[5:27] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:29] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) has joined #ceph
[5:29] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[5:31] * Shesh (~xolotl@9YSAAA8X7.tor-irc.dnsbl.oftc.net) Quit ()
[5:31] * Fapiko (~skney@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[5:32] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[5:32] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[5:38] * Vacuum_ (~Vacuum@88.130.223.234) has joined #ceph
[5:43] * vimal (~vikumar@114.143.165.8) has joined #ceph
[5:44] * Vacuum__ (~Vacuum@88.130.202.82) Quit (Ping timeout: 480 seconds)
[5:51] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:57] * vimal (~vikumar@114.143.165.8) Quit (Quit: Leaving)
[5:58] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[6:01] * truan-wang (~truanwang@220.248.17.34) Quit (Remote host closed the connection)
[6:01] * Fapiko (~skney@61TAAA7YK.tor-irc.dnsbl.oftc.net) Quit ()
[6:01] * walcubi_ (~walcubi@p5797A25F.dip0.t-ipconnect.de) has joined #ceph
[6:08] * walbuci (~walcubi@p5797AF87.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:10] * flisky (~Thunderbi@210.12.157.85) has joined #ceph
[6:10] * ntpttr_ (~ntpttr@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[6:14] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:17] * karnan (~karnan@2405:204:5502:b48e:3602:86ff:fe56:55ae) has joined #ceph
[6:28] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:32] * ntpttr_ (~ntpttr@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[6:41] * truan-wang (~truanwang@58.247.8.186) has joined #ceph
[6:43] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[6:50] * swami1 (~swami@49.38.0.169) has joined #ceph
[7:03] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:04] * lmg (~tuhnis@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[7:06] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:19] * haomaiwang (~oftc-webi@61.148.242.53) has joined #ceph
[7:24] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[7:25] * flisky (~Thunderbi@210.12.157.85) Quit (Quit: flisky)
[7:30] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[7:31] * vikhyat (~vumrao@49.248.79.120) has joined #ceph
[7:34] * lmg (~tuhnis@9YSAAA80N.tor-irc.dnsbl.oftc.net) Quit ()
[7:43] * Diablodoct0r (~Unforgive@ip95.ip-94-23-150.eu) has joined #ceph
[7:43] * vikhyat (~vumrao@49.248.79.120) Quit (Ping timeout: 480 seconds)
[7:44] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:53] * vikhyat (~vumrao@123.252.219.11) has joined #ceph
[7:53] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (Read error: Connection reset by peer)
[7:54] * raso (~raso@ns.deb-multimedia.org) Quit (Ping timeout: 480 seconds)
[7:55] * _ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[7:55] * _ndevos is now known as ndevos
[7:58] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[7:59] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:04] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[8:07] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[8:13] * Diablodoct0r (~Unforgive@61TAAA70X.tor-irc.dnsbl.oftc.net) Quit ()
[8:15] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[8:24] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[8:24] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[8:36] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[8:37] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[8:38] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[8:40] * kefu (~kefu@114.92.96.253) has joined #ceph
[8:43] * IvanJobs_ (~ivanjobs@103.50.11.146) has joined #ceph
[8:47] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[8:50] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[8:55] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[8:56] * kefu (~kefu@114.92.96.253) has joined #ceph
[8:57] * tsg (~tgohad@134.134.139.82) has joined #ceph
[8:59] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[9:02] * ade (~abradshaw@tmo-098-31.customers.d1-online.com) has joined #ceph
[9:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:06] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:06] * art_yo|2 (~kvirc@149.126.169.197) has joined #ceph
[9:08] * truan-wang (~truanwang@58.247.8.186) Quit (Remote host closed the connection)
[9:12] * starcoder (~JohnO@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[9:12] * art_yo (~kvirc@149.126.169.197) Quit (Ping timeout: 480 seconds)
[9:16] * ceph-ircslackbot (~ceph-ircs@ds9536.dreamservers.com) has joined #ceph
[9:17] * ade (~abradshaw@tmo-098-31.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[9:18] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[9:20] * t4nk444 (~oftc-webi@117.247.186.15) has joined #ceph
[9:23] * t4nk444 (~oftc-webi@117.247.186.15) Quit ()
[9:23] * kingcu_ (~kingcu@kona.ridewithgps.com) Quit (Ping timeout: 480 seconds)
[9:24] * ceph-ircslackbot2 (~ceph-ircs@ds9536.dreamservers.com) Quit (Ping timeout: 480 seconds)
[9:28] * truan-wang (~truanwang@140.206.89.178) has joined #ceph
[9:30] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:37] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:41] * truan-wang (~truanwang@140.206.89.178) Quit (Ping timeout: 480 seconds)
[9:41] * starcoder (~JohnO@61TAAA72N.tor-irc.dnsbl.oftc.net) Quit ()
[9:50] * kiasyn (~Plesioth@tor.exit.relay.dedicatedpi.com) has joined #ceph
[9:55] * tsg (~tgohad@134.134.139.82) Quit (Remote host closed the connection)
[9:58] * truan-wang (~truanwang@220.248.17.34) has joined #ceph
[10:03] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:05] * baojg (~baojg@61.135.155.34) Quit (Ping timeout: 480 seconds)
[10:06] * baojg (~baojg@61.135.155.34) has joined #ceph
[10:07] * DanFoster (~Daniel@2a00:1ee0:3:1337:5868:2f63:5fcc:d5e2) has joined #ceph
[10:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[10:12] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:19] * scuttlemonkey (~scuttle@nat-pool-rdu-t.redhat.com) Quit (Ping timeout: 480 seconds)
[10:20] * kiasyn (~Plesioth@61TAAA73O.tor-irc.dnsbl.oftc.net) Quit ()
[10:24] * truan-wang (~truanwang@220.248.17.34) Quit (Remote host closed the connection)
[10:25] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[10:29] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[10:29] * Frymaster (~adept256@tor.exit.relay.dedicatedpi.com) has joined #ceph
[10:33] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:35] * rendar (~I@host118-131-dynamic.59-82-r.retail.telecomitalia.it) has joined #ceph
[10:35] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[10:35] * haomaiwang (~oftc-webi@61.148.242.53) Quit (Ping timeout: 480 seconds)
[10:49] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:53] * ira (~ira@121.244.87.118) has joined #ceph
[10:58] * walcubi_ is now known as walcubi
[10:58] <walcubi> <walbuci> ceph osd pool create rbd 256 256 replicated replicated_ruleset expected_num_objects 2147483648
[10:59] <walcubi> Maybe erasure_code_profile is eating up the "expected_num_objects" string...
[10:59] * Frymaster (~adept256@5AEAAAV2F.tor-irc.dnsbl.oftc.net) Quit ()
[11:00] * The1_ (~the_one@5.186.54.143) has joined #ceph
[11:01] * fcj (~fcauzard@ippon-1g-224-170.fib.nerim.net) has joined #ceph
[11:02] * sankarshan (~sankarsha@121.244.87.117) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[11:03] <Gugge-47527> walcubi: if im reading the docs right, you should not have "expected_num_objects" there
[11:03] <Gugge-47527> only the number
[11:03] <walcubi> Gugge-47527, let me just bring up git blame
[11:04] <Gugge-47527> ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] [crush-ruleset-name] [expected-num-objects]
[11:04] <Gugge-47527> is what http://docs.ceph.com/docs/jewel/rados/operations/pools/ have
[11:04] * Sophie1 (~Lite@tsn109-201-154-200.dyn.nltelcom.net) has joined #ceph
[11:04] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4d98:7dea:2462:19d7) has joined #ceph
[11:06] * art_yo (~kvirc@149.126.169.197) has joined #ceph
[11:07] * TMM (~hp@185.5.121.201) has joined #ceph
[11:07] * T1 (~the_one@5.186.54.143) Quit (Ping timeout: 480 seconds)
[11:08] <walcubi> Gugge-47527, I think it's this: https://github.com/ceph/ceph/blob/2209a8c68fb001545f293f5ba6ed1b9a77b5def9/src/mon/OSDMonitor.cc#L6955-L6956
[11:08] * fcj (~fcauzard@ippon-1g-224-170.fib.nerim.net) Quit (Read error: Connection reset by peer)
[11:09] <Gugge-47527> i have no idea
[11:09] <Gugge-47527> but i think you should use "ceph osd pool create rbd 256 256 replicated replicated_ruleset 2147483648" and not "ceph osd pool create rbd 256 256 replicated replicated_ruleset expected_num_objects 2147483648"
[11:10] <walcubi> I'm just saying, that doesn't work. :)
[11:11] <Gugge-47527> same error with and without the "expected_num_objects" string?
[11:11] <walcubi> No error, just the pool is created without prehashing directories.
[11:11] <walcubi> The docs also say: [erasure-code-profile] [crush-ruleset-name] [expected_num_objects]
[11:12] <Gugge-47527> yes, without the "expected_num_objects" string in the command
[11:13] <walcubi> Oh wait, the cmdmap is probably built somewhere else maybe?
[11:13] * art_yo|2 (~kvirc@149.126.169.197) Quit (Ping timeout: 480 seconds)
[11:14] <walcubi> So the order I see it being read in may not be the order.
[11:15] <Gugge-47527> i feel like im missing something, what problem are you trying to solve?
[11:15] <walcubi> *the order that they are retrieved.
[11:15] <Gugge-47527> why do you care? :)
[11:15] * fcj (~fcauzard@195.5.224.170) has joined #ceph
[11:15] * swami2 (~swami@49.38.0.169) has joined #ceph
[11:15] <walcubi> Get to the bottom of this discrepancy. ;-)
[11:15] <Gugge-47527> what discrepancy ?
[11:16] <walcubi> I need to put something between [crush-ruleset-name] and [expected-num-objects] in order for it to pick up the expected-num-objects value.
[11:17] <fcj> Hello. I found this bug https://github.com/ceph/ceph/pull/10524 on my ceph cluster. I have ceph 10.2.2. I upgraded from hammer to jewel. So how can I apply the patch to my cluster? With ceph-deploy?
[11:17] <Gugge-47527> walcubi: ahh, you are saying the docs are wrong?
[11:17] <walcubi> Yes.
[11:17] <walcubi> Or... there's something amiss
[11:17] <Gugge-47527> that was the part i was missing :P
[11:19] * swami1 (~swami@49.38.0.169) Quit (Ping timeout: 480 seconds)
[11:22] <fcj> My distribution is Ubuntu 14.04 LTS
[11:24] * swami1 (~swami@49.38.0.169) has joined #ceph
[11:29] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[11:29] <fcj> or is the best way to do this to delete my instance and recreate a radosgw directly in ceph jewel? I use 2 instances for ceph radosgw
[11:30] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Read error: Connection reset by peer)
[11:31] * swami2 (~swami@49.38.0.169) Quit (Ping timeout: 480 seconds)
[11:31] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[11:34] * Sophie1 (~Lite@5AEAAAV22.tor-irc.dnsbl.oftc.net) Quit ()
[11:36] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:38] <walcubi> Gugge-47527, I *think* that it's just a limitation in ceph's commandline parsing technology
[11:39] <walcubi> But it looks like you must always pass an erasure code profile name, even if it's a replicated pool.
[11:40] <walcubi> I can't see anything in the testsuite that checks replicated + expected_num_objects, so... :-P
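For reference, the two command forms being compared above (both taken from the discussion; per walcubi, neither triggers directory pre-hashing on its own, see the tracker issue linked a little further down):

    # form matching the Jewel docs that Gugge-47527 quotes
    ceph osd pool create rbd 256 256 replicated replicated_ruleset 2147483648

    # form walcubi originally ran; the literal word "expected_num_objects" is not
    # part of the documented positional syntax
    ceph osd pool create rbd 256 256 replicated replicated_ruleset expected_num_objects 2147483648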
[11:49] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[11:53] * b0e (~aledermue@213.95.25.82) has joined #ceph
[11:53] * raso (~raso@ns.deb-multimedia.org) has joined #ceph
[11:56] * Sliker (~csharp@tor.exit.relay.dedicatedpi.com) has joined #ceph
[12:00] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:00] * baater (~oftc-webi@222.74.214.122) has joined #ceph
[12:00] <baater> hello all
[12:02] * baater (~oftc-webi@222.74.214.122) Quit ()
[12:03] * analbeard (~shw@support.memset.com) has joined #ceph
[12:04] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[12:15] * koma (~koma@0001c112.user.oftc.net) has joined #ceph
[12:24] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[12:26] * Sliker (~csharp@26XAAAYM1.tor-irc.dnsbl.oftc.net) Quit ()
[12:26] * jfaj (~jan@p20030084AF337B005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[12:27] * jfaj (~jan@p4FE4FF94.dip0.t-ipconnect.de) has joined #ceph
[12:30] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:34] <koollman> question mostly for my curiosity ... I managed to destroy my ceph data (nothing to worry about, there was nothing stored; I've merely been playing with deployment and wiped data too quickly). I have '64 pgs are stuck inactive for more than 300 seconds', and I'm fairly sure they will never be unstuck or recovered. Is there a way to tell ceph 'forget about it'?
[12:34] <koollman> I can, and will, redeploy this cluster many more times. But I wondered what I would do if it happened in production
[12:35] <koollman> http://paste.debian.net/hidden/38a11e9e/ here's the current ceph status
[12:36] <koollman> (I'm trying to understand failure modes, I guess)
[12:41] <doppelgrau> koollman: what does ceph pg dump_stuck inactive and ceph pg stat for an inactive pg say?
[12:42] <doppelgrau> koollman: or better ceph pg <id> query
[12:43] <doppelgrau> koollman: but the "sledge hammer" is ceph pg force_create_pg <id> - recreates that PG and whipes all (existing) data away
[12:44] * haomaiwang (~oftc-webi@119.6.102.203) has joined #ceph
[12:47] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[12:47] <koollman> http://paste.debian.net/hidden/df30f37a/
[12:49] <doppelgrau> koollman: ok, I think It's time for the sledge hammer :)
[12:50] * doppelgrau was curious if a recovery was an option, but looks like too many osds are gone to me
[12:50] <koollman> I'm pretty sure recovery is impossible. I know I've destroyed too much data at the same time :)
[12:51] <koollman> it's more of a 'how do I clean up this mess if I'm very unlucky some day' question
[12:51] <doppelgrau> koollman: ceph pg force_create_pg <id>
[12:52] <walcubi> Someone discovered it before me it seems: http://tracker.ceph.com/issues/9371
[12:52] <koollman> yep, doing that. although it seems stuck on creating
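Condensed, the sequence doppelgrau suggests looks like this (0.1f is a placeholder PG id; force_create_pg recreates the PG empty, discarding whatever data it still referenced):

    ceph pg dump_stuck inactive        # list the stuck PGs
    ceph pg 0.1f query                 # inspect one of them
    ceph pg force_create_pg 0.1f       # "sledgehammer": recreate the PG, wiping it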
[12:57] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[12:58] * swami2 (~swami@49.44.57.242) has joined #ceph
[12:59] <koollman> doppelgrau: http://paste.debian.net/hidden/ef5b2ab2/ my intuition is that I need a bigger hammer, and/or I have something else still broken
[13:00] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit ()
[13:02] <doppelgrau> koollman: thats strange, you have 12 OSDs up, have you changed the roots in the crush-maps?
[13:02] <koollman> no
[13:03] <doppelgrau> just a wild guess (in that case the crush rules would not be able to find suitable PGs)
[13:03] <doppelgrau> simple solution: remove the pool with the 64 pgs
[13:04] <doppelgrau> but I'm wondering why they are not created
[13:05] <brians_> hi guys - what is your opinion on size = 2 being safe in an enterprise ssd-only cluster?
[13:05] * swami1 (~swami@49.38.0.169) Quit (Ping timeout: 480 seconds)
[13:05] <Gugge-47527> brians: not really, with size 2 you don't know which copy is ok if there is a difference
[13:06] <Gugge-47527> and there is a real chance that two ssd's could die at the same time when you have many servers
[13:06] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:06] <doppelgrau> brians: or the SSD with the current copy dies while one node is down for maintenance (and with old data = inconsistency)
[13:08] * getup (~getup@gw.office.cyso.net) has joined #ceph
[13:08] <brians_> thanks Gugge-47527 doppelgrau - it's a 5-node cluster with 4 ssds each - each ssd is an intel s3610 - I've read that at this scale, with proper ssds, size = 2 may be ok
[13:08] <doppelgrau> brians_: but in the end, it depends on how valuable your data is; the chance of corruption and data loss is IMHO lower than with spinning disks (faster recovery = shorter timeframe where you're vulnerable)
[13:08] <brians_> doppelgrau the backfills are pretty instant compared to rust, in fairness
[13:08] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:09] <brians_> size 2 vs 3 is the difference between 8 and 5.3 tb usable..
[13:12] <doppelgrau> brians_: Since many more error cases are covered with three copies (e.g. how to repair if deep scrub finds differences), size=2 is a higher risk
[13:13] <brians_> ok
[13:14] <koollman> doppelgrau: health HEALTH_OK. thanks :)
[13:14] <doppelgrau> brians_: but whether the risk is too high depends: if you have a good backup and the storage is not so mission-critical that you can't "survive" a restore, I'd go with size=2, but if a service interruption means lots of trouble, I'd always go with size=3
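For completeness, the replication settings under discussion are per-pool options; a minimal sketch, assuming the pool is called rbd:

    ceph osd pool set rbd size 3        # keep three copies
    ceph osd pool set rbd min_size 2    # only serve I/O while at least two copies are up
    ceph osd pool get rbd size          # verify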
[13:14] <brians_> Thanks for your input doppelgrau
[13:15] <brians_> I think you are right
[13:15] <brians_> better to have 5.3tb of good data than 8tb of garbage
[13:15] <brians_> :)
[13:17] <doppelgrau> cheap, fast, reliable, pick two ;)
[13:17] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:18] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[13:22] <koollman> I've been thinking about the way to make a 'prettier' map for ssd/hdd separation. Would it make sense to define a new type, between osd and host, so as not to have two separate roots but two kinds of leaves below host in the tree? This would still allow choosing independent hosts, but with a preference for ssd or hdd below that
[13:22] <koollman> or am I going to have too much trouble doing this ?
[13:23] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[13:25] * tZ (~Redshift@159.203.23.163) has joined #ceph
[13:27] <lri_> why would it be more difficult to repair with size=2 than size=3
[13:28] <koollman> lri_: compare repairing this string that should be three identical letters: BAA, to this one: BA
[13:28] <koollman> (also it means you still have two copies around when you lose the first one)
[13:29] <lri_> so there is no checksum to reliably see which one is the correct copy
[13:32] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[13:33] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[13:34] <doppelgrau> koollman: might work, but only if all your nodes are identical (also in the future), else I'm pretty sure it will lead to undesired effects (e.g. bad data distribution)
[13:34] <doppelgrau> lri_: yes, IIRC should change with bluestore
[13:35] <lri_> https://www.sebastien-han.fr/blog/2014/02/17/ceph-io-patterns-the-bad/ <- this seems to indicate that "Deep scrubbing (weekly) reads the data and uses checksums to ensure data integrity."
[13:36] <lri_> ... indicate that some kind of checksumming exists
[13:36] <lri_> quick googling didnt bring up much further info on this
[13:37] <doppelgrau> lri_: checksums are used to compare the data (to avoid transmitting it), but they are not stored on disk or checked on read
[13:38] <doppelgrau> (except if you use a filesystem that has checksum support like zfs)
[13:40] <flaf> koollman: personally, I use hosts with names ceph01-sata, ceph01-ssd etc.
[13:40] <lri_> yes that would be another matter
[13:43] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[13:44] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[13:48] * salwasser (~Adium@2601:197:101:5cc1:1d37:d5e5:6f51:870e) has joined #ceph
[13:49] <koollman> doppelgrau: 'in theory' my nodes will be pretty identical in the future. I'm guessing if they change a lot, I will attach the others to a different root and/or migrate data
[13:49] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[13:50] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[13:52] <doppelgrau> koollman: I was just thinking about "we need more hdd/ssd space, can you not put <other size> hdd/ssd in the new nodes" => weight changes and your approach would lead to bad data distribution
[13:52] <koollman> doppelgrau: I'm not sure about the bad data distribution, at least compared to a similar case with the default tree. wouldn't it also happen if the nodes are different anyway ?
[13:53] <koollman> like if my next node is all ssd, and I go with the 'typical' separate root for ssd nodes... it will still be not well balanced
[13:54] <doppelgrau> koollman: assume server version a has 2x1TB SSD and 2x4TB hdd => weight 10, server version b has 2x1TB SSD and 2x8TB hdd => weight 18
[13:55] * tZ (~Redshift@26XAAAYOE.tor-irc.dnsbl.oftc.net) Quit ()
[13:55] <doppelgrau> koollman: since you choose ssd or hdd after choosing the server, version b gets 1.8 times (18/10) as many SSD PGs although it has the same SSD storage
[13:55] <koollman> ah. right. the weight of OSDs that can't be chosen would still influence the higher level
[13:55] <doppelgrau> koollman: no, weight helps, in that case it's proportional to the storage capacity
[13:56] <koollman> well, I guess I will go with the documented example of separate root for ssd
[13:56] <koollman> it just doesn't feel completely right :)
[13:57] <doppelgrau> koollman: it is a bit ugly (especially with mixed setups), but works
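The "documented example of a separate root for ssd" mentioned here looks roughly like the following (a partial, hypothetical decompiled CRUSH map excerpt using flaf's per-media host naming; bucket ids and weights are made up, and the host bucket definitions are omitted):

    root ssd {
            id -10
            alg straw
            hash 0  # rjenkins1
            item ceph01-ssd weight 1.000
            item ceph02-ssd weight 1.000
    }
    root hdd {
            id -20
            alg straw
            hash 0  # rjenkins1
            item ceph01-hdd weight 4.000
            item ceph02-hdd weight 4.000
    }
    rule ssd {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take ssd                       # start from the ssd root only
            step chooseleaf firstn 0 type host
            step emit
    }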
[13:58] * salwasser (~Adium@2601:197:101:5cc1:1d37:d5e5:6f51:870e) Quit (Quit: Leaving.)
[13:59] <koollman> side-note, the documentation for osd crush location hook could benefit from a line or two with example outputs :)
[13:59] * Bwana (~Spessu@46.166.190.245) has joined #ceph
[14:00] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[14:03] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[14:03] * Wahmed (~wahmed@206.174.203.195) has joined #ceph
[14:04] * sankarshan (~sankarsha@122.166.90.132) has joined #ceph
[14:10] * Wahmed2 (~wahmed@206.174.203.195) Quit (Ping timeout: 480 seconds)
[14:12] * kaisan_ (~kai@zaphod.kamiza.nl) has joined #ceph
[14:12] * kaisan (~kai@zaphod.kamiza.nl) Quit (Read error: Connection reset by peer)
[14:16] * icey (~Chris@0001bbad.user.oftc.net) Quit (Remote host closed the connection)
[14:17] * srk (~Siva@2605:6000:ed04:ce00:fccd:7de1:6468:eb51) has joined #ceph
[14:18] * icey (~Chris@pool-74-103-175-25.phlapa.fios.verizon.net) has joined #ceph
[14:20] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:22] * dennis_ (~dennis@2a00:801:7:1:1a03:73ff:fed6:ffec) has joined #ceph
[14:24] * getup (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[14:25] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:26] * srk (~Siva@2605:6000:ed04:ce00:fccd:7de1:6468:eb51) Quit (Ping timeout: 480 seconds)
[14:27] * Hemanth (~hkumar_@103.228.221.135) has joined #ceph
[14:29] * Bwana (~Spessu@46.166.190.245) Quit ()
[14:33] * saintpablos (~saintpabl@2.106.149.41) has joined #ceph
[14:37] * getup (~getup@gw.office.cyso.net) has joined #ceph
[14:38] * FierceForm (~Izanagi@tsn109-201-154-200.dyn.nltelcom.net) has joined #ceph
[14:39] * rakeshgm (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[14:39] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[14:40] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[14:40] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:40] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[14:41] * Nicho1as (~nicho1as@14.52.121.20) has joined #ceph
[14:44] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[14:47] * art_yo (~kvirc@149.126.169.197) Quit (Ping timeout: 480 seconds)
[14:49] * wes_dillingham (~wes_dilli@65.112.8.207) has joined #ceph
[14:51] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:52] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Read error: Connection reset by peer)
[14:52] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[14:59] * madkiss2 (~madkiss@ip5b40b89b.dynamic.kabel-deutschland.de) has joined #ceph
[15:00] * swami2 (~swami@49.44.57.242) Quit (Quit: Leaving.)
[15:03] * zokier` (zokier@kapsi.fi) has joined #ceph
[15:04] * madkiss (~madkiss@ip5b4029be.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[15:04] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[15:05] * lxxl (~oftc-webi@177.135.35.215.dynamic.adsl.gvt.net.br) has joined #ceph
[15:06] <lxxl> Has anyone run ceph successfully on lxd containers? i am getting an "ERROR: error creating empty object store in /mnt/osd: (13) Permission denied" when activating an osd
[15:06] <lxxl> 2016-08-10 13:04:05.982022 7f1c672c38c0 -1 filestore(/mnt/osd) mkfs: write_version_stamp() failed: (13) Permission denied
[15:07] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[15:08] * getup (~getup@gw.office.cyso.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[15:08] * shaunm (~shaunm@nat-eduroam-02.scc.kit.edu) has joined #ceph
[15:08] * FierceForm (~Izanagi@5AEAAAV6N.tor-irc.dnsbl.oftc.net) Quit ()
[15:13] * neurodrone_ (~neurodron@162.243.191.67) has joined #ceph
[15:13] * krypto (~krypto@G68-90-105-143.sbcis.sbc.com) has joined #ceph
[15:14] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[15:19] * getup (~getup@gw.office.cyso.net) has joined #ceph
[15:20] * ira (~ira@121.244.87.118) Quit (Quit: Leaving)
[15:21] * rdas (~rdas@121.244.87.116) has joined #ceph
[15:21] * madkiss (~madkiss@ip5b40289c.dynamic.kabel-deutschland.de) has joined #ceph
[15:23] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[15:23] * scuttle|afk is now known as scuttlemonkey
[15:25] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) Quit (Ping timeout: 480 seconds)
[15:26] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[15:26] * Racpatel (~Racpatel@2601:87:0:24af::53d5) Quit (Quit: Leaving)
[15:26] * Racpatel (~Racpatel@2601:87:0:24af::53d5) has joined #ceph
[15:26] * madkiss2 (~madkiss@ip5b40b89b.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[15:27] * kefu (~kefu@114.92.96.253) has joined #ceph
[15:28] * madkiss2 (~madkiss@ip5b4029b4.dynamic.kabel-deutschland.de) has joined #ceph
[15:29] * madkiss (~madkiss@ip5b40289c.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[15:32] <koollman> doppelgrau: I had another idea that will mostly work (in my setup at least): different hosts, different dc, same root: http://paste.debian.net/hidden/d2120326/
[15:32] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Remote host closed the connection)
[15:33] <koollman> (I do not intend to mix either datacenter or hdd/ssd for now, but at least it fits in a single tree and I can have some "global" pools with rules that may be defined later)
[15:35] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:48] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[15:51] <rmart04> Hi, Does anyone happen to know if true object versioning in swift (With ACL support) might be on the horizon? I notice its not in the supported caps list on Jewel.
[15:52] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:53] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:53] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:54] * ZombieL (~phyphor@ip95.ip-94-23-150.eu) has joined #ceph
[16:00] * bene3 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[16:03] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:03] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[16:04] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[16:11] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:12] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:17] * joshd1 (~jdurgin@2602:30a:c089:2b0:e8be:edb:d7be:db8e) has joined #ceph
[16:18] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[16:23] * ZombieL (~phyphor@61TAAA8B5.tor-irc.dnsbl.oftc.net) Quit ()
[16:24] * salwasser (~Adium@72.246.3.14) has joined #ceph
[16:25] * jpierre03 (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[16:29] * georgem (~Adium@76-10-180-154.dsl.teksavvy.com) has joined #ceph
[16:30] * georgem (~Adium@76-10-180-154.dsl.teksavvy.com) Quit ()
[16:30] <brians_> lxxl you need to use krbd
[16:31] * georgem (~Adium@76-10-180-154.dsl.teksavvy.com) has joined #ceph
[16:31] <brians_> actually lxxl I misread your question.
[16:31] <brians_> lxxl I have no idea why you would want to do that - good luck!
[16:31] * getup (~getup@gw.office.cyso.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[16:31] <lxxl> thanks anyway brians_ , i want to run ceph inside lxd containers
[16:34] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[16:41] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[16:41] * Nephyrin (~uhtr5r@tor-exit.gansta93.com) has joined #ceph
[16:43] <doppelgrau> lxxl: does the ceph user have the necessary access rights to the devices?
[16:44] * georgem (~Adium@76-10-180-154.dsl.teksavvy.com) Quit (Quit: Leaving.)
[16:44] <lxxl> doppelgrau: i did do a setfacl on the host with the id from the container
[16:44] <lxxl> anything else i would be missing?
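One more thing worth checking in a Jewel setup like lxxl's: the OSD daemon runs as the unprivileged ceph user by default, so the data directory must be owned by (or writable for) that user as seen from inside the container. A hedged sketch using the path from the error above:

    # inside the container (or on the host path that is mapped into it)
    chown -R ceph:ceph /mnt/osd
    ls -ld /mnt/osd          # verify ownership as the container sees it
    # alternatively, the "setuser match path" option in ceph.conf makes the daemon
    # run as whatever user owns its data path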
[16:46] <doppelgrau> <- did not know lxd => no idea how to change it in that case
[16:48] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[16:48] * kefu (~kefu@45.32.49.168) has joined #ceph
[16:51] * ircolle (~Adium@2601:285:201:633a:b157:72ef:6e7e:10c9) has joined #ceph
[16:55] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[16:56] * mykola (~Mikolaj@91.245.73.133) has joined #ceph
[16:58] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[16:58] * ntpttr_ (~ntpttr@134.134.139.74) has joined #ceph
[16:59] * karnan (~karnan@2405:204:5502:b48e:3602:86ff:fe56:55ae) Quit (Quit: Leaving)
[16:59] <SamYaple> morning
[17:00] <SamYaple> lri_: even with size=1, there are still checksums and other ways to ensure the correct data is returned.
[17:01] <SamYaple> lri_: scrubbing does a metadata check (ensure the objects exist where expected, and validate the sizes)
[17:01] <SamYaple> lri_: deep_scrubbing actually reads every bit of the object and validates it
[17:01] <SamYaple> lri_: in the case of an error, it will also repair when safe to do so with other valid copies
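Those scrubs can also be triggered by hand, which is handy when chasing a suspected inconsistency; a minimal sketch (PG and OSD ids are placeholders):

    ceph pg deep-scrub 0.1f    # read and checksum-compare every replica of this PG
    ceph pg repair 0.1f        # ask the primary to repair an inconsistent PG
    ceph osd deep-scrub 3      # deep-scrub all PGs for which osd.3 is primary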
[17:02] * srk_ (~Siva@32.97.110.55) has joined #ceph
[17:02] * vikhyat (~vumrao@123.252.219.11) Quit (Quit: Leaving)
[17:03] * kefu_ (~kefu@114.92.96.253) has joined #ceph
[17:03] * kefu (~kefu@45.32.49.168) Quit (Read error: Connection reset by peer)
[17:03] * Georgyo (~georgyo@shamm.as) Quit (Remote host closed the connection)
[17:03] * hrast (~hrast@204.155.27.220) has joined #ceph
[17:04] * saintpablos (~saintpabl@2.106.149.41) Quit (Ping timeout: 480 seconds)
[17:05] * Georgyo (~georgyo@shamm.as) has joined #ceph
[17:08] * kefu_ (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[17:08] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[17:09] * kefu (~kefu@114.92.96.253) has joined #ceph
[17:10] * valeech (~valeech@173-14-113-41-richmond.hfc.comcastbusiness.net) has joined #ceph
[17:10] <doppelgrau> SamYaple: which checksums exist on a recommended installation (xfs)? In the docs I did not find any (only that checksums are to be introduced with bluestore)
[17:11] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[17:11] * Nephyrin (~uhtr5r@5AEAAAV92.tor-irc.dnsbl.oftc.net) Quit ()
[17:14] <SamYaple> doppelgrau: the same checksums that exist for all of the other FS types, since it isn't fs related
[17:14] <SamYaple> doppelgrau: i dont think what you read is saying checksums are going to be introduced in bluestore in the sense that they dont exist elsewhere
[17:14] <SamYaple> but rather they will now _exist_ in bluestore
[17:16] * nastidon (~loft@tor2r.ins.tor.net.eu.org) has joined #ceph
[17:17] <SamYaple> doppelgrau: http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/#scrubbing
[17:17] <SamYaple> doppelgrau: "Deep scrubbing (weekly) reads the data and uses checksums to ensure data integrity."
[17:21] * joshd1 (~jdurgin@2602:30a:c089:2b0:e8be:edb:d7be:db8e) Quit (Quit: Leaving.)
[17:22] * b0e (~aledermue@213.95.25.82) has joined #ceph
[17:24] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[17:24] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:24] * kefu (~kefu@45.32.49.168) has joined #ceph
[17:26] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:28] <bene3> gregsfortytwo, MDS presentation happening on Mark's Ceph mtg
[17:29] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:36] * valeech (~valeech@173-14-113-41-richmond.hfc.comcastbusiness.net) Quit (Quit: valeech)
[17:36] * yanzheng1 (~zhyan@125.70.22.133) Quit (Quit: This computer has gone to sleep)
[17:37] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:39] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[17:39] * ntpttr_ (~ntpttr@134.134.139.74) Quit (Ping timeout: 480 seconds)
[17:40] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:40] * art_yo (~kvirc@93-80-233-59.broadband.corbina.ru) has joined #ceph
[17:41] <rkeene> Checksums exist in Ceph but AFAIK are not checked when you read, only during scrubs
[17:41] <doppelgrau> SamYaple: these checksums are computed on the fly and compared => with size=1, bit rot cannot be detected
[17:41] <rkeene> Which makes them practically useless
[17:42] <SamYaple> rkeene: doppelgrau that can't be right...
[17:42] <SamYaple> this is surprising info. I'll have to dig into this more
[17:42] <doppelgrau> SamYaple: and without majority voting (3 copies) ceph currently cannot detect which copy is faulty
[17:43] <rkeene> SamYaple, Hmm ? Which part is surprising ? YOU told me most of this, IIRC
[17:43] <doppelgrau> SamYaple: that's the reason they want to introduce checksums like in zfs in bluestore, so that every read can be checked to detect bit rot on the fly (and not a week later)
[17:44] <SamYaple> rkeene: that bit rot cannot be detected with one copy
[17:44] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[17:44] <rkeene> Hmm, I assumed it was using a checksum to verify it
[17:44] <rkeene> Let me look into scrubbing
[17:44] <SamYaple> right. thats what deepscrubing does
[17:44] <SamYaple> it _has_ a checksum, so bitrot should be found
[17:45] <rkeene> But that checksum isn't checked on reads
[17:45] <rkeene> Only on scrubs
[17:45] <SamYaple> yea that sounds right
[17:45] <rkeene> So you could read in a rotted sector and write a fresh new checksum based on rotten data
[17:45] <doppelgrau> SamYaple: rkeene: when is that checksum computed & updated?
[17:45] <SamYaple> doppelgrau: at time of object creation
[17:46] <doppelgrau> SamYaple: it would have to be updated on every write
[17:46] <rkeene> SamYaple, So rotten bits could end up with a correct checksum, since reads don't check it
[17:46] <doppelgrau> I do not think so
[17:46] * nastidon (~loft@9YSAAA9DW.tor-irc.dnsbl.oftc.net) Quit ()
[17:46] <doppelgrau> computing it on the fly seems more efficient (if you don't check on every read)
[17:46] <rkeene> Read, Update, Write. Read rotten (without checking the checksum), update rotten, write rotten (but with good checksum)
[17:47] <SamYaple> this requires testing. should be simple enough
[17:47] <rkeene> So scrubbing checks metadata, deep scrubbing checks the checksum weekly -- and thus if you read rotten data that is less than a week old on average and write out a new block based on it, rotten data will have a valid checksum.
[17:47] <doppelgrau> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005149.html
[17:48] <rkeene> Hmm, that's different from what I expected
[17:48] <doppelgrau> - During deep scrub, we read the objects and metadata, calculate a crc32c, and compare across replicas. This detects missing objects, bitrot, failing disks, or any other source of inconsistency.
[17:48] <doppelgrau> - Ceph does not calculate and store a per-object checksum. Doing so is difficult because rados allows arbitrary overwrites of parts of an object.
[17:48] <rkeene> It compares checksums to other replicas checksums
[17:49] <doppelgrau> not very current, but IIRC the core storage has not changed much (till bluestore)
[17:50] * Borf (~Kyso@ip95.ip-94-23-150.eu) has joined #ceph
[17:51] <SamYaple> http://tracker.ceph.com/projects/ceph/wiki/Osd_-_opportunistic_whole-object_checksums
[17:51] <rkeene> My distributed replicating filesystem names every object after the SHA256 of its contents -- this makes things slightly more tricky, but I always have a valid checksum to compare against and with size=2 I can tell which side is wrong
[17:51] <rkeene> SamYaple, That's mentioned, and only checked sometimes
[17:52] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[17:52] <SamYaple> rkeene: what do you mean thats mentioned?
[17:53] <doppelgrau> SamYaple: nice addition in hammer. so it detects bit rot, if no write operation has happened since the last deep-scrub
[17:54] <doppelgrau> better than nothing, but in any rbd use case mostly useless IMHO
[17:54] <SamYaple> doppelgrau: no, its per scrub, and it only invalidates if its a partial write
[17:55] <doppelgrau> rbd => 99,9% partial writes
[17:55] <SamYaple> fair enough
[17:55] <SamYaple> im not defending the current state of affairs, but its not as bad as its being made to sound
[17:55] <doppelgrau> SamYaple: scrub in that blueprint means deep-scrub
[17:56] <doppelgrau> (reading whole object, computing crc, sending over wire = deep scrub)
[17:56] <SamYaple> doppelgrau: yea youre right
[17:56] * evilrob is reading about erasure code pools this morning
[17:56] <doppelgrau> http://tracker.ceph.com/issues/9059
[17:57] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[17:57] <doppelgrau> implemented in 0.9.2
[17:57] <SamYaple> evilrob: those actually have checksuming
[17:57] <evilrob> I'm seeing no RBD images, but I'm wanting to create a pool just for object storage. it should be fine
[17:57] <SamYaple> ok lunch time
[17:57] <SamYaple> evilrob: object storage should have no partial writes, even better
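A hedged sketch of creating such an erasure-coded pool for pure object storage (profile name, k/m values and PG counts are illustrative, not a recommendation):

    ceph osd erasure-code-profile set ecprofile k=4 m=2
    ceph osd erasure-code-profile get ecprofile
    ceph osd pool create ecpool 128 128 erasure ecprofile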
[18:01] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit ()
[18:01] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[18:03] <srk_> hi, http://tracker.ceph.com/issues/16654 anyone encountered this issue with Jewel?
[18:08] <rkeene> SamYaple, That's mentioned in the email doppelgrau linked to
[18:09] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:11] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:13] * shaunm (~shaunm@nat-eduroam-02.scc.kit.edu) Quit (Ping timeout: 480 seconds)
[18:13] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:13] * DanFoster (~Daniel@2a00:1ee0:3:1337:5868:2f63:5fcc:d5e2) Quit (Ping timeout: 480 seconds)
[18:13] * fcj (~fcauzard@195.5.224.170) Quit (Ping timeout: 480 seconds)
[18:14] * cyphase (~cyphase@c-50-148-131-137.hsd1.ca.comcast.net) has joined #ceph
[18:17] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[18:17] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:18] * DanFoster (~Daniel@81.174.207.235) has joined #ceph
[18:18] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[18:19] * gregmark (~Adium@68.87.42.115) has joined #ceph
[18:20] * Borf (~Kyso@5AEAAAWBX.tor-irc.dnsbl.oftc.net) Quit ()
[18:21] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:22] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[18:23] * rmart04 (~rmart04@support.memset.com) Quit (Ping timeout: 480 seconds)
[18:23] * srk_ (~Siva@32.97.110.55) Quit (Ping timeout: 480 seconds)
[18:23] * sankarshan (~sankarsha@122.166.90.132) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[18:23] * georgem (~Adium@76-10-180-154.dsl.teksavvy.com) has joined #ceph
[18:23] * dan__ (~Daniel@81.174.207.235) has joined #ceph
[18:24] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:24] * srk (~Siva@32.97.110.55) has joined #ceph
[18:25] * cyphase (~cyphase@c-50-148-131-137.hsd1.ca.comcast.net) has joined #ceph
[18:26] * zokier` (zokier@kapsi.fi) has left #ceph
[18:27] * jiffe (~jiffe@nsab.us) Quit (Ping timeout: 480 seconds)
[18:27] * DanFoster (~Daniel@81.174.207.235) Quit (Ping timeout: 480 seconds)
[18:27] * DanFoster (~Daniel@2a00:1ee0:3:1337:5868:2f63:5fcc:d5e2) has joined #ceph
[18:28] <evilrob> is there a max number of buckets that the radosgw will handle? I know about the quota, but is there an architectural limit? Or one where performance falls off?
[18:31] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[18:31] * dan__ (~Daniel@81.174.207.235) Quit (Ping timeout: 480 seconds)
[18:31] * madkiss (~madkiss@2a02:8109:8680:14f2:9568:de2d:adfd:9e9d) has joined #ceph
[18:32] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[18:35] * madkiss2 (~madkiss@ip5b4029b4.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[18:38] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[18:39] * madkiss1 (~madkiss@ip5b40b877.dynamic.kabel-deutschland.de) has joined #ceph
[18:39] * madkiss (~madkiss@2a02:8109:8680:14f2:9568:de2d:adfd:9e9d) Quit (Ping timeout: 480 seconds)
[18:40] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:42] * georgem (~Adium@76-10-180-154.dsl.teksavvy.com) Quit (Quit: Leaving.)
[18:43] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[18:43] * DanFoster (~Daniel@2a00:1ee0:3:1337:5868:2f63:5fcc:d5e2) Quit (Quit: Leaving)
[18:45] * madkiss (~madkiss@2a02:8109:8680:14f2:9568:de2d:adfd:9e9d) has joined #ceph
[18:45] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:47] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:48] * cyphase (~cyphase@c-50-148-131-137.hsd1.ca.comcast.net) has joined #ceph
[18:49] * madkiss1 (~madkiss@ip5b40b877.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[18:51] * rmart04 (~rmart04@host109-155-213-112.range109-155.btcentralplus.com) has joined #ceph
[18:55] * rakeshgm (~rakesh@106.51.29.33) has joined #ceph
[18:56] * haomaiwang (~oftc-webi@119.6.102.203) Quit (Ping timeout: 480 seconds)
[18:58] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:58] * madkiss1 (~madkiss@2a02:8109:8680:14f2:9568:de2d:adfd:9e9d) has joined #ceph
[18:58] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[18:59] <lxxl> guys, restarted my ceph monitor node (i only have one still testing) and now i get this when running from an admin node... I imagine it has to do with something not being started correctly, can someone assist?: pipe(0x7fd3180057f0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fd318004ec0).fault
[19:00] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[19:02] * madkiss (~madkiss@2a02:8109:8680:14f2:9568:de2d:adfd:9e9d) Quit (Ping timeout: 480 seconds)
[19:04] * penguinRaider (~KiKo@146.185.31.226) Quit (Ping timeout: 480 seconds)
[19:05] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:05] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[19:05] * madkiss (~madkiss@ip5b40cfcb.dynamic.kabel-deutschland.de) has joined #ceph
[19:06] * madkiss1 (~madkiss@2a02:8109:8680:14f2:9568:de2d:adfd:9e9d) Quit (Ping timeout: 480 seconds)
[19:09] * dnunez (~dnunez@ceas-nat.EECS.Tufts.EDU) has joined #ceph
[19:10] * rmart04 (~rmart04@host109-155-213-112.range109-155.btcentralplus.com) Quit (Quit: rmart04)
[19:10] * Sigma (~danielsj@tsn109-201-154-210.dyn.nltelcom.net) has joined #ceph
[19:10] * BrianA1 (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[19:12] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:14] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[19:18] <lxxl> got it, the solution for my case was that the ip changed via dhcp; I followed http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/ and it worked fine
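The underlying lesson: monitors need stable addresses. Either pin the monitor's IP (static address or DHCP reservation) or run the add/remove-monitor procedure linked above after any change. A hypothetical ceph.conf fragment with a pinned address:

    [global]
    mon initial members = mon1
    mon host = 192.168.10.5           # hypothetical static monitor address
    public network = 192.168.10.0/24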
[19:20] * derjohn_mob (~aj@88.128.80.37) has joined #ceph
[19:23] * danieagle (~Daniel@187.75.21.173) has joined #ceph
[19:27] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:28] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[19:30] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:32] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[19:32] * krypto (~krypto@G68-90-105-143.sbcis.sbc.com) Quit (Quit: Leaving)
[19:32] * madkiss1 (~madkiss@ip5b407871.dynamic.kabel-deutschland.de) has joined #ceph
[19:34] * dnunez (~dnunez@ceas-nat.EECS.Tufts.EDU) Quit (Ping timeout: 480 seconds)
[19:36] * madkiss (~madkiss@ip5b40cfcb.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[19:38] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[19:40] * Sigma (~danielsj@5AEAAAWD4.tor-irc.dnsbl.oftc.net) Quit ()
[19:41] * BrianA1 (~BrianA@fw-rw.shutterfly.com) Quit (Read error: Connection reset by peer)
[19:41] * madkiss (~madkiss@ip5b40cf31.dynamic.kabel-deutschland.de) has joined #ceph
[19:42] * BrianA1 (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[19:45] * dnunez (~dnunez@130.64.25.58) has joined #ceph
[19:46] * madkiss1 (~madkiss@ip5b407871.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[19:47] * kefu_ (~kefu@114.92.96.253) has joined #ceph
[19:48] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:4d98:7dea:2462:19d7) Quit (Ping timeout: 480 seconds)
[19:48] * kefu (~kefu@45.32.49.168) Quit (Ping timeout: 480 seconds)
[19:50] * madkiss1 (~madkiss@ip5b4069e0.dynamic.kabel-deutschland.de) has joined #ceph
[19:54] * kefu_ (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[19:54] * derjohn_mob (~aj@88.128.80.37) Quit (Ping timeout: 480 seconds)
[19:55] * madkiss2 (~madkiss@ip5b4069e0.dynamic.kabel-deutschland.de) has joined #ceph
[19:55] * wido__ (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[19:55] * wido__ (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit ()
[19:55] * madkiss (~madkiss@ip5b40cf31.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[19:56] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[19:56] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Quit: Leaving)
[19:56] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[19:57] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) Quit (Remote host closed the connection)
[19:59] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[20:00] * madkiss1 (~madkiss@ip5b4069e0.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[20:00] * madkiss (~madkiss@2a02:8109:8680:14f2:9568:de2d:adfd:9e9d) has joined #ceph
[20:01] <srk> wido: hi, Are you aware of any issues with Jewel rbd writethrough option?
[20:01] <wido> No, not aware of those. Haven't seen them
[20:01] * georgem (~Adium@24.114.77.224) has joined #ceph
[20:02] <srk> wido: Thanks, someone opened this issue about a month ago, http://tracker.ceph.com/issues/16654
[20:03] * madkiss2 (~madkiss@ip5b4069e0.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[20:03] <srk> I'm trying to do an fio benchmark and I *think* we might be hitting this issue, as there is a big drop in IOPS with Jewel
[20:04] <wido> No, sorry, can't help you there
[20:04] <srk> that's fine, thank you
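For anyone trying to reproduce that comparison, a minimal fio job against an RBD image through librbd looks roughly like the following; the pool name, image name, and run parameters are placeholders, and the image has to exist before the run:

    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio-test
    direct=1
    time_based
    runtime=120

    [randwrite-4k]
    rw=randwrite
    bs=4k
    iodepth=32

Running the same job file against Hammer and Jewel clients is one way to tell whether the regression in that tracker issue is actually in play.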
[20:05] * rmart04 (~rmart04@host109-155-213-112.range109-155.btcentralplus.com) has joined #ceph
[20:06] * rene (~rene@x4db32bf4.dyn.telefonica.de) has joined #ceph
[20:06] * rene is now known as RsK
[20:06] * RsK is now known as _rsK
[20:14] * rmart04 (~rmart04@host109-155-213-112.range109-155.btcentralplus.com) Quit (Quit: rmart04)
[20:19] * joshd1 (~jdurgin@2602:30a:c089:2b0:e8be:edb:d7be:db8e) has joined #ceph
[20:21] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[20:23] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[20:23] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[20:31] * prpplague (~David@107-206-67-71.lightspeed.rcsntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:31] * neurodrone__ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[20:32] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[20:33] * neurodrone_ (~neurodron@162.243.191.67) Quit (Ping timeout: 480 seconds)
[20:33] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[20:34] * _rsK (~rene@x4db32bf4.dyn.telefonica.de) Quit (Quit: _rsK)
[20:41] * omg_im_dead (8087646b@107.161.19.109) has joined #ceph
[20:41] <omg_im_dead> i just lost power and 2/3 of my monitors came back with fs corruption
[20:42] <rkeene> Victory !
[20:42] <rkeene> What filesystem were you using for your monitors ?
[20:42] <omg_im_dead> I tried copying the monmap from the 3rd monitor to the other 2 and now i'm seeing cephx errors for the admin key and everything seems to be dead
[20:42] <omg_im_dead> ext4
[20:42] <rkeene> There's no particular reason to copy the monmap, as long as 1 monitor is working just turn off the other two broken ones
[20:42] <omg_im_dead> even on the working one I'm getting cephx admin errors
[20:43] <omg_im_dead> but the ceph admin mon_status shows that the quorum is working afaik
[20:43] <omg_im_dead> just delete the other two monitors and it should come back?
[20:43] <omg_im_dead> sorry for the franticness
[20:43] <rkeene> Is your cluster health OK, WARN, or ERR ?
[20:43] <lxxl> omg_im_dead: could it be an IP change?
[20:44] * art_yo (~kvirc@93-80-233-59.broadband.corbina.ru) Quit (Ping timeout: 480 seconds)
[20:44] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[20:44] <lxxl> i had something similar some hours ago
[20:44] <rkeene> (It should be WARN, I would expect)
[20:44] <omg_im_dead> the IPs are all the same. rkeene: I can't seem to get status from the ceph tool as the ceph admin key isn't working
[20:44] <rkeene> Fix the admin key first
[20:46] * georgem (~Adium@24.114.77.224) Quit (Ping timeout: 480 seconds)
[20:46] <omg_im_dead> the admin keyring looks right. Is there a doc on how to regenerate or remake it without having access any longer?
[20:48] * xarses (~xarses@172.56.39.126) has joined #ceph
[20:49] * georgem (~Adium@24.114.71.46) has joined #ceph
[20:55] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:56] * davidzlap (~Adium@2605:e000:1313:8003:b42b:fd22:c2c5:c5c2) has joined #ceph
[20:58] * rakeshgm (~rakesh@106.51.29.33) Quit (Quit: Leaving)
[21:00] <lxxl> "rbd ls -l" lists block device images and their sizes, but how can i see what space is unused?
[21:00] <jdillaman> lxxl: 'rbd du'
[21:01] <lxxl> jdillaman: error parsing du
[21:01] * ChanServ sets mode +o scuttlemonkey
[21:02] <jdillaman> lxxl: it's included in the Jewel release of Ceph
[21:02] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[21:02] <jdillaman> lxxl: prior to Jewel you could use this trick: http://ceph.com/planet/real-size-of-a-ceph-rbd-image/
[21:03] * davidzlap (~Adium@2605:e000:1313:8003:b42b:fd22:c2c5:c5c2) Quit (Quit: Leaving.)
[21:04] <lxxl> jdillaman: great, my nodes have the correct version but the one I was trying didn't. However, I guess I phrased it wrong: I don't want to know the free space of each image, but rather how much space I have free to create more images
[21:04] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[21:05] <jdillaman> lxxl: ceph df?
[21:06] <omg_im_dead> is there a way to regenerate an admin.key? I'm trying 'ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring' but that is giving me a permission denied
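One approach that often gets around that permission error (not confirmed in this log) is to authenticate as the monitor itself rather than as client.admin, since the mon's own key sits on disk in its data directory and has full privileges. A sketch, assuming a monitor id of "a"; the path and id are assumptions:

    # run on the monitor host
    ceph -n mon. -k /var/lib/ceph/mon/ceph-a/keyring \
        auth get client.admin -o /etc/ceph/ceph.client.admin.keyring

If client.admin was never deleted, this simply re-exports its existing key rather than generating a new one.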
[21:06] * georgem (~Adium@24.114.71.46) Quit (Quit: Leaving.)
[21:06] * georgem (~Adium@206.108.127.16) has joined #ceph
[21:06] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:07] <lxxl> jdillaman: sounds about right thank you very much
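To summarize the two commands that came up: 'rbd du' reports per-image usage, while 'ceph df' answers the how-much-room-is-left question. A quick usage sketch (pool and image names are placeholders):

    # provisioned vs. actually used space per image (Jewel and later)
    rbd du --pool rbd
    rbd du rbd/myimage

    # cluster-wide raw usage plus per-pool used/available space
    ceph df
    ceph df detail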
[21:08] * xarses (~xarses@172.56.39.126) Quit (Ping timeout: 480 seconds)
[21:09] * davidzlap (~Adium@2605:e000:1313:8003:49a:34df:fe9e:be17) has joined #ceph
[21:12] * derjohn_mob (~aj@x590e2d72.dyn.telefonica.de) has joined #ceph
[21:12] * rendar (~I@host118-131-dynamic.59-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:14] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) has joined #ceph
[21:16] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[21:16] * Hemanth (~hkumar_@103.228.221.135) Quit (Quit: Leaving)
[21:18] * hrast (~hrast@204.155.27.220) Quit (Quit: hrast)
[21:20] * ircolle (~Adium@2601:285:201:633a:b157:72ef:6e7e:10c9) Quit (Quit: Leaving.)
[21:21] * ircolle (~Adium@2601:285:201:633a:1c8b:71f5:60f8:4f79) has joined #ceph
[21:28] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) Quit (Remote host closed the connection)
[21:38] * rendar (~I@host118-131-dynamic.59-82-r.retail.telecomitalia.it) has joined #ceph
[21:42] * omg_im_dead (8087646b@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[21:43] * omg_im_dead (8087646b@107.161.19.109) has joined #ceph
[21:43] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[21:47] * cathode (~cathode@50.232.215.114) has joined #ceph
[21:55] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[21:57] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) has joined #ceph
[21:59] * xarses (~xarses@64.124.158.32) has joined #ceph
[22:02] * cronburg- (~cronburg@ceas-nat.EECS.Tufts.EDU) has joined #ceph
[22:04] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) Quit (Remote host closed the connection)
[22:05] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[22:07] * haplo37 (~haplo37@199.91.185.156) Quit (Read error: Connection reset by peer)
[22:11] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[22:11] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) has joined #ceph
[22:12] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[22:16] * Thononain (~Jourei@3.tor.exit.babylon.network) has joined #ceph
[22:25] * mykola (~Mikolaj@91.245.73.133) Quit (Quit: away)
[22:34] * Jeffrey4l__ (~Jeffrey@110.252.58.4) has joined #ceph
[22:35] * borei1 (~dan@216.13.217.230) has joined #ceph
[22:35] * borei1 (~dan@216.13.217.230) has left #ceph
[22:35] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[22:36] * beardo_ (~sma310@207-172-244-241.c3-0.atw-ubr5.atw.pa.cable.rcn.com) has joined #ceph
[22:36] * hrast (~hrast@cpe-24-55-26-86.austin.res.rr.com) has joined #ceph
[22:37] * wes_dillingham (~wes_dilli@65.112.8.207) Quit (Quit: wes_dillingham)
[22:39] * Jeffrey4l_ (~Jeffrey@110.252.71.193) Quit (Ping timeout: 480 seconds)
[22:40] <hrast> Any place one should start looking for flushing/throttling of writes? I'm seeing very bursty write (and read) throughput. Running 10.2.2, SSD for journals, dual 10GbE. Moving the journal to SSD from colocated OSDs had minimal impact...
[22:44] <SamYaple> hrast: if your workload is lots of small writes, I would lower your filesync threshold
[22:45] <SamYaple> if it stops being so bursty you have your answer
[22:46] <hrast> It's all ~50 KB images (I'm testing with cosbench), so you could say that.
[22:46] * Thononain (~Jourei@26XAAAY3P.tor-irc.dnsbl.oftc.net) Quit ()
[22:46] <SamYaple> I would lower your filesync to 1 second and see if it's as bursty
[22:47] <SamYaple> (default is 5 seconds)
[22:47] <hrast> Hmm, I want to say I tried that, but I've tried so many things over the past week, I don't remember any more.
[22:48] <hrast> I'll give it a shot though.
[22:48] * analbeard (~shw@host86-142-132-208.range86-142.btcentralplus.com) has joined #ceph
[22:50] <hrast> It's set to 10 seconds right now, which, now that I look at the graphs, seems to map pretty closely.
[22:51] <SamYaple> I would lower it to at least the default, 5 seconds. For that small a workload, 2-3 seconds isn't unreasonable
[22:51] <SamYaple> or stop using ssd journals (not a real recommendation)
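The knob SamYaple appears to be describing is the FileStore sync interval: the journal is flushed to the backing filesystem at most every filestore_max_sync_interval seconds (default 5). A sketch of trying the suggestion, assuming that reading is correct and treating the values as starting points only:

    # ceph.conf, [osd] section
    filestore max sync interval = 2
    filestore min sync interval = 0.01

    # or injected at runtime without restarting the OSDs
    ceph tell osd.* injectargs '--filestore_max_sync_interval 2'

Lower values smooth out the bursts at the cost of less write coalescing.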
[22:56] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[22:57] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[22:59] * bene3 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[22:59] <hrast> The other thing that might be contributing to this behavior is that I'm trying to get consistency with 256 workers' worth of concurrency.
[23:04] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:09] <brians> pasted 256 twice today into a pg create command. ceph happily started creating a pool with 256256 PGs
[23:09] <brians> things got a bit grindy for a few minutes
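For context, the command in question takes the PG count as a positional argument, so a doubled paste turns 256 into 256256. The intended invocation would be something like the following (the pool name is a placeholder):

    # 256 placement groups, and 256 for placement purposes (pgp_num)
    ceph osd pool create mypool 256 256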
[23:12] <hrast> Still quite bursty; I set the filestore sync to something ridiculously low (0.5 seconds) and am still seeing similar behavior. I'm watching top on one of the nodes, and when I'm in the lowest part of the wave, the CPU utilization for the radosgw and the OSDs is quite low.
[23:12] * rmart04 (~rmart04@host109-155-213-112.range109-155.btcentralplus.com) has joined #ceph
[23:13] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:14] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[23:16] * omg_im_dead (8087646b@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[23:17] * omg_im_dead (8087646b@107.161.19.109) has joined #ceph
[23:20] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[23:20] * rmart04 (~rmart04@host109-155-213-112.range109-155.btcentralplus.com) Quit (Quit: rmart04)
[23:21] <rkeene> SamYaple, I finally made a drawing of the descriptor block format for my distributed replicating filesystem objects: http://www.rkeene.org/viewer/tmp/auraefs-descriptor-block.png.htm (It's one of the only parts that actually has code behind it so far :-D)
[23:31] * penguinRaider (~KiKo@146.185.31.226) Quit (Ping timeout: 480 seconds)
[23:33] * chunmei (~chunmei@134.134.139.83) has joined #ceph
[23:37] * omg_im_dead (8087646b@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[23:38] * omg_im_dead (8087646b@107.161.19.109) has joined #ceph
[23:38] * srk (~Siva@32.97.110.55) Quit (Ping timeout: 480 seconds)
[23:39] * BrianA1 (~BrianA@fw-rw.shutterfly.com) Quit (Ping timeout: 480 seconds)
[23:39] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[23:42] * BrianA (~BrianA@fw-rw.shutterfly.com) has joined #ceph
[23:43] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[23:44] * ircolle (~Adium@2601:285:201:633a:1c8b:71f5:60f8:4f79) Quit (Quit: Leaving.)
[23:52] * dnunez (~dnunez@130.64.25.58) Quit (Remote host closed the connection)
[23:52] * srk (~Siva@32.97.110.55) has joined #ceph
[23:55] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) Quit (Ping timeout: 480 seconds)
[23:58] <omg_im_dead> FYI, the fix was to revert my change, stop all the monitors, remove the two broken ones from the working monitor's monmap, and start the working monitor
[23:58] <omg_im_dead> then cephx worked
[23:58] <omg_im_dead> and the cluster was "healthy"
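The monmap surgery described there follows the same extract/edit/inject pattern sketched earlier for the IP change; with all monitors stopped and a surviving monitor id of "c" (the ids here are hypothetical), it is roughly:

    ceph-mon -i c --extract-monmap /tmp/monmap
    monmaptool --rm a /tmp/monmap        # drop the two corrupted monitors
    monmaptool --rm b /tmp/monmap
    ceph-mon -i c --inject-monmap /tmp/monmap
    systemctl start ceph-mon@c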
[23:58] <omg_im_dead> but now I have another cluster and all 3 are dead
[23:58] <omg_im_dead> all 3 monitors
[23:58] <omg_im_dead> so now i don't know what to do
[23:59] <omg_im_dead> error opening mon data directory at '/var/lib/ceph/mon/...'

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.