#ceph IRC Log


IRC Log for 2016-07-19

Timestamps are in GMT/BST.

[0:00] * reed_ (~reed@184-23-0-196.dsl.static.fusionbroadband.com) has joined #ceph
[0:00] * rendar (~I@host1-139-dynamic.49-82-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:03] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[0:07] * reed_ (~reed@184-23-0-196.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[0:10] * debian112 (~bcolbert@64.235.154.81) Quit (Ping timeout: 480 seconds)
[0:11] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:11] * karnan (~karnan@106.51.130.90) Quit (Quit: Leaving)
[0:12] * johnavp19891 (~jpetrini@8.39.115.8) has joined #ceph
[0:12] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[0:12] * squizzi (~squizzi@107.13.31.195) Quit (Ping timeout: 480 seconds)
[0:17] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:20] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:21] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[0:26] * debian112 (~bcolbert@207.183.247.46) has joined #ceph
[0:38] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:42] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[0:43] * skinnejo (~skinnejo@173-27-199-104.client.mchsi.com) has joined #ceph
[0:47] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:48] * kuku (~kuku@119.93.91.136) has joined #ceph
[0:49] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[0:49] * penguinRaider (~KiKo@103.6.219.219) Quit (Ping timeout: 480 seconds)
[0:51] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:58] * penguinRaider (~KiKo@103.6.219.219) has joined #ceph
[0:58] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[1:01] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:05] * salwasser (~Adium@2601:197:101:5cc1:7d43:5b30:1701:397a) has joined #ceph
[1:08] * measter (~Kwen@104.ip-167-114-238.eu) has joined #ceph
[1:08] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[1:08] * salwasser (~Adium@2601:197:101:5cc1:7d43:5b30:1701:397a) Quit ()
[1:09] * salwasser (~Adium@2601:197:101:5cc1:7d43:5b30:1701:397a) has joined #ceph
[1:09] * salwasser (~Adium@2601:197:101:5cc1:7d43:5b30:1701:397a) Quit ()
[1:11] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[1:12] * oms101 (~oms101@p20030057EA48B800C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:15] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[1:16] * InIMoeK (~InIMoeK@105-183-045-062.dynamic.caiway.nl) Quit ()
[1:19] * kefu (~kefu@114.92.96.253) has joined #ceph
[1:21] * oms101 (~oms101@p20030057EA612300C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:33] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[1:37] * measter (~Kwen@26XAAAE5W.tor-irc.dnsbl.oftc.net) Quit ()
[1:38] * Rehevkor (~Schaap@109.236.90.209) has joined #ceph
[1:41] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[1:45] * johnavp19891 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[1:50] * KindOne_ (kindone@h134.148.29.71.dynamic.ip.windstream.net) has joined #ceph
[1:51] * derjohn_mob (~aj@x590e62fa.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[1:52] * fsimonce (~simon@host99-64-dynamic.27-79-r.retail.telecomitalia.it) Quit (Remote host closed the connection)
[1:56] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:56] * KindOne_ is now known as KindOne
[1:57] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[2:07] * borei (~dan@216.13.217.230) Quit (Ping timeout: 480 seconds)
[2:07] * Rehevkor (~Schaap@9YSAAAPCA.tor-irc.dnsbl.oftc.net) Quit ()
[2:12] * truan-wang (~truanwang@220.248.17.34) has joined #ceph
[2:12] * davidzlap (~Adium@2605:e000:1313:8003:1835:cf0e:b7dd:bf85) has joined #ceph
[2:14] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[2:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:16] * truan-wang (~truanwang@220.248.17.34) Quit (Remote host closed the connection)
[2:16] * truan-wang (~truanwang@58.247.8.186) has joined #ceph
[2:20] * neurodrone_ (~neurodron@162.243.191.67) has joined #ceph
[2:42] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:47] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[2:56] * davidzlap (~Adium@2605:e000:1313:8003:1835:cf0e:b7dd:bf85) Quit (Quit: Leaving.)
[2:56] * davidzlap (~Adium@2605:e000:1313:8003:1835:cf0e:b7dd:bf85) has joined #ceph
[2:57] * swami1 (~swami@27.7.162.30) has joined #ceph
[2:58] <Anticimex> TheSov: hrmpf. one of the two network ports suddenly ungood :s that was the root cause
[2:58] <Anticimex> s/ports/interfaces
[2:58] <Anticimex> anyway, learned a lot reading about parameters.
[2:58] <Anticimex> i'm curious about the new async messenger in jewel
[2:58] * salwasser1 (~Adium@2601:197:101:5cc1:10fe:4877:50b5:8be8) has joined #ceph
[2:59] <Anticimex> supposedly it does away with need to compile in JEMalloc to get better messaging performance
[2:59] <Anticimex> (eg http://www.slideshare.net/Red_Hat_Storage/ceph-performance-projects-leading-up-to-jewel-61050682 )
[3:00] <Anticimex> but i'm struggling a bit to google up info on async messenger
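For reference, the messenger implementation is selected in ceph.conf; a minimal sketch of switching to the async messenger, assuming the Jewel-era option name ms_type (the simple messenger is still the default there, so verify against your release notes before relying on this):

    # /etc/ceph/ceph.conf
    [global]
        ms_type = async      # async messenger instead of the default "simple" thread-per-connection messenger

    # restart daemons afterwards, e.g. on an OSD host:
    sudo systemctl restart ceph-osd@0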
[3:01] * salwasser1 (~Adium@2601:197:101:5cc1:10fe:4877:50b5:8be8) Quit ()
[3:02] * davidzlap (~Adium@2605:e000:1313:8003:1835:cf0e:b7dd:bf85) Quit (Quit: Leaving.)
[3:04] * swami1 (~swami@27.7.162.30) Quit (Quit: Leaving.)
[3:07] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[3:07] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[3:08] * toast (~oracular@tor-exit3-readme.dfri.se) has joined #ceph
[3:10] * Jeffrey4l_ (~Jeffrey@119.251.239.159) has joined #ceph
[3:15] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:25] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) has joined #ceph
[3:25] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[3:25] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[3:26] * reed (~reed@184-23-0-196.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[3:28] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[3:32] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[3:36] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:37] * toast (~oracular@5AEAAAEMY.tor-irc.dnsbl.oftc.net) Quit ()
[3:37] * Solvius (~ylmson@marcuse-2.nos-oignons.net) has joined #ceph
[3:39] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:42] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[3:43] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[3:44] * vbellur (~vijay@71.234.224.255) has joined #ceph
[3:45] * rwmjones (~rwmjones@230.83.187.81.in-addr.arpa) Quit (Ping timeout: 480 seconds)
[3:45] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) Quit (Ping timeout: 480 seconds)
[3:46] * lightspeed (~lightspee@2001:8b0:16e:1:8326:6f70:89f:8f9c) has joined #ceph
[3:48] * rwmjones (~rwmjones@230.83.187.81.in-addr.arpa) has joined #ceph
[3:51] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[3:55] * scg (~zscg@146-115-134-246.c3-0.nwt-ubr1.sbo-nwt.ma.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[3:57] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[4:03] * yanzheng (~zhyan@125.70.23.222) has joined #ceph
[4:04] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:06] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) has joined #ceph
[4:07] * Solvius (~ylmson@5AEAAAENZ.tor-irc.dnsbl.oftc.net) Quit ()
[4:07] * Arfed (~Da_Pineap@26XAAAFAL.tor-irc.dnsbl.oftc.net) has joined #ceph
[4:19] * Racpatel (~Racpatel@2601:87:0:24af::1fbc) Quit (Quit: Leaving)
[4:24] * flisky (~Thunderbi@210.12.157.87) has joined #ceph
[4:29] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[4:30] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:37] * Arfed (~Da_Pineap@26XAAAFAL.tor-irc.dnsbl.oftc.net) Quit ()
[4:37] * Bobby (~Rosenblut@65.19.167.131) has joined #ceph
[4:41] * chengpeng (~chengpeng@180.168.126.243) has joined #ceph
[4:53] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) has joined #ceph
[4:55] * kuku (~kuku@119.93.91.136) has joined #ceph
[5:00] * micw_ (~micw@ip92346916.dynamic.kabel-deutschland.de) has joined #ceph
[5:01] * dan__ (~Daniel@2a00:1ee0:3:1337:2879:3fee:1f90:5474) Quit (Quit: Leaving)
[5:01] * DanFoster (~Daniel@2a00:1ee0:3:1337:2879:3fee:1f90:5474) has joined #ceph
[5:02] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:03] * micw__ (~micw@p50992bfa.dip0.t-ipconnect.de) has joined #ceph
[5:07] * Bobby (~Rosenblut@61TAAAOPK.tor-irc.dnsbl.oftc.net) Quit ()
[5:07] * Vacuum__ (~Vacuum@i59F79D31.versanet.de) has joined #ceph
[5:08] * micw (~micw@p50992bfa.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[5:08] * Vidi (~PcJamesy@tor-exit.eecs.umich.edu) has joined #ceph
[5:10] * micw_ (~micw@ip92346916.dynamic.kabel-deutschland.de) Quit (Ping timeout: 480 seconds)
[5:14] * Vacuum_ (~Vacuum@i59F791AA.versanet.de) Quit (Ping timeout: 480 seconds)
[5:18] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[5:27] * skinnejo (~skinnejo@173-27-199-104.client.mchsi.com) Quit (Remote host closed the connection)
[5:32] * neurodrone_ (~neurodron@162.243.191.67) Quit (Quit: neurodrone_)
[5:37] * Vidi (~PcJamesy@61TAAAOP6.tor-irc.dnsbl.oftc.net) Quit ()
[5:40] * flisky (~Thunderbi@210.12.157.87) Quit (Quit: flisky)
[5:42] * lobstar (~Borf@torrelay6.tomhek.net) has joined #ceph
[5:43] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[5:44] * vimal (~vikumar@114.143.165.70) has joined #ceph
[5:45] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[5:48] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[5:48] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[6:00] * vimal (~vikumar@114.143.165.70) Quit (Quit: Leaving)
[6:08] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[6:10] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[6:12] <chengpeng> how can I find osd id on one host?
[6:12] * lobstar (~Borf@5AEAAAEQ3.tor-irc.dnsbl.oftc.net) Quit ()
[6:12] * Pieman (~djidis__@217.13.197.5) has joined #ceph
[6:19] * MentalRay (~MentalRay@LPRRPQ1401W-LP130-02-1242363207.dsl.bell.ca) has joined #ceph
[6:25] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[6:25] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:30] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[6:42] * Pieman (~djidis__@26XAAAFDG.tor-irc.dnsbl.oftc.net) Quit ()
[6:42] * oracular (~kiasyn@atlantic480.us.unmetered.com) has joined #ceph
[6:50] * Karcaw_ (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (Quit: Changing server)
[6:50] <[arx]> need more context
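Assuming the question is which OSD IDs live on a given host, a few quick ways to check (paths are the packaging defaults):

    ceph osd tree            # lists OSDs grouped under their host buckets
    ls /var/lib/ceph/osd/    # on the host itself: one ceph-<id> directory per OSD
    ceph-disk list           # maps local devices to OSD ids and journals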
[7:01] * theTrav (~theTrav@203.35.9.142) Quit (Remote host closed the connection)
[7:03] * joao (~joao@8.184.114.89.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[7:05] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[7:06] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit ()
[7:12] * oracular (~kiasyn@5AEAAAESB.tor-irc.dnsbl.oftc.net) Quit ()
[7:13] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[7:15] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit ()
[7:15] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[7:21] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[7:24] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[7:34] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:42] * matx (~DougalJac@46.101.197.155) has joined #ceph
[7:42] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) has joined #ceph
[7:42] * swami1 (~swami@49.38.1.205) has joined #ceph
[7:43] * derjohn_mob (~aj@88.128.80.20) has joined #ceph
[7:46] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[7:48] * garphy`aw is now known as garphy
[7:50] * dgurtner (~dgurtner@209.132.186.254) has joined #ceph
[7:51] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:56] * jlayton (~jlayton@cpe-2606-A000-1125-405B-C5-7FF-FE41-3227.dyn6.twc.com) Quit (Ping timeout: 480 seconds)
[7:56] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:57] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:05] * jlayton (~jlayton@107.13.71.30) has joined #ceph
[8:12] * matx (~DougalJac@61TAAAOTV.tor-irc.dnsbl.oftc.net) Quit ()
[8:12] * Oddtwang (~drupal@chomsky.torservers.net) has joined #ceph
[8:19] * MentalRay (~MentalRay@LPRRPQ1401W-LP130-02-1242363207.dsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[8:28] * derjohn_mob (~aj@88.128.80.20) Quit (Ping timeout: 480 seconds)
[8:29] * micw__ (~micw@p50992bfa.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[8:29] * garphy is now known as garphy`aw
[8:32] * derjohn_mob (~aj@88.128.80.20) has joined #ceph
[8:33] * natarej_ (~natarej@101.188.54.14) Quit (Read error: Connection reset by peer)
[8:34] * natarej_ (~natarej@101.188.54.14) has joined #ceph
[8:42] * derjohn_mob (~aj@88.128.80.20) Quit (Ping timeout: 480 seconds)
[8:42] * Oddtwang (~drupal@61TAAAOUJ.tor-irc.dnsbl.oftc.net) Quit ()
[8:46] * Kurt (~Adium@2001:628:1:5:104:2704:e8c9:18b9) Quit (Quit: Leaving.)
[8:50] * garphy`aw is now known as garphy
[9:00] * natarej (~natarej@101.188.54.14) has joined #ceph
[9:06] * natarej_ (~natarej@101.188.54.14) Quit (Ping timeout: 480 seconds)
[9:09] * garphy is now known as garphy`aw
[9:22] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:26] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:37] * maybebuggy (~maybebugg@2a01:4f8:191:2350::2) has joined #ceph
[9:41] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[9:42] * Helleshin (~Plesioth@62-210-37-82.rev.poneytelecom.eu) has joined #ceph
[9:42] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[9:43] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[9:43] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[9:44] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:44] * kuku (~kuku@119.93.91.136) has joined #ceph
[9:45] <T1> Anticimex: what did you mean with "ungood" for one of your interfaces?
[9:45] * joao (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[9:45] * ChanServ sets mode +o joao
[9:46] * garphy`aw is now known as garphy
[9:47] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[9:48] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[9:53] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[9:58] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:59] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:02] * pdrakewe_ (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) has joined #ceph
[10:04] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:07] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) Quit (Ping timeout: 480 seconds)
[10:08] * pdrakeweb (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) Quit (Ping timeout: 480 seconds)
[10:12] * Helleshin (~Plesioth@5AEAAAEWR.tor-irc.dnsbl.oftc.net) Quit ()
[10:12] * offer (~Coestar@Relay-J.tor-exit.network) has joined #ceph
[10:16] * dan__ (~Daniel@2a00:1ee0:3:1337:8547:cca4:ce31:ebae) has joined #ceph
[10:16] * jarrpa (~jarrpa@63.225.131.166) has joined #ceph
[10:16] * georgem (~Adium@85.204.4.209) has joined #ceph
[10:17] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:21] * georgem (~Adium@85.204.4.209) Quit ()
[10:23] * DanFoster (~Daniel@2a00:1ee0:3:1337:2879:3fee:1f90:5474) Quit (Ping timeout: 480 seconds)
[10:23] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[10:27] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[10:27] * truan-wang (~truanwang@58.247.8.186) Quit (Ping timeout: 480 seconds)
[10:33] <Anticimex> T1: i saw lots of dup acks, not necessarily due to the interface itself, can be paths in network too. either way, downing the interface removed the errors
[10:34] <T1> Anticimex: and that fixed your blocked requests?
[10:35] <Anticimex> yes, it was due to slow network throughput on one of the interfaces
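A quick way to confirm a flaky interface or path like the one described here, with eth0 standing in for the real interface name:

    ip -s link show eth0                     # RX/TX error and drop counters
    ethtool -S eth0 | grep -iE 'err|drop'    # NIC-level statistics, driver permitting
    netstat -s | grep -i retrans             # host-wide TCP retransmission counters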
[10:39] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[10:39] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[10:39] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[10:41] * TMM (~hp@185.5.121.201) has joined #ceph
[10:41] * Vacuum_ (~Vacuum@88.130.192.141) has joined #ceph
[10:42] * offer (~Coestar@5AEAAAEXH.tor-irc.dnsbl.oftc.net) Quit ()
[10:45] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[10:48] * Vacuum__ (~Vacuum@i59F79D31.versanet.de) Quit (Ping timeout: 480 seconds)
[10:53] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e189:abfd:7eae:1796) has joined #ceph
[11:04] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[11:09] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:09] * fsimonce (~simon@host99-64-dynamic.27-79-r.retail.telecomitalia.it) has joined #ceph
[11:10] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[11:11] * kefu is now known as kefu|afk
[11:12] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[11:12] * notmyname1 (~GuntherDW@torrelay4.tomhek.net) has joined #ceph
[11:12] * ngoswami (~ngoswami@121.244.87.116) Quit ()
[11:14] * maybebuggy (~maybebugg@2a01:4f8:191:2350::2) Quit (Remote host closed the connection)
[11:17] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[11:18] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e189:abfd:7eae:1796) Quit (Ping timeout: 480 seconds)
[11:18] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:20] * kefu|afk is now known as kefu
[11:27] <T1> Anticimex: intereseting
[11:37] * georgem (~Adium@85.204.4.209) has joined #ceph
[11:37] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[11:40] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit ()
[11:41] * georgem (~Adium@85.204.4.209) Quit ()
[11:42] * notmyname1 (~GuntherDW@61TAAAOYF.tor-irc.dnsbl.oftc.net) Quit ()
[11:42] * dusti1 (~rushworld@9YSAAAPO5.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:44] * Jeeves_ (~mark@2a03:7900:1:1:4cac:cad7:939b:67f4) has joined #ceph
[11:44] <Jeeves_> Hi!
[11:44] <Jeeves_> Q: I have radosgw with civetweb behind haproxy. I cannot put a directory with a space in it
[11:44] <Jeeves_> 2016-07-19 11:41:10.991409 7fc77d7b2700 1 civetweb: 0x7fc7b4014050: 10.0.0.35 - - [19/Jul/2016:11:41:10 +0200] "PUT /foo+bar/blabla HTTP/1.1" 403 0 - -
[11:45] * b0e (~aledermue@213.95.25.82) has joined #ceph
[11:46] <Jeeves_> Can anyone explain why I get that? And how I can fix that?
[11:46] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Remote host closed the connection)
[11:46] * flisky (~Thunderbi@106.38.61.189) has joined #ceph
[11:48] <TMM> For a backup pool, would it be safe to create a min_size 2 pool that picks from root?
[11:49] <TMM> will that guarantee that at least one copy of each block will show up on each of my pods?
[11:49] <boolman> TMM: depends on your crushmap
[11:51] <TMM> boolman, I have 1 root, two pods, 15 hosts per pod, 8 osds per host. If I create a standard distributed pool which picks from root, will it automatically make sure one copy is on each pod then?
[11:51] <TMM> err min_size 1, size 2
[11:51] <TMM> is what I want
[11:51] <TMM> alternatively I was thinking an ec pool with k+m of 2+2
[11:55] <TMM> boolman, was that not the right information? :)
[11:57] <TMM> step chooseleaf firstn 0 type pod maybe?
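A minimal sketch of the kind of rule TMM is describing, assuming a bucket type named pod already exists in the CRUSH hierarchy, the root bucket is called default, and ruleset number 2 is free (all of which need checking against the actual map):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # add something along these lines to the rules section of crushmap.txt:
    #   rule replicated_per_pod {
    #       ruleset 2
    #       type replicated
    #       min_size 1
    #       max_size 10
    #       step take default                  # or whatever the root bucket is named
    #       step chooseleaf firstn 0 type pod
    #       step emit
    #   }
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    ceph osd pool set <pool> crush_ruleset 2   # size 2 / min_size 1 are set on the pool as usual

With size 2 and "chooseleaf firstn 0 type pod", each replica lands under a different pod, which is the one-copy-per-pod behaviour being asked about.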
[12:01] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[12:09] * jargonmonk (jargonmonk@00022354.user.oftc.net) has left #ceph
[12:10] <Jeeves_> Hmm. Dragondisk does it ok. s3cmd doesn't
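One thing worth checking here, offered as a guess rather than a diagnosis: a 403 on "PUT /foo+bar/..." is what a signature mismatch looks like when the client encodes the space in the key as '+' while the server canonicalises it differently. The two encodings of the same key, shown with Python 2's urllib:

    python -c 'import urllib; print urllib.quote("foo bar/blabla")'       # -> foo%20bar/blabla (path-style)
    python -c 'import urllib; print urllib.urlencode({"k": "foo bar"})'   # -> k=foo+bar        (form-style)

Turning up radosgw logging (debug rgw = 20) shows the string-to-sign on the server side, which makes this kind of mismatch easy to spot.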
[12:11] * flisky (~Thunderbi@106.38.61.189) Quit (Quit: flisky)
[12:12] * dusti1 (~rushworld@9YSAAAPO5.tor-irc.dnsbl.oftc.net) Quit ()
[12:13] <TMM> oh crap, I did something stupid, I accidentally replaced my admin keyring
[12:13] <TMM> with something that doesn't have admin rights
[12:13] <TMM> shit
[12:15] * rendar (~I@host200-143-dynamic.59-82-r.retail.telecomitalia.it) has joined #ceph
[12:16] * dgurtner (~dgurtner@209.132.186.254) Quit (Ping timeout: 480 seconds)
[12:16] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Read error: Connection reset by peer)
[12:16] * Sun7zu (~redbeast1@192.87.28.28) has joined #ceph
[12:17] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[12:19] * i_m (~ivan.miro@deibp9eh1--blueice4n0.emea.ibm.com) has joined #ceph
[12:22] * skoude (~skoude@193.142.1.54) Quit (Quit: Lost terminal)
[12:25] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[12:37] <sto> Hi, I'm doing a test deployment of CEPH on Debian Jessie using ceph-deploy and everything seems to work OK, but when I reboot the nodes the OSDs don't come back... I'm using virtual disks as OSD and partitions on the OS disk as journal (that's the plan for the production system) ... any pointers on what to look at?
[12:42] <sto> I'm using systemd, btw (the default on jessie)
[12:43] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Remote host closed the connection)
[12:45] * dgurtner (~dgurtner@178.197.227.166) has joined #ceph
[12:46] * Sun7zu (~redbeast1@26XAAAFK3.tor-irc.dnsbl.oftc.net) Quit ()
[12:47] * QuantumBeep (~aldiyen@195.228.45.176) has joined #ceph
[13:00] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:06] <Jeeves_> sto: That seems to be broken indeed
[13:07] * salwasser (~Adium@2601:197:101:5cc1:406c:2968:2406:9c15) has joined #ceph
[13:07] <Jeeves_> sto: Starting them manually seems to work, even though they don't seem to exist
[13:08] <sto> Jeeves_: does using sysvint fix the issue?
[13:08] <Jeeves_> I didn't try that
[13:09] <sto> Ok, I'll try and see how it goes
[13:09] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:12] * salwasser (~Adium@2601:197:101:5cc1:406c:2968:2406:9c15) Quit (Quit: Leaving.)
[13:13] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e189:abfd:7eae:1796) has joined #ceph
[13:14] * IvanJobs (~ivanjobs@122.14.140.7) has joined #ceph
[13:15] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[13:16] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:e189:abfd:7eae:1796) Quit ()
[13:16] * QuantumBeep (~aldiyen@5AEAAAE0O.tor-irc.dnsbl.oftc.net) Quit ()
[13:16] * Quackie (~Schaap@185.80.50.33) has joined #ceph
[13:17] <Jeeves_> Start them manually via systemctl start ceph-mon@osdnode01.service
[13:17] <Jeeves_> (for me, that was the command)
[13:18] * IvanJobs (~ivanjobs@122.14.140.7) Quit (Read error: Connection reset by peer)
[13:22] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[13:26] <sto> I'll try, because the deployment fails: [ceph01][INFO ] Running command: sudo systemctl enable ceph.target
[13:26] <sto> It tries to use systemctl and I have removed it
[13:28] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:34] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:95e5:5468:e78b:6573) has joined #ceph
[13:43] * ira (~ira@nat-pool-bos-u.redhat.com) has joined #ceph
[13:46] * Quackie (~Schaap@26XAAAFL5.tor-irc.dnsbl.oftc.net) Quit ()
[13:47] * ricin (~MatthewH1@195.228.45.176) has joined #ceph
[13:48] <sto> Jeeves_: The ceph-mon works, what fails is the osd daemons
[13:48] <sto> are
[13:48] * sherv (250906df@107.161.19.109) has joined #ceph
[13:48] <Jeeves_> sto: Yes, that's called differently
[13:49] <Jeeves_> ceph-osd@0.service
[13:49] <Jeeves_> Where 0 is the osd id, obviously
[13:49] <Jeeves_> The osd service is started for me, btw. Via udev, I think?
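For completeness, the per-daemon units can be enabled so they come back on boot; the id 0 below is just Jeeves_'s example:

    sudo systemctl enable ceph-osd@0
    sudo systemctl start ceph-osd@0
    systemctl list-units 'ceph-osd@*'    # which OSD units exist / are active on this host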
[13:49] * isti (~isti@fw.alvicom.hu) has joined #ceph
[13:50] * Kurt (~Adium@2001:628:1:5:2439:619d:6184:b145) has joined #ceph
[13:51] * art_yo (~kvirc@149.126.169.197) has joined #ceph
[13:51] <sherv> hi. can anyone explain to me some strange ceph behavior? i have a test lab with 3 OSDs, and in whatever circumstances, CEPH uses only osd 0 and 1, but not 2. i can't understand why
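A few things worth checking for the "osd 2 never gets data" symptom; in small test labs a tiny disk can end up with a CRUSH weight that rounds to zero, which keeps data off it entirely (commands below are illustrative):

    ceph osd tree                        # is osd.2 up, in, and does it have a non-zero CRUSH weight?
    ceph osd df                          # per-OSD utilisation and PG counts
    ceph osd crush reweight osd.2 1.0    # example: give it a sane weight if it shows 0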
[13:51] <art_yo> Hi all!
[13:52] <sto> Jeeves_: here it is failing, there must be something else
[13:53] <sto> Jeeves_: are you using a drive for each OSD and a partition for journaling?
[13:53] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) has joined #ceph
[13:53] <sto> Maybe that has something to do with it?
[13:54] <art_yo> I had a server with RBD device and there were some problem, I decided to create new RBD (that's a test server, so I'm not afraid). But now I can't create fs on rbd.
[13:54] <art_yo> [root@hulk ~]# rbd rm main
[13:54] <art_yo> 2016-07-19 18:49:11.828501 7fcfe556f760 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
[13:54] <art_yo> Removing image: 100% complete...done.
[13:54] <art_yo> [root@hulk ~]# rbd create bar --size 15728640 -m 192.168.127.12 -k /etc/ceph/ceph.client.admin.keyring
[13:54] <art_yo> [root@hulk ~]# rbd map bar --name client.admin -m 192.168.127.12 -k /etc/ceph/ceph.client.admin.keyring
[13:54] <art_yo> /dev/rbd0
[13:54] <art_yo> [root@hulk ~]# rbd locck remove ^C
[13:54] <art_yo> [root@hulk ~]# rbs ls
[13:54] <art_yo> -bash: rbs: command not found
[13:54] <art_yo> [root@hulk ~]# rbd ls
[13:54] <art_yo> bar
[13:54] <art_yo> [root@hulk ~]# rbd info bar
[13:54] <art_yo> rbd image 'bar':
[13:54] * art_yo (~kvirc@149.126.169.197) Quit (Read error: Connection reset by peer)
[13:55] * art_yo (~kvirc@149.126.169.197) has joined #ceph
[13:55] <isti> Hi, is it generally safe to recreate a non-failed OSD without waiting for the double recovery? just down it, crush remove, rm, create etcetc. So that I can avoid the double recovery - i can live with degraded cluster for the duration.
[13:55] <art_yo> I'm sorry. Did you see my message? I've got disconnect
[13:55] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[13:56] <etienneme> last was
[13:56] <etienneme> art_yo> rbd image 'bar':
[13:56] <etienneme> (use pastebin)
[13:57] <art_yo> ok
[13:58] <art_yo> http://pastebin.com/nTnX2VNT
[13:59] <art_yo> here is the log. And mkfs stops after "Discarding device blocks 0/xxxxxxx"
[14:00] <art_yo> I tried to delete all RBDs and create new ones
[14:01] <art_yo> And I also tried to run "ceph-deploy uninstall hulk" -> "ceph-deploy install hulk" -> "ceph-deploy admin hulk"
[14:02] <etienneme> art_yo: You dont use rbd map?
[14:02] <art_yo> I do
[14:02] <art_yo> line 5:
[14:02] <art_yo> [root@hulk ~]# rbd map bar --name client.admin -m 192.168.127.12 -k /etc/ceph/ceph.client.admin.keyring
[14:03] <sto> Jeeves_: oh, for some reason the script fails to activate the OSD, but if I do it manually it works: for _cdd in `ceph-disk list | grep prepared | awk '{ print $1 }'`; do ceph-disk activate $_cdd; done
[14:03] <etienneme> I would do mkfs.ext4 on /dev/rbd0
[14:04] <sto> Jeeves_: I'll look into the scripts to see where is the problem, but at least I can reboot now... ;) Thanks
[14:04] <art_yo> 2 mins
[14:04] <art_yo> I gonna try
[14:05] <Jeeves_> sto: I learned that ceph-deploy does not do activate automaticallly anymore
[14:08] <art_yo> http://pastebin.com/E3x4hC7n
[14:08] <art_yo> same result
[14:10] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[14:11] * garphy is now known as garphy`aw
[14:11] <art_yo> http://pastebin.com/mmVQC6ns
[14:12] <art_yo> seems like ceph works fine
[14:13] * Racpatel (~Racpatel@2601:87:0:24af::1fbc) has joined #ceph
[14:14] <sto> Jeeves_: and what am I supposed to do to fix my problem, then?
[14:15] <art_yo> And I :)
[14:16] * ricin (~MatthewH1@61TAAAO1S.tor-irc.dnsbl.oftc.net) Quit ()
[14:17] * Teddybareman (~Helleshin@89.43.62.11) has joined #ceph
[14:18] <Jeeves_> sto: I did ceph-deploy prepare and ceph-deploy activate seperatly
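A sketch of the two-step flow Jeeves_ mentions, with hypothetical host and device names (data disk /dev/sdb, journal partition /dev/sda5):

    ceph-deploy osd prepare  ceph01:/dev/sdb:/dev/sda5
    ceph-deploy osd activate ceph01:/dev/sdb1:/dev/sda5
    # and so the daemons come back after a reboot on systemd hosts:
    sudo systemctl enable ceph.target
    sudo systemctl enable ceph-osd@<id>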
[14:23] * nikbor (~n.borisov@admins.1h.com) has joined #ceph
[14:24] <nikbor> hello, i have a running ceph cluster, in which i have created a pool and in the pool i create a block device. After that I map the device and run mkfs.ext4 and it hangs while trying to do blkdev_fsync
[14:24] <nikbor> any ideas how to debug this
[14:24] <nikbor> ceph_health returns HEALTH_OK
[14:26] <art_yo> nikbor: I have almost similar issue
[14:26] <nikbor> this is on kernel 4.4.14 (as the client)
[14:27] <nikbor> ceph version 0.94.7
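Some hedged starting points for a hang like this (kernel RBD client stuck in blkdev_fsync while the cluster reports HEALTH_OK):

    dmesg | tail -50                          # libceph errors or hung-task warnings on the client
    cat /sys/kernel/debug/ceph/*/osdc         # requests the kernel client still has in flight (needs debugfs mounted)
    ceph daemon osd.<id> dump_ops_in_flight   # on an OSD host: ops stuck on that OSD, if any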
[14:27] * fmanana (~fdmanana@2001:8a0:6e0c:6601:2ab2:bdff:fe87:46f) has joined #ceph
[14:28] * fmanana (~fdmanana@2001:8a0:6e0c:6601:2ab2:bdff:fe87:46f) Quit ()
[14:30] <Amto_res> Hello, I have upgrade Client (Hypervisor KVM with librbd)... from 0.94 to 10.2.2 Jewel. Now i have error : rbd --id <POOL> copy <SRC> <DST> : [....] 0 -1 librbd: error writing header: (38) Function not implemented .... Is it normal for my cluster is not even a day? Or this is not a normal behavior?
[14:31] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[14:33] * mattia (20026@ninthfloor.org) Quit (Remote host closed the connection)
[14:34] <etienneme> Have you upgraded server too?
[14:35] <Amto_res> etienneme: i have upgrade with command : :~$ ceph-deploy --username cephnew install --stable jewel ih-prd-onenode01
[14:37] * johnavp1989 (~jpetrini@yakko.coredial.com) has joined #ceph
[14:37] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[14:38] <etienneme> It looks like you have an issue of incompatible version between client and server, check tunable too
[14:38] <sep> Amto_res, you upgraded in the reccomended order ? http://docs.ceph.com/docs/master/install/upgrading-ceph/ clients are the last thing to upgrade
[14:39] <Amto_res> sep:
[14:39] <Amto_res> I recently had a conference in France .. He first told to update clients. . . . : (
[14:40] <Amto_res> Luckily I upgrade a client to test ..
[14:41] * alexxy[home] (~alexxy@biod.pnpi.spb.ru) has joined #ceph
[14:41] <Amto_res> I'll upgrade my cluster .. then I would see after for that client.
[14:41] <sep> if you read the link , you will find almost on the bottom "Once you have upgraded the packages and restarted daemons on your Ceph cluster, we recommend upgrading ceph-common and client libraries (librbd1 and librados2) on your client nodes too." i suspect this means clients are the latest you upgrade
[14:42] <Amto_res> sep: yes. Let's Go for upgrade cluster :D
[14:43] <sep> hope you know what you are doing :) i am waiting for 1 more point release on jewel atleast. are a few bugs that would affect my debian based cluster
[14:43] <Jeeves_> Although I would expect the clients to be able to talk to older clusters. ..
[14:43] * alexxy (~alexxy@biod.pnpi.spb.ru) Quit (Ping timeout: 480 seconds)
[14:44] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[14:44] <sep> one would assume...
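When mixing client and cluster versions like this, it helps to confirm what each side is actually running before debugging further; a quick sketch:

    ceph --version                  # version of the local CLI / client libraries
    ceph tell osd.* version         # versions the OSD daemons report
    ceph osd crush show-tunables    # tunables profile the cluster currently requires of clients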
[14:45] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[14:45] <art_yo> can anybody help me with my issue?
[14:46] <art_yo> mkfs.ext4 freeze on "Discarding device blocks"
[14:46] * Teddybareman (~Helleshin@26XAAAFNT.tor-irc.dnsbl.oftc.net) Quit ()
[14:47] * Unforgiven (~Snowcat4@104.ip-167-114-238.eu) has joined #ceph
[14:48] <sep> art_yo, how large is the image? i accidentaly created a 2048 PB image once when i forgot that rbd took the size in MB's, did'nt detect it until mkfs never finished
[14:49] <art_yo> sep: 15 TB
[14:51] <art_yo> not so large
[14:51] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) has joined #ceph
[14:54] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[14:55] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[14:56] <art_yo> sep: and I created it with comands on history, so I didn't type anything manually
[14:58] <sep> i know you can tell mkfs to not go thru the whole image and discard blocks. but i do not know why it's slow.
[14:58] <sep> does the cluster have any io atm ?
[14:59] <art_yo> Im sorry
[14:59] <art_yo> what does "atm" meam?
[14:59] <sep> at the moment
[14:59] <liiwi> at the moment
[14:59] <art_yo> no
[15:00] <liiwi> dict.org ftw :)
[15:00] <sep> hum sorry i do not know.
[15:00] <art_yo> whait
[15:00] <art_yo> how am I supposed to checkout io?
[15:00] <sep> what happens if you make the fs without discard. and the do a discard using fstrim manually afterwards ?
[15:01] <sep> ceph -s io would be shown on the bottom
[15:01] <art_yo> Linux utilities, like iotop,ftop,nmon etc..?
[15:01] <art_yo> one sec
[15:02] <sep> example:
[15:02] <sep> recovery io 251 MB/s, 291 objects/s
[15:02] <sep> client io 0 B/s rd, 0 B/s wr, 2 op/s
[15:02] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[15:02] <art_yo> pardon
[15:03] <art_yo> http://pastebin.com/49tmMbQe
[15:03] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[15:04] <art_yo> I can't find out how to get io status.
[15:05] <sep> if there was any io it would be written below that output
[15:05] <art_yo> oh, ok
[15:05] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:06] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[15:06] <art_yo> I realize what it means, but how can I fix that? :)
[15:06] <sep> i often have the command 'watch ceph -s ' on a terminal while working on ceph
[15:07] <sep> perhaps strace the mkfs command to see if it's doing anythnig at all ? and check if there is any logs on the rbd client
[15:07] <sep> not been doing much of that kind of throubleshooting i am afraid.
[15:08] * garphy`aw is now known as garphy
[15:09] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:09] <art_yo> do you know where are rbd logs located?
[15:10] <art_yo> dmesg shows: INFO: task mkfs.ext4:2125 blocked for more than 120 seconds.
[15:11] <sep> ceph ceph health detail show anything ?
[15:11] <art_yo> [root@ceph-admin ~]# ceph health detail
[15:11] <art_yo> HEALTH_WARN 1 near full osd(s)
[15:12] <sep> i would have tried stopping the mkfs, and and mkfs without discard. and perhaps try fstrim manually afterwards
[15:12] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:12] <art_yo> I will try
[15:13] * rotbeard (~redbeard@aftr-109-90-233-215.unity-media.net) Quit (Quit: Leaving)
[15:13] <art_yo> mkfs.ext4 nodiscard -m0 /dev/rbd0 - this way?
[15:15] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:16] * Unforgiven (~Snowcat4@61TAAAO3G.tor-irc.dnsbl.oftc.net) Quit ()
[15:17] * Zeis (~legion@chaucer.relay.coldhak.com) has joined #ceph
[15:17] * bara (~bara@213.175.37.12) has joined #ceph
[15:18] * mhack (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[15:18] <art_yo> sep: thank you!
[15:19] <art_yo> looks like you were right
[15:19] <sep> i think it is -E nodiscard
[15:19] <art_yo> yep.
[15:20] <art_yo> now RBD is being formatting
[15:20] <art_yo> is being formatted and ceph -s shows io activity
[15:20] <art_yo> cluster f8aa3ef3-e5c9-4bd1-9ee8-2b141ff2f485 health HEALTH_WARN 1 near full osd(s) monmap e1: 1 mons at {ceph-admin=192.168.127.12:6789/0} election epoch 1, quorum 0 ceph-admin osdmap e15404: 11 osds: 11 up, 11 in pgmap v305166: 256 pgs, 1 pools, 6676 GB data, 1689 kobjects 13412 GB used, 3919 GB / 18260 GB avail 255 act
[15:21] <art_yo> sory
[15:21] <art_yo> cluster f8aa3ef3-e5c9-4bd1-9ee8-2b141ff2f485
[15:21] <art_yo> health HEALTH_WARN
[15:21] <art_yo> 1 near full osd(s)
[15:21] <art_yo> monmap e1: 1 mons at {ceph-admin=192.168.127.12:6789/0}
[15:21] <art_yo> election epoch 1, quorum 0 ceph-admin
[15:21] <art_yo> osdmap e15404: 11 osds: 11 up, 11 in
[15:21] <art_yo> pgmap v305166: 256 pgs, 1 pools, 6676 GB data, 1689 kobjects
[15:21] <art_yo> 13412 GB used, 3919 GB / 18260 GB avail
[15:21] <art_yo> 255 active+clean
[15:21] <art_yo> 1 active+clean+scrubbing
[15:21] <art_yo> client io 835 kB/s wr, 0 op/s
[15:22] <art_yo> I don't know if it is going to work fine, but it's my test lab and I don't care
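The working sequence from this exchange, spelled out with the device name from the log (the near-full OSD warning is still worth addressing before filling a 15 TB image):

    mkfs.ext4 -E nodiscard -m0 /dev/rbd0    # skip the very slow discard pass over the whole image
    mount /dev/rbd0 /mnt
    fstrim -v /mnt                          # optionally trim later, once mounted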
[15:24] * sugoruyo (~georgev@paarthurnax.esc.rl.ac.uk) has joined #ceph
[15:25] * DanFoster (~Daniel@2a00:1ee0:3:1337:e15c:f0b0:e98a:226d) has joined #ceph
[15:29] * swami1 (~swami@49.38.1.205) Quit (Quit: Leaving.)
[15:29] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[15:29] <sugoruyo> hello folks, can someone help me figure out why all my objects are misplaced? I have 3 EC 8+3 pools with 2048, 2048 and 1024 PGs and all those PGs are active+remapped. There are 2160 OSDs in the cluster meaning it's complaining there are too few PGs per OSD (2 > min 30) but I'm not exactly sure how that would cause all of them to be misplaced
[15:30] <nils_> any recent changes to the cluster?
[15:31] * johnavp1989 (~jpetrini@yakko.coredial.com) Quit (Ping timeout: 480 seconds)
[15:31] * dan__ (~Daniel@2a00:1ee0:3:1337:8547:cca4:ce31:ebae) Quit (Ping timeout: 480 seconds)
[15:35] <sugoruyo> nils_: I think I figured out what it is, the config. mgmt system seems to have removed the CRUSH rules for those pools
[15:35] * bara_ (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[15:36] <nils_> sugoruyo, making mistakes is human, deploying mistakes everywhere is devops ;)
[15:37] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[15:38] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[15:40] * topro__ (~prousa@p578af414.dip0.t-ipconnect.de) has joined #ceph
[15:41] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[15:42] * topro_ (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[15:46] * Zeis (~legion@5AEAAAE4A.tor-irc.dnsbl.oftc.net) Quit ()
[15:47] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[15:47] * Gecko1986 (~Guest1390@atlantic850.dedicatedpanel.com) has joined #ceph
[15:48] * rraja (~rraja@121.244.87.117) has joined #ceph
[15:49] * topro__ (~prousa@p578af414.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:49] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[15:49] * salwasser (~Adium@72.246.3.14) has joined #ceph
[15:52] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[15:52] * mhack (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[15:54] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:56] * skinnejo (~skinnejo@32.97.110.52) has joined #ceph
[15:57] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[15:58] <sugoruyo> nils_: yep
[15:59] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:01] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[16:01] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[16:01] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[16:02] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) has joined #ceph
[16:02] <sugoruyo> when I created the pools I did not specify the rulesets explicitly and we do have one that is deployed by config. mgmt.
[16:02] * simonada (~oftc-webi@static.ip-171-033-130-093.signet.nl) has joined #ceph
[16:02] <sugoruyo> so Ceph generated its own and used those, which the config. mgmt. was unaware of and it decided to take a hatchet to them.
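A quick way to spot this kind of mismatch, i.e. pools pointing at rulesets that no longer exist in the map (pre-Luminous option name crush_ruleset):

    ceph osd crush rule ls                  # rules actually present in the CRUSH map
    ceph osd crush rule dump <rule-name>
    ceph osd pool get <pool> crush_ruleset  # ruleset each pool is using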
[16:03] <s3an2> When mounting cephfs in fstab (kernel >4.5) is it only possible to list one monitor IP?
[16:04] * topro_ (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Remote host closed the connection)
[16:05] * topro_ (~prousa@p578af414.dip0.t-ipconnect.de) has joined #ceph
[16:05] <rkeene> s3an2, You can list as many as you want
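An illustrative fstab line for the kernel client with several monitors listed; the addresses and secretfile path are made up:

    10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0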
[16:07] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[16:07] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[16:08] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) has joined #ceph
[16:09] <simonada> Hello everyone. We are trying to deploy new disks on our ceph cluster. We are using ceph-deploy tool to do so. It always ends up saying the deployment has been successful, however, sometimes the OSD is just half created. It isn't added to the crush map, and no authentication key is set. Having a look at the new created OSD, ceph partition and journal partitions are created, vut the ceph does not all the expected files (e.g. active, w
[16:10] <simonada> *but the ceph partition does not contain all the expected files (e.g. active, whoami, current....)
[16:11] <simonada> We haven't found anything similar on ceph-user list. Any hints?
[16:12] * jarrpa (~jarrpa@63.225.131.166) Quit (Ping timeout: 480 seconds)
[16:12] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:15] <sugoruyo> simonada: I believe most files in there are created on the first run of the ceph-osd daemon, I would just manually try creating the OSD again in those cases
[16:16] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[16:16] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[16:16] * Gecko1986 (~Guest1390@26XAAAFQ5.tor-irc.dnsbl.oftc.net) Quit ()
[16:17] * Quackie (~shishi@tor-exit.ohdoom.net) has joined #ceph
[16:18] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[16:21] <simonada> sugoruyo, thanks. we tried that and it worked. Though it would be nice to know why it doesn't always work
[16:26] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) has joined #ceph
[16:34] <nils_> I'm wondering, how would I figure out ideal values for filestore_min_sync_interval and filestore_max_sync_interval?
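The log gives no single right answer here, but a common approach is to read the current values and experiment at runtime before committing anything to ceph.conf; the numbers below are placeholders only:

    ceph daemon osd.0 config show | grep sync_interval     # current values, run on the OSD host
    ceph tell osd.* injectargs '--filestore_min_sync_interval 1 --filestore_max_sync_interval 10'
    # if a setting measurably helps, persist it under [osd] in ceph.conf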
[16:34] <s3an2> rkeene, you are right - works well, I will see if I can update the docs about that(http://docs.ceph.com/docs/master/cephfs/fstab/)
[16:37] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[16:39] * alexxy[home] (~alexxy@biod.pnpi.spb.ru) Quit (Ping timeout: 480 seconds)
[16:39] * vata (~vata@207.96.182.162) has joined #ceph
[16:40] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[16:40] * scuttle|afk is now known as scuttlemonkey
[16:41] * alexxy (~alexxy@biod.pnpi.spb.ru) has joined #ceph
[16:44] * yanzheng (~zhyan@125.70.23.222) Quit (Quit: This computer has gone to sleep)
[16:44] * markl (~mark@knm.org) Quit (Ping timeout: 480 seconds)
[16:46] * ircolle (~Adium@166.177.56.214) has joined #ceph
[16:46] * joshd1 (~jdurgin@2602:30a:c089:2b0:a800:dcb7:247f:f297) has joined #ceph
[16:46] * Quackie (~shishi@9YSAAAPWH.tor-irc.dnsbl.oftc.net) Quit ()
[16:47] * N3X15 (~lobstar@torrelay4.tomhek.net) has joined #ceph
[16:47] * boredatwork (~overonthe@199.68.193.62) has joined #ceph
[16:47] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[16:49] * ircolle1 (~Adium@166.177.56.214) has joined #ceph
[16:49] * ircolle (~Adium@166.177.56.214) Quit (Read error: Connection reset by peer)
[16:49] <sugoruyo> simonada: glad it worked for you, if you can't get any output or logs from ceph-deploy to tell you what's gone wrong, you may want to bring this up as an issue on the mailing lsit
[16:50] <SamYaple> s3an2: even if you only list one monitor, it will connect to all monitors just fyi. the initial connection will be to just one though
[16:50] * sherv (250906df@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[16:50] * xarses (~xarses@64.124.158.100) has joined #ceph
[16:52] * ircolle (~Adium@166.177.56.214) has joined #ceph
[16:53] * rburkholder (~overonthe@199.68.193.54) Quit (Ping timeout: 480 seconds)
[16:54] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: Leaving)
[16:55] * rafa (~ralf@xdsl-87-79-156-179.netcologne.de) has joined #ceph
[16:56] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[16:56] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:57] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[16:57] <rafa> hello @all
[16:57] * ircolle1 (~Adium@166.177.56.214) Quit (Ping timeout: 480 seconds)
[16:57] <rafa> i do test rbd-mirror on a customer production server
[16:58] <rafa> i run into following problem searching for a solution
[16:58] <s3an2> SamYaple, yea, it was the case with a mon offline and a fresh mount request that I wanted to protect against. Thanks for your help!
[16:58] * newnick (~chatzilla@115.119.152.66) has joined #ceph
[16:58] <rafa> who would be the right person to contact?
[16:59] * kefu (~kefu@114.92.96.253) Quit (Quit: Textual IRC Client: www.textualapp.com)
[17:00] * newnick (~chatzilla@115.119.152.66) has left #ceph
[17:00] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[17:01] <rafa> 10 rbd images are replaying nicely
[17:02] <rafa> for a 6TB image i can't make it work
[17:03] <rafa> when first enabeling the image in charge it starts bootstrapping, but the process cracked
[17:03] * kefu (~kefu@183.193.182.196) has joined #ceph
[17:03] <rafa> now, can't remove the mal synced image from the backup-server
[17:04] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[17:05] <rafa> first, i disabled mirroring for the given image
[17:06] <rafa> then i try to rbd --cluster <cephcluster> rm <pool-name>/<rbd-name>
[17:06] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:06] <rafa> as expected, that can't work out, since rbd recognizes that it still has watchers
[17:07] <rafa> re-enabling mirriring ether fails
[17:07] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:07] * ccourtaut (~ccourtaut@157.173.31.93.rev.sfr.net) Quit (Quit: I'll be back!)
[17:07] * lpabon (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[17:09] <rafa> error message: librbd: cannot enable mirroring: last journal tag not owned by local cluster
[17:12] <rafa> consulting the source (master/src/librbd/internal.cc) the message is thrown from line 273
[17:12] * kefu_ (~kefu@114.92.96.253) has joined #ceph
[17:12] <rafa> this is correct, since remote cluster is not the primary
[17:12] <rafa> this is a deadlock situation, and i haven't found a way to resolve this situation ....
[17:14] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:14] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:14] <rafa> I'm running ceph version 10.2.2-101-gb15cf42 (b15cf42a4be7bb290e095cd5027d7f9ac604a97d) on ubuntu (16.04)
[17:16] <rafa> sure i can pastebin more neede infos as needed ....
[17:16] * N3X15 (~lobstar@5AEAAAE7N.tor-irc.dnsbl.oftc.net) Quit ()
[17:18] * mhack (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[17:18] * garphy is now known as garphy`aw
[17:19] * kefu (~kefu@183.193.182.196) Quit (Ping timeout: 480 seconds)
[17:19] * ntpttr (~ntpttr@134.134.139.76) has joined #ceph
[17:19] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:21] * kefu_ (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[17:21] * kefu (~kefu@114.92.96.253) has joined #ceph
[17:21] * ntpttr (~ntpttr@134.134.139.76) Quit ()
[17:22] * kefu (~kefu@114.92.96.253) Quit ()
[17:23] <dillaman> rafa: reading
[17:23] <rafa> nice, thanks in advance
[17:24] <dillaman> rafa: what do you mean by "process cracked"? you mean rbd-mirror daemon crashed?
[17:26] <rafa> actually i can't remember the correct order, since i have tested arround with lots of different actions
[17:26] * mhack (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:26] <rafa> for sure i also disabled the mirror deamon on both sides
[17:26] <rkeene> Hmm, I'm trying to upgrade to Ceph 10.2.2 from 0.94.7 -- it seems to want cython ?
[17:27] <rafa> i read as much of the docu as possible, to understand the process and not doing bullshit stuff.
[17:28] <dillaman> rafa: disabling mirroring on the primary image should cause the secondary image to be deleted (unless they are split-brained)
[17:28] * jlayton (~jlayton@107.13.71.30) Quit (Read error: Connection reset by peer)
[17:28] <dillaman> rafa: does "rbd mirror image status <image name> --cluster <secondary>" provide any insight?
[17:29] <rafa> i tried that as well. It should delete the image on remote without any further image or pool action, right?
[17:29] <rafa> @dillaman: will check ...
[17:30] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[17:30] * isti (~isti@fw.alvicom.hu) Quit (Ping timeout: 480 seconds)
[17:30] <dillaman> rafa: so long as the "secondary" cluster's rbd-mirror daemon is running
[17:31] * rnowling (~rnowling@104-186-210-225.lightspeed.milwwi.sbcglobal.net) Quit (Remote host closed the connection)
[17:31] <rafa> @dillaman: yes, it is
[17:31] * jlayton (~jlayton@107.13.71.30) has joined #ceph
[17:32] * dgurtner (~dgurtner@178.197.227.166) Quit (Read error: Connection reset by peer)
[17:32] <rafa> dillaman: output global_id: \nl state: down+unknown\nl description: status not found\nl last_update: 1970-01-01 01:00:00
[17:33] <rafa> dillaman: so, it has no connected id and has never got a sync
[17:34] <dillaman> can you run "rbd info" against it and pastebin the output?
[17:34] <rafa> dillaman: rbd --cluster <secondary> info <pool>/<image name>
[17:35] <rafa> size 6100 GB in 1561600 objects
[17:35] <rafa> order 22 (4096 kB objects)
[17:35] <rafa> block_name_prefix: rbd_data.31d41746f2e30
[17:35] <rafa> format: 2
[17:35] <rafa> features: layering, striping, exclusive-lock, object-map, fast-diff, journaling
[17:35] <rafa> flags:
[17:35] <rafa> stripe unit: 65536 bytes
[17:35] <rafa> stripe count: 4
[17:35] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[17:35] <rafa> journal: 31d41746f2e30
[17:35] <rafa> mirroring state: disabled
[17:35] <rafa> dillaman: do you prefere pastebin?
[17:35] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[17:35] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[17:37] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[17:37] * joshd1 (~jdurgin@2602:30a:c089:2b0:a800:dcb7:247f:f297) Quit (Quit: Leaving.)
[17:39] * scuttlemonkey is now known as scuttle|afk
[17:40] * scuttle|afk is now known as scuttlemonkey
[17:43] * ircolle (~Adium@166.177.56.214) Quit (Quit: Leaving.)
[17:44] * Jeffrey4l_ (~Jeffrey@119.251.239.159) Quit (Ping timeout: 480 seconds)
[17:44] * kawa2014 (~kawa@46.166.137.238) has joined #ceph
[17:45] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[17:45] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[17:46] * kefu (~kefu@183.193.182.196) has joined #ceph
[17:48] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[17:51] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:51] <dillaman> rafa: sorry -- pastebin is better for multiline output to avoid spamming irc
[17:52] <rafa> ok. go ahead
[17:52] * kefu_ (~kefu@114.92.96.253) has joined #ceph
[17:52] <dillaman> rafa: you should be able to force-promote the image on the secondary cluster and then remove it
[17:53] <dillaman> rafa: oh -- perhaps not
[17:53] * garphy`aw is now known as garphy
[17:53] <rafa> dillaman: like rbd --cluster <secondary> mirror image promote <pool-name>/<image-name> --force
[17:54] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[17:54] <rafa> but that requests that mirririong i enabled for the image on the <secondaray>
[17:54] <dillaman> rafa: yeah -- but since mirroring it off, that won't do anything
[17:55] <rafa> dillaman: to bad, yes!
[17:57] * dillaman thinking
[17:57] * kefu (~kefu@183.193.182.196) Quit (Ping timeout: 480 seconds)
[17:59] <dillaman> rafa: sadly, best solution might be to manually delete the image via rados -- it apparently crashed at an awkward point
[18:00] <dillaman> rafa: "rados --cluster <secondary> --pool <pool> ls | grep 31d41746f2e30" and pastebin the result
[18:00] * swami1 (~swami@27.7.162.30) has joined #ceph
[18:00] * bstillwell (~bryan@bokeoa.com) has joined #ceph
[18:01] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:01] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[18:01] * bara_ (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:02] <bstillwell> I'm looking at my ceph-mon logs and I'm wondering what the two numbers after the timestamp stand for. Anyone know?
[18:03] <dillaman> bstillwell: one is the thread id and the other is the log level (or -1 for errors)
[18:03] <bstillwell> dillaman: What usefulness is the thread id?
[18:03] <dillaman> bstillwell: debugging
[18:04] <dillaman> bstillwell: as a developer, I can trace intermingled log messages from different thread contexts
[18:04] <bstillwell> Ahh, but as an operator is there anything I can do with it?
[18:05] <bstillwell> Except hand them over to a developer when there's a problem that's not obvious?
[18:05] * rafa (~ralf@xdsl-87-79-156-179.netcologne.de) Quit (Read error: No route to host)
[18:05] * borei1 (~dan@216.13.217.230) has joined #ceph
[18:06] <dillaman> bstillwell: hard to say -- perhaps if you were investigating an issue
[18:06] * blizzow (~jburns@50.243.148.102) has joined #ceph
[18:07] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[18:07] <bstillwell> dillaman: ok, thanks for the info!
[18:08] <dillaman> bstillwell: np
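An annotated (made-up) log line matching dillaman's description of the two numbers after the timestamp:

    2016-07-19 18:02:31.123456 7f2b8c3fd700  0 mon.ceph-admin@0(leader) e1 handle_command ...
    #                          ^ thread id   ^ log level (negative values are errors)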
[18:10] * jarrpa (~jarrpa@67-4-129-67.mpls.qwest.net) has joined #ceph
[18:13] * bara_ (~bara@213.175.37.12) has joined #ceph
[18:15] * kawa2014 (~kawa@0SUAAAT5U.tor-irc.dnsbl.oftc.net) Quit (Read error: Connection reset by peer)
[18:16] * ira (~ira@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:17] * swami1 (~swami@27.7.162.30) Quit (Quit: Leaving.)
[18:20] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[18:20] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[18:21] * rafa (~ralf@xdsl-84-44-232-165.netcologne.de) has joined #ceph
[18:21] <rafa> @dillman: my wifi crashed ....
[18:22] <jdillaman> rafa: did you get my ‘rados’ command?
[18:24] <jdillaman> rafa: basically, follow http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image but also delete any objects named (regex) ‘^journal.31d41746f2e30’ and ‘^journal_data.31d41746f2e30’
[18:24] <rafa> @dillaman: no, missed the command.
[18:24] <rafa> @dillaman: reading ....
[18:25] <jdillaman> rafa: i opened a tracker ticket to fix this in the future
[18:26] <rafa> @dillaman: do you need any further info to ease you job?
[18:27] * simonada (~oftc-webi@static.ip-171-033-130-093.signet.nl) Quit (Quit: Page closed)
[18:27] <jdillaman> rafa: if you can provide a core dump or log from the crash, that would be great. otherwise, the tracker is to help recover
[18:28] <rafa> @dillaman: fortunately i did not save any crash log .... sorry
[18:28] <jdillaman> rafa: no worries
[18:29] <rafa> @dillaman: try to replay the rados cleanup ... will send any success.
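For the record, a sketch of the manual cleanup dillaman describes, reusing the 31d41746f2e30 prefix from the rbd info output above; this deletes data irreversibly, so double-check the ls output before piping it into rm:

    rados --cluster <secondary> -p <pool> ls | grep 31d41746f2e30 \
        | xargs -r -n 1 rados --cluster <secondary> -p <pool> rm
    # plus the name-to-id mapping object, per the linked blog post:
    rados --cluster <secondary> -p <pool> rm rbd_id.<image-name>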
[18:33] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[18:33] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[18:37] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:37] * DanFoster (~Daniel@2a00:1ee0:3:1337:e15c:f0b0:e98a:226d) Quit (Quit: Leaving)
[18:39] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[18:41] * sudocat (~dibarra@192.185.1.20) Quit (Remote host closed the connection)
[18:42] * kefu_ (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:43] * cathode (~cathode@50.232.215.114) has joined #ceph
[18:46] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[18:47] * pepzi (~QuantumBe@1.tor.exit.babylon.network) has joined #ceph
[18:53] * jarrpa (~jarrpa@67-4-129-67.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[18:55] * ffilzwin2 (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[18:57] * garphy is now known as garphy`aw
[18:59] * i_m (~ivan.miro@deibp9eh1--blueice4n0.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[19:02] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[19:02] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[19:03] * ira (~ira@nat-pool-bos-u.redhat.com) has joined #ceph
[19:05] * reed (~reed@216.38.134.18) has joined #ceph
[19:05] * haplo37 (~haplo37@199.91.185.156) Quit (Ping timeout: 480 seconds)
[19:05] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[19:05] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[19:09] * mykola (~Mikolaj@91.245.77.8) has joined #ceph
[19:12] * ira (~ira@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[19:14] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[19:16] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:17] * pepzi (~QuantumBe@9YSAAAP1X.tor-irc.dnsbl.oftc.net) Quit ()
[19:19] <rafa> @dillaman: successfully removed all rados block_name objects (including the journal parts). Finally removed the rbd object.
[19:21] <rafa> @dillaman: last but not least ... enabled rbd mirroring on the <primary> ... right now: image is bootstrapping, IMAGE_COPY/COPY_OBJECT
[19:21] * karnan (~karnan@106.51.130.90) has joined #ceph
[19:22] <rafa> @dillaman: it will take its time to mirror the objects and snapshots ... will await the sync success. Thank you for the support! Do appreciate your work very much!
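For reference, a minimal sketch of the per-image steps behind that, assuming the image is named myimage in pool rbd and that the rbd-mirror daemon and cluster peering are already set up:

    # journaling (which requires exclusive-lock) must be on before an image can be mirrored
    rbd feature enable rbd/myimage journaling
    # enable mirroring for the image on the primary cluster
    rbd mirror image enable rbd/myimage
    # watch the bootstrap/sync progress; the IMAGE_COPY/COPY_OBJECT states show up here
    rbd mirror image status rbd/myimage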
[19:24] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[19:37] * bara_ (~bara@213.175.37.12) Quit (Quit: Bye guys!)
[19:37] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:40] * evelu (~erwan@129.16.90.92.rev.sfr.net) has joined #ceph
[19:40] * alram (~alram@206.169.83.146) has joined #ceph
[19:46] * evelu (~erwan@129.16.90.92.rev.sfr.net) Quit (Read error: Connection reset by peer)
[19:47] * kalmisto1 (~Crisco@freeciv.nmte.ch) has joined #ceph
[19:52] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[19:54] * garphy`aw is now known as garphy
[19:56] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:58] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:59] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[19:59] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[20:03] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Remote host closed the connection)
[20:04] * garphy is now known as garphy`aw
[20:07] * jarrpa (~jarrpa@2602:43:485:b200:56ee:75ff:fe2b:de07) has joined #ceph
[20:08] * gregmark (~Adium@68.87.42.115) has joined #ceph
[20:13] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[20:15] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:16] * rendar (~I@host200-143-dynamic.59-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:17] * kalmisto1 (~Crisco@5AEAAAFDH.tor-irc.dnsbl.oftc.net) Quit ()
[20:19] * jarrpa (~jarrpa@2602:43:485:b200:56ee:75ff:fe2b:de07) Quit (Ping timeout: 480 seconds)
[20:20] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) has joined #ceph
[20:20] * debian112 (~bcolbert@207.183.247.46) Quit (Quit: Leaving.)
[20:23] * haplo37 (~haplo37@199.91.185.156) Quit (Ping timeout: 480 seconds)
[20:29] * pdrakewe_ (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) Quit (Ping timeout: 480 seconds)
[20:30] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[20:30] * karnan (~karnan@106.51.130.90) Quit (Quit: Leaving)
[20:31] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit ()
[20:31] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[20:31] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[20:33] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[20:33] * alram (~alram@206.169.83.146) Quit (Quit: leaving)
[20:35] * rafa (~ralf@xdsl-84-44-232-165.netcologne.de) has left #ceph
[20:37] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:42] * rendar (~I@host200-143-dynamic.59-82-r.retail.telecomitalia.it) has joined #ceph
[20:44] * penguinRaider (~KiKo@103.6.219.219) Quit (Ping timeout: 480 seconds)
[20:47] * Keiya1 (~vegas3@61TAAAPFH.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:48] * isti (~isti@BC06E559.dsl.pool.telekom.hu) has joined #ceph
[20:48] * mhack (~mhack@nat-pool-bos-u.redhat.com) has joined #ceph
[20:52] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[20:56] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[20:58] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[21:01] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[21:02] * penguinRaider (~KiKo@103.6.219.219) has joined #ceph
[21:03] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[21:03] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[21:06] * mhackett (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[21:08] * david__ (~david@207.107.71.71) has joined #ceph
[21:09] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[21:09] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[21:10] * mhack (~mhack@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:16] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[21:17] * Keiya1 (~vegas3@61TAAAPFH.tor-irc.dnsbl.oftc.net) Quit ()
[21:17] * legion (~Shnaw@192.42.116.16) has joined #ceph
[21:20] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[21:20] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[21:27] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[21:32] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[21:35] * karnan (~karnan@106.51.130.90) has joined #ceph
[21:38] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[21:39] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:43] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[21:43] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[21:47] * legion (~Shnaw@5AEAAAFFZ.tor-irc.dnsbl.oftc.net) Quit ()
[21:47] * Popz (~Kwen@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[21:48] * karnan (~karnan@106.51.130.90) Quit (Ping timeout: 480 seconds)
[21:53] * lpabon (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[21:55] * Vacuum__ (~Vacuum@88.130.219.117) has joined #ceph
[21:56] * georgem (~Adium@85.204.4.209) has joined #ceph
[22:02] * Vacuum_ (~Vacuum@88.130.192.141) Quit (Ping timeout: 480 seconds)
[22:17] * Popz (~Kwen@26XAAAF4Z.tor-irc.dnsbl.oftc.net) Quit ()
[22:17] * verbalins (~Joppe4899@109.236.90.209) has joined #ceph
[22:20] * squizzi (~squizzi@107.13.31.195) Quit (Ping timeout: 480 seconds)
[22:26] * mykola (~Mikolaj@91.245.77.8) Quit (Quit: away)
[22:29] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:29] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[22:45] * jargonmonk (jargonmonk@00022354.user.oftc.net) has joined #ceph
[22:46] * skinnejo (~skinnejo@32.97.110.52) Quit (Remote host closed the connection)
[22:47] * verbalins (~Joppe4899@26XAAAF5R.tor-irc.dnsbl.oftc.net) Quit ()
[22:47] * Crisco (~Hidendra@50.7.151.127) has joined #ceph
[22:49] * analbeard (~shw@host109-149-32-128.range109-149.btcentralplus.com) has joined #ceph
[22:55] * georgem (~Adium@85.204.4.209) Quit (Quit: Leaving.)
[22:58] * analbeard (~shw@host109-149-32-128.range109-149.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[22:59] * analbeard (~shw@support.memset.com) has joined #ceph
[22:59] * davidzlap (~Adium@2605:e000:1313:8003:4cc6:2246:b05b:6cd7) has joined #ceph
[23:02] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:03] * cathode (~cathode@50.232.215.114) has joined #ceph
[23:15] <bstillwell> Does anyone know if ceph-disk supports putting the BlueStore WAL on an SSD/NVMe device yet?
[23:16] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[23:17] * Crisco (~Hidendra@61TAAAPI6.tor-irc.dnsbl.oftc.net) Quit ()
[23:19] <T1> probably a bit early for that in jewel
[23:20] <bstillwell> So doing it manually as described here is the current method people are using?:
[23:20] <bstillwell> https://www.sebastien-han.fr/blog/2016/05/04/Ceph-Jewel-configure-BlueStore-with-multiple-devices/
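For reference, a rough sketch of the manual approach that post describes, assuming the jewel-era option names bluestore_block_db_path / bluestore_block_wal_path and an NVMe device chosen purely for illustration:

    # ceph.conf, per-OSD section: point the RocksDB DB and WAL at faster partitions
    # (set before the OSD is created, so its mkfs lays them out on these devices)
    [osd.0]
    bluestore block db path = /dev/nvme0n1p1
    bluestore block wal path = /dev/nvme0n1p2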
[23:20] * krypto (~krypto@G68-90-105-6.sbcis.sbc.com) has joined #ceph
[23:21] <T1> sorry, no idea
[23:37] * vbellur (~vijay@71.234.224.255) has joined #ceph
[23:37] <blizzow> I was thinking of doing a ceph jewel installation on ubuntu 16.04 (xenial) that uses bluestore. The documentation at http://docs.ceph.com/docs/master/install/get-packages/ only shows hammer as the latest major release. A) Should I use the default ubuntu ceph package, or install the ceph repo?
[23:38] <blizzow> B) Is ceph-deploy still the accepted manner to install osds and mons?
[23:38] <blizzow> C) Is calamari working in jewel?
[23:39] <bstillwell> blizzow: Just use 'jewel' for the release name. It should work fine.
[23:39] <bstillwell> jewel is mentioned multiple times on that page.
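A minimal sketch of what that looks like on xenial, following the get-packages page with jewel as the release name:

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb https://download.ceph.com/debian-jewel/ xenial main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph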
[23:40] <bstillwell> afaik, ceph-deploy is still an acceptable way to install osds and mons, although a production system will usually use puppet-ceph or ceph-ansible.
[23:40] <bstillwell> I don't know about calamari. I believe it's in the process of being replaced by ceph manager.
[23:41] <xarses> ceph-deploy is good for small and simple deployments
[23:41] * Roland- (~Roland-@46.7.150.9) has joined #ceph
[23:41] <xarses> a lot more needs to be considered for bigger ones
[23:41] <Roland-> yellow
[23:41] <xarses> <- puppet-ceph maintainer
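A minimal ceph-deploy sketch for a small cluster of that kind, with hypothetical hostnames node1-node3; bigger deployments would normally go through puppet-ceph or ceph-ansible as noted above:

    ceph-deploy new node1                            # initial ceph.conf and monmap for node1
    ceph-deploy install --release jewel node1 node2 node3
    ceph-deploy mon create-initial                   # bring up the monitor(s), gather keys
    ceph-deploy osd create node2:sdb node3:sdb       # one OSD per host:disk
    ceph-deploy admin node1 node2 node3              # distribute admin keyring and conf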
[23:42] <Roland-> I am planning to build a 2-node ceph cluster with 2 copies; each node has 4 disks (in this case osds). Is ceph smart enough to put the second copy on the second node in case node1 goes bust with its 4 osds?
[23:42] <xarses> Roland-: the default CRUSH rule will require that the replica is not on the same node
[23:43] <bstillwell> Roland-: Yeah, the default failure domain is the host level
[23:43] <Roland-> oh perfect
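For reference, that host-level placement can be checked and paired with a 2-copy pool; a sketch, with the pool name rbd assumed:

    # the default replicated rule uses "step chooseleaf firstn 0 type host"
    ceph osd crush rule dump
    # a 2-copy pool on 2 nodes then keeps one replica per node
    ceph osd pool set rbd size 2
    # lets I/O continue on one surviving replica (a trade-off for a 2-node setup)
    ceph osd pool set rbd min_size 1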
[23:44] <Roland-> I can use partitions as osds I presume?
[23:44] <bstillwell> So I've created the block.wal and block.db symlinks in /var/lib/ceph/osd/ceph-0/, but it looks like I need some way to tell rocksdb to initialize them.. Any ideas?
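One plausible way to do that, sketched with osd.0 and hand-made symlinks to spare partitions (BlueStore's mkfs should pick the links up):

    # sanity-check the symlinks created by hand in the OSD data dir
    ls -l /var/lib/ceph/osd/ceph-0/block.db /var/lib/ceph/osd/ceph-0/block.wal
    # re-run mkfs for that OSD id so the DB/WAL devices get initialized
    ceph-osd -i 0 --mkfs --mkkey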
[23:45] <bstillwell> Roland-: You can, but usually you want to use the whole device.
[23:45] <Roland-> I need something for the OS as well
[23:45] <bstillwell> Also if you're using partitions the drive should have a GPT partition table.
[23:45] <Roland-> it has to, 4tb anyway
[23:46] <Roland-> is there a capacity or performance calculator?
[23:46] <Roland-> let me see google
[23:46] <bstillwell> I've done it with msdos partition tables, but they don't start at boot that way.
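A sketch of both routes, with device names purely illustrative; GPT is what lets the ceph udev rules bring the OSD up at boot:

    # whole-disk case (recommended): ceph-disk lays out the partitions itself
    sudo ceph-disk prepare /dev/sdb
    sudo ceph-disk activate /dev/sdb1
    # partition case: hand ceph-disk an existing GPT partition instead
    sudo parted /dev/sda print        # confirm the label is gpt
    sudo ceph-disk prepare /dev/sda3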
[23:46] <xarses> you should avoid using the root volume as an OSD store
[23:47] * slowriot (~Bored@114-227-47-212.rev.cloud.scaleway.com) has joined #ceph
[23:47] <Roland-> I am not, I have 4 disks; just planning to create a raid1 4GB partition for the OS on all drives and then use the rest, up to 4TB, for ceph
[23:48] <bstillwell> It's possible, just not recommended.
[23:49] <xarses> that is better, but the background IOP/s can starve each other, so FYI
[23:49] <bstillwell> Yeah, you can end up with some performance issues in that config.
[23:49] <xarses> keep in mind this is even more of a problem if the OSD is also a MON
[23:49] <bstillwell> yep
[23:50] <xarses> in which case IO starvation can bring down the whole cluster
[23:50] <Roland-> allocating 4 disks out of 4, I might get away with a slowass usb stick
[23:50] <bstillwell> Mons like to be on SSDs, in my experience.
[23:50] <bstillwell> I wonder if a SATA DOM would work for you?
[23:50] <Roland-> no sata dom, I wish
[23:50] <Roland-> blade M710
[23:51] <xarses> again, it will work, and is great for a small/lab/personal cluster just want to make sure you are informed
[23:51] <Roland-> at least I have two usbs, I can raid1 them
[23:51] <xarses> you don't need to raid the OSD side
[23:51] <Roland-> no, on the os side
[23:51] <xarses> ok
[23:51] <Roland-> so I can use the whole disk for ceph
[23:51] <Roland-> I don't need iops
[23:51] <Roland-> but I do need linear speed
[23:53] <Roland-> in this case I am building a backup system
[23:53] <Roland-> where large 20GB+ files will be placed
[23:53] <Roland-> occasionally
[23:53] <Roland-> but since I am constrained to this 4-disk setup I am looking for a solution that doesn't lose too much data
[23:54] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:56] <bstillwell> If you haven't played with Ceph much, I would make it easier on yourself and start by using one of the disks for the OS and the other 3 as OSDs.
[23:56] <bstillwell> You can always reconfigure the cluster later to a more optimal config
[23:56] <bstillwell> Don't make it too hard on yourself at the beginning.
[23:57] <Roland-> alright
[23:57] <bstillwell> A USB flash drive probably won't be fast enough for hosting mons on.
[23:57] <Roland-> no mon
[23:57] <Roland-> just OS
[23:57] <Roland-> :)
[23:57] <Roland-> boot and stuff
[23:57] <bstillwell> Where will the ceph mons be?
[23:57] <Roland-> but I might be able to squeeze an ssd
[23:57] <bstillwell> An SSD would be ideal
[23:58] <Roland-> but then I will have 3 disks per node, two nodes
[23:58] <blizzow> Why do mons need to be OSDs?
[23:58] <blizzow> rather *SSDs
[23:58] <Roland-> I only have two servers on the blade for this
[23:58] * Italux (~Italux@186.202.97.3) has joined #ceph
[23:58] <Roland-> full height, 4disks
[23:58] <bstillwell> A single SSD for the OS, mon, and journals would be the way to go.
[23:59] <blizzow> I thought they were low on load and disk IO, just keeping cluster state and maps.
[23:59] <Roland-> yes these are very low on load
[23:59] <Roland-> nightly backups and that's pretty much it
[23:59] <Roland-> nothing else would access the cluster
[23:59] * jarrpa (~jarrpa@2602:43:481:4300:eab1:fcff:fe47:f680) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.