#ceph IRC Log

Index

IRC Log for 2016-06-29

Timestamps are in GMT/BST.

[0:01] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[0:04] * Concubidated (~cube@208.186.243.52) has joined #ceph
[0:08] * davidzlap (~Adium@ip-64-134-236-233.public.wayport.net) has joined #ceph
[0:15] * ntpttr (~ntpttr@134.134.139.83) Quit (Quit: Leaving)
[0:22] * matx (~SquallSee@120.29.217.46) has joined #ceph
[0:27] * vbellur (~vijay@68.177.129.155) has joined #ceph
[0:30] * Titin (~textual@LFbn-1-1560-65.w90-65.abo.wanadoo.fr) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:32] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:33] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) has joined #ceph
[0:34] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) Quit ()
[0:34] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) has joined #ceph
[0:35] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[0:38] * xarses_ (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[0:41] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) has joined #ceph
[0:42] * vbellur (~vijay@68.177.129.155) Quit (Quit: Leaving.)
[0:42] * vbellur (~vijay@68.177.129.155) has joined #ceph
[0:45] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:51] <ronrib> weird, after increasing the number of PGs on the cache pool, it won't auto flush any more
[0:51] * danieagle (~Daniel@191.205.88.237) Quit (Quit: Thanks for everything! :-) see you! :-))
[0:52] * gauravbafna (~gauravbaf@122.172.200.43) has joined #ceph
[0:52] * matx (~SquallSee@4MJAAG50L.tor-irc.dnsbl.oftc.net) Quit ()
[0:52] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[0:53] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[0:57] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:57] * SinZ|offline (~pepzi@91.109.29.120) has joined #ceph
[0:57] * niknakpaddywak (~xander.ni@outbound.lax.demandmedia.com) Quit (Quit: Lost terminal)
[0:58] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[1:00] * gauravbafna (~gauravbaf@122.172.200.43) Quit (Ping timeout: 480 seconds)
[1:02] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[1:05] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:06] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[1:08] * igoryonya (~kvirc@80.83.239.24) Quit (Ping timeout: 480 seconds)
[1:12] * onyb (~ani07nov@119.82.105.66) has joined #ceph
[1:12] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[1:13] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:13] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:16] * swami2 (~swami@27.7.164.158) has joined #ceph
[1:17] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) Quit (Quit: leaving)
[1:17] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) has joined #ceph
[1:18] * igoryonya (~kvirc@80.83.238.68) has joined #ceph
[1:19] * Skaag (~lunix@65.200.54.234) has joined #ceph
[1:22] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[1:24] * rendar (~I@host107-169-dynamic.116-80-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:27] * SinZ|offline (~pepzi@06SAAEMXE.tor-irc.dnsbl.oftc.net) Quit ()
[1:29] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (Quit: Leaving)
[1:30] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:31] * niknakpaddywak (~xander.ni@outbound.lax.demandmedia.com) has joined #ceph
[1:31] * Concubidated (~cube@208.186.243.52) Quit (Quit: Leaving.)
[1:36] * swami2 (~swami@27.7.164.158) Quit (Ping timeout: 480 seconds)
[1:36] * vbellur (~vijay@68.177.129.155) Quit (Ping timeout: 480 seconds)
[1:43] * sudocat (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[1:44] * MrAbaddon (~MrAbaddon@184.99.136.95.rev.vodafone.pt) has joined #ceph
[1:45] * utugi______ (~Inverness@06SAAEM1H.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:46] * Infected (~Infected@peon.lantrek.fi) has joined #ceph
[1:53] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Quit: Leaving.)
[1:53] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:53] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:55] * vata (~vata@cable-173.246.3-246.ebox.ca) has joined #ceph
[2:01] * scg (~zscg@2620:15c:6:fd00:25c1:e9ff:2b2a:efa0) has joined #ceph
[2:02] * scg (~zscg@2620:15c:6:fd00:25c1:e9ff:2b2a:efa0) Quit ()
[2:09] * jarrpa (~jarrpa@2602:3f:e183:a600:eab1:fcff:fe47:f680) Quit (Ping timeout: 480 seconds)
[2:10] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[2:15] * utugi______ (~Inverness@06SAAEM1H.tor-irc.dnsbl.oftc.net) Quit ()
[2:20] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:25] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[2:27] * raghu (~raghu@chippewa-nat.cray.com) has joined #ceph
[2:29] * NTTEC (~nttec@49.146.70.133) has joined #ceph
[2:34] * raghu (~raghu@chippewa-nat.cray.com) Quit (Quit: Leaving...)
[2:40] <ronrib> ah and i can't remove the cache tier because the data pool is erasure coded and in use by cephfs
[2:41] <ronrib> does anyone know how I can trigger the cache pool to start flushing again? I've tried resetting all the options
[2:44] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[2:50] * jarrpa (~jarrpa@184.97.246.86) has joined #ceph
[2:50] <motk> ronrib: surprise there's no command to do that
[2:51] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:51] * gauravbafna (~gauravbaf@122.172.200.43) has joined #ceph
[2:52] <gregsfortytwo> ronrib: scrub the cache pool
[2:53] <gregsfortytwo> I thought you had to force splits on cache pools because of this
[2:53] <gregsfortytwo> splitting breaks the pg stats and they need to get rebuilt before it can flush stuff properly
[2:54] <ronrib> cool thanks gregsfortytwo
[2:58] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Read error: Connection reset by peer)
[2:58] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[2:58] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[2:59] * gauravbafna (~gauravbaf@122.172.200.43) Quit (Ping timeout: 480 seconds)
[3:01] * joey_ (~oftc-webi@199.15.100.254) has joined #ceph
[3:01] <joey_> Hi Everyone
[3:01] <joey_> I made a mistake
[3:01] <ronrib> :(
[3:01] <joey_> I accidentally deleted the vm that I used to create a ceph cluster
[3:02] <joey_> I can do basic thing from the monitor node
[3:02] <joey_> but I can't prepare additional disks because I don't have ceph.bootstrap-osd.keyring
[3:02] <joey_> Is there any way to re-create that?
[3:03] <gregsfortytwo> you can get them all out of the monitors
[3:03] <gregsfortytwo> if you have an admin keyring anywhere, just by fetching them directly
[3:03] <ronrib> gregsfortytwo: yep that did it, woo! i actually read the scrub happened automatically v0v
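gregsfortytwo's fix above (scrub the cache pool so its per-PG stats get rebuilt after a split) can be scripted. A minimal sketch, assuming a Jewel-era CLI and a cache pool named "cachepool" (a placeholder); the pgid-matching pattern may need adjusting if your release formats `ceph pg ls-by-pool` output differently:

```shell
# Trigger a scrub on every PG in the cache pool so the per-PG stats
# are rebuilt after a split. "cachepool" is a placeholder pool name;
# the awk pattern picks pgids (e.g. "12.9b") out of the first column.
for pg in $(ceph pg ls-by-pool cachepool | awk '$1 ~ /^[0-9]+\./ {print $1}'); do
    ceph pg scrub "$pg"
done
```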
[3:03] <NTTEC> what command should I use to transfer a directory file using rados. ex. I have a folder name foldearly and wanted to put it inside my pool called notebook. I'm trying to use ceph as a storage for my website.
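rados has no recursive directory mode; `rados put` stores one object at a time, so each file in a directory has to be uploaded as its own object. A hedged sketch for the question above, using the pool ("notebook") and directory ("foldearly") NTTEC names; the object-naming scheme here is just one choice, and for serving a website CephFS or the RADOS Gateway is usually a better fit than raw rados objects:

```shell
# Upload every file under the directory as its own RADOS object,
# keyed by its path relative to the directory. Pool and directory
# names come from the question above.
cd foldearly || exit 1
find . -type f | while read -r f; do
    rados -p notebook put "${f#./}" "$f"
done
```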
[3:03] <gregsfortytwo> joey_: if not, you'll want to play some shenanigans by retrieving the admin key, using the monitor keyring
[3:04] <joey_> i have the admin key
[3:04] <gregsfortytwo> so you should be able to read the bootstrap-osd key then
[3:04] <joey_> but when I try to prepare a disk it complains that I don't have the bootstrap key
[3:04] <joey_> uh.....
[3:04] <gregsfortytwo> right, you'll need to retrieve it and put it in place
[3:04] <joey_> can you point me in the right direction?
[3:04] <gregsfortytwo> I believe this is all at docs.ceph.com if you go search or google :)
[3:05] <joey_> I spent the last hour googling?
[3:05] <joey_> then 15 minutes freaking out
[3:05] <joey_> then I hopped on here
[3:05] <joey_> :)
[3:06] <gregsfortytwo> http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-keys/#gather-keys
[3:06] <gregsfortytwo> "Note To retrieve the keys, you specify a host that has a Ceph monitor."
[3:06] <gregsfortytwo> will probably do it?
[3:07] <gregsfortytwo> if not, find all the stuff about retrieving auth keys and keyrings
[3:07] <gregsfortytwo> http://docs.ceph.com/docs/master/rados/operations/control/#authentication-subsystem will give you pieces to start with
[3:08] <gregsfortytwo> I'm off, night all
[3:09] <joey_> thanks so much
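The retrieval gregsfortytwo points at can be done directly: with a working admin keyring, `ceph auth get` fetches the bootstrap keys straight from the monitors. A hedged sketch; the output path follows common packaging defaults and MONHOST is a placeholder:

```shell
# Fetch the bootstrap-osd key from the monitors and write it where
# the osd-preparation tooling expects it (default packaging path).
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring

# Alternatively, with ceph-deploy, gather all keys from a monitor host:
ceph-deploy gatherkeys MONHOST
```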
[3:15] * Jeffrey4l (~Jeffrey@110.244.243.149) has joined #ceph
[3:15] * allen_gao (~allen_gao@58.213.72.214) Quit (Remote host closed the connection)
[3:18] * NTTEC (~nttec@49.146.70.133) Quit (Remote host closed the connection)
[3:18] * NTTEC (~nttec@49.146.70.133) has joined #ceph
[3:24] * onyb (~ani07nov@119.82.105.66) Quit (Read error: Connection reset by peer)
[3:26] * NTTEC (~nttec@49.146.70.133) Quit (Ping timeout: 480 seconds)
[3:30] * EinstCrazy (~EinstCraz@203.79.187.188) has joined #ceph
[3:41] * sebastian-w (~quassel@212.218.8.139) Quit (Remote host closed the connection)
[3:41] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[3:41] * rony (~rony@125-227-147-112.HINET-IP.hinet.net) has joined #ceph
[3:42] * rony (~rony@125-227-147-112.HINET-IP.hinet.net) Quit ()
[3:44] * yanzheng (~zhyan@125.70.22.48) has joined #ceph
[3:46] * rony (~rony@125-227-147-112.HINET-IP.hinet.net) has joined #ceph
[3:49] * Izanagi (~qable@torsrvs.snydernet.net) has joined #ceph
[3:55] * rony (~rony@125-227-147-112.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[3:55] * rony (~rony@125-227-147-112.HINET-IP.hinet.net) has joined #ceph
[4:00] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[4:02] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[4:05] * shyu (~Frank@218.241.172.114) has joined #ceph
[4:08] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[4:09] * EinstCrazy (~EinstCraz@203.79.187.188) Quit (Ping timeout: 480 seconds)
[4:09] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:10] * flisky (~Thunderbi@210.12.157.94) has joined #ceph
[4:12] * flisky (~Thunderbi@210.12.157.94) Quit ()
[4:19] * flisky (~Thunderbi@210.12.157.86) has joined #ceph
[4:19] * Izanagi (~qable@06SAAENFN.tor-irc.dnsbl.oftc.net) Quit ()
[4:20] * jarrpa (~jarrpa@184.97.246.86) Quit (Ping timeout: 480 seconds)
[4:29] * squizzi (~squizzi@107.13.31.195) Quit (Quit: bye)
[4:42] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[4:43] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[4:50] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[4:51] * Redshift (~blip2@5.189.188.111) has joined #ceph
[4:56] * titzer (~titzer@cs.13ad.net) Quit (Quit: WeeChat 1.3)
[5:02] * igoryonya (~kvirc@80.83.238.68) Quit (Ping timeout: 480 seconds)
[5:07] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[5:13] * igoryonya (~kvirc@80.83.238.43) has joined #ceph
[5:21] * yanzheng1 (~zhyan@125.70.22.48) has joined #ceph
[5:21] * Redshift (~blip2@06SAAENHE.tor-irc.dnsbl.oftc.net) Quit ()
[5:21] * danieagle (~Daniel@177.138.223.148) has joined #ceph
[5:23] * yanzheng (~zhyan@125.70.22.48) Quit (Ping timeout: 480 seconds)
[5:29] * Vacuum__ (~Vacuum@88.130.192.96) has joined #ceph
[5:34] * BlS (~Kealper@159.203.11.12) has joined #ceph
[5:36] * Vacuum_ (~Vacuum@i59F79FB1.versanet.de) Quit (Ping timeout: 480 seconds)
[5:38] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) Quit (Quit: Brochacho)
[5:51] * vbellur (~vijay@12.232.194.107) has joined #ceph
[5:56] * igoryonya (~kvirc@80.83.238.43) Quit (Ping timeout: 480 seconds)
[5:59] * overclk (~quassel@2400:6180:100:d0::54:1) has joined #ceph
[6:01] * jamespage (~jamespage@culvain.gromper.net) Quit (Read error: Connection reset by peer)
[6:01] * jamespag` (~jamespage@culvain.gromper.net) has joined #ceph
[6:04] * BlS (~Kealper@7V7AAGU9X.tor-irc.dnsbl.oftc.net) Quit ()
[6:07] * igoryonya (~kvirc@80.83.238.13) has joined #ceph
[6:15] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 482 seconds)
[6:16] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) has joined #ceph
[6:19] * linjan__ (~linjan@176.195.70.132) has joined #ceph
[6:21] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[6:23] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[6:32] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[6:33] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[6:48] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[6:49] * swami1 (~swami@49.38.0.153) has joined #ceph
[6:52] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[6:53] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[6:59] * penguinRaider (~KiKo@146.185.31.226) Quit (Remote host closed the connection)
[7:02] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:05] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:13] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[7:19] * kawa2014 (~kawa@94.162.101.54) has joined #ceph
[7:22] * Skaag (~lunix@65.200.54.234) has joined #ceph
[7:33] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) Quit (Ping timeout: 483 seconds)
[7:33] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[7:34] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[7:39] * linjan__ (~linjan@176.195.70.132) Quit (Ping timeout: 480 seconds)
[7:46] * tries (~tries__@2a01:2a8:2000:ffff:1260:4bff:fe6f:af91) Quit (Ping timeout: 480 seconds)
[7:47] * briner (~briner@2001:620:600:1000:5d26:8eaa:97f0:8115) Quit (Quit: briner)
[7:52] * tries (~tries__@2a01:2a8:2000:ffff:1260:4bff:fe6f:af91) has joined #ceph
[7:54] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[7:55] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[7:57] * EinstCrazy (~EinstCraz@203.79.187.188) has joined #ceph
[8:01] <TheSov> i would like to reweight every OSD all at once from 1.0 to 0.1 is that ok to do?
[8:03] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:03] * davidzlap (~Adium@ip-64-134-236-233.public.wayport.net) Quit (Read error: Connection reset by peer)
[8:05] * davidzlap (~Adium@ip-64-134-236-233.public.wayport.net) has joined #ceph
[8:05] * lifeboy (~roland@196.32.234.206) has joined #ceph
[8:08] * joey_ (~oftc-webi@199.15.100.254) Quit (Quit: Page closed)
[8:09] * arcimboldo (~antonio@84-75-174-248.dclient.hispeed.ch) has joined #ceph
[8:10] * Foysal (~Foysal@202.84.42.5) has joined #ceph
[8:11] * nardial (~ls@dslb-084-063-234-150.084.063.pools.vodafone-ip.de) has joined #ceph
[8:14] * gauravbafna (~gauravbaf@49.38.0.105) has joined #ceph
[8:14] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:19] * EinstCrazy (~EinstCraz@203.79.187.188) Quit (Read error: Connection reset by peer)
[8:19] * EinstCrazy (~EinstCraz@203.79.187.188) has joined #ceph
[8:27] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[8:30] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[8:32] <TheSov> ok so i wrote a script that reweighted all my osds at the same time, i didnt fuck shit up did i?
[8:37] <Chojin_> hello
[8:43] <TheSov> hi
[8:43] <MrBy2> TheSov: it should not make any difference if you reweight them all, because relatively it is still the same weight if you compare them to each other
[8:43] <TheSov> yes
[8:43] <TheSov> im trying to reweight them according to size
[8:43] <TheSov> they were all 1.0
[8:43] <TheSov> so i made them .1
[8:43] <TheSov> 1tb = .1
[8:43] <TheSov> so a 10tb disk would be 1.0
[8:44] <TheSov> i intend to introduce different sizes of drives to the cluster
[8:44] <TheSov> or maybe add a new root
[8:48] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[8:48] * penguinRaider (~KiKo@146.185.31.226) Quit (Read error: Connection reset by peer)
[8:49] * IvanJobs (~ivanjobs@103.50.11.146) has joined #ceph
[8:53] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[8:59] * flisky (~Thunderbi@210.12.157.86) Quit (Quit: flisky)
[9:00] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:01] * sickolog1 (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[9:01] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[9:04] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[9:05] * analbeard (~shw@host109-150-56-204.range109-150.btcentralplus.com) has joined #ceph
[9:05] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[9:07] * Titin (~textual@ALyon-658-1-192-23.w90-14.abo.wanadoo.fr) has joined #ceph
[9:09] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[9:11] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[9:12] * analbeard (~shw@host109-150-56-204.range109-150.btcentralplus.com) Quit (Quit: Leaving.)
[9:15] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:16] * EinstCrazy (~EinstCraz@203.79.187.188) Quit (Read error: Connection timed out)
[9:17] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:17] * EinstCrazy (~EinstCraz@203.79.187.188) has joined #ceph
[9:18] * analbeard (~shw@host109-150-56-204.range109-150.btcentralplus.com) has joined #ceph
[9:19] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:22] * analbeard (~shw@host109-150-56-204.range109-150.btcentralplus.com) Quit ()
[9:25] * Concubidated (~cube@208.186.243.52) has joined #ceph
[9:26] * IvanJobs_ (~ivanjobs@103.50.11.146) has joined #ceph
[9:29] * MrAbaddon (~MrAbaddon@184.99.136.95.rev.vodafone.pt) Quit (Remote host closed the connection)
[9:31] * IvanJobs (~ivanjobs@103.50.11.146) Quit (Ping timeout: 480 seconds)
[9:31] * liamchou (~liamchou@14.117.25.102) has joined #ceph
[9:34] * arcimboldo (~antonio@84-75-174-248.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:35] * thomnico (~thomnico@2001:720:410:3343:8ce2:4072:97bc:d12b) has joined #ceph
[9:37] * liamchouz (~liamchou@113.76.115.101) Quit (Ping timeout: 480 seconds)
[9:41] * mewald (~Adium@185.80.187.212) has joined #ceph
[9:42] * mewald (~Adium@185.80.187.212) has left #ceph
[9:43] * mewald (~Adium@185.80.187.212) has joined #ceph
[9:43] * mewald (~Adium@185.80.187.212) has left #ceph
[9:43] * arcimboldo (~antonio@84-75-174-248.dclient.hispeed.ch) has joined #ceph
[9:43] * mewald_ (~mewald@185.80.187.212) has joined #ceph
[9:44] <mewald_> is this the right channel to ask for support with a ceph cluster?
[9:46] * Swompie` (~Xerati@tor.secretvpn.net) has joined #ceph
[9:49] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[9:49] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:51] <badone> mewald_: don't ask to ask, just ask :)
[9:51] * penguinRaider (~KiKo@146.185.31.226) Quit (Read error: Connection reset by peer)
[9:51] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:52] * arcimboldo (~antonio@84-75-174-248.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:52] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:52] <mewald_> yeah ok but you directed me somewhere else only a couple of minutes ago :D
[9:53] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[9:53] <badone> mewald_: I sent you straight here from freenode
[9:53] <badone> mewald_: this is the official channel, freenode is not
[9:53] <mewald_> Ok, so I have a ceph cluster in which a large amount of OSD failed yesterday. We started all OSD again and so far they are available, BUT 13 PGs show as incomplete and we have 849 ops blocked. I have no clue how to solve this.
[9:54] <Be-El> TheSov: osd weight != osd crush weight. the osd weight defines how much of the available space should be used (0.0 - 1.0); the osd crush weight gives the weight of an osd in relation to other osds and can have any value, especially > 1.0
[9:54] <badone> mewald_: start by posting ceph health detail
[9:55] <Be-El> TheSov: if you want to use disks > 1TB feel free, the _crush_ weight will be 2.0 for 2 TB etc.
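The distinction Be-El draws can be illustrated with the two commands involved; the osd id (12) and values here are made up:

```shell
# CRUSH weight expresses relative capacity and can be any positive
# value (conventionally ~1.0 per TB):
ceph osd crush reweight osd.12 2.0   # e.g. a 2 TB disk

# The reweight is a separate 0.0-1.0 override applied on top of the
# CRUSH weight:
ceph osd reweight 12 0.8             # shift roughly 20% of data off osd.12
```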
[9:55] <badone> mewald_: you need to work out what is common about the blocked tasks
[9:55] <badone> s/tasks/ops/
[9:55] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit ()
[9:55] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:55] <mewald_> here we go :) https://gist.github.com/e25d655c8b5614be24cc80c616163507
[9:56] <mewald_> badone: ok makes sense. how can this be found out?
[9:56] * analbeard (~shw@support.memset.com) has joined #ceph
[9:56] <badone> mewald_: anything common to these?
[9:56] <badone> 100 ops are blocked > 2097.15 sec on osd.40
[9:56] <badone> 100 ops are blocked > 67108.9 sec on osd.31
[9:56] <badone> 100 ops are blocked > 2097.15 sec on osd.28
[9:56] <badone> 100 ops are blocked > 67108.9 sec on osd.4
[9:56] <badone> 100 ops are blocked > 2097.15 sec on osd.36
[9:56] <badone> 37 ops are blocked > 67108.9 sec on osd.35
[9:56] <badone> 27 ops are blocked > 2097.15 sec on osd.35
[9:57] <badone> 47 ops are blocked > 8388.61 sec on osd.6
[9:57] <badone> 8 ops are blocked > 2097.15 sec on osd.6
[9:57] <badone> 100 ops are blocked > 8388.61 sec on osd.55
[9:57] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[9:57] <badone> 84 ops are blocked > 2097.15 sec on osd.20
[9:57] <badone> 1 ops are blocked > 1048.58 sec on osd.20
[9:57] <badone> 56 ops are blocked > 2097.15 sec on osd.44
[9:58] <Be-El> mewald_: and add ceph version + output of ceph pg query for one of the affected pgs
[9:58] <Walex> badone: paste sites exist on the web... :-)
[9:58] <mewald_> ceph version 10.1.2 (4a2a6f72640d6b74a3bbd92798bb913ed380dcd4)
[9:58] <badone> mewald_: those osds are all one of the acting osds for the pgs that are in bad states
[9:59] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:59] <badone> Walex: noted, thought I might sneak one in but you caught me
[9:59] <mewald_> "ceph pg 12.9b query " hands forever
[9:59] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:00] <mewald_> hands = hangs
[10:00] * karnan (~karnan@121.244.87.117) has joined #ceph
[10:00] * DanFoster (~Daniel@2a00:1ee0:3:1337:8cd9:94fc:7bcc:371a) has joined #ceph
[10:00] * linjan (~linjan@86.62.112.22) has joined #ceph
[10:00] <badone> mewald_: so start taking a good look at osds 40 and 36
[10:00] <badone> both have slow requests and are acting for 12.9b
[10:01] <mewald_> badone: what can I look at? logs? what else?
[10:01] <badone> mewald_: query the admin socket to dump historic ops and ops in flight
[10:03] <badone> Walex: rightly so, BTW :) I know I did wrong...
[10:03] <mewald_> badone: never done that, have to check how that works :)
[10:03] <badone> let me see what I've got on that
[10:04] <badone> mewald_: sudo ceph --admin-daemon /var/run/ceph/ceph-osd.40.asok help
[10:04] <badone> that will list the commands available
[10:05] <badone> you should see dump_historic_ops, dump_ops_in_flight
[10:06] <badone> you should see some ops that have been taking a long time; looking for those ops in the logs may offer some insight into why
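badone's admin-socket walkthrough, collected in one place; osd.40 is the example id from the conversation:

```shell
# Inspect a running OSD through its admin socket (default path shown).
sock=/var/run/ceph/ceph-osd.40.asok
ceph --admin-daemon "$sock" help                 # list supported commands
ceph --admin-daemon "$sock" dump_ops_in_flight   # ops currently blocked
ceph --admin-daemon "$sock" dump_historic_ops    # recent slow ops, with per-event timestamps
```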
[10:07] <badone> mewald_: why did the OSDs fail? What happened?
[10:08] <mewald_> we dont really know what happened. we lost a lot of OSDs within only a few minutes, all spread across hosts. No clue why.
[10:09] <mewald_> I pasted the outputs for you: https://gist.github.com/0787157509497cea848068a8911a64a0 https://gist.github.com/8168b188b1946004167bc442aec7ff9e
[10:10] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[10:11] <badone> mewald_: what day/time is it now on the systems?
[10:11] <mewald_> everything is set to "Wed Jun 29 10:11:41 CEST 2016"
[10:13] <badone> mewald_: the last entry for some of these ops are "time": "2016-06-29 09:29:07.456707",
[10:14] * lmb_ (~Lars@nat.nue.novell.com) has joined #ceph
[10:15] <badone> mewald_: since they are waiting for a peer you'll need to find out which peer that is
[10:15] <badone> and do the same there
[10:15] <mewald_> where do you see that they are waiting for a peer?
[10:16] <badone> "time": "2016-06-29 09:29:07.456707",
[10:16] <badone> "event": "waiting for peered"
[10:16] <badone> that's where they are stuck
[10:16] * Swompie` (~Xerati@06SAAENSG.tor-irc.dnsbl.oftc.net) Quit ()
[10:16] <badone> it's not clear which OSD that is from...
[10:16] <badone> need to check it's peers
[10:17] <badone> *its*
[10:17] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[10:17] <mewald_> the peers are the other OSD of the acting set, right?
[10:18] <badone> mewald_: yes, should be
[10:18] <badone> look at the same ops on them
[10:20] <mewald_> So I take a look at this PG now "pg 12.9b is incomplete, acting [40,36,49]" We already have the info for osd.40. Gathering 36 and 49 now
[10:21] <mewald_> 36: https://gist.github.com/4da88bf5d5a96041240d461a9cffbf4a https://gist.github.com/cd663ba66d05c4623e71c1e7abe3010d
[10:23] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[10:24] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[10:26] <mewald_> 49: https://gist.github.com/3749c25c7782bd4ef09a7a323291a36f https://gist.github.com/743cdfa425e9368a0215a79a9e89d1b8
[10:28] * EinstCrazy (~EinstCraz@203.79.187.188) Quit (Read error: Connection timed out)
[10:30] * EinstCrazy (~EinstCraz@203.79.187.188) has joined #ceph
[10:30] <badone> I can't see a common op in those outputs, which is odd
[10:31] <badone> mewald_: I'd start tailing some of the OSD logs and see if that tells you anything...
[10:33] <sep> TheSov, i do not think the weight functions the way you think it does. afaik there is no need to give different weights to different size osd's.
[10:35] * arcimboldo (~antonio@dhcp-wlan-uzh-89-206-85-115.uzh.ch) has joined #ceph
[10:35] <badone> mewald_: can you query any of the pgs?
[10:36] * rendar (~I@95.233.118.203) has joined #ceph
[10:37] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[10:38] <mewald_> badone: yeah 12.64 can be queried: https://gist.github.com/75b966bbea0d4618cb1588cca0f8663f
[10:39] <mewald_> line 121 seems odd to me, but dont really know what it means
[10:42] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[10:42] * thomnico (~thomnico@2001:720:410:3343:8ce2:4072:97bc:d12b) Quit (Ping timeout: 480 seconds)
[10:42] <badone> mewald_: "down_osds_we_would_probe": [
[10:42] <badone> 37
[10:42] <badone> ],
[10:42] <badone> mewald_: it's blocked by osd 37
[10:43] <badone> what is 37's status?
[10:44] <badone> seems 37 is not available?
[10:46] <badone> so the OSD can't peer, it's a peering problem
[10:47] * rraja (~rraja@121.244.87.117) has joined #ceph
[10:49] <mewald_> badone: yeah no such OSD exists. ceph osd tree doesn't list it and no filesystem with that name is mounted (checked with mount | grep 37)
[10:49] <mewald_> why is it even trying to peer with it?
[10:50] <badone> mewald_: it was around at some stage because it is listed as having been acting at some stage
[10:51] <badone> mewald_: look at past_intervals
[10:51] <mewald_> what is "past_intervals"?
[10:51] <badone> mewald_: why would there be no 37?
[10:52] <badone> mewald_: it's a field in the output you just posted
[10:54] <badone> mewald_: "acting": [
[10:54] <badone> 37
[10:54] <badone> ],
[10:54] <badone> "primary": 37,
[10:54] <badone> "up_primary": 37
[10:54] <badone> at one time 37 was the *only* OSD holding that pg, that's why it needs to query it
[10:55] <badone> 37 may know of operations no other OSD knows of
[10:56] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[10:56] * TMM (~hp@185.5.121.201) has joined #ceph
[10:56] * allaok (~allaok@80.12.58.22) has joined #ceph
[10:58] * lmb_ (~Lars@nat.nue.novell.com) Quit (Quit: Leaving)
[10:59] <mewald_> badone: good question. I can only say its not currently in the crushmap
[11:00] <badone> marking it lost may work, but I'm not sure without knowing the full history...
[11:01] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:01] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[11:03] <badone> mewald_: ahh, one minute, may have found something
[11:04] * wjw-freebsd (~wjw@62.72.192.112) has joined #ceph
[11:06] <badone> mewald_: ok, for that pg you could try this, http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008183.html
[11:07] <badone> the key part is the following
[11:07] <badone> "peering_blocked_by_detail": [
[11:07] <badone> {
[11:07] <badone> "detail": "peering_blocked_by_history_les_bound"
[11:08] <badone> so on the primary you should temporarily set osd_find_best_info_ignore_history_les to true and mark it donw
[11:08] <badone> *down*
[11:08] <badone> then set it back to 0
[11:11] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f82d:8716:80df:41b) has joined #ceph
[11:14] * lmb (~Lars@charybdis-ext.suse.de) has joined #ceph
[11:14] * EinstCra_ (~EinstCraz@203.79.187.188) has joined #ceph
[11:16] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[11:19] * wjw-freebsd (~wjw@62.72.192.112) Quit (Ping timeout: 480 seconds)
[11:20] * EinstCrazy (~EinstCraz@203.79.187.188) Quit (Ping timeout: 480 seconds)
[11:22] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[11:26] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[11:29] <mewald_> badone: how do I set this value? via admin socket?
[11:29] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[11:29] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[11:31] <badone> mewald_: no, I think you'll need to add a section for the osd in the local /etc/ceph/ceph.conf as it needs to survive the reboot
[11:31] <badone> mewald_: once you don't need it any more you can unset it via the admin socket and remove it from ceph.conf, don't forget
[11:31] <badone> once the pg is active and clean
[11:32] * dotblank1 (~darkid@7V7AAGVMB.tor-irc.dnsbl.oftc.net) has joined #ceph
[11:32] <badone> mewald_: afraid I'm off for the day but I'm sure others can/will help
[11:32] <mewald_> ok
[11:35] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[11:38] <badone> mewald_: one final thing, you can check whether the variable is successfully set by dumping the config from the admin socket
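Putting badone's workaround together as one hedged sequence, with osd.40 standing in for the pg's primary:

```shell
# 1) On the primary's host, add to /etc/ceph/ceph.conf (so the setting
#    survives the restart):
#      [osd.40]
#      osd_find_best_info_ignore_history_les = true
# 2) Mark the osd down so it restarts peering with the option set:
ceph osd down 40
# 3) Confirm the option actually took effect:
ceph --admin-daemon /var/run/ceph/ceph-osd.40.asok config get osd_find_best_info_ignore_history_les
# 4) Once the pg is active+clean, revert it and remove the ceph.conf entry:
ceph --admin-daemon /var/run/ceph/ceph-osd.40.asok config set osd_find_best_info_ignore_history_les false
```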
[11:48] <IvanJobs_> Hi, cephers, how can I make ceph.conf take effect? I restarted the osd instances, but when I check the runtime config, nothing changed.
[11:49] <IvanJobs_> Should I restart MONs too?
[11:49] * jcsp (~jspray@fpc101952-sgyl38-2-0-cust21.18-2.static.cable.virginm.net) has joined #ceph
[11:52] * hawk_ (~oftc-webi@115.119.152.66.static-hyderabad.vsnl.net.in) has joined #ceph
[11:52] * karnan (~karnan@121.244.87.117) has joined #ceph
[11:53] * hawk_ (~oftc-webi@115.119.152.66.static-hyderabad.vsnl.net.in) Quit ()
[11:56] * shylesh__ (~shylesh@45.124.225.184) has joined #ceph
[12:01] * dotblank1 (~darkid@7V7AAGVMB.tor-irc.dnsbl.oftc.net) Quit ()
[12:06] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[12:07] * Titin (~textual@ALyon-658-1-192-23.w90-14.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[12:13] <IvanJobs_> I used injectargs instead; btw, ceph --show-config doesn't reflect the runtime config, so ps about that.
[12:14] * igoryonya (~kvirc@80.83.238.13) Quit (Ping timeout: 480 seconds)
[12:14] <vikhyat> IvanJobs_: --show-config is not the right way to check the config after changing it
[12:15] <vikhyat> you should check it with the daemon command or via the admin socket
[12:16] <vikhyat> for example : ceph daemon osd.0 config get osd_op_threads
[12:16] * allaok1 (~allaok@machine107.orange-labs.com) has joined #ceph
[12:17] <vikhyat> this is for osd_op_threads
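The pattern vikhyat describes, spelled out (osd.0 and osd_op_threads are just the example names used above; the socket path is the default and may differ on your install):

```shell
# query the live value through the daemon's admin socket (run on the OSD's host)
ceph daemon osd.0 config get osd_op_threads

# equivalent form addressing the socket file directly
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_op_threads

# change the value at runtime without a restart
ceph tell osd.0 injectargs '--osd_op_threads 4'
```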
[12:20] * sickology (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[12:20] * allaok (~allaok@80.12.58.22) Quit (Ping timeout: 480 seconds)
[12:20] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[12:21] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[12:21] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[12:21] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[12:23] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[12:34] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:35] * igoryonya (~kvirc@80.83.238.118) has joined #ceph
[12:37] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[12:40] <IvanJobs_> thx, vikhyat
[12:44] * bjornar_ (~bjornar@109.247.131.38) has joined #ceph
[12:44] <vikhyat> IvanJobs_: _o/
[12:55] * EinstCra_ (~EinstCraz@203.79.187.188) Quit (Remote host closed the connection)
[13:00] * gregmark (~Adium@68.87.42.115) has joined #ceph
[13:03] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[13:04] * turmeric (~jcastro@89.152.250.115) has joined #ceph
[13:06] * kawa2014 (~kawa@94.162.101.54) Quit (Ping timeout: 480 seconds)
[13:07] * SaneSmith (~tritonx@185.100.86.86) has joined #ceph
[13:10] <turmeric> morning guys
[13:10] * Hemanth (~hkumar_@121.244.87.118) has joined #ceph
[13:10] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:11] * shylesh__ (~shylesh@45.124.225.184) Quit (Ping timeout: 480 seconds)
[13:17] * igoryonya (~kvirc@80.83.238.118) Quit (Ping timeout: 480 seconds)
[13:17] * sep (~sep@2a05:6d47::2) Quit (Ping timeout: 480 seconds)
[13:21] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[13:25] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[13:26] * sep (~sep@95.62-50-191.enivest.net) has joined #ceph
[13:29] * igoryonya (~kvirc@80.83.238.118) has joined #ceph
[13:33] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[13:34] * erikh (~erikh@c-73-15-101-109.hsd1.ca.comcast.net) has joined #ceph
[13:34] <erikh> hello
[13:35] <erikh> I'm trying to configure the container images ceph/daemon and ceph/rbd across hosts; I think I have everything configured right, but the mons don't achieve quorum
[13:36] <erikh> let me gist my setup script
[13:37] <erikh> https://gist.github.com/erikh/79bdf33692bf79f663438cf6b3b5c467
[13:37] * SaneSmith (~tritonx@06SAAEN04.tor-irc.dnsbl.oftc.net) Quit ()
[13:40] * shylesh (~shylesh@45.124.225.190) has joined #ceph
[13:46] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[13:52] * Hemanth (~hkumar_@121.244.87.118) Quit (Ping timeout: 480 seconds)
[13:52] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[13:52] * igoryonya (~kvirc@80.83.238.118) Quit (Ping timeout: 480 seconds)
[13:52] * allaok1 (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[13:53] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[13:54] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[13:58] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[14:00] * IvanJobs_ (~ivanjobs@103.50.11.146) Quit ()
[14:00] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) has joined #ceph
[14:01] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[14:02] * igoryonya (~kvirc@80.83.239.55) has joined #ceph
[14:02] * mewald_ (~mewald@185.80.187.212) Quit (Quit: Lost terminal)
[14:04] * Azru (~Unforgive@193.90.12.87) has joined #ceph
[14:05] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[14:11] * dugravot61 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[14:21] * erikh (~erikh@c-73-15-101-109.hsd1.ca.comcast.net) Quit (Quit: WeeChat 1.4)
[14:25] * swami1 (~swami@49.38.0.153) Quit (Quit: Leaving.)
[14:28] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[14:28] * toMeloos (~toMeloos@2a03:fc02:2:1:9eeb:e8ff:fe06:cfbb) has joined #ceph
[14:34] * Azru (~Unforgive@7V7AAGVRX.tor-irc.dnsbl.oftc.net) Quit ()
[14:37] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[14:37] <grw> hi. is content-type header included in signed headers for radosgw? i cant see why my digests aren't matching
[14:39] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:39] * ChengPeng (~chris@180.168.197.82) Quit (Ping timeout: 480 seconds)
[14:40] * ChengPeng (~chris@180.168.197.98) has joined #ceph
[14:41] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[14:42] <grw> the header from browser seems to include a string like: multipart/form-data; boundary=----WebKitFormBoundaryJwRa4vHEVX2Y1Gpr
[14:42] <grw> which is unknowable when i generate the signed url
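For S3 v2 signatures (which radosgw implements), Content-Type is one of the fields in the string-to-sign, so a boundary-bearing Content-Type chosen by the browser after signing will never match a URL signed with an empty one. A minimal sketch with openssl — the secret, date, and resource below are made-up placeholders:

```shell
secret="MY_SECRET_KEY"                  # placeholder credentials
date="Thu, 30 Jun 2016 12:00:00 GMT"
resource="/bucket/key"                  # hypothetical bucket/object

# S3 v2 string-to-sign: VERB \n Content-MD5 \n Content-Type \n Date \n resource
sign() {  # args: verb content-md5 content-type date resource
  printf '%s\n%s\n%s\n%s\n%s' "$1" "$2" "$3" "$4" "$5" \
    | openssl dgst -sha1 -hmac "$secret" -binary | base64
}

# signed with an empty Content-Type...
sign PUT "" "" "$date" "$resource"
# ...but the browser actually sends a boundary-bearing Content-Type:
sign PUT "" "multipart/form-data; boundary=xyz" "$date" "$resource"
# the two signatures differ, hence the digest mismatch
```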
[14:52] * lifeboy (~roland@196.32.234.206) Quit (Quit: Ex-Chat)
[14:53] * EinstCrazy (~EinstCraz@61.165.229.131) has joined #ceph
[15:00] * vbellur (~vijay@12.232.194.107) Quit (Ping timeout: 480 seconds)
[15:04] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[15:07] <topro> hi, my ceph cluster seems to be getting into trouble right now. executing a pg repair i got an unfound object. recovery seems to block indefinitely now, preventing other pgs from recovering (after osd restart). investigating the pg with the unfound object shows that 2 OSDs have status "already probed" for the unfound object. My pool has size 3 so there is a third OSD which must have that object too, but that one doesn't seem to get probed
[15:08] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[15:15] * thomnico (~thomnico@2001:720:410:3343:8ce2:4072:97bc:d12b) has joined #ceph
[15:18] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[15:32] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Remote host closed the connection)
[15:33] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[15:37] * gauravbafna (~gauravbaf@49.38.0.105) Quit (Remote host closed the connection)
[15:46] * gmoro (~guilherme@193.120.208.221) has joined #ceph
[15:47] <zdzichu> topro: try lowering min_size
[15:48] <gmoro> hi, I keep getting this on my logs
[15:48] <gmoro> Jun 28 16:57:33 ammostackn2 bash: 2016-06-28 16:57:33.977133 7f45fa9a7700 -1 osd.0 39115 heartbeat_check: no reply from osd.1 ever on either front or back, first ping sent 2016-06-28 16:56:45.990944 (cutoff 2016-06-28 16:57:13.977132)
[15:48] <gmoro> it's a simple install
[15:48] <gmoro> 3 nodes
[15:48] <gmoro> just one spits this
[15:48] <gmoro> and everything seems to work fine tho
[15:48] <gmoro> any idea?
[15:49] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) has joined #ceph
[15:50] <zdzichu> firewall?
[15:50] <gmoro> zdzichu, I double checked
[15:50] <gmoro> everything is correct
[15:51] <gmoro> well, the real problem is that I keep getting a segfault on this same node
[15:51] <gmoro> just trying to figure out why this is happening to get more info to possibly open a bug
[15:52] <gmoro> the OSD keeps segfaulting
[15:52] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:54] <topro> zdzichu: i can try, but what's the idea behind trying that? i already restarted one osd (secondary of that affected pg), didn't help. better would have been restarting the primary-acting OSD for that PG I assume, but cannot try now because some more PGs need recovery and recovery is currently blocked by the PG with the 'missing' object
[15:54] <topro> so if i'm losing more osds for those PGs they will get inactive, but the cluster is in use right now, so cannot afford downtime
[15:55] * bene (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[15:56] * arcimboldo (~antonio@dhcp-wlan-uzh-89-206-85-115.uzh.ch) Quit (Ping timeout: 480 seconds)
[15:57] * scg (~zscg@valis.gnu.org) has joined #ceph
[15:57] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[15:58] <topro> zdzichu: reducing min_size for that pool did help to get beyond the blocked recovery, still don't get why. even the "missing"-object notification disappeared. still I have 1 object degraded, but no recovery started
[15:58] * arcimboldo (~antonio@dhcp-wlan-uzh-89-206-85-115.uzh.ch) has joined #ceph
[16:00] <topro> re-issuing the repair command for the affected PG started recovery again, now the one object is unfound again and recovery is blocked again, though min_size is still reduced to 1
[16:01] * ntpttr (~ntpttr@134.134.139.82) has joined #ceph
[16:01] <zdzichu> I don't fully understand min_size
[16:02] <zdzichu> I get that it prevents data loss during writes
[16:02] <zdzichu> but I cannot understand why it should prevent recovery
[16:03] <topro> anyway, now I restarted the primary OSD for the affected PG and now (at least for the moment) everything is back to normal *phew*
[16:03] <topro> thanks a lot for your hint, even if both of us don't know why it helped ;)
[16:05] <zdzichu> don't forget to increase min_size afterwards
[16:06] <topro> I did already
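The min_size workaround discussed above, as commands (the pool name and PG id are placeholders — substitute your own):

```shell
# temporarily let the pool serve I/O and recover with a single replica
ceph osd pool set mypool min_size 1

# re-trigger repair on the stuck PG if recovery doesn't start by itself
ceph pg repair 1.2f

# once everything is active+clean again, restore the safer value
ceph osd pool set mypool min_size 2
```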
[16:06] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[16:11] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[16:12] * vbellur (~vijay@12.232.194.107) has joined #ceph
[16:12] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:16] * thomnico (~thomnico@2001:720:410:3343:8ce2:4072:97bc:d12b) Quit (Ping timeout: 480 seconds)
[16:16] * lmb (~Lars@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[16:17] * lmb (~Lars@charybdis-ext.suse.de) has joined #ceph
[16:18] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:18] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[16:20] * ntpttr_ (~ntpttr@134.134.139.82) has joined #ceph
[16:20] * ntpttr (~ntpttr@134.134.139.82) Quit (Remote host closed the connection)
[16:20] * arcimboldo (~antonio@dhcp-wlan-uzh-89-206-85-115.uzh.ch) Quit (Ping timeout: 480 seconds)
[16:21] <hoonetorg> hi
[16:21] <hoonetorg> i had overfull osd's
[16:22] <hoonetorg> then i set pool size to 2 on the large pool
[16:22] <hoonetorg> now i removed some rbd images and set pool size to 3 again
[16:23] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[16:23] <hoonetorg> remap+backfill started and was fast (about 33% to backfill as expected)
[16:23] <hoonetorg> now it is stuck at 23%
[16:23] <hoonetorg> is recover pausing sometimes?
[16:25] <hoonetorg> see also https://gist.github.com/hoonetorg/73d6953a7a7d58a3d9bb25efdc0dbc26
[16:27] <hoonetorg> objects degraded, objects misplaced, active+undersized+degraded+remapped+wait_backfill, active+undersized+degraded+remapped+backfilling, active+clean are stuck now for a good amount of time
[16:28] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:30] * xarses_ (~xarses@64.124.158.100) has joined #ceph
[16:33] * nardial (~ls@dslb-084-063-234-150.084.063.pools.vodafone-ip.de) Quit (Quit: Leaving)
[16:33] * mewald_ (~mewald@185.80.187.212) has joined #ceph
[16:34] <mewald_> I need some assistance getting my cluster back to healthy. Current state is: https://gist.github.com/bd4cf1e505a0fa711180c3edbaf865a7 and it is stuck like that for a few hours now.
[16:35] <m0zes> mewald_: you lost 3 disks?
[16:36] * Tarazed (~KeeperOfT@193.90.12.88) has joined #ceph
[16:36] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[16:38] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[16:38] <mewald_> m0zes: what do you derive that from? We had a big outage of many OSDs simultaneously but we have no idea why at this point. We are currently trying to recover to healthy. This morning I contacted IRC to solve PGs in "incomplete"; looks like I was able to do that with some help, but it left me at this state where stuff is stuck :)
[16:38] <m0zes> 55 osds: 52 up, 52 in;
[16:39] * mshaffer1 (~Adium@2607:fad0:32:a02:4d86:3bcb:5683:47d7) has joined #ceph
[16:39] <mewald_> m0zes: ah right, yeah they just dont seem to come back up. But I think ceph should be able to recover anyways!?
[16:40] <m0zes> want to pastebin 'ceph health detail' ?
[16:40] * yanzheng1 (~zhyan@125.70.22.48) Quit (Quit: This computer has gone to sleep)
[16:40] <mewald_> sure: https://gist.github.com/13f3d29a841f63c3e16ed58aeda0fca3
[16:41] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:42] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[16:42] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:42] <m0zes> on the host holding osd 17, want to pastebin 'ceph daemon osd.17 ops'
[16:43] <mewald_> https://gist.github.com/352a217a04a918febc64de4993826a5e
[16:43] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[16:44] * lmb (~Lars@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[16:44] * mshaffer (~Adium@2607:fad0:32:a02:c9d3:869:56ff:8672) Quit (Ping timeout: 480 seconds)
[16:45] * mshaffer1 (~Adium@2607:fad0:32:a02:4d86:3bcb:5683:47d7) Quit (Quit: Leaving.)
[16:45] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) has joined #ceph
[16:45] <m0zes> how about 'ceph pg 12.a5 query'
[16:45] <mewald_> here we go: https://gist.github.com/929f32b73d39c5d18ee477be3cd8164f
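The diagnostic sequence m0zes walks through above, in one place (osd.17 and pg 12.a5 are the ids from this conversation):

```shell
ceph health detail        # which PGs are stuck, and which OSDs they involve
ceph daemon osd.17 ops    # on osd.17's host: in-flight / blocked requests
ceph pg 12.a5 query       # peering and recovery state of one stuck PG
```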
[16:46] <hoonetorg> ^^^ i needed to restart one osd (osd.23) but i couldn't see a reason why
[16:46] <hoonetorg> now it continues recovering
[16:47] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) has joined #ceph
[16:48] * lmb (~Lars@nat.nue.novell.com) has joined #ceph
[16:49] <m0zes> mewald_: I don't see anything obvious as to why it wouldn't be recovering more.
[16:49] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[16:49] <m0zes> how about your ceph.conf?
[16:49] <lincolnb> hi all, one of my kclients crashed last night on kernel 4.4.6 with the following in dmesg: http://paste.fedoraproject.org/386339/11652146/ anyone else seen this? i'm not sure how to reproduce it, unfortunately.
[16:49] * swami1 (~swami@27.7.164.158) has joined #ceph
[16:50] * arcimboldo (~antonio@dhcp-wlan-uzh-89-206-85-115.uzh.ch) has joined #ceph
[16:50] * yanzheng1 (~zhyan@125.70.22.48) has joined #ceph
[16:51] * yanzheng1 (~zhyan@125.70.22.48) Quit ()
[16:52] <mewald_> m0zes: https://gist.github.com/141fc0630bec34c4c4cef8c79c389585
[16:52] * davidzlap1 (~Adium@107.17.50.42) has joined #ceph
[16:52] * davidzlap (~Adium@ip-64-134-236-233.public.wayport.net) Quit (Read error: No route to host)
[16:52] * davidzlap (~Adium@ip-64-134-236-233.public.wayport.net) has joined #ceph
[16:53] * davidzlap2 (~Adium@107.17.50.42) has joined #ceph
[16:53] * davidzlap1 (~Adium@107.17.50.42) Quit (Read error: Connection reset by peer)
[16:55] * gauravbafna (~gauravbaf@122.167.207.18) has joined #ceph
[16:55] * bjornar_ (~bjornar@109.247.131.38) Quit (Ping timeout: 480 seconds)
[16:56] <TheSov> sep, why not reweight them by size. i know that a uniform weight will distribute data uniformly. and different weights for different size disks will make sure that certain disks dont fill before others
[16:57] <TheSov> the idea here is that if i reweight a 1tb disk to .1 and a 10tb disk to 1.0 then i can easily look at the disks size and weight it appropriately for the amount of space it has
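CRUSH weights are conventionally proportional to capacity (roughly TiB), so TheSov's scheme can be expressed directly (the OSD ids below are placeholders):

```shell
# weight a 1 TB disk at 0.1 and a 10 TB disk at 1.0
ceph osd crush reweight osd.3 0.1
ceph osd crush reweight osd.7 1.0

# check the resulting weights and how utilisation spreads across disks
ceph osd tree
ceph osd df
```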
[16:58] * thomnico (~thomnico@2001:720:410:3343:8ce2:4072:97bc:d12b) has joined #ceph
[16:58] <mewald_> m0zes: no more ideas? :(
[16:59] * swami1 (~swami@27.7.164.158) Quit (Quit: Leaving.)
[16:59] * ngoswami (~ngoswami@121.244.87.116) Quit (Read error: Connection reset by peer)
[17:00] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[17:00] * davidzlap (~Adium@ip-64-134-236-233.public.wayport.net) Quit (Ping timeout: 480 seconds)
[17:02] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Remote host closed the connection)
[17:03] * gauravbafna (~gauravbaf@122.167.207.18) Quit (Ping timeout: 480 seconds)
[17:06] * Tarazed (~KeeperOfT@06SAAEOAL.tor-irc.dnsbl.oftc.net) Quit ()
[17:08] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[17:10] * gauravbafna (~gauravbaf@122.167.207.18) has joined #ceph
[17:15] * lmb (~Lars@nat.nue.novell.com) Quit (Ping timeout: 480 seconds)
[17:15] * gauravbafna (~gauravbaf@122.167.207.18) Quit (Remote host closed the connection)
[17:17] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[17:17] * gauravbafna (~gauravbaf@122.167.207.18) has joined #ceph
[17:18] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[17:18] <mewald_> can anyone else have a look please? ceph -s : https://gist.github.com/e19f4d3210dc3b94c27a503f29223fbd ceph health detail: https://gist.github.com/e8464e8e8de8ceee3647de3dbeea30ab
[17:19] <hoonetorg> mewald_ my ceph health detail looked similar
[17:20] <hoonetorg> it seems your osd.17 is hanging
[17:20] * gauravbafna (~gauravbaf@122.167.207.18) Quit (Remote host closed the connection)
[17:20] <mewald_> hoonetorg: would stopping it, then ceph osd rm, ceph crush rm help to recover the cluster then?
[17:20] <hoonetorg> i restarted manually osd.xx on the host, where it's located
[17:20] <mewald_> ah ok
[17:20] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[17:20] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[17:21] <hoonetorg> service ceph restart osd.17
[17:21] <hoonetorg> in your case
[17:21] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:21] <hoonetorg> if your ceph version is not newer than hammer
[17:21] <hoonetorg> otherwise systemctl
[17:22] <hoonetorg> systemctl restart ceph-osd@17.service
[17:23] <mewald_> hoonetorg: I restarted it, now something is recovering. Not sure how far it will get though. Can you tell me how you identified osd.17 to be hanging?
[17:25] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:25] * davidzlap2 (~Adium@107.17.50.42) Quit (Quit: Leaving.)
[17:26] * gauravbafna (~gauravbaf@122.167.207.18) has joined #ceph
[17:27] <hoonetorg> https://gist.github.com/anonymous/e8464e8e8de8ceee3647de3dbeea30ab#L536-L538
[17:27] <hoonetorg> lines 536-538
[17:27] <hoonetorg> mewald_ ^^^
[17:28] * scg (~zscg@valis.gnu.org) Quit (Ping timeout: 480 seconds)
[17:28] * antongribok (~antongrib@216.207.42.140) has joined #ceph
[17:29] * Mraedis (~Enikma@185.62.190.38) has joined #ceph
[17:30] <mewald_> hoonetorg: https://gist.github.com/2f40ab74c4162d004535b5116e0201b6 its stuck again :(
[17:30] * gauravbafna (~gauravbaf@122.167.207.18) Quit (Remote host closed the connection)
[17:31] <mewald_> hoonetorg: oh well identifing osd.17 was easy :D
[17:31] * georgem (~Adium@45.72.156.229) has joined #ceph
[17:31] <mewald_> osd.17 is currently down
[17:31] <mewald_> the service seems to crash all the time
[17:32] * gauravbafna (~gauravbaf@122.167.207.18) has joined #ceph
[17:33] * derjohn_mob (~aj@88.128.80.92) has joined #ceph
[17:33] <hoonetorg> ok
[17:34] <mewald_> hoonetorg: so now osd.22 was hanging
[17:34] <mewald_> restarted that too, crashes almost instantly
[17:34] * gauravbafna (~gauravbaf@122.167.207.18) Quit (Remote host closed the connection)
[17:34] <mewald_> but looks like recovery continues further
[17:34] <hoonetorg> so your osds crash after a restart ? (17 and 22)
[17:34] <mewald_> yes
[17:35] <hoonetorg> can you see something in the logs of that osd in /var/log/ceph/ceph-osd.17.log f.e.?
[17:37] <mewald_> hoonetorg: osd.17 log: https://gist.github.com/258022436c0e3264a20d8276c8b957d7
[17:38] <mewald_> I captured the log from service start till crash
[17:39] <mewald_> osd.35 => same story
[17:39] * scg (~zscg@pubnet.fsf.org) has joined #ceph
[17:39] <mewald_> found it to be hanging, restarted, instant crash
[17:39] <mewald_> looks like I could continue this until the cluster is out of OSDs :D
[17:40] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:40] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) has joined #ceph
[17:42] <hoonetorg> mewald_ you run 10.1.2?
[17:42] <hoonetorg> isn't it a beta version of jewel?
[17:42] * bjornar_ (~bjornar@ti0099a430-1262.bb.online.no) has joined #ceph
[17:42] * raghu (~raghu@chippewa-nat.cray.com) has joined #ceph
[17:42] <mewald_> uhh, good question :D its been a while since I set this up
[17:43] <hoonetorg> probably you should think about updating to 10.2.2
[17:44] <raghu> any messenger devs in the house? Got a question about the xio messenger. Posted to ceph-users last night, but the post has not been approved by the listserv mod yet.
[17:45] <mewald_> hoonetorg: can I do this node by node without breaking the cluster entirely?
[17:45] <mewald_> updating a broken cluster is kind of scary :D
[17:46] <hoonetorg> i understand
[17:47] * Skaag (~lunix@65.200.54.234) has joined #ceph
[17:48] <hoonetorg> first upgrade monitors
[17:48] <hoonetorg> then restart all monitors
[17:48] <hoonetorg> one by one
[17:49] <hoonetorg> then upgrade all osds
[17:49] <hoonetorg> restart osd's one by one
[17:49] <hoonetorg> then same with mds, gateways if you have
[17:49] <hoonetorg> i'm not sure if upgrade is safe
[17:49] <mewald_> hoonetorg: which version should I install?
[17:50] <hoonetorg> 1. on a broken cluster
[17:50] <hoonetorg> 2. from beta to stable lts
[17:50] <hoonetorg> you should install ceph jewel version 10.2.2
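The rolling-upgrade order hoonetorg lays out above, as a sketch (jewel-era systemd unit names; host and daemon ids are placeholders):

```shell
# 1. upgrade packages on each monitor host, then restart mons one at a time
apt-get update && apt-get install -y ceph    # pulls 10.2.2 from the jewel repo
systemctl restart ceph-mon@ceph00
# wait for 'ceph -s' to show all mons back in quorum before the next one

# 2. upgrade each OSD host, then restart its OSDs one by one
systemctl restart ceph-osd@17
# or restart every OSD on the host at once:
systemctl restart ceph-osd.target

# 3. finally MDS / radosgw daemons, if you run them
systemctl restart ceph-mds.target
```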
[17:50] * ntpttr_ (~ntpttr@134.134.139.82) Quit (Remote host closed the connection)
[17:50] <hoonetorg> probably any developers here can help out
[17:50] * racpatel__ (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) has joined #ceph
[17:51] <hoonetorg> i've seen your mon's are ok
[17:51] <hoonetorg> so upgrading mon's should at least work
[17:52] <hoonetorg> mewald_: what operating system do you use
[17:52] <hoonetorg> how did you deploy ceph
[17:52] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[17:52] <hoonetorg> with ceph-deploy?
[17:52] <mewald_> xenial
[17:52] <hoonetorg> ok
[17:52] <mewald_> yeah cephdeploy
[17:53] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[17:53] <SamYaple> mewald_: you deployed hammer on xenial?
[17:53] <mewald_> jewel I think
[17:53] <mewald_> 10.1.2
[17:54] <SamYaple> thats a dev version of jewel
[17:54] <hoonetorg> http://changelogs.ubuntu.com/changelogs/pool/main/c/ceph/ceph_10.1.2-0ubuntu1/changelog
[17:54] <mewald_> yeah hoonetorg told me :D
[17:54] * ntpttr (~ntpttr@192.55.55.41) has joined #ceph
[17:54] * ade (~abradshaw@dslb-094-223-093-171.094.223.pools.vodafone-ip.de) has joined #ceph
[17:54] <SamYaple> ah sorry. ill read scrollback
[17:54] <hoonetorg> http://packages.ubuntu.com/xenial/ceph
[17:55] <hoonetorg> they have a rc version of ceph in stable ubuntu xx repo????
[17:55] <hoonetorg> that is not well thought out
[17:55] <mewald_> hoonetorg: agree :D
[17:56] <hoonetorg> James Page <james.page@ubuntu.com> (the package maintainer) should think about urgent update
[17:56] * bjornar_ (~bjornar@ti0099a430-1262.bb.online.no) Quit (Ping timeout: 480 seconds)
[17:57] <SamYaple> hoonetorg: it is how it is
[17:57] <mewald_> so should I go for infernalis then and try a downgrade?
[17:57] <SamYaple> hoonetorg: because they pinned 16.04 when they pinned it, packages freeze
[17:57] <SamYaple> mewald_: you cant downgrade
[17:58] <SamYaple> mewald_: i would use the ceph repos for jewel
[17:58] <hoonetorg> http://download.ceph.com/debian-jewel/
[17:58] <BlaXpirit> fwiw http://packages.ubuntu.com/xenial-updates/ceph
[17:58] <BlaXpirit> 10.2.0
[17:58] * Mraedis (~Enikma@06SAAEOC2.tor-irc.dnsbl.oftc.net) Quit ()
[17:59] <mewald_> SamYaple: added "deb http://download.ceph.com/debian-jewel/ xenial main" to sources.list but when I "apt-get update" I get "W: http://download.ceph.com/debian-jewel/dists/xenial/InRelease: Signature by key 08B73419AC32B4E966C1A330E84AC2C0460F3994 uses weak digest algorithm (SHA1)"
[17:59] <mewald_> that prevents me from installing the packages
[17:59] <SamYaple> mewald_: known issue. and that doesnt prevent you from installing
[17:59] <SamYaple> its a warning
[18:00] <mewald_> ok let me run apt-get upgrade then
[18:00] * takarider (~takarider@KD175108208098.ppp-bb.dion.ne.jp) has joined #ceph
[18:00] <SamYaple> mewald_: i wouldn't recommend upgrading until you are healthy
[18:00] <hoonetorg> first monitors and restart them
[18:00] <SamYaple> upgrading isnt going to magically bring your osds up
[18:00] <mewald_> ahh I need dist-upgrade for it to work, that's why I thought it didnt
[18:01] * swami1 (~swami@27.7.164.158) has joined #ceph
[18:01] <mewald_> well, magic is what I was hoping for :D
[18:02] <SamYaple> trying to do an upgrade in this state is not something I would do. why are all of your osds not up?
[18:02] * joshd1 (~jdurgin@71-92-201-212.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[18:03] <mewald_> SamYaple: I find the recovery process hanging all the time, I find the OSD, I restart, it instantly crashes again and again => repeat
[18:03] <SamYaple> mewald_: there are logs for the osds. what are they saying?
[18:03] <SamYaple> corrupt OS maybe?
[18:03] <SamYaple> err, FS
[18:04] <mewald_> hoonetorg: do you still have the gist link with the logs lying around?
[18:04] <mewald_> ahh found it: https://gist.github.com/258022436c0e3264a20d8276c8b957d7
[18:05] <hoonetorg> osd/PGLog.cc: In function 'static void PGLog::read_log(ObjectStore*, coll_t, coll_t, ghobject_t, const pg_info_t&, std::map<eversion_t, hobject_t>&, PGLog::IndexedLog&, pg_missing_t&, std::ostringstream&, const DoutPrefixProvider*, std::set<std::__cxx11::basic_string<char> >*)' thread 7f7ac2d038c0 time 2016-06-29 17:37:18.773988
[18:05] * thomnico (~thomnico@2001:720:410:3343:8ce2:4072:97bc:d12b) Quit (Ping timeout: 480 seconds)
[18:05] <hoonetorg> osd/PGLog.cc: 970: FAILED assert(last_e.version.version < e.version.version)
[18:05] <hoonetorg> that's your 1st error
[18:06] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:07] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[18:07] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[18:07] * rraja (~rraja@121.244.87.117) has joined #ceph
[18:08] * rraja (~rraja@121.244.87.117) Quit ()
[18:10] <mewald_> SamYaple: hoonetorg: I just went kamikaze and upgraded ceph00 to 10.2.2 then rebooted. MON and OSDs came back up *sweat*
[18:11] <SamYaple> good to hear
[18:11] <SamYaple> glad it worked out
[18:11] * gauravbafna (~gauravbaf@122.167.207.18) has joined #ceph
[18:12] <mewald_> well, doesnt mean the cluster is back healthy :D ceph00 still shows one OSD it cannot seem to bring up. Let me check if it shows the same error message
[18:13] * Concubidated (~cube@208.186.243.52) Quit (Quit: Leaving.)
[18:13] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[18:14] <hoonetorg> now you should update ceph01 and ceph02
[18:15] * EinstCrazy (~EinstCraz@61.165.229.131) Quit (Remote host closed the connection)
[18:15] <hoonetorg> and then on ceph01
[18:15] <mewald_> hoonetorg: I am on it :)
[18:15] <hoonetorg> k
[18:16] * davidzlap (~Adium@2605:e000:1313:8003:ddef:4cbc:659d:2d6d) has joined #ceph
[18:16] <mewald_> hoonetorg: what where you about to say about ceph01?
[18:17] <mewald_> didnt mean to interrupt :)
[18:17] <hoonetorg> systemctl restart ceph-mon@ceph01
[18:17] <hoonetorg> and on ceph02
[18:17] <hoonetorg> systemctl restart ceph-mon@ceph02
[18:18] <mewald_> thx :)
[18:18] <hoonetorg> then all mon's are restarted
[19:19] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[18:20] <hoonetorg> and after you upgraded all your osd's you can restart alll osd's on one host with:
[18:20] <hoonetorg> systemctl restart ceph-osd.target
[18:20] <hoonetorg> this restarts all osd's on that host
[18:20] * gauravbafna (~gauravbaf@122.167.207.18) Quit (Remote host closed the connection)
[18:21] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[18:22] * jcsp (~jspray@fpc101952-sgyl38-2-0-cust21.18-2.static.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[18:23] * mykola (~Mikolaj@193.93.217.42) has joined #ceph
[18:24] * davidzlap (~Adium@2605:e000:1313:8003:ddef:4cbc:659d:2d6d) Quit (Ping timeout: 480 seconds)
[18:27] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[18:27] * scg (~zscg@pubnet.fsf.org) Quit (Ping timeout: 480 seconds)
[18:27] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[18:27] * vbellur (~vijay@12.232.194.107) Quit (Ping timeout: 480 seconds)
[18:28] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[18:29] * davidzlap (~Adium@2605:e000:1313:8003:ddef:4cbc:659d:2d6d) has joined #ceph
[18:31] * arcimboldo (~antonio@dhcp-wlan-uzh-89-206-85-115.uzh.ch) Quit (Ping timeout: 480 seconds)
[18:34] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:34] * anadrom (~PuyoDead@192.160.102.166) has joined #ceph
[18:36] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:38] * ade (~abradshaw@dslb-094-223-093-171.094.223.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[18:38] * scg (~zscg@valis.gnu.org) has joined #ceph
[18:38] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:40] * derjohn_mob (~aj@88.128.80.92) Quit (Ping timeout: 480 seconds)
[18:43] * vbellur (~vijay@68.177.129.155) has joined #ceph
[18:49] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) has joined #ceph
[18:55] * gauravbafna (~gauravbaf@122.167.207.18) has joined #ceph
[18:55] * toMeloos (~toMeloos@2a03:fc02:2:1:9eeb:e8ff:fe06:cfbb) Quit (Ping timeout: 480 seconds)
[18:56] * DanFoster (~Daniel@2a00:1ee0:3:1337:8cd9:94fc:7bcc:371a) Quit (Quit: Leaving)
[19:00] * liumxnl (~liumxnl@li1209-40.members.linode.com) has joined #ceph
[19:03] * liumxnl (~liumxnl@li1209-40.members.linode.com) Quit (Remote host closed the connection)
[19:04] * anadrom (~PuyoDead@06SAAEOF4.tor-irc.dnsbl.oftc.net) Quit ()
[19:07] * liamchouz (~liamchou@183.45.16.97) has joined #ceph
[19:13] * liamchou (~liamchou@14.117.25.102) Quit (Ping timeout: 480 seconds)
[19:13] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:14] * Jeffrey4l (~Jeffrey@110.244.243.149) Quit (Ping timeout: 480 seconds)
[19:15] * racpatel__ (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:15] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) has joined #ceph
[19:15] * Brochacho (~alberto@c-73-45-127-198.hsd1.il.comcast.net) Quit (Quit: Brochacho)
[19:19] * turmeric (~jcastro@89.152.250.115) Quit (Remote host closed the connection)
[19:22] * narthollis (~N3X15@chulak.enn.lu) has joined #ceph
[19:24] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:f82d:8716:80df:41b) Quit (Quit: Leaving)
[19:25] * kawa2014 (~kawa@212.110.41.244) Quit (Quit: Leaving)
[19:27] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:29] * gregmark (~Adium@68.87.42.115) has joined #ceph
[19:30] * sickology (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[19:30] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[19:31] * karnan (~karnan@106.206.142.221) has joined #ceph
[19:32] * karnan (~karnan@106.206.142.221) Quit ()
[19:32] * georgem (~Adium@45.72.156.229) Quit (Quit: Leaving.)
[19:34] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:bcf3:8ccc:9640:d06) has joined #ceph
[19:37] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Ping timeout: 480 seconds)
[19:39] * reed (~reed@216.38.134.18) has joined #ceph
[19:40] * saintpablo (~saintpabl@0117800363.0.fullrate.ninja) has joined #ceph
[19:42] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[19:45] * garphy is now known as garphy`aw
[19:46] * swami1 (~swami@27.7.164.158) Quit (Ping timeout: 480 seconds)
[19:49] * linjan (~linjan@176.195.70.132) has joined #ceph
[19:52] * narthollis (~N3X15@4MJAAG7RI.tor-irc.dnsbl.oftc.net) Quit ()
[19:52] * dontron (~Keiya@watchme.tor-exit.network) has joined #ceph
[19:56] * fenfen (~fenfen@mail.pbsnetwork.eu) has joined #ceph
[19:56] <fenfen> can somebody help with CORS and buckets?
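fenfen's question goes unanswered here, but for the record: a CORS policy on an RGW bucket can be applied through any S3-compatible client. A minimal sketch with the AWS CLI — the endpoint, bucket name, and rule contents below are placeholders, and credentials are assumed to be configured already:

```shell
# Hypothetical endpoint and bucket; adjust to your RGW setup.
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "PUT"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
aws --endpoint-url http://rgw.example.com s3api put-bucket-cors \
    --bucket mybucket --cors-configuration file://cors.json
# Verify the policy was stored:
aws --endpoint-url http://rgw.example.com s3api get-bucket-cors --bucket mybucket
```

This requires a reachable RGW endpoint, so it is an operational fragment rather than something runnable standalone.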
[20:06] * gauravbafna (~gauravbaf@122.167.207.18) Quit (Remote host closed the connection)
[20:07] * vbellur (~vijay@68.177.129.155) Quit (Ping timeout: 480 seconds)
[20:08] * mewald_ (~mewald@185.80.187.212) Quit (Quit: Lost terminal)
[20:11] * doppelgrau (~doppelgra@dslb-088-072-094-168.088.072.pools.vodafone-ip.de) has joined #ceph
[20:12] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[20:22] * dontron (~Keiya@06SAAEOJV.tor-irc.dnsbl.oftc.net) Quit ()
[20:22] * Helleshin (~cmrn@hessel2.torservers.net) has joined #ceph
[20:23] * shylesh (~shylesh@45.124.225.190) Quit (Remote host closed the connection)
[20:35] * Concubidated (~cube@2600:1:8818:dc7b:60b8:ef9a:783:fc28) has joined #ceph
[20:40] * wushudoin (~wushudoin@38.99.12.237) Quit (Quit: Leaving)
[20:41] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[20:42] * Hemanth (~hkumar_@103.228.221.134) has joined #ceph
[20:42] * mlg9000 (~matt@67.107.56.250.ptr.us.xo.net) has joined #ceph
[20:44] * bene (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[20:44] <mlg9000> Hi all, I rebooted all my OSD's and now the osd service won't start and the ceph command hangs, nothing in the logs. Where do I start?
[20:50] <codice> check permissions on your config and admin keyring file
[20:51] <codice> check if firewall is active on osd hosts, and if so, check rules
[20:52] * Helleshin (~cmrn@4MJAAG7TM.tor-irc.dnsbl.oftc.net) Quit ()
[20:52] <codice> try to start up the osd manually and see if it gives you an error, i.e. ceph-osd -f --cluster cluster_name --id # --setuser username --setgroup group
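codice's three checks above can be sketched as follows — the cluster name "ceph", osd id 0, and the ceph user/group are assumptions; substitute your own values:

```shell
# 1. Permissions: config and admin keyring must be readable by the OSD user.
ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring

# 2. Firewall: mons listen on 6789; OSDs use 6800-7300 by default.
iptables -L -n | grep -E '6789|68[0-9][0-9]'

# 3. Run the OSD in the foreground so errors land on the terminal.
ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
```

These commands only make sense on an OSD host of a live cluster, so they are shown as an operational fragment.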
[20:56] * Concubidated (~cube@2600:1:8818:dc7b:60b8:ef9a:783:fc28) Quit (Quit: Leaving.)
[20:58] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:04] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[21:04] <fenfen> codice: what should i choose for pg and pgp? should i take the number from a specific existing pool?
[21:04] <fenfen> codice: i have 16 for a lot of them
[21:05] <codice> the pg_num is based on the number of OSDs you have
[21:05] <codice> http://ceph.com/pgcalc/
[21:06] <codice> you can get the value used in one of your other pools with something like ceph osd pool get pool_name pg_num
[21:07] <codice> then use that number in your new pool
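The sequence codice describes — read pg_num from an existing pool, reuse it for the new one — looks like this; the pool names are examples only, not taken from fenfen's cluster:

```shell
# Read pg_num from an existing pool (prints e.g. "pg_num: 16").
ceph osd pool get .rgw.root pg_num

# Create the new pool with the same pg_num and pgp_num.
ceph osd pool create default.rgw.meta 16 16
```

A cluster-side operation, so shown as a fragment rather than a runnable script.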
[21:08] <fenfen> i know all the values i used for setup - if the allocation is similar to .rgw.root (0.10%) i would use 16, but i don't know the percent value for meta since it is not listed in the wizard on the page
[21:08] * rakeshgm (~rakesh@106.51.31.148) has joined #ceph
[21:09] <codice> not sure what you mean by allocation
[21:10] <fenfen> codice: the %Data column on http://ceph.com/pgcalc/
[21:12] <codice> oh right, nevermind. I usually ignore that column
[21:12] <codice> I look for the pg count
[21:12] <fenfen> codice: i created it with 16, like most of the rgw pools
[21:12] <codice> ok
[21:12] <fenfen> ok how do i get this in the zone config
[21:13] <fenfen> looks like this: "metadata_heap": "",
[21:13] <codice> not sure, tbh
[21:14] <fenfen> ok i'll figure this out...
[21:15] <fenfen> ok - done
[21:16] <codice> brb
[21:18] <fenfen> codice: thank you so much - now it works :)
[21:18] <fenfen> codice: you made me really happy today ;)
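The log doesn't show how fenfen wired the new pool into the zone config, but it presumably amounted to editing the "metadata_heap" field via radosgw-admin — the zone name "default" and pool name "default.rgw.meta" below are guesses:

```shell
# Dump the zone config, edit it, and write it back.
radosgw-admin zone get --rgw-zone=default > zone.json
# In zone.json, set:  "metadata_heap": "default.rgw.meta"
radosgw-admin zone set --rgw-zone=default --infile zone.json
```

Operational fragment; requires a running RGW deployment.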
[21:19] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[21:26] <mlg9000> codice: thanks, I was able to recover
[21:27] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[21:28] * georgem (~Adium@45.72.156.229) has joined #ceph
[21:31] * Hazmat (~Bonzaii@exit.tor.uwaterloo.ca) has joined #ceph
[21:33] * fenfen (~fenfen@mail.pbsnetwork.eu) Quit (Quit: Leaving...)
[21:34] <codice> back
[21:34] <codice> mlg9000: cool, glad it worked
[21:38] * Hemanth (~hkumar_@103.228.221.134) Quit (Ping timeout: 480 seconds)
[21:39] <mlg9000> another question.. trying to test ceph with libvirt/kvm... I built a VM with virt-install and converted the disk from raw to rbd, edited the XML as required and tried to boot but it says "not a bootable disk"
[21:40] <mlg9000> what am I missing?
[21:44] * saintpablo (~saintpabl@0117800363.0.fullrate.ninja) Quit (Ping timeout: 480 seconds)
[21:46] <codice> no idea. did you follow some instructions somewhere?
[21:48] <mlg9000> yep, ceph docs: http://docs.ceph.com/docs/jewel/rbd/libvirt/
[21:48] <mlg9000> I can attach/read the disk to another VM but not boot off of it
[21:49] <mlg9000> there might be some qemu-img option I'm missing
[21:49] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[21:54] <codice> wish I could help, but haven't really done much with libvirt and rbd
[21:54] * ntpttr (~ntpttr@192.55.55.41) Quit (Remote host closed the connection)
[21:54] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[21:54] * ntpttr (~ntpttr@192.55.55.41) has joined #ceph
[21:58] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[22:01] * Hazmat (~Bonzaii@7V7AAGWB4.tor-irc.dnsbl.oftc.net) Quit ()
[22:01] * tZ (~zviratko@185.100.86.69) has joined #ceph
[22:04] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[22:11] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:13] * rakeshgm (~rakesh@106.51.31.148) Quit (Quit: Leaving)
[22:15] * mlg9000 (~matt@67.107.56.250.ptr.us.xo.net) has left #ceph
[22:16] * scg (~zscg@valis.gnu.org) Quit (Ping timeout: 480 seconds)
[22:19] * jclm (~jclm@ip-64-134-228-71.public.wayport.net) has joined #ceph
[22:20] * jclm (~jclm@ip-64-134-228-71.public.wayport.net) Quit ()
[22:28] * arcimboldo (~antonio@84-75-174-248.dclient.hispeed.ch) has joined #ceph
[22:30] * LeaChim (~LeaChim@host86-150-160-78.range86-150.btcentralplus.com) has joined #ceph
[22:30] <hoonetorg> hi
[22:31] <hoonetorg> i have a problem with recovering osds
[22:31] <hoonetorg> today i set the size (replica count) of my rbd pool from 2 to 3
[22:31] * tZ (~zviratko@06SAAEOPI.tor-irc.dnsbl.oftc.net) Quit ()
[22:31] <hoonetorg> lot of backfilling
[22:32] * doppelgrau (~doppelgra@dslb-088-072-094-168.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[22:32] <hoonetorg> one of the osd's (osd.23 - the osd with the highest number) always gets stuck
[22:32] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[22:32] <hoonetorg> the osd process spins at more than 100% cpu
[22:32] <hoonetorg> see log of that osd
[22:33] <hoonetorg> https://gist.github.com/hoonetorg/16a0b660049af9c7b2ecb7e5eea79bd8
[22:33] <hoonetorg> in the end it says 3 slow requests, 2 included below; oldest blocked for > 4100.282515 secs
[22:33] <hoonetorg> and
[22:34] <hoonetorg> 1 slow requests, 1 included below; oldest blocked for > 7680.158445 secs
[22:34] <hoonetorg> what can be the cause of this
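hoonetorg's question gets cut off by the netsplit below, but a few standard places to look when a single OSD blocks requests during backfill — osd.23 here, per the log; the admin-socket commands run on the OSD's host:

```shell
# Which PGs and OSDs are implicated in the slow requests?
ceph health detail

# What are the stuck ops actually waiting on?
ceph daemon osd.23 dump_ops_in_flight

# Recently completed slow ops, with per-stage timings.
ceph daemon osd.23 dump_historic_ops

# Is the backing disk of osd.23 saturated?
iostat -x 5
```

A spinning OSD process plus multi-thousand-second blocked ops often points at the underlying disk or at backfill overwhelming the daemon, so the op dumps and iostat together usually narrow it down.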
[22:36] * doppelgrau (~doppelgra@dslb-088-072-094-168.088.072.pools.vodafone-ip.de) has joined #ceph
[22:36] * doppelgrau (~doppelgra@dslb-088-072-094-168.088.072.pools.vodafone-ip.de) Quit ()
[22:40] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * georgem (~Adium@45.72.156.229) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * reed (~reed@216.38.134.18) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:bcf3:8ccc:9640:d06) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * sickology (~mio@vpn.bcs.hr) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * davidzlap (~Adium@2605:e000:1313:8003:ddef:4cbc:659d:2d6d) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * mykola (~Mikolaj@193.93.217.42) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * haplo37 (~haplo37@199.91.185.156) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * raghu (~raghu@chippewa-nat.cray.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * gmoro (~guilherme@193.120.208.221) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * igoryonya (~kvirc@80.83.239.55) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Foysal (~Foysal@202.84.42.5) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * overclk (~quassel@2400:6180:100:d0::54:1) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * danieagle (~Daniel@177.138.223.148) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * brians (~brian@80.111.114.175) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * bniver (~bniver@108-60-118-130.static.wiline.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * nwf_ (~nwf@172.56.23.33) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * DV (~veillard@2001:41d0:a:f29f::1) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Kingrat (~shiny@2605:a000:161a:c0f6:4899:5e0d:3f:f14d) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * yibo (~Yibo@101.230.208.200) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ceph-devel (uid171189@id-171189.highgate.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * diegows (~diegows@main.woitasen.com.ar) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * bassam (sid154933@id-154933.brockwell.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * rinek (~o@62.109.134.112) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * yebyen (~yebyen@martyfunkhouser.csh.rit.edu) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * zdzichu (zdzichu@pipebreaker.pl) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * alexxy (~alexxy@biod.pnpi.spb.ru) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Georgyo (~georgyo@shamm.as) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * trociny (~mgolub@93.183.239.2) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * stein (~stein@185.56.185.82) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Animazing (~Wut@94.242.217.235) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * oliveiradan (~doliveira@137.65.133.10) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * devicenull (sid4013@ealing.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * scalability-junk (sid6422@ealing.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * DeMiNe0 (~DeMiNe0@104.131.119.74) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * joao (~joao@8.184.114.89.rev.vodafone.pt) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Nats (~natscogs@114.31.195.238) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Wahmed (~wahmed@206.174.203.195) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * LiftedKilt (~LiftedKil@dragons.have.mostlyincorrect.info) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Tene (~tene@173.13.139.236) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * shaon (~shaon@shaon.me) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * essjayhch (sid79416@id-79416.highgate.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * gtrott (sid78444@id-78444.tooting.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * benner (~benner@188.166.111.206) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ccourtaut (~ccourtaut@178.62.125.124) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ElNounch (sid150478@id-150478.ealing.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * harbie (~notroot@2a01:4f8:211:2344:0:dead:beef:1) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Bosse (~bosse@erebus.klykken.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * med (~medberry@71.74.177.250) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * `10` (~10@69.169.91.14) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * _nick (~nick@zarquon.dischord.org) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Kruge_ (~Anus@198.211.99.93) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * elder_ (sid70526@id-70526.charlton.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jnq (sid150909@0001b7cc.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * SamYaple (~SamYaple@162.209.126.134) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * motk (~motk@2600:3c00::f03c:91ff:fe98:51ee) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * mitchty (~quassel@130-245-47-212.rev.cloud.scaleway.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Hazelesque_ (~hazel@phobos.hazelesque.uk) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Larsen (~andreas@2001:67c:578:2::15) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * carter (~carter@li98-136.members.linode.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * brians_ (~brianoftc@brian.by) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * rektide (~rektide@eldergods.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * react (~react@2001:4800:7815:103:f0d7:c55:ff05:60e8) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jmn (~jmn@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * pasties (~pasties@00021c52.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * braderhart (sid124863@braderhart.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * icey (~Chris@0001bbad.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * wushudoin (~wushudoin@38.99.12.237) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * gregmark (~Adium@68.87.42.115) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * antongribok (~antongrib@216.207.42.140) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * johnavp1989 (~jpetrini@8.39.115.8) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * vata1 (~vata@207.96.182.162) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * squizzi (~squizzi@107.13.31.195) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * penguinRaider (~KiKo@146.185.31.226) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * vata (~vata@cable-173.246.3-246.ebox.ca) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * niknakpaddywak (~xander.ni@outbound.lax.demandmedia.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ceph-ircslackbot (~ceph-ircs@ds9536.dreamservers.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * andrewschoen (~andrewsch@50.56.86.195) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * sage (~quassel@2607:f298:5:101d:f816:3eff:fe21:1966) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * seosepa (~sepa@aperture.GLaDOS.info) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * GeoTracer (~Geoffrey@41.77.153.99) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * KindOne (kindone@0001a7db.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jowilkin_ (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * \ask (~ask@oz.develooper.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Sketch (~Sketch@2604:180:2::a506:5c0d) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * raeven_ (~raeven@h89n10-oes-a31.ias.bredband.telia.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * nathani (~nathani@2607:f2f8:ac88::) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * JohnPreston78 (sid31393@ealing.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * rossdylan (~rossdylan@2605:6400:1:fed5:22:68c4:af80:cb6e) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * aiicore (~aiicore@s30.linuxpl.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * darkfader (~floh@88.79.251.60) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * abhishekvrshny (~abhishekv@180.179.116.54) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * lurbs (user@uber.geek.nz) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * goberle_ (~goberle@mid.ygg.tf) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * kiranos_ (~quassel@109.74.11.233) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * _are__ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Randleman (~jesse@89.105.204.182) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * kingcu (~kingcu@kona.ridewithgps.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * s3an2 (~root@korn.s3an.me.uk) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jiffe (~jiffe@nsab.us) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * arthurh (~arthurh@38.101.34.128) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * qman__ (~rohroh@2600:3c00::f03c:91ff:fe69:92af) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * GooseYArd (~GooseYArd@ec2-52-5-245-183.compute-1.amazonaws.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * hughsaunders (~hughsaund@2001:4800:7817:101:1843:3f8a:80de:df65) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Aeso (~aesospade@aesospadez.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * MrBy2 (~MrBy@85.115.23.2) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * sw3 (sweaung@2400:6180:0:d0::66:100f) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ndru_ (~jawsome@104.236.94.35) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * cronburg_ (~cronburg@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * chutz (~chutz@rygel.linuxfreak.ca) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jklare (~jklare@185.27.181.36) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jlayton (~jlayton@107.13.84.55) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * mnaser (~mnaser@162.253.53.193) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ben2 (ben@pearl.meh.net.nz) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * markl (~mark@knm.org) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * masterpe (~masterpe@2a01:670:400::43) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * rburkholder (~overonthe@199.68.193.54) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * remix_tj (~remix_tj@bonatti.remixtj.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * makz (~makz@2a00:d880:6:2d7::e463) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * koma (~koma@0001c112.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * zirpu (~zirpu@00013c46.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dynamicudpate (~overonthe@199.68.193.54) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Walex (~Walex@72.249.182.114) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * folivora (~out@devnull.drwxr-xr-x.eu) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * logan (~logan@63.143.60.136) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * destrudo (~destrudo@tomba.sonic.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dustinm` (~dustinm`@68.ip-149-56-14.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * zerick_ (~zerick@104.131.101.65) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * LeaChim (~LeaChim@host86-150-160-78.range86-150.btcentralplus.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ntpttr (~ntpttr@192.55.55.41) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * linjan (~linjan@176.195.70.132) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * liamchouz (~liamchou@183.45.16.97) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * sudocat (~dibarra@192.185.1.20) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * takarider (~takarider@KD175108208098.ppp-bb.dion.ne.jp) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Skaag (~lunix@65.200.54.234) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * xarses_ (~xarses@64.124.158.100) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ChengPeng (~chris@180.168.197.98) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * whatevsz (~quassel@b9168e24.cgn.dg-w.de) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ronrib (~boswortr@45.32.242.135) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * verleihnix (~verleihni@195-202-198-60.dynamic.hispeed.ch) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * sig_wall (adjkru@xn--hwgz2tba.lamo.su) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * jamespd (~mucky@mucky.socket7.org) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dec (~dec@71.29.197.104.bc.googleusercontent.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Guest858 (~herrsergi@ec2-107-21-210-136.compute-1.amazonaws.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Kurt^ (~wipa@171.25.179.111) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * shaunm (~shaunm@74.83.215.100) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Pintomatic (sid25118@ealing.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * getzburg (sid24913@ealing.irccloud.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * krogon (~krogon@irdmzpr02-ext.ir.intel.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Pies (~Pies@srv229.opcja.pl) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * etienneme (~arch@5.ip-167-114-253.eu) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * gmmaha (~gmmaha@00021e7e.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * leseb (~leseb@81-64-223-102.rev.numericable.fr) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Gugge-47527 (gugge@92.246.2.105) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * Sgaduuw (~eelco@willikins.srv.eelcowesemann.nl) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * DrewBeer_ (~DrewBeer@216.152.240.203) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * mfa298 (~mfa298@krikkit.yapd.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * thadood (~thadood@slappy.thunderbutt.org) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * scheuk (~scheuk@204.246.67.78) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * beardo (~beardo__@beardo.cc.lehigh.edu) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * JoeJulian (~JoeJulian@108.166.123.190) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * marco208 (~root@159.253.7.204) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * dis (~dis@00018d20.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * fli_ (fli@eastside.wirebound.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * badone (~badone@66.187.239.16) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * espeer (~quassel@phobos.isoho.st) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * iggy (~iggy@mail.vten.us) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * joshd (~jdurgin@206.169.83.146) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * singler (~singler@zeta.kirneh.eu) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * TiCPU (~owrt@c216.218.54-96.clta.globetrotter.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:40] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (synthon.oftc.net resistance.oftc.net)
[22:41] * owlbot (~supybot@pct-empresas-50.uc3m.es) Quit (Remote host closed the connection)
[22:41] * owlbot (~supybot@pct-empresas-50.uc3m.es) has joined #ceph
[22:42] * rendar (~I@95.233.118.203) Quit (Ping timeout: 480 seconds)
[22:43] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[22:43] * LeaChim (~LeaChim@host86-150-160-78.range86-150.btcentralplus.com) has joined #ceph
[22:43] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[22:43] * ntpttr (~ntpttr@192.55.55.41) has joined #ceph
[22:43] * georgem (~Adium@45.72.156.229) has joined #ceph
[22:43] * ffilzwin (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[22:43] * wushudoin (~wushudoin@38.99.12.237) has joined #ceph
[22:43] * linjan (~linjan@176.195.70.132) has joined #ceph
[22:43] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[22:43] * reed (~reed@216.38.134.18) has joined #ceph
[22:43] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:bcf3:8ccc:9640:d06) has joined #ceph
[22:43] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[22:43] * gregmark (~Adium@68.87.42.115) has joined #ceph
[22:43] * Racpatel (~Racpatel@c-73-170-66-165.hsd1.ca.comcast.net) has joined #ceph
[22:43] * liamchouz (~liamchou@183.45.16.97) has joined #ceph
[22:43] * EthanL (~lamberet@cce02cs4036-fa12-z.ams.hpecore.net) has joined #ceph
[22:43] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[22:43] * davidzlap (~Adium@2605:e000:1313:8003:ddef:4cbc:659d:2d6d) has joined #ceph
[22:43] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[22:43] * linuxkidd (~linuxkidd@ip70-189-207-54.lv.lv.cox.net) has joined #ceph
[22:43] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[22:43] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[22:43] * takarider (~takarider@KD175108208098.ppp-bb.dion.ne.jp) has joined #ceph
[22:43] * Skaag (~lunix@65.200.54.234) has joined #ceph
[22:43] * raghu (~raghu@chippewa-nat.cray.com) has joined #ceph
[22:43] * antongribok (~antongrib@216.207.42.140) has joined #ceph
[22:43] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[22:43] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[22:43] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) has joined #ceph
[22:43] * xarses_ (~xarses@64.124.158.100) has joined #ceph
[22:43] * vata1 (~vata@207.96.182.162) has joined #ceph
[22:43] * squizzi (~squizzi@107.13.31.195) has joined #ceph
[22:43] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) has joined #ceph
[22:43] * gmoro (~guilherme@193.120.208.221) has joined #ceph
[22:43] * evelu (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[22:43] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[22:43] * ChengPeng (~chris@180.168.197.98) has joined #ceph
[22:43] * igoryonya (~kvirc@80.83.239.55) has joined #ceph
[22:43] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[22:43] * penguinRaider (~KiKo@146.185.31.226) has joined #ceph
[22:43] * Foysal (~Foysal@202.84.42.5) has joined #ceph
[22:43] * dmanchad (~dmanchad@nat-pool-bos-t.redhat.com) has joined #ceph
[22:43] * overclk (~quassel@2400:6180:100:d0::54:1) has joined #ceph
[22:43] * danieagle (~Daniel@177.138.223.148) has joined #ceph
[22:43] * vata (~vata@cable-173.246.3-246.ebox.ca) has joined #ceph
[22:43] * niknakpaddywak (~xander.ni@outbound.lax.demandmedia.com) has joined #ceph
[22:43] * dcwangmit01 (~dcwangmit@162-245.23-239.PUBLIC.monkeybrains.net) has joined #ceph
[22:43] * batrick (~batrick@2600:3c00::f03c:91ff:fe96:477b) has joined #ceph
[22:43] * brians (~brian@80.111.114.175) has joined #ceph
[22:43] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[22:43] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[22:43] * whatevsz (~quassel@b9168e24.cgn.dg-w.de) has joined #ceph
[22:43] * bniver (~bniver@108-60-118-130.static.wiline.com) has joined #ceph
[22:43] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[22:43] * Pies (~Pies@srv229.opcja.pl) has joined #ceph
[22:43] * nwf_ (~nwf@172.56.23.33) has joined #ceph
[22:43] * ceph-ircslackbot (~ceph-ircs@ds9536.dreamservers.com) has joined #ceph
[22:43] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[22:43] * ronrib (~boswortr@45.32.242.135) has joined #ceph
[22:43] * Kingrat (~shiny@2605:a000:161a:c0f6:4899:5e0d:3f:f14d) has joined #ceph
[22:43] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) has joined #ceph
[22:43] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[22:43] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[22:43] * yibo (~Yibo@101.230.208.200) has joined #ceph
[22:43] * ceph-devel (uid171189@id-171189.highgate.irccloud.com) has joined #ceph
[22:43] * andrewschoen (~andrewsch@50.56.86.195) has joined #ceph
[22:43] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[22:43] * verleihnix (~verleihni@195-202-198-60.dynamic.hispeed.ch) has joined #ceph
[22:43] * sig_wall (adjkru@xn--hwgz2tba.lamo.su) has joined #ceph
[22:43] * jamespd (~mucky@mucky.socket7.org) has joined #ceph
[22:43] * diegows (~diegows@main.woitasen.com.ar) has joined #ceph
[22:43] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[22:43] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[22:43] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[22:43] * bassam (sid154933@id-154933.brockwell.irccloud.com) has joined #ceph
[22:43] * rinek (~o@62.109.134.112) has joined #ceph
[22:43] * arbrandes (~arbrandes@ec2-54-172-54-135.compute-1.amazonaws.com) has joined #ceph
[22:43] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[22:43] * dec (~dec@71.29.197.104.bc.googleusercontent.com) has joined #ceph
[22:43] * Guest858 (~herrsergi@ec2-107-21-210-136.compute-1.amazonaws.com) has joined #ceph
[22:43] * sage (~quassel@2607:f298:5:101d:f816:3eff:fe21:1966) has joined #ceph
[22:43] * yebyen (~yebyen@martyfunkhouser.csh.rit.edu) has joined #ceph
[22:43] * seosepa (~sepa@aperture.GLaDOS.info) has joined #ceph
[22:43] * zdzichu (zdzichu@pipebreaker.pl) has joined #ceph
[22:43] * Kurt^ (~wipa@171.25.179.111) has joined #ceph
[22:43] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[22:43] * GeoTracer (~Geoffrey@41.77.153.99) has joined #ceph
[22:43] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[22:43] * alexxy (~alexxy@biod.pnpi.spb.ru) has joined #ceph
[22:43] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[22:43] * jowilkin_ (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[22:43] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[22:43] * Georgyo (~georgyo@shamm.as) has joined #ceph
[22:43] * trociny (~mgolub@93.183.239.2) has joined #ceph
[22:43] * \ask (~ask@oz.develooper.com) has joined #ceph
[22:43] * Sketch (~Sketch@2604:180:2::a506:5c0d) has joined #ceph
[22:43] * stein (~stein@185.56.185.82) has joined #ceph
[22:43] * Animazing (~Wut@94.242.217.235) has joined #ceph
[22:43] * raeven_ (~raeven@h89n10-oes-a31.ias.bredband.telia.com) has joined #ceph
[22:43] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[22:43] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[22:43] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[22:43] * nathani (~nathani@2607:f2f8:ac88::) has joined #ceph
[22:43] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[22:43] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[22:43] * devicenull (sid4013@ealing.irccloud.com) has joined #ceph
[22:43] * JohnPreston78 (sid31393@ealing.irccloud.com) has joined #ceph
[22:43] * Pintomatic (sid25118@ealing.irccloud.com) has joined #ceph
[22:43] * scalability-junk (sid6422@ealing.irccloud.com) has joined #ceph
[22:43] * getzburg (sid24913@ealing.irccloud.com) has joined #ceph
[22:43] * DeMiNe0 (~DeMiNe0@104.131.119.74) has joined #ceph
[22:43] * krogon (~krogon@irdmzpr02-ext.ir.intel.com) has joined #ceph
[22:43] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[22:43] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[22:43] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[22:43] * icey (~Chris@0001bbad.user.oftc.net) has joined #ceph
[22:43] * jklare (~jklare@185.27.181.36) has joined #ceph
[22:43] * espeer (~quassel@phobos.isoho.st) has joined #ceph
[22:43] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[22:43] * destrudo (~destrudo@tomba.sonic.net) has joined #ceph
[22:43] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[22:43] * etienneme (~arch@5.ip-167-114-253.eu) has joined #ceph
[22:43] * rossdylan (~rossdylan@2605:6400:1:fed5:22:68c4:af80:cb6e) has joined #ceph
[22:43] * jlayton (~jlayton@107.13.84.55) has joined #ceph
[22:43] * TiCPU (~owrt@c216.218.54-96.clta.globetrotter.net) has joined #ceph
[22:43] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[22:43] * aiicore (~aiicore@s30.linuxpl.com) has joined #ceph
[22:43] * darkfader (~floh@88.79.251.60) has joined #ceph
[22:43] * joao (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[22:43] * Nats (~natscogs@114.31.195.238) has joined #ceph
[22:43] * dustinm` (~dustinm`@68.ip-149-56-14.net) has joined #ceph
[22:43] * gmmaha (~gmmaha@00021e7e.user.oftc.net) has joined #ceph
[22:43] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[22:43] * jmn (~jmn@nat-pool-bos-t.redhat.com) has joined #ceph
[22:43] * zerick_ (~zerick@104.131.101.65) has joined #ceph
[22:43] * Wahmed (~wahmed@206.174.203.195) has joined #ceph
[22:43] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) has joined #ceph
[22:43] * leseb (~leseb@81-64-223-102.rev.numericable.fr) has joined #ceph
[22:43] * zirpu (~zirpu@00013c46.user.oftc.net) has joined #ceph
[22:43] * LiftedKilt (~LiftedKil@dragons.have.mostlyincorrect.info) has joined #ceph
[22:43] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) has joined #ceph
[22:43] * rburkholder (~overonthe@199.68.193.54) has joined #ceph
[22:43] * dynamicudpate (~overonthe@199.68.193.54) has joined #ceph
[22:43] * Tene (~tene@173.13.139.236) has joined #ceph
[22:43] * Gugge-47527 (gugge@92.246.2.105) has joined #ceph
[22:43] * abhishekvrshny (~abhishekv@180.179.116.54) has joined #ceph
[22:43] * lurbs (user@uber.geek.nz) has joined #ceph
[22:43] * goberle_ (~goberle@mid.ygg.tf) has joined #ceph
[22:43] * kiranos_ (~quassel@109.74.11.233) has joined #ceph
[22:43] * mitchty (~quassel@130-245-47-212.rev.cloud.scaleway.com) has joined #ceph
[22:43] * _are__ (~quassel@2a01:238:4325:ca00:f065:c93c:f967:9285) has joined #ceph
[22:43] * ben2 (ben@pearl.meh.net.nz) has joined #ceph
[22:43] * Kruge_ (~Anus@198.211.99.93) has joined #ceph
[22:43] * cronburg_ (~cronburg@nat-pool-bos-t.redhat.com) has joined #ceph
[22:43] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[22:43] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[22:43] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[22:43] * badone (~badone@66.187.239.16) has joined #ceph
[22:43] * markl (~mark@knm.org) has joined #ceph
[22:43] * _nick (~nick@zarquon.dischord.org) has joined #ceph
[22:43] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[22:43] * `10` (~10@69.169.91.14) has joined #ceph
[22:43] * fli_ (fli@eastside.wirebound.net) has joined #ceph
[22:43] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[22:43] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[22:43] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) has joined #ceph
[22:43] * ndru_ (~jawsome@104.236.94.35) has joined #ceph
[22:43] * sw3 (sweaung@2400:6180:0:d0::66:100f) has joined #ceph
[22:43] * MrBy2 (~MrBy@85.115.23.2) has joined #ceph
[22:43] * react (~react@2001:4800:7815:103:f0d7:c55:ff05:60e8) has joined #ceph
[22:43] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) has joined #ceph
[22:43] * Aeso (~aesospade@aesospadez.com) has joined #ceph
[22:43] * logan (~logan@63.143.60.136) has joined #ceph
[22:43] * dis (~dis@00018d20.user.oftc.net) has joined #ceph
[22:43] * makz (~makz@2a00:d880:6:2d7::e463) has joined #ceph
[22:43] * hughsaunders (~hughsaund@2001:4800:7817:101:1843:3f8a:80de:df65) has joined #ceph
[22:43] * folivora (~out@devnull.drwxr-xr-x.eu) has joined #ceph
[22:43] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[22:43] * med (~medberry@71.74.177.250) has joined #ceph
[22:43] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[22:43] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[22:43] * Bosse (~bosse@erebus.klykken.com) has joined #ceph
[22:43] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[22:43] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[22:43] * marco208 (~root@159.253.7.204) has joined #ceph
[22:43] * GooseYArd (~GooseYArd@ec2-52-5-245-183.compute-1.amazonaws.com) has joined #ceph
[22:43] * remix_tj (~remix_tj@bonatti.remixtj.net) has joined #ceph
[22:43] * pasties (~pasties@00021c52.user.oftc.net) has joined #ceph
[22:43] * brians_ (~brianoftc@brian.by) has joined #ceph
[22:43] * JoeJulian (~JoeJulian@108.166.123.190) has joined #ceph
[22:43] * beardo (~beardo__@beardo.cc.lehigh.edu) has joined #ceph
[22:43] * elder_ (sid70526@id-70526.charlton.irccloud.com) has joined #ceph
[22:43] * Larsen (~andreas@2001:67c:578:2::15) has joined #ceph
[22:43] * Hazelesque_ (~hazel@phobos.hazelesque.uk) has joined #ceph
[22:43] * harbie (~notroot@2a01:4f8:211:2344:0:dead:beef:1) has joined #ceph
[22:43] * ElNounch (sid150478@id-150478.ealing.irccloud.com) has joined #ceph
[22:43] * motk (~motk@2600:3c00::f03c:91ff:fe98:51ee) has joined #ceph
[22:43] * qman__ (~rohroh@2600:3c00::f03c:91ff:fe69:92af) has joined #ceph
[22:43] * ccourtaut (~ccourtaut@178.62.125.124) has joined #ceph
[22:43] * benner (~benner@188.166.111.206) has joined #ceph
[22:43] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[22:43] * gtrott (sid78444@id-78444.tooting.irccloud.com) has joined #ceph
[22:43] * essjayhch (sid79416@id-79416.highgate.irccloud.com) has joined #ceph
[22:43] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[22:43] * jnq (sid150909@0001b7cc.user.oftc.net) has joined #ceph
[22:43] * shaon (~shaon@shaon.me) has joined #ceph
[22:43] * rektide (~rektide@eldergods.com) has joined #ceph
[22:43] * scheuk (~scheuk@204.246.67.78) has joined #ceph
[22:43] * arthurh (~arthurh@38.101.34.128) has joined #ceph
[22:43] * mnaser (~mnaser@162.253.53.193) has joined #ceph
[22:43] * jiffe (~jiffe@nsab.us) has joined #ceph
[22:43] * s3an2 (~root@korn.s3an.me.uk) has joined #ceph
[22:43] * Walex (~Walex@72.249.182.114) has joined #ceph
[22:43] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) has joined #ceph
[22:43] * kingcu (~kingcu@kona.ridewithgps.com) has joined #ceph
[22:43] * thadood (~thadood@slappy.thunderbutt.org) has joined #ceph
[22:43] * mfa298 (~mfa298@krikkit.yapd.net) has joined #ceph
[22:43] * singler (~singler@zeta.kirneh.eu) has joined #ceph
[22:43] * koma (~koma@0001c112.user.oftc.net) has joined #ceph
[22:43] * Randleman (~jesse@89.105.204.182) has joined #ceph
[22:43] * iggy (~iggy@mail.vten.us) has joined #ceph
[22:43] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) has joined #ceph
[22:43] * DrewBeer_ (~DrewBeer@216.152.240.203) has joined #ceph
[22:43] * Sgaduuw (~eelco@willikins.srv.eelcowesemann.nl) has joined #ceph
[22:43] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[22:44] * ChanServ sets mode +v nhm
[22:44] * ChanServ sets mode -o scuttle|afk
[22:48] <antongribok> hoonetorg, how many OSDs do you have in the cluster, and what are your settings for osd_max_backfills and osd_recovery_max_active?
[22:50] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[22:50] * vbellur (~vijay@68.177.129.155) has joined #ceph
[22:55] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[22:59] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[23:02] <hoonetorg> 24 osds
[23:02] * vbellur (~vijay@68.177.129.155) Quit (Ping timeout: 480 seconds)
[23:03] <hoonetorg> osd max backfills = 1
[23:04] <hoonetorg> osd recovery max active = 1
[23:04] <hoonetorg> antongribok ^^^
[23:08] * rendar (~I@95.233.118.203) has joined #ceph
[23:09] <hoonetorg> antongribok: does the setting of max backfills, max active to >1< make a deadlock possible?
[23:09] * linjan (~linjan@176.195.70.132) Quit (Ping timeout: 480 seconds)
[23:09] <hoonetorg> i did ceph tell osd.* injectargs --osd_max_backfills 3
[23:10] <hoonetorg> and osd.23 resumed backfill operation
[23:10] <hoonetorg> after that i set
[23:10] <hoonetorg> ceph tell osd.* injectargs --osd_recovery_max_active 3
[23:10] <hoonetorg> just to be sure (i don't know what i'm doing here) :)
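The sequence hoonetorg describes above can be sketched as follows. This is a hedged example, not a recommendation: osd.23 and the value 3 come from the conversation, the hyphenated/quoted injectargs spelling is the commonly documented form for hammer-era clusters, and these runtime changes require a live cluster and are not persisted across OSD restarts.

```shell
# Inspect the current values on one OSD via its admin socket
# (run on the host where osd.23 lives):
ceph daemon osd.23 config show | grep -E 'osd_max_backfills|osd_recovery_max_active'

# Raise both at runtime across all OSDs (runtime-only, lost on restart):
ceph tell osd.\* injectargs '--osd-max-backfills 3'
ceph tell osd.\* injectargs '--osd-recovery-max-active 3'

# If the new values help, persist them in ceph.conf under [osd]:
#   osd max backfills = 3
#   osd recovery max active = 3
```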
[23:14] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[23:14] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[23:20] <antongribok> for a relatively small cluster like yours it probably does not matter, I think that either 1 or 3 for either of those should not cause a problem, so something else is likely going on :(
[23:22] <hoonetorg> but interestingly after raising max backfills >1 -> osd.23 "un"stalled
[23:23] <hoonetorg> so i'm thinking of the fact that 1 backfill operation and/or 1 recovery active is sometimes not enough.
[23:23] <hoonetorg> antongribok: may that be possible?
[23:23] <antongribok> can you paste the output of "ceph osd pool ls detail"
[23:24] <hoonetorg> https://gist.github.com/hoonetorg/fd2d271df3675b1fe04dfafc46f6f776
[23:25] * sigsegv (~sigsegv@188.26.140.132) has joined #ceph
[23:26] * owlbot (~supybot@pct-empresas-50.uc3m.es) Quit (Remote host closed the connection)
[23:26] * sigsegv (~sigsegv@188.26.140.132) Quit ()
[23:27] * owlbot (~supybot@pct-empresas-50.uc3m.es) has joined #ceph
[23:28] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[23:30] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[23:33] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[23:38] <hoonetorg> antongribok:
[23:38] <antongribok> I'm not sure what's going on :(
[23:38] <hoonetorg> 2 ops are blocked > 2097.15 sec
[23:38] <hoonetorg> 2 ops are blocked > 2097.15 sec on osd.23
[23:38] <hoonetorg> 1 osds have slow requests
[23:39] <antongribok> I need to step away for an OpenStack meetup, but I'm probably missing something obvious... sorry I could not help, but would be curious what others suggest
[23:39] <hoonetorg> thx
[23:39] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[23:40] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[23:40] <hoonetorg> i think the 2 blocked requests are backfills and when i set max backfills <3 recovery stops
[23:40] <hoonetorg> can i find out what the slow requests are and why they occur?
[23:41] <[arx]> when you have a full cluster, how do you unset the full flag so you can delete rbd images
[23:41] <[arx]> ceph osd unset full doesn't actually remove it for me on hammer
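A possible explanation for [arx]'s observation, sketched below with untested example commands: on hammer, the monitors re-assert the `full` flag whenever any OSD is above the full ratio, so manually unsetting it is undone immediately. Temporarily raising the ratio usually lets deletes proceed; 0.97 is an illustrative value, not a recommendation.

```shell
# Check which OSDs are full/nearfull and the current ratios:
ceph health detail | grep -i full

# Temporarily raise the full ratio (pre-Luminous command) so the flag
# clears and deletes can run:
ceph pg set_full_ratio 0.97

# Delete data (e.g. the rbd images), then lower the ratio back to its
# previous value once utilization has dropped.
```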
[23:43] <antongribok> hoonetorg: try looking in output of: "ceph health detail"
[23:46] <hoonetorg> antongribok: i do already
[23:46] <hoonetorg> thx
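The slow-request lines hoonetorg pasted above have a regular shape, so they can be pulled out of `ceph health detail` output mechanically. A minimal sketch: the `SAMPLE` text mirrors the lines quoted in this log, and the regex is an assumption about the hammer-era output format, not a guaranteed stable interface.

```python
import re

# Sample modeled on the "ceph health detail" lines pasted above.
SAMPLE = """\
HEALTH_WARN 2 requests are blocked > 32 sec; 1 osds have slow requests
2 ops are blocked > 2097.15 sec on osd.23
1 osds have slow requests
"""

def blocked_ops(text):
    """Return (count, seconds, osd_id) for each 'ops are blocked' line."""
    pat = re.compile(r"(\d+) ops are blocked > ([\d.]+) sec on (osd\.\d+)")
    return [(int(n), float(s), osd) for n, s, osd in pat.findall(text)]

print(blocked_ops(SAMPLE))  # -> [(2, 2097.15, 'osd.23')]
```

From there one would look at the listed OSD's log (and `dump_historic_ops` on its admin socket) to see what the blocked operations actually are.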
[23:46] * takarider (~takarider@KD175108208098.ppp-bb.dion.ne.jp) Quit (Quit: Textual IRC Client: www.textualapp.com)
[23:49] * gwinger (~gwinger@ip5f5be42d.dynamic.kabel-deutschland.de) has joined #ceph
[23:50] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[23:50] <hoonetorg> here i have the pg-query of such a blocked pg (7.f)
[23:50] <hoonetorg> https://gist.github.com/hoonetorg/bd3153c9138f0f6c43474c5624a66ba9
[23:50] <hoonetorg> can someone help me with that???
[23:51] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[23:51] * dnunez (~dnunez@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[23:51] <hoonetorg> i see these:
[23:51] <hoonetorg> "last_backfill_started": "-1\/0\/\/0",
[23:51] <hoonetorg> "begin": "-1\/0\/\/0",
[23:52] <hoonetorg> "end": "-1\/0\/\/0",
[23:52] <hoonetorg> in recovery_state
[23:52] <hoonetorg> are these correct???
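On the question above: `-1/0//0` is how Ceph prints an unset/minimum hobject_t, so those values alone do not indicate corruption; for a backfill that has not progressed they are expected. A small sketch of digging those markers out of `ceph pg <pgid> query` JSON follows; the `SAMPLE` structure is a trimmed, hypothetical stand-in for the linked gist, not its actual contents.

```python
import json

# Hypothetical, trimmed stand-in for "ceph pg 7.f query" output.
SAMPLE = json.loads("""
{
  "state": "active+undersized+degraded+remapped+wait_backfill",
  "recovery_state": [
    {
      "name": "Started/Primary/Active",
      "last_backfill_started": "-1/0//0",
      "backfill_info": {"begin": "-1/0//0", "end": "-1/0//0"}
    }
  ]
}
""")

def backfill_markers(pg_query):
    """Collect the hobject-style backfill markers from recovery_state."""
    out = {}
    for entry in pg_query.get("recovery_state", []):
        if "last_backfill_started" in entry:
            out["last_backfill_started"] = entry["last_backfill_started"]
        info = entry.get("backfill_info", {})
        for key in ("begin", "end"):
            if key in info:
                out[key] = info[key]
    return out

print(backfill_markers(SAMPLE))
```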
[23:56] * ntpttr_ (~ntpttr@134.134.139.77) has joined #ceph
[23:56] * ntpttr (~ntpttr@192.55.55.41) Quit (Remote host closed the connection)
[23:56] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:59] * LeaChim (~LeaChim@host86-150-160-78.range86-150.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[23:59] * georgem (~Adium@45.72.156.229) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.