#ceph IRC Log

Index

IRC Log for 2015-12-08

Timestamps are in GMT/BST.

[0:02] * ychen_ (~ychen@69.25.143.34) has joined #ceph
[0:03] * ychen_ (~ychen@69.25.143.34) Quit (Remote host closed the connection)
[0:04] * rendar (~I@95.238.180.128) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:06] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) has joined #ceph
[0:07] * dyasny (~dyasny@104.158.24.36) has joined #ceph
[0:08] * bloatyfloat (~bloatyflo@46.37.172.253.srvlist.ukfast.net) Quit (Ping timeout: 480 seconds)
[0:08] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) Quit (Quit: Leaving.)
[0:09] * ychen (~ychen@69.25.143.32) Quit (Ping timeout: 480 seconds)
[0:09] * ychen (~ychen@69.25.143.34) has joined #ceph
[0:10] * gregmark (~Adium@68.87.42.115) has joined #ceph
[0:14] * ibravo (~ibravo@72.83.69.64) Quit (Quit: Leaving)
[0:14] * ychen (~ychen@69.25.143.34) Quit ()
[0:14] <shinobu> have any of you tried this upgrade path: dumpling -> firefly -> hammer
[0:21] * olid1111117 (~olid1982@aftr-185-17-204-204.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[0:22] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:98fc:ff00:50f6:96ed) Quit (Ping timeout: 480 seconds)
[0:25] * bloatyfloat (~bloatyflo@46.37.172.253.srvlist.ukfast.net) has joined #ceph
[0:25] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[0:27] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:29] * derjohn_mobi (~aj@tmo-111-158.customers.d1-online.com) has joined #ceph
[0:35] * shinobu (~oftc-webi@nat-pool-nrt-t1.redhat.com) Quit (Ping timeout: 480 seconds)
[0:36] * darkfader (~floh@88.79.251.60) Quit (Read error: No route to host)
[0:37] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[0:37] * darkfader (~floh@88.79.251.60) has joined #ceph
[0:38] * osso (~osso@sgp01-1-78-233-150-179.fbx.proxad.net) Quit (Quit: osso)
[0:44] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[0:44] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[0:47] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:48] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[0:53] * doppelgrau (~doppelgra@p54894075.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[0:56] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[0:59] * gregmark (~Adium@68.87.42.115) has joined #ceph
[1:01] * yguang11 (~yguang11@2001:4998:effd:600:15ff:b152:eb89:b24d) Quit (Remote host closed the connection)
[1:02] * yguang11 (~yguang11@2001:4998:effd:600:15ff:b152:eb89:b24d) has joined #ceph
[1:02] * bandrus (~brian@port-83-236-242-66.static.qsc.de) Quit (Quit: Leaving.)
[1:04] * dyasny (~dyasny@104.158.24.36) Quit (Ping timeout: 480 seconds)
[1:06] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:08] * hassifa (~Esvandiar@7V7AABR4T.tor-irc.dnsbl.oftc.net) has joined #ceph
[1:10] * yguang11 (~yguang11@2001:4998:effd:600:15ff:b152:eb89:b24d) Quit (Ping timeout: 480 seconds)
[1:13] * osso (~osso@sgp01-1-78-233-150-179.fbx.proxad.net) has joined #ceph
[1:20] * fcape (~fcape@107-220-57-73.lightspeed.austtx.sbcglobal.net) has joined #ceph
[1:24] * fsimonce (~simon@host28-31-dynamic.30-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:28] * brian_ (~textual@109.255.114.93) Quit (Read error: Connection reset by peer)
[1:28] * xarses (~xarses@64.124.158.100) Quit (Ping timeout: 480 seconds)
[1:29] * brian (~textual@109.255.114.93) has joined #ceph
[1:31] * swami1 (~swami@27.7.168.203) has joined #ceph
[1:32] * hassifa (~Esvandiar@7V7AABR4T.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[1:32] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[1:34] * garphy is now known as garphy`aw
[1:35] <KaneK> my cluster reports:
[1:35] <KaneK> health HEALTH_ERR
[1:35] <KaneK> 14 pgs inconsistent
[1:35] <KaneK> 14 scrub errors
[1:35] <KaneK> will it fix inconsistencies automatically or manual pg repair is needed?
[1:38] <davidz> pg repair is required
[1:38] <KaneK> why won't it invoke repair automatically?
[1:40] <davidz> No, it doesn't.
[1:40] <KaneK> I mean what's the reasoning for that? is it something that can go wrong?
[1:49] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[1:49] * vata (~vata@cable-21.246.173-197.electronicbox.net) has joined #ceph
[1:51] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[1:53] * C2J (~c2j@116.226.230.50) has joined #ceph
[1:55] * EinstCrazy (~EinstCraz@111.30.21.47) has joined #ceph
[1:57] * HOT_M (none@as54.tz2.dlp147.bih.net.ba) has joined #ceph
[1:58] * HOT_M (none@as54.tz2.dlp147.bih.net.ba) has left #ceph
[1:58] * HOT_M (none@as54.tz2.dlp147.bih.net.ba) has joined #ceph
[1:58] * HOT_M (none@as54.tz2.dlp147.bih.net.ba) has left #ceph
[2:02] * C2J (~c2j@116.226.230.50) Quit (Ping timeout: 480 seconds)
[2:02] * wyang (~wyang@199.115.114.199) has joined #ceph
[2:02] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:02] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:06] * LeaChim (~LeaChim@host86-185-146-193.range86-185.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:09] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[2:11] * yguang11 (~yguang11@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[2:11] * EinstCra_ (~EinstCraz@111.30.21.47) has joined #ceph
[2:14] * osso (~osso@sgp01-1-78-233-150-179.fbx.proxad.net) Quit (Quit: osso)
[2:15] * EinstCrazy (~EinstCraz@111.30.21.47) Quit (Ping timeout: 480 seconds)
[2:18] * samx (~Adium@12.206.204.58) Quit (Quit: Leaving.)
[2:21] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:30] * analbeard (~shw@host86-140-202-165.range86-140.btcentralplus.com) has joined #ceph
[2:31] * analbeard (~shw@host86-140-202-165.range86-140.btcentralplus.com) Quit ()
[2:33] * KaneK (~kane@12.206.204.58) Quit (Quit: KaneK)
[2:34] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:34] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:35] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:39] * swami1 (~swami@27.7.168.203) Quit (Quit: Leaving.)
[2:41] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[2:42] * zc00gii (~aldiyen@94.102.63.16) has joined #ceph
[2:45] * mhuang (~mhuang@119.254.120.72) has joined #ceph
[2:50] * EinstCrazy (~EinstCraz@111.30.21.47) has joined #ceph
[2:53] * EinstCra_ (~EinstCraz@111.30.21.47) Quit (Ping timeout: 480 seconds)
[2:56] * wyang (~wyang@199.115.114.199) Quit (Remote host closed the connection)
[2:56] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[2:59] <Nats__> the reasoning is likely that it's a mutative action
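For context on the exchange above: because repair is a mutative action, the usual flow is to list the inconsistent PGs from `ceph health detail` and issue `ceph pg repair <pgid>` per PG by hand. A minimal sketch of that triage step, assuming the hammer-era `health detail` line format shown in the sample (the PG ids here are made up):

```python
import re

def inconsistent_pgs(health_detail: str) -> list[str]:
    """Extract PG ids flagged inconsistent from `ceph health detail` output.

    Assumes lines of the form:
    'pg 3.1a is active+clean+inconsistent, acting [4,11,7]'
    """
    pgs = []
    for line in health_detail.splitlines():
        m = re.match(r"pg (\S+) is .*inconsistent", line.strip())
        if m:
            pgs.append(m.group(1))
    return pgs

def repair_commands(pgs: list[str]) -> list[str]:
    # Repair stays manual on purpose: emit the commands for an operator
    # to review before running them against the cluster.
    return [f"ceph pg repair {pg}" for pg in pgs]

sample = """\
HEALTH_ERR 14 pgs inconsistent; 14 scrub errors
pg 3.1a is active+clean+inconsistent, acting [4,11,7]
pg 3.4f is active+clean+inconsistent, acting [2,9,13]
"""
print(repair_commands(inconsistent_pgs(sample)))
# → ['ceph pg repair 3.1a', 'ceph pg repair 3.4f']
```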
[3:00] * yguang11 (~yguang11@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[3:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[3:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[3:06] * mhuang (~mhuang@119.254.120.72) Quit (Ping timeout: 480 seconds)
[3:07] * linuxkidd (~linuxkidd@72.sub-70-210-225.myvzw.com) Quit (Ping timeout: 480 seconds)
[3:07] * linuxkidd (~linuxkidd@72.sub-70-210-225.myvzw.com) has joined #ceph
[3:11] * wyang (~wyang@server213-171-196-75.live-servers.net) has joined #ceph
[3:11] * zc00gii (~aldiyen@4Z9AABT97.tor-irc.dnsbl.oftc.net) Quit ()
[3:14] * EinstCra_ (~EinstCraz@111.30.21.47) has joined #ceph
[3:14] * KaZeR (~KaZeR@64.201.252.132) Quit (Remote host closed the connection)
[3:18] * naoto (~naotok@27.131.11.254) has joined #ceph
[3:18] * mhuang (~mhuang@119.254.120.71) has joined #ceph
[3:19] * EinstCrazy (~EinstCraz@111.30.21.47) Quit (Ping timeout: 480 seconds)
[3:20] * C2J (~c2j@116.226.230.50) has joined #ceph
[3:22] * Mika_c (~Mika@122.146.93.152) has joined #ceph
[3:23] * zhaochao (~zhaochao@60.206.230.18) has joined #ceph
[3:24] * Thunderbird (~roderick@100.42.98.197) Quit (Ping timeout: 480 seconds)
[3:27] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) has joined #ceph
[3:29] * KaZeR (~KaZeR@2600:1010:b067:fbeb:ec3c:c169:fd73:3391) has joined #ceph
[3:32] * Thunderbird (~roderick@100.42.98.197) has joined #ceph
[3:33] * KaneK (~kane@cpe-172-88-240-14.socal.res.rr.com) has joined #ceph
[3:42] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:44] * mhuang (~mhuang@119.254.120.71) Quit (Quit: This computer has gone to sleep)
[3:45] * mhuang (~mhuang@119.254.120.71) has joined #ceph
[3:47] * KaZeR (~KaZeR@2600:1010:b067:fbeb:ec3c:c169:fd73:3391) Quit (Remote host closed the connection)
[4:00] * dyasny (~dyasny@104.158.24.36) has joined #ceph
[4:00] * georgem (~Adium@75-119-226-89.dsl.teksavvy.com) has joined #ceph
[4:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[4:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[4:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[4:02] * georgem (~Adium@75-119-226-89.dsl.teksavvy.com) Quit ()
[4:02] * georgem (~Adium@206.108.127.16) has joined #ceph
[4:11] * yanzheng (~zhyan@182.139.205.155) has joined #ceph
[4:15] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[4:15] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:16] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:16] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:17] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:17] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:18] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:18] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:18] * derjohn_mobi (~aj@tmo-111-158.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[4:18] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[4:19] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:19] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:20] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:20] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:21] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:21] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:22] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:24] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:25] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:25] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:26] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:26] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:27] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:27] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:28] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:28] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:29] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:29] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:30] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:30] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:31] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:31] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:32] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:32] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:33] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:33] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:34] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:34] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:35] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:35] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) has joined #ceph
[4:35] * haomaiwang (~haomaiwan@60-250-10-240.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[4:36] * haomaiwang (~haomaiwan@li715-221.members.linode.com) has joined #ceph
[4:47] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[4:48] * vikhyat (~vumrao@114.143.46.57) has joined #ceph
[4:49] * mhuang (~mhuang@119.254.120.71) Quit (Quit: This computer has gone to sleep)
[5:01] * haomaiwang (~haomaiwan@li715-221.members.linode.com) Quit (Remote host closed the connection)
[5:01] * haomaiwang (~haomaiwan@li715-221.members.linode.com) has joined #ceph
[5:10] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[5:15] * dlan_ (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[5:16] * samx (~Adium@cpe-172-90-97-62.socal.res.rr.com) has joined #ceph
[5:16] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:20] * mhuang (~mhuang@119.254.120.72) has joined #ceph
[5:24] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) has joined #ceph
[5:25] * vikhyat (~vumrao@114.143.46.57) Quit (Quit: Leaving)
[5:27] * derjohn_mob (~aj@tmo-111-158.customers.d1-online.com) has joined #ceph
[5:31] * swami1 (~swami@49.44.57.245) has joined #ceph
[5:32] * Vacuum__ (~Vacuum@88.130.214.231) has joined #ceph
[5:36] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[5:36] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[5:38] * Vacuum_ (~Vacuum@i59F79BD8.versanet.de) Quit (Ping timeout: 480 seconds)
[5:46] * swami2 (~swami@49.32.0.190) has joined #ceph
[5:46] * swami1 (~swami@49.44.57.245) Quit (Ping timeout: 480 seconds)
[5:58] * KeeperOfTheSoul (~xanax`@tsn109-201-152-9.dyn.nltelcom.net) has joined #ceph
[6:01] * haomaiwang (~haomaiwan@li715-221.members.linode.com) Quit (Remote host closed the connection)
[6:01] * haomaiwang (~haomaiwan@li715-221.members.linode.com) has joined #ceph
[6:02] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[6:03] * C2J (~c2j@116.226.230.50) Quit (Read error: No route to host)
[6:04] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[6:09] * mhuang (~mhuang@119.254.120.72) Quit (Quit: This computer has gone to sleep)
[6:10] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:24] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[6:28] * KeeperOfTheSoul (~xanax`@7V7AABSAD.tor-irc.dnsbl.oftc.net) Quit ()
[6:29] * overclk (~vshankar@121.244.87.124) has joined #ceph
[6:30] * MACscr (~Adium@2601:247:4101:a0be:ecd3:8e5a:b8df:f6fd) Quit (Quit: Leaving.)
[6:32] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:32] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[6:33] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[6:42] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[6:43] * C2J (~c2j@223.104.5.229) has joined #ceph
[6:44] * derjohn_mobi (~aj@tmo-100-244.customers.d1-online.com) has joined #ceph
[6:45] * overclk (~vshankar@121.244.87.124) Quit (Ping timeout: 480 seconds)
[6:48] * derjohn_mob (~aj@tmo-111-158.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[6:48] * rburkholder (~overonthe@199.68.193.62) Quit (Read error: Connection reset by peer)
[6:49] * rburkholder (~overonthe@199.68.193.54) has joined #ceph
[6:50] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:50] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:50] * karnan (~karnan@106.206.156.68) has joined #ceph
[6:51] * C2J (~c2j@223.104.5.229) Quit (Ping timeout: 480 seconds)
[6:54] * derjohn_mobi (~aj@tmo-100-244.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[6:58] * mhuang (~mhuang@119.254.120.72) has joined #ceph
[7:01] * haomaiwang (~haomaiwan@li715-221.members.linode.com) Quit (Remote host closed the connection)
[7:01] * haomaiwang (~haomaiwan@li715-221.members.linode.com) has joined #ceph
[7:02] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[7:07] * samx (~Adium@cpe-172-90-97-62.socal.res.rr.com) Quit (Quit: Leaving.)
[7:09] * Peaced (~Enikma@tsn109-201-152-9.dyn.nltelcom.net) has joined #ceph
[7:31] * dlan (~dennis@116.228.88.131) has joined #ceph
[7:32] * haomaiwang (~haomaiwan@li715-221.members.linode.com) Quit (Remote host closed the connection)
[7:33] * haomaiwang (~haomaiwan@li715-221.members.linode.com) has joined #ceph
[7:39] * Peaced (~Enikma@4Z9AABUEJ.tor-irc.dnsbl.oftc.net) Quit ()
[7:40] * C2J (~c2j@116.231.213.111) has joined #ceph
[7:47] * derjohn_mobi (~aj@88.128.82.85) has joined #ceph
[7:49] * mhuang_ (~mhuang@119.254.120.72) has joined #ceph
[7:50] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[7:52] * ade (~abradshaw@dslb-188-106-108-253.188.106.pools.vodafone-ip.de) has joined #ceph
[7:55] * mhuang (~mhuang@119.254.120.72) Quit (Ping timeout: 480 seconds)
[7:56] * jwilkins (~jowilkin@2601:644:4000:97c0::4a04) Quit (Quit: Leaving)
[8:01] * haomaiwang (~haomaiwan@li715-221.members.linode.com) Quit (Remote host closed the connection)
[8:01] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[8:01] * haomaiwang (~haomaiwan@li715-221.members.linode.com) has joined #ceph
[8:04] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) Quit (Quit: Bye)
[8:08] * jtaguinerd (~androirc@15.211.146.17) has joined #ceph
[8:09] * haomaiwang (~haomaiwan@li715-221.members.linode.com) Quit (Quit: Leaving...)
[8:09] * haomaiwang (~haomaiwan@li715-221.members.linode.com) has joined #ceph
[8:19] * Knuckx (~Kwen@94.102.63.16) has joined #ceph
[8:21] * swami1 (~swami@49.32.0.167) has joined #ceph
[8:24] * swami3 (~swami@49.32.0.167) has joined #ceph
[8:24] * swami4 (~swami@49.44.57.245) has joined #ceph
[8:25] * swami2 (~swami@49.32.0.190) Quit (Ping timeout: 480 seconds)
[8:28] * haomaiwang (~haomaiwan@li715-221.members.linode.com) Quit (Remote host closed the connection)
[8:28] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[8:29] * swami1 (~swami@49.32.0.167) Quit (Ping timeout: 480 seconds)
[8:32] * swami3 (~swami@49.32.0.167) Quit (Ping timeout: 480 seconds)
[8:41] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[8:42] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[8:47] * derjohn_mobi (~aj@88.128.82.85) Quit (Ping timeout: 480 seconds)
[8:48] * overclk (~vshankar@121.244.87.117) has joined #ceph
[8:49] * enax (~enax@hq.ezit.hu) has joined #ceph
[8:49] * Knuckx (~Kwen@7V7AABSCN.tor-irc.dnsbl.oftc.net) Quit ()
[8:49] * Sirrush (~Linkshot@94.102.63.18) has joined #ceph
[8:49] * overclk (~vshankar@121.244.87.117) Quit ()
[8:50] * mhuang_ (~mhuang@119.254.120.72) Quit (Quit: This computer has gone to sleep)
[8:50] * ade_b (~abradshaw@dslb-088-075-018-252.088.075.pools.vodafone-ip.de) has joined #ceph
[8:50] * ade (~abradshaw@dslb-188-106-108-253.188.106.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[8:51] * ade_b (~abradshaw@dslb-088-075-018-252.088.075.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[8:51] * ade (~abradshaw@dslb-088-075-018-252.088.075.pools.vodafone-ip.de) has joined #ceph
[8:51] * MACscr (~Adium@2601:247:4101:a0be:b043:3948:365b:3326) has joined #ceph
[8:54] * mhuang_ (~mhuang@119.254.120.72) has joined #ceph
[8:54] <sep76> 2 active+undersized+degraded ; stuck like this for days now. why does it not become active+clean ? http://paste.debian.net/342175/
[8:55] <MACscr> because you need 3?
[8:55] <sep76> yes... but when an osd fails, should it not rebalance?
[8:56] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:57] * wyang (~wyang@server213-171-196-75.live-servers.net) Quit (Remote host closed the connection)
[8:58] * dgurtner (~dgurtner@178.197.231.21) has joined #ceph
[8:58] * enax (~enax@hq.ezit.hu) Quit (Remote host closed the connection)
[8:58] * ade_b (~abradshaw@dslb-188-102-055-215.188.102.pools.vodafone-ip.de) has joined #ceph
[8:59] * ade (~abradshaw@dslb-088-075-018-252.088.075.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[9:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:00] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:00] <sep76> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/ ; this document talks about having to mark osd's lost. but in that case there were blocked peerings. i do not see that in my lab
[9:01] * Wielebny (~Icedove@cl-927.waw-01.pl.sixxs.net) Quit (Quit: Wielebny)
[9:02] * taguinerd (~androirc@15.211.146.17) has joined #ceph
[9:02] * jtaguinerd (~androirc@15.211.146.17) Quit (Read error: Connection reset by peer)
[9:04] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[9:10] * taguinerd (~androirc@15.211.146.17) Quit (Ping timeout: 480 seconds)
[9:11] <sep76> do you need to rm the osd's before the cluster balances ? so you can not just leave osd's in the down state for any extended time ?
[9:13] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:13] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:13] <mfa298> sep76: I've had similar issues with lots of OSDs which are down and out. I think the issue is that they still play a part in the host weights and crush rules.
[9:13] * pabluk_ is now known as pabluk
[9:14] <mfa298> you may want to "ceph osd crush remove osd.<id>" at least - this is likely to cause a rebalance
[9:14] * enax (~enax@hq.ezit.hu) has joined #ceph
[9:14] * Wielebny (~Icedove@wr1.contium.pl) has joined #ceph
[9:15] <mfa298> if the drives are really gone it probably makes sense to "osd rm" and "auth del" them as well
[9:15] <mfa298> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
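The removal sequence mfa298 describes above can be sketched as a dry run; the osd ids used here (12, 37) are placeholders for whatever dead OSDs your cluster reports, and the commands are echoed rather than executed so they can be reviewed first:

```shell
# Dry-run sketch of removing dead OSDs, per the linked docs.
# Substitute your own osd ids; remove the echoes to actually run it.
cmds=$(
  for id in 12 37; do
      echo "ceph osd crush remove osd.${id}"   # drop from CRUSH -> triggers rebalance
      echo "ceph auth del osd.${id}"           # remove its cephx key
      echo "ceph osd rm ${id}"                 # remove from the osd map
  done
)
printf '%s\n' "$cmds"
```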
[9:17] <sep76> naturally. i just didn't think you had to do it so quickly to avoid getting into problems.
[9:19] * Sirrush (~Linkshot@94.102.63.18) Quit ()
[9:19] * osuka_ (~Sami345@178.162.216.42) has joined #ceph
[9:19] <mfa298> you look to have a fair proportion of down+out drives.
[9:20] * analbeard (~shw@support.memset.com) has joined #ceph
[9:20] <mfa298> my experience from having to shuffle some of our hardware around over the last few weeks is that it's worth doing the crush remove part fairly quickly as it gives some control over rebalancing.
[9:23] * yuriw1 (~Adium@2601:645:4380:112c:c83a:bdfa:b54c:c1c3) has joined #ceph
[9:24] <sep76> i see.
[9:25] <sep76> thank you for your insight, it just means it's a little less self healing than I assumed
[9:28] * derjohn_mobi (~aj@fw.gkh-setu.de) has joined #ceph
[9:30] * yuriw (~Adium@2601:645:4380:112c:789d:a0ec:803:ac12) Quit (Ping timeout: 480 seconds)
[9:30] <sep76> a different question... i am testing ceph on an older cluster. 36 x 3TB drives x 6 nodes ; but the hardware has very little ram compared to the requirements on http://docs.ceph.com/docs/master/start/hardware-recommendations/ ; i only have 12 GB compared to 108 GB ("1GB per 1TB of osd data") ; i tried running ceph with only 10 osd's per node, but even that seems to be way too heavy. ; is there any way to reduce the memory requirements? e.g. lvm or raid together the osd's into fewer larger osd's, for instance?
[9:31] <sep76> or should i just give up on this hardware, optionally just run 4 x 3TB osd's per node, since that matches the 1GB ram per TB of osd data in the requirements
[9:32] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:33] * ram_ (~oftc-webi@static-202-65-140-146.pol.net.in) has joined #ceph
[9:34] * overclk (~vshankar@121.244.87.117) has joined #ceph
[9:39] * linjan_ (~linjan@176.195.62.254) has joined #ceph
[9:39] <boolman> can anyone tell me what this means: "mds0: Client 33206 failing to respond to cache pressure" ? , I see this on all of my clients
[9:42] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[9:47] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[9:49] * osuka_ (~Sami345@6YRAABE39.tor-irc.dnsbl.oftc.net) Quit ()
[9:51] * IcePic (~jj@2001:6b0:5:1688:3022:8598:f882:8a08) Quit (Remote host closed the connection)
[9:51] * IcePic (~jj@c66.it.su.se) has joined #ceph
[9:52] * bandrus (~brian@dhcp57-205.laptop-urz.uni-heidelberg.de) has joined #ceph
[9:54] * bara (~bara@213.175.37.10) has joined #ceph
[9:56] <mfa298> sep76: there are some tuneables which can help reduce the memory footprint. Some of the later docs also suggest 1-2GB per drive rather than per TB
[9:56] <sep76> do you have a link to these tuneables ?
[9:56] <mfa298> In a healthy state you'll probably find the ram usage isn't too bad, but with recovery it can grow.
[9:57] <mfa298> not to hand, but some of them are related to the number of OSD maps that are kept normally.
[9:59] * thomnico (~thomnico@2a01:e35:8b41:120:30e8:96da:2de2:a747) has joined #ceph
[9:59] <mfa298> you probably want to increase your ram as a first step though. Personally I'd aim for something like 2GB per OSD as a starting point.
[9:59] * branto1 (~branto@ip-78-102-208-28.net.upcbroadband.cz) has joined #ceph
[10:01] <mfa298> in terms of self healing, ceph will normally do a reasonable job if a drive / host fails, I suspect in your case with so many osds down+out but still in the crush map it's trying to rebalance to various down drives so struggling. (This is based on my observations so could be wrong)
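The "number of OSD maps kept" tuneables mfa298 alludes to above are usually set in `ceph.conf`. A sketch of the commonly suggested knobs from that era follows; the values are illustrative starting points only, not tested recommendations, and the option names should be checked against the config reference for your release:

```ini
# ceph.conf fragment -- illustrative values for reducing OSD memory use by
# keeping fewer OSD maps around; verify names/defaults for your release.
[osd]
osd map cache size = 50              ; maps cached in memory (default was larger)
osd map max advance = 25             ; keep below the cache size
osd map share max epochs = 25
osd pg epoch persisted max stale = 25
```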
[10:02] * garphy`aw is now known as garphy
[10:02] * taguinerd (~androirc@119.56.118.108) has joined #ceph
[10:05] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[10:10] * joao (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[10:10] * ChanServ sets mode +o joao
[10:11] * taguinerd (~androirc@119.56.118.108) Quit (Remote host closed the connection)
[10:12] * rakeshgm (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[10:13] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) has joined #ceph
[10:13] * tdb_ (~tdb@myrtle.kent.ac.uk) has joined #ceph
[10:14] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:15] * jluis (~joao@8.184.114.89.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[10:15] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[10:18] * Mika_ (~Mika@122.146.93.152) has joined #ceph
[10:21] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[10:21] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[10:23] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) has joined #ceph
[10:25] * Mika_c (~Mika@122.146.93.152) Quit (Ping timeout: 480 seconds)
[10:27] * xavpaice (~oftc-webi@121-73-4-100.cable.telstraclear.net) has joined #ceph
[10:29] <xavpaice> hey there, 'scuse me barging in :)
[10:29] <xavpaice> is there a known issue with http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-trusty-x86_64-basic ?
[10:29] <xavpaice> I get a 403 when trying to connect, and that means I can't install Rados Gateway
[10:30] * Nijikokun (~W|ldCraze@104.238.169.87) has joined #ceph
[10:32] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:33] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:33] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:37] * yuriw (~Adium@2601:645:4380:112c:383c:64f5:fd5:be7c) has joined #ceph
[10:40] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[10:41] * jordanP (~jordan@bdv75-2-81-57-250-57.fbx.proxad.net) has joined #ceph
[10:42] <jcsp> xavpaice: there are a bunch of servers getting moved around this week, so that'll probably be the cause
[10:42] <jcsp> the people handling that are in the US, so if you post to ceph-users you might get a response when they wake up
[10:43] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[10:43] * gucki (~smuxi@mx01.lifecodexx.com) has joined #ceph
[10:45] * yuriw1 (~Adium@2601:645:4380:112c:c83a:bdfa:b54c:c1c3) Quit (Ping timeout: 480 seconds)
[10:47] * mhuang_ (~mhuang@119.254.120.72) Quit (Quit: This computer has gone to sleep)
[10:49] <xavpaice> great stuff, will do. Thanks for that!
[10:51] * olid1111117 (~olid1982@aftr-185-17-204-175.dynamic.mnet-online.de) has joined #ceph
[10:52] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[10:52] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:52] * LPG_ (~LPG@c-50-181-212-148.hsd1.wa.comcast.net) has joined #ceph
[10:53] * brian_ (~textual@109.255.114.93) has joined #ceph
[10:53] * gabrtv_ (sid36209@id-36209.charlton.irccloud.com) has joined #ceph
[10:53] * essjayhch_ (sid79416@id-79416.highgate.irccloud.com) has joined #ceph
[10:54] * mhuang_ (~mhuang@119.254.120.72) has joined #ceph
[10:54] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:55] * yanzheng1 (~zhyan@182.139.205.155) has joined #ceph
[10:55] * DeMiNe0_ (~DeMiNe0@104.131.119.74) has joined #ceph
[10:56] <liiwi> /win 11
[10:56] <liiwi> erp
[10:56] * evilrob_ (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) has joined #ceph
[10:56] * wayneseguin (~wayneeseg@mp64.overnothing.com) has joined #ceph
[10:56] * benner_ (~benner@188.166.111.206) has joined #ceph
[10:56] * Hazelesque_ (~hazel@phobos.hazelesque.uk) has joined #ceph
[10:56] * kingcu_ (~kingcu@kona.ridewithgps.com) has joined #ceph
[10:56] * DrewBeer_ (~DrewBeer@216.152.240.203) has joined #ceph
[10:57] * ndevos_ (~ndevos@nat-pool-ams2-5.redhat.com) has joined #ceph
[10:57] * disposab1e (disposable@shell.websupport.sk) has joined #ceph
[10:57] * rektide_ (~rektide@eldergods.com) has joined #ceph
[10:57] * bro__ (~flybyhigh@panik.darksystem.net) has joined #ceph
[10:57] * phantomcircuit (~phantomci@strateman.ninja) Quit (Max SendQ exceeded)
[10:57] * phantomcircuit (~phantomci@2600:3c01::f03c:91ff:fe73:6892) has joined #ceph
[10:57] * jordanP (~jordan@bdv75-2-81-57-250-57.fbx.proxad.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * joao (~joao@8.184.114.89.rev.vodafone.pt) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * MACscr (~Adium@2601:247:4101:a0be:b043:3948:365b:3326) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * haomaiwang (~haomaiwan@103.15.217.218) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * yanzheng (~zhyan@182.139.205.155) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * linuxkidd (~linuxkidd@72.sub-70-210-225.myvzw.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * brian (~textual@109.255.114.93) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * hchen (~hchen@nat-pool-bos-t.redhat.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * KHJared (~KHJared@65.99.235.248) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * Pies (~Pies@srv229.opcja.pl) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * benner (~benner@188.166.111.206) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * marco208 (~root@159.253.7.204) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * disposable (disposable@shell.websupport.sk) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * marrusl (~mark@209-150-46-243.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * DrewBeer (~DrewBeer@216.152.240.203) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * jidar (~jidar@r2d2.fap.me) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * kingcu (~kingcu@kona.ridewithgps.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * jbran (~jbran@104.131.146.128) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * rossdylan (~rossdylan@losna.helixoide.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * cetex (~oskar@nadine.juza.se) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * rturk-away (~rturk@ds3553.dreamservers.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * `10` (~10@69.169.91.14) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * nickvanw (~nick@104.131.69.27) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * DeMiNe0 (~DeMiNe0@104.131.119.74) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * jnq (~jnq@0001b7cc.user.oftc.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * Nats__ (~natscogs@114.31.195.238) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * qman (~rohroh@2600:3c00::f03c:91ff:fe69:92af) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * lookcrabs (~lookcrabs@tail.seeee.us) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * Georgyo_ (~georgyo@2600:3c03:e000:71::cafe:3) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * essjayhch (sid79416@id-79416.highgate.irccloud.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * kamalmarhubi (sid26581@id-26581.tooting.irccloud.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * gabrtv (sid36209@id-36209.charlton.irccloud.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * getzburg (sid24913@id-24913.ealing.irccloud.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * m0zes (~mozes@beocat.cis.ksu.edu) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * LPG (~LPG@c-50-181-212-148.hsd1.wa.comcast.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * evilrob (~evilrob@2600:3c00::f03c:91ff:fedf:1d3d) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * atg (~atg@10.127.254.xxx) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * aeroevan (~aeroevan@00015f77.user.oftc.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * Kupo1 (~tyler.wil@23.111.254.159) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * ndevos (~ndevos@nat-pool-ams2-5.redhat.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * dec (~dec@ec2-54-66-50-124.ap-southeast-2.compute.amazonaws.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * toabctl (~toabctl@toabctl.de) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * bro_ (~flybyhigh@panik.darksystem.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * markl (~mark@knm.org) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * braderhart (sid124863@braderhart.user.oftc.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * mrasmus (~mrasmus@mrasm.us) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * motk (~motk@2600:3c00::f03c:91ff:fe98:51ee) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * KindOne (kindone@0001a7db.user.oftc.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * Aeso (~aesospade@aesospadez.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * wkennington (~william@c-50-184-242-109.hsd1.ca.comcast.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * dustinm` (~dustinm`@2607:5300:100:200::160d) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * a1-away (~jelle@62.27.85.48) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * rektide (~rektide@eldergods.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * cephalobot (~ceph@ds3553.dreamservers.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * Hazelesque (~hazel@phobos.hazelesque.uk) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * wayneeseguin (~wayneeseg@mp64.overnothing.com) Quit (synthon.oftc.net charm.oftc.net)
[10:57] * wayneseguin is now known as wayneeseguin
[10:57] * gabrtv_ is now known as gabrtv
[10:57] * essjayhch_ is now known as essjayhch
[10:58] * jordanP (~jordan@bdv75-2-81-57-250-57.fbx.proxad.net) has joined #ceph
[10:58] * joao (~joao@8.184.114.89.rev.vodafone.pt) has joined #ceph
[10:58] * MACscr (~Adium@2601:247:4101:a0be:b043:3948:365b:3326) has joined #ceph
[10:58] * haomaiwang (~haomaiwan@103.15.217.218) has joined #ceph
[10:58] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) has joined #ceph
[10:58] * yanzheng (~zhyan@182.139.205.155) has joined #ceph
[10:58] * linuxkidd (~linuxkidd@72.sub-70-210-225.myvzw.com) has joined #ceph
[10:58] * brian (~textual@109.255.114.93) has joined #ceph
[10:58] * hchen (~hchen@nat-pool-bos-t.redhat.com) has joined #ceph
[10:58] * KHJared (~KHJared@65.99.235.248) has joined #ceph
[10:58] * Pies (~Pies@srv229.opcja.pl) has joined #ceph
[10:58] * marco208 (~root@159.253.7.204) has joined #ceph
[10:58] * marrusl (~mark@209-150-46-243.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[10:58] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[10:58] * wkennington (~william@c-50-184-242-109.hsd1.ca.comcast.net) has joined #ceph
[10:58] * markl (~mark@knm.org) has joined #ceph
[10:58] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[10:58] * lincolnb (~lincoln@c-71-57-68-189.hsd1.il.comcast.net) has joined #ceph
[10:58] * aeroevan (~aeroevan@00015f77.user.oftc.net) has joined #ceph
[10:58] * atg (~atg@10.127.254.xxx) has joined #ceph
[10:58] * LPG (~LPG@c-50-181-212-148.hsd1.wa.comcast.net) has joined #ceph
[10:58] * m0zes (~mozes@beocat.cis.ksu.edu) has joined #ceph
[10:58] * getzburg (sid24913@id-24913.ealing.irccloud.com) has joined #ceph
[10:58] * kamalmarhubi (sid26581@id-26581.tooting.irccloud.com) has joined #ceph
[10:58] * Georgyo_ (~georgyo@2600:3c03:e000:71::cafe:3) has joined #ceph
[10:58] * cephalobot (~ceph@ds3553.dreamservers.com) has joined #ceph
[10:58] * motk (~motk@2600:3c00::f03c:91ff:fe98:51ee) has joined #ceph
[10:58] * lookcrabs (~lookcrabs@tail.seeee.us) has joined #ceph
[10:58] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[10:58] * theanalyst (theanalyst@open.source.rocks.my.socks.firrre.com) has joined #ceph
[10:58] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[10:58] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[10:58] * qman (~rohroh@2600:3c00::f03c:91ff:fe69:92af) has joined #ceph
[10:58] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[10:58] * Aeso (~aesospade@aesospadez.com) has joined #ceph
[10:58] * SpamapS (~SpamapS@xencbyrum2.srihosting.com) has joined #ceph
[10:58] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[10:58] * Nats__ (~natscogs@114.31.195.238) has joined #ceph
[10:58] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[10:58] * mrasmus (~mrasmus@mrasm.us) has joined #ceph
[10:58] * toabctl (~toabctl@toabctl.de) has joined #ceph
[10:58] * jgornick (~jgornick@2600:3c00::f03c:91ff:fedf:72b4) has joined #ceph
[10:58] * jnq (~jnq@0001b7cc.user.oftc.net) has joined #ceph
[10:58] * DeMiNe0 (~DeMiNe0@104.131.119.74) has joined #ceph
[10:58] * nickvanw (~nick@104.131.69.27) has joined #ceph
[10:58] * `10` (~10@69.169.91.14) has joined #ceph
[10:58] * rturk-away (~rturk@ds3553.dreamservers.com) has joined #ceph
[10:58] * dustinm` (~dustinm`@2607:5300:100:200::160d) has joined #ceph
[10:58] * a1-away (~jelle@62.27.85.48) has joined #ceph
[10:58] * cetex (~oskar@nadine.juza.se) has joined #ceph
[10:58] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[10:58] * rossdylan (~rossdylan@losna.helixoide.com) has joined #ceph
[10:58] * jbran (~jbran@104.131.146.128) has joined #ceph
[10:58] * dec (~dec@ec2-54-66-50-124.ap-southeast-2.compute.amazonaws.com) has joined #ceph
[10:58] * jidar (~jidar@r2d2.fap.me) has joined #ceph
[10:58] * DrewBeer (~DrewBeer@216.152.240.203) has joined #ceph
[10:58] * ChanServ sets mode +v joao
[10:58] * DrewBeer (~DrewBeer@216.152.240.203) Quit (Ping timeout: 480 seconds)
[10:59] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[10:59] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[10:59] * LPG (~LPG@c-50-181-212-148.hsd1.wa.comcast.net) Quit (Ping timeout: 480 seconds)
[10:59] * DeMiNe0 (~DeMiNe0@104.131.119.74) Quit (Ping timeout: 480 seconds)
[10:59] * yanzheng (~zhyan@182.139.205.155) Quit (Ping timeout: 480 seconds)
[10:59] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[10:59] * brian (~textual@109.255.114.93) Quit (Ping timeout: 480 seconds)
[11:00] * Nijikokun (~W|ldCraze@7V7AABSFG.tor-irc.dnsbl.oftc.net) Quit ()
[11:00] * wyang (~wyang@h88-150-237-2.host.redstation.co.uk) has joined #ceph
[11:03] * LeaChim (~LeaChim@host86-185-146-193.range86-185.btcentralplus.com) has joined #ceph
[11:04] * mhuang_ (~mhuang@119.254.120.72) Quit (Quit: This computer has gone to sleep)
[11:06] * bandrus (~brian@dhcp57-205.laptop-urz.uni-heidelberg.de) Quit (Ping timeout: 480 seconds)
[11:11] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[11:12] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[11:13] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[11:19] * xavpaice (~oftc-webi@121-73-4-100.cable.telstraclear.net) Quit (Quit: Page closed)
[11:21] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c985:bab3:55c1:eca7) has joined #ceph
[11:22] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:24] * osso (~osso@sgp01-1-78-233-150-179.fbx.proxad.net) has joined #ceph
[11:28] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[11:28] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[11:29] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[11:29] * mhuang_ (~mhuang@119.254.120.72) has joined #ceph
[11:30] * linuxkidd (~linuxkidd@72.sub-70-210-225.myvzw.com) Quit (Ping timeout: 480 seconds)
[11:31] * C2J (~c2j@116.231.213.111) Quit (Ping timeout: 480 seconds)
[11:32] * linjan_ (~linjan@176.195.62.254) Quit (Ping timeout: 480 seconds)
[11:33] * Mika_ (~Mika@122.146.93.152) Quit (Read error: Connection reset by peer)
[11:34] * karnan (~karnan@106.206.156.68) Quit (Quit: Leaving)
[11:34] * karnan (~karnan@106.206.156.68) has joined #ceph
[11:35] * C2J (~c2j@116.231.213.111) has joined #ceph
[11:39] * linuxkidd (~linuxkidd@72.sub-70-210-225.myvzw.com) has joined #ceph
[11:40] * daviddcc (~dcasier@LAubervilliers-656-1-16-160.w217-128.abo.wanadoo.fr) has joined #ceph
[11:43] * linjan_ (~linjan@176.195.62.254) has joined #ceph
[11:59] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Ping timeout: 480 seconds)
[11:59] * AppleHead (~jog@nat.mrc-lmb.cam.ac.uk) Quit (Ping timeout: 480 seconds)
[11:59] * AppleHead (~jog@nat.mrc-lmb.cam.ac.uk) has joined #ceph
[11:59] * EinstCra_ (~EinstCraz@111.30.21.47) Quit (Remote host closed the connection)
[12:00] * bandrus (~brian@hoeckerm.urz.uni-heidelberg.de) has joined #ceph
[12:03] <sep76> mfa298, thanks for your assistance. there is a limit to how much ram the mainboard can deal with. so that's a problem. but this is a lab tho. so no disaster.
[12:04] * vbellur (~vijay@c-71-234-227-202.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[12:04] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[12:05] * badone (~badone@66.187.239.16) Quit (Remote host closed the connection)
[12:06] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:06] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[12:10] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[12:10] * badone (~badone@66.187.239.16) has joined #ceph
[12:11] * bandrus (~brian@hoeckerm.urz.uni-heidelberg.de) Quit (Quit: Leaving.)
[12:12] * naoto (~naotok@27.131.11.254) Quit (Quit: Leaving...)
[12:14] * botho (~botho@pD9F92AA7.dip0.t-ipconnect.de) has joined #ceph
[12:21] * AppleHead (~jog@nat.mrc-lmb.cam.ac.uk) Quit (Remote host closed the connection)
[12:24] * EinstCrazy (~EinstCraz@117.13.201.130) has joined #ceph
[12:25] * olid1111117 (~olid1982@aftr-185-17-204-175.dynamic.mnet-online.de) Quit (Ping timeout: 480 seconds)
[12:27] * brian_ is now known as brian
[12:29] * linuxkidd (~linuxkidd@72.sub-70-210-225.myvzw.com) Quit (Ping timeout: 480 seconds)
[12:33] * Qiasfah (~Thononain@104.40.1.143) has joined #ceph
[12:33] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:34] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:34] * zhaochao (~zhaochao@60.206.230.18) Quit (Quit: ChatZilla 0.9.92 [Iceweasel 38.4.0/20151103222033])
[12:39] * linuxkidd (~linuxkidd@72.sub-70-210-225.myvzw.com) has joined #ceph
[12:40] * jayjay (~jjs@2001:9e0:4:12:68d2:7caf:f839:8afd) has joined #ceph
[12:43] * overclk (~vshankar@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:47] * yanzheng1 (~zhyan@182.139.205.155) Quit (Quit: This computer has gone to sleep)
[12:52] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[12:57] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:57] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[13:00] * thomnico (~thomnico@2a01:e35:8b41:120:30e8:96da:2de2:a747) Quit (Quit: Ex-Chat)
[13:02] * hkraal_ (~hkraal@vps05.rootdomains.eu) has joined #ceph
[13:03] * Qiasfah (~Thononain@6YRAABFD6.tor-irc.dnsbl.oftc.net) Quit ()
[13:12] * karnan (~karnan@106.206.156.68) Quit (Ping timeout: 480 seconds)
[13:12] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c985:bab3:55c1:eca7) Quit (Ping timeout: 480 seconds)
[13:13] * Kupo1 (~tyler.wil@23.111.254.159) Quit (Read error: Connection reset by peer)
[13:13] * bandrus (~brian@dhcp57-205.laptop-urz.uni-heidelberg.de) has joined #ceph
[13:14] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[13:14] * bene2 (~bene@2601:18c:8300:f3ae:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:20] * karnan (~karnan@106.51.240.17) has joined #ceph
[13:28] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[13:28] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[13:30] * Wielebny (~Icedove@wr1.contium.pl) Quit (Remote host closed the connection)
[13:30] * Wielebny (~Icedove@wr1.contium.pl) has joined #ceph
[13:35] * EinstCra_ (~EinstCraz@218.69.65.22) has joined #ceph
[13:38] * thomnico (~thomnico@2a01:e35:8b41:120:30e8:96da:2de2:a747) has joined #ceph
[13:40] * i_m (~ivan.miro@deibp9eh1--blueice1n1.emea.ibm.com) has joined #ceph
[13:40] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:41] * swami4 (~swami@49.44.57.245) Quit (Quit: Leaving.)
[13:41] * EinstCrazy (~EinstCraz@117.13.201.130) Quit (Ping timeout: 480 seconds)
[13:42] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (Quit: ZNC - http://znc.in)
[13:45] * Icey (~Icey@0001bbad.user.oftc.net) Quit (Remote host closed the connection)
[13:46] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[13:46] * Icey (~Icey@pool-74-109-7-163.phlapa.fios.verizon.net) has joined #ceph
[13:46] * mhuang_ (~mhuang@119.254.120.72) Quit (Ping timeout: 480 seconds)
[13:47] * tenshi (~David@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[13:47] <tenshi> Hi everyone.
[13:48] * bandrus (~brian@dhcp57-205.laptop-urz.uni-heidelberg.de) Quit (Ping timeout: 480 seconds)
[13:49] * Exchizz (~kvirc@stud-141.sdu.dk) has joined #ceph
[13:50] * nardial (~ls@dslb-178-006-188-146.178.006.pools.vodafone-ip.de) has joined #ceph
[13:54] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:58] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[14:01] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c985:bab3:55c1:eca7) has joined #ceph
[14:03] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) Quit (Quit: Leaving.)
[14:05] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:05] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[14:06] * danieagle (~Daniel@201-26-27-10.dsl.telesp.net.br) has joined #ceph
[14:08] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[14:14] * georgem (~Adium@75-119-226-89.dsl.teksavvy.com) has joined #ceph
[14:16] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[14:16] * georgem (~Adium@75-119-226-89.dsl.teksavvy.com) Quit ()
[14:19] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[14:30] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) has joined #ceph
[14:31] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[14:31] * ChanServ sets mode +o nhm
[14:37] * ram_ (~oftc-webi@static-202-65-140-146.pol.net.in) Quit (Quit: Page closed)
[14:37] * ram_ (~oftc-webi@static-202-65-140-146.pol.net.in) has joined #ceph
[14:38] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[14:44] * allaok (~allaok@machine107.orange-labs.com) Quit (Remote host closed the connection)
[14:46] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[14:48] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:48] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[14:48] * C2J (~c2j@116.231.213.111) Quit (Remote host closed the connection)
[14:53] * MannerMan (~oscar@user170.217-10-117.netatonce.net) has joined #ceph
[14:56] * bandrus (~brian@dhcp557-069.laptop-wlc.uni-heidelberg.de) has joined #ceph
[14:57] * fcape (~fcape@107-220-57-73.lightspeed.austtx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[14:57] * shaunm (~shaunm@208.102.161.229) has joined #ceph
[14:59] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) has joined #ceph
[15:01] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[15:01] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:01] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[15:02] * wyang (~wyang@h88-150-237-2.host.redstation.co.uk) Quit (Remote host closed the connection)
[15:02] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[15:08] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[15:13] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[15:17] <Anticimex> ceph monitors' memory usage scales with ... what? number of versions of $foo-maps to keep around, number of hosts, number of OSDs, number of PGs ?
[15:17] * Anticimex trying to ascertain whether actual object-counts in pgs affect the monitors' memory usage or not
[15:17] <Anticimex> for the purpose of trying to size RAM for monitors
[15:21] * karnan (~karnan@106.51.240.17) Quit (Ping timeout: 480 seconds)
[15:23] * gucki (~smuxi@mx01.lifecodexx.com) Quit (Read error: Connection reset by peer)
[15:26] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:26] * alrick (~alrick@46.39.53.53) has joined #ceph
[15:31] * karnan (~karnan@106.206.146.188) has joined #ceph
[15:31] * karnan (~karnan@106.206.146.188) Quit (Remote host closed the connection)
[15:32] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) Quit (Remote host closed the connection)
[15:37] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:37] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:40] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[15:42] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) has joined #ceph
[15:44] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[15:44] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[15:44] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[15:48] * C2J (~c2j@114.93.152.67) has joined #ceph
[15:49] * aquaturtle (~oftc-webi@65.182.109.4) has joined #ceph
[15:50] <aquaturtle> We have a single pg that requires manual scrubbing every few days, anyone ever experienced something similar?
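[Editor's note on the scrub question above: manually kicking a scrub on a single PG is done with the `ceph pg scrub` family of subcommands. A minimal sketch; the pgid "2.1f" is a placeholder, and the command is printed rather than executed so the snippet runs anywhere.]

```shell
# Real ceph CLI subcommands in the Firefly era:
#   ceph pg scrub <pgid>        # light scrub (metadata/size consistency)
#   ceph pg deep-scrub <pgid>   # deep scrub (reads and checksums object data)
#   ceph pg repair <pgid>       # attempt repair if a scrub finds inconsistency
pgid="2.1f"                     # placeholder PG id
echo "ceph pg deep-scrub $pgid" # printed for illustration; run it on a real node
```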
[15:51] * bara (~bara@213.175.37.10) Quit (Ping timeout: 480 seconds)
[15:52] * C2J (~c2j@114.93.152.67) Quit (Read error: No route to host)
[15:52] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[15:53] * C2J (~c2j@114.93.152.67) has joined #ceph
[15:53] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[15:55] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:56] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[15:58] * bandrus (~brian@dhcp557-069.laptop-wlc.uni-heidelberg.de) Quit (Ping timeout: 480 seconds)
[15:58] * ngoswami (~ngoswami@121.244.87.115) has joined #ceph
[15:59] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[16:01] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[16:03] * bara (~bara@213.175.37.12) has joined #ceph
[16:03] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[16:03] * Exchizz (~kvirc@stud-141.sdu.dk) Quit (Ping timeout: 480 seconds)
[16:05] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[16:08] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[16:14] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[16:14] <analbeard> I wonder if anyone can point me in the right direction? we currently have a standalone swift storage platform with the various bits of infrastructure to support it, but it would be good to get the data stored in Ceph instead. We'd like to utilise the existing infrastructure for auth etc - can anyone point me in the direction of some relevant docs etc?
[16:14] <analbeard> I know of this - https://github.com/openstack/swift-ceph-backend
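[Editor's note on the Swift-on-Ceph question above: besides the swift-ceph-backend driver linked, the common route is radosgw's Swift-compatible API, which can authenticate against an existing Keystone service. A hedged ceph.conf sketch; the section name, URL, token, and roles below are placeholders, not a verified configuration:]

```ini
[client.radosgw.gateway]
rgw keystone url = http://keystone.example.com:35357
rgw keystone admin token = SECRET_TOKEN
rgw keystone accepted roles = admin, Member
```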
[16:15] * Exchizz (~kvirc@emp.sdu.dk) has joined #ceph
[16:16] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[16:17] * Icey (~Icey@0001bbad.user.oftc.net) Quit (Remote host closed the connection)
[16:18] * Icey (~Icey@pool-74-109-7-163.phlapa.fios.verizon.net) has joined #ceph
[16:20] * Icey (~Icey@0001bbad.user.oftc.net) Quit (Remote host closed the connection)
[16:20] * Icey (~Icey@pool-74-109-7-163.phlapa.fios.verizon.net) has joined #ceph
[16:21] * i_m (~ivan.miro@deibp9eh1--blueice1n1.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[16:22] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) has joined #ceph
[16:25] * C2J (~c2j@114.93.152.67) Quit (Remote host closed the connection)
[16:25] * Exchizz (~kvirc@emp.sdu.dk) Quit (Ping timeout: 480 seconds)
[16:26] * C2J (~c2j@114.93.152.67) has joined #ceph
[16:33] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) Quit (Remote host closed the connection)
[16:36] * moore (~moore@71-211-73-118.phnx.qwest.net) has joined #ceph
[16:38] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[16:40] * Exchizz (~kvirc@stud-141.sdu.dk) has joined #ceph
[16:43] * KaZeR (~KaZeR@64.201.252.132) has joined #ceph
[16:43] * kanagaraj (~kanagaraj@27.7.12.9) has joined #ceph
[16:43] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:43] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:44] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) has joined #ceph
[16:44] * swami1 (~swami@27.7.168.203) has joined #ceph
[16:45] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:48] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Remote host closed the connection)
[16:50] * bara (~bara@213.175.37.12) Quit (Remote host closed the connection)
[16:51] * b0e (~aledermue@213.95.25.82) has joined #ceph
[16:51] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:51] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:51] * bara (~bara@213.175.37.12) has joined #ceph
[16:52] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[16:53] * nardial (~ls@dslb-178-006-188-146.178.006.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[16:53] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit ()
[16:54] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:55] * jayjay (~jjs@2001:9e0:4:12:68d2:7caf:f839:8afd) Quit (Read error: Connection reset by peer)
[16:55] * nardial (~ls@dslb-178-006-188-146.178.006.pools.vodafone-ip.de) has joined #ceph
[17:03] * C2J (~c2j@114.93.152.67) Quit (Read error: Connection reset by peer)
[17:08] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[17:09] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) has joined #ceph
[17:11] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[17:12] * reed (~reed@75-101-54-18.dsl.static.fusionbroadband.com) has joined #ceph
[17:14] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:14] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[17:16] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[17:18] * yanzheng (~zhyan@182.139.205.155) has joined #ceph
[17:21] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[17:21] * ngoswami (~ngoswami@121.244.87.115) Quit (Ping timeout: 480 seconds)
[17:23] * jwilkins (~jowilkin@2601:644:4000:97c0::4a04) has joined #ceph
[17:27] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[17:27] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) Quit (Remote host closed the connection)
[17:28] * dgurtner (~dgurtner@178.197.231.21) Quit (Ping timeout: 480 seconds)
[17:31] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:34] * tenshi1 (~David@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[17:34] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[17:35] * yanzheng (~zhyan@182.139.205.155) Quit (Quit: This computer has gone to sleep)
[17:38] * tenshi2 (~David@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:39] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[17:40] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:40] * tenshi (~David@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[17:43] * nardial (~ls@dslb-178-006-188-146.178.006.pools.vodafone-ip.de) Quit (Quit: Leaving)
[17:44] * linjan__ (~linjan@176.195.225.178) has joined #ceph
[17:44] * tenshi1 (~David@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Ping timeout: 480 seconds)
[17:46] * dugravot6 (~dugravot6@4cy54-1-88-187-244-6.fbx.proxad.net) has joined #ceph
[17:46] * dugravot6 (~dugravot6@4cy54-1-88-187-244-6.fbx.proxad.net) Quit ()
[17:47] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:51] * linjan_ (~linjan@176.195.62.254) Quit (Ping timeout: 480 seconds)
[17:52] * derjohn_mobi (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[17:56] * enax (~enax@hq.ezit.hu) Quit (Ping timeout: 480 seconds)
[17:57] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:57] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[17:58] * samx (~Adium@12.206.204.58) has joined #ceph
[18:01] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[18:01] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[18:01] * moore (~moore@71-211-73-118.phnx.qwest.net) Quit (Remote host closed the connection)
[18:01] * qhartman (~qhartman@den.direwolfdigital.com) Quit (Quit: Ex-Chat)
[18:01] * moore (~moore@64.202.160.233) has joined #ceph
[18:03] * Concubidated (~Adium@pool-98-119-93-148.lsanca.fios.verizon.net) has joined #ceph
[18:04] <Heebie> Does anyone know if it's possible to switch from a single network, to separate cluster & client networks on-the-fly? How about switching from IPv4 to IPv6? (Mainly curiosity here.. not in production, so I can start over easily enough)
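[Editor's note on the network question above: in general the public/cluster split is declared in ceph.conf and picked up by OSDs as they restart, so it can usually be introduced on a running cluster one daemon at a time. The IPv4-to-IPv6 move is harder, since monitor addresses are recorded in the monmap and typically require regenerating it. A sketch; the subnets are placeholders:]

```ini
[global]
public network = 192.0.2.0/24      ; client-facing traffic
cluster network = 198.51.100.0/24  ; OSD replication/recovery traffic
```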
[18:08] * fractalirish (~fractal@skynet.skynet.ie) has joined #ceph
[18:09] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[18:09] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[18:09] * allaok (~allaok@machine107.orange-labs.com) Quit ()
[18:09] * rushworld (~osuka_@176.123.6.154) has joined #ceph
[18:10] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[18:15] <magicrobotmonkey> does infernalis change to run daemons as ceph user instead of root?
[18:16] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:18] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[18:19] * derjohn_mobi (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[18:20] * tsg (~tgohad@192.55.54.43) has joined #ceph
[18:23] * davidzlap (~Adium@2605:e000:1313:8003:a454:1532:ca30:d74b) has joined #ceph
[18:26] * davidz (~davidz@2605:e000:1313:8003:20f1:6dfa:24cd:5f85) Quit (Quit: Leaving.)
[18:26] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[18:27] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:27] <m0zes> yes
[18:28] <m0zes> it was in the release notes for upgrading.
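[Editor's note: the Infernalis release notes m0zes mentions do cover this; daemons run as user/group `ceph`, so data directories created by older root-run daemons must be re-owned while services are stopped. The real target is /var/lib/ceph; the demo below exercises the same recursive re-own on a throwaway directory so it is runnable anywhere.]

```shell
# On a real node (as root, ceph services stopped) the upgrade step is:
#   chown -R ceph:ceph /var/lib/ceph
# Demo: same operation against a scratch directory, re-owning to the
# current user as a stand-in for ceph:ceph.
demo=$(mktemp -d)
mkdir -p "$demo/osd/ceph-0"
chown -R "$(id -un):$(id -gn)" "$demo"
ls -d "$demo/osd/ceph-0"
rm -rf "$demo"
```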
[18:29] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[18:34] * pabluk is now known as pabluk_
[18:37] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:39] * rushworld (~osuka_@4Z9AABU1H.tor-irc.dnsbl.oftc.net) Quit ()
[18:39] * allaok (~allaok@machine107.orange-labs.com) Quit (Quit: Leaving.)
[18:41] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:41] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:44] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) has joined #ceph
[18:46] * dmick (~dmick@206.169.83.146) has joined #ceph
[18:46] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:46] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:48] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[18:49] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) has joined #ceph
[18:50] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:51] * linuxkidd_ (~linuxkidd@mobile-166-171-121-057.mycingular.net) has joined #ceph
[18:52] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:52] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[18:53] * botho (~botho@pD9F92AA7.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[18:55] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c985:bab3:55c1:eca7) Quit (Ping timeout: 480 seconds)
[18:57] * brutuscat (~brutuscat@196.Red-83-50-59.dynamicIP.rima-tde.net) Quit (Ping timeout: 480 seconds)
[18:58] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:59] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[18:59] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:03] * branto1 (~branto@ip-78-102-208-28.net.upcbroadband.cz) Quit (Quit: Leaving.)
[19:03] * analbeard (~shw@host86-140-202-165.range86-140.btcentralplus.com) has joined #ceph
[19:05] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:05] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:05] * swami1 (~swami@27.7.168.203) Quit (Ping timeout: 480 seconds)
[19:05] * swami1 (~swami@27.7.168.203) has joined #ceph
[19:10] * swami1 (~swami@27.7.168.203) Quit ()
[19:11] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[19:16] * swami1 (~swami@27.7.168.203) has joined #ceph
[19:17] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[19:25] * shakamunyi (~shakamuny@c-67-180-191-38.hsd1.ca.comcast.net) has joined #ceph
[19:25] * fedgoat (~theengine@45-31-177-36.lightspeed.austtx.sbcglobal.net) has joined #ceph
[19:28] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:28] <fedgoat> I have a large 3 node cluster with 2 main pools setup. Someone decided to not architect this properly for glance and cinder, and let the PG count per OSD go up to 800-1000 rather than 50-100. We are experiencing huge CPU loads on a cluster...Cisco UCS C240's 40 core machines, 66 x 1TB OSD's split between the nodes...locks up the entire cluster...anyone experience anything like this or have an easy way to recover? version
[19:28] <fedgoat> 0.80.7 Firefly
[19:28] * analbeard (~shw@host86-140-202-165.range86-140.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[19:29] * dmick (~dmick@206.169.83.146) has left #ceph
[19:29] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[19:30] <fedgoat> The OSD's will start throwing wrong node errors in the logs, and timing out. We suspected a possible network issue at first, but have ruled this out. I'm suspecting the overload of PGs per OSD is causing the issue.
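[editor's note] A rough sanity check of the PG load fedgoat describes, as a sketch; the pool sizes below are hypothetical, and the usual Firefly-era guidance is on the order of 100 PGs per OSD:

```shell
# Estimate PGs per OSD: each PG is stored on `replicas` OSDs, so the
# per-OSD load is total_pgs * replicas / num_osds.
pgs_per_osd() {
    local total_pgs=$1 replicas=$2 num_osds=$3
    echo $(( total_pgs * replicas / num_osds ))
}

# e.g. two pools of 8192 PGs each, 3 replicas, 66 OSDs:
pgs_per_osd $(( 8192 * 2 )) 3 66   # -> 744, far above the ~100 target
```

On a live cluster the inputs would come from `ceph osd pool get <pool> pg_num` and `ceph osd stat`.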
[19:31] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: It's just that easy)
[19:34] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[19:35] * xarses (~xarses@64.124.158.100) has joined #ceph
[19:42] * olid1111117 (~olid1982@aftr-185-17-204-188.dynamic.mnet-online.de) has joined #ceph
[19:42] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:42] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:43] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:44] * mykola (~Mikolaj@91.225.201.130) has joined #ceph
[19:46] * vata1 (~vata@207.96.182.162) Quit (Quit: Leaving.)
[19:52] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c985:bab3:55c1:eca7) has joined #ceph
[19:57] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[19:58] * osso (~osso@sgp01-1-78-233-150-179.fbx.proxad.net) Quit (Quit: osso)
[20:00] <lincolnb> what is the proper way to give a journal to an OSD if I'm manually deploying the disks? i've been just moving the journal file from an existing OSD to SSD and re-symlinking and forcing aio, but surely there must be a correct way to do this? :)
[20:01] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[20:02] * kanagaraj (~kanagaraj@27.7.12.9) Quit (Quit: Leaving)
[20:02] * alrick (~alrick@46.39.53.53) Quit (Remote host closed the connection)
[20:02] * linjan__ (~linjan@176.195.225.178) Quit (Ping timeout: 480 seconds)
[20:02] * herrsergio (~herrsergi@200.77.224.239) has joined #ceph
[20:03] <herrsergio> hi
[20:03] * herrsergio is now known as Guest684
[20:04] <rkeene> lincolnb, 1. Stop the OSD; 2. Flush the journal; 3. Update the OSD entry in the ceph.conf to point the journal to the new location ([osd.X]\nosd journal = /dev/blahblah); 4. Initialize the new journal (ceph-osd .. --mkjournal); 5. Start the OSD again
[20:06] <lincolnb> rkeene: thanks. wasn't sure if it could consume raw block devices like that, good to know!
[20:06] <rkeene> It can, and I use it that way
[20:07] * tsg (~tgohad@192.55.54.43) Quit (Remote host closed the connection)
[20:07] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[20:07] <lincolnb> cool, much appreciated
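[editor's note] rkeene's five steps, as a dry-runnable sketch; the service stop/start commands vary by init system, the OSD id and device are examples, and setting RUN=echo prints the commands instead of executing them:

```shell
migrate_journal() {
    local id=$1 newdev=$2
    $RUN service ceph stop osd.$id         # 1. stop the OSD
    $RUN ceph-osd -i $id --flush-journal   # 2. flush the old journal
    # 3. point ceph.conf at the new journal device:
    #      [osd.$id]
    #      osd journal = $newdev
    $RUN ceph-osd -i $id --mkjournal       # 4. initialize the new journal
    $RUN service ceph start osd.$id        # 5. start the OSD again
}

# dry run:
# RUN=echo migrate_journal 12 /dev/sdg1
```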
[20:10] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[20:13] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[20:13] * samx1 (~Adium@12.206.204.58) has joined #ceph
[20:14] * swami1 (~swami@27.7.168.203) Quit (Quit: Leaving.)
[20:14] * ibravo (~ibravo@72.198.142.104) Quit ()
[20:15] * linuxkidd_ (~linuxkidd@mobile-166-171-121-057.mycingular.net) Quit (Quit: This computer has gone to sleep)
[20:15] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[20:17] * samx (~Adium@12.206.204.58) Quit (Ping timeout: 480 seconds)
[20:20] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) has joined #ceph
[20:20] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) Quit ()
[20:21] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[20:21] * Superdawg (~Superdawg@ec2-54-243-59-20.compute-1.amazonaws.com) has joined #ceph
[20:22] <Superdawg> Trying to understand some information about ceph. Is there a documented maximum capacity for a single OSD?
[20:22] <fedgoat> yes
[20:23] <Superdawg> Where could I find that number? Every search has come back without a real answer to that.
[20:23] <fedgoat> there are some calculations for PGs per OSD
[20:23] <fedgoat> placement groups calculation
[20:23] <fedgoat> http://docs.ceph.com/docs/master/rados/operations/placement-groups/
[20:23] <fedgoat> capacity? depends on the disk
[20:24] <fedgoat> or are you asking about disk size?
[20:24] <Superdawg> Yes, disk size.
[20:24] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[20:24] <fedgoat> we run 4TB disks in most of our clusters
[20:24] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:24] <Superdawg> If I were to provision an OSD-node with multiple OSDs from something like a Storage Array, I'm curious on what the max size *could* be.
[20:24] <T1> there is really no "largest size" for that
[20:25] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Remote host closed the connection)
[20:25] <Superdawg> Yeah. Docs seem to indicate the larger the better.
[20:25] <Superdawg> T1: That's what I'm finding. Which makes me more curious to the limit. :)
[20:25] <T1> but giving OSDs disks from a SAN is a bit wrong
[20:25] <Superdawg> Sure
[20:25] <T1> afaik there is no limit
[20:25] <Superdawg> But the question is more of a curiosity
[20:26] <Superdawg> I guess the limit could be how much RAM you have to support the OSDs
[20:26] <T1> there is a theoretical limit on the number of PGs per OSD, but since that is tied to everything from the number of replicas, the number of OSDs, the number of pools etc etc etc
[20:27] * linjan (~linjan@176.195.68.96) has joined #ceph
[20:28] <T1> the size of the data disk is mainly reflected in the OSD weight in the CRUSH map
[20:28] <T1> it tells CRUSH how much data to pour into the OSD compared to other OSDs
[20:28] <Superdawg> ahh ok
[20:28] <T1> a weight of 1 per TB of space is the usual way of doing things
[20:29] <T1> so if you start out with 4TB disks for your initial OSDs you give them a weight of 4.. some time later you add new OSDs with 6TB disks - and you give the new OSDs a weight of 6
[20:30] <T1> then CRUSH knows that the OSDs can keep more data
[20:30] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[20:30] <Superdawg> Assuming you can keep enough copies
[20:31] <T1> http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-bucket-types - scroll down to the box that begins with "Weighting Bucket Items"
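[editor's note] T1's weight-per-TB convention as a one-liner; a sketch, with the osd id and byte count made up:

```shell
# CRUSH weight = whole TB of raw capacity, per T1's convention
weight_for_bytes() { echo $(( $1 / 1000000000000 )); }

weight_for_bytes 4000786030592   # a "4 TB" disk -> 4

# applying it would touch the live cluster:
# ceph osd crush reweight osd.7 $(weight_for_bytes 4000786030592)
```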
[20:35] * ade_b (~abradshaw@dslb-188-102-055-215.188.102.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[20:35] * tsg (~tgohad@192.55.54.43) has joined #ceph
[20:35] <Superdawg> Thanks. I'm gonna do a bunch of reading.
[20:35] <T1> np
[20:35] <mfa298> Superdawg: I've tested with some raid0 md devices giving around 24TB, which was accepted. However, the generally preferred method is to have a 1:1 mapping of disks to OSDs
[20:35] * aquaturtle (~oftc-webi@65.182.109.4) Quit (Ping timeout: 480 seconds)
[20:36] <mfa298> you're likely to get the best performance and capacity that way (let ceph handle redundancy rather than the storage array)
[20:36] * daviddcc (~dcasier@LAubervilliers-656-1-16-160.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[20:37] <Superdawg> mfa298: Yup. All of my reading has reflected that same notion. But considering things like thin provisioning, I was curious about how larger capacities could be considered.
[20:37] <Superdawg> Thanks for all of the help guys
[20:37] <Superdawg> s/guys/everyone/
[20:38] <mfa298> carving out several OSDs from the same set of disks in a storage array and thin provisioning seems to be going against the design of ceph.
[20:40] <mfa298> At that point ceph may not truly know about the points of failure (i.e. what hosts might be impacted if part of the storage array goes down)
[20:40] <T1> it creates a whole new failure domain that ceph cannot handle
[20:40] <Superdawg> and a large one at that.
[20:40] <Superdawg> which you want to minimize.
[20:43] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c985:bab3:55c1:eca7) Quit (Ping timeout: 480 seconds)
[20:48] * samx (~Adium@12.206.204.58) has joined #ceph
[20:49] * mykola (~Mikolaj@91.225.201.130) Quit (Quit: away)
[20:54] * samx1 (~Adium@12.206.204.58) Quit (Ping timeout: 480 seconds)
[20:54] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[20:54] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:54] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:59] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[21:00] * Destreyf (~quassel@email.newagecomputers.info) has joined #ceph
[21:09] * Exchizz (~kvirc@stud-141.sdu.dk) Quit (Ping timeout: 480 seconds)
[21:13] * osso (~osso@sgp01-1-78-233-150-179.fbx.proxad.net) has joined #ceph
[21:17] * gucki (~smuxi@212-51-155-49.fiber7.init7.net) has joined #ceph
[21:19] * KaneK (~kane@cpe-172-88-240-14.socal.res.rr.com) Quit (Quit: KaneK)
[21:23] * Nicola-1980 (~Nicola-19@2-234-77-205.ip222.fastwebnet.it) Quit (Remote host closed the connection)
[21:23] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[21:26] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) Quit (Quit: leaving)
[21:27] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) has joined #ceph
[21:33] * Nicola-1980 (~Nicola-19@2-234-77-205.ip222.fastwebnet.it) has joined #ceph
[21:37] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:37] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[21:38] * Tumm (~Mraedis@77.247.178.122) has joined #ceph
[21:39] * davidzlap (~Adium@2605:e000:1313:8003:a454:1532:ca30:d74b) Quit (Ping timeout: 480 seconds)
[21:39] * davidzlap (~Adium@2605:e000:1313:8003:a454:1532:ca30:d74b) has joined #ceph
[21:39] <TMM> I use ceph rbd with my openstack setup and it's all working very nicely indeed! I was wondering though if anyone can think of a way to force the image properties in openstack
[21:39] <TMM> to allow TRIM on all instances
[21:40] <TMM> it seems kind of opt-in now
[21:42] <joshd> TMM: it's inherently dependent on the guest, since you need the fs in the guest to tell the disk which parts are ok to trim
[21:43] <TMM> joshd, I understand that, but the vast majority of the guests are modern linux versions.
[21:43] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[21:43] <TMM> joshd, we are providing our users with packer scripts to help them build their own images but they can still upload whatever too
[21:44] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:44] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:44] <TMM> and packer can't force image properties either
[21:45] <joshd> there may be a way to change nova's defaults to expose discard
[21:45] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:45] <TMM> joshd, only if you use virtio-scsi, the default is virtio :-/
[21:45] * davidzlap (~Adium@2605:e000:1313:8003:a454:1532:ca30:d74b) Quit (Quit: Leaving.)
[21:46] <TMM> I checked the code of the nova-compute libvirt driver and it will only switch to virtio-scsi if the image metadata has the right keys
[21:46] * wogri_ (~wolf@nix.wogri.at) Quit (Quit: Lost terminal)
[21:46] * wogri (~wolf@nix.wogri.at) has joined #ceph
[21:47] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[21:48] <joshd> yeah, seems the default bus is hardcoded
[21:49] <joshd> blockinfo.py is the main (maybe only) place to change that
[21:49] <TMM> Hmm, I wonder if I should make a local fork of nova or of glance
[21:49] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[21:50] <joshd> the nova change is simpler imo
[21:50] * moore (~moore@64.202.160.233) Quit (Remote host closed the connection)
[21:50] <TMM> yeah, I suppose so
[21:51] <TMM> I guess I could just make it still follow properties
[21:51] <TMM> just different defaults
[21:52] <TMM> virtio-scsi still works just fine with windows hosts too
[21:52] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[21:52] <TMM> and if someone wants to boot dos, I'll just tell them to go fuck themselves ;)
[21:54] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[21:54] <zao> TMM: OS/2.
[21:54] <joshd> there is hw_disk_discard=unmap for libvirt
[21:55] <TMM> yes
[21:55] <TMM> but it will still only function on virtio-scsi
[21:55] <TMM> I tried :-/
[21:55] <joshd> so if you change nova to use virtio-scsi, seems you wouldn't need the glance property anymore
[21:56] <TMM> yeah, but if someone insists on changing it I think they should still be able to
[21:56] <TMM> almost all my users are lazy
[21:56] <TMM> but some of them know what they are doing :)
[21:57] <joshd> fair enough
[21:57] <TMM> ok, I'll just patch nova then
[21:58] <TMM> thank you
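[editor's note] For reference, the opt-in path TMM and joshd are discussing: per-image glance properties switch a guest to virtio-scsi, and hw_disk_discard=unmap on the nova side exposes discard. This is a sketch of that opt-in, not the nova default-change TMM plans; the image name is hypothetical:

```shell
# glance properties that make nova pick virtio-scsi for a guest's disks
SCSI_PROPS="--property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi"

# glance image-update $SCSI_PROPS my-trim-capable-image
#
# and on the compute nodes, nova.conf:
#   [libvirt]
#   hw_disk_discard = unmap
```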
[21:59] <TMM> zao, I think I'll tell them the same thing :P
[22:01] * moore (~moore@71-211-73-118.phnx.qwest.net) has joined #ceph
[22:02] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:02] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:03] * vbellur (~vijay@c-71-234-227-202.hsd1.ma.comcast.net) has joined #ceph
[22:04] <TMM> rbd + openstack really is a great combination
[22:04] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) Quit (Quit: leaving)
[22:05] <joshd> thanks!
[22:08] * moore (~moore@71-211-73-118.phnx.qwest.net) Quit (Remote host closed the connection)
[22:08] * Tumm (~Mraedis@77.247.178.122) Quit ()
[22:08] * moore (~moore@64.202.160.233) has joined #ceph
[22:11] * vbellur (~vijay@c-71-234-227-202.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[22:15] <TMM> joshd, I'm running a cluster of 30 hypervisors now in 2 availability zones, 8 1TB SSDs in all hypervisors. The ceph cluster spans all the hypervisors.
[22:15] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[22:15] <TMM> joshd, I use systemd cgroups to ensure enough resources exist for ceph and the vms
[22:16] <TMM> joshd, and ngroups to ensure that there's bandwidth
[22:16] <TMM> it at least benchmarks really well
[22:16] <TMM> it's all connected with two 10Gb links in a simple bond
[22:17] * ibravo (~ibravo@72.198.142.104) Quit (Quit: This computer has gone to sleep)
[22:18] * Exchizz (~kvirc@0x3ec6d532.inet.dsl.telianet.dk) has joined #ceph
[22:18] <TMM> I should maybe write something up about it, I think this is a pretty decent solution
[22:20] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[22:21] * fcape (~fcape@rrcs-97-77-228-30.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[22:21] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:21] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[22:23] * ibravo (~ibravo@72.198.142.104) Quit ()
[22:24] * ibravo (~ibravo@72.198.142.104) has joined #ceph
[22:25] * samx1 (~Adium@12.206.204.58) has joined #ceph
[22:25] <TMM> joshd, do you think there's anything in particular I should take care of in that setup?
[22:28] * KaneK (~kane@12.206.204.58) has joined #ceph
[22:30] * samx2 (~Adium@12.206.204.58) has joined #ceph
[22:30] * samx1 (~Adium@12.206.204.58) Quit (Read error: Connection reset by peer)
[22:31] * samx (~Adium@12.206.204.58) Quit (Ping timeout: 480 seconds)
[22:32] <TMM> Also: I'm running my cluster on hammer now, would I gain anything from upgrading to infernalis?
[22:32] * vbellur (~vijay@c-71-234-227-202.hsd1.ma.comcast.net) has joined #ceph
[22:34] * osso (~osso@sgp01-1-78-233-150-179.fbx.proxad.net) Quit (Quit: osso)
[22:35] <T1> easier upgrade to Jewel once it's released?
[22:35] * bene2 (~bene@2601:18c:8300:f3ae:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[22:36] <TMM> T1, apart from that :P
[22:36] <joshd> TMM: infernalis should get better perf from ssds
[22:37] <joshd> upgrading to jewel should be just as easy
[22:37] <T1> I was soo close to installing infernalis when setting up a new cluster, but since hammer is an LTS release, and systemd support was a bit newish, I decided to keep what is known to be stable and working
[22:37] <TMM> The cache tier migrations look interesting to me
[22:38] <TMM> I run a cache tier on top of an erasure coded pool for rbd
[22:39] <TMM> it's all on the same ssds
[22:39] <joshd> TMM: sounds like an interesting setup. I'm sure folks would be interested in a write up
[22:40] <joshd> I'd be careful about monitoring the osds and vm resources to make sure cgroups are working as expected
[22:41] <joshd> esp. for mem and cpu, when you're doing ec and using ssds
[22:41] <TMM> well, the way it's set up, it only really limits the vms
[22:44] * thomnico (~thomnico@2a01:e35:8b41:120:30e8:96da:2de2:a747) Quit (Quit: Ex-Chat)
[22:48] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[22:49] * arsenaali (~Blueraven@4Z9AABVDI.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:49] * gucki (~smuxi@212-51-155-49.fiber7.init7.net) Quit (Ping timeout: 480 seconds)
[22:50] * fcape (~fcape@107-220-57-73.lightspeed.austtx.sbcglobal.net) has joined #ceph
[22:51] * KaneK (~kane@12.206.204.58) Quit (Quit: KaneK)
[22:51] * KaneK (~kane@12.206.204.58) has joined #ceph
[22:52] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c985:bab3:55c1:eca7) has joined #ceph
[22:54] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) has joined #ceph
[22:58] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[23:01] * allaok (~allaok@ARennes-658-1-83-253.w90-32.abo.wanadoo.fr) has joined #ceph
[23:04] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:a10e:6b06:2cae:82f1) Quit (Quit: Leaving.)
[23:05] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[23:08] * Infected_ (infected@peon.lantrek.fi) has joined #ceph
[23:08] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:08] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[23:10] * Infected (infected@peon.lantrek.fi) Quit (Ping timeout: 480 seconds)
[23:10] * samx (~Adium@12.206.204.58) has joined #ceph
[23:10] * samx2 (~Adium@12.206.204.58) Quit (Read error: Connection reset by peer)
[23:14] <loicd> ktdreyer: do you know what command I should run to get the output of the udev action ?
[23:15] <TMM> loicd, for disk activation? They are just regular units; you should be able to find them in the normal journalctl output
[23:15] * davidzlap (~Adium@2605:e000:1313:8003:9576:217c:26b8:8622) has joined #ceph
[23:15] <loicd> ktdreyer: there apparently is an error but I have no clue where the output was stored on rhel 7.2
[23:15] <loicd> TMM: what's the unit of a udev action ?
[23:16] <TMM> loicd, sec, vpnning into my prod system
[23:16] <loicd> TMM: sorry for the dumb question, I'm still unfamiliar with udev / systemctl integration
[23:16] <TMM> it's fine
[23:18] <loicd> journalctl --unit=systemd-udevd ?
[23:18] <loicd> shows
[23:18] <loicd> Dec 08 12:59:51 magna031 python[32704]: detected unhandled Python exception in '/usr/sbin/ceph-disk'
[23:18] <loicd> TMM: stdout is somewhere...
[23:18] <TMM> loicd, something like this: ceph-disk-activate@-dev-sda1.service
[23:19] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:c985:bab3:55c1:eca7) Quit (Ping timeout: 480 seconds)
[23:19] <TMM> there's also ceph-osd@200.service
[23:19] <loicd> ah
[23:19] <TMM> you can get stdout with systemctl status <unit name>
[23:19] <loicd> where can I get a list of those ?
[23:19] * arsenaali (~Blueraven@4Z9AABVDI.tor-irc.dnsbl.oftc.net) Quit ()
[23:19] <TMM> systemctl | grep osd
[23:19] <loicd> I mean the list of things like ceph-disk-activate@-dev-sda1.service
[23:19] * loicd trying
[23:20] <TMM> loicd, this is my output http://pastebin.com/tsnwe9Ky
[23:21] <TMM> this may be more useful: http://pastebin.com/raw.php?i=tsnwe9Ky
[23:21] * loicd looking
[23:21] <loicd> I don't have any of that
[23:21] <TMM> what version of ceph did you install on what OS?
[23:22] <loicd> the thing is the first udev event just fails to run ceph-disk, therefore nothing related to udev activation happens
[23:22] <loicd> 0.94.3 on rhel 7.2
[23:23] <TMM> ah, that is hammer
[23:23] <TMM> hammer doesn't come with those units
[23:23] <TMM> it's all sysv-init
[23:23] <loicd> TMM: I think I need to see the output of the udev action and I'm pretty sure that it's not associated to ceph except by the disk uuid at this point
[23:24] <TMM> I used the infernalis units for my hammer install
[23:24] <loicd> ah
[23:24] <loicd> adventurous I see :-)
[23:24] <TMM> Just really done with sysv :P
[23:24] <TMM> let me check the original udev rules, sec
[23:25] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[23:25] <loicd> I'm not sure why it's so incredibly difficult to get the stdout/stderr of a udev action
[23:25] <TMM> ok, so it just runs ceph-disk activate from udev
[23:26] <TMM> you should be able to find the output from, I think, journalctl -u udev
[23:26] <TMM> err
[23:26] <TMM> journalctl -u systemd-udevd
[23:27] <TMM> you may also be able to find it by just running journalctl
[23:27] <TMM> and searching :)
[23:28] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[23:28] <joshd> loicd: last time I had to do this iirc I needed to enable debug level logging for udev
[23:29] <loicd> joshd: ah, that rings a bell, thanks
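[editor's note] joshd's debug-logging route, sketched as a dry-runnable helper; the device name is an example, and RUN=echo prints the commands instead of executing them:

```shell
debug_udev_event() {
    local dev=$1
    $RUN udevadm control --log-priority=debug            # verbose udevd logging
    $RUN udevadm trigger --action=add --sysname-match=$dev  # re-fire the event
    $RUN journalctl -u systemd-udevd                     # output lands in the journal
    $RUN udevadm control --log-priority=info             # back to normal
}

# dry run:
# RUN=echo debug_udev_event sdb1
```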
[23:29] <TMM> sorry, I haven't run ceph without systemd myself
[23:29] <TMM> I found the integration between udev and sysv init scripts to be intolerable
[23:32] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[23:32] <loicd> TMM: right: no integration ;-)
[23:33] <TMM> yeah, systemd doesn't really know about the osds
[23:33] <TMM> makes it hard to monitor
[23:33] * jclm (~jclm@ip68-108-16-17.lv.lv.cox.net) has joined #ceph
[23:33] <TMM> well, apart from a process list
[23:34] * jclm (~jclm@ip68-108-16-17.lv.lv.cox.net) Quit ()
[23:37] <Anticimex> this got better in infernalis
[23:37] <Anticimex> iirc
[23:37] <TMM> according to the changelog it did
[23:38] <TMM> well, I use the infernalis units for hammer and it all seems to work pretty decetly
[23:38] <TMM> decently*
[23:39] * jordanP (~jordan@bdv75-2-81-57-250-57.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[23:40] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[23:43] <loicd> TMM: spoiler alert : https://bugzilla.redhat.com/show_bug.cgi?id=1270019#c13
[23:43] * jclm (~jclm@ip68-108-16-17.lv.lv.cox.net) has joined #ceph
[23:43] * ibravo (~ibravo@72.198.142.104) Quit (Quit: Leaving)
[23:47] * jwilkins (~jowilkin@2601:644:4000:97c0::4a04) Quit (Quit: Leaving)
[23:48] <Anticimex> loicd: i'm missing cluster name agnosticism in ceph-disk (FWIW)
[23:48] <loicd> Anticimex: how do you mean ?
[23:49] <Anticimex> it, last i checked (august), assumes cluster name = "ceph"
[23:49] <loicd> ah
[23:49] <loicd> it's not supposed to; I kind of remember there was a bug report about it
[23:49] <Anticimex> it was only missing the configurability in two or three places when i checked
[23:49] <loicd> and possibly a pull request
[23:50] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[23:50] <loicd> Anticimex: do you know if there is an issue about that ?
[23:50] <Anticimex> i dont recall :]
[23:50] <Anticimex> sorry!
[23:50] <TMM> loicd, ah, looks like I should run abrt then :)
[23:51] <Anticimex> loicd: i'll return, in a few days or week i will start puppetizing the osd-making on our clusters, i guess
[23:51] <Anticimex> and ugh, openstack's puppet-ceph doesn't do infernalis either
[23:51] <TMM> Anticimex, I have made my own puppet module for ceph, I've decided against trying to actually build the cluster in puppet though
[23:51] * shawniverson (~shawniver@208.38.236.8) has joined #ceph
[23:51] <Anticimex> i started patching it but only covered mon so far, couldn't really figure out the launchpad structure for puppet-ceph even, let alone "who's in charge"
[23:51] <Anticimex> TMM: yeah, i'm a bit undecided
[23:52] <TMM> Anticimex, basically I have the osd disks in hiera but I only add them once I have the monitors up
[23:52] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[23:52] <TMM> let me just stick it up real quick, it may offer some useful ideas
[23:52] <Anticimex> but i think it can work to prepare disks that should be prepared, via some 4-5 completely different tests including local state (some "pidfile" or similar)
[23:52] <Anticimex> i can make available devices for the cluster
[23:52] <Anticimex> but not 'in' them, as per cern's issues
[23:52] <loicd> the problem with puppet-ceph (as with all puppet modules really) is integration testing. the tooling for that is not adequate and it blocks many things
[23:53] <loicd> hopefully it will make its way to teuthology at some point
[23:53] <Anticimex> i'd say a big problem specifically in puppet-ceph is all frigging inline shell scripting
[23:53] <TMM> Anticimex, https://notabug.org/hp/vanbraam-ceph
[23:54] <Anticimex> TMM: thanks!! i'll come back to it.
[23:54] <TMM> in my case I've just decided to not solve the really hard problems, you really only should set up a cluster once anyway
[23:54] <Anticimex> now is bed time in CET, cya
[23:54] <loicd> Anticimex: i'm not sure the language in which it is coded would make a difference if integration tests are missing
[23:54] <TMM> adding OSDs etc is fully automatic though
[23:54] <Anticimex> loicd: nod, orthogonal issues
[23:54] <Anticimex> TMM: that's what i mean. i think it is a decidedly stupid idea to manage cluster itself from hiera->puppet
[23:54] <Anticimex> wrong tool
[23:55] <TMM> Anticimex, If you want to look at what hiera would look like, there are tests and the fixtures are here : https://notabug.org/hp/vanbraam-ceph/src/master/spec/fixtures/hieradata
[23:55] * georgem (~Adium@75-119-226-89.dsl.teksavvy.com) has joined #ceph
[23:56] <TMM> it has types for rbd on physical machines, distributing keys and and managing some parts of ceph.conf
[23:56] <Anticimex> i'm undecided on whether i can actually use ceph-disk though
[23:56] <Anticimex> due to disk/journal specialities
[23:56] <TMM> well, my module uses ceph-disk-prepare
[23:57] <TMM> https://notabug.org/hp/vanbraam-ceph/src/master/manifests/osd_disk.pp
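[editor's note] The ceph-disk calls TMM's module wraps, in dry-runnable form. `ceph-disk prepare` takes an optional journal device after the data device, which covers the usual SSD-journal layout Anticimex is after; the device names are examples, and RUN=echo prints the commands instead of executing them:

```shell
prepare_osd() {
    local data=$1 journal=$2
    $RUN ceph-disk prepare --cluster ceph $data $journal  # partition + mkfs + journal
    $RUN ceph-disk activate ${data}1                      # mount + register the OSD
}

# dry run:
# RUN=echo prepare_osd /dev/sdb /dev/nvme0n1p3
```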
[23:57] <lurbs> Anyone else getting a 403 on http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-trusty-x86_64-basic/ ?
[23:57] <Anticimex> i did read through ceph-disk-prepare from line 0 to EOF 3 times in august
[23:57] <loicd> on the bright side jewel has ceph-disk destroy / deactivate which will remove half the complexity of the puppet-ceph integration tests
[23:57] <TMM> I only have unit tests :-/
[23:58] <Anticimex> lurbs:
[23:58] <Anticimex> 10:29:10 < xavpaice> is there a known issue with http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-trusty-x86_64-basic ?
[23:58] <Anticimex> 10:29:30 < xavpaice> I get a 403 when trying to connect, and that means I can't install Rados Gateway
[23:58] <Anticimex> 10:42:46 < jcsp> xavpaice: there are a bunch of servers getting moved around this week, so that'll probably be the cause
[23:58] <Anticimex> 10:42:51 < jcsp> the people handling that are in the US, so if you post to ceph-users you might get a response when they wake up
[23:58] <lurbs> Ha, xavpaice is who was asking me about it.
[23:58] <loicd> Anticimex: why couldn't you use ceph-disk ?
[23:58] <Anticimex> i wanted to put journal on a very special block device partition
[23:58] <Anticimex> as i recall
[23:59] <lurbs> Ta.
[23:59] <TMM> this module also contains my adaptations of the infernalis systemd units for hammer btw
[23:59] <Anticimex> i recall missing a bit of configurability there
[23:59] <Anticimex> but i'll have to hit bed now, will return to this
[23:59] <TMM> https://notabug.org/hp/vanbraam-ceph/src/master/files/units

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.