#ceph IRC Log

IRC Log for 2016-04-29

Timestamps are in GMT/BST.

[0:02] * DJComet (~Azru@76GAAEXCP.tor-irc.dnsbl.oftc.net) Quit ()
[0:02] * Kyso (~xENO_@tor00.telenet.unc.edu) has joined #ceph
[0:04] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) Quit (Ping timeout: 480 seconds)
[0:06] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[0:07] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[0:16] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[0:21] * Kurimus1 (~measter@93.115.95.201) has joined #ceph
[0:23] * fsimonce (~simon@host201-70-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:24] * vata1 (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:32] * csoukup (~csoukup@159.140.254.106) Quit (Ping timeout: 480 seconds)
[0:32] * Kyso (~xENO_@6AGAABGZI.tor-irc.dnsbl.oftc.net) Quit ()
[0:37] * cooey (~AotC@chomsky.torservers.net) has joined #ceph
[0:41] * noah (~noah@2601:647:cb00:95ef:c66e:1fff:fe16:4e79) has joined #ceph
[0:47] * badone (~badone@66.187.239.16) Quit (Quit: k?thxbyebyenow)
[0:50] <noah> i'm having some difficulty getting osd debug output. i've got a standard ceph-deploy jewel install. i've cranked up debug osd,ms,monc,etc... = 20 and verified with `ceph daemon config show`, but /var/log/ceph/*.log files are not getting any output. what might i be missing here?
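
For context, the debug levels noah describes can be set in ceph.conf or injected at runtime through the admin socket; a minimal sketch (the daemon id and subsystems are illustrative):

    # ceph.conf, [osd] section
    debug osd = 20
    debug ms = 1
    # or at runtime, on the host running the daemon
    ceph daemon osd.0 config set debug_osd 20
    ceph daemon osd.0 config show | grep debug_osd
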
[0:51] * Kurimus1 (~measter@6AGAABGZ8.tor-irc.dnsbl.oftc.net) Quit ()
[0:51] * EdGruberman (~Rosenblut@176.10.99.207) has joined #ceph
[1:01] * i_m1 (~ivan.miro@31.173.101.54) has joined #ceph
[1:04] <noah> hmm.. probably something i screwed up: open("/var/log/ceph/ceph-osd.0.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = -1 EACCES (Permission denied)
[1:04] * i_m (~ivan.miro@31.173.100.48) Quit (Ping timeout: 480 seconds)
[1:07] * cooey (~AotC@4MJAAEI33.tor-irc.dnsbl.oftc.net) Quit ()
[1:07] * hgjhgjh (~Esge@109.201.133.100) has joined #ceph
[1:09] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[1:10] * dgurtner (~dgurtner@cpe-70-112-224-144.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:10] * rendar (~I@host41-48-dynamic.1-87-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:14] * i_m (~ivan.miro@83.149.37.241) has joined #ceph
[1:14] * i_m1 (~ivan.miro@31.173.101.54) Quit (Read error: Connection reset by peer)
[1:21] * xolotl (~Qiasfah@djb.tor-exit.calyxinstitute.org) has joined #ceph
[1:25] * EdGruberman (~Rosenblut@06SAABUI3.tor-irc.dnsbl.oftc.net) Quit ()
[1:27] <diq> log file permissions?
[1:33] <noah> diq: what i found was that after running `ceph-deploy osd create` the ceph-osd log file has owner root:ceph and then the osd fails to open the log file. this doesn't happen for the monitor
[1:37] * hgjhgjh (~Esge@4MJAAEI46.tor-irc.dnsbl.oftc.net) Quit ()
[1:44] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[1:46] <noah> it looks like when ceph-disk invokes ceph-osd it doesn't use --setuser
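
A sketch of the workaround implied here, assuming osd.0 and the default log path; ceph-osd has accepted --setuser/--setgroup since Infernalis:

    # hand the root-owned log file back to the ceph user
    chown ceph:ceph /var/log/ceph/ceph-osd.0.log
    # start the daemon so it drops privileges itself
    ceph-osd -i 0 --setuser ceph --setgroup ceph
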
[1:46] * i_m (~ivan.miro@83.149.37.241) Quit (Read error: Connection reset by peer)
[1:48] * i_m (~ivan.miro@31.173.121.25) has joined #ceph
[1:51] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[1:51] * xolotl (~Qiasfah@76GAAEXEW.tor-irc.dnsbl.oftc.net) Quit ()
[1:51] * Grimhound (~Lattyware@5.254.102.185) has joined #ceph
[2:02] <diq> I actually opened a bug on that
[2:03] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[2:04] <diq> err wait that was another one
[2:05] * oms101 (~oms101@p20030057EA0D0600C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:07] * LeaChim (~LeaChim@host86-147-119-244.range86-147.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:07] * CoMa (~bret@freedom.ip-eend.nl) has joined #ceph
[2:10] * arthurh (~arthurh@38.101.34.128) has joined #ceph
[2:13] * oms101 (~oms101@p20030057EA0C1200C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:17] * i_m (~ivan.miro@31.173.121.25) Quit (Ping timeout: 480 seconds)
[2:20] * mattbenjamin1 (~mbenjamin@aa2.linuxbox.com) Quit (Quit: Leaving.)
[2:21] * ibravo (~ibravo@12.14.132.5) has joined #ceph
[2:21] * Grimhound (~Lattyware@6AGAABG2Q.tor-irc.dnsbl.oftc.net) Quit ()
[2:21] * geli (~geli@geli-2015.its.utas.edu.au) has joined #ceph
[2:25] * wushudoin (~wushudoin@38.78.4.5) Quit (Ping timeout: 480 seconds)
[2:27] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:31] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[2:37] * CoMa (~bret@76GAAEXFR.tor-irc.dnsbl.oftc.net) Quit ()
[2:56] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[2:56] * Kizzi1 (~SquallSee@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[3:03] * dyasny (~dyasny@cable-192.222.176.13.electronicbox.net) Quit (Ping timeout: 480 seconds)
[3:04] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[3:05] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:07] * brannmar (~SEBI@hessel3.torservers.net) has joined #ceph
[3:07] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) Quit (Ping timeout: 480 seconds)
[3:12] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:15] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[3:20] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[3:20] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) has joined #ceph
[3:20] <hellertime> when you run 'ceph osd tree' what does the REWEIGHT column mean?
[3:22] <noah> Looking through the docs quickly: "Two OSDs with the same weight will receive roughly the same number of I/O requests and store approximately the same amount of data. ceph osd reweight sets an override weight on the OSD. This value is in the range 0 to 1, and forces CRUSH to re-place (1-weight) of the data that would otherwise live on this drive."
[3:24] * csoukup (~csoukup@2605:a601:9c8:6b00:2cdb:f142:8c29:6cb5) has joined #ceph
[3:26] * Kizzi1 (~SquallSee@6AGAABG4U.tor-irc.dnsbl.oftc.net) Quit ()
[3:26] * ylmson (~galaxyAbs@torsrvs.snydernet.net) has joined #ceph
[3:26] <hellertime> oh ok. so if my OSDs are all homogeneous, reweight should == 1
[3:29] <noah> presumably, but i'm no expert :)
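
To make the quoted passage concrete: an override below 1 forces CRUSH to move that fraction of the OSD's data elsewhere. A small example, with the OSD id invented:

    ceph osd reweight 5 0.8   # re-place roughly 20% of osd.5's data
    ceph osd reweight 5 1.0   # back to no override
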
[3:32] * csoukup (~csoukup@2605:a601:9c8:6b00:2cdb:f142:8c29:6cb5) Quit (Ping timeout: 480 seconds)
[3:36] * zhaochao (~zhaochao@125.39.9.151) has joined #ceph
[3:37] * brannmar (~SEBI@06SAABUOG.tor-irc.dnsbl.oftc.net) Quit ()
[3:37] * Chrissi_ (~Sun7zu@93.115.95.216) has joined #ceph
[3:38] * dyasny (~dyasny@cable-192.222.176.13.electronicbox.net) has joined #ceph
[3:39] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[3:42] * Cybertinus (~Cybertinu@cybertinus.customer.cloud.nl) has joined #ceph
[3:44] * yanzheng (~zhyan@118.116.113.70) has joined #ceph
[3:46] * derjohn_mobi (~aj@x590e5c3f.dyn.telefonica.de) has joined #ceph
[3:48] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[3:53] * derjohn_mob (~aj@x590c0e5f.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:56] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[3:56] * ylmson (~galaxyAbs@7V7AAD8VH.tor-irc.dnsbl.oftc.net) Quit ()
[3:59] * huangjun|2 (~kvirc@113.57.168.154) has joined #ceph
[4:01] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[4:05] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[4:05] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[4:07] * Chrissi_ (~Sun7zu@6AGAABG51.tor-irc.dnsbl.oftc.net) Quit ()
[4:07] * Snowman (~Salamande@3.tor.exit.babylon.network) has joined #ceph
[4:07] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:08] * flisky (~Thunderbi@36.110.40.24) has joined #ceph
[4:16] * Meths (~meths@95.151.244.200) Quit (Ping timeout: 480 seconds)
[4:17] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[4:21] * MentalRay (~MentalRay@107.171.161.165) has joined #ceph
[4:23] * haomaiwang (~haomaiwan@rrcs-67-79-205-19.sw.biz.rr.com) has joined #ceph
[4:23] * haomaiwang (~haomaiwan@rrcs-67-79-205-19.sw.biz.rr.com) Quit (Remote host closed the connection)
[4:23] * haomaiwang (~haomaiwan@rrcs-67-79-205-19.sw.biz.rr.com) has joined #ceph
[4:23] * Meths (~meths@95.151.244.244) has joined #ceph
[4:26] * PuyoDead (~Scrin@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[4:29] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[4:30] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:32] <sfrode> with multitenancy for RGW merged, how do I get the swift protocol to play nicely with it? when i create a new "container" (horizon terminology) and make it public, it still ends up under the default namespace of <host>/swift/v1/<container>, meaning that there is no tenant separation in the url
[4:33] <sfrode> i'm on 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9) btw
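
For reference, RGW multitenancy in Jewel hangs off a tenant chosen at user-creation time, roughly per the multitenancy docs (the tenant and user names below are invented); whether the tenant then shows up in Swift URLs as /swift/v1/AUTH_<tenant>/<container> depends on the rgw_swift_account_in_url option being present and enabled in this build, which is an assumption:

    radosgw-admin --tenant acme --uid alice --display-name "Alice" \
        --subuser alice:swift --key-type swift --access full user create
    # ceph.conf, radosgw client section (assumption: option exists in 10.2.0)
    rgw swift account in url = true
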
[4:34] * kefu (~kefu@183.193.162.205) has joined #ceph
[4:37] * Snowman (~Salamande@7V7AAD8VU.tor-irc.dnsbl.oftc.net) Quit ()
[4:37] * Sketchfile (~oracular@192.42.115.101) has joined #ceph
[4:38] * ibravo (~ibravo@12.14.132.5) Quit (Quit: Leaving)
[4:39] * kefu_ (~kefu@114.92.122.74) has joined #ceph
[4:44] * i_m (~ivan.miro@31.173.121.143) has joined #ceph
[4:46] * kefu (~kefu@183.193.162.205) Quit (Ping timeout: 480 seconds)
[4:50] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[4:55] * i_m1 (~ivan.miro@31.173.120.59) has joined #ceph
[4:56] * PuyoDead (~Scrin@76GAAEXHQ.tor-irc.dnsbl.oftc.net) Quit ()
[4:56] * Deiz (~tokie@62.102.148.67) has joined #ceph
[4:59] * i_m (~ivan.miro@31.173.121.143) Quit (Ping timeout: 480 seconds)
[5:07] * Sketchfile (~oracular@6AGAABG7W.tor-irc.dnsbl.oftc.net) Quit ()
[5:07] * hassifa (~storage@4MJAAEJA0.tor-irc.dnsbl.oftc.net) has joined #ceph
[5:10] * vbellur (~vijay@71.234.224.255) has joined #ceph
[5:13] * noah (~noah@2601:647:cb00:95ef:c66e:1fff:fe16:4e79) Quit (Ping timeout: 480 seconds)
[5:26] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[5:26] * Deiz (~tokie@4MJAAEJAR.tor-irc.dnsbl.oftc.net) Quit ()
[5:26] * ZombieTree (~drdanick@anonymous.sec.nl) has joined #ceph
[5:29] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[5:33] * Vacuum_ (~Vacuum@88.130.213.185) has joined #ceph
[5:37] * hassifa (~storage@4MJAAEJA0.tor-irc.dnsbl.oftc.net) Quit ()
[5:37] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[5:39] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:40] * Vacuum__ (~Vacuum@88.130.196.204) Quit (Ping timeout: 480 seconds)
[5:41] * MentalRay (~MentalRay@107.171.161.165) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:44] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[5:44] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[5:51] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:56] * ZombieTree (~drdanick@06SAABUS4.tor-irc.dnsbl.oftc.net) Quit ()
[5:56] * _s1gma (~Rens2Sea@tor2.asmer.com.ua) has joined #ceph
[6:07] * FNugget (~Moriarty@vmd7136.contabo.host) has joined #ceph
[6:12] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[6:25] * linuxkidd (~linuxkidd@241.sub-70-210-192.myvzw.com) Quit (Remote host closed the connection)
[6:26] * _s1gma (~Rens2Sea@06SAABUT1.tor-irc.dnsbl.oftc.net) Quit ()
[6:26] * Drezil1 (~Maariu5_@06SAABUUX.tor-irc.dnsbl.oftc.net) has joined #ceph
[6:26] * shyu_ (~shyu@119.254.120.71) has joined #ceph
[6:26] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[6:29] * shyu_ (~shyu@119.254.120.71) Quit (Remote host closed the connection)
[6:37] * FNugget (~Moriarty@06SAABUUE.tor-irc.dnsbl.oftc.net) Quit ()
[6:37] * drupal1 (~Morde@6AGAABHA7.tor-irc.dnsbl.oftc.net) has joined #ceph
[6:40] * overclk (~quassel@121.244.87.117) has joined #ceph
[6:49] * flisky (~Thunderbi@36.110.40.24) Quit (Quit: flisky)
[6:56] * Drezil1 (~Maariu5_@06SAABUUX.tor-irc.dnsbl.oftc.net) Quit ()
[6:56] * cooey (~Sami345@93.174.90.30) has joined #ceph
[6:58] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[7:03] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[7:05] * wwdillingham (~LobsterRo@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wwdillingham)
[7:07] * drupal1 (~Morde@6AGAABHA7.tor-irc.dnsbl.oftc.net) Quit ()
[7:07] * toast (~galaxyAbs@dourneau.me) has joined #ceph
[7:14] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[7:20] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[7:21] * bvi (~bastiaan@185.56.32.1) has joined #ceph
[7:24] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:26] * cooey (~Sami345@4MJAAEJDE.tor-irc.dnsbl.oftc.net) Quit ()
[7:26] * EdGruberman (~Mattress@edwardsnowden2.torservers.net) has joined #ceph
[7:30] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[7:32] * IvanJobs (~hardes@103.50.11.146) Quit (Ping timeout: 480 seconds)
[7:33] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[7:34] * xarses (~xarses@rrcs-24-173-18-66.sw.biz.rr.com) has joined #ceph
[7:35] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[7:37] * flisky (~Thunderbi@36.110.40.27) has joined #ceph
[7:37] * toast (~galaxyAbs@4MJAAEJDM.tor-irc.dnsbl.oftc.net) Quit ()
[7:37] * Pettis (~Sophie@chomsky.torservers.net) has joined #ceph
[7:39] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:39] * natarej (~natarej@2001:8003:483a:a900:da8:6a6c:304e:e427) Quit (Read error: Connection reset by peer)
[7:39] * natarej (~natarej@2001:8003:483a:a900:da8:6a6c:304e:e427) has joined #ceph
[7:44] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[7:50] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[7:52] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[7:54] * Qu310 (~qnet@qu310.qnet.net.au) Quit (Read error: Connection reset by peer)
[7:55] * Qu310 (~qnet@qu310.qnet.net.au) has joined #ceph
[7:56] * EdGruberman (~Mattress@4MJAAEJD3.tor-irc.dnsbl.oftc.net) Quit ()
[7:56] * starcoder (~w2k@93.174.93.133) has joined #ceph
[7:56] * flisky (~Thunderbi@36.110.40.27) Quit (Read error: Connection reset by peer)
[7:58] * dyasny_ (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[7:58] * IvanJobs (~hardes@103.50.11.146) has joined #ceph
[8:02] * dyasny (~dyasny@cable-192.222.176.13.electronicbox.net) Quit (Ping timeout: 480 seconds)
[8:04] * b0e (~aledermue@p54AFE233.dip0.t-ipconnect.de) has joined #ceph
[8:07] * Pettis (~Sophie@6AGAABHC9.tor-irc.dnsbl.oftc.net) Quit ()
[8:08] * flisky (~Thunderbi@36.110.40.27) has joined #ceph
[8:15] * trociny (~mgolub@93.183.239.2) Quit (Quit: ??????????????)
[8:15] * trociny (~mgolub@93.183.239.2) has joined #ceph
[8:15] * garphy is now known as garphy`aw
[8:17] * yanzheng1 (~zhyan@118.116.113.70) has joined #ceph
[8:18] * yanzheng (~zhyan@118.116.113.70) Quit (Ping timeout: 480 seconds)
[8:21] * mohmultihouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[8:21] * karnan (~karnan@121.244.87.117) has joined #ceph
[8:22] * derjohn_mobi (~aj@x590e5c3f.dyn.telefonica.de) Quit (Remote host closed the connection)
[8:26] * starcoder (~w2k@6AGAABHDX.tor-irc.dnsbl.oftc.net) Quit ()
[8:27] * Concubidated (~cube@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:37] * verbalins (~spate@4MJAAEJE8.tor-irc.dnsbl.oftc.net) has joined #ceph
[8:39] * derjohn_mob (~aj@2001:6f8:1337:0:845a:fca:1e39:c10e) has joined #ceph
[9:04] * davidzlap (~Adium@2605:e000:1313:8003:483a:763a:64b:80a7) Quit (Quit: Leaving.)
[9:05] * krypto (~krypto@G68-90-105-92.sbcis.sbc.com) has joined #ceph
[9:07] * verbalins (~spate@4MJAAEJE8.tor-irc.dnsbl.oftc.net) Quit ()
[9:09] * shohn (~shohn@dslb-094-223-167-135.094.223.pools.vodafone-ip.de) has joined #ceph
[9:10] * fsimonce (~simon@host201-70-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:11] * b0e (~aledermue@p54AFE233.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[9:16] * krypto (~krypto@G68-90-105-92.sbcis.sbc.com) Quit (Read error: Connection reset by peer)
[9:16] * krypto (~krypto@103.252.26.90) has joined #ceph
[9:25] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:33] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[9:35] * Dinnerbone1 (~Popz@65.19.167.132) has joined #ceph
[9:41] * shohn1 (~shohn@dslb-094-223-167-135.094.223.pools.vodafone-ip.de) has joined #ceph
[9:48] * linjan__ (~linjan@176.195.69.163) has joined #ceph
[9:56] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:58] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[9:59] <sep> morning :) ; i am in the process of migrating my osd disks from 20 3tb to 7 12tb on each osd node. this means i get a lot fewer osd's than i used to... is there a way to merge pool pg's from 4096 into 2048, for instance ?
[10:00] <Be-El> sep: no, you cannot reduce the number of pgs
[10:02] * krypto (~krypto@103.252.26.90) Quit (Read error: Connection reset by peer)
[10:05] <sep> pity.. guess i just have to invest in a larger cluster :P
[10:05] * Dinnerbone1 (~Popz@06SAABU12.tor-irc.dnsbl.oftc.net) Quit ()
[10:05] * maku (~hgjhgjh@freeciv.nmte.ch) has joined #ceph
[10:05] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:05] * allaok (~allaok@machine107.orange-labs.com) Quit (Remote host closed the connection)
[10:06] <Be-El> more pgs give you a smoother data distribution; the downside is an increased memory requirement for the OSDs
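
Since, as Be-El says, PG counts can only grow and never shrink, sizing a new pool up front matters; the usual rule of thumb, worked through with assumed numbers:

    total PGs for a pool ~= (OSD count x 100) / replica count, rounded up to a power of two
    e.g. 42 OSDs at size 3:  42 x 100 / 3 = 1400  ->  2048 PGs
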
[10:06] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[10:07] <PoRNo-MoRoZ> TMM sup ?
[10:07] * PeterRabbit (~Lattyware@tor-exit.hermes.bendellar.com) has joined #ceph
[10:10] * rendar (~I@host146-39-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[10:10] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) has joined #ceph
[10:11] <TMM> PoRNo-MoRoZ, got the cluster mostly back, still one incomplete pg, I think I'm just going to force create it
[10:11] <TMM> and tell all my users I fucked up
[10:12] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[10:12] <PoRNo-MoRoZ> ><
[10:13] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:16] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:16] * nardial (~ls@dslb-178-001-225-172.178.001.pools.vodafone-ip.de) has joined #ceph
[10:19] * shyu_ (~shyu@119.254.120.71) has joined #ceph
[10:33] * pabluk__ is now known as pabluk_
[10:35] * maku (~hgjhgjh@4MJAAEJG8.tor-irc.dnsbl.oftc.net) Quit ()
[10:37] * PeterRabbit (~Lattyware@4MJAAEJHD.tor-irc.dnsbl.oftc.net) Quit ()
[10:37] * Defaultti1 (~matx@korematsu.tor-exit.calyxinstitute.org) has joined #ceph
[10:40] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:42] * LeaChim (~LeaChim@host86-147-119-244.range86-147.btcentralplus.com) has joined #ceph
[10:44] * yingying (~oftc-webi@114.247.245.138) has joined #ceph
[10:45] * yingying (~oftc-webi@114.247.245.138) Quit ()
[10:48] * brians__ (~brian@80.111.114.175) Quit (Max SendQ exceeded)
[10:48] * brians (~brian@80.111.114.175) has joined #ceph
[10:51] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Ping timeout: 480 seconds)
[10:55] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[10:55] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7875:fe8c:7399:b0c8) has joined #ceph
[10:58] <post-factum> PoRNo-MoRoZ: nice nick, dude
[10:59] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[11:02] * svenx (~Sven@rubin.ifi.uio.no) Quit (Remote host closed the connection)
[11:03] <Heebie> AvengerMoJo: Thanks for that.
[11:05] * Thayli (~notarima@tor-exit7-readme.dfri.se) has joined #ceph
[11:07] * Defaultti1 (~matx@76GAAEXOH.tor-irc.dnsbl.oftc.net) Quit ()
[11:07] * rapedex (~verbalins@readme.tor.camolist.com) has joined #ceph
[11:10] <PoRNo-MoRoZ> :DD
[11:10] <PoRNo-MoRoZ> post-factum it's really old
[11:10] <PoRNo-MoRoZ> when i was young and stupid
[11:11] <PoRNo-MoRoZ> just like i am now
[11:11] <PoRNo-MoRoZ> :D
[11:11] <post-factum> those russians
[11:11] <PoRNo-MoRoZ> :DD
[11:11] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[11:15] <PoRNo-MoRoZ> okay since last rebalance i got
[11:15] <PoRNo-MoRoZ> 512 pgs stuck unclean
[11:15] <PoRNo-MoRoZ> recovery 2/9079304 objects misplaced (0.000%)
[11:15] <PoRNo-MoRoZ> is that okay ? :D
[11:15] <PoRNo-MoRoZ> ah
[11:15] <PoRNo-MoRoZ> i removed osds from pool
[11:15] <PoRNo-MoRoZ> that probably caused that
[11:15] <PoRNo-MoRoZ> i should remove that pool
[11:15] <PoRNo-MoRoZ> cuz it's empty now
[11:17] <PoRNo-MoRoZ> yep
[11:17] <PoRNo-MoRoZ> 2048 active+clean
[11:17] <PoRNo-MoRoZ> it's okay now
[11:22] <smerz> your nick doesn't do you justice :-)
[11:23] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:24] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[11:24] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit ()
[11:27] * TMM (~hp@185.5.122.2) has joined #ceph
[11:34] <etienneme> :p
[11:35] * Thayli (~notarima@4MJAAEJIE.tor-irc.dnsbl.oftc.net) Quit ()
[11:37] * rapedex (~verbalins@06SAABU5H.tor-irc.dnsbl.oftc.net) Quit ()
[11:37] * kalmisto (~Sliker@exit1.ipredator.se) has joined #ceph
[11:38] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[11:39] * Kakeru (~CoZmicShR@dourneau.me) has joined #ceph
[11:42] * dlan (~dennis@116.228.88.131) has joined #ceph
[11:44] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:46] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit ()
[11:46] * allaok (~allaok@machine107.orange-labs.com) Quit (Remote host closed the connection)
[11:47] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:47] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[11:47] * allaok (~allaok@machine107.orange-labs.com) Quit (Remote host closed the connection)
[11:47] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit ()
[11:47] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:48] * allaok (~allaok@machine107.orange-labs.com) has joined #ceph
[11:48] * natarej (~natarej@2001:8003:483a:a900:da8:6a6c:304e:e427) Quit (Read error: Connection reset by peer)
[11:48] * natarej (~natarej@2001:8003:483a:a900:da8:6a6c:304e:e427) has joined #ceph
[11:52] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[11:54] * beat8 (~beat@46.250.135.1) Quit (Ping timeout: 480 seconds)
[11:54] * EinstCrazy (~EinstCraz@58.247.117.134) has joined #ceph
[11:59] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:02] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Quit: valeech)
[12:02] <TMM> I had to force_create a pg, but now it seems to be stuck in 'creating' state
[12:07] * kalmisto (~Sliker@4MJAAEJI2.tor-irc.dnsbl.oftc.net) Quit ()
[12:08] * EinstCrazy (~EinstCraz@58.247.117.134) Quit (Remote host closed the connection)
[12:09] * Kakeru (~CoZmicShR@76GAAEXPW.tor-irc.dnsbl.oftc.net) Quit ()
[12:09] * poller (~mLegion@93.115.95.202) has joined #ceph
[12:11] * MonkeyJamboree (~notmyname@tor-exit.gansta93.com) has joined #ceph
[12:18] * karnan (~karnan@121.244.87.117) has joined #ceph
[12:20] * nardial (~ls@dslb-178-001-225-172.178.001.pools.vodafone-ip.de) Quit (Quit: Leaving)
[12:21] <IvanJobs> I have made an S3 API doc for Ceph, just FYI: http://ivanjobs.github.io/2016/03/10/ceph-oss-api-doc-cn/
[12:22] <IvanJobs> btw, in Chinese.
[12:27] * treenerd_ (~Gerhard@85.193.140.98) Quit (Quit: leaving)
[12:30] * haomaiwang (~haomaiwan@rrcs-67-79-205-19.sw.biz.rr.com) Quit (Remote host closed the connection)
[12:32] * IvanJobs (~hardes@103.50.11.146) Quit (Quit: Leaving)
[12:32] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[12:35] * haomaiwang (~haomaiwan@rrcs-67-79-205-19.sw.biz.rr.com) has joined #ceph
[12:36] * haomaiwa_ (~haomaiwan@rrcs-67-79-205-19.sw.biz.rr.com) has joined #ceph
[12:36] * haomaiwang (~haomaiwan@rrcs-67-79-205-19.sw.biz.rr.com) Quit (Read error: Connection reset by peer)
[12:37] * huangjun|2 (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[12:37] * rdas (~rdas@121.244.87.116) has joined #ceph
[12:39] * poller (~mLegion@4MJAAEJJG.tor-irc.dnsbl.oftc.net) Quit ()
[12:39] * Bored (~pakman__@107.181.161.205) has joined #ceph
[12:40] <Walex> sep: "migrating my osd disks from 20 3tb to 7 12tb" that's interesting, you seem to have decided to get rid of around 75% of your overall IOPS, how do you intend to deal with that when rebalancing and doing other whole-cluster operations?
[12:41] * MonkeyJamboree (~notmyname@4MJAAEJJH.tor-irc.dnsbl.oftc.net) Quit ()
[12:44] * haomaiwa_ (~haomaiwan@rrcs-67-79-205-19.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[12:45] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[12:47] <PoRNo-MoRoZ> anyone on 4.5 kernel ?
[12:47] <TMM> Not with Ceph
[12:47] <PoRNo-MoRoZ> why not ?
[12:48] <TMM> No need for it yet it seems.
[12:48] <PoRNo-MoRoZ> ah
[12:48] <TMM> I just thought if you wanted to ask generic things about 4.5 I can maybe help, if you want to know specific things about 4.5 and ceph I cannot.
[12:48] <PoRNo-MoRoZ> nope
[12:48] <PoRNo-MoRoZ> i just wanna ask - is it stable ? :)
[12:49] <TMM> Works for me(r) :)
[12:49] <PoRNo-MoRoZ> okay :)
[12:49] <PoRNo-MoRoZ> thanks
[12:49] <TMM> I run it on my laptop as well as several instances inside my openstack without incident so far
[12:49] * shyu_ (~shyu@119.254.120.71) Quit (Ping timeout: 480 seconds)
[12:50] <PoRNo-MoRoZ> i'm currently updating some really old vm
[12:50] <PoRNo-MoRoZ> found 4.5 was just released in the backports repo
[12:50] <TMM> What does it mean if an OSD is marked "DNE" in 'ceph osd tree'?
[12:50] <TMM> It's an osd that was removed quite a while ago
[12:54] <sep> Walex, i need the space. but very low iops. and i have a problem with memory consumption. so moving from 20*3tb = 60 tb to 7*12tb (using software raid5) = 84TB hopefully will reduce memory consumption by osd's while giving more space. and of course sacrifice iops.
[12:55] <Walex> sep: as long as it is an archival system as you say, it seems sensible-ish.
[12:58] * _28_ria (~kvirc@opfr028.ru) Quit (Read error: Connection reset by peer)
[12:58] <Walex> sep: I am guessing 7*12tb => 7*(3*6TB)?
[12:58] <sep> 5 3tb in raid5
[13:00] <Walex> sep: ahhh, so you actually increased the drive count to 35 from 20, that (for reads at least) actually will overall *increase* total IOPS
[13:01] <Walex> sep: interesting, and BTW 5-drive RAID5 is one of the few cases where I like RAID5.
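
The capacity and spindle arithmetic behind this exchange, spelled out:

    per OSD:   5 x 3 TB in RAID5 -> (5 - 1) x 3 TB = 12 TB usable
    per node:  7 OSDs x 12 TB = 84 TB over 7 x 5 = 35 drives
    before:    20 OSDs x 3 TB = 60 TB over 20 drives
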
[13:02] <sep> yes the nodes have 36 slots. but i could not get more than 15-ish running without running into memory problems when osd's dropped or were added, due to the OOM killer
[13:02] <sep> labbing using old hardware... :/
[13:03] <Walex> sep: that's very useful to know, how many GB have you got on a node?
[13:03] <sep> but with 7 raid5 osd's i can probably even have memory for erasure coding.
[13:03] <sep> 32
[13:03] <sep> max the old mainboard can deal with
[13:03] <sep> and 6 nodes
[13:03] * zhaochao (~zhaochao@125.39.9.151) Quit (Quit: ChatZilla 0.9.92 [Firefox 45.1.0/20160426232238])
[13:03] <sep> + 3 mons
[13:03] <Walex> sep: uhmmm, I had hoped that OSDs could run reasonably in 1GB with some performance sacrifice.
[13:04] <Walex> sep: but yes, not for erasure coding.
[13:05] <Walex> I have mixed feelings about erasure coding: it seems to require a *lot* of extra network traffic, and I would think that is only affordable if one has a separate "Ceph" network independent of the client network.
[13:05] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:06] <Walex> sep: in case you don't go erasure coding, do you think that having RAID5 redundancy would allow you to switch to 2-way replication in Ceph?
[13:07] <Walex> been wondering about 2-way replication
[13:07] * haomaiwang (~haomaiwan@166.184.9.163) has joined #ceph
[13:07] <sep> notice the green committed line on the lower image ... http://imgur.com/a/s0dUX
[13:08] * kefu_ is now known as kefu|afk
[13:08] <sep> well... i am using 4 way replication at the moment. since i am tearing down 3tb osd's and adding 12tb osd's while the "lab" is in "production"
[13:08] <sep> also these old nodes have the infamous failacuda drives so drives fail almost faster than i can replace them
[13:09] <sep> but have not lost any data, so ceph's resiliency is almost mindboggling.
[13:09] <Walex> sep: uhhhh hadn't noticed the line is *above* :-). that's a hard working cluster indeed
[13:09] * Bored (~pakman__@6AGAABHMU.tor-irc.dnsbl.oftc.net) Quit ()
[13:10] <Walex> sep: yes, Ceph is one of those things (unlike say Juju or OpenStack) that seems to be simple and robust.
[13:10] <sep> hum, the file names were not visible.
[13:10] <sep> but the upper image is 7 12tb osds
[13:10] <sep> and the lower is 20 3tb osds
[13:10] <sep> my coworkers do not agree... but i am sloowwly turning them around
[13:11] <sep> they like things that have a vendor and a web interface, and someone to blame i guess.
[13:11] <sep> so i was hoping to get calamari installed so that might help on the web visual overview at least.
[13:11] <Walex> sep: ahhh, *that* type of sysadmin.
[13:11] * maku (~Tralin|Sl@tor-exit.stigatle.no) has joined #ceph
[13:12] <sep> am the lone black linux dude in a hard of windows admins :)
[13:12] <sep> herd...
[13:12] <sep> black sheep i mean
[13:12] <PoRNo-MoRoZ> i'm some kinda swiss knife
[13:12] <PoRNo-MoRoZ> windows users that sits mostly all time in linux console
[13:12] <PoRNo-MoRoZ> windows user that sits mostly all time in linux console
[13:12] <sep> <- potato...
[13:12] <PoRNo-MoRoZ> :D
[13:12] <PoRNo-MoRoZ> and i'm a tomato
[13:13] <post-factum> those russians!
[13:13] <PoRNo-MoRoZ> :DDD
[13:13] <Walex> sep: BTW I quite like that you have thought it through and sensibly.
[13:13] <Walex> sep: I have learned a few useful things about tradeoffs in your explanation.
[13:14] <sep> speaking of calamari. i notice there are packages for ubuntu, http://eu.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/c/calamari/ ; but i am running jessie. but i see nothing in the debian directories
[13:14] * Helleshin (~storage@edwardsnowden2.torservers.net) has joined #ceph
[13:14] <post-factum> PoRNo-MoRoZ: "romania"?
[13:14] <PoRNo-MoRoZ> you
[13:14] <PoRNo-MoRoZ> from ?
[13:14] <PoRNo-MoRoZ> that's irc notice :)
[13:14] <post-factum> Ukraine
[13:14] <PoRNo-MoRoZ> that was irc notice :)
[13:14] <PoRNo-MoRoZ> ah
[13:15] <PoRNo-MoRoZ> it detected me as romania :)
[13:15] <PoRNo-MoRoZ> whatever
[13:15] * haomaiwang (~haomaiwan@166.184.9.163) Quit (Ping timeout: 480 seconds)
[13:16] <sep> are there any debian packages? or is calamari part of the proprietary inktank software ?
[13:17] * dsl_ (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[13:21] * mohmultihhouse (~mohmultih@gw01.mhitp.dk) has joined #ceph
[13:22] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[13:24] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Ping timeout: 480 seconds)
[13:27] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[13:27] * flisky (~Thunderbi@36.110.40.27) Quit (Quit: flisky)
[13:28] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:30] * kefu|afk is now known as kefu_
[13:34] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[13:37] * kefu_ (~kefu@114.92.122.74) Quit (Max SendQ exceeded)
[13:38] * EinstCrazy (~EinstCraz@61.165.228.28) has joined #ceph
[13:39] * kefu (~kefu@114.92.122.74) has joined #ceph
[13:39] * SWAT (~swat@cyberdyneinc.xs4all.nl) Quit (Ping timeout: 480 seconds)
[13:41] * EinstCrazy (~EinstCraz@61.165.228.28) Quit (Remote host closed the connection)
[13:41] * maku (~Tralin|Sl@76GAAEXRI.tor-irc.dnsbl.oftc.net) Quit ()
[13:42] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[13:43] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) has joined #ceph
[13:44] * Helleshin (~storage@4MJAAEJKW.tor-irc.dnsbl.oftc.net) Quit ()
[13:44] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit ()
[13:44] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[13:45] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[13:48] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:48] * SWAT (~swat@cyberdyneinc.xs4all.nl) has joined #ceph
[13:51] * ibravo (~ibravo@12.252.7.226) has joined #ceph
[13:52] * ibravo (~ibravo@12.252.7.226) Quit ()
[13:59] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) has joined #ceph
[13:59] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) has left #ceph
[14:00] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) has joined #ceph
[14:00] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) Quit ()
[14:00] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) has joined #ceph
[14:00] * olqs (~olqs@cpe90-146-85-69.liwest.at) has joined #ceph
[14:01] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[14:01] <olqs> Hi, i have added two bluestore osds to my cluster but no pgs are moved from the existing xfs osds to the two new ones
[14:03] <olqs> is this an expected behaviour and i can't mix xfs and bluestore osds, or is the error in my config?
[14:05] <olqs> i have prepared the osds with "ceph-deploy disk prepare --bluestore host:/dev/name"
[14:06] * alexxy[home] (~alexxy@biod.pnpi.spb.ru) has joined #ceph
[14:07] * alexxy (~alexxy@biod.pnpi.spb.ru) Quit (Ping timeout: 480 seconds)
[14:07] <dvanders> you can mix xfs/bluestore... probably something in your config
[14:08] <dvanders> bluestore osds need this in ceph.conf to start
[14:08] <dvanders> enable experimental unrecoverable data corrupting features = bluestore rocksdb
[14:09] * SWAT (~swat@cyberdyneinc.xs4all.nl) Quit (Ping timeout: 480 seconds)
[14:09] <olqs> i have this line in my config, the 2 bluestore osds are up and running, but no data is moved
[14:10] <dvanders> probably a problem with the crush weight... the default crush weight for bluestore osds is tiny
[14:10] <Walex> olqs: check the crush map to be sure,
[14:10] <dvanders> use ceph osd crush reweight ...
[14:12] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) has joined #ceph
[14:12] <olqs> thanks, i hadn't looked at the crush map. all are 2tb osds, the xfs ones have a weight of 1.81, the bluestore ones 0.000092; after adjusting, a lot of pgs were remapped
[14:13] <TMM> is there anything in particular I need to do if I have an incomplete PG I'd like to recreate? I've tried just doing a force create but that doesn't seem to do anything
[14:13] <dvanders> yup, that's it. a fix is already planned for 10.2.1.
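
A sketch of the fix dvanders suggests, using the 1.81 weight olqs quotes for the 2tb xfs OSDs (the osd id is invented):

    # CRUSH weight is conventionally the device size in TiB
    ceph osd crush reweight osd.7 1.81
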
[14:14] * DougalJacobs (~Kakeru@nooduitgang.schmutzig.org) has joined #ceph
[14:15] <dvanders> TMM: ceph pg <id> query might help ...
[14:15] <dvanders> TMM: maybe you need to mark an osd as lost...
[14:16] <TMM> I don't have any down osds at the moment though
[14:16] <TMM> and all other pgs are consistent
[14:17] <dvanders> hmm... do you understand why the pg is incomplete?
[14:17] <dvanders> that is not normal
[14:17] <TMM> yeah, because I'm an idiot
[14:17] <dvanders> lol
[14:17] <TMM> I deleted some osds that were giving me problems while I thought that the cluster was healthy
[14:18] <TMM> but apparently it wasn't 100% healthy when I deleted that last osd
[14:18] * ade_b (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) has joined #ceph
[14:18] <dvanders> what does ceph pg <id> query say... waiting for ...
[14:18] <TMM> so now I have 1 pg with too few osds but nothing is 'down'
[14:18] * ade_b (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) Quit ()
[14:18] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[14:18] <TMM> it says "down_osds_we_would_probe": [
[14:18] <TMM> 9
[14:18] <TMM> ],
[14:18] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) has joined #ceph
[14:18] <TMM> but '9' is one of the osds that were removed
[14:18] <TMM> it doesn't exist in crushmap anymore
[14:19] <dvanders> ceph osd dump | grep osd.9 ??
[14:19] <TMM> dvanders, none
[14:19] <dvanders> ceph osd lost 9 ?
[14:19] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) Quit ()
[14:19] <TMM> it's warning me about dataloss
[14:19] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) has joined #ceph
[14:20] <TMM> should I care about it given it's not part of my cluster any longer?
[14:20] <dvanders> you have backups?
[14:20] <TMM> of everything? no
[14:20] <TMM> the pool that is affected by the missing pg, yes though
[14:21] <TMM> Ideally I just want to zero that pg
[14:21] <TMM> I've already told my users that their admin is an idiot
[14:21] <dvanders> yeah... i can't guarantee this will work
[14:21] <dvanders> if osd.9 really is gone, then it *should* be ok to mark it lost
[14:22] <TMM> well, it's not running
[14:22] <TMM> and it's not in the crushmap
[14:22] <dvanders> anyway, recreating a pg also leads to dataloss
[14:22] * dsl_ (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[14:22] <TMM> I understand
[14:22] <TMM> but recreating that one PG only affects the pool that the pg is a part of
[14:22] <TMM> I'm just not sure whether this command would only affect the pool that's already kind of iffy anyway
[14:23] <dvanders> ceph pg dump | grep 9
[14:23] <dvanders> the [x,y,z] part is which osd's hold a given pg
[14:23] <dvanders> that way you can see if osd.9 is needed for anything else
[14:24] <TMM> searching for ,9, [9, and ,9] yields no results
[14:24] <TMM> I guess that means that osd.9 really isn't used for anything right now, right?
[14:25] <dvanders> seems correct
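
The diagnostic sequence dvanders walks through, gathered in one place (the pg id is a placeholder; osd 9 as in the conversation):

    ceph health detail                        # find the incomplete pg
    ceph pg 3.1f query                        # look at down_osds_we_would_probe
    ceph pg dump | grep 9                     # is osd.9 in any up/acting set?
    ceph osd lost 9 --yes-i-really-mean-it    # only once osd.9 is truly gone
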
[14:25] <TMM> ok, I'll just mark it lost then
[14:25] <dvanders> :)
[14:25] <TMM> well, that was anticlimactic
[14:25] <TMM> "osd.9 is not down or doesn't exist"
[14:25] <dvanders> haha
[14:26] <dvanders> so you already did pg force_create_pg ?
[14:26] <TMM> yeah, it's stuck in 'creating' now
[14:27] <TMM> I'm vaguely wondering if I should just recreate the entire pool
[14:27] <dvanders> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-February/007397.html
[14:27] <TMM> but that's not going to be particularly easy either
[14:27] <dvanders> what is the pool used for .. rbd?
[14:27] <TMM> yeah
[14:28] <TMM> dvanders, that looks pretty much like my problem ,yes
[14:28] <dvanders> well, losing an entire PG means you've shot holes in every single rbd image
[14:28] <TMM> yeah, I know
[14:28] <dvanders> then you want to restore from backup anyway, i suppose
[14:29] <TMM> well, that's up to my users
[14:29] <TMM> but the problem is that those rbds are still registered in the cinder database
[14:29] <TMM> so just nuking the pool is not necessarily the best way to do this
[14:29] <TMM> I guess I could just delete all the images
[14:30] <dvanders> do you still have osd.9 somewhere?
[14:30] <PoRNo-MoRoZ> as a data
[14:30] <TMM> yeah, I still have some of the osds that I wrongly deleted yesterday too
[14:30] <PoRNo-MoRoZ> TMM did u sleep last night ? :)
[14:30] <dvanders> i've never done it, but perhaps there is a way to restore osd.9
[14:30] <TMM> but the recovery procedures I found don't work, importing the missing pg data leads to the temporary OSD crashing
[14:31] <TMM> na'h, I've given up on the actual data, now I'd just like to recover the pool
[14:31] <dvanders> ceph osd pool rename cinder cinder.old
[14:31] <dvanders> ceph osd pool create cinder ...
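
The elided arguments are the placement-group counts; the general form, with placeholder numbers rather than a recommendation:

    ceph osd pool create cinder 2048 2048   # <pool> <pg_num> <pgp_num>
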
[14:31] <TMM> PoRNo-MoRoZ, yeah, I did sleep, I went to bed at 2
[14:31] <PoRNo-MoRoZ> :)
[14:31] <TMM> dvanders, what happens to the existing connections to those volumes?
[14:32] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[14:32] <TMM> dvanders, it's also a cache hierarchy with a distributed pool on top of an ec pool
[14:32] <TMM> I probably need to recreate both then
[14:32] <dvanders> probably any new IOs would error
[14:33] <dvanders> is it the cache or cold pool that's broken?
[14:33] <TMM> the cold pool
[14:33] <TMM> I'd like to just recover the pool, there are probably quite a few images that are fine
[14:33] <TMM> the newer ones are probably still entirely in the cache tier
[14:34] <dvanders> i'm not too experienced with cache pools...
[14:34] <dvanders> if I were you, i'd rbd export what I can, then rbd import to a new pool
[14:35] <dvanders> there is some cache pool magic to flush things to a new pool, but that would be pretty sketchy
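
A sketch of the export/import path just suggested, with pool and image names invented:

    rbd export cinder.old/volume-0001 /backup/volume-0001.img
    rbd import /backup/volume-0001.img cinder/volume-0001
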
[14:37] * allaok (~allaok@machine107.orange-labs.com) has left #ceph
[14:37] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[14:38] <dvanders> btw, i'm surprised your VMs aren't locked now... as soon as they try to IO with the incomplete PG, they'll block forever
[14:38] * derjohn_mob (~aj@2001:6f8:1337:0:845a:fca:1e39:c10e) Quit (Ping timeout: 480 seconds)
[14:40] * derjohn_mob (~aj@2001:6f8:1337:0:845a:fca:1e39:c10e) has joined #ceph
[14:44] * DougalJacobs (~Kakeru@6AGAABHPV.tor-irc.dnsbl.oftc.net) Quit ()
[14:44] * superdug1 (~mason@76GAAEXTP.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:46] * BillyBobJohn (~Random@192.42.115.101) has joined #ceph
[14:49] * i_m1 (~ivan.miro@31.173.120.59) Quit (Ping timeout: 480 seconds)
[14:51] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[14:52] <TMM> dvanders, yeah, not all volumes are actually affected it seems
[14:52] <dvanders> i have to run... rbd export might be a good option, then?
[14:53] <TMM> yeah, I may try that
[14:54] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[14:57] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) Quit (Quit: valeech)
[15:00] * dvanders_ (~dvanders@2001:1458:202:225::102:124a) has joined #ceph
[15:03] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) has joined #ceph
[15:04] * evelu (~erwan@37.160.194.36) has joined #ceph
[15:04] * dvanders__ (~dvanders@2001:1458:202:225::101:124a) has joined #ceph
[15:05] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[15:08] * dvanders_ (~dvanders@2001:1458:202:225::102:124a) Quit (Ping timeout: 480 seconds)
[15:09] * Nixx_ (~quassel@bulbasaur.sjorsgielen.nl) Quit (Remote host closed the connection)
[15:11] * Nixx (~quassel@bulbasaur.sjorsgielen.nl) has joined #ceph
[15:14] * superdug1 (~mason@76GAAEXTP.tor-irc.dnsbl.oftc.net) Quit ()
[15:14] * zc00gii (~MJXII@81.89.0.197) has joined #ceph
[15:16] * BillyBobJohn (~Random@7V7AAD8ZY.tor-irc.dnsbl.oftc.net) Quit ()
[15:16] * redbeast12 (~Esvandiar@orion.enn.lu) has joined #ceph
[15:16] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[15:18] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:19] * brad[] (~Brad@184-94-17-26.dedicated.allstream.net) Quit (Quit: Leaving)
[15:19] * abradsha (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) has joined #ceph
[15:19] * abradsha (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[15:21] * evelu (~erwan@37.160.194.36) Quit (Ping timeout: 480 seconds)
[15:21] * daiver (~daiver@95.85.8.93) has joined #ceph
[15:21] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[15:24] <daiver> can someone point me in the right direction. S3 API. when I access radosgw with domain s3.example.com with valid credentials - works fine. I created bucket 'testbucket', then when I try to access radosgw with address testbucket.s3.example.com - it gives a 403 error, failed to authorize request
[15:29] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[15:30] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[15:32] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[15:33] * derjohn_mob (~aj@2001:6f8:1337:0:845a:fca:1e39:c10e) Quit (Ping timeout: 480 seconds)
[15:36] * i_m (~ivan.miro@88.206.104.168) has joined #ceph
[15:36] * wjw-freebsd2 (~wjw@176.74.240.1) has joined #ceph
[15:41] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[15:41] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[15:43] <IcePic> daiver: https? perhaps you'd need a wildcard cert to cover random.s3...
[15:44] * ade (~abradshaw@dslb-088-072-191-127.088.072.pools.vodafone-ip.de) Quit (Quit: Too sexy for his shirt)
[15:44] * zc00gii (~MJXII@76GAAEXUK.tor-irc.dnsbl.oftc.net) Quit ()
[15:44] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[15:45] * wwdillingham (~LobsterRo@140.247.242.44) has joined #ceph
[15:45] <daiver> SSL wasn't a problem - with letsencrypt :) looks like the endpoint 'bucketname.s3.example.com' for auth is incorrect in general. I should do an auth to s3.example.com, and then connect to the bucket address 'bucketname.s3.example.com'. At least that's how s3cmd does it, and it works perfectly
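
For bucket-in-hostname (virtual-hosted style) requests like this to authenticate, radosgw has to know its own domain, plus wildcard DNS pointing at it; a sketch using the s3.example.com name from above:

    # ceph.conf, in the radosgw client section
    rgw dns name = s3.example.com
    # and in DNS: *.s3.example.com -> the radosgw endpoint
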
[15:45] <daiver> I can share small how to for single node install (for tests) on CentOS7 if someone interested
[15:46] * redbeast12 (~Esvandiar@76GAAEXUO.tor-irc.dnsbl.oftc.net) Quit ()
[15:46] * Fapiko (~Xa@162.221.202.230) has joined #ceph
[15:47] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[15:47] <rkeene> My relmon patterns are a little aggressive here... :-D Newer version of package "ceph" is available: 10.2.0 -- current version is 0.94.6
[15:48] <Be-El> daiver: which authentication mechanism and which ceph version do you use?
[15:49] * wjw-freebsd (~wjw@176.74.240.1) has joined #ceph
[15:51] * wjw-freebsd2 (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[15:52] <daiver> so... I used Cyberduck to do the S3 API tests.
[15:52] <daiver> here the debug log of that 403 error:
[15:52] <daiver> http://pastebin.com/qNGA4Fe1
[15:53] <daiver> version - latest 10.2.0, OS - CentOS7
[15:55] <Be-El> the hammer release did not support AWS4 authentication, but according to the log you are using AWS/AWS3
[15:56] * mohmultihouse (~mohmultih@gw01.mhitp.dk) Quit (Read error: Connection reset by peer)
[15:57] * lifeboy (~roland@196.32.235.233) has joined #ceph
[15:57] * natarej (~natarej@2001:8003:483a:a900:da8:6a6c:304e:e427) Quit (Read error: Connection reset by peer)
[15:57] * natarej (~natarej@2001:8003:483a:a900:da8:6a6c:304e:e427) has joined #ceph
[15:57] <daiver> that's Jewel, not hammer
[15:59] * overclk (~quassel@121.244.87.117) Quit (Remote host closed the connection)
[16:00] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[16:01] * bene2 (~bene@2601:18c:8501:41d0:ea2a:eaff:fe08:3c7a) has joined #ceph
[16:02] * yanzheng1 (~zhyan@118.116.113.70) Quit (Quit: This computer has gone to sleep)
[16:03] * Concubidated (~cube@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[16:03] * yanzheng1 (~zhyan@118.116.113.70) has joined #ceph
[16:04] * mohmultihhouse (~mohmultih@gw01.mhitp.dk) Quit (Ping timeout: 480 seconds)
[16:05] * bvi (~bastiaan@185.56.32.1) Quit (Ping timeout: 480 seconds)
[16:07] * dvanders__ (~dvanders@2001:1458:202:225::101:124a) Quit (Ping timeout: 480 seconds)
[16:09] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[16:09] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) Quit (Quit: billwebb)
[16:09] * csoukup (~csoukup@159.140.254.106) has joined #ceph
[16:09] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[16:13] * dvanders (~dvanders@2001:1458:202:225::101:124a) has joined #ceph
[16:14] <daiver> small how to for single node install on centos7: http://pastebin.com/dvDEcRbk
[16:14] * xanax` (~w0lfeh@65.19.167.130) has joined #ceph
[16:16] * Fapiko (~Xa@76GAAEXVM.tor-irc.dnsbl.oftc.net) Quit ()
[16:16] * derjohn_mob (~aj@x590e5c3f.dyn.telefonica.de) has joined #ceph
[16:16] * PappI (~Shesh@76GAAEXWB.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:16] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[16:16] * etienneme (~arch@75.ip-167-114-228.eu) Quit (Quit: WeeChat 1.4)
[16:17] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:20] * wjw-freebsd2 (~wjw@176.74.240.1) has joined #ceph
[16:21] * MannerMan (~oscar@user170.217-10-117.netatonce.net) Quit (Remote host closed the connection)
[16:24] * wjw-freebsd (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[16:24] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) Quit (Quit: billwebb)
[16:25] * shyu_ (~shyu@221.217.61.139) has joined #ceph
[16:25] * xarses (~xarses@rrcs-24-173-18-66.sw.biz.rr.com) Quit (Ping timeout: 480 seconds)
[16:26] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) has joined #ceph
[16:26] <TMM> I deleted and recreated the pools. all is well now
[16:26] * dvanders (~dvanders@2001:1458:202:225::101:124a) Quit (Ping timeout: 480 seconds)
[16:26] * mattbenjamin1 (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[16:27] <TMM> PoRNo-MoRoZ, being nice to someone isn't 'gay' :) Thanks.
[16:27] <TMM> oh
[16:28] <TMM> well
[16:28] <PoRNo-MoRoZ> :DD
[16:28] <PoRNo-MoRoZ> busted
[16:28] <PoRNo-MoRoZ> i just sent him an irc notice that i'm happy he's ok :D
[16:28] <TMM> I'm tired after all this stuff didn't notice the nice purple text
[16:28] <TMM> sorry
[16:35] <PoRNo-MoRoZ> no prob :D
[16:36] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:37] <PoRNo-MoRoZ> btw
[16:37] <PoRNo-MoRoZ> if someone someday will try to use SanDisk UltraFit as system disk
[16:37] <PoRNo-MoRoZ> some series of them NOT BOOTING on 3.xx kernel
[16:38] <PoRNo-MoRoZ> it will boot on at least 4.4
[16:38] <PoRNo-MoRoZ> maybe 4.3
[16:38] <PoRNo-MoRoZ> not 4.2
[16:38] <xcezzz> PoRNo-MoRoZ: man you trying to do some weird stuff lol
[16:39] <PoRNo-MoRoZ> :DDD
[16:39] <PoRNo-MoRoZ> what are you waiting for from a guy with such a nick ? :D
[16:40] <post-factum> we are waiting for frosty porn, of course
[16:40] <PoRNo-MoRoZ> http://phun.ru/files/avatars/PoRNo-MoRoZ/01.png
[16:40] <PoRNo-MoRoZ> here is my old avatar for you
[16:40] <PoRNo-MoRoZ> :D
[16:40] * danieagle (~Daniel@201-1-132-74.dsl.telesp.net.br) has joined #ceph
[16:40] <post-factum> fits well to LOR avatar
[16:40] <PoRNo-MoRoZ> ofc there is no real porn
[16:41] <PoRNo-MoRoZ> so it's sfw if someone wonders :)
[16:41] <PoRNo-MoRoZ> okay time to write jira logs
[16:44] * xanax` (~w0lfeh@06SAABVHL.tor-irc.dnsbl.oftc.net) Quit ()
[16:46] * PappI (~Shesh@76GAAEXWB.tor-irc.dnsbl.oftc.net) Quit ()
[16:46] * sixofour (~tritonx@udp078200uds.hawaiiantel.net) has joined #ceph
[16:47] * daiver (~daiver@95.85.8.93) Quit ()
[16:48] * starcoder (~verbalins@nl11x.mullvad.net) has joined #ceph
[16:50] * fsimonce` (~simon@87.13.130.124) has joined #ceph
[16:52] <olqs> /quit
[16:52] * olqs (~olqs@cpe90-146-85-69.liwest.at) Quit (Quit: leaving)
[16:52] <PoRNo-MoRoZ> bye
[16:52] <PoRNo-MoRoZ> :D
[16:52] * dvanders (~dvanders@dvanders-pro.cern.ch) has joined #ceph
[16:54] * fsimonce (~simon@host201-70-dynamic.26-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[16:58] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) has joined #ceph
[17:01] * penguinRaider (~KiKo@14.139.82.6) Quit (Read error: No route to host)
[17:03] * Nacer (~Nacer@14.186.140.221) has joined #ceph
[17:05] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:07] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[17:16] * sixofour (~tritonx@76GAAEXWW.tor-irc.dnsbl.oftc.net) Quit ()
[17:16] * Keiya (~Jourei@tor-exit4-readme.dfri.se) has joined #ceph
[17:16] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Ping timeout: 480 seconds)
[17:18] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[17:18] * starcoder (~verbalins@nl11x.mullvad.net) Quit ()
[17:19] * zc00gii (~Nephyrin@politkovskaja.torservers.net) has joined #ceph
[17:27] <wwdillingham> can deep-flatten feature not be applied to images created pre-jewel?
[17:32] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[17:32] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:33] <wwdillingham> also, after retroactively applying object map and fast diff to new objects I get flags: object map invalid, fast diff invalid (is this to be expected?)
[17:35] * shyu_ (~shyu@221.217.61.139) Quit (Ping timeout: 480 seconds)
[17:36] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:43] * yanzheng1 (~zhyan@118.116.113.70) Quit (Quit: This computer has gone to sleep)
[17:45] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[17:46] * Keiya (~Jourei@4MJAAEJQG.tor-irc.dnsbl.oftc.net) Quit ()
[17:46] * _br_ (~Maza@3.tor.exit.babylon.network) has joined #ceph
[17:47] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[17:47] * billwebb (~billwebb@50-203-47-138-static.hfc.comcastbusiness.net) Quit (Quit: billwebb)
[17:48] * hellertime (~Adium@pool-173-48-155-219.bstnma.fios.verizon.net) has left #ceph
[17:48] * zc00gii (~Nephyrin@4MJAAEJQL.tor-irc.dnsbl.oftc.net) Quit ()
[17:49] * jacoo (~phyphor@146.0.74.160) has joined #ceph
[17:50] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[17:52] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[17:53] * penguinRaider (~KiKo@14.139.82.6) has joined #ceph
[17:55] <wwdillingham> rbd object-map rebuild seemed to take care of it.
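
The sequence wwdillingham describes, for the record (pool/image names are examples; deep-flatten, by contrast, can only be set at image-creation time):

    rbd feature enable rbd/myimage object-map fast-diff
    # flags read "object map invalid" until the map is rebuilt
    rbd object-map rebuild rbd/myimage
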
[17:56] <via> are there any bluestore best practices coming from an old hdd with journal on ssd perspective? should the wal and db go on partitions on the ssd to keep it similar?
[18:03] * shohn (~shohn@dslb-094-223-167-135.094.223.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[18:14] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) Quit (Ping timeout: 480 seconds)
[18:14] * Skaag (~lunix@65.200.54.234) has joined #ceph
[18:16] * _br_ (~Maza@6AGAABHWL.tor-irc.dnsbl.oftc.net) Quit ()
[18:16] * tunaaja (~Bonzaii@tor-exit7-readme.dfri.se) has joined #ceph
[18:18] * jacoo (~phyphor@6AGAABHWO.tor-irc.dnsbl.oftc.net) Quit ()
[18:19] * rmart04 (~rmart04@support.memset.com) Quit (Quit: rmart04)
[18:20] * randyorr (~rorr@38.122.57.99) has joined #ceph
[18:20] * valeech (~valeech@166.170.29.196) has joined #ceph
[18:20] <Heebie> Are there performance or reliability benefits from "bluestore" ?
[18:21] * linuxkidd (~linuxkidd@241.sub-70-210-192.myvzw.com) has joined #ceph
[18:21] * pabluk_ is now known as pabluk__
[18:21] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[18:21] <atg> Heebie: Check out http://www.sebastien-han.fr/blog/2016/03/21/ceph-a-new-store-is-coming/
[18:22] <Heebie> Has Sebastian fixed his SSL certificate issues yet? (using his site with https:// complains.. and obviously I've read a lot of his stuff already if I recognise the URL) ;)
[18:23] <atg> https seems to work
[18:23] <atg> cert issued ~10 days ago
[18:23] <Heebie> That would have fixed it :)
[18:24] * lifeboy (~roland@196.32.235.233) Quit (Quit: Ex-Chat)
[18:25] * untoreh (~untoreh@151.50.215.50) has joined #ceph
[18:26] * randyorr (~rorr@38.122.57.99) Quit (Quit: randyorr)
[18:27] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) has joined #ceph
[18:29] * randyorr (~rorr@38.122.57.99) has joined #ceph
[18:30] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[18:31] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Remote host closed the connection)
[18:32] <untoreh> can ceph-docker be used with `osd prepare` ?
[18:34] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[18:34] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[18:35] * shylesh (~shylesh@45.124.226.121) has joined #ceph
[18:39] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:40] * davidzlap (~Adium@2605:e000:1313:8003:483a:763a:64b:80a7) has joined #ceph
[18:41] * noooxqe (~noooxqe@1.ip-51-255-167.eu) Quit (Ping timeout: 480 seconds)
[18:45] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:46] * tunaaja (~Bonzaii@4MJAAEJR7.tor-irc.dnsbl.oftc.net) Quit ()
[18:46] * joshd (~jdurgin@2001:470:108:1300:9cb9:e1cd:8eab:2b7f) has joined #ceph
[18:46] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:48] * Catsceo (~Swompie`@relay2.cavefelem.com) has joined #ceph
[18:48] * untoreh (~untoreh@151.50.215.50) Quit (Remote host closed the connection)
[18:50] * SweetGirl (~Kristophe@justus.impium.de) has joined #ceph
[18:52] * untoreh (~untoreh@151.50.215.50) has joined #ceph
[18:53] * evelu (~erwan@37.167.7.5) has joined #ceph
[18:55] * wer (~wer@216.197.66.226) Quit (Ping timeout: 480 seconds)
[18:56] * azizulhakim (~oftc-webi@neptune.cs.fiu.edu) has joined #ceph
[18:56] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:56] <azizulhakim> can someone tell me which channel is for Google SoC?
[18:59] * billwebb (~billwebb@vpngac.ccur.com) has joined #ceph
[19:01] <randyorr> anyone available to help with an osd issue after an upgrade from hammer to jewel?
[19:01] <randyorr> i have OSDs stuck in state: preboot
[19:03] * mykola (~Mikolaj@91.245.72.101) has joined #ceph
[19:03] * wer (~wer@216.197.66.226) has joined #ceph
[19:04] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[19:04] * rmart04 (~rmart04@support.memset.com) has left #ceph
[19:04] * wjw-freebsd2 (~wjw@176.74.240.1) Quit (Ping timeout: 480 seconds)
[19:05] * shohn (~shohn@dslb-094-223-167-135.094.223.pools.vodafone-ip.de) has joined #ceph
[19:05] * shohn (~shohn@dslb-094-223-167-135.094.223.pools.vodafone-ip.de) Quit ()
[19:09] <squisher> randyorr, I haven't tried jewel yet myself, but did you make sure you followed the instructions regarding the permissions / new ceph user?
[19:09] <randyorr> yes, absolutely; permissions are set to ceph:ceph
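For reference, the ownership step from the jewel release notes (run with the daemons stopped), plus the documented alternative of deferring the change and keeping daemons running as root:

    chown -R ceph:ceph /var/lib/ceph
    # or defer it by adding this under [global] in ceph.conf:
    #   setuser match path = /var/lib/ceph/$type/$cluster-$id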
[19:12] <randyorr> the osd process stays running, but never goes into up/in state
[19:12] <randyorr> last of the osd log looks like this:
[19:12] <randyorr> 2016-04-29 09:36:39.018086 7ff430459800 -1 osd.3 0 log_to_monitors {default=true}
[19:12] <randyorr> 2016-04-29 09:36:39.020910 7ff430459800 0 osd.3 0 done with init, starting boot process
[19:12] <randyorr> 2016-04-29 09:36:39.038103 7ff420581700 0 osd.3 12507 crush map has features 1107558400, adjusting msgr requires for clients
[19:12] <randyorr> 2016-04-29 09:36:39.038113 7ff420581700 0 osd.3 12507 crush map has features 1107558400 was 2199057080833, adjusting msgr requires for mons
[19:12] * scuttle|afk is now known as scuttlemonkey
[19:12] <randyorr> 2016-04-29 09:36:39.038117 7ff420581700 0 osd.3 12507 crush map has features 1107558400, adjusting msgr requires for osds
[19:12] <randyorr> 2016-04-29 09:36:39.100279 7ff41cbf8700 0 log_channel(cluster) log [WRN] : failed to encode map e12745 with expected crc
[19:12] <randyorr> 2016-04-29 09:36:39.371521 7ff420581700 0 osd.3 12807 crush map has features 2200130813952, adjusting msgr requires for clients
[19:12] <randyorr> 2016-04-29 09:36:39.371525 7ff420581700 0 osd.3 12807 crush map has features 2200130813952 was 1107567105, adjusting msgr requires for mons
[19:12] <randyorr> 2016-04-29 09:36:39.371529 7ff420581700 0 osd.3 12807 crush map has features 2200130813952, adjusting msgr requires for osds
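A few routine places to look for an OSD stuck in preboot (not a diagnosis; osd.3 matches the pasted log, and the "failed to encode map ... with expected crc" warning on its own is expected in a mixed hammer/jewel cluster):

    ceph daemon osd.3 status      # state as the OSD itself reports it
    ceph osd dump | grep flags    # a set 'noup' flag, for instance, keeps OSDs from coming up
    ceph -s                       # overall health and mon quorum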
[19:14] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) Quit (Ping timeout: 480 seconds)
[19:18] * Catsceo (~Swompie`@6AGAABHYR.tor-irc.dnsbl.oftc.net) Quit ()
[19:20] * SweetGirl (~Kristophe@06SAABVPF.tor-irc.dnsbl.oftc.net) Quit ()
[19:20] * mLegion (~SurfMaths@tor-exit7-readme.dfri.se) has joined #ceph
[19:23] * karnan (~karnan@106.51.131.118) has joined #ceph
[19:28] * joshd (~jdurgin@2001:470:108:1300:9cb9:e1cd:8eab:2b7f) Quit (Ping timeout: 480 seconds)
[19:28] * randyorr (~rorr@38.122.57.99) Quit (Quit: randyorr)
[19:31] * noooxqe (~noooxqe@1.ip-51-255-167.eu) has joined #ceph
[19:35] * Heebie (~thebert@dub-bdtn-office-r1.net.digiweb.ie) Quit (Read error: Connection reset by peer)
[19:36] * Heebie (~thebert@dub-bdtn-office-r1.net.digiweb.ie) has joined #ceph
[19:37] * valeech (~valeech@166.170.29.196) Quit (Read error: Connection reset by peer)
[19:41] * brians__ (~brian@80.111.114.175) has joined #ceph
[19:41] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[19:41] * valeech (~valeech@23.30.18.249) has joined #ceph
[19:47] * brians (~brian@80.111.114.175) Quit (Ping timeout: 480 seconds)
[19:49] * rapedex (~anadrom@Relay-J.tor-exit.network) has joined #ceph
[19:49] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) has joined #ceph
[19:50] * mLegion (~SurfMaths@6AGAABHZS.tor-irc.dnsbl.oftc.net) Quit ()
[19:52] * brians (~brian@80.111.114.175) has joined #ceph
[19:53] * brians (~brian@80.111.114.175) Quit (Max SendQ exceeded)
[19:53] * brians (~brian@80.111.114.175) has joined #ceph
[19:54] * brians (~brian@80.111.114.175) Quit (Max SendQ exceeded)
[19:55] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit (Remote host closed the connection)
[19:55] * brians (~brian@80.111.114.175) has joined #ceph
[19:56] * brians (~brian@80.111.114.175) Quit (Max SendQ exceeded)
[19:57] * brians (~brian@80.111.114.175) has joined #ceph
[19:58] * brians__ (~brian@80.111.114.175) Quit (Ping timeout: 480 seconds)
[19:58] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[19:58] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[20:00] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[20:00] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[20:00] * kefu (~kefu@114.92.122.74) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:05] * billwebb (~billwebb@vpngac.ccur.com) Quit (Quit: billwebb)
[20:07] * karnan (~karnan@106.51.131.118) Quit (Ping timeout: 480 seconds)
[20:08] * billwebb (~billwebb@vpngac.ccur.com) has joined #ceph
[20:10] * randyorr (~rorr@45.73.146.238) has joined #ceph
[20:10] * evelu (~erwan@37.167.7.5) Quit (Ping timeout: 480 seconds)
[20:12] * joshd (~jdurgin@rrcs-97-79-186-2.sw.biz.rr.com) has joined #ceph
[20:13] * shylesh (~shylesh@45.124.226.121) Quit (Remote host closed the connection)
[20:14] * valeech_ (~valeech@23.30.18.249) has joined #ceph
[20:16] * karnan (~karnan@106.51.138.248) has joined #ceph
[20:17] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) has joined #ceph
[20:18] * valeech (~valeech@23.30.18.249) Quit (Ping timeout: 480 seconds)
[20:18] * valeech_ is now known as valeech
[20:18] * rapedex (~anadrom@76GAAEX0X.tor-irc.dnsbl.oftc.net) Quit ()
[20:19] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[20:20] * tZ (~n0x1d@politkovskaja.torservers.net) has joined #ceph
[20:21] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[20:22] * randyorr (~rorr@45.73.146.238) Quit (Quit: randyorr)
[20:22] <post-factum> would that man like to use pastebin next time?
[20:23] * Hazmat (~Da_Pineap@91.109.29.120) has joined #ceph
[20:25] * bniver (~bniver@2600:100c:b023:6f4:5dbe:d16a:615f:133c) has joined #ceph
[20:30] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[20:34] * Skaag (~lunix@65.200.54.234) has joined #ceph
[20:35] * Skaag (~lunix@65.200.54.234) Quit ()
[20:41] * randyorr (~rorr@45.73.146.238) has joined #ceph
[20:50] * billwebb (~billwebb@vpngac.ccur.com) Quit (Quit: billwebb)
[20:50] * billwebb (~billwebb@vpngac.ccur.com) has joined #ceph
[20:50] * sepa (~sepa@aperture.GLaDOS.info) Quit (Ping timeout: 480 seconds)
[20:50] * tZ (~n0x1d@4MJAAEJVX.tor-irc.dnsbl.oftc.net) Quit ()
[20:50] * Eric1 (~arsenaali@06SAABVT6.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:51] * seosepa (~sepa@aperture.GLaDOS.info) has joined #ceph
[20:52] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[20:53] * Hazmat (~Da_Pineap@06SAABVS8.tor-irc.dnsbl.oftc.net) Quit ()
[20:53] * Behedwin (~Aal@06SAABVUB.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:53] * valeech_ (~valeech@23.30.18.249) has joined #ceph
[20:56] * valeech_ (~valeech@23.30.18.249) Quit (Read error: Connection reset by peer)
[20:57] * valeech (~valeech@23.30.18.249) Quit (Ping timeout: 480 seconds)
[20:58] * valeech (~valeech@23.30.18.249) has joined #ceph
[21:00] * getup (~textual@dhcp-077-251-206-162.chello.nl) has joined #ceph
[21:05] * billwebb (~billwebb@vpngac.ccur.com) Quit (Quit: billwebb)
[21:10] * fmanana (~fdmanana@2001:8a0:6e0c:6601:7875:fe8c:7399:b0c8) has joined #ceph
[21:10] * fmanana (~fdmanana@2001:8a0:6e0c:6601:7875:fe8c:7399:b0c8) Quit (Remote host closed the connection)
[21:11] * valeech (~valeech@23.30.18.249) Quit (Ping timeout: 480 seconds)
[21:15] * Racpatel (~Racpatel@2601:87:3:3601::675d) Quit (Quit: Leaving)
[21:17] * Racpatel (~Racpatel@2601:87:3:3601::675d) has joined #ceph
[21:19] * billwebb (~billwebb@vpngac.ccur.com) has joined #ceph
[21:20] * Eric1 (~arsenaali@06SAABVT6.tor-irc.dnsbl.oftc.net) Quit ()
[21:22] * jcsp (~jspray@rrcs-97-79-186-2.sw.biz.rr.com) has joined #ceph
[21:23] * Behedwin (~Aal@06SAABVUB.tor-irc.dnsbl.oftc.net) Quit ()
[21:28] * zaitcev (~zaitcev@rrcs-97-79-186-2.sw.biz.rr.com) has joined #ceph
[21:33] <penguinRaider> hi all, I am planning to build a script that can be run on any ceph cluster and would provide some diagnostic information about what's wrong with it. My proposal was recently selected by the Linux Foundation for Google Summer of Code. My current plan can be found here: https://goo.gl/8XSZ4z. I'd appreciate any input :-) I'm working with cholcombe
[21:34] <cholcombe> welcome penguinRaider :)
[21:34] * Skaag (~lunix@65.200.54.234) has joined #ceph
[21:34] <penguinRaider> thanks cholcombe :-D
[21:39] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) Quit (Ping timeout: 480 seconds)
[21:41] <jcsp> penguinRaider: it would be a good idea to post that to ceph-devel to get eyes on it
[21:42] <penguinRaider> sure thanks jcsp
[21:42] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7875:fe8c:7399:b0c8) Quit (Ping timeout: 480 seconds)
[21:43] * bniver (~bniver@2600:100c:b023:6f4:5dbe:d16a:615f:133c) Quit (Remote host closed the connection)
[21:45] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[21:48] * mykola (~Mikolaj@91.245.72.101) Quit (Quit: away)
[21:49] * billwebb (~billwebb@vpngac.ccur.com) Quit (Quit: billwebb)
[21:51] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) has joined #ceph
[21:52] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[21:53] * puvo (~Jase@tor-relay.zwiebeltoralf.de) has joined #ceph
[21:53] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) has joined #ceph
[21:54] * rendar (~I@host146-39-dynamic.57-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:54] <jcsp> penguinRaider: oh, I meant the mailing list ceph-devel, but the IRC channel is useful too :-)
[21:55] <jcsp> the mailing list will also get you people who happen to be asleep at the moment
[21:55] * Neon (~Doodlepie@46.183.218.199) has joined #ceph
[21:56] <penguinRaider> jcsp, sure will shoot a mail too :-)
[21:56] * rendar (~I@host146-39-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[21:58] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[22:03] * dgurtner (~dgurtner@c-75-74-127-185.hsd1.fl.comcast.net) Quit (Ping timeout: 480 seconds)
[22:03] * dvanders (~dvanders@46.227.20.178) has joined #ceph
[22:04] * Skaag (~lunix@65.200.54.234) has joined #ceph
[22:11] <getup> how does the multi-tenancy work in jewel? i can't seem to find a whole lot of documentation about it aside from a --tenant flag in radosgw-admin
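For what it's worth, jewel's rgw multitenancy hangs off exactly that flag; a sketch following the radosgw-admin usage, with made-up tenant, uid, and keys:

    radosgw-admin user create --tenant testx --uid tester \
        --display-name "Test User" --access_key TESTER --secret test123

Buckets created by such a user are namespaced under the tenant, so the same bucket name can coexist under different tenants.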
[22:12] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[22:14] * mattbenjamin1 (~mbenjamin@aa2.linuxbox.com) Quit (Ping timeout: 480 seconds)
[22:23] * puvo (~Jase@06SAABVWN.tor-irc.dnsbl.oftc.net) Quit ()
[22:25] * Neon (~Doodlepie@06SAABVWP.tor-irc.dnsbl.oftc.net) Quit ()
[22:25] * Snowcat4 (~Bj_o_rn@06SAABVXW.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:45] <PoRNo-MoRoZ> is there any command to 'reload' ceph.conf on osds? like '/etc/init.d/nginx reload'
[22:45] <PoRNo-MoRoZ> i know about injectargs
[22:45] <PoRNo-MoRoZ> via admin socket maybe ?
[22:45] <PoRNo-MoRoZ> i'll google
[22:48] <PoRNo-MoRoZ> can i use it instead of injectargs ?
[22:48] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[22:49] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) Quit (Ping timeout: 480 seconds)
[22:52] <PoRNo-MoRoZ> looks like so
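Both routes PoRNo-MoRoZ is circling do exist; neither persists across a daemon restart (ceph.conf still needs the change for it to stick), and not every option takes effect at runtime:

    ceph tell osd.0 injectargs '--debug_osd 5'   # pushed through the monitors
    ceph daemon osd.0 config set debug_osd 5     # via the local admin socket
    ceph daemon osd.0 config get debug_osd       # verify the running value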
[22:54] * getup (~textual@dhcp-077-251-206-162.chello.nl) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:55] * Snowcat4 (~Bj_o_rn@06SAABVXW.tor-irc.dnsbl.oftc.net) Quit ()
[22:55] * Tenk (~CobraKhan@192.87.28.28) has joined #ceph
[22:58] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) has joined #ceph
[22:59] * MentalRay (~MentalRay@142.169.78.163) has joined #ceph
[23:04] * MentalRay (~MentalRay@142.169.78.163) Quit ()
[23:10] * _28_ria (~kvirc@opfr028.ru) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[23:11] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[23:13] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[23:13] * wwdillingham (~LobsterRo@140.247.242.44) Quit (Ping timeout: 480 seconds)
[23:17] * dvanders (~dvanders@46.227.20.178) Quit (Ping timeout: 480 seconds)
[23:19] * Skaag (~lunix@65.200.54.234) has joined #ceph
[23:23] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:25] * Tenk (~CobraKhan@06SAABVYX.tor-irc.dnsbl.oftc.net) Quit ()
[23:30] * nastidon (~Freddy@freedom.ip-eend.nl) has joined #ceph
[23:31] * Nacer_ (~Nacer@14.186.135.213) has joined #ceph
[23:34] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[23:37] * Nacer (~Nacer@14.186.140.221) Quit (Ping timeout: 480 seconds)
[23:40] * LeaChim (~LeaChim@host86-147-119-244.range86-147.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[23:42] * gregmark (~Adium@68.87.42.115) has joined #ceph
[23:46] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[23:47] * dsl (~dsl@72-48-250-184.dyn.grandenetworks.net) Quit (Remote host closed the connection)
[23:50] * azizulhakim (~oftc-webi@neptune.cs.fiu.edu) Quit (Ping timeout: 480 seconds)
[23:56] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:57] * xarses (~xarses@2001:470:108:1300:921a:b67c:a5a8:be09) Quit (Ping timeout: 480 seconds)
[23:57] * Blueraven (~PcJamesy@exit1.ipredator.se) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.